\begin{document} \maketitle \noindent{\bf Abstract} Let $u = \{u(t, x); (t,x)\in \mathbb R_+\times \mathbb R\}$ be the solution to a linear stochastic heat equation driven by a Gaussian noise, which is a Brownian motion in time and a fractional Brownian motion in space with Hurst parameter $H\in(0, 1)$. For any given $x\in \mathbb R$ (resp. $t\in \mathbb R_+$), we show a decomposition of the stochastic process $t\mapsto u(t,x)$ (resp. $x\mapsto u(t,x)$) as the sum of a fractional Brownian motion with Hurst parameter $H/2$ (resp. $H$) and a stochastic process with $C^{\infty}$-continuous trajectories. Some applications of those decompositions are discussed. \vskip0.3cm \noindent {\bf Keywords} {Stochastic heat equation; fractional Brownian motion; path regularity; law of the iterated logarithm.} \vskip0.3cm \noindent {\bf Mathematics Subject Classification (2000)}{ 60G15, 60H15, 60G17.} \section{Introduction} Consider the following one-dimensional stochastic heat equation \begin{equation}\label{eq SPDE} \frac{\partial u}{\partial t}=\frac{\kappa}{2}\frac{\partial ^2 u}{\partial x^2}+\dot W, \ \ \ t\ge0, x\in \mathbb R, \end{equation} with initial condition $u(0,x) \equiv0$ and $\dot W=\frac{\partial^2 W}{\partial t\partial x}$, where $W$ is a centered Gaussian process with covariance given by \begin{equation}\label{eq cov} \mathbb E[W(s,x)W(t,y)]=\frac12\left(|x|^{2H}+|y|^{2H}-|x-y|^{2H} \right)(s\wedge t), \ \ \end{equation} for any $s,t\ge0, x, y\in\mathbb R$, with $H \in (0,1)$. That is, $W$ is a standard Brownian motion in time and a fractional Brownian motion (fBm for short) with Hurst parameter $H$ in space. When $H=1/2$, $\dot W$ is a space-time white noise and $u$ is the classical stochastic convolution, which is very well understood (see, e.g., \cite{Walsh}). Theorem 3.3 in \cite{K} tells us that the stochastic process $t\mapsto u(t,x)$ (resp. $x\mapsto u(t,x)$) can be represented as the sum of a fBm with Hurst parameter $1/4$ (resp.
$1/2$) and a stochastic process with $C^{\infty}$-continuous trajectories. Hence, locally $t\mapsto u(t,x)$ (resp. $x\mapsto u(t,x)$) behaves as a fBm with Hurst parameter $1/4$ (resp. $1/2$), and it has the same regularity properties (such as H\"older continuity, the global and local moduli of continuity, and the Chung-type law of the iterated logarithm) as a fBm with Hurst parameter $1/4$ (resp. $1/2$). See Lei and Nualart \cite{LN} for earlier related work. When $H\in(1/2,1)$, Mueller and Wu \cite{MW} used such a decomposition to study the critical dimension for hitting points for the fBm; Tudor and Xiao \cite{TX} used this decomposition to study the sample path regularity of the solution of the fractional-colored stochastic heat equation. When $H\in (0,1/2)$, the spatial covariance $\Lambda$, given in $$ \mathbb E\left[\dot W(s,x)\dot W(t,y)\right]=\delta_0(t-s)\Lambda(x-y), $$ is a distribution, which fails to be positive. The study of stochastic partial differential equations with this kind of noise lies outside the scope of the classical references (see, e.g., \cite{ Dal, PZ, DPZ}). It seems that the decomposition results in \cite{MW} and \cite{TX} are very hard to extend to the case $H\in (0,1/2)$. Recently, stochastic partial differential equations driven by a Gaussian noise that is fractional in space with $H\in(0,1/2)$ have attracted much attention. For example, Balan et al. \cite{BJQ} studied the existence and uniqueness of a mild solution for the stochastic heat equation with an affine multiplicative fractional noise in space, that is, with diffusion coefficient given by an affine function $\sigma(x)=ax+b$. They established the H\"older continuity of the solution in \cite{BJQ2016}. The case of a general nonlinear coefficient $\sigma$, which has a Lipschitz derivative and satisfies $\sigma(0)=0$, has been studied in Hu et al. \cite{HHLNT}.
In this paper, we give unified forms of the decompositions of the stochastic convolution in both the temporal and the spatial variables when $H\in (0,1)$. That is, for any given $x\in \mathbb R$ (resp. $t\in \mathbb R_+$), we show a decomposition of the stochastic process $t\mapsto u(t,x)$ (resp. $x\mapsto u(t,x)$) as the sum of a fractional Brownian motion with Hurst parameter $H/2$ (resp. $H$) and a stochastic process with $C^{\infty}$-continuous trajectories. Those decompositions not only lead to a better understanding of the H\"older regularity of the stochastic convolution associated with \eqref{eq SPDE}, but also yield the global and local moduli of continuity and the Chung-type law of the iterated logarithm. Notice that our decompositions are natural extensions of \cite[Theorem 3.3]{K}, and they take different forms from those obtained in \cite{MW} and \cite{TX}. The rest of this paper is organized as follows. In Section 2, we recall some results about the Gaussian noise and the stochastic convolution. The main results are given in Section 3, and their proofs are given in Section 4. \section{The Gaussian noise and the stochastic convolution} In this section, we introduce the Gaussian noise and the corresponding stochastic integration, borrowing from \cite{BJQ} and \cite{PT}. Let $\mathcal S$ be the space of Schwartz functions, and $\mathcal S'$ be its dual, the space of tempered distributions. The Fourier transform of a function $u\in \mathcal S$ is defined as $$ \mathcal Fu(\xi)=\int_{\mathbb R}e^{-i\xi x}u(x)dx, $$ and the inverse Fourier transform is given by $\mathcal F^{-1}u(\xi)=(2\pi)^{-1}\mathcal Fu(-\xi)$. Given a domain $G\subset \mathbb R^n$ for some $n\ge1$, let $\mathcal D(G)$ be the space of all real-valued infinitely differentiable functions with compact support in $G$.
According to \cite[Theorem 3.1]{PT}, the noise $W$ can be represented by a zero-mean Gaussian family $\{W(\phi), \phi\in \mathcal D((0,\infty)\times \mathbb R)\}$ defined on a complete probability space $(\Omega, \mathcal F, \mathbb P)$, whose covariance is given by \begin{equation}\label{eq cov 2} \mathbb E[W(\phi)W(\psi)]=c_{1, H}\int_{\mathbb R_+\times \mathbb R} \mathcal F\phi(s,\xi)\overline{\mathcal F\psi(s,\xi)}|\xi|^{1-2H}dsd\xi,\ \ \ \ H \in (0,1), \end{equation} for any $\phi ,\psi \in \mathcal D((0,\infty)\times \mathbb R)$, where the Fourier transforms $\mathcal F \phi, \mathcal F\psi$ are understood as Fourier transforms in space only, $\bar z$ denotes the complex conjugate of $z$ and \begin{equation}\label{const c1H} c_{1, H}=\frac{1}{2\pi} \Gamma(2H+1)\sin(H \pi). \end{equation} Let $\mathcal H$ be the Hilbert space obtained by completing $\mathcal D (\mathbb R)$ under the inner product \begin{equation} \langle \phi, \psi\rangle_{\mathcal H}= \left\{ \begin{aligned}\label{eq inn} &c_{2, H}^2\int_{\mathbb R^2} (\phi( x+y)-\phi(x))(\psi( x+y)-\psi(x))|y|^{2H-2} dxdy,& H&\in(0,1/2);\\ & \int_{\mathbb R} \phi(x)\psi(x)dx, & H&=1/2;\\ &c_{3, H}^2\int_{\mathbb R^2} \phi( x+y)\psi(x)|y|^{2H-2} dxdy, &H&\in(1/2,1), \end{aligned} \ \right. \end{equation} for any $\phi ,\psi \in \mathcal H $, where $c_{2, H}^2=H\left(\frac12-H\right)$ and $c_{3, H}^2=H(2H-1)$. Denote $\|\phi\|_{\mathcal H}:=\sqrt{\langle \phi, \phi\rangle_{\mathcal H}}$ for any $\phi\in \mathcal H$. Then \begin{align}\label{eq cov 3} \mathbb E[W(\phi)W(\psi)] =\mathbb E\left[\int_{\mathbb R_+}\langle \phi(s), \psi(s)\rangle_{\mathcal H} ds\right]. \end{align} See, e.g., \cite{HHLNT, PT, TTV}. Let \begin{equation}\label{eq p} p_t(x)=\frac{1}{\sqrt{2\pi \kappa t}}e^{-\frac{x^2}{2\kappa t}} \end{equation} be the heat kernel on the real line associated with $\frac{\kappa}{2}\Delta$.
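The proofs in Section 4 repeatedly use the explicit formula $\mathcal F p_t(\xi)=e^{-\kappa t|\xi|^2/2}$ for the Fourier transform of this heat kernel. A small numerical check (our own sketch, with scipy assumed available; the values of $\kappa$ and $t$ are arbitrary test values):

```python
import math
from scipy.integrate import quad

kappa, t = 2.0, 0.8  # arbitrary test values, not from the paper

def p(x):
    # heat kernel p_t(x) = (2*pi*kappa*t)^{-1/2} * exp(-x^2/(2*kappa*t))
    return math.exp(-x ** 2 / (2 * kappa * t)) / math.sqrt(2 * math.pi * kappa * t)

def heat_ft(xi):
    # p_t is even, so its Fourier transform reduces to a cosine transform
    val, _ = quad(lambda x: math.cos(xi * x) * p(x), -60, 60)
    return val

# (F p_t)(xi) = exp(-kappa * t * xi^2 / 2)
for xi in (0.0, 0.5, 1.5):
    assert abs(heat_ft(xi) - math.exp(-kappa * t * xi ** 2 / 2)) < 1e-6
```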
\begin{definition}\label{def solut} We say that a random field $u=\{ u(t, x); t\in [0, T], x\in \mathbb R\}$ is a mild solution of \eqref{eq SPDE}, if $u$ is predictable and for any $(t, x)\in [0, T]\times \mathbb R$, \begin{equation}\label{eq solut} u(t,x)= \int_0^t\int_{\mathbb R}p_{t-s}(x-y)W(ds,dy),\ \ \ a.s. \end{equation} It is usually called the {\it stochastic convolution}. Denote $u(t,x)$ by $u_t(x)$ for any $(t,x)\in \mathbb R_+\times\mathbb R$. \end{definition} \section{Main results} Recall that a mean-zero Gaussian process $\{X_t\}_{t\ge0}$ is called a (two-sided) fractional Brownian motion with Hurst parameter $H\in (0,1)$, if it satisfies \begin{align}\label{eq fBm} X_0=0, \ \ \ \mathbb E\left(|X_t-X_s|^2\right)=|t-s|^{2H}, \ \ \ t, s\in \mathbb R. \end{align} \begin{theorem}\label{thm main} For $H\in (0,1)$, the following results hold for the stochastic convolution $u=\{u_t(x); (t,x)\in \mathbb R_+\times \mathbb R\}$ given by \eqref{eq solut}: \begin{itemize} \item[(a)] For every $x\in \mathbb R$, there exists a fBm $\{X_t\}_{t\ge0}$ with Hurst parameter $H/2$, such that $$ u_t(x)- C_{1, H, \kappa} X_t, \ \ \ \ t\ge0, $$ defines a mean-zero Gaussian process with a version that is continuous on $\mathbb R_+$ and infinitely differentiable on $(0, \infty)$, where \begin{equation}\label{eq c} C_{1, H, \kappa}:=\left( \frac{2^{1-H}\Gamma(2H)}{\kappa^{1-H}\Gamma(H)} \right)^{\frac12}. \end{equation} \item[(b)] For every $t>0$, there exists a fBm $\{B(x)\}_{x\in\mathbb R}$ with Hurst parameter $H$, such that $$ u_t(x)- \kappa^{-\frac12} B(x),\ \ \ \ x\in\mathbb R, $$ defines a Gaussian random field with a version that is infinitely differentiable on $\mathbb R$. \end{itemize} \end{theorem} Theorem \ref{thm main} says that, locally, $t\mapsto u_t(x)$ behaves as a fBm with Hurst parameter $H/2$ and $x\mapsto u_t(x)$ behaves as a fBm with Hurst parameter $H$.
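Condition \eqref{eq fBm} determines the covariance of a fBm with Hurst parameter $K$: $\mathbb E[X_sX_t]=\frac12\left(|s|^{2K}+|t|^{2K}-|t-s|^{2K}\right)$. The following sketch (ours, assuming numpy is available; the helper `fbm_cov` and the test values are illustrative) checks the increment variance and the positive semidefiniteness of this covariance for $K=H/2$ with $H=0.3$, the role $K$ plays in part (a):

```python
import numpy as np

def fbm_cov(s, t, K):
    # covariance implied by (eq fBm): E[X_s X_t]
    return 0.5 * (abs(s) ** (2 * K) + abs(t) ** (2 * K) - abs(t - s) ** (2 * K))

K = 0.15  # plays the role of H/2 with H = 0.3 in part (a)

# E|X_t - X_s|^2 = C(t,t) + C(s,s) - 2 C(s,t) = |t-s|^{2K}
s, t = 0.4, 1.3
inc_var = fbm_cov(t, t, K) + fbm_cov(s, s, K) - 2 * fbm_cov(s, t, K)
assert abs(inc_var - abs(t - s) ** (2 * K)) < 1e-12

# the covariance matrix on a grid is symmetric positive semidefinite,
# as it must be for the Gaussian process to exist
grid = np.linspace(0.1, 2.0, 40)
C = np.array([[fbm_cov(a, b, K) for b in grid] for a in grid])
assert np.min(np.linalg.eigvalsh(C)) > -1e-10
```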
Thus, for instance, combining this observation with known facts about fBm (see, e.g., \cite{MRm}, \cite[Chapter 1]{Mis}, or \cite{Xiao2008}), we can obtain the following sample path properties of the stochastic convolution. By applying Theorem \ref{thm main} and the H\"older continuity of fBm \cite[Chapter 1]{Mis}, we recover the following well-known results (see, e.g., \cite[Chapter 3]{Walsh}, \cite[Theorem 1.1]{BJQ2016}). \begin{corollary} \begin{itemize} \item[(a)] For every $x\in \mathbb R$, the stochastic process $t\mapsto u_t(x)$ is a.s. H\"older continuous of order $H/2-\varepsilon$ for every $\varepsilon >0$. \item[(b)] For every $t>0$, the stochastic process $x\mapsto u_t(x)$ is a.s. H\"older continuous of order $H-\varepsilon$ for every $\varepsilon >0$. \end{itemize} \end{corollary} By applying Theorem \ref{thm main} and the variations of fBm (see, e.g., \cite{MRm}), we have the following results. \begin{corollary}\label{prop sample regularity11} Let $\mathcal N$ be a standard normal random variable. Then, for every $x\in \mathbb R$ and $[a,b]\subset \mathbb R_+$, \begin{align}\label{eq variation 1} \lim_{n\rightarrow \infty}\sum_{a2^n\le i\le b 2^n}\left|u_{(i+1)/2^n}(x)-u_{i/2^n}(x) \right|^{\frac 2 H}= (b-a)C_{1, H, \kappa}^{2/H}\mathbb E\left[|\mathcal N|^{2/H}\right], \ \ a.s.; \end{align} and for every $t>0$, $[c,d]\subset \mathbb R$, \begin{align}\label{eq variation 2} \lim_{n\rightarrow \infty}\sum_{c2^n\le i\le d 2^n}\left|u_{t}((i+1)/2^n)-u_{t}(i/2^n) \right|^{\frac 1 H}= (d-c) C_{2, H, \kappa}^{1/H} \mathbb E\left[|\mathcal N|^{1/H}\right], \ \ a.s. \end{align} \end{corollary} By applying Theorem \ref{thm main} and the global and local moduli of continuity of fBm (see, e.g., \cite[Chapter 7]{MRm}), we have \begin{corollary}\label{prop sample regularity2 } \begin{itemize} \item[(a)] (Global moduli of continuity for intervals).
For every $x\in \mathbb R$ and $[a,b]\subset \mathbb R_+$, we have \begin{align}\label{eq LIL 1} \lim_{\varepsilon \rightarrow 0+}\sup_{s,t\in[a,b], 0<|t-s|\le \varepsilon}\frac{|u_{t}(x)-u_{s}(x)|}{ |t-s|^{H/2}\sqrt{2\ln (1/|t-s|)} }= C_{3, H, \kappa},\ \ \ a.s.; \end{align} and for every $t>0$, $[c,d]\subset \mathbb R$, we have \begin{align}\label{eq LIL 2} \lim_{\varepsilon \rightarrow 0+}\sup_{x,y\in[c,d], 0<|x-y|\le \varepsilon}\frac{|u_{t}(x)-u_{t}(y)|}{ |x-y|^{H}\sqrt{2\ln (1/|x-y|)} }= C_{4, H, \kappa},\ \ \ a.s. \end{align} \item[(b)] (Local moduli of continuity for intervals). For every $t>0$ and $x\in \mathbb R$, we have \begin{align}\label{eq local moduli 1} \varlimsup_{\varepsilon \rightarrow 0+}\frac{\sup_{0<|t-s|\le \varepsilon} |u_{t}(x)-u_{s}(x)|}{\varepsilon ^{H/2}\sqrt{2\ln\ln (1/\varepsilon )} }= C_{5, H, \kappa},\ \ \ a.s.; \end{align} and \begin{align}\label{eq local moduli 2} \varlimsup_{\varepsilon \rightarrow 0+}\frac{\sup_{0<|x-y|\le \varepsilon} |u_{t}(x)-u_{t}(y)|}{\varepsilon ^{H}\sqrt{2\ln\ln (1/\varepsilon)} }= C_{6, H, \kappa},\ \ \ a.s. \end{align} \end{itemize} \end{corollary} By applying Theorem \ref{thm main} and the Chung-type law of the iterated logarithm in \cite[Theorem 3.3]{MR}, we have \begin{corollary}\label{prop sample regularity3} For every $t>0$ and $x\in \mathbb R$, we have \begin{align}\label{eq CLIL 1} \varliminf_{\varepsilon \rightarrow 0+}\frac{\sup_{0<|t-s|\le \varepsilon} |u_{t}(x)-u_{s}(x)|}{\left(\varepsilon/ \ln\ln (1/\varepsilon)\right)^{H/2} }= C_{7, H, \kappa},\ \ \ a.s.; \end{align} and \begin{align}\label{eq CLIL 2} \varliminf_{\varepsilon \rightarrow 0+}\frac{\sup_{0<|x-y|\le \varepsilon} |u_{t}(x)-u_{t}(y)|}{\left(\varepsilon/ \ln\ln (1/\varepsilon)\right)^{H} }= C_{8, H, \kappa},\ \ \ a.s.
\end{align} \end{corollary} \section{The proof of Theorem \ref{thm main}} \subsection{The proof of (a) in Theorem \ref{thm main}} The method of proof is similar to those of \cite[Theorem 3.3]{K} and \cite[Proposition 3.1]{FKM}, but is more involved in the case of fractional noise. Choose and fix some $x\in \mathbb R$. For every $t,\varepsilon >0$, by \eqref{eq solut}, we have \begin{align*} &u_{t+\varepsilon}(x)-u_t(x)\notag\\ =& \int_0^t\int_{\mathbb R}[p_{t+\varepsilon-s}(x-y)-p_{t-s}(x-y)] W(ds, dy)+ \int_t^{t+\varepsilon}\int_{\mathbb R} p_{t+\varepsilon-s}(x-y) W(ds, dy). \end{align*} Let \begin{align*} J_1:=&\int_0^t\int_{\mathbb R}[p_{t+\varepsilon-s}(x-y)-p_{t-s}(x-y)] W(ds, dy),\\ J_2:=&\int_t^{t+\varepsilon}\int_{\mathbb R} p_{t+\varepsilon-s}(x-y) W(ds, dy). \end{align*} The construction of the Gaussian noise $W$, which is white in time, ensures that $J_1$ and $J_2$ are independent mean-zero Gaussian random variables. Thus, \begin{align*} \mathbb E\left[|u_{t+\varepsilon}(x)-u_t(x)|^2 \right] = \mathbb E(J_1^2)+\mathbb E(J_2^2). \end{align*} Let us compute the two variances in turn. First, for the variance of $J_2$ we note that \begin{align*} \mathbb E(J_2^2)=&c_{1, H}\int_t^{t+\varepsilon} \int_{\mathbb R}\left|\mathcal F p_{t+\varepsilon-s}(\xi) \right|^{2}|\xi|^{1-2H}ds d\xi\\ =& c_{1, H} \int_t^{t+\varepsilon} \int_{\mathbb R} e^{-\kappa ( t+\varepsilon-s) |\xi|^2}|\xi|^{1-2H}ds d\xi\\ =& c_{1, H} \int_0^{\varepsilon} \int_{\mathbb R} e^{-\kappa s |\xi|^2}|\xi|^{1-2H}ds d\xi. \end{align*} The change of variables $\tau:=\sqrt{\kappa s}\xi$ yields \begin{align}\label{eq J2} \mathbb E(J_2^2)=& c_{1, H} \int_0^{\varepsilon} \int_{\mathbb R} e^{-|\tau|^2}|\tau|^{1-2H} (\kappa s)^{-1+H} ds d\tau\notag\\ =& c_{1, H}\Gamma(1-H) H^{-1}\kappa^{H-1}\varepsilon^{H}.
\end{align} For the term $J_1$, we have \begin{align}\label{eq J1} \mathbb E(J_1^2)=& c_{1, H}\int_0^{t} \int_{\mathbb R}\left|\mathcal F p_{t+\varepsilon-s}(\xi)- \mathcal F p_{t-s}(\xi)\right|^{2}|\xi|^{1-2H}ds d\xi\notag\\ =& c_{1, H}\int_0^{t} \int_{\mathbb R}\left|\mathcal F p_{s+\varepsilon}(\xi)- \mathcal F p_{s}(\xi)\right|^{2}|\xi|^{1-2H}ds d\xi\notag\\ =&c_{1, H}\int_0^{t} \int_{\mathbb R}e^{-\kappa s|\xi|^2}\left(1-e^{-\frac{\kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{1-2H}ds d\xi. \end{align} This integral is hard to evaluate directly. By a change of variables and Lemma \ref{lem integ} in the appendix, we have \begin{align}\label{eq J12} \int_0^{\infty} \int_{\mathbb R}e^{-\kappa s |\xi|^2}\left(1-e^{-\frac{\kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{1-2H}ds d\xi =& \kappa^{-1}\int_{\mathbb R} \left(1-e^{-\frac{\kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{-1-2H} d\xi\notag\\ =& \varepsilon^H \kappa ^{H-1}\int_{\mathbb R}\left(1-e^{-\frac{ |\tau|^2}{2}}\right)^2 |\tau|^{-1-2H} d\tau\notag\\ =&\Gamma(1-H) H^{-1}(2^{1-H}-1)\kappa^{H-1}\varepsilon^H. \end{align} Therefore, \begin{align}\label{eq J13} \mathbb E(J_1^2)=& c_{1, H} \Gamma(1-H) H^{-1}(2^{1-H}-1)\kappa^{H-1}\varepsilon^H\notag\\ &- c_{1, H}\int_t^{\infty} \int_{\mathbb R}e^{-\kappa s |\xi|^2}\left(1-e^{-\frac{\kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{1-2H}ds d\xi, \end{align} and hence \begin{align}\label{eq J3} \mathbb E(J_1^2)+\mathbb E(J_2^2)= &c_{1, H}\Gamma(1-H) H^{-1} 2^{1-H} \kappa^{H-1}\varepsilon^{H}\notag\\ &- c_{1, H}\int_t^{\infty} \int_{\mathbb R}e^{-\kappa s |\xi|^2}\left(1-e^{-\frac{\kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{1-2H}ds d\xi. \end{align} Let $\eta$ denote a white noise on $\mathbb R$ independent of $W$, and consider the Gaussian process $\{T_t\}_{t\ge0}$ defined by $$ T_t:=\left(\frac{c_{1,H}}{\kappa}\right)^{\frac12}\int_{-\infty}^{\infty} \left( 1- e^{-\frac{ \kappa t |\xi|^2}{2}}\right) |\xi|^{-\frac12-H}\eta(d\xi),\ \ t\ge0.
$$ Then $\{T_t\}_{t\ge0}$ is a well-defined mean-zero Gaussian process given by a Wiener integral, $T_0=0$, and $$ {\rm Var} (T_t)=\frac{c_{1,H}}{\kappa}\int_{-\infty}^{\infty} \left( 1- e^{-\frac{ \kappa t |\xi|^2}{2}}\right)^2 |\xi|^{-1-2H}d\xi<\infty, \ \ \text{for all } t>0. $$ Furthermore, we note that for any $t,\varepsilon>0$, \begin{align}\label{eq T} \mathbb E(|T_{t+\varepsilon}-T_t|^2)=&\frac{c_{1,H}}{\kappa} \int_{-\infty}^{\infty} \left( e^{-\frac{ \kappa t |\xi|^2}{2}}- e^{-\frac{ \kappa (t+\varepsilon) |\xi|^2}{2}}\right)^2 |\xi|^{-1-2H}d\xi\notag\\ =&\frac{c_{1,H}}{\kappa}\int_{-\infty}^{\infty} e^{- \kappa t |\xi|^2} \left( 1 - e^{-\frac{ \kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{-1-2H}d\xi\notag\\ =& c_{1, H} \int_t^{\infty} \int_{-\infty}^{\infty} e^{- \kappa s |\xi|^2} \left( 1 - e^{-\frac{ \kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{1-2H} dsd\xi. \end{align} This is precisely the missing integral in \eqref{eq J3}. Therefore, by the independence of $T$ and $u$, we can rewrite \eqref{eq J3} as follows: \begin{align} &\mathbb E\left(\left|(u_{t+\varepsilon}(x)+T_{t+\varepsilon})-(u_t(x)+T_t)\right|^2\right)\notag\\ =&c_{1, H}\Gamma(1-H) H^{-1}\kappa^{H-1} 2^{1-H}\varepsilon^{H}\notag\\ =&\frac{ \sin( H \pi )\Gamma(2H+1)\Gamma(1-H) }{2^H H \kappa^{1-H} \pi} \varepsilon^{H}\notag\\ =& \frac{2^{1-H}\Gamma(2H)}{\kappa^{1-H}\Gamma(H)}\varepsilon^{H}. \end{align} This implies that $$ X_t:=\left(\frac{2^{1-H}\Gamma(2H)}{\kappa^{1-H}\Gamma(H)}\right)^{-\frac12}(u_t(x)+T_t), \ \ \ t\ge0, $$ is a fBm with Hurst parameter $H/2$. By the same argument as in the proof of \cite[Lemma 3.6]{K}, the process $\{T_t\}_{t\ge0}$ has a version that is infinitely differentiable on $(0, \infty)$. \subsection{The proof of (b) in Theorem \ref{thm main}} This result has been proved in \cite[Proposition 3.1]{FKM} when $H=1/2$. We will prove it for the case $H\neq 1/2$.
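The computations below, like the one just completed, conclude by simplifying a product of Gamma factors via the reflection formula $\Gamma(z)\Gamma(1-z)=\pi/\sin(\pi z)$. A quick numerical check of the two resulting constants (our own sketch, using only the Python standard library):

```python
import math

# H in (0,1/2):  sin(2*H*pi) * Gamma(2H) * Gamma(1-2H) / pi == 1
for H in (0.1, 0.2, 0.3, 0.4):
    val = math.sin(2 * math.pi * H) * math.gamma(2 * H) * math.gamma(1 - 2 * H) / math.pi
    assert abs(val - 1.0) < 1e-12

# H in (1/2,1): -sin(2*H*pi) * Gamma(2H-1) * Gamma(2-2H) / pi == 1
for H in (0.6, 0.7, 0.8, 0.9):
    val = -math.sin(2 * math.pi * H) * math.gamma(2 * H - 1) * math.gamma(2 - 2 * H) / math.pi
    assert abs(val - 1.0) < 1e-12
```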
For any $t>0, x\in \mathbb R$, let \begin{equation}\label{eq S} S_t(x):=\int_t^{\infty} \int_{\mathbb R} \left[p_s(x-w)-p_s(w)\right]\zeta(ds,dw), \end{equation} where $p_t(x)$ is given by \eqref{eq p}, and $\zeta$ is a Gaussian noise independent of $W$, white in time and fractional in the space variable with Hurst parameter $H$. By the argument in the proof of \cite[Lemma 3.6]{K}, we know that $\{S_t(x)\}_{x\in \mathbb R}$ admits a $C^{\infty}$-version for any $t>0$. Next, we will prove that \begin{equation}\label{eq claim} \mathbb E\left[\left|\left(u_t(x+\varepsilon)+S_t(x+\varepsilon) \right)-\left(u_t(x)+S_t(x) \right) \right|^2\right]= \kappa^{-1}\varepsilon^{2H}. \end{equation} Then \begin{equation}\label{aim} B(x):= \kappa^{1/2}(u_t(x)+S_t(x)),\ \ \ \ x\in\mathbb R, \end{equation} is a two-sided fBm with Hurst parameter $H$, and (b) in Theorem \ref{thm main} holds. In the remainder of this section, we prove \eqref{eq claim} for $H\in (0,1/2)$ and $H\in (1/2,1)$, respectively. \subsubsection{The case $0<H<1/2$} For any fixed $t>0$ and $\varepsilon>0$, by Plancherel's identity with respect to $y$ and the explicit formula for $\mathcal F p_t$, we have \begin{align} &\mathbb E\left(|u_t(x+\varepsilon)-u_t(x)|^2 \right) \notag \\ =&c_{2, H}^2\int_0^t\int_{\mathbb R^2}\Big[(p_{t-s}(x+\varepsilon-y+z)-p_{t-s}(x-y+z))-(p_{t-s}(x+\varepsilon-y)-p_{t-s}(x-y))\Big]^2 \notag \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times |z|^{2H-2}dsdydz\notag\\ =& \frac{1}{2\pi}c_{2,H}^2 \int_0^t \int_{\mathbb R^2} e^{-\kappa (t-s) |\xi|^2}\left|e^{i\xi z}-1\right|^2\left|e^{i \xi \varepsilon}-1\right|^2 |z|^{2H-2}ds d\xi dz.
\end{align} Since $|e^{i\xi z}-1|^2=2(1-\cos (\xi z))$ and, for any $\alpha\in(0,1)$, \begin{equation}\label{eq iden1} \int_0^{\infty} \frac{1-\cos(\xi z)}{z^{1+\alpha}}dz= \alpha^{-1}\Gamma(1-\alpha)\cos(\pi\alpha/2) \xi^{\alpha}, \end{equation} (see \cite[Lemma D.1]{BJQ}), we have \begin{align}\label{uoff} &\mathbb E\left(|u_t(x+\varepsilon)-u_t(x)|^2 \right) \notag \\ =&\frac{2 \Gamma(2H)\sin( H \pi)}{(1-2H) \pi}c_{2, H}^2 \int_0^t \int_{\mathbb R} e^{-\kappa (t-s)|\xi|^2} \left|e^{i \xi \varepsilon}-1\right|^2 |\xi|^{1-2H}dsd\xi \notag \\ =&\frac{4\Gamma(2H)\sin( H \pi)}{(1-2H) \kappa\pi }c_{2, H}^2 \int_{\mathbb R}\left(1- e^{- \kappa t |\xi|^2}\right)\left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi \notag \\ =& \frac{4\Gamma(2H)\sin( H \pi)}{(1-2H) \kappa\pi }c_{2, H}^2 \notag \\ &\ \ \ \times\left(\frac{\Gamma(1-2H)\cos( H \pi)\varepsilon^{2H}}{H} - \int_{\mathbb R} e^{- \kappa t |\xi|^2} \left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi \right). \end{align} Recall $S_t(x)$ defined by \eqref{eq S}. Using the same techniques as above, we have \begin{align} & \mathbb E\left(|S_t(x+\varepsilon)-S_t(x)|^2 \right) \notag \\ =& \frac{2\Gamma(2H)\sin( H \pi)}{ (1-2H)\pi}c_{2, H}^2 \int_t^{\infty} \int_{\mathbb R} e^{-\kappa s |\xi|^2} \left|e^{i \xi \varepsilon}-1\right|^2 |\xi|^{1-2H}ds d\xi\notag \\ =&\frac{4\Gamma(2H)\sin( H \pi)}{(1-2H) \kappa\pi }c_{2, H}^2 \int_{\mathbb R} e^{- \kappa t |\xi|^2} \left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi, \end{align} which is exactly the missing integral in \eqref{uoff}. By the independence of $W$ and $\zeta$, we know that $u_t(x)$ and $S_t(x)$ are independent. Therefore, we have \begin{equation} \begin{aligned} &\mathbb E\left[\left|\left(u_t(x+\varepsilon)+S_t(x+\varepsilon) \right)-\left(u_t(x)+S_t(x) \right) \right|^2\right]\\ =& \frac{\sin(2 H \pi)\Gamma(2H)\Gamma(1-2H) }{\kappa \pi } \varepsilon^{2H}\\ =& \kappa^{-1} \varepsilon^{2H}.
\end{aligned} \end{equation} \subsubsection{The case $1/2<H<1$} For any fixed $t>0$ and $\varepsilon>0$, \begin{align} &\mathbb E\left(|u_t(x+\varepsilon)-u_t(x)|^2 \right) \notag \\ =&c_{3, H}^2\int_0^t\int_{\mathbb R^2} (p_{t-s}(x+\varepsilon-y+z)-p_{t-s}(x-y+z))(p_{t-s}(x+\varepsilon-y)-p_{t-s}(x-y)) \notag \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times |z|^{2H-2}dsdydz. \end{align} Since $p_{t-s}(x+y)p_{t-s}(x)=\frac12\left[p^2_{t-s}(x+y)+p^2_{t-s}(x)-(p_{t-s}(x+y)-p_{t-s}(x))^2\right]$, by Plancherel's identity with respect to $x$ and the explicit formula for $\mathcal F p_t$, we have $$ \int_{\mathbb R}\int_{\mathbb R} p_{t-s}(x+y)p_{t-s}(x)|y|^{2H-2} dxdy = \frac{1}{2\pi}\int_{\mathbb R}\int_{\mathbb R} e^{-\kappa (t-s) |\xi|^2} \cos(\xi y) |y|^{2H-2} d\xi dy.$$ Therefore, \begin{align} &\mathbb E\left(|u_t(x+\varepsilon)-u_t(x)|^2 \right) \notag \\ =&\frac{1}{2\pi}c_{3, H}^2\int_0^t\int_{\mathbb R^2}e^{-\kappa (t-s) |\xi|^2}\left[ 2\cos(\xi z)-\cos(\xi(\varepsilon+z))-\cos(\xi(\varepsilon-z))\right]|z|^{2H-2}ds d\xi dz \notag \\ =&\frac1\pi c_{3, H}^2\int_0^t\int_{\mathbb R^2}e^{-\kappa (t-s) |\xi|^2}\cos(\xi z)(1-\cos(\xi \varepsilon))|z|^{2H-2}ds d\xi dz. \end{align} By formula (3.761.9) of \cite{GR}, we know that \begin{equation}\label{eq formula} \int_0^{\infty} \frac{\cos(ax)}{x^{1-\mu}}dx=\frac{\Gamma(\mu)}{a^\mu}\cos(\pi\mu/2),\ \ \ \text{for any} \ \mu \in (0,1), a>0.
\end{equation} Since $H \in (1/2,1)$, using \eqref{eq formula} with $\mu=2H-1,$ we have \begin{align} &\mathbb E\left(|u_t(x+\varepsilon)-u_t(x)|^2 \right) \notag \\ =&\frac{2\Gamma(2H-1)\cos(\pi(2H-1)/2)}{\pi}c_{3, H}^2\int_0^t\int_{\mathbb R}e^{-\kappa (t-s) |\xi|^2} (1-\cos(\xi \varepsilon)) |\xi|^{1-2H}ds d\xi \notag \\ =&\frac{2\Gamma(2H-1)\sin( H \pi)}{\kappa \pi}c_{3, H}^2 \int_{\mathbb R}\left(1- e^{- \kappa t |\xi|^2}\right)\left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi \notag \\ =&\frac{2\Gamma(2H-1)\sin( H \pi)}{\kappa \pi}c_{3, H}^2 \notag \\ &\ \ \ \times \left( -\frac{\Gamma(2-2H)\cos( H \pi)}{H(2H-1)}\varepsilon^{2H} - \int_{\mathbb R} e^{- \kappa t |\xi|^2} \left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi \right), \end{align} where in the last equality we used the identity \begin{equation} \int_0^{\infty} \frac{1-\cos(\xi z)}{z^{1+\alpha}}dz= -\alpha^{-1}(\alpha-1)^{-1}\Gamma(2-\alpha)\cos(\pi\alpha/2) \xi^{\alpha},\ \ \ \ \alpha\in(1,2), \end{equation} (see \cite[Lemma D.1]{BJQ}). Recall $S_t(x)$ given by \eqref{eq S}. Using the same techniques as above, we have \begin{align} & \mathbb E\left(|S_t(x+\varepsilon)-S_t(x)|^2 \right) \notag \\ =& \frac{2\Gamma(2H-1)\sin( H \pi)}{\pi}c_{3, H}^2 \int_t^{\infty} \int_{\mathbb R} e^{-\kappa s |\xi|^2} (1-\cos(\xi \varepsilon)) |\xi|^{1-2H}ds d\xi\notag \\ =&\frac{2\Gamma(2H-1)\sin( H \pi)}{\kappa \pi}c_{3, H}^2 \int_{\mathbb R} e^{- \kappa t |\xi|^2} \left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi. \end{align} By the independence of $W$ and $\zeta$, we know that $u_t(x)$ and $S_t(x)$ are independent. Therefore, we have \begin{equation} \begin{aligned} &\mathbb E\left[\left|\left(u_t(x+\varepsilon)+S_t(x+\varepsilon) \right)-\left(u_t(x)+S_t(x) \right) \right|^2\right]\\ =& \frac{- \sin(2H \pi) \Gamma(2H-1) \Gamma(2-2H) }{\kappa \pi} \varepsilon^{2H}\\ =&\kappa^{-1} \varepsilon^{2H}.
\end{aligned} \end{equation} \section{Appendix} \begin{lemma}\label{lem integ} The following identity holds: $$\int_0^{\infty}\left(e^{-\frac{x^2}{2}}-1\right)^2 x^{-1-2H} dx= \Gamma(1-H) H^{-1}(2^{-H}-2^{-1}).$$ \end{lemma} \begin{proof} The proof is inspired by Lemma A.1 in \cite{K}. By the change of variables $w=x^2/2$, we have \begin{align*} \int_0^{\infty}\left(e^{-\frac{x^2}{2}}-1\right)^2 x^{-1-2H} dx =& 2^{-1-H} \int_0^{\infty}\left(e^{-w}-1\right)^2 w^{-1-H} dw\notag\\ =& 2^{-1-H} (I_{0,1}-I_{1,2}), \end{align*} where $I_{a, b}:=\int_0^{\infty}\left( e^{-aw}- e^{-b w}\right) w^{-1-H} dw$ for all $a,b\ge0$. Since $e^{-aw}-e^{-b w}=w\int_a^b e^{-rw}dr$, we have \begin{align*} I_{a, b}= \int_0^{\infty}\int_a^b e^{-rw} w^{-H}drdw = \Gamma(1-H) \int_a^b r ^{-1+H}dr = \Gamma(1-H) H^{-1} (b^H-a^H). \end{align*} Thus, $I_{0,1}-I_{1,2}=\Gamma(1-H) H^{-1} (2-2^H)$, and the lemma follows. \end{proof} \vskip0.5cm \noindent{\bf Acknowledgments}: R. Wang is supported by NSFC (11871382), the Chinese State Scholarship Fund of the China Scholarship Council, and the Youth Talent Training Program of Wuhan University. \vskip0.5cm \end{document}
\begin{document} \begin{abstract} In this paper we consider the steady water wave problem for waves that possess a merely $L_r$-integrable vorticity, with $r\in(1,\infty)$ being arbitrary. We first establish the equivalence of the three formulations (the velocity formulation, the stream function formulation, and the height function formulation) in the setting of strong solutions, regardless of the value of $r$. Based upon this result and using a suitable notion of weak solution for the height function formulation, we then establish, by means of local bifurcation theory, the existence of small-amplitude capillary and capillary-gravity water waves with an $L_r$-integrable vorticity. \end{abstract} \maketitle \section{Introduction}\label{Sec:1} We consider the classical problem of traveling waves that propagate at the surface of a two-dimensional inviscid and incompressible fluid of finite depth. Our setting is general enough to incorporate the case when the vorticity of the fluid is merely $L_r$-integrable, with $r>1$ being arbitrary. The existence of solutions of the Euler equations in ${\mathbb R}^n$ describing flows with an unbounded vorticity distribution has been addressed lately by several authors, cf. \cite{ Ke11, MB02, Vi00} and the references therein, whereas for traveling free surface waves in two dimensions there are so far no existence results which allow for a merely $L_r$-integrable vorticity. In our setting, the hydrodynamical problem is modeled by the steady Euler equations, to which we refer as the {\em velocity formulation}. For classical solutions and in the absence of stagnation points, there are two equivalent formulations, namely the {\em stream function} and the {\em height function} formulations, the latter being related to the semi-Lagrangian Dubreil-Jacotin transformation. This equivalence property lies at the basis of the existence results for classical solutions with general H\"older continuous vorticity, cf.
\cite{CoSt04} for gravity waves and \cite{W06b, W06a} for waves with capillarity. Very recently, taking advantage of the weak formulation of the governing equations, it was rigorously established that there exist gravity waves \cite{CS11} and capillary-gravity waves \cite{CM13xx, MM13x} with a discontinuous and bounded vorticity. The waves found in the latter references are obtained in the setting of strong solutions when the equations of motion are satisfied in $L_r,$ $r>2$, in \cite{CS11}, respectively in $L_\infty$ in \cite{CM13xx, MM13x}. The authors of \cite{CS11} also prove the equivalence of the formulations in the setting of $L_r$-solutions, under the restriction that $r>2$. Our first main result, Theorem \ref{T:EQ}, establishes the equivalence of the three formulations for strong solutions that possess Sobolev and weak H\"older regularity. For this we rely on the regularity properties of such solutions, cf. \cite{AC11, EM13x, MM13x}. This equivalence holds for gravity, capillary-gravity, and pure capillary waves without stagnation points and with an $L_r$-integrable vorticity, without any restriction on $r\in(1,\infty).$ The equivalence result Theorem \ref{T:EQ} lies at the basis of our second main result, Theorem \ref{T:MT}, where we establish the existence of small-amplitude capillary and capillary-gravity water waves having an $L_r$-integrable vorticity distribution for any $r\in(1,\infty).$ From a physical point of view, studying waves with an unbounded vorticity is relevant in the setting of small-amplitude wind-generated waves, when capillarity plays an important role. These waves may possess a shear layer of high vorticity adjacent to the wave surface \cite{O82, PB74}, a fact which motivates us to consider unbounded vorticity distributions.
Moreover, an unbounded vorticity at the bed is also physically relevant, for example when describing turbulent flows along smooth channels (see the empirical law on page 109 in \cite{B62}). In contrast to the irrotational case when, in the absence of an underlying current, the qualitative features of the flow are well understood \cite{C12, Co06, DH07}, in the presence of an underlying, even uniform \cite{CoSt10, HH12, SP88}, current many aspects of the flow are more difficult to study, or are even untraceable, and one often has to rely on numerical simulations, cf. \cite{KO1, KO2, SP88}. For example, by allowing for a discontinuous vorticity, the latter studies display the influence of a favorable or adverse wind on the amplitude of the waves, or describe extremely high rotational waves and the flow pattern of waves with eddies. The rigorous existence of waves with capillarity was obtained first in the setting of irrotational waves \cite{MJ89, JT85, JT86, RS81} and was only recently extended to the setting of waves with constant vorticity and stagnation points \cite{ CM13a, CM13, CM13x} (see also \cite{CV11}). In the context of waves with a general H\"older continuous \cite{W06b, W06a} or discontinuous \cite{CM13xx, MM13x} vorticity, the existence results are obtained by using the height function formulation and concern only small-amplitude waves without stagnation points. Theorem \ref{T:MT}, which is the first rigorous existence result for waves with unbounded vorticity, is obtained by taking advantage of the weak interpretation of the height function formulation. More precisely, recasting the nonlinear second-order boundary condition on the surface as a nonlocal and nonlinear equation of order zero enables us to introduce the notion of weak (in fact shown to be strong) solution for the problem in a suitable analytic setting.
By means of local bifurcation theory and ODE techniques we then find local real-analytic curves consisting, with the exception of a single laminar flow solution, only of non-flat symmetric capillary (or capillary-gravity) water waves. The methods we apply rely on the presence of capillary effects (see e.g. the proof of Lemma \ref{L:4}), though not on the value of the surface tension coefficient, the existence question for pure gravity waves with unbounded vorticity being left as an open problem.

The outline of the paper is as follows: we present in Section \ref{Sec:2} the mathematical setting and establish the equivalence of the formulations in Theorem \ref{T:EQ}. We end the section by stating our main existence result, Theorem \ref{T:MT}, Section \ref{Sec:3} being dedicated to its proof.

\section{Classical formulations of the steady water wave problem and the main results}\label{Sec:2}

Following a steady periodic wave from a reference frame which moves in the same direction as the wave and with the same speed $c$, the equations of motion are the steady-state Euler equations
\begin{subequations}\label{eq:P}
\begin{equation}\label{eq:Euler}
\left\{
\begin{array}{rllll}
({u}-c) { u}_x+{ v}{ u}_y&=&-{ P}_x,\\
({ u}-c) { v}_x+{ v}{ v}_y&=&-{ P}_y-g,\\
{ u}_x+{v}_y&=&0
\end{array}
\right.\qquad \text{in $\Omega_\eta,$}
\end{equation}
with $x$ denoting the direction of wave propagation and $y$ being the height coordinate. We assume that the free surface of the wave is the graph $y=\eta(x),$ that the fluid has constant unitary density, and that the flat fluid bed is located at $y=-d$. Hereby, $\eta$ has zero integral mean over a period and $d>0$ is the mean depth of the fluid. Moreover, $\Omega_\eta $ is the two-dimensional fluid domain
\[
\Omega_\eta:=\{(x,y)\,:\,\text{$ x\in\mathbb{S} $ and $-d<y<\eta(x)$}\},
\]
with $\mathbb{S}:={\mathbb R}/(2\pi{\mathbb Z})$ denoting the unit circle.
This notation expresses the $2\pi$-periodicity in $x$ of $\eta,$ of the velocity field $(u,v),$ and of the pressure $P$. The equations \eqref{eq:Euler} are supplemented by the following boundary conditions
\begin{equation}\label{eq:BC}
\left\{
\begin{array}{rllll}
P&=&{P}_0-\sigma\eta''/(1+\eta'^2)^{3/2}&\text{on $ y=\eta(x)$},\\
v&=&({ u}-c) \eta'&\text{on $ y=\eta(x)$},\\
v&=&0 &\text{on $ y=-d$},
\end{array}
\right.
\end{equation}
the first relation being a consequence of the Laplace-Young equation, which states that the pressure jump across an interface is proportional to the mean curvature of the interface. We use $P_0$ to denote the constant atmospheric pressure and $\sigma>0$ to denote the surface tension coefficient. Finally, the vorticity of the flow is the scalar function
\begin{equation*}
\omega:= { u}_y-{ v}_x\qquad\text{in $\Omega_\eta$.}
\end{equation*}
\end{subequations}
The velocity formulation \eqref{eq:P} can be re-expressed in terms of the stream function $\psi $, which is introduced via the relation $\nabla \psi=(-v,u-c)$ in $\Omega_\eta$, cf. Theorem \ref{T:EQ}, and it becomes the free boundary problem
\begin{equation}\label{eq:psi}
\left\{
\begin{array}{rllll}
\Delta \psi&=&\gamma(-\psi)&\text{in}&\Omega_\eta,\\
\displaystyle|\nabla\psi|^2+2g(y+d)-2\sigma\frac{\eta''}{(1+\eta'^2)^{3/2}}&=&Q&\text{on} &y=\eta(x),\\
\psi&=&0&\text{on}&y=\eta(x),\\
\psi&=&-p_0&\text{on} &y=-d.
\end{array}
\right.
\end{equation} Hereby, the constant $p_0<0$ represents the relative mass flux, $Q\in{\mathbb R}$ is related to the total head, and the function $\gamma:(p_0,0)\to{\mathbb R}$ is the vorticity function, that is \begin{equation}\label{vor} \omega(x,y)=\gamma(-\psi(x,y)) \end{equation} for $(x,y)\in \Omega_\eta.$ The equivalence of the velocity formulation \eqref{eq:P} and of the stream function formulation \eqref{eq:psi} in the setting of classical solutions without stagnation points, that is when \begin{equation}\label{SC} u-c<0\qquad\text{in $\overline \Omega_\eta$} \end{equation} has been established in \cite{Con11, CoSt04}. We emphasize that the assumption \eqref{SC} is crucial when proving the existence of the vorticity function $\gamma$. Additionally, the condition \eqref{SC} guarantees in the classical setting considered in these references that the semi-hodograph transformation $\Phi:\overline\Omega_\eta\to\overline\Omega$ given by \begin{equation}\label{semH} \Phi(x,y):=(q,p)(x,y):=(x,-\psi(x,y))\qquad \text{for $(x,y)\in\overline\Omega_\eta$}, \end{equation} where $\Omega:=\mathbb{S}\times(p_0,0),$ is a diffeomorphism. This property is used to show that the previous two formulations \eqref{eq:P} and \eqref{eq:psi} can be re-expressed in terms of the so-called height function $h:\overline \Omega\to{\mathbb R}$ defined by \begin{equation}\label{hodo} h(q,p):=y+d \qquad\text{for $(q,p)\in\overline\Omega$}. \end{equation} More precisely, one obtains a quasilinear elliptic boundary value problem \begin{equation}\label{PB} \left\{ \begin{array}{rllll} (1+h_q^2)h_{pp}-2h_ph_qh_{pq}+h_p^2h_{qq}-\gamma h_p^3&=&0&\text{in $\Omega$},\\ \displaystyle 1+h_q^2+(2gh-Q)h_p^2-2\sigma \frac{h_p^2h_{qq}}{(1+h_q^2)^{3/2}}&=&0&\text{on $p=0$},\\ h&=&0&\text{on $ p=p_0,$} \end{array} \right. \end{equation} the condition \eqref{SC} being re-expressed as \begin{equation}\label{PBC} \min_{\overline \Omega}h_p>0. 
\end{equation}
The equivalence of the three formulations \eqref{eq:P}, \eqref{eq:psi}, and \eqref{PB} of the water wave problem, when the vorticity is only $L_r-$integrable, has not yet been established for the full range $r\in(1,\infty)$. In the context of strong solutions, when the equations of motion are assumed to hold in $L_r$, there is a recent result \cite[Theorem 2]{CS11} established in the absence of capillary forces. This result, though, is restricted to the case when $r>2,$ this condition being related to Sobolev's embedding $W^2_r\hookrightarrow C^{1+\alpha}$ in two dimensions. In the same context, but for solutions that possess weak H\"older regularity, there is a further equivalence result \cite[Theorem 1]{VZ12}, but again one has to restrict the range of H\"older exponents. Our equivalence result, cf. Theorem \ref{T:EQ} and Remark \ref{R:-2} below, is true for all $r\in(1,\infty)$ and is obtained in the setting of strong solutions that possess, in addition to Sobolev regularity, weak H\"older regularity, the H\"older exponent being related in our context to Sobolev's embedding in only one dimension. This result enables us to establish, cf. Theorem \ref{T:MT} and Remark \ref{R:0}, the existence of small-amplitude capillary-gravity and pure capillary water waves with an $L_r-$integrable vorticity function for any $r\in(1,\infty).$

We denote in the following by $\mathop{\rm tr}\nolimits_0$ the trace operator with respect to the boundary component $p=0$ of $\overline\Omega,$ that is $\mathop{\rm tr}\nolimits_0v=v(\cdot,0)$ for all $v\in C(\overline\Omega).$ Moreover, we shall use several times the product formula
\begin{equation}\label{PF}
\qquad \partial(uv)=u\partial v+v\partial u\qquad\text{for all $u,v\in W^1_{1,loc}$ with $uv, u\partial v+v\partial u\in L_{1,loc},$}
\end{equation}
cf. relation (7.18) in \cite{GT01}.
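For orientation, we also record the formal chain-rule relations behind the height function \eqref{hodo}: differentiating the identity $-\psi(q,h(q,p)-d)=p$ with respect to $q$ and $p$ (a formal computation, assuming for the moment sufficient smoothness) yields
\[
h_q=-\frac{\psi_x}{\psi_y}\qquad\text{and}\qquad h_p=-\frac{1}{\psi_y},
\]
the right-hand sides being evaluated at $(q,h(q,p)-d)$. In particular, since $\psi_y=u-c$, the condition \eqref{SC} corresponds precisely to \eqref{PBC}.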
\begin{thm}[Equivalence of the three formulations]\label{T:EQ} Let $r\in(1,\infty)$ be given and define $\alpha=(r-1)/r\in(0,1).$ Then, the following are equivalent:
\begin{itemize}
\item[$(i)$] the height function formulation together with \eqref{PBC} for $h\in C^{1+\alpha}(\overline\Omega)\cap W^2_r(\Omega)$ such that $\mathop{\rm tr}\nolimits_0 h \in W^2_r(\mathbb{S})$, and $\gamma\in L_r((p_0,0))$;
\item[$(ii)$] the stream function formulation for $\eta\in W^2_r(\mathbb{S})$, $\psi\in C^{1+\alpha}(\overline\Omega_\eta)\cap W^2_r(\Omega_\eta)$ satisfying $\psi_y<0$ in $\overline\Omega_\eta$, and $\gamma\in L_r((p_0,0))$;
\item[$(iii)$] the velocity formulation together with \eqref{SC} for $u,v, P\in C^{\alpha}(\overline\Omega_\eta)\cap W^1_r(\Omega_\eta),$ and $\eta\in W^2_r(\mathbb{S}).$
\end{itemize}
\end{thm}
\begin{rem}\label{R:-2} Our equivalence result is true for both capillary and capillary-gravity water waves. Moreover, it is also true for pure gravity waves, in which case the proof is similar, with modifications only when proving that $(iii)$ implies $(i)$: instead of using \cite[Theorem 5.1]{MM13x} one has to rely on the corresponding regularity result established for gravity waves, cf. Theorem 1.1 in \cite{EM13x}. We emphasize also that the condition $\mathop{\rm tr}\nolimits_0 h \in W^2_r(\mathbb{S})$ required at $(i)$ is not a restriction. In fact, as a consequence of $h\in C^{1+\alpha}(\overline\Omega)\cap W^2_r(\Omega)$ being a strong solution of \eqref{PB}-\eqref{PBC} for $\gamma\in L_r((p_0,0))$, we know that the wave surface and all the other streamlines are real-analytic curves, cf. \cite[Theorem 5.1]{MM13x} and \cite[Theorem 1.1]{EM13x}. Particularly, $\mathop{\rm tr}\nolimits_0 h$ is a real-analytic function, i.e. $\mathop{\rm tr}\nolimits_0 h\in C^\omega(\mathbb{S})$. Furthermore, in view of the same references, all weak solutions $h\in C^{1+\alpha}(\overline\Omega)$ of \eqref{PB}, cf.
Definition \ref{D:1} (or \cite{CS11} for gravity waves), satisfy $h \in W^2_r(\Omega)$.
\end{rem}
\begin{proof}[Proof of Theorem \ref{T:EQ}]
Assume first $(i)$ and let
\begin{align}\label{DEF}
d:=\frac{1}{2\pi}\int_0^{2\pi} \mathop{\rm tr}\nolimits_0 h\, dq\in(0,\infty)\qquad\text{and}\qquad \eta:=\mathop{\rm tr}\nolimits_0 h-d\in W^2_r(\mathbb{S}).
\end{align}
We prove that there exists a unique function $\psi\in C^{1+\alpha}(\overline\Omega_\eta)$ with the property that
\begin{align}\label{PP1}
y+d-h(x,-\psi(x,y))=0\qquad\text{for all $(x,y)\in\overline\Omega_\eta.$}
\end{align}
To this end, let $H:\mathbb{S}\times{\mathbb R}\to{\mathbb R}$ be a continuous extension of $h$ to $\mathbb{S}\times{\mathbb R}$, having the property that $H(q,\cdot)\in C^1({\mathbb R})$ is strictly increasing and has a bounded derivative for all $q\in\mathbb{S}.$ Moreover, define the function $F:\mathbb{S}\times{\mathbb R}\times{\mathbb R}\to {\mathbb R}$ by setting
\begin{align*}
F(x,y,p)=y+d-H(x,p).
\end{align*}
For every fixed $x\in\mathbb{S}$, we have
\[\text{$F(x,\cdot,\cdot)\in C^1({\mathbb R}\times{\mathbb R},{\mathbb R}),$ \quad $F(x,\eta(x),0)=0$,\quad and $F_p(x,\cdot,\cdot)=-H_p(x,\cdot)<0$. }\]
Using the implicit function theorem, we find, for some $\varepsilon>0,$ a $C^1-$function $\psi(x,\cdot):(\eta(x)-\varepsilon,\eta(x)+\varepsilon)\to{\mathbb R}$ with the property that
\begin{align*}
F(x,y,-\psi(x,y))=0 \qquad\text{for all $y\in (\eta(x)-\varepsilon,\eta(x)+\varepsilon)$}.
\end{align*}
As $\psi_y(x,y)=1/F_p(x,y,-\psi(x,y))$, we deduce that $\psi(x,\cdot) $ is a strictly decreasing function which maps, due to the boundedness of $H_p(x,\cdot)$, bounded intervals onto bounded intervals. Therefore, $\psi(x,\cdot) $ can be defined on $(-\infty,\eta(x)]$.
In view of $F(x,-d,p_0)=0,$ we get that $\psi(x,-d)=-p_0$ for each $x\in\mathbb{S}.$ Observe also that, due to the periodicity of $H$ and $\eta$, $\psi$ is $2\pi-$periodic with respect to $x$, while, because $F\in C^1(\mathbb{S}\times{\mathbb R}\times[p_0,0]),$ we have $\psi\in C^1(\Omega_\eta)$. Since the relation \eqref{PP1} is satisfied in $\overline\Omega_\eta$, it is easy to see now that in fact $\psi\in C^{1+\alpha}(\overline\Omega_\eta).$

In order to show that $\psi$ is the desired stream function, we prove that $\psi\in W^2_r(\Omega_\eta).$ Noticing that the relation \eqref{PP1} yields
\begin{align}\label{RE1}
\psi_y(x,y)=-\frac{1}{h_p(x,-\psi(x,y))}\qquad\text{and}\qquad \psi_x(x,y)=\frac{h_q(x,-\psi(x,y))}{h_p(x,-\psi(x,y))}
\end{align}
in $\overline\Omega_\eta,$ the variable transformation \eqref{semH}, integration by parts, and the fact that $h$ is a strong solution of \eqref{PB} yield
\begin{align*}
\Delta\psi[\widetilde\phi]=& -\int_{\Omega_\eta}\left(\psi_y\widetilde\phi_y+\psi_x\widetilde\phi_x\right)\, d(x,y)=-\int_{\Omega}\left(h_q\phi_q-\frac{1+h_q^2}{h_p}\phi_p\right)\, d(q,p)\\
=&\int_\Omega\left(h_{qq}-\frac{2h_qh_{pq}}{h_p}+\frac{(1+h_q^2)h_{pp}}{h_p^2}\right)\phi\, d(q,p)=\int_\Omega(\gamma \phi)h_p\, d(q,p) \\
=&\int_{\Omega_\eta} \gamma(-\psi)\widetilde \phi \, d(x,y)
\end{align*}
for all $\widetilde \phi\in C^\infty_0(\Omega_\eta),$ whereby we set $\phi:=\widetilde\phi\circ\Phi^{-1}.$ This shows that $\Delta\psi=\gamma(-\psi)\in L_r(\Omega_\eta)$. Taking into account that $\psi(x,y)=p_0(y-\eta(x))/(d+\eta(x))$ for $(x,y)\in\partial\Omega_\eta,$ whereby in fact $\eta\in C^\omega(\mathbb{S}),$ cf. Remark \ref{R:-2}, we find by elliptic regularity, cf. e.g. \cite[Theorems 3.6.3 and 3.6.4]{CW98}, that $\psi\in W^2_r(\Omega_\eta).$ It is also easy to see that $(\eta,\psi)$ satisfies the second relation of \eqref{eq:psi}, and this completes our arguments in this case.

We now show that $(ii)$ implies $(iii)$.
To this end, we define
\begin{align}\label{PP2}
(u-c,v)&:=(\psi_y,-\psi_x)\qquad\text{and}\qquad P:=-\frac{|\nabla\psi|^2}{2}-g(y+d)-\Gamma(-\psi)+P_0+\frac{Q}{2},
\end{align}
where $\Gamma$ is given by
\begin{equation}\label{E:G}
\Gamma(p):=\int_0^p\gamma(s)\, ds\qquad\text{for $p\in[p_0,0].$}
\end{equation}
Clearly, we have that $u,v\in C^{\alpha}(\overline\Omega_\eta) \cap W^1_r(\Omega_\eta) $ and $\Gamma\in C^\alpha([p_0,0])\cap W^1_r((p_0,0)).$ Moreover, because $\psi\in C^{1+ \alpha}(\overline\Omega_\eta)\cap W^2_r(\Omega_\eta),$ the formula \eqref{PF} shows that $|\nabla\psi|^2\in W^1_r(\Omega_\eta),$ and therefore also $P\in C^{\alpha}(\overline\Omega_\eta)\cap W^1_r(\Omega_\eta).$ The boundary conditions \eqref{eq:BC} are easy to check. Furthermore, the conservation of mass equation is a direct consequence of the first relation of \eqref{PP2}. We are left with the conservation of momentum equations. For these, we observe that the function $\Gamma(-\psi)$ is differentiable almost everywhere and its partial derivatives belong to $L_r(\Omega_\eta)$, meaning that $\Gamma(-\psi)\in W^1_r(\Omega_\eta)$, cf. \cite{DD12}, the gradient $\nabla(\Gamma(-\psi))$ being determined by the chain rule. Taking now the weak derivatives with respect to $x$ and $y$ in the second equation of \eqref{PP2}, we obtain, in view of \eqref{PF}, the conservation of momentum equations.

We now prove that $(iii)$ implies $(ii)$. Thus, choose $u,v, P\in C^{\alpha}(\overline\Omega_\eta)\cap W^1_r(\Omega_\eta) $ and $\eta\in W^2_r(\mathbb{S}) $ such that $(\eta, u-c, v,P)$ is a solution of the velocity formulation. We define
\begin{equation}
\psi(x,y):=-p_0+\int_{-d}^{y} (u(x,s)-c)\, ds\qquad\text{for $(x,y)\in\overline\Omega_\eta,$}
\end{equation}
with $p_0$ being a negative constant.
It is not difficult to see that the function $\psi$ belongs to $ C^{1+\alpha}(\overline\Omega_\eta)\cap W^2_r(\Omega_\eta) $ and that it satisfies $\nabla\psi=(-v,u-c).$ The latter relation allows us to pick $p_0$ such that $\psi=0$ on $y=\eta(x).$ Also, we have that $\psi=-p_0$ on the fluid bed. We next show that the vorticity of the flow satisfies the relation \eqref{vor} for some $\gamma\in L_r((p_0,0)).$ To this end, we proceed as in \cite{ BM11} and use the property that the mapping $\Phi$ given by \eqref{semH} is an isomorphism of class $C^{1+\alpha}$ to compute that \begin{align*} \partial_q (\omega\circ \Phi^{-1})[ \phi]=&\int_{\Omega_\eta} (v_x-u_y)((u-c) \widetilde\phi_x+v\widetilde\phi_y)\, d(x,y) \end{align*} for all $ \phi\in C^\infty_0(\Omega).$ Again, we set $\widetilde\phi :=\phi\circ\Phi\in C^{1+\alpha}_0(\overline\Omega_\eta).$ Since our assumption $(iii)$ implies that $(u-c)^2$ and $v^2$ belong to $W^1_r(\Omega_\eta)$, cf. \eqref{PF}, density arguments, \eqref{eq:Euler}, and integration by parts yield \begin{align*} \partial_q (\omega\circ \Phi^{-1})[ \phi]=&\int_{\Omega_\eta} ((u-c)v_x+vv_y)\widetilde\phi_x\, d(x,y)-\int_{\Omega_\eta} ((u-c)u_x+vu_y)\widetilde\phi_y\, d(x,y)\\[1ex] =&-\int_{\Omega_\eta} (P_y+g)\widetilde\phi_x\, d(x,y)+\int_{\Omega_\eta} P_x\widetilde\phi_y\, d(x,y)=0. \end{align*} Consequently, there exists $\gamma\in L_r((p_0,0))$ with the property that $\omega\circ\Phi^{-1}=\gamma$ almost everywhere in $\Omega$. This shows that \eqref{vor} is satisfied in $L_r(\Omega_\eta).$ Next, we observe that the same arguments used when proving that $(ii)$ implies $(iii)$ yield that the energy \[ E:=P+\frac{|\nabla\psi|^2}{2}+g(y+d)+\Gamma(-\psi) \] is constant in $\overline\Omega_\eta.$ Defining $Q:=2(E-P_0),$ one can now easily see that $(\eta,\psi)$ satisfies \eqref{eq:psi}, and we have established $(ii)$. In the final part of the proof we assume that $(ii)$ is satisfied and we prove $(i)$. 
Therefore, we let $h:\overline\Omega\to{\mathbb R}$ be the mapping defined by \eqref{hodo} (or equivalently \eqref{PP1}). Then, we get that $ h\in C^{1+\alpha}(\overline\Omega)$ verifies the relations \eqref{PP1} and \eqref{RE1}. Consequently, $\mathop{\rm tr}\nolimits_0 h\in W^2_r(\mathbb{S})$ and one can easily see that the boundary conditions of \eqref{PB} and \eqref{PBC} are satisfied. In order to show that $h$ belongs to $W^2_r(\Omega)$ and that it also solves the first relation of \eqref{PB}, we observe that the first equation of \eqref{eq:psi} can be written in the equivalent form
\begin{equation}\label{eq:psi2}
(\psi_x\psi_y)_x+\frac{1}{2}\left(\psi_y^2-\psi_x^2\right)_y+(\Gamma(-\psi))_y=0\qquad\text{in $L_r(\Omega_\eta)$}.
\end{equation}
Therewith, using the change of variables \eqref{semH}, we find
\begin{align*}
&\int_\Omega\frac{h_q}{h_p} \phi_q-\left(\Gamma+\frac{1+h_q^2}{2h_p^2}\right)\phi_p\, d(q,p)\\
&=-\int_{\Omega_\eta}\left((\psi_x\psi_y)_x+\frac{1}{2}\left(\psi_y^2-\psi_x^2\right)_y+(\Gamma(-\psi))_y\right)\widetilde\phi\, d(x,y)=0,
\end{align*}
for all $\phi\in C^1_0(\Omega)$ and with $\widetilde\phi :=\phi\circ\Phi\in C^{1+\alpha}_0(\overline\Omega_\eta)$. Hence, $h\in C^{1+\alpha}(\overline\Omega)$ is a weak solution of the height function formulation, cf. Definition \ref{D:1}.
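We note in passing that \eqref{eq:psi2} is indeed an equivalent form of the first equation of \eqref{eq:psi}: for smooth $\psi$ one computes
\[
(\psi_x\psi_y)_x+\frac{1}{2}\left(\psi_y^2-\psi_x^2\right)_y+(\Gamma(-\psi))_y=\psi_y\left(\Delta\psi-\gamma(-\psi)\right),
\]
and since $\psi_y<0$ in $\overline\Omega_\eta$, the two equations have the same solutions; in our setting this computation is justified in $L_r(\Omega_\eta)$ by means of \eqref{PF}.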
We are now in a position to use the regularity result Theorem 5.1 in \cite{MM13x}, which states that the distributional derivatives $\partial_q^mh$ also belong to $C^{1+\alpha}(\overline\Omega)$ for all $m\geq1.$ Particularly, setting $m=1$, we find that $h_p$ is differentiable with respect to $q$ and $\partial_q(h_p)=\partial_p(h_q)\in C^\alpha(\overline\Omega).$ Exploiting the fact that $h$ is a weak solution, we see that the distributional derivatives
\[
\partial_q\left(\Gamma+\frac{1+h_q^2}{2h_p^2}\right),\quad \partial_p\left(\Gamma+\frac{1+h_q^2}{2h_p^2}\right)=\partial_q\left( \frac{h_q}{h_p}\right)
\]
both belong to $C^\alpha(\overline\Omega)\subset L_r(\Omega).$ Additionally, since $1+h_q^2\in C^{1+\alpha}(\overline\Omega)$ and regarding $\Gamma$ as an element of $ W^1_r(\Omega)$, we obtain
\[
\frac{1 }{h_p^2}\in W^1_r(\Omega).
\]
Because $h$ satisfies \eqref{PBC} and recalling that $h_p$ is a bounded function, \cite[Theorem 7.8]{GT01} implies that $h_p\in W^1_r(\Omega).$ Hence, $h\in C^{1+\alpha}(\overline\Omega)\cap W^2_r(\Omega)$ and it is not difficult to see that $h$ satisfies the first equation of \eqref{PB} in $L_r(\Omega)$, cf. \eqref{PF}. This completes our arguments.
\end{proof}

We now state our main existence result.

\begin{thm}[Existence result]\label{T:MT} We fix $r\in(1,\infty),$ $p_0\in(-\infty,0),$ and define the H\"older exponent $\alpha:=(r-1)/r\in(0,1).$ We also assume that the vorticity function $\gamma$ belongs to $L_r((p_0,0)).$ Then, there exists a {positive integer $N$} such that for each {integer $n\geq N$} there exists a local real-analytic curve $ {\mathcal{C}_{n}}\subset C^{1+\alpha}(\overline\Omega)$ consisting only of strong solutions of the problem \eqref{PB}-\eqref{PBC}.
Each solution $h\in {\mathcal{C}_{n}}$, {$n\geq N,$} satisfies additionally
\begin{itemize}
\item[$(i)$] $h\in W^2_r(\Omega)$,
\item[$(ii)$] $h(\cdot,p)$ is a real-analytic map for all $p\in[p_0,0].$
\end{itemize}
Moreover, each curve ${\mathcal{C}_{n}}$ contains a laminar flow solution and all the other points on the curve describe waves that have minimal period $2\pi/n $, only one crest and trough per period, and are symmetric with respect to the crest line.
\end{thm}

\begin{rem}\label{R:0} While proving Theorem \ref{T:MT} we make no restriction on the constant $g$, meaning that the result is true for capillary-gravity waves but also in the context of capillary waves (when we set $g=0$). Sufficient conditions which allow us to choose {$N=1$} in Theorem \ref{T:MT} can be found in Lemma \ref{L:9}. Also, if $\gamma\in C((p_0,0)),$ the solutions found in Theorem \ref{T:MT} are classical, as one can easily show that, additionally to the regularity properties stated in Theorem \ref{T:EQ}, we also have $h\in C^2(\Omega),$ $\psi\in C^2(\Omega_\eta)$, and $(u,v,P)\in C^1(\Omega_\eta)$.
\end{rem}

\section{Weak solutions for the height function formulation}\label{Sec:3}

This last section is dedicated to proving Theorem \ref{T:MT}. To this end, we pick $r\in(1,\infty)$ and let $\alpha=(r-1)/r\in(0,1)$ be fixed in the remainder of this paper. The formulation \eqref{PB} is very useful when trying to determine classical solutions of the water wave problem \cite{W06b, W06a}. However, when the vorticity function belongs to $L_r((p_0,0)),$ $r\in(1,\infty),$ the curvature term and the lack of regularity of the vorticity function give rise to several difficulties when trying to consider the equations \eqref{PB} in a suitable (Sobolev) analytic setting.
For example, the trivial solutions of \eqref{PB}, see Lemma \ref{L:LFS} below, belong merely to $W^2_r(\Omega)\cap C^{1+\alpha}(\overline\Omega).$ When trying to prove the Fredholm property of the linear operator associated to the linearization of the problem around these trivial solutions, one has to deal with an elliptic equation in divergence form having coefficients merely in $W^1_r(\Omega)\cap C^{\alpha}(\overline\Omega),$ cf. \eqref{L1}. The solvability of elliptic boundary value problems in $W^2_r(\Omega)$ requires, in general, more regularity of the coefficients, though. Also, the trace $\mathop{\rm tr}\nolimits_0 h_{qq}$ which appears in the second equation of \eqref{PB} is meaningless for functions in $W^2_r(\Omega).$ Nevertheless, using the fact that the operator $(1-\partial_q^2):H^2(\mathbb{S})\to L_2(\mathbb{S})$ is an isomorphism and the divergence structure of the first equation of \eqref{PB}, that is
\[
\left(\frac{h_q}{h_p}\right)_q-\left(\Gamma+\frac{1+h_q^2}{2h_p^2}\right)_p=0\qquad\text{in $\Omega$,}
\]
with $\Gamma$ being defined by the relation \eqref{E:G}, one can introduce the following definition of a weak solution of \eqref{PB}.
\begin{defn}\label{D:1} A function $h\in C^{1}(\overline\Omega)$ which satisfies \eqref{PBC} is called a {\em weak solution} of \eqref{PB} if we have
\begin{subequations}\label{WF}
\begin{align}
h+(1-\partial_q^2)^{-1}\mathop{\rm tr}\nolimits_0\left( \frac{\left(1+h_q^2+(2gh-Q)h_p^2\right)(1+h_q^2)^{3/2}}{2\sigma h_p^2}-h\right)=&0\qquad\text{on $p=0$;}\label{PB0}\\[1ex]
h=&0\qquad\text{on $p=p_0$;}\label{PB1}
\end{align}
and if $h$ satisfies the following integral equation
\begin{equation}\label{PB2}
\int_\Omega\frac{h_q}{h_p}\phi_q-\left(\Gamma+\frac{1+h_q^2}{2h_p^2}\right)\phi_p\, d(q,p)=0\qquad\text{for all $\phi\in C^1_0(\Omega)$.}
\end{equation}
\end{subequations}
\end{defn}

Clearly, any strong solution $h\in C^{1+\alpha}(\overline\Omega)\cap W^2_r(\Omega)$ with $\mathop{\rm tr}\nolimits_0 h \in W^2_r(\mathbb{S})$ is a weak solution of \eqref{PB}. Furthermore, because of \eqref{PB0}, any weak solution of \eqref{PB} has additional regularity on the boundary component $p=0,$ that is $\mathop{\rm tr}\nolimits_0 h\in C^2(\mathbb{S}).$ The arguments used in the last part of the proof of Theorem \ref{T:EQ} show in fact that any weak solution $h$ which belongs to $C^{1+\alpha}(\overline\Omega)$ is a strong solution of \eqref{PB} (as stated in Theorem \ref{T:EQ} $(i)$). The formulation \eqref{WF} has the advantage that it can be recast as an operator equation in a functional setting that enables us to use bifurcation results to prove existence of weak solutions.
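In fact, for $h\in C^{1+\alpha}(\overline\Omega)\cap W^2_r(\Omega)$ satisfying \eqref{PBC}, the product formula \eqref{PF} yields the identity
\[
\left(\frac{h_q}{h_p}\right)_q-\left(\Gamma+\frac{1+h_q^2}{2h_p^2}\right)_p=\frac{(1+h_q^2)h_{pp}-2h_ph_qh_{pq}+h_p^2h_{qq}-\gamma h_p^3}{h_p^3}\qquad\text{in $L_r(\Omega)$,}
\]
which makes transparent the relation between \eqref{PB2} and the first equation of \eqref{PB}.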
To present this setting, we introduce the following Banach spaces:
\begin{align*}
X&\!:=\!\left\{\widetilde h\in C^{1+\alpha}_{2\pi/n}(\overline\Omega)\,:\, \text{$\widetilde h$ is even in $q$ and $\widetilde h\big|_{p=p_0}=0$}\right\},\\
Y_1&\!:=\!\{f\in\mathcal{D}'(\Omega)\,:\, \text{$f=\partial_q\phi_1+\partial_p\phi_2$ for $\phi_1,\phi_2\in C^\alpha_{2\pi/n}(\overline\Omega)$ with $\phi_1$ odd and $\phi_2$ even in $q$}\},\\
Y_2&\!:=\!\{\varphi\in C^{1+\alpha}_{2\pi/n}(\mathbb{S})\,:\, \text{$\varphi$ is even}\},
\end{align*}
the positive integer $n\in{\mathbb N}$ being fixed later on. The subscript $2\pi/n$ is used to express $2\pi/n-$periodicity in $q$. We recall that $Y_1$ is a Banach space with the norm
\[
\|f\|_{Y_1}:=\inf\{\|\phi_1\|_\alpha+\|\phi_2\|_\alpha\,:\, f=\partial_q\phi_1+\partial_p\phi_2\}.
\]
In the following lemma we determine all laminar flow solutions of \eqref{WF}. They correspond to waves with a flat surface $\eta=0$ and having parallel streamlines.

\begin{lemma}[Laminar flow solutions]\label{L:LFS} Let $\Gamma_{M}:=\max_{[p_0,0]}\Gamma$. For every $\lambda\in(2\Gamma_M,\infty)$, the function $H(\cdot;\lambda)\in W^2_r((p_0,0))$ with
\[
H(p;\lambda):=\int_{p_0}^p \frac{1}{\sqrt{\lambda-2\Gamma(s)}}\, ds\qquad\text{for $p\in[p_0,0]$}
\]
is a weak solution of \eqref{WF} provided that
\[
Q=Q(\lambda):=\lambda+2g\int_{p_0}^0\frac{1}{\sqrt{\lambda-2\Gamma(p)}}\, dp.
\]
There are no other weak solutions of \eqref{WF} that are independent of $q$.
\end{lemma}
\begin{proof}
It readily follows from \eqref{PB2} that if $H$ is a weak solution of \eqref{WF} that is independent of the variable $q$, then $\left(2\Gamma+1/H_p^2\right)'=0$ in $\mathcal{D}'((p_0,0)),$ that is, $2\Gamma+1/H_p^2=\lambda$ for some constant $\lambda\in(2\Gamma_M,\infty).$ The expression for $H$ is obtained now by using the relation \eqref{PB1}.
When verifying the boundary condition \eqref{PB0}, the relation $(1-\partial_q^2)^{-1}\xi=\xi$ for all $\xi\in{\mathbb R}$ yields that $Q$ has to be equal to $Q(\lambda).$
\end{proof}

Because $H(\cdot;\lambda)\in W^2_r((p_0,0))$, we can, by means of Sobolev's embedding, interpret $H(\cdot;\lambda)$ as an element of $X.$ We are now in a position to reformulate the problem \eqref{WF} as an abstract operator equation. To this end, we introduce the nonlinear and nonlocal operator $\mathcal{F}:=(\mathcal{F}_1,\mathcal{F}_2):(2\Gamma_M,\infty)\times X\to Y:=Y_1\times Y_2$ by the relations
\begin{align*}
\mathcal{F}_1(\lambda,{\widetilde h})\!:=\!&\left(\frac{{\widetilde h}_q}{H_p+{\widetilde h}_p}\right)_q-\left(\Gamma+\frac{1+{\widetilde h}_q^2}{2(H_p+{\widetilde h}_p)^2}\right)_p,\\
\mathcal{F}_2(\lambda,{\widetilde h})\!:=\!&\mathop{\rm tr}\nolimits_0{\widetilde h}+(1-\partial_q^2)^{-1}\mathop{\rm tr}\nolimits_0\!\left(\!\frac{\left(1+{\widetilde h}_q^2+(2g(H+{\widetilde h})-Q) (H_p+{\widetilde h}_p)^2\right)(1+{\widetilde h}_q^2)^{3/2}}{2\sigma(H_p+{\widetilde h}_p)^2}-{\widetilde h}\!\right)
\end{align*}
for $(\lambda,{\widetilde h})\in (2 \Gamma_M,\infty)\times X,$ whereby $H=H(\cdot;\lambda)$ and $Q=Q(\lambda)$ are defined in Lemma \ref{L:LFS}. The operator $\mathcal{F}$ is well-defined and depends real-analytically on its arguments, that is
\begin{align}\label{BP0}
\mathcal{F}\in C^\omega((2\Gamma_M,\infty)\times X, Y).
\end{align}
With this notation, determining the weak solutions of the problem \eqref{PB} reduces to determining the zeros $(\lambda,{\widetilde h})$ of the equation
\begin{align}\label{BP}
\mathcal{F}(\lambda,{\widetilde h})=0\qquad\text{in $Y$ }
\end{align}
for which ${\widetilde h}+H(\cdot;\lambda)$ satisfies \eqref{PBC}.
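Note that the laminar flows give rise to zeros of $\mathcal{F}$: since $H_p(\cdot;\lambda)=(\lambda-2\Gamma)^{-1/2}$, we have $\Gamma+1/(2H_p^2)\equiv\lambda/2$ and therefore $\mathcal{F}_1(\lambda,0)=0$, while, using $\Gamma(0)=0$, $H_p^2(0;\lambda)=1/\lambda$, and the definition of $Q(\lambda)$,
\[
\mathcal{F}_2(\lambda,0)=(1-\partial_q^2)^{-1}\frac{1+\left(2gH(0;\lambda)-Q(\lambda)\right)H_p^2(0;\lambda)}{2\sigma H_p^2(0;\lambda)}=(1-\partial_q^2)^{-1}\frac{\lambda+2gH(0;\lambda)-Q(\lambda)}{2\sigma}=0.
\]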
From the definition of $\mathcal{F}$ we know that the laminar flow solutions of \eqref{PB} correspond to the trivial solutions of $\mathcal{F}$ \begin{align}\label{BP1} \mathcal{F}(\lambda,0)=0\qquad\text{for all $\lambda\in(2 \Gamma_M,\infty).$} \end{align} Actually, if $(\lambda,{\widetilde h})$ is a solution of \eqref{BP}, the function $h:={\widetilde h}+H(\cdot;\lambda)\in X$ is a weak solution of \eqref{PB} when $Q=Q(\lambda)$, provided that ${\widetilde h}$ is sufficiently small in $C^1(\overline\Omega).$ In order to use the theorem on bifurcation from simple eigenvalues due to Crandall and Rabinowitz \cite{CR71} in the setting of \eqref{BP}, we need to determine special values of $\lambda$ for which the Fr\'echet derivative $\partial_{\widetilde h}\mathcal{F}(\lambda,0)\in\mathcal{L}(X,Y)$, defined by \[ \partial_{\widetilde h}\mathcal{F}(\lambda,0)[w]:=\lim_{\varepsilon\to0}\frac{\mathcal{F}(\lambda,\varepsilon w)-\mathcal{F}(\lambda,0)}{\varepsilon}\qquad\text{for $w\in X$,} \] is a Fredholm operator of index zero with a one-dimensional kernel. To this end, we compute that $\partial_{\widetilde h} \mathcal{F} (\lambda,0)=:(L,T)$ with $L\in\mathcal{L}(X,Y_1) $ and $T\in\mathcal{L}(X,Y_2)$ being given by \begin{equation}\label{L1} \begin{aligned} Lw:=& \left(\frac{w_q}{H_p}\right)_q+\left(\frac{w_p}{H_p^3}\right)_p,\\ Tw:=&\mathop{\rm tr}\nolimits_0 w+(1-\partial_q^2)^{-1} \mathop{\rm tr}\nolimits_0 \left(\frac{gw-\lambda^{3/2}w_p}{\sigma}-w\right) \end{aligned}\qquad\quad\text{for $w\in X,$} \end{equation} and with $H=H(\cdot;\lambda)$ as in Lemma \ref{L:LFS}. We now study the properties of the linear operator $\partial_{\widetilde h} \mathcal{F} (\lambda,0)$, $\lambda>2\Gamma_M.$ Recalling that $H\in C^{1+\alpha}([p_0,0]),$ we obtain together with \cite[Theorem 8.34]{GT01} the following result. 
\begin{lemma}\label{L:2} The Fr\'echet derivative $\partial_{\widetilde h} \mathcal{F}(\lambda,0)\in\mathcal{L}(X,Y)$ is a Fredholm operator of index zero for each $\lambda\in(2\Gamma_M,\infty).$
\end{lemma}
\begin{proof}
See the proof of Lemma 4.1 in \cite{MM13x}.
\end{proof}

In order to apply the previously mentioned bifurcation result, we need to determine special values for $\lambda$ such that the kernel of $\partial_{\widetilde h} \mathcal{F}(\lambda,0)$ is a subspace of $X$ of dimension one. To this end, we observe that if $0\neq w\in X$ belongs to the kernel of $\partial_{\widetilde h} \mathcal{F}(\lambda,0)$, the relation $Lw=0$ in $Y_1$ implies that, for each $k\in{\mathbb N},$ the Fourier coefficient
\[
w_k(p):=\langle w(\cdot, p)|\cos(kn\cdot)\rangle_{L_2}:=\int_0^{2\pi} w(q,p)\cos(knq)\, dq\qquad\text{for $p\in[p_0,0]$}
\]
belongs to $C^{1+\alpha}([p_0,0])$ and solves the equation
\begin{align}\label{EQ:M}
\left(\frac{w_k'}{H_p^3}\right)'-\frac{(kn)^2w_k}{H_p}=0\qquad\text{in $\mathcal{D}'((p_0,0)).$}
\end{align}
Additionally, multiplying the relation $Tw=0$ by $\cos(knq)$, we determine, by virtue of the symmetry of the operator $(1-\partial_q^2)^{-1},$ that is
\begin{align*}
\langle f|(1-\partial_q^2)^{-1}g\rangle_{L_2}=\langle (1-\partial_q^2)^{-1}f|g\rangle_{L_2}\qquad\text{for all $f,g\in L_2(\mathbb{S})$},
\end{align*}
a further relation
\[(g+\sigma (kn)^2)w_k(0)=\lambda^{3/2}w_k'(0).\]
Finally, because of $w\in X$, we get $w_k(p_0)=0$. Since $W^1_r((p_0,0))$ is an algebra for any $r\in(1,\infty),$ cf. \cite{A75}, it is easy to see that $w_k$ belongs to $ W^2_r((p_0,0))$ and that it solves the system
\begin{equation}\label{E:m}
\left\{
\begin{array}{rlll}
(a^3(\lambda) w')'-\mu a(\lambda)w&=&0 &\text{in $L_r((p_0,0))$,}\\
(g+\sigma\mu)w(0)&=&\lambda^{3/2}w'(0),\\
w(p_0)&=&0,
\end{array}\right.
\end{equation}
when $\mu=(kn)^2.$ Here and in the following, we set $a(\lambda):=a(\lambda;\cdot):=\sqrt{\lambda-2\Gamma}\in W^1_r((p_0,0)).$ Our task is to determine special values for $\lambda$ with the property that the system \eqref{E:m} has nontrivial solutions, which form a one-dimensional subspace of $W^2_r((p_0,0))$, {only for $\mu=n^2.$} Therefore, given $(\lambda,\mu)\in(2 \Gamma_M,\infty)\times[0,\infty),$ we introduce the Sturm-Liouville type operator $R_{\lambda,\mu}:W^2_{r,0} \to L_r((p_0,0))\times {\mathbb R}$ by
\begin{equation*}
R_{\lambda,\mu}w:=
\begin{pmatrix}
(a^3(\lambda) w')'-\mu a(\lambda)w\\
(g+\sigma\mu)w(0)-\lambda^{3/2}w'(0)
\end{pmatrix}\qquad\text{for $w\in W^2_{r,0},$}
\end{equation*}
whereby $W^2_{r,0}:=\{w\in W^2_r((p_0,0))\,:\, w(p_0)=0\}.$ Additionally, for $(\lambda,\mu)$ as above, we let $v_i\in W^2_r((p_0,0))$, with $v_i:=v_i(\cdot;\lambda,\mu)$, denote the unique solutions of the initial value problems
\begin{equation}\label{ERU}
\left\{\begin{array}{lll}
(a^3(\lambda) v_1')'-\mu a(\lambda)v_1=0\qquad \text{in $L_r((p_0,0))$},\\[1ex]
v_1(p_0)=0,\quad v_1'(p_0)=1,
\end{array}
\right.
\end{equation}
and
\begin{equation}\label{ERUa}
\left\{\begin{array}{lll}
(a^3(\lambda)v_2')'-\mu a(\lambda)v_2=0\qquad \text{in $L_r((p_0,0))$},\\[1ex]
v_2(0)=\lambda^{3/2},\quad v_2'(0)=g+\sigma\mu.
\end{array}
\right.
\end{equation}
Similarly to the bounded vorticity case $\gamma\in L_\infty((p_0,0))$ considered in \cite{MM13x}, we have the following property.

\begin{prop}\label{P:2} Given $(\lambda,\mu)\in(2\Gamma_M,\infty)\times[0,\infty),$ $R_{\lambda,\mu}$ is a Fredholm operator of index zero and its kernel is at most one-dimensional. Furthermore, the kernel of $R_{\lambda,\mu}$ is one-dimensional exactly when the functions $v_i$, $i=1,2,$ given by \eqref{ERU} and \eqref{ERUa}, are linearly dependent.
In the latter case we have $\mathop{\rm Ker}\nolimits R_{\lambda,\mu}=\mathop{\rm span}\nolimits\{v_1\}.$ \end{prop} \begin{proof} First of all, $R_{\lambda,\mu}$ can be decomposed as the sum $R_{\lambda,\mu}=R_I+R_c$, where \[ R_Iw:= \begin{pmatrix} (a^3(\lambda)w')'-\mu a(\lambda)w\\ -\lambda^{3/2}w'(0) \end{pmatrix} \qquad \text{and}\qquad R_cw:= \begin{pmatrix} 0\\ (g+\sigma\mu) w(0) \end{pmatrix} \] for all $w\in W^2_{r,0}.$ It is not difficult to see that $R_c$ is a compact operator. Next, we show that $R_I:W^2_{r,0} \to L_r((p_0,0))\times {\mathbb R}$ is an isomorphism. Indeed, if $w\in W^2_{r,0}$ solves the equation $R_Iw=(f,A), $ with $ (f,A)\in L_r((p_0,0))\times {\mathbb R}$, then, since $W^2_r((p_0,0))\hookrightarrow C^{1+\alpha}([p_0,0]),$ we have \begin{equation}\label{VF} \int_{p_0}^0\left(a^3(\lambda)w'\varphi'+\mu a(\lambda)w\varphi\right)dp=-A\varphi(0)-\int_{p_0}^0 f\varphi\, dp \end{equation} for all $\varphi\in H_*:=\{w\in W^1_2((p_0,0))\,:\, w(p_0)=0\}$. The right-hand side of \eqref{VF} defines a linear functional in $\mathcal{L}(H_*,{\mathbb R})$, while the left-hand side corresponds to a bounded and coercive bilinear form on $H_*\times H_*.$ Therefore, the existence and uniqueness of a solution $w\in H_*$ of \eqref{VF} follow from the Lax-Milgram theorem, cf. \cite[Theorem 5.8]{GT01}. In fact, one can easily see that $w\in W^2_{r,0}$, so that $R_I$ is indeed an isomorphism. That the kernel of $R_{\lambda,\mu}$ is at most one-dimensional can be seen from the observation that if $w_1,w_2\in W^2_r((p_0,0))$ are solutions of $(a^3(\lambda) w')'-\mu a(\lambda)w=0$, then \begin{equation}\label{BV}a^3(\lambda)(w_1w_2'-w_2w_1')=const. \qquad\text{in $[p_0,0]$}.\end{equation} In particular, if $w_1, w_2\in W^2_{r,0},$ we obtain, in view of $a(\lambda)>0 $ on $[p_0,0],$ that $w_1$ and $w_2$ are linearly dependent.
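Let us also note that \eqref{BV} follows from a direct computation: since $w_1$ and $w_2$ both solve $(a^3(\lambda)w')'-\mu a(\lambda)w=0$, we have, in the sense of distributions,
\begin{align*}
\left(a^3(\lambda)(w_1w_2'-w_2w_1')\right)'=w_1\big(a^3(\lambda)w_2'\big)'-w_2\big(a^3(\lambda)w_1'\big)'=\mu a(\lambda)w_1w_2-\mu a(\lambda)w_2w_1=0,
\end{align*}
so that the function $a^3(\lambda)(w_1w_2'-w_2w_1')$ is indeed constant on $[p_0,0].$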
To finish the proof, we notice that if the functions $v_1$ and $v_2$, given by \eqref{ERU} and \eqref{ERUa}, are linearly dependent, then they both belong to $\mathop{\rm Ker}\nolimits R_{\lambda,\mu}.$ Moreover, if $0\neq v\in \mathop{\rm Ker}\nolimits R_{\lambda,\mu},$ the relation \eqref{BV} yields that $v$ is collinear with both $v_1 $ and $v_2$, an argument which completes our proof. \end{proof} In view of Proposition \ref{P:2}, we are left to determine $(\lambda,\mu)\in(2\Gamma_M,\infty)\times[0,\infty)$ for which the Wronskian \[ W(p;\lambda,\mu):=\left| \begin{array}{lll} v_1&v_2\\ v_1'&v_2' \end{array} \right| \] vanishes on the entire interval $[p_0,0].$ Recalling \eqref{BV}, we arrive at the problem of determining the zeros of the real-analytic function $W(0;\cdot,\cdot):(2\Gamma_M,\infty)\times [0,\infty)\to{\mathbb R}$ defined by \begin{equation}\label{DEFG} W(0;\lambda,\mu):=\lambda^{3/2}v_1'(0;\lambda,\mu)-(g+\sigma\mu)v_1(0;\lambda,\mu). \end{equation} The real-analyticity is due to the fact that \eqref{ERU} and \eqref{ERUa} can be seen as initial value problems for first-order ordinary differential equations. We emphasize that the methods used in \cite{CM13xx, MM13x, W06b, W06a} in order to study the solutions of $W(0;\cdot,\cdot)=0$ cannot be used for general $L_r-$integrable vorticity functions. Indeed, the approach chosen in the context of classical $C^{2+\alpha}-$solutions in \cite{W06b, W06a} is based on regarding the Sturm-Liouville problem \eqref{E:m} as a nonstandard eigenvalue problem (the boundary condition depends on the eigenvalue $\mu$). For this, the author of \cite{W06b, W06a} introduces a Pontryagin space with an indefinite inner product and uses abstract results pertaining to this setting.
In our context such considerations are possible only when restricting to $r\geq 2.$ On the other hand, the methods used in \cite{CM13xx, MM13x} are based on direct estimates for the solution of \eqref{ERU}, but these estimates rely to a large extent on the boundedness of $\gamma.$ Therefore, we need to find a new approach when allowing for general $L_r-$integrable vorticity functions. Our strategy is as follows: in a first step we find a constant $\lambda_0\geq 2\Gamma_M $ such that the function $ W(0;\lambda,\cdot)$ changes sign on $(0,\infty)$ for all $\lambda>\lambda_0,$ cf. Lemmas \ref{L:1} and \ref{L:4}. For this, the estimates established in Lemma \ref{L:3} within the setting of ordinary differential equations are crucial. In a second step, cf. Lemmas \ref{L:5} and \ref{L:6}, we prove that $ W(0;\lambda,\cdot)$ changes sign exactly once on $(0,\infty)$, the particular value where $ W(0;\lambda,\cdot)$ vanishes being denoted by $\mu(\lambda).$ The properties of the mapping $\lambda\mapsto \mu(\lambda)$ derived in Lemma \ref{L:6} are the core of the analysis of the kernel of $\partial_{\widetilde h}\mathcal{F}(\lambda,0).$ As a first result, we state the following lemma. \begin{lemma}\label{L:1} There exists a minimal $\lambda_0\geq 2\Gamma_M$ such that $W(0;\lambda,0)>0$ for all $\lambda>\lambda_0.$ \end{lemma} \begin{proof} First, we note that given $(\lambda,\mu)\in(2\Gamma_M,\infty)\times[0,\infty)$, the function $v_1$ satisfies the following integral relation \begin{equation}\label{v1} v_1(p)=\int_{p_0}^p\frac{a^3(\lambda;p_0)}{a^3(\lambda;{s})}\, d{s}+\mu\int_{p_0}^p\frac{1}{a^3(\lambda;s)}\int_{p_0}^sa(\lambda;r)v_1(r)\, dr\, ds\qquad\text{for $p\in[p_0,0].$} \end{equation} In particular, $v_1$ is a strictly increasing function on $[p_0,0]$.
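We note that \eqref{v1} is obtained by integrating the equation in \eqref{ERU} twice: a first integration over $(p_0,p)$, using $v_1'(p_0)=1$, gives
\begin{align*}
a^3(\lambda;p)v_1'(p)=a^3(\lambda;p_0)+\mu\int_{p_0}^p a(\lambda;s)v_1(s)\, ds\qquad\text{for $p\in[p_0,0]$},
\end{align*}
and, after division by $a^3(\lambda;p)$, a second integration over $(p_0,p)$ together with $v_1(p_0)=0$ yields \eqref{v1}.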
Furthermore, since $a(\lambda;0)=\lambda^{1/2},$ we get \begin{align*} W(0;\lambda,0)=&a^3(\lambda;p_0)-g\int_{p_0}^0\frac{a^3(\lambda;p_0)}{a^3(\lambda;p)}\, dp=a^3(\lambda;p_0)\left(1-g\int_{p_0}^0\frac{1}{a^3(\lambda;p)}\, dp\right)\to_{\lambda\to\infty}\infty. \end{align*} This proves the claim. \end{proof} We note that if $g=0,$ then $\lambda_0=2\Gamma_M.$ In the context of capillary-gravity water waves it is possible to choose, in the case of a bounded vorticity function, $\lambda_0>2\Gamma_M$ as the unique solution of the equation $W(0;\lambda_0,0)=0$. In contrast, for certain unbounded vorticity functions $\gamma\in L_r((p_0,0)),$ with $r\in(1,\infty),$ the latter equation has no solutions in $(2\Gamma_M,\infty).$ Indeed, if we set $\gamma(p):=\delta(-p)^{-1/(kr)}$ for $p\in(p_0,0),$ where $\delta>0$ and $k,r\in(1,3) $ satisfy $kr<3,$ then $\gamma\in L_r((p_0,0))$ and, for sufficiently large $\delta $ (or for $|p_0|$ small), we have \begin{align*} \inf_{\lambda>2\Gamma_M} W(0;\lambda,0)>0. \end{align*} This property leads to restrictions on the wavelength of the water waves bifurcating from the laminar flow solutions found in Lemma \ref{L:LFS}, cf. Proposition \ref{P:3}. The estimates below will be used in Lemma \ref{L:4} to bound the integral mean and the first-order moment of the solution $v_1$ of \eqref{ERU} on intervals $[p_1(\mu),0]$ with $p_1(\mu)\nearrow0$ as $\mu\to\infty.$ \begin{lemma}\label{L:3} Let $p_1\in(p_0,0)$, $A, B\in(0,\infty),$ and $(\lambda,\mu)\in(2\Gamma_M,\infty)\times[0,\infty)$ be fixed and define the positive constants \begin{equation}\label{constants} \begin{aligned} &\underline C:=\min_{p\in[p_1,0]}\frac{a^3(\lambda;p_1)}{a^3(\lambda;p)},\quad \overline C:=\max_{p\in[p_1,0]}\frac{a^3(\lambda;p_1)}{a^3(\lambda;p)},\\ &\underline D:=\min_{s,p\in[p_1,0]}\frac{a(\lambda;s)}{a^3(\lambda;p)}, \quad \overline D:=\max_{s,p\in[p_1,0]}\frac{a(\lambda;s)}{a^3(\lambda;p)}.
\end{aligned} \end{equation} Then, if $v\in W^2_r((p_1,0))$ is the solution of \begin{equation}\label{EEE} \left\{\begin{array}{lll} (a^3(\lambda) v')'-\mu a(\lambda)v=0\qquad \text{in $L_r((p_1,0))$},\\[1ex] v(p_1)=A,\quad v'(p_1)=B, \end{array} \right. \end{equation} we have the following estimates: \begin{align} \int_{p_1}^0v(p)\, dp\!\leq& -\frac{A\mu^{-1/2}\sinh(p_1\sqrt{\overline D}\mu^{1/2})}{\sqrt{\overline D}}+\frac{B\overline C\mu^{-1}\left(\cosh(p_1\sqrt{\overline D}\mu^{1/2})-1\right)}{\overline D},\label{FE1}\\[1ex] \int_{p_1}^0(-p)v(p)\, dp\!\geq& \frac{A\mu^{-1 }\left(\cosh(p_1\sqrt{\underline D}\mu^{1/2})-1\right)}{\underline D}+\frac{B\underline C\mu^{-1}\!\left(\sqrt{\underline D}p_1 - \sinh(p_1\sqrt{\underline D}\mu^{1/2}) \mu^{-1/2}\right)}{\underline D^{3/2}}\label{FE2}. \end{align} \end{lemma} \begin{proof} It follows directly from \eqref{EEE} that \begin{align}\label{Der} v'(p)=\frac{a^3(\lambda;p_1)}{a^3(\lambda;p)}B+\mu\int_{p_1}^p\frac{a (\lambda;s)}{a^3(\lambda;p)}v(s)\, ds\qquad\text{for all $p\in[p_1,0],$} \end{align} and therefore \[ v'(p)\leq B\overline C+\mu \overline D\int_{p_1}^pv(s)\, ds \qquad\text{for $p\in[p_1,0],$} \] cf. \eqref{constants}. Letting now $\overline u:[p_1,0]\to{\mathbb R}$ be the function defined by \[ \overline u(p):=\int_{p_1}^pv(s)\, ds\qquad\text{for $p\in[p_1,0],$} \] we find that $\overline u\in W^3_r((p_1,0))$ satisfies \begin{equation*} \overline u''-\mu \overline D\overline u\leq B\overline C\quad \text{in $(p_1,0)$},\qquad \overline u(p_1)=0,\, \, \overline u'(p_1)=A. \end{equation*} It is not difficult to see that $\overline u\leq \overline z$ on $[p_1,0],$ where $\overline z$ denotes the solution of the initial value problem \begin{equation*} \overline z''-\mu \overline D \overline z= B\overline C\quad \text{in $(p_1,0)$},\qquad \overline z(p_1)=0,\, \, \overline z'(p_1)=A.
\end{equation*} The solution $\overline z$ of this problem can be determined explicitly: \begin{align*} \overline z(p)=\frac{A\sinh(\sqrt{\overline D}\mu^{1/2}(p-p_1))}{\sqrt{\overline D\mu}}+\frac{B\overline C\left(\cosh(\sqrt{\overline D}\mu^{1/2}(p-p_1))-1\right)}{\overline D\mu}, \qquad p\in[p_1,0], \end{align*} which gives, by virtue of $\overline u(0)\leq \overline z(0),$ the first estimate \eqref{FE1}. In order to prove the second estimate \eqref{FE2}, we first note that integration by parts leads us to \begin{align*} \int_{p_1}^0(-p)v(p)\,dp =\int_{p_1}^0\int_{p_1}^p v(s)\, ds\,dp, \end{align*} so that it is natural to define the function $\underline u:[p_1,0]\to{\mathbb R}$ by the relation \[ \underline u(p):=\int_{p_1}^p\int_{p_1}^rv(s)\, ds\, dr\qquad\text{for $p\in[p_1,0].$} \] Recalling \eqref{Der}, we find, similarly as before, that \[ v'(p)\geq B\underline C+\mu \underline D\int_{p_1}^pv(s)\, ds \qquad\text{for $p\in[p_1,0],$} \] and integrating this inequality over $(p_1,p)$, with $p\in(p_1,0)$, we get \begin{align*} v(p)\geq A+B\underline C(p-p_1)+\mu \underline D\int_{p_1}^p\int_{p_1}^rv(s)\, ds\, dr \qquad\text{for $p\in[p_1,0].$} \end{align*} Hence, $\underline u\in W^4_r((p_1,0))$ satisfies \begin{equation*} \underline u''-\mu \underline D\,\underline u\geq A+B\underline C(p-p_1)\quad \text{in $(p_1,0)$},\qquad \underline u(p_1)=0,\, \, \underline u'(p_1)=0. \end{equation*} As the right-hand side of the above inequality is positive, we find that $\underline u\geq \underline z$ on $[p_1,0],$ where $\underline z$ stands now for the solution of the problem \begin{equation*} \underline z''-\mu \underline D \, \underline z= A+B\underline C(p-p_1)\quad \text{in $(p_1,0)$},\qquad \underline z(p_1)=0,\, \, \underline z'(p_1)=0.
\end{equation*} One can easily verify that $\underline z$ has the following expression: \begin{align*} \underline z(p)=&\frac{A\left(\cosh(\sqrt{\underline D}\mu^{1/2}(p-p_1))-1\right)}{ \underline D\mu }\\ &+\frac{B\underline C\left( \underline D^{-1/2}\sinh(\sqrt{\underline D}\mu^{1/2}(p-p_1))\mu^{-1/2}-(p-p_1)\right)}{\underline D\mu} \end{align*} for $ p\in[p_1,0] $, and, since $\underline u(0)\geq \underline z(0),$ we obtain the desired estimate \eqref{FE2}. \end{proof} The estimates \eqref{FE1} and \eqref{FE2} are the main tools in the proof of the following result. \begin{lemma}\label{L:4} Given $\lambda> 2\Gamma_M,$ we have \begin{align}\label{ES} \lim_{\mu\to\infty} W(0;\lambda,\mu)=-\infty. \end{align} \end{lemma} \begin{proof} Recalling the relations \eqref{DEFG} and \eqref{v1}, we write $W(0;\lambda,\mu)=T_1+\mu T_2,$ where \begin{align*} T_1&:=a^3(\lambda;p_0)\left(1-(g+\sigma\mu)\int_{p_0}^0\frac{1}{a^3(\lambda;p)}\, dp\right),\\ T_2&:=\int_{p_0}^0a(\lambda;p)v_1(p)\, dp-(g+\sigma\mu)\int_{p_0}^0\frac{1}{a^3(\lambda;s)}\int_{p_0}^sa(\lambda;r)v_1(r)\, dr\, ds. \end{align*} Because $a(\lambda)$ is a continuous and positive function that does not depend on $\mu$, it is easy to see that $T_1\to-\infty$ as $\mu\to\infty.$ In the remainder of this proof we show that \begin{equation}\label{QE1} \lim_{\mu\to\infty} T_2=-\infty. \end{equation} In fact, since $a(\lambda)$ is bounded from below and from above by positive constants, we see, by using integration by parts, that \eqref{QE1} holds provided that there exists a constant $\beta\in(0,1)$ such that \begin{equation}\label{QE2} \lim_{\mu\to\infty} \left( \int_{p_0}^0v_1(p)\, dp-\mu^{\beta}\int_{p_0}^0(-p)v_1(p)\, dp\right)=-\infty. \end{equation} We now fix $\beta\in(1/2,1)$ and prove that \eqref{QE2} is satisfied for this choice of $\beta.$ To this end, we first choose $\gamma\in(1/2,\beta)$ with \begin{align}\label{ch} \frac{2\beta-1}{2\gamma-1}=4.
\end{align} Because for sufficiently large $\mu$ we have \begin{align*} \int_{p_0}^{-\mu^{-\gamma}}v_1(p)\, dp-\mu^{\beta}\int_{p_0}^{-\mu^{-\gamma}}(-p)v_1(p)\, dp\leq&\int_{p_0}^{-\mu^{-\gamma}}v_1(p)\, dp-\mu^{\beta}\int_{p_0}^{-\mu^{-\gamma}}\mu^{-\gamma}v_1(p)\, dp\\[1ex] =&(1-\mu^{\beta-\gamma})\int_{p_0}^{-\mu^{-\gamma}}v_1(p)\, dp\to_{\mu\to\infty}-\infty, \end{align*} we are left to show that \begin{align}\label{QE3} \limsup_{\mu\to\infty}\left(\int_ {-\mu^{-\gamma}}^0v_1(p)\, dp-\mu^{\beta}\int_{-\mu^{-\gamma}}^0(-p)v_1(p)\, dp\right)<\infty. \end{align} The difficulty in showing \eqref{QE2} is mainly caused by the fact that the function $v_1$ grows very fast with $\mu.$ However, because the length of the interval of integration in \eqref{QE3} also decreases very fast as $\mu\to\infty$, the estimates derived in Lemma \ref{L:3} are accurate enough to establish \eqref{QE3}. To be precise, for all $\mu>(-1/p_0)^{1/\gamma}$, we set $p_1:=-\mu^{-\gamma}$, $A:=v_1(p_1),$ $B:=v_1'(p_1)$, and obtain that the solution $v_1$ of \eqref{EEE} satisfies \begin{align}\label{QE4} \int_ {-\mu^{-\gamma}}^0v_1(p)\, dp-\mu^{\beta}\int_{-\mu^{-\gamma}}^0(-p)v_1(p)\, dp\leq \frac{A \sinh(\sqrt{\overline D}\mu^{1/2-\gamma})}{\underline D\mu^{1/2}}E_1+\frac{B\underline C}{\underline D\mu}E_2, \end{align} where $A, B, \overline C,\underline C,\overline D,\underline D$ now depend on $\mu$, cf. \eqref{constants}, and \begin{align*} E_1&:=\frac{\underline D}{\sqrt{\overline D}}- \mu^{\beta-1/2 }\frac{ \cosh( \sqrt{\underline D}\mu^{1/2-\gamma})-1 }{ \sinh(\sqrt{\overline D}\mu^{1/2-\gamma})},\\[1ex] E_2&:= \frac{\overline C\underline D}{\underline C\overline D} \left(\cosh(\sqrt{\overline D}\mu^{1/2-\gamma})-1\right) -\mu^{\beta -\gamma}\left(\frac{\sinh(\sqrt{\underline D}\mu^{1/2-\gamma})}{\sqrt{\underline D}\mu^{1/2-\gamma}}-1\right).
\end{align*} Recalling that $\gamma>1/2$ and that $A$, $B$, $\overline C,\underline C,\overline D,\underline D$ are all positive, it suffices to show that $E_1$ and $E_2$ are negative when $\mu$ is large. In order to prove this property, we infer from \eqref{constants} that, as $\mu\to\infty,$ we have \[ \underline D\to \lambda^{-1},\qquad \overline D\to \lambda^{-1},\qquad \overline C\to 1, \qquad \underline C\to1. \] Moreover, using the substitution $t:=\sqrt{\underline D}\mu^{1/2-\gamma} $ and l'Hospital's rule, we find \begin{align*} \lim_{\mu\to\infty}E_1=&\frac{1}{\sqrt{\lambda}}-\lim_{\mu\to\infty}\mu^{\beta-1/2 }\frac{ \cosh( \sqrt{\underline D}\mu^{1/2-\gamma})-1 }{ \sinh(\sqrt{\underline D}\mu^{1/2-\gamma})}\frac{\sinh(\sqrt{\underline D}\mu^{1/2-\gamma})}{ \sinh(\sqrt{\overline D}\mu^{1/2-\gamma})}\\[1ex] =&\frac{1}{\sqrt{\lambda}}-\lim_{\mu\to\infty}\mu^{\beta-1/2 }\frac{ \cosh( \sqrt{\underline D}\mu^{1/2-\gamma})-1 }{ \sinh(\sqrt{\underline D}\mu^{1/2-\gamma})} =\frac{1}{\sqrt{\lambda}}-\frac{1}{\lambda^2} \lim_{t\searrow0}\frac{ \cosh( t)-1 }{ t^4\sinh(t)}=-\infty, \end{align*} cf. \eqref{ch}, and by similar arguments \begin{align*} \lim_{\mu\to\infty}E_2=&-\lim_{\mu\to\infty}\mu^{\beta -\gamma}\left(\frac{\sinh(\sqrt{\underline D}\mu^{1/2-\gamma})}{\sqrt{\underline D}\mu^{1/2-\gamma}}-1\right)=-\frac{1}{\lambda^{3/2}} \lim_{t\searrow0}\frac{ \sinh( t)-t }{ t^4}=-\infty. \end{align*} Hence, the right-hand side of \eqref{QE4} is negative when $\mu$ is sufficiently large, which proves the desired inequality \eqref{QE3}.
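We also point out that the two limits above can be checked without l'Hospital's rule, by means of the elementary Taylor expansions
\begin{align*}
\cosh(t)-1=\frac{t^2}{2}+O(t^4)\qquad\text{and}\qquad \sinh(t)-t=\frac{t^3}{6}+O(t^5)\qquad\text{as $t\searrow0$},
\end{align*}
which give $\frac{\cosh(t)-1}{t^4\sinh(t)}\sim\frac{1}{2t^3}\to\infty$ and $\frac{\sinh(t)-t}{t^4}\sim\frac{1}{6t}\to\infty$ as $t\searrow 0.$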
\end{proof} Combining Lemmas \ref{L:1} and \ref{L:4}, we see that the equation $W(0;\cdot,\cdot)=0$ has at least one solution $\mu$ for each $\lambda>\lambda_0.$ Concerning the sign of the first order derivatives $W_\lambda(0;\cdot,\cdot)$ and $W_\mu(0;\cdot,\cdot)$ at the zeros of $W(0;\cdot,\cdot)$, which will be used below to show that $W(0;\lambda,\cdot)$ has a unique zero for each $\lambda>\lambda_0$, the results established for a H\"older continuous \cite{W06b, W06a} or for a bounded vorticity function \cite{CM13xx, MM13x} extend to the case of an $L_r$-integrable vorticity function, without any restriction on $r\in(1,\infty).$ \begin{lemma}\label{L:5} Assume that $(\overline\lambda,\overline\mu)\in(\lambda_0,\infty)\times(0,\infty)$ satisfies $W(0; \overline\lambda,\overline\mu)=0.$ Then, we have \begin{align}\label{slim} W_\lambda(0; \overline\lambda,\overline\mu)>0\qquad\text{and}\qquad W_\mu(0; \overline\lambda,\overline\mu)<0. \end{align} \end{lemma} \begin{proof} Proposition \ref{P:2} and the discussion following it show that $\mathop{\rm Ker}\nolimits R_{\overline\lambda,\overline\mu} =\mathop{\rm span}\nolimits\{v_1\}$, where $v_1:=v_1(\cdot;\overline\lambda,\overline\mu)$. To prove the first claim, we note that the algebra property of $W^1_r((p_0,0))$ yields that the partial derivative $v_{1,\lambda}:=\partial_\lambda v_{1}(\cdot;\overline\lambda,\overline\mu)$ belongs to $W^2_r((p_0,0))$ and solves the problem \begin{equation}\label{v1l} \left\{\begin{array}{lll} (a^{3}(\overline\lambda)v_{1,\lambda}')'-\overline\mu a(\overline\lambda) v_{1,\lambda}= -(3a^2(\overline\lambda)a_{\lambda}(\overline\lambda)v_1')'+\overline\mu a_{\lambda}(\overline\lambda)v_1\qquad\text{in $ L_{r}((p_0,0))$,}\\[1ex] v_{1,\lambda}(p_0)= v_{1,\lambda}'(p_0)=0, \end{array}\right. \end{equation} where $a_{\lambda}(\overline\lambda)=1/(2a(\overline\lambda))$.
Because of the embedding $W^2_r((p_0,0))\hookrightarrow C^{1+\alpha}([p_0,0]),$ we may multiply the differential equation satisfied by $v_1$, cf. \eqref{ERU}, by $v_{1,\lambda}$ and the first equation of \eqref{v1l} by $v_1$; subtracting the resulting relations, we obtain the first claim of \eqref{slim}: \begin{align*} W_{\lambda}(0;\overline\lambda,\overline\mu)&=\overline\lambda^{3/2}v_{1,\lambda}'(0)+\frac{3}{2}\overline\lambda^{1/2}v_1'(0)-(g+\sigma\overline\mu)v_{1,\lambda}(0)\\ &=\frac{1}{v_1(0)}\left(\int_{p_0}^{0}\frac{3a(\overline\lambda)}{2} v_1'^{ 2}+\frac{\overline\mu}{2a(\overline\lambda)}v_1^2\, dp\right)>0.\end{align*} For the second claim, we find as above that $v_{1,\mu}:=\partial_\mu v_{1}(\cdot;\overline\lambda,\overline\mu)\in W^2_r((p_0,0))$ is the unique solution of the problem \begin{equation}\label{v1mu} \left\{\begin{array}{lll}(a^{3}(\overline\lambda)v_{1,\mu}')'-\overline\mu a(\overline\lambda) v_{1,\mu}=a(\overline\lambda)v_1\qquad \text{in $ L_{r}((p_0,0))$},\\[1ex] v_{1,\mu}(p_0)=v_{1,\mu}'(p_0)=0. \end{array}\right. \end{equation} Also, if we multiply the differential equation satisfied by $v_1$ by $v_{1,\mu}$ and the first equation of \eqref{v1mu} by $v_1$, we get, after taking the difference of these relations, \begin{align*} \int_{p_0}^0\!a(\overline\lambda)v_1^2\, dp=\overline\lambda^{3/2}\!v_{1,\mu}'(0)v_1(0)-\overline\lambda^{3/2}\!v_1'(0)v_{1,\mu}(0)=v_1(0)\left(\!\overline\lambda^{3/2}v_{1,\mu}'(0)-(g+\sigma\overline \mu)v_{1,\mu}(0)\!\right)\!, \end{align*} the last equality being a consequence of the fact that $v_1$ and $v_2:=v_2(\cdot;\overline\lambda,\overline\mu) $ are collinear for this choice of the parameters. Therefore, we have \begin{align}\label{qqq} W_\mu(0;\overline\lambda,\overline\mu)=\overline\lambda^{3/2}v_{1,\mu}'(0)-\sigma v_1(0)-(g+\sigma\overline\mu)v_{1,\mu}(0)=\frac{1}{v_1(0)} \left(\int_{p_0}^0a(\overline\lambda)v_1^2\, dp-\sigma v_1^2(0)\right).
\end{align} In order to determine the sign of the latter expression, we multiply the first equation of \eqref{ERU} by $v_1$ and get, by using once more the collinearity of $v_1$ and $v_2,$ that \[ \int_{p_0}^0a(\overline\lambda)v_1^2\, dp-\sigma v_1^2(0)=\frac{1}{\overline\mu}\left(gv_1^2(0)-\int_{p_0}^0a^3(\overline\lambda)v_1'^2\, dp\right). \] If $g=0$, the latter expression is negative and we are done. On the other hand, if we take gravity effects into account, then, because of $\overline\mu>0,$ it is easy to see that $a^{3/2}(\overline\lambda)v_1'$ and $a^{-3/2}(\overline\lambda)$ are linearly independent functions, which, together with Lemma \ref{L:1} and H\"older's inequality, ensures that \begin{align*} gv_1^2(0)&=g\left(\int_{p_0}^0a^{3/2}(\overline\lambda)v_1'\frac{1}{a^{3/2}(\overline\lambda)}\, dp\right)^2\\ &<g\left(\int_{p_0}^0a^{3 }(\overline\lambda)v_1'^2 \, dp\right)\left(\int_{p_0}^0 \frac{1}{a^{3 }(\overline\lambda)}\, dp\right)\leq \int_{p_0}^0a^{3 }(\overline\lambda)v_1'^2 \, dp, \end{align*} and the desired claim follows from \eqref{qqq}. \end{proof} We continue with the following result. \begin{lemma}\label{L:6} Given $\lambda>\lambda_0,$ the equation $W(0;\lambda,\mu)=0$ has a unique solution $\mu=\mu(\lambda)\in (0,\infty)$. The function \[\mu:(\lambda_0,\infty)\to(\inf_{(\lambda_0,\infty)}\mu(\lambda),\infty),\qquad \lambda\mapsto\mu(\lambda)\] is strictly increasing, real-analytic, and bijective. \end{lemma} \begin{proof} Given $\lambda>\lambda_0,$ it follows from Lemmas \ref{L:1} and \ref{L:4} that there exists a constant $\mu(\lambda)>0$ such that $W(0;\lambda,\mu(\lambda))=0.$ The uniqueness of this constant, as well as the real-analyticity and the monotonicity of $\lambda\mapsto\mu(\lambda)$, follows readily from Lemma \ref{L:5} and the implicit function theorem. To complete the proof, assume that there exists a sequence $\lambda_n\to\infty$ such that $(\mu(\lambda_n))_n$ is bounded.
Denoting by $v_{1n}$ the (strictly increasing) solution of \eqref{ERU} when $(\lambda,\mu)=(\lambda_n,\mu(\lambda_n)),$ we infer from \eqref{v1} that there exists a constant $C>0$ such that \[ v_{1n}(p)\leq C\left(1+\int_{p_0}^pv_{1n}(s)\, ds\right)\qquad\text{for all $n\geq 1$ and $p\in[p_0,0].$} \] Gronwall's inequality shows that the sequence $(v_{1n})_n$ is bounded in $C([p_0,0])$, and together with \eqref{v1} we find that \[0=W(0;\lambda_n,\mu(\lambda_n))\geq a^3(\lambda_n;p_0)-(g+\sigma\mu(\lambda_n))v_{1n}(0)\underset{n\to\infty}\to\infty.\] This is a contradiction, and the proof is complete. \end{proof} We now choose the integer $N$ in Theorem \ref{T:MT} to be the smallest positive integer which satisfies \begin{equation}\label{eq:rest} N^2>\inf_{(\lambda_0,\infty)}\mu(\lambda). \end{equation} Invoking Lemma \ref{L:6}, we find a sequence $(\lambda_n)_{n\geq N}\subset (\lambda_0,\infty)$ with the properties that $\lambda_n\nearrow\infty$ and \begin{equation}\label{eq:sec} \text{$\mu(\lambda_n)=n^2$ \qquad for all $n\geq N.$} \end{equation} We summarize the previous analysis in the following result. \begin{prop}\label{P:3} Let $N\in{\mathbb N}$ be defined by \eqref{eq:rest}. Then, for each $n\geq N $, the Fr\'echet derivative $\partial_{\widetilde h} \mathcal{F}(\lambda_n,0)\in\mathcal{L}(X,Y)$, with $\lambda_n$ defined by \eqref{eq:sec}, is a Fredholm operator of index zero with a one-dimensional kernel $\mathop{\rm Ker}\nolimits\partial_{\widetilde h} \mathcal{F}(\lambda_n,0)=\mathop{\rm span}\nolimits\{w_n\}$, where $w_n\in X$ is the function $w_n(q,p):=v_1(p;\lambda_n,n^2)\cos(nq)$ for all $(q,p)\in\overline\Omega.$ \end{prop} \begin{proof} The result is a consequence of Lemmas \ref{L:2} and \ref{L:6} and of Proposition \ref{P:2}.
\end{proof} In order to apply the theorem on bifurcations from simple eigenvalues to the equation \eqref{BP}, we still have to verify the transversality condition \begin{equation}\label{eq:TC} \partial_{\lambda {\widetilde h}}\mathcal{F}(\lambda_n,0)[w_n]\notin\mathop{\rm Im}\nolimits \partial_{\widetilde h} \mathcal{F}(\lambda_n,0) \end{equation} for $n\geq N.$ \begin{lemma}\label{L:TC} The transversality condition \eqref{eq:TC} is satisfied for all $n\geq N$. \end{lemma} \begin{proof} The proof is similar to that of Lemmas 4.4 and 4.5 in \cite{MM13x}, and therefore we omit it. \end{proof} We now come to the proof of our main existence result. \begin{proof}[Proof of Theorem \ref{T:MT}] Let $N$ be defined by \eqref{eq:rest}, and let $(\lambda_n)_{n\geq N}\subset (\lambda_0,\infty)$ be the sequence defined by \eqref{eq:sec}. Invoking the relations \eqref{BP0} and \eqref{BP1}, Proposition \ref{P:3}, and Lemma \ref{L:TC}, we see that all the assumptions of the theorem on bifurcations from simple eigenvalues of Crandall and Rabinowitz \cite{CR71} are satisfied for the equation \eqref{BP} at each of the points $\lambda=\lambda_n,$ $n\geq N.$ Therefore, for each $n\geq N$, there exist $\varepsilon_n>0$ and a real-analytic curve \[\text{$(\widetilde \lambda_n,{\widetilde h}_n):(\lambda_n-\varepsilon_n,\lambda_n+\varepsilon_n)\to (2\Gamma_M,\infty)\times X,$ }\] consisting only of solutions of the problem \eqref{BP}. Moreover, as $s\to0$, we have \begin{equation}\label{asex} \widetilde\lambda_n(s)=\lambda_n+O(s)\quad \text{in ${\mathbb R}$},\qquad {\widetilde h}_n(s)=sw_n+O(s^2)\quad \text{in $X$}, \end{equation} where $w_n\in X$ is the function defined in Proposition \ref{P:3}. Furthermore, in a neighborhood of $(\lambda_n,0),$ the solutions of \eqref{BP} are either laminar or located on the local curve $(\widetilde\lambda_n,{\widetilde h}_n)$.
The constants $\varepsilon_n$ are chosen sufficiently small to guarantee that $H(\cdot;\widetilde\lambda_n(s))+{\widetilde h}_n(s)$ satisfies \eqref{PBC} for all $|s|<\varepsilon_n$ and all $n\geq N.$ For each integer $n\geq N,$ the curve ${\mathcal{C}_{n}}$ mentioned in Theorem \ref{T:MT} is parametrized by $[s\mapsto H(\cdot;\widetilde\lambda_n(s))+{\widetilde h}_n(s)]\in C^\omega((-\varepsilon_n,\varepsilon_n),X).$ We now pick a function $h$ on one of the local curves ${\mathcal{C}_{n}}$. In order to show that this weak solution of \eqref{WF} belongs to $ W^2_r(\Omega)$, we first infer from Theorem 5.1 in \cite{MM13x} that the distributional derivatives $\partial_q^mh$ also belong to $C^{1+\alpha}(\overline\Omega)$ for all $m\geq1.$ Using the same arguments as in the last part of the proof of Theorem \ref{T:EQ}, we find that $h\in C^{1+\alpha}(\overline\Omega)\cap W^2_r(\Omega)$ satisfies the first equation of \eqref{PB} in $L_r(\Omega)$. Because $(1-\partial_q^2)^{-1}\in \mathcal{L}(C^\alpha(\mathbb{S}), C^{2+\alpha}(\mathbb{S}))$, the equation \eqref{PB0} yields that $\mathop{\rm tr}\nolimits_0 h\in C^{2+\alpha}(\mathbb{S})$, and therefore $h$ is a strong solution of \eqref{PB}. Moreover, by \cite[Corollary 5.2]{MM13x}, a result which shows that the regularity properties of the streamlines of classical solutions \cite{Hen10, DH12} persist even for weak solutions with merely integrable vorticity, $[q\mapsto h(q,p)]$ is a real-analytic map for any $p\in[p_0,0]$. Finally, because of \eqref{asex}, it is not difficult to see that any solution $h=H(\cdot;\widetilde \lambda_n(s))+{\widetilde h}_n(s)\in{\mathcal{C}_{n}},$ with $s\neq0 $ sufficiently small, corresponds to waves that possess a single crest per period and are symmetric with respect to the crest (and trough) line.
\end{proof} As noted in the discussion following Lemma \ref{L:1}, when $r\in(1,3),$ there are examples of vorticity functions $\gamma\in L_r((p_0,0))$ for which the mapping $\lambda\mapsto\mu(\lambda)$ defined in Lemma \ref{L:6} is bounded away from zero on $(\lambda_0,\infty)$. This property imposes restrictions (through the positive integer $N$) on the wavelength of the water wave solutions bifurcating from the laminar flows, cf. Theorem \ref{T:MT}. The lemma below gives, in the context of capillary-gravity waves, sufficient conditions which ensure that $\mu:(\lambda_0,\infty)\to(0,\infty)$ is a bijective mapping, which corresponds to the choice $N=1$ in Theorem \ref{T:MT}, a situation in which no restrictions are needed. On the other hand, when considering pure capillary waves, if $\mu:(\lambda_0,\infty)\to(0,\infty)$ is a bijective mapping, then necessarily $\Gamma_M=\Gamma(p_0),$ and the problems \eqref{ERU} and \eqref{ERUa} become singular as $\lambda\to \lambda_0=2\Gamma_M.$ Therefore, finding sufficient conditions in this setting appears to be much more involved. \begin{lemma}\label{L:9} Let $r\geq3$, $\gamma\in L_r((p_0,0))$ and assume that $g>0$. Then, $\lambda_0>2\Gamma_M$ and the integer $N$ in Theorem \ref{T:MT} satisfies $N=1$, provided that \begin{align}\label{eq.condCG} \int_{p_0}^0a(\lambda_0)\left(\int_{p_0}^p\frac{1}{a^3(\lambda_0;s)}\, ds\right)^2\, dp<\frac{\sigma}{g^2}. \end{align} \end{lemma} \begin{proof} Let us assume that $\Gamma(p_1)=\Gamma_M$ for some $p_1\in[p_0,0)$ (the case $p_1=0$ is similar).
Then, if $\delta<1$ is such that $p_1+\delta<0,$ we have \begin{align*} \lim_{\lambda\searrow2\Gamma_M}\int_{p_0}^0 \frac{dp}{a^3(\lambda;p)}&=\lim_{\varepsilon\searrow 0}\int_{p_0}^0\frac{dp}{\sqrt{\varepsilon+2(\Gamma(p_1)-\Gamma(p))}^3}\geq c\lim_{\varepsilon\searrow 0}\int_{p_1}^{p_1+\delta}\frac{dp}{\varepsilon^{3/2}+\left|\int_{p_1}^p\gamma(s)\, ds\right|^{3/2}}\\ & \geq c\lim_{\varepsilon\searrow 0}\int_{p_1}^{p_1+\delta}\!\frac{dp}{\varepsilon^{3/2}+\left\| \gamma \right\|_{L_r}^{3/2}|p-p_1|^{3\alpha/2}}\geq c\lim_{\varepsilon\searrow 0}\int_{p_1}^{p_1+\delta}\!\frac{dp}{\varepsilon+p-p_1}=\infty, \end{align*} with $\alpha:=(r-1)/r$ and with $c$ denoting positive constants that are independent of $\varepsilon$; here we have used that $3\alpha/2\geq1$ for $r\geq3.$ In view of Lemma \ref{L:1}, we find that $\lambda_0>2\Gamma_M$ is the unique zero of $W(0;\cdot,0).$ Recalling now \eqref{qqq} and the relation \eqref{v1}, one can easily see, because of $W(0;\lambda_0,0)=0,$ that the condition \eqref{eq.condCG} yields $W_\mu(0;\lambda_0,0)<0$. Since Lemma \ref{L:6} implies $W(0;\lambda_0, \inf_{(\lambda_0,\infty)}\mu)=0,$ the relation $W_\mu(0;\lambda_0,0)<0$ together with Lemma \ref{L:5} guarantees that $\inf_{(\lambda_0,\infty)}\mu=0$. This proves the claim. \end{proof} \end{document}
\begin{document} \title{Variation formulas for an extended Gompf invariant} \begin{abstract} In 1998, R. Gompf defined a homotopy invariant $\theta_G$ of oriented 2-plane fields in 3-manifolds. This invariant is defined for oriented 2-plane fields $\xi$ in a closed oriented 3-manifold $M$ when the first Chern class $c_1(\xi)$ is a torsion element of $H^2(M;\b Z)$. In this article, we define an extension of the Gompf invariant for all compact oriented 3-manifolds with boundary and we study its iterated variations under Lagrangian-preserving surgeries. It follows that the extended Gompf invariant is a degree two invariant with respect to a suitable finite type invariant theory. \end{abstract} \section*{Introduction} \renewcommand{\thetheorem}{\arabic{theorem}} \subsection*{Context} In \cite{gompf}, R. Gompf defined a homotopy invariant $\theta_G$ of oriented 2-plane fields in 3-manifolds. This invariant is defined for oriented 2-plane fields $\xi$ in a closed oriented 3-manifold $M$ when the first Chern class $c_1(\xi)$ is a torsion element of $H^2(M;\b Z)$. This invariant appears, for instance, in the construction of an absolute grading for the Heegaard-Floer homology groups, see \cite{GH}. Since the positive unit normal of an oriented 2-plane field of a Riemannian 3-manifold $M$ is a section of its unit tangent bundle $UM$, homotopy classes of oriented 2-plane fields of $M$ are in one-to-one correspondence with homotopy classes of sections of $UM$. Thus, the invariant $\theta_G$ may be regarded as an invariant of homotopy classes of nowhere zero vector fields, also called \textit{combings}. In that setting, the Gompf invariant is defined for \textit{torsion combings} of closed oriented 3-manifolds $M$, \textit{ie}\ combings $X$ such that the Euler class $e_2(X^\perp)$ of the normal bundle $X^\perp$ is a torsion element of $H^2(M;\b Z)$. \\ In \cite{lescopcombing}, C.
Lescop proposed an alternative definition of $\theta_G$ using a Pontrjagin construction from the combing viewpoint. Here, we use a similar approach to show how to define Pontrjagin numbers for torsion combings by using pseudo-parallelizations, which are a generalization of parallelizations. This enables us to define a relative extension of the Gompf invariant for torsion combings in all compact oriented 3-manifolds with boundary. We also study the iterated variations under Lagrangian-preserving surgeries of this extended invariant and prove that it is a degree two invariant with respect to a suitable finite type invariant theory. In such a study, pseudo-parallelizations prove decisive since they are, in some sense, compatible with Lagrangian-preserving surgeries while genuine parallelizations are not. \subsection*{Conventions} In this article, compact oriented 3-manifolds may have boundary unless otherwise mentioned. All manifolds are implicitly equipped with Riemannian structures. The statements and the proofs are independent of the chosen Riemannian structures. \\ If $M$ is an oriented manifold and if $A$ is a submanifold of $M$, let $TM$, resp. $TA$, denote the tangent bundles to $M$, resp. $A$, and let $NA$ refer to the orthogonal bundle to $A$ in $M$, which is canonically isomorphic to the normal bundle to $A$ in $M$. The fibers of $NA$ are oriented so that $NA \oplus TA = TM$ fiberwise and the boundaries of all compact manifolds are oriented using the outward normal first convention. \\ If $A$ and $B$ are transverse submanifolds of an oriented manifold $M$, their intersection is oriented so that $N(A\cap B) = NA \oplus NB$, fiberwise. Moreover, if $A$ and $B$ have comple\-mentary dimensions, \textit{ie}\ if $\mbox{dim}(A)+\mbox{dim}(B)=\mbox{dim}(M)$, let $\varepsilon_{A\cap B}(x) = 1$ if $x \in A\cap B$ is such that $T_xA\oplus T_xB = T_xM$ and $\varepsilon_{A\cap B}(x) = -1$ otherwise.
If $A$ and $B$ are compact transverse submani\-folds of an oriented manifold $M$ with complementary dimensions, the \textit{algebraic intersection of $A$ and $B$ in $M$} is $$ \langle A, B \rangle_M = \sum_{x \in A\cap B} \varepsilon_{A\cap B}(x). $$ Let $L_1$ and $L_2$ be two rational cycles of an oriented $n$-manifold $M$. Assume that $L_1$ and $L_2$ bound two rational chains $\Sigma_1$ and $\Sigma_2$, respectively. If $L_1$ is transverse to $\Sigma_2$, if $L_2$ is transverse to $\Sigma_1$ and if $\mbox{dim}(L_1)+\mbox{dim}(L_2) = n-1$, then the \textit{linking number of $L_1$ and $L_2$ in $M$} is $$ lk_M(L_1,L_2) = \langle \Sigma_1 , L_2 \rangle_M = (-1)^{\tiny{n-\mbox{dim}(L_2)}}\langle L_1 , \Sigma_2 \rangle_M. $$ \subsection*{Setting and statements} A \textit{combing} $(X,\sigma)$ of a compact oriented 3-manifold $M$ is a section $X$ of the unit tangent bundle $UM$ together with a nonvanishing section $\sigma$ of the restriction $X^\perp_{|\partial M}$ of the normal bundle $X^\perp$ to $\partial M$. For simplicity's sake, the section $\sigma$ may be omitted in the notation of a combing. For any combing $(X,\sigma)$, note that $\rho(X)=(X_{|\partial M}, \sigma, X_{|\partial M} \wedge \sigma)$, where $\wedge$ denotes the cross product, is a trivialization of $TM_{|\partial M}$. So, a combing of a compact oriented 3-manifold $M$ may also be seen as a pair $(X,\rho)$ where $X$ is a section of $UM$ that is the first vector of a trivialization $\rho$ of $TM_{|\partial M}$ together with this trivialization. \\ Two combings $(X,\sigma_X)$ and $(Y,\sigma_Y)$ of a compact oriented 3-manifold $M$ are said to be \textit{transverse} when the graph $X(M)$ is transverse to $Y(M)$ and $-Y(M)$ in $UM$. 
The combings $(X,\sigma_X)$ and $(Y,\sigma_Y)$ are said to be \textit{$\partial$-compatible} when $X_{|\partial M}= Y_{|\partial M}$, $\sigma_X = \sigma_Y$, $X(\mathring M)$ is transverse to $Y(\mathring M)$ and $-Y(\mathring M)$ in $UM$, and $$ \overline{X(\mathring M)\cap Y(\mathring M)} \cap UM_{|\partial M} =\emptyset. $$ \iffalse If $X(\mathring M)$ is transverse to $Y(\mathring M)$ and $-Y(\mathring M)$ in $UM$ and if $$ \overline{X(\mathring M)\cap Y(\mathring M)} \cap UM_{|\partial M} =\emptyset $$ then $(X,\sigma_X)$ and $(Y,\sigma_Y)$ are \textit{almost transverse}. Two combings $(X,\sigma_X)$ and $(Y,\sigma_Y)$ of $M$ are said to be \textit{coincide on $\partial M$} if $X_{|\partial M}= Y_{|\partial M}$ and $\sigma_X=\sigma_Y$. \fi When $(X,\sigma_X)$ and $(Y,\sigma_Y)$ are $\partial$-compatible, define two links $L_{X=Y}$ and $L_{X=-Y}$ as follows. First, let $P_M$ denote the projection from $UM$ to $M$ and set $$ L_{X=-Y} = P_M (X(M) \cap (-Y)(M)). $$ Second, there exists a link $L_{X=Y}$ in $\mathring{M}$ such that $$ P_M (X(M) \cap Y(M)) = \partial M \sqcup L_{X=Y}. $$ If $(X,\sigma)$ is a combing of a compact oriented 3-manifold $M$, its \textit{relative Euler class} $e_2^M(X^{\perp}\hspace{-1mm}, \sigma)$ in $H^2(M,\partial M;\b Z)$ is an obstruction to extending the section $\sigma$ as a nonvanishing section of $X^\perp$. This obstruction is such that its Poincaré dual $P(e_2^M(X^\perp, \sigma))$ is represented by the zero set of a generic section of $X^\perp$ extending $\sigma$. This zero set is oriented by its coorientation induced by the orientation of $X^\perp$. When $M$ is closed, the \textit{Euler class} $e_2(X^\perp)$ of $X$ is just this obstruction to finding a nonvanishing section of $X^\perp$. 
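For instance, since the vectors of a parallelization are pointwise linearly independent, if a combing $X$ of a closed oriented 3-manifold $M$ extends to a parallelization $\tau=(X,E_2,E_3)$ of $M$, then the component of $E_2$ orthogonal to $X$ is a nonvanishing section of $X^\perp$, so that $$ e_2(X^\perp)=0. $$ In particular, $e_2(X^\perp)$ is then a torsion element of $H^2(M;\b Z)$.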
\\ A combing $(X,\sigma)$ of a compact oriented 3-manifold $M$ is a \textit{torsion combing} if $e_2^M(X^\perp,\sigma)$ is a torsion element of $H^2(M,\partial M ; \b Z)$, \textit{ie}\ $\left[e_2^M(X^\perp,\sigma)\right]=0$ in $H^2(M,\partial M ; \b Q)$. \\ Let $M_1$ and $M_2$ be two compact oriented 3-manifolds. The manifolds $M_1$ and $M_2$ are said to have \textit{identified boundaries} if a collar of $\partial M_1$ in $M_1$ and a collar of $\partial M_2$ in $M_2$ are identified. In this case, ${TM_1}_{|\partial M_1}= \b R n_1 \oplus T \partial M_1$ is naturally identified with ${TM_2}_{|\partial M_2}= \b R n_2 \oplus T \partial M_2$ by an identification that maps the outward normal vector field $n_1$ to $M_1$ to the outward normal vector field $n_2$ to $M_2$. \\ If $\tau_1$ and $\tau_2$ are parallelizations of two compact oriented 3-manifolds $M_1$ and $M_2$ with identified boundaries such that $\tau_1$ and $\tau_2$ coincide on $\partial M_1 \simeq \partial M_2$, then the \textit{first relative Pontrjagin number of $\tau_1$ and $\tau_2$} is an element $p_1(\tau_1,\tau_2)$ of $\b Z$ which corresponds to the Pontrjagin obstruction to extending a specific trivialization $\tau(\tau_1,\tau_2)$ of $TW \otimes \b C$ defined on the boundary of a cobordism $W$ from $M_1$ to $M_2$ with signature zero (see Subsection~\ref{ssec_defpara} or \cite[Subsection 4.1]{lescopcombing}). In the case of a parallelization $\tau$ of a closed oriented 3-manifold $M$, we get an absolute version. The \textit{Pontrjagin number $p_1(\tau)$ of $\tau$} is the relative Pontrjagin number $p_1(\tau_\emptyset,\tau)$ where $\tau_\emptyset$ is the parallelization of the empty set. Hence, for two parallelizations $\tau_1$ and $\tau_2$ of some closed oriented 3-manifolds, $$ p_1(\tau_1,\tau_2) = p_1(\tau_2) - p_1(\tau_1). $$ In \cite{lescopcombing}, using an interpretation of the variation of Pontrjagin numbers of parallelizations as an intersection of chains, C. 
Lescop showed that such a variation can be computed using only the first vectors of the parallelizations. This led her to the following theorem, which contains a definition of the \textit{Pontrjagin numbers} for torsion combings of closed oriented 3-manifolds. \begin{theorem}[{\cite[Theorem 1.2 \& Subsection 4.3]{lescopcombing}}] \label{thm_defp1X} Let $M$ be a closed oriented 3-manifold. There exists a unique map $$ p_1 \ : \ \lbrace \mbox{homotopy classes of torsion combings of } M \rbrace \longrightarrow \b Q $$ such that : \begin{enumerate}[(i)] \item for any combing $X$ on $M$ such that $X$ extends to a parallelization $\tau$ of $M$ : $$ p_1([X])=p_1(\tau), $$ \item if $X$ and $Y$ are two transverse torsion combings of $M$, then $$ p_1([Y])-p_1([X])= 4 \cdot lk(L_{X=Y},L_{X=-Y}). $$ \end{enumerate} Furthermore, $p_1$ coincides with the Gompf invariant : for any torsion combing $X$, $$ p_1([X])=\theta_G(X^\perp). $$ \end{theorem} In this article, we study the variations of the Pontrjagin numbers of torsion combings of compact oriented 3-manifolds with respect to specific surgeries, called \textit{Lagrangian-preserving surgeries}, which are defined as follows. \\ A \textit{rational homology handlebody of genus $g\in \b N$}, or $\b Q$HH for short, is a compact oriented 3-manifold with the same homology with coefficients in $\b Q$ as the standard genus $g$ handlebody. Note that the boundary of a genus $g$ rational homology handlebody is homeomorphic to the standard closed connected oriented surface of genus $g$. The \textit{Lagrangian} of a $\b Q$HH $A$ is $$ \go L_A := \mbox{ker}\left(i^A_* : H_1(\partial A; \b Q) \longrightarrow H_1(A;\b Q)\right) $$ where $i^A$ is the inclusion of $\partial A$ into $A$. 
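For instance, if $A = \b D^2 \times \b S^1$ is the standard solid torus, then $H_1(\partial A; \b Q)$ is generated by the classes of the meridian $m = \partial \b D^2 \times \lbrace 1 \rbrace$ and of the longitude $\ell = \lbrace 1 \rbrace \times \b S^1$, where $1 \in \partial \b D^2$. Since $m$ bounds a disk in $A$ while $\ell$ generates $H_1(A;\b Q)$, $$ \go L_A = \b Q \, [m] \subset H_1(\partial A; \b Q). $$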
An \textit{LP$_\b Q$-surgery datum} in a compact oriented 3-manifold $M$ is a triple $(A,B,h)$, or $(\sfrac{B}{A})$ for short, where $A \subset M$, where $B$ and $A$ are rational homology handlebodies and where $h : \partial A \rightarrow \partial B$ is an identification homeomorphism, called \textit{LP$_\b Q$-identification}, such that $h_*(\go L_A)=\go L_B$. Performing the \textit{LP$_\b Q$-surgery} associated with the datum $(A,B,h)$ in $M$ consists in constructing the manifold : $$ M \left( \sfrac{B}{A} \right) = \left( M \setminus \mathring A \right) \ \bigcup_h \ B. $$ If $(M,X)$ is a compact oriented 3-manifold equipped with a combing, if $(A,B,h)$ is an LP$_\b Q$-surgery datum in $M$, and if $X_{B}$ is a combing of $B$ that coincides with $X$ on $\partial A \simeq \partial B$, then $(A,B,h,X_{B})$, or $(\sfrac{B}{A},X_B)$ for short, is an \textit{LP$_\b Q$-surgery datum in $(M,X)$}. Performing the \textit{LP$_\b Q$-surgery} associated with the datum $(A,B,h,X_{B})$ in $(M,X)$ consists in constructing the manifold $M \left( \sfrac{B}{A} \right)$ equipped with the combing : $$ X(\sfrac{B}{A}) = \left\lbrace \begin{aligned} & X & &\mbox{on $M\setminus \mathring A$}, \\ & X_{B} & &\mbox{on $B$.} \end{aligned} \right. $$ The main result of this article is a variation formula for Pontrjagin numbers -- see Theorem~\ref{thm_D2nd} below -- which reads as follows in the special case of compact oriented 3-manifolds without boundary. \begin{theorem} \label{thm_D2nd0} Let $(M,X)$ be a closed oriented 3-manifold equipped with a combing and let $\lbrace (\sfrac{B_i}{A_i},X_{B_i}) \rbrace_{i \in \lbrace 1,2 \rbrace}$ be two disjoint LP$_\b Q$-surgeries in $(M,X)$ (\textit{ie}\ $A_1$ and $A_2$ are disjoint). For all $I \subset \lbrace 1,2 \rbrace$, let $(M_I,X^I)$ be the combed manifold obtained by performing the surgeries associated to the data $ \lbrace (\sfrac{B_i}{A_i},X_{B_i})\rbrace_{i \in I}$. 
If $\lbrace X^I \rbrace_{I \subset \lbrace 1,2 \rbrace}$ is a family of torsion combings of the $\lbrace M_I \rbrace_{I\subset \lbrace 1,2 \rbrace}$, then $$ \sum_{I\subset\lbrace 1 , 2 \rbrace} (-1)^{|I|} p_1([X^I]) = - 2 \cdot lk_M \left(L_{\lbrace X^I \rbrace}(\sfrac{B_1}{A_1}), L_{\lbrace X^I \rbrace}(\sfrac{B_2}{A_2})\right), $$ where the right-hand side of the equality is defined as follows. For all $i \in \lbrace 1 , 2 \rbrace$, let $$ H_1(A_i;\b Q) \stackrel{i^{A_i}_*}{\longleftarrow} \frac{H_1(\partial A_i;\b Q)}{\go L_{A_i}} \stackrel{h_*}{=} \frac{H_1(\partial B_i;\b Q)}{\go L_{B_i}} \stackrel{i^{B_i}_*}{\longrightarrow} H_1(B_i;\b Q) $$ be the sequence of isomorphisms induced by the inclusions $i^{A_i}$ and $i^{B_i}$. There exists a unique homology class $L_{\lbrace X^I\rbrace}(\sfrac{B_i}{A_i})$ in $H_1(A_i; \b Q)$ such that for any nonvanishing section $\sigma_i$ of $X^\perp_{|\partial A_i}$~: $$ L_{\lbrace X^I \rbrace}(\sfrac{B_i}{A_i}) = i^{A_i}_* \circ (i^{B_i}_*)^{-1}\left(P\big(e_2^{B_i}(X_{B_i}^\perp, \sigma_i)\big)\right) - P\big(e_2^{A_i}(X^\perp_{|A_i}, \sigma_i)\big), $$ where $P$ stands for Poincaré duality isomorphisms from $H^2(A_i,\partial A_i;\b Q)$ to $H_1(A_i;\b Q)$ or from $H^2(B_i,\partial B_i;\b Q)$ to $H_1(B_i;\b Q)$. Furthermore, the homology classes $L_{\lbrace X^I \rbrace}(\sfrac{B_1}{A_1})$ and $L_{\lbrace X^I \rbrace}(\sfrac{B_2}{A_2})$ are mapped to zero in $H_1(M;\b Q)$ and the map $$ lk_M : \mbox{\textup{ker}}\big(H_1(A_1;\b Q) \rightarrow H_1(M; \b Q)\big) \times \mbox{\textup{ker}}\big(H_1(A_2;\b Q) \rightarrow H_1(M; \b Q)\big) \longrightarrow \b Q $$ is well-defined. \end{theorem} \begin{exampleemempty} Consider $\b S^3$ equipped with a parallelization $\tau : \b S^3 \times \b R^3 \rightarrow T\b S^3$ which extends the standard parallelization of the unit ball. In this ball, consider a positive Hopf link and let $A_1 \sqcup A_2$ be a tubular neighborhood of this link. 
Let $X$ be the combing $\tau(e_1)=\tau(.,e_1)$, where $e_1=(1,0,0)\in \b S^2$, and let $B_1 = A_1$ and $B_2=A_2$. Identify $A_1$ and $A_2$ with $\b D^2 \times \b S^1$ and consider a smooth map $g : \b D^2 \rightarrow \b S^2$ such that $g(\partial \b D^2) = e_1$, and such that $-e_1$ is a degree 1 regular value of $g$ with a single preimage $\omega$. Finally, for $i \in \lbrace 1,2 \rbrace$, let $X_{B_i}$ be the combing : $$ X_{B_i} : \left\lbrace \begin{aligned} \b D^2 \times \b S^1 &\longrightarrow UM \\ (z,u) &\longmapsto \tau((z,u),g(z)). \end{aligned} \right. $$ In this case, $L_{X(\lbrace \sfrac{B_i}{A_i} \rbrace_{i \in \lbrace 1,2 \rbrace})=-X} = L_{X_{B_1}=-X_{|A_1}} \cup L_{X_{B_2}=-X_{|A_2}}$, and, for $i \in \lbrace 1,2 \rbrace$, using the identification of $A_i$ with $\b D^2 \times \b S^1$, the link $L_{X_{B_i}=-X_{|A_i}}$ reads $\lbrace \omega \rbrace \times \b S^1$. As we will see in Proposition~\ref{prop_linksinhomologyI}, for $i \in \lbrace 1,2 \rbrace$, $L_{\lbrace X^I \rbrace}(\sfrac{B_i}{A_i}) = 2 [L_{X_{B_i}=-X_{|A_i}}]$. Finally, since the links $L_{X_{B_1}=-X_{|A_1}}$ and $L_{X_{B_2}=-X_{|A_2}}$ form a positive Hopf link in $M$, Theorem~\ref{thm_D2nd0} yields $$ \sum_{I\subset\lbrace 1 , 2 \rbrace} (-1)^{|I|} p_1([X^I]) = - 2 \cdot lk_M\left(2[L_{X_{B_1}=-X_{|A_1}}] , 2[L_{X_{B_2}=-X_{|A_2}}]\right) = -8. $$ \end{exampleemempty} In general, for an LP$_\b Q$-surgery datum $(\sfrac{B}{A})$ in a compact oriented 3-manifold $M$, a trivialization of $TM_{|(M\setminus \mathring A)}$ cannot be extended as a parallelization of $M(\sfrac{B}{A})$. It follows that LP$_\b Q$-surgeries cannot be expressed as local moves on parallelized compact oriented 3-manifolds. This makes computing the variation of Pontrjagin numbers of torsion combings under LP$_\b Q$-surgeries tricky since Pontrjagin numbers of torsion combings are defined with respect to Pontrjagin numbers of parallelizations.
\\ However, if $M$ is a compact oriented 3-manifold and if $\rho$ is a trivialization of $TM_{|\partial M}$, then the obstruction to finding a parallelization of $M$ which coincides with $\rho$ on $\partial M$ is an element of $H^2(M,\partial M; \sfrac{\b Z}{2\b Z})$ -- hence, its Poincaré dual is an element $[\gamma]$ of $H_1(M;\sfrac{\b Z}{2\b Z})$ -- and it is possible to get around such an obstruction thanks to the notion of \textit{pseudo-parallelization} developed by C.~Lescop. Let us postpone the formal definition to Subsection~\ref{ssec_defppara} (see also \cite{lescopcube}) and, for the time being, let us just mention that a pseudo-parallelization $\bar\tau$ of a compact oriented 3-manifold $M$ is a triple $(N(\gamma); \tau_e, \tau_d)$ where $N(\gamma)$ is a framed tubular neighborhood of a link $\gamma$ in $\mathring M$, $\tau_e$ is a parallelization of $M\setminus N(\gamma)$ and $\tau_d : N(\gamma)\times \b R^3 \rightarrow TN(\gamma)$ is a parallelization of $N(\gamma)$ such that the following formula defines a section $E_1^d$ of $UM$ : $$ E_1^d : \left\lbrace \begin{aligned} m \in M\setminus \mathring N(\gamma) &\longmapsto \tau_e(m,e_1) \\ m \in N(\gamma) &\longmapsto \tau_d(m,e_1). \end{aligned} \right. $$ Let us finally mention that $\bar\tau$ also determines a section $E_1^g$ of $UM$ which coincides with $E_1^d$ on $M\setminus \mathring N(\gamma)$. The sections $E_1^d$ and $E_1^g$ are the \textit{Siamese sections} of $\bar\tau$ and the link $\gamma$ is the \textit{link of the pseudo-parallelization $\bar\tau$}. \\ C. Lescop showed that it is possible to associate a \textit{complex trivialization}, defined up to homotopy, to a pseudo-parallelization, see Definition~\ref{def_complextriv}. This leads to a natural extension of the notion of first relative Pontrjagin numbers of parallelizations to pseudo-parallelizations.
Furthermore, as in the case of parallelizations, a pseudo-parallelization $\bar\tau$ of a compact oriented 3-manifold $M$ admits \textit{pseudo-sections} $\bar\tau(M\times\lbrace v\rbrace)$ which are 3-chains of $UM$, for all $v \in \b S^2$. In the special case $v = e_1$, the pseudo-section $\bar\tau(M\times \lbrace e_1 \rbrace)$ of $\bar\tau$ can be written as : $$ \bar\tau(M\times \lbrace e_1 \rbrace) = \frac{E_1^d(M) + E_1^g(M)}{2}. $$ A combing $(X,\sigma)$ of $M$ is said to be \textit{compatible with $\bar\tau$} if $(X,\sigma)$ is $\partial$-compatible with $(E_1^d,{E_2^e}_{|\partial M})$ and $(E_1^g,{E_2^e}_{|\partial M})$, where $E_2^e$ is the second vector of $\tau_e$, and if $$ L_{E_1^d=X} \cap L_{E_1^g=-X}=\emptyset \mbox{ \ and \ } L_{E_1^g=X} \cap L_{E_1^d=-X}=\emptyset. $$ If $(X,\sigma)$ and $\bar\tau$ are compatible, then $\rho(X)=\bar\tau_{|\partial M}$ and we get two disjoint rational combinations of oriented links in $\mathring M$ : $$ L_{\bar\tau = X} = \frac{L_{E_1^d= X} + L_{E_1^g= X}}{2} \mbox{ \ and \ } L_{\bar\tau = - X} = \frac{L_{E_1^d=- X} + L_{E_1^g=- X}}{2}. $$ Pseudo-parallelizations allow us to revisit the definition of Pontrjagin numbers and to generalize it to torsion combings of compact oriented 3-manifolds with nonempty boundary as follows. Let $P_{\b S^2}$ denote the standard projection from $W\times \b S^2$ to $\b S^2$, for any manifold $W$. \begin{lemma} \label{lem1} Let $(X,\sigma)$ be a torsion combing of a compact oriented 3-manifold $M$, let $\bar\tau$ be a pseudo-parallelization of $M$, and let $E_1^d$ and $E_1^g$ be the Siamese sections of $\bar\tau$. If $\bar\tau$ and $(X,\sigma)$ are compatible, then the expression $$ 4\cdot lk_M(L_{\bar\tau=X} , L_{\bar\tau=-X}) - lk_{\b S^2} \left( e_1-(-e_1) \ , \ P_{\b S^2} \circ \tau_d^{-1} \circ X(L_{{E_1}^d={-E_1}^g}) \right) $$ depends only on the homotopy class of $(X,\sigma)$. It will be denoted $p_1(\bar\tau,[X])$ and its opposite will be written $p_1([X],\bar\tau)$.
\end{lemma} \begin{theorem} \label{thm_defp1Xb} Let $(X_1,\sigma_{X_1})$ and $(X_2,\sigma_{X_2})$ be torsion combings of two compact oriented 3-manifolds $M_1$ and $M_2$ with identified boundaries such that $(X_1,\sigma_{X_1})$ and $(X_2,\sigma_{X_2})$ coincide on the boundary. For $i\in\lbrace 1,2\rbrace$, let $\bar\tau_i$ be a pseudo-parallelization of $M_i$ such that $\bar\tau_i$ and $(X_i,\sigma_{X_i})$ are compatible. The expression $$ p_1([X_1],[X_2])= p_1( [X_1],\bar\tau_1) + p_1(\bar\tau_1, \bar\tau_2) + p_1(\bar\tau_2, [X_2]) $$ depends only on the homotopy classes of $(X_1,\sigma_{X_1})$ and $(X_2,\sigma_{X_2})$, and it defines \textup{the first relative Pontrjagin number of $(X_1,\sigma_{X_1})$ and $(X_2,\sigma_{X_2})$}. Moreover, if $M_1$ and $M_2$ are closed, then $$ p_1([X_1],[X_2])=p_1([X_2])-p_1([X_1]). $$ \end{theorem} Under the assumptions of Theorem \ref{thm_D2nd0}, we see that it would be impossible to naively define $p_1([X_{|A_1}],[{X_{\lbrace 1 \rbrace}}_{|B_1}])$ as $p_1([X])-p_1([{X_{\lbrace 1 \rbrace}}])$, where $X$ extends $X_{|A_1}$ to the closed manifold $M$, and ${X_{\lbrace 1 \rbrace}}$ extends ${X_{\lbrace 1 \rbrace}}_{|B_1}$ in the same way to $M(\sfrac{B_1}{A_1})$. Indeed Theorem \ref{thm_D2nd0} and the example that follows it show that the expression $\left(p_1([X])-p_1([{X_{\lbrace 1 \rbrace}}])\right)$ depends on the combed manifold $(M,X)$ into which $(A_1,X_{|A_1})$ has been embedded. It even depends on the combing $X$ that extends the combing $X_{|A_1}$ of $A_1$ to $M$ for the fixed manifold $M$ of this example, since $$ \left( p_1([X]) - p_1([{X_{\lbrace 1 \rbrace}}]) \right) - \left( p_1([{X_{\lbrace 2 \rbrace}}]) - p_1([{X_{\lbrace 1,2 \rbrace}}]) \right) =-8 $$ there. 
\\ Theorem \ref{thm_defp1Xb} translates as follows in the closed case and it bridges a gap between the two dissimilar generalizations of the Pontrjagin numbers of parallelizations for pseudo-parallelizations and for torsion combings in closed oriented 3-manifolds. \begin{corollary} \label{cor_p1Xppara} Let $X$ be a torsion combing of a closed oriented 3-manifold $M$ and let \linebreak $\bar\tau = (N(\gamma);\tau_e,\tau_d)$ be a pseudo-parallelization of $M$. Let $E_1^d$ and $E_1^g$ denote the Siamese sections of $\bar\tau$. If $X$ and $\bar \tau$ are compatible, then $$ \begin{aligned} p_1([X]) = p_1(\bar \tau) &+ 4\cdot lk_M(L_{\bar\tau=X} , L_{\bar\tau=-X}) \\ &- lk_{\b S^2} \left( e_1-(-e_1) \ , \ P_{\b S^2} \circ \tau_d^{-1} \circ X(L_{{E_1}^d={-E_1}^g}) \right). \end{aligned} $$ \end{corollary} Another special case is when genuine parallelizations can be used. The closed case with genuine parallelizations is nothing but C. Lescop's definition of the Pontrjagin number of torsion combings in closed oriented 3-manifolds stated above. \begin{corollary} \label{cor_compactpara} Let $(X_1,\sigma_1)$ and $(X_2,\sigma_2)$ be torsion combings of two compact oriented 3-manifolds $M_1$ and $M_2$ with identified boundaries such that $(X_1,\sigma_1)$ and $(X_2,\sigma_2)$ coincide on the boundary. If, for $i \in \lbrace 1 , 2 \rbrace$, $\tau_i \hspace{-1mm}=\hspace{-1mm} (E_1^i,E_2^i,E_3^i)$ is a parallelization of $M_i$ such that $(X_i,\sigma_i)$ and $(E_1^i,{E_2^i}_{|\partial M_i})$ are $\partial$-compatible, then $$ p_1([X_1],[X_2]) = p_1(\tau_1 , \tau_2) + 4 \cdot lk_{M_2}(L_{E_1^{2}=X_2} \ , \ L_{E_1^{2}=-X_2}) - 4 \cdot lk_{M_1}(L_{E_1^{1}=X_1} \ , \ L_{E_1^{1}=-X_1}). $$ \end{corollary} Finally, for torsion combings defined on a fixed compact oriented 3-manifold (which may have boundary), we have the following simple variation formula, as in the closed case.
\begin{theorem} \label{formuleplus} If $(X, \sigma)$ and $(Y,\sigma)$ are $\partial$-compatible torsion combings of a compact oriented 3-manifold $M$, then $$ p_1([X],[Y]) = 4 \cdot lk_M(L_{X=Y}, L_{X=-Y}). $$ \end{theorem} \def\spinc{\mbox{spin$^c$}} Let $M$ be a compact connected oriented 3-manifold. For any section $\sigma$ of $TM_{|\partial M}$, let $\mbox{spin$^c$}(M,\sigma)$ denote the \textit{set of $\mbox{spin$^c$}$-structures on $M$ relative to $\sigma$}, \textit{ie}\ the set of homotopy classes on $M \setminus \lbrace \omega \rbrace$ of combings $(X,\sigma)$ of $M$, where $\omega$ is any point in $\mathring M$ (see \cite{gmdeloup} for a detailed presentation of $\mbox{spin$^c$}$-structures). Thanks to Theorem~\ref{formuleplus}, it is possible to classify the torsion combings of a fixed $\mbox{spin$^c$}$-structure up to homotopy, thus generalizing a property of the Gompf invariant in the closed case. I thank Gw\'{e}na\"{e}l Massuyeau for suggesting this statement. \begin{theorem} \label{GM} Let $(X,\sigma)$ and $(Y,\sigma)$ be $\partial$-compatible torsion combings of a compact connected oriented 3-manifold $M$ which represent the same $\mbox{spin$^c$}$-structure. The combings $(X,\sigma)$ and $(Y,\sigma)$ are homotopic relatively to the boundary if and only if $p_1([X],[Y]) =0$. \end{theorem} The key tool in the proof of Theorem~\ref{thm_defp1Xb} is the following generalization of the interpretation of the variation of the Pontrjagin numbers of parallelizations as an algebraic intersection of three chains. \begin{theorem} \label{prop_varasint} Let $\tau$ and $\bar \tau$ be two pseudo-parallelizations of a compact oriented 3-manifold $M$ that coincide on $\partial M$ and whose links are disjoint.
For any $v \in \b S^2$, there exists a 4-chain $C_4(\tau,\bar\tau ;v)$ of $[0,1]\times UM$ transverse to the boundary of $[0,1] \times UM$ such that $$ \partial C_4(\tau,\bar\tau ;v) = \lbrace 1 \rbrace \times \bar\tau(M\times\lbrace v \rbrace) - \lbrace 0 \rbrace \times \tau(M\times \lbrace v \rbrace) - [0,1]\times \tau(\partial M \times \lbrace v \rbrace) $$ and for any $x,y$ and $z$ in $\b S^2$ with pairwise different distances to $e_1$ : $$ p_1(\tau,\bar\tau)= 4 \cdot \langle C_4(\tau,\bar\tau ;x),C_4(\tau,\bar\tau ;y),C_4(\tau,\bar\tau ;z) \rangle_{[0,1]\times UM} $$ for any triple of pairwise transverse $C_4(\tau,\bar\tau ;x)$, $C_4(\tau,\bar\tau ;y)$ and $C_4(\tau,\bar\tau ;z)$ that satisfy the hypotheses above. \end{theorem} Our general variation formula for Pontrjagin numbers of torsion combings reads as follows for all compact oriented 3-manifolds. \begin{theorem}\label{thm_D2nd} Let $(M,X)$ be a compact oriented 3-manifold equipped with a combing, \linebreak let $\lbrace (\sfrac{B_i}{A_i},X_{B_i}) \rbrace_{i \in \lbrace 1,2 \rbrace}$ be two disjoint LP$_\b Q$-surgeries in $(M,X)$, and, for all $I \subset \lbrace 1 , 2 \rbrace$, \linebreak let $X^I = X(\lbrace \sfrac{B_i}{A_i} \rbrace_{i \in I})$. If $\lbrace X^I \rbrace_{I \subset \lbrace 1,2 \rbrace}$ is a family of torsion combings of the manifolds \linebreak $M_I=M(\lbrace \sfrac{B_i}{A_i} \rbrace_{i \in I})$, then $$ p_1([X^{\lbrace 2 \rbrace}],[X^{\lbrace 1, 2 \rbrace}])-p_1([X],[X^{\lbrace 1 \rbrace}]) = - 2 \cdot lk_M \left(L_{\lbrace X^I \rbrace}(\sfrac{B_1}{A_1}), L_{\lbrace X^I \rbrace}(\sfrac{B_2}{A_2})\right), $$ where the right-hand side is defined as in Theorem~\ref{thm_D2nd0}. \end{theorem} A direct consequence of this variation formula is that the extended Gompf invariant for torsion combings of compact oriented 3-manifolds is a degree two finite type invariant with respect to LP$_\b Q$-surgeries.
\begin{corollary} \label{cor_FTcombings} Let $(M,X)$ be a compact oriented 3-manifold equipped with a combing, let $\lbrace ( \sfrac{B_i}{A_i} , X_{B_i} ) \rbrace_{i\in \lbrace 1, \ldots, k\rbrace}$ be a family of disjoint LP$_\b Q$-surgeries in $(M,X)$, and, for all $I \subset \lbrace 1 , \ldots, k \rbrace$, let $(M_I,X^I)$ be the combed manifold obtained by performing the surgeries associated to the data $\lbrace (\sfrac{B_i}{A_i}, X_{B_i}) \rbrace_{i \in I}$. If $k\geqslant 3$, and if $\lbrace X^I \rbrace_{I \subset \lbrace 1, \ldots, k \rbrace}$ is a family of torsion combings of the $\lbrace M_I \rbrace_{I \subset \lbrace 1, \ldots, k \rbrace}$, then $$ \sum_{I \subset \lbrace 2 , \ldots , k \rbrace} (-1)^{\card(I)} \ p_1 \left( [X^I] , [X^{I\cup\lbrace 1 \rbrace}] \right)=0. $$ If $\partial M=\emptyset$, this reads $$ \sum_{I \subset \lbrace 1 , \ldots , k \rbrace} (-1)^{\card(I)} \ p_1 \left( [X^I] \right)=0. $$ \end{corollary} In the first section of this article, we give details on Lagrangian-preserving surgeries, combings and pseudo-parallelizations. Then, we review the definitions of Pontrjagin numbers of parallelizations and pseudo-parallelizations. The second section ends with a proof of Theorem~\ref{prop_varasint}. The third section is devoted to the proof of Theorem~\ref{thm_defp1Xb} and Theorem~\ref{GM}. Finally, we study the variations of Pontrjagin numbers with respect to Lagrangian-preserving surgeries, and finish the last section by proving Theorem~\ref{thm_D2nd}.\\ \begin{large}\textbf{Acknowledgments.}\end{large} First, let me thank C. Lescop and J.-B. Meilhan for their thorough guidance and support. I also thank M. Eisermann and G. Massuyeau for their careful reading and their useful remarks. 
\renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} \section{More about ...} \subsection{Lagrangian-preserving surgeries} \label{ssec_LPsurgeries} Let us first note three easy lemmas, the proofs of which are left to the reader. \begin{lemma} \label{prop_redefLPs} Let $(\sfrac{B}{A})$ be an LP$_\b Q$-surgery datum in a compact oriented 3-manifold $M$ and let $L_1$ and $L_2$ be links in $M \hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring A$. If $L_1$ and $L_2$ are rationally null-homologous in $M$, then they are null-homologous in $M(\sfrac{B}{A})$ and $$ lk_{M(\sfrac{B}{A})}(L_1,L_2) = lk_M(L_1,L_2). $$ \end{lemma} \iffalse \begin{proof} For $i \in \lbrace 1,2 \rbrace$, let $\Sigma_i$ be a 2-chain in $M$ such that $\partial \Sigma_i=L_i$ and $\Sigma_i$ is transverse to $\partial A$. Since $(\sfrac{B}{A})$ is an LP$_\b Q$-surgery, $[\Sigma_i\cap \partial A]= 0$ in $H_1(B; \b Q)$. Therefore, for all $i \in \lbrace 1,2 \rbrace$, there exists a 2-chain $\Sigma'_i$ in $M(\sfrac{B}{A})$ such that $$ \partial \Sigma'_i = L_i \mbox{ \ and \ } \Sigma'_i \cap (M(\sfrac{B}{A})\hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring B) = \Sigma_i \cap (M \hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring A). $$ As a consequence $L_1$ and $L_2$ are null-homologous in $M(\sfrac{B}{A})$ and, since $L_2 \subset M \hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring A$, it follows that $$ lk_{M(\sfrac{B}{A})}(L_1,L_2) = \langle \Sigma'_1 , L_2 \rangle_{M(\sfrac{B}{A})} = \langle \Sigma_1 , L_2 \rangle_M = lk_M(L_1,L_2). $$ \end{proof} \fi A \textit{rational homology 3-sphere}, or a $\b Q$HS for short, is a closed oriented 3-manifold with the same homology with rational coefficients as $\b S^3$. \begin{lemma} \label{prop-LP2} Let $(\sfrac{B}{A})$ be an LP$_\b Q$-surgery in a compact oriented 3-manifold $M$. If $M$ is a $\b Q$HS, then $M(\sfrac{B}{A})$ is a $\b Q$HS.
\end{lemma} \iffalse \begin{proof} Let $M$ be a $\b Q$HS and let $A$ be a $\b Q$HH of genus $g\in \b N$. Using the Mayer-Vietoris sequence associated to $M=A\cup (M\hspace{-0.5mm}\setminus\hspace{-0.5mm}\mathring A)$ shows that $M\hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring A$ is a $\b Q$HH of genus $g$ and that the inclusions of $\partial A$ into $A$ and into $M\setminus \mathring A$ induce an isomorphism $$ H_1(\partial A ; \b Q) \simeq H_1(A; \b Q)\oplus H_1(M\hspace{-0.5mm}\setminus\hspace{-0.5mm}\mathring A ; \b Q). $$ For details, see \cite[Sublemma 4.6]{moussardFTIQHS}. It follows that $M(\sfrac{B}{A})\hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring B$ is also a genus $g$ $\b Q$HH and, since $(\sfrac{B}{A})$ is an LP$_\b Q$-surgery, the inclusions of $\partial B$ into $B$ and into $M(\sfrac{B}{A})\setminus \mathring B$ induce an isomorphism $$ H_1(\partial B ; \b Q) \simeq H_1(B; \b Q)\oplus H_1(M(\sfrac{B}{A})\hspace{-0.5mm}\setminus\hspace{-0.5mm}\mathring B ; \b Q). $$ Using this isomorphism in the Mayer-Vietoris sequence associated to the splitting \linebreak $M(\sfrac{B}{A})=B\cup (M(\sfrac{B}{A})\hspace{-0.5mm}\setminus\hspace{-0.5mm}\mathring B)$ shows that $M(\sfrac{B}{A})$ is a $\b Q$HS. \end{proof} \fi \begin{lemma} \label{phom} If $A$ is a compact connected orientable 3-manifold with connected boundary and if the map $i^A_* : H_1(\partial A ; \b Q) \rightarrow H_1(A; \b Q)$ induced by the inclusion of $\partial A$ into $A$ is surjective, then $A$ is a rational homology handlebody. \end{lemma} \iffalse \begin{proof} First, for such a manifold $A$, we have $H_0(A;\b Q)\simeq \b Q$ and $H_3(A;\b Q)\simeq 0$. Second, using the hypothesis on $i^A_*$ in the exact sequence associated to $(A, \partial A)$, we get that $H_1(A,\partial A;\b Q)= 0$. Using Poincaré duality and the universal coefficient theorem, it follows that $$ H_2(A;\b Q) \simeq H^1(A,\partial A;\b Q) \simeq \mbox{Hom}(H_1(A,\partial A;\b Q),\b Q) = 0. 
$$ Moreover, we get the following exact sequence from the exact sequence associated to $(A,\partial A)$~: $$ 0\rightarrow H_2(A,\partial A; \b Q) \rightarrow H_1(\partial A;\b Q) \rightarrow H_1(A;\b Q) \rightarrow 0. $$ It follows that $\mbox{dim}(H_2(A,\partial A;\b Q))+\mbox{dim}(H_1(A;\b Q))=\mbox{dim}(H_1(\partial A;\b Q)) = 2g$, where $g$ denotes the genus of $\partial A$. However, $$ H_2(A,\partial A;\b Q) \simeq H^1(A;\b Q) \simeq \mbox{Hom}(H_1(A;\b Q),\b Q), $$ hence $\mbox{dim}(H_1(A;\b Q))=g$. \end{proof} \fi \begin{proposition} Let $A$ be a compact submanifold with connected boundary of a $\b Q$HS $M$, let $B$ be a compact oriented 3-manifold and let $h:\partial A \rightarrow \partial B$ be a homeomorphism. If the surgered manifold $M(\sfrac{B}{A})$ is a $\b Q$HS and if $$ lk_{M(\sfrac{B}{A})}(L_1,L_2) = lk_M(L_1,L_2) $$ for all disjoint links $L_1$ and $L_2$ in $M \hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring A$, then $(\sfrac{B}{A})$ is an LP$_\b Q$-surgery. \end{proposition} \begin{proof} Using the Mayer-Vietoris exact sequences associated to $M=A\cup (M\hspace{-0.5mm}\setminus\hspace{-0.5mm}\mathring A)$ and \linebreak $M(\sfrac{B}{A})=B\cup (M(\sfrac{B}{A})\hspace{-0.5mm}\setminus\hspace{-0.5mm}\mathring B)$, we get that the maps $i_*^A : H_1(\partial A ; \b Q) \longrightarrow H_1(A; \b Q)$ and \linebreak $i_*^B : H_1(\partial B ; \b Q) \longrightarrow H_1(B; \b Q)$ induced by the inclusions of $\partial A$ and $\partial B$ into $A$ and $B$ are surjective. Using Lemma~\ref{phom}, it follows that $A$ and $B$ are rational homology handlebodies. Moreover, $A$ and $B$ have the same genus since $h : \partial A \rightarrow \partial B$ is a homeomorphism. \\ Let $P_{\go L_A}$ and $P_{\go L_B}$ denote the projections from $H_1(\partial A;\b Q)$ onto $\go L_A$ and $\go L_B$, respectively, with kernel $\go L_{M\setminus \mathring A}$. 
Consider a collar $[0,1]\times \partial A$ of $\partial A$ such that $\lbrace 0 \rbrace \times \partial A \simeq \partial A$ and note that for all 1-cycles $x$ and $y$ of $\partial A$ : $$ \langle P_{\go L_A}(y), x \rangle_{\partial A} = lk_M(\lbrace 1 \rbrace \times y, \lbrace 0 \rbrace \times x ) = lk_{M(\sfrac{B}{A})}(\lbrace 1 \rbrace \times y, \lbrace 0 \rbrace \times x) =\langle P_{\go L_B}(y), x \rangle_{\partial B}, $$ so that $P_{\go L_B}=P_{\go L_A}$ and $h_*(\go L_A)=\go L_B$. \end{proof} \subsection{Combings} \begin{proposition} \label{prop_linksandsigns} If $X$ and $Y$ are $\partial$-compatible combings of a compact oriented 3-manifold $M$, then $$ L_{X=Y}=L_{Y=X} \mbox{ \ and \ } L_{X=Y}= - L_{-X=-Y}. $$ \end{proposition} \begin{proof} First, by definition, the link $L_{X=Y}$ is the projection of the intersection of the sections $X(\mathring M)$ and $Y(\mathring M)$. This intersection is oriented so that $$ NX(\mathring M) \oplus NY(\mathring M) \oplus T(X(\mathring M)\cap Y(\mathring M)) $$ orients $UM$, fiberwise. Since the normal bundles $NX(\mathring M)$ and $NY(\mathring M)$ have dimension 2, the isomorphism permuting them is orientation-preserving so that $L_{X=Y}=L_{Y=X}$. Second, $(-X)(\mathring M)\cap(-Y)(\mathring M)$ is the image of $X(\mathring M)\cap Y(\mathring M)$ under the map $\iota$ from $UM$ to itself which acts on each fiber as the antipodal map. This map reverses the orientation of $UM$ as well as the coorientations of $X(\mathring M)$ and $Y(\mathring M)$, \textit{ie}\ $$ \begin{aligned} &N(-X)(M)=-\iota(NX(M)), \ N(-Y)(M)=-\iota(NY(M)). \\ \end{aligned} $$ Since $N(-X)(M) \oplus N(-Y)(M) \oplus T((-X)(M)\cap (-Y)(M))$ has the orientation of $UM$ $$ T((-X)(M)\cap (-Y)(M)) = -\iota(T(X(M)\cap Y(M))). $$ Hence, $L_{X=Y}= - L_{-X=-Y}$. \end{proof} \begin{definition} Let $M$ be a compact oriented 3-manifold and let $L$ be a link in $\mathring M$. 
Define the \textit{blow up of $M$ along $L$} as the 3-manifold $Bl(M,L)$ obtained from $M$ by replacing $L$ with its unit normal bundle in $M$. The 3-manifold $Bl(M,L)$ inherits a canonical differential structure. See \cite[Definition 3.5]{lescopcombing} for a detailed description. \end{definition} \begin{lemma} \label{lem_phomotopy} Let $X$ and $Y$ be $\partial$-compatible combings of a compact oriented 3-manifold $M$. There exists a 4-chain $\bar F(X,Y)$ of $UM$ with boundary~: $$ \partial \bar F(X,Y) = Y(M) - X(M) + UM_{| L_{X=-Y}}. $$ \end{lemma} \begin{proof} To construct the desired 4-chain, start with the partial homotopy from $X$ to $Y$ $$ \tilde F(X,Y) : \left\lbrace \begin{aligned} \ [0,1] \times (M\setminus L_{X=-Y}) &\longrightarrow UM\\ (s,m) & \longmapsto \left( m , \l H_X^Y(s,m) \right) \end{aligned} \right. $$ where $\l H_X^Y(s,m)$ is the unique point of the shortest geodesic arc from $X(m)$ to $Y(m)$ such that $$ d_{\b S^2}(X(m),\l H_X^Y(s,m)) = s \cdot d_{\b S^2}(X(m),Y(m)) $$ where $d_{\b S^2}$ denotes the usual distance on $\b S^2$. Next, extend the map $$ (s,m) \longmapsto \l H_X^Y(s,m) $$ to the blow up of $M$ along $L_{X=-Y}$. The section $X$ induces a map $$ X : NL_{X=-Y} \longrightarrow -Y^\perp(L_{X=-Y}) $$ which is a diffeomorphism on a neighborhood of $\lbrace 0 \rbrace \times L_{X=-Y}$ since $X$ and $Y$ are $\partial$-compatible combings. Furthermore, this diffeomorphism is orientation-preserving by definition of the orientation on $L_{X=-Y}$. So, for $n \in UN_mL_{X=-Y}$, $\l H_X^Y(s,n)$ can be defined as the unique point at distance $s\pi$ from $X(m)$ on the unique half great circle from $X(m)$ to $Y(m)$ through $T_m X(n)$.
Thanks to transversality again, the set $\lbrace \l H_X^Y(s,n) \ | \ s \in [0,1], \ n \in UN_mL_{X=-Y} \rbrace$ is a whole sphere $\b S^2$ for any fixed $m \in L_{X=-Y}$, so that $$ \partial \tilde F(X,Y)([0,1]\times Bl(M,L_{X=-Y})) = Y(M) - X(M) + \partial_{int} $$ where $\partial_{int} \simeq L_{X=-Y} \times \b S^2 $ (see \cite[Proof of Proposition 3.6]{lescopcombing} for the orientation of $\partial_{int}$). Finally, let $\bar F(X,Y)=\tilde F(X,Y)([0,1]\times Bl(M,L_{X=-Y}))$. \end{proof} If $X$ and $Y$ are $\partial$-compatible combings of a compact oriented 3-manifold $M$ and if $\sigma$ is a nonvanishing section of $X^\perp_{|\partial M}$, let $\l H^{-Y}_{X,\sigma}$ denote the map from $[0,1] \times (M\hspace{-0.5mm}\setminus\hspace{-0.5mm} L_{X=Y})$ to $UM$ such that, for all $(s,m)$ in $[0,1] \times \partial M$, $\l H^{-Y}_{X,\sigma}(s,m)$ is the unique point at distance $s\pi$ from $X(m)$ on the unique geodesic arc starting from $X(m)$ in the direction of $\sigma(m)$ to $-X(m)=-Y(m)$ and, for all $(s,m)$ in $[0,1] \times (\mathring M \setminus L_{X=Y})$, $\l H^{-Y}_{X,\sigma}(s,m)$ is the unique point on the shortest geodesic arc from $X(m)$ to $-Y(m)$ such that $$ d_{\b S^2}(X(m),\l H^{-Y}_{X,\sigma}(s,m)) = s \cdot d_{\b S^2}(X(m),-Y(m)). $$ As in the previous proof, $\l H^{-Y}_{X,\sigma}$ may be extended as a map from $[0,1]\times Bl(M,L_{X=Y})$ to $UM$. In the case $X=Y$, for every section $\sigma$ of $X^\perp$ that is nonvanishing on $\partial M$, let $L_{\sigma=0}$ denote the oriented link $\lbrace m \in M \ | \ \sigma(m)= 0 \rbrace$ and define $\l H_{X,\sigma}^{-X}$ as the map from $[0,1] \times (M\hspace{-0.5mm}\setminus\hspace{-0.5mm} L_{\sigma=0})$ to $UM$ such that, for all $(s,m)$ in $[0,1] \times (M\hspace{-0.5mm}\setminus\hspace{-0.5mm} L_{\sigma=0})$, $\l H_{X,\sigma}^{-X}(s,m)$ is the unique point at distance $s\pi$ from $X(m)$ on the unique geodesic arc starting from $X(m)$ in the direction of $\sigma(m)$ to $-X(m)$.
Note that $L_{\sigma=0}\cap\partial M = \emptyset$, and $[L_{\sigma=0}]=P(e_2^M(X^\perp,\sigma_{|\partial M}))$. Here again, $\l H_{X,\sigma}^{-X}$ may be extended as a map from $[0,1]\times Bl(M,L_{\sigma=0})$ to $UM$. \\ In order to simplify notation, if $A$ is a submanifold of a compact oriented 3-manifold $M$, we may implicitly use a parallelization of $M$ to write $UM_{|A}$ as $A\times \b S^2$. \begin{proposition} \label{prop_links} If $(X,\sigma)$ and $(Y,\sigma)$ are $\partial$-compatible combings of a compact oriented 3-manifold $M$, then, in $H_3(UM;\b Z)$, $$ \begin{aligned} \ [L_{X=-Y} \times \b S^2 ] &= [X(M) - Y(M)] \\ \ [L_{X=Y} \times \b S^2 ] &= [X(M) - (-Y)(M) + \l H^{-Y}_{X,\sigma} ([0,1]\times \partial M)]. \end{aligned} $$ \end{proposition} \begin{proof} The first identity is a direct consequence of Lemma~\ref{lem_phomotopy}. The second one can be obtained using a similar construction. Namely, construct a 4-chain $\bar F(X,-Y)$ using the partial homotopy from $X$ to $-Y$ : $$ \tilde F(X,-Y) : \left\lbrace \begin{aligned} \ [0,1] \times (M\setminus L_{X=Y}) &\longrightarrow UM\\ (s,m) & \longmapsto \left( m , \l H^{-Y}_{X,\sigma}(s,m) \right). \end{aligned} \right. $$ As in the proof of Lemma~\ref{lem_phomotopy}, $\tilde F(X,-Y)$ can be extended to $[0,1] \times Bl(M,L_{X=Y})$. Finally, we get a 4-chain $\bar F(X,-Y)$ of $UM$ with boundary : $$ \partial \bar F(X,-Y) = (-Y)(M) - X(M) - \l H^{-Y}_{X,\sigma} ([0,1]\times \partial M) + UM_{|L_{X=Y}}. $$ \end{proof} \begin{proposition} \label{prop_euler} Let $X$ be a combing of a compact oriented 3-manifold $M$ and let \linebreak $P : H^2(M, \partial M;\b Z) \rightarrow H_1(M;\b Z)$ be the Poincaré duality isomorphism. If $M$ is closed, then, in $H_3(UM;\b Z)$, $$ [P(e_2(X^\perp)) \times \b S^2 ] = [X(M) - (-X)(M)], $$ where $[P(.)\times \b S^2]$ abusively denotes the homology class of the preimage of a representative of $P(.)$ under the bundle projection $UM \rightarrow M$.
In general, if $\sigma$ is a section of $X^\perp$ such that $L_{\sigma=0}\cap\partial M = \emptyset$ then, in $H_3(UM;\b Z)$, $$ [ P(e_2^M(X^\perp,\sigma_{|\partial M})) \times \b S^2 ] = [X(M) - (-X)(M) +\l H_{X,\sigma}^{-X}([0,1]\times \partial M)]. $$ \end{proposition} \begin{proof} Recall that $P(e_2^M(X^\perp,\sigma_{|\partial M}))=[L_{\sigma=0}]$. Perturbing $X$ by using $\sigma$, construct a section $Y$ homotopic to $X$ that coincides with $X$ on $\partial M$ and such that $[L_{X=Y}] = P(e_2^M(X^\perp,\sigma_{|\partial M}))$. Using Proposition~\ref{prop_links}, $$ [L_{X=Y} \times \b S^2 ] = [X(M) - (-Y)(M) + \l H^{-Y}_{X,\sigma_{|\partial M}} ([0,1]\times \partial M)], $$ so that $$ [ P(e_2^M(X^\perp,\sigma_{|\partial M})) \times \b S^2 ] = [X(M) - (-X)(M) +\l H_{X,\sigma}^{-X}([0,1]\times \partial M)]. $$ \end{proof} \begin{proposition} \label{prop_linksinhomologyI} If $(X,\sigma)$ and $(Y,\sigma)$ are $\partial$-compatible combings of a compact oriented 3-manifold $M$, then, in $H_1(M;\b Z)$, $$ \begin{aligned} 2 \cdot [L_{X=-Y}] &= P(e_2^M(X^\perp,\sigma)) - P(e_2^M(Y^\perp,\sigma)), \\ 2 \cdot [L_{X=Y}] &= P(e_2^M(X^\perp,\sigma)) + P(e_2^M(Y^\perp,\sigma)). \end{aligned} $$ \end{proposition} \begin{proof} Extend $\sigma$ as a section $\bar\sigma$ of $X^\perp$. 
Using Propositions~\ref{prop_linksandsigns},~\ref{prop_links}~and~\ref{prop_euler}, we get, in $H_3(UM;\b Z)$, $$ \begin{aligned} 2 \ \cdot \ [L_{X=-Y}\times \b S^2] &= [L_{X=-Y} \times \b S^2] - [L_{-X=Y} \times \b S^2] \\ &= [X(M)-Y(M)]-[(-X)(M)-(-Y)(M)] \\ &= [X(M)-Y(M)-(-X)(M)+(-Y)(M) \\ & \hspace{5mm}+\l H_{X,\bar\sigma}^{-X}([0,1]\times \partial M) -\l H_{X,\bar\sigma}^{-X} ([0,1]\times \partial M) ] \\ &= [X(M)-(-X)(M)+\l H_{X,\bar\sigma}^{-X}([0,1]\times \partial M)] \\ &- [Y(M)-(-Y)(M)+\l H_{X,\bar\sigma}^{-X}([0,1]\times \partial M) ] \\ &= [P(e_2^M(X^\perp,\sigma)) \times \b S^2] - [P(e_2^M(Y^\perp,\sigma)) \times \b S^2 ], \end{aligned} $$ $$ \begin{aligned} 2 \cdot [L_{X=Y}\times \b S^2] &= [L_{X=Y} \times \b S^2] - [L_{-X=-Y} \times \b S^2] \\ &= [X(M) - (-Y)(M)+\l H_{X,\bar\sigma_{|\partial M}}^{-Y} ([0,1]\times \partial M)]\\ &- [(-X)(M) - Y(M) +\l H_{-X,\bar\sigma_{|\partial M}}^{Y} ([0,1]\times \partial M)] \\ &= [ P(e_2^M(X^\perp,\sigma)) \times \b S^2 ] - [ P(e_2^M((-Y)^\perp,\sigma)) \times \b S^2 ] \\ &= [ P(e_2^M(X^\perp,\sigma)) \times \b S^2 ] + [ P(e_2^M(Y^\perp,\sigma)) \times \b S^2 ]. \end{aligned} $$ \iffalse &= [X(M) - (-Y)(M)+\l H_{X,\sigma}^{-X} ([0,1]\times \partial M)]\\ &- [(-X)(M) - Y(M) + \l H_{-Y,\sigma}^{Y} ([0,1]\times \partial M)] \\ &= [X(M)-(-X)(M) + \l H_{X,\sigma}^{-X} ([0,1]\times \partial M)] \\ &- [(-Y)(M)-Y(M)+ \l H_{-Y,\sigma}^{Y} ([0,1]\times \partial M)] \\ \fi \end{proof} \begin{remark} If $M$ is a compact oriented 3-manifold and if $\sigma$ is a trivialization of $TM_{|\partial M}$, then the set $\mbox{spin$^c$}(M,\sigma)$ is a $H^2(M,\partial M; \b Z)$-affine space and the map $$ c : \left\lbrace \begin{aligned} \mbox{spin$^c$}(M,\sigma) &\longrightarrow H^2(M,\partial M; \b Z) \\ [X]^c &\longmapsto e_2^M(X^\perp, \sigma) \end{aligned} \right. $$ is affine over the multiplication by 2. 
Moreover, $[X]^c-[Y]^c \in H^2(M,\partial M; \b Z) \simeq H_1(M; \b Z)$ is represented by $L_{X=-Y}$, hence $2 \cdot [L_{X=-Y}] = P(e_2^M(X^\perp,\sigma)) - P(e_2^M(Y^\perp,\sigma))$. See \cite[Section 1.3.4]{gmdeloup} for a detailed presentation using this point of view. Both Proposition \ref{prop_linksinhomologyI} and Corollary \ref{corrplus} below are already known; for instance, Corollary \ref{corrplus} also appears as \cite[Lemma 2.16]{lescopcombing}. \end{remark} \begin{corollary} \label{corrplus} If $X$ and $Y$ are transverse combings of a closed oriented 3-manifold $M$, then, in $H_1(M;\b Z)$, $$ \begin{aligned} 2 \cdot [L_{X=-Y}] &= P(e_2(X^\perp)) - P(e_2(Y^\perp)), \\ 2 \cdot [L_{X=Y}] &= P(e_2(X^\perp)) + P(e_2(Y^\perp)). \end{aligned} $$ \end{corollary} \subsection{Pseudo-parallelizations} \label{ssec_defppara} \hspace{-3mm} A \textit{pseudo-parallelization} $\bar\tau \hspace{-1mm} = \hspace{-1mm} (N(\gamma); \tau_e, \tau_d)$ of a compact oriented 3-manifold $M$ is a triple~where \begin{enumerate}[\textbullet] \setlength{\itemsep}{0pt} \setlength{\parskip}{5pt} \item $\gamma$ is a link in $\mathring{M}$, \item $N(\gamma)$ is a tubular neighborhood of $\gamma$ with a given product structure : $$ N(\gamma)\simeq [a,b] \times \gamma \times [-1,1], $$ \item $\tau_e$ is a genuine parallelization of $\smash{M\setminus\mathring{N(\gamma)}}$, \item $\tau_d$ is a genuine parallelization of $N(\gamma)$ such that $$ \tau_d = \left\lbrace \begin{aligned} & \tau_e &\mbox{ on } \partial (\left[ a , b \right] \times \gamma \times \left[ -1 , 1 \right]) \setminus \lbrace b \rbrace \times \gamma \times \left[ -1 , 1 \right] \\ & \tau_e \circ \l T_\gamma &\mbox{ on } \lbrace b \rbrace \times \gamma \times \left[ -1 , 1 \right] \end{aligned} \right.
$$ where $\l T_\gamma$ is $$ \l T_\gamma : \left\lbrace \begin{aligned} ( [a,b] \times \gamma \times \left[ -1 , 1 \right] )\times \b R^3 & \longrightarrow ([a,b] \times \gamma \times \left[ -1 , 1 \right]) \times \b R^3 \\ ((t,c,u),v) &\longmapsto ((t,c,u),R_{e_1, \pi+\theta(u)}(v)). \end{aligned} \right. $$ where $R_{e_1, \pi+\theta(u)}$ is the rotation of axis $e_1$ and angle $\pi+\theta(u)$, and where $\theta : [-1,1] \rightarrow [-\pi,\pi]$ is a smooth increasing map, constant equal to $-\pi$ on the interval $[-1,-1+\varepsilon]$ ($\varepsilon \in ]0,\sfrac{1}{2}[$), and such that $\theta(-x)=-\theta(x)$. \end{enumerate} Note that a pseudo-parallelization whose link is empty is a parallelization. \begin{lemma} \label{lem_extendpparallelization} If $M$ is a compact oriented 3-manifold with boundary and if $\rho$ is a trivialization of $TM_{|\partial M}$, there exists a pseudo-parallelization $\bar\tau$ of $M$ that coincides with $\rho$ on $\partial M$. \end{lemma} \begin{proof} The obstruction to extending the trivialization $\rho$ as a parallelization of $M$ can be represented by an element $[\gamma] \in H_1(M;\pi_1(SO(3)))$ where $\gamma$ is a link in $M$. It follows that $\rho$ can be extended over $M\setminus \mathring {N(\gamma)}$ where $N(\gamma)$ is a tubular neighborhood of $\gamma$. Finally, according to \cite[Lemma 10.2]{lescopcube}, it is possible to extend $\rho$ as a pseudo-parallelization on each torus of~$N(\gamma)$. \end{proof} Thanks to Lemma \ref{lem_extendpparallelization}, an LP$_\b Q$-surgery in a rational homology 3-sphere equipped with a pseudo-parallelization can be seen as a local move. This is not the case for an LP$_\b Q$-surgery in a rational homology 3-sphere equipped with a genuine parallelization.
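\medskip \noindent For concreteness, let us make the twist $\l T_\gamma$ explicit, with the standard convention that $R_{e_1,\alpha}$ denotes the rotation of $\b R^3$ of angle $\alpha$ fixing the axis directed by $e_1$ (the matrix form below is our choice of convention; the definition above only specifies the axis and the angle) : $$ R_{e_1,\alpha} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix}, $$ so that $\l T_\gamma$ rotates the $\b R^3$-factor by the angle $\pi+\theta(u)$, which sweeps $[0,2\pi]$ as $u$ runs over $[-1,1]$, since $\theta$ is odd, increasing, and takes values in $[-\pi,\pi]$. In particular, $\l T_\gamma$ acts as the identity near $u=\pm 1$.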
\\ \iffalse Unlike the case of a parallelized rational homology 3-sphere, in the case of a rational homology 3-sphere equipped with a pseudo-parallelization, an LP$_\b Q$-surgery can be seen as a local move thanks to Lemma \ref{lem_extendpparallelization} above. \fi Before we move on to the definition of pseudo-sections \textit{ie}\ the counterpart of sections of parallelizations for pseudo-parallelizations, we need the following. \begin{definition} \label{def_addinner} Let $\bar \tau = (N(\gamma); \tau_e, \tau_d)$ be a pseudo-parallelization of a compact oriented 3-manifold. An \textit{additional inner parallelization} is a map $\tau_g$ such that $$ \tau_g : \left\lbrace \begin{aligned} \ [a,b]\times \gamma \times [-1,1] \times \b R^3 &\longrightarrow TN(\gamma) \\ ((t,c,u),v) & \longmapsto \tau_d \left( \l T_\gamma^{-1} ((t,c,u),\l F(t,u)(v))\right) \end{aligned} \right. $$ where, choosing $\varepsilon \in \ ]0,\sfrac{1}{2}[$, $\l F$ is a map such that $$ \l F : \left\lbrace \begin{aligned} \ [a,b]\times[-1,1] & \longrightarrow SO(3) \\ (t,u) & \longmapsto \left\lbrace \begin{aligned} & \mbox{Id}_{SO(3)} & \mbox{for $|u|>1-\varepsilon$} \\ & R_{e_1,\pi + \theta(u)} & \mbox{for $t < a + \varepsilon$} \\ & R_{e_1,-\pi - \theta(u)} & \mbox{for $t > b - \varepsilon$} \end{aligned} \right. \end{aligned} \right. $$ which exists since $\pi_1(SO(3))\hspace{-1mm}=\sfrac{\b Z}{2 \b Z}$ and which is well-defined up to homotopy since $\pi_2(SO(3))\hspace{-1mm}=~\hspace{-2mm}0$. \end{definition} From now on, we will always consider pseudo-parallelizations together with an additional inner parallelization. 
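\medskip \noindent Let us also sketch why the boundary conditions imposed on $\l F$ are consistent, a routine check left implicit above. On the corners of $[a,b]\times[-1,1]$, where two of the three prescriptions apply simultaneously, we have $|u|>1-\varepsilon$, hence $\theta(u)=\pm\pi$ and $$ R_{e_1,\pi+\theta(u)} = R_{e_1,-\pi-\theta(u)} = \mbox{Id}_{SO(3)}, $$ so the three prescriptions agree. Moreover, the resulting loop on $\partial([a,b]\times[-1,1])$ consists of two full twists around the $e_1$-axis, hence it is null-homotopic in $SO(3)$ since $\pi_1(SO(3))=\sfrac{\b Z}{2\b Z}$, which is why $\l F$ extends to all of $[a,b]\times[-1,1]$.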
Finally, note that if $\bar \tau=(N(\gamma); \tau_e, \tau_d, \tau_g)$ is a pseudo-parallelization of a compact oriented 3-manifold together with an additional inner parallelization, then : \begin{enumerate}[\textbullet] \setlength{\itemsep}{0pt} \setlength{\parskip}{5pt} \item the parallelizations $\tau_e$, $\tau_d$ and $\tau_g$ agree on $\partial N(\gamma) \setminus \lbrace b \rbrace \times \gamma \times [-1,1]$, \item $\tau_g = \tau_e \circ \l T_\gamma^{-1}$ on $\lbrace b \rbrace \times \gamma \times [-1,1]$. \end{enumerate} \begin{definition} A \textit{pseudo-section} of a pseudo-parallelization of a compact oriented 3-manifold $M$ together with an additional inner parallelization, $\bar \tau=(N(\gamma); \tau_e, \tau_d, \tau_g)$, is a 3-cycle of \linebreak $(UM,UM_{|\partial M})$ of the following form : $$ \begin{aligned} \bar \tau (M\times \lbrace v \rbrace ) &= \tau_e((M\setminus \mathring N(\gamma))\times \lbrace v \rbrace ) \\ &+ \frac{ \tau_d(N(\gamma)\times \lbrace v \rbrace) + \tau_g(N(\gamma)\times \lbrace v \rbrace) + \tau_e( \lbrace b \rbrace \times \gamma \times C_2(v)) }{2} \end{aligned} $$ where $v\in \b S^2$ and $C_2(v)$ is the 2-chain of $\left[ -1 , 1 \right] \times \b S^1(v)$ of Figure~\ref{fig_C2v}, where $\b S^1(v)$ stands for the circle of $\b S^2$ that lies on the plane orthogonal to $e_1$ and passes through $v$. Note that : $$ \begin{aligned} \partial C_2(v) &= \lbrace (u, R_{e_1,\pi+\theta(u)} (v)) \ | u \in \left[ -1 , 1 \right] \rbrace \\ &+ \lbrace (u, R_{e_1,-\pi-\theta(u)} (v)) \ | u \in \left[ -1 , 1 \right] \rbrace - 2 \cdot \left[ -1, 1 \right] \times \lbrace v \rbrace. 
\end{aligned} $$ \end{definition} \begin{center} \definecolor{zzttqq}{rgb}{0.6,0.2,0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(-2.25,-1) rectangle (13,2.25); \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (0,0) -- (4,2) -- (0,2) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (7.5,2) -- (7.5,0) -- (11.5,0) -- cycle; \draw (0,2)-- (4,2); \draw (4,0)-- (4,2); \draw (4,0)-- (0,0); \draw (0,0)-- (0,2); \draw (7.5,0)-- (7.5,2); \draw (7.5,0)-- (11.5,0); \draw (11.5,0)-- (11.5,2); \draw (7.5,2)-- (11.5,2); \draw (0,0)-- (4,2); \draw (0,2)-- (4,0); \draw (7.5,2)-- (11.5,0); \draw (11.5,2)-- (7.5,0); \draw (-2,1.25) node[anchor=north west] {$C_2(v)=$}; \draw (-0.5,-0.25) node[anchor=north west] {$-1$}; \draw (3.75,-0.25) node[anchor=north west] {$1$}; \draw (7,-0.25) node[anchor=north west] {$-1$}; \draw (11.25,-0.25) node[anchor=north west] {$1$}; \draw [->] (0,0) -- (4,0); \draw [->] (7.5,0) -- (11.5,0); \draw [->] (11.5,0) -- (11.5,2); \draw [->] (4,0) -- (4,2); \draw (4,2) node[anchor=north west] {$\b S^1(v)$}; \draw (11.5,2) node[anchor=north west] {$\b S^1(v)$}; \draw [color=zzttqq] (0,0)-- (4,2); \draw [color=zzttqq] (4,2)-- (0,2); \draw [color=zzttqq] (0,2)-- (0,0); \draw [color=zzttqq] (7.5,2)-- (7.5,0); \draw [color=zzttqq] (7.5,0)-- (11.5,0); \draw [color=zzttqq] (11.5,0)-- (7.5,2); \draw (5.58,1.13) node[anchor=north west] {$-$}; \end{tikzpicture} \captionof{figure}{The 2-chain $C_2(v)$ where we cut the annulus $\left[ -1,1 \right] \times \b S^1(v)$ along $\left[ -1, 1 \right] \times \lbrace v \rbrace$.} \label{fig_C2v} \end{center} \begin{definition} If $\bar\tau = (N(\gamma);\tau_e,\tau_d, \tau_g)$ is a pseudo-parallelization of a compact oriented 3-manifold $M$, let the \textit{Siamese sections} of $\bar\tau$ denote the following sections of $UM$: $$ E_1^{d}: \left\lbrace \begin{aligned} m \in M\setminus \mathring N(\gamma) & \longmapsto \tau_e(m,e_1) \\ m \in N(\gamma) & \longmapsto \tau_d(m,e_1) 
\end{aligned} \right. \hspace{3mm}\mbox{ and }\hspace{3mm} E_1^{g} : \left\lbrace \begin{aligned} m \in M\setminus \mathring N(\gamma) & \longmapsto \tau_e(m,e_1) \\ m \in N(\gamma) & \longmapsto \tau_g(m,e_1). \end{aligned} \right. $$ \end{definition} As already mentioned in the introduction, note that when $\bar\tau = (N(\gamma);\tau_e,\tau_d, \tau_g)$ is a pseudo-parallelization of a compact oriented 3-manifold $M$, its pseudo-section at $e_1$ reads $$ \bar\tau(M\times \lbrace e_1 \rbrace) = \frac{E_1^d(M)+E_1^g(M)}{2} $$ where $E_1^d$ and $E_1^g$ are the Siamese sections of $\bar\tau$. \section{From parallelizations to pseudo-parallelizations} \subsection{Pontrjagin numbers of parallelizations} \label{ssec_defpara} In this subsection we review the definition of first relative Pontrjagin numbers for parallelizations of compact connected oriented 3-manifolds. For a detailed presentation of these objects we refer to \cite[Section 5]{lescopEFTI} and \cite[Subsection~4.1]{lescopcombing}. \\ Let $C_1$ and $C_2$ be compact connected oriented 3-manifolds with identified boundaries. Recall that a \textit{cobordism from $C_1$ to $C_2$} is a compact oriented 4-manifold $W$ whose boundary reads $$ \partial W = - C_1 \bigcup_{\partial C_1 \simeq \lbrace 0 \rbrace \times \partial C_1 } -[0,1] \times \partial C_1 \bigcup_{\partial C_2 \simeq \lbrace 1 \rbrace \times \partial C_1} C_2. $$ Moreover, we require $W$ to be identified with $[0,1[\times C_1$ and $]0,1]\times C_2$ on collars of $C_1$ and $C_2$ in $W$. \\ Recall that any compact oriented 3-manifold bounds a compact oriented 4-manifold, so that a cobordism from $C_1$ to $C_2$ always exists. Also recall that the signature of a 4-manifold is the signature of the intersection form on its second homology group with real coefficients and that any 4-manifold can be turned into a 4-manifold with signature zero by performing connected sums with copies of $\pm \b C P^2$. So let us fix a connected cobordism $W$ from $C_1$ to $C_2$ with signature zero.
\\ Now consider a parallelization $\tau_1$, resp. $\tau_2$, of $C_1$, resp. $C_2$. Define the vector field $\vec n$ on a collar of $\partial W$ as follows. Let $\vec n$ be the unit tangent vector to $[0,1]\times \lbrace x \rbrace$ where $x \in C_1$ or $C_2$. Define $\tau(\tau_1, \tau_2)$ as the trivialization of $TW \otimes \b C$ over $\partial W$ obtained by stabilizing $\tau_1$ or $\tau_2$ into $\vec n \oplus \tau_1$ or $\vec n \oplus \tau_2$ and tensoring with $\b C$. In general, this trivialization does not extend as a trivialization of $TW \otimes \b C$ over $W$. This leads to a Pontrjagin obstruction class $p_1(W;\tau(\tau_1, \tau_2))$ in $H^4(W, \partial W; \pi_3(SU(4)))$. Since $\pi_3(SU(4)) \simeq \b Z$, there exists $p_1(\tau_1, \tau_2)\in \b Z$ such that $p_1(W;\tau(\tau_1, \tau_2))=p_1(\tau_1, \tau_2)[W,\partial W]$. Let us call $p_1(\tau_1, \tau_2)$ the \textit{first relative Pontrjagin number of $\tau_1$ and $\tau_2$}. \\ Similarly, define the \textit{Pontrjagin number $p_1(\tau)$ of a parallelization $\tau$} of a closed connected oriented 3-manifold $M$, by taking a connected oriented 4-manifold $W$ with boundary $M$, a collar of $\partial W$ identified with $]0,1] \times M$ and $\vec n$ as the outward normal vector field over $\partial W$. \\ We have not actually defined the sign of the Pontrjagin numbers. We will not give details here on how to define it; instead, we refer to \cite[\S 15]{MS} or \cite[p.44]{lescopEFTI}. Let us only mention that $p_1$ is the opposite of the second Chern class $c_2$ of the complexified tangent bundle.
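\medskip \noindent In formula form, this sign convention reads, for any real vector bundle $E$ (see \cite[\S 15]{MS}) : $$ p_1(E) = - c_2(E \otimes_{\b R} \b C), $$ and the classes used above are the corresponding relative versions for $TW$ with respect to the boundary trivialization $\tau(\tau_1,\tau_2)$.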
\subsection{Pontrjagin numbers for pseudo-parallelizations} \begin{definition} \label{def_complextriv} Let $\bar \tau=(N(\gamma); \tau_e, \tau_d, \tau_g)$ be a pseudo-parallelization of a compact oriented 3-manifold $M$. A \textit{complex trivialization} $\bar \tau _{\b C}$ associated to $\bar\tau$ is a trivialization of $TM \otimes \b C$ such that~: \begin{enumerate}[\textbullet] \setlength{\itemsep}{0pt} \setlength{\parskip}{5pt} \item $\bar \tau_\b C$ is \textit{special} (\textit{ie}\ its determinant is one everywhere) with respect to the trivialization of the determinant bundle induced by the orientation of $M$, \item on $M\setminus \mathring{N(\gamma)}$, $\bar \tau_\b C = \tau_e \otimes 1_\b C$, \item for $m=(t,c,u)\in [a,b] \times \gamma \times [-1,1]$, $\bar \tau_\b C (m,.)= (m,\tau_d (t,c,u)(\l G(t,u)(.)))$, \end{enumerate} where $\l G$ is a map such that : $$ \l G : \left\lbrace \begin{aligned} \ [a,b] \times [-1,1] &\longrightarrow \ SU(3) \\ (t,u) &\longmapsto \left\lbrace \begin{aligned} & \mbox{Id}_{SU(3)} &\mbox{for $|u| > 1 - \varepsilon$} \\ & \mbox{Id}_{SU(3)} &\mbox{for $t < a+\varepsilon$} \\ & R_{e_1,-\pi-\theta(u)} &\mbox{for $t>b -\varepsilon$}. \end{aligned} \right. \end{aligned} \right. $$ Note that such a smooth map $\l G$ on $[a,b]\times[-1,1]$ exists since $\pi_1(SU(3))=\lbrace 1 \rbrace$. Moreover, $\l G$ is well-defined up to homotopy since $\pi_2(SU(3))=\lbrace 0 \rbrace$. \end{definition} Pseudo-parallelizations, or pseudo-trivializations, were first used in \cite[Section 4.3]{lescopKT} and first defined in \cite[Section 10]{lescopcube}. Note that our conventions are slightly different. \begin{definition} Let $\bar\tau_1$ and $\bar\tau_2$ be pseudo-parallelizations of two compact oriented 3-manifolds $M_1$ and $M_2$ with identified boundaries and let $W$ be a cobordism from $M_1$ to $M_2$ with signature zero.
As in the case of genuine parallelizations, define a trivialization $\tau(\bar\tau_1, \bar\tau_2)$ of $TW \otimes \b C$ over $\partial W$ using the special complex trivializations $\bar\tau_{1,\b C}$ and $\bar\tau_{2,\b C}$ associated to $\bar\tau_1$ and $\bar\tau_2$, respectively. The \textit{first relative Pontrjagin number of $\bar\tau_1$ and $\bar\tau_2$} is the Pontrjagin obstruction $p_1(\bar\tau_1,\bar\tau_2)$ to extending the trivialization $\tau(\bar\tau_1, \bar\tau_2)$ as a trivialization of $TW\otimes \b C$ over $W$. \end{definition} Finally, if $\bar\tau$ is a pseudo-parallelization of a closed oriented 3-manifold $M$, then define the \textit{Pontrjagin number $p_1(\bar\tau)$ of the pseudo-parallelization $\bar\tau$} as $p_1(\tau_\emptyset, \bar \tau)$ as before. \subsection[Variation of $p_1$ as an intersection of three 4-chains]{Variation of $p_1$ as an intersection of three 4-chains} In this subsection, we give a proof of Theorem~\ref{prop_varasint}, which expresses the relative Pon\-trjagin numbers (resp. the variation of Pontrjagin numbers) of pseudo-parallelizations in compact (resp. closed) oriented 3-manifolds as an algebraic intersection of three 4-chains. \begin{lemma} \label{simppara} If $\bar \tau=(N(\gamma);\tau_e,\tau_d,\tau_g)$ is a pseudo-parallelization of a compact oriented 3-manifold $M$, if $E_1^d$ and $E_1^g$ are its Siamese sections and if $E_2^e$ denotes the second vector of $\tau_e$, then $P(e_2^M({E_1^d}^\perp,{E_2^e}_{|\partial M})) = -P(e_2^M({E_1^g}^\perp,{E_2^e}_{|\partial M})) = [\gamma]$ in $H_1(M;\b Z)$. \end{lemma} \begin{proof} Since $E_1^d$ and $E_1^e$ coincide on $M\setminus \mathring N(\gamma)$, the obstruction to extending $E_2^e$ as a section of ${E_1^d}^\perp$ is the obstruction to extending $E_2^e$ as a section of ${E_1^d}^\perp_{|N(\gamma)}$. 
However, parallelizing $N(\gamma)$ with $\tau_d$ and using that $$ \tau_d = \left\lbrace \begin{aligned} & \tau_e &\mbox{ on } \partial (\left[ a , b \right] \times \gamma \times \left[ -1 , 1 \right]) \setminus \lbrace b \rbrace \times \gamma \times \left[ -1 , 1 \right] \\ & \tau_e \circ \l T_\gamma &\mbox{ on } \lbrace b \rbrace \times \gamma \times \left[ -1 , 1 \right] \end{aligned} \right. $$ we get that $E_2^e$ induces a degree +1 map ${E_2^e}_{|\alpha} : \alpha \rightarrow \b S^1$ on any meridian $\alpha$ of $N(\gamma)$. It follows that $$ P(e_2^M({E_1^d}^\perp,{E_2^e}_{|\partial M})) = + [\gamma]. $$ Similarly, parallelizing $N(\gamma)$ with $\tau_g$, $E_2^e$ induces a degree -1 map ${E_2^e}_{|\alpha} : \alpha \rightarrow \b S^1$ on any meridian $\alpha$ of $N(\gamma)$, so that $$ P(e_2^M({E_1^g}^\perp,{E_2^e}_{|\partial M})) = - [\gamma]. $$ \end{proof} Recall that for a combing $X$ of a compact oriented 3-manifold $M$ and a pseudo-parallelization $\bar\tau$ of $M$, if $X$ and $\bar\tau$ are compatible, then $$ L_{\bar\tau=X}=\frac{L_{E_1^d=X}+L_{E_1^g=X}}{2} \mbox{ \ \ and \ \ } L_{\bar\tau=-X}=\frac{L_{E_1^d=-X}+L_{E_1^g=-X}}{2} $$ where $E_1^d$ and $E_1^g$ denote the Siamese sections of $\bar\tau$. \begin{lemma} \label{nulexcep} Let $\bar \tau=(N(\gamma);\tau_e,\tau_d,\tau_g)$ be a pseudo-parallelization of a compact oriented 3-manifold $M$. If $X$ is a torsion combing of $M$ compatible with $\bar \tau$, then $L_{\bar\tau=X}$ and $L_{\bar\tau=-X}$ are rationally null-homologous in $M$. \end{lemma} \begin{proof} Let $E_1^d$ and $E_1^g$ be the Siamese sections of $\bar\tau$. 
Using Proposition~\ref{prop_linksinhomologyI} and the fact that $X$ is a torsion combing, we get, in $H_1(M;\b Q)$, $$ \begin{aligned} \ 2 \cdot [L_{X=-E_1^d}+L_{X=-E_1^g} ] &= [-P(e_2^M({E_1^d}^\perp,{E_2^e}_{|\partial M})) -P(e_2^M({E_1^g}^\perp,{E_2^e}_{|\partial M})) ] \\ \ 2 \cdot [L_{X=E_1^d}+L_{X=E_1^g} ] &= [P(e_2^M({E_1^d}^\perp,{E_2^e}_{|\partial M})) + P(e_2^M({E_1^g}^\perp,{E_2^e}_{|\partial M})) ] \end{aligned} $$ where $E_2^e$ is the second vector of $\tau_e$. Conclude with Lemma~\ref{simppara}. \end{proof} \begin{definition} \label{def_Ft} Let $X$ and $Y$ be $\partial$-compatible combings of a compact oriented 3-manifold $M$. For all $t \in [0,1]$, let $\bar F_t(X,Y)$ denote the 4-chain of $[0,1]\times UM$~: $$ \bar F_t(X,Y) = [0,t] \times X(M) + \lbrace t \rbrace \times \bar F(X,Y) + [t,1] \times Y(M), $$ where $\bar F(X,Y)$ is a 4-chain of $UM$ as in Lemma~\ref{lem_phomotopy}. Note that : $$ \partial \bar F_t(X,Y) = \lbrace 1 \rbrace \times Y(M) - \lbrace 0 \rbrace \times X(M) - [0,1] \times X(\partial M) + \lbrace t \rbrace \times UM_{|L_{X=-Y}}. $$ \end{definition} \begin{lemma} \label{lem_transfert} Let $\bar \tau=(N(\gamma);\tau_e,\tau_d,\tau_g)$ be a pseudo-parallelization of a compact oriented 3-manifold $M$.
If $X$ is a torsion combing of $M$ compatible with $\bar \tau$, then there exist 4-chains of $[0,1]\times UM$, $C_4^\pm(\bar\tau,X)$ and $C_4^\pm(X,\bar\tau)$, with boundaries : $$ \begin{aligned} \partial C_4^\pm(\bar\tau,X) &= \lbrace 1 \rbrace \times (\pm X)(M) - \lbrace 0 \rbrace \times \bar\tau(M\times \lbrace \pm e_1 \rbrace ) - [0,1] \times (\pm X)(\partial M) \\ \partial C_4^\pm(X,\bar\tau) &= \lbrace 1 \rbrace \times \bar\tau(M\times \lbrace \pm e_1 \rbrace ) - \lbrace 0 \rbrace \times (\pm X)(M) - [0,1] \times (\pm X)(\partial M) \\ \end{aligned} $$ \end{lemma} \begin{proof} Let $E_1^d$ and $E_1^g$ be the Siamese sections of $\bar\tau$ and just set $$ \begin{aligned} C_4^\pm(\bar\tau,X) &= \sfrac{1}{2} \cdot \left( \bar F_t( \pm E_1^d, \pm X) + \bar F_t( \pm E_1^g, \pm X) - \lbrace t \rbrace \times UM_{|\Sigma(\pm e_1)} \right) \\ C_4^\pm(X,\bar\tau) &= \sfrac{1}{2} \cdot \left( \bar F_t( \pm X, \pm E_1^d) + \bar F_t( \pm X, \pm E_1^g) - \lbrace t \rbrace \times UM_{|-\Sigma(\pm e_1)} \right) \end{aligned} $$ where the 4-chains $\bar F_t$ are as in Definition~\ref{def_Ft} and where $\Sigma(\pm e_1)$ are rational 2-chains of $M$ bounded by $\pm(L_{E_1^d=-X}+L_{E_1^g=-X})$, which are rationally null-homologous according to Lemma~\ref{nulexcep}. \end{proof} \begin{remark} \label{rmkpcase} Recall that a genuine parallelization $\tau$ of a compact oriented 3-manifold is a pseudo-parallelization whose link is empty. In such a case, $E_1^d$ and $E_1^g$ are the first vector $E_1^\tau$ of the parallelization $\tau$ and the chains $C_4^{\pm}$ can be simply defined as $$ \begin{aligned} C_4^\pm(\tau,X) &= \bar F_t( \pm E_1^\tau, \pm X) - \lbrace t \rbrace \times UM_{|\Sigma(\pm e_1)} \\ C_4^\pm(X,\tau) &= \bar F_t( \pm X, \pm E_1^\tau) - \lbrace t \rbrace \times UM_{|-\Sigma(\pm e_1)} \end{aligned} $$ where the 4-chains $\bar F_t$ are as in Definition~\ref{def_Ft} and where $\Sigma(\pm e_1)$ are rational 2-chains of $M$ bounded by $\pm L_{E_1^\tau=-X}$. 
\end{remark} \begin{lemma} \label{lem_C4pme1} Let $\tau$ and $\bar \tau$ be two pseudo-parallelizations of a compact oriented 3-manifold $M$ that coincide on $\partial M$ and whose links are disjoint. For all $v\in \b S^2$, there exists a 4-chain $C_4(M,\tau,\bar\tau ;v)$ of $[0,1] \times UM$ such that $$ \partial C_4(M,\tau,\bar\tau ;v) = \lbrace 1 \rbrace \times \bar \tau (M \times \lbrace v \rbrace)- \lbrace 0 \rbrace \times \tau (M \times \lbrace v \rbrace) - [0,1] \times \tau (\partial M\times \lbrace v \rbrace). $$ \end{lemma} \begin{proof} Let us write $C_4(\tau,\bar\tau ;v)$ instead of $C_4(M,\tau,\bar\tau ;v)$ when there is no ambiguity. Since the 3-chains $\partial C_4(\tau,\bar\tau ;v)$, where $v \in \b S^2$, are homologous, it is enough to prove the existence of $C_4(\tau,\bar\tau ;e_1)$. First, let $X$ be a combing of $M$ such that $X$ is compatible with $\tau$ and $\bar \tau$. In general, this combing is not a torsion combing. Second, let $E_1^d$ and $E_1^g$ (resp. $\bar E_1^d$ and $\bar E_1^g$) denote the Siamese sections of $\tau$ (resp. $\bar\tau$) and set $$ \begin{aligned} \bar F(\tau,X) &= \sfrac{1}{2} \cdot \left( \bar F( E_1^d,X) + \bar F( E_1^g,X) \right) \\ \bar F(X,\bar\tau) &= \sfrac{1}{2} \cdot \left( \bar F(X, \bar E_1^d) + \bar F(X, \bar E_1^g)\right). \end{aligned} $$ These chains have boundaries : $$ \begin{aligned} \partial \bar F(\tau,X) &= X(M) - \tau(M\times \lbrace e_1 \rbrace) + UM_{|L_{\tau=-X}} \\ \partial \bar F(X,\bar\tau) &= \bar\tau(M\times \lbrace e_1 \rbrace) - X(M) + UM_{|-L_{\bar\tau=-X}}.
\end{aligned} $$ Hence, for all $t \in [0,1]$, the 4-chain of $[0,1]\times UM$ $$ \bar F_t(\tau, \bar\tau; e_1) = [0,t] \times \tau(M\times \lbrace e_1 \rbrace) + \lbrace t \rbrace \times \left(\bar F(\tau,X) + \bar F(X,\bar\tau)\right) + [t,1] \times \bar\tau(M\times \lbrace e_1 \rbrace) $$ has boundary : $$ \begin{aligned} \partial \bar F_t(\tau, \bar\tau; e_1) &= \lbrace 1 \rbrace \times \bar\tau(M\times \lbrace e_1 \rbrace) - \lbrace 0 \rbrace \times \tau(M\times \lbrace e_1 \rbrace) \\ &- [0,1] \times \tau(\partial M\times \lbrace e_1 \rbrace) + \lbrace t \rbrace \times UM_{|L_{\tau=-X}\cup - L_{\bar\tau=-X}}. \end{aligned} $$ Thanks to Proposition~\ref{prop_linksinhomologyI} and Lemma~\ref{simppara}, in $H_1(M;\b Q)$ : $$ \begin{aligned} 4 \cdot [ L_{\tau=-X} ] &= 2\cdot [L_{E_1^d=-X} + L_{E_1^g=-X}] \\ &= [-2 \cdot P(e_2^M(X^\perp,{E_2^e}_{|\partial M}))\hspace{-1mm}+\hspace{-1mm}P(e_2^M({E_1^d}^\perp,{E_2^e}_{|\partial M}))\hspace{-1mm}+\hspace{-1mm}P(e_2^M({E_1^g}^\perp,{E_2^e}_{|\partial M}))] \\ &= -2 \cdot[P(e_2^M(X^\perp,{E_2^e}_{|\partial M}))] \end{aligned} $$ where $E_2^e$ is the second vector of $\tau_e$. Similarly, $2\cdot [L_{\bar\tau=-X}]=-[P(e_2^M(X^\perp,{E_2^e}_{|\partial M}))]$ in $H_1(M;\b Q)$. So, the link $L_{\tau=-X}\cup - L_{\bar\tau=-X}$ is rationally null-homologous in $M$, \textit{ie}\ there exists a rational 2-chain $\Sigma(\tau,\bar\tau)$ such that $\partial \Sigma(\tau,\bar\tau) = L_{\tau=-X}\cup - L_{\bar\tau=-X}$. Hence, we get a 4-chain $C_4(\tau,\bar\tau;e_1)$ as desired by setting $$ C_4(\tau,\bar\tau;e_1) = \bar F_t(\tau, \bar\tau; e_1) - \lbrace t \rbrace \times UM_{|\Sigma(\tau,\bar\tau)}. $$ \end{proof} \begin{lemma} \label{lem_welldefined} Let $\tau$ and $\bar \tau$ be two pseudo-parallelizations of a compact oriented 3-manifold $M$ that coincide on $\partial M$. 
If $x$, $y$ and $z$ are points in $\b S^2$ with pairwise different distances to $e_1$, then there exist pairwise transverse 4-chains $C_4(\tau,\bar\tau ;x)$, $C_4(\tau,\bar\tau ;y)$ and $C_4(\tau,\bar\tau ;z)$ as in Lemma~\ref{lem_C4pme1} and the algebraic intersection $\langle C_4(\tau,\bar\tau ;x), C_4(\tau,\bar\tau ;y), C_4(\tau,\bar\tau ;z) \rangle_{[0,1]\times UM}$ only depends on $\tau$ and~$\bar\tau$. \end{lemma} \begin{proof} Pick any $x$, $y$ and $z$ in $\b S^2$ with pairwise different distances to $e_1$ and consider some 4-chains $C_4(\tau,\bar\tau ;x)$, $C_4(\tau,\bar\tau ;y)$ and $C_4(\tau,\bar\tau ;z)$ such that, for $v \in \lbrace x , y , z \rbrace$, $$ \partial C_4(\tau,\bar\tau ;v) = \lbrace 1 \rbrace \times \bar\tau(M\times\lbrace v \rbrace) - \lbrace 0 \rbrace \times \tau (M\times \lbrace v \rbrace ) - [0,1] \times \tau (\partial M \times \lbrace v \rbrace ). $$ The intersection of $C_4(\tau,\bar\tau;x)$, $C_4(\tau,\bar\tau;y)$ and $C_4(\tau,\bar\tau;z)$ is in the interior of $[0,1]\times UM$. The algebraic triple intersection of these three 4-chains only depends on the fixed boundaries and on the homology classes of the 4-chains. The space $H_4([0,1]\times UM ; \b Q)$ is generated by the classes of 4-chains $\Sigma \times \b S^2$ where $\Sigma$ is a surface in $M$. 
If $\Sigma \times \b S^2$ is such a 4-chain, then
$$
\begin{aligned}
& \langle C_4(\tau,\bar\tau ;x) + \Sigma\times \b S^2, C_4(\tau,\bar\tau ;y), C_4(\tau,\bar\tau ;z) \rangle - \langle C_4(\tau,\bar\tau ;x) , C_4(\tau,\bar\tau ;y), C_4(\tau,\bar\tau ;z) \rangle \\
&= \langle \Sigma\times \b S^2, C_4(\tau,\bar\tau ;y), C_4(\tau,\bar\tau ;z) \rangle \\
&= \left\lbrace
\begin{aligned}
& \langle \Sigma\times \b S^2, [0,1] \times \tau(M \times \lbrace y \rbrace ) , [0,1] \times \tau(M \times \lbrace z \rbrace) \rangle \mbox{ , pushing $\Sigma \times \b S^2$ near $0$,} \\
& \langle \Sigma\times \b S^2, [0,1] \times \bar\tau(M \times \lbrace y \rbrace ) , [0,1] \times \bar \tau(M \times \lbrace z \rbrace) \rangle \mbox{ , pushing $\Sigma \times \b S^2$ near $1$.} \\
\end{aligned}
\right.
\end{aligned}
$$
Hence, $\langle \Sigma\times \b S^2, C_4(\tau,\bar\tau ;y), C_4(\tau,\bar\tau ;z) \rangle$ only depends on the restrictions of $\tau$ and $\bar\tau$ to $\partial M$. So, use Lemma~\ref{lem_extendpparallelization} to extend a trivialization of $TM_{|\Sigma}$ as a pseudo-parallelization $\tau'$ that coincides with $\tau$ and $\bar\tau$ on $\partial M$. Considering this pseudo-parallelization, we get
$$
\langle \Sigma\times \b S^2, C_4(\tau,\bar\tau ;y), C_4(\tau,\bar\tau ;z) \rangle \hspace{-1mm}=\hspace{-1mm} \langle \Sigma\times \b S^2, [0,1] \times \tau'(M \times \lbrace y \rbrace ) , [0,1] \times \tau'(M \times \lbrace z \rbrace) \rangle \hspace{-1mm}=\hspace{-1mm} 0,
$$
so that the algebraic triple intersection of the three chains $C_4(\tau,\bar\tau;x)$, $C_4(\tau,\bar\tau;y)$ and $C_4(\tau,\bar\tau;z)$ only depends on their fixed boundaries.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{prop_varasint}]
Let $\tau$ and $\bar \tau$ be two pseudo-parallelizations of a compact oriented 3-manifold $M$ that coincide on $\partial M$ and whose links are disjoint.
To conclude the proof of Theorem~\ref{prop_varasint}, we have to prove that for any $x$, $y$ and $z$ in $\b S^2$ with pairwise different distances to $e_1$:
$$
p_1(\tau, \bar\tau)= 4 \cdot \langle C_4(M,\tau,\bar\tau ;x),C_4(M,\tau,\bar\tau ;y),C_4(M,\tau,\bar\tau ;z) \rangle_{[0,1]\times UM}.
$$
First, we know from \cite[Lemma 10.9]{lescopcube} that this is true if $M$ is a $\b Q$HH of genus 1. Notice that it is also true if $M$ embeds in such a manifold. Indeed, if $\b H$ is a $\b Q$HH of genus 1 and if $M$ embeds in $\b H$ then, using Lemma~\ref{lem_extendpparallelization} and the fact that $\tau$ and $\bar \tau$ coincide on $\partial M$, there exists a pseudo-parallelization $\check \tau$ of $\b H \setminus \mathring M$ such that
$$
\bar \tau_{\b H} : \left\lbrace
\begin{aligned}
m \in M &\mapsto \bar\tau(m,.) \\
m \in \b H\setminus \mathring M & \mapsto \check \tau(m,.)
\end{aligned}
\right. \mbox{ \ and \ } \tau_{\b H} : \left\lbrace
\begin{aligned}
m \in M &\mapsto \tau(m,.) \\
m \in \b H\setminus \mathring M & \mapsto \check \tau(m,.)
\end{aligned}
\right.
$$
are pseudo-parallelizations of $\b H$. Furthermore, for any $v \in \b S^2$, let $C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;v)$ be the 4-chain of $[0,1]\times U\b H$:
$$
C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;v) = C_4(M,\tau,\bar\tau ;v)\cup [0,1] \times \check \tau( (\b H \setminus \mathring M) \times \lbrace v \rbrace )
$$
where $C_4(M,\tau,\bar\tau ;v)$ is as in Lemma~\ref{lem_C4pme1}. The boundary of $C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;v)$ is:
$$
\partial C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;v) = \lbrace 1 \rbrace \times \bar\tau_{\b H}(\b H\times \lbrace v \rbrace ) - \lbrace 0 \rbrace \times \tau_{\b H}(\b H\times \lbrace v \rbrace ) - [0,1] \times \tau_{\b H}(\partial \b H\times \lbrace v \rbrace ).
$$
Using the definition of Pontrjagin numbers of pseudo-parallelizations and the hypothesis on $\b H$, it follows that if $x,y$ and $z$ are points in $\b S^2$ with pairwise different distances to $e_1$:
$$
\begin{aligned}
p_1(\tau,\bar\tau) = p_1(\tau_{\b H},\bar\tau_{\b H}) = 4 \cdot \langle C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;x) , C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;y) , C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;z) \rangle.
\end{aligned}
$$
Now note that:
$$
\langle [0,1] \times \check \tau (\b H\setminus \mathring M \times \lbrace x \rbrace) , [0,1] \times \check \tau (\b H\setminus \mathring M \times \lbrace y \rbrace ), [0,1] \times \check \tau (\b H\setminus \mathring M \times \lbrace z \rbrace) \rangle = 0.
$$
Indeed, if $\check \tau = (N(\check \gamma); \check \tau_e,\check \tau_d,\check \tau_g)$, for all $v\in \b S^2$ the pseudo-section of $\check\tau$ reads
$$
\begin{aligned}
\check \tau ((\b H \setminus \mathring M)\times \lbrace v \rbrace ) &= \check\tau_e(((\b H \setminus \mathring M)\setminus \mathring N(\check\gamma))\times \lbrace v \rbrace ) \\
&+ \frac{\check \tau_d(N(\check\gamma)\times \lbrace v \rbrace) + \check\tau_g(N(\check\gamma)\times \lbrace v \rbrace) + \check\tau_e( \lbrace b \rbrace \times \check\gamma \times C_2(v)) }{2}.
\end{aligned}
$$
The 3-chains $\check\tau_e(((\b H \setminus \mathring M)\setminus \mathring N(\check\gamma))\times \lbrace v \rbrace )$, for $v\in \lbrace x,y,z \rbrace$, are pairwise disjoint since $\check\tau_e$ is a genuine parallelization and since $x,y$ and $z$ are pairwise distinct points in $\b S^2$. Moreover, the 3-chains $\check\tau_e( \lbrace b \rbrace \times \check\gamma \times C_2(v))$, for $v\in \lbrace x,y,z\rbrace$, are also pairwise disjoint since they are subsets of the $\check\tau_e(\lbrace b \rbrace \times \check\gamma \times \b S^1(v))$, $v \in \lbrace x,y,z \rbrace$, which are pairwise disjoint since $x,y$ and $z$ have pairwise different distances to $e_1$.
Finally, we have:
$$
\begin{aligned}
\langle \check\tau_d(N(\check \gamma) \times \lbrace x\rbrace)+\check\tau_g(N(\check \gamma) \times \lbrace x\rbrace) , \check\tau_d(N(\check \gamma) \times \lbrace y\rbrace)+\check\tau_g(N(\check \gamma) \times \lbrace y\rbrace) , \\
\check\tau_d(N(\check \gamma) \times \lbrace z\rbrace)+\check\tau_g(N(\check \gamma) \times \lbrace z\rbrace) \rangle &= 0
\end{aligned}
$$
since a triple intersection between the 3-chains
$$
\lbrace \check\tau_d(N(\check \gamma) \times \lbrace v\rbrace)+\check\tau_g(N(\check \gamma) \times \lbrace v\rbrace)\rbrace_{v\in \lbrace x,y,z\rbrace}
$$
would be contained in an intersection between two of the $\lbrace \check\tau_d(N(\check \gamma)\times \lbrace v\rbrace)\rbrace_{v\in \lbrace x,y,z \rbrace}$ or between two of the $\lbrace \check\tau_g(N(\check \gamma)\times \lbrace v\rbrace)\rbrace_{v\in \lbrace x,y,z \rbrace}$, which must be empty since $\check\tau_d$ and $\check\tau_g$ are genuine parallelizations. It follows that
$$
\begin{aligned}
p_1(\tau,\bar\tau) &= 4 \cdot\langle C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;x) , C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;y) , C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;z) \rangle \\
&= 4 \cdot\langle C_4(M,\tau,\bar\tau ;x) , C_4(M,\tau,\bar\tau ;y) , C_4(M,\tau,\bar\tau ;z) \rangle.
\end{aligned}
$$
Using the same construction, note also that it is enough to prove the statement when $M$ is a closed oriented 3-manifold since any compact oriented 3-manifold embeds into a closed one. \\
Let us finally prove Theorem~\ref{prop_varasint} when $M$ is a closed oriented 3-manifold.
Consider a Heegaard splitting $M = H_1 \cup_\Sigma H_2$ such that there is a collar $\Sigma\times[0,1] \subset H_2$ of $\Sigma$ satisfying
$$
N(\bar \gamma) \cap \left(\Sigma\times[0,1] \right) = \emptyset \mbox{ \ \ and \ \ } N( \gamma) \cap \left(\Sigma\times[0,1] \right) = \emptyset
$$
where $\gamma$ and $\bar\gamma$ are the links of $\tau$ and $\bar\tau$, respectively, and such that $\Sigma$ is identified with $\Sigma\times \lbrace 0 \rbrace$. Such a splitting can be obtained by considering a triangulation of $M$ containing $\gamma$ and $\bar\gamma$ in its 1-skeleton, and then defining $H_1$ as a tubular neighborhood of this 1-skeleton. Using Lemma~\ref{lem_extendpparallelization}, we can construct a pseudo-parallelization $\tau^c$ of $\Sigma\times [0,1]$ such that $\tau^c$ coincides with $\bar\tau$ on $\Sigma \times \lbrace 1 \rbrace$ and with $\tau$ on $\Sigma \times \lbrace 0 \rbrace$. Then, write $H'_1 = H_1 \cup (\Sigma\times[0,1])$ and $H'_2=H_2\setminus( \Sigma\times[0,1[)$ -- see Figure~\ref{figHH} -- and set
$$
\check \tau : \left\lbrace
\begin{aligned}
(m,v) \in UH_1 &\longmapsto \tau(m,v) \\
(m,v) \in U(\Sigma\times[0,1]) &\longmapsto \tau^c(m,v) \\
(m,v) \in UH'_2 &\longmapsto \bar\tau(m,v).
\end{aligned}
\right.
$$ \begin{center} \definecolor{zzttqq}{rgb}{0.6,0.2,0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=0.75cm] \clip(-3.5,-3.5) rectangle (5.5,3.5); \fill[line width=0pt,color=zzttqq,fill=zzttqq,fill opacity=0.15] (-3,2) -- (0,2) -- (0,-2) -- (-3,-2) -- cycle; \fill[line width=0pt,color=zzttqq,fill=zzttqq,fill opacity=0.05] (0,2) -- (2,2) -- (2,-2) -- (0,-2) -- cycle; \fill[line width=0pt,color=zzttqq,fill=zzttqq,fill opacity=0.15] (2,2) -- (5,2) -- (5,-2) -- (2,-2) -- cycle; \draw (0,2)-- (0,-2); \draw (2,2)-- (2,-2); \draw (0,2.4)-- (0,2.6); \draw (0,2.6)-- (5,2.6); \draw (5,2.4)-- (5,2.6); \draw (2,-2.4)-- (2,-2.6); \draw (2,-2.6)-- (-3,-2.6); \draw (-3,-2.6)-- (-3,-2.4); \begin{scriptsize} \draw (-1.75,0.25) node[anchor=north west] {$H_1$}; \draw (-1.75,-0.5) node[anchor=north west] {$\tau$}; \draw (3.25,0.25) node[anchor=north west] {$H'_2$}; \draw (3.25,-0.5) node[anchor=north west] {$\bar\tau$}; \draw (0.75,-0.5) node[anchor=north west] {$\tau^c$}; \draw (-0.75,-2.75) node[anchor=north west] {$H'_1$}; \draw (2.44,3.11) node[anchor=north west] {$H_2$}; \draw (-0.5,-2) node[anchor=north west] {$\Sigma\times\lbrace 0\rbrace$}; \draw (1.5,2.5) node[anchor=north west] {$\Sigma\times \lbrace 1 \rbrace$}; \end{scriptsize} \end{tikzpicture} \captionof{figure} {}\label{figHH} \end{center} For $v \in \b S^2$, consider some 4-chains $C_4(H_1,\tau,\check\tau ;v)$, $C_4(H_2,\tau,\check\tau ;v)$, $C_4(H'_1,\check\tau,\bar\tau ;v)$ and $C_4(H'_2,\check\tau,\bar\tau ;v)$ of $[0,1]\times UH_1$, $[0,1]\times UH_2$, $[0,1]\times UH'_1$ and $[0,1]\times UH'_2$, respectively, such that : $$ \begin{aligned} \partial C_4(H_1,\tau,\check\tau ;v) &\hspace{-1mm}=\hspace{-1mm} \lbrace 1 \rbrace \times \check \tau (H_1 \times \lbrace v \rbrace) - \lbrace 0 \rbrace \times \tau (H_1\times \lbrace v \rbrace ) - [0,1] \times \tau (\partial H_1 \times \lbrace v \rbrace) \\ \partial C_4(H_2,\tau,\check\tau ;v) &\hspace{-1mm}=\hspace{-1mm} \lbrace 1 \rbrace \times 
\check \tau (H_2 \times \lbrace v \rbrace) - \lbrace 0 \rbrace \times \tau (H_2\times \lbrace v \rbrace ) - [0,1] \times \tau (\partial H_2 \times \lbrace v \rbrace) \\ \end{aligned} $$ and $$ \begin{aligned} \partial C_4(H'_1,\check\tau,\bar\tau ;v) &\hspace{-1mm}=\hspace{-1mm} \lbrace 1 \rbrace \times \bar \tau (H'_1 \times \lbrace v \rbrace) - \lbrace 0 \rbrace \times \check \tau (H'_1\times \lbrace v \rbrace ) - [0,1] \times \check \tau (\partial H'_1 \times \lbrace v \rbrace) \\ \partial C_4(H'_2,\check\tau,\bar\tau ;v) &\hspace{-1mm}=\hspace{-1mm}\lbrace 1 \rbrace \times \bar \tau (H'_2 \times \lbrace v \rbrace) - \lbrace 0 \rbrace \times \check \tau (H'_2\times \lbrace v \rbrace ) - [0,1] \times \check \tau (\partial H'_2 \times \lbrace v \rbrace). \\ \end{aligned} $$ Since $H_1$ and $H_2$ embed in rational homology balls, for any $x,y$ and $z$ in $\b S^2$ with pairwise different distances to $e_1$ $$ \begin{aligned} p_1(\tau_{|H_1},\check\tau_{|H_1}) &= 4 \cdot\langle C_4(H_1,\tau,\check\tau ;x) , C_4(H_1,\tau,\check\tau ;y) , C_4(H_1,\tau,\check\tau ;z) \rangle_{[0,1]\times UH_1} \\ p_1(\tau_{|H_2},\check\tau_{|H_2}) &= 4 \cdot\langle C_4(H_2,\tau,\check\tau ;x) , C_4(H_2,\tau,\check\tau ;y) , C_4(H_2,\tau,\check\tau ;z) \rangle_{[0,1]\times UH_2} \\ \end{aligned} $$ so that, using $C_4(M,\tau,\check\tau;v) = C_4(H_1,\tau,\check\tau;v)+C_4(H_2,\tau,\check\tau;v)$ for $v\in \b S^2$, $$ p_1(\tau,\check\tau) = 4 \cdot\langle C_4(M,\tau,\check\tau ;x) , \ C_4(M,\tau,\check\tau ;y) , \ C_4(M,\tau,\check\tau ;z) \rangle_{[0,1]\times UM}. 
$$
Similarly, since $H'_1$ and $H'_2$ embed in rational homology balls, for any $x,y$ and $z$ in $\b S^2$ with pairwise different distances to $e_1$:
$$
\begin{aligned}
p_1(\check\tau_{|H'_1},\bar\tau_{|H'_1}) &= 4 \cdot\langle C_4(H'_1,\check\tau,\bar\tau ;x) , C_4(H'_1,\check\tau,\bar\tau ;y) , C_4(H'_1,\check\tau,\bar\tau ;z) \rangle _{[0,1]\times UH'_1} \\
p_1(\check\tau_{|H'_2},\bar\tau_{|H'_2}) &= 4 \cdot\langle C_4(H'_2,\check\tau,\bar\tau ;x) , C_4(H'_2,\check\tau,\bar\tau ;y) , C_4(H'_2,\check\tau,\bar\tau ;z)\rangle_{[0,1]\times UH'_2} \\
\end{aligned}
$$
so that, using $C_4(M,\check\tau,\bar\tau;v) = C_4(H'_1,\check\tau,\bar\tau;v)+C_4(H'_2,\check\tau,\bar\tau;v)$ for $v\in \b S^2$,
$$
p_1(\check\tau,\bar\tau) = 4 \cdot\langle C_4(M,\check\tau,\bar\tau;x), C_4(M,\check\tau,\bar\tau;y), C_4(M,\check\tau,\bar\tau;z) \rangle_{[0,1]\times UM}.
$$
Finally, for all $v \in \b S^2$, reparameterizing and stacking $C_4(M,\tau,\check\tau;v)$ and $C_4(M,\check\tau,\bar\tau;v)$, we get a 4-chain $C_4(M,\tau,\bar\tau; v)$ of $[0,1]\times UM$ such that
$$
\partial C_4(M,\tau,\bar\tau;v) = \lbrace 1 \rbrace \times \bar \tau (M \times \lbrace v \rbrace) - \lbrace 0 \rbrace \times \tau(M\times \lbrace v \rbrace) - [0,1]\times \tau (\partial M \times \lbrace v \rbrace)
$$
and such that for any $x,y$ and $z$ in $\b S^2$ with pairwise different distances to $e_1$:
$$
p_1(\tau, \bar \tau)= 4 \cdot\langle C_4(M,\tau,\bar\tau;x), C_4(M,\tau,\bar\tau;y), C_4(M,\tau,\bar\tau;z) \rangle_{[0,1]\times UM}.
$$
\end{proof}
\section{From pseudo-parallelizations to torsion combings}
\subsection{Variation of $p_1$ as an intersection of two 4-chains}
\begin{definition}
Let $M$ be a compact oriented 3-manifold. A trivialization $\rho$ of $TM_{|\partial M}$ is \textit{admissible} if there exists a section $X$ of $UM$ such that $(X,\rho)$ is a torsion combing of $M$.
\end{definition}
\begin{lemma} \label{lem_HtMb}
Let $M$ be a compact oriented 3-manifold, let $\rho$ be an admissible trivialization of $TM_{|\partial M}$ and let $S_1, S_2 , \ldots , S_{\beta_1(M)}$ be surfaces in $M$ whose classes form a basis of $H_2(M;\b Q)$. The subspace $H_T^\rho(M)$ of $H_2(UM;\b Q)$ generated by $\lbrace [X(S_1)], \ldots, [X(S_{\beta_1(M)})] \rbrace$ where $X$ is a section of $UM$ such that $(X,\rho)$ is a torsion combing of $M$, only depends on $\rho$.
\end{lemma}
\begin{proof}
Let $(Y,\rho)$ be another choice of torsion combing of $M$. Assume, without loss of generality, that $(X,\rho)$ and $(Y,\rho)$ are $\partial$-compatible, and let $C(X,Y)$ be the 4-chain of $UM$
$$
C(X,Y) = \bar F(X,Y) - UM_{|\Sigma_{X=-Y}}
$$
constructed using Lemma~\ref{lem_phomotopy} and Proposition~\ref{prop_linksinhomologyI}, which provide $\bar F(X,Y)$ and a 2-chain $\Sigma_{X=-Y}$ of $M$ bounded by $L_{X=-Y}$, respectively. For $i \in \lbrace 1,2,\ldots,\beta_1(M)\rbrace$,
$$
Y(S_i) - X(S_i) = \partial (C(X,Y) \cap UM_{|S_i}).
$$
Hence $[X(S_i)] = [Y(S_i)]$ in $H_2(UM;\b Q)$, so that $H_T^\rho(M)$ does not depend on the choice of $X$.
\end{proof}
\begin{lemma} \label{lem_evaluationII}
Let $M$ be a compact oriented 3-manifold, let $\rho$ be an admissible trivialization of $TM_{|\partial M}$ and let $(X,\rho)$ and $(Y,\rho)$ be $\partial$-compatible torsion combings of $M$. There exists a 4-chain $C_4(X,Y)$ of $[0,1]\times UM$ such that
$$
\partial C_4(X,Y) = \lbrace 1 \rbrace \times Y(M) - \lbrace 0 \rbrace \times X(M) - [0,1] \times X(\partial M).
$$
For any such chain $C_4(X,Y)$, if $C$ is a 2-cycle of $[0,1]\times UM$ then
$$
[C] = \langle C , C_4(X,Y) \rangle_{[0,1] \times UM} [S] \mbox{ \ in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$,}
$$
where $[S]$ is the homology class of the fiber of $UM$ in $H_2([0,1]\times UM ; \b Q)$.
\end{lemma}
\begin{proof}
Observe that $H_2(UM;\b Q)$ is generated by the family $\lbrace [Z(S_1)], \ldots, [Z(S_{\beta_1(M)})], [S] \rbrace$ where $S_1, \ldots , S_{\beta_1(M)}$ are surfaces in $M$ whose classes form a basis of $H_2(M;\b Q)$ and where $Z$ is a torsion combing of $M$ that coincides with $X$ and $Y$ on $\partial M$. Let $C_4(X,Y)$ be the 4-chain
$$
C_4(X,Y) = \bar F_t(X,Y) - \lbrace t \rbrace \times UM_{|\Sigma_{X=-Y}}
$$
where $\bar F_t(X,Y)$ is a 4-chain as in Definition~\ref{def_Ft} and $\Sigma_{X=-Y}$ is a 2-chain of $M$ bounded by $L_{X=-Y}$ provided by Proposition~\ref{prop_linksinhomologyI}. The chain $C_4(X,Y)$ has the desired boundary. Note that $\langle [S], C_4(X,Y) \rangle = 1$. Moreover, $\langle [Z(\Sigma)], C_4(X,Y) \rangle = 0$ for any surface $\Sigma$ in $M$. Indeed, notice that
$$
\langle [Z(\Sigma)], C_4(X,Y) \rangle = \left\lbrace
\begin{aligned}
&\langle [Z(\Sigma)], [0,1]\times X(M) \rangle \mbox{ , pushing $Z(\Sigma)$ before $t$,}\\
&\langle [Z(\Sigma)], [0,1]\times Y(M) \rangle \mbox{ , pushing $Z(\Sigma)$ after $t$.}
\end{aligned}
\right.
$$
As a consequence, $\langle [Z(\Sigma)], C_4(X,Y) \rangle$ is independent of $X$ and $Y$. Let us prove that it is possible to construct a torsion combing $Z'$ that coincides with $X$ and $Y$ on $\partial M$ and such that
$$
\langle [Z(\Sigma)], [0,1]\times Z'(M) \rangle=0.
$$
Using the parallelization $\rho=(E_1^\rho,E_2^\rho,E_3^\rho)$ of $\partial M$ induced by $X$, define a homotopy
$$
\l Z : [0,1] \times \partial M \rightarrow [0,1]\times UM_{|\partial M}
$$
from $-Z_{|\partial M}$ to $Z_{|\partial M}$ along the unique geodesic arc passing through $E_3^\rho$. Since $\Sigma$ sits in $\mathring{M}$, we can get a collar $\l C\simeq [0,1]\times \partial M$ of $\partial M$ such that $\l C \cap \Sigma = \emptyset$, where $\lbrace 1 \rbrace \times \partial M$ is identified with $\partial M$. Finally, set $Z'$ to coincide with $-Z$ on $M\setminus \mathring{\l C}$ and with the homotopy $\l Z$ on the collar.
The combing $(Z',\rho)$ is a torsion combing. Indeed, $E_2^\rho$ can be extended as a nonvanishing section of ${Z'}^\perp_{|\l C}$ so that
$$
e_2^M(Z'^\perp,E_2^\rho)=e_2^{M\setminus \mathring{\l C}}(Z'^\perp,E_2^\rho)=e_2^M(-Z^\perp,E_2^\rho)=-e_2^M(Z^\perp,E_2^\rho).
$$
Finally, using the torsion combing $Z'$, we get $\langle [Z(\Sigma)], [0,1]\times Z'(M) \rangle = 0$. \\
To conclude the proof, assume that $C'_4(X,Y)$ is a 4-chain with the same boundary as the chain $C_4(X,Y)$ we constructed, and let $C$ be a 2-cycle of $[0,1]\times UM$. The 2-cycle $C$ is homologous to a 2-cycle in $\lbrace 1 \rbrace \times UM$. Similarly, $(C'_4(X,Y) - C_4(X,Y))$ is homologous to a 4-cycle in $\lbrace 0 \rbrace \times UM$. Hence, $\langle C , \ C'_4(X,Y) - C_4(X,Y) \rangle = 0$ since these representatives are disjoint.
\end{proof}
\begin{lemma} \label{bord}
Let $\tau$ and $\bar \tau$ be two pseudo-parallelizations of a compact oriented 3-manifold $M$ that coincide on $\partial M$. Let $C_4(\tau,\bar\tau;\pm e_1)$ denote 4-chains of $[0,1]\times UM$ as in Theorem~\ref{prop_varasint} for $v=\pm e_1$. If the 4-chains $C_4(\tau,\bar\tau;\pm e_1)$ are transverse to each other, then
$$
\begin{aligned}
\partial (C_4(\tau,\bar\tau;e_1)\cap C_4(\tau,\bar\tau;-e_1)) &= \sfrac{1}{4} \cdot \lbrace 1 \rbrace \times \left( \bar E_1^d(L_{\bar E_1^d=-\bar E_1^g}) - (-\bar E_1^d)(L_{\bar E_1^d=-\bar E_1^g}) \right) \\
&- \sfrac{1}{4} \cdot \lbrace 0 \rbrace \times \left( E_1^d(L_{E_1^d=-E_1^g}) - (-E_1^d)(L_{E_1^d=-E_1^g}) \right)
\end{aligned}
$$
where $E_1^d$ and $E_1^g$, resp. $\bar E_1^d$ and $\bar E_1^g$, are the Siamese sections of $\tau$, resp. $\bar\tau$.
\end{lemma}
\begin{proof}
Since $\tau$ and $\bar \tau$ coincide with a trivialization of $TM_{|\partial M}$ on $\partial M$, we have
$$
\begin{aligned}
\partial (C_4(\tau,\bar\tau;e_1)\cap C_4(\tau,\bar\tau;-e_1)) &= \lbrace 1 \rbrace \times \left( \bar\tau(M\times \lbrace e_1 \rbrace)\cap \bar\tau(M\times \lbrace -e_1 \rbrace) \right) \\
&- \lbrace 0 \rbrace \times \left( \tau(M\times \lbrace e_1 \rbrace)\cap \tau(M\times \lbrace -e_1 \rbrace) \right) \\
&=\sfrac{1}{4} \cdot \lbrace 1 \rbrace \times \left( \bar E_1^d(L_{\bar E_1^d=-\bar E_1^g}) + \bar E_1^g(L_{\bar E_1^g=-\bar E_1^d}) \right) \\
&- \sfrac{1}{4} \cdot \lbrace 0 \rbrace \times \left( E_1^d(L_{E_1^d=-E_1^g}) + E_1^g(L_{E_1^g=-E_1^d}) \right).
\end{aligned}
$$
\end{proof}
\begin{definition} \label{omega}
Let $\bar\tau=(N(\gamma); \tau_e, \tau_d, \tau_g)$ be a pseudo-parallelization of a compact oriented 3-manifold $M$, and let $E_1^d$ and $E_1^g$ denote its Siamese sections. Recall from Definition~\ref{def_addinner} that the map
$$
\tau_d^{-1} \circ \tau_g : [a,b] \times \gamma \times [-1,1] \times \b R^3 \rightarrow [a,b] \times \gamma \times [-1,1] \times \b R^3
$$
is such that
$$
\begin{aligned}
&\forall t \in [a,b], \ u \in [-1,1], \ c \in \gamma, \ v \in \b R^3 : \\
&\tau_d^{-1} \circ \tau_g ((t,c,u),v) = \l T_\gamma^{-1} ((t,c,u),\l F(t,u)(v)).
\end{aligned}
$$
Hence, $L_{E_1^d=-E_1^g}$ consists of parallels of $\gamma$ of the form $\lbrace t \rbrace \times \gamma \times \lbrace u \rbrace$.
For each component $L$ of $L_{E_1^d=-E_1^g}$, there exists a point $e_2^L$ in $\b S^1(e_2)$ such that $L \times \lbrace e_2^L \rbrace =\tau_d^{-1} \circ \tau_g (L \times \lbrace e_2 \rbrace)$. Choose a point $e_2^\Omega$ in $\b S^1(e_2)$ distinct from $e_2$ and from the points $e_2^L$. Finally, set
$$
\Omega(\bar\tau)=-\tau_d(L_{E_1^d=-E_1^g}\times[-e_1,e_1]_{e_2^\Omega})
$$
where $[-e_1,e_1]_{e_2^\Omega}$ is the geodesic arc from $-e_1$ to $e_1$ passing through $e_2^\Omega$. The 2-chain $\Omega(\bar\tau)$ can be seen as the projection of a homotopy from $-E_1^d$ to $E_1^d$ over $L_{E_1^d=-E_1^g}$. Note that
$$
\partial \Omega(\bar\tau)= (-E_1^d)(L_{E_1^d=-E_1^g})- E_1^d(L_{E_1^d=-E_1^g}).
$$
The choice of $e_2^\Omega$ ensures that $\Omega(\bar\tau) \cap \bar\tau(M\times \lbrace e_2 \rbrace)=\emptyset$. Note that $\Omega(\tau)=\emptyset$ when $\tau$ is a genuine parallelization.
\end{definition}
\begin{definition} \label{def_pgo}
Let $M$ be a compact oriented 3-manifold and let $\rho$ be an admissible trivialization of $TM_{|\partial M}$. Let $\tau$ and $\bar\tau$ be pseudo-parallelizations of $M$ which coincide with $\rho$ on $TM_{|\partial M}$ and let $C_4(\tau,\bar\tau;\pm e_1)$ denote 4-chains of $[0,1]\times UM$ as in Theorem~\ref{prop_varasint}. Set
$$
\go P(\tau,\bar\tau) = \lbrace 0 \rbrace \times \Omega(\tau) + 4 \cdot (C_4(\tau,\bar\tau;e_1) \cap C_4(\tau,\bar\tau;-e_1)) - \lbrace 1 \rbrace \times \Omega(\bar\tau).
$$
When $(X,\rho)$ is a torsion combing of $M$, let $C_4^+(X,\bar\tau)$ and $C_4^-(X,\bar\tau)$ be 4-chains of $[0,1]\times UM$ as in Lemma~\ref{lem_transfert} and set
$$
\begin{aligned}
\go P(X,\bar\tau) = 4\cdot(C_4^+(X,\bar\tau)\cap C_4^-(X,\bar\tau)) - \lbrace 1 \rbrace \times \Omega(\bar\tau), \\
\go P(\bar\tau,X) = \lbrace 0 \rbrace \times \Omega(\bar\tau) + 4\cdot(C_4^+(\bar\tau,X)\cap C_4^-(\bar\tau,X)).
\end{aligned}
$$
\end{definition}
According to Lemma~\ref{bord} and Definition~\ref{omega}, the 2-chains $\go P(\lambda,\mu)$ of Definition~\ref{def_pgo} above are cycles. In the remainder of this section, we prove that their classes read $p_1(\lambda,\mu)[S]$ in $H_2([0,1]\times UM;\b Q)/H_T^\rho(M)$.
\begin{proposition} \label{prop_ppara}
Let $M$ be a compact oriented 3-manifold, let $\rho$ be an admissible trivialization of $TM_{|\partial M}$ and let $\tau$ and $\bar \tau$ be two pseudo-parallelizations of $M$ that coincide with $\rho$ on $\partial M$. Under the assumptions of Definition~\ref{def_pgo}, the class of $\go P(\tau,\bar\tau)$ in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$ equals $p_1(\tau,\bar\tau)[S]$ where $[S]$ is the homology class of the fiber of $UM$ in $H_2([0,1]\times UM ; \b Q)$.
\end{proposition}
\begin{proof}
The class of $\go P(\tau,\bar\tau)$ in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$ is
$$
\left[ \go P(\tau,\bar\tau) \right] = \langle \go P(\tau,\bar\tau) , C_4(X,Y) \rangle \cdot [S]
$$
where $(X,\rho)$ and $(Y,\rho)$ are $\partial$-compatible torsion combings of $M$ and where $C_4(X,Y)$ is any 4-chain of $[0,1]\times UM$ as in Lemma~\ref{lem_evaluationII}. Let us construct a specific $C_4(X,Y)$ as follows. Let $C_4(\tau,\bar\tau;e_2)$ be as in Theorem~\ref{prop_varasint} where $e_2=(0,1,0)$. Since $\partial C_4(\tau,\bar\tau;e_1)$ and $\partial C_4(\tau,\bar\tau;e_2)$ are homologous, it is possible to reparameterize and to stack the 4-chains $C_4^+(X,\tau)$, $C_4(\tau,\bar\tau;e_2)$ and $C_4^+(\bar\tau,Y)$, where the chains $C_4^+(X,\tau)$ and $C_4^+(\bar\tau,Y)$ are as in Lemma~\ref{lem_transfert}, in order to get such a 4-chain $C_4(X,Y)$.
It follows that, in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$, $$ \begin{aligned} \left[ \go P(\tau,\bar\tau) \right] &= \langle \go P(\tau,\bar\tau) , C_4(X,Y) \rangle [S] \\ &= \langle \go P(\tau,\bar\tau) , C_4(\tau,\bar\tau;e_2) \rangle [S] \\ &= 4 \cdot \langle C_4(\tau,\bar\tau;e_1) \cap C_4(\tau,\bar\tau;-e_1) ,C_4(\tau,\bar\tau;e_2) \rangle [S] \\ &+ \langle \lbrace 0 \rbrace \times \Omega(\tau)- \lbrace 1 \rbrace \times \Omega(\bar\tau),C_4(\tau,\bar\tau;e_2) \rangle [S]. \end{aligned} $$ Now, note that $ \langle \lbrace 0 \rbrace \times \Omega(\tau)- \lbrace 1 \rbrace \times \Omega(\bar\tau),C_4(\tau,\bar\tau;e_2) \rangle=0$ since $$ \Omega(\tau) \cap \tau(M\times \lbrace e_2 \rbrace) = \emptyset \mbox{ \ and \ } \Omega(\bar \tau) \cap \bar \tau(M\times \lbrace e_2 \rbrace) = \emptyset. $$ Hence, $$ \left[ \go P(\tau,\bar\tau) \right]= 4 \cdot \langle C_4(\tau,\bar\tau;e_1),C_4(\tau,\bar\tau;-e_1),C_4(\tau,\bar\tau;e_2) \rangle [S] = p_1(\tau,\bar\tau) [S]. $$ \end{proof} \subsection[Pontrjagin numbers for combings of compact 3-manifolds]{Pontrjagin numbers for combings of compact 3-manifolds \\ Proof of Theorem~\ref{thm_defp1Xb}} \begin{lemma} \label{cor_reformcombingsb} Let $(X,\rho)$ be a torsion combing of a compact oriented 3-manifold $M$. Let $\bar\tau$ be a pseudo-parallelization of $M$ compatible with $X$. Let $\go P(\bar\tau,X)$ be as in Definition~\ref{def_pgo}. The class $[\go P(\bar\tau,X)]$ in $H_2([0,1]\times UM ; \b Q) / H_T^\rho(M)$ only depends on $\bar\tau$ and on the homotopy class of $X$. It will be denoted by $\tilde p_1(\bar\tau,[X])[S]$. \end{lemma} \begin{proof} Let $\tau$ be another pseudo-parallelization of $M$ which is compatible with $X$. Let $C_4^+(X,\tau)$ and $C_4^-(X,\tau)$ be fixed choices of 4-chains of $[0,1]\times UM$ as in Lemma~\ref{lem_transfert}. Using these 4-chains, construct the cycle $\go P(X,\tau)$ as in Definition~\ref{def_pgo}. 
Then, in the space $H_2([0,1]\times UM ; \b Q) / H_T^\rho(M)$, we have
$$
\begin{aligned}
[\go P(\bar\tau,X)] + [\go P(X,\tau)] &= [\go P(\bar\tau,X) + \go P(X,\tau)] \\
&= [ \lbrace 0 \rbrace \times \Omega(\bar\tau) + 4\cdot(C_4^+(\bar\tau,X)\cap C_4^-(\bar\tau,X)) \\
& \hspace{7mm} + 4\cdot(C_4^+(X,\tau)\cap C_4^-(X,\tau)) - \lbrace 1 \rbrace \times \Omega(\tau) ].
\end{aligned}
$$
By reparameterizing and stacking $C_4^+(\bar\tau,X)$ and $C_4^+(X,\tau)$, resp. $C_4^-(\bar\tau,X)$ and $C_4^-(X,\tau)$, we get a 4-chain $C_4(\bar\tau,\tau;e_1)$, resp. $C_4(\bar\tau,\tau;-e_1)$, as in Lemma~\ref{lem_C4pme1}. It follows that
$$
\begin{aligned}
[\go P(\bar\tau,X)] + [\go P(X,\tau)] & \hspace{-1mm}=\hspace{-1mm} [ \lbrace 0 \rbrace \times \Omega(\bar\tau) + 4\cdot(C_4(\bar\tau,\tau;e_1) \cap C_4(\bar\tau,\tau;-e_1)) - \lbrace 1 \rbrace \times \Omega(\tau) ] \\
&\hspace{-1mm}=\hspace{-1mm} [\go P(\bar\tau, \tau)]
\end{aligned}
$$
or, equivalently, $[\go P(\bar\tau,X)] = [\go P(\bar\tau, \tau)] - [\go P(X,\tau)]$. This proves the statement since $\go P(X,\tau)$ is independent of the choices made for $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$, and since, according to Proposition~\ref{prop_ppara}, the class $[\go P(\bar\tau, \tau)]$ is independent of the choices for $C_4(\bar\tau,\tau;e_1)$ and $C_4(\bar\tau,\tau;-e_1)$.
\end{proof}
\begin{proposition} \label{cor_reformcombings}
If $\bar\tau$ is a pseudo-parallelization of a closed oriented 3-manifold $M$ and if $X$ is a torsion combing of $M$ compatible with $\bar\tau$, then
$$
\tilde p_1(\bar\tau,[X]) = p_1([X])-p_1(\bar\tau).
$$
\end{proposition}
\begin{proof}
According to Lemma~\ref{cor_reformcombingsb}, $\tilde p_1(\bar\tau,[X])$ is independent of the choices for $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$. Let us construct convenient 4-chains $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$. Let $\tau$ be a genuine parallelization of $M$.
Thanks to Theorem~\ref{prop_varasint}, there exist two 4-chains of $[0,1]\times UM$, $C_4(\bar\tau,\tau;e_1)$ and $C_4(\bar\tau,\tau;-e_1)$, such that
$$
\begin{aligned}
\partial C_4(\bar\tau,\tau;e_1) &= \lbrace 1 \rbrace \times \tau(M\times \lbrace e_1 \rbrace) - \lbrace 0 \rbrace \times \bar \tau(M\times \lbrace e_1 \rbrace), \\
\partial C_4(\bar\tau,\tau;-e_1) &= \lbrace 1 \rbrace \times \tau(M\times \lbrace -e_1 \rbrace) - \lbrace 0 \rbrace \times \bar \tau(M\times \lbrace -e_1 \rbrace).
\end{aligned}
$$
Furthermore, as in Remark~\ref{rmkpcase}, construct two 4-chains $C_4^+(\tau,X)$ and $C_4^-(\tau,X)$ as
$$
\begin{aligned}
C_4^+(\tau, X) &= \bar F_{t_1}(E_1^\tau, X) - \lbrace t_1 \rbrace \times UM_{|\Sigma_{E_1^\tau=-X}}\\
C_4^-(\tau, X) &= \bar F_{t_2}(-E_1^\tau, -X) - \lbrace t_2 \rbrace \times UM_{|\Sigma_{-E_1^\tau=X}}
\end{aligned}
$$
where $E_1^\tau$ stands for the first vector of the parallelization $\tau$, where $t_1, t_2\in \ ]0,1[$, and where $\Sigma_{E_1^\tau=-X}$ and $\Sigma_{-E_1^\tau=X}$ are 2-chains with boundaries $L_{E_1^\tau=-X}$ and $L_{-E_1^\tau=X}$, respectively. Finally, define $C_4^+(\bar\tau,X)$, resp. $C_4^-(\bar\tau,X)$, by reparameterizing and stacking the chains $C_4(\bar\tau,\tau;e_1)$ and $C_4^+(\tau, X)$, resp. $C_4(\bar\tau,\tau;-e_1)$ and $C_4^-(\tau, X)$. \\
Let us now compute $[\lbrace 0 \rbrace \times \Omega(\bar\tau)+4\cdot(C_4^+(\bar\tau,X) \cap C_4^-(\bar\tau,X))]$.
By construction, we have : $$ \begin{aligned} &[\lbrace 0 \rbrace \times \Omega(\bar\tau)+4 \cdot (C_4^+(\bar\tau,X) \cap C_4^-(\bar\tau,X))] \\ &\hspace{-1mm}=\hspace{-1mm} [\lbrace 0 \rbrace \times \Omega(\bar\tau)+4 \cdot (C_4(\bar\tau,\tau;e_1) \cap C_4(\bar\tau,\tau;-e_1))] + 4 [C_4^+(\tau, X) \cap C_4^-(\tau, X)], \end{aligned} $$ so that, using Proposition~\ref{prop_ppara}, $$ \begin{aligned} &[\lbrace 0 \rbrace \times \Omega(\bar\tau)+4 \cdot (C_4^+(\bar\tau,X) \cap C_4^-(\bar\tau,X))] \\ &= (p_1(\tau)-p_1(\bar\tau))[S] + 4 \cdot [C_4^+(\tau, X) \cap C_4^-(\tau, X)]. \end{aligned} $$ Now, using Definition~\ref{def_Ft}, $$ \begin{aligned} C_4^+(\tau, X) = \ & \left[ 0 , t_1 \right] \times E_1^\tau(M) + \lbrace t_1 \rbrace \times \bar F(E_1^\tau,X) \\ + \ & \left[ t_1, 1 \right] \times X(M) - \lbrace t_1 \rbrace \times UM_{|\Sigma_{E_1^\tau=-X}}\\ C_4^-(\tau, X) = \ & \left[ 0 , t_2 \right] \times (-E_1^\tau)(M) + \lbrace t_2 \rbrace \times \bar F(-E_1^\tau,-X) \\ + \ & \left[ t_2, 1 \right] \times (-X)(M) - \lbrace t_2 \rbrace \times UM_{|\Sigma_{-E_1^\tau=X}} \end{aligned} $$ so that, assuming $t_1 < t_2$ without loss of generality, $$ \begin{aligned} C_4^+(\tau, X) \cap C_4^-(\tau, X) = - \ & \lbrace t_1 \rbrace \times (-E_1^\tau)\left( \Sigma_{E_1^\tau=-X} \right) \\ + \ & [t_1 , t_2] \times (-E_1^\tau)\left( L_{-E_1^\tau = X} \right) \\ - \ & \lbrace t_2 \rbrace \times X(\Sigma_{-E_1^\tau=X}). 
\end{aligned} $$ It follows that, using Theorem~\ref{thm_defp1X} and Lemma~\ref{lem_evaluationII} with $C_4(E_1^\tau,E_1^\tau)=[0,1] \times E_1^\tau(M)$, $$ \begin{aligned} 4 \cdot [C_4^+(\tau, X) \cap C_4^-(\tau, X)] & \hspace{-1mm}=\hspace{-1mm} 4 \cdot \langle C_4^+(\tau, X) , C_4^-(\tau, X) , [0,1] \times E_1^\tau(M) \rangle [S] \\ &\hspace{-1mm}=\hspace{-1mm} 4 \cdot lk(L_{E_1^\tau = X},L_{E_1^\tau = - X}) [S] \\ &\hspace{-1mm}=\hspace{-1mm}(p_1([X]) - p_1(\tau))[S] \\ \end{aligned} $$ in $H_2([0,1]\times UM; \b Q)/H_T(M)$, and, finally, $$ \begin{aligned} [\lbrace 0 \rbrace \times \Omega(\bar\tau)&+4 \cdot (C_4^+(\bar\tau,X) \cap C_4^-(\bar\tau,X))] \\ &= (p_1(\tau)-p_1(\bar\tau))[S] + (p_1([X]) - p_1(\tau))[S] \\ &=(p_1([X]) - p_1(\bar\tau))[S]. \end{aligned} $$ \end{proof} \begin{lemma} \label{fond} If $(X,\rho)$ is a torsion combing of a compact oriented 3-manifold $M$ and if \linebreak $\bar\tau=(N(\gamma); \tau_e,\tau_d,\tau_g)$ is a pseudo-parallelization of $M$ compatible with $X$, then $$ \begin{aligned} \tilde p_1(\bar\tau, [X]) &= lk_M\left(L_{E_1^d=X} + L_{E_1^g=X} \ , \ L_{E_1^d=-X} + L_{E_1^g=-X} \right) \\ &- lk_{\b S^2}\left(e_1-(-e_1), \ P_{\b S^2} \circ \tau_d^{-1} \circ X(L_{E_1^d=-E_1^g})\right) \end{aligned} $$ where $E_1^d$ and $E_1^g$ denote the Siamese sections of $\bar\tau$. \end{lemma} \begin{proof} We just have to evaluate the class of the 4-cycle $$ \go P(\bar\tau,X) = \lbrace 0 \rbrace \times \Omega(\bar\tau)+4\cdot(C_4^+(\bar\tau,X)\cap C_4^-(\bar\tau,X)) $$ in $H_2([0,1]\times UM ; \b Q) / H_T^\rho(M)$ for convenient 4-chains $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$ with the prescribed boundaries.
Let $t_1$ and $t_2$ be in $]0,1[$, with $t_1 > t_2$, and set $$ \begin{aligned} C_4^+(\bar\tau,X) &= \sfrac{1}{2} \cdot \left( \bar F_{t_1}(E_1^d,X)+\bar F_{t_1}(E_1^g,X) - \lbrace t_1 \rbrace \times UM_{|\Sigma(e_1)} \right)\\ C_4^-(\bar\tau,X) &= \sfrac{1}{2} \cdot \left( \bar F_{t_2}(-E_1^d,-X)+\bar F_{t_2}(-E_1^g,-X) - \lbrace t_2 \rbrace \times UM_{|\Sigma(-e_1)} \right) \end{aligned} $$ where the chains $\bar F_t$ are as in Definition~\ref{def_Ft} and where, using Lemma~\ref{nulexcep}, $\Sigma(e_1)$ and $\Sigma(-e_1)$ are 2-chains of $M$ so that $$ \partial \Sigma(\pm e_1) = \pm(L_{E_1^d=-X} + L_{E_1^g=-X}). $$ These 4-chains do have the expected boundaries. Let us now describe $C_4^+(\bar\tau,X) \cap C_4^-(\bar\tau,X)$ : \begin{enumerate}[\textbullet] \item on $[0,t_2[$ : The intersection between $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$ is $$ \sfrac{1}{4} \cdot [0,t_2[ \ \times \ E_1^d(L_{E_1^d=-E_1^g}) + \sfrac{1}{4} \cdot [0,t_2[ \ \times \ E_1^g(L_{E_1^g=-E_1^d}). $$ \item on $]t_2 , t_1[$ : The intersection between $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$ is $$ \sfrac{1}{2} \ \cdot \ ]t_2,t_1[ \ \times \ (-X)(L_{E_1^d=-X}) + \sfrac{1}{2} \ \cdot \ ]t_2,t_1[ \ \times \ (-X)(L_{E_1^g=-X}). $$ \item on $]t_1,1]$ : There is no intersection between $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$ since they consist of $]t_1,1]\times X(M)$ and $]t_1,1]\times (-X)(M)$.
\item at $t_2$ : The intersection between $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$ is $$ \begin{aligned} &\sfrac{1}{2} \cdot \lbrace t_2 \rbrace \times E_1^d(M) \cap \lbrace t_2 \rbrace \times \bar F(-E_1^g,-X) \\ + &\sfrac{1}{2} \cdot \lbrace t_2 \rbrace \times E_1^g(M) \cap \lbrace t_2 \rbrace \times \bar F(-E_1^d,-X) \\ - & \sfrac{1}{4} \cdot \lbrace t_2 \rbrace \times E_1^d(\Sigma(-e_1)) - \sfrac{1}{4} \cdot \lbrace t_2 \rbrace \times E_1^g(\Sigma(-e_1)) \end{aligned} $$ \item at $t_1$ : The intersection between $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$ is $$ \begin{aligned} &\sfrac{1}{2}\cdot \lbrace t_1 \rbrace \times \bar F(E_1^d,X) \cap \lbrace t_1 \rbrace \times (-X)(M) \\ + &\sfrac{1}{2}\cdot \lbrace t_1 \rbrace \times \bar F(E_1^g,X) \cap \lbrace t_1 \rbrace \times (-X)(M) \\ - & \sfrac{1}{2} \cdot \lbrace t_1 \rbrace \times (-X)(\Sigma(e_1)). \end{aligned} $$ \end{enumerate} It follows that : $$ \begin{aligned} &\langle \sfrac{1}{4} \cdot \lbrace 0 \rbrace \times \Omega(\bar\tau)+ C_4^+(\bar\tau,X) \cap C_4^-(\bar\tau,X) , [0,1] \times X(M) \rangle_{[0,1]\times UM}\\ &= \sfrac{1}{4} \cdot \langle [0,t_2[ \ \times \ E_1^d(L_{E_1^d=-E_1^g}) + [0,t_2[ \ \times \ E_1^g(L_{E_1^g=-E_1^d}) , [0,1]\times X(M) \rangle_{[0,1]\times UM} \\ &+ \sfrac{1}{2} \cdot \langle \lbrace t_2 \rbrace \times E_1^d(M) \cap \lbrace t_2 \rbrace \times \bar F(-E_1^g,-X) , [0,1] \times X(M) \rangle_{[0,1]\times UM} \\ &+\sfrac{1}{2} \cdot \langle \lbrace t_2 \rbrace \times E_1^g(M) \cap \lbrace t_2 \rbrace \times \bar F(-E_1^d,-X) , [0,1] \times X(M) \rangle_{[0,1]\times UM} \\ &- \sfrac{1}{4} \cdot \langle \lbrace t_2 \rbrace \times E_1^d(\Sigma(-e_1)) + \lbrace t_2 \rbrace \times E_1^g(\Sigma(-e_1)) , [0,1] \times X(M) \rangle_{[0,1]\times UM} \\ &+ \sfrac{1}{4} \cdot \langle \Omega(\bar\tau) , X(M) \rangle_{UM}. 
\end{aligned} $$ Since $L_{E_1^g=X} \cap L_{E_1^d=-X}$ and $L_{E_1^g=-X} \cap L_{E_1^d=X}$ are empty : $$ \langle [0,t_2[ \ \times \ E_1^d(L_{E_1^d=-E_1^g}) + [0,t_2[ \ \times \ E_1^g(L_{E_1^g=-E_1^d}) , [0,1]\times X(M) \rangle_{[0,1]\times UM} = 0. $$ Furthermore, note that if $(m,v)$ is an intersection point of $$E_1^d(M)\cap \bar F(-E_1^g,-X) \cap X(M) $$ then, in particular, $v=E_1^d(m)=X(m)$ so that $-E_1^g(m)$ and $-X(m)$ are not antipodal since $L_{E_1^d=X} \cap L_{E_1^g=-X}=\emptyset$. It follows that $v=E_1^d(m)=X(m)$ would also have to sit on the shortest geodesic arc from $-E_1^g(m)$ to $-X(m)$. Since such a configuration is impossible, this triple intersection is empty, thus $$ \langle \lbrace t_2 \rbrace \times E_1^d(M) \cap \lbrace t_2 \rbrace \times \bar F(-E_1^g,-X) , [0,1] \times X(M) \rangle_{[0,1]\times UM} = 0. $$ Similarly, $$ \langle \lbrace t_2 \rbrace \times E_1^g(M) \cap \lbrace t_2 \rbrace \times \bar F(-E_1^d,-X) , [0,1] \times X(M) \rangle_{[0,1]\times UM} = 0. $$ Now, we have $$ \begin{aligned} &\langle \lbrace t_2 \rbrace \times E_1^d(\Sigma(-e_1)) + \lbrace t_2 \rbrace \times E_1^g(\Sigma(-e_1)) , [0,1] \times X(M) \rangle \\ &= - lk_M(L_{E_1^d=X} + L_{E_1^g=X}, \ L_{E_1^d=-X} + L_{E_1^g=-X}). \end{aligned} $$ Furthermore, by Definition~\ref{omega}, $$ \begin{aligned} \langle \Omega(\bar\tau) , X(M) \rangle_{UM} &= \langle \tau_d^{-1} (\Omega(\bar\tau)) , \tau_d^{-1} \circ X(L_{E_1^d=-E_1^g}) \rangle_{L_{E_1^d=-E_1^g} \times \b S^2} \\ &= - \langle L_{E_1^d=-E_1^g} \hspace{-0.5mm} \times \hspace{-0.5mm} [-e_1,e_1]_{e_2^\Omega} \ , \ \tau_d^{-1} \circ X(L_{E_1^d=-E_1^g}) \rangle_{L_{E_1^d=-E_1^g} \times \b S^2} \end{aligned} $$ where $[-e_1,e_1]_{e_2^\Omega}$ is the geodesic arc of $\b S^2$ from $-e_1$ to $e_1$ passing through $e_2^\Omega$.
Now, $L_{E_1^d=-E_1^g} \times \b S^2$ is oriented and an intersection $$ (m,v)\hspace{-0.5mm}\in\hspace{-0.5mm}L_{E_1^d=-E_1^g} \hspace{-0.5mm} \times \hspace{-0.5mm} [-e_1,e_1]_{e_2^\Omega} \cap \tau_d^{-1} \circ X(L_{E_1^d=-E_1^g}) $$ is positive when $$ T_{(m,v)}(L_{E_1^d=-E_1^g} \times [-e_1,e_1]_{e_2^\Omega})\oplus T_{(m,v)}(\tau_d^{-1}\circ X(L_{E_1^d=-E_1^g}))=T_{(m,v)}(L_{E_1^d=-E_1^g}\times\b S^2) $$ as an oriented sum, which is equivalent to $$ T_{v}([-e_1,e_1]_{e_2^\Omega})\oplus T_{v}(P_{\b S^2}\circ\tau_d^{-1}\circ X(L_{E_1^d=-E_1^g}))=T_{v}(\b S^2) $$ as an oriented sum, where $P_{\b S^2}$ is the standard projection from $M\times \b S^2$ to $\b S^2$. See Figure \ref{orientation}. \begin{center} \definecolor{ccqqtt}{rgb}{0.8,0,0.2} \definecolor{zzttqq}{rgb}{0.6,0.2,0} \definecolor{ccqqqq}{rgb}{0.8,0,0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=0.5cm,scale=0.9] \clip(9,-2.5) rectangle (23,10); \fill[line width=0pt,color=zzttqq,fill=zzttqq,fill opacity=0.1] (13,4) -- (14.5,4) -- (14.5,-1) -- (13,-1) -- cycle; \fill[line width=0pt,dotted,color=zzttqq,fill=zzttqq,fill opacity=0.1] (16,4) -- (20,4) -- (20,-1) -- (16,-1) -- cycle; \fill[line width=0pt,color=zzttqq,fill=zzttqq,fill opacity=0.1] (21.5,4) -- (23,4) -- (23,-1) -- (21.5,-1) -- cycle; \draw [line width=1.4pt] (16,4)-- (20,4); \draw [line width=1.4pt] (20,-1)-- (20,8); \draw [line width=1.4pt] (20,8)-- (22,9); \draw [->] (16,4) -- (20,4); \draw [->,line width=1.4pt] (21.5,4) -- (23,4); \draw [line width=1.4pt] (14.5,-1)-- (14.5,8); \draw [line width=1.4pt] (14.5,8)-- (16.5,9); \draw [->,line width=1.4pt,color=ccqqqq] (21.5,6) -- (23,6); \draw [->,line width=1.4pt,color=ccqqqq] (13,6) -- (14.5,6); \draw [->,line width=1.4pt] (13,4) -- (14.5,4); \draw [->] (16,-1) -- (16,4); \draw [->] (21.5,-1) -- (21.5,4); \draw[line width=1.4pt,color=ccqqqq] (19.12,4.3) -- (19.11,4.24) -- (19.1,4.18) -- (19.1,4.12) -- (19.09,4.05) -- (19.08,3.99) -- (19.07,3.93) -- (19.07,3.87) -- 
(19.06,3.81) -- (19.05,3.74) -- (19.05,3.68) -- (19.04,3.62) -- (19.04,3.56) -- (19.03,3.49) -- (19.02,3.43) -- (19.02,3.37) -- (19.01,3.31) -- (19.01,3.25) -- (19,3.18) -- (18.99,3.12) -- (18.99,3.06) -- (18.98,3) -- (18.98,2.94) -- (18.97,2.88) -- (18.96,2.82) -- (18.96,2.76) -- (18.95,2.7) -- (18.95,2.65) -- (18.94,2.59) -- (18.93,2.53) -- (18.93,2.48) -- (18.92,2.42) -- (18.91,2.37) -- (18.9,2.31) -- (18.9,2.26) -- (18.89,2.21) -- (18.88,2.16) -- (18.87,2.11) -- (18.86,2.06) -- (18.85,2.01) -- (18.85,1.97) -- (18.84,1.92) -- (18.83,1.88) -- (18.82,1.83) -- (18.81,1.79) -- (18.79,1.75) -- (18.78,1.71) -- (18.77,1.67) -- (18.76,1.64) -- (18.75,1.6) -- (18.73,1.57) -- (18.72,1.54) -- (18.71,1.51) -- (18.69,1.48) -- (18.68,1.45) -- (18.66,1.42) -- (18.65,1.4) -- (18.63,1.38) -- (18.61,1.36) -- (18.59,1.34) -- (18.58,1.32) -- (18.56,1.31) -- (18.54,1.29) -- (18.52,1.28) -- (18.5,1.27) -- (18.48,1.27) -- (18.45,1.26) -- (18.43,1.26) -- (18.41,1.26) -- (18.38,1.26) -- (18.36,1.27) -- (18.33,1.27) -- (18.31,1.28) -- (18.28,1.29) -- (18.25,1.3) -- (18.22,1.32) -- (18.19,1.34) -- (18.16,1.36) -- (18.13,1.38) -- (18.1,1.41) -- (18.07,1.44) -- (18.03,1.47) -- (18.02,1.48) -- (18.01,1.49) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5)(20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (19.99,6) -- (19.99,6) -- (19.98,6) -- (19.95,6) -- (19.92,5.99) -- (19.89,5.99) -- (19.86,5.98) -- (19.82,5.97) -- (19.79,5.96) -- (19.77,5.94) -- (19.74,5.93) -- (19.71,5.91) -- (19.68,5.89) -- (19.66,5.87) -- (19.63,5.84) -- (19.61,5.82) -- (19.58,5.79) -- (19.56,5.76) -- (19.54,5.73) -- (19.52,5.7) -- (19.49,5.67) -- (19.47,5.64) -- (19.45,5.6) -- (19.44,5.56) -- (19.42,5.52) -- (19.4,5.48) -- (19.38,5.44) -- (19.36,5.4) -- (19.35,5.36) -- (19.33,5.31) -- (19.32,5.26) -- (19.3,5.22) -- (19.29,5.17) -- (19.27,5.12) -- (19.26,5.07) -- 
(19.25,5.02) -- (19.24,4.97) -- (19.22,4.91) -- (19.21,4.86) -- (19.2,4.8) -- (19.19,4.75) -- (19.18,4.69) -- (19.17,4.63) -- (19.16,4.58) -- (19.15,4.52) -- (19.14,4.46) -- (19.13,4.4) -- (19.12,4.34) -- (19.12,4.3); \draw[line width=1.4pt,dash pattern=on 2pt off 4pt,color=ccqqtt] (17.44,2.52) -- (17.44,2.54) -- (17.43,2.56) -- (17.42,2.58) -- (17.41,2.6) -- (17.4,2.63) -- (17.4,2.65) -- (17.39,2.67) -- (17.38,2.69) -- (17.37,2.72) -- (17.37,2.74) -- (17.36,2.76) -- (17.35,2.78) -- (17.34,2.81) -- (17.33,2.83) -- (17.33,2.85) -- (17.32,2.88) -- (17.31,2.9) -- (17.3,2.92) -- (17.3,2.95) -- (17.29,2.97) -- (17.28,3) -- (17.27,3.02) -- (17.26,3.05) -- (17.26,3.07) -- (17.25,3.1) -- (17.24,3.12) -- (17.23,3.15) -- (17.23,3.18) -- (17.22,3.2) -- (17.21,3.23) -- (17.2,3.25) -- (17.19,3.28) -- (17.19,3.31) -- (17.18,3.33) -- (17.17,3.36) -- (17.16,3.39) -- (17.15,3.42) -- (17.15,3.44) -- (17.14,3.47) -- (17.13,3.5) -- (17.12,3.53) -- (17.12,3.56) -- (17.11,3.59) -- (17.1,3.61) -- (17.09,3.64) -- (17.08,3.67) -- (17.08,3.7) -- (17.07,3.73) -- (17.06,3.76) -- (17.05,3.79) -- (17.05,3.82) -- (17.04,3.85) -- (17.03,3.88) -- (17.02,3.91) -- (17.01,3.94) -- (17.01,3.97) -- (17,3.99) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4)(17.49,2.04)(18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (17.99,1.51) -- (17.99,1.51) -- (17.98,1.52) -- (17.97,1.53) -- (17.97,1.54) -- (17.96,1.54) -- (17.95,1.55) -- (17.94,1.56) -- (17.93,1.57) -- (17.93,1.58) -- (17.92,1.59) -- (17.91,1.6) -- (17.9,1.61) -- (17.9,1.62) -- (17.89,1.63) -- (17.88,1.64) -- (17.87,1.65) -- (17.86,1.66) -- (17.86,1.67) -- (17.85,1.69) -- (17.84,1.7) -- (17.83,1.71) -- (17.83,1.72) -- (17.82,1.73) -- (17.81,1.74) -- (17.8,1.76) -- (17.79,1.77) -- (17.79,1.78) -- (17.78,1.79) -- (17.77,1.81) -- 
(17.76,1.82) -- (17.76,1.83) -- (17.75,1.85) -- (17.74,1.86) -- (17.73,1.88) -- (17.72,1.89) -- (17.72,1.9) -- (17.71,1.92) -- (17.7,1.93) -- (17.69,1.95) -- (17.68,1.96) -- (17.68,1.98) -- (17.67,1.99) -- (17.66,2.01) -- (17.65,2.03) -- (17.65,2.04) -- (17.64,2.06) -- (17.63,2.08) -- (17.62,2.09) -- (17.61,2.11) -- (17.61,2.13) -- (17.6,2.14) -- (17.59,2.16) -- (17.58,2.18) -- (17.58,2.19) -- (17.57,2.21) -- (17.56,2.23) -- (17.55,2.25) -- (17.54,2.27) -- (17.54,2.29) -- (17.53,2.3) -- (17.52,2.32) -- (17.51,2.34) -- (17.51,2.36) -- (17.5,2.38) -- (17.49,2.4) -- (17.48,2.42) -- (17.47,2.44) -- (17.47,2.46) -- (17.46,2.48) -- (17.45,2.5) -- (17.44,2.52) -- (17.44,2.52); \draw[line width=1.4pt,color=ccqqtt] (16.44,5.61) -- (16.44,5.62) -- (16.43,5.63) -- (16.42,5.65) -- (16.41,5.66) -- (16.4,5.67) -- (16.4,5.68) -- (16.39,5.7) -- (16.38,5.71) -- (16.37,5.72) -- (16.37,5.73) -- (16.36,5.74) -- (16.35,5.75) -- (16.34,5.77) -- (16.33,5.78) -- (16.33,5.79) -- (16.32,5.8) -- (16.31,5.81) -- (16.3,5.82) -- (16.3,5.83) -- (16.29,5.83) -- (16.28,5.84) -- (16.27,5.85) -- (16.26,5.86) -- (16.26,5.87) -- (16.25,5.88) -- (16.24,5.88) -- (16.23,5.89) -- (16.23,5.9) -- (16.22,5.91) -- (16.21,5.91) -- (16.2,5.92) -- (16.19,5.92) -- (16.19,5.93) -- (16.18,5.94) -- (16.17,5.94) -- (16.16,5.95) -- (16.15,5.95) -- (16.15,5.96) -- (16.14,5.96) -- (16.13,5.97) -- (16.12,5.97) -- (16.12,5.97) -- (16.11,5.98) -- (16.1,5.98) -- (16.09,5.98) -- (16.08,5.99) -- (16.08,5.99) -- (16.07,5.99) -- (16.06,5.99) -- (16.05,5.99) -- (16.05,6) -- (16.04,6) -- (16.03,6) -- (16.02,6) -- (16.01,6) -- (16.01,6) -- (16,6) -- (16,6) -- (16,6) -- (16,6) -- (16,6) -- (16,6) -- (16,6) -- (16,6) -- (16,6) -- (16,6) -- (16,6)(17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4.01) -- (17,4.01) -- (16.99,4.02) -- (16.99,4.04) -- (16.98,4.07) -- (16.97,4.1) -- (16.97,4.13) -- (16.96,4.16) -- 
(16.95,4.19) -- (16.94,4.22) -- (16.93,4.25) -- (16.93,4.28) -- (16.92,4.31) -- (16.91,4.34) -- (16.9,4.37) -- (16.9,4.4) -- (16.89,4.42) -- (16.88,4.45) -- (16.87,4.48) -- (16.86,4.51) -- (16.86,4.53) -- (16.85,4.56) -- (16.84,4.59) -- (16.83,4.61) -- (16.83,4.64) -- (16.82,4.66) -- (16.81,4.69) -- (16.8,4.71) -- (16.79,4.74) -- (16.79,4.76) -- (16.78,4.79) -- (16.77,4.81) -- (16.76,4.84) -- (16.76,4.86) -- (16.75,4.88) -- (16.74,4.91) -- (16.73,4.93) -- (16.72,4.95) -- (16.72,4.97) -- (16.71,5) -- (16.7,5.02) -- (16.69,5.04) -- (16.68,5.06) -- (16.68,5.08) -- (16.67,5.1) -- (16.66,5.13) -- (16.65,5.15) -- (16.65,5.17) -- (16.64,5.19) -- (16.63,5.21) -- (16.62,5.23) -- (16.61,5.24) -- (16.61,5.26) -- (16.6,5.28) -- (16.59,5.3) -- (16.58,5.32) -- (16.58,5.34) -- (16.57,5.36) -- (16.56,5.37) -- (16.55,5.39) -- (16.54,5.41) -- (16.54,5.42) -- (16.53,5.44) -- (16.52,5.46) -- (16.51,5.47) -- (16.51,5.49) -- (16.5,5.51) -- (16.49,5.52) -- (16.48,5.54) -- (16.47,5.55) -- (16.47,5.57) -- (16.46,5.58) -- (16.45,5.59) -- (16.44,5.61); \draw (15,10) node[anchor=north west] {$\b S^2$}; \draw (20.5,10) node[anchor=north west] {$\b S^2$}; \draw (9.25,6.8) node[anchor=north west] {$\tau_d^{-1}\circ X(L_{E_1^d = -E_1^g})$}; \draw (14.8,-1.34) node[anchor=north west] {$(-e_1)$}; \draw (20.3,-1.34) node[anchor=north west] {$(-e_1)$}; \draw (15.3,4.4) node[anchor=north west] {$e_1$}; \draw (20.8,4.4) node[anchor=north west] {$e_1$}; \draw (11,4.64) node[anchor=north west] {$L_{E_1^d=-E_1^g}$}; \draw [color=black, line width=1.4pt] (18,1.5)-- ++(-4.5pt,0 pt) -- ++(9.0pt,0 pt) ++(-4.5pt,-4.5pt) -- ++(0 pt,9.0pt); \draw[color=black] (17.5,0.5) node {$(m,v)$}; \end{tikzpicture} \captionof{figure}{A positive intersection $(m,v)$ between the 2-chain $L_{E_1^d=-E_1^g}\times[-e_1,e_1]_{e_2^\Omega}$ and $\tau_d^{-1} \circ X (L_{E_1^d=-E_1^g})$ in $L_{E_1^d=-E_1^g}\times \b S^2$.\\}\label{orientation} \end{center} \noindent It follows that $$ \begin{aligned} \langle \Omega(\bar\tau) , X(M) 
\rangle_{UM} &= - \langle [-e_1,e_1]_{e_2^\Omega}, \ P_{\b S^2} \circ \tau_d^{-1} \circ X(L_{E_1^d=-E_1^g}) \rangle_{\b S^2} \\ &= - lk_{\b S^2}\left(e_1-(-e_1), \ P_{\b S^2} \circ \tau_d^{-1} \circ X(L_{E_1^d=-E_1^g})\right), \end{aligned} $$ \iffalse and, finally, $$ \begin{aligned} \langle \go P(\bar\tau,X), [0,1]\times X(M) \rangle_{[0,1]\times UM} &= lk_M\left(L_{E_1^d=X} + L_{E_1^g=X} \ , \ L_{E_1^d=-X} + L_{E_1^g=-X} \right) \\ &- lk_{\b S^2}\left(e_1-(-e_1), \ P_{\b S^2} \circ \tau_d^{-1} \circ X(L_{E_1^d=-E_1^g})\right). \end{aligned} $$ \fi \end{proof} \begin{proof}[Proof of Lemma~\ref{lem1}] According to Lemmas~\ref{cor_reformcombingsb}~and~\ref{fond}, Lemma~\ref{lem1} is true for $p_1=\tilde p_1$. \end{proof} From now on, if $X$ is a torsion combing of a compact oriented 3-manifold $M$ and if $\bar\tau$ is a pseudo-parallelization of $M$ compatible with $X$, then set $$ p_1([X],\bar\tau) = \tilde p_1([X],\bar\tau) \mbox{ \ and \ } p_1(\bar\tau,[X]) = \tilde p_1(\bar\tau,[X]). $$ As an obvious consequence, we get the following lemma. \begin{lemma} \label{lempluplus} Under the assumptions of Lemma \ref{cor_reformcombingsb}, in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$, the class of $\go P(\bar\tau,X)$ is $p_1(\bar\tau, [X])[S]$. \end{lemma} \begin{proof}[Proof of Theorem~\ref{thm_defp1Xb}] Let $X_1$ and $X_2$ be torsion combings of two compact oriented 3-manifolds $M_1$ and $M_2$ with identified boundaries such that $X_1$ and $X_2$ coincide on the boundary. Let also $\bar\tau_1$ and $\bar\tau'_1$ be two pseudo-parallelizations of $M_1$ that extend the trivialization $\rho(X_1)$ and, similarly, let $\bar\tau_2$ be a pseudo-parallelization of $M_2$ that extends the trivialization $\rho(X_2)$.
In such a context, let $$ \begin{aligned} p_1([X_1],[X_2])(\bar\tau_1,\bar\tau_2) &= p_1([X_1],\bar\tau_1) + p_1(\bar\tau_1,\bar\tau_2) + p_1(\bar\tau_2, [X_2]) \\ p_1([X_1],[X_2])(\bar\tau'_1,\bar\tau_2) &= p_1([X_1],\bar\tau'_1) + p_1(\bar\tau'_1,\bar\tau_2) + p_1(\bar\tau_2, [X_2]) \\ \end{aligned} $$ and note that $$ p_1([X_1],[X_2])(\bar\tau_1,\bar\tau_2) - p_1([X_1],[X_2])(\bar\tau'_1,\bar\tau_2) = p_1(\bar\tau_1,\bar\tau'_1) - \langle \go P(\bar\tau_1,\bar\tau'_1) , [0,1]\times X_1(M_1) \rangle. $$ Using Proposition~\ref{prop_ppara}, we get $p_1([X_1],[X_2])(\bar\tau_1,\bar\tau_2) - p_1([X_1],[X_2])(\bar\tau'_1,\bar\tau_2)=0$. In other words, $p_1([X_1],[X_2])(\bar\tau_1,\bar\tau_2)$ is independent of $\bar\tau_1$. Similarly, it is also independent of $\bar\tau_2$ so that we can drop the pseudo-parallelizations from the notation. Finally, using Lemma~\ref{lem1}, we get the formula of the statement. \\ For the second part of the statement, if $M_1$ and $M_2$ are closed, conclude with Proposition~\ref{cor_reformcombings}, which ensures that $p_1(\bar\tau_i,[X_i]) = p_1([X_i])-p_1(\bar\tau_i)$ for $i\in \lbrace 1,2 \rbrace$. \end{proof} Let us now end this section by proving Theorem \ref{formuleplus} and Theorem \ref{GM}, starting with the following lemma. \begin{lemma} \label{lemmaplus1} Let $(X,\rho)$ and $(Y,\rho)$ be $\partial$-compatible torsion combings of a compact oriented 3-manifold $M$. Let $C_4(X,Y)$ and $C_4(-X,-Y)$ be 4-chains of $[0,1]\times UM$ as in Lemma \ref{lem_evaluationII}. The class of $\go P(X,Y) \hspace{-1mm}=\hspace{-1mm} 4\big(C_4(X,Y) \cap C_4(-X,-Y)\big)$ in the space $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$ reads $p_1([X],[Y]) [S]$, where $[S]$ is the homology class of the fiber of $UM$ in $H_2([0,1]\times UM ; \b Q)$. \end{lemma} \begin{proof} Let $\bar \tau$ be a pseudo-parallelization of $M$ compatible with $X$ and $Y$.
By Theorem \ref{thm_defp1Xb}, in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$ : $$ p_1([X],[Y])[S] = \big(p_1([X],\bar \tau) + p_1(\bar \tau,[Y]) \big)[S]. $$ Then, using Lemma \ref{lempluplus}, $$ \begin{aligned} p_1([X],[Y])[S] &= [\go P(X,\bar \tau) + \go P(\bar \tau, Y)]\\ &= [4(C_4^+(X,\bar \tau)\cap C_4^-(X,\bar \tau)) + 4(C_4^+(\bar \tau, Y)\cap C_4^-(\bar \tau, Y)) ]. \end{aligned} $$ Hence, reparameterizing and stacking $C_4^+(X,\bar\tau)$ and $C_4^+(\bar\tau,Y)$, resp. $C_4^-(X,\bar\tau)$ and $C_4^-(\bar\tau,Y)$, we get a 4-chain $C_4(X,Y)$, resp. $C_4(-X,-Y)$, as in Lemma \ref{lem_evaluationII} and $$ p_1([X],[Y])[S]= 4 \cdot [C_4(X,Y)\cap C_4(-X,-Y)]. $$ To conclude the proof, note that if $C'_4(X,Y)$ is a 4-chain of $[0,1]\times UM$ with the same boundary as $C_4(X,Y)$, then $C'_4(X,Y)-C_4(X,Y)$ is homologous to a 4-cycle in $\lbrace 0 \rbrace \times UM$ so that $$ \begin{aligned} &[C'_4(X,Y) \cap C_4(-X,-Y)] - [C_4(X,Y) \cap C_4(-X,-Y)] \\ &= \left[\big(C'_4(X,Y)- C_4(X,Y) \big) \cap C_4(-X,-Y)\right] \end{aligned} $$ sits in $H_T^\rho(M)$. So, the class $[C_4(X,Y)\cap C_4(-X,-Y)]$ in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$ is independent of the choices for $C_4(X,Y)$. Similarly, it is independent of the choices for $C_4(-X,-Y)$. \end{proof} \begin{proof}[Proof of Theorem \ref{formuleplus}] According to Lemma \ref{lemmaplus1}, it is enough to evaluate the class of the chain $4\big(C_4(X,Y) \cap C_4(-X,-Y)\big)$ in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$ where $\rho = \rho(X)$ and where $C_4(X,Y)$ and $C_4(-X,-Y)$ are 4-chains of $[0,1]\times UM$ as in Lemma~\ref{lem_evaluationII}. Let us consider the 4-chains $$ \begin{aligned} C_4(X,Y) &= \bar F_{t_1}(X,Y) - \lbrace t_1 \rbrace \times UM_{|\Sigma_{X=-Y}},\\ C_4(-X,-Y) &= \bar F_{t_2}(-X,-Y) - \lbrace t_2 \rbrace \times UM_{|\Sigma_{-X=Y}} , \end{aligned} $$ where $0<t_1<t_2<1$, and where $\bar F_{t_1}(X,Y)$, resp.
$\bar F_{t_2}(-X,-Y)$, is a 4-chain as in Definition~\ref{def_Ft} and $\Sigma_{X=-Y}$, resp. $\Sigma_{-X=Y}$, is a 2-chain of $M$ bounded by $L_{X=-Y}$, resp. $L_{-X=Y}$, provided by Proposition~\ref{prop_linksinhomologyI}. With these chains, $$ \begin{aligned} &C_4(X,Y) \cap C_4(-X,-Y) \\ &= -\lbrace t_1 \rbrace \times (-X)(\Sigma_{X=-Y}) + [t_1, t_2] \times (-X)(L_{-X=Y}) - \lbrace t_2 \rbrace \times Y(\Sigma_{-X=Y}). \end{aligned} $$ Hence, using Lemma \ref{lem_evaluationII} with $[0,1]\times X(M)$, in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$ : $$ \begin{aligned} [C_4(X,Y) \cap C_4(-X,-Y)] &= \langle C_4(X,Y)\cap C_4(-X,-Y) , [0,1] \times X(M) \rangle_{[0,1]\times UM} [S] \\ &= lk(L_{X=Y},L_{X=-Y}) [S]. \end{aligned} $$ \iffalse &= \langle - \lbrace t_1 \rbrace \times (-X)(\Sigma_{X=-Y}) + [t_1, t_2] \times (-X)(L_{-X=Y}) \\ & \hspace{6cm} - \lbrace t_2 \rbrace \times Y(\Sigma_{-X=Y}) , [0,1]\times X(M) \rangle_{[0,1]\times UM} [S] \\ &= - lk(L_{X=Y},L_{-X=Y}) [S] \\ \fi \end{proof} \iffalse \begin{proof} Let $(X,\sigma)$ and $(Y, \sigma)$ be $\partial$-compatible torsion combings of a compact oriented 3-manifold $M$ that represent the same $\mbox{spin$^c$}$-structure. By definition, $(X,\sigma)$ and $(Y, \sigma)$ are homotopic on $M\setminus \l B$ where $\l B$ is a 3-ball in $\mathring M$. \linebreak So, there exists a combing $Y'$ homotopic to $Y$ (on $M$) such that $L_{X=Y'} \cap \partial \l B = \emptyset$ and $L_{X=-Y'} \subset \l B$. Let $L^{\l B} = L_{X=Y'} \cap \l B$ and $L^{M\setminus \l B} = L_{X=Y'} \cap (M\setminus \l B)$. The link $L_{X=-Y'}$ bounds a compact oriented surface in $\l B$, hence $$ \begin{aligned} p_1([X], [Y]) = p_1([X], [Y']) &= 4\cdot lk(L_{X=Y'}, L_{X=-Y'})\\ &= 4\cdot lk(L^{M \setminus \l B} \sqcup L^{\l B}, L_{X=-Y'})\\ &= 4\cdot lk(L^{\l B}, L_{X=-Y'}). 
\end{aligned} $$ Finally, as in the closed case (see \cite[Subsection 2.3 and Corollary 2.22]{lescopcombing}), $X$ can be extended as a parallelization on $\l B$ so that $Y'$ induces a map from $\l B$ to $\b S^2$. Its class in $\pi_3(\b S^2)\simeq \b Z$ reads $lk(L^{\l B}, L_{X=-Y'})$. Hence, $p_1([X], [Y]) = 0$ if and only if $X$ and $Y$ are homotopic on $M$. \end{proof} \fi \begin{proof}[Proof of Theorem \ref{GM}] If $X$ and $Y$ are homotopic relative to the boundary, then $p_1([X],[Y])=0$. \linebreak Conversely, consider two combings $X$ and $Y_0$ in the same $\mbox{spin$^c$}$-structure and assume that \linebreak $p_1([X],[Y_0])=0$. Since $Y_0$ is in the same $\mbox{spin$^c$}$-structure as $X$, there exists a homotopy from $Y_0$ to a combing $Y_1$ that coincides with $X$ outside a ball $\l B$ in $\mathring M$. \\ Let $\sigma$ be a unit section of $X^{\perp}_{|\l B}$, and let $(X, \sigma, X\wedge\sigma)$ denote the corresponding parallelization over $\l B$. Extend the unit section $\sigma$ as a generic section of $X^{\perp}$ such that $\sigma_{|\partial M}=\sigma(X)$, and deform $Y_1$ to $Y$ where $$ Y(m)=\frac{Y_1(m) + \chi(m) \sigma(m)}{\parallel Y_1(m) + \chi(m)\sigma(m)\parallel} $$ for a smooth map $\chi$ from $M$ to $[0,\varepsilon]$, such that $\chi^{-1}(0)=\partial M$ and $\chi$ maps the complement of a neighborhood of $\partial M$ to $\varepsilon$, where $\varepsilon$ is a small positive real number.
The link $L_{X=Y}$ is the disjoint union of $L_{X=Y}\cap \l B$ and a link $L_2$ of $M \setminus \l B$, the link $L_{X=-Y}$ sits in $\l B$, and $$ 0=p_1([X],[Y_0])=p_1([X],[Y])= 4\cdot lk(L_{X=Y},L_{X=-Y}) $$ where $lk(L_{X=Y},L_{X=-Y})=lk(L_{X=Y}\cap \l B,L_{X=-Y})=0$.\\ The parallelization $(X, \sigma, X\wedge\sigma)$ turns the restriction $Y_{|\l B}$ into a map from the ball $\l B$ to $\b S^2 = \b S(\b R X \oplus \b R \sigma \oplus \b R X\wedge\sigma)$ constant on $\partial \l B$, thus into a map from $\l B / \partial \l B \simeq \b S^3$ to $\b S^2$, and, to prove Theorem \ref{GM}, it suffices to prove that this map is homotopic to the constant map, i.e. that it represents $0$ in $\pi_3(\b S^2) \simeq \b Z$. \\ There is a classical isomorphism from $\pi_3(\b S^2)$ to $\b Z$ that maps the class of a map $g$ from $\b S^3$ to $\b S^2$ to the linking number of the preimages of two regular values of $g$ under $g$ (see \cite{hopf} and \cite[Theorem 2]{pontrjagin}). It is easy to check that this map is well-defined, depends only on the homotopy class of $g$, and is a group morphism on $\pi_3(\b S^2)$ that maps the class of the Hopf fibration $\left((z_1,z_2) \in (\b S^3 \subset \b C^2) \mapsto (\sfrac{z_1}{z_2}) \in (\b C P^1=\b S^2) \right)$ to $\pm 1$. Therefore it is an isomorphism from $\pi_3(\b S^2)$ to $\b Z$. Since the class of the map induced by $Y_{|\l B}$ is in the kernel of this isomorphism, this map is homotopically trivial, so that $Y$ is homotopic to a constant on $\l B$, relative to the boundary of $\l B$, and $Y_0$ is homotopic to $X$ on $M$, relative to the boundary of $M$. \end{proof} \section[Variation of Pontrjagin numbers under LP$_\b Q$-surgeries]{Variation of Pontrjagin numbers \\ under LP$_\b Q$-surgeries} \subsection{For pseudo-parallelizations} In this subsection we recall the variation formula and the finite type property of Pontrjagin numbers of pseudo-parallelizations, which are contained in \cite[Section 11]{lescopcube}.
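For instance, when $k=2$, the finite type identity of Corollary~\ref{cor_FTppara} below reduces to
$$
\sum_{I \subset \lbrace 2 \rbrace} (-1)^{\card(I)}\, p_1 (\bar \tau ^I,\bar \tau ^{I\cup\lbrace 1 \rbrace})
= p_1(\bar \tau^{\emptyset},\bar \tau^{\lbrace 1 \rbrace}) - p_1(\bar \tau^{\lbrace 2 \rbrace},\bar \tau^{\lbrace 1,2 \rbrace})=0,
$$
i.e., the variation of $p_1$ under the surgery $(\sfrac{B_1}{A_1})$ is unchanged when the disjoint surgery $(\sfrac{B_2}{A_2})$ is performed, as expected from the locality property stated in Proposition~\ref{prop_FTIppara} below.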
\begin{proposition} \label{prop_FTIppara} \noindent For $M$ a compact oriented 3-manifold and $(\sfrac{B}{A})$ an LP$_\b Q$-surgery in $M$, if ${\bar\tau}_{M}$ and ${\bar\tau}_{M(\sfrac{B}{A})}$ are pseudo-parallelizations of $M$ and $M(\sfrac{B}{A})$ which coincide on $M\setminus \mathring A$ and coincide with a genuine parallelization on $\partial A$, then $$ p_1({\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})},{\mbox{$\overline{\tau}$}}_{M}) = p_1({{\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})}}_{|B},{{\mbox{$\overline{\tau}$}}_{M}}_{|A}). $$ \end{proposition} \begin{proof} Let $W^-$ be a signature zero cobordism from $A$ to $B$. By definition, the obstruction $p_1({{\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})}}_{|B},{{\mbox{$\overline{\tau}$}}_{M}}_{|A})$ is the Pontrjagin obstruction to extending the complex trivialization \linebreak $\tau({{\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})}}_{|B},{{\mbox{$\overline{\tau}$}}_{M}}_{|A})$ of $TW^-_{\hspace{-1mm}|\partial W^-} \hspace{-1mm} \otimes \b C$ as a trivialization of $TW^- \hspace{-1mm}\otimes \b C$. Let $W^+ \hspace{-1.5mm}=\hspace{-1mm} [0,1]\times (M\setminus \mathring A)$ and let $V=-[0,1]\times\partial A$. As shown in \cite[Proof of Proposition 5.3 item 2]{lescopEFTI}, since $(\sfrac{B}{A})$ is an LP$_\b Q$-surgery in $M$, the manifold $W=W^+\cup_V W^-$ has signature zero. Hence, $p_1({\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})},{\mbox{$\overline{\tau}$}}_{M})$ is the Pontrjagin obstruction to extending the triviali\-zation $\tau({\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})},{\mbox{$\overline{\tau}$}}_{M})$ of $TW_{|\partial W} \otimes \b C$ as a trivialization of $TW\otimes \b C$.
Finally, it is clear that $\tau({\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})},{\mbox{$\overline{\tau}$}}_{M})$ extends as a trivialization of $TW_{|W^+} \otimes \b C$ so that $$ p_1({\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})},{\mbox{$\overline{\tau}$}}_{M}) = p_1({{\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})}}_{|B},{{\mbox{$\overline{\tau}$}}_{M}}_{|A}). $$ \end{proof} \begin{corollary} \label{cor_FTppara} Let $M$ be a compact oriented 3-manifold and let $\lbrace \sfrac{B_i}{A_i} \rbrace_{i\in \lbrace 1, \ldots , k\rbrace}$ be a family of disjoint LP$_\b Q$-surgeries where $k\geqslant2$. For any family $\lbrace \bar \tau^I \rbrace_{I \subset \lbrace 1, \ldots , k \rbrace}$ of pseudo-parallelizations of the manifolds $\lbrace M(\lbrace \sfrac{B_i}{A_i}\rbrace_{i\in I}) \rbrace_{I \subset \lbrace 1, \ldots , k \rbrace}$ whose links sit in $M\setminus(\cup_{i=1}^k \partial A_i)$ and such that, for all subsets $I, J\subset \lbrace 1, \ldots , k \rbrace$, $\bar \tau^I$ and $\bar \tau^J$ coincide on $(M\setminus \cup_{i\in I\cup J}A_i)\cup(\cup_{i\in I\cap J} B_i)$, the following identity holds : $$ \sum_{I \subset \lbrace 2, \ldots , k \rbrace} (-1)^{\card(I)} p_1 (\bar \tau ^I,\bar \tau ^{I\cup\lbrace 1 \rbrace})=0. $$ \end{corollary} \subsection[Lemmas for the proof of Theorem~\ref{thm_D2nd}]{Lemmas for the proof of Theorem~\ref{thm_D2nd}} \begin{lemma} \label{lem_Xdsection} If $X$ is a combing of a compact oriented 3-manifold $M$ and if $\partial A$ is the connected boundary of a 3-dimensional submanifold $A$ of $M$, then the normal bundle $X^\perp_{|\partial A}$ admits a nonvanishing section. \end{lemma} \begin{proof} Parallelize $M$ so that $X$ induces a map $X_{|\partial A} : \partial A \rightarrow \b S^2$. This map must have degree $0$ since $X_{|A}$ extends it to $A$ (so that $(X_{|\partial A})_* : H_2(\partial A;\b Q)\rightarrow H_2(\b S^2;\b Q)$ factors through the map $H_2(\partial A ; \b Q) \rightarrow H_2(A;\b Q)$ induced by the inclusion, which is zero).
It follows that $X_{|\partial A}$ is homotopic to the constant map $(m \in\partial A \mapsto e_1 \in \b S^2)$, whose normal bundle admits a nonvanishing section. \end{proof} \begin{lemma} \label{ind} Let $(M,X)$ be a compact oriented 3-manifold equipped with a combing, let $(\sfrac{B}{A},X_B)$ be an LP$_\b Q$-surgery in $(M,X)$ and let $\sigma$ be a nonvanishing section of $X^\perp_{|\partial A}$. Let $P$ stand for Poincaré duality isomorphisms and recall the sequence of isomorphisms induced by the inclusions $i^{A}$ and $i^{B}$ $$ H_1(A;\b Q) \stackrel{i^{A}_*}{\longleftarrow} \frac{H_1(\partial A;\b Q)}{\go L_{A}} = \frac{H_1(\partial B;\b Q)}{\go L_{B}} \stackrel{i^{B}_*}{\longrightarrow} H_1(B;\b Q). $$ The class $ \left(i^A_* \circ {(i^B_*)}^{-1}\left(P(e_2^B(X_B^\perp, \sigma))\right) - P(e_2^A(X_{|A}^\perp, \sigma))\right)$ in $H_1( A ; \b Q)$ is independent of the choice of the section $\sigma$. \end{lemma} \begin{proof} Let us drop the inclusions $i^B_*$ and $i^A_*$ from the notation. According to Proposition~\ref{prop_euler}, the class $P(e_2^A(X_{|A}^\perp, \sigma))$ verifies $$ [ X(A) - (-X)(A) +\l H_{X,\sigma}^{-X}(\partial A \times [0,1]) ] = [ P(e_2^A(X_{|A}^\perp, \sigma)) \times \b S^2 ] \mbox{ \ in \ } H_1(UA ;\b Q). $$ It follows that, for another choice $\sigma'$ of section of $X^\perp_{|\partial A}$, $$ \begin{aligned} [P(e_2^B(X_B^\perp, \sigma)) \times \b S^2 & - P(e_2^B(X_B^\perp, \sigma')) \times \b S^2 ] \\ &= [\l H_{X,\sigma}^{-X}(\partial A\times [0,1])-\l H_{X,\sigma'}^{-X}(\partial A\times [0,1])] \\ &=[P(e_2^A(X_{|A}^\perp, \sigma)) \times \b S^2 - P(e_2^A(X_{|A}^\perp, \sigma')) \times \b S^2 ]. \end{aligned} $$ Hence the dependence on the section cancels in the difference of the statement, which proves the lemma. \end{proof} \begin{lemma} \label{zero} Let $(M,X)$ be a compact oriented 3-manifold equipped with a combing and let $(\sfrac{B}{A},X_B)$ be an LP$_\b Q$-surgery in $(M,X)$.
If $(X,\sigma)$ is a torsion combing, then $(X(\sfrac{B}{A}),\sigma)$ is a torsion combing if and only if $$ i_*^{A} \circ (i_*^B)^{-1} \big( P(e_2^B({X_{B}}^\perp, \zeta))\big) - P(e_2^A(X_{|A}^\perp, \zeta)) = 0 \mbox{ \ in $H_1(M; \b Q)$} $$ for some nonvanishing section $\zeta$ of $X^\perp_{|\partial A}$. \end{lemma} \begin{proof} By definition, we have $$ \begin{aligned} P(e_2(X^\perp,\sigma)) &= P(e_2^A(X_{|A}^\perp, \zeta)) + P(e_2^{M \setminus \mathring A}(X^\perp,\sigma \cup \zeta)) \\ P(e_2({X(\sfrac{B}{A})}^\perp,\sigma)) &= P(e_2^B(X_{B}^\perp, \zeta)) + P(e_2^{M \setminus \mathring A}({X(\sfrac{B}{A})}^\perp,\sigma \cup \zeta)) \\ \end{aligned} $$ where $\zeta$ is any nonvanishing section of $X^\perp_{|\partial A}$. Since $X(\sfrac{B}{A})$ and $X$ coincide on $M \setminus \mathring A$, it follows that, using appropriate identifications, $$ P(e_2(X(\sfrac{B}{A})^\perp,\sigma)) - P(e_2({X}^\perp,\sigma)) = P(e_2^B(X_{B}^\perp, \zeta)) - P(e_2^A(X_{|A}^\perp, \zeta)). $$ If $X$ is a torsion combing, then $P(e_2({X}^\perp,\sigma))$ is rationally null-homologous in $M$. Hence, $X(\sfrac{B}{A})$ is a torsion combing if and only if $$ i_*^{A} \circ (i_*^B)^{-1} \big( P(e_2^B({X_{B}}^\perp, \zeta))\big) - P(e_2^A(X_{|A}^\perp, \zeta)) = 0 \mbox{ \ in $H_1(M; \b Q)$}. $$ \end{proof} \begin{lemma} \label{norm} Let $(M,X)$ be a compact oriented 3-manifold equipped with a combing. Let $\lbrace (\sfrac{B_i}{A_i} , X_{B_i}) \rbrace_{i \in \lbrace 1,\ldots,k\rbrace}$ be a family of disjoint LP$_\b Q$-surgeries in $(M,X)$, where $k \in \b N\setminus \lbrace 0 \rbrace$. For all $I\subset \lbrace 1, \ldots, k \rbrace$, let $M_I=M(\lbrace \sfrac{B_i}{A_i} \rbrace_{i \in I})$ and $X^I=X(\lbrace \sfrac{B_i}{A_i}\rbrace_{i \in I})$.
There exists a family of pseudo-parallelizations $\lbrace\bar \tau^I\rbrace_{ I \subset \lbrace 1,\ldots,k\rbrace}$ of the $\lbrace M_I \rbrace_{I \subset \lbrace 1, \ldots , k \rbrace}$ such that: \begin{enumerate}[(i)] \item the third vector of $\bar\tau=\bar\tau^\emptyset$ coincides with $X$ on $\cup_{i=1}^k \partial A_i$, \item for all $I \subset \lbrace 1 , \ldots , k \rbrace$, $\bar\tau^I$ is compatible with $X^I$, \item for all $I \subset \lbrace 1,\ldots,k\rbrace$, if $\gamma^I$ denotes the link of $\bar \tau^I$, then $N(\gamma^I) \cap \left(\cup_{i=1}^k \partial A_i \right) = \emptyset$, \item for all $I, J \subset \lbrace 1,\ldots,k\rbrace$, $\bar\tau^I$ and $\bar\tau^J$ coincide on $(M\setminus \cup_{i\in I\cup J} A_i)\cup_{i\in I\cap J}B_i$, \item there exist links $L^\pm_{A_i}$ in $A_i$, $L^\pm_{B_i}$ in $B_i$ and $L^\pm_{ext}$ in $M \setminus \cup_{i=1}^k \mathring A_i$ such that, for every subset $I \subset \lbrace 1, \ldots , k \rbrace$: $$ \begin{aligned} & 2 \cdot L_{\bar\tau^I=X^I} = L_{{E^d}^I=X^I} + L_{{E^g}^I=X^I} = L^+_{ext} + \sum_{i \in I} L^+_{B_i} + \sum_{i \in \lbrace 1,\ldots , k \rbrace \setminus I} L^+_{A_i} , \\ & 2 \cdot L_{\bar\tau^I=-X^I} = L_{{E^d}^I=-X^I} + L_{{E^g}^I=-X^I} = L^-_{ext} + \sum_{i \in I} L^-_{B_i} + \sum_{i \in \lbrace 1,\ldots , k \rbrace \setminus I} L^-_{A_i}, \end{aligned} $$ where ${E^d}^I$ and ${E^g}^I$ are the Siamese sections of $\bar\tau^I$. \end{enumerate} \end{lemma} \begin{proof} Let $\l C$ denote a collar of $\cup_{i=1}^k \partial A_i$. Using Lemma~\ref{lem_Xdsection}, construct a trivialization $\tau_e$ of $TM_{|\l C}$ so that its third vector coincides with $X$ on $\l C$. Then use Lemma~\ref{lem_extendpparallelization} to extend $\tau_e$ as pseudo-parallelizations of the $\lbrace A_i \rbrace_{i\in \lbrace 1,\ldots , k \rbrace }$, of the $\lbrace B_i \rbrace_{i\in \lbrace 1,\ldots , k \rbrace}$ and of $M\setminus(\cup_{i=1}^k \mathring A_i)$.
Finally, use these pseudo-parallelizations to construct the pseudo-parallelizations of the 3-manifolds $\lbrace M_I \rbrace_{I \subset \lbrace 1,\ldots , k \rbrace}$ as in the statement. \end{proof} \begin{lemma} \label{inter} In the context of Lemma~\ref{norm}, using the sequence of isomorphisms induced by the inclusions $i^{A_i}$ and $i^{B_i}$ $$ H_1(A_i;\b Q) \stackrel{i^{A_i}_*}{\longleftarrow} \frac{H_1(\partial A_i;\b Q)}{\go L_{A_i}} = \frac{H_1(\partial B_i;\b Q)}{\go L_{B_i}} \stackrel{i^{B_i}_*}{\longrightarrow} H_1(B_i;\b Q), $$ for all $i \in \lbrace 1 , \ldots , k \rbrace$, we have that, in $H_1(A_i ; \b Q)$, $$ \left[i_{\ast}^{A_i} \circ \left(i_{\ast}^{B_i}\right)^{-1}\left( L_{B_i}^{\pm}\right)- L_{A_i}^\pm\right] = \pm \left( i^{A_i}_* \circ(i^{B_i}_*)^{-1}\big(P(e_2^{B_i}(X_{B_i}^\perp, \sigma_i))\big) - P(e_2^{A_i}(X_{|A_i}^\perp, \sigma_i)) \right) $$ where $\sigma_i$ is any nonvanishing section of $X^\perp_{|\partial A_i}$. \end{lemma} \begin{proof} Let us drop the inclusions $i^B_*$ and $i^A_*$ from the notation. Let $i \in \lbrace 1,\ldots,k \rbrace$. According to Lemma~\ref{ind}, it is enough to prove the statement for a particular nonvanishing section $\sigma_i$ of $X^\perp_{|\partial A_i}$. Recall that $A_i$ is equipped with a combing $X_{|A_i}$ and a pseudo-parallelization \linebreak $\bar\tau_{|A_i}=(N(\gamma\cap A_i); {\tau_e}_{|A_i}, {\tau_d}_{|A_i}, {\tau_g}_{|A_i})$ such that $X_{|\partial A_i}$ coincides with ${E_3^e}_{|\partial A_i}$ where $\tau_e=(E_1^e,E_2^e,E_3^e)$. Furthermore, $$ L^+_{A_i} = 2\cdot L_{\bar\tau_{|A_i}= X_{|A_i}} \mbox{ \ and \ } L^-_{A_i} = 2\cdot L_{\bar\tau_{|A_i}= -X_{|A_i}}. $$ Construct a pseudo-parallelization $\check\tau = (N(\check\gamma) ; \check \tau_e ,\check \tau_d ,\check \tau_g)$ of $A_i$ by modifying $\bar\tau_{|A_i}$ as follows, so that $\check\tau$ and $X$ coincide on $\partial A_i$.
Consider a collar $\l C=[0,1]\times \partial A_i$ of $\partial A_i$ such that $\lbrace 1 \rbrace \times \partial A_i = \partial A_i$ and $\l C \cap \gamma = \emptyset$. Without loss of generality, assume that $X_{|A_i}$ coincides with $E_3^e$ on the collar $\l C$. Let $\check\tau$ coincide with $\bar\tau_{|A_i}$ on $\overline{A_i \setminus \l C}$. End the construction of $\check\tau$ by requiring $$ \forall (s,b) \in \l C = [0,1] \times \partial A_i, \ \forall v \in \b S^2 \ : \ \check\tau_e( (s,b),v) = \tau_e \left((s,b), R_{e_2, \frac{-\pi s}{2}}(v)\right). $$ Note that $\check\tau$ and $X_{|A_i}$ are compatible and that $$ L^+_{A_i} = 2\cdot L_{\check\tau= X_{|A_i}} \mbox{ \ and \ } L^-_{A_i} = 2\cdot L_{\check\tau= -X_{|A_i}}. $$ Using $\check\tau$ and Proposition~\ref{prop_linksinhomologyI}, we get $$ \begin{aligned} [L^+_{A_i}] &= P(e_2^{A_i}(X^\perp_{|A_i}, {\check E^e}_{2|\partial A_i})) + \frac{1}{2} P(e_2^{A_i}({\check{E}}^{d \perp}_{1}, {\check E^e}_{2|\partial A_i})) + \frac{1}{2} P(e_2^{A_i}({\check{E}}^{g \perp}_{1}, {\check E^e}_{2|\partial A_i})) \\ [L^-_{A_i}] &= - P(e_2^{A_i}(X^\perp_{|A_i}, {\check E^e}_{2|\partial A_i})) + \frac{1}{2} P(e_2^{A_i}({\check{E}}^{d \perp}_{1}, {\check E^e}_{2|\partial A_i})) + \frac{1}{2} P(e_2^{A_i}({\check{E}}^{g \perp}_{1}, {\check E^e}_{2|\partial A_i})) \end{aligned} $$ where $\check\tau_e=(\check E_1^e,\check E_2^e,\check E_3^e)$ and where ${\check E_1}^d$ and ${\check E_1}^g$ are the Siamese sections of $\check\tau$. 
By construction, it follows that $$ \begin{aligned} [L^+_{A_i}] &= P(e_2^{A_i}(X^\perp_{|A_i}, {E_2^e}_{|\partial A_i})) + \frac{1}{2} P(e_2^{A_i}({E_1^d}^\perp_{|A_i}, {E_2^e}_{|\partial A_i})) + \frac{1}{2} P(e_2^{A_i}({E_1^g}^\perp_{|A_i}, {E_2^e}_{|\partial A_i})) \\ [L^-_{A_i}] &= -P(e_2^{A_i}(X^\perp_{|A_i}, {E_2^e}_{|\partial A_i})) + \frac{1}{2} P(e_2^{A_i}({E_1^d}^\perp_{|A_i}, {E_2^e}_{|\partial A_i})) + \frac{1}{2} P(e_2^{A_i}({E_1^g}^\perp_{|A_i}, {E_2^e}_{|\partial A_i})) \end{aligned} $$ where $E_1^d$ and $E_1^g$ are the Siamese sections of $\bar\tau$. Using the same method, we also get that $$ \begin{aligned} [L^+_{B_i}] &= P(e_2^{B_i}(X^\perp_{B_i}, {E_2^e}_{|\partial A_i})) + \frac{1}{2} P(e_2^{B_i}({E_1^d}^\perp_{|B_i}, {E_2^e}_{|\partial A_i})) + \frac{1}{2} P(e_2^{B_i}({E_1^g}^\perp_{|B_i}, {E_2^e}_{|\partial A_i})) \\ [L^-_{B_i}] &= -P(e_2^{B_i}(X^\perp_{B_i}, {E_2^e}_{|\partial A_i})) + \frac{1}{2} P(e_2^{B_i}({E_1^d}^\perp_{|B_i}, {E_2^e}_{|\partial A_i})) + \frac{1}{2} P(e_2^{B_i}({E_1^g}^\perp_{|B_i}, {E_2^e}_{|\partial A_i})). \end{aligned} $$ Conclude with Lemma~\ref{simppara}. \end{proof} \subsection[Variation formula for torsion combings]{Variation formula for torsion combings} \begin{proof}[Proof of Theorem~\ref{thm_D2nd}] Let $(M,X)$ be a compact oriented 3-manifold equipped with a combing. Let $\lbrace (\sfrac{B_i}{A_i},X_{B_i}) \rbrace_{i\in \lbrace 1, 2\rbrace}$ be two disjoint LP$_\b Q$-surgeries in $(M,X)$ and assume that, for all subset $I\subset\lbrace 1,2 \rbrace$, $X^I=X(\lbrace\sfrac{B_i}{A_i}\rbrace_{i \in I})$ is a torsion combing of the 3-manifold $M_I = M(\lbrace \sfrac{B_i}{A_i} \rbrace_{i \in I})$. Note that, for all $I,J\subset \lbrace 1,2 \rbrace$, $X^I$ and $X^J$ coincide on $(M\setminus \cup_{i\in I\cup J} A_i)\cup_{i\in I\cap J}B_i$. 
Finally, let $\lbrace \bar\tau^I \rbrace_{I\subset \lbrace 1,2\rbrace}$ be a family of pseudo-parallelizations as in Lemma~\ref{norm}, let ${E_1^d}^I$ and ${E_1^g}^I$ denote the Siamese sections of $\bar\tau^I$ for all $I\subset \lbrace 1,2 \rbrace$ and let $L_I$ stand for $L_{{E_1^d}^I=-{E_1^g}^I}$. Using Corollary~\ref{cor_FTppara}, we have $$ \begin{aligned} p_1 \left([X^{\lbrace 2 \rbrace}],[X^{\lbrace 1,2 \rbrace}]\right) - p_1\left([X],[X^{\lbrace 1 \rbrace}]\right) &= p_1 \left([X^{\lbrace 2 \rbrace}],[X^{\lbrace 1,2 \rbrace}]\right)- p_1 \left(\bar\tau^{\lbrace 2 \rbrace},\bar\tau^{\lbrace 1,2 \rbrace}\right) \\ & - p_1\left([X],[X^{\lbrace 1 \rbrace}]\right) + p_1\left(\bar\tau,\bar\tau^{\lbrace 1 \rbrace}\right) \\ \end{aligned} $$ which, using Lemma~\ref{lem1} and Theorem~\ref{thm_defp1Xb}, reads $$ \begin{aligned} & 4 \cdot lk_{M_{\lbrace1,2\rbrace}}(L_{\bar\tau^{\lbrace 1,2 \rbrace}=X^{\lbrace 1,2 \rbrace}} \ , \ L_{\bar\tau^{\lbrace 1,2 \rbrace}=-X^{\lbrace 1,2 \rbrace}}) - 4 \cdot lk_{M_{\lbrace 2 \rbrace}}(L_{\bar\tau^{\lbrace 2 \rbrace}=X^{\lbrace 2 \rbrace}} \ , \ L_{\bar\tau^{\lbrace 2 \rbrace}=-X^{\lbrace 2 \rbrace}}) \\ & - lk_{\b S^2} \left(e_1-(-e_1) \ , \ P_{\b S^2} \circ (\tau^{\lbrace 1,2 \rbrace}_d)^{-1} \circ X^{\lbrace 1,2 \rbrace}(L_{\lbrace 1,2 \rbrace}) - P_{\b S^2} \circ (\tau^{\lbrace 2 \rbrace}_d)^{-1} \circ X^{\lbrace 2 \rbrace}(L_{\lbrace 2 \rbrace})\right)\\ &- 4 \cdot lk_{M_{\lbrace 1 \rbrace}}(L_{\bar\tau^{\lbrace 1 \rbrace}=X^{\lbrace 1 \rbrace}} \ , \ L_{\bar\tau^{\lbrace 1 \rbrace}=-X^{\lbrace 1 \rbrace}}) + 4 \cdot lk_{M}(L_{\bar\tau=X} \ , \ L_{\bar\tau=-X}) \\ & + lk_{\b S^2} \left(e_1-(-e_1) \ , \ P_{\b S^2} \circ (\tau^{\lbrace 1 \rbrace}_d)^{-1} \circ X^{\lbrace 1 \rbrace}(L_{\lbrace 1 \rbrace}) - P_{\b S^2} \circ (\tau_d)^{-1} \circ X(L) \right). 
\end{aligned} $$ This can further be reduced to the following by using Lemma~\ref{norm}, $$ \begin{aligned} & lk_M \left( L^+_{ext} + L^+_{A_1} + L^+_{A_2}, \ L^-_{ext} + L^-_{A_1} + L^-_{A_2} \right) \\ &- lk_{M_{\lbrace 1 \rbrace}} \left( L^+_{ext} + L^+_{B_1} + L^+_{A_2}, \ L^-_{ext} + L^-_{B_1} + L^-_{A_2} \right) \\ &- lk_{M_{\lbrace 2 \rbrace}} \left( L^+_{ext} + L^+_{A_1} + L^+_{B_2}, \ L^-_{ext} + L^-_{A_1} + L^-_{B_2} \right) \\ &+ lk_{M_{\lbrace 1,2 \rbrace}} \left( L^+_{ext} + L^+_{B_1} + L^+_{B_2}, \ L^-_{ext} + L^-_{B_1} + L^-_{B_2} \right). \end{aligned} $$ In order to compute these linking numbers, we construct specific 2-chains. First, let us introduce more convenient notation. For all $i \neq j \in \lbrace 1,2 \rbrace$, let $$ L^\pm_{i}=L^\pm_{A_i}, \ L^\pm_{ij}= L^\pm_{A_i} + L^\pm_{A_j}, \ L^\pm_{eij}= L^\pm_{ext} + L^\pm_{A_i} + L^\pm_{A_j}. $$ Set also similar notations with primed indices, where a primed index $i'$, $i \in \lbrace 1,2 \rbrace$, indicates that $L^\pm_{A_i}$ should be replaced by $L^\pm_{B_i}$. For instance, $L^\pm_{i'}=L^\pm_{B_i}$, $L^\pm_{ij'}= L^\pm_{A_i} + L^\pm_{B_j}$, \textit{etc}. Using these notations, $p_1 \left([X^{\lbrace 2 \rbrace}],[X^{\lbrace 1,2 \rbrace}]\right) - p_1\left([X],[X^{\lbrace 1 \rbrace}]\right)$ reads: $$ \begin{aligned} lk_M(L_{e12}^+,L_{e12}^-)- lk_{M_{\lbrace 1 \rbrace}}(L_{e1'2}^+,L_{e1'2}^-) - lk_{M_{\lbrace 2 \rbrace}}(L_{e12'}^+,L_{e12'}^-) + lk_{M_{\lbrace 1,2 \rbrace}} (L_{e1'2'}^+,L_{e1'2'}^-) . \end{aligned} $$ Recall from Lemma~\ref{nulexcep} that there exist rational 2-chains $\Sigma_{e12}^\pm$ in $M$ which are bounded by the links $L^\pm_{e12}$. Similarly, there exist rational 2-chains $\Sigma^\pm_{e1'2'}$ bounded by $L^\pm_{e1'2'}$ in $M_{\lbrace 1,2\rbrace}$. Note that, for all $i \in \lbrace 1,2 \rbrace$, the 2-chains $\Sigma_{e12}^\pm\cap A_i$ are cobordisms between the $L^\pm_i$ and 1-chains $\ell^\pm_i$ in $\partial A_i$.
Similarly, for all $i \in \lbrace 1,2 \rbrace$, the 2-chains $\Sigma_{e1'2'}^\pm\cap B_i$ are cobordisms between the $L^\pm_{i'}$ and 1-chains $\ell^\pm_{i'}$ in $\partial B_i$. Furthermore, according to Lemma~\ref{inter}, for all $i\in \lbrace 1,2 \rbrace$ and for any nonvanishing section $\sigma_i$ of $X^\perp_{|\partial A_i}$, in $H_1(\partial A_i ; \b Q)/\go L_{A_i}$: $$ [\ell^\pm_{i'}-\ell^\pm_{i}]= \pm \big((i_*^{B_i})^{-1}(P(e_2^{B_i}(X_{B_i}^\perp, \sigma_i))) - (i_*^{A_i})^{-1}(P(e_2^{A_i}(X_{|A_i}^\perp, \sigma_i)))\big). $$ So, according to Lemma~\ref{zero}, for all $I \subset \lbrace 1,2 \rbrace$, there exists a 2-chain $S_{\sfrac{B_i}{A_i}}^I$ in $M_I$ which is bounded by $\ell^+_{i'}-\ell^+_i$. Finally, since $(\sfrac{B_1}{A_1})$ and $(\sfrac{B_2}{A_2})$ are LP$_\b Q$-surgeries, we can construct these chains so that $$ \begin{aligned} S_{\sfrac{B_1}{A_1}}^{\lbrace 1\rbrace} \cap (M_{\lbrace 1 \rbrace}\hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring A_2) &= S_{\sfrac{B_1}{A_1}}^{\lbrace 1,2 \rbrace} \cap (M_{\lbrace 1,2 \rbrace}\hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring B_2), \\ S_{\sfrac{B_2}{A_2}}^{\lbrace 2\rbrace} \cap (M_{\lbrace 2 \rbrace}\hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring A_1) &= S_{\sfrac{B_2}{A_2}}^{\lbrace 1,2 \rbrace} \cap (M_{\lbrace 1,2 \rbrace}\hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring B_1). \end{aligned} $$ Let us now return to the computation of $p_1 \left([X^{\lbrace 2 \rbrace}],[X^{\lbrace 1,2 \rbrace}]\right) - p_1\left([X],[X^{\lbrace 1 \rbrace}]\right)$.
Using the 2-chains we constructed, we have : $$ \begin{aligned} lk_M(L_{e12}^+,L_{e12}^-) &= \langle \Sigma^+_{e12}, L_{e12}^- \rangle \\ lk_{M_{\lbrace 1 \rbrace}}(L_{e1'2}^+,L_{e1'2}^-) &= \langle \Sigma^+_{e12} \cap (M \setminus \mathring A_1) + S_{\sfrac{B_1}{A_1}}^{\lbrace 1 \rbrace} + \Sigma^+_{e1'2'}\cap B_1, L_{e1'2}^- \rangle \\ lk_{M_{\lbrace 2 \rbrace}}(L_{e12'}^+,L_{e12'}^-) &= \langle \Sigma^+_{e12}\cap (M \setminus \mathring A_2) + S_{\sfrac{B_2}{A_2}}^{\lbrace 2 \rbrace} + \Sigma^+_{e1'2'}\cap B_2 , L_{e12'}^- \rangle \\ lk_{M_{\lbrace 1,2 \rbrace}} (L_{e1'2'}^+,L_{e1'2'}^-)&=\langle \Sigma^+_{e12}\cap(M\setminus(\mathring A_1\hspace{-1mm}\cup\hspace{-1mm}\mathring A_2)) +S_{\sfrac{B_1}{A_1}}^{\lbrace 1 , 2 \rbrace} \\ &\hspace{1cm}+S_{\sfrac{B_2}{A_2}}^{\lbrace 1 , 2 \rbrace} + \Sigma^+_{e1'2'}\cap(B_1\cup B_2) , L_{e1'2'}^- \rangle. \\ \end{aligned} $$ So, the contribution of the intersections in $M \hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1\cup \mathring A_2)$ is zero since it reads : $$ \begin{aligned} & \langle \Sigma^+_{e12} \cap (M \hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1 \cup \mathring A_2)) , L_{e}^- \rangle_{M\setminus (\mathring A_1\cup \mathring A_2)} \\ & -\langle \Sigma^+_{e12} \cap (M \hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1 \cup \mathring A_2)) + S_{\sfrac{B_1}{A_1}}^{\lbrace 1 \rbrace}\cap (M \hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1 \cup \mathring A_2)) , L_{e}^- \rangle_{M \setminus (\mathring A_1\cup \mathring A_2)} \\ & -\langle \Sigma^+_{e12} \cap (M \hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1 \cup \mathring A_2)) + S_{\sfrac{B_2}{A_2}}^{\lbrace 2 \rbrace}\cap (M \hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1 \cup \mathring A_2)), L_{e}^- \rangle_{M\setminus (\mathring A_1\cup \mathring A_2)} \\ & +\langle \Sigma^+_{e12} \cap (M \hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1 \cup \mathring A_2)) + S_{\sfrac{B_1}{A_1}}^{\lbrace 1 , 2 \rbrace}\cap (M 
\hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1 \cup \mathring A_2)) \\ &\hspace{5cm}+ S_{\sfrac{B_2}{A_2}}^{\lbrace 1 , 2 \rbrace}\cap (M \hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1 \cup \mathring A_2)) , L_{e}^- \rangle_{M\setminus (\mathring A_1\cup \mathring A_2)} . \end{aligned} $$ The contribution in $A_1$ is $$ \begin{aligned} \langle \Sigma^+_{e12} \cap A_1 , L_{1}^- \rangle_{A_1} - \langle\Sigma^+_{e12} \cap A_1 + S_{\sfrac{B_2}{A_2}}^{\lbrace 2 \rbrace}\cap A_1 , L_{1}^- \rangle_{A_1} = - \langle S_{\sfrac{B_2}{A_2}}^{\lbrace 2 \rbrace}\cap A_1, L_{1}^- \rangle_{A_1}. \end{aligned} $$ In $A_2$, we similarly get $- \langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 \rbrace}\cap A_2, L_{2}^- \rangle_{A_2}$. The contribution in $B_1$ is $$ \begin{aligned} &-\hspace{-1mm}\langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 \rbrace}\cap B_1 \hspace{-1mm}+\hspace{-1mm} \Sigma^+_{e1'2'}\cap B_1 , L_{1'}^- \rangle_{B_1} \hspace{-1mm}+\hspace{-1mm} \langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 , 2 \rbrace}\cap B_1 \hspace{-1mm} +\hspace{-1mm} S_{\sfrac{B_2}{A_2}}^{\lbrace 1 , 2 \rbrace}\cap B_1 \hspace{-1mm} +\hspace{-1mm} \Sigma^+_{e1'2'}\cap B_1 , L_{1'}^- \rangle_{B_1} \\ &= \langle S_{\sfrac{B_2}{A_2}}^{\lbrace 1 , 2 \rbrace}\cap B_1 , L_{1'}^- \rangle_{B_1} \end{aligned} $$ and, in $B_2$, we get $\langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 , 2 \rbrace}\cap B_2 , L_{2'}^- \rangle_{B_2}$. Finally, note that $L_{\lbrace X^I \rbrace}(\sfrac{B_i}{A_i}) = i_*^{A_i}([\ell_{i'}^+ - \ell_{i}^+])$ for $i \in \lbrace 1,2\rbrace$.
Moreover, recall that $[\ell^+_{i'}-\ell^+_{i}]=-[\ell^-_{i'}-\ell^-_{i}]$ in $H_1(\partial A_i ; \b Q)/\go L_{A_i}$, and complete the computations~: $$ \begin{aligned} p_1 & \left([X^{\lbrace 2 \rbrace}],[X^{\lbrace 1,2 \rbrace}]\right) - p_1\left([X],[X^{\lbrace 1 \rbrace}]\right) \\ &= \langle S_{\sfrac{B_2}{A_2}}^{\lbrace 1 , 2 \rbrace}\cap B_1 , L_{1'}^- \rangle_{B_1} + \langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 , 2 \rbrace}\cap B_2 , L_{2'}^- \rangle_{B_2} \\ &- \langle S_{\sfrac{B_2}{A_2}}^{\lbrace 2 \rbrace}\cap A_1, L_{1}^- \rangle_{A_1} - \langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 \rbrace} \cap A_2, L_{2}^- \rangle_{A_2} \\ &= \langle S_{\sfrac{B_2}{A_2}}^{\lbrace 1 , 2 \rbrace} , \ell_{1'}^- \rangle_{M_{\lbrace 1,2 \rbrace}} + \langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 , 2 \rbrace} , \ell_{2'}^- \rangle_{M_{\lbrace 1,2 \rbrace}} \\ &- \langle S_{\sfrac{B_2}{A_2}}^{\lbrace 2 \rbrace}, \ell_{1}^- \rangle_{M_{\lbrace 2 \rbrace}} - \langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 \rbrace} , \ell_{2}^- \rangle_{M_{\lbrace 1 \rbrace}} \\ &= \langle S_{\sfrac{B_2}{A_2}}^{\lbrace 1 , 2 \rbrace} , \ell_{1'}^- - \ell_{1}^- \rangle_{M_{\lbrace 1,2 \rbrace}} + \langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 , 2 \rbrace} , \ell_{2'}^- - \ell_{2}^- \rangle_{M_{\lbrace 1,2 \rbrace}} \\ &=- 2 \cdot lk_M \left(L_{\lbrace X^I \rbrace}(\sfrac{B_1}{A_1}) \ , \ L_{\lbrace X^I \rbrace}(\sfrac{B_2}{A_2}) \right). \end{aligned} $$ \end{proof} \nocite{hirzebruch,KM,pontrjagin,turaev,lickorish,rolfsen,MR1030042,MR1189008,MR1712769} \end{document}
\begin{document} \title{ Decomposing a Matrix into two Submatrices with Extremely Small Operator Norm} \begin{abstract} We give sufficient conditions on a matrix $A$ ensuring the existence of a partition of this matrix into two submatrices, each with extremely small norm of the image of any vector. Under rather weak conditions on a matrix $A$ we obtain a partition of $A$ whose submatrices have extremely small $(1,q)$--norm. \end{abstract} \small{Keywords: {\it submatrix, operator norm, partition of a matrix, Lunin's method}} \\ \normalsize This paper is devoted to estimates of operator norms of submatrices. The subject is being actively developed and finds various applications. The present work can be viewed as a continuation of \cite{1}, which discusses the $(2,1)$--norm case. This case was studied earlier for matrices with orthonormal columns in \cite{2}, where an analogue of the partition (\ref{partition}) (see below) with extremely small $(2,1)$--norm of the corresponding submatrices was obtained. Using a modified version of Lunin's method, we prove a substantial strengthening of Assertion $4$ from \cite{1} and a generalization of Assertion $3$ to the case of the $(X, q)$--norm with $1\leq q<\infty$. We study the case of the $(1,q)$--norm in greater detail. For an $N\times n$ matrix $A$, viewed as an operator from $l_p^n$ to $l_q^N$, we define the $(p, q)$--norm: \begin{equation*} \left\| A\right\|_{(p,q)}=\sup\limits_{\left\| x \right\|_{l_p^n}\leq 1}\left\| Ax\right\| _{l_q^N}, \ 1\leq p, q\leq \infty. \end{equation*} In fact, Theorem \ref{assertion_norm} is proved here for a more general $(X,q)$--norm, where $X$ is an $n$--dimensional normed space. We use the following notation: $\rk(A)$ is the rank of a matrix $A$, $\left< N\right>$ is the set of natural numbers $1, 2,\ldots, N$; $v_i$, $i\in \left< N\right>$, stand for the rows of $A$, and $w_j$, $j\in\left<n\right>$, for its columns.
For a subset $\omega\subset\left<N\right>$, $A(\omega)$ denotes the submatrix of a matrix $A$ formed by the rows $v_i$, $i\in\omega$, and $\overline{\omega}=\left<N\right>\setminus\omega$. $( \cdot, \cdot)$ stands for the inner product in $\mathbb R ^n$, and $\left\| x\right\|_p$ is the norm of $x\in\mathbb{R}^n$ in $l_p^n$, $1\leq p\leq\infty$. For a normed space $X$, $\left\|\cdot\right\|_X$ is the norm on $X$. The following condition is the counterpart of the condition on a matrix from \cite{1} in the case of an arbitrary $1\leq q<\infty$: \begin{equation}\label{cond} \forall x\in\mathbb R^n\ \ \forall i_0\in\left<N\right> \ \ |(v_{i_0}, x)|\leq\varepsilon\left(\sum_{i=1}^N|(v_i, x)|^q\right)^{1/q}. \end{equation} \begin{theorem}\label{assertion_pointwise} Assume that an $N\times n$ matrix $A$ satisfies (\ref{cond}) with $0<\varepsilon\leq (\rk(A))^{-1/q}$ and $1\leq q<\infty$. Then there exists a partition \begin{equation}\label{partition} \left<N\right>=\Omega_1\cup\Omega_2,\ \Omega_1\cap\Omega_2=\emptyset, \end{equation} such that \begin{equation}\label{obtained_estimate} \left\| A(\Omega_k)x\right\|_q\leq \gamma\left\| Ax\right\|_q, \ \ \gamma=\frac{1}{2^{1/q}}+ \frac{2+3\cdot 2^{-1/q}}{q}\left(\rk(A)\varepsilon^q\ln \frac{6q}{(\rk(A)\varepsilon^q)^{1/3}}\right)^{1/3} \end{equation} for any $x\in\mathbb{R}^n$ and $k=1,2$. \end{theorem} \begin{remark} It is not known whether such a partition exists when $\rk(A)\varepsilon^q>1$. \end{remark} \begin{proof}[Sketch of proof] First, we prove Theorem \ref{assertion_pointwise} in the case $\rk(A)=n$. Denote $$\delta=\frac{(n\varepsilon^q)^{1/3}}{q}.$$ Let $X$ be the space $\mathbb{R}^n$ with the norm $\left\|x\right\|_X=\left\|Ax\right\|_q$ (it is a norm on $\mathbb{R}^n$ because $\rk(A)=n$). Let $S_X=\{x\in \mathbb{R}^n: \left\|x\right\|_X=1 \}$ be the unit sphere of $X$. Let $\mathbb{Y}$ be a $\delta$--net in the norm $\left\|\cdot\right\|_X$ on the sphere $S_X$ with at most $(3/\delta)^n$ elements.
Suppose, to the contrary, that no such partition exists; then for every partition (\ref{partition}) there exists a vector $x_1\in S_X$ such that $$ \left\| A(\Omega_1)x_1\right\|_q>\gamma\left\| Ax_1\right\|_q $$ (in this case let $\omega'=\Omega_1$, $x_{\omega'}=x_1$), or there exists a vector $x_2\in S_X$ such that $$ \left\| A(\Omega_2)x_2\right\|_q>\gamma\left\| Ax_2\right\|_q $$ (then we define $\omega'=\Omega_2$, $x_{\omega'}=x_2$). Thus for every partition $(\Omega_1, \Omega_2)$ we obtain $\omega'$ and $x_{\omega'}$. Let $y_{\omega'}$ be a vector of the net $\mathbb{Y}$ nearest to $x_{\omega'}$. There are $2^{N-1}-1$ different partitions of the set $\left<N\right>$ into two nonempty parts. Therefore there exists a vector $y_0\in \mathbb{Y}$ such that the set $K=\{\omega' : y_0=y_{\omega'}\}$ is large enough: \begin{equation}\label{K_est} |K|\geq(2^{N-1}-1)\left(\frac{\delta}{3}\right)^n \geq 2^N\left(\frac{\delta}{6}\right)^n. \end{equation} (Here we assume that $n>1$; otherwise Theorem \ref{assertion_pointwise} is obvious.) Therefore there is a vector $y_0\in S_X$ and at least $2^N(\delta/6)^n$ subsets $\omega'\subset\langle N\rangle$ for which $\left\|A(\omega')x_{\omega'}\right\|_q>\gamma \left\|Ax_{\omega'}\right\|_q$ and $\left\|y_0-x_{\omega'}\right\|_X<\delta$. Note that for $x\in S_X$ and $\omega\subset\langle N\rangle$ we have $\left\|A(\omega)x\right\|_q\leq \left\|A(\omega)\right\|_{(X,q)}\leq \left\|A\right\|_{(X,q)}$. Below we assume that $\gamma<1$; otherwise (\ref{obtained_estimate}) is obviously true.
As $\gamma<1$, for $\omega'\in K$ we obtain: \begin{gather*} \left\| A(\omega')y_0\right\| _q\geq \left\| A(\omega')x_{\omega'}\right\| _q-\left\| A(\omega')(x_{\omega'}-y_0)\right\| _q>\\ >\gamma\left\| Ax_{\omega'}\right\| _q-\delta \left\| A(\omega')\left\{\frac{x_{\omega'}-y_0}{\left\|x_{\omega'}-y_0\right\|_X}\right\}\right\|_q\geq (\gamma\left\| Ay_0\right\| _q-\gamma\left\| A(x_{\omega'}-y_0)\right\| _q)- \end{gather*} \begin{equation*}\label{24} -\delta \left\| A\left(\frac{x_{\omega'}-y_0}{\left\|x_{\omega'}-y_0\right\|_X}\right)\right\| _q\geq \gamma\left\| Ay_0\right\| _q-2\delta \geq \gamma\left\| Ay_0\right\|_q-2\delta \left\| Ay_0\right\|_q= \left\| Ay_0\right\|_q\left(\gamma-2\delta\right). \end{equation*} Since $y_0\in S_X$, we have used $\left\|Ay_0\right\|_q=\left\|y_0\right\|_X=1$ in the last inequality. Let $R$ be the number of subsets $\omega\subset\langle N\rangle$ for which $$ \left\| A(\omega)y_0\right\|_q\geq \left(\gamma-2\delta\right)\left\| Ay_0\right\|_q $$ holds, and let $K_1$ be the set of such subsets. Let us show that $R<2^N(\delta/6)^n$; this will contradict (\ref{K_est}) and complete the proof of Theorem \ref{assertion_pointwise} in the case $\rk(A)=n$. Denote $M=3\cdot 2^{-1/q}$ and $$\phi(n, \varepsilon)=\frac{1}{q}\left(n\varepsilon^q\ln \frac{6q}{(n\varepsilon^q)^{1/3}}\right)^{1/3},$$ and set $S=\left\| Ay_0\right\|_q^q=\sum_{i=1}^N|(v_i,y_0)|^q$. Since $\delta\leq\phi(n, \varepsilon)$, for $\omega'\in K_1$ we have: $$ \sum\limits_{i\in \omega'}{|(v_i,y_0)|^q}>(\gamma-2\delta)^q S> \left(\frac{1}{2^{1/q}}+M\phi(n, \varepsilon)\right)^q S\geq $$ $$ \geq \left(\frac{1}{2}+q\frac{1}{2^{(q-1)/q}}M\phi(n, \varepsilon)\right)S=\left(\frac{1}{2}+q\frac{2^{1/q}}{2}M\phi(n, \varepsilon)\right)S. $$ $R$ can be estimated as in the proof of Assertion $3$ from \cite{1}. Now let the matrix $A$ have rank $r<n$. Without loss of generality, we can assume that the vectors $w_1, \dots, w_{r}$ are linearly independent. It is clear that (\ref{cond}) holds for the matrix $\tilde{A}$ consisting of the first $r$ columns of $A$.
We have $\rk(\tilde{A})=r$, therefore there exists a partition of the form (\ref{partition}) such that (\ref{obtained_estimate}) holds. For $j>r$, let $w_j=\sum\limits_{i=1}^r \lambda_j^i w_i$. For a vector $x\in\mathbb{R}^n$ we construct the vector $\tilde{x}\in\mathbb{R}^r$ having coordinates $\tilde{x}_i=x_i+\sum\limits_{j=r+1}^n \lambda_j^ix_j$; then $Ax=\tilde{A}\tilde{x}$ and, for $k=1,2$, $A(\Omega_k)x=\tilde{A}(\Omega_k)\tilde{x}$, so (\ref{obtained_estimate}) also holds for the matrix $A$ with the partition we have found. \end{proof} \begin{corollary}\label{assertion_pointwise_cor} Assume that an $N\times n$ matrix $A$ satisfies (\ref{cond}) with $0<\varepsilon\leq (\rk(A))^{-1/q}$ and $1\leq q<\infty$. Then there exists a partition (\ref{partition}) such that for any $x\in\mathbb{R}^n$ and $k=1,2$ we have \begin{equation*} \left(\frac{1}{2}-\psi\right)\sum_{i=1}^N|(v_i, x)|^q\leq \sum_{i\in\Omega_k}|(v_i, x)|^q\leq \left(\frac{1}{2}+\psi\right)\sum_{i=1}^N|(v_i, x)|^q, \end{equation*} where $$ \psi=2^{q+1} \left(\rk(A)\varepsilon^q\ln \frac{6q}{(\rk(A)\varepsilon^q)^{1/3}}\right)^{1/3}. $$ \end{corollary} The following statement is a simple corollary of Theorem \ref{assertion_pointwise}. \begin{theorem}\label{assertion_norm} Assume that an $N\times n$ matrix $A$ satisfies (\ref{cond}) for some $0<\varepsilon\leq (\rk(A))^{-1/q}$ and $1\leq q<\infty$. Then there exists a partition (\ref{partition}) such that for $k=1,2$ the following inequality holds $$ \left\| A(\Omega_k)\right\|_{(X,q)}\leq \gamma \left\| A\right\|_{(X,q)}, $$ where $\gamma$ is defined in the formulation of Theorem \ref{assertion_pointwise}. \end{theorem} The following theorem is an analogue of Theorem \ref{assertion_norm} for the $(1,q)$--norm. Let $e_j$, $j\in\langle n\rangle$, be the standard basis in $\mathbb{R}^n$.
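Before proceeding, let us spell out the standard fact behind the $(1,q)$--norm computations below (it is used repeatedly without further comment): since $x\mapsto\left\|Ax\right\|_q$ is convex, its supremum over the cross-polytope $\lbrace x\in\mathbb{R}^n : \left\|x\right\|_1\leq 1\rbrace$ is attained at one of the vertices $\pm e_j$, so the $(1,q)$--norm of a matrix is simply its largest column norm:
$$ \left\|A\right\|_{(1,q)}=\sup_{\left\|x\right\|_1\leq 1}\Bigl\|\sum_{j=1}^n x_jw_j\Bigr\|_q=\max_{j\in\langle n\rangle}\left\|w_j\right\|_q, \ \ \text{and, similarly,} \ \ \left\|A(\omega)\right\|_{(1,q)}=\max_{j\in\langle n\rangle}\Bigl(\sum_{i\in\omega}|a_j^i|^q\Bigr)^{1/q} $$
for every $\omega\subset\langle N\rangle$.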
\begin{theorem}\label{assertion_1_q_norm} If for an $N\times n$ matrix $A$ the inequality \begin{equation}\label{a_i_j_cond} |a_j^i|\leq \varepsilon \left\| w_j \right\|_q \end{equation} holds for some $1\leq q<\infty$ and $0<\varepsilon< 1$ and for every $i\in\langle N\rangle$ and $j\in\langle n\rangle$, then there exists a partition (\ref{partition}) such that for $k=1,2$ the following holds: $ \text{a) }\left\| A(\Omega_k)\right\|_{(1,q)}\leq \left(\frac{1}{2}+\frac{3}{2}\varepsilon^{q/3}\ln^{1/3}{(4n)}\right)^{1/q} \left\| A\right\|_{(1,q)},$ $ \text{b) }\left\| A(\Omega_k)\right\|_{(1,q)}\leq \left(\frac{1}{2}+\frac{1}{2}\varepsilon^{q}\sqrt{N}\left(1+\log\left(\frac{n}{N}+1\right)\right)^{1/2}\right)^{1/q} \left\| A\right\|_{(1,q)}, $ $ \text{c) } \left\| A(\Omega_k)\right\|_{(1,q)}\leq\left(\frac{1+n\varepsilon^q}{2}\right)^{1/q}\left\| A\right\|_{(1,q)}. $ \end{theorem} \begin{remark} Theorem \ref{assertion_1_q_norm} requires substantially weaker conditions on the entries of a matrix than Theorem \ref{assertion_pointwise}. \end{remark} \begin{proof} Since the function $x\mapsto\left\| Ax \right\|_q$ is convex, the $(1, q)$--norm of a matrix is attained at one of the vectors of the standard basis. The proof of a) is close to the argument of Theorem \ref{assertion_pointwise}, so we only sketch it. Assume the contrary: for each partition (\ref{partition}) there exists a number $k$ such that $\left\| A(\Omega_k)\right\|_{(1,q)}>\left(1/2+(3/2)\varepsilon^{q/3}\ln^{1/3}{(4n)}\right)^{1/q}\left\| A\right\|_{(1,q)}$. Denote $\omega'=\Omega_k$. The $(1,q)$--norm of the matrix $A(\omega')$ is attained at some vector $e_{j_{\omega'}}$, $j_{\omega'}\in\langle n\rangle$, therefore the following holds: $ \sum\limits_{i\in \omega'}|a^i_{j_{\omega'}}|^q>\left(1/2+(3/2)\varepsilon^{q/3}\ln^{1/3}{(4n)}\right)\left\|w_{j_{\omega'}}\right\|_q^q$.
As in the proof of Corollary \ref{assertion_pointwise}, there exists $j_0\in \langle n\rangle$ such that the set $K=\{\omega' : j_{\omega'}=j_0\}$ is large enough: \begin{equation*} |K|\geq(2^{N-1}-1)/n> 2^{N-2}/n. \tag{6} \end{equation*} It is easy to see that for every $\omega\in K$ \begin{equation*} \sum\limits_{i\in \omega}|a^i_{j_0}|^q>\left(1/2+(3/2)\varepsilon^{q/3}\ln^{1/3}{(4n)}\right)\left\|w_{j_0}\right\|_q^q. \tag{7} \end{equation*} So, for the proof of a) it is enough to check that the number $R$ of subsets $\omega\subset\langle N\rangle$ for which (7) holds is less than the right-hand side of (6). The number $R$ is estimated as in the proof of Assertion $3$ from \cite{1}. To prove b) we use Corollary $5$ from \cite{3}. Let $\tilde w_j=(|a_j^1|^q,\dots, |a_j^N|^q)$ be the vector obtained from the $j$--th column of $A$ by raising the moduli of its coordinates to the power $q$. For all $j\in\langle n\rangle$ we have $\left\|w_j\right\|_q^q\leq \left\|A\right\|_{(1,q)}^q$, so (\ref{a_i_j_cond}) implies that $\left\|\tilde w_j\right\|_{\infty}\leq \varepsilon^q\left\|A\right\|_{(1,q)}^q$. Then, by the corollary from \cite{3} mentioned above, there exists a vector $\xi=(\xi_1,\dots,\xi_N)\in\mathbb R^N$ whose coordinates have modulus $1$ such that for every $j\in\langle n\rangle$ the following inequality holds: \begin{equation*} \left|( \tilde w_j, \xi )\right|\leq \varepsilon^q\sqrt{N}\left(1+\log\bigl(\frac{n}{N}+1\bigr)\right)^{1/2}\left\|A\right\|_{(1,q)}^q. \end{equation*} Let $\Omega_1=\{i\in\langle N\rangle : \xi_i=1\}$, $\Omega_2=\langle N\rangle\backslash \Omega_1=\{i\in\langle N\rangle : \xi_i=-1\}$. Let us check b). Denote $\theta=\sqrt{N}\left(1+\log\bigl(\frac{n}{N}+1\bigr)\right)^{1/2}$.
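The passage from the signed bound $|(\tilde w_j,\xi)|\leq\varepsilon^q\theta\left\|A\right\|_{(1,q)}^q$ to the estimate for each half rests on the elementary identity $\sum_{i\in\Omega_1}x_i=\frac12\bigl(\sum_{i}x_i+(x,\xi)\bigr)$ (and similarly for $\Omega_2$ with $-\xi$). A minimal numerical sanity check of this identity, on hypothetical random data:

```python
import random

random.seed(0)

def omega_sum_via_identity(x, xi):
    """Return the sum of x over Omega_1 = {i : xi_i = +1}, computed directly
    and via the identity (sum(x) + <x, xi>) / 2."""
    direct = sum(v for v, s in zip(x, xi) if s == 1)
    via_identity = 0.5 * (sum(x) + sum(v * s for v, s in zip(x, xi)))
    return direct, via_identity

N = 12
x = [random.random() for _ in range(N)]          # plays the role of |a_j^i|^q
xi = [random.choice((1, -1)) for _ in range(N)]  # signs as in Corollary 5 of [3]
direct, via = omega_sum_via_identity(x, xi)
assert abs(direct - via) < 1e-12
```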
For $k=1,2$ there exists $j_0^k\in\langle n\rangle$ such that \begin{gather*} \left\|A(\Omega_k)\right\|_{(1,q)}^q= \sum\limits_{i\in\Omega_k}|a_{j_0^k}^i|^q \leq \frac{1}{2}\left(\sum\limits_{i\in\langle N\rangle}|a_{j_0^k}^i|^q+\varepsilon^q\theta\left\|A\right\|_{(1,q)}^q\right) \leq \left(\frac{1}{2}+\frac{1}{2}\varepsilon^{q}\theta\right)\left\|A\right\|_{(1,q)}^q, \end{gather*} as required. To prove c) we apply the following theorem. \begin{thh}[\cite{4}, p. 287] Let $A_1,\dots, A_n$ be sets in $\mathbb{R}^n$ with finite Lebesgue measure. Then there exists a hyperplane $\pi$ which divides the measure of each of them in half. \end{thh} Let $M=\max\limits_{i,j}\lbrace|a_j^i|^q\rbrace+1$. One can place $N$ axis-parallel cubes with sides of length $M$ in $\mathbb{R}^n$ in such a way that every hyperplane intersects at most $n$ of them. (This follows from the existence of $N$ points in general position in $\mathbb{R}^n$ and the continuity of the equation of a hyperplane.) Let us enumerate these cubes. For $i\in\langle N\rangle$ let $u_i$ be the vertex of the $i$--th cube with the smallest coordinates. For each entry $a_j^i$ of the matrix we define a parallelepiped $\widetilde{P_j^i}= [0,1]^{j-1}\times [1, 1+|a_j^i|^q]\times [0,1]^{n- j}$. We put the $n$ rectangular parallelepipeds determined by the entries of the row $v_i$ ($P_j^i=u_i+ \widetilde{P_j^i}$) into the cube with number $i$. Note that $\mu(P_j^i)=|a_j^i|^q$. For fixed $i$ we call the collection of the $P_j^i$, $j\in\langle n \rangle$, the $i$--th ``angle''. For $j\in\langle n \rangle$ let $A_j = \bigcup\limits_{i\in\langle N\rangle}{P_j^i}$. Applying the theorem above to the sets $A_j$, we get a hyperplane $\pi$ which divides the measure of each $A_j$ in half. Let $P_1$ and $P_2$ be the halfspaces into which $\pi$ divides $\mathbb{R}^n$. By construction $\pi$ intersects at most $n$ cubes, and consequently at most $n$ ``angles''. It is clear now how to obtain a partition (\ref{partition}).
We put the indices of the ``angles'' which lie entirely in $P_1$ (respectively, in $P_2$) into $\Omega_1$ (respectively, into $\Omega_2$). The indices of the ``angles'' which intersect both $P_1$ and $P_2$ we put into $\Omega_1$; let $G$ be the set of such indices. Let us show that for every $j\in \langle n \rangle$ and $k=1,2$ we have $\bigl(\sum_{i\in\Omega_k}|a_j^i|^q\bigr)^{1/q}\leq \left(\frac{1+n\varepsilon^q}{2}\right)^{1/q}\left\|w_j\right\|_q$; this will prove part c). Since $\pi$ divides the measure of $A_j$ in half, $$\sum\limits_{i\in \Omega_1 \backslash G } |a_j^i|^q + V_1 = \sum\limits_{i\in \Omega_2} |a_j^i|^q + V_2,$$ where $V_k$, $k=1,2$, stands for the volume of $\bigl(\bigcup_{i\in G}P_j^i\bigr)\cap P_k$. From (\ref{a_i_j_cond}) and the fact that $\pi$ intersects at most $n$ ``angles'', we have the inequality $$V_1+V_2 \leq n\varepsilon^q \sum\limits_{i\in \langle N\rangle}{|a_j^i|^q},$$ so for $k=1,2$ $$ \sum\limits_{i\in \Omega_k} |a_j^i|^q \leq \frac{1+n\varepsilon^q}{2}\sum\limits_{i\in \langle N\rangle} |a_j^i|^q. $$ Thus, Theorem \ref{assertion_1_q_norm} is proved. \end{proof} The following theorem shows that there are cases in which (\ref{a_i_j_cond}) holds with $\varepsilon<1$, yet for every partition one of the submatrices has the same $(1,q)$--norm as the whole matrix. \begin{theorem} For $n=2^{2k-1}$ there exists a $2k\times n$ matrix $A$ for which (\ref{a_i_j_cond}) holds for every $\varepsilon$ with $\varepsilon^q\log_2{(2n)}\geq 2$, but for every partition of the form (\ref{partition}) the following equality holds: \begin{gather*} \max\biggl\{\left\|A(\Omega_1)\right\|_{(1, q)}, \left\|A(\Omega_2)\right\|_{(1, q)} \biggr\} = \left\|A\right\|_{(1, q)}. \end{gather*} \end{theorem} \begin{proof}[Sketch of proof] For every pair of complementary subsets $\omega$ and $\langle 2k\rangle\backslash \omega$ of the set $\langle 2k\rangle$ we choose one of the largest cardinality (either of them if the cardinalities are equal). Let us enumerate the chosen subsets: $B_1, \ldots, B_{2^{2k-1}}$.
We construct a matrix $A$ in the following way: if $i\in B_j$, then $a_j^i=\frac{1}{|B_j|^{1/q}}$; otherwise $a_j^i=0$. It is easy to check that (\ref{a_i_j_cond}) holds for $A$ and that for any partition (\ref{partition}) either $\left\|A(\Omega_1)\right\|_{(1,q)}=\left\|A\right\|_{(1,q)}$ or $\left\|A(\Omega_2)\right\|_{(1,q)}=\left\|A\right\|_{(1,q)}$. \end{proof} Now let $q=\infty$ and let $A$ be an arbitrary matrix. Then no partition can decrease, even slightly, the $(X,\infty)$--norms of both submatrices. Indeed, one can find a row $v_{\sup}$ of the matrix $A$ such that $ \left\| A\right\|_{(X,\infty)}=\sup\limits_{\left\| x \right\|_{X}\leq 1}{\langle x, v_{\sup}\rangle}, $ and then the norm of the submatrix containing the row $v_{\sup}$ equals the norm of $A$. \thanks{The work was supported by the Russian Federation Government Grant No. 14.W03.31.0031.} The paper is submitted to Mathematical Notes. \end{document}
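The counterexample construction above can be verified exhaustively for small parameters. A sketch in code (hypothetical instance $k=2$, $q=2$, so $N=4$ rows and $n=8$ columns; the tie-breaking choice among subsets of equal cardinality is arbitrary, as in the proof):

```python
from itertools import combinations

def build_matrix(k, q):
    """Build the 2k x 2^(2k-1) counterexample: for each pair of complementary
    subsets of {0,...,2k-1}, pick one of largest cardinality (arbitrarily on
    ties); column j has entries |B_j|^(-1/q) on the rows of B_j, 0 elsewhere."""
    rows = range(2 * k)
    seen, blocks = set(), []
    for r in range(2 * k + 1):
        for omega in combinations(rows, r):
            comp = tuple(i for i in rows if i not in omega)
            key = frozenset([frozenset(omega), frozenset(comp)])
            if key in seen:
                continue
            seen.add(key)
            blocks.append(set(omega if len(omega) >= len(comp) else comp))
    A = [[(1.0 / len(B)) ** (1.0 / q) if i in B else 0.0 for B in blocks]
         for i in rows]
    return A, blocks

def col_norm_q(A, rows_subset, j, q):
    # q-th power of the l_q norm of column j restricted to rows_subset
    return sum(abs(A[i][j]) ** q for i in rows_subset)

k, q = 2, 2
A, blocks = build_matrix(k, q)
N, n = 2 * k, len(blocks)
assert n == 2 ** (2 * k - 1)
# ||A||_(1,q)^q = max_j ||w_j||_q^q = 1 by construction.
full = max(col_norm_q(A, range(N), j, q) for j in range(n))
# For every partition of the rows, one submatrix keeps the full norm:
# the chosen block of the pair {Omega_1, Omega_2} lies entirely in one part.
for mask in range(2 ** N):
    omega1 = [i for i in range(N) if mask >> i & 1]
    omega2 = [i for i in range(N) if not mask >> i & 1]
    m1 = max(col_norm_q(A, omega1, j, q) for j in range(n))
    m2 = max(col_norm_q(A, omega2, j, q) for j in range(n))
    assert max(m1, m2) >= full - 1e-12
```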
\begin{document} \tolerance=1000 \begin{abstract} The purpose of this paper is to introduce a cohomology theory for abelian matched pairs of Hopf algebras and to explore its relationship to Sweedler cohomology, to Singer cohomology and to extension theory. An exact sequence connecting these cohomology theories is obtained for a general abelian matched pair of Hopf algebras, generalizing those of Kac and Masuoka for matched pairs of finite groups and finite dimensional Lie algebras. The morphisms in the low degree part of this sequence are given explicitly, enabling concrete computations. \end{abstract} \title{Cohomology of abelian matched pairs and the Kac sequence} \setcounter{section}{-1} \section{Introduction} In this paper we discuss various cohomology theories for Hopf algebras and their relation to extension theory. It is natural to think of building new algebraic objects from simpler structures, or to get information about the structure of complicated objects by decomposing them into simpler parts. Algebraic extension theories serve exactly that purpose, and the classification problem for such extensions is usually related to cohomology theories. In the case of Hopf algebras, extension theories are proving to be invaluable tools for the construction of new examples of Hopf algebras, as well as in the efforts to classify finite dimensional Hopf algebras. Hopf algebras, which occur for example as group algebras, as universal envelopes of Lie algebras, as algebras of representative functions on Lie groups, as coordinate algebras of algebraic groups and as quantum groups, have many \lq group-like\rq\ properties. In particular, cocommutative Hopf algebras are group objects in the category of cocommutative coalgebras, and are very much related to ordinary groups and Lie algebras.
In fact, over an algebraically closed field of characteristic zero, such a Hopf algebra is a semi-direct product of a group algebra by a universal envelope of a Lie algebra, hence just a group algebra if finite dimensional (see [MM, Ca, Ko] for the connected case, [Gr1,2, Sw2] for the general case). In view of these facts it appears natural to try to relate the cohomology of Hopf algebras to that of groups and Lie algebras. The first work in this direction was done by M.E. Sweedler [Sw1] and by G.I. Kac [Kac] in the late 1960's. Sweedler introduced a cohomology theory of algebras that are modules over a Hopf algebra (now called Sweedler cohomology). He compared it to group cohomology, to Lie algebra cohomology and to Amitsur cohomology. In that paper he also shows how the second cohomology group classifies cleft comodule algebra extensions. Kac considered Hopf algebra extensions of a group algebra $kT$ by the dual of a group algebra $k^N$ obtained from a matched pair of finite groups $(N,T)$, and found an exact sequence connecting the cohomology of the groups involved and the group of Hopf algebra extensions $\operatorname{Opext} (kT,k^N)$ $$\begin{array}{l} 0\to H^1(N\bowtie T,k^{\bullet})\to H^1(T,k^{\bullet}) \oplus H^1(N,k^{\bullet})\to \operatorname{Aut} (k^N\# kT) \\ \to H^2(N\bowtie T,k^{\bullet})\to H^2(T,k^{\bullet})\to\operatorname{Opext} (kT,k^N)\to H^3(N\bowtie T,k^{\bullet})\to ... \end{array}$$ which is now known as the Kac sequence. In the work of Kac all Hopf algebras are over the field of complex numbers and also carry the structure of a $C^*$-algebra. Such structures are now called Kac algebras. The generalization to arbitrary fields appears in recent work by A. Masuoka [Ma1,2], where it is also used to show that certain groups of Hopf algebra extensions are trivial. 
Masuoka also obtained a version of the Kac sequence for matched pairs of Lie bialgebras [Ma3], as well as a new exact sequence involving the group of quasi Hopf algebra extensions of a finite dimensional abelian Singer pair [Ma4]. In this paper we introduce a cohomology theory for general abelian matched pairs $(T,N,\mu,\nu)$, consisting of two cocommutative Hopf algebras acting compatibly on each other with bismash product $H=N\bowtie T$, and obtain a general Kac sequence $$\begin{array}{l} 0\to H^1(H,A)\to H^1(T,A)\oplus H^1(N,A)\to \mathcal H^1(T,N,A) \to H^2(H,A) \\ \to H^2(T,A)\oplus H^2(N,A)\to \mathcal H^2(T,N,A)\to H^3(H,A)\to ... \end{array}$$ relating the cohomology $\mathcal H^*(T,N,A)$ of the matched pair with coefficients in a module algebra $A$ to the Sweedler cohomologies of the Hopf algebras involved. For trivial coefficients the maps in the low degree part of the sequence are described explicitly. If $T$ is finite-dimensional then abelian matched pairs $(T,N,\mu ,\nu )$ are in bijective correspondence with abelian Singer pairs $(N,T^*)$, and we get a natural isomorphism $\mathcal H^*(T,N,k)\cong H^*(N,T^*)$ between the cohomology of the abelian matched pair and that of the corresponding abelian Singer pair. In particular, together with results from [Ho] one obtains $\mathcal{H}^1(T,N,k)\cong H^1(N,T^*)\cong \operatorname{Aut} (T^*\# N)$ and $\mathcal H^2(T,N,k)\cong H^2(N,T^*)\cong \operatorname{Opext} (N,T^*)$. The sequence gives information about extensions of cocommutative Hopf algebras by commutative ones. It can also be used in certain cases to compute the (low degree) cohomology groups of Hopf algebras. Such a sequence can of course not exist for non-abelian matched pairs, at least if the sequence is to consist of groups and not just pointed sets as in [Sch]. Together with the five term exact sequence for a smash product of Hopf algebras $H=N\rtimes T$ [M2], generalizing that of K.
Tahara [Ta] for a semi-direct product of groups, $$\begin{array}{l} 1\to H^1_{meas}(T,\operatorname{Hom} (N,A))\to \tilde H^2(H,A)\to H^2(N,A)^T \\ \to H^2_{meas}(T,\operatorname{Hom}(N,A))\to \tilde H^3(H,A)\end{array}$$ it is possible in principle to give a procedure to compute the second cohomology group of any abelian matched pair of pointed Hopf algebras over a field of characteristic zero with a finite group of points and a reductive Lie algebra of primitives. In Section 1 abelian Singer pairs of Hopf algebras are reviewed. In particular, we discuss the cohomology of an abelian Singer pair, Sweedler cohomology and Hopf algebra extensions [Si, Sw1]. In Section 2 abelian matched pairs of Hopf algebras are discussed. We introduce a cohomology theory for an abelian matched pair of Hopf algebras with coefficients in a commutative module algebra, and in Section 4 we see how it compares to the cohomology of a Singer pair. The generalized Kac sequence for an abelian matched pair of Hopf algebras is presented in Section 5. The homomorphisms in the low degree part of the sequence are given explicitly, so as to make it possible to use them in explicit calculations of groups of Hopf algebra extensions and low degree Sweedler cohomology groups. Section 6 examines how the tools introduced, combined with some additional observations, can be used to describe explicitly the second cohomology group of some abelian matched pairs. In the appendix some results from (co-)simplicial homological algebra used in the main body of the paper are presented. Throughout the paper ${_H\mathcal V}$, ${_H\mathcal A}$ and ${_H\mathcal C}$ denote the categories of left $H$-modules, $H$-module algebras and $H$-module coalgebras, respectively, for the Hopf algebra $H$ over the field $k$. Similarly, $\mathcal V^H$, $\mathcal A^H$ and $\mathcal C^H$ stand for the categories of right $H$-comodules, $H$-comodule algebras and $H$-comodule coalgebras, respectively.
We use the Sweedler sigma notation for comultiplication: $\Delta(c)=c_1\otimes c_2$, $(1\otimes\Delta)\Delta(c)=c_1\otimes c_2\otimes c_3$ etc. In the cocommutative setting the indices are clear from the context and we will omit them whenever convenient. If $V$ is a vector space, then $V^n$ denotes its $n$-fold tensor power. \section{Cohomology of an abelian Singer pair} \subsection{Singer pairs} Let $(B,A)$ be a pair of Hopf algebras together with an action $\mu\colon B\otimes A\to A$ and a coaction $\rho\colon B\to B\otimes A$ so that $A$ is a $B$-module algebra and $B$ is an $A$-comodule coalgebra. Then $A\otimes B$ can be equipped with the cross product algebra structure as well as the cross product coalgebra structure. To ensure compatibility of these structures, i.e: to get a Hopf algebra, further conditions on $(B,A,\mu ,\rho)$ are necessary. These are most easily expressed in terms of the action of $B$ on $A\otimes A$, twisted by the coaction of $A$ on $B$, $$\mu_2=(\mu\otimes\operatorname{m}_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes 1\otimes 1)\colon B\otimes A\otimes A\to A\otimes A,$$ i.e: $b(a\otimes a')=b_{1B}(a)\otimes b_{1A}\cdot b_2(a')$, and the coaction of $A$ on $B\otimes B$, twisted by the action of $B$ on $A$, $$\rho_2=(1\otimes 1\otimes\operatorname{m}_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes\rho )\colon B\otimes B\to B\otimes B\otimes A,$$ i.e: $\rho_2(b\otimes b')=b_{1B}\otimes b'_B\otimes b_{1A}\cdot b_2(b'_A)$. Observe that for trivial coaction $\rho\colon B\to B\otimes A$ one gets the ordinary diagonal action of $B$ on $A\otimes A$, and for trivial action $\mu\colon B\otimes A\to A$ the diagonal coaction of $A$ on $B\otimes B$. \begin{Definition} The pair $(B,A,\mu ,\rho )$ is called an abelian Singer pair if $A$ is commutative, $B$ is cocommutative and the following are satisfied.
\begin{enumerate} \item $(A,\mu)$ is a $B$-module algebra (i.e: an object of $_B\mathcal A$), \item $(B,\rho )$ is an $A$-comodule coalgebra (i.e: an object of $\mathcal C^A$), \item $\rho\operatorname{m}_B=(\operatorname{m}_B\otimes 1)\rho_2$, i.e: the diagram $$\begin{CD} B\otimes B @>\operatorname{m}_B >> B \\ @V\rho_2 VV @V\rho VV \\ B\otimes B\otimes A @>\operatorname{m}_B\otimes 1 >> B\otimes A \end{CD}$$ commutes, \item $\Delta_A\mu =\mu_2(1\otimes\Delta_A)$, i.e: the diagram $$\begin{CD} B\otimes A @>\mu >> A \\ @V1\otimes\Delta_A VV @V\Delta_A VV \\ B\otimes A\otimes A @>\mu_2 >> A\otimes A \end{CD}$$ commutes. \end{enumerate} \end{Definition} The twisted action of $B$ on $A^n$ and the twisted coaction of $A$ on $B^n$ can now be defined inductively: $$\mu_{n+1}=(\mu_n\otimes\operatorname{m}_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes 1^n\otimes 1) \colon B\otimes A^{n}\otimes A\to A^{n}\otimes A$$ with $\mu_1=\mu$ and $$\rho_{n+1}=(1\otimes 1^n\otimes\operatorname{m}_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes\rho_n)\colon B\otimes B^n\to B\otimes B^n\otimes A$$ with $\rho_1=\rho$. \subsection{(Co-)modules over Singer pairs} It is convenient to introduce the abelian category $_B\mathcal V^A$ of triples $(V,\omega ,\lambda)$, where \begin{enumerate} \item $\omega\colon B\otimes V\to V$ is a left $B$-module structure, \item $\lambda\colon V\to V\otimes A$ is a right $A$-comodule structure and \item the two equivalent diagrams $$\begin{CD} B\otimes V @>\omega >> V @. \quad \quad @. B\otimes V @>\omega >> V \\ @V1\otimes\lambda VV @V\lambda VV @. @V\lambda_{B\otimes V} VV @V\lambda VV \\ B\otimes V\otimes A @>\omega_{V\otimes A} >> V\otimes A @. \quad \quad @.
B\otimes V\otimes A @>\omega \otimes 1>> V\otimes A \end{CD}$$ commute, where the twisted action $\omega_{V\otimes A}\colon B\otimes V\otimes A\to V\otimes A$ of $B$ on $V\otimes A$ is given by $\omega_{V\otimes A}=(\omega\otimes \operatorname{m}_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes 1\otimes 1)$ and the twisted coaction $\lambda_{B\otimes V}\colon B\otimes V\to B\otimes V\otimes A$ of $A$ on $B\otimes V$ by $\lambda_{B\otimes V}=(1\otimes 1\otimes\operatorname{m}_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes\lambda )$. \end{enumerate} The morphisms are $B$-linear and $A$-colinear maps. Observe that $(B,\operatorname{m}_B,\rho )$, $(A,\mu ,\Delta_A)$ and $(k, \epsilon_B\otimes 1, 1\otimes\iota_A)$ are objects of ${_B\mathcal V}^A$. Moreover, $( {_B\mathcal V}^A, \otimes , k)$ is a symmetric monoidal category, so that commutative algebras and cocommutative coalgebras are defined in $( {_B\mathcal V}^A, \otimes , k)$. The free functor $F\colon \mathcal V^A\to {{_B\mathcal V}^A}$, defined by $F(X,\alpha )=(B\otimes X, \alpha_{B\otimes X})$ with twisted $A$-coaction $\alpha_{B\otimes X}=(1\otimes 1\otimes\operatorname{m}_A(1\otimes\mu))(14235)((\rho\otimes 1)\Delta_B\otimes\alpha )$ is left adjoint to the forgetful functor $U\colon {_B\mathcal V^A}\to \mathcal V^A$, with natural isomorphism $\theta\colon {_B\mathcal V^A}(FM,N)\to \mathcal V^A(M,UN)$ given by $\theta(f)(m)=f(1\otimes m)$ and $\theta^{-1}(g)(n\otimes m)=\mu_N(n\otimes g(m))$.
The unit $\eta_M\colon M\to UF(M)$ and the counit $\epsilon_N\colon FU(N)\to N$ of the adjunction are given by $\eta_M=\iota_B\otimes 1$ and $\epsilon_N=\mu_N$, respectively, and give rise to a comonad $\mathbf{G}=(FU,\epsilon, \delta=F\eta U).$ Similarly, the cofree functor $L\colon {_B\mathcal V}\to {_B\mathcal V}^A$, defined by $L(Y,\beta )=(Y\otimes A, \beta_{Y\otimes A})$ with twisted $B$-action $\beta_{Y\otimes A}=(\beta\otimes\operatorname{m}_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes 1\otimes 1)$ is right adjoint to the forgetful functor $U\colon {_B\mathcal V}^A\to {_B\mathcal V}$, with natural isomorphism $\psi\colon {_B\mathcal V}(UM,N)\to {_B\mathcal V}^A(M,LN)$ given by $\psi (g)=(1\otimes g)\delta_M$ and $\psi^{-1}(f)=(1\otimes\epsilon_A)f$. The unit $\eta_M\colon M\to LU(M)$ and the counit $\epsilon_N\colon UL(N)\to N$ of the adjunction are given by $\eta_M=\delta_M$ and $\epsilon_N=1\otimes\epsilon_A$, respectively. They give rise to a monad (or triple) $\mathbf{T}=(LU,\eta ,\mu =L\epsilon U)$ on $_B\mathcal{V}^A$. The (non-commutative) square of functors $$\begin{CD} \mathcal V @>L>> \mathcal V^A \\ @VFVV @VFVV \\ {_B\mathcal V} @>L>> {_B\mathcal V}^A \end{CD}$$ together with the corresponding forgetful adjoint functors describes the situation. Observe that $ {_B\mathcal V}^A(G(M),T(N))\cong \mathcal V(UM,UN)$. These adjunctions, monads and comonads restrict to coalgebras and algebras. 
\subsection{Cohomology of an abelian Singer pair} The comonad $\mathbf G=(FU,\epsilon ,\delta =F\eta U)$ defined on ${_B\mathcal V}^A$ can be used to construct $B$-free simplicial resolutions $\mathbf X_B(N)$ with $X_n(N)=G^{n+1}N=B^{n+1}\otimes N$, faces and degeneracies $$\partial_i=G^i\epsilon_{G^{n+1-i}(N)}\colon X_{n+1}\to X_n, \quad s_i=G^i\delta_{G^{n-i}(N)}\colon X_n\to X_{n+1}$$ given by $\partial_i =1^i\otimes\operatorname{m}_B\otimes 1^{n+1-i}$ for $0\leq i\leq n$, $\partial_{n+1}=1^{n+1}\otimes\mu_N$, and $s_i=1^i\otimes\iota_B\otimes 1^{n+2-i}$ for $0\leq i\leq n$. The monad $\mathbf T=(LU,\eta ,\mu =L\epsilon U)$ on ${_B\mathcal V}^A$ can be used to construct $A$-cofree cosimplicial resolutions $\mathbf{Y}_A(M)$ with $Y_A^n(M)=T^{n+1}M=M\otimes A^{n+1}$, cofaces and codegeneracies $$\partial^i=T^{n+1-i}\eta_{T^i(M)}\colon Y^n\to Y^{n+1} \quad ,\quad s^i=T^{n-i}\mu_{T^i(M)}\colon Y^{n+1}\to Y^{n}$$ given by $\partial^0=\delta_M\otimes 1^{n+1}$, $\partial^i=1^{i-1}\otimes\Delta_A\otimes 1^{n+2-i}$ for $1\leq i\leq n+1$, and $s^i=1^{i+1}\otimes\epsilon_A\otimes 1^{n+1-i}$ for $0\leq i\leq n$. The total right derived functor of $$ {_B\operatorname{Reg}^A}=\mathcal{U} {_B\operatorname{Hom}^A}\colon ({_B\mathcal C^A})^{op}\times {_B\mathcal A^A}\to \operatorname{Ab}$$ is now defined by means of the simplicial $\mathbf G$-resolutions $\mathbf X_B(M)=\mathbf G^{*+1}M$ and the cosimplicial $\mathbf T$-resolutions $\mathbf Y_A(N)=\mathbf T^{*+1}N$ as $$R^*({_B\operatorname{Reg}^A})(M,N)=H^*(\operatorname{Tot} {_B\operatorname{Reg}^A}(\mathbf X_B(M),\mathbf Y_A(N))).$$ \begin{Definition}\label{d12} The cohomology of a Singer pair $(B,A,\mu ,\rho)$ is given by $$H^*(B,A)=H^{*+1}(\operatorname{Tot}\mathbf Z_0)$$ where $\mathbf Z_0$ is the double cochain complex obtained from the double cochain complex $\mathbf Z={_B\operatorname{Reg}^A}(\mathbf X(k),\mathbf Y(k))$ by deleting the $0^{th}$ row and the $0^{th}$ column.
\end{Definition} \subsection{The normalized standard complex} Use the natural isomorphism $${_B\mathcal V}^A(FU(M),LU(N))\cong \mathcal V(UM,UN)$$ to get the standard double complex $$Z^{m,n}=({_B\operatorname{Reg}^A}(G^{m+1}(k), T^{n+1}(k)),\partial',\partial)\cong ( \operatorname{Reg}(B^{m},A^{n}),\partial', \partial).$$ For computational purposes it is useful to replace this complex by the normalized standard complex $Z_+$, where $Z_+^{m,n}=\operatorname{Reg}_+(B^m,A^n)$ is the intersection of the kernels of the degeneracies, consisting of all convolution invertible maps $f\colon B^m\to A^n$ satisfying $f(1\otimes\ldots\otimes\eta\varepsilon\otimes\ldots\otimes 1)=\eta\varepsilon$ and $(1\otimes\ldots\otimes\eta\varepsilon\otimes\ldots\otimes 1)f=\eta\varepsilon$. In more detail, the normalized standard double complex is of the form \begin{small} $$\begin{CD} \operatorname{Reg}_+(k,k) @>\partial^{0,0}>> \operatorname{Reg}_+(B,k) @>\partial^{1,0}>> \operatorname{Reg}_+(B^2,k) @>\partial^{2,0}>>\operatorname{Reg}_+(B^3,k)\dots \\ @V\partial_{0,0}VV @V\partial_{1,0}VV @V\partial_{2,0}VV @V\partial_{3,0}VV \\ \operatorname{Reg}_+(k,A) @>\partial^{0,1}>>\operatorname{Reg}_+(B,A) @>\partial^{1,1}>>\operatorname{Reg}_+(B^2,A) @>\partial^{2,1}>>\operatorname{Reg}_+(B^3,A)\dots \\ @V\partial_{0,1}VV @V\partial_{1,1}VV @V\partial_{2,1}VV @V\partial_{3,1}VV \\ \operatorname{Reg}_+(k,A^2) @>\partial^{0,2}>>\operatorname{Reg}_+(B,A^2) @>\partial^{1,2}>>\operatorname{Reg}_+(B^2,A^2) @>\partial^{2,2}>>\operatorname{Reg}_+(B^3,A^2)\dots \\ @V\partial_{0,2}VV @V\partial_{1,2}VV @V\partial_{2,2}VV @V\partial_{3,2}VV \\ \operatorname{Reg}_+(k,A^3) @>\partial^{0,3}>>\operatorname{Reg}_+(B,A^3) @>\partial^{1,3}>>\operatorname{Reg}_+(B^2,A^3) @>\partial^{2,3}>>\operatorname{Reg}_+(B^3,A^3)\dots \\ @V\partial_{0,3}VV @V\partial_{1,3}VV @V\partial_{2,3}VV @V\partial_{3,3}VV \\ \vdots @. \vdots @. \vdots @.
\vdots \end{CD}$$ \end{small} The coboundary maps $$d_{n,m}^i\colon \operatorname{Reg}_+(B^n,A^m)\to \operatorname{Reg}_+(B^{n+1},A^m)$$ defined by $$d_{n,m}^0\alpha =\mu_m(1_B\otimes \alpha),\ d_{n,m}^i\alpha =\alpha(1_{B^{i-1}}\otimes\operatorname{m}_B\otimes 1_{B^{n-i}}),\ d_{n,m}^{n+1}\alpha =\alpha\otimes\varepsilon,$$ for $1\le i\le n,$ are used to construct the horizontal differentials $$\partial_{n,m}\colon\operatorname{Reg}_+(B^n,A^m)\to \operatorname{Reg}_+(B^{n+1},A^m),$$ given by the \lq alternating' convolution product $$\partial_{n,m}\alpha=d_{n,m}^0\alpha*d_{n,m}^1\alpha^{-1}*d_{n,m}^2\alpha*\ldots *d_{n,m}^{n+1}\alpha^{(-1)^{n+1}}.$$ Dually the coboundaries $${d'}^i_{n,m}\colon\operatorname{Reg}_+(B^n,A^m)\to \operatorname{Reg}_+(B^n,A^{m+1})$$ defined by $${d'}^0_{n,m}\beta =(\beta\otimes 1_A)\rho_n,\ {d'}^i_{n,m}\beta = ( 1_{A^{i-1}}\otimes\Delta_A\otimes 1_{A^{m-i}})\beta,\ {d'}^{m+1}_{n,m}\beta =\eta\otimes\beta,$$ for $1\le i\le m$, determine the vertical differentials $$\partial^{n,m}\colon \operatorname{Reg}_+(B^n,A^m)\to \operatorname{Reg}_+(B^{n},A^{m+1}),$$ where $$\partial^{n,m}\beta={d'}^0_{n,m}\beta *{d'}^1_{n,m}\beta^{-1}*{d'}^2_{n,m}\beta*\ldots *{d'}^{m+1}_{n,m}\beta^{(-1)^{m+1}}.$$ The cohomology of the abelian Singer pair $(B,A,\mu,\rho)$ is by definition the cohomology of the total complex $$\begin{array}{l} 0\to\operatorname{Reg}_+(B,A)\to \operatorname{Reg}_+(B^2,A)\oplus\operatorname{Reg}_+(B,A^2)\to\\ \ldots\to\bigoplus_{i=1}^n\operatorname{Reg}_+(B^{n+1-i},A^i)\to \ldots \end{array}$$ There are canonical isomorphisms $H^1(B,A)\simeq \operatorname{Aut}(A\# B)$ and $H^2(B,A)\simeq \operatorname{Opext}(B,A)$ [Ho] (here $\operatorname{Opext}(B,A)=\operatorname{Opext}(B,A,\mu,\rho)$ denotes the abelian group of equivalence classes of those Hopf algebra extensions that give rise to the Singer pair $(B,A,\mu,\rho)$).
\subsection{Special cases} In particular, for $A=k=M$ and $N$ a commutative $B$-module algebra we get Sweedler cohomology of $B$ with coefficients in $N$ [Sw1] $$H^*(B,N)=H^*(\operatorname{Tot} {_B\operatorname{Reg}}(\mathbf X(k),N))=H^*(\operatorname{Tot} {_B\operatorname{Reg}}(\mathbf G^{*+1}(k),N)).$$ In [Sw1] it is also shown that if $G$ is a group and $\mathbf{g}$ is a Lie algebra, then there are canonical isomorphisms $H^n(kG,A)\simeq H^n(G,\mathcal{U}(A))$ for $n\ge 1$ and $H^m(U\mathbf{g},A)\simeq H^m(\mathbf{g},A^+)$ for $m\ge 2$, where $\mathcal{U}(A)$ denotes the multiplicative group of units and $A^+$ denotes the underlying vector space. For $B=k=N$ and $M$ a cocommutative $A$-comodule coalgebra we get the dual version [Sw1,Do] $$H^*(M,A)=H^*(\operatorname{Tot} {\operatorname{Reg}^A}(M,\mathbf Y(k)))=H^*(\operatorname{Tot} {\operatorname{Reg}^A}(M,\mathbf T^{*+1}(k))).$$ \section{Cohomology of an abelian matched pair} \subsection{Abelian matched pairs} Here we consider pairs of cocommutative Hopf algebras $(T,N)$ together with a left action $\mu\colon T\otimes N\to N$, $\mu (t\otimes n)=t(n)$, and a right action $\nu\colon T\otimes N\to T$, $\nu (t\otimes n)=t^n$. Then we have the twisted switch $$\tilde\sigma =(\mu\otimes\nu )\Delta_{T\otimes N}\colon T\otimes N\to N\otimes T$$ or, in shorthand $\tilde\sigma (t\otimes n)=t_1(n_1)\otimes t_2^{n_2}$, which in case of trivial actions reduces to the ordinary switch $\sigma\colon T\otimes N\to N\otimes T$. 
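A guiding example, standard in the literature on matched pairs and implicit in the discussion of Kac's work in the introduction, is an exact factorization of a group; we record it here only as an illustration. If a group $G$ factorizes as $G=NT$ with $N\cap T=\{1\}$, then every product $tn$ rewrites uniquely as

```latex
tn \;=\; t(n)\, t^{n},\qquad t(n)\in N,\ t^{n}\in T ,
```

and extending $t\otimes n\mapsto t(n)$ and $t\otimes n\mapsto t^{n}$ linearly yields an abelian matched pair $(kT,kN,\mu,\nu)$ of group algebras with bismash product $kN\bowtie kT\cong kG$; the compatibility conditions $t(nm)=t(n)\,t^{n}(m)$ and $(ts)^{n}=t^{s(n)}s^{n}$ are exactly associativity in $G$.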
\begin{Definition} Such a configuration $(T,N,\mu ,\nu )$ is called an abelian matched pair if \begin{enumerate} \item $N$ is a left $T$-module coalgebra, i.e: $\mu\colon T\otimes N\to N$ is a coalgebra map, \item $T$ is a right $N$-module coalgebra, i.e: $\nu\colon T\otimes N\to T$ is a coalgebra map, \item $N$ is a left $T$-module algebra with respect to the twisted left action $\tilde\mu =(1\otimes\mu )(\tilde\sigma\otimes 1)\colon T\otimes N\otimes N\to N$, in the sense that the diagrams $$\begin{CD} T\otimes N\otimes N @>1\otimes\operatorname{m}_N>> T\otimes N @. \quad \quad @. T\otimes k @>1\otimes\iota_N>> T\otimes N \\ @V\tilde\mu VV @V\mu VV @. @V\epsilon_T\otimes 1VV @V\mu VV \\ N\otimes N @>m_N>> N @. \quad \quad @. k @>\iota_N>> N \end{CD}$$ commute, i.e: $\mu (t\otimes nm)=\sum \mu (t_1\otimes n_1)\mu (\nu (t_2\otimes n_2)\otimes m)$ and $\mu (t\otimes 1)=\epsilon (t)1_N$, or in shorthand $t(nm)=t_1(n_1)t_2^{n_2}(m)$ and $t(1_N)=\epsilon (t)1_N$, \item $T$ is a right $N$-module algebra with respect to the twisted right action $\tilde\nu =(\nu\otimes 1 )(1\otimes\tilde\sigma )\colon T\otimes T\otimes N\to T\otimes T$, in the sense that the diagrams $$\begin{CD} T\otimes T\otimes N @>m_T\otimes 1>> T\otimes N @. \quad \quad @. k\otimes N @>\iota_T\otimes 1>> T\otimes N \\ @V\tilde\nu VV @V\nu VV @. @V1\otimes\epsilon_NVV @V\nu VV \\ T\otimes T @>m_T>> T @. \quad \quad @. 
k @>\iota_T>> T \end{CD}$$ commute, i.e: $\nu (ts\otimes n)=\sum \nu (t\otimes\mu (s_1\otimes n_1))\nu (s_2\otimes n_2)$ and $\nu (1_T\otimes n)=\epsilon (n)1_T$, or in shorthand $(ts)^n=t^{s_1(n_1)}s_2^{n_2}$ and $1_T^n=\epsilon (n)1_T$ \end{enumerate} \end{Definition} The bismash product Hopf algebra $(N\bowtie T,\operatorname{m},\Delta, \iota ,\epsilon ,S )$ is the tensor product coalgebra $N\otimes T$ with unit $\iota_{N\otimes T}\colon k\to N\otimes T$, twisted multiplication $$m=(m\otimes\operatorname{m})(1\otimes\tilde\sigma\otimes 1)\colon N\otimes T\otimes N\otimes T\to N\otimes T,$$ in short $\tilde\sigma (t\otimes n)=t_1(n_1)\otimes t_2^{n_2}$, $(n\otimes t)(m\otimes s)=nt_1(m_1)\otimes t_2^{m_2}s$, and antipode $$S=\tilde\sigma (S\otimes S)\sigma\colon N\otimes T\to N\otimes T,$$ i.e: $S(n\otimes t)=S(t_2)(S(n_2))\otimes S(t_1)^{S(n_1)}$. For a proof that this is a Hopf algebra see [Kas]. To avoid ambiguity we will often write $n\bowtie t$ for $n\otimes t$ in $N\bowtie T$. We also identify $N$ and $T$ with the Hopf subalgebras $N\bowtie k$ and $k\bowtie T$, respectively, i.e: $n\equiv n\bowtie 1$ and $t\equiv 1\bowtie t$. In this sense we write $n\bowtie t=nt$ and $tn=t_1(n_1)t_2^{n_2}$. If the action $\nu\colon T\otimes N\to T$ is trivial, then the bismash product $N\bowtie T$ becomes the smash product (or semi-direct product) $N\rtimes T$. An action $\mu\colon T\otimes N\to N$ is compatible with the trivial action $1\otimes\epsilon\colon T\otimes N\to T$, i.e: $(T,N,\mu , 1\otimes\epsilon )$ is a matched pair, if and only if $N$ is a $T$-module bialgebra and $\mu (t_1\otimes n)\otimes t_2=\mu (t_2\otimes n)\otimes t_1$. Note that the last condition is trivially satisfied if $T$ is cocommutative. To make calculations more transparent we start to use the abbreviated Sweedler sigma notation for the cocommutative setting whenever convenient. \begin{Lemma}[{[Ma3]}, Proposition 2.3]\label{l22} Let $(T,N,\mu,\nu)$ be an abelian matched pair. 
\begin{enumerate} \item A left $T$-module, left $N$-module $(V,\alpha ,\beta )$ is a left $N\bowtie T$-module if and only if $t(nv)=t(n)(t^{n}(v))$, i.e: if and only if with the twisted action $\tilde\alpha =(1\otimes\alpha )(\tilde\sigma\otimes 1)\colon T\otimes N\otimes V\to N\otimes V$ the square $$\begin{CD} T\otimes N\otimes V @>1\otimes\beta >> T\otimes V \\ @V\tilde\alpha VV @V\alpha VV \\ N\otimes V @> \beta >> V \end{CD}$$ commutes. \item A right $T$-module, right $N$-module $(W,\alpha ,\beta )$ is a right $N\bowtie T$-module if and only if $(w^t)^n=(w^{t(n)})^{t^{n}}$, i.e: if and only if with the twisted action $\tilde\beta =(\beta\otimes 1)(1\otimes\tilde\sigma )\colon W\otimes T\otimes N\to W\otimes T$ the square $$\begin{CD} W\otimes T\otimes N @>\alpha\otimes 1 >> W\otimes N \\ @V\tilde\beta VV @V\beta VV \\ W\otimes T @> \alpha >> W \end{CD}$$ commutes. \item Let $(V, \alpha )$ be a left $T$-module and $(W, \beta )$ a right $N$-module. Then \begin{enumerate} \item [(i)] $N\otimes V$ is a left $N\bowtie T$-module with $N$-action on the first factor and $T$-action given by $$\tilde\alpha =(1\otimes\alpha )\tilde\sigma\colon T\otimes N\otimes V\to N\otimes V,$$ that is $t(n\otimes v)= t_1(n_1)\otimes t_2^{n_2}(v)$. \item[(ii)] $W\otimes T$ is a right $N\bowtie T$-module with $T$-action on the right factor and $N$-action given by $$\tilde\beta =(\beta\otimes 1)(1\otimes\tilde\sigma )\colon W\otimes T\otimes N\to W\otimes T,$$ that is $(w\otimes t)^n= w^{t_2(n_2)}\otimes t_1^{n_1}$. Moreover, $W\otimes T$ is a left $N\bowtie T$-module by twisting the action via the antipode of $N\bowtie T$.
\item[(iii)] The map $\psi\colon (N\bowtie T)\otimes V\otimes W\to (W\otimes T)\otimes (N\otimes V)$ defined by $\psi ((n\bowtie t)\otimes v\otimes w)=w^{S(t)(S(n))}\otimes S(t)^{S(n)}\otimes n\otimes tv$, is an $N\bowtie T$-homomorphism, when $N\bowtie T$ is acting on the first factor of $(N\bowtie T)\otimes V\otimes W$ and diagonally on $(W\otimes T)\otimes (N\otimes V)$\\ $(nt)(w\otimes s\otimes m\otimes v)=w^{(sS(t))(S(n))}\otimes (sS(t))^{S(n)}\otimes nt(m)\otimes t^m(v).$\\ In particular, $(W\otimes T)\otimes (N\otimes V)$ is a free left $N\bowtie T$-module in which any basis of the vector space $(W\otimes k)\otimes (k\otimes V)$ is an $N\bowtie T$-free basis. \end{enumerate} \end{enumerate} \end{Lemma} Observe that the inverse of $\psi\colon (N\bowtie T)\otimes V\otimes W\to (W\otimes T)\otimes (N\otimes V)$ is given by $$\psi^{-1}((w\otimes t)\otimes (n\otimes v))=(n\bowtie S(t^n))\otimes (w^{t(n)}\otimes t^n(v)).$$ The twisted actions can now be extended by induction to higher tensor powers $$\mu_{p+1}=(1\otimes\mu_p)(\tilde\sigma\otimes 1^p)\colon T\otimes N^{p+1}\to N^{p+1}$$ so that $\mu_{p+1}(t\otimes n\otimes\mathbf m)=\mu (t\otimes n)\otimes \mu_p(\nu (t\otimes n)\otimes\mathbf m)$, $t(n\otimes\mathbf m)=t(n)\otimes t^{n}(\mathbf m)$ and $$\nu_{q+1}=(\nu_q\otimes 1)(1^q\otimes\tilde\sigma )\colon T^{q+1}\otimes N\to T^{q+1}$$ so that $\nu_{q+1}(\mathbf t\otimes s\otimes n)=\nu_q(\mathbf t\otimes\mu (s\otimes n))\otimes \nu (s\otimes n)$, $(\mathbf t\otimes s)^n={\mathbf t}^{s(n)}\otimes s^{n}$. Observe that the squares $$\begin{CD} T\otimes N^{p+1} @>\mu_{p+1}>> N^{p+1} @. \quad \quad @. T^{q+1}\otimes N @>\nu_{q+1}>> T^{q+1} \\ @V1\otimes fVV @VfVV @. @Vg\otimes 1VV @VgVV \\ T\otimes N^p @>\mu_p>> N^p @. \quad \quad @. T^q\otimes N @>\nu_q>> T^q \end{CD}$$ commute when $f=1^{i-1}\otimes\operatorname{m}_N\otimes 1^{p-i}$ for $1\leq i\leq p$ and $g=1^{j-1}\otimes\operatorname{m}_T\otimes 1^{q-j}$ for $1\leq j\leq q$, respectively.
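To fix ideas, we record the standard group-theoretic instance of these constructions; it serves only as an illustration and is not used in the sequel.

\begin{Remark}
Suppose a group $L$ admits an exact factorization $L=GF$ into subgroups $F,G\le L$ with $F\cap G=\{1\}$. Rewriting products in the opposite order, $fg=f(g)\,f^g$ with uniquely determined $f(g)\in G$ and $f^g\in F$, yields mutual actions of $F$ and $G$ on each other whose linear extensions make $(kF,kG,\mu,\nu)$ an abelian matched pair of cocommutative Hopf algebras. The bismash product is then the group algebra of the Zappa--Sz\'ep (knit) product, $kG\bowtie kF\simeq k(G\bowtie F)$, and the twisted multiplication $(g\bowtie f)(g'\bowtie f')=gf(g')\bowtie f^{g'}f'$ simply records the identity $gfg'f'=g\,f(g')\,f^{g'}f'$ in $L$.
\end{Remark}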
By part 3 (iii) of the lemma above $T^{i+1}\otimes N^{j+1}$ can be equipped with the $N\bowtie T$-module structure defined by $(nt)(\mathbf r\otimes s\otimes m\otimes\mathbf k)=\mathbf r^{(sS(t))(S(n))}\otimes (sS(t))^{S(n)}\otimes nt(m)\otimes t^m(\mathbf k)$. \begin{Corollary}\label{c31} The map $\psi\colon (N\bowtie T)\otimes T^i\otimes N^j\to T^{i+1}\otimes N^{j+1}$, defined by $\psi ((nt)\otimes (\mathbf r\otimes\mathbf k))= \mathbf r^{S(t)(S(n))}\otimes S(t)^{S(n)}\otimes n\otimes t(\mathbf k)$, is an isomorphism of $N\bowtie T$-modules. \end{Corollary} The content of Lemma \ref{l22} can be summarized in the square of \lq free\rq\ functors between monoidal categories $$\begin{CD} \mathcal V @> F_T>> {_T\mathcal V} \\ @VF_NVV @V\tilde F_NVV \\ _N\mathcal V @>\tilde F_T>> {_{N\bowtie T}\mathcal V} \end{CD}$$ each with a corresponding tensor preserving right adjoint forgetful functor. \subsection{The distributive law of a matched pair} The two comonads on $ {_{N\bowtie T}\mathcal V}$ given by $$\tilde{\mathbf G_T}=(\tilde G_T,\delta_T, \epsilon_T) \quad , \quad \tilde{\mathbf G_N}=(\tilde{G_N},\delta_N, \epsilon_N)$$ with $\tilde{G_T}=\tilde{F_T}\tilde{U_T}$, $\delta_T(t\otimes x)=t\otimes 1\otimes x$, $\epsilon_T(t\otimes x)=tx$, and with $\tilde{G_N}=\tilde{F_N}\tilde{U_N}$, $\delta_N(n\otimes x)=n\otimes 1\otimes x$, $\epsilon_N(n\otimes x)=nx$, satisfy a distributive law [Ba] $$\tilde\sigma\colon \tilde{G_T}\tilde{G_N}\to \tilde{G_N}\tilde{G_T}$$ given by $\tilde\sigma (t\otimes n\otimes -)=\tilde\sigma (t\otimes n)\otimes - =t_1(n_1)\otimes t_2^{n_2}\otimes - $.
The equations for a distributive law $$\tilde{G_N}\delta_T\cdot\tilde\sigma =\tilde\sigma\tilde{G_T}\cdot \tilde{G_T}\tilde\sigma\cdot\delta_T\tilde{G_N} \quad , \quad \delta_N\tilde{G_T}\cdot\tilde\sigma =\tilde{G_N}\tilde\sigma\cdot\tilde\sigma\tilde{G_N}\cdot\tilde{G_T}\delta_N$$ and $$\epsilon_N\tilde{G_T}\cdot\tilde\sigma =\tilde{G_T}\epsilon_N \quad , \quad \tilde{G_N}\epsilon_T\cdot\tilde\sigma =\epsilon_T\tilde{G_N}$$ are easily verified. \begin{Proposition}[{[Ba]}, Th. 2.2] The composite $$\mathbf G=\mathbf G_N\circ_{\tilde\sigma}\mathbf G_T,$$ with underlying functor $G=G_NG_T$, comultiplication $\delta =G_N\tilde\sigma G_T\cdot\delta_N\delta_T$ and counit $\epsilon =\epsilon_N\epsilon_T$, is again a comonad on $ {_{N\bowtie T}\mathcal V}$. Moreover, $\mathbf G=\mathbf G_{N\bowtie T}$. \end{Proposition} The antipode can be used to define a left action $$\nu_S=S\nu (S\otimes S)\sigma\colon N\otimes T\to T$$ by $n(t)=\nu_S(n\otimes t)=S\nu (S\otimes S)\sigma (n\otimes t)=S(S(t)^{S(n)})$ and a right action $$\mu_S=S\mu (S\otimes S)\sigma\colon N\otimes T\to N$$ by $n^t=\mu_S(n\otimes t)=S\mu (S\otimes S)\sigma (n\otimes t)=S(S(t)(S(n)))$. The inverse of the twisted switch is then $$\tilde\sigma^{-1}=(\nu_S\otimes\mu_S)\Delta_{N\otimes T}\colon N\otimes T\to T\otimes N$$ given by $\tilde\sigma^{-1}(n\otimes t)=n_1(t_1)\otimes n_2^{t_2}$, and induces the inverse distributive law $$\tilde\sigma^{-1}\colon G_NG_T\to G_TG_N.$$ \vskip .5cm \subsection{Matched pair cohomology}\label{s23} For every cocommutative Hopf algebra $H$ the category of $H$-modules ${_H\mathcal V}$ is symmetric monoidal. The tensor product of two $H$-modules $V$ and $W$ has underlying vector space the ordinary vector space tensor product $V\otimes W$ and diagonal $H$-action. Algebras and coalgebras in $_H\mathcal V$ are known as $H$-module algebras and $H$-module coalgebras, respectively. The adjoint functors and comonads of the last section therefore restrict to the situations where $\mathcal V$ is replaced by $\mathcal C$ or $\mathcal A$.
In particular, if $(T,N,\mu ,\nu )$ is an abelian matched pair, $H=N\bowtie T$ and $C$ is an $H$-module coalgebra then $\mathbf X_H(C)$ is a canonical simplicial free $H$-module coalgebra resolution of $C$ and by Corollary \ref{c31} the composite $\mathbf X_N(\mathbf X_T(C))$ is a simplicial double complex of free $H$-module coalgebras. \begin{Definition}\label{d25} The cohomology of an abelian matched pair $(T,N,\mu ,\nu )$ with coefficients in the commutative $N\bowtie T$-module algebra $A$ is defined by $$\mathcal{H}^*(T,N,A)=H^{*+1}(\operatorname{Tot} (\mathbf B_0)),$$ where $\mathbf B_0$ is the double cochain complex obtained from the double cochain complex $\mathbf B={_{N\bowtie T}\operatorname{Reg}}(\mathbf X_N(\mathbf X_T(k)),A)$ by deleting the $0^{th}$ row and the $0^{th}$ column. \end{Definition} \subsection{The normalized standard complex} Let $H=N\bowtie T$ be a bismash product of an abelian matched pair of Hopf algebras and let the algebra $A$ be a left $N$- and a right $T$-module such that it is a left $H$-module via $nt(a)=n({a^{S(t)}})$, i.e. $(n(a))^{S(t)}=(t(n))(a^{S(t^n)}).$ Note that $\operatorname{Hom}(T^{p},A)$ becomes a left $N$-module via $n(f)({\mathbf t})=n(f(\nu_p({\mathbf t},n)))$ and $\operatorname{Hom}(N^{q},A)$ becomes a right $T$-module via $f^t({\mathbf n})=(f(\mu_q(t,{\mathbf n})))^t= S(t)(f(\mu_q(t,{\mathbf n})))$.
The simplicial double complex $G_T^pG_N^q(k)=(T^{p}\otimes N^{q})_{p,q}$, $p,q\ge 1$ of free $H$-modules has horizontal face operators $1\otimes d_N^*\colon T^{p}\otimes N^{q+1}\to T^{p}\otimes N^{q}$, vertical face operators $d_T^*\otimes 1\colon T^{p+1}\otimes N^{q}\to T^{p}\otimes N^{q}$, horizontal degeneracies $1\otimes s_N^*\colon T^{p}\otimes N^{q}\to T^{p}\otimes N^{q+1}$ and vertical degeneracies $s_T^*\otimes 1\colon T^{p}\otimes N^{q}\to T^{p+1}\otimes N^{q}$, where $$d_N^i= 1^{i}\otimes\operatorname{m}\otimes 1^{q-i-1},\quad d_N^q= 1^q\otimes\varepsilon,\quad s_N^i= 1^{i}\otimes\eta\otimes 1^{q-i}$$ for $0\le i\le q-1$, and $$d_T^j= 1^{p-j-1}\otimes\operatorname{m}\otimes 1^{j},\quad d_T^p=\varepsilon\otimes 1^p,\quad s_T^j= 1^{p-j}\otimes\eta\otimes 1^{j}$$ for $0\le j\le p-1$. These maps preserve the $H$-module structure on $T^{p}\otimes N^{q}$. Apply the functor $\operatorname{_HReg}(\_,A)\colon {{_H\mathcal C}}^{op}\to \operatorname{Ab}$ to get a cosimplicial double complex of abelian groups $\mathbf B={\operatorname{_HReg}}(\mathbf X_N(\mathbf X_T(k)),A)$ with $B^{p,q}=\operatorname{_HReg}(T^{p+1}\otimes N^{q+1},A)$, coface operators $\operatorname{_HReg}(d_{N*},A)$, $\operatorname{_HReg}(d_{T*},A)$ and codegeneracies $\operatorname{_HReg}(s_{N*},A)$, $\operatorname{_HReg}(s_{T*},A)$.
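As an illustration of this freeness, consider the corner $p=q=0$: Corollary \ref{c31} gives $T\otimes N\cong H\otimes k$ as $H$-modules, so $B^{0,0}={\operatorname{_HReg}}(T\otimes N,A)\cong\operatorname{Reg}(k,A)$, and the latter is simply the group of units of $A$, since convolution on $\operatorname{Hom}(k,A)$ is just multiplication in $A$.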
The isomorphism described in Corollary \ref{c31} induces an isomorphism of double complexes $\mathbf{B}(T,N,A)\cong\mathbf{C}(T,N,A)$ given by $${\operatorname{_HReg}}(T^{p+1}\otimes N^{q+1},A)\stackrel{\operatorname{_HReg}(\psi,A)}{\longrightarrow} {\operatorname{_HReg}}(H\otimes T^{p}\otimes N^{q},A) \stackrel{\theta}{\longrightarrow} \operatorname{Reg}(T^{p}\otimes N^{q},A)$$ for $p,q\ge 0$, where $C^{p,q}=\operatorname{Reg} (T^p\otimes N^q,A)$ is the abelian group of convolution invertible linear maps $f\colon T^{p}\otimes N^{q}\to A.$ The horizontal differentials ${\delta_N}\colon C^{p,q}\to C^{p,q+1}$ and the vertical differentials ${\delta_T}\colon C^{p,q}\to C^{p+1,q}$ are transported from ${\mathbf B}$ and turn out to be the twisted Sweedler differentials on the $N$ and $T$ parts, respectively. The coface operators are $$ {\delta_N}_i f({\mathbf t}\otimes{\mathbf n})=\begin{cases} f({\mathbf t}\otimes n_1\otimes\ldots \otimes n_in_{i+1}\otimes\ldots\otimes n_{q+1}), \mbox{ for } i=1,\ldots ,q \\ n_1(f(\nu_p({\mathbf t}\otimes n_1)\otimes n_2\otimes\ldots\otimes n_{q+1})), \mbox{ for } i=0 \\ f({\mathbf t}\otimes n_1\otimes\ldots\otimes n_q)\varepsilon(n_{q+1}), \mbox{ for } i=q+1 \end{cases}$$ where ${\mathbf t}\in T^{p}$ and ${\mathbf n}=n_1\otimes\ldots\otimes n_{q+1}\in N^{q+1}$, and similarly $${\delta_T}_j f({\mathbf t}\otimes{\mathbf n})=\begin{cases} f(t_{p+1}\otimes\ldots\otimes t_{j+1}t_{j}\otimes \ldots \otimes t_{1}\otimes {\mathbf n}), \mbox{ for } j=1,\ldots ,p \\ (f(t_{p+1}\otimes\ldots \otimes t_2\otimes \mu_q(t_{1}\otimes{\mathbf n})))^{t_1}, \mbox{ for } j=0 \\ \varepsilon(t_{p+1})f(t_p\otimes\ldots\otimes t_{1}\otimes{\mathbf n}), \mbox{ for } j=p+1 \end{cases}$$ where ${\mathbf t}=t_1\otimes\ldots\otimes t_{p+1}\in T^{p+1}$ and ${\mathbf n}\in N^{q}$.
The differentials in the associated double cochain complex are the alternating convolution products $${\delta_N} f={\delta_N}_0f*{\delta_N}_1 f^{-1}*\ldots *{\delta_N}_{q+1}f^{\pm 1}$$ and $${\delta_T} f={\delta_T}_0f*{\delta_T}_1 f^{-1}*\ldots *{\delta_T}_{p+1}f^{\pm 1}.$$ In the associated normalized double complex $\mathbf C_+$, the $(p,q)$ term $C^{p,q}_+=\operatorname{Reg}_+(T^{p}\otimes N^{q},A)$ is the intersection of the degeneracy operators, that is, the abelian group of convolution invertible maps $f\colon T^{p}\otimes N^{q}\to A$ with $f(t_p\otimes\ldots\otimes t_1\otimes n_1\otimes\ldots\otimes n_q)=\varepsilon(t_p)\ldots \varepsilon(n_q)$, whenever one of the $t_i$ or one of the $n_j$ is in $k$. Then $\mathcal H^*(T,N,A)\cong H^{*+1}(\operatorname{Tot}\mathbf C_0)$, where $\mathbf C_0$ is the double complex obtained from $\mathbf C_+$ by replacing the edges by zero. The groups of cocycles ${\mathcal Z}^i(T,N,A)$ and the groups of coboundaries ${\mathcal B}^i(T,N,A)$ consist of $i$-tuples of maps $(f_j)_{1\le j\le i}$, $f_j\colon T^{j}\otimes N^{i+1-j}\to A$ that satisfy certain conditions. We introduce the subgroups ${\mathcal Z}^i_p(T,N,A)\le {\mathcal Z}^i(T,N,A)$, that are spanned by $i$-tuples in which the $f_j$'s are trivial for $j\not= p$, and subgroups ${\mathcal B}_p^i={\mathcal Z}_p^i\cap {\mathcal B}^i\subset {\mathcal B}^i$. These give rise to subgroups of cohomology groups ${\mathcal H}_p^i={\mathcal Z}_p^i/{\mathcal B}_p^i\simeq ({\mathcal Z}_p^i+{\mathcal B}^i)/{\mathcal B}^i\subseteq {\mathcal H}^i$ which have a nice interpretation when $i=2$ and $p=1,2$; see Section \ref{mcp}. \label{s24} \section{The homomorphism $\pi\colon \mathcal{H}^2(T,N,A)\rightarrow H^{1,2}(T,N,A)$} If $T$ is a finite group and $N$ is a finite $T$-group, then we have the following exact sequence [M1] $$ H^2(N,k^\bullet)\stackrel{{\delta_T}}{\to}\operatorname{Opext}(kT,k^N)\stackrel{\pi}{\to} H^1(T,H^2(N,k^\bullet)).
$$ Here we define a version of the homomorphism $\pi$ for arbitrary smash products of cocommutative Hopf algebras. We start by introducing the Hopf algebra analogue of $H^i(T,H^j(N,k^\bullet))$. For positive $i,j$ and an abelian matched pair of Hopf algebras $(T,N)$, with the action of $N$ on $T$ trivial, we define \begin{eqnarray*} Z^{i,j}(T,N,A)&=&\{\alpha\in \operatorname{Reg}_+(T^{i}\otimes N^{j},A)|{\delta_N}\alpha=\varepsilon,\; \mbox{and}\\ &&\exists \beta\in \operatorname{Reg}_+(T^{i+1}\otimes N^{j-1},A):\;{\delta_T}\alpha={\delta_N}\beta\},\\ B^{i,j}(T,N,A)&=&\{\alpha\in \operatorname{Reg}_+(T^{i}\otimes N^{j},A)|\exists \gamma\in \operatorname{Reg}_+(T^{i}\otimes N^{j-1},A),\\ &&\exists \gamma'\in \operatorname{Reg}_+(T^{i-1}\otimes N^{j},A):\; \alpha={\delta_N} \gamma*{\delta_T}\gamma'\},\\ H^{i,j}(T,N,A)&=&Z^{i,j}(T,N,A)/B^{i,j}(T,N,A). \end{eqnarray*} \begin{Remark} If $j=1$, then $$H^{i,1}(T,N,A) \simeq \mathcal{H}^i_i(T,N,A)\simeq H_{meas}^i(T,\operatorname{Hom}(N,A)),$$ where $H^i_{meas}$ denotes the measuring cohomology [M2]. \end{Remark} \begin{Proposition} If $T=kG$ is a group algebra, then there is an isomorphism $$H^i(G,H^j(N,A))\simeq H^{i,j}(kG,N,A).$$ \end{Proposition} \begin{Remark} Here the \textbf{right} action of $G$ on $H^j(N,A)$ is given by precomposition. We can obtain symmetric results in case we start with a \textbf{right} action of $T=kG$ on $N$, hence a \textbf{left} action of $G$ on $H^j(N,A)$. \end{Remark} \begin{proof}[Proof (of the proposition above)] By inspection we have \begin{eqnarray*} Z^i(G,H^j(N,A))&=&Z^{i,j}(kG,N,A)/\{\alpha\colon G\to B^j(N,A)\},\\ B^i(G,H^j(N,A))&=&B^{i,j}(kG,N,A)/\{\alpha\colon G\to B^j(N,A)\}. \end{eqnarray*} Here we identify regular maps from $(kG)^{i}\otimes N^{j}$ to $A$ with set maps from $G^{\times i}$ to $\operatorname{Reg}(N^{j},A)$ in the obvious way. \end{proof} The following is a straightforward generalization of Theorem 7.1 in [M2].
\begin{Theorem}\label{pi} The homomorphism $\pi\colon {\mathcal H}^2(T,N,A)\to H^{1,2}(T,N,A)$, induced by $(\alpha,\beta)\mapsto \alpha$, makes the following sequence $$ H^2(N,A)\oplus {\mathcal H}^2_2(T,N,A)\stackrel{{\delta_T}+\iota}{\longrightarrow}{\mathcal H}^2(T,N,A) \stackrel{\pi}{\to} H^{1,2}(T,N,A) $$ exact. \end{Theorem} \begin{proof} It is clear that $\pi{\delta_T}=0$ and obviously also $\pi({\mathcal H}^2_2)=0$. Suppose a cocycle pair $(\alpha,\beta)\in {\mathcal Z}^2(T,N,A)$ is such that $\alpha\in B^{1,2}(T,N,A)$. Then for some $\gamma\colon T\otimes N\to A$ and some $\gamma'\colon N\otimes N\to A$ we have $\alpha={\delta_N}\gamma*{\delta_T}\gamma'$, and hence $(\alpha,\beta)=({\delta_N}\gamma,\beta)*({\delta_T}\gamma',\varepsilon)\sim ({\delta_N}\gamma^{-1},{\delta_T}\gamma)*({\delta_N}\gamma,\beta)*({\delta_T}\gamma',\varepsilon)= (\varepsilon,{\delta_T}\gamma*\beta)*({\delta_T}\gamma',\varepsilon)\in {\mathcal Z}_2^2(T,N,A)*{\delta_T}(Z^2(N,A))$. \end{proof} \section{Comparison of Singer pairs and matched pairs} \subsection{Singer pairs vs. matched pairs}\label{s41} In this section we sketch a correspondence from matched pairs to Singer pairs. For more details we refer to [Ma3]. \begin{Definition} We say that an action $\mu\colon A\otimes M\to M$ is locally finite, if every orbit $A(m)=\{a(m)|a\in A\}$ is finite dimensional. \end{Definition} \begin{Lemma}[{[Mo1]}, Lemma 1.6.4]\label{Mo164} Let $A$ be an algebra and $C$ a coalgebra. \begin{enumerate} \item If $M$ is a right $C$-comodule via $\rho\colon M\to M\otimes C$, $\rho(m)= m_0\otimes m_1$, then $M$ is a left $C^*$-module via $\mu\colon C^*\otimes M\to M$, $\mu(f\otimes m)=f(m_1)m_0$. \item Let $M$ be a left $A$-module via $\mu\colon A\otimes M\to M.$ Then there is (a unique) comodule structure $\rho\colon M\to M\otimes A^\circ$, such that $(1\otimes\operatorname{ev})\rho=\mu$ if and only if the action $\mu$ is locally finite.
The coaction is then given by $\rho(m)=\sum m_i\otimes f_i$, where $\{m_i\}$ is a basis for $A(m)$ and the $f_i\in A^{\circ}\subseteq A^*$ are the coordinate functions of the action, i.e. $a(m)=\sum f_i(a)m_i$. \end{enumerate} \end{Lemma} Let $(T,N,\mu,\nu)$ be an abelian matched pair and suppose $\mu\colon T\otimes N\to N$ is locally finite. Then the lemma above gives a coaction $\rho\colon N\to N\otimes T^{\circ}$, $\rho(n)=n_N\otimes n_{T^\circ}$, such that $t(n)=\sum n_N\cdot n_{T^\circ}(t)$. There is a left action $\nu'\colon N\otimes T^*\to T^*$ given by pre-composition, i.e. $\nu'(n\otimes f)(t)=f(t^n)$. If $\mu$ is locally finite, it is easy to see that $\nu'$ restricts to $T^\circ\subseteq T^*$. \begin{Lemma}[{[Ma3]}, Lemma 4.1] If $(T,N,\mu,\nu)$ is an abelian matched pair with $\mu$ locally finite then the quadruple $(N,T^\circ,\nu',\rho)$ forms an abelian Singer pair. \end{Lemma} \begin{Remark} There is also a correspondence in the opposite direction [M3].\end{Remark} \subsection{Comparison of Singer and matched pair cohomologies} Let\break $(T,N,\mu,\nu)$ be an abelian matched pair of Hopf algebras, with $\mu$ locally finite and $(N,T^\circ,\nu',\rho)$ the Singer pair associated to it as above. The embedding $\operatorname{Hom}(N^{i},(T^\circ)^{j})\subseteq \operatorname{Hom}(N^{i},(T^{j})^*)\simeq\operatorname{Hom}(T^{j}\otimes N^{i},k)$ induced by the inclusion $(T^\circ)^{j}=(T^{j})^\circ \subseteq (T^{j})^*$ restricts to an embedding $\operatorname{Reg}_+(N^{i},(T^\circ)^{j})\subseteq \operatorname{Reg}_+(T^{j}\otimes N^{i},k)$. A routine calculation shows that it preserves the differentials, i.e. that it gives an embedding of double complexes, which is an isomorphism in case $T$ is finite dimensional. There is no apparent reason for the embedding of complexes to induce an isomorphism of cohomology groups in general. It is our conjecture that this is not always the case.
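The finite-dimensional case deserves a word of illustration. If $T=kF$ for a finite group $F$, then every action of $T$ is automatically locally finite and $T^\circ=T^*=k^F$, the algebra of functions on $F$. Thus for a matched pair of finite groups $(F,G)$ the associated Singer pair is $(kG,k^F)$, the embedding above is an isomorphism of double complexes, and the two cohomologies agree; this is the setting of the classical Kac sequence.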
In some cases we can compare the multiplication part of $H^2(N,T^\circ)$ (see the following section) and ${\mathcal H}^2_2(T,N,k)$. We use the following lemma for this purpose. \begin{Lemma}\label{compare} Let $(T,N,\mu,\nu)$ be an abelian matched pair with the action $\mu$ locally finite. If $f\colon T\otimes N^{i}\to k$ is a convolution invertible map, such that ${\delta_T} f=\varepsilon$, then for each ${\bf n}\in N^{i}$, the map $f_{\bf n}=f(\_,{\bf n})\colon T\to k$ lies in the finite dual $T^\circ\subseteq T^*$. \end{Lemma} \begin{proof} It suffices to show that the orbit of $f_{\bf n}$ under the action of $T$ (given by $s(f_{\bf n})(t)=f_{\bf n}(ts)$) is finite dimensional (see [DNR], [Mo1] or [Sw2] for the description of finite duals). Using the fact that ${\delta_T} f=\varepsilon$ we get $s(f_{\bf n})(t)= f_{\bf n}(ts)= \sum f_{{\bf n}_1}(s_1)f_{\mu_i(s_2\otimes {\bf n}_2)}(t)$. Let $\Delta({\bf n})=\sum_j {\bf n'}_j\otimes {\bf n''}_j$. The action $\mu_i\colon T\otimes N^{i}\to N^{i}$ is locally finite, since $\mu\colon T\otimes N\to N$ is, and hence we can choose a finite basis $\{{\bf m}_p\}$ for $\operatorname{Span}\{\mu_i(s\otimes {\bf n''}_j)|s\in T\}$. Now note that $\{f_{{\bf m}_p}\}$ is a finite set which spans $T(f_{\bf n})$. \end{proof} \begin{Corollary}\label{cor1} If $(T,N,\mu,\nu)$ is an abelian matched pair, with $\mu$ locally finite and $(N,T^\circ,\omega,\rho)$ is the corresponding Singer pair, then ${\mathcal H}^1(T,N,k)=H^1(N,T^\circ)$.
\end{Corollary} \subsection{The multiplication and comultiplication parts of the second cohomology group of a Singer pair}\label{mcp} Here we discuss in more detail the Hopf algebra extensions with an \lq\lq unperturbed" multiplication and those with an \lq\lq unperturbed" comultiplication. More precisely, we look at two subgroups $\H_c^2(B,A)$ and $\H_m^2(B,A)$ of $H^2(B,A)\simeq \operatorname{Opext}(B,A)$, the first generated by the cocycles with a trivial multiplication part and the second generated by the cocycles with a trivial comultiplication part [M1]. Let $$ \Z_c^2(B,A)=\{\beta\in \Reg_+(B,A\otimes A)|(\eta\varepsilon,\beta)\in Z^2(B,A)\}. $$ We shall identify $\Z_c^2(B,A)$ with a subgroup of $Z^2(B,A)$ via the injection $\beta\mapsto(\eta\varepsilon,\beta).$ Similarly let $$ \Z_m^2(B,A)=\{\alpha\in \Reg_+(B\otimes B,A)|(\alpha,\eta\varepsilon)\in Z^2(B,A)\}. $$ If $$ \B_c^2(B,A)=B^2(B,A)\cap \Z_c^2(B,A)\;\mbox{and}\; \B_m^2(B,A)=B^2(B,A)\cap \Z_m^2(B,A) $$ then we define $$ \H_c^2(B,A)=\Z_c^2(B,A)/\B_c^2(B,A)\; \mbox{and}\; \H_m^2(B,A)=\Z_m^2(B,A)/\B_m^2(B,A). $$ The identification of $\H_c^2(B,A)$ with a subgroup of $H^2(B,A)$ is given by $$ \H_c^2(B,A)\stackrel{\sim}{\to} (\Z_c^2(B,A)+B^2(B,A))/B^2(B,A) \le H^2(B,A),$$ and similarly for $\H_m^2\le H^2$. Note that in case $T$ is finite dimensional $\H_c^2(N,T^*)\simeq {\mathcal H}^2_2(T,N,k)$ and \break $\H_m^2(N,T^*)\simeq {\mathcal H}^2_1(T,N,k)$ with $\mathcal{H}_p^i(T,N,k)$ as defined in Section \ref{s24}. \begin{Proposition}\label{cor2} Let $(T,N,\mu,\nu)$ be an abelian matched pair, with $\mu$ locally finite and let $(N,T^\circ,\omega,\rho)$ be the corresponding Singer pair.
Then $$\H_m^2(N,T^\circ)\simeq {\mathcal H}^2_1(T,N,k).$$ \end{Proposition} \begin{proof} Observe that we have an inclusion $\Z_m^2(N,T^\circ)= \{\alpha\colon N\otimes N\to T^\circ|\partial\alpha=\varepsilon, \partial'\alpha=\varepsilon\}\subseteq \{\alpha\colon T\otimes N\otimes N\to k|{\delta_T}\alpha=\varepsilon, {\delta_N}\alpha=\varepsilon\}={\mathcal Z}^2_1(T,N,k)$. The inclusion is in fact an equality by Lemma \ref{compare}. Similarly the inclusion $\B_m^2(N,T^\circ)\subseteq {\mathcal B}^2_1(T,N,k)$ is an equality as well. \end{proof} \section{The generalized Kac sequence} \subsection{The Kac sequence of an abelian matched pair} We start by sketching a conceptual way to obtain a generalized version of the Kac sequence for an arbitrary abelian matched pair of Hopf algebras, relating the cohomology of the matched pair to Sweedler cohomology. Since it is difficult to describe the homomorphisms involved in this manner, we then proceed in the next section to give an explicit description of the low degree part of this sequence. \begin{Theorem} Let $H=N\bowtie T$, where $(T,N,\mu ,\nu )$ is an abelian matched pair of Hopf algebras, and let $A$ be a commutative left $H$-module algebra. Then there is a long exact sequence of abelian groups $$\begin{array}{l} 0\to H^1(H,A)\to H^1(T,A)\oplus H^1(N,A)\to \mathcal H^1(T,N,A)\to H^2(H,A) \\ \to H^2(T,A)\oplus H^2(N,A)\to \mathcal H^2(T,N,A)\to H^3(H,A)\to ... \end{array}$$ Moreover, if $T$ is finite dimensional then $(N,T^*)$ is an abelian Singer pair, $H^*(T,k)\cong H^*(k,T^*)$ and $\mathcal H^*(T,N,k)\cong H^*(N,T^*)$.
\end{Theorem} \begin{proof} The short exact sequence of double cochain complexes $$0\to\mathbf B_0\to \mathbf B\to \mathbf B_1\to 0,$$ where $\mathbf B_1$ is the edge double cochain complex of $\mathbf B= {_H\operatorname{Reg}}(\mathbf{X}_T\mathbf{X}_N(k),A)$ as in Section \ref{s23}, induces a long exact sequence in cohomology $$\begin{array}{l} 0\to H^1(\operatorname{Tot} (\mathbf B))\to H^1(\operatorname{Tot} (\mathbf B_1))\to H^2(\operatorname{Tot} (\mathbf B_0))\to H^2(\operatorname{Tot} (\mathbf B)) \\ \to H^2(\operatorname{Tot} (\mathbf B_1))\to H^3(\operatorname{Tot} (\mathbf B_0))\to H^3(\operatorname{Tot} (\mathbf B))\to H^3(\operatorname{Tot} (\mathbf B_1))\to ... \end{array}$$ where $H^0(\operatorname{Tot} (\mathbf B_0))=0=H^1(\operatorname{Tot} (\mathbf B_0))$ and $H^0(\operatorname{Tot} (\mathbf B))=H^0(\operatorname{Tot} (\mathbf B_1))$ have already been taken into account. By Definition \ref{d25} $H^{*+1}(\operatorname{Tot} (\mathbf B_0))=\mathcal H^*(T,N,A)$ is the cohomology of the matched pair $(T,N,\mu ,\nu )$ with coefficients in $A$. Moreover, $H^*(\operatorname{Tot} (\mathbf B_1))\cong H^*(T,A)\oplus H^*(N,A)$ is a direct sum of Sweedler cohomologies. From the cosimplicial version of the Eilenberg-Zilber theorem (see Appendix) it follows that $H^*(\operatorname{Tot} (\mathbf B))\cong H^*(\operatorname{Diag} (\mathbf B))$. On the other hand, Barr's theorem [Ba, Th. 3.4] together with Corollary \ref{c31} says that $\operatorname{Diag} \mathbf X_T(\mathbf X_N(k))\simeq \mathbf X_H(k)$, and gives an equivalence $${_H\operatorname{Reg}}(\operatorname{Diag} \mathbf X_T(\mathbf X_N(k)),A)\simeq \operatorname{Diag} ({_H\operatorname{Reg}}(\mathbf X_T(\mathbf X_N(k)),A))=\operatorname{Diag} (\mathbf B).$$ Thus, we get $$H^*(H,A)=H^*({_H\operatorname{Reg}}(\mathbf X_H(k),A))\cong H^*(\operatorname{Diag} (\mathbf B))\cong H^*(\operatorname{Tot} (\mathbf B)),$$ and the proof is complete.
\end{proof} \subsection{Explicit description of the low degree part} The aim of this section is to define explicitly homomorphisms that make the following sequence \begin{eqnarray*} 0&\to& H^1(H,A)\stackrel{\operatorname{res}_2}{\longrightarrow}H^1(T,A)\oplus H^1(N,A) \stackrel{{\delta_N}*{\delta_T}}{\longrightarrow}{\mathcal H}^1(T,N,A) \stackrel{\phi}{\to}H^2(H,A)\\[0pt] &\stackrel{\operatorname{res}_2}{\longrightarrow}& H^2(T,A)\oplus H^2(N,A)\stackrel{{\delta_N}*{\delta_T}^{-1}}{\longrightarrow}{\mathcal H}^2(T,N,A)\stackrel{\psi}{\to}H^3(H,A) \end{eqnarray*} exact. This is the low degree part of the generalized Kac sequence. Here $H=N\bowtie T$ is the bismash product Hopf algebra arising from a matched pair $\mu\colon T\otimes N\to N$, $\nu\colon T\otimes N\to T$. Recall that we abbreviate $\mu(t,n)=t(n)$, $\nu(t,n)=t^n$. We shall also assume that $A$ is a trivial $H$-module. We define $\operatorname{res}_2=\operatorname{res}_2^i\colon H^i(H,A)\to H^i(T,A)\oplus H^i(N,A)$ to be the map $(\operatorname{res}_T,\operatorname{res}_N)\Delta$; more precisely, if $f\colon H^{i}\to A$ is a cocycle, then it gets sent to a pair of cocycles $(f_{T},f_{N})$, where $f_T=f|_{T^{i}}$ and $f_N=f|_{N^{i}}$. By ${\delta_N}*{\delta_T}^{(-1)^{i+1}}$ we denote the composite $$\begin{array}{l} H^i(T,A)\oplus H^i(N,A)\stackrel{{\delta_N}\oplus{\delta_T}^{\pm 1}}{\longrightarrow} \mathcal{H}^i_i(T,N,A)\oplus \mathcal{H}^i_1(T,N,A) \\ \stackrel{\iota\oplus\iota}{\longrightarrow} {\mathcal H}^i(T,N,A)\oplus {\mathcal H}^i(T,N,A) \stackrel{*}{\to} {\mathcal H}^i(T,N,A). \end{array}$$ When $i=1$, the map just defined sends a pair of cocycles $a\in Z^1(T,A)$, $b\in Z^1(N,A)$ to the map ${\delta_N} a*{\delta_T} b\colon T\otimes N\to A$, and if $i=2$ a pair of cocycles $a\in Z^2(T,A)$, $b\in Z^2(N,A)$ becomes a cocycle pair $({\delta_N} a,\varepsilon)*(\varepsilon,{\delta_T} b^{-1})=({\delta_N} a,{\delta_T} b^{-1}) \colon (T\otimes T\otimes N)\oplus (T\otimes N\otimes N)\to A$.
Here ${\delta_N}$ and ${\delta_T}$ are the differentials for computing the cohomology of a matched pair described in Section \ref{s24}. The map $\phi\colon {\mathcal H}^1(T,N,A)\to H^2(H,A)$ assigns to a cocycle $\gamma\colon T\otimes N\to A$ a map $\phi(\gamma)\colon H\otimes H\to A$, which is characterized by $\phi(\gamma)(nt,n't')=\gamma(t,n').$ The homomorphism $\psi\colon {\mathcal H}^2(T,N,A)\to H^3(H,A)$ is induced by a map that sends a cocycle pair $(\alpha,\beta)\in {\mathcal Z}^2(T,N,A)$ to the cocycle $f=\psi(\alpha,\beta)\colon H\otimes H\otimes H\to A$ given by $$ f(nt,n't',n''t'')=\varepsilon(n)\varepsilon(t'')\alpha(t^{n'},t',n'')\beta(t,n',t'(n'')). $$ A direct, but lengthy computation shows that the maps just defined induce homomorphisms that make the sequence above exact [M3]. The most important tool in the computations is the following lemma about the structure of the second cohomology group $H^2(H,A)$ [M3]. \begin{Lemma}\label{mainlemma} Let $f\colon H\otimes H\to A$ be a cocycle. Define maps $g_f\colon H\to A$, $h\colon H\otimes H\to A$ and $f_c\colon T\otimes N\to A$ by $g_f(nt)=f(n\otimes t)$, $h=f*\delta g_f$ and $f_c(t\otimes n)=f(t\otimes n)f^{-1}(t(n)\otimes t^n)$. Then \begin{enumerate} \item $ h(nt,n't')=f_T(t^{n'},t')f_N(n,t'(n'))f_c(t,n') $ \item $ h_T=f_T,\; h_N=f_N,\; h|_{N\otimes T}=\varepsilon,\; h|_{T\otimes N}=h_c=f_c,\; g_h=\varepsilon $ \item the maps $f_T$ and $f_N$ are cocycles and ${\delta_N} f_T={\delta_T} f_c^{-1}$, ${\delta_T} f_N= {\delta_N} f_c^{-1}$ \item If $a\colon T\otimes T\to A$, $b\colon N\otimes N\to A$ are cocycles and $\gamma\colon T\otimes N\to A$ is a convolution invertible map, such that ${\delta_N} a={\delta_T}\gamma$ and ${\delta_T} b={\delta_N}\gamma$, then the map $f=f_{a,b,\gamma}\colon H\otimes H\to A$, defined by $$ f(nt,n't')=a(t^{n'},t')b(n,t(n'))\gamma^{-1}(t,n') $$ is a cocycle and $f_T=a$, $f_N=b$, $f_c=f|_{T\otimes N}=\gamma^{-1}$ and $f|_{N\otimes T}=\varepsilon$.
\end{enumerate} \end{Lemma} \subsection{The locally finite case} Suppose that the action $\mu\colon T\otimes N\to N$ is locally finite and let $(N,T^\circ,\omega,\rho)$ be the Singer pair corresponding to the matched pair $(T,N,\mu,\nu)$ as in Section \ref{s41}. By Corollary \ref{cor1} we have ${\mathcal H}^1(T,N,k)=H^1(N,T^\circ)$. From the explicit description of the generalized Kac sequence, we see that $({\delta_N}*{\delta_T}^{-1})|_{H^2(T,A)}={\delta_N}\colon H^2(T,A)\to {\mathcal H}^2_2(T,N,A)$ and similarly that $({\delta_N}*{\delta_T}^{-1})|_{H^2(N,A)}={\delta_T}^{-1}\colon H^2(N,A)\to {\mathcal H}^2_1(T,N,A)$. By Proposition \ref{cor2} we have the equality ${\mathcal H}_1^2(T,N,k)=\H_m^2(N,T^\circ)$. Recall that $\H_m^2(N,T^\circ)\subseteq H^2(N,T^\circ)\simeq \operatorname{Opext}(N,T^\circ)$. If the action $\nu$ is locally finite as well, then there is also a (right) Singer pair $(T,N^\circ,\omega',\rho')$. By \lq right\rq\ we mean that we have a right action $\omega'\colon N^\circ\otimes T\to N^\circ$ and a right coaction $\rho'\colon T\to N^\circ\otimes T$. In this case we get that ${\mathcal H}^2_2(T,N,k)\simeq {{ \H_m^2}}'(T,N^\circ)\subseteq \operatorname{Opext}'(T,N^\circ)$. The prime refers to the fact that we have a right Singer pair. Define ${H^2_{mc}}=\H_m^2\cap \H_c^2$ and ${H^2_{mc}}'= {H^2_{m}}'\cap {H^2_{c}}'$, and note that ${H^2_{mc}}(N,T^\circ)\simeq{\mathcal H}^2_2(T,N,k)\cap {\mathcal H}^2_1(T,N,k)\simeq {H^2_{mc}}'(T,N^\circ)$. Hence \begin{eqnarray*} \operatorname{im}({\delta_N}*{\delta_T}^{-1})&\subseteq& {\mathcal H}^2_1(T,N,k)+{\mathcal H}^2_2(T,N,k) \simeq \frac{{\mathcal H}^2_1(T,N,k)\oplus {\mathcal H}^2_2(T,N,k)}{{\mathcal H}^2_1(T,N,k)\cap {\mathcal H}^2_2(T,N,k)}\\ &=&\frac{\H_m^2(N,T^\circ)\oplus {\H_m^2}'(T,N^\circ)}{\langle {H^2_{mc}}(N,T^\circ)\equiv{H^2_{mc}}'(T,N^\circ)\rangle}.
\end{eqnarray*} In other words, $\operatorname{im}({\delta_N}*{\delta_T}^{-1})$ is contained in a subgroup of ${\mathcal H}^2(T,N,k)$ that is isomorphic to the pushout $$\begin{CD} {{H^2_{mc}}'(T,N^\circ)\simeq {H^2_{mc}}(N,T^\circ)} @> >> {{H^2_{m}}(N,T^\circ)} \\ @V VV @V VV \\ {{H^2_{m}}'(T,N^\circ)} @> >> X. \end{CD}$$ Hence if both actions $\mu$ and $\nu$ of the abelian matched pair $(T,N,\mu,\nu)$ are locally finite then we get the following version of the low degree part of the Kac sequence: \begin{eqnarray*} 0&\to& H^1(H,k)\stackrel{\operatorname{res}_2}{\longrightarrow}H^1(T,k)\oplus H^1(N,k) \stackrel{{\delta_N}*{\delta_T}}{\longrightarrow}{H}^1(N,T^\circ) \stackrel{\phi}{\to}H^2(H,k)\\ &\stackrel{\operatorname{res}_2}{\longrightarrow}& H^2(T,k)\oplus H^2(N,k)\stackrel{{\delta_N}*{\delta_T}^{-1}}{\longrightarrow}X\stackrel{\psi|_X}{\longrightarrow}H^3(H,k). \end{eqnarray*} \subsection{The Kac sequence of an abelian Singer pair} Here is a generalization of the Kac sequence relating Sweedler and Doi cohomology to Singer cohomology. \begin{Theorem} For any abelian Singer pair $(B,A,\mu,\rho)$ there is a long exact sequence $$\begin{array}{l} 0\to H^1(\operatorname{Tot} Z)\to H^1(B,k)\oplus H^1(k,A)\to H^1(B,A)\to H^2(\operatorname{Tot} Z)\\ \to H^2(B,k)\oplus H^2(k,A)\to H^2(B,A)\to H^3(\operatorname{Tot} Z)\to\ldots, \end{array}$$ where $Z$ is the double complex from Definition \ref{d12}. Moreover, we always have $H^1(B,A)\cong \operatorname{Aut}(A\# B)$, $H^2(B,A)\cong\operatorname{Opext} (B,A)$ and $H^*(\operatorname{Tot} Z)\cong H^*(\operatorname{Diag} Z)$. If $A$ is finite dimensional then $H^*(\operatorname{Tot} Z)=H^*(A^*\bowtie B,k)$.
\end{Theorem} \begin{proof} The short exact sequence of double cochain complexes $$0\to Z_0\to Z\to Z_1\to 0,$$ where $Z_1$ is the edge subcomplex of $Z= {_B\operatorname{Reg}^A}(\mathbf{X}_B(k),\mathbf{Y}_A(k))$, induces a long exact sequence $$\begin{array}{l} 0\to H^1(\operatorname{Tot} Z)\to H^1(\operatorname{Tot} Z_1)\to H^2(\operatorname{Tot} Z_0)\to H^2(\operatorname{Tot} Z)\\ \to H^2(\operatorname{Tot} Z_1)\to H^3(\operatorname{Tot} Z_0)\to H^3(\operatorname{Tot} Z)\to H^3(\operatorname{Tot} Z_1)\to\ldots \end{array}$$ where $H^0(\operatorname{Tot} Z_0)=0=H^1(\operatorname{Tot} Z_0)$ and $H^0(\operatorname{Tot} Z)=H^0(\operatorname{Tot} Z_1)$ have already been taken into account. By definition $H^*(\operatorname{Tot} Z_0)=H^*(B,A)$ is the cohomology of the abelian Singer pair $(B,A,\mu ,\rho )$, and by [Ho] we have $H^1(B,A)\cong\operatorname{Aut} (A\# B)$ and $H^2(B,A)\cong\operatorname{Opext} (B,A)$. Moreover, we clearly have $H^*(\operatorname{Tot} Z_1)\cong H^*(B,k)\oplus H^*(k,A)$, where the summands are Sweedler and Doi cohomologies. By the cosimplicial Eilenberg-Zilber theorem (see appendix) there is a natural isomorphism $H^*(\operatorname{Tot} (\mathbf Z))\cong H^*(\operatorname{Diag} (\mathbf Z))$. Finally, if $A$ is finite dimensional then $\mathbf Z={_B\operatorname{Reg}^A}(\mathbf X(k),\mathbf Y(k))\cong {_{A^*\bowtie B}\operatorname{Reg}} (\mathbf{B}(k),k)$, where $\mathbf B(k)=\mathbf X_{A^*}(\mathbf X_B(k))$. \end{proof} \section{On the matched pair cohomology of pointed cocommutative Hopf algebras over fields of zero characteristic} In this section we describe a method which gives information about the second cohomology group ${\mathcal H}^2(T,N,A)$ of an abelian matched pair. \subsection{The method} Let $(T,N)$ be an abelian matched pair of pointed Hopf algebras, and $A$ a trivial $N\bowtie T$-module algebra.
\begin{enumerate} \item Since $\operatorname{char} k=0$ and $T$ and $N$ are pointed we have $T\simeq UP(T) \rtimes kG(T)$ and $N\simeq UP(N)\rtimes kG(N)$ and $N\bowtie T\simeq U(P(T)\bowtie P(N)) \rtimes k(G(T)\bowtie G(N))$ [Gr1,2]. If $H$ is a Hopf algebra then $G(H)$ denotes the group of points and $P(H)$ denotes the Lie algebra of primitives. \item We can use the generalized Tahara sequence [M2] (see introduction) to compute $H^2(T)$, $H^2(N)$, $H^2(N\bowtie T)$. In particular if $G(T)$ is finite then the cohomology group ${H_{meas}^2}(kG(T),\operatorname{Hom}(UP(T),A))= H^{2,1}(kG(T),UP(T),A)= {\mathcal H}^2_2(kG(T),UP(T),A)$ is trivial and there is a direct sum decomposition $H^2(T)=H^2(P(T))^{G(T)}\oplus H^2(G(T))$; we get a similar decomposition for $H^2(N)$ if $G(N)$ is finite and for $H^2(N\bowtie T)$ in the case $G(T)$ and $G(N)$ are both finite. \item Since the Lie algebra cohomology groups $H^i({\bf g})$ admit a vector space structure, the cohomology groups $H^{1,2}(G,{\bf g},A)\simeq H^1(G,H^2({\bf g},A))$ are trivial if $G$ is finite (the additive group of a vector space over a field of zero characteristic is uniquely divisible). \item The exactness of the sequence from Theorem \ref{pi} implies that the maps ${\delta_T}\colon H^2(G(\_))\to {\mathcal H}^2(kG(\_),UP(\_),A)$ are surjective if $G(\_)$ is finite, hence by the generalized Kac sequence the kernels of the maps $\operatorname{res}_2^3\colon H^3(\_)\to H^3(P(\_))\oplus H^3(G(\_))$ are trivial. This then gives information about the kernel of the map $\operatorname{res}_2^3\colon H^3(N\bowtie T)\to H^3(T)\oplus H^3(N)$.
\item Now use the exactness of the generalized Kac sequence \begin{eqnarray*} H^2(N\bowtie T)&\stackrel{\operatorname{res}_2^2}{\longrightarrow}&H^2(T)\oplus H^2(N)\stackrel{{\delta_T}+{\delta_N}^{-1}}{\longrightarrow} {\mathcal H}^2(T,N,A)\\ &\to& H^3(N\bowtie T)\stackrel{\operatorname{res}_2^3}{\longrightarrow}H^3(T)\oplus H^3(N) \end{eqnarray*} to get information about ${\mathcal H}^2(T,N,A)$. \end{enumerate} \subsection{Examples} Here we describe how the above procedure works on concrete examples. In the first three examples we restrict ourselves to a case in which one of the Hopf algebras involved is a group algebra. Let $T=UP(T)\rtimes kG(T)$ and $N=kG(N)$ and suppose that the matched pair of $T$ and $N$ arises from actions $G(T)\times G(N)\to G(N)$ and $(G(N)\rtimes G(T))\times P(T)\to P(T)$. If the groups $G(T)$ and $G(N)$ are finite and their orders are relatively prime, then the generalized Kac sequence shows that there is an injective homomorphism $$\Phi\colon \frac{H^2(P(T))^{G(T)}}{H^2(P(T))^{G(N)\rtimes G(T)}}\oplus \frac{H^2(G(N))}{H^2(G(N))^{G(T)}}\to {\mathcal H}^2(T,N,A).$$ Theorem \ref{pi} guarantees that the map $H^3(N\bowtie T)=H^3(U(P(T))\rtimes k(G(N)\rtimes G(T)))\to H^3(P(T))\oplus H^3(G(N)\rtimes G(T))$ is injective. Since the orders of $G(T)$ and $G(N)$ are assumed to be relatively prime the map $H^3(G(N)\rtimes G(T))\to H^3(G(N))\oplus H^3(G(T))$ is also injective. Hence the map $$\operatorname{res}_2^3\colon H^3(N\bowtie T)\to H^3(N)\oplus H^3(T)$$ must be injective as well, since the composite $H^3(N\bowtie T)\to H^3(N)\oplus H^3(T)\to H^3(G(N))\oplus H^3(P(T))\oplus H^3(G(T))$ is injective. Thus, by the exactness of the generalized Kac sequence, $\Phi$ is an isomorphism. \begin{Example}Let ${\bf g}=k\times k$ be the abelian Lie algebra of dimension 2 and let $G=C_2=\langle a \rangle$ be the cyclic group of order two. Furthermore assume that $G$ acts on ${\bf g}$ by switching the factors, i.e. $a(x,y)=(y,x)$.
Recall that $U{\bf g}=k[x,y]$ and that $H^i_{Sweedler}(U{\bf g},A)=H^i_{Hochschild}(U{\bf g},A)$ for $i\ge 2$ and that $H^i_{Hochschild}(k[x,y],k)=k^{\oplus {i\choose 2 }}$. A computation shows that $G$ acts on $k\simeq H^2(k[x,y],k)$ by $a(t)=-t$ and hence $H^2(k[x,y],k)^G=0$. Thus the homomorphism $\pi$ (Theorem \ref{pi}) is the zero map and the homomorphism $k\simeq H^2(k[x,y],k)\stackrel{{\delta_T}}{\to}{\mathcal H}^2(kC_2,k[x,y],k)$ is an isomorphism.\end{Example} \begin{Example}[symmetries of a triangle] Here we describe an example arising from the action of the dihedral group $D_3$ on the abelian Lie algebra of dimension $3$ (basis consists of vertices of a triangle). More precisely let ${\bf g}=k\times k\times k$, $G=C_2=\langle a\rangle$, $H=C_3=\langle b \rangle$, the actions $G\times {\bf g}\to {\bf g}$, $H\times {\bf g}\to {\bf g}$ and $H\times G\to H$ are given by $a(x,y,z)=(z,y,x)$, $b(x,y,z)=(z,x,y)$ and $b^a=b^{-1}$ respectively. A routine computation reveals the following \begin{itemize} \item $C_2$ acts on $k\times k\times k\simeq H^2(k[x,y,z],k)$ by $a(u,v,w)=(-w,-v,-u)$, hence the $G$ stable part is $$H^2(k[x,y,z],k)^G=\{(u,0,-u)\}\simeq k.$$ \item $H=C_3$ acts on $k\times k\times k$ by $b(u,v,w)=(w,u,v)$ and the $H$ stable part is $H^2(k[x,y,z],k)^H= \{(u,u,u)\}\simeq k$. \item The $D_3=C_2\rtimes C_3$ stable part $H^2(k[x,y,z],k)^{D_3}$ is trivial. \end{itemize} Thus we have an isomorphism $k\times k^\bullet/(k^\bullet)^3\simeq {\mathcal H}^2(k[x,y,z]\rtimes kC_2,kC_3,k)$.\end{Example} \begin{Remark}The above also shows that there is an isomorphism $$k\times k\times k\simeq \mathcal{H}^2(k[x,y,z],kD_3,k).$$\end{Remark} \begin{Example} Let ${\bf g}=sl_n$, $G=C_2=\langle a\rangle$, $H=C_n=\langle b \rangle$, where $a$ is a matrix that has $1$'s on the skew diagonal and zeroes elsewhere and $b$ is the standard permutation matrix of order $n$. 
Let $H$ and $G$ act on $sl_n$ by conjugation in ${\mathcal M}_n$ and let $G$ act on $H$ by conjugation inside $GL_n$. Furthermore assume that $A$ is a finite dimensional trivial $U{\bf g}\rtimes k(H\rtimes G)$-module algebra. By Whitehead's second lemma $H^2({\bf g},A)=0$ and hence we get an isomorphism $\mathcal{U} A/(\mathcal{U} A)^n\simeq {\mathcal H}^2(Usl_n\rtimes kC_2,kC_n,A)$ if $n$ is odd. \end{Example} \begin{Example} Let $H=U{\bf g}\rtimes kG$, where ${\bf g}$ is an abelian Lie algebra and $G$ is a finite abelian group and assume the action of $H$ on itself is given by conjugation, i.e. $h(k)=h_1 kS(h_2)$. In this case it is easy to see that $H^2(H,A)^H=H^2(H,A)$ for any trivial $H$-module algebra $A$ and hence the homomorphism in the generalized Kac sequence $\delta_{H,1}\oplus\delta_{H,2}\colon H^2(H,A)\oplus H^2(H,A)\to {\mathcal H}^2(H,H,A)$ is trivial. Hence ${\mathcal H}^2(H,H,A)\simeq \ker(H^3(H\rtimes H,A)\to H^3(H,A)\oplus H^3(H,A)).$\end{Example} \begin{appendix} \section{Simplicial homological algebra} This is a collection of notions and results from simplicial homological algebra used in the main text. The emphasis is on the cohomology of cosimplicial objects, but the considerations are similar to those in the simplicial case [We]. \subsection{Simplicial and cosimplicial objects} Let $\mathbf\Delta$ denote the simplicial category [Mc]. If $\mathcal A$ is a category then the functor category $\mathcal A^{\mathbf\Delta^{op}}$ is the category of simplicial objects while $\mathcal A^{\mathbf\Delta}$ is the category of cosimplicial objects in $\mathcal A$. 
Thus a simplicial object in $\mathcal A$ is given by a sequence of objects $\{ X_n\}$ together with, for each $n\geq 0$, face maps $\partial_i\colon X_{n+1}\to X_n$ for $0\leq i\leq n+1$ and degeneracies $\sigma_j\colon X_n\to X_{n+1}$ for $0\leq j\leq n$ such that $\partial_i\partial_j=\partial_{j-1}\partial_i$ for $i<j$, $\sigma_i\sigma_j=\sigma_{j+1}\sigma_i$ for $i\leq j$, $\partial_i\sigma_j=\begin{cases} \sigma_{j-1}\partial_i,& \mbox{ if } i<j;\\ 1,& \mbox{ if } i=j, j+1;\\ \sigma_j\partial_{i-1},& \mbox{ if } i>j+1. \end{cases}$ A cosimplicial object in $\mathcal A$ is a sequence of objects $\{ X^n\}$ together with, for each $n\geq 0$, coface maps $\partial^i\colon X^n\to X^{n+1}$ for $0\leq i\leq n+1$ and codegeneracies $\sigma^j\colon X^{n+1}\to X^n$ for $0\leq j\leq n$ such that $\partial^j\partial^i=\partial^i\partial^{j-1}$ for $i<j$, $\sigma^j\sigma^i=\sigma^i\sigma^{j+1}$ for $i\leq j$, $\sigma^j\partial^i=\begin{cases} \partial^i\sigma^{j-1},& \mbox{ if } i<j;\\ 1,& \mbox{ if } i=j,j+1;\\ \partial^{i-1}\sigma^j,& \mbox{ if } i>j+1. \end{cases}$ Two cosimplicial maps $f,g\colon X\to Y$ are homotopic if for each $n\geq 0$ there is a family of maps $\{ h^i\colon X^{n+1}\to Y^n|0\leq i\leq n\}$ in $\mathcal A$ such that $h^0\partial^0=f$, $h^n\partial^{n+1}=g$, $h^j\partial^i= \begin{cases} \partial^ih^{j-1},& \mbox{ if } i<j;\\ h^{i-1}\partial^i, & \mbox{ if } i=j\ne 0;\\ \partial^{i-1}h^j, & \mbox{ if } i>j+1, \end{cases}$ $h^j\sigma^i= \begin{cases} \sigma^ih^{j+1}, & \mbox{ if } i\leq j;\\ \sigma^{i-1}h^j, & \mbox{ if } i>j. \end{cases}$ \\Clearly, homotopy of cosimplicial maps is an equivalence relation. If ${X}$ is a cosimplicial object in an abelian category $\mathcal{A}$, then $C({X})$ denotes the associated cochain complex in $\mathcal{A}$, i.e. an object of the category of cochain complexes $\operatorname{Coch}(\mathcal{A})$.
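The differential of the associated cochain complex $C(X)$ is, as usual, the alternating sum of the cofaces, $d^n=\sum_{i=0}^{n+1}(-1)^i\partial^i$ (the text uses this implicitly). As a sanity check, here is the lowest-degree instance of $d^{n+1}d^n=0$, using only the cosimplicial identity $\partial^j\partial^i=\partial^i\partial^{j-1}$ for $i<j$:

```latex
\begin{align*}
d^1 d^0 &= (\partial^0-\partial^1+\partial^2)(\partial^0-\partial^1)\\
 &= \partial^0\partial^0-\partial^1\partial^0+\partial^2\partial^0
    -\partial^0\partial^1+\partial^1\partial^1-\partial^2\partial^1\\
 &= \partial^0\partial^0-\partial^0\partial^0+\partial^0\partial^1
    -\partial^0\partial^1+\partial^1\partial^1-\partial^1\partial^1=0,
\end{align*}
% since \partial^1\partial^0=\partial^0\partial^0,
% \partial^2\partial^0=\partial^0\partial^1 and
% \partial^2\partial^1=\partial^1\partial^1.
```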
\begin{Lemma} For a cosimplicial object $X$ in the abelian category $\mathcal{A}$ let $N^n(X)=\cap_{i=0}^{n-1}\ker\sigma^i$ and $D^n(X)=\sum_{j=0}^{n-1}\operatorname{im}\partial^j$. Then $C(X)\cong N(X)\oplus D(X)$. Moreover, ${X}/{D(X)}\cong N(X)$ is a cochain complex with differentials given by $\partial^n\colon {X^n}/{D^n}\to {X^{n+1}}/{D^{n+1}}$, and $\pi^*(X)= H^*(N^*(X))$ is the sequence of cohomotopy objects of $X$. \end{Lemma} \begin{Theorem}[Cosimplicial Dold-Kan correspondence, {[We, 8.4.3]}] If $\mathcal A$ is an abelian category then \begin{enumerate} \item $N\colon\mathcal{A}^{\mathbf\Delta}\to \operatorname{Coch} (\mathcal A)$ is an equivalence and $N(X)$ is a summand of $C(X)$; \item $\pi^*(X)=H^*(N(X))\cong H^*(C(X))$. \item If $\mathcal A$ has enough injectives, then $\pi^*=H^*N\colon \mathcal A^{\mathbf\Delta}\to \operatorname{Coch} (\mathcal A)$ and $H^*C\colon \mathcal A^{\mathbf\Delta}\to \operatorname{Coch} (\mathcal A)$ are the sequences of right derived functors of $\pi^0=H^0N\colon \mathcal A^{\mathbf\Delta}\to \mathcal A$ and $H^0C\colon \mathcal A^{\mathbf\Delta}\to \mathcal A$, respectively. \end{enumerate} \end{Theorem} \begin{proof} (1) If $y\in N^n(X)\cap D^n(X)$ then $y=\sum_{i=0}^{n-1}\partial^i(x_i)$, where each $x_i\in X^{n-1}$. Suppose that $y=\partial^0(x)$ and $y\in N^n(X)$, then $0=\sigma^0(y)=\sigma^0\partial^0(x)=x$ and hence $y=\partial^0(x)=0$. Now proceed by induction on the largest $j$ such that $\partial^j(x_j)\neq 0$. So let $y=\sum_{i=0}^j\partial^i(x_i)$ such that $\partial^j(x_j)\neq 0$, i.e.\ $y\notin \sum_{i<j}\operatorname{im}\partial^i$, and $y\in N^n(X)$. Then $0=\sigma^j(y)=\sum_{i\leq j}\sigma^j\partial^i(x_i) =x_j+\sum_{i<j}\sigma^j\partial^i(x_i)=x_j+\sum_{i<j}\partial^i\sigma^{j-1}(x_i)$.
This implies that $x_j=\sum_{i<j}\partial^i\sigma^{j-1}(x_i)$ and hence $\partial^j(x_j)=-\sum_{i<j}\partial^j\partial^i\sigma^{j-1}(x_i) =-\sum_{i<j}\partial^i\partial^{j-1}\sigma^{j-1}(x_i)\in \sum_{i<j}\operatorname{im}\partial^i$, a contradiction. Thus, $N^n(X)\cap D^n(X)=0$. Now let us show that $D^n(X)+N^n(X)=C^n(X)$. Let $y\in X^n$; if $y\in N^n(X)=\cap_{i=0}^{n-1}\ker\sigma^i$ there is nothing to prove, so let $i$ be the largest index such that $\sigma^i(y)\neq 0$. If $y'=y-\partial^i\sigma^i(y)$ then $y-y'\in D^n(X)$. For $i<j$ we get $\sigma^j(y')=\sigma^j(y)-\sigma^j\partial^i\sigma^i(y) =\sigma^j(y)-\partial^i\sigma^{j-1}\sigma^i(y) =\sigma^j(y)-\partial^i\sigma^i\sigma^j(y)=0$. Moreover, $\sigma^i(y')=\sigma^i(y)-\sigma^i\partial^i\sigma^i(y)=\sigma^i(y)-\sigma^i(y)=0$, so that any index $j$ with $\sigma^j(y')\neq 0$ satisfies $j<i$. By (downward) induction, there is a $z\in D^n(X)$ such that $y-z\in N^n(X)$, and hence $y\in D^n(X)+N^n(X)$. It now follows that $\cap_{i=0}^{n-1}\ker\sigma^i =N^n(X)\cong X^n/{D^n(X)} =X^n/{\sum_{i=0}^{n-1}\operatorname{im}\partial^i}$. The differential $\partial^n\colon N^n(X)\to N^{n+1}(X)$ is given by $\partial^n(x+D^n(X))=\partial^n(x)+D^{n+1}(X)$. (2) By definition, see [We, 8.4.3]. (3) The functors $N\colon \mathcal A^{\mathbf\Delta}\to\operatorname{Coch}(\mathcal A)$ and $C\colon\mathcal{A}^{\mathbf\Delta}\to \operatorname{Coch} (\mathcal{A})$ are exact. \end{proof} The inverse equivalence $K\colon \operatorname{Coch} (\mathcal A)\to \mathcal A^{\mathbf\Delta}$ has a description similar to that for the simplicial case [We, 8.4.4]. \subsection{Cosimplicial bicomplexes} The category of cosimplicial bicomplexes in the abelian category $\mathcal A$ is the functor category $\mathcal A^{\mathbf\Delta\times\mathbf\Delta}=(\mathcal A^{\mathbf\Delta})^{\mathbf\Delta}$.
In particular, in a cosimplicial bicomplex $X=\{ X^{p,q}\}$ in $\mathcal A$ \begin{enumerate} \item Horizontal and vertical cosimplicial identities are satisfied; \item Horizontal and vertical cosimplicial operators commute. \end{enumerate} The associated (unnormalized) cochain bicomplex $C(X)$ with $C(X)^{p,q}=X^{p,q}$ has horizontal and vertical differentials $$d_h=\sum_{i=0}^{p+1}(-1)^i\partial_h^i\colon X^{p,q}\to X^{p+1,q}\quad ,\quad d_v=\sum_{j=0}^{q+1}(-1)^{p+j}\partial_v^j\colon X^{p,q}\to X^{p,q+1}$$ so that $d_hd_v=-d_vd_h$. The normalized cochain bicomplex $N(X)$ is obtained from $X$ by taking the normalized cochain complex of each row and each column. It is a summand of $CX$. The cosimplicial Dold-Kan theorem then says that $H^{**}(CX)\cong H^{**}(NX)$ for every cosimplicial bicomplex. The diagonal $\operatorname{diag}\colon \Delta\to \Delta\times\Delta$ induces the diagonalization functor $\operatorname{Diag} =\mathcal A^{\operatorname{diag}}\colon \mathcal A^{\Delta\times\Delta}\to\mathcal A^{\Delta}$, where $\operatorname{Diag}^p(X)=X^{p,p}$ with coface maps $\partial^i=\partial_h^i\partial_v^i\colon X^{p,p}\to X^{p+1,p+1}$ and codegeneracies $\sigma^j=\sigma_h^j\sigma_v^j\colon X^{p+1,p+1}\to X^{p,p}$ for $0\leq i\leq p+1$ and $0\leq j\leq p$, respectively. \begin{Theorem}[The cosimplicial Eilenberg-Zilber Theorem.] Let $\mathcal A$ be an abelian category with enough injectives. There is a natural isomorphism $$\pi^*(\operatorname{Diag} X)=H^*(C\operatorname{Diag} (X))\cong H^*(\operatorname{Tot} (X)),$$ where $\operatorname{Tot}(X)$ denotes the total complex associated to the double cochain complex $CX$.
Moreover, there is a convergent first quadrant cohomological spectral sequence $$E_1^{p,q}=\pi_v^q(X^{p,*})\quad ,\quad E_2^{p,q}=\pi_h^p\pi_v^q(X)\Rightarrow \pi^{p+q}(\operatorname{Diag} X).$$ \end{Theorem} \begin{proof} It suffices to show that $\pi^0\operatorname{Diag} \cong H^0(\operatorname{Tot} X)$, and that $$\pi^*\operatorname{Diag},\; H^*\operatorname{Tot} \colon\mathcal A^{\Delta\times\Delta }\to \mathcal A^{\mathbf N}$$ are sequences of right derived functors. First observe that $\pi^0(\operatorname{Diag} X)=\operatorname{eq} (\partial_h^0\partial_v^0, \partial_h^1\partial_v^1\colon X^{0,0}\to X^{1,1})$, while $H^0(\operatorname{Tot} (X))=\ker ((\partial_h^0-\partial_h^1, \partial_v^0-\partial_v^1)\colon X^{0,0}\to X^{1,0}\oplus X^{0,1})$. But $\partial_h^0\partial_v^0x=\partial_h^1\partial_v^1x$ implies that $\partial_v^0x=\sigma_h^0\partial_h^0\partial_v^0x =\sigma_h^0\partial_h^1\partial_v^1x =\partial_v^1x$, since $\sigma_h^0\partial_h^0 =1=\sigma_h^0\partial_h^1$, and similarly $\partial_h^0x=\sigma_v^0\partial_h^0\partial_v^0x =\sigma_v^0\partial_h^1\partial_v^1x =\partial_h^1x$, since $\sigma_v^0\partial_v^0 =1=\sigma_v^0\partial_v^1$, so that $\pi^0(\operatorname{Diag} X)\subseteq H^0(\operatorname{Tot} (X))$. Conversely, if $\partial_h^0x=\partial_h^1x$ and $\partial_v^0x=\partial_v^1x$ then $\partial_h^0\partial_v^0x=\partial_h^0\partial_v^1x =\partial_v^1\partial_h^0x=\partial_v^1\partial_h^1x =\partial_h^1\partial_v^1x$, and hence $H^0(\operatorname{Tot} (X))\subseteq \pi^0(\operatorname{Diag} X)$. The additive functors $\operatorname{Diag}\colon \mathcal A^{\Delta\times\Delta}\to \mathcal A^{\Delta}$ and $\operatorname{Tot}\colon\mathcal A^{\Delta\times\Delta}\to \operatorname{Coch} (\mathcal A)$ are obviously exact, while $\pi^*, H^*$ are cohomological $\delta$-functors, so that both $\pi^*\operatorname{Diag} ,H^*\operatorname{Tot}\colon\mathcal A^{\Delta\times\Delta}\to \operatorname{Coch} (\mathcal A)$ are cohomological $\delta$-functors.
The claim is that these cohomological $\delta$-functors are universal, i.e.\ the right derived functors of $\pi^0\operatorname{Diag}, H^0\operatorname{Tot}\colon \mathcal A^{\Delta\times\Delta}\to \mathcal A$, respectively. Since $\mathcal A$ has enough injectives, so does $\operatorname{Coch} (\mathcal A)$ by [We, Ex. 2.3.4], and hence by the Dold-Kan equivalence $\mathcal A^{\Delta}$ and $\mathcal A^{\Delta\times\Delta}$ have enough injectives. Moreover, by the next lemma, both $\operatorname{Diag}$ and $\operatorname{Tot} $ preserve injectives. It therefore follows that $$\begin{array}{l} \pi^*\operatorname{Diag} =(R^*\pi^0)\operatorname{Diag} =R^*(\pi^0\operatorname{Diag} ),\\ H^*\operatorname{Tot} =(R^*H^0)\operatorname{Tot} =R^*(H^0\operatorname{Tot} ). \end{array}$$ The canonical cohomological first quadrant spectral sequence associated with the cochain bicomplex $C(X)$ has $$E_1^{p,q}=H_v^q(C^{p,*}(X))=\pi_v^q(X^{p,*})\quad ,\quad E_2^{p,q}=H_h^p(C(\pi_v^q(X)))=\pi_h^p\pi_v^q(X)$$ and converges finitely to $H^{p+q}(\operatorname{Tot} (X))\cong \pi^{p+q}(\operatorname{Diag} X)$. \end{proof} \begin{Lemma} The functors $\operatorname{Diag}\colon \mathcal A^{\Delta\times\Delta}\to\mathcal A^{\Delta}$ and $\operatorname{Tot}\colon\mathcal A^{\Delta\times\Delta}\to\operatorname{Coch}\mathcal A$ preserve injectives. \end{Lemma} \begin{proof} A cosimplicial bicomplex $J$ is an injective object in $\mathcal A^{\Delta\times\Delta}$ if and only if \begin{enumerate} \item each $J^{p,q}$ is an injective object of $\mathcal A$, \item each row and each column is cosimplicially null-homotopic, i.e.\ the identity map is cosimplicially homotopic to the zero map, \item the vertical homotopies $h_v^j\colon J^{*,q}\to J^{*,q-1}$ for $0\leq j\leq q-1$ are cosimplicial maps.
\end{enumerate} It then follows that $\operatorname{Diag} (J)$ is an injective object in $\mathcal A^{\Delta}$, since $J^{p,p}$ is injective in $\mathcal A$ for every $p\geq 0$ and the maps $h^i=h_h^ih_v^i\colon J^{p,p}\to J^{p-1,p-1}$, $0\leq i\leq p-1$ and $p>0$, form a contracting cosimplicial homotopy, i.e.\ the identity map of $\operatorname{Diag} J$ is cosimplicially null-homotopic. On the other hand $\operatorname{Tot} (J)$ is a non-negative cochain complex of injective objects in $\mathcal A$, so it is injective in $\operatorname{Coch} (\mathcal A)$ if and only if it is split-exact, that is, if and only if it is exact. But every column of the associated cochain bicomplex $C(J)$ is acyclic, since $H_v^*(J^{p,*})=\pi^*(J^{p,*})=0$. The exactness of $\operatorname{Tot} (J)$ now follows from the convergent spectral sequence with $E_1^{p,q}=H^q(C^{p,*}(J))=0$ and $E_2^{p,q}=H_h^p(H_v^q(C(J))) \Rightarrow H^{p+q}(\operatorname{Tot} (J))$. \end{proof} \vskip .5cm \subsection{The cosimplicial Alexander-Whitney map} The cosimplicial Alexander-Whitney map gives an explicit formula for the isomorphism in the Eilenberg-Zilber theorem. For $p+q=n$ let $$g_{p,q}=\partial^n_h\partial^{n-1}_h\ldots \partial^{p+1}_h\partial^0_v\ldots \partial^0_v\colon X^{p,q}\to X^{n,n}$$ and $g^n=(g_{p,q})\colon \operatorname{Tot}^n (X)\to X^{n,n}$. This defines a natural cochain map $g\colon \operatorname{Tot} (X)\to C(\operatorname{Diag} X)$, which induces a morphism of universal $\delta$-functors $$g^*\colon H^*(\operatorname{Tot} (X))\to H^*(C(\operatorname{Diag} X))=\pi^*(\operatorname{Diag} X).$$ Moreover, $g^0\colon \operatorname{Tot}^0(X)=X^{0,0}=C^0(\operatorname{Diag} X)$ is the identity, and hence $g$ induces an isomorphism $$H^0(\operatorname{Tot} (X))\cong H^0(C(\operatorname{Diag} X))=\pi^0(\operatorname{Diag} X).$$ The cosimplicial Alexander-Whitney map is therefore (up to equivalence) the unique cochain map inducing the isomorphism in the Eilenberg-Zilber theorem.
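In the lowest nontrivial degree the map $g$ is easy to write out; the following routine unwinding of the definition (stated in the coface notation $\partial_h^i$, $\partial_v^j$ of the bicomplex) also illustrates the cochain map property:

```latex
% For n=1 we have \operatorname{Tot}^1(X)=X^{1,0}\oplus X^{0,1}, with
%   g_{1,0}=\partial_v^0\colon X^{1,0}\to X^{1,1},\qquad
%   g_{0,1}=\partial_h^1\colon X^{0,1}\to X^{1,1}.
% Cochain map property in degree 0: for x\in X^{0,0},
\begin{align*}
g^1(d_{\operatorname{Tot}}x)
 &=\partial_v^0(\partial_h^0-\partial_h^1)x
   +\partial_h^1(\partial_v^0-\partial_v^1)x\\
 &=\partial_h^0\partial_v^0x-\partial_h^1\partial_v^1x
  =d_{\operatorname{Diag}}\,g^0(x),
\end{align*}
% since horizontal and vertical cofaces commute, so the middle terms
% \partial_v^0\partial_h^1x and \partial_h^1\partial_v^0x cancel.
```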
The inverse map $f\colon C(\operatorname{Diag} X)\to \operatorname{Tot} (X)$ is given by the shuffle coproduct formula $$f^{p,q}=\sum_{(p,q)-\mbox{shuffles}}(-1)^{\mu}\sigma^{\mu (n)}_h\ldots \sigma^{\mu (p+1)}_h\sigma^{\mu (p)}_v\ldots \sigma^{\mu (1)}_v\colon X^{n,n}\to X^{p,q},$$ and is a natural cochain map. It induces a natural isomorphism $\pi^0(\operatorname{Diag} X)=H^0(C(\operatorname{Diag} X))\cong H^0(\operatorname{Tot} (X))$, and thus $$f^*\colon \pi^*(\operatorname{Diag} X)=H^*(C(\operatorname{Diag} X))\cong H^*(\operatorname{Tot} (X))$$ is the unique isomorphism of universal $\delta$-functors given in the cosimplicial Eilenberg-Zilber theorem. In particular, $f^*$ is the inverse of $g^*$. \end{appendix} \end{document}
\begin{document} \title{Intercomparison of Machine Learning Methods for Statistical Downscaling: The Case of Daily and Extreme Precipitation} \abstract{ Statistical downscaling of global climate models (GCMs) allows researchers to study local climate change effects decades into the future. A wide range of statistical models have been applied to downscaling GCMs but recent advances in machine learning have not been explored. In this paper, we compare four fundamental statistical methods, Bias Correction Spatial Disaggregation (BCSD), Ordinary Least Squares, Elastic-Net, and Support Vector Machine, with three more advanced machine learning methods, Multi-task Sparse Structure Learning (MSSL), BCSD coupled with MSSL, and Convolutional Neural Networks to downscale daily precipitation in the Northeast United States. Metrics evaluating each method's ability to capture daily anomalies, large-scale climate shifts, and extremes are analyzed. We find that linear methods, led by BCSD, consistently outperform non-linear approaches. The direct application of state-of-the-art machine learning methods to statistical downscaling does not provide improvements over simpler, longstanding approaches. } \section{Introduction} The sustainability of infrastructure, ecosystems, and public health depends on a predictable and stable climate. Key infrastructure allowing society to function, including power plants and transportation systems, is built to sustain specific levels of climate extremes and perform optimally in its expected climate. Studies have shown that the changing climate has had, and will continue to have, significant impacts on critical infrastructure~\cite{ganguli2015water,neumann2015climate}. Furthermore, climate change is having dramatic negative effects on ecosystems, from aquatic species to forests, caused by increases in greenhouse gases and temperatures~\cite{walther2002ecological,parmesan2006ecological,hansen2013high}.
Increases in the frequency and duration of heat waves, droughts, and flooding are damaging public health~\cite{haines2006climate,frumkin2008climate}. Global climate models (GCMs) are used to understand the effects of the changing climate by simulating known physical processes up to two hundred years into the future. The computational resources required to simulate the global climate on a large scale are enormous, limiting models to coarse spatial and temporal scale projections. While the coarse scale projections are useful in understanding climate change at a global and continental level, regional and local understanding is limited. Most often, the critical systems society depends on exist at the regional and local scale, where projections are most limited. Downscaling techniques are applied to provide climate projections at finer spatial scales, exploiting GCMs to build higher resolution outputs. Statistical and dynamical downscaling are the two classes of techniques used for downscaling. The statistical downscaling (SD) approach aims to learn a statistical relationship between coarse scale climate variables (i.e., GCM outputs) and high resolution observations. The other approach, dynamical downscaling, joins the coarse grid GCM projections with known local and regional processes to build Regional Climate Models (RCMs). RCMs are unable to generalize from one region to another as the parameters and physical processes are tuned to specific regions. Though RCMs are useful for hypothesis testing, their lack of generality across regions and the extensive computational resources they require are strong disadvantages. \subsection{Statistical Downscaling} Statistical downscaling methods are further categorized into three approaches: weather generators, weather typing, and transfer functions. Weather generators are typically used for temporal downscaling, rather than spatial downscaling.
Weather typing, also known as the method of analogues, searches for a similar historical coarse resolution climate state that closely represents the current state. Though this method has shown reasonable results~\cite{frost2011comparison}, in most cases it is unable to handle non-stationarity in SD. Lastly, transfer functions, or regression methods, are commonly used for SD by learning functional relationships between historical precipitation and climate variables to project high resolution precipitation. A wide variety of regression methods have been applied to SD, ranging from bias correction techniques to artificial neural networks. Traditional methods include Bias Correction Spatial Disaggregation (BCSD)~\cite{wood2002long} and Automated Regression Based Downscaling (ASD)~\cite{hessami2008automated} and are the most widely used. BCSD assumes that the climate variable being downscaled is well simulated by GCMs, which often is not the case with variables such as precipitation~\cite{schiermeier2010real}. Rather than relying on projections of the climate variable being downscaled, regression methods can be used to estimate the target variable. For instance, precipitation can be projected using a regression model with variables such as temperature, humidity, and sea level pressure over large spatial grids. The high dimensionality of the covariates, stemming from a range of climate variables over three-dimensional space, leads to multicollinearity and overfitting in statistical models. ASD improves upon multiple linear regression by selecting covariates implicitly, using covariate selection techniques such as backward stepwise regression and partial correlations. The Least Absolute Shrinkage and Selection Operator (Lasso), a widely used method for high dimensional regression problems through the use of an $l_1$ penalty term, is analogous to ASD and has shown superior results in SD~\cite{tibshirani1996regression,hammami2012predictor}.
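The implicit covariate selection provided by the $l_1$ penalty can be sketched in a few lines. The following toy illustration uses synthetic data (the covariate counts, seed, and penalty strength are invented for the example, not taken from this study; scikit-learn is assumed available):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical setup (not the data of this study): 1000 days of
# 200 gridded covariates (temperature, humidity, pressure, ...).
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 200))

# Suppose precipitation truly depends on only three covariates.
true_coef = np.zeros(200)
true_coef[[3, 50, 120]] = [1.5, -2.0, 1.0]
y = X @ true_coef + 0.1 * rng.standard_normal(1000)

# The l1 penalty drives most coefficients exactly to zero, so the
# covariate selection of ASD-style stepwise methods happens implicitly.
model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("selected covariates:", selected)
```

Unlike backward stepwise regression, the selection here is a by-product of a single convex fit, controlled by the penalty strength `alpha`.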
Principal component analysis (PCA) is another popular approach to dimensionality reduction in SD~\cite{tatli2004statistical,Ghosh2010,jakob2011empirical}, decomposing the features into a lower dimensional space to minimize multicollinearity between covariates. PCA is disadvantaged by the inability to infer which covariates are most relevant to the problem, steering many away from the method. Other methods for SD include Support Vector Machines (SVM)~\cite{Ghosh2010}, Artificial Neural Networks (ANNs)~\cite{taylor2000quantile,Coulibaly2005}, and Bayesian Model Averaging~\cite{Zhang2015}. Many studies have aimed to compare and quantify a subset of the SD models presented above by downscaling averages and/or extremes at a range of temporal scales. For instance, Burger et al. presented an intercomparison of five state-of-the-art methods for downscaling temperature and precipitation at a daily temporal resolution to quantify extreme events~\cite{Burger2012}. Another recent study by Gutmann et al. presented an intercomparison of methods on daily and monthly aggregated precipitation~\cite{gutmann2014intercomparison}. These studies present a basis for comparing SD models by downscaling at a daily temporal resolution to estimate higher level climate statistics, such as extreme precipitation and long-term droughts. In this paper we follow this approach to test the applicability of more advanced machine learning models to downscaling. \subsection{Multi-task Learning for Statistical Downscaling} Traditionally, SD has focused on downscaling each location independently, without accounting for clear spatial dependencies in the system. Fortunately, numerous machine learning advances may aid SD in exploiting such dependencies. Many of these advancements focus on an approach known as multi-task learning, aiming to learn multiple tasks simultaneously rather than in isolation.
A wide variety of studies have shown that exploiting related tasks through multi-task learning (MTL) greatly outperforms single-task models, from computer vision~\cite{zhang2012robust} to biology~\cite{kim2010tree}. Consider the work presented by~\cite{evgeniou2007multi}, in which increasing the number of tasks leads to more significant feature selection and lower test error through the inclusion of task relatedness and regularization terms in the objective function. MTL has also displayed the ability to uncover and exploit structure between task relationships~\cite{zhang2012convex,chen2011integrating,argyriou2007spectral}. Recently, Goncalves et al.~\cite{goncalves2014multi} presented a novel method, Multi-task Sparse Structure Learning (MSSL), and applied it to GCM ensembles in South America. MSSL aims to exploit sparsity in both the set of covariates and the structure between tasks, such as a set of similar predictands, through alternating optimization of weight and precision (inverse covariance) matrices. The results showed significant improvements in test error over Linear Regression and Multi-model Regression with Spatial Smoothing (a special case of MSSL with a pre-defined precision matrix). Along with a lower error, MSSL captured spatial structure, including long range teleconnections between some coastal cities. The ability to harness this spatial structure and task relatedness within GCM ensembles drives our attention toward MTL in other climate applications. Consider, in SD, each location in a region as a task with an identical set of possible covariates. These tasks are related through strong unknown spatial dependencies which can be harnessed for SD projections. In the common high dimensional cases of SD, the sparse features learned will be more significant, as shown by~\cite{evgeniou2007multi}. Furthermore, the structure between locations will be learned and may aid projections.
MSSL, presented in \cite{goncalves2014multi}, accounts for sparse feature selection and structure between tasks. In this study we aim to compare traditional statistical downscaling approaches, BCSD, Multiple Linear Regression, Lasso, and Support Vector Machines, against newer machine learning approaches, Multi-task Sparse Structure Learning and Convolutional Neural Networks (CNNs). During experimentation we apply common training architectures as part of the automated statistical downscaling framework. Results are then analyzed with a variety of metrics including root mean square error (RMSE), bias, skill in estimating underlying distributions, correlation, and extreme indices. \section{Statistical Downscaling Methods} \subsection{Bias Corrected Spatial Disaggregation} BCSD~\cite{wood2002long} is widely used in the downscaling community due to its simplicity~\cite{abatzoglou2012comparison,Burger2012,wood2004hydrologic,maurer2010utility}. Most commonly, GCM data is bias corrected and spatially disaggregated at the monthly scale, then temporally disaggregated to daily projections. Temporal disaggregation is performed by selecting a month at random and adjusting the daily values to reproduce its statistical distribution, ignoring daily GCM projections. Thrasher et al. presented a process applying BCSD directly to daily projections~\cite{thrasher2012technical}, removing the step of temporal disaggregation. We apply the following steps to overlapping reanalysis and gridded observation datasets. 1) Bias correction of daily projections using observed precipitation. Observed precipitation is first remapped to match the reanalysis grid. For each day of the year, values within $\pm$15 days are pooled from the reanalysis and observed datasets to build a quantile mapping. With the quantile mapping computed, the reanalysis data points are mapped (bias corrected) to the same distribution as the observed data.
When applying this method to daily precipitation, detrending is unnecessary because the data lack a trend, and it is therefore not applied. 2) Spatial disaggregation of the bias-corrected reanalysis data. Coarse resolution reanalysis is then bilinearly interpolated to the same grid as the observation dataset. To preserve spatial details of the fine-grained observations, the average precipitation for each day of the year is computed from the observations and set as scaling factors. These scaling factors are then multiplied with the daily interpolated projections to provide downscaled projections. \subsection{Automated Statistical Downscaling} ASD is a general framework for statistical downscaling incorporating covariate selection and prediction~\cite{hessami2008automated}. Downscaling of precipitation using ASD requires two essential steps: 1. Classify rainy/non-rainy days ($\geq$ 1mm), 2. Predict precipitation totals for rainy days. The predicted precipitation can then be written as: \begin{equation} \begin{aligned} \label{eq:asd} \textbf{E}[Y] = R * E[Y | R] \text{ where } R = \begin{cases} 0, & \text{if}\ \textbf{P}(Rainy) < 0.5 \\ 1, & \text{otherwise} \end{cases} \end{aligned} \end{equation} Formulating $R$ as a binary variable preserves rainy and non-rainy days. We test this framework using five pairs of classification and regression techniques. \subsubsection{Multiple Linear Regression} The simplicity of Multiple Linear Regression (MLR) motivated its use in SD, particularly as part of SDSM~\cite{wilby2002sdsm} and ASD~\cite{hessami2008automated}. To provide a baseline relative to the following methods, we apply a variation of MLR using PCA. As discussed previously, PCA is implemented to reduce the dimensionality of a high dimensional feature space by selecting the components that account for a percentage (98\% in our implementation) of variance in the data.
These principal components, $X$, are used as inputs to classify and predict precipitation totals. We apply a logistic regression model to classify rainy versus non-rainy days. MLR is then applied to rainy days to predict precipitation amounts, $Y$: \begin{equation} \begin{aligned} \label{eq:mlp} & \hat{\beta} = \arg\!\min_{\beta} \parallel Y - X\beta \parallel \\ \end{aligned} \end{equation} This particular formulation will aid in comparison to~\cite{Ghosh2010}, where PCA is coupled with an SVM. \subsubsection{Elastic-Net} Covariate selection can be performed in a variety of ways, such as backward stepwise regression and partial correlations. Covariates can also be selected automatically through regularization terms, such as the $L_1$/$L_2$ norms used by Lasso~\cite{tibshirani1996regression}, Ridge~\cite{hoerl1970ridge}, and Elastic-Net~\cite{zou2005regularization}. Elastic-Net uses a linear combination of the $L_1$/$L_2$ norms, which we apply in this intercomparison. Given a set of covariates $X$ and observations $Y$, Elastic-Net is defined as: \begin{equation} \begin{aligned} \label{eq:elnet} & \hat{\beta} = \arg\!\min_{\beta} \big( \parallel Y - X\beta \parallel_2^2 + \lambda_1 \parallel \beta \parallel_1 + \lambda_2 \parallel \beta \parallel_2^2 \big) \\ \end{aligned} \end{equation} The $L_1$ norm forces uninformative covariate coefficients to zero while the $L_2$ norm enforces smoothness and allows correlated covariates to persist. Cross-validation is applied with a grid search to find the optimal parameter values for $\lambda_1$ and $\lambda_2$. In high dimensions, Elastic-Net is far less computationally expensive than stepwise regression techniques and most often yields more generalizable models. A similar approach is applied to the classification step by using a logistic regression with an $L_1$ regularization term. Previous studies have considered the use of Lasso for SD~\cite{hammami2012predictor} but, to our knowledge, none have considered Elastic-Net.
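The two-step ASD formulation can be made concrete in a short sketch. The coordinate-descent Elastic-Net solver and the `asd_predict` helper below are illustrative only (not the solver used in our experiments, and with arbitrary default penalties); they show how the regularized regression and the binary rain classifier combine per the ASD equation above.

```python
import numpy as np

def elastic_net(X, y, lam1=0.1, lam2=0.1, n_iter=200):
    """Coordinate descent for 0.5*||y - X b||^2 + lam1*||b||_1 + 0.5*lam2*||b||^2
    (a scaled variant of the Elastic-Net objective in the text)."""
    n, d = X.shape
    beta = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(d):
            # partial residual with coordinate j removed
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            # soft-thresholding (L1) with ridge shrinkage in the denominator (L2)
            beta[j] = np.sign(rho) * max(abs(rho) - lam1, 0.0) / (col_sq[j] + lam2)
    return beta

def asd_predict(p_rainy, rainy_amount):
    """Combine the two ASD steps: E[Y] = R * E[Y | R], with R = 1{P(rainy) >= 0.5}."""
    return np.where(np.asarray(p_rainy) >= 0.5, rainy_amount, 0.0)
```

The classifier's rain probability gates the regression output, so dry days are projected as exactly zero rather than as small positive amounts.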
\subsubsection{Support Vector Machine Regression} Ghosh et al. introduced a coupled approach of PCA and Support Vector Machine Regression (SVR) for statistical downscaling~\cite{ghosh2008statistical,Ghosh2010}. The use of SVR for downscaling aims to capture non-linear effects in the data. As discussed previously, PCA is implemented to reduce the dimensionality of a high dimensional feature space by selecting the components that account for a percentage (98\% in our implementation) of variance in the data. Following dimensionality reduction, SVR is used to define the transfer function between the principal components and observed precipitation. Given a set of covariates (the chosen principal components) $X \in \mathbb{R}^{n \times d}$ and $Y \in \mathbb{R}^n$ with $d$ covariates and $n$ samples, the support vector regression is defined as~\cite{smola1997support}: \begin{equation} \begin{aligned} & f(x) = \sum_{i=1}^n w_i \times K(x_i, x) + b \end{aligned} \end{equation} where $K(x_i, x)$ are the kernel evaluations over training points $x_i$, $w_i$ the corresponding weights, and $b$ a bias term. The support vectors are selected during training as the subset of training points that defines the relationship between the predictand ($Y$) and predictors ($X$); only support vectors receive nonzero weights $w_i$. Parameters $C$ and $\epsilon$, corresponding to regularization and loss sensitivity, are set during training to $1.0$ and $0.1$ respectively. A linear kernel function is applied to limit overfitting to the training set. Furthermore, a support vector classifier is used for classification of rainy versus non-rainy days. \subsubsection{Multi-task Sparse Structure Learning} Recent work in multi-task learning aims to exploit structure in the set of predictands while keeping a sparse feature set. Multi-task Sparse Structure Learning (MSSL) in particular learns the structure between predictands while enforcing sparse feature selection~\cite{goncalves2014multi}. Goncalves et al.
presented MSSL's exceptional ability to predict temperature through ensembles of GCMs while learning interesting teleconnections between locations~\cite{goncalves2014multi}. Moreover, the generalized framework of MSSL allows for implementation of classification and regression models. Applying MSSL to downscaling with least squares regression (logistic regression for classification), we denote $K$ as the number of tasks (observed locations), $n$ as the number of samples, and $d$ as the number of covariates with predictor $X \in \mathbb{R}^{n \times d}$ and predictand $Y \in \mathbb{R}^{n \times K}$. As proposed in \cite{goncalves2014multi}, the joint optimization over the weight matrix $\boldsymbol{W}$ and the precision matrix $\boldsymbol{\Omega}$ is defined as
\begin{equation} \label{eq:MSSL}
\min_{\boldsymbol{W},\boldsymbol{\Omega} \succ 0} \bigg\{ \dfrac{1}{2} \sum_{k=1}^K \parallel X \boldsymbol{W}_k - Y_k \parallel_2^2 - \dfrac{K}{2} \log|\boldsymbol{\Omega}| + Tr(\boldsymbol{W} \boldsymbol{\Omega} \boldsymbol{W}^T) + \lambda \parallel \boldsymbol{\Omega} \parallel_1 + \gamma \parallel \boldsymbol{W} \parallel_1 \bigg\}
\end{equation}
\noindent where $\boldsymbol{W} \in \mathbb{R}^{d \times K}$ is the weight matrix and $\boldsymbol{\Omega} \in \mathbb{R}^{K \times K}$ is the precision (inverse covariance) matrix. The $L_1$ regularization parameters $\lambda$ and $\gamma$ enforce sparsity over $\boldsymbol{\Omega}$ and $\boldsymbol{W}$. $\boldsymbol{\Omega}$ represents the structure contained between the high resolution observations. Alternating minimization is applied to (\ref{eq:MSSL}):
\begin{enumerate}
\item Initialize $\boldsymbol{\Omega}^0 = I_K$, $\boldsymbol{W}^0 = \mathbf{0}_{d \times K}$.
\item For $t = 1, 2, 3, \ldots$ \textbf{do}
\begin{equation} \boldsymbol{W}^{t+1} | \boldsymbol{\Omega}^t = \min_{\boldsymbol{W}} \bigg\{ \dfrac{1}{2}\sum_{k=1}^K \parallel X_k \boldsymbol{W}_k - Y_k \parallel_2^2 + Tr(\boldsymbol{W} \boldsymbol{\Omega} \boldsymbol{W}^T) + \gamma \parallel \boldsymbol{W} \parallel_1 \bigg\} \label{eq:MSSL-W} \end{equation}
\begin{equation} \boldsymbol{\Omega}^{t+1} | \boldsymbol{W}^{t+1} = \min_{\boldsymbol{\Omega}} \bigg\{ Tr(\boldsymbol{W} \boldsymbol{\Omega} \boldsymbol{W}^T) - \dfrac{K}{2} \log|\boldsymbol{\Omega}| + \lambda \parallel \boldsymbol{\Omega} \parallel_1 \bigg\} \label{eq:MSSL-Omega} \end{equation}
\end{enumerate}
\noindent Equations (\ref{eq:MSSL-W}) and (\ref{eq:MSSL-Omega}) are independently approximated through the Alternating Direction Method of Multipliers (ADMM). Furthermore, by assuming the predictors of each task are identical (as they are for SD), (\ref{eq:MSSL-W}) is updated using Distributed-ADMM across the feature space~\cite{boyd2011distributed}. MSSL enforces similarity between rows of $\boldsymbol{W}$ by learning the structure $\boldsymbol{\Omega}$. For example, two locations which are nearby in space may tend to exhibit similar properties. MSSL will then exploit these properties and impose similarity in their corresponding linear weights. By enforcing similarity in linear weights, we encourage smoothness of SD projections between highly correlated locations. The $L_1$ regularization over $\boldsymbol{W}$ and $\boldsymbol{\Omega}$ encourages sparseness without forcing structure. The parameters encouraging sparseness, $\gamma$ and $\lambda$, are chosen from a validation set using a grid search. These steps are applied for both regression and classification. \subsubsection{Convolutional Neural Networks} \begin{figure} \caption{Given a set of GCM inputs, the first layer extracts a set of feature maps followed by a pooling layer. A second convolution layer is then applied to the reduced feature space and pooled one more time.
The second pooling layer is then flattened and fully connected to the high resolution observations.} \label{fig:cnn-framework} \end{figure} Artificial Neural Networks (ANNs) have been widely applied to SD with mixed results~\cite{taylor2000quantile,schoof2001downscaling,Burger2012}. In the past, ANNs often had difficulty converging to a good local minimum. Recent progress in deep learning has renewed interest in ANNs, which are beginning to achieve impressive results in many applications, including image classification and speech recognition~\cite{krizhevsky2012imagenet,hinton2012deep,basu2015learning}. In particular, Convolutional Neural Networks (CNNs) have greatly impacted computer vision applications by extracting, representing, and condensing spatial properties of the image~\cite{krizhevsky2012imagenet}. SD may benefit from CNN advances by learning spatial representations of GCMs. Though CNNs rely on a high number of samples to reduce overfitting, dropout has been shown to be an effective method of reducing overfitting with limited samples~\cite{srivastava2014dropout}. We note that the limited number of observations available for daily statistical downscaling may cause overfitting. CNNs rely on two types of layers, a convolution layer and a pooling layer. In the convolution layer, a patch of size $3 \times 3$ is chosen and slid with a stride of 1 around the image. A non-linear transformation is applied to each patch, resulting in 8 filters. Patches of size $2 \times 2$ are then pooled by selecting the maximum unit with a stride of $2$. A second convolution layer, with a $3 \times 3$ patch and $2$ filters, is followed by a max pooling layer of size $3 \times 3$ with stride $3$. The increased pooling size decreases the dimensionality further. The last pooling layer is then vectorized and densely connected to each high resolution location. This architecture is presented in Figure 1.
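The layer sequence described above can be traced numerically. The sketch below computes the tensor shapes flowing through this architecture; the 'valid' (no-padding) convolutions and the $16 \times 33 \times 15$ input grid are illustrative assumptions, not the exact configuration of our network.

```python
def out_len(size, kernel, stride=1):
    """Output length of a 'valid' (no padding) convolution or pooling window."""
    return (size - kernel) // stride + 1

def cnn_shapes(h, w, channels):
    """Trace tensor shapes through the conv/pool stack described in the text."""
    shapes = [("input", (h, w, channels))]
    h, w = out_len(h, 3), out_len(w, 3)
    shapes.append(("conv 3x3, 8 filters", (h, w, 8)))
    h, w = out_len(h, 2, 2), out_len(w, 2, 2)
    shapes.append(("maxpool 2x2, stride 2", (h, w, 8)))
    h, w = out_len(h, 3), out_len(w, 3)
    shapes.append(("conv 3x3, 2 filters", (h, w, 2)))
    h, w = out_len(h, 3, 3), out_len(w, 3, 3)
    shapes.append(("maxpool 3x3, stride 3", (h, w, 2)))
    shapes.append(("flatten", (h * w * 2,)))
    return shapes
```

Under these assumptions the flattened vector feeding the dense layer is small relative to the input field, illustrating how aggressively the two pooling stages condense the spatial grid before the fully connected projection to the observation locations.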
Multiple variables and pressure levels from our reanalysis dataset are represented as channels in the CNN input. Our CNN is trained using traditional backpropagation optimization with a decreasing learning rate. During training, dropout with probability 0.5 is applied to the densely connected layer. This method aims to exploit the spatial structure contained in the GCM. A sigmoid function is applied to the output layer for classification. To our knowledge, this is the first application of CNNs to statistical downscaling. \subsection{Bias Corrected Spatial Disaggregation with MSSL} To further understand the use of BCSD in statistical downscaling, we propose a technique to estimate the errors introduced by BCSD. As presented above, BCSD utilizes a relatively simple quantile mapping approach to statistical downscaling followed by interpolation and spatial scaling. After BCSD estimates of the observed climate are produced, we compute the resulting errors, which may be systematic and carry a predictive signal. Modeling these errors using the transfer function approaches above, such as MSSL, may uncover this signal and improve BCSD projections. To apply this technique, the following steps are taken: \begin{enumerate} \item Apply BCSD to the coarse scale climate variable and compute the errors. \item Excluding a hold-out dataset, use MSSL where the predictand is the computed errors and the predictors are a different set of climate variables, such as temperature, wind, and sea level pressure. \item Subtract the expected errors modeled in step 2 from the BCSD projections in step 1. \end{enumerate} \noindent The transfer function learned in step 2 is then applicable to future observations. \section{Data} The northeastern United States endures highly variable seasonal and annual weather patterns. Variable climate and weather patterns combined with diverse topography make regional climate projection difficult.
Precipitation in particular varies heavily in frequency and intensity seasonally and annually~\cite{karl1998secular}. We choose this region to provide an in-depth comparison of statistical downscaling techniques for daily precipitation and extremes. \subsection{United States Unified Gauge-Based Analysis of Precipitation} High resolution gridded precipitation datasets often carry high uncertainties due to a lack of gauge based observations, poor quality control, and interpolation procedures. Fortunately, precipitation gauge data in the continental United States is dense with high temporal resolution (hourly and daily). The NOAA Climate Prediction Center (CPC) Unified Gauge-Based Analysis of Precipitation exploits the dense network of rain gauges to provide a quality controlled high resolution (0.25$^{\circ}$ by 0.25$^{\circ}$) gridded daily precipitation dataset from 1948 to the current date. State of the art quality control~\cite{chen2008assessing} and interpolation~\cite{xie2007gauge} techniques are applied, giving us high confidence in the data. We select all locations within the northeastern United States watershed. \subsection{NASA Modern-Era Retrospective Analysis for Research and Applications 2 (MERRA-2)} Reanalysis datasets are often used as proxies for GCMs when comparing statistical downscaling methods due to their low resolution gridded nature with a range of pressure levels and climate variables. Uncertainties and biases occur in each dataset, but state-of-the-art reanalysis datasets attempt to mitigate these issues. NASA's MERRA-2 reanalysis dataset~\cite{rienecker2011merra} was chosen after consideration of the NCEP Reanalysis I/II~\cite{kalnay1996ncep} and ERA-Interim~\cite{dee2011era} datasets. \cite{kossin2015validating} showed the reduced bias of MERRA and ERA-Interim over NCEP Reanalysis II, which is most often used in SD studies.
MERRA-2 provides significant temporal coverage from 1980 to present with relatively high spatial resolution (0.50$^{\circ}$ by 0.625$^{\circ}$). Satellite data provided by NASA's GEOS-5 system is used in conjunction with NASA's data assimilation system to produce MERRA-2~\cite{rienecker2011merra}. Only variables available from the CCSM4 GCM model are selected as covariates for our SD models. Temperature, vertical wind, horizontal wind, and specific humidity are chosen from pressure levels 500hpa, 700hpa, and 850hpa. At the surface level, temperature, sea level pressure, and specific humidity are chosen as covariates. To most closely resemble CCSM4, each variable is spatially upscaled to 1.00$^{\circ}$ by 1.25$^{\circ}$ at a daily resolution. A large box centered on the northeastern region, ranging from 35$^{\circ}$ to 50$^{\circ}$ latitude and 270$^{\circ}$ to 310$^{\circ}$ longitude, is used for each variable. When applying the BCSD model, we use a spatially upscaled land precipitation MERRA-2 reanalysis dataset at a daily temporal resolution. Bilinear interpolation is applied over the coast to allow for quantile mapping of coastal locations as needed. \section{Experiments and Evaluation} In-depth evaluation of downscaling techniques is crucial in testing and understanding their credibility. The implicit assumptions in SD must be clearly understood and tested when applicable. First, SD models assume that the chosen predictors credibly represent the variability in the predictands. This assumption is partially validated through the choice of predictors presented above, which physically represent the variability of precipitation. The remainder of the assumption must be tested through experimentation and statistical tests between downscaled projections and observations. The second assumption requires the statistical attributes of predictands and predictors to be valid outside of the data used for statistical modeling.
A hold-out set will be used to test the feasibility of this assumption at daily, monthly, and annual temporal resolutions. Third, the climate change signal must be incorporated in the predictors through GCMs. The predictors chosen for this experiment are available through CMIP5 CCSM4 simulations. It is understood that precipitation is not well simulated by GCMs and is therefore not used as a predictor in ASD models~\cite{schiermeier2010real}. To test these assumptions, we provide in-depth experiments, analysis, and statistical metrics for each method presented above. The years 1980-2004 are used for training and years 2005-2014 are used for testing, taken from the overlapping time period of MERRA-2 and CPC precipitation. For each method (excluding the special case of BCSD), we choose all covariates from each variable, pressure level, and grid point presented above, totaling 12,781 covariates. Each method applies either dimensionality reduction or regularization techniques to reduce the complexity of this high dimensional dataset. Separate models are trained for each season (DJF, MAM, JJA, SON) and used to project the corresponding observations. \noindent Analysis and evaluation of downscaled projections aim to cover three themes: \begin{enumerate} \item Ability to capture daily anomalies. \item Ability to respond to large scale climate trends on monthly and yearly temporal scales. \item Ability to capture extreme precipitation events. \end{enumerate} Similar evaluation techniques were applied in recent intercomparison studies of SD~\cite{Burger2012,gutmann2014intercomparison}. Daily anomalies are evaluated through comparison of bias (projected minus observed), root mean square error (RMSE), correlations, and a skill score~\cite{perkins2007evaluation}. The skill score presented by~\cite{perkins2007evaluation} measures how similar two probability density functions are on a range from 0 to 1, where 1 corresponds to identical distributions.
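Concretely, this skill score sums the bin-wise minimum of the two binned frequency distributions. A minimal sketch (the bin count here is an arbitrary illustrative choice):

```python
import numpy as np

def perkins_skill(obs, model, n_bins=20):
    """Perkins et al. (2007) skill score: overlap of two empirical PDFs.
    Returns 1.0 for identical binned distributions, 0.0 for disjoint ones."""
    obs, model = np.asarray(obs, float), np.asarray(model, float)
    # common bin edges spanning both samples
    edges = np.linspace(min(obs.min(), model.min()),
                        max(obs.max(), model.max()), n_bins + 1)
    f_obs, _ = np.histogram(obs, bins=edges)
    f_mod, _ = np.histogram(model, bins=edges)
    # normalize counts to relative frequencies, then sum the bin-wise overlap
    return float(np.minimum(f_obs / f_obs.sum(), f_mod / f_mod.sum()).sum())
```

Because the score depends only on the binned marginal distributions, a model can score well here while still having low daily correlation, which is why both metrics are reported.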
Statistics are presented for winter (DJF), summer (JJA), and annually to understand seasonal credibility. Statistics for spring and fall are computed but not presented in order to minimize overlapping climate states and simplify results. Each of the measures is computed independently in space and then averaged to a single metric. Large scale climate trends are tested by aggregating daily precipitation to monthly and annual temporal scales. The aggregated projections are then compared using RMSE, correlations, and a skill score as presented in~\cite{perkins2007evaluation}. Due to the limited number of data points in the monthly and yearly projections, we estimate each measure using the entire set of projections and observations. Climate indices are used to evaluate the SD models' ability to estimate extreme events. Four metrics from ClimDEX (http://www.clim-dex.org), chosen to encompass a range of extremes, will be utilized for evaluation, as presented by B\"{u}rger~\cite{Burger2012}. \begin{enumerate} \item CWD - Consecutive wet days $\geq$ 1mm \item R20 - Very heavy wet days $\geq$ 20mm \item RX5day - Monthly consecutive maximum 5 day precip \item SDII - Daily intensity index = Annual total / precip days $\geq$ 1mm \end{enumerate} Metrics will be computed on observations and downscaled estimates followed by annual (or monthly) comparisons. For example, correlating the maximum number of consecutive wet days per year between observations and downscaled estimates measures each SD model's ability to capture yearly anomalies. A skill score will also be utilized to assess the ability to reproduce statistical distributions. \section{Results} Results presented below are evaluated using a hold-out set, years 2005-2014. Each model's ability to capture daily anomalies, large scale climate trends, and extreme events is presented.
Our goal is to understand an SD model's overall ability to provide credible projections rather than one-versus-one comparisons; therefore, statistical significance was not computed when comparing statistics. \subsection{Daily Anomalies} \begin{figure} \caption{Each map presents the spatial bias, or directional error, of the model. White represents no bias produced by the model while red and blue respectively show positive and negative biases.} \label{fig:bias} \end{figure} \begin{figure} \caption{Root mean square error (RMSE) is computed for each downscaling location and method. Each boxplot presents the distribution of all RMSEs for the respective method. The box shows the quartiles while the whiskers show the remaining distribution, with outliers displayed by points.} \label{fig:daily-rmse} \end{figure} \begin{table} \small \noindent\makebox[\textwidth]{ \begin{tabularx}{1.2\textwidth}{>{\em}l|XXX|XXX|XXX|XXX} \toprule {} & \multicolumn{3}{c|}{Bias (mm)} & \multicolumn{3}{c|}{Correlation} & \multicolumn{3}{c|}{RMSE (mm)} & \multicolumn{3}{c}{Skill Score} \\ Season & Annual & DJF & JJA & Annual & DJF & JJA & Annual & DJF & JJA & Annual & DJF & JJA \\ \midrule BCSD & -0.44 & -0.36 & -0.36 & 0.52 & 0.49 & 0.46 & 0.75 & 0.65 & 0.81 & 0.93 & 0.92 & 0.89 \\ PCAOLS & -0.89 & -0.71 & -1.16 & 0.55 & 0.60 & 0.49 & 0.70 & 0.55 & 0.75 & 0.82 & 0.81 & 0.76 \\ PCASVR & 0.37 & 0.04 & 0.20 & 0.33 & 0.39 & 0.31 & 1.10 & 0.79 & 1.05 & 0.91 & 0.87 & 0.87 \\ ELNET & -0.88 & -0.66 & -1.16 & 0.64 & 0.69 & 0.55 & 0.65 & 0.50 & 0.72 & 0.84 & 0.85 & 0.78 \\ MSSL & -1.58 & -1.20 & -2.05 & 0.62 & 0.64 & 0.54 & 0.68 & 0.55 & 0.74 & 0.92 & 0.90 & 0.88 \\ BCSD-MSSL & -0.16 & -0.10 & -0.02 & 0.58 & 0.60 & 0.50 & 0.69 & 0.56 & 0.77 & 0.79 & 0.80 & 0.74 \\ CNN & -3.27 & -2.72 & -3.68 & 0.58 & 0.63 & 0.55 & 0.87 & 0.69 & 0.90 & 0.73 & 0.74 & 0.67 \\ \bottomrule \end{tabularx}} \caption{Daily statistical metrics averaged over space for annual, winter, and summer projections.
Bias measures the directional error from each model. Correlation (larger is better) and RMSE (lower is better) describe the model's ability to capture daily fluctuations in precipitation. The skill score statistic measures the model's ability to estimate the observed probability distribution.} \label{tab:daily-stats} \end{table} Evaluation of daily anomalies depends on a model's ability to estimate daily precipitation given the state of the system. This is equivalent to analyzing the error between projections and observations. Four statistical measures are used to evaluate these errors: bias, Pearson correlation, skill score, and root mean square error (RMSE), as presented in Figure 2, Figure 3, and Table 1. All daily precipitation measures are computed independently in space and averaged to provide a single value. This approach is taken to summarize the measures as simply as possible. Figure 2 shows the spatial representation of the annual bias in Table 1. Overall, methods tend to underestimate precipitation annually and seasonally, with only PCASVR overestimating. BCSD-MSSL shows the lowest annual and summer bias and second lowest winter bias. BCSD consistently under-projects daily precipitation, but by modeling the possible error with MSSL, bias is reduced. PCAOLS and ELNET are less biased compared to MSSL. CNN has a strong tendency to underestimate precipitation. Figure 2 shows consistent negative bias through space for BCSD, ELNET, PCAOLS, MSSL, and CNN, while PCASVR shows no discernible pattern. Correlation measures in Table 1 present a high linear relationship between projections and observations for the models ELNET (0.64 annually) and MSSL (0.62 annually). We find that BCSD has a lower correlation even in the presence of error correction in BCSD-MSSL. PCASVR provides low correlations, averaging 0.33 annually, but PCAOLS performs substantially better at 0.55.
The skill score is used to measure a model's ability to reproduce the underlying distribution of observed precipitation, where higher values (between 0 and 1) are better. BCSD, MSSL, and PCASVR have the largest skill scores: 0.93, 0.92, and 0.91 annually. We find that modeling the errors of BCSD decreases the ability to replicate the underlying distribution. The more basic linear models, PCAOLS and ELNET, present lower skill scores. The much more complex CNN model has difficulty replicating the distribution. RMSE, presented in Figure 3 and Table 1, measures overall predictive ability by penalizing squared errors. The boxplot in Figure 3, where the boxes present the quartiles and the whiskers the remaining distribution, with outliers as points, shows the distribution of RMSE annually over space. The regularized models, ELNET and MSSL, have similar error distributions and outperform the others. CNN, similar to its underperformance in bias, shows a poor ability to minimize error. The estimation of error produced by BCSD-MSSL aids in lowering the RMSE of plain BCSD. PCAOLS reasonably minimizes RMSE while PCASVR severely underperforms compared to all other models. The regression models explicitly minimize error during optimization, while BCSD does not. Seasonally, winter is easier to project, while summer is more challenging.
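The per-location evaluation described above (compute each daily measure independently in space, then average to a single value) can be sketched directly; the function below is an illustrative reduction over a days-by-locations array, not our exact evaluation code.

```python
import numpy as np

def daily_metrics(obs, proj):
    """Per-location bias, RMSE, and Pearson correlation, averaged over space.
    obs, proj: arrays of shape (n_days, n_locations)."""
    obs, proj = np.asarray(obs, float), np.asarray(proj, float)
    err = proj - obs
    bias = err.mean(axis=0)                      # directional error per location
    rmse = np.sqrt((err ** 2).mean(axis=0))      # daily RMSE per location
    corr = np.array([np.corrcoef(obs[:, j], proj[:, j])[0, 1]
                     for j in range(obs.shape[1])])
    # average each per-location measure to a single summary value
    return float(bias.mean()), float(rmse.mean()), float(corr.mean())
```

Averaging after the per-location computation keeps spatially localized errors visible in maps such as Figure 2 while still yielding the single summary numbers reported in Table 1.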
\subsection{Large Climate Trends} \begin{figure} \caption{The average root mean square error for each month, with each line representing a single downscaling model.} \label{fig:monthly-rmse} \end{figure} \begin{table} \centering \small \begin{tabular}{>{\em}l|rr|rr|rr} \toprule {} & \multicolumn{2}{c|}{RMSE (mm)} & \multicolumn{2}{c|}{Skill Score} & \multicolumn{2}{c}{Correlation} \\ Time-frame & Month & Year & Month & Year & Month & Year \\ \midrule BCSD & 31.97 & 204.78 & 0.88 & 0.63 & 0.85 & 0.64 \\ PCAOLS & 50.01 & 362.73 & 0.75 & 0.27 & 0.63 & 0.41 \\ PCASVR & 92.17 & 414.40 & 0.83 & 0.69 & 0.29 & 0.15 \\ ELNET & 46.96 & 353.67 & 0.76 & 0.27 & 0.71 & 0.50 \\ MSSL & 62.63 & 597.80 & 0.56 & 0.05 & 0.67 & 0.40 \\ BCSD-MSSL & 31.24 & 155.04 & 0.88 & 0.87 & 0.82 & 0.60 \\ CNN & 112.21 & 1,204.27 & 0.01 & 0.00 & 0.59 & 0.54 \\ \bottomrule \end{tabular} \caption{Large scale projection results: after aggregating daily downscaled estimates to monthly and yearly time scales, RMSE and skill are computed per location and averaged.} \label{tab:res-largscale} \end{table} Analysis of an SD model's ability to capture large scale climate trends can be done by aggregating daily precipitation to monthly and annual temporal scales. To increase the confidence in our measures, presented in Table 2 and Figures 4 and 5, we compare all observations and projections in a single computation, rather than separating by location and averaging. Table 2 and Figure 4 show a wide range of RMSE. A clear difficulty in projecting precipitation in the fall, October in particular, is presented by each time series in Figure 4. The difference between the models in overall predictability, as measured by RMSE, is evident. BCSD and BCSD-MSSL have significantly lower monthly RMSEs than the others. Annually, BCSD-MSSL reduced RMSE by 25\% compared to plain BCSD. The linear models, ELNET, MSSL, and PCAOLS, have similar predictability while the non-linear models suffer, CNN being considerably worse.
The skill scores in Table 2 show more difficulty in estimating the annual distribution than the monthly distribution. On a monthly scale, BCSD and BCSD-MSSL skill scores outperform all other models, but BCSD suffers slightly on an annual basis. However, BCSD-MSSL does not lose any ability to estimate the annual distribution. The PCAOLS annual skill score is remarkably higher than the monthly skill score. Furthermore, the three linear models outperform BCSD on an annual basis. PCASVR's skill score suffers on an annual scale and CNN has no ability to estimate the underlying distribution. Correlation measures across the models and temporal scales show much of the same. BCSD has the highest correlations on both monthly ($\sim$0.85) and yearly ($\sim$0.64) scales, while BCSD-MSSL is slightly lower. CNN correlations fall just behind BCSD and BCSD-MSSL. PCASVR fails, with correlation values of 0.29 and 0.15. ELNET has slightly higher correlations relative to MSSL and PCAOLS. \subsection{Extreme Events} \begin{figure} \caption{Annual precipitation observed (x-axis) and projected (y-axis) for each model is presented along with the corresponding Pearson Correlation.
Each point represents a single location and year.} \label{fig:annual-corr} \end{figure} \begin{figure} \caption{The daily intensity index (annual precipitation divided by the number of precipitation days) averaged per year.} \label{fig:sdii} \end{figure} \begin{table} \centering \small \begin{tabular}{>{\em}l|rrrr|rrrr} \toprule {} & \multicolumn{4}{c|}{Correlation} & \multicolumn{4}{c}{Skill Score} \\ {Metric} & CWD & R20 & RX5day & SDII & CWD & R20 & RX5day & SDII \\ \midrule model & & & & & & & & \\ BCSD & \textbf{0.43} & \textbf{0.83} & \textbf{0.73} & \textbf{0.70} & 0.71 & 0.80 & \textbf{0.84} & 0.44 \\ PCAOLS & 0.25 & 0.65 & 0.44 & 0.67 & 0.69 & 0.60 & 0.65 & 0.44 \\ PCASVR & 0.24 & 0.81 & 0.19 & 0.25 & 0.78 & \textbf{0.89} & 0.80 & \textbf{0.65} \\ ELNET & 0.36 & 0.71 & 0.57 & 0.64 & 0.79 & 0.62 & 0.63 & 0.35 \\ MSSL & 0.33 & \textbf{0.84} & 0.56 & 0.52 & \textbf{0.90} & 0.63 & 0.57 & 0.16 \\ BCSD-MSSL & 0.25 & \textbf{0.83} & 0.70 & \textbf{0.69} & 0.41 & 0.75 & \textbf{0.84} & 0.08 \\ CNN & 0.07 & --- & 0.33 & -0.30 & 0.05 & 0.59 & 0.01 & 0.00 \\ \bottomrule \end{tabular} \caption{Statistics for ClimDEX indices: for each model's downscaled estimates we compute four extreme indices, consecutive wet days (CWD), very heavy wet days (R20), maximum 5 day precipitation (RX5day), and daily intensity index (SDII), for each location. We then compare these indices to those extracted from observations to compute correlation and skill metrics.} \label{tab:climdex} \end{table} An SD model's ability to downscale extremes from reanalysis depends on both its response to observed anomalies and its ability to reproduce the underlying distribution. The resulting correlation measures present the response to observed anomalies, shown in Figure 5 and Table 3. We find that BCSD has higher correlations for three metrics, namely consecutive wet days, very heavy wet days, and daily intensity index, along with similar results for 5-day maximum precipitation.
Furthermore, modeling BCSD's expected errors with BCSD-MSSL decreases the ability to estimate the chosen extreme indices. The non-linear methods, PCASVR and CNN, suffer greatly in comparison to the more basic bias correction and linear approaches. The linear methods, PCAOLS, ELNET, and MSSL, provide similar correlative performance. A skill score is used to quantify each method's ability to estimate an index's statistical distribution, presented in Table 3. Contrary to the correlative results, PCASVR outperforms the other methods on two metrics, very heavy wet days and daily intensity index, with better than average scores on the other two metrics. BCSD also performs reasonably well in terms of skill scores, while BCSD-MSSL suffers from the added complexity. MSSL estimates the number of consecutive wet days well but is less skilled on the other metrics. The very complex CNN model has little ability to recover such distributions. Figure 6 displays a combination of correlative power and magnitude estimates of the daily intensity index. The SDII metric is computed from the total annual precipitation and the number of wet days. A low SDII metric corresponds to either a relatively large number of estimated wet days or low annual precipitation. We find that, on average, the methods underestimate this intensity. Based on Figure 5, we see that CNN severely underestimates annual precipitation, causing a low SDII. In contrast, PCASVR overestimates annual precipitation and intensity. The inconsistent results of PCASVR and CNN indicate that any gain from capturing non-linear relationships is outweighed by overfitting. However, BCSD and the linear methods are more consistent across the metrics. \section{Discussion and Conclusion} The ability of statistical downscaling methods to produce credible results is necessary for a multitude of applications. Despite numerous studies experimenting with a wide range of models for statistical downscaling, no model has clearly outperformed the others.
In our study, we experiment with the off-the-shelf applicability of machine learning advances to statistical downscaling in comparison to traditional approaches. Multi-task Sparse Structure Learning, an approach that exploits similarity between tasks, was expected to increase accuracy beyond automated statistical downscaling approaches. We find that MSSL does not provide improvements beyond ELNET, an ASD approach. Furthermore, the parameter set estimated through cross-validation attributed no structure aiding prediction. The recent popularity of deep learning, along with its ability to capture spatial information, namely through Convolutional Neural Networks, motivated us to experiment with basic architectures for statistical downscaling. CNNs benefit greatly by implicitly learning abstract non-linear spatial features based on the target variable. This approach produced poor downscaled estimates relative to simpler methods. We hypothesize that implicitly learning abstract features rather than preserving the granular feature space caused the poor performance. More experimentation with CNNs in a different architecture may still provide valuable results. BCSD, a popular approach to statistical downscaling, outperformed the more complex models in estimating underlying statistical distributions and climate extremes. In many cases, correcting BCSD's error with MSSL increased daily correlative performance but decreased the skill of estimating the distribution. From this result, we can conclude that a signal aiding in prediction was lost during quantile mapping, interpolation, or spatial scaling. Future work may study and improve each step independently to increase overall performance. Of the seven statistical downscaling approaches studied, the traditional BCSD and ASD methods outperformed the non-linear methods, namely the Convolutional Neural Network and Support Vector regression, when downscaling daily precipitation.
We find that BCSD is skilled at estimating the statistical distribution of daily precipitation, generating better estimates of extreme events. Our expectation that CNN and MSSL, two recent machine learning advances we found most applicable to statistical downscaling, would outperform basic models proved false. Improvements and customization of machine learning methods are needed to provide more credible projections. \section*{Acknowledgments} This work was funded by NSF CISE Expeditions in Computing award 1029711, NSF CyberSEES award 1442728, and NSF BIGDATA award 1447587. MERRA-2 climate reanalysis datasets used were provided by the Global Modeling and Assimilation Office at NASA's Goddard Space Flight Center. The CPC Unified Gauge-Based Analysis was provided by the NOAA Climate Prediction Center. \end{document}
\begin{document} \title{ Restrained double Roman domination of a graph} \author{{\small Doost Ali Mojdeh$^{a}$\thanks{Corresponding author}, Iman Masoumi$^{b}$, Lutz Volkmann$^{c}$}\\ \\ {\small $^{a}$Department of Mathematics, Faculty of Mathematical Sciences}\\{\small University of Mazandaran, Babolsar, Iran}\\{\small email: [email protected]}\\ \\ {\small $^{b}$Department of Mathematics, University of Tafresh}\\{\small Tafresh, Iran}\\ {\small email: i\[email protected]}\\ \\ {\small $^{c}$Lehrstuhl II f\"{u}r Mathematik, RWTH Aachen University}\\ {\small 52056 Aachen, Germany}\\ {\small email: [email protected]} } \date{} \maketitle \begin{abstract} For a graph $G=(V,E)$, a restrained double Roman dominating function is a function $f:V\rightarrow\{0,1,2,3\}$ with the property that if $f(v)=0$, then the vertex $v$ must have at least two neighbors assigned $2$ under $f$ or one neighbor $w$ with $f(w)=3$; if $f(v)=1$, then the vertex $v$ must have at least one neighbor $w$ with $f(w)\geq2$; and, in addition, the subgraph $G[V_0]$ induced by the vertices with label $0$ has no isolated vertex. The weight of a restrained double Roman dominating function $f$ is the sum $f(V)=\sum_{v\in V}f(v)$, and the minimum weight of a restrained double Roman dominating function on $G$ is the restrained double Roman domination number of $G$. We initiate the study of restrained double Roman domination by proving that the problem of computing this parameter is $NP$-hard. Then we present an upper bound on the restrained double Roman domination number of a connected graph $G$ in terms of the order of $G$ and characterize the graphs attaining this bound. We study restrained double Roman domination versus restrained Roman domination. Finally, we characterize all trees $T$ attaining the exhibited bound.
\end{abstract} \textbf{2010 Mathematics Subject Classification:} 05C69 \textbf{Keywords}: Domination, restrained Roman domination, restrained double Roman domination. \section{Introduction} Throughout this paper, we consider $G$ as a finite simple graph with vertex set $V=V(G)$ and edge set $E=E(G)$. We use \cite{west} as a reference for terminology and notation which are not explicitly defined here. The open neighborhood of a vertex $v$ is denoted by $N(v)$, and its closed neighborhood is $N[v]=N(v)\cup \{v\}$. The minimum and maximum degrees of $G$ are denoted by $\delta(G)$ and $\Delta(G)$, respectively. Given subsets $A,B \subseteq V(G)$, by $[A,B]$ we mean the set of all edges with one endpoint in $A$ and the other in $B$. For a given subset $S\subseteq V(G)$, by $G[S]$ we denote the subgraph induced by $S$ in $G$. A tree $T$ is a double star if it contains exactly two vertices that are not leaves. A double star with $p$ and $q$ leaves attached to its two support vertices, respectively, is denoted by $S_{p,q}$. A wounded spider is a tree obtained by subdividing at most $n-1$ edges of a star $K_{1,n}$. A wounded spider obtained by subdividing $t \le n-1$ edges of $K_{1,n}$ is denoted by $ws(1,n,t)$.\\ A set $S\subseteq V(G)$ is called a dominating set if every vertex not in $S$ has a neighbor in $S$. The domination number $\gamma(G)$ of $G$ is the minimum cardinality among all dominating sets of $G$. A restrained dominating set ($RD$ set) in a graph $G$ is a dominating set $S$ in $G$ for which every vertex in $V(G)-S$ is adjacent to another vertex in $V(G)-S$. The restrained domination number ($RD$ number) of $G$, denoted by $\gamma_r(G)$, is the smallest cardinality of an $RD$ set of $G$. This concept was formally introduced in \cite{domke} (albeit it was indirectly introduced in \cite{hattingh, haynes}). Several variants of restrained domination have already been studied.
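To illustrate the definition above, the restrained domination condition can be checked by exhaustive search on small graphs. This is an illustrative sketch only; the function names and the adjacency-dictionary representation are our own, not taken from the literature.

```python
from itertools import combinations

def is_restrained_dominating(adj, S):
    """Check whether S is a restrained dominating set.

    adj: dict mapping each vertex to the set of its neighbors.
    S:   candidate set (any iterable) of vertices.
    """
    S = set(S)
    out = set(adj) - S
    for v in out:
        # Dominating condition: v has a neighbor inside S.
        if not (adj[v] & S):
            return False
        # Restrained condition: v also has a neighbor outside S.
        if not (adj[v] & out):
            return False
    return True

def restrained_domination_number(adj):
    """gamma_r(G) by brute force; feasible only for small graphs."""
    V = list(adj)
    for k in range(1, len(V) + 1):
        for S in combinations(V, k):
            if is_restrained_dominating(adj, S):
                return k
    return len(V)  # V itself is always restrained dominating

def path_graph(n):
    """Adjacency of the path P_n on vertices 0, ..., n-1."""
    return {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}
```

For instance, in $P_4$ the two leaves form a restrained dominating set, since both internal vertices are dominated and adjacent to each other outside the set; a leaf alone never lies outside a restrained dominating set, because its unique neighbor cannot be both inside and outside the set.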
For instance, a total restrained dominating set of a graph $G$ is an $RD$ set of $G$ for which the subgraph induced by the dominating set has no isolated vertex; see \cite{chen}. A secure restrained dominating set ($SRDS$) is a set $S \subseteq V(G)$ such that $S$ is a restrained dominating set and for every $u \in V\setminus S$ there exists $v \in S\cap N(u)$ such that $(S\setminus \{v\})\cup \{u\}$ is a restrained dominating set \cite{roushini}. A restrained Roman dominating function is a Roman dominating function $f: V(G) \to \{0,1,2\}$ such that the subgraph induced by the set $\{v\in V(G): f(v)=0\}$ has no isolated vertex \cite{roushini1}. A restrained Italian dominating function ($RIDF$) is an Italian dominating function $f: V(G) \to \{0,1,2\}$ such that the subgraph induced by the set $\{v\in V(G): f(v)=0\}$ has no isolated vertex \cite{samadi}. These results motivate us to consider a double Roman dominating function $f$ for which the subgraph induced by $V_0^f$ has no isolated vertex; this new parameter, called restrained double Roman domination, is the concept investigated in this paper. Beeler \emph{et al}. (2016) \cite{bhh} introduced the concept of double Roman domination of a graph.\\ If $ f:V(G)\rightarrow \{0,1,2,3\}$ is a function, then let $(V_0,V_1,V_2,V_3)$ be the ordered partition of $V(G)$ induced by $f$, where $V_i=\{v\in V(G):f(v)=i\}$ for $i=0,1,2,3$. There is a 1-1 correspondence between the function $f$ and the ordered partition $(V_0,V_1,V_2,V_3)$, so we write $f=(V_0,V_1,V_2,V_3)$. A double Roman dominating function (DRD function for short) of a graph $G$ is a function $ f:V(G)\rightarrow \{0,1,2,3\}$ for which the following conditions are satisfied. \begin{itemize} \item[(a)] If $f(v)=0$, then the vertex $v$ must have at least two neighbors in $V_2$ or one neighbor in $V_3$. \item[(b)] If $f(v)=1$, then the vertex $v$ must have at least one neighbor in $V_2\cup V_3$.
\end{itemize} This parameter was also studied in \cite{al}, \cite{jr}, \cite{mojdeh} and \cite{zljs}. Accordingly, a restrained double Roman dominating function ($RDRD$ function for short) is a double Roman dominating function $f:V\rightarrow\{0,1,2,3\}$ with the property that the subgraph $G[V_0]$ induced by the vertices with label $0$ has no isolated vertex. The restrained double Roman domination number ($RDRD$ number) $\gamma_{rdR}(G)$ is the minimum weight of an $RDRD$ function $f$ of $G$. For the sake of convenience, an $RDRD$ function $f$ of a graph $G$ with weight $\gamma_{rdR}(G)$ is called a $\gamma_{rdR}(G)$-function. This paper is organized as follows. We prove that the restrained double Roman domination problem is $NP$-hard even for general graphs. Then we present an upper bound on the restrained double Roman domination number of a connected graph $G$ in terms of the order of $G$ and characterize the graphs attaining this bound. We study restrained double Roman domination versus restrained Roman domination. Finally, we characterize the trees $T$ attaining the exhibited bound. \section{Complexity and computational issues} We consider the problem of deciding whether a graph $G$ has an $RDRD$ function of weight at most a given integer, stated in the decision problem below. We shall prove its $NP$-completeness by a reduction from the following vertex cover decision problem, which is known to be $NP$-complete.\\ \framebox{ \parbox{1\linewidth}{ VERTEX COVER DECISION PROBLEM INSTANCE: A graph $G = (V,E)$ and a positive integer $p \le |V (G)|$. QUESTION: Does there exist a subset $C \subseteq V (G)$ of size at most $p$ such that for each edge $xy \in E(G)$ we have $x \in C$ or $y \in C$?}} \begin{theorem} \emph{(Karp \cite{karp})}\label{the-karp} The vertex cover decision problem is $NP$-complete for general graphs.
\end{theorem} \framebox{ \parbox{1\linewidth}{ RESTRAINED DOUBLE ROMAN DOMINATION problem ($RDRD$ problem)\\ INSTANCE: A graph $G$ and an integer $p\leq |V(G)|$.\\ QUESTION: Is there an $RDRD$ function $f$ for $G$ of weight at most $p$?}}\\ \begin{theorem}\label{the-NP} The restrained double Roman domination problem is $NP$-complete for general graphs. \end{theorem} \begin{proof} We transform the vertex cover decision problem for general graphs to the restrained double Roman domination decision problem for general graphs. For a given graph $G = (V(G), E(G))$, let $ m= 3|V (G)| + 4$ and construct a graph $H = (V(H),E(H))$ as follows. Let $V(H) = \{x_i : 1 \le i \le m\} \cup \{y\} \cup V (G) \cup \{u_{j_i} : 1 \le i \le m\ \mbox{for each}\ e_j \in E(G)\}$, and let $$E(H)=\{x_ix_{i+1}: 1\le i\le m\ (\mbox{indices mod}\ m)\}$$ $$\ \ \ \cup \{x_iy: 1\le i\le m\} \cup \{vy: v\in V(G)\}$$ $$\ \ \ \cup \{vu_{j_i}: v\ \mbox{is an end vertex of the edge}\ e_j\in E(G)\ \mbox{and}\ 1\le i \le m \}$$ $$\ \ \ \cup \{u_{j_i}u_{j_{(i+1)}}: 1 \le i \le m\ (\mbox{indices mod}\ m)\}.$$ Figure 1 shows the graph $H$ obtained from $G = P_4=a_1a_2a_3a_4$ by the above procedure. Note that, since $m= 3|V (G)| + 4=16$ for this example and $G$ \begin{figure} \caption{The graph $G=P_4$ and $H$.} \label{fig:g1-g2-g3} \end{figure} has three edges $e_1, e_2, e_3$, $$H[\{x_i: 1\le i\le 16\}] \cong H[\{u_{1_i}: 1\le i\le 16\}] \cong H[\{u_{2_i}: 1\le i\le 16\}]\cong H[\{u_{3_i}: 1\le i\le 16\}]\cong C_{16},$$ $y$ is adjacent to $x_i$ for $1\le i \le 16$ and to $a_l$ for $1\le l\le 4$; $u_{j_i}$ is adjacent to both $a_j$ and $a_{j+1}$ for $1\le j\le 3$ and $1\le i\le 16$. We claim that $G$ has a vertex cover of size at most $k$ if and only if $H$ has an RDRDF of weight at most $3k+3$. Hence the $NP$-completeness of the restrained double Roman domination problem for general graphs follows from the $NP$-completeness of the vertex cover problem.
First, if $G$ has a vertex cover $C$ of size at most $k$, then the function $f$ defined on $V(H)$ by $f(v) = 3$ for $v \in C \cup \{y\}$ and $f(v) = 0$ otherwise is an RDRDF of weight at most $3k + 3$. On the other hand, suppose that $g$ is an RDRDF on $H$ of weight at most $3k + 3$. If $g(y)\ne 3$, then there are two cases. Case 1. Let $g(y)\in \{0,1\}$. Then $$\sum_{i=1}^{m}g(x_i)\ge \gamma_{rdR}(C_m) \ge \gamma_{dR}(C_m)\ge m >3|V(G)|+3 \ge 3k+3,$$ which is a contradiction. Case 2. Let $g(y)=2$ and let $C_m$ be the cycle induced by $\{x_i: 1\le i\le m\}$. Then $g(C_m)\ge 2m/3$ and $g(H)\ge 2m/3 +2k+2 = 2(3|V (G)| + 4)/3 +2k+2\ge 4k+14/3> 3k+3$, which is a contradiction. Thus $g(y) = 3$. Similarly, we have $g(u) = 3$ or $g(v) = 3$ for any $e = uv \in E(G)$. Therefore $C = \{v \in V : g(v) = 3\}$ is a vertex cover of $G$ and $3|C| + 3 \le w(g) \le 3k + 3$. Consequently, $|C| \le k$. \end{proof} \section{$RDRD$ number of some graphs} In this section we investigate the exact value of the restrained double Roman domination number of some graphs. \begin{observation}\label{the-com-par} For the complete graph $K_n$ and the complete bipartite graph $K_{m,n}$,\\ \emph{(i)} $\gamma_{rdR}(K_n)=3$ for $n\ge 2$.\\ \emph{(ii)} $\gamma_{rdR}(K_{n,m})=6$ for $m,n\ge 2$.\\ \emph{(iii)} $\gamma_{rdR}(K_{1,m})=m+2.$\\ \emph{(iv)} $\gamma_{rdR}(K_{n_1,n_2,\cdots, n_m})=\left\{ \begin{array}{ll} 3, & \mbox{if}\ \emph{min}\{n_1 ,n_2,\cdots, n_m\}=1,\\ 6, & \hbox{otherwise.} \end{array} \right.$ \end{observation} \begin{theorem}\label{the-path} For a path $P_n$ $(n\geq 4)$, $\gamma_{rdR}(P_n)=n+2$.\\ \end{theorem} \begin{proof} Assume that $n\ge 4$ and $P_n=v_1v_2\cdots v_n$. Define $h:V(P_n) \to \{0,1,2,3\}$ by $h(v_{3i+2})=3$ for $0\le i \le n/3-1,\ h(v_{1})=h(v_n)=1$ and $h(v)=0$ otherwise, whenever $n \equiv 0 \,({\rm mod}\, 3)$.\\ Define $h:V(P_n) \to \{0,1,2,3\}$ by $h(v_{3i+1})=3$ for $0\le i \le (n-1)/3$ and $h(v)=0$ otherwise, whenever $n \equiv 1\, ({\rm mod}\, 3)$.
\\ Define $h:V(P_n) \to \{0,1,2,3\}$ by $h(v_{3i+2})=3$ for $0\le i \le (n-2)/3,\ h(v_{1})=1$ and $h(v)=0$ otherwise, whenever $n \equiv 2\, ({\rm mod}\, 3)$. Therefore $\gamma_{rdR}(P_n)\le n+2$ for $n\ge 4$. Now we prove the reverse inequality. It is straightforward to verify that $\gamma_{rdR}(P_n)=n+2$ for $4\le n\le 6$. For $n\ge 7$ we proceed by induction on $n$. Let $n\ge 7$ and assume the reverse inequality holds for every path of order less than $n$. Assume that $f = (V_0, V_1, V_2, V_3)$ is a $\gamma_{rdR}$-function of $P_n$. Note that $f(v_n)\ne 0$, since a vertex with label $0$ requires a neighbor with label $0$ as well as a neighbor with label $3$ or two neighbors with label $2$, while the leaf $v_n$ has only one neighbor. If $f(v_n)=1$, then $f(v_{n-1}) \ge 2$. Define $g: P_{n-1} \to \{0,1,2,3\}$ by $g(v_i)=f(v_i)$ for $1\le i\le n-1$. Clearly, $g$ is an RDRD-function of $P_{n-1}$. It follows from the induction hypothesis that $$\gamma_{rdR}(P_n)=w(f)=w(g)+1\ge \gamma_{rdR}(P_{n-1})+1\ge (n-1)+2 +1\ge n+2.$$ If $f(v_{n}) =2$, then $f(v_{n-1})= 1$ and $f(v_{n-2})\ge 1$. Define $g: P_{n-2} \to \{0,1,2,3\}$ by $g(v_i)=f(v_i)$ for $1\le i\le n-2$. Clearly, $g$ is an $RDRD$-function of $P_{n-2}$. As above we obtain $$\gamma_{rdR}(P_n)=w(f)=w(g)+3\ge \gamma_{rdR}(P_{n-2})+3\ge (n-2)+2 +3=n+3.$$ If $f(v_{n}) =3$, then $f(v_{n-1})= 0$, $f(v_{n-2})= 0$ and $f(v_{n-3})= 3$. Define $g: P_{n-3} \to \{0,1,2,3\}$ by $g(v_i)=f(v_i)$ for $1\le i\le n-3$. Clearly, $g$ is an $RDRD$-function of $P_{n-3}$. It also follows from the induction hypothesis that $$\gamma_{rdR}(P_n)=w(f)=w(g)+3\ge \gamma_{rdR}(P_{n-3})+3\ge (n-3)+2 +3=n+2.$$ Thus the proof is complete.\\ \end{proof} \begin{theorem}\label{the-cycle} For a cycle $C_n$ $(n\ge 3)$, $\gamma_{rdR}(C_n)=\left\{ \begin{array}{ll} n, & \mbox{if}\ n \equiv 0\ (\mbox{mod}\ 3), \\ n+2, & \hbox{otherwise.} \end{array} \right.$\\ \end{theorem} \begin{proof} Assume that $n\ge 3$ and $C_n=v_1v_2\cdots v_nv_1$.
Define $h:V(C_n) \to \{0,1,2,3\}$ by $h(v_{3i})=3$ for $1\le i \le n/3$ and $h(v)=0$ otherwise, whenever $n \equiv 0\, ({\rm mod}\, 3)$.\\ Define $h:V(C_n) \to \{0,1,2,3\}$ by $h(v_{3i+1})=3$ for $0\le i \le (n-1)/3$ and $h(v)=0$ otherwise, whenever $n \equiv 1\, ({\rm mod}\, 3)$. \\ Define $h:V(C_n) \to \{0,1,2,3\}$ by $h(v_{3i+2})=3$ for $0\le i \le (n-2)/3,\ h(v_{1})=1$ and $h(v)=0$ otherwise, whenever $n \equiv 2\, ({\rm mod}\, 3)$. Therefore $$\gamma_{rdR}(C_n)\le \left\{ \begin{array}{ll} n, & \mbox{if}\ n \equiv 0\ (\mbox{mod}\ 3), \\ n+2, & \hbox{otherwise.} \end{array} \right.$$ Now we prove the reverse inequality. For $n \equiv 0 \,({\rm mod}\, 3)$, since $\gamma_{rdR}(C_n)\ge \gamma_{dR}(C_n)=n$ (see \cite{al, bhh}), the result clearly holds. Let $n \not \equiv 0\ (\mbox{mod}\ 3)$ and let $f = (V_0, V_1, V_2, V_3)$ be a $\gamma_{rdR}$-function of $C_n$. Since every vertex of weight $0$ has a neighbor of weight $3$ and a neighbor of weight $0$, for $n \not \equiv 0\ (\mbox{mod}\ 3)$ there are two adjacent vertices $v_i, v_{i+1}$ in $C_n$ whose weights are positive. Now, if $f(v_i)\ge 2$ and $f(v_{i+1})\ge 2$, then by removing the edge $v_iv_{i+1}$ the resulting graph is $P_n$. Define $g: P_{n} \to \{0,1,2,3\}$ by $g(v_i)=f(v_i)$ for $1\le i\le n$. Clearly, $g$ is an RDRD-function of $P_{n}$ with $w(g)=w(f)$. Since $w(g)\ge n+2$, it follows that $w(f)\ge n+2$.\\ Let $f(v_i)\ge 2$ and $f(v_{i+1})= 1$. Then $f(v_{i+2})\ge 1$. Now remove the edge $v_{i+1}v_{i+2}$ to obtain a $P_n$. Define $g: P_{n} \to \{0,1,2,3\}$ by $g(v_i)=f(v_i)$ for $1\le i\le n$. Clearly, $g$ is an RDRD-function of $P_{n}$ with $w(g)=w(f)$. Thus $w(f)\ge n+2$. \\ Let $f(v_i)=f(v_{i+1})= 1$. As above, we remove the edge $v_iv_{i+1}$, and the resulting graph $P_n$ admits the RDRD-function $g=f$ of weight $w(f)$. That is, $w(f)\ge n+2$. Therefore the proof is complete.
\end{proof} \section{Upper bounds on the $RDRD$ number} In this section we obtain sharp upper bounds on the restrained double Roman domination number of a graph. \begin{proposition}\label{2n-1} Let $G$ be a connected graph of order $n\ge 2$. Then $\gamma_{rdR}(G) \le 2n-1$, with equality if and only if $n=2$. \end{proposition} \begin{proof} If $w$ is a vertex of $G$, then define the function $f$ by $f(w)=1$ and $f(x)=2$ for $x\in V(G)\setminus\{w\}$. Since $G$ is connected of order $n\ge 2$, we observe that $f$ is an RDRD function of $G$ of weight $2n-1$, and thus $\gamma_{rdR}(G) \le 2n-1$. If $n\ge 3$, then $G$ contains a vertex $w$ with at least two neighbors $u$ and $v$. Now define the function $g$ by $g(u)=g(v)=1$ and $g(x)=2$ for $x\in V(G)\setminus\{u,v\}$. Then $g$ is an RDRD function of $G$ of weight $2n-2$, and so $\gamma_{rdR}(G) \le 2n-2$ in this case. Since $\gamma_{rdR}(K_2)=3=2\cdot 2-1$, the proof is complete. \end{proof} \begin{proposition}\label{diam} Let $G$ be a connected graph of order $n\ge 2$. Then $\gamma_{rdR}(G) \le 2n+1 - diam(G)$, and this bound is sharp for the path $P_n$ ($n\ge 4$). \end{proposition} \begin{proof} By Theorem \ref{the-path}, $\gamma_{rdR}(P_n) \le n+2$. Let $P=v_1v_2\cdots v_{diam(G)+1}$ be a diametrical path in $G$ and let $g$ be a $\gamma_{rdR}$-function of $P$. Then $w(g)\le diam(G)+3$. Now we define an RDRD-function $f$ by\\ $$f(x)=\left\{ \begin{array}{ll} 2, & x \notin V(P),\\ g(x), & \hbox{otherwise.} \end{array} \right.$$ It is clear that $f$ is an RDRD-function of $G$ of weight $w(f) \le 2(n-(diam(G)+1)) + diam (G)+3$. Therefore $\gamma_{rdR}(G) \le 2n+1 - diam(G)$.\\ Theorem \ref{the-path} shows the sharpness of this bound. \end{proof} \begin{proposition} Let $G$ be a connected graph of order $n$ and circumference $c(G)<\infty$. Then $\gamma_{rdR}(G) \le 2n +2 - c(G)$, and this bound is sharp for each cycle $C_n$ with $3 \nmid n$.
\end{proposition} \begin{proof} Let $C$ be a longest cycle of $G$, that is, $|V(C)|=c(G)$. By Theorem \ref{the-cycle}, $\gamma_{rdR}(C) \le c(G)+2$. Let $h$ be a $\gamma_{rdR}$-function on $C$. Then $w(h)\le c(G)+2$. Now we define an RDRD-function $f$ by\\ $$f(x)=\left\{ \begin{array}{ll} 2, & x\notin V(C), \\ h(x), & \hbox{otherwise.} \end{array} \right.$$\\ It is clear that $f$ is an RDRD-function of $G$ of weight $w(f) \le 2(n-c(G)) + c(G)+2$. Therefore $\gamma_{rdR}(G) \le 2n+2 - c(G)$.\\ For sharpness, if $G=C_n$ and $3\nmid n$, then $\gamma_{rdR}(C_n)=n+2= 2n+2 - n=2n+2-c(G)$. \end{proof} \begin{observation}\label{1} Let $G$ be a graph and $f=(V_0,V_1,V_2)$ a $\gamma_{rR}$-function of $G$. Then $\gamma_{rdR}(G)\leq 2|V_1|+3|V_2|$. \end{observation} \begin{proof} Let $G$ be a graph and $f=(V_0,V_1,V_2)$ a $\gamma_{rR}$-function of $G$. We define a function $g=(V_0',V_1'=\emptyset,V_2',V_3')$ as follows: $V_0'=V_0$, $V_2'=V_1$, $V_3'=V_2$. Note that under $g$, every vertex with label $0$ has a neighbor assigned $3$, each vertex with label $1$ under $f$ becomes a vertex with label $2$, and $G[V_0']$ has no isolated vertex. Hence, $g$ is a restrained double Roman dominating function. Thus, $\gamma_{rdR}(G)\leq 2|V_2'|+3|V_3'|=2|V_1|+3|V_2|$. \end{proof} Clearly, the bound of Observation \ref{1} is sharp, as can be seen with the path $G=P_4$, where $\gamma_{rR}(G)=4$ and $\gamma_{rdR}(G)=6$. We also note that strict inequality in the bound can be achieved by the subdivided star $G=S(K_{1,k})$, which is formed by subdividing each edge of the star $K_{1,k}$, for $k\geq 3$, exactly once. Then it is simple to check that $\gamma_{rR}(G)=2k+1$ and $\gamma_{rdR}(G)=3k$. Hence, $|V_1|=1$ and $|V_2|=k$, and so $3k=\gamma_{rdR}(G)<2|V_1|+3|V_2|=2+3k.$ \begin{lemma}\label{lem1} If a graph $G$ has a non-pendant edge, then there is a $\gamma_{rdR}(G)$-function $f = (V_0, V_1, V_2, V_3)$ such that $V_0\cup V_1 \ne \emptyset$.
\end{lemma} \begin{proof} If $\gamma_{rdR}(G)<2n$, then obviously $V_0\cup V_1 \ne \emptyset$. So we show that $\gamma_{rdR}(G)<2n$. Let $uw$ be a non-pendant edge, so that $\deg(u)\ge 2$ and $\deg(w)\ge 2$.\\ Assume that $N_{G}(u)\cap N_{G}(w)\ne \emptyset$, and let $v$ be a vertex in $N_{G}(u)\cap N_{G}(w)$. Then the function $f=(V_0 =\{u,w\}, V_1 =\emptyset, V_2= V(G) \setminus \{u,w,v\}, V_3=\{v\})$ is an RDRD-function of $G$ with $w(f)\le 2n-3$.\\ Assume that $N_{G}(u)\cap N_{G}(w)= \emptyset$, and let $a\in N_G(u)\setminus \{w\}$ and $b\in N_G(w)\setminus \{u\}$. Then the function $f=(V_0= \{u,w\}, V_1=\emptyset, V_2= V(G) \setminus \{u,w,a,b\}, V_3=\{a,b\})$ is an RDRD-function of $G$ with $w(f)\le 2n-2$. This completes the proof. \end{proof} \begin{figure} \caption{The graphs $H_{10}$ and $F_{9}$.} \label{fig:h10-f9} \end{figure} For any even integer $n \ge 4$, let $H_n$ be the graph obtained from $(n-2)/2$ copies of $K_2$ and a copy of $K_1$ by adding a new vertex and joining it to both vertices of each $K_2$ and to the vertex of the given $K_1$, and for any odd integer $n\ge 3$, let $F_n$ be the graph obtained from $(n-1)/2$ copies of $K_2$ by adding a new vertex and joining it to both vertices of each $K_2$. Thus, for $n \ge 4$, $H_n$ has a vertex of degree $n-1$, a vertex of degree $1$ and all other vertices of degree two, and for $n \ge 3$, $F_n$ has a vertex of degree $n-1$ and all other vertices of degree two. Figure 2 shows the graphs $H_{10}$ and $F_9$. Let $\mathcal{H} = \{H_n :n \ge 4\ \mbox{is\ even}\}$ and $\mathcal{F} = \{F_n :n \ge 3\ \mbox{is\ odd}\}$. \begin{theorem}\label{the} For every connected graph $G$ of order $n \ge 3$ with $m$ edges, $\gamma_{rdR}(G) \ge 2n + 1- \lceil(4m-1)/3\rceil$, with equality if and only if $G \in \mathcal{H}\cup \mathcal{F}$ or $G\in\{K_{1,2},K_{1,3},K_{1,4}\}$. \end{theorem} \begin{proof} If $G=K_{1,n-1}$ is a star, then $\gamma_{rdR}(G)=n+1$ and $m=n-1$.
Now it is easy to see that $\gamma_{rdR}(K_{1,n-1})= 2n + 1- \lceil(4m-1)/3\rceil$ for $3\le n\le 5$ and $\gamma_{rdR}(K_{1,n-1})>2n + 1- \lceil(4m-1)/3\rceil$ for $n\ge 6$. Next assume that $G$ is not a star. By Lemma \ref{lem1} there is a $\gamma_{rdR}(G)$-function $f= (V_0, V_1, V_2, V_3)$ such that $V_0\cup V_1\ne \emptyset$. By definition, the induced subgraph $G[V_0]$ has no isolated vertex. Therefore, $|E(G[V_0])| \ge |V_0|/2$. Let $V'_0=\{v\in V_0: N(v) \cap V_3=\emptyset\}$ and $V''_0=\{v\in V_0: v\ \mbox{has a neighbor in}\ V_3\}$. Then $|E(V_0,V_2)| \ge 2|V'_0|$, $|E(V_0,V_3)| \ge |V''_0|$ and $|E(V_1,V_2\cup V_3)| \ge |V_1|$. Therefore $$|E(G)|=m\ge |V_0|/2+ 2|V'_0|+ |V''_0|+ |V_1|.$$ Since $|V_0|=|V'_0|+|V''_0|$, we deduce that \begin{equation}\label{EQ11} (4m-1)/3\ge 2|V_0|+ 4/3|V'_0|+ 4/3|V_1|-1/3 \end{equation} and thus \begin{equation}\label{EQ12} 2n+1 - \lceil(4m-1)/3\rceil \le 2n+1 - (4m-1)/3 \le 2n+1 -2|V_0|-4/3|V'_0|- 4/3|V_1|+1/3. \end{equation} Since $\gamma_{rdR}(G) = |V_1|+2|V_2|+3|V_3|$, $|V_0|+|V_1|+|V_2|+|V_3|=n$ and $2n+1=2|V_0|+2|V_1|+2|V_2|+2|V_3|+1$, we obtain \begin{eqnarray*} 2n+1 -2|V_0|-4/3|V'_0|- 4/3|V_1|+1/3 & = & -4/3|V'_0|+ 2/3|V_1|+2|V_2|+2|V_3|+4/3\\ & = & \gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3. \end{eqnarray*} Next we will show that \begin{equation}\label{EQ13} \gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3\le \gamma_{rdR}(G), \end{equation} that is, $\gamma_{rdR}(G) \ge 2n + 1- \lceil(4m-1)/3\rceil$. If $|V'_0|\ge 1$, then $-4/3|V'_0|-1/3|V_1|-|V_3|+4/3\le 0$ and so $\gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3\le \gamma_{rdR}(G)$.\\ Let now $|V'_0|=0$. Note that the condition $V_0\cup V_1\ne \emptyset$ implies $V''_0 \cup V_1\ne \emptyset$.\\ Assume next that $V_1= \emptyset$. We deduce that $|V''_0|\ge 1$ and therefore $|V_3|\ge 1$.
If there are at least two vertices of weight $3$, then $\gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3<\gamma_{rdR}(G)$.\\ If there is only one vertex of weight $3$, then $m\ge n-1+\frac{n-1}{2}=\frac{3(n-1)}{2}$. We deduce that $\gamma_{rdR}(G)\ge 3 \ge 2n+1 -\left\lceil \frac{6(n-1)-1}{3}\right\rceil \ge 2n+1 -\left\lceil \frac{4m-1}{3}\right\rceil$, with equality if and only if $|V_2|=0$, $n$ is odd and $m=\frac{3(n-1)}{2}$, that is, $G\in{\cal F}$.\\ Now assume that $|V_1|\ge 1$. If $|V''_0|\ge 1$, then $|V_3|\ge 1$ and thus $\gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3\le \gamma_{rdR}(G)$. Next let $|V''_0|=0$. If $|V_3|\ge 1$, then $\gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3\le \gamma_{rdR}(G)$. Now assume that $|V_3|=0$. This implies that all vertices have weight $1$ or $2$. If $3\le n\le 5$, then it is easy to see that $\gamma_{rdR}(G)> 2n+1-\left\lceil\frac{4m-1}{3}\right\rceil$. Let now $n\ge 6$. If $|V_1|\ge 5$, then $\gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3<\gamma_{rdR}(G)$. Otherwise $|V_1|\le 4$, $|V_2|\ge n-4$ and $m\ge n-1$. This implies $$\gamma_{rdR}(G)\ge 2(n-4)+4=2n-4>2n+1-\left\lceil\frac{4(n-1)-1}{3}\right\rceil\ge 2n+1-\left\lceil\frac{4m-1}{3}\right\rceil.$$ Thus $\gamma_{rdR}(G) \ge 2n+1 -(4m-1)/3\ge 2n+1 -\lceil(4m-1)/3\rceil$.\\ For equality: If $G\in \mathcal{H}$, then $G=H_n$ for $n\ge 4$ even and $|E(H_n)|=3(n-2)/2+1$. Thus $2n+1- \lceil (4(3(n-2)/2+1)-1)/3 \rceil = 2n+1 -2(n-2)-1=4= \gamma_{rdR}(H_n)$. If $G\in \mathcal{F}$, then $G=F_n$ for $n\ge 3$ odd and $|E(F_n)|=3(n-1)/2$. Thus $2n+1- \lceil(4\cdot 3(n-1)/2-1)/3\rceil = 2n+1- \lceil(6(n-1)-1)/3\rceil = 2n+1 -2(n-1)=3= \gamma_{rdR}(F_n)$. Conversely, assume that $\gamma_{rdR}(G)=2n+1-\lceil(4m-1)/3\rceil$. Then all inequalities occurring in the proof become equalities. In the case $|V_1|=0$, we have seen above that equality holds if and only if $G\in{\cal F}$. In the case $|V_1|\ge 1$, we have seen above that $|V_3|\ge 1$.
Therefore equality in Inequality (\ref{EQ13}) leads to $|V_3|=|V_1|=1$ and $|V'_0|=0$. Hence $V_0=V''_0$. Thus equality in Inequality (\ref{EQ11}), or equivalently in the inequality $m\ge |V_0|/2+ 2|V'_0|+ |V''_0|+ |V_1|$, leads to $m=3/2|V''_0|+1$. Now let the vertices $v$ and $u$ be of weight $3$ and $1$, respectively. Then $m=|E(G)| \ge |E(v, V''_0)| + |E(G[V''_0])| +1 \ge |V''_0|+ 1/2|V''_0|+1=3/2|V''_0|+1$. If $|V_2|\ne 0$, then the connectivity of $G$ leads to the contradiction $m\ge 3/2|V''_0|+2$. Consequently, $|V_2|=0$, $|V_0|=(2m-2)/3$, and $u$ and $v$ are adjacent. Since $G$ is connected, $G\in \mathcal{H}$. \\ \end{proof} \section{$RDRD$-set versus $RRD$-set} One aim of studying these parameters is to see the relation between them and to compare them with each other. \begin{proposition}\label{cor1} For any graph $G$, $\gamma_{rdR}(G)\leq 2\gamma_{rR}(G)$, with equality if and only if $G=\overline{K_n}$. \end{proposition} \begin{proof} Let $f=(V_0,V_1,V_2)$ be a $\gamma_{rR}$-function of $G$. Since $\gamma_{rR}(G)=|V_1|+2|V_2|$, by Observation \ref{1} we have $\gamma_{rdR}(G)\leq 2|V_1|+3|V_2|=\gamma_{rR}(G)+|V_1|+|V_2|\leq 2\gamma_{rR}(G)$. If $\gamma_{rdR}(G)=2\gamma_{rR}(G)=2|V_1|+4|V_2|$, then since $\gamma_{rdR}(G)\leq 2|V_1|+3|V_2|$, we must have $V_2=\emptyset$. Hence, $V_0=\emptyset$ must hold, and so $V=V_1$. By the definition of a $\gamma_{rR}$-function, we deduce that no two vertices in $G$ are adjacent, for otherwise, if $u$ and $v$ are adjacent, then only one of them in every $\gamma_{rdR}$-function on $G$ has label $2$, which contradicts $\gamma_{rdR}(G)=2\gamma_{rR}(G)$. \end{proof} The proof of Lemma \ref{lem1} yields the next proposition. \begin{proposition} If $G$ contains a triangle, then $\gamma_{rdR}(G) \le 2n-3$. \end{proposition} \begin{theorem}\label{the-111} For every graph $G$, $\gamma_{rR}(G) < \gamma_{rdR}(G)$. \end{theorem} \begin{proof} Let $f=(V_0,V_1,V_2,V_3)$ be a $\gamma_{rdR}(G)$-function.
If $V_3 \ne \emptyset$, then $g=(V'_0=V_0, V'_1=V_1 , V'_2=V_2\cup V_3)$ is an RRD-function with $w(g)< w(f)$. Let $V_3=\emptyset$. If $V_0=\emptyset$, then, since $V_2 \ne \emptyset$, $g=(\emptyset, V'_1=V_1\cup V_2, \emptyset)$ is an RRD-function with $w(g)< w(f)$. If $V_0 \ne \emptyset$, then $|V_2|\ge 2$. Let $f(v)=2$ for a vertex $v$. Then $g=(V'_0=V_0, V'_1=V_1\cup \{v\}, V'_2=V_2-\{v\})$ is an RRD-function with $w(g)< w(f)$. Therefore $\gamma_{rR}(G) < \gamma_{rdR}(G)$. \end{proof} As an immediate consequence of Proposition \ref{cor1}, we have the following. \begin{corollary} For any nontrivial connected graph $G$, $\gamma_{rdR}(G) < 2\gamma_{rR}(G)$. \end{corollary} \begin{theorem}\label{the-222} Let $G$ be a graph of order $n$. Then $\gamma_{rdR}(G)=\gamma_{rR}(G)+1$ if and only if $G$ is one of the following graphs.\\ \emph{1}. $G$ has a vertex of degree $n-1$.\\ \emph{2}. There exists a subset $S$ of $V(G)$ such that:\\ \emph{2.1}. every vertex of $V-S$ is adjacent to a vertex in $S$,\\ \emph{2.2}. there are two subsets $A_0$ and $A_1$ of $V-S$ with $A_0\cup A_1=V-S$ such that $A_0$ is the set of non-isolated vertices in $N(S)$ and each vertex in $A_0$ has at least two neighbors in $S$,\\ \emph{2.3}. for any $2$-subset $\{a,b\}$ of $S$, $N(\{a,b\})\cap A_0 \ne \emptyset$, and for a $3$-subset $\{x,y,z\}$ of $S$, if $N(\{x,y,z\}) \cap A_0 \ne \emptyset$, then there are three vertices $u,v,w$ in $A_0$ such that $N(u)\cap S=\{x,y\}$, $N(v)\cap S=\{x,z\}$ and $N(w)\cap S=\{y,z\}$. \end{theorem} \begin{proof} Let $\gamma_{rdR}(G)=\gamma_{rR}(G)+1$ with a $\gamma_{rdR}(G)$-function $f=(V_0,V_1,V_2,V_3)$ and a $\gamma_{rR}(G)$-function $g=(U_0,U_1,U_2)$. If $V_3\ne \emptyset$, then $|V_3|=1$, because if $|V_3|\ge 2$, then by changing each label $3$ to $2$ we obtain an RRD-function $h$ with $w(h)< w(g)$, a contradiction. Let $V_3=\{v\}$. In addition, we note that $|V_2|=0$. If we suppose that $|V_2|\ge 1$, then let $u\in V_2$.
Then $h=(V'_0=V_0, V'_1=V_1\cup \{u\}, V'_2=(V_2-\{u\})\cup \{v\})$ is an RRD-function for which $w(h)< w(g)$, a contradiction. Thus every vertex different from $v$ is adjacent to $v$, where the non-isolated vertices of $N(v)$ are assigned $0$ and the isolated vertices of $N(v)$ are assigned $1$. In this case $U_0=V_0$, $U_1=V_1$ and $U_2=V_3$.\\ If $V_3= \emptyset$, then $V_2\ne \emptyset$ and $|V_2| \ge 2$. In this case, there must exist a vertex $v\in V_2$ such that $U_0= V_0$, $U_1=V_1\cup \{v\}$ and $U_2=V_2-\{v\}$. Such a function $f$ exists if we can guarantee a subset $S$ of $V(G)$, each vertex of which has weight $2$, such that every vertex in $V-S$ is adjacent to a vertex of $S$; that is, condition 2.1 holds.\\ Since we can change only one vertex of weight $2$ in $f$ to a vertex of weight $1$ in $g$, there must exist two subsets $A_0$ and $A_1$ of $V-S$ such that conditions 2.2 and 2.3 hold.\\ Conversely, if condition 1 holds, then $f=(V_0, V_1, \emptyset, V_3=\{v\})$ and $g=(U_0=V_0, U_1=V_1, U_2=\{v\})$ are a $\gamma_{rdR}(G)$-function and a $\gamma_{rR}(G)$-function, respectively, where $V_0$ is the set of non-isolated vertices in $N(v)$ and $V_1$ is the set of isolated vertices in $N(v)$. Thus $\gamma_{rdR}(G)=\gamma_{rR}(G)+1$.\\ If condition 2 holds, then exactly one vertex of weight $2$ under $f$ can be changed to weight $1$ under $g$. Thus $\gamma_{rdR}(G)=\gamma_{rR}(G)+1$. \end{proof} We showed that for any graph $G$, $\gamma_{rdR}(G)\le 2\gamma_{rR}(G)$, and that equality holds if and only if $G=\overline{K_n}$. Hence, for any nontrivial graph $G$, $\gamma_{rdR}(G)\le 2\gamma_{rR}(G)-1$. Now we characterize the graphs $G$ with $\gamma_{rdR}(G)= 2\gamma_{rR}(G)-1$. \begin{theorem} If $G$ is a nontrivial graph, then $\gamma_{rdR}(G)\le 2\gamma_{rR}(G)-1$.
If $\gamma_{rdR}(G)=2\gamma_{rR}(G)-1$, then $G$ consists of a $K_2$ and $n-2$ isolated vertices, or $G$ consists of a vertex $h$ and two disjoint vertex sets $H$ and $R$ such that $H=N(h)$, $G[H]$ has no isolated vertices, $G[R]$ is trivial, there is no edge between $h$ and $R$, and $N(h)\cap N(R)\neq N(h)$. \end{theorem} \begin{proof} Since $G$ is a nontrivial graph, Proposition \ref{cor1} implies $\gamma_{rdR}(G)\le 2\gamma_{rR}(G)-1$. Now we investigate the equality.\\ Let $\gamma_{rdR}(G)= 2\gamma_{rR}(G)-1$, where $f=(V_0,V_1,V_2,V_3)$ is a $\gamma_{rdR}(G)$-function and $g=(U_0,U_1,U_2)$ is a $\gamma_{rR}(G)$-function. Then $2|U_1|+4|U_2|-1=|V_1|+2|V_2|+3|V_3|$. On the other hand, since $2|U_1|+4|U_2|-1=|V_1|+2|V_2|+3|V_3|=\gamma_{rdR}(G) \le 2|U_1|+3|U_2|$, it follows that $|U_2|\le 1$. If $U_2=\emptyset$, then $|U_0|=0$ and therefore $|U_1|=n$. Using the inequality above, we obtain $$2n-1=2|U_1|-1\le\gamma_{rdR}(G)\le 2|U_1|=2n.$$ If $\gamma_{rdR}(G)=2n$, then $G$ is trivial, a contradiction. If $\gamma_{rdR}(G)=2n-1$, then Proposition \ref{2n-1} shows that $G$ consists of a $K_2$ and $n-2$ isolated vertices. Now let $|U_2|= 1$, say $U_2=\{h\}$, $H=N(h)$ and $R=V(G)\setminus N[h]=\{u_1,u_2,\ldots,u_p\}$. Clearly, $U_0\subseteq H$ and $R\subseteq U_1$. If $H$ contains exactly $s\ge 1$ isolated vertices, then $\gamma_{rR}(G)=2+s+p$ and therefore $\gamma_{rdR}(G)\le 3+s+2p\le 2\gamma_{rR}(G)-2$, a contradiction. Hence $H=N(h)$ does not contain isolated vertices and thus $\gamma_{rR}(G)=p+2$. If $G[R]$ contains an edge, then we obtain the contradiction $\gamma_{rdR}(G)\le 3+2p-1=2p+2\le 2\gamma_{rR}(G)-2$. Thus $G[R]$ is trivial. If there is an edge between $h$ and $R$, then we also obtain the contradiction $\gamma_{rdR}(G)\le 3+2p-1=2p+2\le 2\gamma_{rR}(G)-2$. If $N(h)\cap N(R)=N(h)$, then $f=(H,\emptyset,\{h\}\cup R,\emptyset)$ is an RDRD function of $G$, and hence $\gamma_{rdR}(G)\le 2p+2\le 2\gamma_{rR}(G)-2$, a contradiction.
\end{proof} \section{Trees} In this section we study the restrained double Roman domination of trees.\\ \begin{theorem}\label{the-tree1} If $T$ is a tree of order $n\geq2$, then $\gamma_{rdR}(T)\leq \lceil\frac{3n-1}{2}\rceil$. The equality holds if $T\in\{P_2,P_3,P_4,P_5, S_{1,2}, ws(1,n, n-1), ws(1,n, n-2)\}$. \end{theorem} \begin{proof} Let $T$ be a tree of order $n\geq2$. We will proceed by induction on $n$. If $n=2$, then $\gamma_{rdR}(T)=3=\lceil\frac{3n-1}{2}\rceil$. If $n\geq3$, then $diam(T)\geq2$. If $diam(T)=2$, then $T$ is the star $K_{1,n-1}$ for $n\geq3$ and $\gamma_{rdR}(T)=n+1\leq \lceil\frac{3n-1}{2}\rceil$. If $diam(T)=3$, then $T$ is a double star $S_{r,s}$ for $1\leq r\leq s$. Hence, $n=r+s+2\geq4$. If $r=1=s$, then $T=P_4$ and $\gamma_{rdR}(T)=6\leq\lceil\frac{12-1}{2}\rceil$. If $r=1, s\ge 2$, then $n=s+3$ and $\gamma_{rdR}(T)=s+5\leq\lceil\frac{3(s+3)-1}{2}\rceil$. If $r\ge 2, s\ge 2$, then $n=r+s+2$ and $\gamma_{rdR}(T)=r+s+4\leq\lceil\frac{3(r+s+2)-1}{2}\rceil$.\\ Hence, we may assume that $diam(T)\geq4$. This implies that $n\geq5$. Assume that any tree $T'$ with order $2\leq n'<n$ has $\gamma_{rdR}(T')\leq \lceil\dfrac{3n'-1}{2}\rceil$. Among all longest paths in $T$, choose $P$ to be one that maximizes the degree of its next-to-last vertex $v$, and let $w$ be a leaf neighbor of $v$. Note that by our choice of $v$, every child of $v$ is a leaf. Since $deg(v)\geq2$, the vertex $v$ has at least one leaf as a child. Now we put $T'=T-T_v$ where the order of the substar $T_v$ is $k+1$ with $k\geq1$. Note that since $diam(T)\geq4$, $T'$ has at least three vertices, that is, $n'\geq3$. Let $f'$ be a $\gamma_{rdR}$-function of $T'$. Form $f$ from $f'$ by letting $f(x)=f'(x)$ for all $x\in V(T')$, $f(v)=2$, and $f(z)=1$ for all leaf neighbors of $v$. 
Thus $f$ is a restrained double Roman dominating function of $T$, implying that $\gamma_{rdR}(T)\leq \gamma_{rdR}(T')+k+2 \le \lceil\dfrac{3(n-k-1)-1}{2}\rceil+k+2=\lceil\dfrac{3n-k}{2}\rceil \leq \lceil\dfrac{3n-1}{2}\rceil$.\\ If $T\in \{P_2,P_3,P_4,P_5, S_{1,2}, ws(1,n, n-1), ws(1,n, n-2)\}$, then clearly $\gamma_{rdR}(T)=\lceil\dfrac{3n-1}{2}\rceil$. \end{proof} \begin{theorem}\label{the-tree2} For every tree $T$ of order $n\geq 3$ with $l$ leaves and $s$ support vertices, we have $\gamma_{rdR}(T)\leq\dfrac{4n+2s-l}{3}$. This bound is sharp for the stars $K_{1,n-1}$ ($n\geq 3$); the double stars; the caterpillars in which each vertex is a leaf or a support vertex and either all support vertices have even degree $2m$, or at most two end support vertices have degree $2m-1$ and the remaining support vertices have degree $2m$; and the wounded spiders in which the central vertex is adjacent to at least two leaves. \end{theorem} \begin{proof} Let $T$ be a tree of order $n\geq3$. Since $n\geq3$, $diam(T)\geq2$. If $diam(T)=2$, then $T$ is the star $K_{1,n-1}$ for $n\geq3$ and $\gamma_{rdR}(T)=n+1\leq\dfrac{4n+2-(n-1)}{3}=\dfrac{3n+3}{3}=n+1$. If $diam(T)=3$, then $T$ is a double star $S_{r,t}$ for $1\leq r\leq t$. We have $\gamma_{rdR}(T)=n+2=\dfrac{4n+2s-l}{3}$. Hence, we may assume $diam(T)\geq4$. Thus, $n\geq5$. Assume that any tree $T'$ with order $3\leq n'<n$, $l'$ leaves and $s'$ support vertices has $\gamma_{rdR}(T')\leq\dfrac{4n'+2s'-l'}{3}$. Among all longest paths in $T$, choose $P$ to be one that maximizes the degree of its next-to-last vertex $u$; let $x$ be a leaf neighbor of $u$, let $v$ be the parent of $u$, and let $w$ be the parent of $v$. Note that by our choice of $u$, every child of $u$ is a leaf. Since $t=deg(u)\geq2$, the vertex $u$ has at least one leaf child. We now consider the following two cases.\\ \textbf{Case 1}. $deg(v)\geq3$. In this case, we put $T'=T-T_u$, where the order of the star $T_u$ is $t$ with $t\geq2$.
Note that since $diam(T)\geq4$, $T'$ has at least three vertices, that is, $n'\geq3$. Let $f'$ be a $\gamma_{rdR}$-function of $T'$. Thus we have $n'=n-t$, $l'=l-(t-1)$ and $s'=s-1$. Clearly, $\gamma_{rdR}(T)\leq \gamma_{rdR}(T')+t+1\leq\dfrac{4(n-t)+2(s-1)-(l-(t-1))}{3}+t+1=\dfrac{4n+2s-l}{3}$.\\ \textbf{Case 2}. $deg(v)=2$. We now consider the following two subcases.\\ \textbf{i}. $deg(w)>2$. Then we put $T'=T-T_v$, where the order of the subtree $T_v$ is $t+1$. Clearly, we have $n'=n-(t+1)$, $s'=s-1$ and $l'=l-(t-1)$. Thus, $\gamma_{rdR}(T)\leq \gamma_{rdR}(T')+t+2\leq \dfrac{4(n-t-1)+2(s-1)-(l-(t-1))}{3}+t+2=\dfrac{4n+2s-l-1}{3}\leq \dfrac{4n+2s-l}{3}$.\\ \textbf{ii}. $deg(w)=2$. Then we put $T'=T-T_v$, where the order of the subtree $T_v$ is $t+1$. In this case $w$ becomes a leaf of $T'$, and we have $n'=n-(t+1)$, $s'\le s$ and $l'= l-(t-1)+1$. Thus, $\gamma_{rdR}(T)\leq \gamma_{rdR}(T')+t+2\leq \dfrac{4(n-t-1)+2(s)-(l-(t-1)+1)}{3}+t+2=\dfrac{4n+2s-l}{3}$. \end{proof} \begin{theorem}\label{the-tree3} If $T$ is a tree, then $\gamma_r(T)+1\le \gamma_{rdR}(T)\le 3\gamma_r(T)$, and equality in the lower bound holds if and only if $T$ is a star. The upper bound is sharp for the paths $P_{m}$ ($m\equiv 1\ \mbox{mod}\ 3$), the cycles $C_n$ ($n\equiv 0,\,1\ \mbox{mod}\ 3$), the complete graphs $K_n$, the complete bipartite graphs $K_{n,m}\ (m,n \ge 2)$ and the multipartite graphs $K_{n_1,n_2,\cdots, n_m},\ (m\ge 3)$. \end{theorem} \begin{proof} Let $T$ be a tree. Under any $RDRD$ function of $T$ at least one vertex has value at least $2$, and the set of vertices with positive values is a restrained dominating set; hence $\gamma_r(T)+1 \le \gamma_{rdR}(T)$. If we assign the value $3$ to the vertices in a $\gamma_r(T)$-set, then we obtain an RDRD function of $T$.
Therefore $\gamma_{rdR}(T)\le 3\gamma_r(T)$.\\ The sharpness of the upper bound is deduced from Propositions 1-7 of \cite{domke}, Observation \ref{the-com-par}, Theorem \ref{the-path} and Theorem \ref{the-cycle}.\\ For equality in the lower bound, if $T=K_{1,n-1}$ is a star, then clearly $\gamma_{rdR}(T)=n+1$ and $\gamma_{r}(T)=n$. If $T$ is a tree with $\gamma_{rdR}(T)=\gamma_{r}(T)+1$, then under any $\gamma_{rdR}(T)$-function exactly one vertex has value $2$ and the other vertices of positive weight have value $1$. In addition, the vertices of value $1$ are adjacent to the vertex of value $2$, and therefore $T$ is a star. \end{proof} The following result bounds the RDRD number of $G$ in terms of its size and order. \begin{proposition}\label{prop-tree4} Let $G$ be a connected graph of order $n\ge 2$ with $m$ edges. Then $\gamma_{rdR}(G) \le 4m-2n+3$, with equality if and only if $G$ is a tree with $\gamma_{rdR}(G) = 2n-1$. \end{proposition} \begin{proof} For the given connected graph, $m\ge n-1$, and according to Proposition \ref{diam}, $\gamma_{rdR}(G) \le 2n-1 =4(n-1) -2n +3\le 4m-2n+3$.\\ If $\gamma_{rdR}(G) = 4m-2n+3$, then $m=n-1$ and $G$ is a tree with $\gamma_{rdR}(G) = 2n-1$.\\ Conversely, assume that $G$ is a tree with $\gamma_{rdR}(G) = 2n-1$. Then $m=n-1$, and hence $\gamma_{rdR}(G) = 4m-2n+3$. \end{proof} \section{Conclusions and problems} The concept of restrained double Roman domination in graphs was initially investigated in this paper. We studied the computational complexity of this concept and proved some bounds on the $RDRD$ number of graphs. In the case of trees, we characterized all trees attaining the exhibited bounds. We now conclude the paper with some problems suggested by this research. \\ $\bullet$ Provide characterizations of the graphs $G$ with small or large $RDRD$ numbers.
$\bullet$ It is also worthwhile to prove further nontrivial sharp bounds on $\gamma_{rdR}(G)$ for general graphs $G$ or for well-known families such as chordal, planar, triangle-free, or claw-free graphs. \\ $\bullet$ The decision problem RESTRAINED DOUBLE ROMAN DOMINATION is NP-complete for general graphs, as proved in Theorem \ref{the-NP}. Nevertheless, there may be families of graphs for which the $RDRD$ problem remains NP-complete, as well as families, such as trees, for which the $RDRD$ number can be computed by a polynomial-time algorithm. Can such families be identified?\\ $\bullet$ In Theorems \ref{the-tree1} and \ref{the-tree2} we proved upper bounds for $\gamma_{rdR}(T)$. Determining necessary and sufficient conditions for equality in these bounds remains an open problem. \end{document}
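The bounds and comparisons above lend themselves to a brute-force check on small graphs. The following sketch is ours, not part of the paper: it encodes the standard definitions of restrained Roman and restrained double Roman dominating functions from the literature (all helper names are our own), and verifies on small paths both the tree bound $\lceil(3n-1)/2\rceil$ and the strict inequality $\gamma_{rR}(G) < \gamma_{rdR}(G)$.

```python
from itertools import product

def valid_rdrd(f, adj):
    """f: V -> {0,1,2,3} is a restrained double Roman dominating function if
    every 0-vertex has a neighbor labeled 3 or two neighbors labeled 2,
    every 1-vertex has a neighbor labeled at least 2, and the subgraph
    induced by the 0-vertices has no isolated vertex (restrained condition)."""
    for v, fv in enumerate(f):
        nb = adj[v]
        if fv == 0:
            if not (any(f[u] == 3 for u in nb) or sum(f[u] == 2 for u in nb) >= 2):
                return False
            if not any(f[u] == 0 for u in nb):  # restrained condition
                return False
        elif fv == 1 and not any(f[u] >= 2 for u in nb):
            return False
    return True

def valid_rr(f, adj):
    """f: V -> {0,1,2} is a restrained Roman dominating function if every
    0-vertex has a neighbor labeled 2 and a neighbor labeled 0."""
    for v, fv in enumerate(f):
        if fv == 0:
            nb = adj[v]
            if not any(f[u] == 2 for u in nb) or not any(f[u] == 0 for u in nb):
                return False
    return True

def gamma(adj, labels, valid):
    """Minimum weight over all valid labelings (exhaustive search)."""
    return min(sum(f) for f in product(labels, repeat=len(adj)) if valid(f, adj))

def path(n):
    """Adjacency lists of the path P_n."""
    return [[u for u in (v - 1, v + 1) if 0 <= u < n] for v in range(n)]

for n in range(2, 6):
    g_rdR = gamma(path(n), (0, 1, 2, 3), valid_rdrd)
    g_rR = gamma(path(n), (0, 1, 2), valid_rr)
    assert g_rR < g_rdR                   # strict inequality of the theorem above
    assert g_rdR == -(-(3 * n - 1) // 2)  # equality in the tree bound for P_2..P_5
```

The exhaustive search is exponential in the order, so this is only practical for very small graphs, but it is a useful sanity check on the hand arguments.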
\begin{document} \title{A note on the $\mathbb Z_2$-equivariant Montgomery-Yang correspondence} \author{Yang Su} \address{Hua Loo-Keng Key Laboratory of Mathematics \newline \indent Chinese Academy of Sciences \newline\indent Beijing, 100190, China} \email{suyang{@}math.ac.cn} \date{August 19, 2009} \begin{abstract} In this paper, a classification of free involutions on $3$-dimensional homotopy complex projective spaces is given. By the $\mathbb Z_2$-equivariant Montgomery-Yang correspondence, we obtain all smooth involutions on $S^6$ with fixed-point set an embedded $S^3$. \end{abstract} \maketitle \section{Introduction}\label{sec:one} In \cite{M-Y}, Montgomery and Yang established a $1:1$ correspondence between the set of isotopy classes of smooth embeddings $S^3 \hookrightarrow S^6$, $C_3^3$, and the set of diffeomorphism classes of smooth manifolds homotopy equivalent to the $3$-dimensional complex projective space $\mathbb C \mathrm P^3$ ( these manifolds are called homotopy $\mathbb C \mathrm P^3$). It is known that the latter set is identified with the set of integers by the first Pontrjagin class of the manifold. Therefore so is the set $C_3^3$. In a recent paper \cite{Lv-Li}, Bang-he Li and Zhi L\"u established a $\mathbb Z_2$-equivariant version of the Montgomery-Yang correspondence. Namely, they proved that there is a $1:1$ correspondence between the set of smooth involutions on $S^6$ with fixed-point set an embedded $S^3$ and the set of smooth free involutions on homotopy $\mathbb C \mathrm P^3$. This correspondence gives a way of studying involutions on $S^6$ with fixed-point set an embedded $S^3$ by looking at free involutions on homotopy $\mathbb C \mathrm P^3$. 
As an application, by combining this correspondence with a result of Petrie \cite{Petrie}, which says that there are infinitely many homotopy $\mathbb C \mathrm P^3$'s admitting free involutions, the authors constructed infinitely many counterexamples to the Smith conjecture, which asserts that only the unknotted $S^3$ in $S^6$ can be the fixed-point set of an involution on $S^6$. In this note we study the classification of the orbit spaces of free involutions on homotopy $\mathbb C \mathrm P^3$. As a consequence, we get the classification of free involutions on homotopy $\mathbb C \mathrm P^3$, and further, by the $\mathbb Z_2$-equivariant Montgomery-Yang correspondence, the classification of involutions on $S^6$ with fixed-point set an embedded $S^3$. The manifolds $X^6$ homotopy equivalent to $\mathbb C \mathrm P^3$ are classified up to diffeomorphism by the first Pontrjagin class $p_1(X) =(24j+4)x^2$, $j \in \mathbb Z$, where $x^2$ is the canonical generator of $H^4(X;\mathbb Z)$ (c.~f.~\cite{M-Y}, \cite{Wall6}). We denote the manifold with $p_1 =(24j+4)x^2$ by $H\cp^3_j$. \begin{theorem}\label{thm:one} The manifold $H\cp^3_j$ admits an (orientation-reversing) smooth free involution if and only if $j$ is even. On every $H\cp^3_{2k}$, there are exactly two free involutions up to conjugation.\footnote{The same result was also obtained independently by Bang-he Li (unpublished).} \end{theorem} \begin{corollary}\label{coro:one} An embedded $S^3$ in $S^6$ is the fixed-point set of an involution on $S^6$ if and only if its Montgomery-Yang correspondent is $H\cp^3_{2k}$. For each embedding satisfying the condition, there are exactly two such involutions up to conjugation. \end{corollary} Theorem \ref{thm:one} is a consequence of a classification theorem (Theorem \ref{thm:two}) for the orbit spaces. Theorem \ref{thm:two} will be proved in Section $3$ via the classical surgery exact sequence of Browder-Novikov-Sullivan-Wall.
In Section $2$ we establish some topological properties of the orbit spaces, which will be needed in the solution of the classification problem. \section{Topology of the Orbit Space} In this section we summarize the topological properties of the orbit space of a smooth free involution on a homotopy $\mathbb C \mathrm P^3$. Some of the properties are also given in \cite{Lv-Li}. Here we give shorter proofs from a different point of view and in a different logical order. Let $\tau$ be a smooth free involution on $H\cp^3$, a homotopy $\mathbb C \mathrm P^3$. Denote the orbit manifold by $M$. \begin{example} The $3$-dimensional complex projective space $\mathbb C \mathrm P^3$ can be viewed as the sphere bundle of a $3$-dimensional real vector bundle over $S^4$. The fiberwise antipodal map $\tau_0$ is a free involution on $\mathbb C \mathrm P^3$ (c.~f.~\cite[A.1]{Petrie}). Denote the orbit space by $M_0$. \end{example} As a consequence of the Lefschetz fixed-point theorem and the multiplicative structure of $H^*(H\mathbb C \mathrm P^3)$, we have \begin{lemma}\cite[Theorem 1.4]{Lv-Li} $\tau$ must be orientation reversing. \end{lemma} \begin{lemma}\label{lemma:cohom} The cohomology ring of $M$ with $\mathbb Z_2$-coefficients is $$H^*(M;\mathbb Z_2)=\mathbb Z_2[t,q]/(t^3=0, q^2=0),$$ where $|t|=1$, $|q|=4$. \end{lemma} \begin{proof} Note that the fundamental group of $M$ is $\mathbb Z_2$. There is a fibration $\widetilde{M} \to M \to \mathbb R \mathrm P^{\infty}$, where $M \to \mathbb R \mathrm P^{\infty}$ is the classifying map of the covering. We apply the Leray-Serre spectral sequence. Since $\widetilde{M}$ is homotopy equivalent to $\mathbb C \mathrm P^3$, the nontrivial $E_2$-terms are $E_2^{p,q}=H^p(\mathbb R \mathrm P^{\infty};\mathbb Z_2)$ for $q=0,2,4,6$. Therefore all differentials $d_2$ are trivial, and hence $E_2=E_3$. Now the differential $d_3 \colon E_3^{0,2} \to E_3^{3,0}$ must be an isomorphism.
For otherwise the multiplicative structure of the spectral sequence implies that the spectral sequence collapses at the $E_3$-page, which implies that $H^*(M;\mathbb Z_2)$ is nontrivial for $* >6$, a contradiction. Then it is easy to see that $M$ has the claimed cohomology ring. \end{proof} \begin{remark} There is an exact sequence (cf.~\cite{Brown}) $$H_3(\mathbb Z/2) \to \mathbb Z \otimes_{\mathbb Z[\mathbb Z/2]} \mathbb Z_- \to H_2(M) \to H_2(\mathbb Z/2),$$ where $\mathbb Z_-$ is the nontrivial $\mathbb Z[\mathbb Z_2]$-module. By this exact sequence, $H_2(M)$ is either $\mathbb Z_2$ or trivial. $H^2(M;\mathbb Z_2)\cong \mathbb Z_2$ implies that $H_2(M) =0$. This was shown in \cite[Lemma 2.1]{Lv-Li} by geometric arguments. \end{remark} Now let us consider the Postnikov system of $M$. Since $\pi_1(M) \cong \mathbb Z_2$, $\pi_2(M) \cong \mathbb Z$ and the action of $\pi_1(M)$ on $\pi_2(M)$ is nontrivial, following \cite{Baues}, there are two candidates for the second space $P_2(M)$ of the Postnikov system, which are distinguished by their homology groups in low dimensions. See \cite[pp.265]{Olbermann} and \cite[Section 2A]{Su}. Let $\lambda$ be the free involution on $\cp^{\infty}$ mapping $[z_0, z_1, z_2, z_3, \cdots]$ to $[-z_1, z_0, -z_3, z_2, \cdots]$. Let $Q = (\cp^{\infty} \times S^{\infty})/(\lambda, -1)$, where $-1$ denotes the antipodal map on $S^{\infty}$; then there is a fibration $\cp^{\infty} \to Q \to \rp^{\infty}$ which corresponds to the nontrivial $k$-invariant. Lemma \ref{lemma:cohom} implies that $P_2(M)=Q$, since $Q$ has the same homology as $M$ in low dimensions. Let $f_2 \colon M \to Q$ be the second Postnikov map; since $\pi_i(M) \cong \pi_i(\mathbb C \mathrm P^3)=0$ for $3 \le i \le 6$, $f_2$ is actually a $7$-equivalence and $Q$ is the $6$-th space of the Postnikov system of $M$. By the naturality of the Postnikov system construction, all the orbit spaces have the same Postnikov system.
Therefore we have shown \begin{proposition}\cite[Lemma 3.2]{Lv-Li} \label{prop:hmtp} The orbit spaces of free involutions on homotopy $\mathbb C \mathrm P^3$ are all homotopy equivalent. \end{proposition} Now let us consider the characteristic classes of $M$. \begin{lemma} The total Stiefel-Whitney class of $M$ is $w(M)=1+t+t^2$, where $t \in H^1(M;\mathbb Z_2)$ is the generator. \end{lemma} \begin{proof} The involution $\tau$ is orientation reversing, therefore $M$ is nonorientable and $w_1(M)=t$. The Steenrod square $Sq^2 \colon H^4(M;\mathbb Z_2) \to H^6(M;\mathbb Z_2)$ is trivial; this can be seen by looking at $M_0$, whose $4$-dimensional cohomology classes are pulled back from $S^4$. Therefore the second Wu class is $v_2(M)=0$. Thus by the Wu formula $w(M)=Sq(v(M))$ it is seen that the total Stiefel-Whitney class of $M$ is $w(M)=1+t+t^2$. \end{proof} Let $\pi \colon H\cp^3 \to M$ be the projection map; then $\pi^*p_1(M)=p_1(H\cp^3)$. \begin{lemma} The induced map $\pi^* \colon H^4(M) \to H^4(H\cp^3)$ is an isomorphism. \end{lemma} \begin{proof} Applying the Leray-Serre spectral sequence for integral cohomology to the fibration $\widetilde{M} \to M \to \rp^{\infty}$, the $E_2$-terms are $E_2^{p,q}=H^p(\rp^{\infty};\underline{H^q(\widetilde{M})})$. It is known that $H^3(M)=H^3(Q)=0$ and $H^5(M)=H^5(Q)=0$ (for $H^*(Q)$, see \cite[pp.265]{Olbermann}), therefore $E_{\infty}^{0,4}=E_2^{0,4}=H^4(\widetilde{M})$ is the only nonzero term in the line $p+q=4$. This shows that the edge homomorphism is an isomorphism, which is just $\pi^*$. \end{proof} Therefore the first Pontrjagin class of $M$ is $p_1(M)=(24j+4)u$ ($j \in \mathbb Z$), where $u$ is the canonical generator of $H^4(M)$ with $\pi^*u=x^2$. \section{Classification of the orbit spaces} By Proposition \ref{prop:hmtp}, every orbit space $M$ is homotopy equivalent to $M_0$.
Thus the set of conjugation classes of free involutions on homotopy $\mathbb C \mathrm P^3$ is in $1:1$ correspondence with the set of diffeomorphism classes of smooth manifolds homotopy equivalent to $M_0$. Denote the latter by $\mathcal M (M_0)$. Let $\mathscr{S}(M_0)$ be the smooth structure set of $M_0$ and $Aut(M_0)$ be the set of homotopy classes of self-equivalences of $M_0$. There is an action of $Aut(M_0)$ on $\mathscr S(M_0)$ with orbit set $\mathcal M(M_0)$. (Since the Whitehead group of $\mathbb Z_2$ is trivial, we omit the decoration $s$ throughout.) The surgery exact sequence for $M_0$ is $$L_7(\mathbb Z_2^-) \to \mathscr S(M_0) \to [M_0, G/O] \to L_6(\mathbb Z_2^-).$$ By \cite[Theorem 13A.1]{Wall}, $L_7(\mathbb Z_2^-)=0$ and $L_6(\mathbb Z_2^-) \stackrel{c}{\cong} \mathbb Z_2$, where $c$ is the Kervaire-Arf invariant. Since $\dim M_0=6$ and $PL/O$ is $6$-connected, we have an isomorphism $[M_0, G/O] \cong [M_0, G/PL]$. For a given surgery classifying map $g \colon M_0 \to G/PL$, the Kervaire-Arf invariant is given by the Sullivan formula (\cite{Sullivan}, \cite[Theorem 13B.5]{Wall}) \begin{eqnarray*} c(M_0, g) & = & \langle w(M_0) \cdot g^*\kappa, [M_0] \rangle \\ & = & \langle (1+t+t^2) \cdot g^*(1+Sq^2+Sq^2Sq^2)(k_2+k_6), [M_0] \rangle \\ & = & \langle g^*k_6, [M_0] \rangle . \end{eqnarray*} Now since $M_0$ has only $2$-torsion, and modulo groups of odd order we have $$G/PL \simeq Y \times \prod_{i \ge 2}(K(\mathbb Z_2, 4i-2) \times K(\mathbb Z,4i)),$$ where $Y=K(\mathbb Z_2,2) \times_{\delta Sq^2} K(\mathbb Z,4)$, we have $[M_0, G/PL]=[M_0,Y] \times [M_0, K(\mathbb Z_2,6)]$. Here $k_6$ is the fundamental class of $K(\mathbb Z_2,6)$. Therefore the surgery exact sequence implies \begin{lemma}\label{lemma:str} $\mathscr S (M_0) \cong [M_0, Y]$.
\end{lemma} The projection $\pi \colon \mathbb C \mathrm P^3 \to M_0$ induces a homomorphism $\pi^* \colon [M_0, Y] \to [\mathbb C \mathrm P^3, Y]$, and $[\mathbb C \mathrm P^3,Y]$ is isomorphic to $\mathbb Z$ through the splitting invariant $s_4$ (\cite[Lemma 14C.1]{Wall}). Let $\Phi=s_4 \circ \pi^*$ be the composition. \begin{lemma}\label{lemma:exact} There is a short exact sequence $0 \to \mathbb Z_2 \to [M_0,Y] \stackrel{\Phi}{\rightarrow} 2\mathbb Z \to 0$. \end{lemma} \begin{proof} We have $[\mathbb C \mathrm P^3, Y]=[\mathbb C \mathrm P^2, Y]$, and according to Sullivan \cite{Sullivan}, the exact sequence $$L_4(1) \stackrel{\cdot 2}{\rightarrow} [\mathbb C \mathrm P^2, Y] \to [\mathbb C \mathrm P^1,Y]$$ is non-split. Let $p \colon Y \to K(\mathbb Z_2,2)$ be the projection map; then for any $f \in [\mathbb C \mathrm P^3, Y]$, $s_4(f) \in 2\mathbb Z$ if and only if $p \circ f \colon \mathbb C \mathrm P^3 \to K(\mathbb Z_2,2)$ is null-homotopic. Now by Lemma \ref{lemma:cohom}, the homomorphism $H^2(M_0;\mathbb Z_2) \to H^2(\mathbb C \mathrm P^3;\mathbb Z_2)$ is trivial. Therefore for any $g \in [M_0, Y]$, the composition $p \circ g \circ \pi$ is null-homotopic, and thus $\mathrm{Im} \Phi \subset 2\mathbb Z$. On the other hand, since $\pi^* \colon H^4(M_0;\mathbb Z) \to H^4(\mathbb C \mathrm P^3)$ is an isomorphism, any map $f \colon \mathbb C \mathrm P^3 \to K(\mathbb Z,4)$ factors through some $g' \colon M_0 \to K(\mathbb Z,4)$. Let $i \colon K(\mathbb Z,4) \to Y$ be the fiber inclusion; since $s_4(i\circ f)$ can take any value in $2\mathbb Z$, so can $\Phi(i \circ g')$. Let $h \colon M_0 \to K(\mathbb Z_2,2)$ be a map corresponding to the nontrivial cohomology class in $H^2(M_0;\mathbb Z_2)$. By obstruction theory, there is a lifting $g \colon M_0 \to Y$. By the previous argument, there is also a map $g' \colon M_0 \to Y$ such that $\Phi(g)=\Phi(g')$, but $ p \circ g' \colon M_0 \to K(\mathbb Z_2,2)$ is null-homotopic.
Therefore the kernel of $\Phi$ consists of two elements. \end{proof} \begin{remark} In \cite{Petrie} Petrie showed that every homotopy $\mathbb C \mathrm P^3$ admits a free involution. It was pointed out by Dovermann, Masuda and Schultz \cite[pp.~4]{DMS} that since the class $G$ there is in fact twice the generator of $H^4(S^4)$, Petrie's computation actually shows that every $H\cp^3_{2k}$ admits a free involution, which is consistent with Lemma \ref{lemma:exact}. \end{remark} The set of diffeomorphism classes of manifolds homotopy equivalent to $M_0$, $\mathcal M(M_0)$, is the orbit set $\mathscr S(M_0)/Aut(M_0)$. In general, the determination of the action of $Aut(M_0)$ on the structure set is very difficult. But in our case, the situation is quite simple, since \begin{lemma}\label{lemma:action} The group of self-equivalences $Aut(M_0)$ is the trivial group. \end{lemma} \begin{proof} A special CW-complex structure of $M_0$ was given in \cite[pp.~885]{Lv-Li}: $M_0$ is an $\mathbb R \mathrm P^2$-bundle over $S^4$, therefore it is the union of two copies of $\mathbb R \mathrm P^2 \times D^4$, glued along their boundaries. Choosing a CW-complex structure of $\mathbb R \mathrm P^2$, we obtain a product CW-structure on one copy of $ \mathbb R \mathrm P^2 \times D^4$, and by shrinking the other copy of $\mathbb R \mathrm P^2 \times D^4$ to the core $\mathbb R \mathrm P^2$, we get a CW-complex structure on $M_0$ whose $2$-skeleton is $\mathbb R \mathrm P^2$. Let $\varphi \in Aut(M_0)$ be a self homotopy equivalence of $M_0$. By cellular approximation, we may assume that $\varphi$ maps $\mathbb R \mathrm P^2$ to $\mathbb R \mathrm P^2$. It is easy to see that $\varphi|_{\mathbb R \mathrm P^2}$ is homotopic to $\mathrm{id}_{\mathbb R \mathrm P^2}$. Therefore, by homotopy extension, we may further assume that $\varphi|_{\mathbb R \mathrm P^2}=\mathrm{id}_{\mathbb R \mathrm P^2}$.
The obstructions to constructing a homotopy between $\varphi$ and $\mathrm{id}_{M_0}$ which is the identity on $\mathbb R \mathrm P^2$ lie in $H^i(M_0,\mathbb R \mathrm P^2;\pi_i(M_0))$. Since $\pi_i(M_0)=0$ for $3 \le i \le 6$ and $H^1(M_0,\mathbb R \mathrm P^2;\mathbb Z_2)=H^2(M_0,\mathbb R \mathrm P^2;\mathbb Z)=0$, all the obstruction groups are zero. Therefore $\varphi \simeq \mathrm{id}_{M_0}$. \end{proof} Combining Lemma \ref{lemma:str}, Lemma \ref{lemma:exact} and Lemma \ref{lemma:action}, we obtain a classification of manifolds homotopy equivalent to $M_0$. \begin{theorem}\label{thm:two} Let $M^6$ be a smooth manifold homotopy equivalent to $M_0$; then $p_1(M)=(48j+4)u$, where $u\in H^4(M;\mathbb Z)$ is the canonical generator. For each $j \in \mathbb Z$, up to diffeomorphism, there are exactly two such manifolds with the same $p_1=(48j+4)u$. \end{theorem} Theorem \ref{thm:one} and Corollary \ref{coro:one} are direct consequences of this theorem. \end{document}
\begin{document} \begin{singlespace} \title{Sample Path Large Deviations for Stochastic Evolutionary Game Dynamics\thanks{We thank Michel Bena\"im for extensive discussions about this paper and related topics, and two anonymous referees and an Associate Editor for helpful comments. Financial support from NSF Grants SES-1155135 and SES-1458992, U.S. Army Research Office Grant MSN201957, and U.S. Air Force OSR Grant FA9550-09-0538 is gratefully acknowledged.}} \begin{abstract} We study a model of stochastic evolutionary game dynamics in which the probabilities that agents choose suboptimal actions depend on payoff consequences. We prove a sample path large deviation principle, characterizing the rate of decay of the probability that the sample path of the evolutionary process lies in a prespecified set as the population size approaches infinity. We use these results to describe excursion rates and stationary distribution asymptotics in settings where the mean dynamic admits a globally attracting state, and we compute these rates explicitly for the case of logit choice in potential games. \end{abstract} \end{singlespace} \section{Introduction} Evolutionary game theory concerns the dynamics of aggregate behavior of populations of strategically interacting agents. To define these dynamics, one specifies the population size $N$, the game being played, and the revision protocol agents follow when choosing new actions. Together, these objects define a Markov chain $\mathbf{X}^N$ over the set of population states---that is, of distributions of agents across actions. From this common origin, analyses of evolutionary game dynamics generally proceed in one of two directions. One possibility is to consider deterministic dynamics, which describe the evolution of aggregate behavior using an ordinary differential equation. More precisely, a \emph{deterministic evolutionary dynamic} is a map that assigns population games to dynamical systems on the set of population states.
The replicator dynamic (\cite{TayJon78}), the best response dynamic (\cite{GilMat91}), and the logit dynamic (\cite{FudLev98}) are prominent examples. In order to derive deterministic dynamics from the Markovian model of individual choice posited above, one can consider the limiting behavior of the Markov chains as the population size $N$ approaches infinity. \cite{Kur70}, \cite{Ben98}, \cite{BenWei03,BenWei09}, and \cite{RotSan13} show that if this limit is taken, then over any finite time span, it becomes arbitrarily likely that the Markov chain is very closely approximated by solutions to a differential equation---the \emph{mean dynamic}---defined by the Markov chain's expected increments. Different revision protocols generate different deterministic dynamics: for instance, the replicator dynamic can be obtained from a variety of protocols based on imitation, while the best response and logit dynamics are obtained from protocols based on exact and perturbed optimization, respectively. \footnote{For imitative dynamics, see \cite{Hel92}, \cite{Wei95}, \cite{BjoWei96}, \cite{Hof95im}, and \cite{Sch98}; for exact and perturbed best response dynamics, see \cite{RotSan13} and \cite{HofSan07}, respectively. For surveys, see \cite{SanPGED,SanHB}.} These deterministic dynamics describe typical behavior in a large population, specifying how the population settles upon a stable equilibrium, a stable cycle, or a more complex stable set. They thus provide theories of how equilibrium behavior is attained, or of how it may fail to be attained. At the same time, if the process $\mathbf{X}^N$ is ergodic---for instance, if there is always a small chance of a revising player choosing any available action---then any stable state or other stable set of the mean dynamic is only temporarily so: equilibrium must break down, and new equilibria must emerge. Behavior over very long time spans is summarized by the stationary distribution of the process. 
This distribution is typically concentrated near a single stable set, the identity of which is determined by the relative probabilities of transitions between stable sets. This last question is the subject of the large literature on stochastic stability under evolutionary game dynamics. \footnote{Key early contributions include \cite{FosYou90}, \cite{KanMaiRob93}, and \cite{You93}; for surveys, see \cite{You98} and \cite{SanPGED}.} The most commonly employed framework in this literature is that of \cite{KanMaiRob93} and \cite{KanRob95, KanRob98}. These authors consider a population of fixed size, and suppose that agents employ the \emph{best response with mutations} rule: with high probability a revising agent plays an optimal action, and with the complementary probability chooses an action uniformly at random. They then study the long run behavior of the stochastic game dynamic as the probability of mutation approaches zero. The assumption that all mistakes are equally likely makes the question of equilibrium breakdown simple to answer, as the unlikelihood of a given sample path depends only on the number of suboptimal choices it entails. This eases the determination of the stationary distribution, which is accomplished by means of the well-known Markov chain tree theorem. \footnote{See \pgcite{FreWen98}{Lemma 6.3.1} or \pgcite{You93}{Theorem 4}.} To connect these two branches of the literature, one can consider the questions of equilibrium breakdown and stochastic stability in the large population limit, describing the behavior of the processes $\mathbf{X}^N$ when this behavior differs substantially from that of the mean dynamic. Taking a key first step in this direction, this paper establishes a \emph{sample path large deviation principle}: for any prespecified set of sample paths $\Phi$ of a fixed duration, we characterize the rate of decay of the probability that the sample path of $\mathbf{X}^N$ lies in $\Phi$ as $N$ grows large.
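Schematically, and in notation of our own choosing rather than the paper's, a sample path large deviation principle of this kind states that for well-behaved sets of paths $\Phi$ over a horizon $[0,T]$,

```latex
\mathbb{P}\bigl(\{\mathbf{X}^N_t\}_{t \in [0,T]} \in \Phi\bigr)
  \approx \exp\Bigl(-N \inf_{\phi \in \Phi} \int_0^T L\bigl(\phi_t, \dot\phi_t\bigr)\,dt\Bigr),
```

where the running cost $L(x,u)$ is the convex conjugate in $u$ of the logarithmic moment generating function of the process's increments at state $x$, and $L(x,u)=0$ precisely when $u$ agrees with the mean dynamic at $x$; thus departures from the mean dynamic are exponentially unlikely at a rate proportional to $N$.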
This large deviation principle is the basic preliminary to obtaining characterizations of the expected waiting times before transitions between equilibria and of stationary distribution asymptotics. As we noted earlier, most work in stochastic evolutionary game theory has focused on evolution under the best response with mutations rule, and on properties of the small noise limit. In some contexts, it seems more realistic to follow the approach taken here, in which the probabilities of mistakes depend on their costs. \footnote{Important early work featuring this assumption includes the logit model of \cite{Blu93,Blu97} and the probit model of \cite{MyaWal03}. For more recent work and references, see \cite{SanSIM,SanORDERS}, \cite{Sta12}, and \cite{SanSta16}.} Concerning the choice of limits, \cite{BinSam97} argue that the large population limit is more appropriate than the small noise limit for most economic modeling. However, the technical demands of this approach have restricted previous analyses to the two-action case. \footnote{See \cite{BinSamVau95}, \cite{BinSam97}, \cite{Blu93}, and \cite{SanSIM,SanORDERS}.} This paper provides a necessary first step toward obtaining tractable analyses of large population limits in many-action environments. In order to move from the large deviation principle to statements about the long run behavior of the stochastic process, one can adapt the analyses in \cite{FreWen98} of diffusions with vanishing noise parameters to our setting of sequences of Markov chains running on increasingly fine grids in the simplex. In Section \ref{sec:App}, we explain how the large deviation principle can be used to estimate the waiting times to reach sets of states away from an attractor and to describe the asymptotic behavior of the stationary distribution in cases where the mean dynamic admits a globally attracting state.
We prove that when agents playing a potential game make decisions using the logit choice rule, the control problems in the statement of the large deviation principle can be solved explicitly. We illustrate the implications of these results by using them to characterize long run behavior in a model of traffic congestion. Our work here is closely connected to developments in two branches of the stochastic processes literature. Large deviation principles for environments quite close to those considered here have been established by \cite{AzeRug77}, \cite{Dup88}, and \cite{DupEll97}. In these works, the sequences of processes under consideration are defined on open sets in $\mathbb{R}^n$, and have transition laws that allow for motion in all directions from every state. These results do not apply directly to the evolutionary processes considered here, which necessarily run on a compact set. Thus relative to these works, our contribution lies in addressing behavior at and near boundary states. There are also close links to work on interacting particle systems with long range interactions. In game-theoretic terms, the processes studied in this literature describe the \emph{individual choices} of each of $N$ agents as they evolve in continuous time, with the stochastic changes in each agent's action being influenced by the aggregate behavior of all agents. Two large deviation principles for such systems are proved by \cite{Leo95LD}. The first describes large deviations of the sequence of probability distributions on the set of empirical distributions, where the latter distributions anonymously describe the $N$ agents' sample paths through the finite set of actions $\scrA = \{1, \ldots, n\}$. \footnote{In more detail, each sample path of the $N$ agent particle system specifies the action $i \in \scrA$ played by each agent as a function of time $t \in [0, T]$.
Each sample path generates an empirical distribution $D^N$ over the set of paths $\mathscr{I} = \{\iota \colon [0, T] \to \scrA\}$, where with probability one, $D^N$ places mass $\frac1N$ on $N$ distinct paths in $\mathscr{I}$. The random draw of a sample path of the particle system then induces a probability distribution $\mathscr{P}^N$ over empirical distributions $D^N$ on the set of paths $\mathscr{I}$. The large deviation principle noted above concerns the behavior of the probability distributions $\mathscr{P}^N$ as $N$ grows large.} The second describes large deviations of the sequence of probability distributions over paths on discrete grids $\mathscr{X}^N$ in the $n$-simplex, paths that represent the evolution of \emph{aggregate} behavior in the $N$ agent particle system. \footnote{In the parlance of the particle systems literature, the first result concerns the ``empirical distributions'' (or ``empirical measures'') of the system, while the second concerns the ``empirical process''.} The Freidlin-Wentzell theory for particle systems with long range interactions has been developed by \cite{BorSun12}, who provide many further references to this literature. The large deviation principle we prove here is a discrete-time analogue of the second result of \cite{Leo95LD} noted above. Unlike \cite{Leo95LD}, we allow individuals' transition probabilities to depend in a vanishing way on the population size, as is natural in our game-theoretic context (see Examples \ref{ex:MatchNF}--\ref{ex:Clever} below). Also, our discrete-time framework obviates the need to address large deviations in the arrival of revision opportunities. \footnote{Under a continuous-time process, the number of revision opportunities received by $N$ agents over a short but fixed time interval $[t, t+dt]$ follows a Poisson distribution with mean $N\, dt$. As the population size grows large, the number of arrivals per agent over this interval becomes almost deterministic.
However, a large deviation principle for the evolutionary process must account for exceptional realizations of the number of arrivals. For a simple example illustrating how random arrivals influence large deviations results, see \pgcite{DemZei98}{Exercise 2.3.18}.} But the central advantage of our approach is its simplicity. Describing the evolution of the choices of each of $N$ individual agents requires a complicated stochastic process. Understanding the proofs (and even the statements) of large deviation principles for these processes requires substantial background knowledge. Here our interest is in aggregate behavior. By making the aggregate behavior process our primitive, we are able to state our large deviation principle with a minimum of preliminaries. Likewise, our proof of this principle, which follows the weak convergence approach of \cite{DupEll97}, is relatively direct, and in Section \ref{sec:LDP} we explain its main ideas in a straightforward manner. These factors may make the work to follow accessible to researchers in economics, biology, engineering, and other fields. This paper is part of a larger project on large deviations and stochastic stability under evolutionary game dynamics with payoff-dependent mistake probabilities and arbitrary numbers of actions. In \cite{SanSta16}, we considered the case of the small noise double limit, in which the noise level in agents' decisions is first taken to zero, and then the population size to infinity. The initial analysis of the small noise limit concerns a sequence of Markov chains on a fixed finite state space; the relevant characterizations of large deviations properties in terms of discrete minimization problems are simple and well-known. Taking the second limit as the population size grows large turns these discrete minimization problems into continuous optimal control problems. We show that the latter problems possess a linear structure that allows them to be solved analytically.
The present paper begins the analysis of large deviations and stochastic stability when the population size is taken to infinity for a fixed noise level. This analysis concerns a sequence of Markov chains on ever finer grids in the simplex, making the basic large deviations result---our main result here---considerably more difficult than its small-noise counterpart. Future work will provide a full development of Freidlin-Wentzell theory for the large population limit, allowing for mean dynamics with multiple stable states. It will then introduce the second limit as the noise level vanishes, and determine the extent to which the two double limits agree. Further discussion of this research agenda is offered in Section \ref{sec:Disc}. \section{The Model} We consider a model in which all agents are members of a single population. The extension to multipopulation settings only requires more elaborate notation. \subsection{Finite-population games}\label{sec:FPopGames} We consider games in which the members of a population of $N$ agents choose actions from the common finite action set $\scrA= \{1, \ldots , n\}$. We describe the population's aggregate behavior by a \emph{population state} $x$, an element of the simplex $X = \{x \in \mathbb{R}^n_+\colon \sum_{i=1}^n x_i = 1\}$, or more specifically, the grid $\mathscr{X}^N = X \cap \frac1N \mathbb{Z}^n =\brace{x \in X\colon Nx \in {\mathbb{Z}}^n}$. The standard basis vector $e_i \in X \subset \mathbb{R}^n$ represents the \emph{pure population state} at which all agents play action $i$. We identify a \emph{finite-population game} with its payoff function $F^N\colon \mathscr{X}^N \to \mathbb{R}^n$, where $F^N_i(x)\in \mathbb{R}$ is the payoff to action $i$ when the population state is $x \in \mathscr{X}^N$.
\begin{example}[Matching in normal form games]\label{ex:MatchNF} Assume that agents are matched in pairs to play a symmetric two-player normal form game $A \in \mathbb{R}^{n \times n}$, where $A_{ij}$ is the payoff obtained by an $i$ player who is matched with a $j$ player. If each agent is matched with all other agents (but not with himself), then average payoffs in the resulting population game are given by $F^N_i(x) =\frac1{N-1} (A(Nx - e_i))_i=(Ax)_i + \tfrac1{N-1}((Ax)_i - A_{ii})\,$. \ensuremath{\hspace{4pt}\Diamondblack} \end{example} \begin{example}[Congestion games] To define a \emph{congestion game} (\cite{BecMcGWin56}, \cite{Ros73}), one specifies a collection of facilities $\Lambda$ (e.g., links in a highway network), and associates with each facility $\lambda \in \Lambda$ a function $\ell^N_\lambda \colon \{0, \frac1N, \ldots , 1\} \to \mathbb{R}$ describing the cost (or benefit, if $\ell^N_\lambda <0$) of using the facility as a function of the fraction of the population that uses it. Each action $i \in \scrA$ (a path through the network) requires the facilities in a given set $\Lambda_i \subseteq \Lambda$ (the links on the path), and the payoff to action $i$ is the negative of the sum of the costs accruing from these facilities. Payoffs in the resulting population game are given by $F^N_i(x) = -\sum_{\lambda \in \Lambda_i}\ell^N_\lambda(u_\lambda(x))$, where $u_\lambda(x)=\sum_{i:\: \lambda\in \Lambda_i}x_i$ denotes the total utilization of facility $\lambda$ at state $x$. \ensuremath{\hspace{4pt}\Diamondblack} \end{example} Because the population size is finite, the payoff vector an agent considers when revising may depend on his current action. To allow for this possibility, we let $F^N_{i \to \cdot}\colon \mathscr{X}^N \to \mathbb{R}^n$ denote the payoff vector considered at state $x$ by an action $i$ player.
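As a concrete check, the two expressions for matching payoffs in Example \ref{ex:MatchNF} can be compared numerically. The sketch below uses a hypothetical $3$-action game $A$, population size $N = 50$, and grid state $x$, all chosen purely for illustration:

```python
import numpy as np

# Hypothetical 3-action symmetric game and population size (illustration only).
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
N = 50
x = np.array([0.5, 0.3, 0.2])   # a state in the grid X^N: N*x has integer entries

def F_direct(i):
    # Average payoff to an action-i player matched with the other N-1 agents.
    e_i = np.eye(len(A))[i]
    return float((A @ (N * x - e_i))[i]) / (N - 1)

def F_closed_form(i):
    # The equivalent expression (Ax)_i + ((Ax)_i - A_ii)/(N - 1) from the text.
    return float((A @ x)[i] + ((A @ x)[i] - A[i, i]) / (N - 1))

for i in range(3):
    assert abs(F_direct(i) - F_closed_form(i)) < 1e-12
```

The $O(\frac1N)$ term is the finite-population effect of excluding self-matching; it vanishes in the limit, leaving the limiting payoff $(Ax)_i$, consistent with the vanishing finite-population effects assumed in Section \ref{sec:Processes}.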
\begin{example}[Simple payoff evaluation]\label{ex:Simple} Under \emph{simple payoff evaluation}, all agents' decisions are based on the current vector of payoffs: $F^N_{i\to j}(x)=F^N_j(x)$ for all $i, j \in \scrA$. \ensuremath{\hspace{4pt}\Diamondblack} \end{example} \begin{example}[Clever payoff evaluation]\label{ex:Clever} Under \emph{clever payoff evaluation}, an action $i$ player accounts for the fact that by switching to action $j$ at state $x$, he changes the state to the adjacent state $y =x + \frac1N(e_j - e_i)$. To do so, he evaluates payoffs according to the \emph{clever payoff vector} $F^N_{i\to j}(x)=F^N_j(x +\tfrac1N(e_j-e_i))$. \footnote{This adjustment is important in finite population models: see \pgcite{SanPGED}{Section 11.4.2} or \cite{SanSta16}. } \ensuremath{\hspace{4pt}\Diamondblack} \end{example} As the assumptions in Section \ref{sec:Processes} will make clear, our results are the same whether simple or clever payoff evaluation is assumed. \subsection{Revision protocols}\label{sec:RP} In our model of evolution, each agent occasionally receives opportunities to switch actions. At such moments, an agent decides which action to play next by employing a \emph{protocol} $\rho^N \colon \mathbb{R}^n \times \mathscr{X}^N \to X^n$, with the choice probabilities of a current action $i$ player being described by $\rho^N_{i\, \cdot}\colon \mathbb{R}^n \times \mathscr{X}^N \to X$. Specifically, if a revising action $i$ player faces payoff vector $\pi \in \mathbb{R}^n$ at population state $x \in \mathscr{X}^N$, then the probability that he proceeds by playing action $j$ is $\rho^N_{i j}(\pi, x)$. We will assume shortly that this probability is bounded away from zero, so that there is always a nonnegligible probability of the revising agent playing each of the actions in $\scrA$; see condition \eqref{eq:LimSPBound} below. 
\begin{example}[The logit protocol]\label{ex:Logit} A fundamental example of a revision protocol with positive choice probabilities is the \emph{logit protocol}, defined by \begin{equation}\label{eq:LogitProtocol} \rho^N_{ij}(\pi, x) = \frac{\exp(\eta^{-1}\pi_j)}{\sum_{k \in \scrA}\exp(\eta^{-1}\pi_k)} \end{equation} for some \emph{noise level} $\eta>0$. When $\eta$ is small, an agent using this protocol is very likely to choose an optimal action, but places positive probability on every action, with lower probabilities being placed on worse-performing actions. \ensuremath{\hspace{4pt}\Diamondblack} \end{example} \begin{example}[Perturbed best response protocols] One can generalize \eqref{eq:LogitProtocol} by assuming that agents' choice probabilities maximize the difference between their expected base payoff and a convex penalty: \begin{equation*} \rho^N_{i\cdot}(\pi, x)=\argmax_{y\in \Int(X)}\paren{\sum_{k\in\scrA}\pi_ky_k -h(y)}, \end{equation*} where $h \colon \Int(X) \to \mathbb{R}$ is strictly convex and steep, in the sense that $|\nabla h(y)|$ approaches infinity whenever $y$ approaches the boundary of $X$. The logit protocol \eqref{eq:LogitProtocol} is recovered when $h$ is the negated entropy function $h(y)=\eta\sum_{k \in \scrA}y_k \log y_k$.
\ensuremath{\hspace{4pt}\Diamondblack} \end{example} \begin{example}[The pairwise logit protocol]\label{ex:PLogit} Under the \emph{pairwise logit protocol}, a revising agent chooses a candidate action distinct from his current action at random, and then applies the logit rule \eqref{eq:LogitProtocol} only to his current action and the candidate action: \begin{equation*} \rho^N_{ij}(\pi, x) = \begin{cases} \frac1{n-1}\cdot\frac{\exp(\eta^{-1}\pi_j)}{\exp(\eta^{-1}\pi_i)+\exp(\eta^{-1}\pi_j)}&\text{if }j \ne i\\ \frac1{n-1}\sum_{k\ne i}\frac{\exp(\eta^{-1}\pi_i)}{\exp(\eta^{-1}\pi_i)+\exp(\eta^{-1}\pi_k)}&\text{if }j = i.\;\ensuremath{\hspace{4pt}\Diamondblack} \end{cases} \end{equation*} \end{example} \begin{example}[Imitation with ``mutations''] Suppose that with probability $1-\varepsilon$, a revising agent picks an opponent at random and switches to her action with probability proportional to the opponent's payoff, and that with probability $\varepsilon>0$ the agent chooses an action at random. If payoffs are normalized to take values between 0 and 1, the resulting protocol takes the form \begin{equation*} \rho^N_{ij}(\pi, x) = \begin{cases} (1-\varepsilon)\,\frac{N}{N-1}x_j\,\pi_j + \frac{\varepsilon}n&\text{if }j \ne i\\ (1-\varepsilon)\paren{\frac{Nx_i -1}N+\sum_{k\ne i}\frac{N}{N-1}x_k(1 - \pi_k)} + \frac{\varepsilon}n&\text{if }j = i. \end{cases} \end{equation*} The positive mutation rate ensures that all actions are chosen with positive probability. \ensuremath{\hspace{4pt}\Diamondblack} \end{example} For many further examples of revision protocols, see \cite{SanPGED}. \subsection{The stochastic evolutionary process}\label{sec:TheSEP} Together, a population game $F^N$ and a revision protocol $\rho^N$ define a discrete-time stochastic process $\mathbf{X}^N = \{X^{N}_{k}\}_{k=0}^\infty$, which is defined informally as follows: During each period, a single agent is selected at random and given a revision opportunity. 
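This informal description can be sketched as a short simulation. The following illustration is only a sketch: it assumes three actions, simple payoff evaluation (Example \ref{ex:Simple}) in a hypothetical identity-matrix coordination game, an integer-count representation of the state, and a fixed random seed. It draws a revising agent with probability equal to his action's current share and applies the logit protocol \eqref{eq:LogitProtocol}:

```python
import numpy as np

rng = np.random.default_rng(0)

def logit(pi, eta):
    # Logit protocol: choice probabilities over actions at payoff vector pi.
    w = np.exp((pi - pi.max()) / eta)   # shift by max(pi) for numerical stability
    return w / w.sum()

def one_period(counts, payoff, eta, N):
    # One period: the revision opportunity goes to an action-i player with
    # probability x_i; the reviser then plays j with probability rho_ij.
    x = counts / N
    i = rng.choice(len(counts), p=x)
    j = rng.choice(len(counts), p=logit(payoff(x), eta))
    counts = counts.copy()
    counts[i] -= 1                      # the state moves from x to x + (e_j - e_i)/N
    counts[j] += 1
    return counts

# Hypothetical example: matching in a 3-action coordination game (A = identity).
N, eta = 100, 0.5
counts = np.array([40, 30, 30])         # X_0 = (0.4, 0.3, 0.3)
for _ in range(10 * N):                 # 10 units of clock time = 10*N periods
    counts = one_period(counts, lambda x: np.eye(3) @ x, eta, N)
assert counts.sum() == N and (counts >= 0).all()
```

Because exactly one agent revises per period, the state always remains on the grid $\mathscr{X}^N$, and an action with no current users can never lose mass.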
The probabilities with which he chooses each action are obtained by evaluating the protocol $\rho^N$ at the relevant payoff vector and population state. Each period of the process $\mathbf{X}^N$ takes $\frac1N$ units of clock time, as this fixes at one the expected number of revision opportunities that each agent receives during one unit of clock time. More precisely, the process $\mathbf{X}^N$ is a Markov chain with initial condition $X^N_0 \in \mathscr{X}^N$ and transition law \begin{equation}\label{eq:PNEta} \mathbb{P}\paren{X^{N}_{k+1}=y \,\big\vert\, X^{N}_{k}=x}= \begin{cases} x_{i}\, \rho^N_{ij}(F^N_{i\to\cdot}(x),x) & \text{if}\;y=x+\frac1N(e_{j}-e_{i})\text{ and }j\ne i,\\ \sum_{i=1}^{n}x_{i}\, \rho^N_{ii}(F^N_{i\to\cdot}(x),x) & \text{if}\;y=x,\\ 0 & \text{otherwise}. \end{cases} \end{equation} When a single agent switches from action $i$ to action $j$, the population state changes from $x$ to $y=x+\frac1N(e_{j}-e_{i})$. This requires that the revision opportunity be assigned to a current action $i$ player, which occurs with probability $x_i$, and that this player choose to play action $j$, which occurs with probability $\rho^N_{ij}(F^N_{i\to\cdot}(x),x) $. Together these yield the law \eqref{eq:PNEta}. \begin{example} Suppose that $N$ agents are matched to play the normal form game $A\in \mathbb{R}^{n\times n}$, using clever payoff evaluation and the logit choice rule with noise level $\eta>0$. 
If the state in period $k$ is $x \in \mathscr{X}^N$, then by Examples \ref{ex:MatchNF}, \ref{ex:Clever}, and \ref{ex:Logit} and equation \eqref{eq:PNEta}, the probability that the state in period $k+1$ is $x+\frac1N(e_{j}-e_{i}) \ne x$ is equal to \footnote{Since the logit protocol is parameterized by a noise level and since clever payoff evaluation is used, this example satisfies the assumptions of our analysis of the small noise double limit in \cite{SanSta16}.} \[ \mathbb{P}\paren{X^{N}_{k+1}=x+\tfrac1N(e_{j}-e_{i}) \,\big\vert\, X^{N}_{k}=x}= x_i \cdot \frac{\exp\paren{\eta^{-1}\cdot\frac1{N-1}(A (Nx-e_i))_j}}{\sum\limits_{\ell \in \scrA}\exp\paren{\eta^{-1}\cdot\frac1{N-1}(A (Nx-e_i))_\ell}} .\;\; \ensuremath{\hspace{4pt}\Diamondblack} \] \end{example} \subsection{A class of population processes}\label{sec:Processes} It will be convenient to consider an equivalent class of Markov chains defined using a more parsimonious notation. All Markov chains to come are defined on a probability space $(\Omega, \mathscr{F}, \mathbb{P})$, and we sometimes use the notation $\mathbb{P}_{x}$ to indicate that the Markov chain $\mathbf{X}^N$ under consideration is run from initial condition $x\in\mathscr{X}^N$. The Markov chain $\mathbf{X}^N = \{X^{N}_{k}\}_{k=0}^\infty$ runs on the discrete grid $\mathscr{X}^N = X \cap \frac1N \mathbb{Z}^n$, with each period taking $\frac1N$ units of clock time, so that each agent expects to receive one revision opportunity per unit of clock time (cf.~Section \ref{sec:MDDA}). We define the law of $\mathbf{X}^N$ by setting an initial condition $X^N_0 \in \mathscr{X}^N$ and specifying subsequent states via the recursion \begin{equation}\label{eq:RecursiveDef} X^{N}_{k+1}=X^{N}_{k}+\tfrac{1}{N}\zeta^{N}_{k+1}.
\end{equation} The normalized increment $\zeta^{N}_{k+1}$ follows the conditional law $\nu^{N}(\,\cdot\,|X^{N}_{k})$, defined by \begin{equation}\label{eq:CondLaw} \nu^{N}(\mathscr{z}|x)= \begin{cases} x_{i}\, \sigma^N_{ij}(x) & \text{if}\;\mathscr{z}=e_{j}-e_{i}\text{ and }j\ne i,\\ \sum_{i=1}^{n}x_{i}\, \sigma^N_{ii}(x) & \text{if}\;\mathscr{z}=\mathbf{0},\\ 0 &\text{otherwise}, \end{cases} \end{equation} where the function $\sigma^N\colon \mathscr{X}^N \to \mathbb{R}^{n\times n}_+$ satisfies $\sum_{j \in \scrA}\sigma^N_{ij}(x) = 1$ for all $i \in \scrA$ and $x \in \mathscr{X}^N$. The \emph{switch probability} $\sigma^N_{ij}(x)$ is the probability that an agent playing action $i$ who receives a revision opportunity proceeds by playing action $j$. The model described in the previous sections is the case in which $\sigma^N_{ij}(x) =\rho^N_{ij}(F^N_{i\to\cdot}(x),x)$. We observe that the support of the transition measure $\nu^{N}(\,\cdot\,|x)$ is contained in the set of \emph{raw increments} $\mathscr{Z} = \{e_j - e_i\colon i, j \in \scrA\}$. Since an unused action cannot become less common, the support of $\nu^{N}(\,\cdot\,|x)$ is contained in $\mathscr{Z}(x) =\{\mathscr{z} \in \mathscr{Z} \colon x_i = 0 \Rightarrow \mathscr{z}_i \geq 0\}$. Our large deviations results concern the behavior of sequences $\{\mathbf{X}^N\}_{N=N_0}^\infty$ of Markov chains defined by \eqref{eq:RecursiveDef} and \eqref{eq:CondLaw}. To allow for finite population effects, we permit the switch probabilities $\sigma^N_{ij}(x)$ to depend on $N$ in a manner that becomes negligible as $N$ grows large. Specifically, we assume that there is a Lipschitz continuous function $\sigma \colon X \to \mathbb{R}^{n\times n}_+$ that describes the limiting switch probabilities, in the sense that \begin{equation}\label{eq:LimSPs} \lim_{N\to \infty}\max_{x \in \mathscr{X}^N}\max_{i,j\in\scrA}|\sigma^N_{ij}(x)-\sigma_{ij}(x)|=0.
\end{equation} In the game model, this assumption holds when the sequences of population games $F^N$ and revision protocols $\rho^N$ have at most vanishing finite population effects, in that they converge to a limiting population game $F \colon X \to \mathbb{R}^n$ and a limiting revision protocol $\rho \colon \mathbb{R}^n \times X \to X^n$, both of which are Lipschitz continuous. In addition, we assume that limiting switch probabilities are bounded away from zero: there is a $\varsigma >0$ such that \begin{equation}\label{eq:LimSPBound} \min_{x \in X}\min_{i,j \in \scrA}\sigma_{ij}(x) \geq \varsigma. \end{equation} This assumption is satisfied in the game model when the choice probabilities $\rho^N_{ij}(\pi, x)$ are bounded away from zero. \footnote{More specifically, the bound on choice probabilities must hold uniformly over the payoff vectors $\pi$ that may arise in the population games $F^N$.} This is so under all of the revision protocols from Section \ref{sec:RP}. Assumption \eqref{eq:LimSPBound} and the transition law \eqref{eq:CondLaw} imply that the Markov chain $\mathbf{X}^N$ is aperiodic and irreducible for $N$ large enough. Thus for such $N$, $\mathbf{X}^N$ admits a unique stationary distribution, $\mu^{N}$, which is both the limiting distribution of the Markov chain and its limiting empirical distribution along almost every sample path. Assumptions \eqref{eq:LimSPs} and \eqref{eq:LimSPBound} imply that the transition kernels \eqref{eq:CondLaw} of the Markov chains $\mathbf{X}^N$ approach a limiting kernel $\nu \colon X \to \Delta(\mathscr{Z})$, defined by \begin{equation}\label{eq:CondLawLimit} \nu(\mathscr{z}|x)= \begin{cases} x_{i}\, \sigma_{ij}(x) & \text{if}\;\mathscr{z}=e_{j}-e_{i}\text{ and }j\ne i,\\ \sum_{i\in\scrA}x_{i}\, \sigma_{ii}(x) & \text{if}\;\mathscr{z}=\mathbf{0},\\ 0 &\text{otherwise}.
\end{cases} \end{equation} Condition \eqref{eq:LimSPs} implies that the convergence of $\nu^N$ to $\nu$ is uniform: \begin{equation}\label{eq:LimTrans} \lim_{N\to \infty}\max_{x \in \mathscr{X}^N}\max_{\mathscr{z} \in \mathscr{Z}}\abs{\nu^N(\mathscr{z}|x)-\nu(\mathscr{z}|x)}=0. \end{equation} The probability measures $\nu(\,\cdot \,|x)$ depend Lipschitz continuously on $x$, and by virtue of condition \eqref{eq:LimSPBound}, each measure $\nu(\,\cdot \,|x)$ has support $\mathscr{Z}(x)$. \section{Sample Path Large Deviations}\label{sec:SPLD} \subsection{Deterministic approximation}\label{sec:MDDA} Before considering the large deviations properties of the processes $\mathbf{X}^N$, we describe their typical behavior. By definition, each period of the process $\mathbf{X}^N=\{X^{N}_{k}\}_{k=0}^\infty$ takes $\frac1N$ units of clock time, and leads to a random increment of size $\frac1N$. Thus when $N$ is large, each brief interval of clock time contains a large number of periods during which the transition measures $\nu^{N}(\,\cdot\,|X^{N}_{k})$ vary little. Intuition from the law of large numbers then suggests that over this interval, and hence over any finite concatenation of such intervals, the Markov chain $\mathbf{X}^N$ should follow an almost deterministic trajectory, namely the path determined by the process's expected motion. To make this statement precise, note that the expected increment of the process $\mathbf{X}^N$ from state $x$ during a single period is \begin{equation}\label{eq:ExpInc} \mathbb{E}(X^N_{k+1} - X^N_k | X^N_k = x) = \frac1N \mathbb{E}(\zeta^N_{k+1} | X^N_k = x)=\frac1N \sum_{\mathscr{z} \in \mathscr{Z}}\mathscr{z}\, \nu^N(\mathscr{z} | x) . \end{equation} Since there are $N$ periods per time unit, the expected increment per time unit is obtained by multiplying \eqref{eq:ExpInc} by $N$.
Doing so and taking the limit as $N \to \infty$ defines the \emph{mean dynamic}, \begin{subequations}\label{eq:MD} \begin{equation}\label{eq:MDZeta} \dot x = \sum_{\mathscr{z} \in \mathscr{Z}}\mathscr{z}\, \nu(\mathscr{z} | x)= \mathbb{E}\zeta_x, \end{equation} where $\zeta_x$ is a random variable with law $\nu(\cdot| x)$. Substituting in definition \eqref{eq:CondLawLimit} and simplifying yields the coordinate formula \begin{equation}\label{eq:MDCoor} \dot x_i = \sum_{j \in \scrA} x_j \,\sigma_{ji}(x)- x_i. \end{equation} \end{subequations} Assumption \eqref{eq:LimSPBound} implies that the boundary of the simplex is repelling under \eqref{eq:MD}. Since the right hand side of \eqref{eq:MD} is Lipschitz continuous, it admits a unique forward solution $\{x_t\}_{t\ge 0}$ from every initial condition $x_0 =x$ in $X$, and this solution does not leave $X$. A version of the deterministic approximation result to follow was first proved by \cite{Kur70}, with the exponential rate of convergence established by \cite{Ben98}; see also \cite{BenWei03,BenWei09}. To state the result, we let $|\cdot|$ denote the $\ell^1$ norm on $\mathbb{R}^n$, and we define $\hat\mathbf{X}^N=\{\hat {X}_t^N \}_{t\ge 0}$ to be the piecewise affine interpolation of the process $\mathbf{X}^N$: \[ \hat {X}_t^N =X_{\lfloor Nt \rfloor}^N + (Nt-\lfloor Nt \rfloor)(X_{\lfloor Nt \rfloor+1}^N -X_{\lfloor Nt \rfloor}^N). \] \begin{theorem}\label{thm:DetApprox} Suppose that $\{X^{N}_{k}\}$ has initial condition $x^N \in \mathscr{X}^N$, where $\lim_{N\to \infty}x^N = x \in X$. Let $\{x_t\}_{t \geq 0}$ be the solution to \eqref{eq:MD} with $x_{0}=x$. For any $T< \infty$ there exists a constant $c > 0$ independent of $x$ such that for all $\varepsilon > 0$ and $N$ large enough, \[ \mathbb{P}_{x^N}\!\left( {\sup\limits_{t\in [0,T]} \left| {\hat {X}_t^N -x_t } \right|\ge \varepsilon } \right)\le 2n\exp (-c\varepsilon ^2N).
\] \end{theorem} \subsection{The Cram\'{e}r transform and relative entropy}\label{sec:Cramer} Stating our large deviations results requires some additional machinery. \footnote{For further background on this material, see \cite{DemZei98} or \cite{DupEll97}.} Let $\mathbb{R}^n_{0}=\{z\in\mathbb{R}^n\colon\sum_{i\in \scrA}z_{i}=0\}$ denote the set of vectors tangent to the simplex. The \textit{Cram\'{e}r transform} $L(x, \cdot):\mathbb{R}^n_{0}\to[0,\infty]$ of probability distribution $\nu(\cdot|x)\in\Delta(\mathscr{Z})$ is defined by \begin{equation}\label{eq:CramerTr} L(x, z)=\sup \limits_{ u\in \mathbb{R}^n_0 }\left(\abrack{ u, z}-H(x, u)\right), \text{ where }\:H(x,u)= \log\paren{\sum_{\mathscr{z} \in \mathscr{Z}}\mathrm{e}^{\langle u,\mathscr{z} \rangle}\,\nu(\mathscr{z}|x)}. \end{equation} In words, $L(x, \cdot)$ is the convex conjugate of the log moment generating function of $\nu(\cdot|x)$. It is well known that $L(x, \cdot)$ is convex, lower semicontinuous, and nonnegative, and that $L(x, z) =0$ if and only if $z= \mathbb{E} \zeta_x$; moreover, $L(x, z) < \infty$ if and only if $z \in Z(x)$, where $Z(x) = \conv(\mathscr{Z}(x))$ is the convex hull of the support of $\nu(\,\cdot\,|x)$. To help interpret what is to come, we recall \emph{Cram\'{e}r's theorem}: Let $\{\zeta_x^k\}_{k=1}^\infty$ be a sequence of i.i.d.\ random variables with law $\nu(\,\cdot\,|x)$, and let $\bar \zeta_x^N $ be the sequence's $N$th sample mean. 
Then for any set $V\subseteq \mathbb{R}^n$, \begin{subequations} \begin{gather} \limsup \limits_{N\to \infty } \tfrac 1N \log \mathbb{P}(\bar \zeta_x^N \in V)\leq-\inf \limits_{ z\in \cl( V)} L (x, z),\;\text{ and}\label{eq:LDPIIDUpper}\\ \liminf \limits_{N\to \infty } \tfrac 1N \log \mathbb{P}(\bar \zeta_x^N \in V)\geq-\inf \limits_{ z\in \Int( V)} L (x, z).\label{eq:LDPIIDLower} \end{gather} \end{subequations} Thus for ``nice'' sets $V$, those for which the right-hand sides of the upper and lower bounds \eqref{eq:LDPIIDUpper} and \eqref{eq:LDPIIDLower} are equal, this common value is the exponential rate of decay of the probability that $\bar \zeta_x^N$ lies in $V$. Our analysis relies heavily on a well-known characterization of the Cram\'er transform as a constrained minimum of relative entropy, a characterization that also provides a clear intuition for Cram\'{e}r's theorem. Recall that the \emph{relative entropy} of probability measure $\lambda\in\Delta(\mathscr{Z})$ given probability measure $\pi\in\Delta(\mathscr{Z})$ is the extended real number \begin{equation*} R(\lambda||\pi)=\sum_{\mathscr{z}\in\mathscr{Z}}\lambda(\mathscr{z})\log\frac{\lambda(\mathscr{z})}{\pi(\mathscr{z})}, \end{equation*} with the conventions that $0\log 0 = 0\log\frac00=0$. It is well known that $R(\cdot||\cdot)$ is convex, lower semicontinuous, and nonnegative, that $R(\lambda||\pi)= 0$ if and only if $\lambda=\pi$, and that $R(\lambda||\pi)<\infty$ if and only if $\support(\lambda)\subseteq\support(\pi)$. A basic interpretation of relative entropy is provided by \emph{Sanov's theorem}, which concerns the asymptotics of the empirical distributions $\mathscr{E}^N_x$ of the sequence $\{\zeta_x^k\}_{k=1}^\infty$, defined by $\mathscr{E}^N_x(\mathscr{z}) = \frac1N\sum_{k=1}^N 1(\zeta_x^k =\mathscr{z})$.
This theorem says that for every set of distributions $\Lambda \subseteq \Delta(\mathscr{Z})$, \begin{subequations} \begin{gather} \limsup \limits_{N\to \infty } \tfrac 1N \log \mathbb{P}(\mathscr{E}^N_x \in \Lambda)\leq-\:\inf_{\lambda \in \Lambda}\;R(\lambda||\nu(\,\cdot\,|x)),\;\text{ and}\label{eq:SanovUpper}\\ \liminf \limits_{N\to \infty } \tfrac 1N \log \mathbb{P}(\mathscr{E}^N_x \in \Lambda)\geq-\inf_{\lambda \in \Int(\Lambda)} R(\lambda||\nu(\,\cdot\,|x)).\label{eq:SanovLower} \end{gather} \end{subequations} Thus for ``nice'' sets $\Lambda$, the probability that the empirical distribution lies in $\Lambda$ decays at an exponential rate given by the minimal value of relative entropy on $\Lambda$. The intuition behind Sanov's theorem and relative entropy is straightforward. We can express the probability that the $N$th empirical distribution is the feasible distribution $\lambda\in \Lambda$ as the product of the probability of obtaining a particular realization of $\{\zeta_x^k\}_{k=1}^\infty$ with empirical distribution $\lambda$ and the number of such realizations: \[ \mathbb{P}(\mathscr{E}^N_x = \lambda) = \prod_{\mathscr{z} \in \mathscr{Z}}\nu(\mathscr{z}|x)^{N\lambda(\mathscr{z})} \times \frac{N!}{\prod_{\mathscr{z} \in \mathscr{Z}}\,(N\lambda(\mathscr{z}))!}. \] Then applying Stirling's approximation $n! \approx n^n e^{-n}$ yields \[ \frac 1N \log \mathbb{P}(\mathscr{E}^N_x = \lambda) \approx \sum_{\mathscr{z} \in \mathscr{Z}}\lambda(\mathscr{z}) \log \nu(\mathscr{z}|x) - \sum_{\mathscr{z} \in \mathscr{Z}}\lambda(\mathscr{z}) \log \lambda(\mathscr{z}) = -R(\lambda||\nu(\,\cdot\,|x)). \] The rate of decay of $\mathbb{P}(\mathscr{E}^N_x \in \Lambda)$ is then determined by the ``most likely'' empirical distribution in $\Lambda$: that is, by the one whose relative entropy is smallest. 
\footnote{Since the number of empirical distributions for sample size $N$ grows polynomially (it is less than $(N+1)^{|\mathscr{Z}|}$), the rate of decay cannot be determined by a large set of distributions in $\Lambda$ with higher relative entropies.} The representation of the Cram\'er transform in terms of relative entropy is obtained by a variation on the final step above: given Sanov's theorem, the rate of decay of the probability of obtaining a sample mean $\bar \zeta_x^N $ in $V \subset \mathbb{R}^n$ should be determined by the smallest relative entropy associated with a probability distribution whose mean lies in $V$. \footnote{This general idea---the preservation of large deviation principles under continuous functions---is known as the \emph{contraction principle}. See \cite{DemZei98}.} Combining this idea with \eqref{eq:LDPIIDUpper} and \eqref{eq:LDPIIDLower} suggests the representation \footnote{See \cite{DemZei98}, Section 2.1.2 or \cite{DupEll97}, Lemma 6.2.3(f). } \begin{equation}\label{eq:CramerRep} L(x, z)=\min_{\lambda\in\Delta(\mathscr{Z})}\left\{R(\lambda||\nu(\cdot|x))\colon \sum_{\mathscr{z}\in\mathscr{Z}}\mathscr{z}\lambda(\mathscr{z})=z\right\}. \end{equation} If $z \in Z(x)$, so that $L(x,z) < \infty$, then the minimum in \eqref{eq:CramerRep} is attained uniquely. \subsection{Path costs} To state the large deviation principle for the sequence of interpolated processes $\{\hat\mathbf{X}^{N}\}_{N=N_0}^\infty$, we must introduce a function that characterizes the rates of decay of the probabilities of sets of sample paths through the simplex $X$. Doing so requires some preliminary definitions. For $T \in (0, \infty)$, let $\mathscr{C}[0, T]$ denote the set of continuous paths $\phi\colon [0,T]\to X$ through $X$ over time interval $[0, T]$, endowed with the supremum norm. Let $\mathscr{C}_x[0, T]$ denote the set of such paths with initial condition $\phi_{0} = x$, and let $\scrA\scrC_{x}[0, T]$ be the set of absolutely continuous paths in $\mathscr{C}_x[0, T]$.
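The representation \eqref{eq:CramerRep} of the Cram\'er transform can be checked numerically in the simplest nontrivial case. In the sketch below (a Python illustration under assumed values: support $\mathscr{Z}=\{-1,+1\}$ and an arbitrary distribution $\nu$, neither taken from the model), the constraint set contains a single distribution, so the entropy minimum can be computed directly and compared with the Legendre transform $\sup_u\,(uz - H(u))$ of the log moment generating function, which defines the Cram\'er transform.

```python
import math

p = 0.3       # nu(+1) = p, nu(-1) = 1 - p (illustrative value)
z = 0.4       # target mean in (-1, 1) (illustrative value)

# Entropy side of (CramerRep): on {-1, +1} the unique distribution
# with mean z has lambda(+1) = (1 + z) / 2.
lam_plus = (1 + z) / 2
lam_minus = 1 - lam_plus
entropy_value = (lam_plus * math.log(lam_plus / p)
                 + lam_minus * math.log(lam_minus / (1 - p)))

# Legendre-transform side: sup_u [u z - log E e^{u zeta}], via a fine grid.
def H(u):
    return math.log(p * math.exp(u) + (1 - p) * math.exp(-u))

legendre_value = max(u * z - H(u) for u in (i / 1000 for i in range(-5000, 5001)))

assert abs(entropy_value - legendre_value) < 1e-4
```

The two computations agree to the resolution of the grid, as the duality between relative entropy and the log moment generating function requires.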
We define the \emph{path cost function} (or \emph{rate function}) $c_{x,T}\colon \mathscr{C}[0, T]\to [0, \infty ]$ by \begin{equation}\label{eq:PathCost} c_{x,T} (\phi )= \begin{cases} \int_{0}^{T} L (\phi _t ,\dot {\phi }_t )\,\mathrm{d} t & \text{if }\phi\in\scrA\scrC_{x}[0,T],\\ \infty & \text{otherwise}. \end{cases} \end{equation} By Cram\'er's theorem, $L (\phi _t ,\dot {\phi }_t )$ describes the ``difficulty'' of proceeding from state $\phi_t$ in direction $\dot {\phi }_t$ under the transition laws of our Markov chains. Thus the path cost $c_{x,T} (\phi )$ represents the ``difficulty'' of following the entire path $\phi$. Since $L(x, z) = 0$ if and only if $z = \mathbb{E} \zeta_x$, a path $\phi\in \mathscr{C}_x[0, T]$ satisfies $c_{x,T}(\phi) = 0$ if and only if it is the solution to the mean dynamic \eqref{eq:MD} from state $x$. In light of definition \eqref{eq:PathCost}, we sometimes refer to the function $L\colon X \times \mathbb{R}^n \to [0, \infty]$ as the \emph{running cost function}. As illustrated by Cram\'er's theorem, the rates of decay described by large deviation principles are defined in terms of the smallest value of a function over the set of outcomes in question. This makes it important for such functions to satisfy lower semicontinuity properties. The following result, which follows from Proposition 6.2.4 of \cite{DupEll97}, provides such a property. \begin{proposition}\label{prop:GoodRF} The function $c_{x,T}$ is a $($\emph{good}$)$ \emph{rate function}: its lower level sets $\{\phi\in\mathscr{C}[0,T]\colon c_{x,T}(\phi)\leq M\}$ are compact. \end{proposition} \subsection{A sample path large deviation principle}\label{sec:LD} Our main result, Theorem \ref{thm:SPLDP}, shows that the sample paths of the interpolated processes $\hat\mathbf{X}^N$ satisfy a large deviation principle with rate function \eqref{eq:PathCost}. To state this result, we use the notation $\hat\mathbf{X}^{N}_{[0,T]}$ as shorthand for $\{\hat {X}_t^{N} \}_{t\in [0,T]}$.
\begin{theorem} \label{thm:SPLDP}\text{} Suppose that the processes $\{\hat\mathbf{X}^{N}\}_{N=N_0}^\infty$ have initial conditions $x^N \in \mathscr{X}^N$ satisfying $\lim\limits_{N\to\infty}x^N = x \in X$. Let $\Phi \subseteq \mathscr{C}[0, T]$ be a Borel set. Then \begin{subequations} \begin{gather}\label{eq:SPLDUpper} \limsup \limits_{N\to \infty } \frac 1N \log \mathbb{P}_{x^N}\!\paren{\hat\mathbf{X}^{N}_{[0,T]}\in \Phi } \le -\inf \limits_{\phi \in \cl(\Phi)} c_{x,T} (\phi ),\,\text{ and}\\ \label{eq:SPLDLower} \liminf\limits_{N\to \infty } \frac 1N \log \mathbb{P}_{x^N}\!\paren{\hat\mathbf{X}^{N}_{[0,T]}\in \Phi } \ge -\inf \limits_{\phi \in \Int(\Phi)} c_{x,T} (\phi ). \end{gather} \end{subequations} \end{theorem} We refer to inequality \eqref{eq:SPLDUpper} as the \emph{large deviation principle upper bound}, and to \eqref{eq:SPLDLower} as the \emph{large deviation principle lower bound}. While Cram\'{e}r's theorem concerns the probability that the sample mean of $N$ i.i.d. random variables lies in a given subset of $\mathbb{R}^n$ as $N$ grows large, Theorem \ref{thm:SPLDP} concerns the probability that the sample path of the process $\hat\mathbf{X}^{N}_{[0,T]}$ lies in a given subset of $\mathscr{C}[0, T]$ as $N$ grows large. If $\Phi \subseteq \mathscr{C}[0, T]$ is a set of paths for which the infima in \eqref{eq:SPLDUpper} and \eqref{eq:SPLDLower} are equal and attained at some path $\phi^{\mathlarger *}$, then Theorem \ref{thm:SPLDP} shows that the probability that the sample path of $\hat\mathbf{X}^{N}_{[0,T]}$ lies in $\Phi$ is of order $\exp(-Nc_{x,T}(\phi^{\mathlarger *}))$. \subsection{Uniform results} Applications to Freidlin-Wentzell theory require uniform versions of the previous two results, allowing for initial conditions $x^N \in \mathscr{X}^N$ that take values in compact subsets of $X$. We therefore note the following extensions of Proposition \ref{prop:GoodRF} and Theorem \ref{thm:SPLDP}. 
\begin{proposition}\label{prop:UniformGoodRF} For any compact set $K\subseteq X$ and any $M<\infty$, the sets $\bigcup_{x \in K}\{\phi\in\mathscr{C}[0,T]\colon c_{x,T}(\phi)\leq M\}$ are compact. \end{proposition} \begin{theorem} \label{thm:USPLDP} Let $\Phi \subseteq \mathscr{C}[0, T]$ be a Borel set. For every compact set $K\subseteq X$, \begin{subequations} \begin{gather}\label{eq:uniformSPLDUpper} \limsup \limits_{N\to \infty } \frac 1N\log\paren{\sup_{x^N\in K \cap \mathscr{X}^N}\mathbb{P}_{x^N}\!\paren{\hat\mathbf{X}^{N}_{[0,T]}\in \Phi }}\le -\inf_{x\in K}\inf \limits_{\phi \in \cl(\Phi)} c_{x,T} (\phi ),\,\text{ and}\\ \label{eq:uniformSPLDLower} \liminf\limits_{N\to \infty } \frac 1N \log\paren{\inf_{x^N\in K\cap \mathscr{X}^N}\mathbb{P}_{x^N}\!\paren{\hat\mathbf{X}^{N}_{[0,T]}\in \Phi } }\ge -\sup_{x\in K}\inf \limits_{\phi \in \Int(\Phi)} c_{x,T} (\phi ). \end{gather} \end{subequations} \end{theorem} The proof of Proposition \ref{prop:UniformGoodRF} is an easy extension of that of Proposition 6.2.4 of \cite{DupEll97}; compare p.~165 of \cite{DupEll97}, where the property being established is called \emph{compact level sets uniformly on compacts}. Theorem \ref{thm:USPLDP}, the \emph{uniform large deviation principle}, follows from Theorem \ref{thm:SPLDP} and an elementary compactness argument; compare the proof of Corollary 5.6.15 of \cite{DemZei98}. \section{Applications}\label{sec:App} To be used in most applications, the results above must be combined with ideas from Freidlin-Wentzell theory. In this section, we use the large deviation principle to study the frequency of excursions from a globally stable rest point and the asymptotics of the stationary distribution, with a full analysis of the case of logit choice in potential games. We then remark on future applications to games with multiple stable equilibria, and the wider scope for analytical solutions that may arise by introducing a second limit. 
\subsection{Excursions from a globally attracting state and stationary distributions}\label{sec:Exit} In this section, we describe results on excursions from stable rest points that can be obtained by combining the results above with the work of \cite{FreWen98} and refinements due to \cite{DemZei98}, which consider this question in the context of diffusion processes with a vanishingly small noise parameter. To prove the results in our setting, one must adapt arguments for diffusions to sequences of processes running on increasingly fine finite state spaces. As an illustration of the difficulties involved, observe that while a diffusion process is a Markov process with continuous sample paths, our original process $\mathbf{X}^{N}$ is Markov but with discrete sample paths, while our interpolated process $\hat\mathbf{X}^{N}$ has continuous sample paths but is not Markov. \footnote{When the interpolated process $\hat\mathbf{X}^{N}$ is halfway between adjacent states $x$ and $y$ at time $\frac{k+1/2}N$, its position at time $\frac{k}N$ determines its position at time $\frac{k+1}N$.} Since neither process possesses both desirable properties, a complete analysis of the problem is quite laborious. We therefore only sketch the main arguments here, and will present a detailed analysis in future work. Consider a sequence of processes $\mathbf{X}^{N}$ satisfying the assumptions above and whose mean dynamic \eqref{eq:MD} has a globally stable rest point $x^{\mathlarger *}$. We would like to estimate the time until the process exits a given open set $O\subset X$ containing $x^{\mathlarger *}$. By the large deviations logic described in Section \ref{sec:Cramer}, we expect this time to be determined by the cost of the least cost path that starts at $x^{\mathlarger *}$ and leaves $O$.
With this in mind, let $\partial O$ denote the boundary of $O$ relative to $X$, and define \begin{gather} C_y = \inf_{T>0}\:\inf_{\phi\in\mathscr{C}_{x^{\mathlarger *}}[0,T]:\:\phi_T=y}c_{x^{\mathlarger *}\!,T}(\phi)\;\text{ for } y \in X,\label{eq:yCost}\\ C_{\partial O}=\inf_{y \in \partial O} C_y.\label{eq:ExitCost} \end{gather} Thus $C_y$ is the lowest cost of a path from $x^{\mathlarger *}$ to $y$, and the \emph{exit cost} $C_{\partial O}$ is the lowest cost of a path that leaves $O$. Now define $\hat\tau^{N}_{\partial O}=\inf\{t\geq 0\colon\hat{X}^{N}_{t}\in \partial O\}$ to be the random time at which the interpolated process $\hat\mathbf{X}^{N}$ hits the boundary $\partial O$ of $O$. If this boundary satisfies a mild regularity condition, \footnote{The condition requires that for all $\delta >0$ small enough, there is a nonempty closed set $K_\delta \subset X$ disjoint from $\cl(O)$ such that for all $x \in \partial O$, there exists a $y \in K_\delta$ satisfying $|x-y|=\delta$. } we can show that for all $\varepsilon>0$ and all sequences of $x^N \in \mathscr{X}^N$ converging to some $x \in O$, we have \begin{gather} \lim_{N\to\infty}\mathbb{P}_{x^N}\!\paren{C_{\partial O}-\varepsilon < \tfrac1N\log\hat\tau^{N}_{\partial O} <C_{\partial O}+\varepsilon}=1 \:\text{ and}\label{eq:ETBound}\\ \lim_{N\to\infty}\tfrac{1}{N}\log\mathbb{E}_{x^N}\hat\tau^{N}_{\partial O}=C_{\partial O}.\label{eq:ETExBound} \end{gather} That is, the time until exit from $O$ is of approximate order $\exp(NC_{\partial O})$ with probability near 1, and the expected time until exit from $O$ is of this order as well. Since stationary distribution weights are inversely proportional to expected return times, equation \eqref{eq:ETExBound} can be used to show that the rates of decay of stationary distribution weights are also determined by minimal costs of paths.
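In the two-action case, this connection between stationary distribution weights and path costs can be checked by direct computation, since the process is then a birth-death chain whose stationary distribution is available in closed form via detailed balance. The sketch below is a numerical illustration that anticipates the logit/potential model of the next subsection; the payoffs, noise level, and population size are arbitrary assumptions, not the paper's examples.

```python
import math

# Two-action model: state x = k/N is the fraction playing action 1.
eta = 0.25                                   # logit noise level (assumed)

def M1(x):
    """Logit choice probability of action 1 at state x."""
    a = math.exp((1.0 - 2.0 * x) / eta)      # payoff F1(x) = 1 - 2x (assumed)
    b = math.exp((x - 0.5) / eta)            # payoff F2(x) = x - 1/2 (assumed)
    return a / (a + b)

N = 500

def up(k):                                   # an action-2 player revises to 1
    return (1.0 - k / N) * M1(k / N)

def down(k):                                 # an action-1 player revises to 2
    return (k / N) * (1.0 - M1(k / N))

# Exact stationary log-weights from detailed balance:
# mu(k+1) / mu(k) = up(k) / down(k+1).
logmu = [0.0]
for k in range(N):
    logmu.append(logmu[-1] + math.log(up(k)) - math.log(down(k + 1)))

def f_eta(x):
    """Logit potential f^eta = f/eta - h, where f'(x) = F1(x) - F2(x)
    and h(x) = x log x + (1-x) log(1-x)."""
    pot = 1.5 * x - 1.5 * x * x
    ent = -x * math.log(x) - (1.0 - x) * math.log(1.0 - x)
    return pot / eta + ent

# Decay rate of the stationary weight at y = 0.3 relative to the rest
# point x* = 1/2, compared with the potential difference.
lhs = (logmu[150] - logmu[N // 2]) / N       # y = 150 / N = 0.3
rhs = f_eta(0.3) - f_eta(0.5)
assert abs(lhs - rhs) < 0.05
```

The agreement of the two quantities is an exact finite-$N$ analogue of the decay rates in \eqref{eq:LDSD} for this one-dimensional special case.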
If we let $B_\delta(y) = \{x \in X \colon |y-x|< \delta\}$, then for all $y \in X$ and $\varepsilon>0$, there is a $\delta$ sufficiently small that \begin{equation}\label{eq:LDSD} -C_y - \varepsilon \leq \tfrac{1}{N}\log \mu^{N}(B_\delta(y))\leq -C_y + \varepsilon \end{equation} for all large enough $N$. The main ideas of the proofs of \eqref{eq:ETBound} and \eqref{eq:ETExBound} are as follows. To prove the upper bounds, we use the LDP lower bound to show that there is a finite duration $T$ such that the probability of reaching $\partial O$ in $T$ time units starting from any state in $O$ is at least $q^N_T=\exp(-N(C_{\partial O}+\varepsilon))$. It then follows from the strong Markov property that the probability of failing to reach $\partial O$ within $kT$ time units is at most $(1-q^N_T)^k$. Put differently, if we define the random variable $R^N_T$ to equal $k$ if $\partial O$ is reached between times $(k-1)T$ and $kT$, then the distribution of $R^N_T$ is stochastically dominated by the geometric distribution with parameter $q^N_T$. It follows that the expected time until $\partial O$ is reached is at most $T\cdot\mathbb{E} R^N_T \leq T/q^N_T = T\exp(N(C_{\partial O}+\varepsilon))$, yielding the upper bound in \eqref{eq:ETExBound}. The upper bound in \eqref{eq:ETBound} then follows from Chebyshev's inequality. To prove the lower bounds in \eqref{eq:ETBound} and \eqref{eq:ETExBound}, we again view the process $\hat\mathbf{X}^N$ as making a series of attempts to reach $\partial O$. Each attempt requires at least $\delta>0$ units of clock time, and the LDP upper bound implies that for $N$ large enough, an attempt succeeds with probability less than $\exp(-N(C_{\partial O} - \frac\eps2))$. Thus to reach $\partial O$ within $k\delta$ time units, one of the first $k$ attempts must succeed, and this has probability less than $k \exp(-N(C_{\partial O} - \frac\eps2))$.
Choosing $k \approx \delta^{-1}\exp(N(C_{\partial O} -\varepsilon))$, we conclude that the probability of exiting $O$ in $\exp(N(C_{\partial O}-\varepsilon))$ time units is less than $k \exp(-N(C_{\partial O} - \frac\eps2)) \approx \delta^{-1}\exp(-N\varepsilon/2)$. This quantity vanishes as $N$ grows large, yielding the lower bound in \eqref{eq:ETBound}; then Chebyshev's inequality gives the lower bound in \eqref{eq:ETExBound}. \subsection{Logit evolution in potential games} We now apply the results above in a context for which the exit costs $C_{\partial O}$ can be computed explicitly: that of evolution in potential games under the logit choice rule. Consider a sequence of stochastic evolutionary processes $\{\mathbf{X}^{N}\}_{N=N_0}^\infty$ derived from population games $F^N$ and revision protocols $\rho^N$ that converge uniformly to Lipschitz continuous limits $F$ and $\rho$ (see Section \ref{sec:Processes}), where $\rho$ is the logit protocol with noise level $\eta >0$ (Example \ref{ex:Logit}). Theorem \ref{thm:DetApprox} implies that when $N$ is large, the process $\hat\mathbf{X}^{N}$ is well-approximated over fixed time spans by solutions to the mean dynamic \eqref{eq:MD}, which in the present case is the \emph{logit dynamic} \begin{equation}\label{eq:LogitDyn} \dot x = M^\eta(F(x)) - x,\;\text{ where }\;M^\eta_j(\pi) = \frac{\exp(\eta^{-1}\pi_j)}{\sum_{k \in \scrA}\exp(\eta^{-1}\pi_k)}. \end{equation} Now suppose in addition that the limiting population game $F$ is a \emph{potential game} (\cite{San01}), meaning that there is a function $f \colon \mathbb{R}^n_+ \to \mathbb{R}$ such that $\nabla f(x) = F(x)$ for all $x \in X$. \footnote{The analysis to follow only requires the limiting game $F$ to be a potential game.
In particular, there is no need for the convergent sequence $\{F^N\}$ of finite-population games to consist of potential games (as defined in \cite{MonSha96}), or to assume that any of the processes $\hat\mathbf{X}^N$ are reversible (cf.~\cite{Blu97}).} In this case, \cite{HofSan07} establish the following global convergence result. \begin{proposition}\label{prop:LogPotGC} If $F$ is a potential game, then the \emph{logit potential function} \begin{equation}\label{eq:LogitPotential} f^\eta(x) = \eta^{-1}f(x) - h(x), \quad h(x)=\sum\nolimits_{i \in \scrA}x_i \log x_i, \end{equation} is a strict global Lyapunov function for the logit dynamic \eqref{eq:LogitDyn}. Thus solutions of \eqref{eq:LogitDyn} from every initial condition converge to connected sets of rest points of \eqref{eq:LogitDyn}. \end{proposition} \noindent We provide a concise proof of this result in Section \ref{sec:LogPotProofs}. Together, Theorem \ref{thm:DetApprox} and Proposition \ref{prop:LogPotGC} imply that for large $N$, the typical behavior of the process $\hat\mathbf{X}^{N}$ is to follow a solution of the logit dynamic \eqref{eq:LogitDyn}, ascending the function $f^\eta$ and approaching a rest point of \eqref{eq:LogitDyn}. We now use the large deviation principle to describe the excursions of the process $\hat\mathbf{X}^N$ from stable rest points, focusing as before on cases where the mean dynamic \eqref{eq:LogitDyn} has a globally attracting rest point $x^{\mathlarger *}$, which by Proposition \ref{prop:LogPotGC} is the unique local maximizer of $f^\eta$ on $X$.
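Proposition \ref{prop:LogPotGC} can be visualized in a few lines of code. The sketch below (a minimal Python illustration with an arbitrary two-action potential game and noise level; the Euler step is a crude discretization for illustration, not the paper's stochastic process) iterates the logit dynamic \eqref{eq:LogitDyn} and confirms that it ascends $f^\eta$ and settles at a rest point satisfying $x = M^\eta(F(x))$.

```python
import math

# Assumed two-action game: F1(x) = 1 - 2x, F2(x) = x - 1/2, where x is the
# fraction playing action 1, so the potential satisfies f'(x) = F1(x) - F2(x).
eta = 0.25                       # logit noise level (assumed)

def M1(x):
    """Logit choice rule M^eta for action 1."""
    a = math.exp((1.0 - 2.0 * x) / eta)
    b = math.exp((x - 0.5) / eta)
    return a / (a + b)

def f_eta(x):
    """Logit potential f^eta = f/eta - h, h(x) = x log x + (1-x) log(1-x)."""
    pot = 1.5 * x - 1.5 * x * x
    ent = -x * math.log(x) - (1.0 - x) * math.log(1.0 - x)
    return pot / eta + ent

# Euler discretization of the logit dynamic  xdot = M1(x) - x.
x, h = 0.9, 0.05
start_value = f_eta(x)
for _ in range(5000):
    x += h * (M1(x) - x)

assert f_eta(x) > start_value    # the dynamic has ascended f^eta
assert abs(x - M1(x)) < 1e-6     # x has settled at a rest point x = M1(x)
```

In this one-dimensional example the trajectory decreases monotonically from the initial condition to the unique rest point, mirroring the global convergence asserted by the proposition.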
According to the results from the previous section, the time required for the process to exit an open set $O$ containing $x^{\mathlarger *}$ is characterized by the exit cost \eqref{eq:ExitCost}, which is the infimum over paths from $x^{\mathlarger *}$ to some $y \notin O$ of the path cost \begin{equation}\label{eq:PathCost2} c_{x^{\mathlarger *}\!,T} (\phi )=\int_{0}^{T} \!L (\phi _t ,\dot {\phi }_t )\,\mathrm{d} t = \int_{0}^{T} \!\sup_{u_t \in \mathbb{R}^n_0}\paren{u_t^\prime \dot\phi_t - H (\phi _t ,u_t)}\mathrm{d} t, \end{equation} where the expression of the path cost in terms of the log moment generating functions $H(x, \cdot)$ follows from definition \eqref{eq:CramerTr} of the Cram\'er transform. In the logit/potential model, we are able to use properties of $H$ to compute the minimal costs \eqref{eq:yCost} and \eqref{eq:ExitCost} exactly. \begin{proposition}\label{prop:LPLD} In the logit/potential model, when \eqref{eq:LogitDyn} has a globally attracting rest point $x^{\mathlarger *}$, we have $C_y =f^\eta(x^{\mathlarger *})-f^\eta(y) $, and so $C_{\partial O} = \min\limits_{y \in \partial O}\,(f^\eta(x^{\mathlarger *})-f^\eta(y))$. \end{proposition} In words, the minimal cost of a path from $x^{\mathlarger *}$ to state $y \ne x^{\mathlarger *}$ is equal to the decrease in the logit potential. Combined with equations \eqref{eq:ETBound}--\eqref{eq:LDSD}, Proposition \ref{prop:LPLD} implies that the waiting time $\hat\tau^{N}_{\partial O}$ to escape the set $O$ is described by the smallest decrease in $f^\eta$ required to reach the boundary of $O$, and that $f^\eta$ also governs the rates of decay in the stationary distribution weights $\mu^N(B_\delta(y))$. \textit{Proof.
} We prove the proposition using tools from the calculus of variations (cf.~\pgcite{FreWen98}{Section 5.4}) and these two basic facts about the function $H$ in the logit/potential model, which we prove in Section \ref{sec:LogPotProofs}: \begin{lemma}\label{lem:NewHLemma} Suppose $x \in \Int(X)$. Then in the logit/potential model, \begin{gather} H(x,-\nabla f^\eta (x))= 0\;\text{ and}\label{eq:HJ}\\ \nabla _u H(x,-\nabla f^\eta(x))= -(M^\eta(F(x)) - x).\label{eq:HFOC} \end{gather} \end{lemma} Equation \eqref{eq:HJ} is the Hamilton-Jacobi equation associated with the path cost minimization problem \eqref{eq:yCost}, and shows that changes in potential provide a lower bound on the cost of reaching any state $y$ from state $x^{\mathlarger *}$ along an interior path $\phi\in\mathscr{C}_{x^{\mathlarger *}}[0,T]$ with $\phi_T=y$: \begin{equation}\label{eq:CostBound} c_{x^{\mathlarger *}\!,T} (\phi ) = \int_{0}^{T} \!\sup_{u_t \in \mathbb{R}^n_0}\paren{u_t^\prime \dot\phi_t - H (\phi _t ,u_t)}\mathrm{d} t \geq \int_{0}^{T}\!\!\!-\nabla f^\eta(\phi_t)^\prime \dot\phi_t\,\mathrm{d} t = f^\eta(x^{\mathlarger *}) - f^\eta(y) . \end{equation} In Section \ref{sec:LogPotProofs}, we prove a generalization of \eqref{eq:HJ} to boundary states which lets us extend inequality \eqref{eq:CostBound} to paths with boundary segments---see equation \eqref{eq:CostBoundBd}. These inequalities give us the lower bound \begin{equation}\label{eq:cstaryLB} C_y \geq f^\eta(x^{\mathlarger *})-f^\eta(y) . \end{equation} Equation \eqref{eq:HFOC} is the first-order condition for the first integrand in \eqref{eq:CostBound} for paths that are reverse-time solutions to the logit dynamic \eqref{eq:LogitDyn}.
Thus if $\psi\colon (-\infty, 0] \to X$ satisfies $\psi_0 = y$ and $\dot \psi_t = -(M^\eta(F(\psi_t)) - \psi_t)$ for all $ t\leq 0$, then Proposition \ref{prop:LogPotGC} and the assumption that $x^{\mathlarger *}$ is globally attracting imply that $\lim_{t\to-\infty}\psi_t = x^{\mathlarger *}$, which with \eqref{eq:HJ} and \eqref{eq:HFOC} yields \begin{equation}\label{eq:CostAttained} \int_{-\infty}^{0} \;\sup_{u_t \in \mathbb{R}^n_0}\paren{u_t^\prime \dot\psi_t - H (\psi _t ,u_t)}\mathrm{d} t = \int_{-\infty}^{0}\!\!\!-\nabla f^\eta(\psi_t)^\prime \dot\psi_t\,\mathrm{d} t = f^\eta(x^{\mathlarger *}) - f^\eta(y) . \end{equation} This equation and a continuity argument imply that lower bound \eqref{eq:cstaryLB} is tight. \hspace{4pt}\ensuremath{\blacksquare} Congestion games are the most prominent example of potential games appearing in applications, and the logit protocol is a standard model of decision making in this context (\cite{BenLer85}). We now illustrate how the results above can be used to describe excursions of the process $\mathbf{X}^N$ from the stationary state of the logit dynamic and the stationary distribution $\mu^N$ of the process. We consider a network with three parallel links in order to simplify the exposition, as our analysis can be conducted just as readily in any congestion game with increasing delay functions. \begin{example} Consider a network consisting of three parallel links with delay functions $\ell_1(u) = 1 + 8u$, $\ell_2(u) = 2+4u$, and $\ell_3(u) = 4$. The links are numbered in increasing order of congestion-free travel times (lower-numbered links are shorter in distance), but in decreasing order of congestion-based delays (higher-numbered links have greater capacity). The corresponding continuous-population game has payoff functions $F_i(x) = -\ell_i(x_i)$ and concave potential function \[ f(x) = -\sum_{i\in\scrA}\int_0^{x_i}\ell_i(u)\,\mathrm{d} u = -\!\paren{x_1 + 4 (x_1)^2 + 2x_2 + 2(x_2)^2 +4x_3 }. 
\] The unique Nash equilibrium of this game, $x^{\mathlarger *} = (\frac38, \frac12, \frac18)$, is the state at which travel times on each link are equal ($\ell_1(x^{\mathlarger *}_1)=\ell_2(x^{\mathlarger *}_2)=\ell_3(x^{\mathlarger *}_3)=4$), and it is the maximizer of $f$ on $X$. Suppose that a large but finite population of agents repeatedly play this game, occasionally revising their strategies by applying the logit rule $M^\eta$ with noise level $\eta$. Then in the short term, aggregate behavior evolves according to the logit dynamic \eqref{eq:LogitDyn}, ascending the logit potential function $f^\eta = \eta^{-1} f - h$ until closely approaching its global maximizer $x^\eta$. Thereafter, \eqref{eq:ETBound} and \eqref{eq:ETExBound} imply that excursions from $x^\eta$ to other states $y$ require waiting times of order $\exp(N(f^\eta(x^\eta)-f^\eta(y)))$. The values of $N$ and $f^\eta(y)$ also describe the proportions of time spent at each state: by virtue of \eqref{eq:LDSD}, $\mu^N(B_\delta(y)) \approx \exp(-N(f^\eta(x^\eta)-f^\eta(y)))$. Figure \ref{fig:Congestion} presents solutions of the logit dynamic \eqref{eq:LogitDyn} and level sets of the logit potential function $f^\eta$ in the congestion game above for noise levels $\eta = .25$ (panel (i)) and $\eta = .1$ (panel (ii)). In both cases, all solutions of \eqref{eq:LogitDyn} ascend the logit potential function and converge to its unique maximizer, $x^{(.25)} \approx (.3563, .4482, .1956)$ in (i), and $x^{(.1)} \approx (.3648, .4732, .1620)$ in (ii). The latter rest point is closer to the Nash equilibrium on account of the smaller amount of noise in agents' decisions. \begin{figure} \caption{Solution trajectories of the logit dynamics and level sets of $f^\eta$ in a congestion game. In both panels, lighter shades represent higher values of $f^\eta$, and increments between level sets are $.5$ units.
For any point $y$ on a solution trajectory, the most likely excursion path from the rest point to a neighborhood of $y$ follows the trajectory backward from the rest point. The values of $f^\eta$ also describe the rates of decay of mass in the stationary distribution.} \label{fig:Congestion} \end{figure} In each panel, the ``major axes'' of the level sets of $f^\eta$ correspond to exchanges of agents playing strategy 3 for agents playing strategies 2 and 1 in fixed shares, with a slightly larger share for strategy 2. That deviations of this sort are the most likely is explained by the lower sensitivity of delays on higher numbered links to fluctuations in usage. In both panels, the increments between the displayed level sets of $f^\eta$ are $.5$ units. Many more level sets are drawn in panel (ii) than in panel (i): \footnote{In panel (i), the size of the range of $f^\eta$ is $f^{(.25)}(x^{(.25)}) - f^{(.25)}(e_1)\approx -10.73 - (-20) = 9.27$, while in panel (ii) it is $f^{(.1)}(x^{(.1)}) - f^{(.1)}(e_1)\approx -28.38 - (-50) = 21.62$.} when there is less noise in agents' decisions, excursions from equilibrium of a given unlikelihood are generally smaller, and excursions of a given size and direction are less common. \ensuremath{\hspace{4pt}\Diamondblack} \end{example} \subsection{Discussion}\label{sec:Disc} The analyses above rely on the assumption that the mean dynamic \eqref{eq:MD} admits a globally stable state. If instead this dynamic has multiple attractors, then the time $\hat\tau^N_{\partial O}$ to exit $O$ starting from a stable rest point $x^{\mathlarger *} \in O$ need only satisfy properties \eqref{eq:ETBound} and \eqref{eq:ETExBound} when the set $O$ is contained in the basin of attraction of $x^{\mathlarger *}$. Beyond this case, the most likely amount of time required to escape $O$ may disagree with the expected amount of time to do so, since the latter may be driven by a small probability of becoming stuck near another attractor in $O$. 
Likewise, when the global structure of \eqref{eq:MD} is nontrivial, the asymptotics of the stationary distribution are more complicated, being driven by the relative likelihoods of transitions between the different attractors. To study these questions in our context, one must not only address the complications noted in Section \ref{sec:Exit}, but must also employ the graph-theoretic arguments developed by \pgcite{FreWen98}{Chapter 6} to capture the structure of transitions among the attractors. Because the limiting stationary distribution is the basis for the approach to equilibrium selection discussed in the introduction, carrying out this analysis is an important task for future work. We have shown that the control problems appearing in the statement of the large deviation principle can be solved explicitly in the case of logit choice in potential games. They can also be solved in the context of two-action games, in which the state space $X$ is one-dimensional. Beyond these two cases, the control problems do not appear to admit analytical solutions. To contend with this, and to facilitate comparisons with other analyses in the literature, one can consider the \emph{large population double limit}, studying the behavior of the large population limit as the noise level in agents' decisions is taken to zero. There are strong reasons to expect this double limit to be analytically tractable. In \cite{SanSta16}, we study the reverse order of limits, under which the noise level $\eta$ is first taken to zero, and then the population size $N$ to infinity. For this order of limits, we show that large deviations properties are determined by the solutions to piecewise linear control problems, and that these problems can be solved analytically. Moreover, \cite{SanORDERS} uses birth-death chain methods to show that in the two-action case, large deviations properties under the two orders of limits are identical. 
These results and our preliminary analyses suggest that the large population double limit is tractable, and that in typical cases, conclusions for the two orders of limits will agree. While we are a number of steps away from reaching these ends, the analysis here provides the tools required for work on this program to proceed. \section{Analysis}\label{sec:LDP} The proof of Theorem \ref{thm:SPLDP} follows the weak convergence approach of \cite{DupEll97} (henceforth DE). As noted in the introduction, the main novelty we must contend with is the fact that our processes run on a compact set $X$. This necessitates a delicate analysis of the behavior of the process on and near the boundary of $X$. At the same time, the fact that the conditional laws \eqref{eq:CondLaw} have finite support considerably simplifies a number of the steps from DE's approach. Proofs of auxiliary results that would otherwise interrupt the flow of the argument are relegated to Sections \ref{sec:LSCProof} and \ref{sec:RIOP}. Some technical arguments that mirror those from DE are provided in Section \ref{sec:AD}. Before entering into the details of our analysis, we provide an overview. In Section \ref{sec:JC}, we use the representation \eqref{eq:CramerRep} of the Cram\'er transform to establish joint continuity properties of the running cost function $L(\cdot, \cdot)$. To start, we provide examples of discontinuities that this function exhibits at the boundary of $X$. We then show that excepting these discontinuities, the running cost function is ``as continuous as possible'' (Proposition \ref{prop:Joint2}). The remaining sections follow the line of argument in DE, with modifications that use Proposition \ref{prop:Joint2} to contend with boundary issues. Section \ref{sec:laplace} describes how the large deviation principle upper and lower bounds can be deduced from corresponding Laplace principle upper and lower bounds. 
The latter bounds concern the limits of expectations of continuous functions, making them amenable to analysis using weak convergence arguments. Section \ref{sec:SOCP} explains how the expectations appearing in the Laplace principle can be expressed as solutions to stochastic optimal control problems \eqref{eq:VNSeqEqInitial}, the running costs of which are relative entropies defined with respect to the transition laws $\nu^N(\cdot | x)$ of $\mathbf{X}^N$. Section \ref{sec:LCP} describes the limit properties of the controlled processes as $N$ grows large. Finally, Sections \ref{sec:ProofUpper} and \ref{sec:ProofLower} use the foregoing results to prove the Laplace principle upper and lower bounds; here the main novelty is in Section \ref{sec:ProofLower}, where we show that the control problem appearing on the right-hand side of the Laplace principle admits $\varepsilon$-optimal solutions that initially obey the mean dynamic and remain in the interior of the simplex thereafter (Proposition \ref{prop:interiorpaths}). \subsection{Joint continuity of running costs}\label{sec:JC} Representation \eqref{eq:CramerRep} implies that for each $x \in X$, the Cram\'er transform $L(x, \cdot)$ is continuous on its domain $Z(x)$ (see the beginning of the proof of Proposition \ref{prop:Joint2} below). The remainder of this section uses this representation to establish joint continuity properties of the running cost function $L(\cdot, \cdot)$. The difficulty lies in establishing these properties at states on the boundary of $X$. Fix $x \in X$, and let $i \in \support(x)$ and $j \ne i$. Since $e_j-e_i$ is an extreme point of $Z(x)$, the point mass $\delta_{e_j-e_i}$ is the only distribution in $\Delta(\mathscr{Z})$ with mean $e_j - e_i$. Thus representation \eqref{eq:CramerRep} implies that \begin{equation}\label{eq:SimpleLBound0} L(x, e_j - e_i)=R(\delta_{e_j-e_i}||\nu(\cdot|x)) = -\log (x_i \sigma_{ij}(x))\geq -\log x_i.
\end{equation} Thus $L(x, e_j - e_i)$ grows without bound as $x$ approaches the face of $X$ on which $x_i=0$, and $L(x, e_j-e_i) = \infty$ when $x_i=0$. Intuitively, reducing the number of action $i$ players reduces the probability that such a player is selected to revise; when there are no such players, selecting one becomes impossible. A more serious difficulty is that running costs are not continuous at the boundary of $X$ even when they are finite. For example, suppose that $n \ge 3$, let $x$ be in the interior of $X$, and let $z_\alpha = e_3 - (\alpha e_1 + (1-\alpha) e_2)$. Since the unique $\lambda$ with mean $z_\alpha$ has $\lambda(e_3 -e_1) = \alpha$ and $\lambda(e_3 -e_2) = 1-\alpha$, equation \eqref{eq:CramerRep} implies that \[ L(x,z_\alpha)= \alpha \log\frac{\alpha}{x_1\sigma_{13}(x)}+(1-\alpha)\log\frac{1-\alpha}{x_2\sigma_{23}(x)}. \] If we set $\alpha(x) = -(\log (x_1\sigma_{13}(x)))^{-1}$ and let $x$ approach some $x^{\mathlarger *}$ with $x^{\mathlarger *}_1=0$, then $L(x,z_{\alpha(x)})$ approaches $1 - \log (x^{\mathlarger *}_2 \sigma_{23}(x^{\mathlarger *}))$; however, $z_{\alpha(x)}$ approaches $e_3-e_2$, and $L(x^{\mathlarger *}, e_3-e_2) = -\log (x^{\mathlarger *}_2 \sigma_{23}(x^{\mathlarger *}))$. These observations leave open the possibility that the running cost function is continuous as a face of $X$ is approached, provided that one restricts attention to displacement directions $z\in Z=\conv(\mathscr{Z}) = \conv(\{e_j - e_i\colon i, j \in \scrA\})$ that remain feasible on that face. Proposition \ref{prop:Joint2} shows that this is indeed the case. For any nonempty $I \subseteq \scrA$, define $X(I) = \{x\in X\colon I\subseteq \support(x)\}$, $\mathscr{Z}(I) = \{\mathscr{z} \in \mathscr{Z} \colon \mathscr{z}_j \geq 0 \text{ for all }j \notin I\}$, and $Z(I)=\conv(\mathscr{Z}(I)) = \{z \in Z \colon z_j \geq 0 \text{ for all }j \notin I\}$. \begin{proposition}\label{prop:Joint2} \begin{mylist} \item $L(\cdot, \cdot)$ is continuous on $\Int(X) \times Z$.
\item For any nonempty $I \subseteq \scrA$, $L(\cdot, \cdot)$ is continuous on $X(I) \times Z(I)$.
\end{mylist}
\end{proposition}

\emph{Proof}. For any $\lambda \in \Delta(\mathscr{Z}(I))$ and $x\in X(I) $, we have $\support(\lambda) \subseteq \mathscr{Z}(I) \subseteq \support(\nu(\cdot|x)) $. Thus by the definition of relative entropy, the function $\mathscr{L} \colon X(I) \times \Delta(\mathscr{Z}(I)) \to [0,\infty]$ defined by $ \mathscr{L}(x,\lambda) = R(\lambda||\nu(\cdot|x))$ is real-valued and continuous. Let
\begin{equation}\label{eq:Lambdaz}
\Lambda_{\mathscr{Z}(I)}(z)= \brace{\lambda \in \Delta(\mathscr{Z}) \colon \support(\lambda) \subseteq\mathscr{Z}(I), \sum\nolimits_{\mathscr{z} \in \mathscr{Z}}\mathscr{z} \lambda(\mathscr{z})=z}
\end{equation}
be the set of distributions on $\mathscr{Z}$ with support contained in $\mathscr{Z}(I)$ and with mean $z$. Then the correspondence $\Lambda_{\mathscr{Z}(I)} \colon Z(I) \Rightarrow \Delta(\mathscr{Z}(I))$ defined by \eqref{eq:Lambdaz} is clearly continuous and compact-valued. Thus if we define $L_I\colon X(I) \times Z(I) \to [0, \infty)$ by
\begin{equation}\label{eq:LAgain2}
L_I(x,z) = \min\{R(\lambda||\nu(\cdot|x))\colon \lambda \in \Lambda_{\mathscr{Z}(I)}(z) \},
\end{equation}
then the theorem of the maximum (\cite{Ber63}) implies that $L_I$ is continuous. By representation \eqref{eq:CramerRep},
\begin{equation}\label{eq:LAgain}
L(x,z) = \min\{R(\lambda||\nu(\cdot|x))\colon \lambda \in \Lambda_\mathscr{Z}(z) \}.
\end{equation}
Since $\mathscr{Z}(\scrA)=\mathscr{Z}$, \eqref{eq:LAgain2} and \eqref{eq:LAgain} imply that $L_{\scrA}(x,z)=L(x,z)$, establishing part (i).

To begin the proof of part (ii), we eliminate redundant cases using an inductive argument on the cardinality of $I$. Part (i) establishes the base case in which $\#I = n$. Suppose that the claim in part (ii) is true when $\#I > k \in \{1, \ldots , n -1\}$; we must show that this claim is true when $\#I = k$.
Suppose that $\support(x) = J \supset I$, so that $\#J > k$. Then the inductive hypothesis implies that the restriction of $L$ to $X(J) \times Z(J)$ is continuous at $(x, z)$. Since $X(J)\subset X(I)$ is open relative to $X$ and since $Z(J)\supset Z(I)$, the restriction of $L$ to $X(I) \times Z(I)$ is also continuous at $(x, z)$. It remains to show that the restriction of $L$ to $X(I) \times Z(I)$ is continuous at all $(x, z) \in X(I) \times Z(I)$ with $ \support(x) = I$. Since $\mathscr{Z}(I)\subset\mathscr{Z}$, \eqref{eq:LAgain2} and \eqref{eq:LAgain} imply that for all $(x, z) \in X(I) \times Z(I)$, \begin{equation}\label{eq:LIneq} L(x,z)\le L_I(x,z). \end{equation} If in addition $\support(x) = I$, then $\mathscr{L}(x,\lambda) = \infty$ whenever $\support(\lambda) \not\subseteq\mathscr{Z}(I)$, so \eqref{eq:LAgain2} and \eqref{eq:LAgain} imply that inequality \eqref{eq:LIneq} binds. Since $L_I$ is continuous, our remaining claim follows directly from this uniform approximation: \begin{lemma}\label{lem:RevIneq} For any $\varepsilon>0$, there exists a $\delta>0$ such that for any $x \in X$ with $\max_{k \in \scrA \smallsetminus I}x_k < \delta$ and any $z \in Z(I)$, we have \begin{equation}\label{eq:LIneq2} L(x,z)\ge L_I(x,z)-\varepsilon. \end{equation} \end{lemma} \noindent A constructive proof of Lemma \ref{lem:RevIneq} is provided in Section \ref{sec:LSCProof}. \subsection{The large deviation principle and the Laplace principle} \label{sec:laplace} While Theorem \ref{thm:SPLDP} is stated for the finite time interval $[0,T]$, we assume without loss of generality that $T=1$. In what follows, $\mathscr{C}$ denotes the set of continuous functions from $[0,1]$ to $X$ endowed with the supremum norm, $\mathscr{C}_{x}\subset \mathscr{C}$ denotes the set of paths in $\mathscr{C}$ starting at $x$, and $\scrA\scrC\subset \mathscr{C}$ and $\scrA\scrC_{x}\subset \mathscr{C}_{x}$ are the subsets consisting of absolutely continuous paths. 
Following DE, we deduce Theorem \ref{thm:SPLDP} from the \emph{Laplace principle}.

\begin{theorem}
\label{thm:laplace}
Suppose that the processes $\{\hat{\mathbf{X}}^{N}\}_{N=N_0}^\infty$ have initial conditions $x^N \in \mathscr{X}^N$ satisfying $\lim\limits_{N\to\infty}x^N = x \in X$. Let $h\colon \mathscr{C} \to \mathbb{R}$ be a bounded continuous function. Then
\begin{subequations}
\begin{gather}
\label{eq:LPUpper}
\limsup_{N\rightarrow\infty}\frac{1}{N}\log\mathbb{E}_{x^N}\!\left[\exp\left(-Nh(\hat{\mathbf{X}}^{N})\right)\right]\leq-\inf_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)},\,\text{ and}\\
\liminf_{N\rightarrow\infty}\frac{1}{N}\log\mathbb{E}_{x^N}\!\left[\exp\left(-Nh(\hat{\mathbf{X}}^{N})\right)\right]\geq -\inf_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}.\label{eq:LPLower}
\end{gather}
\end{subequations}
\end{theorem}

\noindent Inequality \eqref{eq:LPUpper} is called the \emph{Laplace principle upper bound}, and inequality \eqref{eq:LPLower} is called the \emph{Laplace principle lower bound}.

Because $c_x$ is a rate function (Proposition \ref{prop:GoodRF}), the large deviation principle (Theorem \ref{thm:SPLDP}) and the Laplace principle (Theorem \ref{thm:laplace}) each imply the other. The forward implication is known as \emph{Varadhan's integral lemma} (DE Theorem 1.2.1). For intuition, express the large deviation principle as $\mathbb{P}_{x^N}(\hat{\mathbf{X}}^N \approx \phi) \approx \exp(-N c_x(\phi))$, and argue heuristically that
\begin{align*}
\mathbb{E}_{x^N}\left[\exp(-Nh(\hat{\mathbf{X}}^{N}))\right]
&\approx \int_{\mathscr{C}} \exp(-Nh(\phi))\, \mathbb{P}_{x^N}(\hat{\mathbf{X}}^N \approx \phi)\,\mathrm{d}\phi\\
& \approx \int_{\mathscr{C}} \exp(-N(h(\phi)+ c_x(\phi)))\,\mathrm{d}\phi\\
& \approx \exp\paren{-N\inf_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}},
\end{align*}
where the final approximation uses the Laplace method for integrals (\cite{Bru70}).
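The Laplace-method heuristic is easy to confirm numerically in a toy finite setting. In the sketch below, the three-point state space and the particular values of $c$ and $h$ are illustrative assumptions only: under $\mathbb{P}_N(x)\propto\exp(-Nc(x))$ with $\min_x c(x)=0$, the quantity $-\tfrac{1}{N}\log\mathbb{E}[\exp(-Nh(X))]$ approaches $\min_x\paren{c(x)+h(x)}$.

```python
import math

def lse(vals):
    """Numerically stable log-sum-exp."""
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

# Hypothetical three-state example: P_N(x) proportional to exp(-N c[x]) stands
# in for the large deviation estimate, and h[x] is a bounded payoff.  The
# numbers below are illustrative assumptions only.
c = [0.0, 0.7, 1.5]   # "rate" values, normalized so that min c = 0
h = [1.2, 0.1, 0.3]   # payoff values

def laplace_functional(N):
    """-(1/N) log E[exp(-N h(X))] under P_N(x) proportional to exp(-N c[x])."""
    log_Z = lse([-N * ci for ci in c])
    log_E = lse([-N * (hi + ci) for hi, ci in zip(h, c)]) - log_Z
    return -log_E / N

# Laplace-method prediction: min_x (c[x] + h[x]), here 0.8.
target = min(ci + hi for ci, hi in zip(c, h))
assert abs(laplace_functional(10_000) - target) < 1e-6
```

The normalization $\min c = 0$ ensures that $\exp(-Nc(x))$ behaves like a probability estimate; the error in the approximation decays as $N$ grows.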
Our analysis requires the reverse implication (DE Theorem 1.2.3), which can be derived heuristically as follows. Let $\Phi \subset \mathscr{C}$, and let the (extended real-valued, discontinuous) function $h_\Phi$ be the indicator function of $\Phi$ in the convex analysis sense:
\[
h_\Phi(\phi)=
\begin{cases}
0 & \text{if }\phi\in\Phi,\\
+\infty & \text{otherwise}.
\end{cases}
\]
Then plugging $h_\Phi$ into \eqref{eq:LPUpper} and \eqref{eq:LPLower} yields
\[
\lim_{N\to\infty}\frac{1}{N}\log\mathbb{P}_{x^N}(\hat{\mathbf{X}}^N\in\Phi) = -\inf_{\phi\in\Phi}c_x(\phi),
\]
which is the large deviation principle. The proof of DE Theorem 1.2.3 proceeds by considering well-chosen approximations of $h_\Phi$ by bounded continuous functions.

The statements that form the Laplace principle concern limits of expectations of continuous functions, and so can be evaluated by means of weak convergence arguments. We return to this point at the end of the next section.

\subsection{The stochastic optimal control problem}\label{sec:SOCP}

For a given function $h \colon \mathscr{C}\to\mathbb{R}$, we define
\begin{equation}\label{eq:VN}
V^{N}(x^N)=-\frac{1}{N}\log\mathbb{E}_{x^N}\!\left[\exp(-Nh(\hat{\mathbf{X}}^{N}))\right]
\end{equation}
to be the negation of the expression from the left hand side of the Laplace principle. This section, which follows DE Sections 3.2 and 4.3, shows how $V^N$ can be expressed as the solution of a stochastic optimal control problem. The running costs of this problem are relative entropies, and its terminal costs are determined by the function $h$.

For each $k\in\{0,1,2,\ldots,N\}$ and sequence $(x_{0},\ldots,x_{k})\in (\mathscr{X}^{N})^{k+1}$, we define the period $k$ value function
\begin{equation}\label{eq:VNk}
V^{N}_k(x_{0},\ldots,x_{k})=-\frac{1}{N}\log\mathbb{E}\!\left[\exp(-Nh(\hat{\mathbf{X}}^{N}))\big|X^{N}_{0}=x_{0},\ldots,X^{N}_{k}=x_{k}\right].
\end{equation}
Note that $V^N_0 \equiv V^N$.
If we define the map $\hat\phi\; (= \hat\phi^N)$ from sequences $x_0, \ldots , x_N$ to paths in $\mathscr{C}$ by
\begin{equation}\label{eq:PhiHat}
\hat\phi_t(x_0, \ldots , x_N)=x_{k}+(Nt-k)(x_{k+1}-x_{k})\;\;\text{for all }t\in[\tfrac{k}{N},\tfrac{k+1}{N}],
\end{equation}
then \eqref{eq:VNk} implies that
\begin{equation}\label{eq:VNTerminal}
V^{N}_N(x_{0},\ldots,x_N)=h(\hat\phi(x_{0},\ldots,x_N)).
\end{equation}
Note also that $\hat X^N_t = \hat\phi_t(X^{N}_{0}, \ldots , X^{N}_{N})$; this can be expressed concisely as $\hat{\mathbf{X}}^{N}=\hat\phi({\mathbf{X}}^{N})$.

Proposition \ref{prop:DPFE} shows that the value functions $V^N_k$ satisfy a dynamic programming functional equation, with running costs given by relative entropy functions and with terminal costs given by $h(\hat\phi(\cdot))$. To read equation \eqref{eq:VNFunctionalEq}, recall that $\nu^N$ is the transition kernel for the Markov chain $\{X^{N}_{k}\}$.

\begin{proposition}\label{prop:DPFE}
For $k\in\{0,1,\ldots,N-1\}$ and $(x_{0},\ldots,x_{k})\in(\mathscr{X}^{N})^{k+1}$, we have
\begin{equation}\label{eq:VNFunctionalEq}
V^{N}_k(x_{0},\ldots,x_{k})=\inf_{\lambda\in\Delta(\mathscr{Z})}\paren{\frac{1}{N}R(\lambda\,||\,\nu^{N}(\hspace{1pt}\cdot\hspace{1pt}|x_{k}))+\sum_{\mathscr{z}\in\mathscr{Z}}V^{N}_{k+1}(x_{0},\ldots,x_{k},x_{k}+\tfrac{1}{N}\mathscr{z})\,\lambda(\mathscr{z})}.
\end{equation}
\end{proposition}

\noindent For $k = N$, $V^N_N$ is given by the terminal condition \eqref{eq:VNTerminal}.

The key idea behind Proposition \ref{prop:DPFE} is the following observation (DE Proposition 1.4.2), which provides a variational formula for expressions like \eqref{eq:VN} and \eqref{eq:VNk}.
\begin{observation}\label{obs:VarRep}
For any probability measure $\pi \in \Delta(\mathscr{Z})$ and function $\gamma\colon\mathscr{Z}\rightarrow\mathbb{R}$, we have
\begin{equation}\label{eq:variational}
-\log\sum_{\mathscr{z}\in\mathscr{Z}}\mathrm{e}^{-\gamma(\mathscr{z})}\pi(\mathscr{z})=\!\min_{\lambda\in\Delta(\mathscr{Z})}\paren{R(\lambda||\pi)+\sum_{\mathscr{z}\in\mathscr{Z}}\gamma(\mathscr{z})\lambda(\mathscr{z})}.
\end{equation}
The minimum is attained at $\lambda^{\mathlarger *}(\mathscr{z})=\pi(\mathscr{z})\,\mathrm{e}^{-\gamma(\mathscr{z})}/\sum_{\mathscr{y}\in\mathscr{Z}} \pi(\mathscr{y})\,\mathrm{e}^{-\gamma(\mathscr{y})}$. In particular, $\lambda^{\mathlarger *} \ll \pi$.
\end{observation}

Equation \eqref{eq:variational} expresses the log expectation on its left hand side as the minimized sum of two terms: a relative entropy term that only depends on the probability measure $\pi$, and an expectation that only depends on the function $\gamma$. This additive separability and the Markov property lead to equation \eqref{eq:VNFunctionalEq}. Specifically, observe that
\begin{gather*}
\exp\paren{\!-N V^N_k(x_0, \ldots , x_k)} =\mathbb{E}\!\brack{\exp(-Nh(\hat\phi({\mathbf{X}}^{N})))\,\big|\,X^{N}_{0}=x_{0},\ldots,X^{N}_{k}=x_{k}}\\
\qquad=\mathbb{E}\!\brack{\mathbb{E}\!\brack{\exp(-Nh(\hat\phi({\mathbf{X}}^{N})))\,\big|\, X^{N}_{0},\ldots,X^{N}_{k+1}}\,\big|\,X^{N}_{0}=x_{0},\ldots,X^{N}_{k}=x_{k}}\\
\qquad=\mathbb{E}\!\brack{\exp\paren{-N V^N_{k+1}(X^{N}_{0},\ldots,X^{N}_{k+1})}\,\big|\,X^{N}_{0}=x_{0},\ldots,X^{N}_{k}=x_{k}}\\
\qquad=\sum_{\mathscr{z} \in \mathscr{Z}}\exp\paren{-N V^N_{k+1}(x_0, \ldots , x_k,x_k + \tfrac1N \mathscr{z})}\hspace{1pt}\nu^{N}(\mathscr{z}|x_{k}),
\end{gather*}
where the last line uses the Markov property.
This equality and Observation \ref{obs:VarRep} yield
\begin{align*}
V^N_k(x_0, \ldots , x_k) &= -\frac1N\log\sum_{\mathscr{z} \in \mathscr{Z}}\exp\paren{\!-N V^N_{k+1}(x_0, \ldots , x_k,x_k + \tfrac1N \mathscr{z})}\hspace{1pt}\nu^{N}(\mathscr{z}|x_{k})\\
&=\frac1N\inf_{\lambda\in\Delta(\mathscr{Z})}\paren{R(\lambda\,||\,\nu^{N}(\hspace{1pt}\cdot\hspace{1pt}|x_{k}))+\sum_{\mathscr{z}\in\mathscr{Z}}NV^{N}_{k+1}(x_{0},\ldots,x_{k},x_{k}+\tfrac{1}{N}\mathscr{z})\,\lambda(\mathscr{z})},
\end{align*}
which is equation \eqref{eq:VNFunctionalEq}.

Since the value functions $V^N_k$ satisfy the dynamic programming functional equation \eqref{eq:VNFunctionalEq}, they also can be represented by describing the same dynamic program in sequence form. To do so, we define for $k \in \{0, \ldots , N-1\}$ a \emph{period $k$ control} $\lambda^N_k\colon (\mathscr{X}^N)^{k+1} \to \Delta(\mathscr{Z})$, which for each sequence of states $(x_0, \ldots , x_{k})$ specifies a probability distribution $\lambda^N_k(\hspace{1pt}\cdot\hspace{1pt}|x_0, \ldots , x_{k})$ over the increments in $\mathscr{Z}$. Given the sequence of controls $\{\lambda^N_k\}_{k=0}^{N-1}$ and an initial condition $x^N \in \mathscr{X}^N$, we define the \emph{controlled process} $\boldsymbol{\xi}^N=\{\xi^N_k\}_{k=0}^N$ by $\xi^N_0 = x^N$ and the recursive formula
\begin{equation}\label{eq:ControlledProcess}
\xi^N_{k+1} = \xi^N_k + \frac1N\zeta^N_k,
\end{equation}
where $\zeta^N_k$ has law $\lambda^N_k(\hspace{1pt}\cdot\hspace{1pt}|\xi^N_0, \ldots , \xi^N_k)$. We also define the piecewise affine interpolation $\smash{\hat\boldsymbol{\xi}}^N=\{\hat\xi^N_t\}_{t\in[0,1]}$ by $\hat \xi^N_t = \hat\phi_t(\boldsymbol{\xi}^N)$, where $\hat\phi$ is the interpolation function \eqref{eq:PhiHat}.
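Observation \ref{obs:VarRep}, which drives the derivation above, can also be verified by direct computation. The sketch below checks, for an arbitrary illustrative choice of $\pi$ and $\gamma$ on a three-point set (these particular numbers are assumptions made for the example only), that the exponentially tilted distribution $\lambda^{\mathlarger *}$ attains the left-hand side of \eqref{eq:variational}, and that no distribution on a fine grid over the simplex does better.

```python
import math
from itertools import product

# Direct numerical check of the variational formula (eq:variational) on a
# three-point set.  The reference measure pi and the function gamma are
# arbitrary illustrative choices.
pi = [0.5, 0.3, 0.2]
gamma = [1.0, 0.2, 2.5]

def rel_entropy(lam, ref):
    """R(lam || ref), with the convention 0 log 0 = 0."""
    return sum(l * math.log(l / r) for l, r in zip(lam, ref) if l > 0)

def objective(lam):
    return rel_entropy(lam, pi) + sum(g * l for g, l in zip(gamma, lam))

# Left-hand side of (eq:variational).
lhs = -math.log(sum(math.exp(-g) * p for g, p in zip(gamma, pi)))

# Claimed minimizer: the exponential tilting of pi by gamma.
norm = sum(p * math.exp(-g) for p, g in zip(pi, gamma))
lam_star = [p * math.exp(-g) / norm for p, g in zip(pi, gamma)]
assert abs(objective(lam_star) - lhs) < 1e-12

# No distribution on a grid over the simplex does better.
steps = 60
for i, j in product(range(steps + 1), repeat=2):
    if i + j <= steps:
        lam = [i / steps, j / steps, (steps - i - j) / steps]
        assert objective(lam) >= lhs - 1e-9
```

The grid search is exhaustive only because $\mathscr{Z}$ has three points here; it is meant as a sanity check of \eqref{eq:variational}, not as an algorithm.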
We then have \begin{proposition}\label{prop:VNSeqEq} For $k\in\{0,1,\ldots,N-1\}$ and $(x_{0},\ldots,x_{k})\in(\mathscr{X}^{N})^{k+1}$, $V^N_k(x_0, \ldots , x_k)$ equals \begin{equation}\label{eq:VNSeqEq} \inf_{\lambda^N_k, \ldots , \lambda^N_{N-1}}\!\! \mathbb{E}\Bigg[\frac1N\sum_{j=k}^{N-1}R\paren{\lambda^N_j(\hspace{1pt}\cdot\hspace{1pt}|\xi^N_0, \ldots , \xi^N_j)\,||\, \nu^N(\hspace{1pt}\cdot\hspace{1pt}|\xi^N_j)} + h(\smash{\hat\boldsymbol{\xi}}^N)\hspace{1pt}\bigg|\hspace{1pt}\xi^{N}_{0}=x_{0},\ldots,\xi^{N}_{k}=x_{k}\Bigg]. \end{equation} \end{proposition} \noindent Since Observation \ref{obs:VarRep} implies that the infimum in the functional equation \eqref{eq:VNFunctionalEq} is always attained, Proposition \ref{prop:VNSeqEq} follows from standard results (cf.~DE Theorem 1.5.2), and moreover, the infimum in \eqref{eq:VNSeqEq} is always attained. Since $V^N_0 \equiv V^N$ by construction, Proposition \ref{prop:VNSeqEq} yields the representation \begin{equation}\label{eq:VNSeqEqInitial} V^N(x^N) = \inf_{\lambda^N_0, \ldots , \lambda^N_{N-1}}\!\! \mathbb{E}_{x^N}\Bigg[\frac1N\sum_{j=0}^{N-1}R\paren{\lambda^N_j(\hspace{1pt}\cdot\hspace{1pt}|\xi^N_0, \ldots , \xi^N_j)\,||\, \nu^N(\hspace{1pt}\cdot\hspace{1pt}|\xi^N_j)} + h(\smash{\hat\boldsymbol{\xi}}^N)\Bigg]. \end{equation} The running costs in \eqref{eq:VNSeqEqInitial} are relative entropies of control distributions with respect to transition distributions of the Markov chain $\mathbf{X}^N$, and so reflect how different the control distribution is from the law of the Markov chain at the relevant state. Note as well that the terminal payoff $h(\smash{\hat\boldsymbol{\xi}}^N)$ may depend on the entire path of the controlled process $\boldsymbol{\xi}^N$. 
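For very small $N$, the easy direction of representation \eqref{eq:VNSeqEqInitial} can be checked by brute force. The sketch below uses a hypothetical one-dimensional birth-death chain on $\{0, 1/N, \ldots, 1\}$; the kernel, the horizon $N=6$, and the terminal payoff $h(\phi)=\phi_1$ are illustrative assumptions, not objects from the model. It computes $V^N$ by enumerating all paths, and confirms that the cost of one particular control, here the uncontrolled choice $\lambda = \nu$, is no smaller than $V^N$, as an infimum representation requires.

```python
import math
from itertools import product

# Brute-force check of the easy direction of (eq:VNSeqEqInitial) for a
# hypothetical birth-death chain on {0, 1/N, ..., 1}.
N = 6
Z = [1, -1, 0]                        # raw increments, applied as x + z/N

def nu(x):
    return [0.5 * (1 - x), 0.5 * x, 0.5]   # illustrative transition law nu(.|x)

def h(path):
    return path[-1]                   # payoff: terminal state of the path

def expect(x0, payoff):
    """Expectation of payoff(path) by enumerating all 3^N increment paths."""
    total = 0.0
    for zs in product(range(len(Z)), repeat=N):
        x, prob, path = x0, 1.0, [x0]
        for iz in zs:
            prob *= nu(x)[iz]         # probability of this increment at x
            x += Z[iz] / N
            path.append(x)
        total += prob * payoff(path)
    return total

def V(x0):                            # V^N(x0) from definition (eq:VN)
    return -math.log(expect(x0, lambda p: math.exp(-N * h(p)))) / N

# Cost of the trivial control lambda = nu: the relative entropy terms vanish,
# so the objective in (eq:VNSeqEqInitial) reduces to E[h].
x0 = 0.5
assert V(x0) < expect(x0, h)
```

With $\lambda = \nu$ the comparison is just Jensen's inequality; richer controls trade an entropy cost against a better terminal payoff, and the infimum over all of them attains $V^N$ exactly.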
With this groundwork in place, we can describe \apcite{DupEll97} weak convergence approach to large deviations as follows: Equation \eqref{eq:VNSeqEqInitial} represents the expression $V^N(x^N)$ from the left-hand side of the Laplace principle as the expected value of the optimal solution to a stochastic optimal control problem. For any given sequence of control sequences $\{\lambda^N_k\}_{k=0}^{N-1}$ and corresponding controlled processes $\boldsymbol{\xi}^N$, Section \ref{sec:LCP} shows that suitably chosen subsequences converge in distribution to some limits $\{\lambda_t\}_{t \in [0,1]}$ and $\boldsymbol{\xi}$ satisfying the averaging property \eqref{eq:LimControlled}. This weak convergence and the continuity of $h$ allow one to obtain limit inequalities for $V^N(x^N)$ using Fatou's lemma and the dominated convergence theorem. By considering the optimal control sequences for \eqref{eq:VNSeqEqInitial}, one obtains both the candidate rate function $c_x(\cdot)$ and the Laplace principle upper bound \eqref{eq:LPUB2}. The Laplace principle lower bound is then obtained by choosing a path $\psi$ that approximately minimizes $c_x(\cdot) + h(\cdot)$, constructing controlled processes that mirror $\psi$, and using the weak convergence of the controlled processes and the continuity of $L$ and $h$ to establish the limit inequality \eqref{eq:LPLB2}.

\subsection{Convergence of the controlled processes}\label{sec:LCP}

The increments of the controlled process $\boldsymbol{\xi}^N$ are determined in two steps: first, the history of the process determines the measure $\lambda^N_k(\hspace{1pt}\cdot\hspace{1pt}|\xi^N_0, \ldots , \xi^N_k) \in \Delta(\mathscr{Z})$, and then the increment itself is determined by a draw from this measure. With some abuse of notation, one can write $\lambda^N_k(\cdot) = \lambda^N_k(\hspace{1pt}\cdot\hspace{1pt}|\xi^N_0, \ldots , \xi^N_k)$, and thus view $\lambda^N_k$ as a random measure.
Then, using compactness arguments, one can show that as $N$ grows large, certain subsequences of the random measures $\lambda^N_k$ on $\Delta(\mathscr{Z})$ converge in a suitable sense to limiting random measures. Because the increments of $\boldsymbol{\xi}^N$ become small as $N$ grows large (cf.~\eqref{eq:ControlledProcess}), intuition from the law of large numbers---specifically Theorem \ref{thm:DetApprox}---suggests that the idiosyncratic part of the randomness in these increments should be averaged away. Thus in the limit, the evolution of the controlled process should still depend on the realizations of the random measures, but it should only do so by way of their means.
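This averaging intuition can be illustrated in a few lines of code. Everything in the sketch below is an illustrative assumption (a one-dimensional state, a two-point increment set, and an arbitrary state-feedback control): the controlled process with increments of size $1/N$ is compared with the deterministic path driven by the control's means, and the gap between the two shrinks as $N$ grows.

```python
import math
import random

random.seed(1)

# Illustrative one-dimensional analogue: a controlled walk with increments
# z/N, z in {+1, -1}, driven by an arbitrary state-feedback control lam.
# Nothing here comes from the model; the point is only that the idiosyncratic
# randomness in the increments averages out as N grows.
def lam(x):
    p_up = 0.5 + 0.2 * (1.0 - x)      # probability of the +1 increment
    return [p_up, 1.0 - p_up]

def drift(x):
    p_up, p_down = lam(x)
    return p_up - p_down              # sum_z z * lam(x)(z)

def max_gap(N, x0=0.2):
    """Sup distance between the controlled path and the mean (Euler) path."""
    x, y, gap = x0, x0, 0.0
    for _ in range(N):
        z = 1 if random.random() < lam(x)[0] else -1
        x += z / N                    # controlled step, cf. (eq:ControlledProcess)
        y += drift(y) / N             # deterministic averaged dynamic
        gap = max(gap, abs(x - y))
    return gap

print(max_gap(100), max_gap(100_000))   # the gap typically shrinks like 1/sqrt(N)
```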
To make this argument precise, we introduce continuous-time interpolations of the controlled processes $\boldsymbol{\xi}^N=\{\xi^N_k\}_{k=0}^N$ and the sequence of controls $\{\lambda^N_k\}_{k=0}^{N-1}$. The piecewise affine interpolation $\smash{\hat\boldsymbol{\xi}}^N=\{\hat\xi^N_t\}_{t\in[0,1]}$ was introduced above; it takes values in the space $\mathscr{C} = \mathscr{C}([0,1]:X)$, which we endow with the topology of uniform convergence. The piecewise constant interpolation $\smash{\bar\boldsymbol{\xi}}^N=\{\bar\xi^N_t\}_{t\in[0,1]}$ is defined by
\begin{equation*}
\bar{\xi}^{N}_{t}=\left\{\begin{array}{ll}\xi^{N}_{k} & \text{if}\, t\in[\frac{k}{N},\frac{k+1}{N})\text{ and }k=0,1,\ldots,N-2,\\
\xi^{N}_{N-1} & \text{if}\; t\in[\frac{N-1}{N},1].
\end{array}\right.
\end{equation*}
The process $\smash{\bar\boldsymbol{\xi}}^N$ takes values in the space $\mathscr{D} = \mathscr{D}([0, 1]: X)$ of right-continuous functions with left limits, which we endow with the Skorokhod topology. Finally, the piecewise constant control process $\{\bar\lambda^N_t\}_{t\in[0,1]}$ is defined by
\begin{equation*}
\bar{\lambda}^{N}_{t}(\cdot)=
\begin{cases}
\lambda^{N}_{k}(\cdot\hspace{1pt}|\hspace{1pt}\xi^{N}_{0},\ldots,\xi^{N}_{k}) & \text{if}\, t\in[\frac{k}{N},\frac{k+1}{N})\text{ and }k=0,1,\ldots,N-2,\\
\lambda^{N}_{N-1}(\cdot\hspace{1pt}|\hspace{1pt}\xi^{N}_{0},\ldots,\xi^{N}_{N-1}) & \text{if}\; t\in[\frac{N-1}{N},1].
\end{cases}
\end{equation*}
Using these definitions, we can rewrite formulation \eqref{eq:VNSeqEqInitial} of $V^N(x^N)$ as
\begin{equation}\label{eq:VNInt}
V^{N}(x^N)=\inf_{\lambda^N_0, \ldots , \lambda^N_{N-1}}\!\!\mathbb{E}_{x^N}\paren{\int_{0}^{1}R(\bar{\lambda}^{N}_{t}\,||\,\nu^{N}(\cdot\hspace{1pt}|\hspace{1pt}\bar{\xi}^{N}_{t}))\,\mathrm{d} t+h(\smash{\hat{\boldsymbol{\xi}}}^{N})}.
\end{equation}
As noted after Proposition \ref{prop:VNSeqEq}, the infimum in \eqref{eq:VNInt} is always attained by some choice of the control sequence $\{\lambda^N_k\}_{k=0}^{N-1}$.

Let $\mathscr{P}(\mathscr{Z}\times[0,1])$ denote the space of probability measures on $\mathscr{Z}\times[0,1]$. For a collection $\{\pi_t\}_{t\in[0,1]}$ of measures $\pi_t \in \Delta(\mathscr{Z})$ that is Lebesgue measurable in $t$, we define the measure $\pi_t \otimes \mathrm{d} t \in \mathscr{P}(\mathscr{Z} \times [0, 1])$ by
\[
(\pi_t \otimes \mathrm{d} t)(\{z\} \times B) = \int_B \pi_t(z)\, \mathrm{d} t
\]
for all $z \in \mathscr{Z}$ and all Borel sets $B$ of $[0,1]$. Using this definition, we can represent the piecewise constant control process $\{\bar\lambda^N_t\}_{t\in[0,1]}$ as the \emph{control measure} $\Lambda^{\!N} = \bar\lambda^N_t \otimes \mathrm{d} t$. Evidently, $\Lambda^{\!N}$ is a random measure taking values in $\mathscr{P}(\mathscr{Z}\times[0,1])$, a space we endow with the topology of weak convergence.

Proposition \ref{prop:converge}, a direct consequence of DE Theorem 5.3.5 and p.~165, formalizes the intuition expressed in the first paragraph above. It shows that along certain subsequences, the control measures $\Lambda^{\!N}$ and the interpolated controlled processes $\smash{\hat\boldsymbol{\xi}}^N$ and $\smash{\bar\boldsymbol{\xi}}^N$ converge in distribution to a random measure $\Lambda$ and a random process $\boldsymbol{\xi}$, and moreover, that the evolution of $\boldsymbol{\xi}$ is almost surely determined by the means of $\Lambda$.

\begin{proposition}\label{prop:converge}
Suppose that the initial conditions $x^{N}\in\mathscr{X}^{N}$ converge to $x \in X$, and that the control sequence $\{\lambda^N_k\}_{k=0}^{N-1}$ is such that $\sup_{N\geq N_0} \!V^N(x^N)<\infty$.
\begin{mylist} \item Given any subsequence of $\{(\Lambda^{N},\smash{\hat\boldsymbol{\xi}}^N,\smash{\bar\boldsymbol{\xi}}^N)\}_{N=N_0}^\infty$, there exists a $\mathscr{P}(\mathscr{Z}\times[0,1])$-valued random measure $\Lambda$ and $\mathscr{C}$-valued random process $\boldsymbol{\xi}$ $($both possibly defined on a new probability space$)$ such that some subsubsequence converges in distribution to $(\Lambda,\boldsymbol{\xi},\boldsymbol{\xi})$ in the topologies specified above. \item There is a collection of $\Delta(\mathscr{Z})$-valued random measures $\{\lambda_t\}_{t \in [0,1]}$, measurable with respect to $t$, such that with probability one, the random measure $\Lambda$ can be decomposed as $\Lambda =\lambda_t \otimes\mathrm{d} t$. \item With probability one, the process $\boldsymbol{\xi}$ satisfies $\xi_{t}=x+\int_{0}^{t}\paren{\sum_{\mathscr{z}\in\mathscr{Z}}\mathscr{z}\lambda_{s}(\mathscr{z})}\mathrm{d} s$ for all $t\in[0,1]$, and is absolutely continuous in $t$. Thus with probability one, \end{mylist} \begin{equation}\label{eq:LimControlled} \dot{\xi}_{t}=\sum\nolimits_{\mathscr{z}\in\mathscr{Z}}\mathscr{z}\lambda_{t}(\mathscr{z}) \end{equation} \hspace{.5in}almost surely with respect to Lebesgue measure. \end{proposition} \subsection{Proof of the Laplace principle upper bound} \label{sec:ProofUpper} In this section we consider the Laplace principle upper bound \eqref{eq:LPUpper}, which definition \eqref{eq:VN} allows us to express as \begin{equation}\label{eq:LPUB2} \liminf_{N\rightarrow\infty} V^{N}(x^{N})\geq \inf_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}. \end{equation} The argument here follows DE Section 6.2. Let $\{\lambda^N_k\}_{k=0}^{N-1}$ be the optimal control sequence in representation \eqref{eq:VNSeqEqInitial}, and let $\boldsymbol{\xi}^N$ be the corresponding controlled process. 
Define the triples $\{(\Lambda^{N},\smash{\hat\boldsymbol{\xi}}^N,\smash{\bar\boldsymbol{\xi}}^N)\}_{N=N_0}^\infty$ of interpolated processes as in Section \ref{sec:LCP}. Proposition \ref{prop:converge} shows that for any subsequence of these triples, there is a subsubsequence that converges in distribution to some triple $(\lambda_t \otimes\mathrm{d} t,\boldsymbol{\xi},\boldsymbol{\xi})$ satisfying \eqref{eq:LimControlled}. Then one argues that along this subsubsequence, \begin{align*} \liminf_{N\rightarrow\infty}V^{N}(x^{N}) &\geq \mathbb{E}_x\paren{\int_{0}^{1}R\paren{\vphantom{I^N}{\lambda}_{t}\hspace{1pt}||\hspace{1pt} \nu(\cdot|\xi_{t})}\mathrm{d} t+h(\boldsymbol{\xi})}\\ &\geq\mathbb{E}_x\paren{\int_{0}^{1}L\left(\xi_{t},\sum_{\mathscr{z}\in\mathscr{Z}}\mathscr{z} \lambda_{t}(\mathscr{z})\right)\mathrm{d} t+h(\boldsymbol{\xi})}\\ &= \mathbb{E}_x\paren{\int_{0}^{1}L(\xi_{t},\dot{\xi}_{t})\,\mathrm{d} t+h(\boldsymbol{\xi})}\\ &\geq \inf_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}. \end{align*} The key ingredients needed to establish the initial inequality are equation \eqref{eq:VNSeqEqInitial}, Skorokhod's theorem, equation \eqref{eq:UnifConvR} below, the lower semicontinuity of relative entropy, and Fatou's lemma. Then the second inequality follows from representation \eqref{eq:CramerRep} of the Cram\'er transform, the equality from equation \eqref{eq:LimControlled}, and the final inequality from the definition \eqref{eq:PathCost} of the cost function $c_x$. Since the subsequence chosen initially was arbitrary, inequality \eqref{eq:LPUB2} is proved. The details of this argument can be found in Section \ref{sec:ProofUpperApp}, which largely follows DE Section 6.2. But while in DE the transition kernels $\nu^N(\cdot | x)$ of the Markov chains are assumed to be independent of $N$, here we allow for a vanishing dependence on $N$ (cf.~equation \eqref{eq:LimTrans}). 
Thus we require a simple additional argument, Lemma \ref{lem:RelEnt}, that uses lower bound \eqref{eq:LimSPBound} to establish the uniform convergence of relative entropies: namely, that if $\lambda^N \colon \mathscr{X}^N \to \Delta(\mathscr{Z})$ are transition kernels satisfying $\lambda^N(\cdot |x) \ll \nu^N(\cdot |x) $ for all $x \in \mathscr{X}^N$, then \begin{equation}\label{eq:UnifConvR} \lim_{N\to\infty}\max_{x\in\mathscr{X}^N}\abs{R(\lambda^N(\cdot|x)\hspace{1pt}||\hspace{1pt} \nu^N(\cdot|x)) - R(\lambda^N(\cdot|x)\hspace{1pt}||\hspace{1pt} \nu(\cdot|x))}=0. \end{equation} \subsection{Proof of the Laplace principle lower bound} \label{sec:ProofLower} Finally, we consider the Laplace principle lower bound \eqref{eq:LPLower}, which definition \eqref{eq:VN} lets us express as \begin{equation}\label{eq:LPLB2} \limsup_{N\rightarrow\infty} V^{N}(x^{N})\leq \inf_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}. \end{equation} The argument here largely follows DE Sections 6.2 and 6.4. Their argument begins by choosing a path that is $\varepsilon$-optimal in the minimization problem from the right-hand side of \eqref{eq:LPLB2}. To account for our processes running on a set with a boundary, we show that this path can be chosen to start with a brief segment that follows the mean dynamic, and then stays in the interior of $X$ thereafter (Proposition \ref{prop:interiorpaths}). With this choice of path, the joint continuity properties of the running costs $L(\cdot, \cdot)$ established in Proposition \ref{prop:Joint2} are sufficient to complete the dominated convergence argument in display \eqref{eq:DomConvArgument}, which establishes that inequality \eqref{eq:LPLB2} is violated by no more than $\varepsilon$. Since $\varepsilon$ was arbitrary, \eqref{eq:LPLB2} follows. For a path $\phi \in \mathscr{C}=\mathscr{C}([0,1]:X)$ and an interval $I \subseteq[0,1]$, write $\phi_I$ for $\{\phi_t\colon t \in I\}$. 
Define the set of paths
\[
\tilde\mathscr{C} = \{\phi \in \mathscr{C} \colon \text{ for some }\alpha\in(0, 1], \phi_{[0,\alpha]}\text{ solves }\eqref{eq:MD}\text{ and }\phi_{[\alpha, 1]} \subset \Int(X)\}.
\]
Let $\tilde \mathscr{C}_x$ denote the set of such paths that start at $x$.

\begin{proposition}\label{prop:interiorpaths}
For all $x \in X$, $ \inf\limits_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}=\inf\limits_{\phi\in\tilde\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}. $
\end{proposition}

\noindent The proof of this result is rather involved, and is presented in Section \ref{sec:RIOP}.

The next proposition, a version of DE Lemma 6.5.5, allows us to further restrict our attention to paths having convenient regularity properties. We let $\mathscr{C}^{{\mathlarger *}}\subset \tilde\mathscr{C}$ denote the set of paths $\phi \in \tilde\mathscr{C}$ such that after the time $\alpha>0$ for which $\phi_{[0,\alpha]}$ solves \eqref{eq:MD}, the derivative $\dot{\phi}$ is piecewise constant and takes values in $Z$.

\begin{proposition}\label{prop:interiorpaths2}
$ \inf\limits_{\phi\in\tilde\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}=\inf\limits_{\phi\in\mathscr{C}^{{\mathlarger *}}}\paren{c_{x}(\phi)+h(\phi)}. $
\end{proposition}

\noindent The proof of Proposition \ref{prop:interiorpaths2} mimics that of DE Lemma 6.5.5; see Section \ref{sec:PfInteriorpaths2} for details.

Now fix $\varepsilon > 0$. By the previous two propositions, we can choose an $\alpha > 0$ and a path $\psi \in \mathscr{C}^{\mathlarger *}$ such that $\psi_{[0,\alpha]}$ solves \eqref{eq:MD} and
\begin{equation}\label{eq:EpsOpt}
c_{x}(\psi) + h(\psi)\leq \inf_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}+\varepsilon.
\end{equation}
We now introduce a controlled process $\boldsymbol\chi^N$ that follows $\psi$ in expectation as long as it remains in a neighborhood of $\psi$.
Representation \eqref{eq:CramerRep} implies that for each $k \in \{0, \ldots, N-1\}$ and $x \in \mathscr{X}^N$, there is a transition kernel $\pi^{N}_{k}(\cdot\hspace{1pt}|x)$ that minimizes relative entropy with respect to $\nu(\cdot|x)$ subject to the aforementioned constraint on its expectation: \begin{equation}\label{eq:AnotherREForL} R(\pi_{k}^{N}(\cdot\hspace{1pt}|x)\hspace{1pt}||\hspace{1pt}\nu(\cdot|x))=L(x,\dot\psi_{k/N})\;\text{ and }\;\sum_{\mathscr{z}\in\mathscr{Z}}\mathscr{z} \pi_{k}^{N}(\mathscr{z}|x)=\dot\psi_{k/N}. \end{equation} To ensure that this definition makes sense for all $k$, we replace the piecewise continuous function $\dot \psi$ with its right continuous version. Since $\psi_{[0,\alpha]}$ solves \eqref{eq:MD}, it follows from property \eqref{eq:NewPhi0} that there is an $\hat \alpha \in (0, \alpha]$ such that \begin{equation}\label{eq:Growing} \dot\psi_t \in Z(x) = \{z \in Z\colon z_j \geq 0 \text{ for all }j \notin\support(x)\}\:\text{ whenever }t \in [0,\hat\alpha]. \end{equation} Property \eqref{eq:NewPhi0} also implies that $(\psi_t)_i \geq x_i \wedge \varsigma$ for all $t \in [0, \alpha]$ and $i \in \support(x)$. Now choose a $\delta >0$ satisfying \begin{equation}\label{eq:DeltaInequality} \delta < \min\paren{\{\varsigma\} \cup \{x_i\colon i \in \support(x)\} \cup \{\tfrac12 \dist(\psi_t, \partial X) \colon t \in [\hat \alpha, 1]\}}. \end{equation} For future reference, note that if $y \in X$ satisfies $|y - \psi_t|< \delta$, then $|y_i - (\psi_t)_i|< \frac\delta2$ for all $i \in \scrA$ (by the definition of the $\ell^1$ norm), and so if $t \in [0, \alpha]$ we also have \begin{equation}\label{eq:XBarX} y \in \bar X_x \equiv \{\hat x \in X\colon \hat x_i \geq \tfrac12(x_i \wedge \varsigma)\text{ for all }i \in \support(x)\}. 
\end{equation}
For each $(x_0,\ldots,x_k)\in (\mathscr{X}^N)^{k+1}$, define the sequence of controls $\{\lambda^N_k\}_{k=0}^{N-1}$ by
\begin{equation}\label{eq:NewLambda}
\lambda_{k}^{N}(\mathscr{z}|x_0,\ldots,x_k)=
\begin{cases}
\pi_{k}^{N}(\mathscr{z}|x_k) & \text{if }\max_{0\leq j\leq k}|x_{j}-\psi_{j/N}|\leq\delta,\\
\nu^N(\mathscr{z}|x_k) & \text{if }\max_{0\leq j\leq k}|x_{j}-\psi_{j/N}|>\delta.
\end{cases}
\end{equation}
Finally, define the controlled process $\boldsymbol\chi^N=\{\chi_{k}^{N}\}_{k=0}^{N}$ by setting $\chi^N_0 = x^N$ and using the recursive formula $ \chi^N_{k+1} = \chi^N_k + \frac1N\zeta^N_k, $ where $\zeta^N_k$ has law $\lambda^N_k(\hspace{1pt}\cdot\hspace{1pt}|\chi^N_0, \ldots , \chi^N_k)$.

Under this construction, the process $\boldsymbol\chi^N$ evolves according to the transition kernels $\pi^N_k$, and so follows $\psi$ in expectation, so long as it stays $\delta$-synchronized with $\psi$. If this ever fails to be true, the evolution of $\boldsymbol\chi^N$ proceeds according to the kernel $\nu^N$ of the original process $\mathbf{X}^N$. This implies that until the stopping time
\begin{equation}\label{eq:TauN}
\tau^{N}=\frac{1}{N}\min\brace{k\in\{0,1,\ldots,N\}\colon|\chi^{N}_{k}-\psi_{k/N}|>\delta}\wedge 1,
\end{equation}
the relative entropies of transitions are given by \eqref{eq:AnotherREForL}, and that after $\tau^N$ these relative entropies are zero.

Define the pair $\{(\Lambda^{\!N},\smash{\hat\boldsymbol\chi}^N)\}_{N=N_0}^\infty$ of interpolated processes as in Section \ref{sec:LCP}. Proposition \ref{prop:converge} shows that for any subsequence of these pairs, there is a subsubsequence that converges in distribution to some pair $(\lambda_t \otimes\mathrm{d} t,\boldsymbol\chi)$ satisfying \eqref{eq:LimControlled}. By Prokhorov's theorem, $\tau^N$ can be assumed to converge in distribution on this subsubsequence to some $[0,1]$-valued random variable $\tau$.
Finally, DE Lemma 6.4.2 and Proposition 5.3.8 imply that $\tau = 1$ and $\boldsymbol\chi = \psi$ with probability one. For the subsubsequence specified above, we argue as follows: \begin{align} \limsup_{N\to\infty}&\,V^N(x^N) \leq\lim_{N\to\infty}\mathbb{E}_{x^N}\Bigg[\frac1N\sum_{j=0}^{N-1}R\paren{\lambda^N_j(\hspace{1pt}\cdot\hspace{1pt}|\chi^N_0, \ldots , \chi^N_j)\,||\, \nu^N(\hspace{1pt}\cdot\hspace{1pt}|\chi^N_j)} + h(\smash{\hat\boldsymbol\chi}^N)\Bigg]\notag\\ &=\lim_{N\to\infty}\mathbb{E}_{x^N}\Bigg[\frac1N\sum_{j=0}^{N\tau^N-1}L(\chi^N_j,\dot\psi_{j/N}) + h(\smash{\hat\boldsymbol\chi}^N)\Bigg]\notag\\ &=\lim_{N\to\infty}\mathbb{E}_{x^N}\Bigg[\frac1N\!\!\sum_{j=0}^{(N\tau^N \wedge \lfloor N\hat\alpha\rfloor)-1}\hspace{-2ex}L(\hat\chi^N_{j/N},\dot\psi_{j/N})+\frac1N\!\sum_{j=N\tau^N \wedge \lfloor N\hat\alpha\rfloor}^{N\tau^N-1}\hspace{-2ex}L(\hat\chi^N_{j/N},\dot\psi_{j/N})+ h(\smash{\hat\boldsymbol\chi}^N)\Bigg]\label{eq:DomConvArgument}\\ &=\int_0^{\hat\alpha} L(\psi_t,\dot\psi_t)\,\mathrm{d} t + \int_{\hat\alpha}^1L(\psi_t,\dot\psi_t)\,\mathrm{d} t +h(\psi)\notag\\ &=c_x(\psi)+h(\psi).\notag \end{align} The initial inequality follows from representation \eqref{eq:VNSeqEqInitial}, the second line from the uniform convergence in \eqref{eq:UnifConvR}, along with equations \eqref{eq:AnotherREForL}, \eqref{eq:NewLambda}, and \eqref{eq:TauN}, and the fifth line from the definition of $c_x$. The fourth line is a consequence of the continuity of $h$, the convergence of $\tau^N$ to $\tau$, the uniform convergence of $\smash{\hat\boldsymbol\chi}^N$ to $\psi$, the piecewise continuity and right continuity of $\dot\psi$, Skorokhod's theorem, and the dominated convergence theorem. 
For the application of dominated convergence to the first sum, we use the fact that when $ j/N < \tau^N \wedge \hat \alpha$, we have $\hat\chi^N_{j/N} \in \bar X_x$ (see \eqref{eq:XBarX}) and $\dot\psi_{j/N} \in Z(x) $ (see \eqref{eq:Growing}), along with the fact that $L(\cdot,\cdot)$ is continuous, and hence uniformly continuous and bounded, on $\bar X_x \times Z(x)$, as follows from Proposition \ref{prop:Joint2}(ii). For the application of dominated convergence to the second sum, we use the fact that when $\hat \alpha \leq j/N < \tau^N$, we have $\dist(\hat\chi^N_{j/N}, \partial X) \geq \frac\delta2$ (see \eqref{eq:DeltaInequality}), and the fact that $L(\cdot,\cdot)$ is continuous and bounded on $Y \times Z$ when $Y$ is a closed subset of $\Int(X)$, as follows from Proposition \ref{prop:Joint2}(i). Since every subsequence has a subsubsequence that satisfies \eqref{eq:DomConvArgument}, the sequence as a whole must satisfy \eqref{eq:DomConvArgument}. Therefore, since $\varepsilon>0$ was arbitrary, \eqref{eq:DomConvArgument}, \eqref{eq:EpsOpt}, and \eqref{eq:VN} establish inequality \eqref{eq:LPLB2}, and hence the lower bound \eqref{eq:LPLower}. \section{Proof of Lemma \ref{lem:RevIneq}}\label{sec:LSCProof} Lemma \ref{lem:RevIneq} follows from equation \eqref{eq:LAgain} and Lemma \ref{lem:Approx2}, which in turn requires Lemma \ref{lem:Approx1}. Lemma \ref{lem:Approx1} shows that for any distribution $\lambda$ on $\mathscr{Z}$ with mean $z \in Z(I)$, there is a distribution $\bar \lambda$ on $\mathscr{Z}(I)$ whose mean is also $z$, with the variational distance between $\bar\lambda$ and $\lambda$ bounded by a fixed multiple of the mass that $\lambda$ places on components outside of $\mathscr{Z}(I)$. The lemma also specifies some equalities that $\lambda$ and $\bar\lambda$ jointly satisfy.
Lemma \ref{lem:Approx2} shows that if $x$ puts little mass on actions outside $I$, then switching from $\bar \lambda$ to $\lambda$ can reduce relative entropy only slightly, uniformly over the choice of displacement vector $z \in Z(I)$. Both lemmas require additional notation. Throughout what follows we write $K$ for $\scrA\smallsetminus I$. For $\lambda \in \Delta(\mathscr{Z})$, write $\lambda_{ij}$ for $\lambda(e_j - e_i)$ when $j \ne i$. Write $\lambda_{[i]} = \sum_{j \ne i}\lambda_{ij}$ for the $i$th ``row sum'' of $\lambda$ and $\lambda^{[j]} = \sum_{i \ne j}\lambda_{ij}$ for the $j$th ``column sum''. (Remember that $\lambda$ has no ``diagonal components'', but instead has a single null component $\lambda_{\mathbf{0}} = \lambda(\mathbf{0})$.) For $I \subseteq \scrA$, write $\lambda_{[I]} = \sum_{i \in I}\lambda_{[i]}$ for the sum over all elements of $\lambda$ from rows in $I$. If $\lambda, \bar\lambda \in \Delta(\mathscr{Z})$, we apply the same notational devices to $\Delta\lambda = \bar\lambda - \lambda$ and to $|\Delta\lambda|$, the latter of which is defined by $|\Delta\lambda|_{ij} = |(\Delta\lambda)_{ij}|$. Finally, if $\chi \in \mathbb{R}^{I}_+$, we write $\chi_{[I]}$ for $\sum_{i \in I} \chi_i$. \begin{lemma}\label{lem:Approx1} Fix $z \in Z(I)$ and $\lambda \in \Lambda_{\mathscr{Z}}(z)$. Then there exist a distribution $\bar\lambda \in \Lambda_{\mathscr{Z}(I)}(z)$ and a vector $\chi \in \mathbb{R}^{I}_+$ satisfying \begin{mylist} \item $\Delta\lambda_{[i]} = \Delta\lambda^{[i]} = -\chi_i$ for all $i \in I$, \item $\Delta\lambda_{\mathbf{0}} = \lambda_{[K]}+\chi_{[I]}$, \item $\chi_{[I]} \leq \lambda_{[K]}$, \text{ and} \item $|\Delta\lambda|_{[\scrA]} \leq 3\lambda_{[K]}$, \end{mylist} \noindent where $\Delta\lambda = \bar\lambda - \lambda$. \end{lemma} \begin{lemma}\label{lem:Approx2} Fix $\varepsilon>0$.
There exists a $\delta>0$ such that for any $x \in X$ with $\bar x_K \equiv\max_{k \in K}x_k < \delta$, any $z \in Z(I)$, and any $\lambda \in \Lambda_{\mathscr{Z}(\support(x))}(z)$, we have \begin{equation*} R(\lambda||\nu(\cdot|x))\ge R(\bar\lambda||\nu(\cdot|x))-\varepsilon, \end{equation*} where $\bar\lambda \in \Lambda_{\mathscr{Z}(I)}(z)$ is the distribution determined for $\lambda$ in Lemma \ref{lem:Approx1}. \end{lemma} To see that Lemma \ref{lem:Approx2} implies Lemma \ref{lem:RevIneq}, fix $x \in X$ with $\bar x_K < \delta$ and $z \in Z(I)$, and let $\lambda\in \Lambda_\mathscr{Z}(z) $ and $\lambda^I\in \Lambda_{\mathscr{Z}(I)}(z) $ be the minimizers in \eqref{eq:LAgain} and \eqref{eq:LAgain2}, respectively; then since $\bar\lambda \in \Lambda_{\mathscr{Z}(I)}(z)$, \[ L(x,z) = R(\lambda||\nu(\cdot|x)) \geq R(\bar\lambda||\nu(\cdot|x))-\varepsilon \geq R(\lambda^I||\nu(\cdot|x))-\varepsilon = L_I(x,z)-\varepsilon. \] \subsection{Proof of Lemma \ref{lem:Approx1}} To prove Lemma \ref{lem:Approx1}, we introduce an algorithm that incrementally constructs the pair $(\bar\lambda, \chi) \in \Lambda_{\mathscr{Z}(I)}(z) \times \mathbb{R}^I_+$ from the pair $(\lambda,\mathbf{0}) \in \Lambda_{\mathscr{Z}}(z) \times \mathbb{R}^I_+$. All intermediate states $(\nu, \pi)$ of the algorithm are in $\Lambda_{\mathscr{Z}}(z) \times \mathbb{R}^I_+$. Suppose without loss of generality that $K = \{1, \ldots , \bar k\}$. The algorithm first reduces all elements of the first row of $\lambda$ to 0, then all elements of the second row, and eventually all elements of the $\bar k$th row.
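Each step of the algorithm rests on an elementary rerouting identity; the indices below are purely illustrative. For $k \in K$ and distinct $i, j \ne k$, and any $c > 0$,
\[
c\,(e_j - e_k) + c\,(e_k - e_i) = c\,(e_j - e_i) + c\,\mathbf{0},
\]
so shifting $c$ units of mass from the components $e_j - e_k$ and $e_k - e_i$ to the components $e_j - e_i$ and $\mathbf{0}$ removes mass from row $k$ while preserving both total mass and the mean of the distribution.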
Suppose that at some stage of the algorithm, the state is $(\nu, \pi)\in \Lambda_{\mathscr{Z}}(z) \times \mathbb{R}^I_+$, rows $1$ through $k-1$ have been zeroed, and row $k$ has not been zeroed: \begin{gather} \nu_{[h]} = 0\text{ for all }h < k, \text{ and }\label{eq:AlgDag}\\ \nu_{[k]} > 0.\label{eq:AlgDagAlt} \end{gather} Since $\nu \in \Lambda_{\mathscr{Z}}(z)$ and $z \in Z(I)$, \begin{gather} \nu^{[i]} - \nu_{[i]} =\sum\nolimits_{\mathscr{z} \in \mathscr{Z}}\mathscr{z}_i \nu(\mathscr{z}) = z_i\text{ for all }i \in I,\text{ and}\label{eq:AlgStar}\\ \nu^{[\ell]} - \nu_{[\ell]} =\sum\nolimits_{\mathscr{z} \in \mathscr{Z}}\mathscr{z}_\ell \nu(\mathscr{z}) = z_\ell \geq 0\text{ for all }\ell \in K.\label{eq:Alg2Star} \end{gather} \eqref{eq:AlgDagAlt} and \eqref{eq:Alg2Star} together imply that $\nu^{[k]}>0$. Thus there exist $j\ne k$ and $i \ne k$ such that \begin{equation}\label{eq:AlgInc} \nu_{kj} \wedge\nu_{ik} \equiv c > 0. \end{equation} Using \eqref{eq:AlgInc}, we now construct the algorithm's next state $(\hat\nu, \hat\pi)$ from the current state $(\nu, \pi)$ using one of three mutually exclusive and exhaustive cases, as described next; only the components of $\nu$ and $\pi$ whose values change are noted explicitly. 
\begin{alignat}{3} &\text{Case 1: }\;i \ne j &&\hspace{.5in}\text{Case 2: }\;i = j\in K&&\hspace{.5in}\text{Case 3: }\;i = j\in I \notag\\ &\hat\nu_{kj} = \nu_{kj} - c&&\hspace{.5in}\hat\nu_{kj} = \nu_{kj} - c&&\hspace{.5in}\hat\nu_{kj} = \nu_{kj} - c\notag\\ &\hat\nu_{ik} = \nu_{ik} - c&&\hspace{.5in}\hat\nu_{jk} = \nu_{jk} - c&&\hspace{.5in}\hat\nu_{jk} = \nu_{jk} - c\notag\\ &\hat\nu_{ij} = \nu_{ij} + c&&&&\notag\\ &\hat\nu_{\mathbf{0}} = \nu_{\mathbf{0}} + c&&\hspace{.5in}\hat\nu_{\mathbf{0}} = \nu_{\mathbf{0}} + 2c&&\hspace{.5in}\hat\nu_{\mathbf{0}} = \nu_{\mathbf{0}} + 2c\notag\\ &&& &&\hspace{.5in}\hat\pi_{i} = \pi_{i} + c\notag \end{alignat} In what follows, we confirm that following the algorithm to completion leads to a final state $(\bar\lambda, \chi)$ with the desired properties. Write $\Delta\nu = \hat\nu - \nu$ and $\Delta\pi = \hat\pi - \pi$, and define $|\Delta\nu|$ componentwise as described above. The following statements are immediate: \begin{subequations} \begin{alignat}{3} &\text{Case 1: }\;i \ne j &&\hspace{.3in}\text{Case 2: }\;i = j\in K&&\hspace{.3in}\text{Case 3: }\;i = j\in I \notag\\ &\Delta\nu_{[k]} = \Delta\nu^{[k]} =- c&&\hspace{.3in}\Delta\nu_{[k]} = \Delta\nu^{[k]} =- c&&\hspace{.3in}\Delta\nu_{[k]} = \Delta\nu^{[k]} =- c\label{eq:chartl1}\\ &&&\hspace{.3in}\Delta\nu_{[j]} = \Delta\nu^{[j]} =- c&&\hspace{.3in}\Delta\nu_{[j]} = \Delta\nu^{[j]} =- c\label{eq:chartl2}\\ & \Delta\nu_{[\ell]} = \Delta\nu^{[\ell]} = 0, \ell \ne k&&\hspace{.3in}\Delta\nu_{[\ell]} = \Delta\nu^{[\ell]} = 0, \ell \notin \{k,j\}&&\hspace{.3in}\Delta\nu_{[\ell]} = \Delta\nu^{[\ell]} = 0, \ell \notin \{k,j\}\label{eq:chartl3}\\ &\Delta\nu_{\mathbf{0}} = c&&\hspace{.3in}\Delta\nu_{\mathbf{0}} = 2c &&\hspace{.3in}\Delta\nu_{\mathbf{0}} = 2c\label{eq:chartl4}\\ &\Delta\pi_{[I]} = 0&&\hspace{.3in}\Delta\pi_{[I]} = 0 &&\hspace{.3in}\Delta\pi_{[I]} = c\label{eq:chartl5}\\ &\Delta\nu_{[K]} = -c&&\hspace{.3in}\Delta\nu_{[K]} = -2 c &&\hspace{.3in}\Delta\nu_{[K]} =
-c\label{eq:chartl6}\\ &|\Delta\nu|_{[\scrA]} = 3 c&&\hspace{.3in}|\Delta\nu|_{[\scrA]} = 2 c &&\hspace{.3in}|\Delta\nu|_{[\scrA]} = 2 c\label{eq:chartl7} \end{alignat} \end{subequations} The initial equalities in \eqref{eq:chartl1}--\eqref{eq:chartl3} imply that if $\nu$ is in $\Lambda_{\mathscr{Z}}(z)$, then so is $\hat\nu$. \eqref{eq:chartl1}--\eqref{eq:chartl3} also imply that no step of the algorithm increases the mass in any row of $\nu$. Moreover, \eqref{eq:AlgInc} and the definition of the algorithm imply that during each step, no elements of the $k$th row or the $k$th column of $\nu$ are increased, and that at least one such element is zeroed. It follows that row 1 is zeroed in at most $2n-3$ steps, followed by row 2, and ultimately followed by row $\bar k$. Thus a terminal state with $\bar\lambda_{[K]}=0$, and hence with $\bar\lambda\in\Lambda_{\mathscr{Z}(I)}(z)$, is reached in a finite number of steps. For future reference, we note that \begin{equation}\label{eq:AlgFrown} \Delta\lambda_{[K]} = \bar\lambda_{[K]} - \lambda_{[K]} = -\lambda_{[K]}. \end{equation} We now verify conditions (i)-(iv) from the statement of the lemma. First, for any $i \in I$, \eqref{eq:chartl2}, \eqref{eq:chartl3}, and \eqref{eq:chartl5} imply that in all three cases of the algorithm, $\Delta\nu_{[i]} = \Delta\nu^{[i]} = -\Delta\pi_{i}$; both sides equal 0 in Cases 1 and 2, and in Case 3 they equal $-c$ when $i = j$ and 0 otherwise. Thus aggregating over all steps of the algorithm yields $\Delta\lambda_{[i]} = \Delta\lambda^{[i]} = -\chi_i$, which is condition (i). Second, \eqref{eq:chartl4}--\eqref{eq:chartl6} imply that in all three cases, $\Delta\nu_{\mathbf{0}} = -\Delta\nu_{[K]} +\Delta\pi_{[I]}$. Aggregating over all steps of the algorithm yields $\Delta\lambda_{\mathbf{0}} = -\Delta\lambda_{[K]} +\chi_{[I]}$. Then substituting \eqref{eq:AlgFrown} yields $\Delta\lambda_{\mathbf{0}} = \lambda_{[K]} +\chi_{[I]}$, which is condition (ii).
Third, \eqref{eq:chartl5} and \eqref{eq:chartl6} imply that in all three cases, $\Delta\pi_{[I]} \leq -\Delta\nu_{[K]} $. Aggregating and using \eqref{eq:AlgFrown} yields $\chi_{[I]} \leq -\Delta\lambda_{[K]} =\lambda_{[K]}$, establishing (iii). Fourth, \eqref{eq:chartl6} and \eqref{eq:chartl7} imply that in all three cases, $|\Delta\nu|_{[\scrA]} \le -3 \Delta\nu_{[K]} $. Aggregating and using \eqref{eq:AlgFrown} yields $|\Delta\lambda|_{[\scrA]} \le -3\Delta\lambda_{[K]} =3\lambda_{[K]}$, establishing (iv). This completes the proof of Lemma \ref{lem:Approx1}. \subsection{Proof of Lemma \ref{lem:Approx2}} To prove Lemma \ref{lem:Approx2}, it is natural to introduce the notation $d = \lambda - \bar \lambda = -\Delta \lambda$ to represent the increment generated by a move from distribution $\bar\lambda \in \Lambda_{\mathscr{Z}(I)}(z)$ to distribution $\lambda\in \Lambda_{\mathscr{Z}}(z)$. We will show that when $\bar x_K =\max_{k \in K}x_k$ is small, such a move can only result in a slight reduction in relative entropy. To start, observe that \begin{gather} d_{[i]} = d^{[i]} = \chi_i\text{ for all }i \in I,\label{eq:d1} \\ d_{[k]} = d^{[k]} = \lambda_{[k]}\text{ for all }k \in K,\label{eq:d2} \\ d_{\mathbf{0}} = -\lambda_{[K]}-\chi_{[I]}\geq -2\lambda_{[K]},\text{ and}\label{eq:d3}\\ |d|_{[\scrA]} \leq 3\lambda_{[K]}\label{eq:d4}. \end{gather} Display \eqref{eq:d1} follows from part (i) of Lemma \ref{lem:Approx1}, display \eqref{eq:d3} from parts (ii) and (iii), and display \eqref{eq:d4} from part (iv).
For display \eqref{eq:d2}, note first that since $\lambda$ and $\bar\lambda$ are both in $\Lambda_{\mathscr{Z}}(z)$, we have \[ \lambda^{[k]} - \lambda_{[k]} =\sum\nolimits_{\mathscr{z} \in \mathscr{Z}}\mathscr{z}_k \lambda(\mathscr{z}) = z_k =\sum\nolimits_{\mathscr{z} \in \mathscr{Z}}\mathscr{z}_k \bar\lambda(\mathscr{z})= \bar\lambda^{[k]} - \bar\lambda_{[k]}, \] and hence \[ d^{[k]} = \lambda^{[k]} - \bar\lambda^{[k]} =\lambda_{[k]} - \bar\lambda_{[k]} = d_{[k]}; \] then \eqref{eq:d2} follows from the fact that $\bar\lambda_{[k]}=0$, which is true since $\bar\lambda \in \Lambda_{\mathscr{Z}(I)}(z)$. By definition, \begin{equation*} R(\lambda||\nu(\cdot|x)) = \sum_{i \in \scrA}\sum_{j\ne i}\paren{\lambda_{ij}\log \lambda_{ij} - \lambda_{ij} \log x_i \sigma_{ij}} + \lambda_{\mathbf{0}}\log\lambda_{\mathbf{0}} - \lambda_{\mathbf{0}} \log \Sigma, \end{equation*} where $\sigma_{ij} = \sigma_{ij}(x)$ and $\Sigma = \sum_{j \in \scrA}x_j\sigma_{jj}$. Thus, writing $\ell(p) = p\log p$ for $p\in (0,1]$ and $\ell(0) = 0$, we have \begin{align}\label{eq:REDiff} R(\lambda||\nu(\cdot|x))- R(\bar\lambda||\nu(\cdot|x)) &=\sum_{i \in \scrA}\sum_{j\ne i}\paren{\ell(\lambda_{ij}) - \ell(\bar\lambda_{ij}) }+\paren{ \ell(\lambda_{\mathbf{0}}) - \ell(\bar\lambda_{\mathbf{0}}) }\\ &\qquad - \paren{\sum_{i \in \scrA}\sum_{j\ne i} d_{ij} \log x_i\sigma_{ij} + d_{\mathbf{0}} \log\Sigma}.\notag \end{align} We first use \eqref{eq:d1}--\eqref{eq:d3}, Lemma \ref{lem:Approx1}(iii), and the facts that $\chi \geq 0$ and $\Sigma \geq \varsigma$ to obtain a lower bound on the final term of \eqref{eq:REDiff}: \begin{align*} -\Bigg(\sum_{i \in \scrA}&\sum_{j\ne i} d_{ij} \log x_i\sigma_{ij} + d_{\mathbf{0}} \log\Sigma\Bigg)\\ &= -\sum_{i \in I}\chi_i\log x_i -\sum_{k \in K}\lambda_{[k]}\log x_k-\sum_{i \in \scrA}\sum_{j\ne i} d_{ij} \log \sigma_{ij} + \paren{\sum_{i \in I}\chi_i +\sum_{k \in K}\lambda_{[k]}}\log\Sigma\\ &\geq\sum_{i \in I}\chi_i\log \Sigma +\sum_{k \in K}\lambda_{[k]} \log 
\frac{\Sigma}{x_k }+\sum_{i \in \scrA}\sum_{j\ne i} |d_{ij}| \log \sigma_{ij} \\ &\geq \chi_{[I]}\log\varsigma + \lambda_{[K]}\log \frac{\varsigma}{\bar x_K } +|d|_{[\scrA]}\log{\varsigma}\\ &\geq \lambda_{[K]}\log\varsigma + \lambda_{[K]}\log \frac{\varsigma}{\bar x_K } +3\lambda_{[K]}\log{\varsigma}\\ &\geq \lambda_{[K]}\log \frac{\varsigma^5}{\bar x_K }. \end{align*} To bound the initial terms on the right-hand side of \eqref{eq:REDiff}, observe that the function $\ell\colon[0,1] \to \mathbb{R}$ is convex, nonpositive, and minimized at $\mathrm{e}^{-1}$, where $\ell(\mathrm{e}^{-1}) = -\mathrm{e}^{-1}$. Define the convex function $\hat\ell \colon [0,1] \to \mathbb{R}$ by $\hat\ell(p)=\ell(p)$ if $p \leq \mathrm{e}^{-1}$ and $\hat\ell(p)=-\mathrm{e}^{-1}$ otherwise. We now argue that for any $p,q \in [0,1]$, we have \begin{equation}\label{eq:HatEllIneq} -|\ell(p) -\ell(q)|\geq \hat\ell(|p-q|). \end{equation} Since $\ell$ is nonpositive with minimum $-\mathrm{e}^{-1}$, $-|\ell(p) -\ell(q)|\geq -\mathrm{e}^{-1}$ always holds. If $|p-q| \leq \mathrm{e}^{-1}$, then $-|\ell(p) -\ell(q)|\geq \min\{\ell(|p-q|), \ell(1-|p-q|)\} = \ell(|p-q|)$; the inequality follows from the convexity of $\ell$, and the equality obtains because $\ell(r)-\ell(1-r) \leq 0$ for $r \in [0,\frac12]$, which is true because $r \mapsto \ell(r)-\ell(1-r)$ is convex on $[0,\frac12]$ and equals $0$ at the endpoints. Together these claims yield \eqref{eq:HatEllIneq}. 
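As a quick numerical sanity check of \eqref{eq:HatEllIneq}, with arbitrarily chosen values $p = \frac12$ and $q = \frac1{10}$, so that $|p-q| = \frac25 > \mathrm{e}^{-1}$:
\[
-\abs{\ell(\tfrac12) - \ell(\tfrac1{10})} \approx -\abs{(-0.347) - (-0.230)} \approx -0.116 \;\geq\; -\mathrm{e}^{-1} = \hat\ell(\tfrac25) \approx -0.368.
\]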
Together, inequality \eqref{eq:HatEllIneq}, Jensen's inequality, and inequality \eqref{eq:d4} imply that \begin{align*} \sum_{i \in \scrA}\sum_{j\ne i}\paren{\ell(\lambda_{ij}) - \ell(\bar\lambda_{ij}) } &\geq-\sum_{i \in \scrA}\sum_{j\ne i}\abs{\ell(\lambda_{ij}) - \ell(\bar\lambda_{ij}) }\\ &\geq\sum_{i \in \scrA}\sum_{j\ne i}\hat\ell(|\lambda_{ij} -\bar\lambda_{ij}|)\\ &\geq(n^2-n)\,\hat\ell\paren{\frac{\sum_{i \in \scrA}\sum_{j\ne i}|\lambda_{ij} -\bar\lambda_{ij}|}{n^2-n}}\\ &\geq(n^2-n)\,\hat\ell\paren{\tfrac{3}{n^2-n}\lambda_{[K]}}. \end{align*} Finally, display \eqref{eq:d3} implies that $d_{\mathbf{0}}=\lambda_{\mathbf{0}} -\bar \lambda_{\mathbf{0}} \in [-2\lambda_{[K]}, 0]$. Since $\ell\colon[0,1] \to \mathbb{R}$ is convex with $\ell(1) = 0$ and $\ell^\prime(1)=1$, it follows that \begin{equation*} \ell(\lambda_{\mathbf{0}}) - \ell( \bar \lambda_{\mathbf{0}}) \geq \ell( 1 + d_{\mathbf{0}}) -\ell(1) \geq d_{\mathbf{0}}\geq -2\lambda_{[K]}. \end{equation*} Substituting three of the last four displays into \eqref{eq:REDiff}, we obtain \[ R(\lambda||\nu(\cdot|x))- R(\bar\lambda||\nu(\cdot|x)) \geq (n^2 -n)\,\hat\ell\paren{\tfrac{3}{n^2-n}\lambda_{[K]}}+\lambda_{[K]}\paren{\log \frac{\varsigma^5}{\bar x_K }-2}. \] To complete the proof of the lemma, it is enough to show that if $\bar x_K \leq \varsigma^5\mathrm{e}^{-2}$, then \begin{equation}\label{eq:RELast} (n^2 -n)\,\hat\ell\paren{\tfrac{3}{n^2-n}\lambda_{[K]}}+\lambda_{[K]}\paren{\log \frac{\varsigma^5}{\bar x_K }-2}\geq -(n^2 -n)\paren{\frac {\bar x_K}{\mathrm{e}\varsigma^5}}^{1/3}. \end{equation} We do so by computing the minimum value of the left-hand side of \eqref{eq:RELast} over $\lambda_{[K]}\geq 0$. Let $a= n^2 - n$ and $c = \log \frac{\varsigma^5}{\bar x_K }-2$. The assumption that $\bar x_K \leq \varsigma^5\mathrm{e}^{-2}$ then becomes $c \geq 0$. 
Thus if $s > \frac{a}{3\mathrm{e}}$, then \[ a\hat\ell(\tfrac3a s)+cs = -a\mathrm{e}^{-1} + cs \geq -a \mathrm{e}^{-1}+c\cdot\tfrac{a}{3\mathrm{e}} = a \ell(\tfrac3a \cdot\tfrac{a}{3\mathrm{e}} )+c\cdot\tfrac{a}{3\mathrm{e}} . \] Hence if $s\leq \frac{a}{3\mathrm{e}}$ minimizes $a \ell(\frac3a s)+cs$ over $s \geq 0$, it also minimizes $a \hat\ell(\frac3a s)+cs$, and the minimized values are the same. To minimize $a \ell(\frac3a s)+cs$, note that it is convex in $s$; solving the first-order condition $3\log(\frac3a s)+3+c=0$ yields the minimizer $s^{\mathlarger *}= \frac{a}3\exp(-\frac{c}3-1)$. This is less than or equal to $\frac{a}{3\mathrm{e}}$ when $c \geq 0$. Plugging $s^{\mathlarger *}$ into the objective function yields $-a \exp(-\frac{c}3-1)$, and substituting in the values of $a$ and $c$ and simplifying yields the right-hand side of \eqref{eq:RELast}. This completes the proof of Lemma \ref{lem:Approx2}. \section{Proof of Proposition \ref{prop:interiorpaths}} \label{sec:RIOP} It remains to prove Proposition \ref{prop:interiorpaths}. Inequality \eqref{eq:LimSPBound} implies that solutions $\bar\phi$ to the mean dynamic \eqref{eq:MD} satisfy \begin{equation}\label{eq:NewPhi0} \varsigma -\bar\phi_i \leq \dot{\bar\phi}_i \leq 1 \end{equation} for every action $i \in \scrA$. Thus if $(\bar\phi_0)_i \leq \frac\varsigma2$, then for all $t \in [0, \frac\varsigma4]$, the upper bound in \eqref{eq:NewPhi0} yields $(\bar\phi_t)_i \leq \frac{3\varsigma}4$. Then the lower bound yields $ (\dot{\bar\phi}_t)_i \geq \varsigma - \frac{3\varsigma}4= \frac\varsigma4$, and thus \begin{equation}\label{eq:NewPhi1} (\bar\phi_0)_i \leq \tfrac\varsigma2\;\text{ implies that }\;(\bar\phi_t)_i-(\bar\phi_0)_i \geq \tfrac\varsigma4 t\;\text{ for all }t \in [0, \tfrac\varsigma4].
\end{equation} In addition, it follows easily from \eqref{eq:NewPhi0} and \eqref{eq:NewPhi1} that solutions $\bar\phi$ of \eqref{eq:MD} from every initial condition in $X$ satisfy \begin{equation}\label{eq:NewPhi2} (\bar\phi_t)_i \geq \tfrac\varsigma4 t\;\text{ for all }t \in [0, \tfrac\varsigma4]. \end{equation} Fix $\phi \in \mathscr{C}_x$ with $c_x(\phi) < \infty$, so that $\phi$ is absolutely continuous, with $\dot\phi_t \in Z$ at all $t \in [0,1]$ where $\phi$ is differentiable. Let $\bar\phi \in \mathscr{C}_x$ be the solution to \eqref{eq:MD} starting from $x$. Then for $\alpha \in (0, \tfrac\varsigma4]$, define the trajectory $\phi^\alpha \in \tilde \mathscr{C}_x$ as follows: \begin{equation}\label{eq:DefNewPhi} \phi^\alpha_t= \begin{cases} \bar\phi_t &\text{ if }t\leq\alpha,\\ \bar\phi_\alpha + (1 - \frac{2}{\varsigma}\alpha )(\phi_{t -\alpha}- x) &\text{ if }t>\alpha. \end{cases} \end{equation} Thus $\phi^\alpha$ follows the solution to the mean dynamic from $x$ until time $\alpha$; then, the increments of $\phi^\alpha$ starting at time $\alpha$ mimic those of $\phi$ starting at time 0, but are slightly scaled down. The next lemma describes the key properties of $\phi^\alpha$. In part (ii) and hereafter, $|\cdot|$ denotes the $\ell^1$ norm on $\mathbb{R}^n$. \begin{lemma}\label{lem:NewPhiProps} If $\alpha \in (0, \tfrac\varsigma4]$ and $t \in [\alpha,1]$, then \begin{mylist} \item $\dot\phi^\alpha_t = (1 - \tfrac{2}{\varsigma}\alpha )\dot\phi_{t -\alpha}$. \item $|\phi^\alpha_t - \phi_{t-\alpha}| \leq (2+\frac4\varsigma) \alpha$. \item for all $i \in \scrA$, $(\phi^\alpha_t)_i \geq \tfrac\varsigma4\alpha$. \item for all $i \in \scrA$, $(\phi^\alpha_t)_i \geq \tfrac\varsigma{12}(\phi_{t -\alpha})_i$. \end{mylist} \end{lemma} \textit{Proof. } Part (i) is immediate.
For part (ii), combine the facts that $\bar\phi_t$ and $\phi_t$ move at $\ell^1$ speed at most $2$ (since both have derivatives in $Z$) and the identity $\phi_{t-\alpha} = \phi_0 + (\phi_{t-\alpha} -\phi_0)$ with the definition of $\phi^\alpha_t$ to obtain \[ \abs{\phi^\alpha_t - \phi_{t-\alpha}} \leq \abs{ \bar\phi_\alpha - x} + \tfrac{2}{\varsigma}\alpha \abs{\phi_{t -\alpha}- x}\leq 2\alpha + \tfrac{2}{\varsigma}\alpha \cdot 2(t-\alpha) \leq (2+\tfrac4\varsigma) \alpha. \] We turn to part (iii). If $(\phi_{t -\alpha}- x)_i \geq 0$, then it is immediate from definition \eqref{eq:DefNewPhi} and inequality \eqref{eq:NewPhi2} that $(\phi^\alpha_t)_i \geq \tfrac\varsigma4\alpha$. So suppose instead that $(\phi_{t -\alpha}- x)_i < 0$. Then \eqref{eq:DefNewPhi} and the fact that $(\phi_{t -\alpha})_i\geq 0$ imply that \begin{equation}\label{eq:NewPhi3} (\phi^\alpha_t)_i\geq(\bar\phi_\alpha)_i - (1 - \tfrac{2}{\varsigma}\alpha )x_i=((\bar\phi_\alpha)_i -x_i) + \tfrac{2}{\varsigma}\alpha x_i. \end{equation} If $x_i \leq \frac\varsigma2$, then \eqref{eq:NewPhi3} and \eqref{eq:NewPhi1} yield \[ (\phi^\alpha_t)_i\geq ((\bar\phi_\alpha)_i -x_i) + \tfrac{2}{\varsigma}\alpha x_i \geq \tfrac\varsigma4\alpha + 0 = \tfrac\varsigma4\alpha. \] If $x_i \in[ \frac\varsigma2, \varsigma]$, then \eqref{eq:NewPhi3} and \eqref{eq:NewPhi0} yield \[ (\phi^\alpha_t)_i\geq((\bar\phi_\alpha)_i -x_i) + \tfrac{2}{\varsigma}\alpha x_i \geq 0 + \tfrac{2}{\varsigma}\alpha \cdot \tfrac\varsigma2\geq\alpha. \] And if $x_i \geq \varsigma$, then \eqref{eq:NewPhi3}, \eqref{eq:NewPhi0}, and the fact that $\dot{\bar\phi}_i \geq -1$ yield \[ (\phi^\alpha_t)_i\geq((\bar\phi_\alpha)_i -x_i) + \tfrac{2}{\varsigma}\alpha x_i \geq -\alpha + \tfrac{2}{\varsigma}\alpha \cdot \varsigma\geq\alpha. \] It remains to establish part (iv). If $(\phi_{t -\alpha})_i=0$ there is nothing to prove.
If $(\phi_{t -\alpha})_i\in (0, 3\alpha]$, then part (iii) implies that \[ \frac{(\phi^\alpha_t)_i}{(\phi_{t -\alpha})_i}\geq \frac{\tfrac\varsigma4\alpha}{3\alpha}=\frac\varsigma{12}. \] And if $(\phi_{t -\alpha})_i\geq 3\alpha$, then definition \eqref{eq:DefNewPhi} and the facts that $\dot{\bar\phi}_i \geq -1$ and $\alpha \leq \frac\varsigma4$ imply that \[ \frac{(\phi^\alpha_t)_i}{(\phi_{t -\alpha})_i} \geq \frac{(\bar\phi_\alpha - x)_i + (1 - \frac{2}{\varsigma}\alpha )(\phi_{t -\alpha})_i+\frac{2}{\varsigma}\alpha x_i}{(\phi_{t -\alpha})_i} \geq -\frac{\alpha}{3\alpha}+ 1 - \frac{2}{\varsigma}\alpha \geq \frac23 - \frac12 = \frac16 \geq \frac\varsigma{12}. \;\hspace{4pt}\ensuremath{\blacksquare} \] Each trajectory $\phi^\alpha$ is absolutely continuous, and Lemma \ref{lem:NewPhiProps}(ii) and the fact that \eqref{eq:MD} is bounded imply that $\phi^\alpha$ converges uniformly to $\phi$ as $\alpha$ approaches 0. This uniform convergence implies that \begin{equation}\label{eq:hConv} \lim_{\alpha\to 0}h(\phi^\alpha)= h(\phi). \end{equation} Since $\phi^\alpha_{[0,\alpha]}$ is a solution to \eqref{eq:MD}, and thus has cost zero, it follows from Lemma \ref{lem:NewPhiProps}(i) and the convexity of $L(x, \cdot)$ that \begin{align} c_{x}(\phi^\alpha) &= \int_{\alpha}^{1}L(\phi^{\alpha}_t,\dot{\phi}^{\alpha}_t)\,\mathrm{d} t\notag\\ &\leq \int_{\alpha}^{1}L(\phi^{\alpha}_t,\dot{\phi}_{t-\alpha})\,\mathrm{d} t +\int_{\alpha}^{1}\tfrac{2}{\varsigma}\alpha \paren{L(\phi^{\alpha}_t,\mathbf{0})-L(\phi^{\alpha}_t,\dot{\phi}_{t-\alpha})}\mathrm{d} t.\label{ex:CostDecomp} \end{align} To handle the second integral in \eqref{ex:CostDecomp}, fix $t \in [\alpha, 1]$. Since $\phi^\alpha_t\in \Int(X)$, $\nu(\cdot|\phi^\alpha_t)$ has full support $\mathscr{Z}$; moreover, the set of extreme points of $Z$ is $\ext(Z) =\{e_j-e_i\colon j\neq i \}=\mathscr{Z} \smallsetminus\{\mathbf{0}\}$.
Therefore, the convexity of $L(x, \cdot)$, the final equality in \eqref{eq:SimpleLBound0}, the lower bound \eqref{eq:LimSPBound}, and Lemma \ref{lem:NewPhiProps}(iii) imply that for all $z \in Z$, \begin{align*} L(\phi^\alpha_t,z) \leq \max_{i\in \scrA}\max_{j\ne i}L(\phi^\alpha_t, e_j-e_i) \leq -\!\log\paren{\varsigma \min_{i\in \scrA}\hspace{1pt}(\phi^\alpha_t)_i} \leq -\!\log \tfrac{\varsigma^2}4 \alpha. \end{align*} Thus since $L$ is nonnegative and since $\lim_{\alpha\to0}\alpha \log{\alpha} = 0$, the second integrand in \eqref{ex:CostDecomp} converges uniformly to zero, and so \begin{equation}\label{eq:SecondIntBound} \lim_{\alpha \to 0}\int_{\alpha}^{1}\tfrac{2}{\varsigma}\alpha \paren{L(\phi^{\alpha}_t,\mathbf{0})-L(\phi^{\alpha}_t,\dot{\phi}_{t-\alpha})}\mathrm{d} t = 0. \end{equation} To bound the first integral in \eqref{ex:CostDecomp}, note first that by representation \eqref{eq:CramerRep}, for each $t \in [0, 1]$ there is a probability measure $\lambda_t \in \Delta(\mathscr{Z})$ with $\lambda_t \ll \nu(\cdot|\phi_{t})$ such that \begin{gather} L(\phi_{t},\dot{\phi}_{t})=R(\lambda_{t}||\nu(\cdot|\phi_{t}))\;\text{ and}\label{eq:LNew1}\\ L(\phi^{\alpha}_{t+\alpha},\dot{\phi}_{t})\leq R(\lambda_{t}||\nu(\cdot|\phi^{\alpha}_{t+\alpha}))\;\text{ for all }\alpha \in (0,1].\label{eq:LNew2} \end{gather} DE Lemma 6.2.3 ensures that $\{\lambda_t\}_{t\in[0,1]}$ can be chosen to be a measurable function of $t$. (Here and below, we ignore the measure zero set on which either $\dot{\phi}_{t}$ is undefined or $L(\phi_{t},\dot{\phi}_{t})=\infty$.) 
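Both displays express the variational form of representation \eqref{eq:CramerRep}: $\lambda_t$ attains the minimum in the problem defining $L(\phi_t,\dot\phi_t)$, while (since $\phi^\alpha_{t+\alpha} \in \Int(X)$, so that $\nu(\cdot|\phi^\alpha_{t+\alpha})$ has full support and $\lambda_t \ll \nu(\cdot|\phi^\alpha_{t+\alpha})$) $\lambda_t$ is merely feasible for the problem defining $L(\phi^\alpha_{t+\alpha},\dot\phi_t)$:
\[
L(\phi^{\alpha}_{t+\alpha},\dot{\phi}_{t}) = \min\Big\{R(\lambda\,||\,\nu(\cdot|\phi^{\alpha}_{t+\alpha}))\colon \textstyle\sum\nolimits_{\mathscr{z}\in\mathscr{Z}}\mathscr{z}\,\lambda(\mathscr{z}) = \dot{\phi}_{t}\Big\} \leq R(\lambda_{t}\,||\,\nu(\cdot|\phi^{\alpha}_{t+\alpha})).
\]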
Lemma \ref{lem:RelEnt2} and expressions \eqref{eq:LNew1} and \eqref{eq:LNew2} imply that \begin{align*} \limsup_{\alpha \to 0}\int_{\alpha}^{1}L(\phi^{\alpha}_t,\dot{\phi}_{t-\alpha})\,\mathrm{d} t &=\limsup_{\alpha \to 0}\int_{0}^{1-\alpha}L(\phi^{\alpha}_{t+\alpha},\dot{\phi}_{t})\,\mathrm{d} t \\ &\leq \limsup_{\alpha \to 0}\paren{\int_{0}^{1-\alpha}R(\lambda_{t}||\nu(\cdot|\phi^{\alpha}_{t+\alpha}))\,\mathrm{d} t +\int_{1-\alpha}^1 0\,\mathrm{d} t}\\ &= \int_{0}^{1}R(\lambda_{t}||\nu(\cdot|\phi_{t}))\,\mathrm{d} t \\ &=\int_{0}^{1}L(\phi_t,\dot{\phi}_t)\,\mathrm{d} t \\ &= c_x(\phi), \end{align*} where the third line follows from the dominated convergence theorem and Lemma \ref{lem:RelEnt2} below. Combining this inequality with \eqref{eq:hConv}, \eqref{ex:CostDecomp}, and \eqref{eq:SecondIntBound}, we see that \[ \inf_{\phi\in\mathscr{C}^{\circ}}\paren{c_{x}(\phi)-h(\phi)}\leq \limsup_{\alpha\to0}\paren{c_{x}(\phi^\alpha)-h(\phi^\alpha)} \leq c_{x}(\phi)-h(\phi). \] Since $\phi \in \mathscr{C}_x$ was arbitrary, the result follows. It remains to prove the following lemma: \begin{lemma}\label{lem:RelEnt2} Write $R^\alpha_{t+\alpha} = R(\lambda_{t}||\nu(\cdot|\phi^{\alpha}_{t+\alpha}))$ and $R_t = R(\lambda_{t}||\nu(\cdot|\phi_{t}))$. Then \begin{mylist} \item for all $t \in [0,1)$, $\lim_{\alpha \to 0} R^\alpha_{t+\alpha} = R_t$; \item for all $\alpha >0$ small enough and $t \in [0, 1- \alpha]$, $R^\alpha_{t+\alpha} \leq R_t + \log \frac{12}\varsigma + 1$. \end{mylist} \end{lemma} \textit{Proof.
} Definition \eqref{eq:CondLawLimit} implies that \begin{gather}\label{eq:RelEntCalc2} \hspace{-.25in}R^\alpha_{t+\alpha} - R_t=\sum_{\mathscr{z} \in \mathscr{Z}(\phi_t)}\lambda_{t}(\mathscr{z})\log\frac{\nu(\mathscr{z}|\phi_t)}{\nu(\mathscr{z}|\phi^\alpha_{t+\alpha})}\\ =\sum_{i \in \support(\phi_t)}\sum_{j \in \scrA\smallsetminus\{i\}}\lambda_{t}(e_j-e_i)\paren{\log\frac{(\phi_t)^{}_i}{(\phi^\alpha_{t+\alpha})^{}_i} +\log \frac{\sigma_{ij}(\phi_t)}{\sigma_{ij}(\phi^\alpha_{t+\alpha})}} + \lambda_t(\mathbf{0})\log\frac{\sum_{i\in \scrA}(\phi_t)^{}_i\hspace{1pt}\sigma_{ii}(\phi_t)}{\sum_{i\in \scrA}(\phi^\alpha_{t+\alpha})^{}_i\hspace{1pt}\sigma_{ii}(\phi^\alpha_{t+\alpha})}.\notag \end{gather} The uniform convergence from Lemma \ref{lem:NewPhiProps}(ii) and the continuity of $\sigma$ imply that for each $t \in [0, 1)$, the denominators of the fractions in \eqref{eq:RelEntCalc2} converge to their numerators as $\alpha$ vanishes, implying statement (i). The lower bound \eqref{eq:LimSPBound} then implies that the second and third logarithms in \eqref{eq:RelEntCalc2} themselves converge uniformly to zero as $\alpha$ vanishes; in particular, for $\alpha$ small enough and all $t \in [0, 1-\alpha]$, these logarithms are bounded above by 1. Moreover, Lemma \ref{lem:NewPhiProps}(iv) implies that when $\alpha$ is small enough and $t \in [0, 1-\alpha]$ is such that $i \in \support(\phi_t)$, the first logarithm is bounded above by $\log \frac{12}\varsigma$. Together these claims imply statement (ii). This completes the proof of the lemma, and hence the proof of Proposition \ref{prop:interiorpaths}. \section{Proofs and Auxiliary Results for Section \ref{sec:App}}\label{sec:LogPotProofs} In the analyses in this section, we are often interested in the action of a function's derivative in directions $z \in \mathbb{R}^n_0$ that are tangent to the simplex. 
With this in mind, we let $\mathbf{1} \in \mathbb{R}^n$ be the vector of ones, and let $P = I - \frac1n\mathbf{1}\mathbf{1}'$ be the matrix that orthogonally projects $\mathbb{R}^n$ onto $\mathbb{R}^n_0$. Given a function $g\colon \mathbb{R}^n \to \mathbb{R}$, we define the \emph{gradient of} $g$ \emph{with respect to} $\mathbb{R}^n_0$ by $\nabla_{\!0} g(x) = P \nabla g(x)$, so that for $z \in \mathbb{R}^n_0$, we have $\nabla g(x)^\prime z = \nabla g(x)^\prime P z = (P \nabla g(x))^\prime z= \nabla_{\!0} g(x)^\prime z$. \noindent\emph{Proof of Proposition \ref{prop:LogPotGC}}. It is immediate from the definition of $M^\eta$ that $M^\eta(\pi) = M^\eta(P\pi)$ for all $\pi \in \mathbb{R}^n$, leading us to introduce the notation $\bar M^\eta \equiv M^\eta|_{\mathbb{R}^n_0}$. Recalling that $h(x) = \sum_{k \in S} x_k\log x_k$ denotes the negated entropy function, one can verify by direct substitution that \begin{equation}\label{eq:Inverses} \bar M^\eta\colon \mathbb{R}^n_0 \to \Int(X)\text{ and }\eta \nabla_{\!0} h\colon \Int(X) \to \mathbb{R}^n_0\text{ are inverse functions}. \end{equation} Now let $x_t \in \Int(X)$ and $y_t=M^\eta(F(x_t)) = M^\eta(P F(x_t))$. Then display \eqref{eq:Inverses} implies that $\eta \nabla_{\!0} h(y_t)=P F(x_t)$. Since $f^\eta(x) = \eta^{-1}f(x) - h(x)$, $\nabla f(x) = F(x)$, and $\dot x_t = M^\eta(F(x_t ))-x_t\in \mathbb{R}^n_0$, we can compute as follows: \begin{align*} \tfrac {\mathrm{d}}{\mathrm{d} t} f^\eta(x_t ) &=\nabla_{\!0} f^\eta(x_t)'\dot x_t = (\eta ^{-1}P F(x_t )-\nabla_{\!0} h(x_t ))'(M^\eta(F(x_t ))-x_t ) \\ &= (\nabla_{\!0} h(y_t )-\nabla_{\!0} h(x_t ){)}'(y_t -x_t ) \ge 0, \end{align*} strictly so whenever $M^\eta(F(x_{t})) \ne x_{t}$, by the strict convexity of $h$. Since the boundary of $X$ is repelling under \eqref{eq:LogitDyn}, the proof is complete. \hspace{4pt}\ensuremath{\blacksquare} We now turn to Lemma \ref{lem:NewHLemma} and subsequent claims.
We start by stating the generalization of the Hamilton-Jacobi equation \eqref{eq:HJ}. For each $R \subseteq S$ with $\#R \geq 2$, let $X_{R} = \{x \in X: \support(x)\subseteq R\}$, and define $f^\eta_{R}\colon X \to \mathbb{R}$ by \[ f^\eta_{R} (x)=\eta ^{-1}f(x)-\left( {\sum\limits_{i\in R} {x_i \log x_i } +\sum\limits_{j\in S\smallsetminus R} {x_j } } \right). \] Evidently, \begin{equation}\label{eq:fetaR} f^\eta_{R}(x)=f^\eta(x)\text{ when }\support(x) = R. \end{equation} Our generalization of equation \eqref{eq:HJ} is \begin{equation}\label{eq:HJ2} H(x,-\nabla f^\eta_{R} (x))\leq 0\text{ when }\support(x) = R,\text{ with equality if and only if }R=S. \end{equation} To use \eqref{eq:fetaR} and \eqref{eq:HJ2} to establish the lower bound $c^{\mathlarger *}_y \geq -f^\eta(y)$ for paths $\phi\in\mathscr{C}_{x^{\mathlarger *}}[0,T]$, $\phi(T)=y$ that include boundary segments, define $S_t = \support(\phi_t)$. At any time $t$ at which $\phi$ is differentiable, $\dot\phi_t$ is tangent to the face of $X$ corresponding to $S_t$, and so \eqref{eq:fetaR} implies that $\tfrac{\mathrm{d}}{\mathrm{d} t} f^\eta(\phi_t) = \nabla f^\eta_{S_t}(\phi_t)'\dot\phi_t$. We therefore have \begin{align} c_{x^{\mathlarger *}\!,T} (\phi ) &= \int_{0}^{T} {\sup _{u_t \in \mathbb{R}^n_0 } \left( {{u}'_t \dot {\phi }_t -H(\phi _t ,u_t )} \right)\mathrm{d} t} \ge \int_{0}^{T} {\left( {-\nabla f^\eta_{S_t} (\phi _t )'\dot {\phi }_t -H(\phi _t ,-\nabla f^\eta_{S_t} (\phi _t ))} \right)\mathrm{d} t}\notag \\ &\geq \int_{0}^{T} \!\!\! -\nabla f^\eta_{S_t}(\phi _t )'\dot \phi _t \,\mathrm{d} t = f^\eta(x^{\mathlarger *}) -f^\eta(y ) = -f^\eta(y ),\label{eq:CostBoundBd} \end{align} establishing the lower bound. \noindent\emph{Derivation of property \eqref{eq:HJ2}}. Let $x \in X$ have support $R \subseteq S$, $\#R \geq 2$.
Then since $P\mathbf{1} = \mathbf{0}$, \begin{equation}\label{eq:nablofeta} \nabla_{\!0} f^\eta_{R} (x)= P \left( {\eta ^{-1}F(x)-\sum\limits_{i\in R} {e_i (1+\log x_i )-\sum\limits_{j\in S\smallsetminus R} {e_j } } } \right) = P \left( {\eta ^{-1}F(x)-\sum\limits_{i\in R} {e_i \log x_i } } \right). \end{equation} Recalling definition \eqref{eq:CramerTr} of $H$, letting $\zeta_x$ be a random variable with distribution $\nu(\cdot|x)$, and using the fact that $P(e_{j} - e_{i})=e_{j} - e_{i}$, we compute as follows: \begin{align*} \exp&(H(x,-\nabla f^\eta_{R} (x)))= \mathbb{E}\exp(-\nabla f^\eta_{R} (x{)}'\zeta _x )\\ &= \sum\limits_{i\in S} {\sum\limits_{j\ne i} {\exp (-\nabla f^\eta_{R} (x{)}'(e_j -e_i ))\mathbb{P}(\zeta _x =e_j -e_i )} } +\mathbb{P}(\zeta_{x} = 0) \\ &= \sum\limits_{i\in R} {\sum\limits_{j\in S\smallsetminus \{i\}} {\exp (-\eta ^{-1}F_j (x)+\eta ^{-1}F_i (x)+\log x_j -\log x_i )\;x_i \frac{\exp (\eta ^{-1}F_j (x))}{\sum\nolimits_{k\in S} {\exp (\eta ^{-1}F_k (x))} }} } \\ &\hspace{2em}+ \sum\limits_{i\in S} {x_i \frac{\exp (\eta ^{-1}F_i (x))}{\sum\nolimits_{k\in S} {\exp (\eta ^{-1}F_k (x))} }} \\ &= \sum\limits_{i\in R} {\frac{\exp (\eta ^{-1}F_i (x))}{\sum\nolimits_{k\in S} {\exp (\eta ^{-1}F_k (x))} }(1-x_i)} +\sum\limits_{i\in R} {\frac{\exp (\eta ^{-1}F_i (x))}{\sum\nolimits_{k\in S} {\exp (\eta ^{-1}F_k (x))} }\;x_i } \\ &= \frac{\sum\nolimits_{i\in R}\exp (\eta ^{-1}F_i (x))}{\sum\nolimits_{k\in S} {\exp (\eta ^{-1}F_k (x))} } . \end{align*} Since the final expression equals 1 when $R=S$ and is less than 1 when $R \subset S$, property \eqref{eq:HJ2} follows. \hspace{4pt}\ensuremath{\blacksquare} \noindent\emph{Derivation of equation \eqref{eq:HFOC}}. 
Let $x \in \Int(X)$, and observe that \begin{equation}\label{eq:PartialH} \frac{\partial H}{\partial u_i } (x,u) = \frac{\sum\nolimits_{j\ne i} \left( \exp (u_i -u_j )x_j \exp (\eta ^{-1}F_i (x))-\exp (u_j -u_i )x_i \exp (\eta ^{-1}F_j (x)) \right)}{\mathbb{E}\exp (u'\zeta _x )\sum\nolimits_{k\in S} \exp (\eta ^{-1}F_k (x))} . \end{equation} Recall from the previous derivation that $\mathbb{E}\exp(-\nabla f^\eta (x{)}'\zeta _x )=1$. Thus since $u_i-u_j=(e_i-e_j)^\prime u = (P (e_i-e_j))^\prime u$, it follows from \eqref{eq:PartialH} that $\frac{\partial H}{\partial u_i } (x,u)=\frac{\partial H}{\partial u_i } (x,P u)$, so we can use equation \eqref{eq:nablofeta} with $R=S$ to compute as follows: \begin{align*} \frac{\partial H}{\partial u_i }&(x,-\nabla f^\eta(x))=\frac{\partial H}{\partial u_i }(x,-\nabla_{\!0} f^\eta(x))\\ &= \frac{1}{\sum\limits_{k\in\scrA} {\exp (\eta ^{-1}F_k (x))} }\sum\limits_{j\ne i} {\left( {\exp (-\eta ^{-1}F_i (x)+\eta ^{-1}F_j (x)+\log x_i -\log x_j )\,x_j \exp (\eta ^{-1}F_i (x))} \right.} \\ &\hspace{2em}-\left. {\exp (-\eta ^{-1}F_j (x)+\eta ^{-1}F_i (x)+\log x_j -\log x_i )\,x_i \exp (\eta ^{-1}F_j (x))} \right) \\ &= \frac{1}{\sum\limits_{k\in\scrA} {\exp (\eta ^{-1}F_k (x))} }\sum\limits_{j\ne i} {\left( {x_i \exp (\eta ^{-1}F_j (x))-x_j \exp (\eta ^{-1}F_i (x))} \right)} \\ &= x_i \frac{\sum\nolimits_{j\ne i} {\exp (\eta ^{-1}F_j (x))} }{\sum\nolimits_{k\in\scrA} {\exp (\eta ^{-1}F_k (x))} }-(1-x_i )\frac{\exp (\eta ^{-1}F_i (x))}{\sum\nolimits_{k\in\scrA} {\exp (\eta ^{-1}F_k (x))} } \\ &= x_{i} (1 -M^\eta_i (F(x))) - (1 - x_{i}) M^\eta_i (F(x)) \\ &= x_{i} - M^\eta_i (F(x)). \hspace{4pt}\ensuremath{\blacksquare} \end{align*} \section{Additional Details}\label{sec:AD} \subsection{Proof of the Laplace principle upper bound: further details} \label{sec:ProofUpperApp} Here we give a detailed proof of the Laplace principle upper bound. The argument follows DE Section 6.2. 
By equation \eqref{eq:VNInt}, and the remarks that follow it, there is an optimal control sequence $\{\lambda^N_k\}_{k=0}^{N-1}$ that attains the infimum in \eqref{eq:VNInt}: \begin{equation}\label{eq:VNAgain} V^{N}(x^N)=\mathbb{E}_{x^N}\paren{\int_{0}^{1}R(\bar{\lambda}^{N}_{t}\,||\,\nu^{N}(\cdot\hspace{1pt}|\hspace{1pt}\bar{\xi}^{N}_{t}))\,\mathrm{d} t+h(\smash{\hat{\boldsymbol{\xi}}}^{N})}. \end{equation} The control measure $\Lambda^N$ and the interpolated processes $\smash{\hat\boldsymbol{\xi}}^N$ and $\smash{\bar\boldsymbol{\xi}}^N$ induced by the control sequence are defined in the previous section. Proposition \ref{prop:converge} implies that there is a random measure $\Lambda$ and a random process $\boldsymbol{\xi}$ such that some subsubsequence of $(\Lambda^{N},\smash{\hat\boldsymbol{\xi}}^N,\smash{\bar\boldsymbol{\xi}}^N)$ converges in distribution to $(\Lambda,\boldsymbol{\xi},\boldsymbol{\xi})$. By the Skorokhod representation theorem, we can assume without loss of generality that $(\Lambda^{N},\smash{\hat\boldsymbol{\xi}}^N,\smash{\bar\boldsymbol{\xi}}^N)$ converges almost surely to $(\Lambda,\boldsymbol{\xi},\boldsymbol{\xi})$ (again along the subsubsequence, which hereafter is fixed). The next lemma establishes the uniform convergence of relative entropies generated by the transition probabilities $\nu^N(\cdot | x)$ of the stochastic evolutionary process. \begin{lemma}\label{lem:RelEnt} For each $N \geq N_0$, let $\lambda^N \colon \mathscr{X}^N \to \Delta(\mathscr{Z})$ be a transition kernel satisfying $\lambda^N(\cdot |x) \ll \nu^N(\cdot |x) $ for all $x \in \mathscr{X}^N$. Then \[ \lim_{N\to\infty}\max_{x\in\mathscr{X}^N}\abs{R\paren{\lambda^N(\cdot|x)\hspace{1pt}||\hspace{1pt} \nu^N(\cdot|x)} - R\paren{\lambda^N(\cdot|x)\hspace{1pt}||\hspace{1pt} \nu(\cdot|x)}}=0. \] \end{lemma} \textit{Proof. 
} Definitions \eqref{eq:CondLaw} and \eqref{eq:CondLawLimit} imply that \begin{equation}\label{eq:RelEntCalc} \begin{split} R&\paren{\lambda^N(\cdot|x)\hspace{1pt}||\hspace{1pt} \nu(\cdot|x)} - R\paren{\lambda^N(\cdot|x)\hspace{1pt}||\hspace{1pt} \nu^N(\cdot|x)} =\sum_{\mathscr{z} \in \mathscr{Z}(x)}\lambda^N(\mathscr{z}|x)\log\frac{\nu^N(\mathscr{z}|x)}{\nu(\mathscr{z}|x)}\\ &=\sum_{i \in \support(x)}\sum_{j \in S\smallsetminus\{i\}}\lambda^N(e_j-e_i|x)\log\frac{\sigma^N_{ij}(x)}{\sigma_{ij}(x)} + \lambda^N(\mathbf{0}|x)\log\frac{\sum_{i\in\support(x)}x_i\sigma^N_{ii}(x)}{\sum_{i\in\support(x)}x_i\sigma_{ii}(x)}. \end{split} \end{equation} By the uniform convergence in \eqref{eq:LimSPs}, there is a vanishing sequence $\{\varepsilon^N\}$ such that \begin{gather*} \max_{x \in \mathscr{X}^N}\max_{i,j \in S}|\sigma_{ij}^N(x) - \sigma_{ij}( x)| \leq \varepsilon^N. \end{gather*} This inequality, the lower bound \eqref{eq:LimSPBound}, and display \eqref{eq:RelEntCalc} imply that for each $\mathscr{z} \in \mathscr{Z}(x)$ and $x \in \mathscr{X}^N$, we can write \[ \log\frac{\nu^N(\mathscr{z}|x)}{\nu(\mathscr{z}|x)} = \log\paren{1+\frac{\varepsilon^N(x,\mathscr{z})}{\varsigma(x,\mathscr{z})}}, \] where $|\varepsilon^N(x,\mathscr{z})|\leq \varepsilon^N$ and $\varsigma(x,\mathscr{z})\geq \varsigma$. This fact and display \eqref{eq:RelEntCalc} imply the result. \hspace{4pt}\ensuremath{\blacksquare} To proceed, we introduce a more general definition of relative entropy and an additional lemma. For $\alpha, \beta \in \mathscr{P}(\mathscr{Z} \times [0, 1])$ with $\beta \ll \alpha$, let $\frac{\mathrm{d} \beta}{\mathrm{d} \alpha}\colon \mathscr{Z} \times [0, 1] \to \mathbb{R}_+$ be the Radon-Nikodym derivative of $\beta$ with respect to $\alpha$. 
The \emph{relative entropy} of $\beta$ with respect to $\alpha$ is defined by \[ \mathscr{R}( \beta || \alpha) = \int_{\mathscr{Z} \times [0, 1]} \log\paren{\frac{\mathrm{d} \beta}{\mathrm{d} \alpha}(\mathscr{z},t)}\mathrm{d} \beta(\mathscr{z},t). \] We then have the following lemma (DE Theorem 1.4.3(f)): \begin{lemma}\label{lem:RECR} Let $\{\pi_t\}_{t\in[0,1]}$, $\{\hat\pi_t\}_{t\in[0,1]}$ with $\pi_t, \hat\pi_t \in \Delta(\mathscr{Z})$ be Lebesgue measurable in $t$, and suppose that $\hat\pi_t \ll \pi_t$ for almost all $t \in [0, 1]$. Then \begin{equation}\label{eq:scrRandR} \mathscr{R}( \hat\pi_t \otimes dt \,||\, \pi_t \otimes dt) = \int_0^1 R( \hat\pi_t \hspace{1pt}||\hspace{1pt} \pi_t)\,\mathrm{d} t. \end{equation} \end{lemma} \noindent This result is an instance of the \emph{chain rule for relative entropy}, which expresses the relative entropy of two probability measures on a product space as the sum of two terms: the expected relative entropy of the conditional distributions of the first component given the second, and the relative entropy of the marginal distributions of the second component (see DE Theorem C.3.1). In Lemma \ref{lem:RECR}, the marginal distributions of the second component are both Lebesgue measure; thus the second summand is zero, yielding formula \eqref{eq:scrRandR}. We now return to the main line of argument. For a measurable function $\phi\colon[0,1]\to X$, define the collection $\{\nu_t^{\phi}\}_{t\in[0,1]}$ of measures in $\Delta(\mathscr{Z})$ by \begin{equation}\label{eq:AnotherNu} \nu_t^{{\phi}}(\mathscr{z})=\nu(\mathscr{z}|{\phi}_{t}). 
\end{equation} Focusing still on the subsubsequence from Proposition \ref{prop:converge}, we begin our computation as follows: \begin{align} \liminf_{N\rightarrow\infty}V^{N}(x^{N}) &= \liminf_{N\rightarrow\infty}\mathbb{E}_{x^N}\paren{\int_{0}^{1}R\paren{\bar{\lambda}^{N}_{t}\hspace{1pt}||\hspace{1pt} \nu^{N}(\cdot|\bar{\xi}^{N}_{t})}\mathrm{d} t+h(\smash{\hat{\boldsymbol{\xi}}}^{N})}\notag\\ &=\liminf_{N\rightarrow\infty}\mathbb{E}_{x^N}\paren{\int_{0}^{1}R\paren{\bar{\lambda}^{N}_{t}\hspace{1pt}||\hspace{1pt} \nu(\cdot|\bar{\xi}^{N}_{t})}\mathrm{d} t+h(\smash{\hat{\boldsymbol{\xi}}}^{N})}\notag\\ &=\liminf_{N\rightarrow\infty}\mathbb{E}_{x^N}\paren{\mathscr{R}( \bar\lambda^N_t \otimes dt \,||\, \nu_t^{\hspace{1pt}\smash{\bar\boldsymbol{\xi}}^N}\! \otimes dt) +h(\smash{\hat{\boldsymbol{\xi}}}^{N})}\notag \end{align} The first line is equation \eqref{eq:VNAgain}. The second line follows from Lemma \ref{lem:RelEnt}, using Observation \ref{obs:VarRep} to show that the optimal control sequence $\{\lambda^N_k\}$ satisfies $\lambda^N_k(\hspace{1pt}\cdot\hspace{1pt}|x_0, \ldots , x_{k}) \ll \nu^N(\cdot|x_k)$ for all $(x_{0},\ldots,x_{k})\in(\mathscr{X}^{N})^{k+1}$ and $k\in\{0,1,2,\ldots,N-1\}$. The third line follows from equation \eqref{eq:AnotherNu} and Lemma \ref{lem:RECR}. We specified above that $(\Lambda^{N}= \bar\lambda^N_t\otimes dt,\smash{\hat\boldsymbol{\xi}}^N,\smash{\bar\boldsymbol{\xi}}^N)$ converges almost surely to $(\Lambda= \lambda_t \otimes dt ,\boldsymbol{\xi},\boldsymbol{\xi})$ in the topology of weak convergence, the topology of uniform convergence, and the Skorokhod topology, respectively. The last of these implies that $\smash{\bar\boldsymbol{\xi}}^N$ also converges to $\boldsymbol{\xi}$ almost surely in the uniform topology (DE Theorem A.6.5(c)). Thus, since $x \mapsto \nu( \cdot | x)$ is continuous, $\nu_t^{\smash{\bar\boldsymbol{\xi}}^N}$ converges weakly to $\nu_t^{\boldsymbol{\xi}}$ for all $t \in [0, 1]$ with probability one. 
This implies in turn that $\nu_t^{\hspace{1pt}\smash{\bar\boldsymbol{\xi}}^N}\! \otimes dt$ converges weakly to $\nu_t^{\boldsymbol{\xi}}\! \otimes dt$ with probability one (DE Theorem A.5.8). Finally, $\mathscr{R}$ is lower semicontinuous (DE Lemma 1.4.3(b)) and $h$ is continuous. Thus DE Theorem A.3.13(b), an extension of Fatou's lemma, yields \[ \liminf_{N\rightarrow\infty}\mathbb{E}_{x^N}\paren{\mathscr{R}( \bar\lambda^N_t \otimes dt \,||\, \nu_t^{\hspace{1pt}\smash{\bar\boldsymbol{\xi}}^N}\! \otimes dt) +h(\smash{\hat{\boldsymbol{\xi}}}^{N})} \geq \mathbb{E}_{x}\paren{\mathscr{R}( \lambda_t \otimes dt \,||\, \nu_t^{\boldsymbol{\xi}}\! \otimes dt) +h({\boldsymbol{\xi}})}. \] To conclude, we argue as follows: \begin{align*} \liminf_{N\rightarrow\infty}V^{N}(x^{N}) &\geq \mathbb{E}_{x}\paren{\mathscr{R}( \lambda_t \otimes dt \,||\, \nu_t^{\boldsymbol{\xi}}\! \otimes dt) +h({\boldsymbol{\xi}})}\\ &=\mathbb{E}_x\paren{\int_{0}^{1}R\paren{\vphantom{I^N}{\lambda}_{t}\hspace{1pt}||\hspace{1pt} \nu(\cdot|\xi_{t})}\mathrm{d} t+h(\boldsymbol{\xi})}\\ &\geq\mathbb{E}_x\paren{\int_{0}^{1}L\left(\xi_{t},\sum_{\mathscr{z}\in\mathscr{Z}}\mathscr{z} \lambda_{t}(\mathscr{z})\right)\mathrm{d} t+h(\boldsymbol{\xi})}\\ &= \mathbb{E}_x\paren{\int_{0}^{1}L(\xi_{t},\dot{\xi}_{t})\,\mathrm{d} t+h(\boldsymbol{\xi})}\\ &\geq \inf_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}. \end{align*} Here the second line follows from equation \eqref{eq:AnotherNu} and Lemma \ref{lem:RECR}, the third from representation \eqref{eq:CramerRep}, the fourth from Proposition \ref{prop:converge}(iii), and the fifth from the definition \eqref{eq:PathCost} of the cost function $c_x$. Since every subsequence has a subsequence that satisfies the last string of inequalities, the sequence as a whole must satisfy the string of inequalities. This establishes the upper bound \eqref{eq:LPUpper}. 
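The manipulation behind Lemma \ref{lem:RelEnt} is the elementary identity $R(\lambda\,||\,\nu)-R(\lambda\,||\,\nu^N)=\sum_{\mathscr{z}}\lambda(\mathscr{z})\log\bigl(\nu^N(\mathscr{z})/\nu(\mathscr{z})\bigr)$, which bounds the difference of the two relative entropies by the sup-norm of the log-ratio of the reference measures. A minimal numerical illustration (the distributions are arbitrary and the function name is ours):

```python
import math

def rel_ent(lam, nu):
    """R(lambda || nu) = sum_z lambda(z) log(lambda(z)/nu(z)), with 0 log 0 = 0."""
    return sum(l * math.log(l / n) for l, n in zip(lam, nu) if l > 0)

# Two nearby reference measures nu and nu_N; the difference of relative
# entropies is an expectation of log(nu_N/nu), hence bounded by its sup-norm.
lam  = [0.2, 0.5, 0.3]
nu   = [0.25, 0.45, 0.30]
nu_N = [0.24, 0.46, 0.30]
diff = rel_ent(lam, nu) - rel_ent(lam, nu_N)
identity = sum(l * math.log(b / a) for l, a, b in zip(lam, nu, nu_N))
bound = max(abs(math.log(b / a)) for a, b in zip(nu, nu_N))
assert abs(diff - identity) < 1e-12
assert abs(diff) <= bound
```

This is exactly why the uniform convergence $\sigma^N_{ij} \to \sigma_{ij}$, together with the lower bound \eqref{eq:LimSPBound} keeping the log-ratios finite, forces the two relative entropies to agree in the limit, uniformly in the control.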
\subsection{Proof of Proposition \ref{prop:interiorpaths2}}\label{sec:PfInteriorpaths2} Finally, we prove Proposition \ref{prop:interiorpaths2}, adding only minor modifications to the proof of DE Lemma 6.5.5. Fix an $\alpha > 0$ and an absolutely continuous path $\phi^\alpha\in\tilde\mathscr{C}$ such that $\phi^\alpha_{[0,\alpha]}$ solves \eqref{eq:MD} and $\phi^\alpha_{[\alpha,1]}\subset\Int(X)$. For $\beta\in(0,1)$ with $\frac{1}{\beta}\in\mathbb{N}$, we define the path $\phi^{\beta}\in\mathscr{C}^{\mathlarger *}$ as follows: On $[0, \alpha]$, $\phi^\beta$ agrees with $\phi^\alpha$. If $k \in \mathbb{Z}_+$ satisfies $\alpha + (k + 1)\beta \leq 1$, then for $t \in (\alpha +k\beta,\alpha +(k+1)\beta]$, we define the derivative $\dot\phi^\beta_t$ by \begin{align} \dot{\phi}^{\beta}_t&=\frac{1}{\beta}\int_{\alpha +k\beta}^{\alpha +(k+1)\beta}\dot{\phi}^\alpha_s\,\mathrm{d} s\notag\\ &=\frac{1}{\beta}(\phi^\alpha_{\alpha +(k+1)\beta}-\phi^\alpha_{\alpha +k\beta}).\label{eq:PWDef} \end{align} Similarly, if there is an $\ell \in \mathbb{Z}_+$ such that $\alpha +\ell\beta< 1<\alpha +(\ell+1)\beta$, then for $t \in (\alpha +\ell\beta, 1]$ we set $\dot\phi^\beta_t = \frac{1}{1-(\alpha +\ell\beta)}(\phi^\alpha_1-\phi^\alpha_{\alpha +\ell\beta})$. Evidently, $\phi^\beta \in \mathscr{C}^{\mathlarger *}$, and because $\phi^\alpha$ is absolutely continuous, applying the definition of the derivative to expression \eqref{eq:PWDef} shows that \begin{align*} \lim_{\beta\to 0}\dot{\phi}^{\beta}_t=\dot{\phi}^\alpha_t\;\text{ for almost every }t\in[0,1]. \end{align*} We now prove that $\phi^\beta$ converges uniformly to $\phi^\alpha$. To start, note that $\phi^\beta_t = \phi^\alpha_t$ if $t \in [0, \alpha]$, if $t = \alpha + k \beta \leq 1$, or if $t = 1$. To handle the in-between times, fix $\delta>0$, and choose $\beta$ small enough that $|\phi^\alpha_t - \phi^\alpha_s| \leq \frac\delta2$ whenever $|t-s|\leq \beta$.
If $t \in (\alpha +k\beta, \alpha +(k+1)\beta)$, then \begin{align*} |\phi^{\beta}_t-\phi^\alpha_t| =\abs{\int_{\alpha +k\beta}^{t}(\dot{\phi}^{\beta}_s-\dot{\phi}^\alpha_s)\,\mathrm{d} s} \leq \frac{t-(\alpha +k\beta)}{\beta}\abs{\phi^\alpha_{\alpha +(k+1)\beta}-\phi^\alpha_{\alpha +k\beta}}+\abs{\phi^\alpha_t-\phi^\alpha_{\alpha +k\beta}} \leq \delta. \end{align*} A similar argument shows that $|\phi^{\beta}_t-\phi^\alpha_t|\leq \delta$ if $t \in (\alpha +\ell\beta, 1)$. Since $\delta > 0$ was arbitrary, we have established the claim. Since $h$ is continuous, the uniform convergence of $\phi^\beta$ to $\phi^\alpha$ implies that $\lim_{\beta\to 0}h(\phi^\beta)=h(\phi^\alpha)$. Moreover, this uniform convergence, the convergence of the $Z$-valued functions $\dot\phi^\beta$ to $\dot\phi^\alpha$, the fact that $\phi^\alpha_{[\alpha,1]}\subset\Int(X)$, the continuity (and hence uniform continuity and boundedness) of $L(\cdot,\cdot)$ on closed subsets of $\Int(X)\times Z$ (see Proposition \ref{prop:Joint2}(i)), and the bounded convergence theorem imply that \[ \lim_{\beta\to 0}c_{x}(\phi^{\beta})=\lim_{\beta\to 0}\paren{\int_{0}^{\alpha}L(\phi^{\alpha}_t,\dot{\phi}^{\alpha}_t)\,\mathrm{d} t+\int_{\alpha}^{1}L(\phi^{\beta}_t,\dot{\phi}^{\beta}_t)\,\mathrm{d} t}=\int_{0}^{1}L(\phi^\alpha_t,\dot{\phi}^\alpha_t)\,\mathrm{d} t=c_{x}(\phi^\alpha). \] Since $\phi^\alpha$ was an arbitrary absolutely continuous path in $\tilde\mathscr{C}$, the proof is complete. \hspace{4pt}\ensuremath{\blacksquare} \mybibliography \end{document}
\begin{document} \begin{center}\begin{large} New approach to Minkowski fractional inequalities using generalized k-fractional integral operator \end{large}\end{center} \begin{center} Vaijanath L. Chinchane\\ Department of Mathematics,\\ Deogiri Institute of Engineering and Management\\ Studies, Aurangabad-431005, INDIA\\ [email protected] \end{center} \begin{abstract} In this paper, we obtain new results related to the Minkowski fractional integral inequality using the generalized k-fractional integral operator, which is defined in terms of the Gauss hypergeometric function. \end{abstract} \textbf{Keywords :} Minkowski fractional integral inequality, generalized k-fractional integral operator, Gauss hypergeometric function.\\ \textbf{Mathematics Subject Classification:} 26D10, 26A33, 05A30.\\ \section{Introduction } \paragraph{} In the last decades, many researchers have worked on fractional integral inequalities using the Riemann-Liouville, generalized Riemann-Liouville, Hadamard, and Saigo operators; see \cite{C1,C2,C3,D1,D2,D3,D4}. W. Yang \cite{YA} proved Chebyshev- and Gr\"{u}ss-type integral inequalities for the Saigo fractional integral operator. S. Mubeen and S. Iqbal \cite{MU} proved Gr\"{u}ss-type integral inequalities for the generalized k-fractional integral. In \cite{BA1,C5,KI2,YI}, the authors studied some fractional integral inequalities using the generalized k-fractional integral operator (in terms of the Gauss hypergeometric function). Recently, many researchers have developed fractional integral inequalities associated with hypergeometric functions; see \cite{SH1,KI2,P1,R1,S1,SA,V1,W1,YI}. Also, in \cite{C2,D1} the authors established reverse Minkowski fractional integral inequalities using the Hadamard and Riemann-Liouville integral operators, respectively. \paragraph{}In the literature, few results have been obtained on fractional integral inequalities using the Saigo fractional integral operator; see \cite{C4,K4,P1,P2,YI}.
Motivated by \cite{C1,C5,D1,KI2}, our purpose in this paper is to establish some new results using the generalized k-fractional integral in terms of the Gauss hypergeometric function. The paper is organized as follows. In Section 2, we give basic definitions and a proposition related to the generalized k-fractional integral. In Section 3, we establish the reverse Minkowski fractional integral inequality using the generalized k-fractional integral. In Section 4, we give some other inequalities using the generalized k-fractional integral. \section{Preliminaries} \paragraph{} In this section, we give some necessary definitions which will be used later. \begin{definition} \cite{KI2,YI} A function $f(x)$, $x>0$, is said to be in $L_{p,k}[0,\infty)$ if \begin{equation} L_{p,k}[0,\infty)=\left\{f: \|f\|_{L_{p,k}[0,\infty)}=\left(\int_{0}^{\infty}|f(x)|^{p}x^{k}dx\right)^{\frac{1}{p}} < \infty, \, \, 1 \leq p < \infty, \, k \geq 0\right\}. \end{equation} \end{definition} \begin{definition} \cite{KI2,SAO,YI} Let $f \in L_{1,k}[0,\infty)$. The generalized Riemann-Liouville fractional integral $I^{\alpha,k}f(x)$ of order $\alpha>0$, $k \geq 0$, is defined by \begin{equation} I^{\alpha,k}f(x)= \frac{(k+1)^{1-\alpha}}{\Gamma (\alpha)}\int_{0}^{x}(x^{k+1}-t^{k+1})^{\alpha-1}t^{k} f(t)dt. \end{equation} \end{definition} \begin{definition} \cite{KI2,YI} Let $k\geq 0$, $\alpha>0$, $\mu >-1$ and $\beta, \eta \in R$. The generalized k-fractional integral $I^{\alpha,\beta,\eta,\mu}_{x,k}$ (in terms of the Gauss hypergeometric function) of order $\alpha$ for a real-valued continuous function $f(t)$ is defined by \begin{equation} \begin{split} I^{\alpha,\beta,\eta,\mu}_{x,k}[f(x)]& = \frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f(\tau)d\tau.
\end{split} \end{equation} \end{definition} where the function $_{2}F_{1}(\cdot)$ in the right-hand side of (2.3) is the Gaussian hypergeometric function defined by \begin{equation} _{2}F_{1} (a, b; c; x)=\sum_{n=0}^{\infty}\frac{(a)_{n}(b)_{n}}{(c)_{n}} \frac{x^{n}}{n!}, \end{equation} and $(a)_{n}$ is the Pochhammer symbol\\ $$(a)_{n}=a(a+1)...(a+n-1)=\frac{\Gamma(a+n)}{\Gamma(a)}, \,\,\,(a)_{0}=1.$$ Consider the function \begin{equation} \begin{split} F(x,\tau)&= \frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\tau^{(k+1)\mu}\\ &(x^{k+1}-\tau^{k+1})^{\alpha-1} \times _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\\ &=\sum_{n=0}^{\infty}\frac{(\alpha+\beta+\mu)_{n}(-\eta)_{n}}{\Gamma(\alpha+n)n!}x^{(k+1)(-\alpha-\beta-2\mu-n)}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1+n}(k+1)^{\mu+\beta+1}\\ &=\frac{\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}(k+1)^{\mu+\beta+1}}{x^{(k+1)(\alpha+\beta+2\mu)}\Gamma(\alpha)}+\\ &\frac{\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha}(k+1)^{\mu+\beta+1}(\alpha+\beta+\mu)(-\eta)}{x^{(k+1)(\alpha+\beta+2\mu+1)}\Gamma(\alpha+1)}+\\ &\frac{\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha+1}(k+1)^{\mu+\beta+1}(\alpha+\beta+\mu)(\alpha+\beta+\mu+1)(-\eta)(-\eta+1)}{x^{(k+1)(\alpha+\beta+2\mu+2)}\Gamma(\alpha+2)2!}+... \end{split} \end{equation} Under the parameter restrictions $k \geq 0,$ $\alpha > \max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0,$ each term of (2.5) is positive, so $F(x,\tau)>0$ for all $\tau \in (0, x)$, $x>0$. \section{Reverse Minkowski fractional integral inequality } \paragraph{}In this section, we establish the reverse Minkowski fractional integral inequality using the generalized k-fractional integral operator (in terms of the Gauss hypergeometric function). \begin{theorem} Let $p\geq1$ and let $f$, $g$ be two positive functions on $[0, \infty)$, such that for all $x>0$, $I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]<\infty$ and $I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]<\infty$.
If $0<m\leq \frac{f(\tau)}{g(\tau)}\leq M$, $\tau \in (0,x)$, then we have \begin{equation} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\right]^{\frac{1}{p}}+\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{1}{p}}\leq \frac{1+M(m+2)}{(m+1)(M+1)}\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)]\right]^{\frac{1}{p}}, \end{equation} for all $k \geq 0,$ $\alpha > \max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0.$ \end{theorem} \textbf{Proof}: Using the condition $\frac{f(\tau)}{g(\tau)}\leq M$, $\tau \in (0,x)$, $x>0$, we can write \begin{equation} (M+1)^{p}f^{p}(\tau)\leq M^{p}(f+g)^{p}(\tau). \end{equation} Multiplying both sides of (3.2) by $\tau^{k}F(x,\tau)$ and integrating the resulting inequality with respect to $\tau$ from $0$ to $x$, we get \begin{equation} \begin{split} &(M+1)^{p}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f^{p}(\tau)d\tau\\ &\leq M^{p}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} (f+g)^{p}(\tau)d\tau, \end{split} \end{equation} \noindent which is equivalent to \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)] \leq \frac{M^{p}}{(M+1)^{p}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)]\right], \end{equation} \noindent hence, we can write \begin{equation} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)] \right]^{\frac{1}{p}} \leq \frac{M}{(M+1)} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)]\right]^{\frac{1}{p}}.
\end{equation} On the other hand, using the condition $m\leq \frac{f(\tau)}{g(\tau)}$, we obtain \begin{equation} (1+\frac{1}{m})g(\tau)\leq \frac{1}{m}(f(\tau)+g(\tau)), \end{equation} therefore, \begin{equation} (1+\frac{1}{m})^{p}g^{p}(\tau)\leq(\frac{1}{m})^{p}(f(\tau)+g(\tau))^{p}. \end{equation} Now, multiplying both sides of (3.7) by $\tau^{k}F(x,\tau)$, $\tau \in(0,x)$, $x>0$, where $F(x,\tau)$ is defined by (2.5), and integrating with respect to $\tau$ from $0$ to $x$, we have \begin{equation} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{1}{p}} \leq \frac{1}{(m+1)} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)]\right]^{\frac{1}{p}}. \end{equation} Inequality (3.1) follows on adding the inequalities (3.5) and (3.8). \paragraph{}Our second result is as follows. \begin{theorem} Let $p\geq1$ and $f$, $g$ be two positive functions on $[0, \infty)$, such that for all $x>0$, $I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]<\infty$ and $I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]<\infty$. If $0<m\leq \frac{f(\tau)}{g(\tau)}\leq M$, $\tau \in (0,x)$, then we have \begin{equation} \begin{split} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)] \right]^{\frac{2}{p}}+\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)] \right]^{\frac{2}{p}}\geq &\left(\frac{(M+1)(m+1)}{M}-2\right)\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)] \right]^{\frac{1}{p}}\times\\ &\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)] \right]^{\frac{1}{p}}, \end{split} \end{equation} for all $k \geq 0,$ $\alpha > \max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0.$ \end{theorem} \textbf{Proof}: Multiplying the inequalities (3.5) and (3.8), we obtain \begin{equation} \frac{(M+1)(m+1)}{M}\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\right]^{\frac{1}{p}}\times \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{1}{p}}\leq \left(\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f(x)+g(x))^{p}]\right]^{\frac{1}{p}}\right)^{2}.
\end{equation} Applying the Minkowski inequality to the right-hand side of (3.10), we have \begin{equation} (\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f(x)+g(x))^{p}]\right]^{\frac{1}{p}})^{2}\leq (\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\right]^{\frac{1}{p}}+\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{1}{p}})^{2}, \end{equation} which implies that \begin{equation} \begin{split} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f(x)+g(x))^{p}]\right]^{\frac{2}{p}}\leq & \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\right]^{\frac{2}{p}}+ \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{2}{p}}\\ &+2\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\right]^{\frac{1}{p}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{1}{p}}. \end{split} \end{equation} Hence, from (3.10) and (3.12), we obtain (3.9). Theorem 3.2 is thus proved. \section{ Other fractional integral inequalities related to the Minkowski inequality} \paragraph{}In this section, we establish some new integral inequalities related to the Minkowski inequality using the generalized k-fractional integral operator (in terms of the Gauss hypergeometric function). \begin{theorem} Let $p>1$, $\frac{1}{p}+\frac{1}{q}=1 $ and $f$, $g$ be two positive functions on $[0, \infty)$, such that $I^{\alpha,\beta,\eta,\mu}_{x,k}[f(x)]<\infty$, $I^{\alpha,\beta,\eta,\mu}_{x,k}[g(x)]<\infty$.
If $0<m\leq \frac{f(\tau)}{g(\tau)}\leq M < \infty$, $\tau \in (0,x)$, then we have \begin{equation} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f(x)]\right]^{\frac{1}{p}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g(x)]\right]^{\frac{1}{q}} \leq (\frac{M}{m})^{\frac{1}{pq}}\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[[f(x)]^{\frac{1}{p}}[g(x)]^{\frac{1}{q}}]\right], \end{equation} for all $k \geq 0,$ $\alpha > \max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0.$ \end{theorem} \textbf{Proof:-} Since $\frac{f(\tau)}{g(\tau)}\leq M $, $\tau \in(0,x)$, $x> 0$, we have \\ \begin{equation} [g(\tau)]^{\frac{1}{q}}\geq M^{\frac{-1}{q}}[f(\tau)]^{\frac{1}{q}}, \end{equation} and also, \begin{equation} \begin{split} [f(\tau)]^{\frac{1}{p}}[g(\tau)]^{\frac{1}{q}}&\geq M^{\frac{-1}{q}}[f(\tau)]^{\frac{1}{q}}[f(\tau)]^{\frac{1}{p}}\\ &= M^{\frac{-1}{q}}[f(\tau)]^{\frac{1}{p}+\frac{1}{q}}\\ &= M^{\frac{-1}{q}}f(\tau). \end{split} \end{equation} Multiplying both sides of (4.3) by $\tau^{k}F(x,\tau)$, $\tau \in(0,x)$, $x>0$, where $F(x,\tau)$ is defined by (2.5), and integrating with respect to $\tau$ from $0$ to $x$, we have \begin{equation} \begin{split} &\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f(\tau)^{\frac{1}{p}}g(\tau)^{\frac{1}{q}}d\tau \\ &\geq M^{\frac{-1}{q}}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f(\tau)d\tau, \end{split} \end{equation} which implies that \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}\left[[f(x)]^{\frac{1}{p}}[g(x)]^{\frac{1}{q}} \right] \geq M^{\frac{-1}{q}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}f(x)\right].
\end{equation} Consequently, \begin{equation} \left(I^{\alpha,\beta,\eta,\mu}_{x,k}\left[[f(x)]^{\frac{1}{p}}[g(x)]^{\frac{1}{q}} \right]\right)^{\frac{1}{p}} \geq M^{\frac{-1}{pq}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}f(x)\right]^{\frac{1}{p}}. \end{equation} On the other hand, since $m g(\tau)\leq f(\tau)$, \, $\tau \in(0,x)$, $x>0$, we have \begin{equation} [f(\tau)]^{\frac{1}{p}}\geq m^{\frac{1}{p}}[g(\tau)]^{\frac{1}{p}}; \end{equation} multiplying (4.7) by $[g(\tau)]^{\frac{1}{q}}$, we have \begin{equation} [f(\tau)]^{\frac{1}{p}}[g(\tau)]^{\frac{1}{q}}\geq m^{\frac{1}{p}}[g(\tau)]^{\frac{1}{q}}[g(\tau)]^{\frac{1}{p}}= m^{\frac{1}{p}}g(\tau). \end{equation} Multiplying both sides of (4.8) by $\tau^{k}F(x,\tau)$, $\tau \in(0,x)$, $x>0$, where $F(x,\tau)$ is defined by (2.5), and integrating with respect to $\tau$ from $0$ to $x$, we have \begin{equation} \begin{split} &\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f(\tau)^{\frac{1}{p}}g(\tau)^{\frac{1}{q}}d\tau \\ &\geq m^{\frac{1}{p}}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} g(\tau)d\tau, \end{split} \end{equation} which implies that \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}\left[[f(x)]^{\frac{1}{p}}[g(x)]^{\frac{1}{q}} \right] \geq m^{\frac{1}{p}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}g(x)\right]. \end{equation} Hence, we can write \begin{equation} \left(I^{\alpha,\beta,\eta,\mu}_{x,k}\left[[f(x)]^{\frac{1}{p}}[g(x)]^{\frac{1}{q}} \right]\right)^{\frac{1}{q}} \geq m^{\frac{1}{pq}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}g(x)\right]^{\frac{1}{q}}. \end{equation} Multiplying (4.6) and (4.11) and using $\frac{1}{p}+\frac{1}{q}=1$, we get the result (4.1).
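As an illustration, the operator (2.3) and the inequality of Theorem 4.1 can be checked numerically. The sketch below is ours: it approximates the operator with a midpoint rule and a truncated series for $_{2}F_{1}$, and all function names and test data are illustrative. Because the proof only uses pointwise inequalities against the positive kernel $F(x,\tau)$, the discretized inequality holds for any positive quadrature weights, independently of discretization error.

```python
import math

def hyp2f1(a, b, c, z, terms=120):
    """Truncated Gauss hypergeometric series (2.4); for the parameters used
    below all terms are positive, so partial sums stay positive."""
    s, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) * z / ((c + n) * (n + 1))
        s += term
    return s

def I_frac(f, x, alpha, beta, eta, mu, k, steps=1000):
    """Midpoint-rule approximation of the operator in (2.3)."""
    pref = (k + 1) ** (mu + beta + 1) \
         * x ** ((k + 1) * (-alpha - beta - 2 * mu)) / math.gamma(alpha)
    h = x / steps
    total = 0.0
    for i in range(steps):
        tau = (i + 0.5) * h
        total += (tau ** ((k + 1) * mu)
                  * (x ** (k + 1) - tau ** (k + 1)) ** (alpha - 1)
                  * hyp2f1(alpha + beta + mu, -eta, alpha,
                           1.0 - (tau / x) ** (k + 1))
                  * tau ** k * f(tau) * h)
    return pref * total

# Sanity check: with beta = -alpha and eta = mu = k = 0 the operator reduces
# to the Riemann-Liouville integral, and I^alpha[1](x) = x^alpha/Gamma(alpha+1).
rl = I_frac(lambda t: 1.0, 2.0, 1.5, -1.5, 0.0, 0.0, 0.0)
assert abs(rl - 2.0 ** 1.5 / math.gamma(2.5)) < 1e-2

# Theorem 4.1 with admissible parameters and 1 = m <= f/g <= M = 1.25 on (0,1)
alpha, beta, eta, mu, k, x = 1.5, 0.5, -0.25, 0.0, 1.0, 1.0
p = q = 2.0
f = lambda t: 1.0 + t
g = lambda t: 1.0 + t * t
m, M = 1.0, 1.25
lhs = I_frac(f, x, alpha, beta, eta, mu, k) ** (1 / p) \
    * I_frac(g, x, alpha, beta, eta, mu, k) ** (1 / q)
rhs = (M / m) ** (1 / (p * q)) \
    * I_frac(lambda t: f(t) ** (1 / p) * g(t) ** (1 / q),
             x, alpha, beta, eta, mu, k)
assert lhs <= rhs
```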
\begin{theorem} Let $f$ and $g$ be two positive functions on $[0, \infty)$ such that $I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]<\infty$ and $I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{q}(x)]<\infty$ for $x>0$. If $0<m\leq \frac{f(\tau)^{p}}{g(\tau)^{q}}\leq M < \infty$, $\tau \in [0,x]$, then we have \begin{equation*} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}f^{p}(x)\right]^{\frac{1}{p}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}g^{q}(x)\right]^{\frac{1}{q}}\leq \left(\frac{M}{m}\right)^{\frac{1}{pq}}\left[I^{\alpha,\beta,\eta,\mu}_{x,k}(f(x)g(x))\right], \end{equation*} where $p>1$, $\frac{1}{p}+\frac{1}{q}=1$, for all $k \geq 0,$ $\alpha > \max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0.$ \end{theorem} \textbf{Proof:-} Replacing $f(\tau)$ and $g(\tau)$ by $f(\tau)^{p}$ and $g(\tau)^{q}$, $\tau \in [0,x]$, $x>0$, in Theorem 4.1, we obtain the required inequality. \paragraph{} We now present a fractional integral inequality related to the Minkowski inequality. \begin{theorem} Let $f$ and $g$ be two positive integrable functions on $[0, \infty)$ such that $\frac{1}{p}+\frac{1}{q}=1$, $p>1,$ and $0<m<\frac{f(\tau)}{g(\tau)}<M$ for $\tau \in (0,x)$, $x>0.$ Then we have \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}\{fg\}(x)\leq \frac{2^{p-1}M^{p}}{p(M+1)^{p}}\left(I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}+g^{p}](x)\right)+\frac{2^{q-1}}{q(m+1)^{q}}\left(I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{q}+g^{q}](x)\right), \end{equation} for all $k \geq 0,$ $\alpha > \max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0.$ \end{theorem} \textbf{Proof:-} Since $\frac{f(\tau)}{g(\tau)}<M$ for $\tau \in (0,x)$, $x>0,$ we have \begin{equation} (M+1)f(\tau)\leq M(f+g)(\tau).
\end{equation} Raising both sides of (4.13) to the $p$-th power, multiplying the resulting inequality by $F(x,\tau)$, and integrating with respect to $\tau$ from $0$ to $x$, we obtain \begin{equation} \begin{split} &(M+1)^{p}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f^{p}(\tau)d\tau\\ &\leq M^{p} \frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} (f+g)^{p}(\tau)d\tau, \end{split} \end{equation} therefore, \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\leq \frac{M^{p}}{(M+1)^{p}}I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)]. \end{equation} On the other hand, since $0<m<\frac{f(\tau)}{g(\tau)}$ for $\tau \in (0,x)$, $x>0,$ we can write \begin{equation} (m+1)g(\tau)\leq (f+g)(\tau), \end{equation} and therefore \begin{equation} \begin{split} &(m+1)^{q}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} g^{q}(\tau)d\tau\\ &\leq \frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} (f+g)^{q}(\tau)d\tau, \end{split} \end{equation} consequently, we have \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{q}(x)]\leq \frac{1}{(m+1)^{q}}I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{q}(x)]. \end{equation} Now, using Young's inequality \begin{equation} f(\tau)g(\tau)\leq \frac{f^{p}(\tau)}{p}+\frac{g^{q}(\tau)}{q}.
\end{equation} Multiplying both sides of (4.19) by $F(x,\tau)$, which is positive for $\tau \in(0,x)$, $x>0$, and integrating the resulting inequality with respect to $\tau$ from $0$ to $x$, we get \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}[f(x)g(x)]\leq \frac{1}{p}\,I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]+\frac{1}{q}\,I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{q}(x)]. \end{equation} From (4.15), (4.18) and (4.20) we get \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}[f(x)g(x)]\leq \frac{M^{p}}{p(M+1)^{p}}\,I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)]+\frac{1}{q(m+1)^{q}}\,I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{q}(x)]. \end{equation} Now, using the inequality $(a+b)^{r}\leq 2^{r-1}(a^{r}+b^{r})$, $r>1$, $a,b \geq 0,$ we have \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)] \leq 2^{p-1}I^{\alpha,\beta,\eta,\mu}_{x,k}[(f^{p}+g^{p})(x)], \end{equation} and \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{q}(x)] \leq 2^{q-1}I^{\alpha,\beta,\eta,\mu}_{x,k}[(f^{q}+g^{q})(x)]. \end{equation} Substituting (4.22) and (4.23) into (4.21), we get the required inequality (4.12). This completes the proof. \begin{theorem} Let $f$, $g$ be two positive functions on $[0, \infty)$ such that $f$ is non-decreasing and $g$ is non-increasing.
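As with Theorem 4.1, inequality (4.12) can be sanity-checked numerically. The sketch below is a minimal check for $p=q=2$, assuming SciPy; the kernel transcription in \texttt{frac\_int}, the test functions, and the parameter values are our illustrative assumptions, not part of the paper.

```python
# Numerical sanity check of inequality (4.12) for p = q = 2; the kernel
# transcription and all parameter/function choices are illustrative
# assumptions, not taken from the paper.
from math import gamma

from scipy.integrate import quad
from scipy.special import hyp2f1


def frac_int(func, x, k, alpha, beta, eta, mu):
    """Generalized fractional integral I^{alpha,beta,eta,mu}_{x,k}[func]."""
    const = ((k + 1) ** (mu + beta + 1)
             * x ** ((k + 1) * (-alpha - beta - 2 * mu)) / gamma(alpha))

    def integrand(t):
        return (t ** ((k + 1) * mu)
                * (x ** (k + 1) - t ** (k + 1)) ** (alpha - 1)
                * hyp2f1(alpha + beta + mu, -eta, alpha, 1 - (t / x) ** (k + 1))
                * t ** k * func(t))

    value, _ = quad(integrand, 0.0, x, limit=200)
    return const * value


x, k, alpha, beta, eta, mu = 1.0, 0, 1.0, 0.1, -0.3, -0.5
p = q = 2.0
f = lambda t: t + 1.0   # 0.49 = m < f/g < M = 0.68 on (0, 1)
g = lambda t: t + 2.0
m, M = 0.49, 0.68

args = (x, k, alpha, beta, eta, mu)
lhs = frac_int(lambda t: f(t) * g(t), *args)
rhs = (2 ** (p - 1) * M ** p / (p * (M + 1) ** p)
       * frac_int(lambda t: f(t) ** p + g(t) ** p, *args)
       + 2 ** (q - 1) / (q * (m + 1) ** q)
       * frac_int(lambda t: f(t) ** q + g(t) ** q, *args))
print(lhs <= rhs)
```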
Then \begin{equation} \begin{split} I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x) g^{\delta}(x)]&\leq (k+1)^{-\mu-\beta}x^{(k+1)(\mu+\beta)}\frac{\Gamma(1-\beta)\Gamma(\mu+\eta+2)}{\Gamma(1-\beta+\eta)\Gamma(\mu+1)} \\ &\times I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)]I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{\delta}(x)], \end{split}\end{equation} for all $k \geq 0,$ $\alpha > \max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0.$ \end{theorem} \textbf{Proof:-} Let $\tau,\rho \in [0,x]$, $x>0$. For any $\delta>0$, $\gamma>0$, we have \begin{equation} \left(f^{\gamma}(\tau)-f^{\gamma}(\rho)\right)\left(g^{\delta}(\rho)-g^{\delta}(\tau)\right) \geq 0, \end{equation} that is, \begin{equation} f^{\gamma}(\tau)g^{\delta}(\rho)-f^{\gamma}(\tau)g^{\delta}(\tau)- f^{\gamma}(\rho)g^{\delta}(\rho)+f^{\gamma}(\rho)g^{\delta}(\tau) \geq 0, \end{equation} and therefore \begin{equation} f^{\gamma}(\tau)g^{\delta}(\tau)+f^{\gamma}(\rho)g^{\delta}(\rho)\leq f^{\gamma}(\tau)g^{\delta}(\rho)+f^{\gamma}(\rho)g^{\delta}(\tau). \end{equation} Now, multiplying both sides of (4.27) by $F(x,\tau)$ ($\tau \in(0,x)$, $x>0$), where $F(x,\tau)$ is defined by (2.5).
Then, integrating the resulting inequality with respect to $\tau$ from $0$ to $x$, we have \begin{equation} \begin{split} &\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k}[f^{\gamma}(\tau)g^{\delta}(\tau)]d\tau\\ &+ f^{\gamma}(\rho)g^{\delta}(\rho)\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k}[1]d\tau \\ &\leq g^{\delta}(\rho)\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k}f^{\gamma}(\tau)d\tau\\ &+f^{\gamma}(\rho)\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} g^{\delta}(\tau)d\tau, \end{split} \end{equation} that is, \begin{equation} \begin{split} &I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)g^{\delta}(x)]+f^{\gamma}(\rho)g^{\delta}(\rho)I^{\alpha,\beta,\eta,\mu}_{x,k}[1]\\ &\leq g^{\delta}(\rho)I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)]+f^{\gamma}(\rho)I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{\delta}(x)]. \end{split} \end{equation} Again, multiplying both sides of (4.29) by $F(x,\rho)$ ($\rho \in(0,x)$, $x>0$), where $F(x,\rho)$ is defined by (2.5).
Then, integrating the resulting inequality with respect to $\rho$ from $0$ to $x$, we have \begin{equation*} \begin{split} &I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)g^{\delta}(x)]I^{\alpha,\beta,\eta,\mu}_{x,k}[1]+I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)g^{\delta}(x)] I^{\alpha,\beta,\eta,\mu}_{x,k}[1]\\ &\leq I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{\delta}(x)]I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)] +I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)]I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{\delta}(x)], \end{split} \end{equation*} that is, \begin{equation*} 2\,I^{\alpha,\beta,\eta,\mu}_{x,k}[1]\,I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)g^{\delta}(x)] \leq 2\,I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)]\,I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{\delta}(x)], \end{equation*} and hence \begin{equation*} I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)g^{\delta}(x)] \leq \left(I^{\alpha,\beta,\eta,\mu}_{x,k}[1]\right)^{-1} I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)]\,I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{\delta}(x)]. \end{equation*} Evaluating $I^{\alpha,\beta,\eta,\mu}_{x,k}[1]$ proves the result (4.24).\\ \textbf{Competing interests}\\ The authors declare that they have no competing interests. \end{document}
\begin{document} \begin{abstract} In this paper, we compute the number of two-term tilting complexes for an arbitrary symmetric algebra with radical cube zero over an algebraically closed field. Firstly, we give a complete list of symmetric algebras with radical cube zero having only finitely many isomorphism classes of two-term tilting complexes in terms of their associated graphs. Secondly, we enumerate the number of two-term tilting complexes for each case in the list. \end{abstract} \maketitle \section{Introduction} Tilting theory plays an important role in the study of many areas of mathematics. A central notion of tilting theory is a tilting complex which is a generalization of a progenerator in Morita theory. Indeed, its endomorphism algebra is derived equivalent to the original algebra \cite{Rickard89der}. Hence it is a natural problem to give a classification of tilting complexes for a given algebra. In this paper, we study a classification of two-term tilting complexes for an arbitrary symmetric algebra with radical cube zero over an algebraically closed field $\mathbf{k}$. Symmetric algebras with radical cube zero have been studied by Okuyama \cite{Okuyama86}, Benson \cite{Benson08} and Erdmann--Solberg \cite{ES}, and also appear in several areas such as \cite{CL,HK,Seidel08}. Recently, Green--Schroll \cite{GSa} showed that this class is precisely the Brauer configuration algebras with radical cube zero. The study of symmetric algebras $A$ with radical cube zero can be reduced to that of algebras with radical square zero. For example, as an application of $\tau$-tilting theory (\cite{AIR}), we find in Proposition \ref{reduction} that the functor $-\otimes_A A/\operatorname{soc}\nolimits A$ gives a bijection \begin{align} \mathop{2\text{-}\mathsf{tilt}}\nolimits A \longrightarrow \mathop{2\text{-}\mathsf{silt}}\nolimits \, (A/\operatorname{soc}\nolimits A). 
\notag \end{align} Here, we denote by $\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ (respectively, $\mathop{2\text{-}\mathsf{silt}}\nolimits A$) the set of isomorphism classes of basic two-term tilting (respectively, two-term silting) complexes for $A$. Notice that tilting complexes coincide with silting complexes for a symmetric algebra $A$ (\cite[Example 2.8]{AI}). Two-term silting theory (or equivalently, $\tau$-tilting theory) for algebras with radical square zero is studied in \cite{Adachi16b,Aoki18,Zhang13}. The first author (\cite{Adachi16b}) gives a characterization of algebras with radical square zero which are $\tau$-tilting finite (i.e., having only finitely many isomorphism classes of basic two-term silting complexes) by using the notion of single quivers, see Proposition \ref{RSZ}(2). Using this result, we give a complete list of $\tau$-tilting finite symmetric algebras with radical cube zero as follows. Now, let $A$ be a basic connected finite dimensional symmetric $\mathbf{k}$-algebra with radical cube zero. Let $Q$ be the Gabriel quiver of $A$ and $Q^{\circ}$ the quiver obtained from $Q$ by deleting all loops. We show in Definition-Proposition \ref{graphA} that $Q^{\circ}$ is the double quiver $Q_G$ (see Definition \ref{def:double quiver}) of a finite connected (undirected) graph $G$ with no loops, i.e., $Q^{\circ}=Q_G$. We call $G$ the graph of $A$. \begin{theorem} \label{theorem1} Let $A$ be a basic connected finite dimensional symmetric $\mathbf{k}$-algebra with radical cube zero. Then the following conditions are equivalent. \begin{enumerate}[\rm (1)] \item $A$ is $\tau$-tilting finite (or equivalently, $\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ is finite). \item The graph of $A$ is one of the graphs in the following list.
\end{enumerate} $$ \begin{xy} (6, -4)*{(\mathbb{A}_n)}, (-2,-12)*+{1}="A1", (6,-12)*+{2}="A2", (25, -12)*+{n}="An", (15, -12)*{\cdots}="dot", { "A1" \ar@{-} "A2"}, {"A2" \ar@{-} (11,-12)}, { (19, -12) \ar@{-} "An" }, (48, -4)*{(\mathbb{D}_n)\ 4 \leq n}, (33,-7)*+{1}="D1", (33, -17)*+{2}="D2", (41,-12)*+{3}="D3", (60, -12)*+{n}="Dn", (50, -12)*{\cdots}="dot", { "D1" \ar@{-} "D3"}, { "D2" \ar@{-} "D3"}, {"D3" \ar@{-} (46, -12)}, { (54, -12) \ar@{-} "Dn"}, (78, -4)*{(\mathbb{E}_6)}, (70, -14)*+{1}="E61", (78, -14)*+{2}="E62", (86, -14)*+{3}="E63", (94, -14)*+{5}="E64", (102, -14)*+{6}="E65", (86, -6)*+{4}="E66", {"E61" \ar@{-} "E62"}, {"E62" \ar@{-} "E63"}, {"E63" \ar@{-} "E64"}, {"E64" \ar@{-} "E65"}, {"E63" \ar@{-} "E66"}, (6, -20)*{(\mathbb{E}_7)}, (-2, -31)*+{1}="E71", (6, -31)*+{2}="E72", (14, -31)*+{3}="E73", (22, -31)*+{5}="E74", (30, -31)*+{6}="E75", (38, -31)*+{7}="E76", (14, -23)*+{4}="E77", {"E71" \ar@{-} "E72"}, {"E72" \ar@{-} "E73"}, {"E73" \ar@{-} "E74"}, {"E74" \ar@{-} "E75"}, {"E75" \ar@{-} "E76"}, {"E73" \ar@{-} "E77"}, (58, -20)*{(\mathbb{E}_8)}, (50, -31)*+{1}="E81", (58, -31)*+{2}="E82", (66, -31)*+{3}="E83", (74, -31)*+{5}="E84", (82, -31)*+{6}="E85", (90, -31)*+{7}="E86", (98, -31)*+{8}="E87", (66, -23)*+{4}="E88", {"E81" \ar@{-} "E82"}, {"E82" \ar@{-} "E83"}, {"E83" \ar@{-} "E84"}, {"E84" \ar@{-} "E85"}, {"E85" \ar@{-} "E86"}, {"E86" \ar@{-} "E87"}, {"E83" \ar@{-} "E88"}, (125, -4)*{(\widetilde{\mathbb{A}}_{n-1}) \ n\colon \mathrm{odd}}, (125, -12)*+{1}="b1", (115, -18)*+{2}="b2", (115, -28)*+{3}="b3",(125, -32)*+{4}="b4", (135, -18)*+{n}="bn", (135, -26)*{\rotatebox{90}{$\cdots$}}="ddot", {"b1" \ar@{-} "b2"}, {"b2" \ar@{-} "b3"}, {"b3" \ar@{-} "b4"}, {"b1" \ar@{-} "bn"}, {"bn" \ar@{-} (135, -22)}, { (133, -30) \ar@{-} "b4"}, \end{xy} \noindent $$ $$ \begin{xy} (26, 0)*{({\rm I}_n) \ 4 \leq n}, (28, -7)*+{1}="c1", (21,-15)*+{2}="c2", (36, -15)*+{3}="c3", (36, -23)*+{4}="c4", (36, -28)="c5", (36, -35)="c6", (36.5, 
-31)*{\rotatebox{90}{$\cdots$}}="ddot", (36, -39)*+{n}="c7", {"c1" \ar@{-} "c2"}, {"c1" \ar@{-} "c3"}, {"c2" \ar@{-} "c3"}, {"c3" \ar@{-} "c4"}, {"c4" \ar@{-} "c5"}, {"c6" \ar@{-} "c7"}, (53, 0)*{({\rm II}_n) \ 5 \leq n \leq8}, (52, -7)*+{1}="d1", (45,-15)*+{2}="d2", (59, -15)*+{3}="d3", (45, -23)*+{4}="d4", (59, -23)*+{5}="d5", (59, -28)="d6", (59, -35)="d7", (59.5, -31)*{\rotatebox{90}{$\cdots$}}="edot", (59, -39)*+{n}="d8", {"d1" \ar@{-} "d2"}, {"d1" \ar@{-} "d3"}, {"d2" \ar@{-} "d3"}, {"d2" \ar@{-} "d4"}, {"d3" \ar@{-} "d5"}, {"d5" \ar@{-} "d6"}, {"d7" \ar@{-} "d8"}, (72, 0)*{({\rm III})}, (79,-7)*+{1}="e1", (79, -15)*+{2}="e2", (72,-22)*+{3}="e3", (72, -30)*+{4}="e4", (86,-22)*+{5}="e5", (86,-30)*+{6}="e6", {"e1" \ar@{-} "e2"}, {"e2" \ar@{-} "e3"}, {"e2" \ar@{-} "e5"}, {"e3" \ar@{-} "e4"}, {"e4" \ar@{-} "e6"}, {"e5" \ar@{-} "e6"}, (97, 0)*{({\rm IV})}, (104, -7)*+{1}="f1", (104, -15)*+{2}="f2", (97,-23)*+{3}="f3", (111, -23)*+{4}="f4", (97, -31)*+{5}="f5", (111, -31)*+{6}="f6", {"f1" \ar@{-} "f2"}, {"f2" \ar@{-} "f3"}, {"f2" \ar@{-} "f4"}, {"f3" \ar@{-} "f4"}, {"f3" \ar@{-} "f5"}, {"f4" \ar@{-} "f6"}, (121, 0)*{({\rm V})}, (128, -7)*+{1}="g1", (121,-15)*+{2}="g2", (135, -15)*+{3}="g3", (121, -23)*+{4}="g4", (121, -31)*+{5}="g5", (135, -23)*+{6}="g6", (135, -31)*+{7}="g7", {"g1" \ar@{-} "g2"}, {"g1" \ar@{-} "g3"}, {"g2" \ar@{-} "g3"}, {"g2" \ar@{-} "g4"}, {"g3" \ar@{-} "g6"}, {"g4" \ar@{-} "g5"}, {"g6" \ar@{-} "g7"}, \end{xy} $$ \end{theorem} The second author (\cite{Aoki18}) classifies two-term silting complexes for an arbitrary algebra with radical square zero by using tilting modules over a path algebra (see Proposition \ref{RSZ}(1)). Since the cardinality of the set of isomorphism classes of tilting modules over a path algebra is well known, this provides us an explicit way to compute the number of them. We use this result to determine the number $\#\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ for each graph $G$ in the list of Theorem \ref{theorem1}. 
\begin{theorem} \label{theorem2} In Theorem \ref{theorem1}, the number $\#\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ depends only on the graph $G$ of $A$ and is given as follows. {\fontsize{9pt}{0.4cm}\selectfont \begin{table}[h] {\renewcommand\arraystretch{1.3} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $G$ & $\mathbb{A}_n$& $\mathbb{D}_{n}$&$\mathbb{E}_6 $&$ \mathbb{E}_7$&$ \mathbb{E}_8$& $\widetilde{\mathbb{A}}_{n-1}$ & $ {\rm I}_n$ & ${\rm II}_5$& ${\rm II}_6$& ${\rm II}_7$ & ${\rm II}_8$ & ${\rm III}$ & ${\rm IV}$ & ${\rm V}$ \\ \hline $\# \mathop{2\text{-}\mathsf{tilt}}\nolimits A$ &$\binom{2n}{n}$ & $a_n$ & $1700$ & $8872$ & $54066$ & $2^{2n-1}$ & $b_n$ & $632$ & $2936$ & $11306$ & $75240$ & $3108$& $4056$& $17328$ \\ \hline \end{tabular} } \end{table}} \noindent Here, for any $n\ge 4$, let $a_n:= 6\cdot 4^{n-2}-2\binom{2(n-2)}{n-2}$ and $b_n:=6\cdot 4^{n-2} + 2\binom{2n}{n} -4\binom{2(n-1)}{n-1}-4\binom{2(n-2)}{n-2}$. \end{theorem} We remark that the numbers for Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$ in the list are precisely the biCatalan numbers introduced by \cite{BR} in the context of Coxeter-Catalan combinatorics. Our results for Dynkin graphs were independently obtained in \cite{DIRRT} in the study of biCambrian lattices for preprojective algebras. We also remark that we can generalize our results to Brauer configuration algebras in terms of multiplicities. A Brauer configuration algebra is defined by a configuration and a multiplicity function. The configuration of a Brauer configuration algebra with radical cube zero corresponds to a graph \cite{GSa}. By \cite{EJR}, one can show that the number of two-term tilting complexes over Brauer configuration algebras is independent of the multiplicity. Therefore, we can also apply our results to any Brauer configuration algebra obtained by replacing the multiplicity of a Brauer configuration associated with a graph in the list of Theorem \ref{theorem1}.
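The entries of the table lend themselves to elementary verification. The sketch below (all function names are ours) tabulates the closed forms $a_n$ and $b_n$, and reproduces the $\mathbb{A}_n$ entry $\binom{2n}{n}$ by summing, over all sign maps $\epsilon$ on the vertices of the path graph, the product of the Catalan numbers attached to the type-$\mathbb{A}$ components whose edges survive (i.e., join vertices of opposite signs), in the spirit of the counting method described above:

```python
# Sanity checks for the table: closed forms a_n, b_n, and the type-A entry
# C(2n, n) recovered by summing Catalan-number products over all sign maps
# on the path graph A_n.  All function names are ours.
from itertools import product
from math import comb

def catalan(k):
    # number of tilting modules over a type-A path algebra with k vertices
    return comb(2 * k, k) // (k + 1)

def a(n):
    # a_n = 6*4^{n-2} - 2*C(2(n-2), n-2), n >= 4
    return 6 * 4 ** (n - 2) - 2 * comb(2 * (n - 2), n - 2)

def b(n):
    # b_n = 6*4^{n-2} + 2*C(2n, n) - 4*C(2(n-1), n-1) - 4*C(2(n-2), n-2)
    return (6 * 4 ** (n - 2) + 2 * comb(2 * n, n)
            - 4 * comb(2 * (n - 1), n - 1) - 4 * comb(2 * (n - 2), n - 2))

def two_term_tilt_A(n):
    """Sum over all sign maps eps of the product of tilting counts of the
    path components that survive (edges between opposite signs)."""
    total = 0
    for eps in product((1, -1), repeat=n):
        prod_counts, length = 1, 1
        for i in range(1, n):
            if eps[i] != eps[i - 1]:
                length += 1            # the edge (i-1) -- i survives
            else:
                prod_counts *= catalan(length)
                length = 1
        prod_counts *= catalan(length)
        total += prod_counts
    return total

print([two_term_tilt_A(n) for n in range(1, 7)])
print([comb(2 * n, n) for n in range(1, 7)])
print([a(n) for n in range(4, 8)], [b(n) for n in range(4, 8)])
```

For small $n$ the sum indeed agrees with $\binom{2n}{n}$, matching the first column of the table.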
This paper is organized as follows. In Section \ref{sec:preliminaries}, we recall the definition of algebras with radical square zero and their two-term silting theory which are needed in this paper. In Section \ref{sec:RCZ}, we study symmetric algebras with radical cube zero together with the corresponding algebra with radical square zero. Our main results are Theorem \ref{reduced ver} and Corollary \ref{number by graph} which provide an explicit way to compute the number of two-term tilting complexes for a given symmetric algebra with radical cube zero. In Section \ref{main theorem}, we prove Theorems \ref{theorem1} and \ref{theorem2} by using results shown in the previous section. \section{Preliminaries} \label{sec:preliminaries} Throughout this paper, $\mathbf{k}$ is an algebraically closed field. We recall that any basic connected finite dimensional $\mathbf{k}$-algebra $A$ is isomorphic to a bound quiver algebra $A\cong \mathbf{k}Q/I$, where $Q$ is a finite connected quiver and $I$ is an admissible ideal in the path algebra $\mathbf{k}Q$ of the quiver $Q$. We call $Q_A:=Q$ the \emph{Gabriel quiver} of $A$. \subsection{Silting complexes} Let $A$ be a basic (not necessarily connected) finite dimensional $\mathbf{k}$-algebra. We denote by $\moduleCategory A$ the category of finitely generated right $A$-modules and by $\proj A$ the category of finitely generated projective right $A$-modules. Let $\mathsf{K}^{\rm b}(\proj A)$ denote the homotopy category of bounded complexes of objects of $\proj A$. For a complex $X\in \mathsf{K}^{\rm b}(\proj A)$, we say that $X$ is \emph{basic} if it is a direct sum of pairwise non-isomorphic indecomposable objects. \begin{definition} A complex $T$ in $\mathsf{K}^{\rm b}(\proj A)$ is said to be \emph{presilting} if it satisfies \begin{align} \operatorname{Hom}\nolimits_{\mathsf{K}^{\rm b}(\proj A)}(T,T[i])=0 \notag \end{align} for all positive integers $i$.
A presilting complex $T$ is called a \emph{silting complex} if it satisfies $\thick T=\mathsf{K}^{\rm b}(\proj A)$, where $\thick T$ is the smallest triangulated full subcategory which contains $T$ and is closed under taking direct summands. In addition, a silting complex $T$ is called a \emph{tilting complex} if $\operatorname{Hom}\nolimits_{\mathsf{K}^{\rm b}(\proj A)}(T,T[i])=0$ for all non-zero integers $i$. \end{definition} We restrict our interest to the set of two-term silting complexes. Here, a complex $T=(T^{i},d^{i})$ in $\mathsf{K}^{\rm b}(\proj A)$ is said to be \emph{two-term} if it is isomorphic to a complex concentrated in degrees $0$ and $-1$, i.e., \begin{align} (T^{-1}\overset{d^{-1}}{\rightarrow} T^0) = \cdots \to 0 \to T^{-1} \overset{d^{-1}}{\longrightarrow} T^0 \to 0 \to \cdots \notag \end{align} We denote by $\mathop{2\text{-}\mathsf{silt}}\nolimits A$ (respectively, $\mathop{2\text{-}\mathsf{tilt}}\nolimits A$) the set of isomorphism classes of basic two-term silting (respectively, two-term tilting) complexes for $A$. Now, we call $M\in \moduleCategory A$ a \emph{tilting module} if all the following conditions are satisfied: (i) the projective dimension of $M$ is at most $1$, (ii) $\operatorname{Ext}\nolimits_A^1(M,M)=0$, and (iii) $|M|=|A|$, where $|M|$ denotes the number of pairwise non-isomorphic indecomposable direct summands of $M$. We denote by $\mathop{\mathsf{tilt}}\nolimits A$ the set of isomorphism classes of basic tilting $A$-modules. By definition, we can naturally regard a tilting $A$-module $M$ as a tilting complex. More precisely, by taking a minimal projective presentation $P_1 \overset{f}{\to} P_0\to M \to 0$ of $M$ in $\moduleCategory A$, the two-term complex $(P_{1} \overset{f}{\to} P_0)$ provides a tilting complex in $\mathsf{K}^{\rm b}(\proj A)$. The number of tilting modules over a path algebra of a Dynkin quiver is well known.
\begin{proposition}{\rm (see \cite{ONFR} for example)} \label{tilting number} Let $Q$ be a quiver whose underlying graph $\Delta$ is one of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$. Then the number $\#\mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q$ is given by the following table and does not depend on the orientation of $Q$. \begin{table}[h] \begin{center} {\renewcommand\arraystretch{1.3} \begin{tabular}{|c||c|c|c|c|c|c|} \hline $\Delta$ & $\mathbb{A}_n \, (n\geq 1)$ &$\ \mathbb{D}_n \,(n\geq4)$& $\mathbb{E}_6$ & $\mathbb{E}_7$ & $\mathbb{E}_8$ \\ \hline $\# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q$ & $\frac{1}{n+1}\binom{2n}{n}$ & $\frac{3n-4}{2n}\binom{2(n-1)}{n-1}$ & $418$ & $2431$ & $17342$\\ \hline \end{tabular}} \end{center} \end{table} \end{proposition} More generally, if $Q$ is a disjoint union of Dynkin quivers $Q_{\lambda}$ ($\lambda \in \Lambda$), then we have \begin{equation} \label{disjoint Dynkin} \#\mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q = \prod_{\lambda\in \Lambda} \#\mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q_{\lambda} \end{equation} and this number is completely determined by a collection of the underlying graphs $\Delta_{\lambda}$ of $Q_{\lambda}$ for all $\lambda\in \Lambda$ as in Proposition \ref{tilting number}. \subsection{Algebras with radical square zero} Let $A$ be a basic connected finite dimensional $\mathbf{k}$-algebra. We say that $A$ is an algebra with \emph{radical square zero} (respectively, \emph{radical cube zero}) if $J^2=0$ but $J\neq 0$ (respectively, $J^3=0$ but $J^2\neq 0$), where $J$ is the Jacobson radical of $A$. For simplicity, we abbreviate an algebra with radical square zero (respectively, radical cube zero) by a RSZ (respectively, RCZ) algebra. 
We first recall that any basic connected finite dimensional RSZ $\mathbf{k}$-algebra $A$ is isomorphic to a bound quiver algebra $\mathbf{k}Q/I$, where $Q:=Q_A$ is the Gabriel quiver of $A$ and $I$ is the two-sided ideal in $\mathbf{k}Q$ generated by all paths of length $2$. Next, let $Q=(Q_{0},Q_{1})$ be a finite connected quiver, where $Q_{0}$ is the vertex set and $Q_{1}$ is the arrow set. We denote by $Q^{\rm op}$ the opposite quiver of $Q$. For a map $\epsilon\colon Q_{0}\to \{ \pm 1\}$, we define a quiver $Q_{\epsilon}$, called a {\it single quiver} of $Q$, as follows: \begin{itemize} \item The set of vertices is $Q_0$. \item We draw an arrow $a \colon i\to j$ in $Q_{\epsilon}$ whenever there exists an arrow $a\colon i\to j$ with $\epsilon(i)=+1$ and $\epsilon(j)=-1$. \end{itemize} Note that $Q_{\epsilon}$ is bipartite (i.e., each vertex is either a sink or a source), but not connected in general. Since it has no loops by definition, we have $Q_{\epsilon}=(Q^{\circ})_{\epsilon}$, where $Q^{\circ}$ denotes the quiver obtained from $Q$ by deleting all loops. We give a connection between two-term silting complexes for a RSZ algebra and tilting modules over path algebras. \begin{proposition}\label{RSZ} Let $A$ be a basic connected finite dimensional RSZ $\mathbf{k}$-algebra and $Q_A$ the Gabriel quiver of $A$. Let $Q:=(Q_A)^{\circ}$ be the quiver obtained from $Q_A$ by deleting all loops. Then the following statements hold. \begin{enumerate}[\rm (1)] \item \textnormal{(\cite[Theorem 1.1]{Aoki18})} There is a bijection \begin{align} \mathop{2\text{-}\mathsf{silt}}\nolimits A \longrightarrow \bigsqcup_{\epsilon \colon Q_{0} \rightarrow\{\pm 1\}} \mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op}. \notag \end{align} \item \textnormal{(\cite{Adachi16b,Aoki18})} The following conditions are equivalent. \begin{enumerate}[\rm (a)] \item $\mathop{2\text{-}\mathsf{silt}}\nolimits A$ is finite. 
\item For every map $\epsilon\colon Q_{0}\to \{ \pm 1\}$, the underlying graph of the single quiver $Q_{\epsilon}$ is a disjoint union of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$. \end{enumerate} \item If one of the equivalent conditions of \textnormal{(2)} holds, we have \begin{align} \#\mathop{2\text{-}\mathsf{silt}}\nolimits A = \sum_{\epsilon \colon Q_0 \to \{\pm1\}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op}.\notag \end{align} \end{enumerate} \end{proposition} We remark that we can replace the quiver $Q$ with the Gabriel quiver $Q_A$ of $A$ in Proposition \ref{RSZ} since we have $(Q_A)_{\epsilon} = Q_{\epsilon}$ for any map $\epsilon\colon Q_0 \to \{\pm1\}$. \section{Two-term tilting complexes over symmetric RCZ algebras} \label{sec:RCZ} Let $A$ be a basic connected finite dimensional symmetric RCZ $\mathbf{k}$-algebra. Then $\overline{A}:=A/\operatorname{soc}\nolimits A$ is a RSZ algebra by definition. Moreover, the Gabriel quiver of $\overline{A}$ coincides with the Gabriel quiver of $A$ since $\operatorname{soc}\nolimits A$ is contained in the square of the Jacobson radical of $A$. The following result is fundamental. Here, recall that silting complexes coincide with tilting complexes for a symmetric algebra $A$ (\cite[Example 2.8]{AI}). In particular, $\mathop{2\text{-}\mathsf{tilt}}\nolimits A=\mathop{2\text{-}\mathsf{silt}}\nolimits A$. \begin{proposition} \cite[Theorem 3.3]{Adachi16a} \label{reduction} Let $A$ be a basic connected finite dimensional symmetric RCZ $\mathbf{k}$-algebra and $\overline{A}:=A/\operatorname{soc}\nolimits A$. Then the functor $-\otimes_A \overline{A}$ gives a bijection \begin{align} \mathop{2\text{-}\mathsf{tilt}}\nolimits A \longrightarrow \mathop{2\text{-}\mathsf{silt}}\nolimits \overline{A}. \notag \end{align} \end{proposition} Next, the following observations provide a combinatorial framework for studying two-term tilting complexes over symmetric RCZ algebras.
\begin{definition} \label{def:double quiver} For a finite connected graph $G$ with no loops, we define a quiver $Q_G$ as follows. \begin{itemize} \item The set of vertices of $Q_G$ is the set of vertices of $G$. \item We draw two arrows $a^{\ast} \colon i\to j$ and $a^{\ast\ast} \colon j\to i$ whenever there exists an edge $a$ of $G$ connecting $i$ and $j$. \end{itemize} We call $Q_G$ the \emph{double quiver} of $G$. Notice that $Q_G$ has no loops since $G$ has none. \end{definition} \begin{defprop} \label{graphA} Let $A$ be a basic connected finite dimensional symmetric RCZ $\mathbf{k}$-algebra. Let $Q_A$ be the Gabriel quiver of $A$ and $Q:=(Q_A)^{\circ}$ the quiver obtained from $Q_A$ by deleting all loops. Then $Q$ is the double quiver $Q_G$ of a finite connected (undirected) graph $G$ with no loops. We call $G$ the graph of $A$. \end{defprop} \begin{proof} For the Gabriel quiver $Q_A$ of $A$, let $\pi \colon \mathbf{k}Q_A \to A$ be a canonical surjection. For any vertex $i$ of $Q_A$, let $P_i$ be the indecomposable projective $A$-module corresponding to $i$. By definition, $P_i$ has Loewy length $3$ and its simple socle is isomorphic to the simple top $S_{i}:=P_{i}/P_{i}J$. We recall from \cite[Proposition 5.6]{GSa} that our algebra $A$ is special multiserial (we refer to \cite[Definition 2.2]{GSa} for the definition of special multiserial algebras). Then each arrow $a\colon i\to j$ of $Q_A$ determines the unique arrow $\sigma(a)$ such that $\pi(a\sigma(a))\neq 0$, and the correspondence $\sigma$ gives a permutation of the set of arrows of $Q_A$, see \cite[Definition 4.8]{GSa}. In addition, the element $\pi(a\sigma(a)\sigma^2(a)\cdots\sigma^{m-1}(a))$ lies in the socle of $P_i$, where $m$ is the length of the $\sigma$-orbit containing the arrow $a$. Since $P_i$ has Loewy length $3$, $m=2$ must hold. In particular, $\sigma(a)$ is the unique arrow $\sigma(a)\colon j\to i$ such that $\pi(\sigma(a)a)\neq 0$.
Now, we can restrict the permutation $\sigma$ to the subset consisting of all arrows which are not loops. Then we define a finite undirected graph $G$ as follows: The set of vertices of $G$ bijectively corresponds to the set of vertices of $Q_A$, and the set of edges of $G$ is naturally given by the set of unordered pairs $\{a,\sigma(a)\}$ for all arrows $a$ of $Q_A$ which are not loops. Then $G$ is the desired graph, since $(Q_A)^{\circ}=Q_G$ by construction. \end{proof} As we mentioned before, the algebras $A$ and $\overline{A}:=A/\operatorname{soc}\nolimits A$ have the same Gabriel quiver $Q_A = Q_{\overline{A}}$. Therefore, $(Q_A)^{\circ}= (Q_{\overline{A}})^{\circ}$ is the double quiver $Q_G$ of a common finite connected graph $G$ with no loops by Definition-Proposition \ref{graphA}. \begin{theorem} \label{reduced ver} Let $A$ be a basic connected finite dimensional symmetric RCZ $\mathbf{k}$-algebra and $\overline{A}:=A/\operatorname{soc}\nolimits A$. Let $Q_A$ be the Gabriel quiver of $A$ and $Q:=(Q_A)^{\circ}$ the quiver obtained from $Q_A$ by deleting all loops. \begin{enumerate}[\rm (1)] \item The following conditions are equivalent. \begin{enumerate}[\rm (a)] \item $\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ is finite. \item $\mathop{2\text{-}\mathsf{silt}}\nolimits \overline{A}$ is finite. \item For every map $\epsilon \colon Q_{0} \rightarrow \{\pm 1\}$, the underlying graph of the single quiver $Q_{\epsilon}$ is a disjoint union of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$. \end{enumerate} \item Fix any vertex $v\in Q_{0}$. If one of the equivalent conditions in \textnormal{(1)} is satisfied, then the following equalities hold. \begin{align} \# \mathop{2\text{-}\mathsf{tilt}}\nolimits A = \# \mathop{2\text{-}\mathsf{silt}}\nolimits \overline{A} = 2 \cdot \sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=+1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}{Q}_{\epsilon}.
\notag \end{align} \end{enumerate} \end{theorem} \begin{proof} (1) It follows from Propositions \ref{RSZ}(2) and \ref{reduction}. (2) By Proposition \ref{reduction}, we have $\# \mathop{2\text{-}\mathsf{tilt}}\nolimits A = \# \mathop{2\text{-}\mathsf{silt}}\nolimits \overline{A}$. We show the second equality. Let $v$ be a vertex in $Q$. By Proposition \ref{RSZ}(1), we have \begin{align} \# \mathop{2\text{-}\mathsf{silt}}\nolimits \overline{A} = \sum_{\epsilon \colon Q_{0} \rightarrow\{\pm 1\}} \#\mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op} = \sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=+1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op} + \sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=-1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op}. \notag \end{align} For a map $\epsilon\colon Q_{0}\to \{ \pm 1\}$, we define a map $-\epsilon\colon Q_{0}\to \{ \pm 1\}$ by $(-\epsilon)(i):=-\epsilon(i)$ for all $i\in Q_{0}$. Since $Q$ is the double quiver of the graph $G$ of $A$, we have $Q_{-\epsilon}=(Q_{\epsilon})^{\mathrm{op}}$. This implies that $Q_{\epsilon}$ and $Q_{-\epsilon}$ have the same underlying graph $\Delta$. By our assumption, $\Delta$ is a disjoint union of Dynkin graphs. Thus we obtain $\# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q_{\epsilon}= \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q_{-\epsilon}$ because the number of non-isomorphic tilting modules over a path algebra of Dynkin type does not depend on orientation, see Proposition \ref{tilting number}. 
Hence we have \begin{align} \sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=+1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q_{\epsilon} =\sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=+1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op} = \sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=-1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op}. \notag \end{align} This finishes the proof. \end{proof} For convenience, we restate Theorem \ref{reduced ver} in terms of undirected graphs. Let $G=(G_0,G_1)$ be a finite connected graph with no loops, where $G_0$ is the set of vertices and $G_1$ is the set of edges. For each map $\epsilon \colon G_0\to \{\pm1\}$, let $G_{\epsilon}$ be the graph obtained from $G$ by removing all edges between vertices $i,j$ with $\epsilon(i)=\epsilon(j)$. By construction, $G_{\epsilon}$ is precisely the underlying graph of the quiver $Q_{\epsilon}$, where $Q:=Q_G$ is the double quiver of $G$ with vertex set $Q_0=G_0$. In particular, $Q_{\epsilon}$ is a disjoint union of Dynkin quivers if and only if $G_{\epsilon}$ is a disjoint union of Dynkin graphs. Now, we recall that, for a quiver $Q$ whose underlying graph $\Delta$ is a disjoint union of Dynkin graphs, the number $\#\mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q$ does not depend on the orientation of $Q$ and is given by (\ref{disjoint Dynkin}). We then set $|\Delta| := \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q$. \begin{corollary} \label{number by graph} Let $A$ be a basic connected finite dimensional symmetric RCZ $\mathbf{k}$-algebra and $G$ the graph of $A$. \begin{enumerate}[\rm (1)] \item The following conditions are equivalent. \begin{enumerate}[\rm (a)] \item $\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ is finite.
\item For every map $\epsilon \colon G_0 \to \{\pm1\}$, the graph $G_{\epsilon}$ is a disjoint union of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$. \end{enumerate} \item Assume that, for any $\epsilon \colon G_0\to \{\pm1\}$, the graph $G_{\epsilon}$ is a disjoint union of Dynkin graphs $\Delta_{\epsilon,\lambda}$ ($\lambda \in \Lambda_{\epsilon}$). Then for a fixed vertex $v$ of $G$, the number $\#\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ is equal to \end{enumerate} \begin{align}\label{number G} 2\cdot \sum_{\substack{\epsilon\colon G_0 \to \{\pm1\} \\ \epsilon(v)=+1}} |G_{\epsilon}| \ = \ 2\cdot \sum_{\substack{\epsilon\colon G_0 \to \{\pm1\} \\ \epsilon(v)=+1}} \prod_{\lambda\in \Lambda_{\epsilon}} |\Delta_{\epsilon,\lambda}|. \end{align} \end{corollary} \begin{proof} Let $Q:=(Q_A)^{\circ}$, where $Q_A$ is the Gabriel quiver of $A$. Then $Q=Q_G$ holds by Definition-Proposition \ref{graphA}. Then the assertion follows from Theorem \ref{reduced ver} since $G_{\epsilon}=Q_{\epsilon}$ for any map $\epsilon \colon G_0\to \{\pm1\}$. \end{proof} \begin{definition} \label{number GG} Keeping the notations in Corollary \ref{number by graph}(2), we write $||G||$ for the number given by the left hand side of (\ref{number G}). \end{definition} \begin{figure} \caption{A half of single quivers of the double quiver of $\mathbb{E}_6$.} \label{Fig.E6} \end{figure} \begin{example} \label{example-E6} \begin{enumerate}[\rm (1)] \item Let $Q$ be a quiver whose underlying graph $\Delta$ is one of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$. Let $A$ be the trivial extension of the path algebra $\mathbf{k}Q$ of $Q$ by a minimal co-generator. It is easy to see that $A$ is a symmetric RCZ algebra if $Q$ is bipartite. In this case, the Gabriel quiver of $A$ is precisely the double quiver $Q_{\Delta}$ of $\Delta$, in other words, the graph of $A$ is $\Delta$. 
On the other hand, $Q^{\rm op}$ also determines a symmetric RCZ algebra, which is naturally isomorphic to $A$. \item Let $\Delta=\mathbb{E}_{6}$ and let $A$ be the symmetric RCZ algebra obtained as in (1). In Figure \ref{Fig.E6}, we describe the single quivers of $Q:=Q_{\mathbb{E}_6}$ associated to maps $\epsilon$ with $\epsilon(6)=+1$. Here, the notation $i^{\sigma}$ denotes the vertex $i$ with $\epsilon(i)=\sigma \in \{\pm1\}$. Using Corollary \ref{number by graph}, we find that there are $1700$ isomorphism classes of basic two-term tilting complexes over $A$, as in the list of Theorem \ref{theorem2}. \end{enumerate} \end{example} \section{Proof of Main Theorem} \label{main theorem} In this section, we prove Theorems \ref{theorem1} and \ref{theorem2}. Throughout this section, $G$ is a finite connected graph with no loops. \subsection{Proof of Theorem \ref{theorem1}} By Corollary \ref{number by graph}(1), the proof is completed by the following proposition. \begin{proposition}\label{tothm1} Let $G$ be a connected finite graph with no loops. Then the graph $G_{\epsilon}$ is a disjoint union of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$ for every map $\epsilon \colon G_{0} \to\{\pm 1\}$ if and only if $G$ is one of the graphs in the list of Theorem \ref{theorem1}. \end{proposition} In the following, we give a proof of Proposition \ref{tothm1} by removing extended Dynkin graphs from the collection of subgraphs $G_{\epsilon}$ of $G$. We start with removing extended Dynkin graphs of type $\widetilde{\mathbb{A}}$. A graph is called an \emph{$n$-cycle} if it is a cycle with exactly $n$ vertices. In particular, it is called an \emph{odd-cycle} if $n$ is odd, and an \emph{even-cycle} if $n$ is even.
\begin{lemma}\label{remove-ext-A} The following statements are equivalent: \begin{enumerate}[\rm (1)] \item There exists a map $\epsilon \colon G_{0} \to\{\pm 1\}$ such that $G_{\epsilon}$ contains an extended Dynkin graph of type $\widetilde{\mathbb{A}}$ as a subgraph. \item $G$ contains an even-cycle as a subgraph. \end{enumerate} \end{lemma} \begin{proof} (2)$\Rightarrow$(1): Let $G'$ be a subgraph of $G$ which is an even-cycle. Since an even-cycle is a bipartite graph, there exists a map $\epsilon \colon G_{0} \to \{\pm 1\}$ such that $G_{\epsilon}$ contains $G'$ as a subgraph. Hence the assertion follows. (1)$\Rightarrow$(2): Assume that for some map $\epsilon\colon G_{0}\to \{ \pm 1\}$, the graph $G_{\epsilon}$ contains an extended Dynkin graph $G'$ of type $\widetilde{\mathbb{A}}$. Since $G_{\epsilon}$ is bipartite, so is $G'$. Hence $G'$ is an even-cycle and a subgraph of $G$. This finishes the proof. \end{proof} By Lemma \ref{remove-ext-A}, we may assume that $G$ contains no even-cycle as a subgraph. In particular, $G$ has no multiple edges. We give a connection between our graphs $G_{\epsilon}$ and subtrees of $G$. Recall that a \emph{subtree} of $G$ is a connected subgraph of $G$ without cycles. \begin{proposition}\label{subtree-bipartite} Assume that $G$ contains no even-cycle as a subgraph. Let $G'$ be a connected graph. Then the following statements are equivalent. \begin{enumerate}[\rm (1)] \item There exists a map $\epsilon \colon G_{0} \to\{ \pm 1\}$ such that $G_{\epsilon}$ contains $G'$ as a subgraph. \item $G'$ is a subtree of $G$. \end{enumerate} In particular, there exists a natural two-to-one correspondence between the set of connected graphs of the form $G_{\epsilon}$ and the set of subtrees of $G$. \end{proposition} \begin{proof} (2)$\Rightarrow$(1) is clear. We show (1)$\Rightarrow$(2). Since $G$ has no even-cycle as a subgraph, the bipartite graph $G_{\epsilon}$ contains no cycle by Lemma \ref{remove-ext-A}, that is, $G_{\epsilon}$ is a forest.
Hence the connected subgraph $G'$ of $G_{\epsilon}$ contains no cycle, and since every edge of $G_{\epsilon}$ is an edge of $G$, it follows that $G'$ is a subtree of $G$. \end{proof} For a tree, we have the following result. \begin{corollary}\label{Dynkin-case} Assume $G$ is a tree. Then the graph $G_{\epsilon}$ is a disjoint union of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$ for each map $\epsilon \colon G_{0} \to \{\pm 1\}$ if and only if $G$ is a Dynkin graph of type $\mathbb{A}$, $\mathbb{D}$ or $\mathbb{E}$. \end{corollary} \begin{proof} It is well known that $G$ is a Dynkin graph if and only if all subtrees of $G$ are Dynkin graphs. The assertion follows from Proposition \ref{subtree-bipartite}. \end{proof} Next, we remove extended Dynkin graphs of type $\widetilde{\mathbb{D}}$. Assume that $G$ contains at least two odd-cycles. Then there exists a subtree $G'$ of $G$ such that $G'$ is an extended Dynkin graph of type $\widetilde{\mathbb{D}}$. Moreover, by Proposition \ref{subtree-bipartite}, there exists a map $\epsilon \colon G_{0} \to \{\pm 1\}$ such that $G_\epsilon$ contains an extended Dynkin graph of type $\widetilde{\mathbb{D}}$ as a subgraph. Hence we may assume that $G$ contains at most one odd-cycle. By Corollary \ref{Dynkin-case}, it is enough to consider the case where $G$ contains exactly one odd-cycle. Namely, $G$ consists of an odd-cycle such that each vertex $v$ in the odd-cycle is attached to a tree $T_{v}$. \begin{align} \xymatrix@C=4mm@R=3mm{ \bullet\ar@{-}[r]&\bullet\ar@{-}[r]&v_{1}\ar@{-}[dr]\ar@{-}[dl]\ar@{-}[r]&\bullet&\bullet\ar@{-}[d]&\\ &v_{2}\ar@{-}[rr]&&v_{3}\ar@{-}[r]&\bullet\ar@{-}[r]&\bullet\\ }\notag \end{align} \begin{lemma}\label{remove-ext-D} Fix an integer $k\ge 1$ and $n:=2k+1$. Assume that $G$ consists of an $n$-cycle such that each vertex $v$ in the $n$-cycle is attached to a tree $T_{v}$.
Then the following statements are equivalent: \begin{enumerate}[\rm (1)] \item There exists a map $\epsilon \colon G_{0} \to\{ \pm 1\}$ such that $G_{\epsilon}$ contains an extended Dynkin graph of type $\widetilde{\mathbb{D}}$ as a subgraph. \item $G$ contains an extended Dynkin graph of type $\widetilde{\mathbb{D}}$ as a subgraph. \item $G$ satisfies one of the following conditions. \begin{enumerate}[\rm (a)] \item There is a vertex $v$ in the $n$-cycle whose degree is at least four. \item There is a vertex $v$ in the $n$-cycle whose degree is exactly three and such that $T_{v}$ is not a Dynkin graph of type $\mathbb{A}$. \item $k\ge 2$ and there are at least two vertices in the $n$-cycle whose degrees are at least three. \end{enumerate} \end{enumerate} \end{lemma} \begin{proof} (1)$\Leftrightarrow$(2) follows from Proposition \ref{subtree-bipartite}. Moreover, we can easily check (2)$\Leftrightarrow$(3) because $\widetilde{\mathbb{D}}_{4}$ has exactly one vertex whose degree is exactly four and $\widetilde{\mathbb{D}}_{l}$ ($l\ge 5$) has exactly two vertices whose degrees are exactly three. \end{proof} Fix an integer $k \ge 1$ and $n:=2k+1$. By Lemma \ref{remove-ext-D}, we may assume that $G$ is one of the following graphs: \begin{align} \begin{picture}(400,200)(0,0) \put(95,5){$k=1$} \put(290,5){$k\ge 2$} \put(70,200){\xymatrix@C=3mm@R=3mm{ &1_{l_{1}}\ar@{-}[d]&&\\ &\vdots\ar@{-}[d]&&\\ &1_{1}\ar@{-}[d]&&\\ &1\ar@{-}[rd]\ar@{-}[ld]&&\\ 2\ar@{-}[rr]\ar@{-}[d]&&3\ar@{-}[d]\\ 2_{1}\ar@{-}[d]&&3_{1}\ar@{-}[d]\\ \vdots\ar@{-}[d]&&\vdots\ar@{-}[d]\\ 2_{l_{2}}&&3_{l_{3}} }} \put(270,170){\xymatrix@C=4mm@R=4mm{ &1_{l_{1}}\ar@{-}[d]&\\ &\vdots\ar@{-}[d]&\\ &1_{1}\ar@{-}[d]&\\ &1\ar@{-}[rd]\ar@{-}[ld]&\\ 2\ar@{-}[d]&&n\ar@{-}[d]\\ 3\ar@{.}[rr]&&{n-1} }} \end{picture}\notag \end{align} Finally, we remove extended Dynkin graphs of type $\widetilde{\mathbb{E}}$. \begin{lemma}\label{remove-ext-E} Fix an integer $k \ge 1$ and $n:=2k+1$.
\begin{enumerate}[\rm (1)] \item Assume that $k=1$. The following graphs $(\mathrm{i})$, $(\mathrm{ii})$ and $(\mathrm{iii})$ are the minimal graphs containing an extended Dynkin graph of type $\widetilde{\mathbb{E}}$. \begin{align} \begin{picture}(400,160)(0,0) \put(25,150){\textnormal{(i)}} \put(30,150){\xymatrix@C=3mm@R=3mm{ &1_{1}\ar@{-}[d]&\\ &1\ar@{-}[rd]\ar@{-}[ld]&\\ 2\ar@{-}[d]\ar@{-}[rr]&&3\ar@{-}[d]\\ 2_{1}&&3_{1}\ar@{-}[d]\\ &&3_{2} }} \put(145,150){\textnormal{(ii)}} \put(265,150){\textnormal{(iii)}} \put(150,150){\xymatrix@C=3mm@R=3mm{ &1\ar@{-}[rd]\ar@{-}[ld]&\\ 2\ar@{-}[rr]\ar@{-}[d]&&3\ar@{-}[d]\\ 2_{1}\ar@{-}[d]&&3_{1}\ar@{-}[d]\\ 2_{2}&&3_{2}\ar@{-}[d]\\ &&3_{3} }} \put(270,150){\xymatrix@C=3mm@R=3mm{ &1\ar@{-}[rd]\ar@{-}[ld]&\\ 2\ar@{-}[rr]\ar@{-}[d]&&3\ar@{-}[d]\\ 2_{1}&&3_{1}\ar@{-}[d]\\ &&3_{2}\ar@{-}[d]\\ &&3_{3}\ar@{-}[d]\\ &&3_{4}\ar@{-}[d]\\ &&3_{5} }} \end{picture}\notag \end{align} \item Assume that $k\ge 2$. The following graphs $(\mathrm{iv})$ and $(\mathrm{v})$ are the minimal graphs containing an extended Dynkin graph of type $\widetilde{\mathbb{E}}$. \begin{align} \begin{picture}(300,145)(0,0) \put(60,5){\textnormal{(iv)} $k= 2$} \put(210,5){\textnormal{(v)} $k\ge 3$} \put(45,130){\xymatrix@C=4mm@R=4mm{ &1_{2}\ar@{-}[d]&\\ &1_{1}\ar@{-}[d]&\\ &1\ar@{-}[rd]\ar@{-}[ld]&\\ 2\ar@{-}[d]&&n\ar@{-}[d]\\ 3\ar@{.}[rr]&&{n-1} }} \put(195,130){\xymatrix@C=4mm@R=4mm{ &1_{1}\ar@{-}[d]&\\ &1\ar@{-}[rd]\ar@{-}[ld]&\\ 2\ar@{-}[d]&&n\ar@{-}[d]\\ 3\ar@{-}[d]&&n-1\ar@{-}[d]\\ 4\ar@{.}[rr]&&n-2 }} \end{picture}\notag \end{align} \end{enumerate} \end{lemma} \begin{proof} We can easily find extended Dynkin graphs $\widetilde{\mathbb{E}}_{6}$, $\widetilde{\mathbb{E}}_{7}$ and $\widetilde{\mathbb{E}}_{8}$ in the graphs above. \end{proof} Now we are ready to prove Proposition \ref{tothm1}. \begin{proof}[Proof of Proposition \ref{tothm1}] If $G$ is a tree, then the assertion follows from Corollary \ref{Dynkin-case}. We assume that $G$ is not a tree. 
By Lemma \ref{remove-ext-A}, we may assume that $G$ does not contain even-cycles as subgraphs. Then $G$ does not contain extended Dynkin graphs as subgraphs if and only if $G$ belongs to one of the following classes: \begin{itemize} \item $(\mathrm{I}_{n})_{n\ge 4}$ in Theorem \ref{theorem1}(2), \item proper connected non-tree subgraphs appearing in Lemma \ref{remove-ext-E}(i)--(v). \end{itemize} The second class coincides with the graphs $(\widetilde{\mathbb{A}}_{n-1})_{n:{\rm odd}}$, $(\mathrm{I}_{n})_{4\le n \le 8}$, $(\mathrm{II}_{n})_{5\le n\le 8}$, (III), (IV) and (V) in Theorem \ref{theorem1}(2). Hence the assertion follows from Proposition \ref{subtree-bipartite}. \end{proof} We finish this subsection with the proof of Theorem \ref{theorem1}. \begin{proof}[Proof of Theorem \ref{theorem1}] The result follows from Corollary \ref{number by graph}(1) and Proposition \ref{tothm1}. \end{proof} \subsection{Proof of Theorem \ref{theorem2}} We just compute the number of two-term tilting complexes for each graph in the list of Theorem \ref{theorem1}. Our calculation is based on Theorem \ref{reduced ver} and Corollary \ref{number by graph}. For our purpose, we assume that $G$ is a graph appearing in the list of Theorem \ref{theorem1} and let $A$ be a basic connected finite dimensional symmetric RCZ algebra whose graph is $G$. Keeping the above notation, we determine the number $\#\mathop{2\text{-}\mathsf{tilt}}\nolimits A$, or equivalently, $||G||$ in Definition \ref{number GG}. First, for types $\mathbb{A}$ and $\widetilde{\mathbb{A}}$, the number was already computed in \cite{Aoki18}: \begin{proposition} \cite[Theorem 1.2]{Aoki18} \label{type A} The following equality holds. \begin{align} \# \mathop{2\text{-}\mathsf{tilt}}\nolimits A = \begin{cases} \binom{2n}{n} &\text{if $G=\mathbb{A}_{n}$,}\\ 2^{2n-1} &\text{if $G=\widetilde{\mathbb{A}}_{n-1}$ for odd $n$.} \end{cases}\notag \end{align} \end{proposition} Secondly, we consider the case where $G$ is a Dynkin graph of type $\mathbb{D}$.
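Before carrying out the remaining computations, we note that the counting formula (\ref{number G}) lends itself to a brute-force check for small graphs. The following Python sketch is only an illustration and is not part of the proofs; the tilting numbers $|\mathbb{A}_n|$ and $|\mathbb{D}_n|$ are the ones used below, while the values assumed for $\mathbb{E}_6$, $\mathbb{E}_7$ and $\mathbb{E}_8$ ($418$, $2431$ and $17342$) are quoted from the literature, not derived here.

```python
from itertools import product
from math import comb

# Tilting numbers |Delta| for connected Dynkin trees.  The values for
# E6, E7, E8 are assumed from the literature, not derived here.
def tilt_A(n):
    return comb(2 * n, n) // (n + 1)          # Catalan number C_n

def tilt_D(n):
    return (3 * n - 4) * comb(2 * n - 2, n - 1) // (2 * n)

E_VALUES = {(1, 2, 2): 418, (1, 2, 3): 2431, (1, 2, 4): 17342}

def arm_length(center, first, adj):
    # length of the chain starting at `first`, walking away from `center`
    length, prev, cur = 1, center, first
    while True:
        nxt = [w for w in adj[cur] if w != prev]
        if not nxt:
            return length
        prev, cur = cur, nxt[0]
        length += 1

def tilt_tree(vertices, adj):
    # |Delta| if the tree is Dynkin of type A, D or E, else None
    n = len(vertices)
    branch = [v for v in vertices if len(adj[v]) >= 3]
    if not branch:
        return tilt_A(n)                      # a path
    if len(branch) > 1 or len(adj[branch[0]]) > 3:
        return None
    arms = tuple(sorted(arm_length(branch[0], w, adj) for w in adj[branch[0]]))
    if arms[0] == 1 and arms[1] == 1:
        return tilt_D(n)
    return E_VALUES.get(arms)

def norm_G(vertices, edges):
    # ||G||: twice the sum over sign maps eps with eps(v0)=+1 of |G_eps|;
    # returns None if some G_eps is not a disjoint union of Dynkin graphs
    vertices = list(vertices)
    v0, total = vertices[0], 0
    for signs in product([+1, -1], repeat=len(vertices) - 1):
        eps = dict(zip(vertices[1:], signs))
        eps[v0] = +1
        adj = {v: [] for v in vertices}
        for a, b in edges:
            if eps[a] != eps[b]:              # edge survives in G_eps
                adj[a].append(b)
                adj[b].append(a)
        seen, prod = set(), 1
        for v in vertices:                    # component decomposition
            if v in seen:
                continue
            comp, stack = [], [v]
            seen.add(v)
            while stack:
                u = stack.pop()
                comp.append(u)
                for w in adj[u]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            t = tilt_tree(comp, adj)
            if t is None:
                return None
            prod *= t
        total += prod
    return 2 * total

def path_graph(n):   # Dynkin graph A_n
    return list(range(1, n + 1)), [(i, i + 1) for i in range(1, n)]

def D_graph(n):      # Dynkin graph D_n, labeled as in the proof below
    return list(range(1, n + 1)), [(1, 3), (2, 3)] + [(i, i + 1) for i in range(3, n)]
```

For instance, `norm_G(*path_graph(3))` returns $\binom{6}{3}=20$ and `norm_G(*D_graph(4))` returns $6\cdot 4^{2}-2c_{2}=84$, in agreement with the formulas established in this subsection.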
For simplicity, let $c_{0}=1$ and $c_{l}:=\binom{2l}{l}$ for each $l\ge 1$. Then we have $||\mathbb{A}_l|| = c_l$ for all $l\geq 1$ by Proposition \ref{type A}. In addition, let $||\mathbb{A}_0||:=2$. \begin{proposition}\label{type D} Let $n\ge 4$ and $G=\mathbb{D}_n$. Then we have \begin{align} \# \mathop{2\text{-}\mathsf{tilt}}\nolimits A = 6\cdot 4^{n-2} - 2 c_{n-2}.\notag \end{align} \end{proposition} \begin{proof} We label the vertices of $G$ as follows. \begin{align} \xymatrix@R=1mm{ 1&&&&&&&\\ &3\ar@{-}[lu]\ar@{-}[ld]\ar@{-}[r]&4\ar@{-}[r]&\cdots\ar@{-}[r]&n.\\ 2&&&&&&& }\notag \end{align} By Corollary \ref{number by graph}, we have \begin{equation} \label{for D} \# \mathop{2\text{-}\mathsf{tilt}}\nolimits A =2 \cdot \sum_{\substack{\epsilon\colon G_{0} \rightarrow \{ \pm 1\} \\ \epsilon(3)=+1}} |{G}_{\epsilon}|. \end{equation} We study the right hand side of (\ref{for D}). Let $M$ be the set of maps $\epsilon \colon G_{0} \rightarrow \{\pm1\}$ such that $\epsilon(3)=+1$. Clearly, $M$ is a disjoint union of the following subsets: \begin{itemize} \item $M_{1}:=\{ \epsilon\in M \mid \epsilon(1)=\epsilon(2)=\epsilon(3) \}$. \item $M_{2}:=\{ \epsilon\in M \mid \epsilon(1)=-\epsilon(2)=\epsilon(3) \}$. \item $M_{3}:=\{ \epsilon\in M \mid -\epsilon(1)=\epsilon(2)=\epsilon(3) \}$. \item $M_{4}:=\{ \epsilon\in M \mid -\epsilon(1)=-\epsilon(2)=\epsilon(3)=\epsilon(4) \}$. \item $M_{5}:=\{ \epsilon\in M \mid -\epsilon(1)=-\epsilon(2)=\epsilon(3)=-\epsilon(4) \}=\bigsqcup_{t=4}^{n}M_{5}(t)$, where \begin{align} M_{5}(t):=\Bigl\{ \epsilon\in M_{5}\ \Bigl.\Bigr|\ t=\min\{ 4 \le j \le n \mid \epsilon(j)=\epsilon(j+1)\} \Bigr\}, \notag \end{align} with the convention $\epsilon(n+1):=\epsilon(n)$, so that $t=n$ for the maps whose signs alternate along $3,4,\ldots,n$. \end{itemize} We now compute $\mathsf{n}(i):=\sum_{\epsilon \in M_i} |G_{\epsilon}|$ for each $i\in\{ 1,\ldots, 5\}$. In the following pictures, the notation $\xymatrix{i\ar@{~}[r]&j}$ stands for an edge connecting $i$ and $j$ if $\epsilon(i)\neq\epsilon(j)$, and for no edge between them otherwise. (i) Let $\epsilon \in M_{1}$.
Then $G_{\epsilon}$ is given by \begin{align} \xymatrix@R=1mm{ 1&&&&&\\ &3\ar@{~}[r]&4\ar@{~}[r]&\cdots\ar@{~}[r]&n-1\ar@{~}[r]&n.\\ 2&&&&& }\notag \end{align} Let $G'$ be the subgraph of $G$ obtained by removing the vertices $\{1,2\}$. Then we have $|G_{\epsilon}| = |G'_{\epsilon|_{\{3,\ldots,n\}}}|$. Since $G'$ is the Dynkin graph $\mathbb{A}_{n-2}$, we obtain \[ 2\mathsf{n}(1)= 2 \cdot \sum_{\substack{\epsilon\colon G_0' \to \{\pm1\} \\ \epsilon(3)=+1}} |G'_{\epsilon}| =||\mathbb{A}_{n-2}|| = c_{n-2}, \] where the last equality follows from Proposition \ref{type A}. By arguments similar to (i), we can calculate the other cases. (ii) For each $\epsilon\in M_{2}$, the graph $G_{\epsilon}$ is given by \begin{align} \xymatrix@R=1mm{ 1&&&&&\\ &3\ar@{-}[ld]\ar@{~}[r]&4\ar@{~}[r]&\cdots\ar@{~}[r]&n-1\ar@{~}[r]&n.\\ 2&&&&& }\notag \end{align} Then we can check $2\mathsf{n}(2)=||\mathbb{A}_{n-1}||-||\mathbb{A}_{n-2}||=c_{n-1}-c_{n-2}$. (iii) By the symmetry of $G$, we have $\mathsf{n}(3)=\mathsf{n}(2)$. (iv) Let $\epsilon\in M_{4}$. Then $G_{\epsilon}$ is described as \begin{align} \xymatrix@R=1mm{ 1&&&&&\\ &3\ar@{-}[lu]\ar@{-}[ld]&4\ar@{~}[r]&\cdots\ar@{~}[r]&n-1\ar@{~}[r]&n.\\ 2&&&&& }\notag \end{align} Thus we find that $2\mathsf{n}(4)= |\mathbb{A}_3| \cdot ||\mathbb{A}_{n-3}|| = 5c_{n-3}$. (v) For $\epsilon\in M_{5}(t)$, the graph $G_{\epsilon}$ is given by \begin{align} \xymatrix@R=1mm{ 1&&&&&&&\\ &3\ar@{-}[lu]\ar@{-}[ld]\ar@{-}[r]&4& \ar@{-}[l]\cdots\ar@{-}[r]&t&t+1\ar@{~}[r]&\cdots\ar@{~}[r]&n.\\ 2&&&&&&& }\notag \end{align} Then we obtain \begin{align} 2\mathsf{n}(5) &=\sum_{t=4}^{n}|\mathbb{D}_t| \cdot ||\mathbb{A}_{n-t}|| =\frac{3n-4}{2n}c_{n-1}\cdot 2c_{0}+\sum_{t=4}^{n-1}\frac{3t-4}{2t}c_{t-1}c_{n-t}\notag\\ &=\frac{3n-4}{2n}c_{n-1}+\sum_{t=4}^{n}\frac{3t-4}{2t}c_{t-1}c_{n-t}.\notag \end{align} To finish the proof, we need the following lemma.
\begin{lemma} \label{binom numbers} For any positive integer $n$, the following equalities hold: \begin{enumerate}[(1)] \item $\displaystyle{\sum_{t=1}^{n}c_{t-1}c_{n-t}=4^{n-1}}$. \item $\displaystyle{\sum_{t=1}^{n}\frac{1}{t}c_{t-1}c_{n-t}=\frac{1}{2}c_{n}}$. \end{enumerate} \end{lemma} \begin{proof} The equality (1) is well known. The equality (2) is obtained by \begin{align} \sum_{t=1}^{n}\frac{1}{t}c_{t-1}c_{n-t} =\frac{n+1}{2}\sum_{t=1}^{n}C_{t-1}C_{n-t} =\frac{n+1}{2}C_{n}=\frac{1}{2}c_{n},\notag \end{align} where $C_{n} := \frac{1}{n+1}c_{n}$ is the $n$-th Catalan number. Here the first equality follows by writing $\frac{1}{t}c_{t-1}c_{n-t}=(n-t+1)C_{t-1}C_{n-t}$ and symmetrizing the sum under $t\mapsto n+1-t$, and the second from the Catalan recurrence $\sum_{t=1}^{n}C_{t-1}C_{n-t}=C_{n}$. \end{proof} By Lemma \ref{binom numbers}, we obtain the equality \begin{align} \sum_{t=4}^{n}\frac{3t-4}{2t}c_{t-1}c_{n-t} &=\frac{3}{2}\sum_{t=4}^{n} c_{t-1}c_{n-t}-2\sum_{t=4}^{n}\frac{1}{t}c_{t-1}c_{n-t}\notag\\ &=\frac{3}{2}(4^{n-1} -c_{n-1} -2 c_{n-2} -6 c_{n-3}) -2(\frac{1}{2}c_{n}-c_{n-1}-c_{n-2}-2c_{n-3})\notag\\ &= 6\cdot 4^{n-2}-c_{n}+\frac{1}{2}c_{n-1}-c_{n-2}-5c_{n-3}.\notag \end{align} By (i)--(v), we have \begin{align} \# \mathop{2\text{-}\mathsf{tilt}}\nolimits A &=c_{n-2}+2(c_{n-1}-c_{n-2})+5c_{n-3}+6\cdot 4^{n-2}-c_{n}+\frac{2n-2}{n}c_{n-1}-c_{n-2}-5c_{n-3}\notag\\ &=6\cdot 4^{n-2}-c_{n}+\frac{4n-2}{n}c_{n-1}-2c_{n-2}\notag\\ &=6\cdot 4^{n-2}-2c_{n-2},\notag \end{align} where the last equality follows from $c_{n}=\frac{2(2n-1)}{n}c_{n-1}$. \end{proof} Thirdly, we give the enumeration for type ($\mathrm{I}$). The number is obtained by using the result for type $\mathbb{D}$. \begin{proposition} \label{type I} If $G=\mathrm{I}_{n}$, then we have \begin{align} \#\mathop{2\text{-}\mathsf{tilt}}\nolimits A = 6\cdot 4^{n-2} + 2c_{n} -4c_{n-1}-4c_{n-2}.\notag \end{align} \end{proposition} \begin{proof} We label the vertices of $G$ as follows.
\begin{align} \xymatrix@R=1mm{ 1\ar@{-}[dd]&&&&&&&\\ &3\ar@{-}[lu]\ar@{-}[ld]\ar@{-}[r]&4\ar@{-}[r]&\cdots\ar@{-}[r]&n.\\ 2&&&&&&& }\notag \end{align} By a method similar to that in the proof of Proposition \ref{type D}, we calculate the right-hand side of \begin{align} \# \mathop{2\text{-}\mathsf{tilt}}\nolimits A = 2 \sum_{\substack{\epsilon\colon G_{0} \rightarrow \{ \pm 1\} \\ \epsilon(3)=+1}} |G_{\epsilon}|.\notag \end{align} Let $M$ and $M_{i}$ $(1\leq i \leq 5)$ be the sets of maps given in the proof of Proposition \ref{type D} and let $\mathsf{m}(i):=\sum_{\epsilon\in M_{i}} |G_{\epsilon}|$. For each map $\epsilon\in M_{1}\sqcup M_{4}\sqcup M_{5}$, we have $G_{\epsilon}=(\mathbb{D}_n)_{\epsilon}$. Hence for each $i\in \{1,4,5\}$, we have \begin{align} \mathsf{m}(i)=\sum_{\epsilon\in M_{i}} |G_{\epsilon}|=\sum_{\epsilon\in M_{i}} |(\mathbb{D}_n)_{\epsilon}| = \mathsf{n}(i).\notag \end{align} Since $\mathsf{m}(2)=\mathsf{m}(3)$ holds by the symmetry of $G$, we have only to calculate $\mathsf{m}(2)$. For each map $\epsilon\in M_{2}$, the graph $G_{\epsilon}$ is given by \begin{align} \xymatrix@R=1mm{ 1\ar@{-}[dd]&&&&&\\ &3\ar@{-}[ld]\ar@{~}[r]&4\ar@{~}[r]&\cdots\ar@{~}[r]&n-1\ar@{~}[r]&n.\\ 2&&&&& }\notag \end{align} Then the calculation of $\mathsf{m}(2)$ is reduced to that for Dynkin graphs of type $\mathbb{A}$. In fact, let $G'$ be the Dynkin graph $\mathbb{A}_{n}$, with the vertices labeled along the path $1-2-3-\cdots-n$. Then we have \begin{align} \mathsf{m}(2) &= \sum_{\substack{\epsilon\colon G'_{0}\to \{ \pm 1\}\\ \epsilon(3)=+1}} |G'_{\epsilon}|- \sum_{\substack{\epsilon\colon G'_{0}\to \{ \pm 1\}\\ \epsilon(2)=\epsilon(3)=+1}} |G'_{\epsilon}| - \sum_{\substack{\epsilon\colon G'_{0}\to \{ \pm 1\} \\ -\epsilon(1)=-\epsilon(2)=\epsilon(3)=+1}} |G'_{\epsilon}|\notag \\ &=\frac{1}{2}||\mathbb{A}_{n}|| - \frac{1}{4} ||\mathbb{A}_{2}|| \cdot ||\mathbb{A}_{n-2}|| - \mathsf{n}(2)
\notag\\ &=\frac{1}{2}c_{n}-\frac{3}{2}c_{n-2}-\mathsf{n}(2).\notag \end{align} Therefore we obtain \begin{align} \#\mathop{2\text{-}\mathsf{tilt}}\nolimits A &= 2(\mathsf{m}(1)+\mathsf{m}(2)+\mathsf{m}(3)+\mathsf{m}(4)+\mathsf{m}(5))\notag\\ &= 2(\mathsf{n}(1)+2\mathsf{n}(2)+\mathsf{n}(4)+\mathsf{n}(5))-4\mathsf{n}(2)+4\mathsf{m}(2)\notag\\ &= || \mathbb{D}_{n}|| - 4\mathsf{n}(2) + 4\mathsf{m}(2) \notag\\ &= 6\cdot 4^{n-2} - 2c_{n-2} -4\mathsf{n}(2) + 2c_{n}-6c_{n-2}-4\mathsf{n}(2) \notag\\ &= 6\cdot 4^{n-2} + 2c_{n} - 4c_{n-1} - 4c_{n-2}. \notag \end{align} This finishes the proof. \end{proof} For the remaining cases $\mathbb{E}$, (II), (III), (IV) and (V), we just compute the number by using the formula (\ref{number G}) in Corollary \ref{number by graph}(2). \begin{proposition} \label{type sporadic} For each case \textnormal{$\mathbb{E}$, (II), (III), (IV)} and \textnormal{(V)}, the number $\#\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ is given by the table of Theorem \ref{theorem2}. \end{proposition} \begin{proof} The number for $\mathbb{E}_6$ is shown in Example \ref{example-E6}(2) and the others are similar. The details are left to the reader. \end{proof} \end{document}
\begin{document} \title{Noncommutative Local Systems} \setlength{\parindent}{0pt} \begin{center} \author{ {\textbf{Petr R. Ivankov*}\\ e-mail: * [email protected] \\ } } \end{center} \noindent \paragraph{} The Gelfand--Na\u{i}mark theorem supplies a one-to-one correspondence between commutative $C^*$-algebras and locally compact Hausdorff spaces. So any noncommutative $C^*$-algebra can be regarded as a generalization of a topological space. Generalizations of several topological invariants may be defined by algebraic methods. For example, the Serre--Swan theorem \cite{karoubi:k} states that complex topological $K$-theory coincides with the $K$-theory of $C^*$-algebras. This article is concerned with a generalization of local systems. The classical construction of a local system relies on the existence of a path groupoid. However, noncommutative geometry does not contain this object. Instead, there is a construction of local systems which uses covering projections, and the classical (commutative) notion of a covering projection has a noncommutative generalization. Noncommutative covering projections therefore supply a generalization of local systems. \tableofcontents \section{Motivation. Preliminaries} \paragraph{} Examples of local systems arise geometrically from vector bundles with flat connections, and topologically from linear representations of the fundamental group. The generalization of local systems requires the generalization of topological spaces given by the Gelfand--Na\u{i}mark theorem \cite{arveson:c_alg_invt}, which states the correspondence between locally compact Hausdorff topological spaces and commutative $C^*$-algebras. \begin{thm}\label{gelfand-naimark}\cite{arveson:c_alg_invt} Let $A$ be a commutative $C^*$-algebra and let $\mathcal{X}$ be the spectrum of $A$. There is a natural $*$-isomorphism $\gamma\colon A \to C_0(\mathcal{X})$.
\end{thm} \paragraph{} So any (noncommutative) $C^*$-algebra may be regarded as a generalized (noncommutative) locally compact Hausdorff topological space. We would like to generalize the notion of a local system. The classical notion of a local system uses the fundamental groupoid. \begin{thm} \cite{spanier:at} For each topological space $\mathcal{X}$ there is a category $\mathscr{P}(\mathcal{X})$ whose objects are points of $\mathcal{X}$, whose morphisms from $x_0$ to $x_1$ are the path classes with $x_0$ as origin and $x_1$ as end, and whose composite is the product of path classes. \end{thm} \begin{defn} \cite{spanier:at} The category $\mathscr{P}(\mathcal{X})$ is called the {\it category of path classes} of $\mathcal{X}$ or the {\it fundamental groupoid}. \end{defn} \begin{defn} \cite{spanier:at} A {\it local system} on a space $\mathcal{X}$ is a covariant functor from the fundamental groupoid of $\mathcal{X}$ to some category. For any category $\mathscr{C}$ there is a category of local systems on $\mathcal{X}$ with values in $\mathscr{C}$. Two local systems are said to be {\it equivalent} if they are equivalent objects in this category. \end{defn} \paragraph{} On the other hand, it is known that any connected groupoid is equivalent to a category with a single object, i.e. to a group regarded as a category. Any groupoid can be decomposed into connected components; therefore any local system corresponds to representations of groups. It means that in the case of a path-connected space $\mathcal{X}$ local systems can be defined by representations of the fundamental group $\pi_1(\mathcal{X})$. Moreover, there is an interrelationship between the fundamental group and covering projections. This circumstance supplies the following Definition \ref{borel_const_comm} and Lemma \ref{borel_local_system_app}, which do not explicitly use the fundamental groupoid. \begin{defn}\label{borel_const_comm}\cite{davis_kirk_at} Let $p : \mathcal{P} \to \mathcal{B}$ be a principal $G$-bundle.
Suppose $G$ acts on the left on a space $\mathcal{F}$, i.e. an action $G \times \mathcal{F} \to \mathcal{F}$ is given. Define the {\it Borel construction} \begin{equation*} \mathcal{P} \times_G \mathcal{F} \end{equation*} to be the quotient space $\mathcal{P} \times \mathcal{F} / \approx$ where \begin{equation*} \left(x, f\right) \approx \left(xg, g^{-1}f\right). \end{equation*} \end{defn} We next give one application of the Borel construction. Recall that a local coefficient system is a fiber bundle over $\mathcal{B}$ with fiber $A$ and structure group $G$, where $A$ is a (discrete) abelian group and $G$ acts via a homomorphism $G \to \mathrm{Aut}(A)$. \begin{lem}\label{borel_local_system_app}\cite{davis_kirk_at} Every local coefficient system over a path-connected (and semilocally simply connected) space $\mathcal{B}$ is of the form \begin{tikzpicture}\label{borel_local_comm} \matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em] { A & \widetilde{\mathcal{B}} \times_{\pi_1(\mathcal{B})}A \\ & \mathcal{B} \\}; \path[-stealth] (m-1-1) edge node [left] {} (m-1-2) (m-1-2) edge node [right] {$q$} (m-2-2); \end{tikzpicture} i.e., is associated to the principal $\pi_1(\mathcal{B})$-bundle given by the universal cover $\widetilde{\mathcal{B}}$ of $\mathcal{B}$, where the action is given by a homomorphism $\pi_1(\mathcal{B}) \to \mathrm{Aut}(A)$. \end{lem} In Lemma \ref{borel_local_system_app}, $\mathcal{B}$ is a topological space, $\widetilde{\mathcal{B}}$ denotes the universal covering space of $\mathcal{B}$, and $\pi = \pi_1(\mathcal{B})$ is the fundamental group of $\mathcal{B}$. The group $\pi$ equals the group of covering transformations $G(\widetilde{\mathcal{B}} | \mathcal{B})$ of the universal covering space. So the above construction does not need the fundamental groupoid; it uses a covering projection and the group of covering transformations. Noncommutative generalizations of these notions are developed in \cite{ivankov:infinite_cov_pr}, so local systems can be generalized.
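For finite groups acting on finite sets, the Borel construction of Definition \ref{borel_const_comm} can be computed directly as a set of equivalence classes. The following Python sketch is only an illustration and all names in it are ours: it takes $G=\mathbb{Z}_2$, the principal $\mathbb{Z}_2$-bundle $\mathbb{Z}_4\to\mathbb{Z}_2$, $x\mapsto x\bmod 2$, and a two-point fibre $\mathcal{F}$ on which the nontrivial element of $G$ acts by the swap, and forms the quotient of $\mathcal{P}\times\mathcal{F}$ by $(x,f)\approx(xg,g^{-1}f)$.

```python
from itertools import product

# G = Z_2 = {0, 1} under addition mod 2; every element is its own inverse.
G = [0, 1]

# Principal Z_2-bundle P -> B: P = Z_4 covers B = Z_2 via x -> x mod 2,
# with the right G-action x.g = x + 2g (mod 4).
P = [0, 1, 2, 3]

def right_action(x, g):
    return (x + 2 * g) % 4

# Left G-action on the fibre F = {0, 1}: the nontrivial element swaps points.
F = [0, 1]

def left_action(g, f):
    return f ^ g

def borel(P, F):
    """P x_G F: the classes of (x, f) under (x, f) ~ (x.g, g^{-1}.f)."""
    classes, seen = [], set()
    for x, f in product(P, F):
        if (x, f) in seen:
            continue
        # g^{-1} = g in Z_2, so left_action(g, f) computes g^{-1}.f here
        orbit = frozenset((right_action(x, g), left_action(g, f)) for g in G)
        seen |= orbit
        classes.append(orbit)
    return classes

total_space = borel(P, F)
# |P| * |F| / |G| = 4 classes; projecting a representative (x, f) to
# x mod 2 is well defined, so P x_G F fibres over B = Z_2 with fibre F.
```

Here `borel(P, F)` returns four two-element classes, and all representatives within one class project to the same base point, which exhibits $\mathcal{P}\times_G\mathcal{F}$ as a fibre bundle over the base in this toy case.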
We may summarize several properties of the Gelfand--Na\u{i}mark correspondence with the following dictionary. \newline \break \begin{tabular}{|c|c|} \hline TOPOLOGY & ALGEBRA\\ \hline Locally compact space & $C^*$-algebra\\ Covering projection & Noncommutative covering projection \\ Group of covering transformations & Noncommutative group of covering transformations \\ Local system & ? \\ \hline \end{tabular} \newline \newline \break This article assumes elementary knowledge of the following subjects: \begin{enumerate} \item Set theory \cite{halmos:set}, \item Category theory \cite{spanier:at}, \item Algebraic topology \cite{spanier:at}, \item $C^*$-algebras and operator theory \cite{pedersen:ca_aut}, \item Differential geometry \cite{koba_nomi:fgd}, \item Spectral triples and their connections \cite{connes:c_alg_dg,connes:ncg94,varilly:noncom,varilly_bondia}. \end{enumerate} The terms "set", "family" and "collection" are synonyms. The following table contains the notation used in this paper. \newline \begin{tabular}{|c|c|} \hline Symbol & Meaning\\ \hline \\ $A^G$ & Algebra of $G$-invariants, i.e. $A^G = \{a\in A \ | \ ga=a, \forall g\in G\}$\\ $\mathrm{Aut}(A)$ & Group of $*$-automorphisms of a $C^*$-algebra $A$\\ $B(H)$ & Algebra of bounded operators on a Hilbert space $H$\\ $\mathbb{C}$ (resp. $\mathbb{R}$) & Field of complex (resp.
real) numbers \\ $C(\mathcal{X})$ & $C^*$-algebra of continuous complex-valued \\ & functions on a topological space $\mathcal{X}$\\ $C_0(\mathcal{X})$ & $C^*$-algebra of continuous complex-valued functions \\ & on a topological space $\mathcal{X}$ vanishing at infinity\\ $G(\widetilde{\mathcal{X}} | \mathcal{X})$ & Group of covering transformations of the covering projection \\ & $\widetilde{\mathcal{X}} \to \mathcal{X}$ \cite{spanier:at}\\ $H$ & Hilbert space \\ $M(A)$ & The multiplier algebra of a $C^*$-algebra $A$\\ $\mathscr{P}(\mathcal{X})$ & Fundamental groupoid of a topological space $\mathcal{X}$\\ $U(H) \subset B(H) $ & Group of unitary operators on a Hilbert space $H$\\ $U(A) \subset A $ & Group of unitary elements of a $C^*$-algebra $A$\\ $U(n) \subset GL(n, \mathbb{C}) $ & Unitary subgroup of the general linear group\\ $\mathbb{Z}$ & Ring of integers \\ $\mathbb{Z}_m$ & Ring of integers modulo $m$ \\ $\Omega$ & Natural contravariant functor from the category of commutative \\ & $C^*$-algebras to the category of Hausdorff spaces\\ \hline \end{tabular} \section{Noncommutative covering projections} \paragraph{} In this section we recall the construction of a noncommutative covering projection described in \cite{ivankov:infinite_cov_pr}. Instead of the obsolete "rigged space" notion we use the "Hilbert module" one. \subsection{Hermitian modules and functors} \begin{defn} \cite{rieffel_morita} Let $B$ be a $C^*$-algebra. By a (left) {\it Hermitian $B$-module} we will mean the Hilbert space $H$ of a non-degenerate *-representation $B \rightarrow B(H)$. Denote by $\mathbf{Herm}(B)$ the category of Hermitian $B$-modules. \end{defn} \paragraph{} Let $A$, $B$ be $C^*$-algebras. In this section we will study some general methods for constructing functors from $\mathbf{Herm}(B)$ to $\mathbf{Herm}(A)$. \begin{defn} \cite{rieffel_morita} Let $B$ be a $C^*$-algebra.
By a (right) {\it pre-$B$-Hilbert module} we mean a vector space, $X$, over the complex numbers on which $B$ acts by means of linear transformations in such a way that $X$ is a right $B$-module (in the algebraic sense), and on which there is defined a $B$-valued sesquilinear form $\langle \cdot,\cdot\rangle_X$, conjugate linear in the first variable, such that \begin{enumerate} \item $\langle x, x \rangle_X \ge 0$ \item $\left(\langle x, y \rangle_X\right)^* = \langle y, x \rangle_X$ \item $\langle x, yb \rangle_X = \langle x, y \rangle_X b$. \end{enumerate} \end{defn} \begin{empt} It is easily seen that if we factor a pre-$B$-Hilbert module by the subspace of elements $x$ for which $\langle x, x \rangle_X = 0$, the quotient becomes in a natural way a pre-$B$-Hilbert module with the additional property that the inner product is definite, i.e. $\langle x, x \rangle_X > 0$ for any non-zero $x\in X$. On a pre-$B$-Hilbert module with definite inner product we can define a norm $\|\cdot\|$ by setting \begin{equation}\label{rigged_norm_eqn} \|x\|=\|\langle x, x \rangle_X\|^{1/2}. \end{equation} From now on we will always view a pre-$B$-Hilbert module with definite inner product as being equipped with this norm. The completion of $X$ with respect to this norm is easily seen to become again a pre-$B$-Hilbert module. \end{empt} \begin{defn} \cite{rieffel_morita} Let $B$ be a $C^*$-algebra. By a {\it Hilbert $B$-module} we mean a pre-$B$-Hilbert module, $X$, satisfying the following conditions: \begin{enumerate} \item If $\langle x, x \rangle_X = 0$ then $x = 0$, for all $x \in X$ \item $X$ is complete in the norm defined in (\ref{rigged_norm_eqn}). \end{enumerate} \end{defn} \begin{exm}\label{fin_rigged_exm} Let $A$ be a $C^*$-algebra, let a finite group $G$ act on $A$, and let $A^G$ be the algebra of $G$-invariants. Then $A$ is a Hilbert $A^G$-module with the following $A^G$-valued inner product \begin{equation}\label{inv_scalar_product} \langle x, y \rangle_A = \frac{1}{|G|} \sum_{g \in G} g(x^*y).
\end{equation} Since the sum in \eqref{inv_scalar_product} is $G$-invariant, we have $ \langle x, y \rangle_A \in A^G$. \end{exm} \paragraph{} Viewing a Hilbert $B$-module as a generalization of an ordinary Hilbert space, we can define what we mean by bounded operators on a Hilbert $B$-module. \begin{defn}\cite{rieffel_morita} Let $X$ be a Hilbert $B$-module. By a {\it bounded operator} on $X$ we mean a linear operator, $T$, from $X$ to itself which satisfies the following conditions: \begin{enumerate} \item for some constant $k_T$ we have \begin{equation}\nonumber \langle Tx, Tx \rangle_X \le k_T \langle x, x \rangle_X, \ \forall x\in X, \end{equation} or, equivalently, $T$ is continuous with respect to the norm of $X$. \item there is a continuous linear operator, $T^*$, on $X$ such that \begin{equation}\nonumber \langle Tx, y \rangle_X = \langle x, T^*y \rangle_X, \ \forall x, y\in X. \end{equation} \end{enumerate} It is easily seen that any bounded operator on a Hilbert $B$-module automatically commutes with the action of $B$ on $X$ (because it has an adjoint). We denote by $\mathcal{L}(X)$ (or $\mathcal{L}_B(X)$ if there is a chance of confusion) the set of all bounded operators on $X$. It is easily verified that with the operator norm $\mathcal{L}(X)$ is a $C^*$-algebra. \end{defn} \begin{defn}\cite{pedersen:ca_aut} If $X$ is a Hilbert $B$-module then denote by $\theta_{\xi, \zeta} \in \mathcal{L}_B(X)$ the operator such that \begin{equation}\nonumber \theta_{\xi, \zeta} (\eta) = \zeta \langle\xi, \eta \rangle_X , \ (\xi, \eta, \zeta \in X). \end{equation} The norm closure of the ideal generated by such endomorphisms is said to be the {\it algebra of compact operators}, which we denote by $\mathcal{K}(X)$. The algebra $\mathcal{K}(X)$ is an ideal of $\mathcal{L}_B(X)$. We shall also use the notation $\xi\rangle \langle \zeta \stackrel{\text{def}}{=} \theta_{\xi, \zeta}$. \end{defn} \begin{defn}\cite{rieffel_morita}\label{corr_defn} Let $A$ and $B$ be $C^*$-algebras.
By a {\it Hilbert $B$-$A$-correspondence} we mean a Hilbert $B$-module which is a left $A$-module by means of a *-homomorphism of $A$ into $\mathcal{L}_B(X)$. \end{defn} \begin{empt}\label{herm_functor_defn} Let $X$ be a Hilbert $B$-$A$-correspondence. If $V\in \mathbf{Herm}(B)$ then we can form the algebraic tensor product $X \otimes_{B_{\mathrm{alg}}} V$ and equip it with an ordinary pre-inner-product which is defined on elementary tensors by \begin{equation}\nonumber \langle x \otimes v, x' \otimes v' \rangle = \langle \langle x',x \rangle_B v, v' \rangle_V. \end{equation} Completing the quotient of $X \otimes_{B_{\mathrm{alg}}} V$ by the subspace of vectors of length zero, we obtain an ordinary Hilbert space, on which $A$ acts (by $a(x \otimes v)=ax\otimes v$) to give a *-representation of $A$. We denote the corresponding Hermitian module by $X \otimes_{B} V$. The above construction defines a functor $X \otimes_{B} -: \mathbf{Herm}(B)\to \mathbf{Herm}(A)$ if for $V,W \in \mathbf{Herm}(B)$ and $f\in \mathrm{Hom}_B(V,W)$ we define $X \otimes f \in \mathrm{Hom}_A(X\otimes_B V, X\otimes_B W)$ on elementary tensors by $(X \otimes f)(x \otimes v)=x \otimes f(v)$. We can also define an action of $B$ on $X\otimes_B V$, given on elementary tensors by \begin{equation}\nonumber b(x \otimes v)= (x \otimes bv) = x b \otimes v. \end{equation} \end{empt} \subsection{Galois correspondences} \begin{defn}\label{herm_a_g_defn} Let $A$ be a $C^*$-algebra and let $G$ be a finite or countable group which acts on $A$. We say that $H \in \mathbf{Herm}(A)$ is an {\it $A$-$G$ Hermitian module} if \begin{enumerate} \item The group $G$ acts on $H$ by unitary $A$-linear isomorphisms, \item There is a subspace $H^G \subset H$ such that \begin{equation}\label{g_act} H = \bigoplus_{g\in G}gH^G. \end{equation} \end{enumerate} Let $H$, $K$ be $A$-$G$ Hermitian modules; a morphism $\phi: H\to K$ is said to be an $A$-$G$-morphism if $\phi(gx)=g\phi(x)$ for any $g \in G$.
Denote by $\mathbf{Herm}(A)^G$ the category of $A$-$G$ Hermitian modules and $A$-$G$-morphisms. \end{defn} \begin{rem} Condition 2 in the above definition is introduced because for any topological covering projection $\widetilde{\mathcal{X}} \to \mathcal{X}$ the pair of commutative $C^*$-algebras $C_0\left(\widetilde{\mathcal{X}}\right)$, $C_0\left(\mathcal{X}\right)$ satisfies it with respect to the group of covering transformations $G(\widetilde{\mathcal{X}}| \mathcal{X})$. \end{rem} \begin{defn} Let $H$ be an $A$-$G$ Hermitian module and let $B\subset M(A)$ be a sub-$C^*$-algebra such that $(ga)b = g(ab)$, $b(ga) = g(ba)$, for any $a\in A$, $b \in B$, $g \in G$. There is a functor $(-)^G: \mathbf{Herm}(A)^G \to\mathbf{Herm}(B)$ defined in the following way \begin{equation} H \mapsto H^G. \end{equation} This functor is said to be the {\it invariant functor}. \end{defn} \begin{defn} Let $_AX_B$ be a Hilbert $B$-$A$-correspondence and let $G$ be a finite or countable group such that \begin{itemize} \item $G$ acts on $A$ and $X$, \item The action of $G$ is equivariant, i.e. $g (a\xi) = (ga) (g\xi)$, and $B$-invariant, i.e. $g(\xi b)=(g\xi)b$ for any $\xi \in X$, $b \in B$, $a\in A$, $g \in G$, \item The inner product is $G$-equivariant, i.e. $\langle g\xi, g \zeta\rangle_X = \langle\xi, \zeta\rangle_X$ for any $\xi, \zeta \in X$, $g \in G$. \end{itemize} Then we say that $_AX_B$ is a {\it $G$-equivariant Hilbert $B$-$A$-correspondence}. \end{defn} \paragraph{} Let $_AX_B$ be a $G$-equivariant Hilbert $B$-$A$-correspondence. Then for any $H\in \mathbf{Herm}(B)$ there is an action of $G$ on $X\otimes_B H$ such that \begin{equation*} g \left(x \otimes \xi\right) = \left(gx \otimes \xi\right). \end{equation*} \begin{defn}\label{inf_galois_defn} Let $_AX_B$ be a $G$-equivariant Hilbert $B$-$A$-correspondence.
We say that $_AX_B$ is a {\it $G$-Galois Hilbert $B$-$A$-correspondence} if it satisfies the following conditions: \begin{enumerate} \item $X \otimes_B H$ is an $A$-$G$ Hermitian module, for any $H \in \mathbf{Herm}(B)$, \item The pair $\left(X \otimes_B -, \left(-\right)^G\right)$ such that \begin{equation}\nonumber X \otimes_B -: \mathbf{Herm}(B) \to \mathbf{Herm}(A)^G, \end{equation} \begin{equation}\nonumber (-)^G: \mathbf{Herm}(A)^G \to \mathbf{Herm}(B) \end{equation} is a pair of inverse equivalences. \end{enumerate} \end{defn} The following theorem is an analogue of the theorems described in \cite{miyashita_infin_outer_gal}, \cite{takeuchi:inf_out_cov}. \begin{thm}\cite{ivankov:infinite_cov_pr}\label{main_lem} Let $A$ and $\widetilde{A}$ be $C^*$-algebras and let $_{\widetilde{A}}X_A$ be a $G$-equivariant Hilbert $A$-$\widetilde{A}$-correspondence. Let $I$ be a finite or countable set of indices, and let $\{e_i\}_{i\in I} \subset M(A)$, $\{\xi_i\}_{i\in I} \subset \ _{\widetilde{A}}X_A$ be such that \begin{enumerate} \item \begin{equation}\label{1_mb} 1_{M(A)} = \sum_{i\in I}^{}e^*_ie_i, \end{equation} \item \begin{equation}\label{1_mkx} 1_{M(\mathcal{K}(X))} = \sum_{g\in G}^{} \sum_{i \in I}^{}g\xi_i\rangle \langle g\xi_i , \end{equation} \item \begin{equation}\label{ee_xx} \langle \xi_i, \xi_i \rangle_X = e_i^*e_i, \end{equation} \item \begin{equation}\label{g_ort} \langle g\xi_i, \xi_i\rangle_X=0, \ \text{for any nontrivial} \ g \in G. \end{equation} \end{enumerate} Then $_{\widetilde{A}}X_A$ is a $G$-Galois Hilbert $A$-$\widetilde{A}$-correspondence. \end{thm} \begin{defn} Consider the situation of Theorem \ref{main_lem} and consider two specific cases: \begin{enumerate} \item $e_i \in A$ for any $i \in I$, \item $\exists i \in I \ e_i \notin A$. \end{enumerate} The norm completion of the algebra generated by the operators \begin{equation*} g\xi_i \rangle \langle g \xi_i \ a; \ g \in G, \ i \in I, \ \begin{cases} a \in M(A), & \text{in case 1},\\ a \in A, & \text{in case 2} \end{cases} \end{equation*} is said to be the {\it algebra subordinated to $\{\xi_i\}_{i \in I}$}. If $\widetilde{A}$ is the algebra subordinated to $\{\xi_i\}_{i \in I}$ then \begin{enumerate} \item $G$ acts on $\widetilde{A}$ in the following way \begin{equation*} g \left( \ g'\xi_i \rangle \langle g' \xi_i \ a \right) = gg'\xi_i \rangle \langle gg' \xi_i \ a; \ a \in M(A). \end{equation*} \item $X$ is a left $A$-module; moreover, $_{\widetilde{A}}X_A$ is a $G$-Galois Hilbert $A$-$\widetilde{A}$-correspondence. \item There is a natural *-homomorphism $\varphi: A \to M\left(\widetilde{A}\right)$ which is $G$-equivariant, i.e. \begin{equation} \varphi(a)(g\widetilde{a})= g \varphi(a)(\widetilde{a}); \ a \in A, \ \widetilde{a}\in \widetilde{A}. \end{equation} \end{enumerate} The quadruple $\left(A, \widetilde{A}, _{\widetilde{A}}X_A, G\right)$ is said to be a {\it Galois quadruple}. The group $G$ is said to be the {\it group of Galois transformations} and shall be denoted by $G\left(\widetilde{A}\ | \ A\right)=G$. \end{defn} \begin{rem} Henceforth only subordinated algebras are regarded as noncommutative generalizations of covering projections. \end{rem} \begin{defn} If $G$ is finite then the bimodule $_{\widetilde{A}}X_A$ can be replaced with $_{\widetilde{A}}\widetilde{A}_A$, where the product $\langle \ , \ \rangle_{\widetilde{A}}$ is given by \eqref{inv_scalar_product}. In this case the Galois quadruple $\left(A, \widetilde{A}, _{\widetilde{A}}X_A, G\right)=\left(A, \widetilde{A}, _{\widetilde{A}}A_A, G\right)$ can be replaced with a {\it Galois triple} $\left(A, \widetilde{A}, G\right)$. \end{defn} \subsection{Infinite noncommutative covering projections} \paragraph{} In the case of commutative $C^*$-algebras definition \ref{inf_galois_defn} supplies an algebraic formulation of infinite covering projections of topological spaces. However I think that the above definition is not quite a good analogue of a noncommutative covering projection.
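In the commutative situation just mentioned the basic averaging mechanism can be checked in coordinates. The following minimal numerical sketch (the finite model $\widetilde{A} = C(\mathbb{Z}_4)$ with $G = \mathbb{Z}_2$ acting by translation, and all variable names, are our own illustrative choices) verifies that the averaged inner product \eqref{inv_scalar_product} of Example \ref{fin_rigged_exm} takes values in the invariant subalgebra $\widetilde{A}^G$:

```python
import numpy as np

# Toy model (our choice): Atilde = C(Z_4) ~ C^4 with pointwise operations,
# G = Z_2 acting by the translation k -> k + 2, so Atilde^G ~ C(Z_2).
def g(x):
    """The nontrivial covering transformation on C(Z_4)."""
    return np.roll(x, 2)

def inner(x, y):
    """Averaged inner product <x, y> = (1/|G|) sum_g g(x^* y)."""
    return (np.conj(x) * y + np.conj(g(x)) * g(y)) / 2.0

x = np.array([1.0 + 1.0j, 2.0, 3.0, 4.0 - 1.0j])
y = np.array([0.5, 1.0 + 2.0j, -1.0, 2.0])

p = inner(x, y)
assert np.allclose(p, g(p))                  # <x, y> is G-invariant
assert np.allclose(inner(x, x).imag, 0.0)    # <x, x> is real ...
assert np.all(inner(x, x).real >= 0.0)       # ... and non-negative
```

Here the invariance $p = g(p)$ is exactly the statement $\langle x, y \rangle \in \widetilde{A}^G$ from Example \ref{fin_rigged_exm}.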
Noncommutative algebras contain inner automorphisms. Inner automorphisms are gauge transformations \cite{gross_gauge} rather than geometrical ones, so I think that inner automorphisms should be excluded. The importance of outer automorphisms was noted by Miyashita \cite{miyashita_fin_outer_gal,miyashita_infin_outer_gal}. It is reasonable to take into account outer automorphisms only; I have set an even stronger condition. \begin{defn}\label{gen_in_def}\cite{rieffel_finite_g} Let $A$ be a $C^*$-algebra. A *-automorphism $\alpha$ is said to be {\it generalized inner} if it is given by conjugation with unitaries from the multiplier algebra $M(A)$. \end{defn} \begin{defn}\label{part_in_def}\cite{rieffel_finite_g} Let $A$ be a $C^*$-algebra. A *-automorphism $\alpha$ is said to be {\it partly inner} if its restriction to some non-zero $\alpha$-invariant two-sided ideal is generalized inner. We call an automorphism {\it purely outer} if it is not partly inner. \end{defn} Instead of definitions \ref{gen_in_def}, \ref{part_in_def} the following definitions are used. \begin{defn} Let $\alpha \in \mathrm{Aut}(A)$ be an automorphism. A representation $\rho : A\rightarrow B(H)$ is said to be {\it $\alpha$-invariant} if the representation $\rho_{\alpha}$ given by \begin{equation*} \rho_{\alpha}(a)= \rho(\alpha(a)) \end{equation*} is unitarily equivalent to $\rho$. \end{defn} \begin{defn} An automorphism $\alpha \in \mathrm{Aut}(A)$ is said to be {\it strictly outer} if for any $\alpha$-invariant representation $\rho: A \rightarrow B(H)$ the automorphism $\rho_{\alpha}$ is not a generalized inner automorphism. \end{defn} \begin{defn}\label{nc_fin_cov_pr_defn} A Galois quadruple $\left(A, \widetilde{A}, _{\widetilde{A}}X_A, G\right)$ (resp. a triple $\left(A, \widetilde{A}, G\right)$) with countable (resp. finite) $G$ is said to be a {\it noncommutative infinite (resp. finite) covering projection} if the action of $G$ on $\widetilde{A}$ is strictly outer.
\end{defn} \section{Noncommutative generalization of local systems} \begin{defn}\label{loc_sys_defn} Let $A$ be a $C^*$-algebra, and let $\mathscr{C}$ be a category. A {\it noncommutative local system} contains the following ingredients: \begin{enumerate} \item A noncommutative covering projection $\left(A, \widetilde{A}, _{\widetilde{A}}X_A, G\right)$ (or $\left(A, \widetilde{A}, G\right)$), \item A covariant functor $F: G \to \mathscr{C}$, \end{enumerate} where $G$ is regarded as a category with a single object $e$ whose morphisms are the elements of $G$. Indeed a local system is just a group homomorphism $G \to \mathrm{Aut}(F(e))$. \end{defn} \begin{exm} If $\mathcal{X}$ is a path-connected space then there is an equivalence of categories $\mathscr{P}(\mathcal{X}) \approx \pi_1(\mathcal{X})$. If $F: \mathscr{P}(\mathcal{X})\to \mathscr{C}$ is a local system then there is an object $A$ in $\mathscr{C}$ such that $F$ is uniquely defined by a group homomorphism $f: \pi_1(\mathcal{X}) \to \mathrm{Aut}(A)$. Let $G=\pi_1(\mathcal{X})/\mathrm{ker}f$ be the quotient group and let $\widetilde{\mathcal{X}} \to \mathcal{X}$ be a covering projection such that $G(\widetilde{\mathcal{X}} | \mathcal{X})\approx G$. Then there is a natural group homomorphism $G \to \mathrm{Aut}(A)$ which can be regarded as a covariant functor $G \to \mathscr{C}$. If $\mathcal{X}$ is locally compact and Hausdorff then from \cite{ivankov:infinite_cov_pr} it follows that there is a noncommutative covering projection $\left(C_0(\mathcal{X}), C_0(\mathcal{\widetilde{X}}), \ _{C_0(\mathcal{X})}X_{C_0(\mathcal{\widetilde{X}})} \ , G\right)$. So a noncommutative local system is a generalization of a commutative one. \end{exm} \section{Noncommutative bundles with flat connections} \subsection{Cotensor products} \begin{empt} {\it Cotensor products associated with Hopf algebras}. Let $H$ be a Hopf algebra over a commutative ring $k$, with bijective antipode $S$.
We use the Sweedler notation \cite{karaali:ha} for the comultiplication on $H$: $\Delta(h)= h_{(1)}\otimes h_{(2)}$. $\mathcal{M}^H$ (respectively ${}^H\mathcal{M}$) is the category of right (respectively left) $H$-comodules. For a right $H$-coaction $\rho$ (respectively a left $H$-coaction $\lambda$) on a $k$-module $M$, we denote $$\rho(m)=m_{[0]}\otimes m_{[1]}\quad \ \mathrm{and} \ \quad\lambda(m)=m_{[-1]}\otimes m_{[0]}.$$ Let $M$ be a right $H$-comodule, and $N$ a left $H$-comodule. The cotensor product $M\square_H N$ is the $k$-module \begin{equation}\label{cotensor_hopf} M\square_H N= \left\{\sum_i m_i\otimes n_i\in M\otimes N~|~\sum_i \rho(m_i)\otimes n_i= \sum_i m_i\otimes \lambda(n_i)\right\}. \end{equation} If $H$ is cocommutative, then $M\square_H N$ is also a right (or left) $H$-comodule. \end{empt} \begin{empt} {\it Cotensor products associated with groups}. Let $G$ be a finite group. The set $H = \mathrm{Map}(G, \mathbb{C})$ has a natural structure of a commutative Hopf algebra (see \cite{hajac:toknotes}). Addition (resp. multiplication) on $H$ is pointwise addition (resp. pointwise multiplication). Let $\delta_g\in H$ $(g \in G)$ be such that \begin{equation}\label{group_hopf_action_rel} \delta_g(g')=\left\{ \begin{array}{c l} 1 & g'=g\\ 0 & g' \ne g \end{array}\right. \end{equation} The comultiplication $\Delta: H \rightarrow H \otimes H$ is induced by the group multiplication: under the natural identification $H \otimes H \cong \mathrm{Map}(G\times G, \mathbb{C})$ \begin{equation}\nonumber \left(\Delta f\right)(g_1, g_2) = f(g_1 g_2); \ \forall f \in \mathrm{Map}(G, \mathbb{C}), \ \forall g_1, g_2\in G, \end{equation} i.e. \begin{equation}\nonumber \Delta \delta_g = \sum_{g_1 g_2 = g} \delta_{g_1} \otimes \delta_{g_2}; \ \forall g\in G. \end{equation} Let $M$ (resp. $N$) be a linear space with a right (resp. left) action of $G$; then \begin{equation}\label{cotensor_g} M\square_{\mathrm{Map}(G,\mathbb{C})}N = \left\{\sum_i m_i\otimes n_i\in M\otimes N~|~\sum_i m_i g\otimes n_i= \sum_i m_i\otimes gn_i;~\forall g\in G\right\}.
\end{equation} Henceforth we denote the cotensor product $M\square_{\mathrm{Map}(G,\mathbb{C})}N$ by $M\square_GN$. \end{empt} \subsection{Bundles with flat connections in differential geometry}\label{fvb_dg} \paragraph{} I follow \cite{koba_nomi:fgd} in the explanation of differential geometry and flat bundles. \begin{prop}\label{comm_cov_mani}(Proposition 5.9 \cite{koba_nomi:fgd}) \begin{enumerate} \item Given a connected manifold $M$ there is a unique (up to isomorphism) universal covering manifold, which will be denoted by $\widetilde{M}$. \item The universal covering manifold $\widetilde{M}$ is a principal fibre bundle over $M$ with group $\pi_1(M)$ and projection $p: \widetilde{M} \to M$, where $\pi_1(M)$ is the first homotopy group of $M$. \item The isomorphism classes of covering spaces over $M$ are in 1:1 correspondence with the conjugacy classes of subgroups of $\pi_1(M)$. The correspondence is given as follows. To each subgroup $H$ of $\pi_1(M)$, we associate $E=\widetilde{M}/H$. Then the covering manifold $E$ corresponding to $H$ is a fibre bundle over $M$ with fibre $\pi_1(M)/H$ associated with the principal bundle $\widetilde{M}(M, \pi_1(M))$. If $H$ is a normal subgroup of $\pi_1(M)$, $E=\widetilde{M}/H$ is a principal fibre bundle with group $\pi_1(M)/H$ and is called a regular covering manifold of $M$. \end{enumerate} \end{prop} \paragraph{}Let $\Gamma$ be a flat connection in $P(M, G)$, where $M$ is connected and paracompact. Let $u_0\in P$ and let $M^*=P(u_0)$ be the holonomy bundle through $u_0$; $M^*$ is a principal fibre bundle over $M$ whose structure group is the holonomy group $\Phi(u_0)$. In \cite{koba_nomi:fgd} it is explained that $\Phi(u_0)$ is discrete, and since $M^*$ is connected, $M^*$ is a covering space of $M$. Set $x_0=\pi(u_0)\in M$. Every closed curve of $M$ starting from $x_0$ defines, by means of the parallel displacement along it, an element of $\Phi(u_0)$.
In \cite{koba_nomi:fgd} it is explained that homotopic curves, i.e. curves defining the same element of the first homotopy group $\pi_1(M, x_0)$, give rise to the same element of $\Phi(u_0)$. Thus we obtain a homomorphism of $\pi_1(M, x_0)$ onto $\Phi(u_0)$. Let $N$ be a normal subgroup of $\Phi(u_0)$ and set $M'=M^*/N$. Then $M'$ is a principal fibre bundle over $M$ with structure group $\Phi(u_0)/N$. In particular $M'$ is a covering space of $M$. Let $P'(M',G)$ be the principal fibre bundle induced by the covering projection $M'\to M$. There is a natural homomorphism $f: P' \to P$ \cite{koba_nomi:fgd}. \begin{prop}\label{flat_dg_prop} (Proposition 9.3 \cite{koba_nomi:fgd}) There exists a unique connection $\Gamma'$ in $P'(M',G)$ which is mapped into $\Gamma$ by the homomorphism $f: P'\to P$. The connection $\Gamma'$ is flat. If $u'_0$ is a point of $P'$ such that $f(u'_0)=u_0$, then the holonomy group $\Phi(u'_0)$ of $\Gamma'$ with reference point $u'_0$ is isomorphically mapped onto $N$ by $f$. \end{prop} \begin{empt}\label{dg_fl_con_ingr}{\it Construction of flat connections}. Let $M$ be a manifold. Proposition \ref{flat_dg_prop} supplies a construction of a flat bundle $P(M,G)$ from the following ingredients: \begin{enumerate} \item A covering projection $M'\to M$. \item A principal bundle $P'(M', G)$ with a flat connection $\Gamma'$. \end{enumerate} \end{empt} \begin{empt}\label{can_fl_conn}{\it Associated vector bundle}. Suppose a principal bundle $P(M,G)$ and a flat connection $\Gamma$ are given by these ingredients. If $G$ acts on $\mathbb{C}^n$ then there is a vector fibre bundle $\mathcal{F}$ associated with $P(M,G)$, with standard fibre $\mathbb{C}^n$. The space $F$ of continuous sections of $\mathcal{F}$ is a finitely generated projective $C(M)$-module. See \cite{koba_nomi:fgd}. \end{empt} \begin{empt}\label{can_fl_bundle}{\it Canonical flat connection and flat bundles}. There is a specific case of a flat principal bundle such that $P'=M'\times G$ and $\Gamma'$ is the canonical flat connection \cite{koba_nomi:fgd}.
In this case the existence of $P(M,G)$ depends only on $\pi_1(M)$ and does not depend on the differential structure of $M$. \end{empt} \begin{empt}\label{comm_fund_k}{\it Local systems and $K$-theory}. If $R(G)$ is the representation ring of the group $G$ and $R_0(G)$ is the subgroup of elements of virtual dimension zero then there is a natural homomorphism $R_0(G) \to K^0(M)$ described in \cite{gilkey:odd_space,wolf:const_curv}. \end{empt} \subsection{Topological noncommutative bundles with flat connections} \paragraph{} There are noncommutative generalizations of the constructions described in \ref{fvb_dg}. According to the Serre--Swan theorem \cite{karoubi:k} any vector bundle over a space $\mathcal{X}$ corresponds to a projective $C_0(\mathcal{X})$-module. \begin{defn} Let $\left(A, \widetilde{A}, G\right)$ be a finite noncommutative covering projection. According to definition \ref{loc_sys_defn} any group homomorphism $G \to U(n)$ is a local system. There is a natural linear action of $G$ on $\mathbb{C}^n$, and $\widetilde{A}\square_G\mathbb{C}^n$ is a left $A$-module which is said to be a {\it topological noncommutative bundle with flat connection}. \end{defn} \begin{lem} Let $\left(A, \widetilde{A}, G\right)$ be a finite noncommutative covering projection, and let $P = \widetilde{A}\square_G\mathbb{C}^n$ be a topological noncommutative bundle with flat connection. Then $P$ is a finitely generated projective left and right $A$-module. \end{lem} \begin{proof} According to the definition $\widetilde{A}$ is a finitely generated projective left $A$-module. The left $A$-module $\widetilde{A}\otimes_{\mathbb{C}}\mathbb{C}^n$ is also finitely generated and projective because $\widetilde{A}\otimes_{\mathbb{C}}\mathbb{C}^n \approx \widetilde{A}^n$. There is a projection $p: \widetilde{A}\otimes_{\mathbb{C}}\mathbb{C}^n \to \widetilde{A}\otimes_{\mathbb{C}}\mathbb{C}^n$ given by \begin{equation*} p(a \otimes x) = \frac{1}{|G|}\sum_{g\in G} ag \otimes g^{-1}x.
\end{equation*} The image of $p$ is $P$; therefore $P$ is a projective left $A$-module. Similarly one can prove that $P$ is a finitely generated projective right $A$-module. \end{proof} \begin{exm} Let $M$ be a differentiable manifold, let $M'\to M$ be a covering projection, and let $P'=M' \times U(n)$ be a principal bundle with the canonical flat connection $\Gamma'$. So all ingredients of \ref{dg_fl_con_ingr} are present and we obtain a principal bundle $P(M, U(n))$ with a flat connection $\Gamma$. There is a noncommutative covering projection $\left(C(M), C(M'), _{C(M')}X_{C(M)}, G\right)$. Let $\mathcal{F}$ (resp. $\mathcal{F'}$) be the vector bundle associated with $P(M,U(n))$ (resp. $P(M',U(n))$), and let $F$ (resp. $F'$) be the projective finitely generated $C(M)$- (resp. $C(M')$-) module which corresponds to $\mathcal{F}$ (resp. $\mathcal{F'}$). Then we have $F = C(M')\square_GF'$, i.e. $F$ is a topological flat bundle. \end{exm} \begin{rem} Since the existence of $P(M, U(n))$ depends only on the topology of $M$, the notion ``topological noncommutative bundle with flat connection'' is used for its noncommutative generalization. \end{rem} \begin{exm}\label{nc_torus_fin_cov} Let $A_{\theta}$ be a noncommutative torus and let $\left(A_{\theta}, A_{\theta'}, \mathbb{Z}_m\times\mathbb{Z}_n\right)$ be a Galois triple described in \cite{ivankov:infinite_cov_pr}. Any group homomorphism $\mathbb{Z}_m\times\mathbb{Z}_n\to U(1)$ induces a topological noncommutative flat bundle. \end{exm} \subsection{General noncommutative bundles with flat connections} \begin{empt}\label{n_f_b_constr} A vector fibre bundle with a flat connection is not necessarily a topological bundle with flat connection, since the constructions of subsection \ref{fvb_dg} (in particular \ref{dg_fl_con_ingr}) do not require it. However the general case of \ref{dg_fl_con_ingr} has a noncommutative analogue. The analogue requires a noncommutative generalization of differentiable manifolds with flat connections.
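Before passing to spectral triples it is worth checking, in coordinates, the averaging projector $p(a \otimes x) = \frac{1}{|G|}\sum_{g\in G} ag \otimes g^{-1}x$ used in the proof of the lemma above. A minimal sketch (the model $\widetilde{A} \cong C(\mathbb{Z}_2)$ acted on by $G = \mathbb{Z}_2$ through the swap $S$, the representation $R$ on $\mathbb{C}^2$, and all names are our own illustrative choices):

```python
import numpy as np

# Toy model (our choice): Atilde = C(Z_2) ~ C^2, G = Z_2 acting by the
# swap S on Atilde and by the unitary R on the fibre C^2.
S = np.array([[0.0, 1.0], [1.0, 0.0]])   # right action of g on Atilde
R = np.array([[0.0, 1.0], [1.0, 0.0]])   # rho(g) = rho(g^{-1}) on C^2

def p(M):
    """Averaging projector; M[i, j] is the coefficient of
    e_i (x) f_j in Atilde (x) C^2, so ag (x) g^{-1}x = S M R^T."""
    return (M + S @ M @ R.T) / 2.0

M = np.array([[1.0, 2.0], [3.0, 5.0]])
assert np.allclose(p(p(M)), p(M))        # p is idempotent: a projection
assert np.allclose(S @ p(M) @ R.T, p(M)) # its image is G-invariant
```

Since $S^2 = R^2 = 1$, the two assertions are exactly the algebraic identities $p^2 = p$ and $g \cdot p(M) = p(M)$ behind the proof.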
The generalization of a spin manifold is a spectral triple \cite{connes:c_alg_dg,connes:ncg94,varilly:noncom,varilly_bondia}. First of all we generalize proposition \ref{comm_cov_mani}. Suppose that there is a spectral triple $(\mathcal{B}, H, D)$ such that \begin{itemize} \item $\mathcal{B} \subset B$ is a pre-$C^*$-algebra which is a dense subalgebra of $B$, \item there is a faithful representation $B \to B(H)$. \end{itemize} Let $\left(B, A, G\right)$ be a finite noncommutative covering projection. According to 8.2 of \cite{ivankov:infinite_cov_pr} there is a spectral triple $(\mathcal{A}, A \otimes_BH, \widetilde{D} )$ such that \begin{itemize} \item $\mathcal{A} \subset A$ is a pre-$C^*$-algebra which is a dense subalgebra of $A$, \item $\widetilde{D}gh = g\widetilde{D}h$, for any $g \in G$, $h \in \Dom \widetilde{D}$. \end{itemize} Let $\mathcal{F}$ be a finitely generated projective right $\mathcal{B}$-module with a flat connection $\nabla: \mathcal{F} \to \mathcal{F} \otimes_{\mathcal{B}} \Omega^1(\mathcal{B})$. Let $\mathcal{E} = \mathcal{F}\otimes_{\mathcal{B}} \mathcal{A}$ be a finitely generated projective $\mathcal{A}$-module, where the action of $G$ on $\mathcal{E}$ is induced by the action of $G$ on $\mathcal{A}$. According to \cite{ivankov:infinite_cov_pr} the connection $\nabla$ can be naturally lifted to $\widetilde{\nabla}: \mathcal{E}\to \mathcal{E} \otimes_{\mathcal{B}} \Omega^1(\mathcal{B})$. Let $\mathcal{E}'$ be a module isomorphic to $\mathcal{E}$ as an $\mathcal{A}$-module and equipped with an action of $G$ on $\mathcal{E}'$ such that \begin{equation}\label{twisted_act} g(xa)=(gx)(ga); \ \forall x \in \mathcal{E}', \ \forall a \in \mathcal{A}, \ \forall g \in G. \end{equation} Different actions of $G$ give different $\mathcal{B}$-modules $\mathcal{F} = \mathcal{E}\square_G \mathcal{A}$, $\mathcal{F}' = \mathcal{E}'\square_G \mathcal{A}$.
Both $\mathcal{F}$ and $\mathcal{F}'$ can be included into the following sequences \begin{equation}\label{seqf} \mathcal{F} \xrightarrow{i} \mathcal{E} \xrightarrow{p} \mathcal{F}, \end{equation} \begin{equation*} \mathcal{F}' \xrightarrow{i'} \mathcal{E}' \xrightarrow{p'} \mathcal{F}'. \end{equation*} These sequences induce the following ones \begin{equation}\label{seqfo} \mathcal{F} \otimes_{\mathcal{B}} \Omega^1(\mathcal{B}) \xrightarrow{i \otimes \mathrm{Id}_{\Omega^1(\mathcal{B})}} \mathcal{E}\otimes_{\mathcal{B}}\Omega^1(\mathcal{B}) \xrightarrow{p \otimes \mathrm{Id}_{\Omega^1(\mathcal{B})}} \mathcal{F}\otimes_{\mathcal{B}}\Omega^1(\mathcal{B}), \end{equation} \begin{equation*} \mathcal{F}' \otimes_{\mathcal{B}} \Omega^1(\mathcal{B}) \xrightarrow{i' \otimes \mathrm{Id}_{\Omega^1(\mathcal{B})}} \mathcal{E}'\otimes_{\mathcal{B}}\Omega^1(\mathcal{B}) \xrightarrow{p' \otimes \mathrm{Id}_{\Omega^1(\mathcal{B})}} \mathcal{F}'\otimes_{\mathcal{B}}\Omega^1(\mathcal{B}). \end{equation*} The connection $\nabla$ is given by \begin{equation*} \nabla p(x) = \left(p \otimes \mathrm{Id}_{\Omega^1(\mathcal{B})}\right)\left(\widetilde{\nabla}(x)\right); \ x \in \mathcal{E}. \end{equation*} From \eqref{seqf} and \eqref{seqfo} it follows that if $y\in \mathcal{F}$ then $\nabla y$ does not depend on the choice of $x \in \mathcal{E}$ such that $y=p(x)$. Similarly there is a flat connection $\nabla': \mathcal{F}' \to \mathcal{F}' \otimes_{\mathcal{B}} \Omega^1(\mathcal{B})$ given by \begin{equation*} \nabla' p'(x) = \left(p' \otimes \mathrm{Id}_{\Omega^1(\mathcal{B})}\right)\left(\widetilde{\nabla'}(x)\right); \ x \in \mathcal{E}'. \end{equation*} The following table explains the correspondence between proposition \ref{comm_cov_mani} and the above construction.
\newline \newline \break \begin{tabular}{|c|c|} \hline DIFFERENTIAL GEOMETRY & SPECTRAL TRIPLES\\ \hline Manifold $M$ & Spectral triple $(\mathcal{B}, H, D)$\\ The covering manifold $E$ & Spectral triple $(\mathcal{A}, \mathcal{A} \otimes_{\mathcal{B}}H, \widetilde{D} )$ \\ A regular covering projection $E\to M$ & A noncommutative covering projection $(B, A, G)$ \\ Group of covering transformations $\pi_1(M)/H$ & Group of noncommutative covering transformations $G$ \\ A connection on a vector fibre bundle $F\to M$ & An operator $\nabla: \mathcal{F} \to \mathcal{F} \otimes_{\mathcal{B}} \Omega^1(\mathcal{B})$ \\ \hline \end{tabular} \newline \newline \break \end{empt} \begin{exm} Let $(\mathcal{A}_{\theta}, H, D)$ be a spectral triple associated to a noncommutative torus $A_{\theta}$ generated by unitary elements $u,v\in A_{\theta}$. Let $\mathcal{F} = \mathcal{A}^4_{\theta}$ be a free module and let $e_1,\dots, e_4 \in \mathcal{F}$ be its generators. Let $\nabla: \mathcal{F} \to \mathcal{F} \otimes \Omega^1(\mathcal{A}_{\theta})$ be a connection given by \begin{equation*} \nabla e_1 = c_u e_2 \otimes du, \ \nabla e_2 = -c_u e_1 \otimes du, \ \nabla e_3 = c_v e_4 \otimes dv, \ \nabla e_4 = -c_v e_3 \otimes dv, \end{equation*} where $c_u, c_v \in \mathbb{R}$. According to \cite{ivankov:nc_wilson_lines} the connection $\nabla$ is flat. Let $\left(A_{\theta}, A_{\theta'}, \mathbb{Z}_m\times\mathbb{Z}_n\right)$ be the Galois triple from example \ref{nc_torus_fin_cov}. This data induces a spectral triple $\left(\mathcal{A}_{\theta'}, \mathcal{A}_{\theta'}\otimes_{\mathcal{A}_{\theta}}H, D\right)$.
If $\mathcal{E} = \mathcal{F} \otimes_{\mathcal{A}_{\theta}}\mathcal{A}_{\theta'}$ then \begin{equation} \mathcal{E} \approx \mathcal{A}_{\theta'}\otimes \mathbb{C}^{4} \approx \mathcal{A}_{\theta}\otimes \mathbb{C}^{4nm} \end{equation} and there is a natural connection $\widetilde{\nabla}: \mathcal{E} \to \mathcal{E}\otimes_{\mathcal{A}_{\theta}}\Omega^1\left(\mathcal{A}_{\theta}\right)$. Let $\rho : \mathbb{Z}_m\times\mathbb{Z}_n \to U(4)$ be a nontrivial representation. There is an action of $\mathbb{Z}_m\times\mathbb{Z}_n$ on $\mathcal{E}' = \mathcal{A}_{\theta'}\otimes \mathbb{C}^{4}$ given by \begin{equation*} g (a \otimes x) = ga \otimes \rho(g)x; \ a \in \mathcal{A}_{\theta'}, \ x \in \mathbb{C}^4, \end{equation*} which satisfies \eqref{twisted_act}. Then $\mathcal{F}' = \mathcal{A}_{\theta'}\square_{\mathbb{Z}_m\times\mathbb{Z}_n}\mathcal{E}'$ is a finitely generated $A_{\theta}$-module with a connection $\nabla': \mathcal{F}' \to \mathcal{F}' \otimes \Omega^1(\mathcal{A}_{\theta})$ given by the construction \ref{n_f_b_constr}. \end{exm} \subsection{Noncommutative bundles with flat connections and $K$-theory} \paragraph{}The homomorphism $R_0(G) \to K^0(M)$ from \ref{comm_fund_k} can be generalized. Let $\left(A, \widetilde{A}, G\right)$ be a finite noncommutative covering projection, let $\rho: G \to U(n)$ be a representation, and let $\mathrm{triv}_n: G \to U(n)$ be the trivial representation. Suppose that the action of $G$ on $\mathbb{C}^n$ is given by $\rho$. Then a homomorphism $R_0(G) \to K(A)$ is given by \begin{equation*} [\rho] - \left[\mathrm{triv}_n\right] \mapsto \left[\widetilde{A}\square_G\mathbb{C}^n\right] - \left[A^n\right]. \end{equation*} \section{Noncommutative generalization of the Borel construction} \begin{empt} There is a noncommutative generalization of the Borel construction \ref{borel_const_comm}. \end{empt} \begin{defn}\label{borel_const_ncomm} Let $A$, $B$ be $C^*$-algebras and let $G$ be a group which acts on both $A$ and $B$.
Let $A\otimes_{\mathbb{C}} B$ be any tensor product such that $A\otimes_{\mathbb{C}} B$ is a $C^*$-algebra. The norm closure of the subalgebra generated by \begin{equation*} C = \left\{\sum_i a_i\otimes b_i\in A\otimes_{\mathbb{C}} B~|~\sum_i a_i g\otimes b_i= \sum_i a_i\otimes gb_i;~\forall g\in G\right\} \end{equation*} is said to be the {\it cotensor product of $C^*$-algebras}. Denote by $A \square_G B$ the cotensor product. \end{defn} \begin{rem} We do not fix a type of a tensor product because different applications can use different tensor products (see \cite{bruckler:tensor}). \end{rem} \begin{exm} Let $\mathcal{X}$, $\mathcal{Y}$ be locally compact Hausdorff spaces and let $G$ be a finite or countable group which acts on both $\mathcal{X}$ and $\mathcal{Y}$. Suppose that the action on $\mathcal{X}$ (resp. $\mathcal{Y}$) is right (resp. left). Then there is a natural right (resp. left) action of $G$ on $C_0(\mathcal{X})$ (resp. $C_0(\mathcal{Y})$). From \cite{bruckler:tensor} it follows that the minimal and the maximal norm on $C_0(\mathcal{X}) \otimes_{\mathbb{C}} C_0(\mathcal{Y})$ coincide. It is well known that $C_0(\mathcal{X}\times\mathcal{Y}) \approx C_0(\mathcal{X}) \otimes_{\mathbb{C}} C_0(\mathcal{Y})$. Let $\mathcal{Z} = \mathcal{X}\times\mathcal{Y} / \approx$ where $\approx$ is given by \begin{equation*} (xg, y) \approx (x, g^{-1}y). \end{equation*} It is clear that $C_0(\mathcal{Z}) \approx C_0(\mathcal{X}) \square_G C_0(\mathcal{Y})$. \end{exm} \begin{defn} Let $\left(A, \widetilde{A}, _{\widetilde{A}}X_A, G\right)$ be a Galois quadruple such that there is a right action of $G$ on $\widetilde{A}$ and a left action of $G$ on a $C^*$-algebra $B$. A cotensor product $\widetilde{A}\square_G B$ is said to be a {\it noncommutative Borel construction}.
\end{defn} \begin{exm} Let $p: \widetilde{\mathcal{B}}\to \mathcal{B}$ be a topological normal covering projection of locally compact topological spaces, and let $G = G(\widetilde{\mathcal{B}}| \mathcal{B})$ be its group of covering transformations. Then $p$ is a principal $G(\widetilde{\mathcal{B}}| \mathcal{B})$-bundle. Let $\mathcal{F}$ be a locally compact topological space with an action of $G$ on it. Then there is a natural isomorphism with the $C^*$-algebra of the topological Borel construction \begin{equation*} C_0(\widetilde{\mathcal{B}}\times_G\mathcal{F})\approx C_0(\widetilde{\mathcal{B}})\square_GC_0(\mathcal{F}). \end{equation*} \end{exm} \end{document}
\begin{document} \title{Discrete dynamical systems in group theory} \abstract{ In this expository paper we describe the unifying approach for many known entropies in Mathematics developed in \cite{DGV1}. First we give the notion of semigroup entropy $h_{\mathfrak{S}}:{\mathfrak{S}}\to\mathbb R_+$ in the category ${\mathfrak{S}}$ of normed semigroups and contractive homomorphisms, recalling also its properties from \cite{DGV1}. For a specific category $\mathfrak X$ and a functor $F:\mathfrak X\to {\mathfrak{S}}$ we have the entropy $h_F$, defined by the composition $h_F=h_{\mathfrak{S}}\circ F$, which automatically satisfies the same properties proved for $h_{\mathfrak{S}}$. This general scheme makes it possible to obtain many of the known entropies as $h_F$, for appropriately chosen categories $\mathfrak X$ and functors $F:\mathfrak X\to {\mathfrak{S}}$. In the last part we recall the definition and the fundamental properties of the algebraic entropy for group endomorphisms, noting how its deeper properties depend on the specific setting. Finally, we discuss the notion of growth for flows of groups, comparing it with the classical notion of growth for finitely generated groups.} \section{Introduction} This paper covers the series of three talks given by the first named author at the conference ``Advances in Group Theory and Applications 2011'' held in June, 2011 in Porto Cesareo. It is a survey about entropy in Mathematics; the approach is the categorical one adopted in \cite{DGV1} (and announced in \cite{D}, see also \cite{LoBu}). We start by recalling that a \emph{flow} in a category $\mathfrak X$ is a pair $(X,\phi)$, where $X$ is an object of $\mathfrak X$ and $\phi: X\to X$ is a morphism in $\mathfrak X$. A morphism between two flows $\phi: X\to X$ and $\psi: Y\to Y$ is a morphism $\alpha: X \to Y$ in $\mathfrak X$ such that the diagram $$\xymatrix{X\ar[r]^{\alpha} \ar[d]_{\phi}&Y\ar[d]^{\psi}\\X\ar[r]^{\alpha}& Y.}$$ commutes.
This defines the category $\mathbf{Flow}_{\mathfrak X}$ of flows in $\mathfrak X$. To classify flows in $\mathfrak X$ up to isomorphisms one uses invariants, and entropy is roughly a numerical invariant associated to flows. Indeed, letting $\mathbb R_{\geq 0} = \{r\in \mathbb R: r\geq 0\}$ and $\mathbb R_+= \mathbb R_{\geq 0}\cup \{\infty\}$, by the term \emph{entropy} we mean a function \begin{equation}\label{dag} h: \mathbf{Flow}_{\mathfrak X}\to \mathbb R_+, \end{equation} obeying the invariance law $h(\phi) = h(\psi)$ whenever $(X,\phi)$ and $(Y,\psi)$ are isomorphic flows. The value $h(\phi)$ is supposed to measure the degree to which $X$ is ``scrambled'' by $\phi$, so for example an entropy should assign $0$ to all identity maps. For simplicity and with some abuse of notation, we adopt the following \noindent\textbf{Convention.} If $\mathfrak X$ is a category and $h$ an entropy of $\mathfrak X$, writing $h: {\mathfrak X}\to \mathbb R_+$ we always mean $h: \mathbf{Flow}_{\mathfrak X}\to \mathbb R_+$ as in \eqref{dag}. The first notion of entropy in Mathematics was the measure entropy $h_{mes}$ introduced by Kolmogorov \cite{K} and Sinai \cite{Sinai} in 1958 in Ergodic Theory. The topological entropy $h_{top}$ for continuous self-maps of compact spaces was defined by Adler, Konheim and McAndrew \cite{AKM} in 1965. Another notion of topological entropy $h_B$ for uniformly continuous self-maps of metric spaces was given later by Bowen \cite{B} (it coincides with $h_{top}$ on compact metric spaces). Finally, entropy was introduced also in Algebraic Dynamics by Adler, Konheim and McAndrew \cite{AKM} in 1965 and Weiss \cite{W} in 1974; they defined an entropy $\mathrm{ent}$ for endomorphisms of torsion abelian groups. Then Peters \cite{P} in 1979 introduced its extension $h_{alg}$ to automorphisms of abelian groups; finally $h_{alg}$ was defined in \cite{DG} and \cite{DG-islam} for any group endomorphism.
Recently also a notion of algebraic entropy for module endomorphisms was introduced in \cite{SZ}, namely the algebraic $i$-entropy $\mathrm{ent}_i$, where $i$ is an invariant of a module category. Moreover, the adjoint algebraic entropy $\mathrm{ent}^\star$ for group endomorphisms was investigated in \cite{DGV} (and its topological extension in \cite{G}). Finally, one can find in \cite{AZD} and \cite{DG-islam} two ``mutually dual'' notions of entropy for self-maps of sets, namely the covariant set-theoretic entropy $\mathfrak h$ and the contravariant set-theoretic entropy $\mathfrak h^*$. The above-mentioned specific entropies determined the choice of the main cases considered in this paper. Namely, $\mathfrak X$ will be one of the following categories (other examples can be found in \S\S \ref{NewSec1} and \ref{NewSec2}): \begin{itemize} \item[(a)] $\mathbf{Set}$ of sets and maps and its non-full subcategory $\mathbf{Set}_{\mathrm{fin}}$ of sets and finite-to-one maps (set-theoretic entropies $\mathfrak h$ and $\mathfrak h^*$ respectively); \item[(b)] $\mathbf{CTop}$ of compact topological spaces and continuous maps (topological entropy $h_{top}$); \item[(c)] $\mathbf{Mes}$ of probability measure spaces and measure preserving maps (measure entropy $h_{mes}$); \item[(d)] $\mathbf{Grp}$ of groups and group homomorphisms and its subcategory $\mathbf{AbGrp}$ of abelian groups (algebraic entropy $\mathrm{ent}$, algebraic entropy $h_{alg}$ and adjoint algebraic entropy $\mathrm{ent}^\star$); \item[(e)] $\mathbf{Mod}_R$ of right modules over a ring $R$ and $R$-module homomorphisms (algebraic $i$-entropy $\mathrm{ent}_i$). \end{itemize} Each of these entropies has its specific definition, usually given by limits computed on some ``trajectories'' and by taking the supremum of these quantities (we will see some of them explicitly). The proofs of the basic properties take into account the particular features of the specific categories in each case too.
It appears that all these definitions and basic properties share a lot of common features. The aim of our approach is to unify them in some way, starting from a general notion of entropy of an appropriate category. This will be the semigroup entropy $h_{\mathfrak{S}}$ defined on the category ${\mathfrak{S}}$ of normed semigroups. In Section \ref{sem-sec} we first introduce the category ${\mathfrak{S}}$ of normed semigroups and related basic notions and examples mostly coming from \cite{DGV1}. Moreover, in \S \ref{preorder-sec} (which can be avoided at a first reading) we add a preorder to the semigroup and discuss the possible behavior of a semigroup norm with respect to this preorder. Here we include also the subcategory $\mathfrak L$ of ${\mathfrak{S}}$ of normed semilattices, as the functors given in Section \ref{known-sec} often actually take values in normed semilattices. In \S \ref{hs-sec} we define explicitly the semigroup entropy $h_{\mathfrak{S}}: {\mathfrak{S}} \to \mathbb R_+$ on the category ${\mathfrak{S}}$ of normed semigroups. Moreover, we list all its basic properties, clearly inspired by those of the known entropies, such as Monotonicity for factors, Invariance under conjugation, Invariance under inversion, Logarithmic Law, Monotonicity for subsemigroups, Continuity for direct limits, Weak Addition Theorem and Bernoulli normalization. Once the semigroup entropy $h_{\mathfrak{S}}:{\mathfrak{S}}\to \mathbb R_+$ is defined, our aim is to obtain all known entropies $h:{\mathfrak X} \to \mathbb R_+$ as a composition $h_F:=h_{\mathfrak{S}} \circ F$ of a functor $F: \mathfrak X\to {\mathfrak{S}}$ and $h_{\mathfrak{S}}$: \begin{equation*} \xymatrix@R=6pt@C=37pt {\mathfrak{X}\ar[dd]_{F}\ar[rrd]^{h=h_F} & & \\ & & \mathbb R_+ \\ {\mathfrak{S}}\ar[rru]_{h_{\mathfrak{S}}} & & } \end{equation*} This is done explicitly in Section \ref{known-sec}, where all specific entropies listed above are obtained in this scheme.
We dedicate to each of them a subsection, each time giving explicitly the functor from the considered category to the category of normed semigroups. More details and complete proofs can be found in \cite{DGV1}. These functors and the entropies are summarized by the following diagram: \begin{equation*} \xymatrix@-1pc{ &&&\mathbf{Mes}\ar@{-->}[ddddr]|-{\mathfrak{mes}}\ar[ddddddr]|-{h_{mes}}& &\mathbf{AbGrp}\ar[ddddddl]|-{\mathrm{ent}}\ar@{-->}[ddddl]|-{\mathfrak{sub}}&&\\ & &\mathbf{CTop}\ar@{-->}[dddrr]|-{\mathfrak{cov}}\ar[dddddrr]|-{h_{top}}& & & & \mathbf{Grp}\ar@{-->}[dddll]|-{\mathfrak{pet}}\ar[dddddll]|-{h_{alg}} & &\\ & \mathbf{Set}\ar@{-->}[ddrrr]|-{\mathfrak{atr}}\ar[ddddrrr]|-{\mathfrak h} && & & & & \mathbf{Grp}\ar@{-->}[ddlll]|-{\mathfrak{sub}^\star}\ar[ddddlll]|-{\mathrm{ent}^\star} \\ \mathbf{Set}_\mathrm{fin}\ar@{-->}[drrrr]|-{\mathfrak{str}}\ar[dddrrrr]|-{\mathfrak h^*} && && & & & &\mathbf{Mod}_R\ar@{-->}[dllll]|-{\mathfrak{sub}_i}\ar[dddllll]|-{\mathrm{ent}_i} \\ & && & \mathfrak S \ar[dd]|-{h_\mathfrak S} && & \\ \\ & && &{\mathbb R_+} & & & } \end{equation*} In this way we obtain a simultaneous and uniform definition of all entropies and uniform proofs (as well as a better understanding) of their general properties, namely the basic properties of the specific entropies can be derived directly from those proved for the semigroup entropy. The last part of Section \ref{known-sec} is dedicated to what we call Bridge Theorem (a term coined by L. Salce), that is roughly speaking a connection between entropies $h_1:\mathfrak X_1 \to \mathbb R_+$ and $h_2:\mathfrak X_2 \to \mathbb R_+$ via functors $\varepsilon: \mathfrak X_1 \to \mathfrak X_2$. 
Here is a formal definition of this concept: \begin{Definition}\label{BTdef} Let $\varepsilon: \mathfrak X_1 \to \mathfrak X_2$ be a functor and let $h_1:\mathfrak X_1 \to \mathbb R_+$ and $h_2:\mathfrak X_2 \to \mathbb R_+$ be entropies of the categories $\mathfrak X_1 $ and $ \mathfrak X_2$, respectively (as in the diagram below). \begin{equation*}\label{Buz} \xymatrix@R=6pt@C=37pt {\mathfrak{X}_1\ar[dd]_{\varepsilon}\ar[rrd]^{h_{1}} & & \\ & & \mathbb R_+ \\ \mathfrak{X}_2\ar[rru]_{h_{2}} & & } \end{equation*}We say that the pair $(h_1, h_2)$ satisfies the \emph{weak Bridge Theorem} with respect to the functor $\varepsilon$ if there exists a positive constant $C_\varepsilon$ such that for every endomorphism $\phi$ in $\mathfrak X_1$ \begin{equation}\label{sBT} h_2(\varepsilon(\phi)) \leq C_\varepsilon h_1(\phi). \end{equation} If equality holds in \eqref{sBT} we say that $(h_1,h_2)$ satisfies the \emph{Bridge Theorem} with respect to $\varepsilon$, and we shortly denote this by $(BT_\varepsilon)$. \end{Definition} In \S \ref{BTsec} we discuss the Bridge Theorem passing through the category ${\mathfrak{S}}$ of normed semigroups and so using the new semigroup entropy. This approach makes it possible, for example, to find a new and transparent proof of the Weiss Bridge Theorem (see Theorem \ref{WBT}), as well as of other Bridge Theorems. A first limitation of this very general setting is the loss of some of the deeper properties that a specific entropy may have. So in the last Section \ref{alg-sec} we recall the definition and the fundamental properties of the algebraic entropy, which cannot be deduced from the general scheme. We start Section 4 by recalling the Algebraic Yuzvinski Formula (see Theorem \ref{AYF}) recently proved in \cite{GV}, giving the values of the algebraic entropy of linear transformations of finite-dimensional rational vector spaces in terms of the Mahler measure.
In particular, this theorem provides a connection of the algebraic entropy with the famous Lehmer Problem. Two important applications of the Algebraic Yuzvinski Formula are the Addition Theorem and the Uniqueness Theorem for the algebraic entropy in the context of abelian groups. In \S \ref{Growth-sec} we describe the connection of the algebraic entropy with the classical topic of growth of finitely generated groups in Geometric Group Theory. Its definition was given independently by Schwarzc \cite{Sch} and Milnor \cite{M1}, and after the publication of \cite{M1} it was intensively investigated; several fundamental results were obtained by Wolf \cite{Wolf}, Milnor \cite{M2}, Bass \cite{Bass}, Tits \cite{Tits} and Adyan \cite{Ad}. In \cite{M3} Milnor proposed his famous problem (see Problem \ref{Milnor-pb} below); the question about the existence of finitely generated groups with intermediate growth was answered positively by Grigorchuk in \cite{Gri1,Gri2,Gri3,Gri4}, while the characterization of finitely generated groups with polynomial growth was given by Gromov in \cite{Gro} (see Theorem \ref{GT}). Here we introduce the notion of finitely generated flows $(G,\phi)$ in the category of groups and define the growth of $(G,\phi)$. When $\phi=\mathrm{id}_G$ is the identity endomorphism, then $G$ is a finitely generated group and we find exactly the classical notion of growth. In particular, we recall a recent significant result from \cite{DG0} extending Milnor's dichotomy (between polynomial and exponential growth) to finitely generated flows in the abelian case (see Theorem \ref{DT}). We also pose several open problems and questions about the growth of finitely generated flows of groups. The last part of the section, namely \S \ref{aent-sec}, is dedicated to the adjoint algebraic entropy. As for the algebraic entropy, we recall its original definition and its main properties, which cannot be derived from the general scheme.
In particular, the adjoint algebraic entropy can take only the values $0$ and $\infty$ (no finite positive value is attained) and we see that the Addition Theorem holds only when restricting to bounded abelian groups. A natural side-effect of the wealth of nice properties of the entropy $h_F=h_{\mathfrak{S}}\circ F$, obtained from the semigroup entropy $h_{\mathfrak{S}}$ through functors $F:\mathfrak X\to {\mathfrak{S}}$, is the loss of some entropies that do not have all these properties. For example, Bowen's entropy $h_B$ cannot be obtained as $h_F$ since $h_B(\phi^{-1})= h_B(\phi)$ fails even for the automorphism $\phi: \mathbb R \to \mathbb R$ defined by $\phi(x)= 2x$, see \S \ref{NewSec2} for an extended comment on this issue; there we also discuss the possibility of obtaining Bowen's topological entropy of measure preserving topological automorphisms of locally compact groups in the framework of our approach. For the same reason other entropies that cannot be covered by this approach are the intrinsic entropy for endomorphisms of abelian groups \cite{DGSV} and the topological entropy for automorphisms of locally compact totally disconnected groups \cite{DG-tdlc}. This occurs also for the function $\phi \mapsto \log s(\phi)$, where $s(\phi)$ is the scale function defined by Willis \cite{Willis,Willis2}. The question about the relation of the scale function to the algebraic or topological entropy was posed by T. Weigel at the conference; these non-trivial relations are discussed for the topological entropy in \cite{BDG}. \section{The semigroup entropy}\label{sem-sec} \subsection{The category ${\mathfrak{S}}$ of normed semigroups} We start this section by introducing the category ${\mathfrak{S}}$ of normed semigroups, and other notions that are fundamental in this paper. \begin{Definition}\label{Def1} Let $(S,\cdot)$ be a semigroup.
\begin{itemize} \item[(i)] A \emph{norm} on $S$ is a map $v: S \to \mathbb R_{\geq 0}$ such that \begin{equation*} v(x \cdot y) \leq v(x) + v(y)\ \text{for every}\ x,y\in S. \end{equation*} A \emph{normed semigroup} is a semigroup provided with a norm. If $S$ is a monoid, a \emph{monoid norm} on $S$ is a semigroup norm $v$ such that $v(1)=0$; in such a case $S$ is called \emph{normed monoid}. \item[(ii)] A semigroup homomorphism $\phi:(S,v)\to (S',v')$ between normed semigroups is \emph{contractive} if $$v'(\phi(x))\leq v(x)\ \text{for every}\ x\in S.$$ \end{itemize} \end{Definition} Let ${\mathfrak{S}}$ be the category of normed semigroups, which has as morphisms all contractive semigroup homomorphisms. In this paper, when we say that $S$ is a normed semigroup and $\phi:S\to S$ is an endomorphism, we will always mean that $\phi$ is a contractive semigroup endomorphism. Moreover, let $\mathfrak M$ be the non-full subcategory of ${\mathfrak{S}}$ with objects all normed monoids, where the morphisms are all (necessarily contractive) monoid homomorphisms. We give now some other definitions. \begin{Definition} A normed semigroup $(S,v)$ is: \begin{itemize} \item[(i)] \emph{bounded} if there exists $C\in \mathbb N_+$ such that $v(x) \leq C$ for all $x\in S$; \item[(ii)]\emph{arithmetic} if for every $x\in S$ there exists a constant $C_x\in \mathbb N_+$ such that $v(x^n) \leq C_x\cdot \log (n+1)$ for every $n\in\mathbb N$. \end{itemize} \end{Definition} Obviously, bounded semigroups are arithmetic. \begin{Example}\label{Fekete} Consider the monoid $S = (\mathbb N, +)$. \begin{itemize} \item[(a)] Norms $v$ on $S$ correspond to {subadditive sequences} $(a_n)_{n\in\mathbb N}$ in $ \mathbb R_+$ (i.e., $a_{n + m}\leq a_n + a_m$) via $v \mapsto (v(n))_{n\in\mathbb N}$. Then $\lim_{n\to \infty} \frac{a_n}{n}= \inf_{n\in\mathbb N} \frac{a_n}{n}$ exists by Fekete Lemma \cite{Fek}. \item[(b)] Define $v: S \to \mathbb R_+$ by $v(x) = \log (1+ x)$ for $x\in S$. 
Then $v$ is an arithmetic semigroup norm. \item[(c)] Define $v_1: S \to \mathbb R_+$ by $v_1(x) = \sqrt x$ for $x\in S$. Then $v_1$ is a semigroup norm, but $(S, + , v_1)$ is not arithmetic. \item[(d)] For $a\in \mathbb N$, $a>1$, let $v_a(n) = \sum_i b_i$, where $n= \sum_{i=0}^k b_ia^i$ and $0\leq b_i < a$ for all $i$. Then $v_a$ is an arithmetic norm on $S$ making the map $x\mapsto ax$ an endomorphism in ${\mathfrak{S}}$. \end{itemize} \end{Example} \subsection{Preordered semigroups and normed semilattices}\label{preorder-sec} A triple $(S,\cdot,\leq)$ is a \emph{preordered semigroup} if the semigroup $(S,\cdot)$ admits a preorder $\leq$ such that $$x\leq y\ \text{implies}\ x \cdot z \leq y \cdot z\ \text{and}\ z \cdot x \leq z \cdot y\ \text{for all}\ x,y,z \in S.$$ Write $x\sim y$ when $x\leq y$ and $y\leq x$ hold simultaneously. Moreover, the \emph{positive cone} of $S$ is $$P_+(S)=\{a\in S:x\leq x \cdot a \ \text{and}\ x\leq a\cdot x\ \text{for every}\ x\in S\}.$$ A norm $v$ on the preordered semigroup $(S,\cdot,\leq)$ is \emph{monotone} if $x\leq y$ implies $v(x) \leq v(y)$ for every $x,y \in S$. Clearly, if the norm $v$ of $S$ is monotone, then $v(x) = v(y)$ whenever $x \sim y$. Now we propose another notion of monotonicity for a semigroup norm which does not require the semigroup to be explicitly endowed with a preorder. \begin{Definition} Let $(S,v)$ be a normed semigroup. The norm $v$ is \emph{s-monotone} if $$\max\{v(x), v(y)\}\leq v(x \cdot y)\ \text{for every}\ x,y \in S.$$ \end{Definition} This inequality may be too stringent a condition when $S$ is close to being a group; indeed, if $S$ is a group, then it implies that $v(S) = \{v(1)\}$, in particular $v$ is constant. If $(S,+,v)$ is a commutative normed monoid, it admits a preorder $\leq^a$ defined for every $x,y\in S$ by $x\leq^a y$ if and only if there exists $z\in S$ such that $x+z=y$.
Then $(S,+,\leq^a)$ is a {preordered semigroup} and the norm $v$ is s-monotone if and only if $v$ is monotone with respect to $\leq^a$. The following connection between monotonicity and s-monotonicity is clear. \begin{Lemma} Let $S$ be a preordered semigroup. If $S=P_+(S)$, then every monotone norm of $S$ is also s-monotone. \end{Lemma} A \emph{semilattice} is a commutative semigroup $(S,\vee)$ such that $x\vee x=x$ for every $x\in S$. \begin{Example} \begin{itemize} \item[(a)] Each lattice $(L, \vee, \wedge)$ gives rise to two semilattices, namely $(L, \vee)$ and $(L, \wedge)$. \item[(b)] A filter $\mathcal F$ on a given set $X$ is a semilattice with respect to the intersection, with zero element the set $X$. \end{itemize} \end{Example} Let ${\mathfrak{L}}$ be the full subcategory of ${\mathfrak{S}}$ with objects all normed semilattices. Every normed semilattice $(L,\vee)$ is trivially arithmetic; moreover, the canonical partial order defined by $$x\leq y\ \text{if and only if}\ x\vee y=y,$$ for every $x,y\in L$, makes $L$ also a partially ordered semigroup. Neither preordered semigroups nor normed semilattices are formally needed for the definition of the semigroup entropy. Nevertheless, they provide significant and natural examples, as well as useful tools in the proofs, which justify our attention to this topic. \subsection{Entropy in ${\mathfrak{S}}$}\label{hs-sec} For $(S,v)$ a normed semigroup, $\phi:S\to S$ an endomorphism, $x\in S$ and $n\in\mathbb N_+$, consider the \emph{$n$-th $\phi$-trajectory of $x$} $$T_n(\phi,x) = x \cdot\phi(x)\cdot\ldots \cdot\phi^{n-1}(x)$$ and let $$c_n(\phi,x) = v(T_n(\phi,x)).$$ Note that $c_n(\phi,x) \leq n\cdot v(x)$. Hence the growth of the function $n \mapsto c_n(\phi,x)$ is at most linear. \begin{Definition} Let $S$ be a normed semigroup.
An endomorphism $\phi:S\to S$ is said to have \emph{logarithmic growth} if for every $x\in S$ there exists $C_x\in\mathbb N_+$ with $c_n(\phi,x) \leq C_x\cdot \log (n+1)$ for all $n\in\mathbb N_+$. \end{Definition} Obviously, a normed semigroup $S$ is arithmetic if and only if $\mathrm{id}_{S}$ has logarithmic growth. The following theorem from \cite{DGV1} is fundamental in this context as it witnesses the existence of the semigroup entropy; so we give its proof also here for the reader's convenience. \begin{Theorem}\label{limit} Let $S$ be a normed semigroup and $\phi:S\to S$ an endomorphism. Then for every $x \in S$ the limit \begin{equation}\label{hs-eq} h_{{\mathfrak{S}}}(\phi,x):= \lim_{n\to\infty}\frac{c_n(\phi,x)}{n} \end{equation} exists and satisfies $h_{{\mathfrak{S}}}(\phi,x)\leq v(x)$. \end{Theorem} \begin{proof} The sequence $(c_n(\phi,x))_{n\in\mathbb N_+}$ is subadditive. Indeed, \begin{align*} c_{n+m}(\phi,x)&= v(x\cdot\phi(x)\cdot\ldots\cdot\phi^{n-1}(x)\cdot\phi^{n}(x)\cdot\ldots\cdot\phi^{n+m-1}(x))\\ &=v((x\cdot\phi(x)\cdot\ldots\cdot\phi^{n-1}(x))\cdot\phi^{n}(x\cdot\ldots\cdot\phi^{m-1}(x))) \\ &\leq c_n(\phi,x)+v(\phi^{n}(x\cdot\ldots\cdot\phi^{m-1}(x))) \\ &\leq c_n(\phi,x)+v(x\cdot\ldots\cdot\phi^{m-1}(x))=c_n(\phi,x)+c_m(\phi,x). \end{align*} By the Fekete Lemma (see Example \ref{Fekete} (a)), the limit $\lim_{n\to\infty} \frac{c_n(\phi,x)}{n}$ exists and coincides with $\inf_{n\in\mathbb N_+} \frac{c_n(\phi,x)}{n}$. Finally, $h_{{\mathfrak{S}}}(\phi,x)\leq v(x)$ follows from $c_n(\phi,x) \leq n v(x)$ for every $n\in\mathbb N_+$. \end{proof} \begin{Remark} \begin{itemize} \item[(a)] The proof of the existence of the limit defining $h_{{\mathfrak{S}}}(\phi,x)$ exploits the subadditivity of the semigroup norm and also the condition that $\phi$ be contractive. For an extended comment on what can be done in case the function $v: S \to \mathbb R_+$ fails to have that property, see \S \ref{NewSec1}.
\item[(b)] With $S = (\mathbb N,+)$, $\phi = \mathrm{id}_\mathbb N$ and $x=1$ in Theorem \ref{limit} we obtain exactly item (a) of Example \ref{Fekete}. \end{itemize} \end{Remark} \begin{Definition}\label{SEofEndos} Let $S$ be a normed semigroup and $\phi:S\to S$ an endomorphism. The \emph{semigroup entropy} of $\phi$ is $$h_{{\mathfrak{S}}}(\phi)=\sup_{x\in S}h_{{\mathfrak{S}}}(\phi,x).$$ \end{Definition} If an endomorphism $\phi:S\to S$ has logarithmic growth, then $h_{{\mathfrak{S}}}(\phi) = 0$. In particular, $h_{{\mathfrak{S}}}(\mathrm{id}_S)=0$ if $S$ is arithmetic. Recall that an endomorphism $\phi:S\to S$ of a normed semigroup $S$ is \emph{locally quasi periodic} if for every $x\in S$ there exist $n,k\in\mathbb N$, $k>0$, such that $\phi^n(x)=\phi^{n+k}(x)$. If $S$ is a monoid and $\phi(1)=1$, then $\phi$ is \emph{locally nilpotent} if for every $x\in S$ there exists $n\in\mathbb N_+$ such that $\phi^n(x)=1$. \begin{Lemma}\label{locally} Let $S$ be a normed semigroup and $\phi:S\to S$ an endomorphism. \begin{itemize} \item[(a)] If $S$ is arithmetic and $\phi$ is locally quasi periodic, then $h_{\mathfrak{S}}(\phi)=0$. \item[(b)] If $S$ is a monoid, $\phi(1)=1$ and $\phi$ is locally nilpotent, then $h_{\mathfrak{S}}(\phi)=0$. \end{itemize} \end{Lemma} \begin{proof} (a) Let $x\in S$, and let $l,k\in\mathbb N_+$ be such that $\phi^l(x)=\phi^{l+k}(x)$. For every $m\in\mathbb N_+$ one has $$T_{l+mk}(\phi,x)=T_l(\phi,x)\cdot T_m(\mathrm{id}_S,y) = T_l(\phi,x)\cdot y^m,$$ where $y=\phi^l(T_k(\phi,x))$. Since $S$ is arithmetic, there exists $C_x\in \mathbb N_+$ such that \begin{equation*}\begin{split} v(T_{l+mk}(\phi,x)) = v(T_l(\phi,x)\cdot y^m) \leq \\ v(T_l(\phi,x)) + v(y^m) \leq v(T_l (\phi,x)) + C_x\cdot\log (m+1), \end{split}\end{equation*} so $\lim_{m\to \infty} \frac{v(T_{l+mk}(\phi,x))}{l+mk}=0$. Therefore we have found a subsequence of $\left(\frac{c_n(\phi,x)}{n}\right)_{n\in\mathbb N_+}$ converging to $0$, hence also $h_{\mathfrak{S}}(\phi,x)=0$.
Hence $h_{\mathfrak{S}}(\phi)=0$. (b) For $x\in S$, there exists $n\in\mathbb N_+$ such that $\phi^n(x)=1$. Therefore $T_{n+k}(\phi,x)=T_n(\phi,x)$ for every $k\in\mathbb N$, hence $h_{\mathfrak{S}}(\phi,x)=0$. \end{proof} We discuss now a possible different notion of semigroup entropy. Let $(S,v)$ be a normed semigroup, $\phi:S\to S$ an endomorphism, $x\in S$ and $n\in\mathbb N_+$. One could define also the ``left'' $n$-th $\phi$-trajectory of $x$ as $$T_n^{\#}(\phi,x)=\phi^{n-1}(x)\cdot\ldots\cdot\phi(x)\cdot x,$$ changing the order of the factors with respect to the above definition. With these trajectories it is possible to define another entropy letting $$h_{\mathfrak{S}}^{\#}(\phi,x)=\lim_{n\to\infty}\frac{v(T_n^{\#}(\phi,x))}{n},$$ and $$h_{\mathfrak{S}}^{\#}(\phi)=\sup\{h_{\mathfrak{S}}^{\#}(\phi,x):x\in S\}.$$ In the same way as above, one can see that the limit defining $h_{\mathfrak{S}}^{\#}(\phi,x)$ exists. Obviously $h_{\mathfrak{S}}^{\#}$ and $h_{\mathfrak{S}}$ coincide on the identity map and on commutative normed semigroups, but now we see that in general they do not always take the same values. Item (a) in the following example shows that it may happen that they do not coincide ``locally'', while they coincide ``globally''. Moreover, modifying appropriately the norm in item (a), J. Spev\'ak found the example in item (b) for which $h_{\mathfrak{S}}^{\#}$ and $h_{\mathfrak{S}}$ do not coincide even ``globally''. \begin{Example} Let $X=\{x_n\}_{n\in\mathbb Z}$ be a faithfully enumerated countable set and let $S$ be the free semigroup generated by $X$. An element $w\in S$ is a word $w=x_{i_1}x_{i_2}\ldots x_{i_m}$ with $m\in\mathbb N_+$ and $i_j\in\mathbb Z$ for $j= 1,2, \ldots, m$. In this case $m$ is called the {\em length} $\ell_X(w)$ of $w$, and a subword of $w$ is any $w'\in S$ of the form $w'=x_{i_k}x_{i_{k+1}}\ldots x_{i_l}$ with $1\le k\le l\le m$.
Consider the automorphism $\phi:S\to S$ determined by $\phi(x_n)=x_{n+1}$ for every $n\in\mathbb Z$. \begin{itemize}\label{ex-jan} \item[(a)] Let $s(w)$ be the number of adjacent pairs $(i_k,i_{k+1})$ in $w$ such that $i_k<i_{k+1}$. The map $v:S\to\mathbb R_+$ defined by $v(w)=s(w)+1$ is a semigroup norm. Then $\phi:(S,v)\to (S,v)$ is an automorphism of normed semigroups. It is straightforward to prove that, for $w=x_{i_1}x_{i_2}\ldots x_{i_m}\in S$, \begin{itemize} \item[(i)] $h_{\mathfrak{S}}^\#(\phi,w)=h_{\mathfrak{S}}(\phi,w)$ if and only if $i_1>i_m+1$; \item[(ii)] $h_{\mathfrak{S}}^\#(\phi,w)=h_{\mathfrak{S}}(\phi,w)-1$ if and only if $i_m=i_1$ or $i_m=i_1-1$. \end{itemize} Moreover, \begin{itemize} \item[(iii)]$h_{\mathfrak{S}}^\#(\phi)=h_{\mathfrak{S}}(\phi)=\infty$. \end{itemize} In particular, $h_{\mathfrak{S}}(\phi,x_0)=1$ while $h_{\mathfrak{S}}^\#(\phi,x_0)=0$. \item[(b)] Define a semigroup norm $\nu: S\to \mathbb R_+$ as follows. For $w=x_{i_1}x_{i_2}\ldots x_{i_n}\in S$ consider its subword $w'=x_{i_k}x_{i_{k+1}}\ldots x_{i_l}$ with maximal length satisfying $i_{j+1}=i_j+1$ for every $j\in\mathbb Z$ with $k\le j\le l-1$ and let $\nu(w)=\ell_X(w')$. Then $\phi:(S,\nu)\to (S,\nu)$ is an automorphism of normed semigroups. It is possible to prove that, for $w\in S$, \begin{enumerate} \item[(i)] if $\ell_X(w)=1$, then $\nu(T_n(\phi,w))=n$ and $\nu(T^\#_n(\phi,w))=1$ for every $n\in\mathbb N_+$; \item[(ii)] if $\ell_X(w)=k$ with $k>1$, then $\nu(T_n(\phi,w))< 2k$ and $\nu(T^\#_n(\phi,w))< 2k $ for every $n\in\mathbb N_+$. \end{enumerate} From (i) and (ii) and from the definitions we immediately obtain that \begin{itemize} \item[(iii)] $h_\mathfrak{S}(\phi)=1\neq 0=h^\#_\mathfrak{S}(\phi)$. \end{itemize} \end{itemize} \end{Example} We list now the main basic properties of the semigroup entropy. For complete proofs and further details see \cite{DGV1}. 
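Before turning to these properties, we note that item (a) of the example above lends itself to a quick numerical check. The following Python sketch (our own illustration, not part of the survey; all function names are ours) encodes words of the free semigroup $S$ as tuples of indices, implements the norm $v(w)=s(w)+1$ and the shift $\phi(x_n)=x_{n+1}$, and estimates $h_{\mathfrak{S}}(\phi,x_0)$ and $h_{\mathfrak{S}}^{\#}(\phi,x_0)$ by computing $v(T_n(\phi,x_0))/n$ and $v(T_n^{\#}(\phi,x_0))/n$ for a large $n$.

```python
# Words of the free semigroup S on {x_n : n in Z}, encoded as tuples of indices.

def v(w):
    # the norm v(w) = s(w) + 1, where s(w) counts adjacent ascents i_k < i_{k+1}
    return 1 + sum(1 for a, b in zip(w, w[1:]) if a < b)

def shift(w):
    # the automorphism phi determined by phi(x_n) = x_{n+1}
    return tuple(i + 1 for i in w)

def trajectory(w, n, left=False):
    # T_n(phi, w) = w . phi(w) ... phi^{n-1}(w); left=True gives T_n^#(phi, w)
    parts, cur = [w], w
    for _ in range(n - 1):
        cur = shift(cur)
        parts.append(cur)
    if left:
        parts.reverse()
    return tuple(i for p in parts for i in p)

n = 1000
x0 = (0,)
print(v(trajectory(x0, n)) / n)             # 1.0    -> h_S(phi, x_0) = 1
print(v(trajectory(x0, n, left=True)) / n)  # 0.001  -> h#_S(phi, x_0) = 0
```

The exact values $v(T_n(\phi,x_0))=n$ and $v(T_n^{\#}(\phi,x_0))=1$ reflect that the word $x_0x_1\ldots x_{n-1}$ contains $n-1$ ascents while the reversed product contains none, matching $h_{\mathfrak{S}}(\phi,x_0)=1$ and $h_{\mathfrak{S}}^{\#}(\phi,x_0)=0$ in item (a).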
\begin{Lemma}[Monotonicity for factors] Let $S$, $T$ be normed semigroups and $\phi: S \to S$, $\psi:T\to T$ endomorphisms. If $\alpha:T\to S$ is a surjective homomorphism such that $\alpha \circ \psi = \phi \circ \alpha$, then $h_{{\mathfrak{S}}}(\phi) \leq h_{{\mathfrak{S}}}(\psi)$. \end{Lemma} \begin{proof} Fix $x\in S$ and find $y \in T$ with $x= \alpha(y)$. Then $c_n(\phi, x) \leq c_n(\psi, y)$. Dividing by $n$ and taking the limit gives $h_{{\mathfrak{S}}}(\phi,x) \leq h_{{\mathfrak{S}}}(\psi,y)$. So $h_{{\mathfrak{S}}}(\phi,x)\leq h_{{\mathfrak{S}}}(\psi)$. When $x$ runs over $S$, we conclude that $h_{{\mathfrak{S}}}(\phi) \leq h_{{\mathfrak{S}}}(\psi)$. \end{proof} \begin{Corollary}[Invariance under conjugation] Let $S$ be a normed semigroup and $\phi: S \to S$ an endomorphism. If $\alpha:S\to T$ is an isomorphism, then $h_{{\mathfrak{S}}}(\phi)=h_{{\mathfrak{S}}}(\alpha\circ\phi\circ\alpha^{-1})$. \end{Corollary} \begin{Lemma}[Invariance under inversion]\label{inversion} Let $S$ be a normed semigroup and $\phi:S\to S$ an automorphism. Then $h_{{\mathfrak{S}}}(\phi^{-1})=h_{{\mathfrak{S}}}(\phi)$. \end{Lemma} \begin{Theorem}[Logarithmic Law] Let $(S,v)$ be a normed semigroup and $\phi:S\to S$ an endomorphism. Then $$ h_{{\mathfrak{S}}}(\phi^{k})\leq k\cdot h_{{\mathfrak{S}}}(\phi) $$ for every $k\in \mathbb N_+$. Furthermore, equality holds if $v$ is s-monotone. Moreover, if $\phi:S\to S$ is an automorphism, then $$h_{{\mathfrak{S}}}(\phi^k) = |k|\cdot h_{{\mathfrak{S}}}(\phi)$$ for all $k \in \mathbb Z\setminus\{0\}$. \end{Theorem} \begin{proof} Fix $k \in \mathbb N_+$, $x\in S$ and let $y= x\cdot\phi(x)\cdot\ldots\cdot\phi^{k-1}(x)$. Then \begin{align*} h_{\mathfrak{S}}(\phi^k)\geq h_{\mathfrak{S}}(\phi^k, y)&=\lim_{n\to\infty} \frac{c_{n}(\phi^k,y)}{n}=\lim_{n\to \infty} \frac{v (y\cdot \phi^k(y)\cdot\ldots \cdot\phi^{(n-1)k}(y)) }{n}=\\ &= k \cdot \lim_{n\to \infty} \frac{c_{nk}(\phi,x)}{nk}=k\cdot h_{\mathfrak{S}}(\phi,x).
\end{align*} This yields $h_{\mathfrak{S}}(\phi^k)\geq k\cdot h_{\mathfrak{S}}(\phi,x)$ for all $x\in S$, and consequently, $h_{\mathfrak{S}}(\phi^k)\geq k\cdot h_{\mathfrak{S}}(\phi)$. Suppose now that $v$ is s-monotone. Then \begin{equation*}\begin{split} h_{\mathfrak{S}}(\phi,x)=\lim_{n\to \infty} \frac{v(x\cdot\phi (x)\cdot\ldots\cdot\phi^{nk-1}(x))}{n\cdot k} \geq \\ \lim_{n\to\infty} \frac{ v(x\cdot\phi^k(x)\cdot\ldots\cdot(\phi^k)^{n-1}(x))}{n\cdot k}= \frac{h_{\mathfrak{S}}(\phi^k,x)}{k}. \end{split}\end{equation*} Hence, $k\cdot h_{\mathfrak{S}}(\phi)\geq h_{\mathfrak{S}}(\phi^k,x)$ for every $x\in S$. Therefore, $k\cdot h_{\mathfrak{S}}(\phi)\geq h_{\mathfrak{S}}(\phi^k)$. If $\phi$ is an automorphism and $k\in\mathbb Z\setminus\{0\}$, apply the previous part of the theorem and Lemma \ref{inversion}. \end{proof} The next lemma shows that monotonicity is available not only under taking factors: \begin{Lemma}[Monotonicity for subsemigroups] Let $(S,v)$ be a normed semigroup and $\phi:S\to S$ an endomorphism. If $T$ is a $\phi$-invariant normed subsemigroup of $(S,v)$, then $h_{{\mathfrak{S}}}(\phi)\geq h_{{\mathfrak{S}}}(\phi\restriction_{T})$. Equality holds if $S$ is ordered, $v$ is monotone and $T$ is cofinal in $S$. \end{Lemma} Note that $T$ is equipped with the induced norm $v\restriction_T$. The same applies to the subsemigroups $S_i$ in the next corollary: \begin{Corollary}[Continuity for direct limits] Let $(S,v)$ be a normed semigroup and $\phi:S\to S$ an endomorphism. If $\{S_i: i\in I\}$ is a directed family of $\phi$-invariant normed subsemigroups of $(S,v)$ with $ S =\varinjlim S_i$, then $h_{{\mathfrak{S}}}(\phi)=\sup h_{{\mathfrak{S}}}(\phi\restriction_{S_i})$. \end{Corollary} We consider now products in ${\mathfrak{S}}$. Let $\{(S_i,v_i):i\in I\}$ be a family of normed semigroups and let $S=\prod_{i \in I}S_i$ be their direct product in the category of semigroups.
In case $I$ is finite, then $S$ becomes a normed semigroup with the $\max$-norm $v_{\prod}$, so $(S,v_{\prod})$ is the product of the family $\{S_i:i\in I\}$ in the category ${\mathfrak{S}}$; in such a case one has the following \begin{Theorem}[Weak Addition Theorem - products]\label{WAT} Let $(S_i,v_i)$ be a normed semigroup and $\phi_i:S_i\to S_i$ an endomorphism for $i=1,2$. Then the endomorphism $\phi_1 \times \phi_2$ of $ S _1 \times S_2$ has $h_{\mathfrak{S}}(\phi_1 \times \phi_2)= \max\{ h_{\mathfrak{S}}(\phi_1),h_{\mathfrak{S}}(\phi_2)\}$. \end{Theorem} If $I$ is infinite, $S$ need not carry a semigroup norm $v$ such that every projection $p_i: (S,v) \to (S_i,v_i)$ is a morphism in ${\mathfrak{S}}$. This is why the product of the family $\{(S_i,v_i):i\in I\}$ in ${\mathfrak{S}}$ is actually the subset $$S_{\mathrm{bnd}}=\{x=(x_i)_{i\in I}\in S: \sup_{i\in I}v_i(x_i)\in\mathbb R\}$$ of $S$ with the norm $v_{\prod}$ defined by $$v_{\prod}(x)=\sup_{i\in I}v_i(x_i)\ \text{for any}\ x=(x_i)_{i\in I}\in S_{\mathrm{bnd}}.$$ For further details in this direction see \cite{DGV1}. \subsection{Entropy in $\mathfrak M$} We collect here some additional properties of the semigroup entropy in the category $\mathfrak M$ of normed monoids where also coproducts are available. If $(S_i,v_i)$ is a normed monoid for every $i\in I$, the direct sum $$S= \bigoplus_{i\in I} S_i =\{(x_i)\in \prod_{i\in I}S_i: |\{i\in I: x_i \ne 1\}|<\infty\}$$ becomes a normed monoid with the norm $$v_\oplus(x) = \sum_{i\in I} v_i(x_i)\ \text{for any}\ x = (x_i)_{i\in I} \in S.$$ This definition makes sense since $v_i$ are monoid norms, so $v_i(1) = 0$. Hence, $(S,v_\oplus)$ becomes a coproduct of the family $\{(S_i,v_i):i\in I\}$ in $\mathfrak M$. We consider now the case when $I$ is finite, so assume without loss of generality that $I=\{1,2\}$. In other words we have two normed monoids $(S_1,v_1)$ and $(S_2,v_2)$. 
The product and the coproduct have the same underlying monoid $S=S_1\times S_2$, but the norms $v_\oplus$ and $v_{\prod}$ in $S$ are different and give different values of the semigroup entropy $h_{\mathfrak{S}}$; indeed, compare Theorem \ref{WAT} and the following one. \begin{Theorem}[Weak Addition Theorem - coproducts] Let $(S_i,v_i)$ be a normed monoid and $\phi_i:S_i\to S_i$ an endomorphism for $i=1,2$. Then the endomorphism $\phi_1 \oplus \phi_2$ of $S _1 \oplus S_2$ has $h_{\mathfrak{S}}(\phi_1 \oplus \phi_2)= h_{\mathfrak{S}}(\phi_1) + h_{\mathfrak{S}}(\phi_2)$. \end{Theorem} For a normed monoid $(M,v) \in \mathfrak M$ let $B(M)= \bigoplus_\mathbb N M$, equipped with the above coproduct norm $v_\oplus(x) = \sum_{n\in\mathbb N} v(x_n)$ for any $x=(x_n)_{n\in\mathbb N}\in B(M)$. The \emph{right Bernoulli shift} is defined by $$\beta_M:B(M)\to B(M), \ \beta_M(x_0,\dots,x_n,\dots)=(1,x_0,\dots,x_n,\dots),$$ while the \emph{left Bernoulli shift} is $${}_M\beta:B(M)\to B(M),\ {}_M\beta(x_0,x_1,\dots,x_n,\dots)=(x_1,x_2, \dots,x_n,\dots).$$ \begin{Theorem}[Bernoulli normalization] Let $(M,v)$ be a normed monoid. Then: \begin{itemize} \item[(a)] $h_{\mathfrak{S}} (\beta_M)=\sup_{x\in M}v(x)$; \item[(b)] $h_{\mathfrak{S}}({}_M\beta) = 0$. \end{itemize} \end{Theorem} \begin{proof} (a) For $x\in M$ consider $\underline{x}=(x_n)_{n\in\mathbb N}\in B(M)$ such that $x_0=x$ and $x_n=1$ for every $n\in\mathbb N_+$. Then $v_\oplus(T_n(\beta_M,\underline{x}))=n\cdot v(x)$, so $h_{\mathfrak{S}}(\beta_M,\underline{x})=v(x)$. Hence $h_{\mathfrak{S}}(\beta_M)\geq \sup_{x\in M}v(x)$. Let now $\underline{x}=(x_n)_{n\in\mathbb N}\in B(M)$ and let $k\in\mathbb N$ be the greatest index such that $x_k\neq 1$; then \begin{equation*}\begin{split} v_\oplus(T_n(\beta_M,\underline{x}))= \sum_{i=0}^{k+n} v(T_n(\beta_M,\underline{x})_i)\leq\\ \sum_{i=0}^{k-1} v(x_0\cdot\ldots\cdot x_i) + (n-k)\cdot v(x_1\cdot\ldots\cdot x_k)+\sum_{i=1}^{k} v(x_i\cdot\ldots\cdot x_k). 
\end{split}\end{equation*} Since the first and the last summands do not depend on $n$, after dividing by $n$ and letting $n$ tend to infinity we obtain $$h_{\mathfrak{S}}(\beta_M,\underline{x})=\lim_{n\to \infty} \frac{v_\oplus(T_n(\beta_M,\underline{x}))}{n}\leq v(x_1\cdot\ldots\cdot x_k)\leq \sup_{x\in M}v(x).$$ (b) Note that ${}_M\beta$ is locally nilpotent and apply Lemma \ref{locally}. \end{proof} \subsection{Semigroup entropy of an element and pseudonormed semigroups}\label{NewSec1} One can notice a certain asymmetry in Definition \ref{SEofEndos}. Indeed, for $S$ a normed semigroup, the local semigroup entropy defined in \eqref{hs-eq} is a two variable function $$h_{{\mathfrak{S}}}: \mathrm{End}(S) \times S \to \mathbb R_+.$$ Taking $h_{{\mathfrak{S}}}(\phi)=\sup_{x\in S}h_{{\mathfrak{S}}}(\phi,x)$ for an endomorphism $\phi\in\mathrm{End}(S)$, we obtained the notion of semigroup entropy of $\phi$. But one can obviously exchange the roles of $\phi$ and $x$ and obtain a notion of entropy of an element $x\in S$. This can be done in two ways. In Remark \ref{Asymm} we consider what seems the natural counterpart of $h_{{\mathfrak{S}}}(\phi)$, while here we discuss a particular case that may appear almost trivial but is not, since it provides a uniform approach to several entropies that are not defined by means of trajectories. So, by taking $\phi=\mathrm{id}_S$ in \eqref{hs-eq}, we obtain a map $h_{\mathfrak{S}}^0:S\to\mathbb R_+$: \begin{Definition} Let $S$ be a normed semigroup and $x\in S$. The \emph{semigroup entropy} of $x$ is $$ h_{{\mathfrak{S}}}^0(x):=h_{{\mathfrak{S}}}(\mathrm{id}_S,x) = \lim_{n\to\infty} \frac{v(x^n)}{n}. $$ \end{Definition} We shall see now that the notion of semigroup entropy of an element is supported by many examples.
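Before turning to the examples from the literature, here is a toy instance of $h_{\mathfrak{S}}^0$ (our own illustration, not from the text): in the monoid of finite sets of words over $\{a,b\}$, with product $F\cdot G=\{fg: f\in F,\ g\in G\}$ and norm $v(F)=\log|F|$, the element $x=\{a,b\}$ satisfies $|x^n|=2^n$, hence $h_{\mathfrak{S}}^0(x)=\log 2$.

```python
import math

# Our own toy instance (not from the text): in the monoid of finite sets
# of words over {a, b}, with product F * G = {fg} and norm v(F) = log|F|,
# the element x = {a, b} satisfies |x^n| = 2^n, so h^0(x) = log 2.

def product(F, G):
    return {f + g for f in F for g in G}

x = {"a", "b"}
power = {""}                      # neutral element: the empty word
for n in range(1, 9):
    power = product(power, x)
    assert len(power) == 2 ** n
    # v(x^n)/n equals log 2 exactly, already at every finite n
    assert abs(math.log(len(power)) / n - math.log(2)) < 1e-12
```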
On the other hand, since some of the examples given below cannot be covered by our scheme, we propose first a slight extension that covers those examples as well. Let ${\mathfrak S}^*$ be the category having as objects all pairs $(S,v)$, where $S$ is a semigroup and $v:S \to \mathbb R_+$ is an \emph{arbitrary} map. A morphism in the category ${\mathfrak S}^*$ is a semigroup homomorphism $\phi: (S,v) \to (S',v')$ that is contracting with respect to the pair $v,v'$, i.e., $v'(\phi(x)) \leq v(x)$ for every $x\in S$. Note that our starting category ${\mathfrak S}$ is simply a full subcategory of ${\mathfrak S}^*$, having as objects those pairs $(S,v)$ such that $v$ satisfies (i) from Definition \ref{Def1}. These pairs were called normed semigroups and $v$ was called a semigroup norm. For the sake of convenience, and in order to keep close to the current terminology, let us call the function $v$ in the larger category ${\mathfrak S}^*$ a \emph{semigroup pseudonorm} (although we impose no condition on $v$ whatsoever). So, in this setting, one can define a local semigroup entropy $h_{{\mathfrak{S}}^*}: \mathrm{End}(S) \times S \to \mathbb R_+$ following the pattern of \eqref{hs-eq}, replacing the limit by $$h_{{\mathfrak{S}}^*} (\phi,x)=\limsup_{n\to \infty}\frac{v(T_n(\phi,x))}{n}.$$ In particular, $$h_{{\mathfrak{S}}^*}^0(x)=\limsup_{n\to \infty}\frac{v(x^n)}{n}.$$ Note that for the last $\limsup$ to be a limit one does not need $(S,v)$ to be in ${\mathfrak{S}}$: it suffices that the semigroup norm condition (i) from Definition \ref{Def1} hold for products of powers of the same element. We consider here three different entropies, respectively from \cite{MMS}, \cite{FFK} and \cite{Silv}, that can be described in terms of $h_{\mathfrak{S}}^0$ or its generalized version $h_{{\mathfrak{S}}^*}^0$.
We do not go into the details, but we give an idea of how to capture them using the notion of semigroup entropy of an element of the semigroup of all endomorphisms of a given object equipped with an appropriate semigroup (pseudo)norm. \begin{itemize} \item[(a)] Following \cite{MMS}, let $R$ be a Noetherian local ring and $\phi:R\to R$ an endomorphism of finite length; denote by $\lambda(\phi)$ the length of $\phi$, which is a real number $\geq 1$. In this setting the entropy of $\phi$ is defined by $$h_\lambda(\phi)=\lim_{n\to \infty}\frac{\log\lambda(\phi^n)}{n}$$ and it is proved that this limit exists. Then the set $S=\mathrm{End}_{\mathrm{fl}}(R)$ of all finite-length endomorphisms of $R$ is a semigroup and $\log\lambda(-)$ is a semigroup norm on $S$. For every $\phi\in S$, we have $$ h_\lambda(\phi)=h_{\mathfrak{S}}(\mathrm{id}_S,\phi)=h_{{\mathfrak{S}}}^0(\phi). $$ In other words, $h_\lambda(\phi)$ is nothing else but the semigroup entropy of the element $\phi$ of the normed semigroup $S=\mathrm{End}_{\mathrm{fl}}(R)$. \item[(b)] We recall now the entropy considered in \cite{Silv}, which was already introduced in \cite{BV}. Let $t\in\mathbb N_+$ and $\varphi:\mathbb P^t\to\mathbb P^t$ be a dominant rational map of degree $d$. Then the entropy of $\varphi$ is defined as the logarithm of the dynamical degree, that is $$ h_\delta (\varphi)=\log \delta_\varphi=\limsup_{n\to \infty}\frac{\log\deg(\varphi^n)}{n}. $$ Consider the semigroup $S$ of all dominant rational maps of $\mathbb P^t$ and the function $\log\deg(-)$. In general this is only a semigroup pseudonorm on $S$, and $$h_{{\mathfrak{S}}^*}^0(\varphi)=h_\delta(\varphi).$$ Note that $\log\deg(-)$ is a semigroup norm when restricted to the endomorphisms of the variety $\mathbb P^t$. \item[(c)] We consider now the growth rate for endomorphisms introduced in \cite{Bowen} and recently studied in \cite{FFK}.
Let $G$ be a finitely generated group, $X$ a finite symmetric set of generators of $G$, and $\varphi:G\to G$ an endomorphism. For $g\in G$, denote by $\ell_X(g)$ the length of $g$ with respect to the alphabet $X$. The growth rate of $\varphi$ with respect to $x\in X$ is $$\log GR(\varphi,x)=\lim_{n\to \infty}\frac{\log \ell_X(\varphi^n(x))}{n}$$ (and the growth rate of $\varphi$ is $\log GR(\varphi)=\sup_{x\in X} \log GR(\varphi,x)$). Consider $S=\mathrm{End}(G)$ and, for a fixed $x\in X$, the map $\log GR(-,x)$. As in item (b), this is only a semigroup pseudonorm on $S$. Nevertheless, also in this case the semigroup entropy captures the growth rate: $$\log GR(\varphi,x)=h_{{\mathfrak{S}}^*}^0(\varphi).$$ \end{itemize} \begin{Remark}\label{Asymm} For a normed semigroup $S$, let $h_{{\mathfrak{S}}}: \mathrm{End}(S) \times S \to \mathbb R_+$ be the local semigroup entropy defined in \eqref{hs-eq}. Exchanging the roles of $\phi\in \mathrm{End}(S)$ and $x\in S$, define the \emph{global semigroup entropy} of an element $x\in S$ by $$ h_{{\mathfrak{S}}}(x)=\sup_{\phi \in \mathrm{End}(S)}h_{{\mathfrak{S}}}(\phi,x). $$ Obviously, $h_{{\mathfrak{S}}}^0(x) \leq h_{{\mathfrak{S}}}(x)$ for every $x\in S$. \end{Remark} \section{Obtaining known entropies}\label{known-sec} \subsection{The general scheme} Let $\mathfrak X$ be a category and let $F:\mathfrak X\to {\mathfrak{S}}$ be a functor. Define the entropy $$h_{F}:\mathfrak X\to \mathbb R_+$$ on the category $\mathfrak X$ by $$h_{F}(\phi)=h_{{\mathfrak{S}}}(F(\phi)),$$ for any endomorphism $\phi: X \to X$ in $\mathfrak X$. Recall that with some abuse of notation we write $h_{F}:\mathfrak X\to \mathbb R_+$ in place of $h_{F}:\mathrm{Flow}_\mathfrak X\to \mathbb R_+$ for simplicity. Since the functor $F$ preserves commutative squares and isomorphisms, the entropy $h_{F}$ has the following properties, which follow automatically from the previously listed properties of the semigroup entropy $h_{\mathfrak{S}}$.
For the details and for properties that need a further discussion see \cite{DGV1}. Let $X$, $Y$ be objects of $\mathfrak X$ and $\phi:X\to X$, $\psi:Y\to Y$ endomorphisms in $\mathfrak X$. \begin{itemize} \item[(a)][Invariance under conjugation] If $\alpha:X\to Y$ is an isomorphism in $\mathfrak X$, then $h_{F}(\phi)=h_{F}(\alpha\circ\phi\circ\alpha^{-1})$. \item[(b)][Invariance under inversion] If $\phi:X\to X$ is an automorphism in $\mathfrak X$, then $h_{F}(\phi^{-1})=h_{F}(\phi)$. \item[(c)][Logarithmic Law] If the norm of $F(X)$ is $s$-monotone, then $h_{F}(\phi^{k})=k\cdot h_{F}(\phi)$ for all $k\in \mathbb N_+$. \end{itemize} Other properties of $h_{F}$ depend on properties of the functor $F$. \begin{itemize} \item[(d)][Monotonicity for invariant subobjects] If $F$ sends subobject embeddings in $\mathfrak X$ to embeddings in ${\mathfrak{S}}$ or to surjective maps in ${\mathfrak{S}}$, then, for every $\phi$-invariant subobject $Y$ of $X$, we have $h_{F}(\phi\restriction_Y)\leq h_{F}(\phi)$. \item[(e)][Monotonicity for factors] If $F$ sends factors in $\mathfrak X$ to surjective maps in ${\mathfrak{S}}$ or to embeddings in ${\mathfrak{S}}$, then, whenever $\alpha:Y\to X$ is an epimorphism in $\mathfrak X$ such that $\alpha \circ \psi = \phi \circ \alpha$, we have $h_F(\phi) \leq h_F(\psi)$. \item[(f)][Continuity for direct limits] If $F$ is covariant and sends direct limits to direct limits, then $h_F(\phi)=\sup_{i\in I} h_F(\phi\restriction_{X_i})$ whenever $X=\varinjlim X_i$ and $X_i$ is a $\phi$-invariant subobject of $X$ for every $i\in I$. \item[(g)][Continuity for inverse limits] If $F$ is contravariant and sends inverse limits to direct limits, then $h_F(\phi)=\sup_{i\in I} h_F(\overline\phi_i)$ whenever $X=\varprojlim X_i$ and $(X_i,\phi_i)$ is a factor of $(X,\phi)$ for every $i\in I$. \end{itemize} In the following subsections we describe how the known entropies can be obtained from this general scheme.
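To see the scheme at work before the formal constructions, here is a hedged Python sketch anticipating the set-theoretic entropy of the next subsection (our own toy encoding, not from the text): the functor sends a set $X$ to the semilattice of its finite subsets normed by cardinality, and a self-map $\lambda$ to $A\mapsto\lambda(A)$, so the resulting entropy of $\lambda$ is $\sup_A \lim_n |A\cup\lambda(A)\cup\dots\cup\lambda^{n-1}(A)|/n$.

```python
# Hedged sketch of h_F = h_S(F(lambda)) for the set-theoretic case of the
# next subsection: F sends a set X to its finite subsets normed by
# cardinality, and a self-map f to A -> f(A).  (Our own toy encoding.)

def trajectory_norm(f, A, n):
    # |T_n(f, A)| = |A ∪ f(A) ∪ ... ∪ f^{n-1}(A)|
    T, cur = set(), set(A)
    for _ in range(n):
        T |= cur
        cur = {f(a) for a in cur}
    return len(T)

succ = lambda x: x + 1            # the successor map on Z
for n in (1, 10, 100):
    assert trajectory_norm(succ, {0}, n) == n   # linear growth: entropy 1

rot = lambda x: (x + 1) % 5       # a periodic map has entropy 0:
assert trajectory_norm(rot, {0}, 100) == 5      # trajectory norms stay bounded
```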
For all the details we refer to \cite{DGV1}. \subsection{Set-theoretic entropy} In this section we consider the category $\mathbf{Set}$ of sets and maps and its (non-full) subcategory $\mathbf{Set}_\mathrm{fin}$ having as morphisms all the finitely many-to-one maps. We construct a functor $\mathfrak{atr}:\mathbf{Set}\to{\mathfrak{S}}$ and a functor $\mathfrak{str}: \mathbf{Set}_\mathrm{fin} \to {\mathfrak{S}}$, which give the set-theoretic entropy $\mathfrak h$ and the contravariant set-theoretic entropy $\mathfrak h^*$, introduced in \cite{AZD} and \cite{DG-islam} respectively. We also recall that they are related to invariants for self-maps of sets introduced in \cite{G0} and \cite{AADGH} respectively. A natural semilattice with zero, arising from a set $X$, is the family $({\mathcal S}(X),\cup)$ of all finite subsets of $X$ with neutral element $\emptyset$. Moreover the map defined by $v(A) = |A|$ for every $A\in\mathcal S(X)$ is an s-monotone norm. So let $\mathfrak{atr}(X)=(\mathcal S(X),\cup,v)$. Consider now a map $\lambda:X\to Y$ between sets and define $\mathfrak{atr}(\lambda):\mathcal S(X)\to \mathcal S(Y)$ by $A\mapsto \lambda(A)$ for every $A\in\mathcal S(X)$. This defines a covariant functor $$\mathfrak{atr}: \mathbf{Set} \to {\mathfrak{S}}$$ such that $$h_{\mathfrak{atr}}=\mathfrak h.$$ Consider now a finite-to-one map $\lambda:X\to Y$. As above let $\mathfrak{str}(X)=(\mathcal S(X),\cup,v)$, while $\mathfrak{str}(\lambda):\mathfrak{str}(Y)\to\mathfrak{str}(X)$ is given by $A \mapsto\lambda^{-1}(A)$ for every $A\in\mathcal S(Y)$. This defines a contravariant functor $$ \mathfrak{str}: \mathbf{Set}_\mathrm{fin}\to{\mathfrak{S}} $$ such that $$ h_{\mathfrak{str}}=\mathfrak h^*. $$ \subsection{Topological entropy for compact spaces} In this subsection we consider in the general scheme the topological entropy $h_{top}$ introduced in \cite{AKM} for continuous self-maps of compact spaces.
So we specify the general scheme for the category $\mathfrak X=\mathbf{CTop}$ of compact spaces and continuous maps, constructing the functor $\mathfrak{cov}:\mathbf{CTop}\to{\mathfrak{S}}$. For a topological space $X$ let $\mathfrak{cov}(X)$ be the family of all open covers $\mathcal U$ of $X$, where it is allowed $\emptyset\in\mathcal U$. For ${\cal U}, {\cal V}\in \mathfrak{cov}(X)$ let ${\cal U} \vee {\cal V}=\{U\cap V: U\in {\cal U}, V\in {\cal V}\}\in \mathfrak{cov}(X)$. One can easily prove commutativity and associativity of $\vee$; moreover, let $\mathcal E=\{X\}$ denote the trivial cover. Then \begin{center} $(\mathfrak{cov}(X), \vee, \mathcal E)$ is a commutative monoid. \end{center} For a topological space $X$, one has a natural preorder ${\cal U} \prec{\cal V} $ on $\mathfrak{cov}(X)$; indeed, ${\cal V}$ \emph{refines} ${\cal U}$ if for every $V \in {\cal V}$ there exists $U\in {\cal U}$ such that $V\subseteq U$. Note that this preorder has bottom element $\mathcal E$, and that it is not an order. In general, ${\cal U} \vee {\cal U} \ne {\cal U} $, yet ${\cal U} \vee {\cal U} \sim {\cal U} $, and more generally \begin{equation}\label{vee} {\cal U} \vee {\cal U} \vee \ldots \vee {\cal U} \sim {\cal U}. \end{equation} For $X$, $Y$ topological spaces, a continuous map $\phi:X\to Y$ and ${\cal U}\in \mathfrak{cov} (Y)$, let $\phi^{-1}({\cal U})=\{\phi^{-1}(U): U\in {\cal U}\}$. Then, as $\phi^{-1}({\cal U} \vee {\cal V})= \phi^{-1}({\cal U})\vee \phi^{-1}({\cal V})$, we have that $\mathfrak{cov} (\phi): \mathfrak{cov} (Y)\to \mathfrak{cov} (X)$, defined by ${\cal U} \mapsto \phi^{-1}({\cal U})$, is a semigroup homomorphism. This defines a contravariant functor $\mathfrak{cov}$ from the category of all topological spaces to the category of commutative semigroups. To get a semigroup norm on $\mathfrak{cov}(X)$ we restrict this functor to the subcategory ${\mathbf{CTop}}$ of compact spaces. 
For a compact space $X$ and ${\cal U}\in \mathfrak{cov}(X)$, let $$M({\cal U})=\min\{|{\cal V}|: {\cal V}\mbox{ a finite subcover of }{\cal U}\}\ \text{and}\ v({\cal U})=\log M({\cal U}).$$ Now \eqref{vee} gives $v({\cal U} \vee {\cal U} \vee \ldots \vee {\cal U}) = v({\cal U})$, so \begin{center} $(\mathfrak{cov}(X), \vee, v)$ is an arithmetic normed semigroup. \end{center} For every continuous map $\phi:X\to Y$ of compact spaces and ${\cal W}\in \mathfrak{cov}(Y)$, the inequality $v(\phi^{-1}({\cal W}))\leq v({\cal W})$ holds. Consequently \begin{center} $\mathfrak{cov}(\phi): \mathfrak{cov} (Y)\to \mathfrak{cov} (X)$, defined by ${\cal W}\mapsto\phi^{-1}({\cal W})$, is a morphism in ${\mathfrak{S}}$. \end{center} Therefore the assignments $X \mapsto \mathfrak{cov}(X)$ and $\phi\mapsto\mathfrak{cov}(\phi)$ define a contravariant functor $$\mathfrak{cov}:\mathbf{CTop}\to {\mathfrak{S}}.$$ Moreover, $$h_{\mathfrak{cov}}=h_{top}.$$ Since the functor $\mathfrak{cov}$ takes factors in ${\mathbf{CTop}}$ to embeddings in ${\mathfrak{S}}$, embeddings in ${\mathbf{CTop}}$ to surjective morphisms in ${\mathfrak{S}}$, and inverse limits in ${\mathbf{CTop}}$ to direct limits in ${\mathfrak{S}}$, we have automatically that the topological entropy $h_{top}$ is monotone for factors and restrictions to invariant subspaces, continuous for inverse limits, is invariant under conjugation and inversion, and satisfies the Logarithmic Law. \subsection{Measure entropy} In this subsection we consider the category ${\mathbf{MesSp}}$ of probability measure spaces $(X, \mathfrak B, \mu)$ and measure preserving maps, constructing a functor $\mathfrak{mes}:{\mathbf{MesSp}}\to{\mathfrak{S}}$ in order to obtain from the general scheme the measure entropy $h_{mes}$ from \cite{K} and \cite{Sinai}. For a measure space $(X,\mathfrak{B},\mu)$ let $\mathfrak{P}(X)$ be the family of all measurable partitions $\xi=\{A_1,A_2,\ldots,A_k\}$ of $X$. 
For $\xi, \eta\in \mathfrak{P}(X)$ let $\xi \vee \eta=\{U\cap V: U\in \xi, V\in \eta\}$. As $\xi \vee \xi = \xi$, and the partition $\xi_0=\{X\}$ is a zero element, \begin{center} $(\mathfrak{P}(X),\vee)$ is a semilattice with $0$. \end{center} Moreover, for $\xi=\{A_1,A_2,\ldots,A_k\}\in \mathfrak{P}(X)$ the \emph{entropy} of $\xi$ is given by Boltzmann's Formula $$ v(\xi)=-\sum_{i=1}^k \mu(A_i)\log \mu(A_i). $$ This is a monotone semigroup norm making $\mathfrak{P}(X)$ a normed semilattice and a normed monoid. Consider now a measure preserving map $T:X\to Y$. For a partition $\xi=\{A_i\}_{i=1}^k\in \mathfrak{P}(Y)$ let $T^{-1}(\xi)=\{T^{-1}(A_i)\}_{i=1}^k$. Since $T$ is measure preserving, one has $T^{-1}(\xi)\in \mathfrak{P}(X)$ and $\mu (T^{-1}(A_i)) = \mu(A_i)$ for all $i=1,\ldots,k$. Hence, $v(T^{-1}(\xi)) = v(\xi)$ and so \begin{center} $\mathfrak{mes}(T):\mathfrak{P}(Y)\to\mathfrak{P}(X)$, defined by $\xi\mapsto T^{-1}(\xi)$, is a morphism in ${\mathfrak{L}}$. \end{center} Therefore the assignments $X \mapsto\mathfrak{P}(X)$ and $T\mapsto\mathfrak{mes}(T)$ define a contravariant functor $$\mathfrak{mes}:{\mathbf{MesSp}}\to{\mathfrak{L}}.$$ Moreover, $$h_{\mathfrak{mes}}=h_{mes}.$$ The functor $\mathfrak{mes}:{\mathbf{MesSp}}\to{\mathfrak{L}}$ is contravariant and sends embeddings in ${\mathbf{MesSp}}$ to surjective morphisms in ${\mathfrak{L}}$ and sends surjective maps in ${\mathbf{MesSp}}$ to embeddings in ${\mathfrak{L}}$. Hence, similarly to $h_{top}$, also the measure-theoretic entropy $h_{mes}$ is monotone for factors and restrictions to invariant subspaces, continuous for inverse limits, is invariant under conjugation and inversion, satisfies the Logarithmic Law and the Weak Addition Theorem. In the next remark we briefly discuss the connection between measure entropy and topological entropy.
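First, a quick numerical illustration of the Boltzmann norm (our own toy computation on a four-point uniform probability space, not from the text): two independent two-block partitions each have norm $\log 2$, and their join has norm $\log 4=\log 2+\log 2$, consistent with the subadditivity of $v$ with respect to $\vee$.

```python
import math

# Our own numeric check of the Boltzmann norm on a four-point uniform
# probability space: partitions are lists of blocks, mu is counting/4.

def boltzmann(masses):
    return -sum(p * math.log(p) for p in masses if p > 0)

def join(xi, eta):
    # common refinement xi ∨ eta = {A ∩ B}, discarding empty intersections
    return {frozenset(A & B) for A in xi for B in eta} - {frozenset()}

mu = lambda A: len(A) / 4
xi = [{0, 1}, {2, 3}]
eta = [{0, 2}, {1, 3}]

v_xi = boltzmann([mu(A) for A in xi])               # = log 2
v_eta = boltzmann([mu(A) for A in eta])             # = log 2
v_join = boltzmann([mu(A) for A in join(xi, eta)])  # 4 atoms: = log 4

assert abs(v_xi - math.log(2)) < 1e-12
assert abs(v_join - math.log(4)) < 1e-12
assert v_join <= v_xi + v_eta + 1e-12               # subadditivity of v
```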
\begin{Remark} \begin{itemize} \item[(a)] If $X$ is a compact metric space and $\phi: X \to X$ is a continuous surjective self-map, by Krylov-Bogolioubov Theorem \cite{BK} there exist some $\phi$-invariant Borel probability measures $\mu$ on $X$ (i.e., making $\phi:(X,\mu) \to (X,\mu)$ measure preserving). Denote by $h_\mu$ the measure entropy with respect to the measure $\mu$. The inequality $h_{\mu}(\phi)\leq h_{top}(\phi)$ for every $\mu \in M(X,\phi)$ is due to Goodwyn \cite{Goo}. Moreover the \emph{variational principle} (see \cite[Theorem 8.6]{Wa}) holds true: $$h_{top}(\phi)= \sup \{h_{\mu}(\phi): \mu\ \text{$\phi$-invariant measure on $X$}\}.$$ \item[(b)] In the computation of the topological entropy it is possible to reduce to surjective continuous self-maps of compact spaces. Indeed, for a compact space $X$ and a continuous self-map $\phi:X\to X$, the set $E_\phi(X)=\bigcap_{n\in\mathbb N}\phi^n(X)$ is closed and $\phi$-invariant, the map $\phi\restriction_{E_\phi(X)}:E_\phi(X)\to E_\phi(X)$ is surjective and $h_{top}(\phi)=h_{top}(\phi\restriction_{E_\phi(X)})$ (see \cite{Wa}). \item[(c)] In the case of a compact group $K$ and a continuous surjective endomorphism $\phi:K\to K$, the group $K$ has its unique Haar measure and so $\phi$ is measure preserving as noted by Halmos \cite{Halmos}. In particular both $h_{top}$ and $h_{mes}$ are available for surjective continuous endomorphisms of compact groups and they coincide as proved in the general case by Stoyanov \cite{S}. In other terms, denote by $\mathbf{CGrp}$ the category of all compact groups and continuous homomorphisms, and by $\textbf{CGrp}_e$ the non-full subcategory of $\textbf{CGrp}$, having as morphisms all epimorphisms in $\textbf{CGrp}$. 
So in the following diagram we consider the forgetful functor $V: \textbf{CGrp}_e\to \textbf{Mes}$, while $i$ is the inclusion of $\textbf{CGrp}_e$ in $\textbf{CGrp}$ as a non-full subcategory and $U:\mathbf{CGrp}\to \mathbf{Top}$ is the forgetful functor: \begin{equation*} \xymatrix{ \textbf{CGrp}_e\ar[r]^{i}\ar[d]^V & \textbf{CGrp}\ar[r]^{U}& \textbf{Top} \\ \textbf{Mes} } \end{equation*} For a surjective endomorphism $\phi$ of the compact group $K$, we have then $h_{mes}(V(\phi))=h_{top}(U(\phi))$. \end{itemize} \end{Remark} \subsection{Algebraic entropy} Here we consider the category $\mathbf{Grp}$ of all groups and their homomorphisms and its subcategory $\mathbf{AbGrp}$ of all abelian groups. We construct two functors $\mathfrak{sub}:\mathbf{AbGrp}\to{\mathfrak{L}}$ and $\mathfrak{pet}:\mathbf{Grp}\to{\mathfrak{S}}$ that permits to find from the general scheme the two algebraic entropies $\mathrm{ent}$ and $h_{alg}$. For more details on these entropies see the next section. Let $G$ be an abelian group and let $({\cal F}(G),\cdot)$ be the semilattice consisting of all finite subgroups of $G$. Letting $v(F) = \log|F|$ for every $F \in {\cal F}(G)$, then \begin{center} $({\cal F}(G),\cdot,v)$ is a normed semilattice \end{center} and the norm $v$ is monotone. For every group homomorphism $\phi: G \to H$, \begin{center} the map ${\cal F}(\phi): {\cal F}(G) \to {\cal F}(H)$, defined by $F\mapsto \phi(F)$, is a morphism in ${\mathfrak{L}}$. 
\end{center} Therefore the assignments $G\mapsto {\cal F}(G)$ and $\phi\mapsto {\cal F}(\phi)$ define a covariant functor $$\mathfrak{sub}: {\mathbf{AbGrp}} \to {\mathfrak{L}}.$$ Moreover $$h_{\mathfrak{sub}}=\mathrm{ent}.$$ Since the functor $\mathfrak{sub}$ takes factors in $\mathbf{AbGrp}$ to surjective morphisms in ${\mathfrak{S}}$, embeddings in $\mathbf{AbGrp}$ to embeddings in ${\mathfrak{S}}$, and direct limits in $\mathbf{AbGrp}$ to direct limits in ${\mathfrak{S}}$, we have automatically that the algebraic entropy $\mathrm{ent}$ is monotone for factors and restrictions to invariant subspaces, continuous for direct limits, invariant under conjugation and inversion, satisfies the Logarithmic Law. For a group $G$ let $\mathcal{H}(G)$ be the family of all finite non-empty subsets of $G$. Then $\mathcal{H}(G)$ with the operation induced by the multiplication of $G$ is a monoid with neutral element $\{1\}$. Moreover, letting $v(F) = \log |F|$ for every $F \in \mathcal{H}(G)$ makes $\mathcal{H}(G)$ a normed semigroup. For an abelian group $G$ the monoid $\mathcal{H}(G)$ is arithmetic since for any $F\in \mathcal{H}(G)$ the sum of $n$ summands satisfies $|F + \ldots + F|\leq (n+1)^{|F|}$. Moreover, $(\mathcal{H}(G),\subseteq)$ is an ordered semigroup and the norm $v$ is $s$-monotone. For every group homomorphism $\phi:G \to H$, \begin{center} the map $\mathcal{H}(\phi): \mathcal{H}(G) \to \mathcal{H}(H)$, defined by $F\mapsto \phi(F)$, is a morphism in ${\mathfrak{S}}$. \end{center} Consequently the assignments $G \mapsto (\mathcal{H}(G),v)$ and $\phi\mapsto \mathcal{H}(\phi)$ give a covariant functor $$\mathfrak{pet}:\mathbf{Grp}\to {\mathfrak{S}}.$$ Hence $$h_{\mathfrak{pet}}=h_{alg}.$$ Note that the functor $\mathfrak{sub}$ is a subfunctor of $\mathfrak{pet}: {\mathbf{AbGrp}} \to{\mathfrak{S}}$ as $ {\cal F}(G) \subseteq {\cal H}(G)$ for every abelian group $G$. 
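A classical computation illustrates $h_{alg}$ through the functor $\mathfrak{pet}$; the following Python sketch (our own check, not code from the text) verifies it for the endomorphism $\phi(x)=2x$ of $\mathbb Z$ and the element $F=\{0,1\}\in\mathcal{H}(\mathbb Z)$: here $T_n(\phi,F)=F+\phi(F)+\dots+\phi^{n-1}(F)=\{0,1,\dots,2^n-1\}$, so $v(T_n(\phi,F))=n\log 2$ and hence $h_{alg}(\phi)\geq\log 2$.

```python
import math

# Our own check of a classical computation: in H(Z) (finite subsets of
# (Z, +), norm log|F|) take phi(x) = 2x and F = {0, 1}.  Then
# T_n(phi, F) = F + phi(F) + ... + phi^{n-1}(F) = {0, ..., 2^n - 1},
# so v(T_n(phi, F)) = n log 2 and h_alg(phi) >= log 2.

def add_sets(F, G):
    return {f + g for f in F for g in G}

phi = lambda F: {2 * x for x in F}

F = {0, 1}
T, cur = {0}, set(F)              # {0} is the neutral element of (H(Z), +)
for n in range(1, 11):
    T = add_sets(T, cur)
    cur = phi(cur)
    assert len(T) == 2 ** n
    assert abs(math.log(len(T)) / n - math.log(2)) < 1e-12
```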
As for the algebraic entropy $\mathrm{ent}$, since the functor $\mathfrak{pet}$ takes factors in $\mathbf{Grp}$ to surjective morphisms in ${\mathfrak{S}}$, embeddings in $\mathbf{Grp}$ to embeddings in ${\mathfrak{S}}$, and direct limits in $\mathbf{Grp}$ to direct limits in ${\mathfrak{S}}$, we have automatically that the algebraic entropy $h_{alg}$ is monotone for factors and restrictions to invariant subspaces, continuous for direct limits, invariant under conjugation and inversion, satisfies the Logarithmic Law. \subsection{$h_{top}$ and $h_{alg}$ in locally compact groups}\label{NewSec2} As mentioned above, Bowen introduced topological entropy for uniformly continuous self-maps of metric spaces in \cite{B}. His approach turned out to be especially efficient in the case of locally compact spaces provided with some Borel measure with good invariance properties, in particular for {continuous endomorphisms of locally compact groups provided with their Haar measure}. Later Hood in \cite{hood} extended Bowen's definition to uniformly continuous self-maps of arbitrary uniform spaces and in particular to continuous endomorphisms of (not necessarily metrizable) locally compact groups. On the other hand, Virili \cite{V} extended the notion of algebraic entropy to continuous endomorphisms of locally compact abelian groups, inspired by Bowen's definition of topological entropy (based on the use of Haar measure). As mentioned in \cite{DG-islam}, his definition can be extended to continuous endomorphisms of arbitrary locally compact groups. Our aim here is to show that both entropies can be obtained from our general scheme in the case of measure preserving topological automorphisms of locally compact groups. To this end we recall first the definitions of $h_{top}$ and $h_{alg}$ in locally compact groups. Let $G$ be a locally compact group, let $\mathcal C(G)$ be the family of all compact neighborhoods of $1$ and $\mu$ be a right Haar measure on $G$. 
For a continuous endomorphism $\phi: G \to G$, $U\in\mathcal C(G)$ and a positive integer $n$, the $n$-th cotrajectory $C_n(\phi,U)=U\cap \phi^{-1}(U)\cap\ldots\cap\phi^{-n+1}(U)$ is still in $\mathcal C(G)$. The topological entropy $h_{top}$ is intended to measure the rate of decay of the $n$-th cotrajectory $C_n(\phi,U)$. So let \begin{equation} H_{top}(\phi,U)=\limsup _{n\to \infty} - \frac{\log \mu (C_n(\phi,U))}{n}, \end{equation} which does not depend on the choice of the Haar measure $\mu$. The \emph{topological entropy} of $\phi$ is $$ h_{top}(\phi)=\sup\{H_{top}(\phi,U):U\in\mathcal C(G)\}. $$ If $G$ is discrete, then $\mathcal C(G)$ is the family of all finite subsets of $G$ containing $1$, and $\mu(A) = |A|$ for subsets $A$ of $G$. So $H_{top}(\phi,U)= 0$ for every $U \in \mathcal C(G)$, hence $h_{top}(\phi)=0$. To define the algebraic entropy of $\phi$ with respect to $U\in\mathcal C(G)$ one uses the {$n$-th $\phi$-trajectory} $T_n(\phi,U)=U\cdot \phi(U)\cdot \ldots\cdot \phi^{n-1}(U)$ of $U$, that still belongs to $\mathcal C(G)$. It turns out that the value \begin{equation}\label{**} H_{alg}(\phi,U)=\limsup_{n\to \infty} \frac{\log \mu (T_n(\phi,U))}{n} \end{equation} does not depend on the choice of $\mu$. The \emph{algebraic entropy} of $\phi$ is $$ h_{alg}(\phi)=\sup\{H_{alg}(\phi,U):U\in\mathcal C(G)\}. $$ The term ``algebraic'' is motivated by the fact that the definition of $T_n(\phi,U)$ (unlike $C_n(\phi,U)$) makes use of the group operation. As we saw above \eqref{**} is a limit when $G$ is discrete. Moreover, if $G$ is compact, then $h_{alg}(\phi)=H_{alg}(\phi,G)=0$. In the sequel, $G$ will be a locally compact group. We fix also a measure preserving topological automorphism $\phi: G \to G$. To obtain the entropy $h_{top}(\phi)$ via semigroup entropy fix some $V\in \mathcal C(G)$ with $\mu(V)\leq 1$. Then consider the subset $$ \mathcal C_0(G)=\{U\in \mathcal C(G): U \subseteq V\}. 
$$ Obviously, $\mathcal C_0(G)$ is a monoid with respect to intersection, having as neutral element $V$. To obtain a pseudonorm $v$ on $\mathcal C_0(G)$ let $v(U) = - \log \mu (U)$ for any $U \in \mathcal C_0(G)$. Then $\phi$ defines a semigroup isomorphism $\phi^\#: \mathcal C_0(G)\to \mathcal C_0(G)$ by $\phi^\#(U) = \phi^{-1}(U)$ for any $U\in \mathcal C_0(G)$. It is easy to see that $\phi^\#: \mathcal C_0(G)\to \mathcal C_0(G)$ is an automorphism in ${\mathfrak{S}}^*$ and the semigroup entropy $h_{{\mathfrak{S}}^*}(\phi^\#)$ coincides with $h_{top}(\phi)$ since $H_{top}(\phi,U) \leq H_{top}(\phi,U')$ whenever $U \supseteq U'$. To obtain the entropy $h_{alg}(\phi)$ via semigroup entropy fix some $W\in \mathcal C(G)$ with $\mu(W)\geq 1$. Then consider the subset $$ \mathcal C_1(G)=\{U\in \mathcal C(G): U \supseteq W\} $$ of the set $\mathcal C(G)$. Note that for $U_1, U_2 \in \mathcal C_1(G)$ also $U_1U_2 \in \mathcal C_1(G)$. Thus $\mathcal C_1(G)$ is a semigroup. To define a pseudonorm $v$ on $\mathcal C_1(G)$ let $v(U) = \log \mu (U)$ for any $U \in \mathcal C_1(G)$. Then $\phi$ defines a semigroup isomorphism $\phi_\#: \mathcal C_1(G)\to \mathcal C_1(G)$ by $\phi_\#(U) = \phi(U)$ for any $U\in \mathcal C_1(G)$. It is easy to see that $\phi_\#: \mathcal C_1(G)\to \mathcal C_1(G)$ is a morphism in ${\mathfrak{S}}^*$ and the semigroup entropy $h_{{\mathfrak{S}}^*}(\phi_\#)$ coincides with $h_{alg}(\phi)$, since $ \mathcal C_1(G)$ is cofinal in $ \mathcal C(G)$ and $H_{alg}(\phi,U) \leq H_{alg}(\phi,U')$ whenever $U \subseteq U'$. \begin{Remark} We required above that the automorphism $\phi$ be ``measure preserving''. In this way one rules out many interesting cases of topological automorphisms that are not measure preserving (e.g., all automorphisms of $\mathbb R$ other than $\pm \mathrm{id}_\mathbb R$). This condition is imposed in order to respect the definition of the morphisms in ${\mathfrak{S}}^*$.
If one further relaxes this condition on the morphisms in ${\mathfrak{S}}^*$ (no longer requiring them to be contracting maps with respect to the pseudonorm), then one can obtain a semigroup entropy that covers the topological and the algebraic entropy of arbitrary topological automorphisms of locally compact groups (see \cite{DGV} for more details). \end{Remark} \subsection{Algebraic $i$-entropy} For a ring $R$ we denote by $\mathbf{Mod}_R$ the category of right $R$-modules and $R$-module homomorphisms. We consider here the algebraic $i$-entropy introduced in \cite{SZ}, constructing a functor ${\mathfrak{sub}_i}:\mathbf{Mod}_R\to {\mathfrak{L}}$ that recovers $\mathrm{ent}_i$ from the general scheme. Here $i: \mathbf{Mod}_R \to \mathbb R_+$ is an invariant of $\mathbf{Mod}_R$ (i.e., $i(0)=0$ and $i(M) = i(N)$ whenever $M\cong N$). Consider the following conditions: \begin{itemize} \item[(a)] $i(N_1 + N_2)\leq i(N_1) + i(N_2)$ for all submodules $N_1$, $N_2$ of $M$; \item[(b)] $i(M/N)\leq i(M)$ for every submodule $N$ of $M$; \item[(b$^*$)] $i(N)\leq i(M)$ for every submodule $N$ of $M$. \end{itemize} The invariant $i$ is called \emph{subadditive} if (a) and (b) hold, and it is called \emph{preadditive} if (a) and (b$^*$) hold. For $M\in\mathbf{Mod}_R$ denote by ${\cal L}(M)$ the lattice of all submodules of $M$. The operations are intersection and sum of two submodules, the bottom element is $\{0\}$ and the top element is $M$. Now fix a subadditive invariant $i$ of $\mathbf{Mod}_R$ and for a right $R$-module $M$ let $${\cal F}_i(M)=\{\mbox{submodules $N$ of $M$ with }i(N)< \infty\},$$ which is a subsemilattice of ${\cal L}(M)$ ordered by inclusion. Define a norm on ${\cal F}_i(M)$ setting $$v(H)=i(H)$$ for every $H \in {\cal F}_i(M)$. The norm $v$ is not necessarily monotone (it is monotone if $i$ is both subadditive and preadditive).
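To keep a concrete instance in mind, here is a routine check (ours, not taken from \cite{SZ}) that the invariant $i(M)=\log|M|$, with the logarithm of an infinite cardinal read as $\infty$, is both subadditive and preadditive; for this choice ${\cal F}_i(M)$ is the family of finite submodules of $M$.

```latex
% the invariant i(M) = log|M|  (log of an infinite cardinal read as infinity)
\begin{itemize}
\item[(a)] The map $N_1\times N_2\to N_1+N_2$, $(x,y)\mapsto x+y$, is surjective,
  so $|N_1+N_2|\leq |N_1|\cdot|N_2|$ and hence $i(N_1+N_2)\leq i(N_1)+i(N_2)$;
\item[(b)] $M/N$ is a quotient of $M$, so $|M/N|\leq |M|$ and $i(M/N)\leq i(M)$;
\item[(b$^*$)] $N\subseteq M$ gives $|N|\leq |M|$, so $i(N)\leq i(M)$.
\end{itemize}
```

For this invariant the resulting norm $v$ is monotone, in accordance with the remark above.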
For every homomorphism $\phi: M \to N$ in $\mathbf{Mod}_R$, \begin{center} ${\cal F}_i(\phi): {\cal F}_i(M) \to {\cal F}_i(N)$, defined by ${\cal F}_i(\phi)(H) =\phi(H)$, is a morphism in ${\mathfrak{L}}$. \end{center} Moreover the norm $v$ makes the morphism ${\cal F}_i(\phi)$ contractive by property (b) of the invariant. Therefore, the assignments $M \mapsto {\cal F}_i(M)$ and $\phi\mapsto {\cal F}_i(\phi)$ define a covariant functor $$\mathfrak{sub}_i:\mathbf{Mod} _R\to {\mathfrak{L}}.$$ We can conclude that, for a ring $R$ and a subadditive invariant $i$ of $\mathbf{Mod}_R$, $$h_{\mathfrak{sub}_i}=\mathrm{ent}_i.$$ If $i$ is preadditive, the functor ${\mathfrak{sub}_i}$ sends monomorphisms to embeddings and so $\mathrm{ent}_i$ is monotone under taking submodules. If $i$ is both subadditive and preadditive, then for every $R$-module $M$ the norm of ${\mathfrak{sub}_i}(M)$ is s-monotone, so $\mathrm{ent}_{i}$ also satisfies the Logarithmic Law. In general this entropy is not monotone under taking quotients, but this can be obtained with stronger hypotheses on $i$ and with some restriction on the domain of ${\mathfrak{sub}_i}$. A clear example is given by vector spaces; the algebraic entropy $\mathrm{ent}_{\dim}$ for linear transformations of vector spaces was considered in full detail in \cite{GBS}: \begin{Example} Let $K$ be a field. For every $K$-vector space $V$ let ${\cal F}_d(V)$ be the set of all finite-dimensional subspaces of $V$. Then $({\cal F}_d(V),+)$ is a subsemilattice of $({\cal L}(V),+)$ and $v(H)=\dim H$ defines a monotone norm on ${\cal F}_d(V)$. For every morphism $\phi: V \to W$ in $\mathbf{Mod}_K$ \begin{center} the map ${\cal F}_d(\phi): {\cal F}_d(V) \to {\cal F}_d(W)$, defined by $H\mapsto\phi(H)$, is a morphism in ${\mathfrak{L}}$.
\end{center} Therefore, the assignments $V \mapsto {\cal F}_d(V)$ and $\phi\mapsto {\cal F}_d(\phi)$ define a covariant functor $$\mathfrak{sub}_d:\mathbf{Mod}_K\to {\mathfrak{L}}.$$ Then $$h_{\mathfrak{sub}_d}=\mathrm{ent}_{\dim}.$$ Note that this entropy can be computed as follows. Every flow $\phi: V \to V$ of $\mathbf{Mod}_K$ can be considered as a $K[X]$-module $V_\phi$ letting $X$ act on $V$ as $\phi$. Then $h_{\mathfrak{sub}_d}(\phi)$ coincides with the rank of the $K[X]$-module $V_\phi$. \end{Example} \subsection{Adjoint algebraic entropy} We consider now again the category $\mathbf{Grp}$ of all groups and their homomorphisms, giving a functor $\mathfrak{sub}^\star:\mathbf{Grp}\to {\mathfrak{L}}$ such that the entropy defined using this functor coincides with the adjoint algebraic entropy $\mathrm{ent}^\star$ introduced in \cite{DGS}. For a group $G$ denote by ${\cal C}(G)$ the family of all subgroups of finite index in $G$. It is a subsemilattice of $({\cal L}(G), \cap)$. For $N\in{\cal C}(G)$, let $$v(N) = \log[G:N];$$ then \begin{center} $({\cal C}(G),v)$ is a normed semilattice, \end{center} with neutral element $G$; moreover the norm $v$ is monotone. For every group homomorphism $\phi: G \to H$ \begin{center} the map ${\cal C}(\phi): {\cal C}(H) \to {\cal C}(G)$, defined by $N\mapsto \phi^{-1}(N)$, is a morphism in ${\mathfrak{L}}$. \end{center} Then the assignments $G\mapsto{\cal C}(G)$ and $\phi\mapsto{\cal C}(\phi)$ define a contravariant functor $$\mathfrak{sub}^\star:\mathbf{Grp}\to {\mathfrak{L}}.$$ Moreover $$h_{\mathfrak{sub}^\star}=\mathrm{ent}^\star.$$ There exists also a version of the adjoint algebraic entropy for modules, namely the adjoint algebraic $i$-entropy $\mathrm{ent}_i^\star$ (see \cite{Vi}), which can be treated analogously. \subsection{Topological entropy for totally disconnected compact groups} Let $(G,\tau)$ be a totally disconnected compact group and consider the filter base ${\cal V}_G(1)$ of open subgroups of $G$.
Then \begin{center} $({\cal V}_G(1), \cap)$ is a normed semilattice \end{center} with neutral element $G \in {\cal V}_G(1)$ and norm defined by $v_o(V)=\log [G:V]$ for every $V\in{\cal V}_G(1)$. For a continuous homomorphism $\phi: G\to H$ between compact groups, \begin{center} the map ${\cal V}_H(1)\to {\cal V}_G(1)$, defined by $V \mapsto \phi^{-1}(V)$, is a morphism in ${\mathfrak{L}}$. \end{center} This defines a contravariant functor $$\mathfrak{sub}_o^\star:\mathbf{TdCGrp}\to{\mathfrak{L}},$$ which is a subfunctor of $\mathfrak{sub}^\star$. Then the entropy $h_{\mathfrak{sub}^\star_o}$ coincides with the restriction to $\mathbf{TdCGrp}$ of the topological entropy $h_{top}$. This functor is related also to the functor $\mathfrak{cov}:\mathbf{TdCGrp} \to{\mathfrak{S}}$. Indeed, let $G$ be a totally disconnected compact group. Each $V\in {\cal V}_G(1)$ defines a cover ${\cal U}_V=\{x\cdot V\}_{x\in G}$ of $G$ with $v_o(V)=v({\cal U}_V)$. So the map $V \mapsto {\cal U}_V$ defines an isomorphism between the normed semilattice $\mathfrak{sub}_o^\star(G)={\cal V}_G(1)$ and the subsemigroup $\mathfrak{cov}_s(G)=\{{\cal U}_V:V \in {\cal V}_G(1)\}$ of $\mathfrak{cov}(G)$. \subsection{Bridge Theorem}\label{BTsec} In Definition \ref{BTdef} we have formalized the concept of Bridge Theorem between entropies $h_1:\mathfrak X_1 \to \mathbb R_+$ and $h_2:\mathfrak X_2 \to \mathbb R_+$ via functors $\varepsilon: \mathfrak X_1 \to \mathfrak X_2$. Obviously, the Bridge Theorem with respect to the functor $\varepsilon$ is available when each $h_i$ has the form $h_i= h_{F_i}$ for appropriate functors $F_i: \mathfrak{X}_i \to {\mathfrak{S}}$ ($i= 1,2$) that commute with $\varepsilon$ (i.e., $F_1 = F_2 \varepsilon$), that is $$ h_2(\varepsilon(\phi))= h_1(\phi)\ \mbox{ for all morphisms $\phi$ in }\ \mathfrak X_1. 
$$ Actually, it is sufficient that the $F_i$ commute with $\varepsilon$ ``modulo $h_{\mathfrak{S}}$'' (i.e., $h_{\mathfrak{S}} F_1 = h_{\mathfrak{S}} F_2 \varepsilon$) to obtain this conclusion: \begin{equation}\label{Buzz} \xymatrix@R=6pt@C=37pt { \mathfrak{X}_1\ar[dd]_{\varepsilon}\ar[dr]^{F_1}\ar@/^2pc/[rrd]^{h_{1}} & & \\ & {{\mathfrak{S}}}\ar[r]|-{ {h_{\mathfrak{S}}}}&\mathbb R^+ \\ \mathfrak{X}_2\ar[ur]_{F_2}\ar@/_2pc/[rru]_{h_{2}} & & } \end{equation} In particular the Pontryagin duality functor {$\ \widehat{}: {\mathbf{AbGrp}} \to {\mathbf{CAbGrp}}$} connects the category of abelian groups with that of compact abelian groups, and so it connects the respective entropies $h_{alg}$ and $h_{top}$ by a Bridge Theorem. Restricting to torsion abelian groups and totally disconnected compact groups one obtains: \begin{Theorem}[Weiss Bridge Theorem]\emph{\cite{W}}\label{WBT} Let $K$ be a totally disconnected compact abelian group and $\phi: K\to K$ a continuous endomorphism. Then $h_{top}(\phi) = \mathrm{ent}(\widehat \phi)$. \end{Theorem} \begin{proof} Since totally disconnected compact groups are zero-dimensional, every finite open cover $\mathcal U$ of $K$ admits a refinement consisting of clopen sets in $K$. Moreover, since $K$ admits a local base at $1$ formed by open subgroups, it is possible to find a refinement of $\mathcal U$ of the form $\mathcal U_V$ for some open subgroup $V$. This proves that $\mathfrak{cov}_s(K)$ is cofinal in $\mathfrak{cov}(K)$. Hence, we have $$ h_{top}(\phi)=h_{\mathfrak{S}}(\mathfrak{cov}(\phi))=h_{\mathfrak{S}}(\mathfrak{cov}_s(\phi)). $$ Moreover, we have seen above that $\mathfrak{cov}_s(K)$ is isomorphic to $\mathfrak{sub}^\star_o(K)$, so one can conclude that $$h_{\mathfrak{S}}(\mathfrak{cov}_s(\phi))=h_{\mathfrak{S}}(\mathfrak{sub}^\star_o (\phi)).$$ Now the semilattice isomorphism ${\cal V}_K(1)\to \mathcal F(\widehat K)$ given by $N \mapsto N^\perp$ preserves the norms, so it is an isomorphism in ${\mathfrak{S}}$.
Hence $$h_{\mathfrak{S}}(\mathfrak{sub}^\star_o (\phi))=h_{\mathfrak{S}}(\mathfrak{sub}(\widehat \phi))$$ and consequently $$h_{top}(\phi)= h_{\mathfrak{S}}(\mathfrak{sub}(\widehat \phi))=\mathrm{ent}(\widehat \phi).$$ \end{proof} The proof of the Weiss Bridge Theorem can be summarized in the following diagram. \begin{equation*} \xymatrix@R=6pt@C=37pt { (\widehat K,\widehat\phi)\ar[r]^{\mathfrak{sub}}\ar@/^4.5pc/[rrrddd]^{h_{\mathfrak{sub}}}&(({\cal F}(\widehat K),+);\mathfrak{sub}(\widehat \phi))\ar[dd]_{\widehat{}}\ar[dddrr]|-{h_{\mathfrak{S}}}& &\\ & & &\\ &((\mathfrak{sub}^\star_o(K),\cap);\mathfrak{sub}^\star_o(\phi))\ar[dd]_{\gamma} & & \\ & & & \mathbb R^+ \\ &((\mathfrak{cov}_{s}(K),\vee);\phi)\ar@{^{(}->}[dd]_{\iota} & & \\ & & &\\ (K,\phi)\ar@/^3pc/[uuuuuu]^{\widehat{}\;\;}\ar[r]_{\mathfrak{cov}}\ar@/_4.5pc/[rrruuu]_{h_{\mathfrak{cov}}}&((\mathfrak{cov}(K),\vee);\mathfrak{cov}(\phi))\ar[uuurr]|-{h_{\mathfrak{S}}} & & } \end{equation*} Similar Bridge Theorems hold for other known entropies; they can be proved using analogous diagrams (see \cite{DGV1}). The first one that we recall concerns the algebraic entropy $\mathrm{ent}$ and the adjoint algebraic entropy $\mathrm{ent}^\star$: \begin{Theorem} Let $\phi: G\to G$ be an endomorphism of an abelian group. Then $\mathrm{ent}^\star(\phi) = \mathrm{ent}(\widehat\phi)$. \end{Theorem} The other two Bridge Theorems that we recall here connect respectively the set-theoretic entropy $\mathfrak h$ with the topological entropy $h_{top}$ and the contravariant set-theoretic entropy $\mathfrak h^*$ with the algebraic entropy $h_{alg}$. We first need to recall the notion of generalized shift, which extends the Bernoulli shifts. For a map $\lambda:X\to Y$ between two non-empty sets and a fixed non-trivial group $K$, define $\sigma_\lambda:K^Y \to K^X$ by $\sigma_\lambda(f) = f\circ \lambda $ for $f\in K^Y$.
For $Y = X$, $\lambda$ is a self-map of $X$ and $\sigma_\lambda$ was called the \emph{generalized shift} of $K^X$ (see \cite{AADGH,AZD}). In this case $\bigoplus_X K$ is a $\sigma_\lambda$-invariant subgroup of $K^X$ precisely when $\lambda$ is finite-to-one. We denote $\sigma_\lambda\restriction_{\bigoplus_XK}$ by $\sigma_\lambda^\oplus$. Item (a) in the next theorem was proved in \cite{AZD} (see also \cite[Theorem 7.3.4]{DG-islam}), while item (b) is \cite[Theorem 7.3.3]{DG-islam} (in the abelian case it was obtained in \cite{AADGH}). \begin{Theorem} \emph{\cite{AZD}} Let $K$ be a non-trivial finite group, let $X$ be a set and $\lambda:X\to X$ a self-map. \begin{itemize} \item[(a)]Then $h_{top}(\sigma_\lambda)=\mathfrak h(\lambda)\log|K|$. \item[(b)] If $\lambda$ is finite-to-one, then $h_{alg}(\sigma_\lambda^\oplus)=\mathfrak h^*(\lambda)\log|K|$. \end{itemize} \end{Theorem} In terms of functors, for a fixed non-trivial finite group $K$, let $\mathcal F_K: \mathbf{Set}\to \mathbf{TdCGrp}$ be the functor defined on flows, sending a non-empty set $X$ to $K^X$, $\emptyset$ to $0$, and a self-map $\lambda:X\to X$ to $\sigma_\lambda:K^X\to K^X$ when $X\ne \emptyset$. Then the pair $(\mathfrak h, h_{top})$ satisfies $(BT_{\mathcal F_K})$ with constant $\log |K|$. Analogously, let $\mathcal G_K: \mathbf{Set}_\mathrm{fin}\to \mathbf{Grp}$ be the functor defined on flows sending $X$ to $\bigoplus_X K$ and a finite-to-one self-map $\lambda:X\to X$ to $\sigma_\lambda^\oplus:\bigoplus_X K\to \bigoplus_X K$. Then the pair $(\mathfrak h^*, h_{alg})$ satisfies $(BT_{\mathcal G_K})$ with constant $\log |K|$. \begin{Remark} At the conference held in Porto Cesareo, R. Farnsteiner posed the following question related to the Bridge Theorem. Has $h_{top}$ been studied for non-Hausdorff compact spaces? The question was motivated by the fact that the prime spectrum $\mathrm{Spec}(A)$ of a commutative ring $A$ is usually a non-Hausdorff compact space.
Related to this question and to the entropy $h_\lambda$ defined for endomorphisms $\phi$ of local Noetherian rings $A$ (see \S \ref{NewSec1}), one may ask if there is any relation (e.g., a weak Bridge Theorem) between these two entropies and the functor $\mathrm{Spec}$; more precisely, one can ask whether there is any stable relation between $h_{top}(\mathrm{Spec}(\phi))$ and $h_\lambda(\phi)$. \end{Remark} \section{Algebraic entropy and its specific properties}\label{alg-sec} In this section we give an overview of the basic properties of the algebraic entropy and the adjoint algebraic entropy. We have seen that they fit the general scheme presented in the previous section; on the other hand, they were defined for specific group endomorphisms, and these definitions make it possible to prove specific features, as we briefly describe below. For further details and examples see \cite{DG}, \cite{DGS} and \cite{DG-islam}. \subsection{Definition and basic properties} Let $G$ be a group and $\phi:G\to G$ an endomorphism. For a finite subset $F$ of $G$, and for $n\in\mathbb N_+$, the \emph{$n$-th $\phi$-trajectory} of $F$ is \begin{equation*}\label{T_n} T_n(\phi,F)=F\cdot\phi(F)\cdot\ldots\cdot\phi^{n-1}(F); \end{equation*} moreover let \begin{equation}\label{gamma} {\gamma_{\phi,F}(n)}=|T_n(\phi,F)|. \end{equation} The \emph{algebraic entropy of $\phi$ with respect to $F$} is \begin{equation*}\label{H} H_{alg}(\phi,F)={\lim_{n\to \infty}\frac{\log \gamma_{\phi,F}(n) }{n}}. \end{equation*} This limit exists, as $H_{alg}(\phi,F)=h_{\mathfrak{S}}(\mathcal H(\phi),F)$ and so Theorem \ref{limit} applies (see also \cite{DG-islam} for a direct proof of the existence of this limit and \cite{DG} for the abelian case). The \emph{algebraic entropy} of $\phi:G\to G$ is $$ h_{alg}(\phi)=\sup\{H_{alg}(\phi,F): F\ \text{finite subset of}\ G\}=h_{\mathfrak{S}}(\mathcal H(\phi)). $$ Moreover $$ \mathrm{ent}(\phi)=\sup\{H_{alg}(\phi,F): F\ \text{finite subgroup of}\ G\}.
$$ If $G$ is abelian, then $\mathrm{ent}(\phi)=\mathrm{ent}(\phi\restriction_{t(G)})= h_{alg}(\phi\restriction_{t(G)})$. Moreover, $h_{alg}(\phi) = \mathrm{ent}(\phi)$ if $G$ is locally finite, that is, every finite subset of $G$ generates a finite subgroup; note that every locally finite group is obviously torsion, while the converse holds true under the hypothesis that the group is abelian (but the solution of the Burnside Problem shows that even groups of finite exponent fail to be locally finite). For every abelian group $G$, the identity map has $h_{alg}(\mathrm{id}_G)=0$ (as the normed semigroup $\mathcal H(G)$ is arithmetic, as seen above). Another basic example is given by the endomorphisms of $\mathbb Z$: indeed, if $\phi: \mathbb Z \to \mathbb Z$ is given by $\phi(x) = mx$ for some positive integer $m$, then $h_{alg}(\phi) = \log m$. The fundamental example for the algebraic entropy is the right Bernoulli shift: \begin{Example}\label{shift} (Bernoulli normalization) Let $K$ be a group. \begin{itemize} \item[(a)] The \emph{right Bernoulli shift} $\beta_K:K^{(\mathbb N)}\to K^{(\mathbb N)}$ is defined by $$(x_0,\ldots,x_n,\ldots)\mapsto (1,x_0,\ldots,x_n,\ldots).$$ Then $h_{alg}(\beta_K)=\log|K|$, with the usual convention that $\log|K|=\infty$ when $K$ is infinite. \item[(b)] The \emph{left Bernoulli shift} ${}_K\beta:K^{(\mathbb N)}\to K^{(\mathbb N)}$ is defined by $$(x_0,\ldots,x_n,\ldots)\mapsto (x_1,\ldots,x_{n+1},\ldots).$$ Then $h_{alg}({}_K\beta)=0$, as ${}_K\beta$ is locally nilpotent. \end{itemize} \end{Example} The following basic properties of the algebraic entropy are consequences of the general scheme and were proved directly in \cite{DG-islam}. \begin{fact}\label{properties} \emph{Let $G$ be a group and $\phi:G\to G$ an endomorphism. \begin{itemize} \item[(a)]\emph{[Invariance under conjugation]} If $\phi=\xi^{-1}\psi\xi$, where $\psi:H\to H$ is an endomorphism and $\xi:G\to H$ is an isomorphism, then $h_{alg}(\phi) = h_{alg}(\psi)$.
\item[(b)]\emph{[Monotonicity]} If $H$ is a $\phi$-invariant normal subgroup of the group $G$, and $\overline\phi:G/H\to G/H$ is the endomorphism induced by $\phi$, then $h_{alg}(\phi)\geq \max\{h_{alg}(\phi\restriction_H),h_{alg}(\overline{\phi})\}$. \item[(c)]\emph{[Logarithmic Law]} For every $k\in\mathbb N$ we have $h_{alg}(\phi^k) = k \cdot h_{alg}(\phi)$; if $\phi$ is an automorphism, then $h_{alg}(\phi)=h_{alg}(\phi^{-1})$, so $h_{alg}(\phi^k)=|k|\cdot h_{alg}(\phi)$ for every $k\in\mathbb Z$. \item[(d)]\emph{[Continuity]} If $G$ is a direct limit of $\phi$-invariant subgroups $\{G_i : i \in I\}$, then $h_{alg}(\phi)=\sup_{i\in I}h_{alg}(\phi\restriction_{G_i}).$ \item[(e)]\emph{[Weak Addition Theorem]} If $G=G_1\times G_2$ and $\phi_i:G_i\to G_i$ is an endomorphism for $i=1,2$, then $h_{alg}(\phi_1\times\phi_2)=h_{alg}(\phi_1)+h_{alg}(\phi_2)$. \end{itemize} } \end{fact} As described for the semigroup entropy in the previous section, and as noted in \cite[Remark 5.1.2]{DG-islam}, for group endomorphisms $\phi:G\to G$ it is possible to define also a ``left'' algebraic entropy, letting, for a finite subset $F$ of $G$ and $n\in\mathbb N_+$, $$T_n^\#(\phi,F)=\phi^{n-1}(F)\cdot\ldots\cdot\phi(F)\cdot F,$$ $$H^\#_{alg}(\phi,F)={\lim_{n\to \infty}\frac{\log |T^\#_n(\phi,F)|}{n}}$$ and $$h_{alg}^\#(\phi)=\sup\{H^\#_{alg}(\phi,F): F\ \text{finite subset of}\ G\}.$$ Answering a question posed in \cite[Remark 5.1.2]{DG-islam}, we see now that $$h_{alg}(\phi)=h_{alg}^\#(\phi).$$ Indeed, every finite subset of $G$ is contained in a finite subset $F$ of $G$ such that $1\in F$ and $F={F^{-1}}$; for such $F$ we have $$H_{alg}(\phi,F)=H_{alg}^\#(\phi,F),$$ since, for every $n\in\mathbb N_+$, \begin{equation*}\begin{split} T_n(\phi,F)^{-1}=\phi^{n-1}(F)^{-1}\cdot\ldots\cdot\phi(F)^{-1}\cdot F^{-1}=\\ \phi^{n-1}(F^{-1})\cdot\ldots\cdot\phi(F^{-1})\cdot F^{-1}=T_n^\#(\phi,F) \end{split}\end{equation*} and so $|T_n(\phi,F)|=|T_n(\phi,F)^{-1}|=|T_n^\#(\phi,F)|$.
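To make the trajectory computation concrete, here is a routine worked example (ours, not taken from the cited sources): take $\phi:\mathbb Z\to\mathbb Z$, $\phi(x)=2x$, and the symmetric finite subset $F=\{0,\pm1\}$. Since every integer $k$ with $|k|\leq 2^n-1$ admits a signed binary expansion, the $n$-th trajectory fills a whole interval:

```latex
\begin{align*}
T_n(\phi,F) &= F+2F+\cdots+2^{n-1}F
  = \Bigl\{\sum_{i=0}^{n-1}\varepsilon_i 2^i : \varepsilon_i\in\{0,\pm1\}\Bigr\}
  = \{-(2^n-1),\ldots,2^n-1\},\\
\gamma_{\phi,F}(n) &= 2^{n+1}-1, \qquad
H_{alg}(\phi,F) = \lim_{n\to\infty}\frac{\log\bigl(2^{n+1}-1\bigr)}{n} = \log 2.
\end{align*}
```

Thus $H_{alg}(\phi,F)=\log 2=h_{alg}(\phi)$, in accordance with the formula $h_{alg}(x\mapsto mx)=\log m$ recalled above.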
\subsection{Algebraic Yuzvinski Formula, Addition Theorem and Uni\-que\-ness}\label{ab-sec} We recall now some of the main deep properties of the algebraic entropy in the abelian case. They are not consequences of the general scheme and are proved using the specific features of the algebraic entropy coming from the definition given above. We give here the references to the papers where these results were proved; for a general exposition on algebraic entropy see the survey paper \cite{DG-islam}. The next proposition shows that the study of the algebraic entropy for torsion-free abelian groups can be reduced to the case of divisible ones. It was announced for the first time by Yuzvinski \cite{Y1}; for a proof see \cite{DG}. \begin{Proposition}\label{AA_} Let $G$ be a torsion-free abelian group, $\phi:G\to G$ an endomorphism and denote by $\widetilde\phi$ the (unique) extension of $\phi$ to the divisible hull $D(G)$ of $G$. Then $h_{alg}(\phi)=h_{alg}(\widetilde\phi)$. \end{Proposition} Let $f(t)=a_nt^n+a_{n-1}t^{n-1}+\ldots+a_0\in\mathbb Z[t]$ be a primitive polynomial and let $\{\lambda_i:i=1,\ldots,n\}\subseteq\mathbb C$ be the set of all roots of $f(t)$. The \emph{(logarithmic) Mahler measure} of $f(t)$ is $$m(f(t))= \log|a_n| + \sum_{|\lambda_i|>1}\log |\lambda_i|.$$ The Mahler measure plays an important role in number theory and arithmetic geometry and is involved in the famous Lehmer Problem, asking whether $\inf\{m(f(t)):f(t)\in\mathbb Z[t]\ \text{primitive}, m(f(t))>0\}>0$ (for example see \cite{Ward0} and \cite{Hi}). If $g(t)\in\mathbb Q[t]$ is monic, then there exists a smallest positive integer $s$ such that $sg(t)\in\mathbb Z[t]$; in particular, $sg(t)$ is primitive. The Mahler measure of $g(t)$ is defined as $m(g(t))=m(sg(t))$. Moreover, if $\phi:\mathbb Q^n\to \mathbb Q^n$ is an endomorphism, its characteristic polynomial $p_\phi(t)\in\mathbb Q[t]$ is monic, and the Mahler measure of $\phi$ is $m(\phi)=m(p_\phi(t))$.
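As a quick illustration of these definitions (our own worked example), take the primitive polynomial $f(t)=2t^2-3t-2=2(t-2)\left(t+\tfrac12\right)$ and the monic polynomial $g(t)=t-\tfrac32$:

```latex
\begin{align*}
m(f(t)) &= \log|a_2|+\sum_{|\lambda_i|>1}\log|\lambda_i|
         = \log 2+\log 2=\log 4
        && \bigl(\lambda_1=2,\ \lambda_2=-\tfrac12\bigr),\\
m(g(t)) &= m(2g(t))=m(2t-3)=\log 2+\log\tfrac32=\log 3
        && \bigl(s=2,\ \text{unique root }\tfrac32\bigr).
\end{align*}
```

By the Algebraic Yuzvinski Formula below, $\log 3$ is then the algebraic entropy of multiplication by $\tfrac32$ on $\mathbb Q$.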
A direct proof of formula \eqref{yuzeq} below was recently given in \cite{GV}; it is the algebraic counterpart of the so-called Yuzvinski Formula for the topological entropy \cite{Y1} (see also \cite{LW}). It gives the values of the algebraic entropy of linear transformations of finite-dimensional rational vector spaces in terms of the Mahler measure, and so it connects the algebraic entropy with the Lehmer Problem. \begin{Theorem}[Algebraic Yuzvinski Formula] \label{AYF}\emph{\cite{GV}} Let $n\in\mathbb N_+$ and let $\phi:\mathbb Q^n\to\mathbb Q^n$ be an endomorphism. Then \begin{equation}\label{yuzeq} h_{alg}(\phi)=m(\phi). \end{equation} \end{Theorem} The next property of additivity of the algebraic entropy was first proved for torsion abelian groups in \cite{DGSZ}, while the proof of the general case was given in \cite{DG} applying the Algebraic Yuzvinski Formula. \begin{Theorem}[Addition Theorem]\emph{\cite{DG}}\label{AT} Let $G$ be an abelian group, $\phi:G\to G$ an endomorphism, $H$ a $\phi$-invariant subgroup of $G$ and $\overline\phi:G/H\to G/H$ the endomorphism induced by $\phi$. Then $$h_{alg}(\phi)=h_{alg}(\phi\restriction_H)+ h_{alg}(\overline\phi).$$ \end{Theorem} Moreover, uniqueness is available for the algebraic entropy in the category of all abelian groups. As in the case of the Addition Theorem, the Uniqueness Theorem was proved in general in \cite{DG}, while it was previously proved in \cite{DGSZ} for torsion abelian groups. \begin{Theorem}[Uniqueness Theorem]\label{UT}\emph{\cite{DG}} The algebraic entropy $$h_{alg}:\mathrm{Flow}_\mathbf{AbGrp}\to\mathbb R_+$$ is the unique function such that: \begin{itemize} \item[(a)] $h_{alg}$ is invariant under conjugation; \item[(b)] $h_{alg}$ is continuous on direct limits; \item[(c)] $h_{alg}$ satisfies the Addition Theorem; \item[(d)] for $K$ a finite abelian group, $h_{alg}(\beta_K)=\log|K|$; \item[(e)] $h_{alg}$ satisfies the Algebraic Yuzvinski Formula.
\end{itemize} \end{Theorem} \subsection{The growth of a finitely generated flow in $\mathbf{Grp}$}\label{Growth-sec} In order to measure and classify the growth rate of maps $\mathbb N \to \mathbb N$, one needs the relation $\preceq$ defined as follows. For $\gamma, \gamma': \mathbb N \to \mathbb N$ let $\gamma \preceq \gamma'$ if there exist $n_0,C\in\mathbb N_+$ such that $\gamma(n) \leq \gamma'(Cn)$ for every $n\geq n_0$. Moreover, $\gamma\sim\gamma'$ if $\gamma\preceq\gamma'$ and $\gamma'\preceq\gamma$ (so $\sim$ is an equivalence relation), and $\gamma\prec\gamma'$ if $\gamma\preceq\gamma'$ but $\gamma\not\sim\gamma'$. For example, for every $\alpha, \beta\in\mathbb R_{\geq0}$, $n^\alpha\sim n^\beta$ if and only if $\alpha=\beta$; if $p(t)\in\mathbb Z[t]$ has degree $d\in\mathbb N$, then $p(n)\sim n^d$. On the other hand, $a^n\sim b^n$ for every $a,b\in\mathbb R$ with $a,b>1$, so in particular all exponentials are equivalent with respect to $\sim$. So a map $\gamma: \mathbb N \to \mathbb N$ is called: \begin{itemize} \item[(a)] \emph{polynomial} if $\gamma(n) \preceq n^d$ for some $d\in\mathbb N_+$; \item[(b)] \emph{exponential} if $\gamma(n) \sim 2^n$; \item[(c)] \emph{intermediate} if $\gamma(n)\succ n^d$ for every $d\in\mathbb N_+$ and $\gamma(n)\prec 2^n$. \end{itemize} Let $G$ be a group, $\phi:G\to G$ an endomorphism and $F$ a non-empty finite subset of $G$. Consider the function, already mentioned in \eqref{gamma}, $$ \gamma_{\phi,F}:\mathbb N_+\to\mathbb N_+\ \text{defined by}\ \gamma_{\phi,F}(n)=|T_n(\phi,F)|\ \text{for every}\ n\in\mathbb N_+. $$ Since $$ |F|\leq\gamma_{\phi,F}(n)\leq|F|^n\mbox{ for every }n\in\mathbb N_+, $$ the growth of $\gamma_{\phi,F}$ is always at most exponential; moreover, $H_{alg}(\phi,F)\leq \log |F|$.
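Two extreme cases (our own routine computations) show the range of behaviors allowed by these bounds. For $\phi=\mathrm{id}_{\mathbb Z}$ with $F=\{0,\pm1\}$ the growth is polynomial, while for the right Bernoulli shift $\beta_K$ of a non-trivial finite group $K$, with $F=K\times\{1\}\times\{1\}\times\cdots$, the upper bound $\gamma_{\phi,F}(n)=|F|^n$ is attained:

```latex
\[
T_n(\mathrm{id}_{\mathbb Z},F)=\underbrace{F+\cdots+F}_{n\ \text{summands}}=\{-n,\ldots,n\},
\qquad \gamma_{\mathrm{id}_{\mathbb Z},F}(n)=2n+1\sim n;
\]
\[
T_n(\beta_K,F)=F\cdot\beta_K(F)\cdot\ldots\cdot\beta_K^{n-1}(F)
  =K^{n}\times\{1\}\times\{1\}\times\cdots,
\qquad \gamma_{\beta_K,F}(n)=|K|^{n}\sim 2^n.
\]
```

In the first case $H_{alg}(\mathrm{id}_{\mathbb Z},F)=0$, while in the second $H_{alg}(\beta_K,F)=\log|F|=\log|K|$, in accordance with Example \ref{shift}(a).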
So, following \cite{DG0} and \cite{DG-islam}, we say that $\phi$ has \emph{polynomial} (respectively, \emph{exponential}, \emph{intermediate}) \emph{growth at $F$} if $\gamma_{\phi,F}$ is polynomial (respectively, exponential, intermediate). Before proceeding further, let us make an important point here. All the properties considered above practically concern only the $\phi$-invariant subgroup $G_{\phi,F}$ of $G$ generated by the trajectory $T(\phi, F) = \bigcup_{n\in\mathbb N_+} T_n(\phi, F)$ and the restriction $\phi\restriction_{G_{\phi,F}}$. \begin{Definition} We say that the flow $(G, \phi)$ in $\mathbf{Grp}$ is \emph{finitely generated} if $G = G_{\phi,F}$ for some finite subset $F$ of $G$. \end{Definition} Hence, all the properties listed above concern finitely generated flows in $\mathbf{Grp}$. We conjecture the following, knowing that it holds true when $G$ is abelian or when $\phi=\mathrm{id}_G$: if the flow $(G,\phi)$ is finitely generated, with $G = G_{\phi,F}$ and $G = G_{\phi,F'}$ for finite subsets $F$ and $F'$ of $G$, then $\gamma_{\phi,F}$ and $\gamma_{\phi,F'}$ have the same type of growth. In this case the growth of a finitely generated flow $(G,\phi)$ would not depend on the specific finite set of generators $F$ (so $F$ can always be taken to be symmetric). In particular, one could speak of the growth of a finitely generated flow without any reference to a specific finite set of generators. Nevertheless, one can give in general the following \begin{Definition} Let $(G,\phi)$ be a finitely generated flow in $\mathbf{Grp}$. We say that $(G,\phi)$ has \begin{itemize} \item[(a)] \emph{polynomial growth} if $\gamma_{\phi,F}$ is polynomial for every finite subset $F$ of $G$; \item[(b)] \emph{exponential growth} if there exists a finite subset $F$ of $G$ such that $\gamma_{\phi,F}$ is exponential; \item[(c)] \emph{intermediate growth} otherwise.
\end{itemize} We denote by $\mathrm{Pol}$ and $\mathrm{Exp}$ the classes of finitely generated flows in $\mathbf{Grp}$ of polynomial and exponential growth respectively. Moreover, $\mathcal M=\mathrm{Pol}\cup\mathrm{Exp}$ is the class of finitely generated flows of non-intermediate growth. \end{Definition} This notion of growth generalizes the classical one of growth of a finitely generated group given independently by Schwarzc \cite{Sch} and Milnor \cite{M1}. Indeed, if $G$ is a finitely generated group and $X$ is a finite symmetric set of generators of $G$, then $\gamma_X=\gamma_{\mathrm{id}_G,X}$ is the classical \emph{growth function} of $G$ with respect to $X$. For a connection of the terminology coming from the theory of algebraic entropy and the classical one, note that for $n\in\mathbb N_+$ we have $T_n(\mathrm{id}_G,X)=\{g\in G:\ell_X(g)\leq n\}$, where $\ell_X(g)$ is the length of the shortest word $w$ in the alphabet $X$ such that $w=g$ (see \S \ref{NewSec1} (c)). Since $\ell_X$ is a norm on $G$, $T_n(\mathrm{id}_G,X)$ is the ball of radius $n$ centered at $1$ and $\gamma_X(n)$ is the cardinality of this ball. Milnor \cite{M3} proposed the following problem on the growth of finitely generated groups. \begin{problem}[Milnor Problem]\label{Milnor-pb}{\cite{M3}} Let $G$ be a finitely generated group and $X$ a finite set of generators of $G$. \begin{itemize} \item[(i)] Is the growth function $\gamma_X$ necessarily equivalent either to a power of $n$ or to the exponential function $2^n$? \item[(ii)] In particular, is the {growth exponent} $\delta_G=\limsup_{n\to \infty}\frac{\log\gamma_X(n)}{\log n}$ either a well defined integer or infinity? For which groups is $\delta_G$ finite? \end{itemize} \end{problem} Part (i) of Problem \ref{Milnor-pb} was solved negatively by Grigorchuk in \cite{Gri1,Gri2,Gri3,Gri4}, where he constructed his famous examples of finitely generated groups $\mathbb G$ with intermediate growth. 
For part (ii) Milnor conjectured that $\delta_G$ is finite if and only if $G$ is virtually nilpotent (i.e., $G$ contains a nilpotent finite-index subgroup). The same conjecture was formulated by Wolf \cite{Wolf} (who proved that a nilpotent finitely generated group has polynomial growth) and Bass \cite{Bass}. Gromov \cite{Gro} confirmed Milnor's conjecture: \begin{Theorem}[Gromov Theorem]\label{GT}\emph{\cite{Gro}} A finitely generated group $G$ has polynomial growth if and only if $G$ is virtually nilpotent. \end{Theorem} The following two problems on the growth of finitely generated flows of groups are inspired by Milnor Problem. \begin{problem} Describe the permanence properties of the class $\mathcal M$. \end{problem} Some stability properties of the class $\mathcal M$ are easy to check. For example, stability under taking finite direct products is obviously available, while stability under taking subflows (i.e., invariant subgroups) and factors fails even in the classical case of identical flows. Indeed, Grigorchuk's group $\mathbb G$ is a quotient of a finitely generated free group $F$, which has exponential growth; so $(F,\mathrm{id}_F) \in \mathcal M$, while $(\mathbb G, \mathrm{id}_{\mathbb G})\not \in\mathcal M$. Furthermore, letting $G = \mathbb G \times F$, one has $(G,\mathrm{id}_G) \in \mathcal M$, while $(\mathbb G, \mathrm{id}_{\mathbb G})\not \in\mathcal M$, so $\mathcal M$ is not stable even under taking direct summands. On the other hand, stability under taking powers is available, since $(G,\phi) \in \mathcal M$ if and only if $(G,\phi^n) \in \mathcal M$ for $n\in\mathbb N_+$. \begin{problem}\label{Ques4} \begin{itemize} \item[(i)] Describe the finitely generated groups $G$ such that $(G,\phi)\in\mathcal M$ for every endomorphism $\phi:G\to G$. \item[(ii)] Does there exist a finitely generated group $G$ such that $(G,\mathrm{id}_G)\in\mathcal M$ but $(G,\phi)\not\in\mathcal M$ for some endomorphism $\phi:G\to G$?
\end{itemize} \end{problem} In item (i) of the above problem we ask to describe all finitely generated groups $G$ of non-intermediate growth such that $(G,\phi)$ still has non-intermediate growth for every endomorphism $\phi:G\to G$. On the other hand, in item (ii) we ask to find a finitely generated group $G$ of non-intermediate growth that admits an endomorphism $\phi:G\to G$ of intermediate growth. The basic relation between the growth and the algebraic entropy is given by Proposition \ref{exp} below. For a finitely generated group $G$, an endomorphism $\phi$ of $G$ and a pair $X$, $X'$ of finite sets of generators of $G$, one has $\gamma_{\phi,X}\sim\gamma_{\phi,X'}$. Nevertheless, $H_{alg}(\phi,X)\neq H_{alg}(\phi,X')$ may occur; in this case $(G,\phi)$ necessarily has exponential growth. We give two examples to this effect: \begin{Example}\label{exaAugust} \begin{itemize} \item[(a)] {\cite{DG-islam}} Let $G$ be the free group with two generators $a$ and $b$; then $X=\{a^{\pm 1},b^{\pm 1}\}$ gives $H_{alg}(\mathrm{id}_G,X)=\log 3$, while for $X'=\{a^{\pm 1},b^{\pm 1},(ab)^{\pm 1}\}$ we have $H_{alg}(\mathrm{id}_G,X')=\log 4$. \item[(b)] Let $G = \mathbb Z$ and let $\phi: \mathbb Z \to \mathbb Z$ be defined by $\phi(x) = mx$ for every $x\in \mathbb Z$, with $m>3$. Let also $X= \{0,\pm 1\}$ and $X'= \{0,\pm 1, \ldots, \pm m\}$. Then $H_{alg}(\phi,X) \leq \log |X| =\log 3$, while $H_{alg}(\phi,X')= h_{alg}(\phi)= \log m$. \end{itemize} \end{Example} \begin{Proposition}\label{exp}\emph{\cite{DG-islam}} Let $(G,\phi)$ be a finitely generated flow in {\bf Grp}. \begin{itemize} \item[(a)] Then $h_{alg}(\phi)>0$ if and only if $(G,\phi)$ has exponential growth. \item[(b)] If $(G,\phi)$ has polynomial growth, then $h_{alg}(\phi)=0$. \end{itemize} \end{Proposition} In general the converse implication in item (b) fails, even for the identity. Indeed, if $(G,\phi)$ has intermediate growth, then $h_{alg}(\phi)=0$ by item (a).
So for Grigorchuk's group $\mathbb G$, the flow $(\mathbb G,\mathrm{id}_\mathbb G)$ has intermediate growth yet $h_{alg}(\mathrm{id}_\mathbb G)=0$. This motivates the following \begin{Definition}\label{MPara} Let $\mathcal G$ be a class of groups and $\Phi$ be a class of morphisms. We say that the pair $(\mathcal G, \Phi)$ satisfies the Milnor Paradigm (briefly, MP) if no finitely generated flow $(G,\phi)$ with $G\in\mathcal G$ and $\phi\in\Phi$ can have intermediate growth. \end{Definition} In terms of the class $\mathcal M$, $$(\mathcal G, \Phi)\ \text{satisfies MP if and only if }\ (G,\phi)\in \mathcal M\ (\forall G\in\mathcal G)(\forall \phi\in\Phi) .$$ Equivalently, $(\mathcal G, \Phi)$ satisfies MP when $h_{alg}(\phi)=0$ always implies that $(G,\phi)$ has polynomial growth for finitely generated flows $(G,\phi)$ with $G\in\mathcal G$ and $\phi\in\Phi$. In these terms Milnor Problem \ref{Milnor-pb} (i) is asking whether the pair $(\mathbf{Grp},\mathcal{I}d)$ satisfies MP, where $\mathcal I d$ is the class of all identical endomorphisms. So we give the following general open problem. \begin{problem}\label{PB0} \begin{itemize} \item[(i)] Find pairs $(\mathcal G,\Phi)$ satisfying MP. \item[(ii)] For a given $\Phi$ determine the properties of the largest class $\mathcal G_\Phi$ such that $(\mathcal G_\Phi, \Phi)$ satisfies MP. \item[(iii)] For a given $\mathcal G$ determine the properties of the largest class $\Phi_{\mathcal G}$ such that $(\mathcal G, \Phi_{\mathcal G})$ satisfies MP. \item[(iv)] Study the Galois correspondence between classes of groups $\mathcal G$ and classes of endomorphisms $\Phi$ determined by MP. \end{itemize} \end{problem} According to the definitions, the class $\mathcal G_{\mathcal{I} d}$ coincides with the class of finitely generated groups of non-intermediate growth.
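The two values in Example \ref{exaAugust}(b) can also be checked numerically. Recall that (with the standard trajectory definition of the algebraic entropy, written additively for $G=\mathbb Z$) $H_{alg}(\phi,X)$ is the exponential growth rate of the $n$-th $\phi$-trajectories $T_n(\phi,X)=X+\phi(X)+\dots+\phi^{n-1}(X)$, i.e., $H_{alg}(\phi,X)=\lim_n \log|T_n(\phi,X)|/n$. The following brute-force sketch (ours; the function name is of our choosing) computes $|T_n|$ for $m=4$:

```python
# Numerical check (ours) of Example (b): for phi(x) = m*x on Z, written additively,
# the n-th phi-trajectory is T_n = X + phi(X) + ... + phi^{n-1}(X), and
# H_alg(phi, X) is the limit of log|T_n| / n.

def trajectory_sizes(m, X, N):
    """Return the sizes |T_1|, ..., |T_N| for phi(x) = m*x and generating set X."""
    sizes, T = [], {0}
    for n in range(N):
        T = {t + (m ** n) * x for t in T for x in X}   # add the layer phi^n(X) = m^n * X
        sizes.append(len(T))
    return sizes

m = 4
print(trajectory_sizes(m, range(-1, 2), 3))       # [3, 9, 27]: rate log 3
print(trajectory_sizes(m, range(-m, m + 1), 3))   # [9, 41, 169]: rate log m = log 4
```

For $X=\{0,\pm1\}$ the sums $\sum_i d_i m^i$ with digits $d_i\in\{0,\pm1\}$ are pairwise distinct when $m>2$, so $|T_n|=3^n$ exactly; for $X'=\{0,\pm1,\dots,\pm m\}$ the trajectory fills an interval of integers of length comparable to $m^n$, giving the rate $\log m$.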
The following result solves Problem \ref{PB0} (iii) for $\mathcal G=\mathbf{AbGrp}$, showing that $\Phi_\mathbf{AbGrp}$ coincides with the class $\mathcal E$ of all endomorphisms. \begin{Theorem}[Dichotomy Theorem]\emph{\cite{DG0}}\label{DT} There exist no finitely generated flows of intermediate growth in $\mathbf{AbGrp}$. \end{Theorem} Actually, one can extend the validity of this theorem to nilpotent groups. This leaves open the following particular case of Problem \ref{PB0}. We shall see in Theorem \ref{osin} that the answer to (i) is positive when $\phi=\mathrm{id}_G$. \begin{question}\label{Ques1} Let $(G,\phi)$ be a finitely generated flow in $\mathbf{Grp}$. \begin{itemize} \item[(i)] If $G$ is solvable, does $(G,\phi)\in\mathcal M$? \item[(ii)] If $G$ is a free group, does $(G,\phi)\in\mathcal M$? \end{itemize} \end{question} We state now explicitly a particular case of Problem \ref{PB0}, inspired by the fact that the right Bernoulli shifts have no non-trivial quasi-periodic points and they have uniform exponential growth (see Example \ref{bern}). In \cite{DG0} group endomorphisms $\phi:G\to G$ without non-trivial quasi-periodic points are called algebraically ergodic for their connection (in the abelian case and through Pontryagin duality) with ergodic transformations of compact groups. \begin{question}\label{Ques2} Let $\Phi_0$ be the class of endomorphisms without non-trivial quasi-periodic points. Is it true that the pair $(\mathbf{Grp},\Phi_0)$ satisfies MP? \end{question} For a finitely generated group $G$, the \emph{uniform exponential growth rate} of $G$ is defined as $$ \lambda(G)=\inf\{H_{alg}(\mathrm{id}_G,X):X\ \text{finite set of generators of}\ G\} $$ (see for instance \cite{dlH-ue}). Moreover, $G$ has \emph{uniform exponential growth} if $\lambda(G)>0$. Gromov \cite{GroLP} asked whether every finitely generated group of exponential growth is also of uniform exponential growth. 
This problem was recently solved by Wilson \cite{Wilson} in the negative. Since the algebraic entropy of a finitely generated flow $(G,\phi)$ in $\mathbf{Grp}$ can be computed as $$ h_{alg}(\phi)=\sup\{H_{alg}(\phi,F): F\ \text{finite subset of $G$ such that $G=G_{\phi,F}$}\}, $$ one can give the following counterpart of the uniform exponential growth rate for flows: \begin{Definition} For a finitely generated flow $(G,\phi)$ in $\mathbf{Grp}$, let $$ \lambda(G,\phi)=\inf\{H_{alg}(\phi,F): F\ \text{finite subset of $G$ such that $G=G_{\phi,F}$} \}. $$ The flow $(G,\phi)$ is said to have \emph{uniformly exponential growth} if $\lambda(G,\phi)>0$. Let $\mathrm{Exp}_\mathrm u$ be the subclass of $\mathrm{Exp}$ of all finitely generated flows in $\mathbf{Grp}$ of uniform exponential growth. \end{Definition} Clearly $\lambda(G,\phi)\leq h_{alg}(\phi)$, so one has the obvious implication \begin{equation}\label{GP} h_{alg}(\phi)=0\ \Rightarrow\ \lambda(G,\phi)=0. \end{equation} To formulate the counterpart of Gromov's problem on uniformly exponential growth it is worth isolating also the class $\mathcal W$ of the finitely generated flows in $\mathbf{Grp}$ of exponential but not uniformly exponential growth (i.e., $\mathcal W=\mathrm{Exp}\setminus \mathrm{Exp}_\mathrm u$). Then $\mathcal W$ is the class of finitely generated flows $(G,\phi)$ in $\mathbf{Grp}$ for which \eqref{GP} cannot be inverted, namely $h_{alg}(\phi)> 0=\lambda(G,\phi)$. We start by stating the following problem. \begin{problem} Describe the permanence properties of the classes $\mathrm{Exp}_\mathrm u$ and $\mathcal W$. \end{problem} It is easy to check that $\mathrm{Exp}_\mathrm u$ and $\mathcal W$ are stable under taking direct products. On the other hand, stability of $\mathrm{Exp}_\mathrm u$ under taking subflows (i.e., invariant subgroups) and factors fails even in the classical case of identical flows.
Indeed, Wilson's group $\mathbb W$ is a quotient of a finitely generated free group $F$, which has uniform exponential growth (see \cite{dlH-ue}); so $(F,\mathrm{id}_F)\in \mathrm{Exp}_\mathrm u$, while $(\mathbb W, \mathrm{id}_{\mathbb W})\in\mathcal W$. Furthermore, letting $G = \mathbb W \times F$, one has $(G,\mathrm{id}_G)\in \mathrm{Exp}_\mathrm u$, while $(\mathbb W, \mathrm{id}_{\mathbb W})\in\mathcal W$, so $\mathrm{Exp}_\mathrm u$ is not stable even under taking direct summands. In the line of MP, introduced in Definition \ref{MPara}, we can formulate also the following \begin{Definition}\label{GPara} Let $\mathcal G$ be a class of groups and $\Phi$ be a class of morphisms. We say that the pair $(\mathcal G, \Phi)$ satisfies the Gromov Paradigm (briefly, GP) if every finitely generated flow $(G,\phi)$ with $G\in\mathcal G$ and $\phi\in\Phi$ of exponential growth has uniform exponential growth. \end{Definition} In terms of the class $\mathcal W$, $$ (\mathcal G, \Phi)\ \text{satisfies GP if and only if }\ (G,\phi)\not\in \mathcal W\ (\forall G\in\mathcal G)(\forall \phi\in\Phi) . $$ In these terms, Gromov's problem on uniformly exponential growth asks whether the pair $(\mathbf{Grp}, \mathcal I d)$ satisfies GP. In analogy to the general Problem \ref{PB0}, one can consider the following obvious counterpart for GP: \begin{problem}\label{PB1} \begin{itemize} \item[(i)] Find pairs $(\mathcal G, \Phi)$ satisfying GP. \item[(ii)] For a given $\Phi$ determine the properties of the largest class $\mathcal G_\Phi$ such that $(\mathcal G_\Phi,\Phi)$ satisfies GP. \item[(iii)] For a given $\mathcal G$ determine the properties of the largest class $\Phi_{\mathcal G}$ such that $(\mathcal G,\Phi_\mathcal G)$ satisfies GP. \item[(iv)] Study the Galois correspondence between classes of groups $\mathcal G$ and classes of endomorphisms $\Phi$ determined by GP.
\end{itemize} \end{problem} We see now in item (a) of the next example a particular class of finitely generated flows for which $\lambda$ coincides with $h_{alg}$ and they are both positive, so in particular these flows are all in $\mathrm{Exp}_\mathrm u$. In item (b) we leave an open question related to Question \ref{Ques2}. \begin{Example}\label{bern} \begin{itemize} \item[(a)] For a finite group $K$, consider the flow $(\bigoplus_\mathbb N K,\beta_K)$. We have seen in Example \ref{shift} that $h_{alg}(\beta_K)=\log|K|$. In this case we have $\lambda(\bigoplus_\mathbb N K,\beta_K)=\log|K|$, since a subset $F$ of $\bigoplus_\mathbb N K$ generating the flow $(\bigoplus_\mathbb N K,\beta_K)$ must contain the first copy $K_0$ of $K$ in $\bigoplus_\mathbb N K$, and $H_{alg}(\beta_K,K_0)=\log|K|$. \item[(b)] Is it true that $\lambda(G,\phi) = h_{alg}(\phi) > 0$ for every finitely generated flow $(G,\phi)$ in $\mathbf{Grp}$ such that $\phi \in \Phi_0$? In other terms, we are asking whether all finitely generated flows $(G,\phi)$ in $\mathbf{Grp}$ with $\phi\in\Phi_0$ have uniform exponential growth (i.e., are contained in $\mathrm{Exp}_\mathrm u$). \end{itemize} \end{Example} One can also consider the pairs $(\mathcal G, \Phi)$ satisfying the conjunction MP \& GP. For any finitely generated flow $(G,\phi)$ in $\mathbf{Grp}$ one has \begin{equation}\label{osin-eq} (G,\phi)\ \text{has polynomial growth}\ \ \buildrel{(1)}\over\Longrightarrow\ h_{alg}(\phi)=0\ \ \buildrel{(2)}\over\Longrightarrow\ \lambda(G,\phi)=0. \end{equation} The converse implication of (1) (respectively, (2)) holds for all $(G,\phi)$ with $G\in\mathcal G$ and $\phi\in\Phi$ precisely when the pair $(\mathcal G, \Phi)$ satisfies MP (respectively, GP). 
Therefore, the pair $(\mathcal G, \Phi)$ satisfies the conjunction MP \& GP precisely when the three conditions in \eqref{osin-eq} are all equivalent (i.e., $\lambda(G,\phi)=0 \Rightarrow (G,\phi)\in \mathrm{Pol}$) for all finitely generated flows $(G,\phi)$ with $G\in\mathcal G$ and $\phi\in\Phi$. A large class of groups $\mathcal G$ such that $(\mathcal G, \mathcal I d)$ satisfies MP \& GP was found by Osin \cite{O}, who proved that a finitely generated solvable group $G$ of zero uniform exponential growth is virtually nilpotent, and recently this result was generalized in \cite{O1} to elementary amenable groups. Together with Gromov Theorem and Proposition \ref{exp}, this gives immediately the following \begin{Theorem}\label{osin} Let $G$ be a finitely generated elementary amenable group. The following conditions are equivalent: \begin{itemize} \item[(a)] $h_{alg}(\mathrm{id}_G)=0$; \item[(b)] $\lambda(G)=0$; \item[(c)] $G$ is virtually nilpotent; \item[(d)] $G$ has polynomial growth. \end{itemize} \end{Theorem} This theorem shows that the pair $\mathcal G=\{\mbox{elementary amenable groups}\}$ and $\Phi =\mathcal{I} d$ satisfies simultaneously MP and GP. In other words, it proves that the three conditions in \eqref{osin-eq} are all equivalent when $G$ is an elementary amenable finitely generated group and $\phi=\mathrm{id}_G$. \subsection{Adjoint algebraic entropy}\label{aent-sec} We recall here the definition of the adjoint algebraic entropy $\mathrm{ent}^\star$ and we state some of its specific features not deducible from the general scheme, so beyond the ``package'' of general properties coming from the equality $\mathrm{ent}^\star=h_{\mathfrak{sub}^\star}$ such as Invariance under conjugation and inversion, Logarithmic Law, Monotonicity for factors (these properties were proved in \cite{DG-islam} in the general case and previously in \cite{DGS} in the abelian case applying the definition).
In analogy to the algebraic entropy $\mathrm{ent}$, in \cite{DGS} the adjoint algebraic entropy of endomorphisms of abelian groups $G$ was introduced ``replacing'' the family $\mathcal F(G)$ of all finite subgroups of $G$ with the family $\mathcal C(G)$ of all finite-index subgroups of $G$. The same definition was extended in \cite{DG-islam} to the more general setting of endomorphisms of arbitrary groups as follows. Let $G$ be a group and $N\in \mathcal C(G)$. For an endomorphism $\phi:G\to G$ and $n\in\mathbb N_+$, the \emph{$n$-th $\phi$-cotrajectory of $N$} is $$C_n(\phi,N)=N\cap\phi^{-1}(N)\cap\ldots\cap\phi^{-n+1}(N).$$ The \emph{adjoint algebraic entropy of $\phi$ with respect to $N$} is $$ H^\star(\phi,N)={\lim_{n\to \infty}\frac{\log[G:C_n(\phi,N)]}{n}}. $$ This limit exists as $H^\star(\phi,N)=h_{\mathfrak{S}}(\mathcal C(\phi),N)$ and so Theorem \ref{limit} applies. The \emph{adjoint algebraic entropy of $\phi$} is $$\mathrm{ent}^\star(\phi)=\sup\{H^\star(\phi,N):N\in\mathcal C(G)\}.$$ The values of the adjoint algebraic entropy of the Bernoulli shifts were calculated in \cite[Proposition 6.1]{DGS} applying \cite[Corollary 6.5]{G0} and the Pontryagin duality; a direct computation can be found in \cite{G}. So, in contrast with what occurs for the algebraic entropy, we have: \begin{Example}[Bernoulli shifts]\label{beta*} For $K$ a non-trivial group, $$\mathrm{ent}^\star(\beta_K)=\mathrm{ent}^\star({}_K\beta)=\infty.$$ \end{Example} As proved in \cite{DGS}, the adjoint algebraic entropy satisfies the Weak Addition Theorem, while the Monotonicity for invariant subgroups fails even for torsion abelian groups; in particular, the Addition Theorem fails in general. On the other hand, the Addition Theorem holds for bounded abelian groups: \begin{Theorem}[Addition Theorem]\label{AT*} Let $G$ be a bounded abelian group, $\phi:G\to G$ an endomorphism, $H$ a $\phi$-invariant subgroup of $G$ and $\overline\phi:G/H\to G/H$ the endomorphism induced by $\phi$.
Then $$\mathrm{ent}^\star(\phi)=\mathrm{ent}^\star(\phi\restriction_H)+\mathrm{ent}^\star(\overline\phi).$$ \end{Theorem} The following is one of the main results on the adjoint algebraic entropy proved in \cite{DGS}. It shows that the adjoint algebraic entropy takes values only in $\{0,\infty\}$, while clearly the algebraic entropy may take also finite positive values. \begin{Theorem}[Dichotomy Theorem]\label{dichotomy}\emph{\cite{DGS}} Let $G$ be an abelian group and $\phi:G\to G$ an endomorphism. Then \begin{center} either $\mathrm{ent}^\star(\phi)=0$ or $\mathrm{ent}^\star(\phi)=\infty$. \end{center} \end{Theorem} Applying the Dichotomy Theorem and the Bridge Theorem (stated in the previous section) to the compact dual group $K$ of $G$ one gets that for a continuous endomorphism $\psi$ of a compact abelian group $K$ either $\mathrm{ent} (\psi)=0$ or $\mathrm{ent}(\psi)=\infty$. In other words: \begin{Corollary} If $K$ is a compact abelian group, then every endomorphism $\psi:K\to K$ with $0 < \mathrm{ent} (\psi) < \infty$ is discontinuous. \end{Corollary} \end{document}
\begin{document} \begin{abstract} In recent work, Cuntz, Deninger and Laca have studied the Toeplitz type C*-algebra associated to the affine monoid of algebraic integers in a number field, under a time evolution determined by the absolute norm. The KMS equilibrium states of their system are parametrized by traces on the C*-algebras of the semidirect products $J_\gamma \rtimes \ok^*$ resulting from the multiplicative action of the units $\ok^*$ on integral ideals $J_\gamma$ representing each ideal class $\gamma \in \Cl_K$. At each fixed inverse temperature $\beta >2$, the extremal equilibrium states correspond to extremal traces of $C^*(J_\gamma \rtimes \ok^*)$. Here we undertake the study of these traces using the transposed action of $\ok^*$ on the duals $\hat{J}_\gamma$ of the ideals and the recent characterization of traces on transformation group C*-algebras due to Neshveyev. We show that the extremal traces of $C^*(J_\gamma \rtimes \ok^*)$ are parametrized by pairs consisting of an ergodic invariant measure for the action of $\ok^*$ on $\hat{J}_\gamma$ together with a character of the isotropy subgroup associated to the support of this measure. For every class $\gamma$, the dual group $\hat{J}_\gamma$ is a $d$-torus on which $\ok^*$ acts by linear toral automorphisms. Hence, the problem of classifying all extremal traces is a generalized version of Furstenberg's celebrated $\times 2$, $\times 3$ conjecture. We organize the results for various number fields in terms of ideal class group, degree, and unit rank, and we point out along the way the trivial, the intractable, and the conjecturally classifiable cases. At the topological level, it is possible to characterize the number fields for which infinite $\ok^*$-invariant sets are dense in $\hat{J}_\gamma$, thanks to a theorem of Berend; as an application we give a description of the primitive ideal space of $C^*(J_\gamma \rtimes \ok^*)$ for those number fields.
\end{abstract} \maketitle \section{Introduction} Let $K$ be an algebraic number field and let $\OO_{\! K}$ denote its ring of integers. The associated multiplicative monoid $\OO_{\! K}^\times := \OO_{\! K} \setminus \{0\}$ of nonzero integers acts by injective endomorphisms on the additive group of $\OO_{\! K}$ and gives rise to the semi-direct product $\ok\rtimes \ok^\times$, the affine monoid (or `$b+ax$ monoid') of algebraic integers in $K$. Let $\{\xi_{(x,w)}: (x,w) \in \ok\rtimes \ok^\times\}$ be the standard orthonormal basis of the Hilbert space $\ell^2(\ok\rtimes \ok^\times)$. The left regular representation $L$ of $ \ok\rtimes \ok^\times$ by isometries on $\ell^2(\ok\rtimes \ok^\times)$ is determined by $L_{(b,a)} \xi_{(x,w)} = \xi_{(b+ax,aw)}$. In \cite{CDL}, Cuntz, Deninger and Laca studied the Toeplitz-like C*-algebra $\mathfrak{T} [\OO_{\! K}] := C^*(L_{(b,a)}: (b,a) \in \ok\rtimes \ok^\times)$ generated by this representation and analyzed the equilibrium states of the natural time evolution $\sigma$ on $\mathfrak{T} [\OO_{\! K}]$ determined by the absolute norm $N_a := | \OO_{\! K}/(a)|$ via \[ \sigma _t (L_{(b,a)}) = N_a^{it} L_{(b,a)} \qquad a\in \OO_{\! K}^\times, \ \ t\in \mathbb R. \] One of the main results of \cite{CDL} is a characterization of the simplex of KMS equilibrium states of this dynamical system at each inverse temperature $\beta \in (0,\infty]$. Here we will be interested in the low-temperature range of that classification. To describe the result briefly, let $\ok^*$ be the group of units, that is, the elements of $\OO_{\! K}^\times$ whose inverses are also integers, and recall that by a celebrated theorem of Dirichlet, $\ok^* \cong W_K \times \mathbb Z^{r+s-1}$, where $W_K$ (the group of roots of unity in $\ok^*$) is finite, $r$ is the number of real embeddings of $K$, and $s$ is equal to half the number of complex embeddings of $K$.
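To fix ideas, here is a quick computational illustration (ours) of Dirichlet's theorem for the real quadratic field $K=\mathbb Q(\sqrt 2)$, where $r=2$, $s=0$, so $\ok^*\cong\{\pm1\}\times\mathbb Z$: the element $u=1+\sqrt 2$ has field norm $-1$, hence is a unit, and its powers are pairwise distinct units generating the free part.

```python
# Illustration (ours): units of O_K = Z[sqrt(2)] for K = Q(sqrt(2)), where r = 2, s = 0,
# so Dirichlet's theorem gives O_K* isomorphic to {+-1} x Z.
# We represent a + b*sqrt(2) as the integer pair (a, b).

def mul(x, y):
    """Multiply (a + b*sqrt(2)) * (c + d*sqrt(2))."""
    (a, b), (c, d) = x, y
    return (a * c + 2 * b * d, a * d + b * c)

def norm(x):
    """Field norm N(a + b*sqrt(2)) = a^2 - 2*b^2; the units are the elements of norm +-1."""
    a, b = x
    return a * a - 2 * b * b

u = (1, 1)                      # the unit u = 1 + sqrt(2)
assert norm(u) == -1            # norm -1, so u is invertible in O_K

# Powers of u are pairwise distinct units: the free rank of O_K* is r + s - 1 = 1.
p, powers = (1, 0), []
for _ in range(5):
    p = mul(p, u)
    powers.append(p)
assert all(abs(norm(q)) == 1 for q in powers) and len(set(powers)) == 5
```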
Let $\Cl_K $ be the ideal class group of $K$, which, by definition, is the quotient of the group of all fractional ideals in $K$ modulo the principal ones, and is a finite abelian group. For each ideal class $\gamma \in \Cl_K$ let $J_\gamma \in \gamma$ be an integral ideal representing $\gamma$. By \cite[Theorem 7.3]{CDL}, for each $\beta > 2$ the KMS$_\beta$ states of $C^*(\ok\rtimes \ok^\times)$ are parametrized by the tracial states of the direct sum of group C*-algebras $\bigoplus_{\gamma\in\Cl_K} C^*(J_\gamma\rtimes \ok^*)$, where the units act by multiplication on each ideal viewed as an additive group. It is intriguing that exactly the same direct sum of group C*-algebras also plays a role in the computation of the $K$-groups of the semigroup C*-algebras of algebraic integers in the work of Cuntz, Echterhoff and Li, see e.g. \cite[Theorem 8.2.1]{CEL}. Considering as well that the group of units and the ideals representing different ideal classes are a measure of the failure of unique factorization into primes in $\OO_{\! K}$, we feel it is of interest to investigate the tracial states of the C*-algebras $C^*(J_\gamma\rtimes \ok^*)$ that arise as a natural parametrization of KMS equilibrium states of $C^*(\ok\rtimes \ok^\times)$. This work is organized as follows. In Section \ref{FromKMS} we review the phase transition from \cite{CDL} and apply a theorem of Neshveyev's to show in \thmref{thm:nesh} that the extremal KMS states arise from ergodic invariant probability measures and characters of their isotropy subgroups for the actions $\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \hat J_\gamma$ of units on the duals of integral ideals. We begin Section \ref{unitaction} by showing that for imaginary quadratic fields, the orbit space of the action of units is a compact Hausdorff space that parametrizes the ergodic invariant probability measures. 
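As a first concrete instance (our illustration), for $K=\mathbb Q(i)$ one has $\OO_{\! K}=\mathbb Z[i]$ and $\ok^*=W_K=\{\pm1,\pm i\}$; in the integral basis $\{1,i\}$, multiplication by $i$ acts on $\mathbb Z^2$ by the matrix $M=\bigl(\begin{smallmatrix}0&-1\\1&0\end{smallmatrix}\bigr)$, which has order $4$, so every orbit of the dual action on $\widehat{\OO}_{\! K}\cong\mathbb T^2$ has at most $4$ points:

```python
# Illustration (ours): K = Q(i), O_K = Z[i], O_K* = W_K = {1, i, -1, -i}.
# In the basis {1, i}, multiplication by i sends (a, b) to (-b, a), i.e. it acts
# by the order-4 matrix M = [[0, -1], [1, 0]] in GL_2(Z).
from fractions import Fraction as F

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

M = [[0, -1], [1, 0]]
M2 = mat_mul(M, M)
assert mat_mul(M2, M2) == [[1, 0], [0, 1]]   # M^4 = identity: the unit group is finite

# The dual action on T^2 = R^2/Z^2 is by the transpose of M, reduced mod 1;
# since W_K is finite, every orbit is finite (here of size at most 4).
def dual_act(t):
    x, y = t
    return (y % 1, (-x) % 1)

orbit, t = set(), (F(1, 3), F(1, 5))
for _ in range(8):
    orbit.add(t)
    t = dual_act(t)
assert len(orbit) == 4
```

This finiteness is exactly what makes the orbit space $W_K\backslash\hat J$ a compact Hausdorff space in the imaginary quadratic case treated below.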
All other number fields have infinite groups of units leading to `bad quotients' for which noncommutative geometry provides convenient tools of analysis. Units act by toral automorphisms and so the classification of equilibrium states is intrinsically related to the higher-dimensional, higher-rank version of the question, first asked by H. Furstenberg, of whether Lebesgue measure is the only nonatomic ergodic invariant measure for the pair of transformations $\times 2$ and $\times 3$ on $\mathbb R/\mathbb Z$. Once in this framework, it is evident from work of Sigmund \cite{Sig} and of Marcus \cite{Mar} on partially hyperbolic toral automorphisms and from the properties of the Poulsen simplex \cite{LOS}, that for fields whose unit rank is $1$, which include real quadratic fields, there is an abundance of ergodic measures, \proref{poulsen}, and hence of extremal equilibrium states, see also \cite{katz}. We also show in this section that there is solidarity among integral ideals with respect to the ergodicity properties of the actions of units, \proref{oneidealsuffices}. In Section \ref{berendsection}, we look at the topological version of the problem and we identify the number fields for which \cite[Theorem 2.1]{B} can be used to give a complete description of the invariant closed sets. In \thmref{conjecturalclassification} we summarize the consequences, for extremal equilibrium at low temperature, of the current knowledge on the generalized Furstenberg conjecture. For fields of unit rank at least $2$ that are not complex multiplication fields, i.e. that have no proper subfields of the same unit rank, we show that if there is an extremal KMS state that does not arise from a finite orbit or from Lebesgue measure, then it must arise from a zero-entropy, nonatomic ergodic invariant measure; it is not known whether such a measure exists. 
For complex multiplication fields of unit rank at least $2$, on the other hand, it is known that there are other measures, arising from invariant subtori. As a byproduct, we also provide in \proref{ZWclaim} a proof of an interesting fact stated in \cite{ZW}, namely that the units acting on the algebraic integers are generic among toral automorphism groups that have Berend's ID property. We conclude our analysis in Section \ref{prim} by computing the topology of the quasi-orbit space of the action $\ok^*\mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}$ for number fields satisfying Berend's conditions. As an application we also obtain an explicit description of the primitive ideal space of the C*-algebra $C^*(\OO_{\! K} \rtimes \ok^*)$, \thmref{primhomeom}. For the most part, Sections \ref{unitaction} and \ref{berendsection} do not depend on operator algebra considerations other than for the motivation and the application, which are discussed in Sections \ref{FromKMS} and \ref{prim}. \noindent{\sl Acknowledgments:} This research started as an Undergraduate Student Research Award project and the authors acknowledge the support from the Natural Sciences and Engineering Research Council of Canada. We would like to thank Martha {\L}\k{a}cka for pointing us to \cite{LOS}, and we also especially thank Anthony Quas for bringing Z. Wang's work \cite{ZW} to our attention, and for many helpful comments, especially those leading to Lemmas \ref{partition} and \ref{Anthony'sLemma1}. \section{From KMS states to invariant measures and isotropy}\label{FromKMS} Our approach to describing the tracial states of the C*-algebras $\bigoplus_{\gamma\in\Cl_K} C^*(J_\gamma\rtimes \ok^*)$ is shaped by the following three observations. First, the tracial states of a group C*-algebra form a Choquet simplex \cite{thoma}, so it suffices to focus our attention on the {\em extremal traces}.
Second, there is a canonical isomorphism $C^*(J\rtimes \ok^*) \cong C^*(J)\rtimes \ok^* $, which we may combine with the Gelfand transform for $C^*(J)$, thus obtaining an isomorphism of $C^*(J\rtimes \ok^*)$ to the transformation group C*-algebra $C(\hat{J})\rtimes \ok^*$, associated to the transposed action of $\ok^*$ on the continuous complex-valued functions on the compact dual group $\hat{J}$. Specifically, the action of $\ok^*$ on $\hat{J}$ is determined by \begin{equation}\label{actiononjhat} (u \cdot \chi )(j):= \chi( u j), \qquad u \in \ok^*, \ \ \chi \in \hat{J}, \ \ j\in J, \end{equation} or by $\langle j, u \cdot \chi \rangle = \langle u j, \chi\rangle$, if we use $\langle \ , \ \rangle$ to denote the duality pairing of $J$ and $\hat J$. Third, this puts the problem of describing the tracial states squarely in the context of Neshveyev's characterization of traces on crossed products, so our task is to identify and describe the relevant ingredients of this characterization. In brief terms, when \cite[Corollary 2.4]{nes} is interpreted in the present situation, it says that for each integral ideal $J$, the extremal traces on $C(\hat{J})\rtimes \ok^*$ are parametrized by triples $(H,\chi,\mu)$ in which $H$ is a subgroup of $\ok^*$, $\chi$ is a character of $H$, and $\mu$ is an ergodic $\ok^*$-invariant measure on $\hat{J}$ such that the set of points in $\hat{J}$ whose isotropy subgroups for the action of $\ok^*$ are equal to $H$ has full $\mu$ measure. Recall that, by definition, an $\ok^*$-invariant probability measure $\mu$ on $\widehat{J}$ is \emph{ergodic invariant} for the action of $\ok^*$ if $\mu(A) \in\{0,1\}$ for every $\ok^*$-invariant Borel set $A\subset \hat{J} $. Our first simplification is that the action of $\ok^*$ on $\hat J$ automatically has $\mu$-almost everywhere constant isotropy with respect to each ergodic invariant probability measure $\mu$.
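To make the transposed action \eqref{actiononjhat} concrete (our illustration), take $J=\OO_{\! K}=\mathbb Z[\sqrt2]$ and $u=1+\sqrt2$: in the basis $\{1,\sqrt2\}$, multiplication by $u$ is $M=\bigl(\begin{smallmatrix}1&2\\1&1\end{smallmatrix}\bigr)\in GL_2(\mathbb Z)$, and \eqref{actiononjhat} becomes $t\mapsto M^{T}t \bmod \mathbb Z^2$ on $\hat J\cong\mathbb T^2$. Rational points of the torus then have finite forward orbits, each supporting an atomic ergodic invariant measure:

```python
# Illustration (ours): J = O_K = Z[sqrt(2)], u = 1 + sqrt(2).  In the basis {1, sqrt(2)},
# multiplication by u is M = [[1, 2], [1, 1]] (u*(a + b*sqrt(2)) = (a+2b) + (a+b)*sqrt(2)),
# and the transposed action (u.chi)(j) = chi(u j) is t -> M^T t (mod Z^2) on the 2-torus.
from fractions import Fraction as F

M = [[1, 2], [1, 1]]
assert M[0][0] * M[1][1] - M[0][1] * M[1][0] == -1   # det M = -1, so M lies in GL_2(Z)

def dual_act(t):
    """One step of the dual action: t -> M^T t mod 1, with M^T = [[1, 1], [2, 1]]."""
    x, y = t
    return ((x + y) % 1, (2 * x + y) % 1)

# The rational point (0, 1/2) has a forward orbit of size 2 under u; the uniform
# measure on such a finite orbit is an atomic ergodic invariant measure.
orbit, t = set(), (F(0), F(1, 2))
for _ in range(6):
    orbit.add(t)
    t = dual_act(t)
assert orbit == {(F(0), F(1, 2)), (F(1, 2), F(1, 2))}
```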
\begin{lemma}\label{automaticisotropy} Let $K$ be an algebraic number field with ring of integers $\OO_{\! K}$ and group of units $\ok^*$ and let $J$ be a nonzero ideal in $\OO_{\! K}$. Suppose $\mu$ is an ergodic $\ok^*$-invariant probability measure on $\hat J$. Then there exists a unique subgroup $H_\mu$ of $\ok^*$ such that the isotropy group $(\ok^*)_\chi:= \{u \in \ok^*: u\cdot \chi = \chi\}$ is equal to $H_\mu$ for $\mu$-a.a. characters $\chi\in \hat J$. \end{lemma} \begin{proof} For each subgroup $H \leq \ok^*$, let $M_H := \{\chi \in \hat J \mid (\ok^*)_\chi = H\}$ be the set of characters of $J$ with isotropy equal to $H$. Since the isotropy is constant on orbits, each $M_H$ is $\ok^*$-invariant, and clearly the $M_H$ are mutually disjoint. By Dirichlet's unit theorem $\ok^* \cong W_K \times \mathbb Z^{r+s-1}$ with $W_K$ finite, and $r$ and $2s$ the number of real and complex embeddings of $K$, respectively. Thus every subgroup of $\ok^*$ is generated by at most $|W_K| + (r+s-1)$ generators, and hence $\ok^*$ has only countably many subgroups. Thus $\{M_H: H \leq \ok^*\}$ is a countable partition of $\hat J$ into subsets of constant isotropy. We claim that each $M_H$ is a Borel measurable set in $\hat J$. To see this, observe: \begin{align*} M_H =&\{ \chi \in \hat J: u\cdot \chi = \chi \text{ for all }u \in H\text{ and } u\cdot \chi \neq \chi \text{ for all }u \in \ok^*\setminus H\}\\ =& \Big(\bigcap\limits_{u \in H} \{\chi \in \hat J \mid \chi^{-1}(u\cdot\chi)=1\} \Big)\bigcap \Big( \bigcap\limits_{u \in \ok^*\setminus H} \{\chi \in \hat J \mid \chi^{-1}(u\cdot \chi)\ne 1\}\Big) \end{align*} because $u\cdot \chi = \chi$ iff $\chi^{-1}(u\cdot\chi)=1$. Since the map $\chi \mapsto\chi^{-1}(u\cdot \chi)$ is continuous on $\hat J$, the sets in the first intersection are closed and those in the second one are open. Since $\ok^*$ is countable, both intersections are countable, so $M_H$ is Borel measurable, as desired.
For every Borel measure $\mu$ on $\hat J$, we have \[ \sum\limits_{H \leq \ok^*} \mu (M_H) = \mu\Big(\bigcup\limits_{H \leq \ok^*} M_H \Big) = 1, \] so at least one $M_H$ has positive measure. If $\mu$ is ergodic $\ok^*$-invariant, then there exists a unique ${H_\mu \leq \ok^*}$ such that $\mu(M_{H_\mu}) = 1$ and thus $H_\mu$ is the (constant) isotropy group of $\mu$-a.a. points $\chi \in \hat J$. \end{proof} Since each ergodic invariant measure determines an isotropy subgroup, the characterization of extremal traces from \cite[Corollary 2.4]{nes} simplifies as follows. \begin{theorem}\label{thm:nesh} Let $K$ be an algebraic number field with ring of integers $\OO_{\! K}$ and group of units $\ok^*$ and let $J$ be a nonzero ideal in $\OO_{\! K}$. Denote the standard generating unitaries of $C^*(J\rtimes \ok^*)$ by $\delta_j$ for $j\in J$ and $\nu_u$ for $u \in \ok^*$. Then for each extremal trace $\tau$ on $C^*(J\rtimes \ok^*)$ there exists a unique probability measure $\mu_\tau$ on $\hat J$ such that \begin{equation} \label{mufromtau} \int_{\hat J} \<j, x\rangle d\mu_\tau(x) = \tau(\delta_j) \quad \text{ for } j \in J. \end{equation} The probability measure $\mu_\tau $ is ergodic $\ok^*$-invariant, and if we denote by $H_{\mu_\tau}$ its associated isotropy subgroup from \lemref{automaticisotropy}, then the function $\chi_\tau$ defined by $\chi_\tau(h):= \tau(\nu_h)$ for $h \in H_{\mu_\tau}$ is a character on $H_{\mu_\tau}$. Furthermore, the map $\tau \mapsto (\mu_\tau,\chi_\tau)$ is a bijection of the set of extremal traces of $C^*(J\rtimes \ok^*)$ onto the set of pairs $(\mu, \chi)$ consisting of an ergodic $\ok^*$-invariant probability measure $\mu$ on $\hat J$ and a character $\chi \in \widehat H_\mu$.
The inverse map $(\mu,\chi) \mapsto \tau_{(\mu,\chi)}$ is determined by \begin{equation} \label{muchi-parameters} \tau_{(\mu,\chi)}(\delta_j \nu_u) = \begin{cases}\displaystyle\chi(u)\int_{\hat J} \<j, x\>d\mu(x) &\text{ if $u \in H_\mu$}\\ 0 &\text{ otherwise,}\end{cases} \end{equation} for $j\in J$ and $u\in \ok^*$. \end{theorem} \begin{proof} Recall that equation \eqref{actiononjhat} gives the continuous action of $\ok^*$ by automorphisms of the compact abelian group $\hat J$ obtained on transposing the multiplicative action of $\ok^*$ on $J$. There is a corresponding action $\alpha$ of $\ok^*$ by automorphisms of the C*-algebra $C(\hat J)$ of continuous functions on $\hat{J}$; it is given by $\alpha_u(f) (\chi) = f(u^{-1} \cdot \chi)$. The characterization of traces \cite[Corollary 2.4]{nes} then applies to the crossed product $C(\hat J) \rtimes_\alpha \ok^*$ as follows. For a given extremal tracial state $\tau$ of $C^*(J\rtimes \ok^*)$ there is a probability measure $\mu_\tau$ on $\hat J$ that arises, via the Riesz representation theorem, from the restriction of $\tau$ to $C^*(J) \cong C(\hat J)$ and is characterized by its Fourier coefficients in equation \eqref{mufromtau}. By \lemref{automaticisotropy}, there is a subset of $\hat J$ of full $\mu_\tau$ measure on which the isotropy subgroup is automatically constant, and is denoted by $H_{\mu_\tau}$. The unitary elements $\nu_u$ generate a copy of $C^*(\ok^*)$ inside $C(\hat J) \rtimes_\alpha \ok^*$ and the restriction of $\tau$ to these generators determines a character $\chi_\tau$ of $H_{\mu_\tau}$ given by $\chi_\tau(u) := \tau(\nu_u)$. See the proof of \cite[Corollary 2.4]{nes} for more details. By \lemref{automaticisotropy}, the condition of almost constant isotropy is automatically satisfied for every ergodic invariant measure on $\hat J$, hence every ergodic invariant measure arises as $\mu_\tau$ for some extremal trace $\tau$. 
The parameter space for extremal tracial states is thus the set of all pairs $(\mu,\chi)$ consisting of an ergodic $\ok^*$-invariant probability measure $\mu$ on $\hat J$ and a character $\chi$ of the isotropy subgroup $H_\mu$ of $\mu$. Formula \eqref{muchi-parameters} is a particular case of the formula in \cite[Corollary 2.4]{nes} with $f$ equal to the character function $f(\cdot) = \<j,\cdot\rangle$ on $\hat J$ associated to $j\in J$. Since for a fixed $u \in \ok^*$ the right hand side of \eqref{muchi-parameters} is a continuous linear functional of the integrand and the character functions span a dense subalgebra, this particular case is enough to imply \begin{equation} \tau_{(\mu,\chi)}(f\nu_u) = \begin{cases}\displaystyle\chi(u)\int_{\hat J} f(x)d\mu(x) &\text{ if $u \in H_\mu$}\\ 0 &\text{ otherwise,}\end{cases} \end{equation} for every $f\in C(\hat J)$. \end{proof} \section{The action of units on integral ideals}\label{unitaction} Combining \cite[Theorem 7.3]{CDL} with \thmref{thm:nesh} above, we see that for $\beta> 2$, the extremal KMS$_\beta$ equilibrium states of the system $(\mathfrak{T} [\OO_{\! K}], \sigma)$ are indexed by pairs $(\mu,\kappa)$ consisting of an ergodic invariant probability measure $\mu$ and a character $\kappa$ of its isotropy subgroup relative to the action of the unit group $\ok^*$ on a representative of each ideal class. If the field $K$ is imaginary quadratic, that is, if $r=0$ and $s=1$, then the group of units is finite, consisting exclusively of roots of unity. In this case, things are easy enough to describe because the space of $\ok^*$-orbits in $\hat{J}$ is a compact Hausdorff topological space. \begin{proposition} \label{imaginaryquadratic}Suppose $K$ is an imaginary quadratic number field, let $J \subset \OO_{\! K}$ be an integral ideal and write $W_K$ for the group of units. 
Then the orbit space $W_K \backslash \hat J $ is a compact Hausdorff space and the closed invariant sets in $\hat J$ are indexed by the closed sets in $W_K \backslash \hat J $. Moreover, the ergodic invariant probability measures on $\hat J$ are the equiprobability measures on the orbits and correspond to unit point masses on $W_K\backslash \hat J$. \end{proposition} \begin{proof} Since $W_K$ is finite, distinct orbits are separated by disjoint invariant open sets, so the quotient space $W_K \backslash \hat J $ is a compact Hausdorff space. Since $\hat J$ is compact, the quotient map $q: \hat J \to W_K \backslash \hat J $ given by $q(\chi ) := W_K \cdot \chi$ is a closed map by the closed map lemma, and so invariant closed sets in $\hat J$ correspond to closed sets in the quotient. For each probability measure $\mu$ on $\hat J$, there is a probability measure $\tilde\mu$ on $W_K \backslash \hat J$ defined by \[\tilde\mu(E) := \mu(q^{-1}(E))\quad \text{ for each measurable } E\subseteq W_K \backslash \hat J.\] This assignment maps the set of $W_K$-invariant probability measures on $\hat J$ onto the set of all probability measures on $W_K \backslash \hat J $. Ergodic invariant measures correspond to unit point masses on $W_K\backslash \hat J$, and their $W_K$-invariant lifts are equiprobability measures on single orbits in $\hat J$. \end{proof} As a result we obtain the following characterization of extremal KMS equilibrium states. \begin{corollary} Suppose $K$ is an imaginary quadratic algebraic number field and let $J_\gamma$ be an integral ideal representing the ideal class $\gamma\in \Cl_K$. For $\beta>2$, the extremal KMS$_\beta$ states of the system $(\mathfrak T[\OO_{\! K}], \sigma^N)$ are parametrized by the triples $(\gamma, W_K\cdot \chi, \kappa)$, where $\gamma\in \Cl_K$, $\chi$ is a point in $\hat J_\gamma$ with orbit $W_K\cdot \chi$, and $\kappa$ is a character of the isotropy subgroup of $\chi$.
\end{corollary} Before we discuss invariant measures and isotropy for fields with infinite group of units, we need to revisit a few general facts about the multiplicative action of units on the algebraic integers and, more generally, on the integral ideals. The concise discussion in \cite{ZW} is particularly convenient for our purposes. As is customary, we let $d = [K:\mathbb Q]$ be the {\em degree} of $K$ over $\mathbb Q$. The number $r$ of real embeddings and the number $2s$ of complex embeddings satisfy $r+2s=d$. We also let $n = r+s -1$ be the {\em unit rank} of $K$, namely, the free abelian rank of $\ok^*$ according to Dirichlet's unit theorem. We shall denote the real embeddings of $K$ by $\sigma_j:K \to \mathbb R$ for $j = 1, 2, \cdots, r$ and the conjugate pairs of complex embeddings of $K$ by $\sigma_{r +j}, \sigma_{r+s+j} : K \to \mathbb C$ for $ j = 1, \cdots, s$. Thus, there is an isomorphism \[ \sigma: K \otimes_\mathbb Q \mathbb R \to \mathbb R^r \times \mathbb C^s \] such that \[ \sigma (k\otimes x) = (\sigma_1( k) x, \sigma_2(k) x, \cdots, \sigma_r(k) x;\, \sigma_{r+1}(k) x, \cdots, \sigma_{r+s}(k) x ). \] The ring of integers $\OO_{\! K}$ is a free $\mathbb Z$-module of rank $d$, and thus $\OO_{\! K}\otimes_\mathbb Z \mathbb R \cong \mathbb R^d \cong \mathbb R^r \oplus \mathbb C^s$. We temporarily fix an integral basis for $\OO_{\! K}$, which fixes an isomorphism $\theta: \OO_{\! K} \to \mathbb Z^d$. Then, at the level of $\mathbb Z^d$, the action of each $u \in \ok^*$ is implemented as left multiplication by a matrix $A_u \in GL_d(\mathbb Z)$. Moreover, once this basis has been fixed, the usual duality pairing $\langle \mathbb Z^d, \mathbb R^d/\mathbb Z^d \rangle $ given by $\langle m, t \rangle = \exp(2\pi i \, m \cdot t) $, with $m\in \mathbb Z^d$, $t\in \mathbb R^d$ and $m\cdot t = \sum_{j=1}^d m_j t_j$, gives an isomorphism of $\mathbb R^d/\mathbb Z^d $ to $\widehat{\OO}_{\! K}$, in which the character $\chi_t\in \widehat{\OO}_{\!
K}$ corresponding to $t\in \mathbb R^d/\mathbb Z^d$ is given by $\chi_t(x) = \exp(2\pi i \, \theta(x) \cdot t) $ for $x\in \OO_{\! K}$. Thus, the action of a unit $u\in \ok^*$ is \[(u\cdot \chi_t)(x) = \chi_t(u\cdot x) = \exp(2\pi i \, A_u \theta(x) \cdot t) = \exp(2\pi i \, \theta(x) \cdot A_u^T t).\] This implies that the action $\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}$ is implemented, at the level of $\mathbb R^d/\mathbb Z^d$, by the representation $\rho: \ok^* \to GL_d(\mathbb Z)$ defined by $\rho(u) = A_u^{T}$, cf. \cite[Theorem 0.15]{Wal}. Similar considerations apply to the action of $\ok^*$ on $\hat J$ for each integral ideal $J \subset \OO_{\! K}$, giving a representation $\rho_J: \ok^* \to GL_d(\mathbb Z)$. For ease of reference we state the following fact about this matrix realization $\rho_J$ of the action of $\ok^*$ on $\hat{J}$. \begin{proposition}\label{diagonalization} The collection of matrices $\{\rho(u) : u \in \ok^*\}$ is simultaneously diagonalizable (over $\mathbb C$), and for each $u\in \ok^*$ the eigenvalue list of $\rho(u)$ is the list of its archimedean embeddings $\sigma_k(u)$, $k = 1, 2, \ldots, r+2s$. \end{proposition} See e.g. the discussion in \cite[Section 2.1]{LW}, and \cite[Section 2.1]{ZW} for the details. Multiplication of complex numbers in each complex embedding is regarded as the action of $2\times 2$ matrices on $\mathbb R+i\mathbb R \cong \mathbb R^2$, and the $2\times 2$ blocks corresponding to the complex embeddings simultaneously diagonalize over $\mathbb C^d$. The self duality of $\mathbb R^r\oplus \mathbb C^s $ can be chosen to be compatible with the isomorphism mentioned right after (2.1) in \cite{LW} and with multiplication by units. See also \cite[Ch.~7]{KlausS}.
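To make \proref{diagonalization} concrete, consider the real quadratic field $K=\mathbb Q(\sqrt{2})$ with integral basis $\{1,\sqrt{2}\}$ and fundamental unit $u = 1+\sqrt{2}$ (this example is our own choice and is not used elsewhere in the argument). Since $u\cdot 1 = 1+\sqrt{2}$ and $u\cdot\sqrt{2} = 2+\sqrt{2}$, the matrix of multiplication by $u$ is $A_u = \left(\begin{smallmatrix} 1 & 2\\ 1 & 1\end{smallmatrix}\right)$, and the proposition predicts that its eigenvalues are the two real embeddings $1\pm\sqrt{2}$ of $u$. The following sketch checks this numerically:

```python
import math

# Multiplication by u = 1 + sqrt(2) on O_K = Z[sqrt(2)], in the integral
# basis {1, sqrt(2)}:
#   u * 1       = 1 + 1*sqrt(2)  -> first column  (1, 1)
#   u * sqrt(2) = 2 + 1*sqrt(2)  -> second column (2, 1)
A = [[1, 2],
     [1, 1]]

# Eigenvalues of the 2x2 integer matrix from its characteristic polynomial
# x^2 - tr(A) x + det(A).
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = tr * tr - 4 * det
eigenvalues = sorted([(tr - math.sqrt(disc)) / 2, (tr + math.sqrt(disc)) / 2])

# The eigenvalue list is the list of archimedean embeddings of u,
# here sigma_1(u) = 1 - sqrt(2) and sigma_2(u) = 1 + sqrt(2).
embeddings = sorted([1 - math.sqrt(2), 1 + math.sqrt(2)])
print(eigenvalues, embeddings, det)
```

Note that $\det A_u = N_{K/\mathbb Q}(u) = -1$, so one eigenvalue lies outside the unit circle and one inside: the hyperbolicity that recurs throughout this section.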
When the number field $K$ is not imaginary quadratic, then $\ok^*$ is infinite and so the analysis of orbits and invariant measures is much more subtle; for instance, most orbits are infinite, some are dense, and the orbit space does not have a Hausdorff topology. We summarize for convenience of reference the known basic general properties in the next proposition. \begin{proposition}\label{orbitsandisotropy} Let $K$ be a number field with $\operatorname{rank}(\ok^*) \geq 1$, and let $J$ be an ideal in $\OO_{\! K}$. Then normalized Haar measure on $\hat J$ is ergodic $\ok^*$-invariant, and for each $\chi \in \hat J$, \begin{enumerate} \item the orbit $\ok^*\cdot \chi$ is finite if and only if $\chi$ corresponds to a point with rational coordinates in the identification $\hat J \cong \mathbb R^d /\mathbb Z^d$; in this case the corresponding isotropy subgroup is a full-rank subgroup of $\ok^*$; \item the orbit $\ok^*\cdot \chi$ is infinite if and only if $\chi $ corresponds to a point with at least one irrational coordinate in $\mathbb R^d /\mathbb Z^d$; \item the characters $\chi$ corresponding to points $(w_1,w_2,\ldots,w_d) \in \mathbb R^d$ such that the numbers $1, w_1, w_2, \ldots w_d$ are rationally independent have trivial isotropy. \end{enumerate} \end{proposition} \begin{proof} By \proref{diagonalization}, for each $u\in \ok^*$, the eigenvalues of the matrix $\rho(u)$ encoding the action of $u$ at the level of $\mathbb R^d/\mathbb Z^d$ are precisely the various embeddings of $u$ in the archimedean completions of $K$. Since $\operatorname{rank}(\ok^*) \geq 1$, there exists a non-torsion element $u \in \ok^*$, whose eigenvalues are not roots of unity. Hence normalized Haar measure is ergodic for the action of $\{\rho(u): u\in\ok^*\}$ by \cite[Corollary 1.10.1]{Wal} and the first assertion now follows from \cite[Theorem 5.11]{Wal}. The isotropy is a full rank subgroup of $\ok^*$ because $|\ok^*/(\ok^*)_x| = |\ok^*\cdot x| < \infty$. 
Let $w = (w_1, w_2, \cdots, w_d)$ be a point in $\mathbb R^d/\mathbb Z^d$ such that $1, w_1, \ldots, w_d$ are rationally independent. Suppose $w$ is a fixed point for the matrix $\rho(u) \in GL_d(\mathbb Z)$ acting on $\mathbb R^d/\mathbb Z^d$. Then $\rho(u) w = w$ (mod $\mathbb Z^d$) and hence $(\rho(u) - I)w \in \mathbb Z^d$, i.e. \[[(\rho(u)-I)w]_i = \sum\limits_{j=1}^d (\rho(u)-I)_{ij}w_j \in \mathbb Z\] for all $1 \le i \le d$. Since $(\rho(u)-I)_{ij} \in \mathbb Z$ for all $i,j$, the rational independence of $1, w_1, \ldots, w_d$ implies that $\rho(u) = I$, so that multiplication by $u$ is the identity and $u=1$, as desired. \end{proof} We see next that for the number fields with unit rank 1 there are many more ergodic invariant probability measures on $\widehat{\OO}_{\! K}$ than just Haar measure and measures supported on finite orbits. In fact, a smooth parametrization of these measures and of the corresponding KMS equilibrium states of $(\mathfrak T[\OO_{\! K}], \sigma)$ seems unattainable. \begin{proposition}\label{poulsen} Suppose the number field $K$ has unit rank equal to $1$, namely, $K$ is real quadratic, mixed cubic, or complex quartic. Then the simplex of $\ok^*$-invariant probability measures on $\widehat{\OO}_{\! K}$ is isomorphic to the Poulsen simplex \cite{LOS}; in particular, its extreme points, the ergodic invariant measures, form a dense subset. \end{proposition} \begin{proof} The fundamental unit gives a partially hyperbolic toral automorphism of $\widehat{\OO}_{\! K}$, for which Haar measure is ergodic invariant. By \cite{Mar,Sig}, the invariant probability measures of such an automorphism that are supported on finite orbits are dense in the space of all invariant probability measures. This remains true when we include the torsion elements of $\ok^*$. Since these equiprobabilities supported on finite orbits are obviously ergodic invariant and hence extremal among invariant measures, it follows from \cite[Theorem 2.3]{LOS} that the simplex of invariant probability measures on $\widehat{\OO}_{\! K}$ is isomorphic to the Poulsen simplex.
\end{proof} For fields with unit rank at least $2$, whether normalized Haar measure and equiprobabilities supported on finite orbits are the only ergodic $\ok^*$-invariant probability measures is a higher-dimensional version of the celebrated Furstenberg conjecture, according to which Lebesgue measure is the only non-atomic probability measure on $\mathbb T = \mathbb R/\mathbb Z$ that is jointly ergodic invariant for the transformations $\times 2$ and $\times 3$ on $\mathbb T$. As stated, this remains open; however, Rudolph and Johnson proved that if $p$ and $q$ are multiplicatively independent positive integers, then the only probability measure on $\mathbb R/\mathbb Z$ that is ergodic invariant for $\times p$ and $ \times q$ and has non-zero entropy is indeed Lebesgue measure \cite{Rud,Joh}. Number fields always give rise to automorphisms of tori of dimension at least $2$, so, strictly speaking, the problem in which we are interested does not contain Furstenberg's original formulation as a particular case. Nevertheless, the higher-dimensional problem is also interesting and open as stated in general, and there is significant recent activity on it and on closely related problems \cite{KS,KK,KKS}. In particular, see \cite{EL1} for a summary of the history and also a positive entropy result for higher dimensional tori along the lines of the Rudolph--Johnson theorem. We show next that the toral automorphism groups arising from different integral ideals have a solidarity property with respect to the generalized Furstenberg conjecture. \begin{proposition} \label{oneidealsuffices} If for some integral ideal $J$ in $\OO_{\! K}$ the only ergodic $\ok^*$-invariant probability measure on $\hat J$ having infinite support is normalized Haar measure, then the same is true for every integral ideal in $\OO_{\! K}$. \end{proposition} The proof depends on the following lemmas.
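Before turning to the lemmas, the dichotomy behind Furstenberg's conjecture can be illustrated numerically (a toy sketch; the sample points $1/7$ and $\sqrt{2}-1$ are our own choices and play no role in the proofs): a rational point of $\mathbb T$ has a finite orbit under the semigroup generated by $\times 2$ and $\times 3$, because denominators never grow, whereas the orbit of an irrational point is infinite and spreads over the circle.

```python
import math
from fractions import Fraction

def times_mod1(n, q):
    """Exact n*q mod 1 for an integer n and a Fraction q."""
    p = n * q
    return Fraction(p.numerator % p.denominator, p.denominator)

# Exact orbit of the rational point 1/7 under the semigroup generated by
# x -> 2x mod 1 and x -> 3x mod 1: denominators never grow, so the orbit
# stays inside {k/7 : 0 <= k < 7} and is finite.
orbit = set()
frontier = {Fraction(1, 7)}
while frontier:
    x = frontier.pop()
    orbit.add(x)
    for y in (times_mod1(2, x), times_mod1(3, x)):
        if y not in orbit:
            frontier.add(y)

# A sample of the orbit of the irrational point x0 = sqrt(2) - 1: the true
# values 2^a 3^b x0 mod 1 are pairwise distinct (a coincidence would force
# x0 to be rational) and spread over [0, 1).
x0 = math.sqrt(2) - 1.0
sample = {(2**a * 3**b * x0) % 1.0 for a in range(16) for b in range(10)}
distinct = {round(p, 5) for p in sample}

print(len(orbit), len(distinct))
```

The exponents are kept small enough that the products $2^a 3^b$ stay well below $2^{53}$, so the floating-point values are accurate to far better than the rounding resolution used to count distinct points.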
\begin{lemma}\label{partition} Let $J\subseteq I$ be two integral ideals in $\OO_{\! K}$ and let $r: \hat I \to \hat J$ be the restriction map. Denote by $\lambda_{\hat I}$ normalized Haar measure on $\hat I$. For each $\gamma \in \hat J$, there exists a neighborhood $N$ of $\gamma$ in $\hat J$ and homeomorphisms $h_j$ of $N$ into $\hat I$ for $j = 1, 2, \ldots , | I/J |$, with mutually disjoint images and such that \begin{enumerate} \item $\lambda_{\hat I} (h_j(E)) = \lambda_{\hat I} ( h_k(E))$ for every measurable $E\subseteq N$ and $1\leq j, k \leq | I/J |$; \item $r\circ h_j = \operatorname {id}_N$; \item $r^{-1} ( E) = \bigsqcup_j h_j(E)$ for all $E \subseteq N$, that is, the $h_j$'s form a complete system of local inverses of $r$ on $N$. \end{enumerate} \end{lemma} \begin{proof} Let $J^\perp:= \{\kappa\in \hat I: \kappa (j) = 1, \forall j\in J\}$ be the kernel of the restriction map $r: \hat I \to \hat J$. Since $J^\perp$ is a subgroup of order $| I/J | <\infty$, and since $\hat I$ is Hausdorff, we may choose a collection $\{ A_\kappa: \kappa \in J^\perp\}$ of mutually disjoint open subsets of $\hat I$ such that $\kappa \in A_\kappa$ for each $\kappa\in J^\perp$. Define $B_1:= \bigcap_{\kappa \in J^\perp} \kappa^{-1} A_\kappa$ and for each $\kappa\in J^\perp$ let $B_\kappa := \kappa B_1$. Then $\{B_\kappa : \kappa \in J^\perp\}$ is a collection of mutually disjoint open sets such that $\kappa \in B_\kappa$ and $r(B_\kappa) = r(B_1)$ for every $\kappa \in J^\perp$. We claim that the restrictions $r: B_\kappa \to \hat J$ are homeomorphisms onto their image. Since the $B_\kappa$ are translates of $B_1$ and since $r$ is continuous and open, it suffices to verify that $r$ is injective on $B_1$. This is easy to see because if $r(\xi_1) = r(\xi_2)$ for two distinct elements $\xi_1,\xi_2$ of $B_1$, then $\xi_2 = \kappa \xi_1$ for some $\kappa \in J^\perp \setminus \{1\}$, and this would contradict $B_1 \cap \kappa B_1 = \emptyset$. This proves the claim.
Since $r$ is surjective, we may choose $\xi \in \hat I$ with $r(\xi) = \gamma$. We may then take $N := \gamma \, r(B_1)$ and define $h_\kappa(\eta) := \xi \kappa \, (r|_{B_1})^{-1}(\gamma^{-1}\eta)$ for $\eta \in N$ and $\kappa \in J^\perp$; properties (1)-(3) are now easily verified, with property (1) holding because $h_\kappa = \kappa\, h_1$ and Haar measure is translation invariant. \end{proof} \begin{lemma}\label{Anthony'sLemma1} Let $X$ be a measurable space and let $T: X \to X$ be measurable. Suppose that $\lambda$ is an ergodic $T$-invariant probability measure on $X$. If $\mu$ is a $T$-invariant probability measure on $X$ such that $\mu \ll \lambda$, then $\mu = \lambda$. \end{lemma} \begin{proof} Fix $f \in L^\infty(\lambda)$ and define $(A_n f)(x) = \frac{1}{n} \sum\limits_{k=0}^{n-1} f(T^{k}x)$. Let $S = \{x \in X: (A_nf)(x) \to \int_X f d\lambda\}$. By the Birkhoff ergodic theorem, we have that $\lambda(S^c)=0$, and so $\mu(S^c)=0$ as well, that is, $(A_nf)(x) \to \int_X f d\lambda$ $\mu$-a.e. Since $f \in L^\infty(\lambda)$ and $\mu \ll \lambda$, we have that $f \in L^\infty(\mu)$ as well, with $\|f\|_\infty^\mu \le \|f\|_\infty^\lambda$. Observe that $|(A_nf)(x)| \le \|f\|_\infty^\lambda$ for $\mu$-a.e. $x$, and so by the dominated convergence theorem, $\int_X A_n f d\mu \to \int_X\left(\int_X f d\lambda\right) d\mu = \int_X f d\lambda$, with the last equality because $\mu(X)=1$. Because $\mu$ is $T$-invariant, we have that $\int_X A_n f d\mu = \int_X f d\mu$ for all $n$. Combining this with the above implies that $\int_X f d\lambda = \int_X f d\mu$ for all $f \in L^\infty(\lambda)$. In particular, this holds for the indicator function of each measurable set, and so $\mu = \lambda$. \end{proof} \begin{lemma} \label{fromItoJ} Let $J \subseteq I$ be two integral ideals in $\OO_{\! K}$ and let $r: \hat I \to \hat J$ be the restriction map. If $\mu$ is an ergodic $\OO_{\! K}^*$-invariant probability measure on $\hat I$, then $\tilde{\mu}:= \mu \circ r^{-1}$ is an ergodic invariant probability measure on $\hat J$. Moreover, the support of $\mu$ is finite if and only if the support of $\tilde \mu$ is finite.
\end{lemma} \begin{proof} Assume $\mu$ is ergodic invariant on $\hat I$ and let $E \subseteq \hat J$ be an $\ok^*$-invariant measurable set. Since $r$ is $\ok^*$-equivariant, $r^{-1}(E)$ is also $\ok^*$-invariant, so $\tilde{\mu}(E):=\mu(r^{-1}(E)) \in \{0,1\}$ because $\mu$ is ergodic invariant. Equivariance of $r$ also gives $\tilde\mu(u\cdot E) = \mu(r^{-1}(u\cdot E)) = \mu(u\cdot r^{-1}(E)) = \tilde\mu(E)$ for every measurable $E \subseteq \hat J$ and every $u \in \ok^*$. Thus, $\tilde{\mu}$ is also ergodic invariant. The statement about the support follows immediately because $r$ has finite fibers. \end{proof} \begin{lemma}\label{liftingF} Suppose $J \subseteq I$ are integral ideals in $\OO_{\! K}$, and let $\lambda_{\hat J}, \lambda_{\hat I}$ be normalized Haar measures on $\hat J, \hat I$, respectively. If the only ergodic $\OO_{\! K}^*$-invariant probability measure on $\hat J$ with infinite support is $\lambda_{\hat J}$, then the only ergodic $\OO_{\! K}^*$-invariant probability measure on $\hat I$ with infinite support is $\lambda_{\hat I}$. \end{lemma} \begin{proof} Let $\mu$ be an ergodic $\OO_{\! K}^*$-invariant probability measure on $\hat I$ with infinite support. By \lemref{fromItoJ}, $\mu \circ r^{-1}$ is an ergodic $\OO_{\! K}^*$-invariant probability measure on $\hat J$ with infinite support, and so by assumption must equal $\lambda_{\hat J}$. In particular, $\lambda_{\hat I} \circ r^{-1} = \lambda_{\hat J}$. Since $\hat J$ is compact, the open cover $\{N_\gamma: \gamma \in \hat J\}$ given by the sets constructed in \lemref{partition} has a finite subcover, that is, there exist $\gamma_1, \ldots, \gamma_n \in \hat J$ so that $\hat J = \bigcup\limits_{k=1}^n N_{\gamma_k}$, where $N_{\gamma_k}$ is a neighborhood of $\gamma_k \in \hat J$ satisfying the conditions stated in \lemref{partition}, with corresponding maps $h_{\gamma_k}^{(j)}$, for $1 \le j \le |I/J|$ and $1 \le k \le n$. We will first show that if $B \subseteq \hat I$ is such that $r|_B$ is a homeomorphism with $r(B) \subseteq N_{\gamma_k}$ for some $k$, and if $\lambda_{\hat I}(B) = 0$, then $\mu(B) = 0$. Suppose $B$ is such a set and $\lambda_{\hat I}(B)=0$.
By part (3) of \lemref{partition}, $r^{-1}(r(B)) = \bigsqcup\limits_{j=1}^{|I/J|} h_{\gamma_k}^{(j)}(r(B))$, so $r^{-1}(r(B))$ is a disjoint union of $|I/J|$ sets, all having the same measure under $\lambda_{\hat I}$. Moreover, there exists some $1 \le j \le |I/J|$ such that $h_{\gamma_k}^{(j)}(r(B)) = B$, because the $h_{\gamma_k}^{(j)}$'s form a complete set of local inverses for $r$, and $r$ is injective on $B$. Putting these together yields \[\lambda_{\hat I}(r^{-1}(r(B))) = |I/J| \lambda_{\hat I}(h_{\gamma_k}^{(j)}(r(B))) = |I/J|\lambda_{\hat I}(B) =0.\] Since $\mu \circ r^{-1} = \lambda_{\hat J} = \lambda_{\hat I} \circ r^{-1}$, this implies that $\mu(r^{-1}(r(B))) = 0$ as well, and since $B \subseteq r^{-1}(r(B))$, we have that $\mu(B) = 0$. Now, since $r: \hat I \to \hat J$ is a covering map, for each $\chi \in \hat I$, there exists an open neighbourhood $U_\chi$ of $\chi$ such that $r|_{U_\chi}$ is a homeomorphism. Let $1 \le k \le n$ be such that $r(\chi) \in N_{\gamma_k}$, and let $W_\chi:=U_\chi \cap r^{-1}(N_{\gamma_k})$. This forms another open cover of $\hat I$, and so by compactness of $\hat I$, there exists a finite subcover $W_1, \ldots, W_m$. Finally, let $A \subseteq \hat I$ be such that $\lambda_{\hat I}(A)=0$. Then $A \cap W_i$ is a set on which $r$ acts as a homeomorphism, and there exists $1 \le k \le n$ such that $r(A \cap W_i) \subseteq N_{\gamma_k}$. Thus, by the above, we conclude $\mu(A \cap W_i) = 0$ for all $1 \le i \le m$. Since these sets cover $A$, we have that $\mu(A)=0$, and hence $\mu \ll \lambda_{\hat I}$, as desired. By \lemref{Anthony'sLemma1} it follows that $\mu = \lambda_{\hat I}$. \end{proof} \begin{proof}[Proof of \proref{oneidealsuffices}] Suppose $J$ is an integral ideal such that the only ergodic $\ok^*$-invariant probability measure on $\hat J$ having infinite support is normalized Haar measure. By \lemref{liftingF} applied to the inclusion $J\subset \OO_{\! K}$, the only ergodic $\OO_{\!
K}^*$-invariant probability measure on $\widehat{\OO}_{\! K}$ with infinite support is normalized Haar measure. Suppose now $I \subseteq \OO_{\! K}$ is an arbitrary integral ideal. Since the ideal class group is finite, a power of $I$ is principal, and thus we may choose a nonzero $q \in \OO_{\! K}$ such that $q \OO_{\! K} \subseteq I$. The action of $\OO_{\! K}^*$ on $\widehat{\OO}_{\! K}$ is conjugate to the action of $\OO_{\! K}^*$ on $\widehat{q\OO_{\! K}}$, and so the only ergodic $\OO_{\! K}^*$-invariant probability measure on $\widehat{q\OO_{\! K}}$ with infinite support is normalized Haar measure. Thus, by \lemref{liftingF} again, applied to the inclusion $q\OO_{\! K} \subseteq I$, we conclude that the only ergodic $\OO_{\! K}^*$-invariant probability measure on $\hat I$ with infinite support is $\lambda_{\hat I}$. \end{proof} In order to understand the situation for number fields with unit rank higher than $1$, we review in the next section the topological version of the problem of ergodic invariant measures, namely, the classification of closed invariant sets. \section{Berend's theorem and number fields}\label{berendsection} An elegant generalization to higher-dimensional tori of Furstenberg's characterization \cite[Theorem IV.1]{F} of closed invariant sets for semigroups of transformations of the circle was obtained by Berend \cite[Theorem 2.1]{B}. The fundamental question investigated by Berend is whether an infinite invariant set is necessarily dense, and his original formulation is for semigroups of endomorphisms of a torus. Here we are interested in the specific situation arising from an algebraic number field $K$ in which the units $\ok^*$ act by automorphisms on $\hat J$ for integral ideals $J \subseteq \OO_{\! K}$ representing each ideal class, so we paraphrase Berend's Property ID for the special case of a group action on a compact space. \begin{definition} (cf. \cite[Definition 2.1]{B}.) Let $G$ be a group acting on a compact space $X$ by homeomorphisms.
We say that the action $G\mathrel{\reflectbox{$\righttoleftarrow$}} X$ {\em satisfies the ID property}, or that it has the {\em infinite invariant dense property}, if the only closed infinite $G$-invariant subset of $X$ is $X$ itself. \end{definition} The first observation is a topological version of the measure-theoretic solidarity proved in \proref{oneidealsuffices}; namely, if $K$ is a given number field, then the action $\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \hat J$ has the ID property either for all integral ideals $J$, or for none. \begin{proposition}\label{IDforallornone} Suppose $K$ is an algebraic number field, and let $J$ be an ideal in $\OO_{\! K}$. Then the action of $\ok^*$ on $\hat J$ is ID if and only if the action of $\ok^*$ on $\hat{\OO_{\! K}}$ is ID. \end{proposition} \begin{proof} Suppose first that $J_1 \subseteq J_2$ are ideals in $\OO_{\! K}$ and assume that the action of $\ok^*$ on $\hat{J}_2$ is ID. The restriction map $r: \hat J_2 \to \hat J_1$ is $\ok^*$-equivariant, continuous, surjective, and has finite fibers. Thus, if $E$ were a closed, proper, infinite $\ok^*$-invariant subset of $\hat{J}_1$, then $r^{-1}(E)$ would be a closed, proper, infinite $\ok^*$-invariant subset of $\hat{J}_2$, contradicting the assumption that the action of $\ok^*$ on $\hat{J}_2$ is ID. So no such set $E$ exists, proving that the action of $\ok^*$ on $\hat{J}_1$ is also ID. In particular, if the action of $\ok^*$ on $\hat{\OO_{\! K}}$ is ID, then the action on $\hat{J}$ is also ID for every integral ideal $J\subset \OO_{\! K}$. For the converse, recall that, as in the proof of \proref{oneidealsuffices}, there exists a nonzero integer $q \in \OO_{\! K}$ such that $q\OO_{\! K} \subseteq J$, so we may apply the preceding paragraph to this inclusion. Since the action of $\ok^*$ on $\widehat{q\OO_{\! K}}$ is conjugate to that on $\widehat{\OO}_{\! K}$, this completes the proof.
\end{proof} In order to decide for which number fields the action of units on the integral ideals is ID, we need to recast Berend's necessary and sufficient conditions in terms of properties of the number field. Recall that, by definition, a number field is called a {\em complex multiplication (or CM) field} if it is a totally imaginary quadratic extension of a totally real subfield. These fields were studied by Remak \cite{rem}, who observed that they are exactly the fields that have a {\em unit defect}, in the sense that they contain a proper subfield $L$ with the same unit rank. \begin{theorem}\label{berend4units} Let $K$ be an algebraic number field and let $J$ be an ideal in $\OO_{\! K}$. The action of $\ok^*$ on $\hat J$ is ID if and only if $K$ is not a CM field and $\operatorname{rank} \ok^* \geq 2$. \end{theorem} For the proof we shall need a few number theoretic facts. We believe these are known but we include the relatively straightforward proofs below for the convenience of the reader. \begin{lemma}\label{99percent} Suppose $\mathcal F$ is a finite family of subgroups of $\mathbb Z^d$ such that $\operatorname{rank}(F) < d$ for every $F\in \mathcal F$. Then there exists $m\in \mathbb Z^d$ such that $m+F$ is nontorsion in $\mathbb Z^d/F$ for every $F \in \mathcal F$. \end{lemma} \begin{proof} Recall that for each subgroup $F$ there exists a basis $\{n^F_j\}_{j= 1, 2, \ldots , d}$ of $\mathbb Z^d$ and integers $a_1, a_2, \ldots, a_{\operatorname{rank}(F)}$ such that \[ F = \textstyle\big\{ \sum_{i=1}^{\operatorname{rank}(F)} k_i n^F_i:\ k_i \in a_i\mathbb Z, \ 1\leq i \leq \operatorname{rank}(F)\big\}. \] The associated vector subspaces $S_F := \operatorname{span}_\mathbb R\{n^F_1, \ldots, n^F_{\operatorname{rank}(F)}\} $ of $\mathbb R^d$ are proper and closed so $\mathbb R^d \setminus \cup_F S_F$ is a nonempty open set, see e.g. \cite[Theorem 1.2]{rom}. Let $r$ be a point in $\mathbb R^d \setminus \cup_F S_F$ with rational coordinates. 
If $k$ denotes the l.c.m. of all the denominators of the coordinates of $r$, then $m := kr \in \mathbb Z^d$ and its image $m+F \in \mathbb Z^d/F$ is of infinite order for every $F$ because $m \notin S_F$. \end{proof} \begin{proposition}\label{unitdefect} Let $K$ be an algebraic number field. Then there exists a unit $u\in \ok^*$ such that $K = \mathbb Q(u^k)$ for every $k\in \N^\times$ if and only if $K$ is not a CM field. \end{proposition} \begin{proof}Assume first $K$ is not a CM field. Then $\operatorname{rank} \OO_{\!F}^* < \operatorname{rank} \ok^*$ for every proper subfield $F$ of $K$. Since there are only finitely many proper subfields $F$ of $K$, \lemref{99percent} gives a unit $u\in \ok^*$ with nontorsion image in $\ok^*/\OO_{\!F}^*$ for every $F$. Thus $u^k \notin F$ for every proper subfield $F$ of $K$ and every $k\in \mathbb N$. Assume now $K$ is a CM field, and let $F$ be a totally real subfield with the same unit rank as $K$ \cite{rem}. Then the quotient $\ok^*/\OO_{\!F}^*$ is finite and there exists a fixed integer $m$ such that $u^m \in F$ for every $u\in \ok^*$. \end{proof} \begin{lemma} \label{friday} Let $k$ be an algebraic number field with $\operatorname{rank} \ok^* \geq 1$. Then for every embedding $\sigma: k \to \mathbb C$, there exists $u \in \ok^* $ such that $|\sigma(u)| > 1$. \end{lemma} \begin{proof} Assume for contradiction that $\sigma$ is an embedding of $k$ in $\mathbb C$ such that $\sigma(\ok^* ) \subseteq \{z \in \mathbb C: |z|=1\}$. Let $K = \sigma(k)$ and let $U_K = \sigma(\ok^* )$. Then $K \cap \mathbb R$ is a real subfield of $K$ with $ U_{K\cap \mathbb R} = \{\pm1\}$, so $K \cap \mathbb R= \mathbb Q$. Also $K \cap \mathbb R$ is the maximal real subfield of $K$, and since we are assuming $\operatorname{rank} \ok^* \geq 1$, $K$ cannot be a CM field. To see this, suppose that $k$ were CM. Let $\ell \subseteq k$ be a totally real subfield such that $[k:\ell] = 2$. 
Since $\ell$ is totally real, $\sigma(\ell)\subseteq \mathbb R$, and since $K \cap \mathbb R= \mathbb Q$, it must be that $\sigma(\ell) = \mathbb Q$. Then $\ell = \mathbb Q$, so $k$ is quadratic imaginary, contradicting $\operatorname{rank} \ok^* \ge 1$. By \proref{unitdefect}, there exists $u \in U_K$ such that $K = \mathbb Q(u)$. Since $|u|=1$, we have that $\overline K =\mathbb Q(\overline u) = \mathbb Q(u^{-1})= \mathbb Q(u) = K$, so $K$ is closed under complex conjugation. Write $u = a + ib$. Then $u + \overline u = 2a \in K \cap \mathbb R= \mathbb Q$, so $a \in \mathbb Q$. Thus, $K = \mathbb Q(u) = \mathbb Q(ib).$ Since $|u|=1$, $a^2+b^2=1$, and so we have that \[\mathbb Q(ib) \cong \mathbb Q(\sqrt{-b^2}) \cong \mathbb Q(\sqrt{a^2-1}) \cong \mathbb Q\left(\sqrt{\frac{m^2-n^2}{n^2}}\right) \cong \mathbb Q(\sqrt{m^2-n^2}),\] where $a = m/n \in \mathbb Q$. Thus, $K$ is a quadratic field. But it cannot be quadratic imaginary because $\operatorname{rank} U_K \ge 1$, and it cannot be quadratic real because all the units lie on the unit circle. This proves there can be no such embedding. \end{proof} \begin{proof} [Proof of \thmref{berend4units}] By \proref{IDforallornone}, it suffices to prove the case $J = \OO_{\! K}$. Let $d = [K:\mathbb Q]$ and recall that $\widehat{\OO}_{\! K} \cong \mathbb T^d$. All we need to do is verify that Berend's necessary and sufficient conditions for ID \cite[Theorem 2.1]{B}, when interpreted for the automorphic action of $\ok^*$ on $\widehat{\OO}_{\! K}$, characterize non-CM fields of unit rank $2$ or higher. Since the action of $\ok^*$ by linear toral automorphisms $\rho(u)$ with $u\in \ok^*$ is faithful by \cite[p. 
729]{KKS}, Berend's conditions are: \begin{enumerate} \item (totally irreducible) there exists a unit $u$ such that the characteristic polynomial of $\rho(u^n)$ is irreducible for all $n\in \N^\times$; \item (quasi-hyperbolic) for every common eigenvector of $\{\rho(u): u\in \ok^*\}$, there is a unit $u\in \ok^*$ such that the corresponding eigenvalue of $\rho(u)$ is outside the unit disc; and \item (not virtually cyclic) there exist units $u,v\in \ok^*$ such that if $m,n \in \mathbb N$ satisfy $\rho(u^m) = \rho(v^n)$, then $m = n = 0$. \end{enumerate} Suppose first that the action of $\ok^*$ on $\widehat{\OO}_{\! K}$ is ID. By \cite[Theorem 2.1]{B}, conditions (1) and (3) above hold, i.e. the action of $\ok^*$ on $\widehat{\OO}_{\! K}$ is totally irreducible and not virtually cyclic. By \proref{unitdefect}, $K$ is not a CM field and since $\rho:\ok^* \to GL_d(\mathbb Z)$ is faithful, (3) is a restatement of $\operatorname{rank}\ok^* \geq 2$. Suppose now that $K$ is not CM and has unit rank at least $2$. By \proref{unitdefect}, there exists $u \in \ok^*$ such that $\mathbb Q(u^n) = K$ for every $n \in \N^\times$. Hence the minimal polynomial of $\rho(u^n)$ has degree $d$, and so it coincides with the characteristic polynomial. This proves that condition (1) holds, i.e. the action of $\rho(u)$ is totally irreducible. We have already observed that condition (3) holds iff the unit rank of $K$ is at least $2$, so it remains to see that the hyperbolicity condition (2) holds too. In the simultaneous diagonalization of the matrix group $\rho(\ok^*)$, the diagonal entries of $\rho(u)$ are the embeddings of $u$ into $\mathbb R$ or $\mathbb C$, see e.g. \cite[p.729]{KKS}. Then condition (2) follows from \lemref{friday}. \end{proof} \begin{remark} Notice that for units acting on algebraic integers, Berend's hyperbolicity condition (2) is automatically implied by the rank condition (3).
\end{remark} \begin{remark} Since the matrices representing the actions of $\ok^*$ on $\hat J$ and on $\hat \OO_{\! K}$ are conjugate over $\mathbb Q$, \proref{IDforallornone} can be derived from the implication (1)$\implies$(3) in \cite[Proposition 2.1]{KKS}. We may also see that the matrices implementing the action on $\hat J$ and on $\hat \OO_{\! K}$ have the same sets of characteristic polynomials, so the questions of expansive eigenvalues (condition (2)) and of total irreducibility are equivalent for the two actions. The third condition is independent of whether we look at $\hat J$ or $\hat \OO_{\! K}$, so this yields yet another proof of Proposition \ref{IDforallornone}. \end{remark} By \thmref{berend4units}, for each non-CM algebraic number field $K$ with unit rank at least $2$, the action $\ok^*$ on $\widehat{\OO}_{\! K}$, transposed as $\{\rho(u): u\in \ok^*\}$ acting on $\mathbb R^d/\mathbb Z^d$, is an example of an abelian toral automorphism group for which one may hope to prove that normalized Haar measure is the only ergodic invariant probability measure with infinite support. So it is natural to ask which groups of toral automorphisms arise this way. A striking observation of Z. Wang \cite[Theorem 2.12]{ZW}, see also \cite[Proposition 2.2]{LW}, states that every finitely generated abelian group of automorphisms of $\mathbb T^d$ that contains a totally irreducible element and whose rank is maximal and greater than or equal to $2$ arises, up to conjugacy, from a finite index subgroup of units acting on the integers of a non-CM field of degree $d$ and unit rank at least $2$, cf. \cite[Condition 1.5]{ZW}. We wish next to give a proof of the converse, which was also stated in \cite{ZW}. \begin{proposition} \label{ZWclaim} Suppose $G$ is an abelian subgroup of $SL_d(\mathbb Z)$ satisfying \cite[Condition 2.8]{ZW}. 
Specifically, suppose there exist \begin{itemize} \item a non-CM number field $K$ of degree $d$ and unit rank at least $2$; \item an embedding $\phi:G \to \ok^*$ of $G$ into a finite index subgroup of $\ok^*$; \item a co-compact lattice $\Gamma$ in $K\subset K\otimes_\mathbb Q \mathbb R\cong \mathbb R^d$ invariant under multiplication by $\phi(G)$; and \item a linear isomorphism $\psi: \mathbb R^d \to K\otimes_\mathbb Q \mathbb R\cong \mathbb R^d$ mapping $\mathbb Z^d$ onto $\Gamma$ that intertwines the actions $G\mathrel{\reflectbox{$\righttoleftarrow$}}\mathbb R^d/\mathbb Z^d$ and $\phi(G) \mathrel{\reflectbox{$\righttoleftarrow$}} (K\otimes_\mathbb Q \mathbb R)/\Gamma$. \end{itemize} Then $G$ satisfies \cite[Condition 1.5]{ZW}, namely \begin{enumerate} \item $\operatorname{rank}(G)\geq 2$; \item the action $g\mathrel{\reflectbox{$\righttoleftarrow$}} \mathbb R^d/\mathbb Z^d \cong \mathbb T^d$ is totally irreducible for some $g\in G$; \item $\operatorname{rank} G_1 = \operatorname{rank} G$ for each abelian subgroup $G_1 \subset SL_d(\mathbb Z)$ containing~$G$. \end{enumerate} \end{proposition} \begin{proof} Suppose $K$ is a non-CM algebraic number field of degree $d$ with unit rank at least $2$, and assume $G$ is a subgroup of $SL_d(\mathbb Z)$ that satisfies the assumptions with respect to $K$. Part (1) of \cite[Condition 1.5]{ZW} is immediate, because $\phi(G)$ is of full rank in $\ok^*$. By \proref{unitdefect}, there exists a unit $u\in \ok^*$ such that the characteristic polynomial of $\rho(u^m)$ is irreducible over $\mathbb Q$ for all $m \in \mathbb N$. This is equivalent to the action of $u^m$ on $\widehat{\OO}_{\! K}$ being irreducible for all $m \in \mathbb N$, see e.g. \cite[Proposition 3.1]{KKS}. Since $\phi(G)$ is of finite index in $\ok^*$, there exists $N\in \mathbb N$ such that $u^N \in \phi(G)$. We claim that $g := \phi^{-1}(u^N)$ is a totally irreducible element in $G\mathrel{\reflectbox{$\righttoleftarrow$}} \mathbb R^d/\mathbb Z^d$.
To see this, it suffices to show that the characteristic polynomial of $g^k$ is irreducible over $\mathbb Q$ for every positive integer $k$. Since the linear isomorphism $\psi$ intertwines the actions $g^k\mathrel{\reflectbox{$\righttoleftarrow$}} \mathbb T^d$ and $\rho(\phi(g))^k \mathrel{\reflectbox{$\righttoleftarrow$}} (K\otimes_\mathbb Q\mathbb R)/\Gamma$, the characteristic polynomial of $g^k$ equals the characteristic polynomial of $\rho(\phi(g))^k = \rho(u^{kN})$, which is irreducible because it coincides with the characteristic polynomial of $u^{kN}$ as an element of the ring $\OO_{\! K}$. This proves part (2) of Condition 1.5. Suppose now that $G_1$ is an abelian subgroup of $SL_d(\mathbb Z)$ containing $G$ and apply the construction from \cite[Proposition 2.13]{ZW} (see also \cite{KlausS,EL1}) to the irreducible element $g\in G\subset G_1 \mathrel{\reflectbox{$\righttoleftarrow$}} \mathbb T^d$. Up to an automorphism, the number field arising from this construction is $K= \mathbb Q(u^N)$, and the embedding $\phi_1: G_1 \to \ok^*$ is an extension of $\phi:G\to \ok^*$. Since $\phi(G)\subset \phi_1(G_1) \subset \ok^*$ and $\phi(G)$ is of finite index in $\ok^*$, \[\operatorname{rank} (G_1) = \operatorname{rank}\phi_1(G_1) = \operatorname{rank} \ok^* = \operatorname{rank} \phi(G) = \operatorname{rank} G,\] and this proves part (3) of Condition 1.5. \end{proof} As a consequence, we see that the actions of units on the algebraic integers of number fields are generic for group actions with Berend's ID property in the following sense, cf. \cite{ZW,LW}. \begin{corollary} If $G$ is a finitely generated abelian subgroup of $SL_d(\mathbb Z)$ of torsion-free rank at least 2 that contains a totally irreducible element and is maximal among abelian subgroups of $SL_d(\mathbb Z)$ containing $G$, then $G$ is conjugate to a finite-index toral automorphism subgroup of the action of $\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\!
K}$ for a non-CM algebraic number field $K$ of degree $d$ and unit rank at least $2$. \end{corollary} Finally, we summarize what we can say at this point for equilibrium states of C*-algebras associated to number fields with unit rank strictly higher than one. If the generalized Furstenberg conjecture were verified, the following result would complete the classification started in \proref{imaginaryquadratic} and \proref{poulsen}. Let $K$ be a number field and for each $\gamma \in \Cl_K$ define $F_\gamma$ to be the set of all pairs $(\mu, \chi)$ with $\mu$ an equiprobability measure on a finite orbit of the action of $\ok^*$ in $\hat {J}_\gamma$, and $\chi \in \hat{H}_\mu$, where the $\mu$-a.e. isotropy group $H_\mu$ is a finite index subgroup of $\ok^*$. Also let $(\lambda_J, 1)$ denote the pair consisting of normalized Haar measure on $\hat{J}$ and the trivial character of its trivial a.e. isotropy group. Then the map $(\mu,\chi) \mapsto \tau_{\mu,\chi}$ from \thmref{thm:nesh} gives an extremal tracial state of $C^*(J_\gamma \rtimes \ok^*)$ for each pair $(\mu,\chi) \in F_\gamma \sqcup \{(\lambda_{J_\gamma}, 1) \}$. Recall that the map $\tau \mapsto \varphi_\tau$ from \cite[Theorem 7.3]{CDL} is an affine bijection of all tracial states of $\bigoplus_{\gamma\in \Cl_K} C^*(J_\gamma\rtimes \ok^*)$ onto $\mathcal K_\beta$, the simplex of KMS$_\beta$ equilibrium states of the system $(\mathfrak T[\OO_{\! K}], \sigma)$ studied in \cite{CDL}.
\begin{theorem}\label{conjecturalclassification} Suppose $K$ is an algebraic number field with unit rank at least $2$ and define $\Phi:(\mu,\chi) \mapsto \varphi_{\tau_{\mu,\chi}}$ to be the composition of the maps from \thmref{thm:nesh} and from \cite[Theorem 7.3]{CDL}, assigning a state $\varphi_{\tau_{\mu,\chi}} \in \operatorname{Extr}(\mathcal K_\beta)$ to each pair $(\mu,\chi)$ consisting of an ergodic invariant probability measure $\mu$ in one of the $\hat{J}_\gamma$ and an associated character of the $\mu$-almost constant isotropy $H_\mu$. Let \[ F_K:= \bigsqcup_{\gamma \in \Cl_K} \big(F_\gamma \sqcup \{(\lambda_{J_\gamma}, 1) \}\big) \] be the set of pairs whose measure $\mu$ has finite support or is Haar measure. Then \begin{enumerate} \item if $K$ is a CM field, then the inclusion $\Phi(F_K) \subset \operatorname{Extr}(\mathcal K_\beta)$ is proper; and \item if $K$ is not a CM field, and if there exists $\phi \in \operatorname{Extr}(\mathcal K_\beta) \setminus \Phi(F_K)$, then the measure $\mu$ on $\hat{J}_\gamma$ arising from $\phi$ has zero entropy and infinite support. \end{enumerate} \end{theorem} \begin{proof} To prove assertion (1), recall that when $K$ is a CM field Berend's theorem implies that there are invariant subtori, which have ergodic invariant probability measures on the fibers, cf. \cite{KK,KS}. These measures give rise to tracial states and to KMS states not accounted for in $\Phi(F_K)$. Assertion (2) follows from \cite[Theorem 1.1]{EL1}. \end{proof} \section{Primitive ideal space}\label{prim} The computation of the primitive ideal spaces of the C*-algebras $C^*(J \rtimes \ok^*)$ associated to the action of units on integral ideals lies within the scope of Williams' characterization in \cite{DW}. We briefly review the general setting next. Let $G$ be a countable, discrete, abelian group acting continuously on a second countable compact Hausdorff space $X$.
We define an equivalence relation on $X$ by saying that {\em $x$ and $y$ are equivalent} if $x$ and $y$ have the same orbit closure, i.e. if $\overline{G\cdot x} = \overline{G \cdot y}$. The equivalence class of $x$, denoted by $[x]$, is called the {\em quasi-orbit} of $x$, and the quotient space, which in general is not Hausdorff, is denoted by $\mathcal{Q}(G \mathrel{\reflectbox{$\righttoleftarrow$}} X)$ and is called the {\em quasi-orbit space}. It is important to distinguish the quasi-orbit of a point from the closure of its orbit, as the latter may contain other points with strictly smaller orbit closure. Let $\epsilon_x$ denote evaluation at $x\in X$, viewed as a one-dimensional representation of $C(X)$. For each character $\kappa \in \hat G_x$, the pair $(\epsilon_x,\kappa)$ is clearly covariant for the transformation group $(C(X), G_x)$, and the corresponding representation $\epsilon_x\times \kappa$ of $C(X) \rtimes G_x$ gives rise to an induced representation $\operatorname{Ind}_{{G_x}}^G(\epsilon_x\times \kappa)$ of $C(X) \rtimes G$, which is irreducible because $\epsilon_x\times \kappa$ is. Since $G$ is abelian and the action is continuous, whenever $x$ and $y$ are in the same quasi-orbit, $[x] = [y]$, the corresponding isotropy subgroups coincide: $G_x = G_y$. Thus, we may consider an equivalence relation on the product $\mathcal{Q}(G \mathrel{\reflectbox{$\righttoleftarrow$}} X) \times \hat G$ defined by \[ ([x],\kappa) \sim ([y],\lambda) \quad \iff \quad [x]=[y] \text{ and } \kappa\restr{G_x} = \lambda\restr{G_x} . \] By \cite[Theorem 5.3]{DW}, the map $(x,\kappa) \mapsto \ker \operatorname{Ind}_{{G_x}}^G(\epsilon_x\times \kappa)$ induces a homeomorphism of $(\mathcal{Q}(G \mathrel{\reflectbox{$\righttoleftarrow$}} X) \times \hat G)/_{\!\sim}$ onto the primitive ideal space of the crossed product $C(X)\rtimes G$, see e.g. \cite[Theorem 1.1]{primbc} for more details on this approach. 
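Before specializing to unit actions, the dichotomy can be made concrete with a small computational sketch. The field and the starting point below are hypothetical choices for illustration, not taken from the text, and the sketch assumes SymPy is available: for the totally real (hence non-CM) cubic field $K=\mathbb Q(2\cos(2\pi/7))$ of unit rank $2$, the unit $u=2\cos(2\pi/7)$, with minimal polynomial $x^3+x^2-2x-1$, acts totally irreducibly, while a rational point of the torus has a finite orbit.

```python
from fractions import Fraction

import sympy as sp

x = sp.symbols('x')
# Multiplication by u = 2*cos(2*pi/7) on the basis {1, u, u^2} of O_K,
# using u^3 = -u^2 + 2u + 1 (a hypothetical concrete instance).
rho_u = sp.Matrix([[0, 0, 1],
                   [1, 0, 2],
                   [0, 1, -1]])

# Total irreducibility: the characteristic polynomial of rho(u)^n stays
# irreducible over Q (checked here for small n).
for n in range(1, 7):
    p = sp.Poly((rho_u**n).charpoly(x).as_expr(), x)
    assert p.is_irreducible

# Finite quasi-orbits: a rational point of (Q/Z)^3 has a finite orbit under
# <u>, since rho(u) lies in GL_3(Z) and cannot enlarge denominators.
M = [[0, 0, 1], [1, 0, 2], [0, 1, -1]]  # same matrix, plain integers

def act(pt):
    return tuple(sum(M[i][j] * pt[j] for j in range(3)) % 1 for i in range(3))

x0 = (Fraction(1, 5), Fraction(0), Fraction(0))
orbit, y = {x0}, act(x0)
while y not in orbit:
    orbit.add(y)
    y = act(y)
assert len(orbit) < 5**3  # finite, supported on points with denominator 5
```

By contrast, a point with an irrational coordinate would have a dense orbit, collapsing into the single infinite quasi-orbit described below.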
We wish to apply the above result to actions $\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \hat J$ for integral ideals $J$ of non-CM number fields with unit rank at least $2$, as in \thmref{berend4units}. Notice that by \proref{orbitsandisotropy} if the orbit $\ok^*\cdot \chi$ is finite, then it is equal to the quasi-orbit $[\chi]$. The first step is to describe the quasi-orbit space for the action of units. We focus on the case $J = \OO_{\! K}$; ideals representing nontrivial classes behave similarly because of the solidarity established in \proref{IDforallornone}. \begin{proposition}\label{quasiorbitset} Suppose $K$ is a non-CM algebraic number field with unit rank at least $2$. Then the quasi-orbit space of the action $\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}$ is \[ \mathcal{Q}(\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}) = \{[x]: |\ok^*\cdot x | < \infty\}\cup \{\omega_\infty\}. \] The point $\omega_\infty$ is the unique infinite quasi-orbit $[\alpha]$ of any $\alpha \in \widehat{\OO}_{\! K}\cong \mathbb R^d/\mathbb Z^d$ having at least one irrational coordinate. The closed proper subsets are the finite subsets all of whose points are finite (quasi-)orbits. Infinite subsets and subsets that contain the infinite quasi-orbit $\omega_\infty$ are dense in the whole space. \end{proposition} \begin{proof} By \thmref{berend4units}, the closure of each infinite orbit is the whole space. Thus, the points with infinite orbits collapse into a single quasi-orbit \[ \omega_\infty := \{x\in \widehat{\OO}_{\! K}: |\ok^* \cdot x| =\infty\} = \{x\in \widehat{\OO}_{\! K}: \overline{\ok^* \cdot x} = \widehat{\OO}_{\! K}\}. \] That this is the set of points with at least one irrational coordinate is immediate from \cite[Theorem 5.11]{Wal}. When the orbit of $x$ is finite, it is itself a quasi-orbit, which we view as a point in $\mathcal{Q}(\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K})$. 
In this case $x\in \widehat{\OO}_{\! K}$ has all rational coordinates. To describe the topology, recall that the quotient map $q: \widehat{\OO}_{\! K} \to \mathcal{Q}(\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K})$ is surjective, continuous and open by the Lemma on page 221 of \cite{PG}, see also the proof of Proposition 2.4 in \cite{primbc}. Any two distinct finite quasi-orbits $[x]$ and $[y]$ are disjoint finite subsets of $\widehat{\OO}_{\! K}$ and as such can be separated by disjoint open sets $V$ and $W$, so that $[x]\subset V$ and $[y] \subset W$. Passing to the quotient space, we have $[x]\notin q(W)$ and $[y] \notin q(V)$, so $[x]$ and $[y]$ are $T_1$-separated, which implies that finite sets of finite quasi-orbits are closed in $\mathcal Q(\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K})$. The singleton $\{\omega_\infty\}$ is dense in $\mathcal Q(\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K})$ because every infinite orbit in $\widehat{\OO}_{\! K}$ is dense by \thmref{berend4units}. If $A$ is an infinite subset of $\mathcal Q(\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K})$ consisting of finite quasi-orbits, then $\bigcup_{[x]\in A} [x]$ is an infinite invariant set in $\widehat{\OO}_{\! K}$, hence is dense by \thmref{berend4units}. This implies that $\omega_\infty$ is in the closure of $A$, and hence $A$ is dense in $\mathcal Q(\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K})$. \end{proof} \begin{theorem} \label{primhomeom} Let $K$ be a non-CM algebraic number field with unit rank at least 2, and let $G = \ok^*$. The primitive ideal space of $C(\widehat{\OO}_{\!
K})\rtimes G$ is homeomorphic to the space \begin{equation}\label{primdef} \bigsqcup\limits_{[x]} \left(\{[x]\} \times \hat{G}_x \right) \end{equation} in which a net $([x_\iota], \gamma_\iota)$ converges to $([x],\gamma)$ iff $[x_\iota] \to [x]$ in $\mathcal{Q}(G \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K})$ and $\gamma_\iota|_{G_x} \to \gamma|_{G_x}$ in $\hat{G}_x$. Notice that if $[x]$ is a finite quasi-orbit, then the net $\{[x_\iota] \}$ is eventually constant equal to $[x]$, and if $[x] = \omega_\infty$, then the condition $\gamma_\iota|_{G_{\omega_\infty}} \to \gamma|_{G_{\omega_\infty}}$ is trivially true because $G_{\omega_\infty} = \{1\}$. \end{theorem} \begin{proof} Consider the diagram below, where $f$ is the quotient map and the vertical map $g$ is defined by $g([([x],\gamma)]) = ([x], \gamma|_{G_x})$, where $[([x],\gamma)]$ denotes the equivalence class of $([x],\gamma)$ with respect to $\sim$. \[ \begin{tikzcd} \mathcal{Q}(G\mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}) \times \hat G \arrow{r}{f} \arrow[swap]{dr}{g\circ f} & \mathcal{Q}(G\mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}) \times \hat G/\sim \arrow{d}{g} \\ & \bigsqcup\limits_{[x]} \left(\{[x]\} \times \hat{G}_x \right) \end{tikzcd} \] By the fundamental property of the quotient map, see e.g. \cite[Theorem 9.4]{wil}, $g\circ f$ is continuous if and only if $g$ is continuous. It is clear that $g$ is a bijection. We show next that $g \circ f$ is continuous. Suppose that $([x_\iota], \gamma_\iota)$ is a net in $\mathcal{Q}(G\mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}) \times \hat G$ converging to $([x],\gamma)$. Then $[x_\iota] \to [x]$ in $\mathcal{Q}(G\mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K})$, and $\gamma_\iota \to \gamma$ in $\hat G$. Then clearly also $\gamma_\iota|_{G_x} \to \gamma|_{G_x}$ in $\hat{G}_x$ as well. 
Hence the net $g\circ f([x_\iota], \gamma_\iota)$ converges to $g\circ f([x],\gamma) = ([x], \gamma|_{G_x})$, as desired. It remains to show that $g^{-1}$ is continuous, or equivalently, that $g$ is a closed map. Suppose that $W \subseteq \mathcal{Q}(G\mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}) \times \hat G / \sim$ is closed, and suppose that $([x_\iota], \gamma_\iota)$ is a net in $g(W)$ converging to $([x], \gamma)$. Consider any net $(\tilde{\gamma_\iota})$ in $\hat{G}$ such that $\tilde{\gamma_\iota}|_{G_x} = \gamma_\iota$. By the compactness of $\hat{G}$, there exists a convergent subnet $\tilde{\gamma}_{\iota_\eta}$ with limit $\tilde{\gamma}$. Then $\tilde{\gamma}_{\iota_\eta}|_{G_x} \to \tilde{\gamma}|_{G_x}$ as well, so $\gamma_{\iota_\eta} \to \tilde{\gamma}|_{G_x}$. Since $\hat{G}_x$ is Hausdorff, limits are unique, and hence $\tilde{\gamma}|_{G_x} = \gamma$. The net $([x_{\iota_\eta}], \tilde{\gamma}_{\iota_\eta})$ converges to $([x], \tilde{\gamma})$ in $\mathcal{Q}(G \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}) \times \hat{G}$, and since $f$ is continuous, $f([x_{\iota_\eta}], \tilde{\gamma}_{\iota_\eta}) \to f([x],\tilde{\gamma})$. Moreover, $f([x_{\iota_\eta}], \tilde{\gamma}_{\iota_\eta}) = [([x_{\iota_\eta}], \tilde{\gamma}_{\iota_\eta})] \in W$ because $g$ is injective and $g([([x_{\iota_\eta}],\tilde{\gamma}_{\iota_\eta})]) = ([x_{\iota_\eta}],\gamma_{\iota_\eta}) \in g(W)$ by assumption. Since $W$ is closed, $[([x],\tilde{\gamma})] \in W$, and so its image $([x], \gamma) \in g(W)$, as desired. \end{proof} \begin{remark} Recall that $G \cong W \times \mathbb Z ^d$ with $W$ the roots of unity in $G$, and that the isotropy subgroup $G_x$ is constant on the quasi-orbit $[x]$ of $x$. If $[x]$ is finite, then $G_x$ is of full rank in $G$, and thus $G_x \cong V_x \times \mathbb Z^d$, with $V_x \subset W$ the torsion part of $G_x$. 
Hence, for every finite quasi-orbit $[x]$, we have $\hat{G}_x \cong \hat{V}_x \times \mathbb T^d$. Notice that $\hat{V}_x \cong V_x$ (noncanonically) because $V_x$ is finite. \end{remark} \end{document}
\begin{document} \title{Thermal effect on mixed state geometric phases for neutrino propagation in a magnetic field} \author{Da-Bao Yang} \email{[email protected]} \affiliation{Department of fundamental physics, School of Science, Tianjin Polytechnic University, Tianjin 300387, People's Republic of China} \author{Ji-Xuan Hou} \affiliation{Department of Physics, Southeast University, Nanjing 211189, People's Republic of China} \author{Ku Meng} \affiliation{Department of fundamental physics, School of Science, Tianjin Polytechnic University, Tianjin 300387, People's Republic of China} \date{\today} \begin{abstract} In astrophysical environments, neutrinos may propagate over a long distance in a magnetic field. In the presence of a rotating magnetic field, the neutrino spin can flip from a left-handed to a right-handed state. Smirnov demonstrated that the pure state geometric phase due to the neutrino spin precession may cause resonant spin conversion inside the Sun. In general, however, the neutrinos may be in a thermal ensemble. In this article, the corresponding mixed state geometric phases are formulated, including both the off-diagonal and the diagonal ones, and their dependence on temperature is analyzed. \end{abstract} \pacs{03.65.Vf, 75.10.Pq, 31.15.ac} \maketitle \section{Introduction} \label{sec:introduction} The geometric phase was discovered by Berry \cite{berry1984quantal} in the setting of adiabatic evolution. It was then generalized by Wilczek and Zee \cite{Wilczek1984appearance}, Aharonov and Anandan \cite{aharonov1987phase,anandon1988nonadiabatic}, and Samuel and Bhandari \cite{samuel1988general} in the context of pure states. Moreover, it was extended to mixed state counterparts. An operationally well-defined notion was proposed by Sj\"{o}qvist \emph{et al.} \cite{sjoqvist2000mixed} based on interferometry.
Subsequently, it was generalized to the degenerate case by Singh \emph{et al.} \cite{singh2003geometric} and to nonunitary evolution by Tong \emph{et al.} \cite{tong2004kinematic} by means of a kinematic approach. In addition, when the final state is orthogonal to the initial state, the above geometric phase is undefined. A complementary notion to the usual geometric phase was therefore put forward by Manini and Pistolesi \cite{manini2000off}. The new phase is called the off-diagonal geometric phase, and it was generalized to the non-Abelian case by Kult \emph{et al.} \cite{kult2007nonabelian}. It was also extended to mixed states undergoing unitary evolution by Filipp and Sj\"{o}qvist \cite{filipp2003PRL,filipp2003offdiagonalPRA}. A further extension to the non-degenerate case was made by Tong \emph{et al.} \cite{tong2005offdiagonal} by the kinematic approach. Finally, there are excellent review articles \cite{xiao2010Berry} and monographs \cite{shapere1989book,bohm2003book,chru2004book} discussing its influence and applications in physics and other natural sciences. As is well known, the neutrino plays an important role in particle physics and astronomy. Smirnov investigated the effect of resonant spin conversion of solar neutrinos induced by the geometric phase \cite{smirnov1991solarneutrino}. Joshi and Jain worked out the geometric phase of a neutrino propagating in a rotating transverse magnetic field \cite{joshi2015neutrino}. However, their discussions are confined to the pure state case. In this article, we discuss the mixed state geometric phases of the neutrino, ranging from the off-diagonal phase to the diagonal one. This paper is organized as follows. In the next section, the off-diagonal geometric phase for mixed states is reviewed, as well as the usual mixed state geometric phase. Furthermore, the equation governing the propagation of the two helicity components of the neutrino is recalled. In Sec.
III, both the off-diagonal and the diagonal mixed state geometric phases for a neutrino in a thermal state are calculated. Finally, a conclusion is drawn in the last section. \section{Review of off-diagonal phase} \label{sec:reviews} Suppose a non-degenerate density matrix takes the form \begin{equation} \rho_{1}=\lambda_{1}|\psi_{1}\rangle\langle\psi_{1}|+\cdots+\lambda_{N}|\psi_{N}\rangle\langle\psi_{N}|.\label{eq:DenstiyMatrix1} \end{equation} Moreover, a family of density operators that cannot interfere with $\rho_{1}$ is introduced \cite{filipp2003offdiagonalPRA}, namely \[ \rho_{n}=W^{n-1}\rho_{1}(W^{\dagger})^{n-1},\ n=1,...,N, \] where \[ W=|\psi_{1}\rangle\langle\psi_{N}|+|\psi_{N}\rangle\langle\psi_{N-1}|+\cdots+|\psi_{2}\rangle\langle\psi_{1}|. \] During unitary evolution, besides the usual mixed state geometric phase, there exists a so-called mixed state off-diagonal phase, which reads \cite{filipp2003offdiagonalPRA} \begin{equation} \gamma_{\rho_{j_{1}}...\rho_{j_{l}}}^{(l)}=\Phi[Tr(\prod_{a=1}^{l}U^{\parallel}(\tau)\sqrt[l]{\rho_{j_{a}}})],\label{eq:OffdiagonlaGeometricPhase} \end{equation} where $\Phi[z]\equiv z/|z|$ for a nonzero complex number $z$ and \cite{tong2005offdiagonal} \begin{equation} U^{\parallel}=U(t)\sum_{k=1}^{N}e^{-i\delta_{k}}|\psi_{k}\rangle\langle\psi_{k}|,\label{eq:ParallelUnitaryEvolution} \end{equation} in which \begin{equation} \delta_{k}=-i\int_{0}^{t}\langle\psi_{k}|U^{\dagger}(t^{\prime})\dot{U}(t^{\prime})|\psi_{k}\rangle dt^{\prime}\label{eq:DynamicalPhase} \end{equation} and $U(t)$ is the time evolution operator of the system. Moreover, $U^{\parallel}$ satisfies the parallel transport condition \[ \langle\psi_{k}|U^{\parallel\dagger}(t)\dot{U}^{\parallel}(t)|\psi_{k}\rangle=0,\ k=1,\cdots,N.
\] In addition, the usual mixed state geometric phase factor \cite{tong2004kinematic} takes the following form: \begin{equation} \gamma=\Phi\left[\sum_{k=1}^{N}\lambda_{k}\langle\psi_{k}|U(\tau)|\psi_{k}\rangle e^{-i\delta_{k}}\right].\label{eq:DiagonalGeometricPhase} \end{equation} The two helicity components $\left(\begin{array}{cc} \nu_{R} & \nu_{L}\end{array}\right)^{T}$ of a neutrino propagating in a magnetic field obey the following equation \cite{smirnov1991solarneutrino}: \begin{equation} i\frac{d}{dt}\left(\begin{array}{c} \nu_{R}\\ \nu_{L} \end{array}\right)=\left(\begin{array}{cc} \frac{V}{2} & \mu_{\nu}Be^{-i\omega t}\\ \mu_{\nu}Be^{i\omega t} & -\frac{V}{2} \end{array}\right)\left(\begin{array}{c} \nu_{R}\\ \nu_{L} \end{array}\right),\label{eq:Schrodinger} \end{equation} where $T$ denotes matrix transposition, $B_{x}+iB_{y}=Be^{i\omega t}$ describes the rotating transverse magnetic field, $\mu_{\nu}$ represents the magnetic moment of a massive Dirac neutrino and $V$ is a term due to the neutrino mass as well as its interaction with matter. The instantaneous eigenvalues and eigenvectors of the Hamiltonian take the following form \cite{joshi2015neutrino}: \[ E_{1}=+\sqrt{\left(\frac{V}{2}\right)^{2}+(\mu_{\nu}B)^{2}}, \] \begin{equation} |\psi_{1}\rangle=\frac{1}{N}\left(\begin{array}{c} \mu_{\nu}B\\ -e^{i\omega t}\left(\frac{V}{2}-E_{1}\right) \end{array}\right)\label{eq:EigenVector1} \end{equation} and \[ E_{2}=-\sqrt{\left(\frac{V}{2}\right)^{2}+(\mu_{\nu}B)^{2}}, \] \begin{equation} |\psi_{2}\rangle=\frac{1}{N}\left(\begin{array}{c} e^{-i\omega t}\left(\frac{V}{2}-E_{1}\right)\\ \mu_{\nu}B \end{array}\right),\label{eq:EigenVector2} \end{equation} where the normalization factor \[ N=\sqrt{\left(\frac{V}{2}-E_{1}\right)^{2}+\left(\mu_{\nu}B\right)^{2}}.
\] If this system is in a thermal state, the density operator can be written as \begin{equation} \rho=\lambda_{1}|1\rangle\langle1|+\lambda_{2}|2\rangle\langle2|,\label{eq:DensityMatrix} \end{equation} where \[ \lambda_{1}=\frac{e^{-\beta E_{1}}}{e^{-\beta E_{1}}+e^{-\beta E_{2}}} \] and \[ \lambda_{2}=\frac{e^{-\beta E_{2}}}{e^{-\beta E_{1}}+e^{-\beta E_{2}}}. \] In addition, $\beta=1/(kT)$, where $k$ is the Boltzmann constant and $T$ represents the temperature. In the next section, both the off-diagonal and the diagonal mixed state geometric phases will be calculated. \section{Mixed state geometric phase} \label{sec:Nonadiabatic} The differential equation \eqref{eq:Schrodinger} can be exactly solved by means of the transformation \begin{equation} \left(\begin{array}{c} \nu_{R}\\ \nu_{L} \end{array}\right)=e^{-i\sigma_{z}\frac{1}{2}\omega t}\left(\begin{array}{c} a\\ b \end{array}\right),\label{eq:TransformedState} \end{equation} where $\sigma_{z}$ is the Pauli matrix along the $z$ direction, \[ \sigma_{z}=\left(\begin{array}{cc} 1 & 0\\ 0 & -1 \end{array}\right). \] By substituting Eq. \eqref{eq:TransformedState} into Eq. \eqref{eq:Schrodinger}, one obtains \begin{equation} i\frac{d}{dt}\left(\begin{array}{c} a\\ b \end{array}\right)=\tilde{H}\left(\begin{array}{c} a\\ b \end{array}\right),\label{eq:TransformedSchodinger} \end{equation} where \[ \tilde{H}=\mu_{\nu}B\sigma_{x}+\frac{1}{2}(V-\omega)\sigma_{z}. \] Furthermore, it can be written in the form \begin{equation} \tilde{H}=\frac{1}{2}\Omega\left(\begin{array}{ccc} \frac{2\mu_{\nu}B}{\Omega} & 0 & \frac{V-\omega}{\Omega}\end{array}\right)\centerdot\left(\begin{array}{ccc} \sigma_{x} & \sigma_{y} & \sigma_{z}\end{array}\right),\label{eq:TransformedHamiltonian} \end{equation} where $\Omega=\sqrt{\left(2\mu_{\nu}B\right)^{2}+\left(V-\omega\right)^{2}}$. Because $\tilde{H}$ is independent of time, Eq.
\eqref{eq:TransformedSchodinger} can be exactly solved, and its time evolution operator takes the form \[ \tilde{U}=e^{-i\tilde{H}t}. \] Combining this with Eq. \eqref{eq:TransformedState}, the time evolution operator for Eq. \eqref{eq:Schrodinger} is \begin{equation} U=e^{-i\tilde{H}t}e^{i\sigma_{z}\frac{1}{2}\omega t}.\label{eq:UnitaryEvolution} \end{equation} By substituting Eq. \eqref{eq:TransformedHamiltonian} into Eq. \eqref{eq:UnitaryEvolution}, the above operator can be written in the explicit form \[ U=\left(\begin{array}{cc} \cos\frac{\Omega}{2}t-i\frac{V-\omega}{\Omega}\sin\frac{\Omega}{2}t & -i\frac{2\mu_{\nu}B}{\Omega}\sin\frac{\Omega}{2}t\\ -i\frac{2\mu_{\nu}B}{\Omega}\sin\frac{\Omega}{2}t & \cos\frac{\Omega}{2}t+i\frac{V-\omega}{\Omega}\sin\frac{\Omega}{2}t \end{array}\right)\left(\begin{array}{cc} e^{i\frac{\omega t}{2}} & 0\\ 0 & e^{-i\frac{\omega t}{2}} \end{array}\right). \] In order to calculate the off-diagonal phase \eqref{eq:OffdiagonlaGeometricPhase}, by use of Eq. \eqref{eq:ParallelUnitaryEvolution} we can work out \[ \begin{array}{ccc} U_{11}^{\parallel} & \equiv & \langle\psi_{1}|U(t)\left(e^{-i\delta_{1}}|\psi_{1}\rangle\langle\psi_{1}|+e^{-i\delta_{2}}|\psi_{2}\rangle\langle\psi_{2}|\right)|\psi_{1}\rangle\\ & = & U_{11}e^{-i\delta_{1}}, \end{array} \] where $U_{11}=\langle\psi_{1}|U(t)|\psi_{1}\rangle$. In order to simplify the result, let us consider the simplest case, $t=\tau=2\pi/\Omega$. Then \begin{equation} U_{11}=-\frac{1}{N^{2}}\left[\mu_{\nu}^{2}B^{2}e^{i\frac{\omega\tau}{2}}+\left(\frac{V}{2}-E_{1}\right)^{2}e^{-i\frac{\omega\tau}{2}}\right]=U_{22}^{*},\label{eq:UDiagonal} \end{equation} where $*$ denotes the complex conjugate operation.
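As a quick numerical sanity check of this explicit form, one can verify that it is unitary and coincides with $e^{-i\tilde{H}t}e^{i\sigma_{z}\frac{1}{2}\omega t}$. The parameter values below are illustrative only (not taken from the text), with $\hbar=1$ and the product $\mu_{\nu}B$ abbreviated as a single number; the sketch assumes NumPy.

```python
import numpy as np

# Illustrative (hypothetical) parameters; hbar = 1 and mu_B stands for mu_nu*B.
mu_B, V, omega = 0.7, 1.3, 0.4
Omega = np.hypot(2 * mu_B, V - omega)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Htilde = mu_B * sx + 0.5 * (V - omega) * sz  # rotating-frame Hamiltonian

def U_explicit(t):
    """Closed-form evolution operator from the matrix displayed above."""
    c, s = np.cos(Omega * t / 2), np.sin(Omega * t / 2)
    left = np.array([[c - 1j * (V - omega) / Omega * s, -1j * 2 * mu_B / Omega * s],
                     [-1j * 2 * mu_B / Omega * s, c + 1j * (V - omega) / Omega * s]])
    return left @ np.diag([np.exp(1j * omega * t / 2), np.exp(-1j * omega * t / 2)])

for t in (0.3, 1.0, 2 * np.pi / Omega):
    # e^{-i Htilde t} via the spectral decomposition of the Hermitian Htilde
    w, v = np.linalg.eigh(Htilde)
    U_ref = (v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T
             @ np.diag([np.exp(1j * omega * t / 2), np.exp(-1j * omega * t / 2)]))
    assert np.allclose(U_explicit(t), U_ref)   # matches e^{-i Htilde t} e^{i sz w t/2}
    assert np.allclose(U_explicit(t).conj().T @ U_explicit(t), np.eye(2))  # unitary
```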
By similar calculations, one obtains \begin{equation} U_{12}=\frac{2}{N^{2}}\mu_{\nu}B\left(\frac{V}{2}-E_{1}\right)\sin\left(\frac{\omega\tau}{2}\right)e^{-i(\omega\tau+\frac{\pi}{2})}=-U_{21}^{*}.\label{eq:UOffDiagonal} \end{equation} Furthermore, $\delta_{1}$ can be calculated explicitly by substituting Eq. \eqref{eq:UnitaryEvolution} and Eq. \eqref{eq:EigenVector1} into Eq. \eqref{eq:DynamicalPhase}, which yields \begin{equation} \delta_{1}=\frac{1}{N^{2}}\left[2\mu_{\nu}^{2}B^{2}\left(\frac{V}{2}-E_{1}\right)+\left(\frac{V}{2}-E_{1}\right)^{2}\left(\frac{V}{2}-\omega\right)-\mu_{\nu}^{2}B^{2}\left(\frac{V}{2}-\omega\right)\right]\tau.\label{eq:DynamicalPhase1} \end{equation} By a similar calculation, one gets \begin{equation} \delta_{2}=-\delta_{1}.\label{eq:DynamicalPhase2} \end{equation} Hence Eq. \eqref{eq:ParallelUnitaryEvolution} can be written out explicitly, \begin{equation} \left(\begin{array}{cc} U_{11}^{\parallel} & U_{12}^{\parallel}\\ U_{21}^{\parallel} & U_{22}^{\parallel} \end{array}\right)=\left(\begin{array}{cc} U_{11} & U_{12}\\ U_{21} & U_{22} \end{array}\right)\left(\begin{array}{cc} e^{-i\delta_{1}} & 0\\ 0 & e^{-i\delta_{2}} \end{array}\right).\label{eq:RelationsUnitaryEvolutions} \end{equation} Now, let us calculate the mixed state off-diagonal phase \begin{equation} \gamma_{\rho_{1}\rho_{2}}^{(2)}=\Phi\left[Tr\left(\prod_{a=1}^{2}U^{\parallel}(\tau)\sqrt{\rho_{a}}\right)\right],\label{eq:OffDiagonalGeometricPhaseForNeutrino} \end{equation} where $\rho_{1}=\lambda_{1}|1\rangle\langle1|+\lambda_{2}|2\rangle\langle2|$ and $\rho_{2}=\lambda_{1}|2\rangle\langle2|+\lambda_{2}|1\rangle\langle1|$.
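This trace can also be checked numerically. The sketch below (illustrative parameter values, not from the text; $\hbar=k=1$, $\mu_{\nu}B$ abbreviated as a single number; assumes NumPy) builds $U(\tau)$, the $t=0$ eigenvectors and the dynamical phase $\delta_{1}$ from the expressions above, and confirms that the trace is real, so the resulting phase factor can only be $\pm1$.

```python
import numpy as np

# Illustrative (hypothetical) parameters; mu_B stands for mu_nu*B.
mu_B, V, omega = 0.7, 1.3, 0.4
Omega = np.hypot(2 * mu_B, V - omega)
tau = 2 * np.pi / Omega
E1 = np.sqrt((V / 2) ** 2 + mu_B ** 2)
Nf = np.sqrt((V / 2 - E1) ** 2 + mu_B ** 2)

# Instantaneous eigenvectors at t = 0 and the explicit evolution operator U(tau).
psi1 = np.array([mu_B, -(V / 2 - E1)]) / Nf
psi2 = np.array([(V / 2 - E1), mu_B]) / Nf
c, s = np.cos(Omega * tau / 2), np.sin(Omega * tau / 2)
U = (np.array([[c - 1j * (V - omega) / Omega * s, -1j * 2 * mu_B / Omega * s],
               [-1j * 2 * mu_B / Omega * s, c + 1j * (V - omega) / Omega * s]])
     @ np.diag([np.exp(1j * omega * tau / 2), np.exp(-1j * omega * tau / 2)]))

# Dynamical phase from the closed-form expression above; delta_2 = -delta_1.
delta1 = (2 * mu_B**2 * (V / 2 - E1)
          + (V / 2 - E1) ** 2 * (V / 2 - omega)
          - mu_B**2 * (V / 2 - omega)) * tau / Nf**2

U11 = psi1.conj() @ U @ psi1
U22 = psi2.conj() @ U @ psi2
U12 = psi1.conj() @ U @ psi2
U21 = psi2.conj() @ U @ psi1

beta = 2.0  # illustrative inverse temperature
lam1 = np.exp(-beta * E1) / (np.exp(-beta * E1) + np.exp(beta * E1))
lam2 = 1 - lam1

trace = (np.sqrt(lam1 * lam2)
         * ((U11 * np.exp(-1j * delta1)) ** 2 + (U22 * np.exp(1j * delta1)) ** 2)
         + U12 * U21)
assert abs(trace.imag) < 1e-12  # real trace, hence the phase is 0 or pi
```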
In the basis of $|\psi_{1}\rangle$ and $|\psi_{2}\rangle$, \begin{equation} \begin{array}{ccc} Tr\left(\prod_{a=1}^{2}U^{\parallel}(\tau)\sqrt{\rho_{a}}\right) & = & \sum_{b=1}^{2}\langle\psi_{b}|\prod_{a=1}^{2}U^{\parallel}(\tau)\sqrt{\rho_{a}}|\psi_{b}\rangle\\ & = & \sqrt{\lambda_{1}\lambda_{2}}\left[\left(U_{11}^{\parallel}\right)^{2}+\left(U_{22}^{\parallel}\right)^{2}\right]+U_{12}^{\parallel}U_{21}^{\parallel}. \end{array}\label{eq:TraceOffDiagonal} \end{equation} By substituting Eq. \eqref{eq:RelationsUnitaryEvolutions} into Eq. \eqref{eq:TraceOffDiagonal}, we can obtain the simpler result \[ Tr\left(\prod_{a=1}^{2}U^{\parallel}(\tau)\sqrt{\rho_{a}}\right)=\sqrt{\lambda_{1}\lambda_{2}}\left[\left(U_{11}e^{-i\delta_{1}}\right)^{2}+\left(U_{22}e^{-i\delta_{2}}\right)^{2}\right]+U_{12}U_{21}e^{-i\left(\delta_{1}+\delta_{2}\right)}. \] By substituting Eq. \eqref{eq:UDiagonal} and Eq. \eqref{eq:UOffDiagonal} into the above equation, the off-diagonal geometric phase \eqref{eq:OffDiagonalGeometricPhaseForNeutrino} can be calculated explicitly, \begin{equation} \begin{array}{ccc} \gamma_{\rho_{1}\rho_{2}}^{(2)} & = & \Phi\{\left(\frac{V}{2}-E_{1}\right)^{2}\mu_{\nu}^{2}B^{2}\left(\cos\omega\tau-1\right)+\sqrt{\lambda_{1}\lambda_{2}}\times\\ & & [\left(\frac{V}{2}-E_{1}\right)^{4}\cos\left(\omega\tau+2\delta_{1}\right)+\mu_{\nu}^{4}B^{4}\cos\left(\omega\tau-2\delta_{1}\right)\\ & & +2\mu_{\nu}^{2}B^{2}\left(\frac{V}{2}-E_{1}\right)^{2}\cos2\delta_{1}]\}. \end{array}\label{eq:OffDiagonalPhaseFinalResult} \end{equation} Since the quantity inside $\Phi$ is real, the corresponding phase is either $\pi$ or $0$, depending on temperature and magnetic field; apart from this discrete jump, the phase is insensitive to temperature. By substituting Eq. \eqref{eq:UDiagonal}, Eq. \eqref{eq:DynamicalPhase1} and Eq. \eqref{eq:DynamicalPhase2} into Eq.
\eqref{eq:DiagonalGeometricPhase}, the diagonal geometric phase for the mixed state reads \begin{equation} \begin{array}{ccc} \gamma & = & \Phi\{-\left[\lambda_{1}e^{i(\frac{\omega\tau}{2}-\delta_{1})}+\lambda_{2}e^{-i(\frac{\omega\tau}{2}-\delta_{1})}\right]\mu_{\nu}^{2}B^{2}-\\ & & \left[\lambda_{1}e^{-i(\frac{\omega\tau}{2}+\delta_{1})}+\lambda_{2}e^{i(\frac{\omega\tau}{2}+\delta_{1})}\right]\left(\frac{V}{2}-E_{1}\right)^{2}\}. \end{array}\label{eq:DiagonalPhaseFinalResult} \end{equation} From the above result, we can conclude that if $\lambda_{1}=\lambda_{2}$, in other words $T\rightarrow\infty$, the corresponding phase can only be $\pi$ or $0$. In other circumstances, it may vary continuously in an interval. In contrast to the off-diagonal phase, the diagonal phase is thus more sensitive to temperature. \section{Conclusions and Acknowledgements} \label{sec:discussion} In this article, the time evolution operator for the neutrino spin in the presence of a uniformly rotating magnetic field is obtained. Under this time evolution a thermal state of the neutrinos evolves, and both the off-diagonal and the diagonal mixed state geometric phases for this state have been calculated, with analytic forms achieved. In addition, we conclude that the diagonal phase is more sensitive to temperature than the off-diagonal one. D.B.Y. is supported by the NSF (Natural Science Foundation) of China under Grant No. 11447196. J.X.H. is supported by the NSF of China under Grant 11304037, the NSF of Jiangsu Province, China under Grant BK20130604, as well as the Ph.D. Programs Foundation of Ministry of Education of China under Grant 20130092120041. K.M. is supported by the NSF of China under Grant No. 11447153. \end{document}
\begin{document} \title{On Carlson's depth conjecture in group cohomology} \author[D.~J. Green]{David J. Green} \address{Dept of Mathematics \\ Univ.\@ of Wuppertal \\ D--42097 Wuppertal \\ Germany} \email{[email protected]} \subjclass{Primary 20J06} \date{5 June 2002} \begin{abstract} We establish a weak form of Carlson's conjecture on the depth of the mod-$p$ cohomology ring of a $p$-group. In particular, Duflot's lower bound for the depth is tight if and only if the cohomology ring is not detected on a certain family of subgroups. The proofs use the structure of the cohomology ring as a comodule over the cohomology of the centre via the multiplication map. We demonstrate the existence of systems of parameters (so-called polarised systems) which are particularly well adapted to this comodule structure. \end{abstract} \maketitle \section*{Introduction} \noindent Let $G$ be a finite $p$-group and $C$ its greatest central elementary abelian subgroup. Write $k$ for the prime field $\f$. Cohomology will always be with coefficients in~$k$. Denote by $r$ the $p$-rank of~$G$, and by $z$ the $p$-rank of $C$. Following Broto and Henn~\cite{BrHe:Central} we shall exploit the fact that the multiplication map $\mu \colon G \times C \rightarrow G$, $(g,c) \mapsto g.c$ is a group homomorphism. The main result of this paper is as follows: \begin{theorem} \label{theorem:introED1} Suppose that $G$ is a $p$-group whose centre has $p$-rank $z$. Then the following statements are equivalent: \begin{enumerate} \item The mod-$p$ cohomology ring $\coho{G}$ is not detected on the centralizers of its rank $z+1$ elementary abelian subgroups. \item There is an associated prime $\mathfrak{p}$ such that $\coho{G}/\mathfrak{p}$ has dimension $z$. \item The depth of $\coho{G}$ equals $z$. \end{enumerate} \end{theorem} \noindent This is a special case of a conjecture due to Carlson~\cite{Carlson:DepthTransfer}, reproduced here as Conjecture~\ref{conjecture:ED}. 
Recall that Duflot proved in~\cite{Duflot:Depth} that $z$ is a lower bound for the depth. So this result characterises the cases where Duflot's lower bound is tight. Theorem~\ref{theorem:introED1} is proved as Theorem~\ref{theorem:ED1}. The proof rests upon the existence of \emph{polarised systems}: homogeneous systems of parameters for $\coho{G}$ which are particularly well adapted to the $\coho{C}$-comodule structure. There are two extreme types of behaviour which a cohomology class $x \in \coho{G}$ can demonstrate under the comodule structure map $\mu^*$: one extreme is that $\operatorname{Res}_C(x)$ is nonzero, and so $\mu^*(x) = 1 \otimes \operatorname{Res}_C(x) + \text{other terms}$. The other extreme is that $x$ is primitive, meaning that $\mu^*(x) = x \otimes 1$. Roughly speaking, a polarised system of parameters is one consisting solely of elements which each exhibit one or the other of these extreme kinds of behaviour. The precise definition, which ensures that each such system is a detecting sequence for the depth of $\coho{G}$, is given in Definition~\ref{definition:polarised}\@. Polarised systems of parameters always exist, as is proved in Theorem~\ref{theorem:existence}\@. \noindent In addition to Theorem~\ref{theorem:introED1} we also prove a weak form of the general case of Carlson's conjecture. This is done in Theorem~\ref{theorem:polarisedEqualities}, which includes the following statement: \begin{theorem} \label{theorem:introMyEDgen} Let $\zeta_1$, \dots, $\zeta_z$, $\kappa_1$, \dots, $\kappa_{r-z}$ be a polarised system of parameters for $\coho{G}$. Then $\coho{G}$ has depth $z + \sa$, where $\sa \in \{0,\ldots,r-z\}$ is defined by \[ \sa = \max \{ s \leq r-z \mid \text{$\kappa_1,\ldots,\kappa_s$ is a regular sequence in $\coho{G}$} \} \, . \] \end{theorem} \section{Primitive comodule elements} \label{section:primitive} \noindent Group multiplication $\mu \colon G \times C \rightarrow G$ is a group homomorphism. 
As observed by Broto and Henn~\cite{BrHe:Central}, this means that $\coho{G}$ inherits the structure of a comodule over the coalgebra $\coho{C}$. Recall that $x \in \coho{G}$ is called a primitive comodule element if $\mu^*(x) = x \otimes 1 \in \coho{G \times C} \cong \coho{G} \otimes_k \coho{C}$. As the comodule structure map $\mu^*$ is simultaneously a ring homomorphism, it follows that the primitives form a subalgebra $\PH{G}$ of $\coho{G}$. As the quotient map $G \rightarrow G/C$ coequalises $\mu$ and projection onto the first factor of $G \times C$, it follows that the image of inflation from $\coho{G/C}$ is contained in $\PH{G}$. \begin{lemma} \label{lemma:quotientComodule} Suppose that $I$ is a homogeneous ideal in $\coho{G}$ which is generated by primitive elements. Then \[ \mu^*(I) \subseteq I \otimes_k \coho{C} \, , \] and so $\mu^*$ induces a ring homomorphism \[ \lambda \colon \coho{G}/I \rightarrow \coho{G}/I \otimes_k \coho{C} \] which induces an $\coho{C}$-comodule structure on $\coho{G}/I$. \end{lemma} \begin{proof} For $x \in I$ and $y \in \coho{G}$ one has $\mu^*(xy) = \mu^*(x) \mu^*(y) = (x \otimes 1) \mu^*(y) \in I \otimes_k \coho{C}$. \end{proof} \begin{lemma} \label{lemma:myBrotoCarlsonHenn} Suppose that $\zeta_1, \ldots, \zeta_t$ is a sequence of homogeneous elements of $\coho{G}$ whose restrictions form a regular sequence in $\coho{C}$, and suppose that $I$~is an ideal in $\coho{G}$ generated by primitive elements. Then $\zeta_1, \ldots, \zeta_t$ is a regular sequence for the quotient ring $\coho{G}/I$. \end{lemma} \begin{proof} Carlson's proof for the case $I=0$ (Proposition 5.2 of~\cite{Carlson:Problems}) generalises easily. Denote by $R$ the polynomial algebra $k[\zeta_1, \ldots, \zeta_t]$. 
The map $\lambda$~of Lemma~\ref{lemma:quotientComodule} induces an $R$-module structure on $\coho{G}/I \otimes_k \coho{C}$, and $\lambda$~is a split monomorphism of $R$-modules, the splitting map being induced by projection onto the first factor $G \times C \rightarrow G$. So as an $R$-module $\coho{G}/I$ is a direct summand of $\coho{G}/I \otimes_k \coho{C}$. The result will therefore follow if we can show that $\coho{G}/I \otimes_k \coho{C}$ is a free $R$-module. To see that $\coho{G}/I \otimes_k \coho{C}$ is indeed a free $R$-module set \[ F_i := \sum_{j \geq i} \coho[j]{G}/(I \cap \coho[j]{G}) \] and observe that $F_i \otimes_k \coho{C}$ is an $R$-submodule of $\coho{G}/I \otimes_k \coho{C} = F_0 \otimes_k \coho{C}$. Projection $G \times C \rightarrow C$ makes $F_i \otimes_k \coho{C} / F_{i+1} \otimes_k \coho{C}$ a free $\coho{C}$-module. Now for $x \in F_i$, $y \in \coho{C}$ and $\theta \in R$ we have $\theta.(x \otimes y) \in x \otimes (\operatorname{Res}^G_C \theta). y + F_{i+1} \otimes_k \coho{C}$. So the $R$-module structure on $F_i \otimes_k \coho{C} / F_{i+1} \otimes_k \coho{C}$ is induced by the restriction map $R \rightarrow \coho{G} \rightarrow \coho{G}/I \rightarrow \coho{C}$ from the free $\coho{C}$-structure. But $\coho{C}$ is a free $R$-module by Theorem 10.3.4 of \cite{Evens:book}, because the restrictions of $\zeta_1,\ldots,\zeta_t$ form a regular sequence. So $F_i \otimes_k \coho{C} / F_{i+1} \otimes_k \coho{C}$ is a free $R$-module for all~$i$, whence it follows that $F_0 \otimes_k \coho{C} / F_i \otimes_k \coho{C}$ is a free $R$-module for all~$i$. As the degree of each homogeneous element of $F_i \otimes_k \coho{C}$ is at least~$i$, it follows that $\coho{G}/I \otimes_k \coho{C}$ is itself a free $R$-module. \end{proof} \begin{corollary} \label{coroll:myBrotoCarlsonHenn} Let $G$ be a $p$-group whose centre has $p$-rank~$z$. Suppose that there is a length~$s$ regular sequence in $\coho{G}$ which consists entirely of primitive elements. 
Then the depth of $\coho{G}$ is at least $z + s$. \end{corollary} \begin{proof} Let $I$ be the ideal generated by the primitive elements in the regular sequence. Let $\zeta_1,\ldots,\zeta_z$ be elements of $\coho{G}$ whose restrictions form a homogeneous system of parameters for $\coho{C}$: it is well known that such sequences exist. Now apply Lemma~\ref{lemma:myBrotoCarlsonHenn}. One concrete example of such classes $\zeta_i$ is obtained as follows. Let $\rho_G$ be the regular representation of~$G$, and $\zeta_i$ the Chern class $c_{p^n - p^{n-i}}(\rho_G)$ for $1 \leq i \leq z$, where $p^n$ is the order of~$G$. Then $\rho_G$ restricts to~$C$ as $|G:C|$ copies of the regular representation $\rho_C$, whence $\operatorname{Res}_C(\zeta_i) = c_{p^z - p^{z-i}}(\rho_C)^{|G:C|}$. But the $c_{p^z - p^{z-i}}(\rho_C)$ are (up to a sign) the Dickson invariants. See the proof of Theorem~\ref{theorem:existence} for more details. \end{proof} \section{Polarised systems of parameters} \label{section:polarised} \noindent We shall now give the definition of a polarised system of parameters, the key definition of this paper. In fact we shall introduce two closely related concepts: the axioms for a polarised system (Definition~\ref{definition:polarised}) are easily checked in practice, whereas the special polarised systems of Definition~\ref{definition:specialPolarised} have precisely the properties we shall need to investigate depth. Lemma~\ref{lemma:polarisedDefinitions} shows that the two concepts are more or less interchangeable. \begin{definition} \label{definition:ACG} Let $G$ be a $p$-group of $p$-rank $r$ whose centre has $p$-rank $z$. 
Denote by $C$ the greatest central elementary abelian subgroup of~$G$, and set \begin{align*} \mathcal{A}^C(G) & := \{ V \leq G \mid \text{$V$ is elementary abelian and contains~$C$} \} \, , \\ \mathcal{A}^C_d(G) & := \{ V \in \mathcal{A}^C(G) \mid \text{$V$ has $p$-rank $d$} \} \, , \\ \mathcal{H}^C_d(G) & := \{ C_G(V) \mid V \in \mathcal{A}^C_d(G) \} \, . \end{align*} So $\mathcal{A}^C(G)$ is the disjoint union of the $\mathcal{A}^C_{z+s}(G)$ for $0 \leq s \leq r-z$. \end{definition} \begin{definition} \label{definition:polarised} Let $G$ be a $p$-group of $p$-rank $r$ whose centre has $p$-rank $z$. Recall that the inflation map $\operatorname{Inf} \colon \coho{V/C} \rightarrow \coho{V}$ is a split monomorphism for each $V \in \mathcal{A}^C(G)$, and so its image $\operatorname{Im} \operatorname{Inf}$ is isomorphic to $\coho{V/C}$. A system $\zeta_1,\ldots,\zeta_z$, $\kappa_1,\ldots,\kappa_{r-z}$ of homogeneous elements of $\coho{G}$ shall be called a polarised system of parameters if the following four axioms are satisfied. \begin{description} \item[(PS1)] $\operatorname{Res}_C(\zeta_1)$, \dots, $\operatorname{Res}_C(\zeta_z)$ is a system of parameters for $\coho{C}$. \item[(PS2)] $\operatorname{Res}_V(\kappa_j)$ lies in $\operatorname{Im} \operatorname{Inf}$ for each $1 \leq j \leq r-z$ and for each $V \in \mathcal{A}^C(G)$. \item[(PS3)] For each $V \in \mathcal{A}^C(G)$, the restrictions $\operatorname{Res}_V(\kappa_1), \ldots, \operatorname{Res}_V(\kappa_s)$ constitute a system of parameters for $\operatorname{Im} \operatorname{Inf}$. Here $z+s$ is the rank of~$V$. \item[(PS4)] $\operatorname{Res}_V(\kappa_j) = 0$ for $V \in \mathcal{A}^C_{z+s}(G)$ with $0 \leq s < j \leq r-z$. \end{description} \end{definition} \begin{remark} Polarised systems of parameters always exist, as we shall see in Theorem~\ref{theorem:existence}\@. Observe that Axiom (PS1) involves only the $\zeta_i$, whereas the remaining axioms involve only the $\kappa_j$.
Basically Axiom (PS1) says that $\zeta_1,\ldots,\zeta_z$ is a regular sequence which can be detected on the centre, (PS2) says that the $\kappa_j$ are primitive after raising to suitably high $p$th powers, and (PS3) says that the $\kappa_j$ together with the $\zeta_i$ will form a detecting sequence for the depth of $\coho{G}$. Axiom (PS4) is a more technical condition which we shall only use once: it is needed in Lemma~\ref{lemma:polarisedDefinitions} to show that, after raising to a suitably high $p$th power, each $\kappa_j$ is a sum of transfer classes as required by Axiom (PS5) below. \end{remark} \begin{lemma} Polarised systems of parameters for $\coho{G}$ are indeed systems of parameters. \end{lemma} \begin{proof} Let $V \in \mathcal{A}^C_{z+s}(G)$. The restrictions of $\zeta_1,\ldots,\zeta_z, \kappa_1,\ldots, \kappa_s$ constitute a system of parameters for~$\coho{V}$ by (PS1) and (PS3)\@. Hence $\zeta_1,\ldots,\zeta_z, \kappa_1, \ldots, \kappa_{r-z}$ are algebraically independent over~$k$, for we may choose $V$ to have $p$-rank~$r$. Now let $\gamma$ be a homogeneous element of $\coho{G}$. For $V \in \mathcal{A}^C(G)$ there is a monic polynomial $f_V(x)$ with coefficients in $k[\zeta_1,\ldots,\zeta_z, \kappa_1, \ldots, \kappa_{r-z}]$ such that $f_V(\gamma)$ has zero restriction to~$V$. Taking the product of all such polynomials one obtains a polynomial $f(x)$ such that $f(\gamma)$ has zero restriction to each maximal elementary abelian subgroup of~$G$. So $f(\gamma)$ is nilpotent by a well-known result of Quillen. \end{proof} \begin{definition} \label{definition:specialPolarised} Let $G$ be a $p$-group of $p$-rank $r$ whose centre has $p$-rank $z$. 
A system $\zeta_1,\ldots,\zeta_z$, $\kappa_1,\ldots,\kappa_{r-z}$ of homogeneous elements of $\coho{G}$ shall be called a special polarised system of parameters if it satisfies the following five axioms: (PS1), (PS3), (PS4) and \begin{description} \item[(PS$\mathbf{2'}$)] $\kappa_j$ is a primitive element of the $\coho{C}$-comodule $\coho{G}$ for each $1 \leq j \leq r-z$. \item[(PS5)] $\kappa_j$ lies in $\sum_{H \in \mathcal{H}^C_{z+i}(G)} \operatorname{Tr}^G_H(\coho{H})$ for each $1 \leq i \leq j \leq r-z$. \end{description} \end{definition} \begin{lemma} \label{lemma:polarisedDefinitions} Axiom (PS$2'$) implies Axiom (PS2), and so every special polarised system is a polarised system of parameters. Conversely for each polarised system $\zeta_1,\ldots,\zeta_z$, $\kappa_1, \ldots, \kappa_{r-z}$ there is a nonnegative integer $N$ such that $\zeta_1,\ldots,\zeta_z$, $\kappa_1^{p^N}, \ldots, \kappa_{r-z}^{p^N}$ is a special polarised system of parameters for $\coho{G}$. \end{lemma} \begin{proof} Let $V \in \mathcal{A}^C(G)$. Restriction from $\coho{G}$~to $\coho{V}$ is a map of $\coho{C}$-comodules and so sends primitive elements to primitive elements. But the subalgebra of primitive elements of $\coho{V}$ coincides with the image of inflation from $V/C$. So (PS$2'$) implies (PS2). Now suppose $\zeta_1,\ldots,\zeta_z$, $\kappa_1, \ldots, \kappa_{r-z}$ is a polarised system for $\coho{G}$. For each $1 \leq j \leq r-z$ the restriction of $\kappa_j$ to each $V \in \mathcal{A}^C(G)$ is primitive by (PS2)\@. Hence the element $\mu^*(\kappa_j) - \kappa_j \otimes 1$ of $\coho{G} \otimes \coho{C}$ has zero restriction to every maximal elementary abelian subgroup and is therefore nilpotent. For fixed $1 \leq i \leq r-z$, denote by $\mathcal{K}$ the set consisting of those subgroups $K$~of $G$ such that $C_G(K)$ is not $G$-conjugate to any subgroup of any $H \in \mathcal{H}^C_{z+i}$. 
Following Carlson (Proof of Corollary 2.2~of \cite{Carlson:DepthTransfer}), observe that every $K \in \mathcal{K}$ has $p$-rank less than $z+i$. Moreover every $K \in \mathcal{K}$ is contained in $K_C = \langle K,C \rangle$, and $K_C$~itself lies in~$\mathcal{K}$. So $\operatorname{Res}_{K_C}(\kappa_j) = 0$ for all $j \geq i$ by (PS4)\@. Hence each such $\kappa_j$ lies in the radical ideal $\sqrt{J'}$, where $J'$ is the ideal $\bigcap_{K \in \mathcal{K}} \ker \operatorname{Res}_K$. So by Benson's result on the image of the transfer map (Theorem~1.1 of \cite{Benson:ImTr}) the $\kappa_j$ also lie in $\sqrt{J}$, where $J$ is the ideal $\sum_{H \in \mathcal{H}^C_{z+i}(G)} \operatorname{Tr}^G_H(\coho{H})$. \end{proof} \begin{remark} The above proof is the only time we shall make use of the Axiom (PS4)\@. In particular, the results of \S\ref{section:specialPolarisedDepth} do not depend on (PS4)\@. I do not know whether or not (PS4) is a consequence of the remaining axioms for a special polarised system of parameters. Axiom (PS5) will be used in Lemma~\ref{lemma:kappaAssocPrime} to prove the existence of an associated prime ideal with desirable properties. \end{remark} \begin{lemma} \label{lemma:genBrotoHenn} Suppose that $\zeta_1,\ldots,\zeta_z$, $\kappa_1,\ldots,\kappa_{r-z}$ is a polarised system of parameters for $\coho{G}$. Let $0 \leq s \leq r-z$. Then the sequence $\zeta_1, \ldots, \zeta_z, \kappa_1, \ldots, \kappa_s$ is regular in $\coho{G}$ if and only if the sequence $\kappa_1, \ldots, \kappa_s$ is regular in $\coho{G}$. \end{lemma} \begin{proof} Recall that regular sequences may be permuted at will. Moreover, replacing one element of a sequence by its $p$th power has no effect on whether the sequence is regular or not. Hence by Lemma~\ref{lemma:polarisedDefinitions} it suffices to prove the result for special polarised systems. So we may assume that the given sequence is a special polarised system of parameters. 
Let $I$ be the homogeneous ideal in $\coho{G}$ generated by $\kappa_1, \ldots, \kappa_s$. By Axiom (PS$2'$) this ideal is generated by primitive elements. Also, the restrictions of $\zeta_1,\ldots,\zeta_z$ form a regular sequence in $\coho{C}$ by Axiom~(PS1)\@. Therefore Lemma~\ref{lemma:myBrotoCarlsonHenn} tells us that $\zeta_1,\ldots,\zeta_z$ constitute a regular sequence for $\coho{G}/I$. \end{proof} \section{Three depth-related numbers} \label{section:threeNumbers} \noindent In this section we shall introduce three numbers $\tauH$, $\taua$~and $\tauaH$, each of which is an approximation to the depth $\tau$ of $\coho{G}$. \begin{definition} Let $G$ be a $p$-group of $p$-rank $r$ whose centre has $p$-rank $z$. Write $\tau$ for the depth of $\coho{G}$ and set \[ \tauH := \max \{ d \in \{z, \ldots, r\} \mid \text{The family $\mathcal{H}^C_d(G)$ detects $\coho{G}$} \} \, . \] \end{definition} \noindent In~\cite{Carlson:DepthTransfer} Carlson formulates the following conjecture: \begin{conjecture}[Carlson] \label{conjecture:ED} The number~$\tauH$ coincides with the depth $\tau$~of $\coho{G}$. Moreover, $\coho{G}$ has an associated prime $\mathfrak{p}$ such that $\dim \coho{G}/\mathfrak{p} = \tau$. \end{conjecture} \noindent In fact, Carlson formulates the conjecture not just for $p$-groups, but for arbitrary finite groups. In this article, however, we only consider $p$-groups. In Theorem~\ref{theorem:ED1} we shall prove a special case of this conjecture, after deriving a partial result for the general case in Theorem~\ref{theorem:polarisedEqualities}. For this we need two more depth-related numbers. \begin{definition} Let $G$ be a $p$-group of $p$-rank $r$ whose centre has $p$-rank $z$, and let $\mathfrak{a} = (\zeta_1,\ldots,\zeta_z, \kappa_1,\ldots,\kappa_{r-z})$ be a polarised system of parameters for $\coho{G}$\@.
Define $\taua$ to be $z + \sa$, where $\sa$ is the largest $s \in \{0, \ldots, r-z\}$ such that $\kappa_1,\ldots, \kappa_s$ is a regular sequence in $\coho{G}$. Let $\SaH$~be the subset of $\{0, \ldots, r-z\}$ such that $s$~lies in $\SaH$ if and only if the restriction map \[ \coho{G}/(\kappa_1,\ldots,\kappa_{s-1}) \rightarrow \prod_{H \in \mathcal{H}^C_{z+s}} \coho{H}/(\operatorname{Res}_H \kappa_1,\ldots, \operatorname{Res}_H \kappa_{s-1}) \] is injective\footnote{So $\SaH$ always contains~$0$, and $1$~lies in $\SaH$ if and only if the family $\mathcal{H}^C_{z+1}$ detects $\coho{G}$.}. Define $\saH := \max \SaH$ and $\tauaH := z + \saH$. \end{definition} \begin{lemma} \label{lemma:tauCdash} Let $\mathfrak{a} = (\zeta_1,\ldots,\zeta_z$, $\kappa_1,\ldots,\kappa_{r-z})$ be a polarised system of parameters for $\coho{G}$\@. If $s > 0$ lies in $\SaH$ then the family $\mathcal{H}^C_{z + s}(G)$ detects $\coho{G}$ and $s-1$ lies in~$\SaH$. Therefore $\tauH \geq \tauaH$ and $\SaH = \{ 0, \ldots, \saH \}$. \end{lemma} \noindent For the proof we shall need an elementary fact about regular sequences. \begin{lemma} \label{lemma:genGrComm} Suppose that $R,S$ are connected graded commutative $k$-algebras and that $f \colon R \rightarrow S$ is an algebra homomorphism which respects the grading. Suppose further that $\zeta_1,\ldots,\zeta_d$ is a family of homogeneous positive-degree elements of $R$ satisfying the following conditions: \begin{enumerate} \item $f(\zeta_1),\ldots,f(\zeta_d)$ is a regular sequence in~$S$. \item The induced map $f_d \colon R/(\zeta_1,\ldots,\zeta_d) \rightarrow S/(f(\zeta_1),\ldots,f(\zeta_d))$ is an injection. \end{enumerate} Then $f \colon R \rightarrow S$ is an injection. \end{lemma} \begin{proof} It suffices to prove the case $d=1$. Write $\zeta$~for $\zeta_1$. Let $a \not = 0$ be an element of $\ker(f)$ whose degree is as small as possible. Since the image of~$a$ in $S/(f(\zeta))$ is zero and the induced map $f_1$ is injective, it follows that there is an $a' \in R$ with $a = a' \zeta$.
Since $f(a) = 0$ and $f(\zeta)$ is regular it follows that $a' \in \ker(f)$, contradicting the minimality of $\deg(a)$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma:tauCdash}] Apply Lemma~\ref{lemma:genGrComm} to the family $\kappa_1,\ldots, \kappa_{s-1}$ with $R = \coho{G}$, $S = \prod_{\mathcal{H}^C_{z + s}} \coho{H}$ and $f$ the product of the restriction maps. Because $s \in \SaH$ the induced map of quotients is an injection. By Axiom~(PS3) the restrictions of $\kappa_1,\ldots, \kappa_{s-1}$ form a regular sequence in $\coho{V}$ for each $V \in \mathcal{A}^C_{z + s}(G)$, and so by \cite[Prop.~5.2]{Carlson:Problems} they form a regular sequence in $\coho{H}$ for each $H \in \mathcal{H}^C_{z + s}$. Hence the restrictions form a regular sequence in~$S$, and so the family $\mathcal{H}^C_{z + s}$ detects $\coho{G}$. If instead we just invoke the first step in the proof of Lemma~\ref{lemma:genGrComm}, we see that the $\coho{H}/(\operatorname{Res} \kappa_1,\ldots,\operatorname{Res} \kappa_{s-2})$ with $H \in \mathcal{H}^C_{z + s}$ detect $\coho{G}/(\kappa_1,\ldots,\kappa_{s-2})$. \end{proof} \section{Depth and special polarised systems} \label{section:specialPolarisedDepth} \noindent The following fact from commutative algebra is well known. \begin{lemma} \label{lemma:assocPrime} Let $A$ be a graded commutative ring and $M$ a Noetherian graded $A$-module. Suppose that $\mathfrak{p}$ is an associated prime of~$M$, and that $\zeta_1,\ldots,\zeta_d$ is a regular sequence for $M$. Then $M/(\zeta_1,\ldots,\zeta_d)M$ has an associated prime~$\mathfrak{q}$ containing $\mathfrak{p}$. \end{lemma} \begin{proof} It suffices to prove the case $d=1$. Write $\zeta$~for $\zeta_1$. Pick $x \in M$ with $\operatorname{Ann}_A(x) = \mathfrak{p}$. If $x$ lies in $\zeta M$ then there is an $x' \in M$ with $\zeta x' = x$. As $\zeta$ is regular it follows that $\operatorname{Ann}_A(x') = \mathfrak{p}$ too, so replace $x$~by $x'$. 
This can only happen finitely often, as $M$ is Noetherian and $Ax$ is strictly contained in $Ax'$. So we may assume that $x$~does not lie in $\zeta M$, which means that the image of $x$~in $M/\zeta M$ is nonzero and annihilated by $\mathfrak{p}$. \end{proof} \begin{lemma} \label{lemma:kappaAssocPrime} Let $\mathfrak{a} = (\zeta_1,\ldots,\zeta_z, \kappa_1,\ldots,\kappa_{r-z})$ be a special polarised system of parameters for $\coho{G}$, and $I$ the ideal generated by the $\kappa_j$ with $j \leq \saH$. Then the $\coho{G}$-module $\coho{G}/I$ has an associated prime $\mathfrak{p}$ which contains $\kappa_1$, \dots, $\kappa_{r-z}$. \end{lemma} \begin{proof} Set $s = \saH + 1$. The result is trivial if $\saH = r-z$, so we may assume that $s \leq r-z$. By definition of $\saH$, the family $\mathcal{H} = \mathcal{H}^C_{z + s}$ does not detect the quotient $\coho{G}/I$. Pick a class $x \in \coho{G}$ which does not lie in the ideal $I$, but whose restriction to each $H \in \mathcal{H}$ does lie in the ideal $\coho{H} . \operatorname{Res}_H(I)$. Let $A$ be the ideal of classes in $\coho{G}$ which annihilate the image of $x$~in the quotient $\coho{G}/I$. For any $j \geq s$ we have $\kappa_j \in \sum_{H \in \mathcal{H}} \operatorname{Tr}^G_H \coho{H}$ by Axiom~(PS5), say $\kappa_j = \sum_H \operatorname{Tr}_H \gamma_H$. So $\kappa_j x = \sum_H \operatorname{Tr}_H (\gamma_H \operatorname{Res}_H(x))$ by Frobenius reciprocity. Now by assumption $\operatorname{Res}_H(x)$ lies in the ideal generated by $\operatorname{Res}_H(\kappa_1)$, \dots, $\operatorname{Res}_H(\kappa_{s-1})$; and this by a second application of Frobenius reciprocity means that $\kappa_j x$ lies in the ideal $I$. So $\kappa_j \in A$ for all $j \geq s$, which means that the $\coho{G}$-module $\coho{G}/I$ has an associated prime $\mathfrak{p}$ containing $\kappa_1,\ldots,\kappa_{r-z}$. 
\end{proof} \begin{theorem} \label{theorem:specialPolarisedEqualities} Let $G$ be a $p$-group of $p$-rank $r$ whose centre has $p$-rank $z$, and let $ \mathfrak{a} = (\zeta_1,\ldots,\zeta_z, \kappa_1,\ldots,\kappa_{r-z})$ be a special polarised system of parameters for $\coho{G}$. Then the numbers $\taua$~and $\tauaH$ both coincide with the depth $\tau$~of $\coho{G}$. \end{theorem} \begin{proof} We shall prove that $\tau \geq \taua \geq \tauaH = \tau$. Each $\kappa_j$ is primitive by Axiom~(PS$2'$), so $\tau \geq \taua$ by Corollary~\ref{coroll:myBrotoCarlsonHenn}. $\taua \geq \tauaH$: Suppose that $s \in \SaH$ and $\kappa_1$, \dots, $\kappa_{s-1}$ is a regular sequence in $\coho{G}$. If $\kappa_1,\ldots,\kappa_s$ is not a regular sequence then there is some nonzero $x \in \coho{G}/(\kappa_1,\ldots,\kappa_{s-1})$ annihilated by $\kappa_s$. Since $s \in \SaH$ there is some $H \in \mathcal{H}^C_{z + s}$ such that $\operatorname{Res}_H(x)$ is nonzero in $\coho{H}/(\operatorname{Res}_H \kappa_1,\ldots,\operatorname{Res}_H \kappa_{s-1})$. But this is a contradiction since (as in the proof of Lemma~\ref{lemma:tauCdash}) the restrictions of $\kappa_1,\ldots,\kappa_s$ form a regular sequence in $\coho{H}$. By induction on~$s$ we deduce that $\taua \geq \tauaH$. $\tauaH = \tau$: Set $s = \saH$ and denote by $I$ the ideal $(\kappa_1,\ldots,\kappa_s)$ of $\coho{G}$. By Lemma~\ref{lemma:kappaAssocPrime} the $\coho{G}$-module $\coho{G}/I$ has an associated prime~$\mathfrak{p}$ which contains $\kappa_1$, \dots, $\kappa_{r-z}$. As $\taua \geq \tauaH$ the sequence $\kappa_1,\ldots,\kappa_s$ is regular in $\coho{G}$, so by Lemma~\ref{lemma:genBrotoHenn} the sequence $\zeta_1,\ldots,\zeta_z$, $\kappa_1$, \dots, $\kappa_s$ is regular in $\coho{G}$. 
Therefore the sequence $\zeta_1,\ldots,\zeta_z$ is regular for $\coho{G}/I$, and so (by Lemma~\ref{lemma:assocPrime}) the $\coho{G}$-module $\coho{G}/(\zeta_1,\ldots,\zeta_z,\kappa_1,\ldots,\kappa_s)$ has an associated prime $\mathfrak{q}$ containing all elements of the homogeneous system of parameters $\zeta_1$, \dots, $\zeta_z$, $\kappa_1$, \dots, $\kappa_{r-z}$ for $\coho{G}$. So the depth of this quotient module is zero. But every regular sequence in $\coho{G}$ can be extended to a length~$\tau$ regular sequence (see \cite[\S4.3--4]{Benson:PolyInvts}, for example). So $\tau = z + s = \tauaH$. \end{proof} \section{Depth and polarised systems} \label{section:polarisedDepth} \noindent In this section we shall remove the requirement in Theorem~\ref{theorem:specialPolarisedEqualities} that the polarised system be special. \begin{theorem} \label{theorem:polarisedEqualities} Let $G$ be a $p$-group of $p$-rank $r$ whose centre has $p$-rank $z$, and let $ \mathfrak{a} = (\zeta_1,\ldots,\zeta_z, \kappa_1,\ldots,\kappa_{r-z})$ be a polarised system of parameters for $\coho{G}$. Then the numbers $\taua$~and $\tauaH$ both coincide with the depth $\tau$~of $\coho{G}$. \end{theorem} \noindent For the proof we shall need one further fact about regular sequences. \begin{lemma} \label{lemma:liftingInjections} Suppose that $R,S$ are connected graded commutative $k$-algebras and that $f \colon R \rightarrow S$ is an algebra homomorphism which respects the grading. Suppose further that $\zeta_1,\ldots,\zeta_d$ is a family of homogeneous positive-degree elements of $R$ satisfying the following conditions: \begin{enumerate} \item $\zeta_1,\ldots,\zeta_d$ is a regular sequence in~$R$. \item The induced map $\fui[n] \colon R/(\zeta_1^{n_1},\ldots,\zeta_d^{n_d}) \rightarrow S/(f(\zeta_1)^{n_1},\ldots,f(\zeta_d)^{n_d})$ is an injection for certain positive integers $n_1,\ldots,n_d$.
\end{enumerate} Then the induced map $\fui[1] \colon R/(\zeta_1,\ldots,\zeta_d) \rightarrow S/(f(\zeta_1), \ldots, f(\zeta_d))$ is an injection. \end{lemma} \begin{proof} Pick $x \in R$ with $\fui[1](x) = 0$. Then $\fui[n](\zeta_1^{n_1-1} \ldots \zeta_d^{n_d-1} x) = 0$ and so \begin{equation} \label{eqn:peelOff} \text{$\zeta_1^{n_1-1} \ldots \zeta_d^{n_d-1} x$ lies in the ideal $(\zeta_1^{n_1}, \ldots, \zeta_d^{n_d})$ of $R$.} \end{equation} Set $\zeta' := \zeta_1^{n_1-1} \ldots \zeta_{d-1}^{n_{d-1}-1}$. Then there are $a_1, \ldots, a_d \in R$ such that $\zeta' \zeta_d^{n_d-1} x = \zeta_1^{n_1} a_1 + \cdots + \zeta_d^{n_d} a_d$, whence $\zeta_d^{n_d-1} (\zeta' x - \zeta_d a_d) \in (\zeta_1^{n_1}, \ldots, \zeta_{d-1}^{n_{d-1}})$. As the sequence $\zeta_1^{n_1}, \ldots, \zeta_{d-1}^{n_{d-1}}, \zeta_d^s$ is regular in~$R$ for $s \geq 1$ we deduce that $\zeta' x \in (\zeta_1^{n_1}, \ldots, \zeta_{d-1}^{n_{d-1}}, \zeta_d)$. So we have reduced Eqn.~\eqref{eqn:peelOff} to the case $n_d = 1$ without altering the remaining $n_t$. As regular sequences may be permuted at will we deduce that $x \in (\zeta_1, \ldots, \zeta_d)$. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem:polarisedEqualities}] Arguing exactly as in the proof of Theorem~\ref{theorem:specialPolarisedEqualities} one shows that $\taua \geq \tauaH$. By Lemma~\ref{lemma:polarisedDefinitions} there is an integer $N \geq 0$ such that the system of parameters $\mathfrak{b} = (\zeta_1,\ldots,\zeta_z, \kappa_1^{p^N}, \ldots, \kappa_{r-z}^{p^N})$ is special polarised. From the definition one sees that $\taua = \taua[b]$. So by Theorem~\ref{theorem:specialPolarisedEqualities} it suffices to prove that $\tauaH \geq \tauaH[b]$. But this is a consequence of Lemma~\ref{lemma:liftingInjections} applied to the sequence $\kappa_1, \ldots, \kappa_{\saH[b]-1}$ with $R = \coho{G}$, $S = \prod_{\mathcal{H}} \coho{H}$ and $f$~the restriction map, where $\mathcal{H} = \mathcal{H}^C_{z+\saH[b]}(G)$ and $n_i = p^N$ for all~$i$. 
\end{proof} \section{Dickson invariants} \label{section:Dickson} \noindent Let $V$ be an $m$-dimensional $k$-vector space. We shall make extensive use of the Dickson invariants, the polynomial generators of the ring of $\text{\sl GL}(V)$-invariants in $k[V]$. See Benson's book~\cite{Benson:PolyInvts} for proofs of the properties of these invariants. Denote by $f_V$ the polynomial in $k[V][X]$ defined as follows: \begin{equation} \label{eqn:fVdef} f_V(X) = \prod_{v \in V} (X - v) \, . \end{equation} Recall that Dickson proved there are homogeneous polynomials $D_s(V)$ for $1 \leq s \leq m$ such that \begin{equation} \label{eqn:fV} f_V(X) = \sum_{s = 0}^m (-1)^s D_s(V) X^{p^{m-s}} \, , \end{equation} where $D_0(V) = 1$. The sequence $D_1(V), \ldots, D_m(V)$ is regular in $k[V]$, and the invariant ring $k[V]^{\text{\sl GL}(V)}$ is the polynomial algebra $k[D_1(V), \ldots, D_m(V)]$. If $\pi \colon V \rightarrow U$ is projection onto a codimension~$\ell$ subspace, then the induced map $k[V] \rightarrow k[U]$ sends \begin{equation} \label{eqn:DicksonRes} D_s(V) \mapsto \begin{cases} D_s(U)^{p^{\ell}} & \text{if $s \leq \dim(U)$,} \\ 0 & \text{otherwise.} \end{cases} \end{equation} \section{Existence of polarised systems} \label{section:existence} \noindent For each elementary abelian $p$-group $V$ we shall embed $k[V^*]$ in $\coho{V}$ by identifying $V^*$ with the image of the Bockstein map $\coho[1]{V} \rightarrow \coho[2]{V}$. \subsection{A construction using the norm map} Let $G$ be a $p$-group of order $p^n$ and $p$-rank $r$ whose centre has $p$-rank~$z$. We shall only be interested in the case $r > z$. Let $U_1, \ldots, U_K$ be representatives of the $G$-orbits in $\mathcal{A}^C_{z+1}(G)$, which is a $G$-set via the conjugation action. For each $U \in \mathcal{A}^C_{z+1}(G)$ pick a basis element $x_U$ for the one-dimensional subspace $\operatorname{Ann}(C)$ of $U^*$, and observe that $x_U^{p-1}$ is independent of the basis element chosen. 
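To illustrate the invariants recalled in \S\ref{section:Dickson}, consider the smallest nontrivial case of Eqns.~\eqref{eqn:fVdef}--\eqref{eqn:DicksonRes}; this routine verification is not needed for the sequel. Take $p = 2$ and $\dim V = 2$ with basis $x, y$. Then
\[
f_V(X) = X(X+x)(X+y)(X+x+y) = X^4 + (x^2 + xy + y^2)\,X^2 + (x^2 y + x y^2)\,X \, ,
\]
so $D_1(V) = x^2 + xy + y^2$ and $D_2(V) = x^2 y + x y^2$. Both are symmetric in $x$~and~$y$ and are readily checked to be fixed by the transvection $x \mapsto x+y$, $y \mapsto y$, hence are $\text{\sl GL}(V)$-invariant; moreover $D_1(V), D_2(V)$ is a regular sequence in $k[x,y]$. If $\pi \colon V \rightarrow U$ is projection onto the line spanned by~$x$, so that $y \mapsto 0$, then $D_1(V) \mapsto x^2 = D_1(U)^p$ and $D_2(V) \mapsto 0$, in accordance with Eqn.~\eqref{eqn:DicksonRes}.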
As before, view $U^*$ as a subspace of $\coho[2]{U}$. Define $\Theta \in \coho{G}$ by \begin{equation} \label{eqn:ThetaDef} \Theta = \prod_{i = 1}^K \mathcal{N}^G_{U_i} \left(1 + x_{U_i}^{p-1}\right)^{|G : N_G(U_i)|} \, . \end{equation} Now consider the restriction $\operatorname{Res}_V(\Theta)$ for $V \in \mathcal{A}^C(G)$. By the Mackey formula \[ \operatorname{Res}_V (\Theta) = \prod_{i = 1}^K \prod_{g \in U_i \setminus G / V} \mathcal{N}^V_{U_i^g \cap V} \, g^* \operatorname{Res}^{U_i}_{U_i \cap {}^g V} \left(1 + x_{U_i}^{p-1}\right)^{|G : N_G(U_i)|} \, . \] The intersection $U_i \cap {}^g V$ always contains $C$, the largest central elementary abelian subgroup of~$G$. Conversely the intersection equals $C$ (and $x_{U_i}$ therefore restricts to zero) unless $U' = U_i^g$ lies in~$V$, in which case $g^* x_{U_i}^{p-1} = x_{U'}^{p-1}$. Moreover, the number of double cosets $U_i g V$ leading to this $U'$ is $|N_G(U_i)|/|V|$ and every $U'$~in $\mathcal{A}^C_{z+1}(V) := \{ U \in \mathcal{A}^C_{z+1}(G) \mid U \leq V \}$ occurs for some~$i$. So \begin{equation} \label{eqn:ResV_Theta} \operatorname{Res}_V(\Theta) = \prod_{U' \in \mathcal{A}^C_{z+1}(V)} \mathcal{N}_{U'}^V (1 + x_{U'}^{p-1})^{|G : V|} \, . \end{equation} In particular for $U \in \mathcal{A}^C_{z+1}(G)$ one has \begin{equation} \label{eqn:ResU_Theta} \operatorname{Res}_U(\Theta) = 1 + x_U^{(p-1)p^{n-(z+1)}} \, . \end{equation} Let $\eta \in \coho[2(p-1)p^{n-(z+1)}]{G}$ be the homogeneous component of $\Theta$ in this degree. As the norm map from $\coho{U'}$~to $\coho{V}$ is a ring homomorphism (see \cite[Proposition~6.3.3]{Evens:book}), we deduce from Eqn.~\eqref{eqn:ResV_Theta} that \begin{equation} \label{eqn:ResV_eta} \operatorname{Res}_V(\eta) = \hat{\eta}^{|G:V|} \quad \text{for} \quad \hat{\eta} = \sum_{U' \in \mathcal{A}^C_{z+1}(V)} \mathcal{N}_{U'}^V x_{U'}^{p-1} \, . \end{equation} Denote by $W = W(V)$ the subspace $\operatorname{Ann}(C)$~of $V^*$. 
Then $\hat{\eta}$ lies in $k[W]$, since $\mathcal{N}_{U'}^V (x_{U'})$ is the product of all $\phi \in V^*$ with $\operatorname{Res}_{U'} (\phi) = x_{U'}$. Moreover $\hat{\eta}$ is by construction a $\text{\sl GL}(W)$-invariant, so a scalar multiple of $D_1(W)$ for degree reasons. By considering the restriction to any $U \in \mathcal{A}^C_{z+1}(V)$, we deduce from Eqn.~\eqref{eqn:DicksonRes} that \begin{equation} \label{eqn:ResV_eta_Dickson} \operatorname{Res}^G_V(\eta) = D_1(W)^{|G : V|} \, . \end{equation} \subsection{The existence proof} \begin{theorem} \label{theorem:existence} Let $G$ be a $p$-group of order $p^n$ and $p$-rank $r$ whose centre has $p$-rank~$z$. For $1 \leq i \leq z$ define $\zeta_i \in \coho{G}$ by $\zeta_i = c_{p^n - p^{n-i}}(\rho_G)$, a Chern class of the regular representation of~$G$. If $z < r$ define $\eta \in \coho{G}$ as above and homogeneous elements $\kappa_1, \kappa_2, \ldots, \kappa_{r-z} \in \coho{G}$ as follows: \[ \kappa_j := \mathcal{P}^{p^{n - z - j}} \cdots \mathcal{P}^{p^{n - z - 3}} \mathcal{P}^{p^{n - z - 2}} (\eta) \in \coho[2(p^{n - z} - p^{n - z - j})]{G} \] for $1 \leq j \leq r - z$ (for $j = 1$ the composite of Steenrod powers is empty, so $\kappa_1 = \eta$). Then for each $1 \leq j \leq r-z$, each $1 \leq s \leq r-z$ and each $V \in \mathcal{A}^C_{z+s}(G)$ one has \begin{equation} \label{eqn:existence} \operatorname{Res}^G_V(\kappa_j) = \begin{cases} D_j(W)^{|G:V|} & \text{if $j \leq s$, and} \\ 0 & \text{otherwise.} \end{cases} \end{equation} Here, $W$~is the subspace of $V^*$ which annihilates~$C$. Then $\zeta_1, \ldots, \zeta_z, \kappa_1, \ldots, \kappa_{r-z}$ is a polarised system of parameters for $\coho{G}$. So $\coho{G}$ has both polarised and special polarised systems of parameters. \end{theorem} \begin{proof} Equation~\eqref{eqn:existence} holds for $\kappa_1 = \eta$ by Eqn.~\eqref{eqn:ResV_eta_Dickson}. The general case of Eqn.~\eqref{eqn:existence} follows from Eqn.~\eqref{eqn:DicksonRes} and the action of the Steenrod algebra on the Dickson invariants (see~\cite{Wilkerson:Dickson}).
Axioms (PS2) and (PS4) follow immediately from Eqn.~\eqref{eqn:existence}. Axiom (PS3) holds because the Dickson invariants form a regular sequence. Observe that $\rho_G$ restricts to~$C$ as $p^{n-z}$ copies of the regular representation $\rho_C$. So the total Chern class $c(\rho_G)$ restricts to $C$ as $c(\rho_C)^{p^{n-z}}$, meaning that $\operatorname{Res}_C(\zeta_i) = c_{p^z - p^{z-i}}(\rho_C)^{p^{n-z}}$ for $1 \leq i \leq z$. In view of Eqn.~\eqref{eqn:fVdef} one has $c(\rho_C) = f_{C^*}(1)$ and hence $c_{p^z - p^{z - i}}(\rho_C) = (-1)^i D_i(C^*)$ by Equation~\eqref{eqn:fV}, a well known observation due originally to Milgram. So $\operatorname{Res}_C(\zeta_i) = (-1)^i D_i(C^*)^{|G:C|}$, which means that Axiom (PS1) is satisfied and so a polarised system of parameters has been constructed. Hence special polarised systems of parameters exist too, by Lemma~\ref{lemma:polarisedDefinitions}. \end{proof} \section{Tightness of Duflot's lower bound} \label{section:DuflotTight} \noindent Recall that Duflot's Theorem~\cite{Duflot:Depth} states that the depth of $\coho{G}$ is at least~$z$. \begin{theorem} \label{theorem:ED1} Let $G$ be a $p$-group of $p$-rank $r$ whose centre has $p$-rank~$z$. Then the following statements are equivalent: \begin{enumerate} \item \label{enum:notDetected} The mod-$p$ cohomology ring $\coho{G}$ is not detected on the family $\mathcal{H}^C_{z+1}(G)$. \item \label{enum:assocPrime} $\coho{G}$ has an associated prime $\mathfrak{p}$ such that the dimension of $\coho{G}/\mathfrak{p}$ is $z$. \item \label{enum:oneK1} There is a polarised system of parameters $\zeta_1,\ldots,\zeta_z$, $\kappa_1,\ldots,\kappa_{r-z}$ for $\coho{G}$ such that $\kappa_1$ is a zero divisor in $\coho{G}$. \item \label{enum:allK1} If $\zeta_1,\ldots,\zeta_z$, $\kappa_1,\ldots,\kappa_{r-z}$ is a polarised system of parameters for $\coho{G}$ then $\kappa_1$ is a zero divisor in $\coho{G}$. \item \label{enum:depthz} The depth of $\coho{G}$ equals $z$. 
\end{enumerate} \end{theorem} \begin{proof} Carlson proved in~\cite{Carlson:DepthTransfer} that \eqref{enum:notDetected} implies \eqref{enum:depthz}\@. A standard commutative algebra argument shows that \eqref{enum:assocPrime} implies \eqref{enum:depthz}\@. We saw in Lemma~\ref{lemma:genBrotoHenn} that \eqref{enum:depthz} implies \eqref{enum:allK1}, and \eqref{enum:oneK1} follows from \eqref{enum:allK1} by the existence result Theorem~\ref{theorem:existence}. Now let $\mathfrak{a}$ be a polarised system of parameters satisfying Statement~\eqref{enum:oneK1}, which is equivalent to $\taua = z$. So $\tauaH = z$ by Theorem~\ref{theorem:polarisedEqualities}\@. As in the proof of that theorem there is a special polarised system of parameters~$\mathfrak{b}$ which satisfies $\tauaH[b] = \tauaH = z$, so \eqref{enum:assocPrime} follows by Lemma~\ref{lemma:kappaAssocPrime}. Finally consider the definition of~$\tauaH$. If $\tauaH = z$ then $\mathcal{H}^C_{z+1}(G)$ does not detect $\coho{G}$, yielding~\eqref{enum:notDetected}\@. \end{proof} \section{An example} Let $G$ be the extraspecial $p$-group $p^{1+2n}_+$ with $n \geq 1$. This group has order $p^{2n+1}$ and $p$-rank $n+1$. Its centre has $p$-rank~$1$. If $p$~is odd $G$ has exponent~$p$. The mod-$p$ cohomology ring $\coho{G}$ is Cohen-Macaulay for $p=2$ by Theorem~4.6 of~\cite{Quillen:Extraspecial}, for Quillen computes the cohomology ring as the quotient of a polynomial algebra by a regular sequence. Also, Milgram and Tezuka showed in~\cite{MilgramTezuka} that the cohomology ring is Cohen-Macaulay for $G = 3^{1+2}_+$. From now on assume that $p$ is odd, with $n \geq 2$ if $p = 3$. Then by a result of Minh~\cite{Minh:EssExtra}, there are essential classes. For such groups the centre $C$ is cyclic of order~$p$, and the set $\mathcal{H}^C_2(G)$ of centralisers coincides with the set of maximal subgroups. 
Consequently $\mathcal{H}^C_{z+1}(G)$ does not detect $\coho{G}$, and so $\coho{G}$ has depth~$1$ by the part of Theorem~\ref{theorem:ED1} proved by Carlson in~\cite{Carlson:DepthTransfer}. Now let $V$ be a rank $n+1$ elementary abelian subgroup of~$G$, and let $\hat{\rho}$ be a one-dimensional ordinary representation of~$V$ whose restriction to $C$ is not trivial. Let $\rho$ be the induced representation of~$G$. Then $\rho$ is an irreducible representation of degree~$p^n$, and its character restricts to each $U \in \mathcal{A}^C_{n+1}(G)$ as the sum of all degree one characters whose restrictions to $C$ coincide with the restriction of the character of~$\hat{\rho}$. Set $\zeta_1 := c_{p^n}(\rho)$ and $\kappa_j := c_{p^n - p^{n-j}}(\rho)$ for $1 \leq j \leq n$. Then $\zeta_1, \kappa_1, \ldots, \kappa_n$ satisfy the axioms for a polarised system of parameters for $\coho{G}$. So combining Minh's result with Theorem~\ref{theorem:ED1} one deduces that $\kappa_1$ has nontrivial annihilator in $\coho{G}$. Conversely, a direct proof of this fact would yield a new proof of Minh's result. If $n=1$ and $p > 3$ it is known (see~\cite{Leary:integral}) that $c_2(\rho)$ is a nonzero essential class which annihilates~$\kappa_1$. \end{document}
\begin{document} \begin{abstract} Our main result is elementary and concerns the relationship between the multiplicative groups of the coordinate and endomorphism rings of the formal additive group over a field of characteristic $p>0$. The proof involves the combinatorics of base $p$ representations of positive integers in a striking way. We apply the main result to construct a canonical quotient of the module of Coleman power series over the Iwasawa algebra when the base local field is of characteristic $p$. This gives information in a situation which apparently has never previously been investigated. \end{abstract} \title{Digit patterns and Coleman power series} \section{Introduction} \subsection{Overview and motivation} Our main result (Theorem~\ref{Theorem:MainResult} below) concerns the relationship between the multiplicative groups of the coordinate and endomorphism rings of the formal additive group over a field of characteristic $p>0$. Our result is elementary and does not require a great deal of apparatus for its statement. The proof of the main result involves the combinatorics of base $p$ representations of positive integers in a striking way. We apply our main result (see Corollary~\ref{Corollary:ColemanApp} below) to construct a canonical quotient of the module of Coleman power series over the Iwasawa algebra when the base local field is of characteristic $p$. By {\it Coleman power series} we mean the telescoping power series introduced and studied in Coleman's classical paper \cite{Coleman}. Apart from Coleman's \cite{ColemanLocModCirc} complete results in the important special case of the formal multiplicative group over ${\mathbb{Z}}_p$, little is known about the structure of the module of Coleman power series over the Iwasawa algebra, and, so far as we can tell, the characteristic $p$ situation has never previously been investigated. We undertook this research in an attempt to fill the gap in characteristic $p$. 
Our results are far from being as complete as Coleman's, but they are surprising on account of their ``digital'' aspect, and they raise further questions worth investigating. \subsection{Formulation of the main result} The notation introduced under this heading is in force throughout the paper. \subsubsection{Rings and groups of power series} Fix a prime number $p$ and a field $K$ of characteristic $p$. Let $q$ be a power of $p$. Consider: the (commutative) power series ring $$K[[X]]=\left\{\left.\sum_{i=0}^\infty a_i X^i\right| a_i\in K\right\};$$ the (in general noncommutative) ring $$R_{q,K}=\left\{\left.\sum_{i=0}^\infty a_i X^{q^i}\right| a_i\in K\right\},$$ in which by definition multiplication is power series composition; and the subgroup $$\Gamma_{q,K}=\left\{\left.X+\sum_{i=1}^\infty a_i X^{q^i}\right| a_i\in K\right\}\subset R_{q,K}^\times,$$ where in general $A^\times$ denotes the group of units of a ring $A$ with unit. Note that $K[[X]]^\times$ is a right $\Gamma_{q,K}$-module via composition of power series. \subsubsection{Logarithmic differentiation} Given $F=F(X)\in K[[X]]^\times$, put $${\mathbf{D}}[F](X)=XF'(X)/F(X)\in XK[[X]].$$ Note that \begin{equation}\label{equation:Homogeneity} {\mathbf{D}}[F(\alpha X)]={\mathbf{D}}[F](\alpha X) \end{equation} for all $\alpha\in K^\times$. Note that the sequence \begin{equation}\label{equation:Factoid} 1\rightarrow K[[X^p]]^\times\subset K[[X]]^\times\xrightarrow{{\mathbf{D}}} \left\{\left.\sum_{i=1}^\infty a_i X^i\in XK[[X]]\right| a_{pi}=a_i^p\;\mbox{for all}\;i\in {\mathbb{N}} \right\}\rightarrow 0 \end{equation} is exact, where ${\mathbb{N}}$ denotes the set of positive integers. \subsubsection{$q$-critical integers} Given $c\in {\mathbb{N}}$, let $$ O_q(c)=\{n\in {\mathbb{N}}\vert (n,p)=1\;\mbox{and} \;n\equiv p^ic\bmod{q-1}\;\mbox{for some $i\in {\mathbb{N}}\cup\{0\}$}\}. $$ Given $n\in {\mathbb{N}}$, let $\ord_p n$ denote the exact order with which $p$ divides $n$. 
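As an aside, the membership condition cut out by the exact sequence (\ref{equation:Factoid}) — that the coefficients of ${\mathbf{D}}[F]=XF'(X)/F(X)$ satisfy $a_{pi}=a_i^p$ — is easy to probe numerically. The sketch below is our own illustration (the choices $K=\mathbb{F}_3$ and truncation order $N$ are arbitrary); over the prime field the condition reduces to $a_{pi}=a_i$ by Fermat's little theorem.

```python
import random

# Numerical probe of the exact sequence (Factoid): for any unit F, the
# coefficients a_i of D[F] = X F'(X)/F(X) satisfy a_{p i} = a_i^p.
p, N = 3, 81          # prime and truncation order (work mod X^N)
random.seed(0)

def inverse(F):
    # multiplicative inverse of F (with F[0] invertible) in F_p[[X]] mod X^N
    G = [pow(F[0], -1, p)] + [0] * (N - 1)
    for n in range(1, N):
        G[n] = (-G[0] * sum(F[k] * G[n - k] for k in range(1, n + 1))) % p
    return G

def D(F):
    # coefficient list of X F'(X) / F(X) mod X^N
    XFp = [0] + [(i * F[i]) % p for i in range(1, N)]   # X F'(X)
    G = inverse(F)
    return [sum(XFp[k] * G[n - k] for k in range(n + 1)) % p
            for n in range(N)]

F = [1] + [random.randrange(p) for _ in range(N - 1)]   # a random unit
a = D(F)
# over the prime field a_i^p = a_i, so the digit condition reads a_{p i} = a_i
print(all(a[p * i] == pow(a[i], p, p) for i in range(1, N // p)))   # True
```

For instance $F = 1 + X + X^2 = (1-X^3)/(1-X)$ over $\mathbb{F}_3$ gives ${\mathbf{D}}[F] = X/(1-X) = \sum_{i\geq 1} X^i$, whose coefficients trivially satisfy the condition.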
We define $$ C^0_q=\left\{c\in {\mathbb{N}}\cap(0,q)\left| (c,p)=1\;\mbox{and}\;\frac{c+1}{p^{\ord_p(c+1)}}=\min_{n\in O_q(c)\cap(0,q)} \frac{n+1}{p^{\ord_p(n+1)}}\right.\right\}, $$ and we call elements of this set {\em $q$-critical integers}. In the simplest case $p=q$ one has $C^0_p=\{1,\dots,p-1\}$, but in general the set $C^0_q$ is somewhat complicated. Put $$ C_q=\bigcup_{c\in C_q^0}\{q^i(c+1)-1\vert i\in {\mathbb{N}}\cup\{0\}\}, $$ noting that the union is disjoint, since the sets in the union are contained in different congruence classes modulo $q-1$. See below for informal ``digital'' descriptions of the sets $C_q^0$ and $C_q$. \subsubsection{The homomorphism $\psi_q$} We define a homomorphism $$\psi_q:XK[[X]]\rightarrow X^2K[[X]]$$ as follows: given $F=F(X)=\sum_{i=1}^\infty a_iX^i\in XK[[X]]$, put $$ \psi_{q}[F]=X\cdot \sum_{k\in C_q}a_kX^{k}. $$ Note that the composite map $$\psi_q\circ {\mathbf{D}}:K[[X]]^\times\rightarrow \left\{\left.\sum_{k\in C_q}a_k X^{k+1}\right| a_k\in K\right\}$$ is surjective by exactness of sequence (\ref{equation:Factoid}). Further, since the set $\{k+1\vert k\in C_q\}$ is stable under multiplication by $q$, the target of $\psi_q\circ {\mathbf{D}}$ comes equipped with the structure of left $R_{q,K}$-module. More precisely, the target of $\psi_q\circ {\mathbf{D}}$ is a free left $R_{q,K}$-module for which the set $\{X^{k+1}\vert k\in C_q^0\}$ is a basis. The following is the main result of the paper. \begin{Theorem}\label{Theorem:MainResult} The formula \begin{equation}\label{equation:MainResult} \psi_{q}[{\mathbf{D}}[F\circ \gamma]]=\gamma^{-1}\circ \psi_{q}[{\mathbf{D}}[F]] \end{equation} holds for all $\gamma\in \Gamma_{q,K}$ and $F\in K[[X]]^\times$. \end{Theorem} \noindent In \S\S\ref{section:Reduction}--\ref{section:DigitMadness2} we give the proof of the theorem. 
More precisely, we first explain in \S\ref{section:Reduction} how to reduce the proof of the theorem to a couple of essentially combinatorial assertions (Theorems~\ref{Theorem:DigitMadness} and \ref{Theorem:DigitMadness2}), and then we prove the latter in \S\ref{section:DigitMadness} and \S\ref{section:DigitMadness2}, respectively. In \S\ref{section:ColemanApp} we make the application (Corollary~\ref{Corollary:ColemanApp}) of Theorem~\ref{Theorem:MainResult} to Coleman power series. The application does not require any of the apparatus of the proof of Theorem~\ref{Theorem:MainResult}. \subsection{Informal discussion} \subsubsection{``Digital'' description of $C_q^0$} The definition of $C_q^0$ can readily be understood in terms of simple operations on digit strings. For example, to verify that $39$ is $1024$-critical, begin by writing out the base $2$ representation of $39$ thus: $$39=100111_2$$ Then put enough place-holding $0$'s on the left so as to represent $39$ by a digit string of length $\ord_2 1024=10$: $$39=0000100111_2$$ Then calculate as follows: $$\begin{array}{cclr} \mbox{permute cyclically} &0000100111_2&\xrightarrow{\mbox{\tiny strike trailing $1$'s and leading $0$'s}}&100_2\\ &0001001110_2&\mbox{ignore: terminates with a $0$}\\ &0010011100_2&\mbox{ignore: terminates with a $0$}\\ \downarrow&0100111000_2&\mbox{ignore: terminates with a $0$}\\ &1001110000_2&\mbox{ignore: terminates with a $0$}\\ &0011100001_2&\xrightarrow{\mbox{\tiny strike trailing $1$'s and leading $0$'s}}&1110000_2\\ &0111000010_2&\mbox{ignore: terminates with a $0$}\\ &1110000100_2&\mbox{ignore: terminates with a $0$}\\ \downarrow &1100001001_2&\xrightarrow{\mbox{\tiny strike trailing $1$'s and leading $0$'s}}&110000100_2\\ &1000010011_2&\xrightarrow{\mbox{\tiny strike trailing $1$'s and leading $0$'s}}&10000100_2\\ \end{array}$$ Finally, conclude that $39$ is $1024$-critical because the first entry of the last column is the smallest in that column. 
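The verification above can also be mechanised directly from the arithmetic definition of $C_q^0$. The sketch below is our own illustration; for $c = 39$, $p = 2$, $q = 1024$ it finds $O_q(39) \cap (0, q) = \{39, 225, 531, 777\}$, whose values of $(n+1)/2^{\ord_2(n+1)}$ are $5, 113, 133, 389$, and checks that the minimum is attained at $c$ itself.

```python
# Mechanical verification, from the definition of C_q^0, that 39 is
# 1024-critical (our own illustration of the worked example above).

def v_p(n, p):
    # the exact order with which p divides n
    e = 0
    while n % p == 0:
        n //= p; e += 1
    return e

def reduced(n, p):
    # (n+1)/p^{ord_p(n+1)}, the quantity minimised in the definition of C_q^0
    return (n + 1) // p ** v_p(n + 1, p)

def O_q(c, p, q):
    # O_q(c) intersected with (0, q)
    residues, r = set(), c % (q - 1)
    while r not in residues:
        residues.add(r)
        r = (r * p) % (q - 1)
    return [n for n in range(1, q) if n % p != 0 and n % (q - 1) in residues]

def is_q_critical(c, p, q):
    if not (0 < c < q) or c % p == 0:
        return False
    return reduced(c, p) == min(reduced(n, p) for n in O_q(c, p, q))

print(O_q(39, 2, 1024))                            # [39, 225, 531, 777]
print([reduced(n, 2) for n in O_q(39, 2, 1024)])   # [5, 113, 133, 389]
print(is_q_critical(39, 2, 1024))                  # True
```

The four values $5, 113, 133, 389$ are exactly one more than the binary strings $100_2$, $1110000_2$, $10000100_2$, $110000100_2$ struck out in the last column of the table, and the identity $C^0_p = \{1,\dots,p-1\}$ for $q = p$ is likewise immediate from the code.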
This numerical example conveys some of the flavor of the combinatorial considerations coming up in the proof of Theorem~\ref{Theorem:MainResult}. \subsubsection{``Digital'' description of $C_q$} Given $c\in C_q^0$, let $[c_1,\cdots,c_m]_p$ be a string of digits representing $c$ in base $p$. (The digit string notation is defined below in \S\ref{section:Reduction}.) Then each digit string of the form $$[c_1,\dots,c_m,\underbrace{p-1,\dots,p-1}_{ n\ord_p q}]_p$$ represents an element of $C_q$. Moreover, each element of $C_q$ arises this way, for unique $c\in C_q^0$ and $n\in {\mathbb{N}}\cup\{0\}$. \subsubsection{Miscellaneous remarks} $\;$ (i) The set $C_q$ is a subset of the set of {\em magic numbers} (relative to the choice of $q$) as defined and studied in \cite[\S8.22, p.\ 309]{Goss}. For the moment we do not understand this connection on any basis other than ``numerology'', but we suspect that it runs much deeper. (ii) A well-ordering of the set of positive integers distinct from the usual one, which we call the {\em $p$-digital well-ordering}, plays a key role in the proof of Theorem~\ref{Theorem:MainResult}, via Theorems~\ref{Theorem:DigitMadness} and \ref{Theorem:DigitMadness2} below. In particular, Theorem~\ref{Theorem:DigitMadness2}, via Proposition~\ref{Proposition:DigitMadness2}, characterizes the sets $C_q^0$ and $C_q$ in terms of the $p$-digital well-ordering and congruences modulo $q-1$. (iii) The results of this paper were discovered by extensive computer experimentation with base $p$ expansions and binomial coefficients modulo $p$. No doubt refinements of our results can be discovered by continuing such experiments. (iv) It is an open problem to find a minimal set of generators for $K[[X]]^\times$ as a topological right $\Gamma_{q,K}$-module, the topologies here being the $X$-adically induced ones. It seems very likely that the module is always infinitely generated, even when $K$ is a finite field. 
Computer experimentation (based on the method of proof of Proposition~\ref{Proposition:Reduction} below) with the simplest case of the problem (in which $K$ is the two-element field and $p=q=2$) has revealed some interesting patterns. But still we are unable to hazard any detailed guess about the solution. \section{Application to Coleman power series} \label{section:ColemanApp} We assume that the reader already knows about Lubin-Tate formal groups and Coleman power series, and is familiar with their applications. We refer the less well-versed reader to \cite{LubinTate}, \cite{Coleman}, \cite{ColemanLocModCirc} and \cite{ColemanAI} to get started up the mountain of literature. \subsection{Background} We review \cite{LubinTate}, \cite{Coleman} and \cite{ColemanLocModCirc} just far enough to fix a consistent system of notation and to frame precisely the general structure problem motivating our work. \subsubsection{The setting} Let $k$ be a nonarchimedean local field with maximal compact subring ${\mathcal{O}}$ and uniformizer $\pi$. Let $q$ and $p$ be the cardinality and characteristic, respectively, of the residue field ${\mathcal{O}}/\pi$. Let $\bar{k}$ be an algebraic closure of $k$. Let $H$ be a complete unramified extension of $k$ in the completion of $\bar{k}$, let $\varphi$ be the arithmetic Frobenius automorphism of $H/k$, and let ${\mathcal{O}}_H$ be the ring of integers of $H$. Let the action of $\varphi$ be extended coefficient-by-coefficient to the power series ring ${\mathcal{O}}_H[[X]]$. \subsubsection{Lubin-Tate formal groups} We say that formal power series with coefficients in ${\mathcal{O}}$ are {\em congruent modulo $\pi$} if they are so coefficient-by-coefficient, and we say they are {\em congruent modulo degree $2$} if the constant and linear terms agree. 
Let ${\mathcal{F}}_\pi$ be the set of one-variable power series $f=f(X)$ such that $$f(X)\equiv \pi X\bmod{\deg 2},\;\;\;f(X)\equiv X^q\bmod{\pi}.$$ The simplest example of an element of ${\mathcal{F}}_\pi$ is $\pi X+X^q$. The general example of an element of ${\mathcal{F}}_\pi$ is $\pi X+X^q+\pi X^2e(X)$, where $e(X)\in {\mathcal{O}}[[X]]$ is arbitrary. Given $f\in {\mathcal{F}}_\pi$, there exists a unique $F_f=F_f(X,Y)\in {\mathcal{O}}[[X,Y]]$ such that $$F_f(X,Y)\equiv X+Y\bmod{\deg 2},\;\;\;f(F_f(X,Y))=F_f(f(X),f(Y)).$$ The power series $F_f(X,Y)$ is a {\em commutative formal group law}. Given $a\in {\mathcal{O}}$ and $f,g\in {\mathcal{F}}_\pi$, there exists a unique $[a]_{f,g}=[a]_{f,g}(X)\in {\mathcal{O}}[[X]]$ such that $$[a]_{f,g}(X)\equiv aX\bmod{\deg 2},\;\;f([a]_{f,g}(X))=[a]_{f,g}(g(X)).$$ We write $[a]_f=[a]_f(X)=[a]_{f,f}(X)$ to abbreviate notation. The family \linebreak $\{[a]_f(X)\}_{a\in {\mathcal{O}}}$ is a system of {\em formal complex multiplications} for the formal group law $F_f(X,Y)$. For each fixed $f\in {\mathcal{F}}_\pi$, the package $$(F_f(X,Y),\{[a]_f(X)\}_{a\in {\mathcal{O}}})$$ is a {\em Lubin-Tate formal group}. The formal properties of the ``big package'' $$\left(\{F_f(X,Y)\}_{f\in {\mathcal{F}}_\pi}, \{[a]_{f,g}(X)\}_{\begin{subarray}{c} a\in {\mathcal{O}}\\ f,g\in {\mathcal{F}}_\pi \end{subarray}}\right)$$ are detailed in \cite[Thm.\ 1, p.\ 382]{LubinTate}. In particular, one has \begin{equation}\label{equation:LTformal} [\pi]_f(X)=f(X),\;\;\;[1]_f(X)=X,\;\;\; [a]_{f,g}\circ[b]_{g,h}=[ab]_{f,h} \end{equation} for all $a,b\in {\mathcal{O}}$ and $f,g,h\in {\mathcal{F}}_\pi$. We remark also that \begin{equation}\label{equation:LTformalbis} [\omega]_{\pi X+X^q}(X)=\omega X \end{equation} for all roots $\omega$ of unity in $k$ of order prime to $p$.
\subsubsection{Coleman power series} By Coleman's theory \cite{Coleman} there exists for each $f\in {\mathcal{F}}_\pi$ a unique group homomorphism $${\mathcal{N}}_f:{\mathcal{O}}_H[[X]]^\times\rightarrow {\mathcal{O}}_H[[X]]^\times$$ such that $$ {\mathcal{N}}_f[h](f(X))=\prod_{\begin{subarray}{c} \lambda\in \bar{k}\\ f(\lambda)=0 \end{subarray}}h(F_f(X,\lambda)) $$ for all $h\in {\mathcal{O}}_H[[X]]^\times$. Let $${\mathcal{M}}_f=\{h\in {\mathcal{O}}_H[[X]]^\times\vert {\mathcal{N}}_f[h]=\varphi h\}.$$ We refer to elements of ${\mathcal{M}}_f$ as {\em Coleman power series}. \subsubsection{Natural operations on Coleman power series} The group ${\mathcal{M}}_f$ comes equipped with the structure of right ${\mathcal{O}}^\times$-module by the rule \begin{equation}\label{equation:OOModuleRule} ((h,a)\mapsto h\circ [a]_f):{\mathcal{M}}_f\times {\mathcal{O}}^\times\rightarrow {\mathcal{M}}_f, \end{equation} and we have at our disposal a canonical isomorphism \begin{equation}\label{equation:Cformal} (h\mapsto h\circ [1]_{g,f}):{\mathcal{M}}_g\rightarrow {\mathcal{M}}_f \end{equation} of right ${\mathcal{O}}^\times$-modules for all $f,g\in {\mathcal{F}}_\pi$, as one verifies by applying the formal properties (\ref{equation:LTformal}) of the big Lubin-Tate package in a straightforward way. We also have at our disposal a canonical group isomorphism \begin{equation}\label{equation:CBijection} (h\mapsto h\bmod{\pi}):{\mathcal{M}}_f\rightarrow ({\mathcal{O}}_H/\pi)[[X]]^\times \end{equation} as one verifies by applying \cite[Lemma 13, p.\ 103]{Coleman} in a straightforward way. The {\em Iwasawa algebra} (completed group ring) ${\mathbb{Z}}_p[[{\mathcal{O}}^\times]]$ associated to $k$ acts naturally on the slightly modified version $${\mathcal{M}}_f^0=\{h\in {\mathcal{O}}_H[[X]]^\times \vert h\in {\mathcal{M}}_f,\;h(0)\equiv 1\bmod{\pi}\}$$ of ${\mathcal{M}}_f$. 
\subsubsection{The structure problem} Little seems to be known in general about the structure of the ${\mathcal{O}}^\times$-module ${\mathcal{M}}_f$. To determine this structure is a fundamental problem in local class field theory, and the problem remains open. Essentially everything we do know about the problem is due to Coleman. Namely, in the special case $$k={\mathbb{Q}}_p=H,\;\;\;\pi=p,\;\;\;f(X)=(1+X)^p-1\in {\mathcal{F}}_\pi,$$ Coleman showed \cite{ColemanLocModCirc} that ${\mathcal{M}}^0_f$ is ``nearly'' a free ${\mathbb{Z}}_p[[{\mathcal{O}}^\times]]$-module of rank $1$, and in the process recovered Iwasawa's result on the structure of local units modulo circular units. Moreover, Coleman's methods are strong enough to analyze ${\mathcal{M}}^0_f$ completely in the case of general $H$, even though this level of generality is not explicitly considered in \cite{ColemanLocModCirc}. So in the case of the formal multiplicative group over ${\mathbb{Z}}_p$ we have a complete and satisfying description of structure. Naturally one wishes for so complete a description in the general case. We hope with the present work to contribute to the solution of the structure problem. Here is the promised application of Theorem~\ref{Theorem:MainResult}, which makes the ${\mathcal{O}}^\times$-module structure of a certain quotient of ${\mathcal{M}}_f$ explicit when $k$ is of characteristic $p$. \begin{Corollary}\label{Corollary:ColemanApp} Assume that $k$ is of characteristic $p$ and fix $f\in {\mathcal{F}}_\pi$. Then there exists a surjective group homomorphism $$\Psi_f:{\mathcal{M}}_f\rightarrow\left\{\left.\sum_{k\in C_q}a_kX^{k+1}\right| a_k\in {\mathcal{O}}_H/\pi\right\}$$ such that \begin{equation}\label{equation:ColemanApp} \Psi_f[h\circ [\omega u]_f]\equiv [(\omega u)^{-1}]_{\pi X+X^q}\circ \Psi_f[h]\circ[\omega]_{\pi X+X^q}\bmod{\pi} \end{equation} for all $h=h(X)\in {\mathcal{M}}_{f}$, $u\in 1+\pi{\mathcal{O}}$ and roots of unity $\omega\in {\mathcal{O}}^\times$. 
\end{Corollary} \proof If we are able to construct $\Psi_{\pi X+X^q}$ with the desired properties, then in the general case the map $$\Psi_f=(h\mapsto \Psi_{\pi X+X^q}[h\circ [1]_{f,\pi X+X^q}])$$ has the desired properties by (\ref{equation:LTformal}) and (\ref{equation:Cformal}). We may therefore assume without loss of generality that $$f=\pi X+X^q,$$ in which case $F_f(X,Y)=X+Y$, i.~e., the formal group underlying the Lubin-Tate formal group attached to $f$ is additive. By (\ref{equation:LTformalbis}) and the definitions, given $a\in {\mathcal{O}}$ and writing $$a=\sum_{i=0}^\infty \alpha_i \pi^i\;\;\;(\alpha_i^q=\alpha_i),$$ in the unique possible way, one has $$[a]_f\equiv \sum_{i=0}^\infty \alpha_i X^{q^i}\bmod{\pi},$$ and hence the map $a\mapsto [a]_{f}\bmod \pi$ gives rise to an isomorphism $$\theta:{\mathcal{O}}\iso R_{q,{\mathcal{O}}/\pi}$$ of rings. Let $$\rho:{\mathcal{M}}_f\rightarrow ({\mathcal{O}}_H/\pi)[[X]]^\times$$ be the isomorphism (\ref{equation:CBijection}). We claim that $$\Psi_f=\psi_q\circ {\mathbf{D}}\circ \rho$$ has all the desired properties. In any case, since $\rho$ is an isomorphism and $\psi_q\circ {\mathbf{D}}$ by (\ref{equation:Factoid}) is surjective, $\Psi_f$ is surjective, too. To verify (\ref{equation:ColemanApp}), we calculate as follows: $$\begin{array}{rcl} \psi_q[{\mathbf{D}}[\rho(h\circ [\omega u]_f)]]&=&\psi_q[{\mathbf{D}}[\rho(h\circ [\omega]_f\circ[ u]_f)]]\\ &=& \psi_q[{\mathbf{D}}[\rho(h)\circ \theta(\omega)\circ \theta(u)]]\\ &=&\theta(u^{-1})\circ \psi_q[{\mathbf{D}}[\rho(h)\circ \theta(\omega)]]\\ &=&\theta(u^{-1})\circ \psi_q[{\mathbf{D}}[\rho(h)]\circ \theta(\omega)]\\ &=&\theta(u^{-1})\circ\theta(\omega^{-1})\circ \psi_q[{\mathbf{D}}[\rho(h)]]\circ \theta(\omega)\\ &=&\theta((u\omega)^{-1})\circ \psi_q[{\mathbf{D}}[\rho(h)]]\circ \theta(\omega) \end{array} $$ The third and fourth steps are justified by (\ref{equation:MainResult}) and (\ref{equation:Homogeneity}), respectively. The remaining steps are clear. 
The claim is proved, and with it the corollary. \qed \section{Reduction of the proof}\label{section:Reduction} We put Coleman power series behind us for the rest of the paper. We return to the elementary point of view taken in the introduction. In this section we explain how to reduce the proof of Theorem~\ref{Theorem:MainResult} to a couple of combinatorial assertions. \subsection{Digital apparatus} \subsubsection{Base $p$ expansions} Given an additive decomposition $$n=\sum_{i=1}^s n_ip^{s-i}\;\;\;(n_i\in {\mathbb{Z}}\cap [0,p),\;n\in {\mathbb{N}}),$$ we write $$n=[n_1,\dots,n_s]_p,$$ we call the latter a {\em base $p$ expansion} of $n$ and we call the coefficients $n_i$ {\em digits}. Note that we allow base $p$ expansions to have leading $0$'s. We say that a base $p$ expansion is {\em minimal} if the first digit is positive. For convenience, we set the empty base $p$ expansion $[]_p$ equal to $0$ and declare it to be minimal. We always read base $p$ expansions left-to-right, as though they were words spelled in the alphabet $\{0,\dots,p-1\}$. In this notation the well-known theorem of Lucas takes the form $$\left(\begin{array}{c} \;[a_1,\dots,a_n]_p\\ \;[b_1,\dots,b_n]_p \end{array}\right)\equiv \left(\begin{array}{c} a_1\\ b_1 \end{array}\right) \cdots \left(\begin{array}{c} a_n\\ b_n \end{array}\right)\bmod{p}.$$ (For all $n\in {\mathbb{N}}\cup\{0\}$ and $k\in {\mathbb{Z}}$ we set $\left(\begin{subarray}{c} n\\ k\end{subarray}\right)= \frac{n!}{k!(n-k)!}$ if $0\leq k\leq n$ and $\left(\begin{subarray}{c} n\\ k\end{subarray}\right)=0$ otherwise.) The theorem of Lucas implies that for all integers $k,\ell,m\geq 0$ such that $m=k+\ell$, the binomial coefficient $\left(\begin{subarray}{c} m\\ k \end{subarray}\right)$ does not vanish modulo $p$ if and only if the addition of $k$ and $\ell$ in base $p$ requires no ``carrying''. 
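Lucas' theorem and the no-carrying criterion are easy to confirm by machine. The following sketch is our own illustration (the choices $p = 3$ and the exhaustive bound $m < 81$ are arbitrary):

```python
from math import comb

# Exhaustive check of Lucas' theorem and of the no-carrying criterion.

def digits(n, p):
    # base p digits of n, least significant first
    d = []
    while n:
        d.append(n % p)
        n //= p
    return d

def binom_mod_p(m, k, p):
    # Lucas: multiply the binomial coefficients of corresponding digits
    if not 0 <= k <= m:
        return 0
    dm, dk = digits(m, p), digits(k, p)
    dk += [0] * (len(dm) - len(dk))
    r = 1
    for a, b in zip(dm, dk):
        r = r * comb(a, b) % p
    return r

p, ok = 3, True
for m in range(1, 81):
    for k in range(m + 1):
        ok &= binom_mod_p(m, k, p) == comb(m, k) % p
        # nonvanishing mod p <=> every base p digit of k is at most the
        # corresponding digit of m, i.e. adding k and m-k needs no carrying
        dm, dk = digits(m, p), digits(k, p)
        no_carry = all(b <= a for a, b in zip(dm, dk + [0] * len(dm)))
        ok &= (binom_mod_p(m, k, p) != 0) == no_carry
print(ok)   # True
```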
\subsubsection{The $p$-core function $\kappa_p$} Given $n\in {\mathbb{N}}$, we define $$ \kappa_p(n)=(n/p^{\ord_p n}+1)/p^{\ord_p(n/p^{\ord_p n}+1)}-1. $$ We call $\kappa_p(n)$ the {\em $p$-core} of $n$. For example, $\kappa_p(n)=0$ iff $n=p^{k-1}(p^\ell-1)$ for some $k,\ell\in {\mathbb{N}}$. The meaning of the $p$-core function is easiest to grasp in terms of minimal base $p$ expansions. One calculates $\kappa_p(n)$ by discarding trailing $0$'s and then discarding trailing $(p-1)$'s. For example, to calculate the $3$-core of $963=[1,0,2,2,2,0,0]_3$, first discard trailing $0$'s to get $[1,0,2,2,2]_3=107$, and then discard trailing $2$'s to get $\kappa_3(963)=[1,0]_3=3$. \subsubsection{The $p$-defect function $\delta_p$} For each $n\in{\mathbb{N}}$, let $\delta_p(n)$ be the length of the minimal base $p$ representation of $\kappa_p(n)$. We call $\delta_p(n)$ the {\em $p$-defect} of $n$. For example, since as noted above $\kappa_3(963)=[1,0]_3$, one has $\delta_3(963)=2$. \subsubsection{The $p$-digital well-ordering} We equip the set of positive integers with a well-ordering $\leq_p$ by declaring $m\leq_{p}n$ if $$ \kappa_p(m)<\kappa_p(n)$$ or $$\kappa_p(m)=\kappa_p(n)\;\mbox{and}\;m/p^{\ord_p m}<n/p^{\ord_pn}$$ or $$\kappa_p(m)=\kappa_p(n)\;\mbox{and}\;m/p^{\ord_p m}=n/p^{\ord_p n}\;\mbox{and} \;m\leq n.$$ In other words, to verify $m\leq _p n$, first compare $p$-cores of $m$ and $n$, then in case of a tie compare numbers of $(p-1)$'s trailing the $p$-core, and in case of another tie compare numbers of trailing $0$'s. We call $\leq_p$ the {\em $p$-digital well-ordering}. In the obvious way we derive order relations $<_p$, $\geq_{p}$ and $>_{p}$ from $\leq_{p}$. We remark that $$\delta_p(m)<\delta_p(n)\Rightarrow m<_p n,\;\;\; m\leq_p n\Rightarrow \delta_p(m)\leq \delta_p(n);$$ in other words, the function $\delta_p$ gives a reasonable if rough approximation to the $p$-digital well-ordering. 
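The $p$-core, the $p$-defect and the comparison underlying $\leq_p$ translate directly into code. The sketch below is our own illustration; it encodes $\leq_p$ as a lexicographic key $(\kappa_p(n),\, n/p^{\ord_p n},\, n)$, matching the three-clause definition above, and reproduces the worked example $\kappa_3(963) = 3$, $\delta_3(963) = 2$.

```python
# The p-core, p-defect and p-digital well-ordering, implemented directly
# from the definitions.

def v_p(n, p):
    # the exact order with which p divides n
    e = 0
    while n % p == 0:
        n //= p; e += 1
    return e

def kappa(n, p):
    # p-core: discard trailing 0 digits, then trailing (p-1) digits
    m = n // p ** v_p(n, p)
    return (m + 1) // p ** v_p(m + 1, p) - 1

def delta(n, p):
    # p-defect: length of the minimal base p expansion of kappa_p(n)
    k, d = kappa(n, p), 0
    while k:
        k //= p
        d += 1
    return d

def key_p(n, p):
    # lexicographic key realising <=_p: compare p-cores, then the part of n
    # left after discarding trailing 0 digits, then n itself
    return (kappa(n, p), n // p ** v_p(n, p), n)

# 963 = [1,0,2,2,2,0,0]_3: dropping trailing 0's leaves 107 = [1,0,2,2,2]_3,
# and dropping trailing 2's leaves kappa_3(963) = [1,0]_3 = 3.
print(kappa(963, 3), delta(963, 3))   # 3 2
```

Sorting a list of positive integers with `key=lambda n: key_p(n, p)` realises $\leq_p$, and one checks readily that $\delta_p(m) < \delta_p(n)$ forces `key_p(m, p) < key_p(n, p)`, in line with the closing remark.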
\subsubsection{The function $\mu_q$} Given $c\in {\mathbb{N}}$, let $\mu_q(c)$ be the unique element of the set $$\{n\in {\mathbb{N}}\vert n\equiv p^i c\bmod{q-1}\;\mbox{for some}\;i\in {\mathbb{N}}\cup\{0\}\} $$ minimal with respect to the $p$-digital well-ordering. Note that $\mu_q(c)$ cannot be divisible by $p$. Consequently $\mu_q(c)$ may also be characterized as the unique element of the set $O_q(c)$ minimal with respect to the $p$-digital well-ordering. \subsubsection{$p$-admissibility} We say that a quadruple $(j,k,\ell,m)\in {\mathbb{N}}^4$ is {\em $p$-admissible} if $$(m,p)=1,\;\;\;m=k+j(p^\ell-1),\;\;\;\left(\begin{array}{c} k-1\\ j \end{array}\right)\not\equiv 0\bmod{p}.$$ This is the key technical definition of the paper. Let ${\mathcal{A}}_p$ denote the set of $p$-admissible quadruples. \begin{Theorem}\label{Theorem:DigitMadness} For all $(j,k,\ell,m)\in {\mathcal{A}}_p$, one has (i) $k<_p m$, and moreover, (ii) if \linebreak $\kappa_p(k)=\kappa_p(m)$, then $j=(p^{\ord_p k}-1)/(p^\ell-1)$. \end{Theorem} \noindent We will prove this result in \S\ref{section:DigitMadness}. Note that the conclusion of part (ii) of the theorem implies $\ord_pk>0$ and $\ell\vert \ord_p k$. \begin{Theorem}\label{Theorem:DigitMadness2} One has \begin{equation}\label{equation:DigitMadness2} \max_{c\in {\mathbb{N}}}\mu_q(c)<q, \end{equation} \begin{equation}\label{equation:DigitMadness2bis} \begin{array}{cl} &\displaystyle\{(\mu_q(c)+1)q^{i}-1\;\vert\; i\in {\mathbb{N}}\cup\{0\},\;c\in {\mathbb{N}}\}\\\\ =&\displaystyle\left\{c\in {\mathbb{N}}\left|(c,p)=1,\; \kappa_p(c)=\min_{n\in O_q(c)}\kappa_p(n)\right.\right\}. \end{array} \end{equation} \end{Theorem} \noindent We will prove this result in \S\ref{section:DigitMadness2}. We have phrased the result in a way emphasizing the $p$-digital well-ordering. But perhaps it is not clear what the theorem means in the context of Theorem~\ref{Theorem:MainResult}. The next result provides an explanation. 
\begin{Proposition}\label{Proposition:DigitMadness2} Theorem~\ref{Theorem:DigitMadness2} granted, one has \begin{equation}\label{equation:DigitMadness2quad} C_q^0=\{\mu_q(c)\vert c\in {\mathbb{N}}\}, \end{equation} \begin{equation}\label{equation:DigitMadness2ter} C_q=\left\{c\in {\mathbb{N}}\left|(c,p)=1,\; \kappa_p(c)=\kappa_p(\mu_q(c))\right.\right\}. \end{equation} \end{Proposition} \proof The definition of $C^0_q$ can be rewritten $$C^0_q=\left\{c\in {\mathbb{N}}\cap(0,q)\left| (c,p)=1,\;\kappa_p(c)=\min_{n\in O_q(c)\cap(0,q)}\kappa_p(n)\right.\right\}.$$ Therefore relation (\ref{equation:DigitMadness2}) implies containment $\supset$ in (\ref{equation:DigitMadness2quad}) and moreover, supposing failure of equality in (\ref{equation:DigitMadness2quad}), there exist $c,c'\in C_q^0$ such that $$c=\mu_q(c)\neq c',\;\;\;\kappa_p(c)=\kappa_p(c').$$ But $c'=q^i(c+1)-1$ for some $i\in {\mathbb{N}}$ by (\ref{equation:DigitMadness2bis}), hence $c'\geq q$, and hence $c'\not\in C_q^0$. This contradiction establishes equality in (\ref{equation:DigitMadness2quad}) and in turn containment $\subset$ in (\ref{equation:DigitMadness2ter}). Finally, (\ref{equation:DigitMadness2bis}) and (\ref{equation:DigitMadness2quad}) imply equality in (\ref{equation:DigitMadness2ter}). \qed The following is the promised reduction of the proof of Theorem~\ref{Theorem:MainResult}. \begin{Proposition}\label{Proposition:Reduction} If Theorems~\ref{Theorem:DigitMadness} and \ref{Theorem:DigitMadness2} hold, then Theorem~\ref{Theorem:MainResult} holds, too. \end{Proposition} \noindent Before turning to the proof, we pause to discuss the groups in play. \subsection{Generators for $K[[X]]^\times$, ${\mathbf{D}}[K[[X]]^\times]$ and $\Gamma_{q,K}$} \label{subsection:Convenient} Equip $K[[X]]^\times$ with the topology for which the family $\{1+X^nK[[X]]\vert n\in {\mathbb{N}}\}$ is a neighborhood base at the origin. 
Then the set $$\{1+\alpha X^k\vert \alpha \in K^\times,\;k\in {\mathbb{N}}\}\cup K^\times$$ generates $K[[X]]^\times$ as a topological group. Let ${\mathbb{F}}_p$ be the residue field of ${\mathbb{Z}}_p$. Let $E_p=E_p(X)\in {\mathbb{F}}_p[[X]]$ be the reduction modulo $p$ of the Artin-Hasse exponential $$\exp\left(\sum_{i=0}^\infty \frac{X^{p^i}}{p^i}\right)\in ({\mathbb{Q}}\cap{\mathbb{Z}}_p)[[X]],$$ noting that $${\mathbf{D}}[E_p]=\sum_{i=0}^\infty X^{p^i}.$$ Since $E_p(X)=1+X+O(X^2)$, the set $$\{E_p(\alpha X^k)\;\vert \;\alpha\in K^\times,\;k\in {\mathbb{N}},\;(k,p)=1\}\cup K[[X^p]]^\times$$ generates $K[[X]]^\times$ as a topological group. For each $k\in {\mathbb{N}}$ such that $(k,p)=1$ and $\alpha\in K^\times$, put $$W_{k,\alpha}=W_{k,\alpha}(X)=k^{-1}{\mathbf{D}}[E_p(\alpha X^k)]=\sum_{i=0}^\infty \alpha^{p^i}X^{kp^i}\in XK[[X]]. $$ Equip ${\mathbf{D}}[K[[X]]^\times]$ with the relative $X$-adic topology. The set $$\{W_{k,\alpha}\vert k\in {\mathbb{N}},\;(k,p)=1,\;\alpha\in K^\times\}$$ generates ${\mathbf{D}}[K[[X]]^\times]$ as a topological group, cf.\ exact sequence (\ref{equation:Factoid}). Equip $\Gamma_{q,K}$ with the relative $X$-adic topology. Note that $$ (X+\beta X^{q^\ell})^{-1}=\sum_{i=0}^\infty (-1)^{i}\beta^{\frac{q^{\ell i}-1}{q^\ell-1}}X^{q^{\ell i}}\in \Gamma_{q,K} $$ for all $\ell\in {\mathbb{N}}$ and $\beta \in K^\times$. The inverse operation here is of course understood in the functional rather than multiplicative sense. The set $$\{X+\beta X^{q^\ell}\;\vert\; \beta\in K^\times,\;\ell\in {\mathbb{N}}\}$$ generates $\Gamma_{q,K}$ as a topological group. \subsection{Proof of the proposition} It is enough to verify (\ref{equation:MainResult}) with $F$ and $\gamma$ ranging over sets of generators for the topological groups $K[[X]]^\times$ and $\Gamma_{q,K}$, respectively. The generators mentioned in the preceding paragraph are the convenient ones. So fix $\alpha,\beta\in K^\times$ and $k,\ell\in {\mathbb{N}}$ such that $(k,p)=1$. 
It will be enough to verify that \begin{equation}\label{equation:Nuff} \psi_{q}[M_{k,\alpha,\ell,\beta}]= \left\{\begin{array}{rl}\displaystyle \alpha X^{k+1}+\sum_{\ell\vert f\in {\mathbb{N}}} (-1)^{f/\ell}\alpha^{q^f}\beta^{\frac{q^f-1}{q^\ell-1}}X^{q^f(k+1)}&\mbox{if $k\in C_q$,}\\\\ 0&\mbox{otherwise,} \end{array}\right. \end{equation} where \begin{equation}\label{equation:MExpansion} \begin{array}{cl} &M_{k,\alpha,\ell,\beta}=M_{k,\alpha,\ell,\beta}(X)=k^{-1}{\mathbf{D}}[E_p(\alpha(X+\beta X^{q^\ell})^k)]\\\\ =&\displaystyle W_{k,\alpha}+\sum_{i=0}^\infty \sum_{j=1}^\infty \left(\begin{array}{c} p^ik-1\\ j \end{array}\right)\alpha^{p^i}\beta^{j}X^{p^ik+j(q^\ell-1)}.\end{array} \end{equation} By Theorem~\ref{Theorem:DigitMadness}, many terms on the right side of (\ref{equation:MExpansion}) vanish, and more precisely, we can rewrite (\ref{equation:MExpansion}) as follows: \begin{equation}\label{equation:Nuff2} \begin{array}{rcl} M_{k,\alpha,\ell,\beta}&\equiv&\displaystyle\alpha X^k+ \sum_{\begin{subarray}{c} m\in O_q(k)\\ m>_pk \end{subarray}} \left(\sum_{\begin{subarray}{c} i\in {\mathbb{N}}\cup\{0\}, j\in{\mathbb{N}}\\ (j,p^ik,\ord_p q^\ell,m)\in {\mathcal{A}}_p \end{subarray}} \left(\begin{array}{c} p^ik-1\\ j \end{array}\right)\alpha^{p^i}\beta^j\right)X^m\\\\ &&\hskip 5cm\bmod{X^pK[[X^p]]}. \end{array} \end{equation} By Theorem~\ref{Theorem:DigitMadness2} as recast in the form of Proposition~\ref{Proposition:DigitMadness2}, along with formula (\ref{equation:Nuff2}) and the definitions, both sides of (\ref{equation:Nuff}) vanish unless $k\in C_q$. So now fix $c\in C_q^0$ and $g\in {\mathbb{N}}\cup\{0\}$ and put $$k=(c+1)q^g-1\in C_q$$ for the rest of the proof of the proposition. Also fix $f\in {\mathbb{N}}\cup\{0\}$ and put $$m=q^f(k+1)-1=(c+1)q^{f+g}-1\in C_q$$ for the rest of the proof. It is enough to evaluate the coefficient of $X^m$ in (\ref{equation:Nuff2}). 
By part (ii) of Theorem~\ref{Theorem:DigitMadness}, there is no term in the sum on the right side of (\ref{equation:Nuff2}) of degree $m$ unless $\ell\vert f$, in which case there is exactly one term, namely $$\left(\begin{array}{c} q^fk-1\\ \frac{q^f-1}{q^\ell-1} \end{array}\right)\alpha^{q^f}\beta^{\frac{q^f-1}{q^\ell-1}}X^m,$$ and by the theorem of Lucas, the binomial coefficient mod $p$ evaluates to $(-1)^{f/\ell}$. Therefore (\ref{equation:Nuff}) does indeed hold. \qed \subsection{Remarks} $\;$ (i) By formula (\ref{equation:Nuff2}), the $p$-digital well-ordering actually gives rise to a $\Gamma_{q,K}$-stable complete separated filtration of the quotient $K[[X]]^\times/K[[X^p]]^\times$ distinct from the $X$-adically induced one. Theorem~\ref{Theorem:MainResult} merely describes the structure of $K[[X]]^\times/K[[X^p]]^\times$ near the top of the ``$p$-digital filtration''. (ii) Computer experimentation based on formula (\ref{equation:MExpansion}) was helpful in making the discoveries detailed in this paper. We believe that continuation of such experiments could lead to further progress, e.g., to the discovery of a minimal set of generators for $K[[X]]^\times$ as a topological right $\Gamma_{q,K}$-module. \section{Proof of Theorem~\ref{Theorem:DigitMadness}}\label{section:DigitMadness} \begin{Lemma}\label{Lemma:DigitGames} Fix $(j,k,\ell,m)\in {\mathcal{A}}_p$. Put $$e=\ord_p(m+1),\;\;\;f=\ord_p k,\;\;\;g=\ord_p(k/p^f+1).$$ Then there exists a unique integer $r$ such that \begin{equation}\label{equation:DigitGames0} 0\leq r\leq e+\ell-1,\;\; r\equiv 0\bmod{\ell},\;\; j\equiv \frac{p^r-1}{p^\ell-1}\bmod{p^{e}}, \end{equation} and moreover \begin{equation}\label{equation:DigitGames1} f+g\geq e, \end{equation} \begin{equation}\label{equation:DigitGames2} \kappa_p(m)\geq \kappa_p(k). \end{equation} \end{Lemma} \noindent This lemma is the key technical result of the paper. 
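In the spirit of remark (ii) above, the statements of Theorem~\ref{Theorem:DigitMadness} and Lemma~\ref{Lemma:DigitGames} are easy to test numerically by enumerating $p$-admissible quadruples in a small box. A minimal sketch (plain Python; helper names are ours):

```python
from math import comb, gcd

def ord_p(p, n):
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def core(p, n):
    # kappa_p(n), computed straight from the defining formula.
    m = n // p ** ord_p(p, n) + 1
    return m // p ** ord_p(p, m) - 1

def key_p(p, n):
    # Sort key realizing the p-digital well-ordering <=_p.
    return (core(p, n), n // p ** ord_p(p, n), n)

p = 3
# Enumerate p-admissible quadruples (j, k, l, m) in a small box:
# (m, p) = 1, m = k + j(p^l - 1), and binom(k-1, j) nonzero mod p.
quads = [(j, k, l, k + j * (p ** l - 1))
         for l in range(1, 4)
         for k in range(1, 80)
         for j in range(1, k)          # comb(k-1, j) = 0 for j > k-1
         if gcd(k + j * (p ** l - 1), p) == 1
         and comb(k - 1, j) % p != 0]

for j, k, l, m in quads:
    assert key_p(p, k) < key_p(p, m)        # Theorem, part (i): k <_p m
    assert core(p, m) >= core(p, k)         # Lemma, inequality (DigitGames2)
    e = ord_p(p, m + 1)                     # Lemma, inequality (DigitGames1)
    f = ord_p(p, k)
    g = ord_p(p, k // p ** f + 1)
    assert f + g >= e
    if core(p, k) == core(p, m):            # Theorem, part (ii)
        assert j == (p ** f - 1) // (p ** l - 1)
```

For instance, $(j,k,\ell,m)=(4,9,1,17)$ is $3$-admissible with $\kappa_3(9)=\kappa_3(17)=1$, and indeed $j=4=(3^{\ord_3 9}-1)/(3-1)$.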
\subsection{Completion of the proof of the theorem, granting the lemma} Fix $(j,k,\ell,m)\in {\mathcal{A}}_p$. Let $e,f,g,r$ be as defined in Lemma~\ref{Lemma:DigitGames}. Since the number of digits in the minimal base $p$ expansion of $k$ cannot exceed the number of digits in the minimal base $p$ expansion of $m$, one has \begin{equation}\label{equation:DigitGames3} \delta_p(k)+f+g\leq \delta_p(m)+e. \end{equation} Combining (\ref{equation:DigitGames1}) and (\ref{equation:DigitGames3}), one has \begin{equation}\label{equation:DigitGames4} \delta_p(k)=\delta_p(m)\Rightarrow f+g=e. \end{equation} Now in general one has $$m+1=(\kappa_p(m)+1)p^e,\;\;\; k+p^f=(\kappa_p(k)+1)p^{f+g},$$ and hence $$ \kappa_p(k)=\kappa_p(m)\Rightarrow \left(j=\frac{p^{f}-1}{p^\ell-1}\;\mbox{and}\;e>g\right) $$ via (\ref{equation:DigitGames4}). Theorem~\ref{Theorem:DigitMadness} now follows via (\ref{equation:DigitGames2}) and the definition of the $p$-digital well-ordering. \qed \subsection{Proof of Lemma~\ref{Lemma:DigitGames}} Since $e$ is the number of trailing $(p-1)$'s in the minimal base $p$ expansion of $m$, the lemma is trivial in the case $e=0$. We therefore assume that $e>0$ for the rest of the proof. Let $$m=[m_1,\dots,m_t]_p\;\;\;(t>0,\;m_1>0,\;\;m_t>0)$$ be the minimal base $p$ expansion of $m$. For convenience, put $$d=\delta_p(m)\geq 0,\;\;\;m_\nu=0\;\mbox{for $\nu<1$}.$$ Then $$t=e+d,\;\;\;m_{d+1}=\cdots=m_{d+e}=p-1,\;\;\;m_{d}< p-1.$$ By hypothesis $$\left(\begin{array}{c} k-1\\ j \end{array}\right)=\left(\begin{array}{c} m-jp^\ell-1+j\\ m-jp^\ell-1 \end{array}\right)>0,$$ hence $$m>jp^\ell,$$ and hence the number of digits in the minimal base $p$ expansion of $jp^\ell$ does not exceed that of $m$. Accordingly, $$t> \ell$$ and one has a base $p$ expansion for $j$ of the form $$j=[j_1,\dots,j_{t-\ell}]_p,$$ which perhaps is not minimal.
For convenience, put $$j_\nu=0\;\mbox{for $\nu<1$ and also for $\nu>t-\ell$}.$$ This state of affairs is summarized by the ``snapshot'' $$m=[m_1,\dots,m_t]=[m_{1},\dots,m_{d},\underbrace{p-1,\dots,p-1}_e]_p,\;\;\kappa_p(m)=[m_{1},\dots,m_{d}]_p,$$ $$jp^\ell=[j_1,\dots,j_t]_p=[j_1,\dots,j_{t-\ell},\underbrace{0,\dots,0}_\ell]_p ,$$ which the reader should keep in mind as we proceed. We are ready now to prove the existence and uniqueness of $r$. One has $$m-jp^\ell-1=k-1-j=[m_1',\dots,m_d',p-1-j_{d+1},\dots, p-1-j_{t-1},p-2]_p,$$ where the digits $m'_1,\dots,m'_d$ are defined by the equation \begin{equation}\label{equation:Swivel} \kappa_p(m)-[j_{1},\dots,j_{d}]_p=[m'_1,\dots,m'_d]_p. \end{equation} By hypothesis and the theorem of Lucas, the addition of $k-1-j$ and $j$ in base $p$ requires no ``carrying'', and hence \begin{equation}\label{equation:BigDigit} k-1= \left\{\begin{array}{ll} \;[m_1'+j_{1-\ell},\dots,m_d'+j_{d-\ell},\\ \;p-1-j_{d+1}+j_{d+1-\ell},\dots, p-1-j_{d+e-1}+j_{d+e-1-\ell},p-2+j_{d+e-\ell}]_p. \end{array}\right. \end{equation} From the system of inequalities for the last $e+\ell$ digits of the base $p$ expansion of $jp^\ell$ implicit in (\ref{equation:BigDigit}), it follows that there exists $r_0\in {\mathbb{N}}\cup\{0\}$ such that \begin{equation}\label{equation:BigDigitBis} jp^\ell=[j_{1-\ell},\dots,j_{d-\ell}, \overbrace{0,\dots,0,\underbrace{\underbrace{1,0,\dots,0}_{\ell},\dots,\underbrace{1,0,\dots,0}_{\ell}}_{\mbox{\tiny $r_0$ blocks}},0}^{e+\ell}]_p. \end{equation} Therefore $r=r_0\ell$ has the required properties (\ref{equation:DigitGames0}). Uniqueness of $r$ is clear. For later use, note the relation \begin{equation}\label{equation:HeadScratcher} r\geq e\Leftrightarrow [j_{d-\ell+1},\dots,j_{d}]_p\neq 0\Rightarrow [j_{d-\ell+1},\dots,j_{d}]_p=p^{r-e}, \end{equation} which is easy to see from the point of view adopted here to prove (\ref{equation:DigitGames0}). 
By (\ref{equation:DigitGames0}) one has \begin{equation}\label{equation:DigitGames2.2} k+p^r-(m+1)+j' p^e (p^\ell-1)=0\;\;\;\mbox{for some $j'\in {\mathbb{N}}\cup\{0\}$,} \end{equation} and hence one has \begin{equation}\label{equation:DigitGames2.5} r\geq \min(f,e),\;\;\;f\geq \min(r,e). \end{equation} This proves (\ref{equation:DigitGames1}), since either one has $f\geq e$, in which case (\ref{equation:DigitGames1}) holds trivially, or else $f<e$, in which case $r=f$ by (\ref{equation:DigitGames2.5}), and hence (\ref{equation:DigitGames1}) holds by (\ref{equation:DigitGames2.2}). Put $$k-1=[k'_1,\dots,k'_{d+e}]_p,\;\;\;\;{\mathbf{1}}_{r\geq e}=\left\{\begin{array}{ll} 1&\mbox{if $r\geq e$,}\\ 0&\mbox{if $r<e$.} \end{array}\right. $$ Comparing (\ref{equation:BigDigit}) and (\ref{equation:BigDigitBis}), we see that the digits $k'_{d+1},\dots,k'_{d+e}$ are all $(p-1)$'s with at most one exception, and the exceptional digit if it exists is a $p-2$. Further, one has $$k'_{d+1}=\dots=k'_{d+e}=p-1\Leftrightarrow f\geq e\Leftrightarrow {\mathbf{1}}_{r\geq e}=1$$ by (\ref{equation:DigitGames2.5}). Therefore one has $$\kappa_p(k)\leq[k'_1,\dots,k'_d]+{\mathbf{1}}_{r\geq e}.$$ Finally, via (\ref{equation:Swivel}), (\ref{equation:BigDigit}) and (\ref{equation:HeadScratcher}), it follows that $$ \begin{array}{rcl} \kappa_p(k)&\leq &[m_1'+j_{1-\ell},\dots,m_d'+j_{d-\ell}]_p+{\mathbf{1}}_{r\geq e}\\ &=&\kappa_p(m)-[j_1,\dots,j_d]_p+[j_{1-\ell},\dots,j_{d-\ell}]_p+{\mathbf{1}}_{r\geq e}\\ &=&\kappa_p(m)-[j_{1-\ell},\dots,j_d]_p+[j_{1-\ell},\dots,j_{d-\ell}]_p+{\mathbf{1}}_{r\geq e}\\ &=&\kappa_p(m)-[j_{d-\ell+1},\dots,j_{d}]_p+{\mathbf{1}}_{r\geq e} \\ &&-[j_{1-\ell},\dots,j_{d-\ell},\underbrace{0,\dots,0}_\ell]_p+[j_{1-\ell},\dots,j_{d-\ell}]_p\\ &=&\kappa_p(m)-{\mathbf{1}}_{r\geq e}(p^{r-e}-1)-(p^\ell-1)[j_{1-\ell},\dots,j_{d-\ell}]_p\\ &\leq &\kappa_p(m). \end{array} $$ Thus (\ref{equation:DigitGames2}) holds and the proof of the lemma is complete. 
\qed \section{Proof of Theorem~\ref{Theorem:DigitMadness2}}\label{section:DigitMadness2} \subsection{Further digital apparatus} Put $\lambda=\ord_p q$. For each $c\in {\mathbb{N}}$, let $$\bracket{c}_q=\min\{n\in {\mathbb{N}}\vert n\equiv c\bmod{q-1}\},\;\;\;\tau_p(c)=c/p^{\ord_pc}. $$ Note that $$0<\bracket{c}_q<q,\;\;\; \bracket{c}_q=\bracket{c'}_q\Leftrightarrow c\equiv c'\bmod{q-1}$$ for all $c,c'\in {\mathbb{N}}$. Given $c\in {\mathbb{N}}$, and writing $\bracket{c}_q=[c_1,\dots,c_\lambda]_p$, note that $$\{c_1,\dots,c_\lambda\}\neq \{0\},\;\; \bracket{pc}_q=[c_2,\dots,c_\lambda,c_1]_p,$$ $$\langle c\rangle_q\geq \tau_p(\bracket{c}_q)=[c_1,\dots,c_{\max\{i\vert c_i\neq 0\}}]_p\geq \kappa_p(\langle c\rangle_q).$$ \begin{Lemma}\label{Lemma:Necklace2} $\langle p^ic\rangle_q\leq p^i-1\Rightarrow \tau_p(\langle c\rangle_q)\leq\langle p^ic\rangle_q$ for $c\in {\mathbb{N}}$ and $i\in {\mathbb{N}}\cap(0,\lambda)$. \end{Lemma} \begin{Lemma}\label{Lemma:Necklace3} $\displaystyle \min_{i=0}^{\lambda-1}\tau_p(\langle p^ic+1\rangle_q)=1+\min_{i=0}^{\lambda-1}\kappa_p(\langle p^ic\rangle_q)=1+\kappa_p(\mu_q(c))$ for $c\in {\mathbb{N}}$. \end{Lemma} \begin{Lemma}\label{Lemma:Necklace4} $i\not\equiv 0\bmod{\lambda}\Rightarrow p^i(\mu_q(c)+1)-1\not\in O_q(c)$ for $i,c\in {\mathbb{N}}$. \end{Lemma} \subsection{Completion of the proof of the theorem, granting the lemmas} Relation (\ref{equation:DigitMadness2}) holds by Lemma~\ref{Lemma:Necklace3}. Relation (\ref{equation:DigitMadness2bis}) holds by Lemma~\ref{Lemma:Necklace4}. \qed \subsection{Proof of Lemma~\ref{Lemma:Necklace2}} Write $\langle c\rangle_q=[c_1,\dots,c_\lambda]_p$. By hypothesis $$\langle p^ic\rangle_q= [\underbrace{0,\dots,0}_{\lambda-i},c_1,\dots,c_i]_p,\;\;\; \langle c\rangle_q=[c_1,\dots,c_i,\underbrace{0,\dots,0}_{\lambda-i}]_p, $$ and hence $\tau_p(\langle c\rangle_q)\leq \langle p^ic\rangle_q$.
\qed \subsection{Proof of Lemma~\ref{Lemma:Necklace3}} Since $$\mu_q(c)=(\kappa_p(\mu_q(c))+1)p^g-1\in O_q(c),$$ for some $g\in {\mathbb{N}}\cup\{0\}$, one has $$\kappa_p(\mu_q(c))+1\geq \min_{i=0}^{\lambda-1} \min_{j=0}^{\lambda-1} \langle p^i(p^jc+1)\rangle_q.$$ One has $$ \tau_p(\langle n+1\rangle_q)\geq 1+\kappa_p(\langle n\rangle_q) $$ for all $n\in {\mathbb{N}}$, as can be verified by a somewhat tedious case analysis which we omit. Clearly, the inequalities $\geq$ hold in the statement we are trying to prove. Therefore it will be enough to prove that $$ \min_{i=0}^{\lambda-1} \min_{j=0}^{\lambda-1} \langle p^i(p^jc+1)\rangle_q\geq \min_{j=0}^{\lambda-1} \tau_p(\langle p^jc+1\rangle_q). $$ Fix $i=1,\dots,\lambda-1$ and $j=0,\dots,\lambda-1$. It will be enough just to prove that \begin{equation}\label{equation:AlmostLastNuff} \langle p^i(p^jc+1)\rangle_q <\tau_p(\langle p^jc+1\rangle_q) \Rightarrow \langle p^i(p^jc+1)\rangle_q \geq \tau_p(\langle p^{i+j}c+1\rangle_q). \end{equation} But by the preceding lemma, under the hypothesis of (\ref{equation:AlmostLastNuff}), one has $$p^i-1<\langle p^i(p^jc+1)\rangle_q$$ and hence $$\langle p^i(p^jc+1)\rangle_q= \langle p^{i+j}c+1\rangle_q+p^i-1\geq \tau_p( \langle p^{i+j}c+1\rangle_q).$$ Thus (\ref{equation:AlmostLastNuff}) is proved, and with it the lemma. \qed \subsection{Proof of Lemma~\ref{Lemma:Necklace4}} We may assume without loss of generality that \linebreak $0<i<\lambda$ and $c=\mu_q(c)$. By the preceding lemma $c<q$. Write $c=[c_1,\dots,c_\lambda]_p$ and define $c_k$ for all $k$ by enforcing the rule $c_{k+\lambda}=c_k$. 
Supposing that the desired conclusion does not hold, one has $$p^{\lambda-i}[c_1,\dots,c_\lambda,\underbrace{p-1,\dots,p-1}_{i}]_p\equiv [c_1,\dots,c_\lambda,\underbrace{p-1,\dots,p-1}_{i},\underbrace{0,\dots,0}_{\lambda-i}]_p$$ $$ \equiv [c_1,\dots,c_\lambda]_p+[\underbrace{p-1,\dots,p-1}_{i},\underbrace{0,\dots,0}_{\lambda-i}]_p $$ $$ \equiv [c_1,\dots,c_\lambda]_p-[\underbrace{0,\dots,0}_{i},\underbrace{p-1,\dots,p-1}_{\lambda-i}]_p$$ $$\equiv [c_{1+m},\dots,c_{\lambda+m}]_p=\bracket{p^mc}_q$$ for some integer $m$, where all the congruences are modulo $q-1$. It is impossible to have $c_1=\cdots=c_i=0$ since this would force the frequency of occurrence of the digit $0$ to differ in the digit strings $c_1,\dots,c_\lambda$ and $c_{1+m},\dots,c_{\lambda+m}$, which after all are just cyclic permutations one of the other. Similarly we can rule out the possibility $c_{i+1}=\cdots=c_\lambda=p-1$. Thus the base $p$ expansion of $c$ takes the form $$c=[\underbrace{0,\dots,0}_{\alpha}, \underbrace{\bullet,\dots,\bullet}_{\beta}, \underbrace{p-1,\dots,p-1}_{\gamma} ]_p,$$ where $$\alpha< i,\;\;\beta>0,\;\;\;\gamma<\lambda-i,\;\;\; \alpha+\beta+\gamma=\lambda,$$ and the bullets hold the place of a digit string not beginning with a $0$ and not ending with a $p-1$. Then one has $$\begin{array}{rcl} 1+\kappa_p(c)&=&(c+1)/p^\gamma\\ &>&(c+1-p^{\lambda-i})/p^\gamma+1\;\;(\mbox{strict inequality!})\\ &\geq &\tau_p(c+1-p^{\lambda-i})+1\\ &=&\tau_p(\langle p^mc\rangle_q)+1\\ &\geq &\kappa_p(\langle p^mc\rangle_q)+1 \end{array} $$ in contradiction to the preceding lemma. This contradiction finishes the proof. \qed \end{document}
\begin{document} \twocolumn[ \icmltitle{Greedy Column Subset Selection: \\ New Bounds and Distributed Algorithms} \icmlauthor{Jason Altschuler}{[email protected]} \icmladdress{Princeton University, Princeton, NJ 08544} \icmlauthor{Aditya Bhaskara}{[email protected]} \icmladdress{School of Computing, 50 S. Central Campus Drive, Salt Lake City, UT 84112} \icmlauthor{Gang (Thomas) Fu}{[email protected]} \icmlauthor{Vahab Mirrokni}{[email protected]} \icmlauthor{Afshin Rostamizadeh}{[email protected]} \icmlauthor{Morteza Zadimoghaddam}{[email protected]} \icmladdress{Google, 76 9th Avenue, New York, NY 10011} \icmlkeywords{column selection, greedy algorithms, coresets} \vskip 0.3in ] \begin{abstract} The problem of column subset selection has recently attracted a large body of research, with feature selection serving as one obvious and important application. Among the techniques that have been applied to solve this problem, the greedy algorithm has been shown to be quite effective in practice. However, theoretical guarantees on its performance have not been explored thoroughly, especially in a distributed setting. In this paper, we study the greedy algorithm for the column subset selection problem from a theoretical and empirical perspective and show its effectiveness in a distributed setting. In particular, we provide an improved approximation guarantee for the greedy algorithm which we show is tight up to a constant factor, and present the first distributed implementation with provable approximation factors. We use the idea of randomized composable core-sets, developed recently in the context of submodular maximization. Finally, we validate the effectiveness of this distributed algorithm via an empirical study. \end{abstract} \section{Introduction} Recent technological advances have made it possible to collect unprecedented amounts of data. However, extracting patterns of information from these high-dimensional massive datasets is often challenging. 
How do we automatically determine, among millions of measured features (variables), which are informative, and which are irrelevant or redundant? The ability to select such features from high-dimensional data is crucial for computers to recognize patterns in complex data in ways that are fast, accurate, and even human-understandable \cite{Guyon}. An efficient method for feature selection receiving increasing attention is Column Subset Selection (CSS). CSS is a constrained low-rank-approximation problem that seeks to approximate a matrix (e.g., an instances-by-features matrix) by projecting it onto a space spanned by only a few of its columns (features). Formally, given a matrix $A$ with $n$ columns, and a target rank $k < n$, we wish to find a size-$k$ subset $S$ of $A$'s columns such that each column $A_i$ of $A$ ($i \in \{1, \dots, n\}$) is contained as much as possible in the subspace $\text{span}(S)$, in terms of the Frobenius norm: \begin{align*} \text{arg max}_{S \text{ contains $k$ of $A$'s columns}} \sum_{i=1}^n \|\text{proj}(A_i \; | \; \text{span}(S))\|_2^2 \end{align*} While similar in spirit to general low-rank approximation, CSS offers advantages in flexibility, interpretability, and efficiency during inference. CSS is an unsupervised method and does not require labeled data, which is especially useful when labeled data is sparse. We note, on the other hand, that unlabeled data is often very abundant, so scalable methods like the one we present are often needed. Furthermore, by subselecting features, as opposed to generating new features via an arbitrary function of the input features, we keep the semantic interpretation of the features intact. This is especially important in applications that require interpretable models. A third important advantage is the efficiency of applying the solution of the CSS feature selection problem during inference.
Compared to PCA or other methods that require a matrix-matrix multiplication to project input features into a reduced space during inference time, CSS only requires selecting a subset of feature values from a new instance vector. This is especially useful for latency sensitive applications and when the projection matrix itself may be prohibitively large, for example in restricted memory settings. While there have been significant advances in CSS~\cite{Boutsidis1,Boutsidis2,Guruswami}, most of the algorithms are either impractical and not applicable in a distributed setting for large datasets, or they do not have good (multiplicative $1 - \varepsilon$) provable error bounds. Among efficient algorithms studied for the CSS problem is the simple {\em greedy algorithm}, which iteratively selects the best column and keeps it. Recent work shows that it does well in practice and even in a distributed setting~\cite{Farahat1, Farahat2} and admits a performance guarantee \cite{Civril1}. However, the known guarantees depend on an arbitrarily large matrix-coherence parameter, which is unsatisfactory. Also, even though the algorithm is relatively fast, additional optimizations are needed to scale it to datasets with millions of features and instances. \subsection{Our contributions} Let $A \in \mathbb{R}^{m \times n}$ be the given matrix, and let $k$ be the target number of columns. Let $OPT_k$ denote the {\em optimal} set of columns, i.e., one that {\em covers} the maximum Frobenius mass of $A$. Our contributions are as follows. {\em Novel analysis of Greedy.} For any $\varepsilon > 0$, we show that the natural greedy algorithm (Section~\ref{section-2}), after $r = \frac{k}{\sigma_{\min}(OPT_k) \varepsilon}$ steps, gives an objective value that is within a $(1-\varepsilon)$ factor of the optimum. We also give a matching lower bound, showing that $\frac{k}{\sigma_{\min}(OPT_k) \varepsilon}$ is tight up to a constant factor. 
Here $\sigma_{\min}(OPT_k)$ is the smallest squared singular value of the {\em optimal} set of columns (after scaling to unit vectors). Our result is similar in spirit to those of~\cite{Civril1, Liberty}, but with an important difference. Their bound on $r$ depends on the {\em least} $\sigma_{\min}(S)$ over \textit{all} $S$ of size $k$, while ours depends on $\sigma_{\min}(OPT_k)$. Note that these quantities can differ significantly. For instance, if the data has even a little bit of redundancy (e.g., a few columns that are near duplicates), then there exist $S$ for which $\sigma_{\min}$ is tiny, but the optimal set of columns could be reasonably well-conditioned (in fact, we would {\em expect} the optimal set of columns to be fairly well conditioned). {\em Distributed Greedy.} We consider a natural distributed implementation of the greedy algorithm (Section~\ref{section-2}). Here, we show that an interesting phenomenon occurs: even though partitioning the input does not work in general (as in coreset based algorithms), {\em randomly} partitioning works well. This is inspired by a similar result on submodular maximization~\cite{Mirrokni}. Further, our result implies a $2$-pass streaming algorithm for the CSS problem in the {\em random arrival} model for the columns. We note that if the columns each have sparsity $\phi$,~\cite{Boutsidis2015} gives an algorithm with total communication of $O(\frac{sk\phi}{\varepsilon} + \frac{sk^2}{\varepsilon^4})$. Their algorithm works for ``worst case'' partitioning of the columns into machines and is much more intricate than the greedy algorithm. In contrast, our algorithm is very simple, and for a random partitioning, the communication is just the first term above, along with an extra $\sigma_{\min}(OPT)$ term. Thus depending on $\sigma_{\min}$ and $\varepsilon$, each of the bounds could be better than the other.
We show that the recent result of~\cite{Mirzasoleiman} (once again, on submodular optimization) can be extended to the case of CSS, improving the running time significantly. We then compare our algorithms (in accuracy and running times) to various well-studied CSS algorithms. (Section 6.) \subsection{Related Work} The CSS problem is one of the central problems related to matrix approximation. Exact solution is known to be UG-hard~\cite{Civril2}, and several approximation methods have been proposed over the years. Techniques such as importance sampling \cite{Drineas1, Frieze}, adaptive sampling \cite{Deshpande1}, volume sampling \cite{Deshpande2, Deshpande4}, leverage scores \cite{Drineas-Leverage}, and projection-cost preserving sketches \cite{Cohen} have led to a much better understanding of the problem. \cite{Guruswami} gave the optimal dependence between column sampling and low-rank approximation. Due to the numerous applications, much work has been done on the implementation side, where adaptive sampling and leverage scores have been shown to perform well. A related, extremely simple algorithm is the greedy algorithm, which turns out to perform well and be scalable \cite{Farahat1, Farahat2}. This was first analyzed by~\cite{Civril1}, as we discussed. There is also substantial literature about distributed algorithms for CSS \cite{Pi, Feldman, Cohen, Farahat3, Farahat4, Boutsidis2015}. In particular, \cite{Farahat3, Farahat4} present distributed versions of the greedy algorithm based on MapReduce. Although they do not provide theoretical guarantees, their experimental results are very promising. The idea of composable coresets has been applied explicitly or implicitly to several problems~\cite{FeldmanSS13,BalcanEL13,VahabPODS2014}. Quite recently, for some problems in which coreset methods do not work in general, surprising results have shown that randomized variants of them give good approximations~\cite{BarbosaENW15,Mirrokni}. 
We extend this framework to the CSS problem. \subsection{Background and Notation} We use the following notation throughout the paper. The set of integers $\{1, \dots, n\}$ is denoted by $[n]$. For a matrix $A \in \mathbb{R}^{m \times n}$, $A_j$ denotes the $j$th column ($A_j \in \mathbb{R}^m$). Given $S \subseteq [n]$, $A[S]$ denotes the submatrix of $A$ containing columns indexed by $S$. The projection matrix $\Pi_A$ projects onto the column span of $A$. Let $\norm{A}_F$ denote the Frobenius norm, i.e., $\sqrt{\sum_{i,j} A_{i, j}^2}$. We write $\sigma_{\min}(A)$ to denote the minimum \textit{squared} singular value, i.e., $\inf_{x:\norm{x}_2 = 1} \frac{\|Ax\|_2^2}{\|x\|_2^2}$. We abuse notation slightly, and for a set of vectors $V$, we write $\sigma_{\min}(V)$ for the $\sigma_{\min}$ of the matrix with columns $V$. \iffalse {\bf Submodular optimization.} Given a finite set $\Omega$ and a set function $f : 2^{\Omega} \to \mathbb{R}$, define the marginal gain of adding an element $x \in \Omega$ to a set $S \subseteq \Omega$ by $\Delta(x | S) = f(S \cup \{x\}) - f(S)$. $f$ is said to be submodular if $\Delta(x | S) \geq \Delta(x | T)$ for any subsets $S \subseteq T \subseteq \Omega$ and any element $x \in \Omega \setminus T$. This is a formalization of the well-known economic principle of decreasing marginal utility. $f$ is further said to be nonnegative if $f(S) \geq 0$ for any $S \subseteq \Omega$, and monotonically nondecreasing if $f(S) \leq f(T)$ for any $S \subseteq T \subseteq \Omega$. The theory of maximizing submodular functions subject to a cardinality constraint has been well studied, and has been shown to be NP-hard [Nemhauser and Wolsey 1978; Feige 1998]. However, it is a key result in combinatorial optimization that a simple greedy algorithm to this problem for nonnegative, monotone nondecreasing submodular functions admits a $1 - \frac{1}{e}$ constant factor approximation [Nemhauser '78]. 
\fi \begin{defin}\label{defn:css-problem} Given a matrix $A \in \mathbb{R}^{m \times n}$ and an integer $k \le n$, the \textbf{Column Subset Selection (CSS) Problem} asks to find \[ \text{arg max}_{S \subseteq [n], |S| = k} \norm{\Pi_{A[S]}A}_F^2, \] i.e., the set of columns that {\em best explain} the full matrix $A$. \end{defin} We note that it is also common to cast this as a minimization problem, with the objective being $\norm{A - \Pi_{A[S]} A}_F^2$. While the exact optimization problems are equivalent, obtaining multiplicative approximations for the minimization version could be harder when the matrix is low-rank. For a set of vectors $V$ and a matrix $M$, we denote \[ f_M(V) = \norm{\Pi_V M}_F^2. \] \iffalse In this article, instead of minimizing the unexplained (error) part $\|A - \Pi_{A[S]}A\|_F^2$ of $A$, we maximize the explained part $\|\Pi_{A[S]}A\|_F^2$ of $A$. Formally, \begin{align} \text{arg max}_{S \subseteq [n], |S| = k} \|\Pi_{A[S]}A\|_F^2 = \text{arg min}_{S \subseteq [n], |S| = k} \|A - \Pi_{A[S]}A\|_F^2 \end{align} Thus, a subset of columns that maximizes explanation of $A$ will also minimize the unexplained error. \fi Also, the case when $M$ is a single vector will be important. For any vector $u$, and a set of vectors $V$, we write \[ f_u(V) = \norm{\Pi_V u}_2^2. \] \begin{remark} \label{rem:not-submodular} Note that $f_M (V)$ can be viewed as the extent to which we can {\em cover} matrix $M$ using vectors $V$. However, unlike combinatorial covering objectives, our definition is not submodular, or even subadditive. \iffalse \begin{defin} \label{f definition matrix} Given $A \in \mathbb{R}^{m \times n}$, define the function: $f_A : \mathcal{P}(\mathbb{R}^m) \to \mathbb{R}$ by: $$f_A(S) = \sum_{j=1}^n f_{A_j}(S)$$ over the columns $A_j \in \mathbb{R}^m$ of $A$. \end{defin} \fi As an example, consider covering the following $A$ using its own columns. Here, $f_A(\{A_1, A_2\}) = \|A\|_F^2 > f_A(\{A_1\}) + f_A(\{A_2\})$.
\[ A = \left( \begin{array}{ccc} 1 & 0 & 1 \\ 1 & -1 & 0 \\ 0 & 1 & 1 \end{array} \right)\] \end{remark} \section{Greedy Algorithm for Column Selection} \label{section-2} Let us state our algorithm and analysis in a slightly general form. Suppose we have two matrices $A, B$ with the same number of rows and $n_A$, $n_B$ columns respectively. The $\textsf{GCSS}(A, B, k)$ problem is that of finding a subset $S$ of columns of $B$ that maximizes $f_A(S)$ subject to $|S|=k$. Clearly, if $B = A$, we recover the CSS problem stated earlier. Also, note that scaling the columns of $B$ does not affect the solution, so we assume that the columns of $B$ are unit vectors. The greedy procedure iteratively picks columns of $B$ as follows: \begin{algorithm} \label{alg:greedy} \caption{$\textsc{Greedy}$($A \! \in \! \mathbb{R}^{m \times n_A}$, $B \! \in\!
\mathbb{R}^{m \times n_B}$, $k \leq n_B$)} \begin{algorithmic}[1] \STATE $S \leftarrow \emptyset$ \FOR{$i = 1:k$} \STATE Pick column $B_j$ that maximizes $f_A(S \cup B_j)$ \STATE $S \leftarrow S \cup \{B_j\}$ \ENDFOR \STATE Return $S$ \end{algorithmic} \end{algorithm} Step (3) is the computationally intensive step in $\textsc{Greedy}$ -- we need to find the column that gives the largest {\em marginal gain}, i.e., $f_A(S \cup B_j) - f_A(S)$. In Section~\ref{section-5}, we describe different techniques to speed up the calculation of the marginal gain, while obtaining a $1-\varepsilon$ approximation to the optimum $f(\cdot)$ value. Let us briefly mention them here. {\em Projection to reduce the number of rows.} We can left-multiply both $A$ and $B$ by an $r \times m$ Gaussian random matrix. For $r \ge \frac{k\log n}{\varepsilon^2}$, this process is well known to preserve $f_A(\cdot)$ for any $k$-subset of the columns of $B$ (see~\cite{Sarlos} or Appendix Section~\ref{app:random-projections} for details). {\em Projection-cost preserving sketches.} Using recent results from \cite{Cohen}, we can project each {\em row} of $A$ onto a random $O(\frac{k}{\varepsilon^2})$-dimensional space, and then work with the resulting matrix. Thus we may assume that the number of columns in $A$ is $O(\frac{k}{\varepsilon^2})$. This allows us to compute $f_A(\cdot)$ efficiently. {\em Lazier-than-lazy greedy.} \cite{Mirzasoleiman} recently proposed the first algorithm that achieves a constant factor approximation for maximizing submodular functions with a {\em linear} number of marginal gain evaluations. We show that a similar analysis holds for $\textsf{GCSS}$, even though the cost function is not submodular. We also use some simple yet useful ideas from \cite{Farahat2} to compute the marginal gains (see Section~\ref{section-5}). \subsection{Distributed Implementation} We also study a distributed version of the greedy algorithm, shown below (Algorithm~\ref{alg:cs-greedy}); here $\ell$ is the number of machines. \begin{algorithm} \label{alg:cs-greedy} \caption{$\textsc{Distgreedy}$($A$, $B$, $k$, $\ell$)} \begin{algorithmic}[1] \STATE {\em Randomly} partition the columns of $B$ into $T_1, \dots, T_{\ell}$ \STATE (Parallel) compute $S_i \leftarrow \textsc{Greedy}(A, T_i, \frac{32k}{\sigma_{\min}(OPT)})$ \STATE (Single machine) aggregate the $S_i$, and compute $S \leftarrow \textsc{Greedy}(A, \cup_{i=1}^{\ell}S_i, \frac{12k}{\sigma_{\min}(OPT)})$ \STATE Return $\text{arg max}_{S' \in \{S, S_1,\dots, S_{\ell}\}} f_A(S')$ \end{algorithmic} \end{algorithm} As mentioned in the introduction, the key here is that the partitioning is done {\em randomly}, in contrast to most results on {\em composable summaries}. We also note that machine $i$ only sees columns $T_i$ of $B$, but requires evaluating $f_A(\cdot)$ on the full matrix $A$ when running \textsc{Greedy}.\footnote{It is easy to construct examples in which splitting both $A$ and $B$ fails badly.} The way to implement this is again via projection-cost preserving sketches. (In practice, keeping a small sample of the columns of $A$ works as well.) The sketch is first passed to all the machines, and they all use it to evaluate $f_A(\cdot)$. We now turn to the analysis of the single-machine and distributed versions of the greedy algorithm.
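Before turning to the analysis, it may help to see $\textsc{Greedy}$ in code. The following is a naive NumPy sketch of Algorithm~\ref{alg:greedy} (our illustration only: it recomputes an orthonormal basis for every candidate, which is exactly the cost that the optimizations of Section~\ref{section-5} are designed to avoid).

```python
import numpy as np

def f(A, cols):
    """f_A(S) = ||Pi_S A||_F^2, via an orthonormal basis of span(S).
    Assumes the vectors in cols are linearly independent."""
    if not cols:
        return 0.0
    Q, _ = np.linalg.qr(np.column_stack(cols))
    return float(np.linalg.norm(Q.T @ A) ** 2)

def greedy(A, B, k):
    """Algorithm 1: greedily pick k columns of B maximizing f_A."""
    S, chosen = [], []
    for _ in range(k):
        gains = [f(A, S + [B[:, j]]) for j in range(B.shape[1])]
        j = int(np.argmax(gains))
        chosen.append(j)
        S.append(B[:, j])
    return chosen
```

For the CSS problem itself one would call `greedy(A, A, k)`. One caveat: on linearly dependent candidate sets, `np.linalg.qr` can return a basis strictly larger than the intended span, so a robust implementation would use a rank-revealing factorization instead.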
\section{Performance analysis of \textsc{Greedy}} \label{section-3} The main result we prove is the following, which shows that by taking only slightly more than $k$ columns, we are within a $1 -\varepsilon$ factor of the optimal solution of size $k$. \begin{thm} \label{thm:greedy-main} Let $A \in \mathbb{R}^{m \times n_A}$ and $B \in \mathbb{R}^{m \times n_B}$. Let $OPT_k$ be a set of columns from $B$ that maximizes $f_A(S)$ subject to $|S| = k$. Let $\varepsilon > 0$ be any constant, and let $T_r$ be the set of columns output by $\textsc{Greedy}(A, B, r)$, for $r = \frac{16k}{\varepsilon \sigma_{\min}(OPT_k)}$. Then we have $$f_A(T_r) \geq (1 - \varepsilon) f_A(OPT_k).$$ \end{thm} We show in Appendix Section~\ref{app:tight-ex} that this bound is tight up to a constant factor, with respect to $\varepsilon$ and $\sigma_{\min}(OPT_k)$. Also, we note that $\textsf{GCSS}$ is a harder problem than $\textsf{MAX-COVERAGE}$, implying that if we can choose only $k$ columns, it is impossible to approximate to a ratio better than $(1-\frac{1}{e}) \approx 0.63$, unless P=NP. (In practice, $\textsc{Greedy}$ does much better, as we will see.) The basic proof strategy for Theorem~\ref{thm:greedy-main} is similar to that of maximizing submodular functions, namely showing that in every iteration, the value of $f(\cdot)$ increases significantly. The key lemma is the following. \begin{lemma} \label{lem:large-gain} Let $S, T$ be two sets of columns, with $f_A(S) \ge f_A(T)$. Then there exists $v \in S$ such that \[ f_A(T \cup v) - f_A(T) \ge \sigma_{\min}(S) \frac{\big(f_A(S) - f_A(T)\big)^2}{4|S|f_A(S)}.\] \end{lemma} Theorem~\ref{thm:greedy-main} follows easily from Lemma~\ref{lem:large-gain}, as we show at the end of the section. Thus let us first focus on proving the lemma. Note that for submodular $f$, the analogous lemma simply has $\frac{f(S) - f(T)}{|S|}$ on the right-hand side (RHS).
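Lemma~\ref{lem:large-gain} is easy to sanity-check numerically. The snippet below (our own illustration, not part of the proof) draws random instances with unit columns in $S$ and compares the best single-column gain against the bound on the RHS.

```python
import numpy as np

def f(A, V):
    """f_A(V): squared Frobenius norm of the projection of A onto span(V)."""
    if V.shape[1] == 0:
        return 0.0
    Q, _ = np.linalg.qr(V)
    return float(np.linalg.norm(Q.T @ A) ** 2)

def check_lemma(seed):
    """Return (best single-column gain, RHS bound of the key lemma), or None
    if the hypothesis f_A(S) >= f_A(T) fails for this random instance."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((6, 5))
    S = rng.standard_normal((6, 3))
    S /= np.linalg.norm(S, axis=0)   # unit columns, as assumed for B
    T = rng.standard_normal((6, 1))
    fS, fT = f(A, S), f(A, T)
    if fS < fT:
        return None
    sigma = np.linalg.svd(S, compute_uv=False)[-1] ** 2  # smallest squared singular value
    best = max(f(A, np.column_stack([T, S[:, i]])) - fT for i in range(S.shape[1]))
    bound = sigma * (fS - fT) ** 2 / (4 * S.shape[1] * fS)
    return best, bound
```

Over many random seeds, `best >= bound` holds every time, as the lemma predicts.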
The main ingredient in the proof of Lemma~\ref{lem:large-gain} is its {\em single vector} version: \begin{lemma}\label{lem:one-vector} Let $S, T$ be two sets of columns, with $f_u(S) \ge f_u(T)$. Suppose $S=\{v_1, \dots, v_k\}$. Then \[ \sum_{i=1}^k \Big( f_u(T \cup v_i) - f_u(T) \Big) \ge \sigma_{\min}(S) \frac{\big(f_u(S) - f_u(T)\big)^2}{4f_u(S)}.\] \end{lemma} Let us first see why Lemma~\ref{lem:one-vector} implies Lemma~\ref{lem:large-gain}. Observe that for any set of columns $T$, $f_A (T) = \sum_{j} f_{A_j} (T)$ (sum over the columns), by definition. For a column $j$, let us define $\delta_j = \min \{ 1, \frac{f_{A_j}(T)}{f_{A_j}(S)}\}$. Now, using Lemma~\ref{lem:one-vector} and plugging in the definition of $\delta_j$, we have \begin{align} & \frac{1}{\sigma_{\min}(S)} \sum_{i=1}^k \big( f_A(T \cup v_i) - f_A(T) \big) \label{eq:start}\\ & \quad = \frac{1}{\sigma_{\min}(S)} \sum_{j = 1}^n \sum_{i=1}^k \big( f_{A_j}(T \cup v_i) - f_{A_j}(T) \big) \notag\\ & \quad \geq \sum_{j=1}^n \frac{ (1-\delta_j)^2 f_{A_j}(S)}{4} \label{eq:temp3}\\ & \quad = \frac{f_A(S)}{4} \sum_{j=1}^n (1 - \delta_j)^2 \frac{f_{A_j}(S)}{f_A(S)} \label{eq:temp4}\\ & \quad \geq \frac{f_A(S)}{4} \left( \sum_{j=1}^n (1 - \delta_j) \frac{f_{A_j}(S)}{f_A(S)}\right)^2 \label{eq:temp5} \\ & \quad = \frac{1}{4 f_A(S)} \Big(\sum_{j=1}^n \max\{ 0, f_{A_j}(S) - f_{A_j}(T) \}\Big)^2 \label{eq:temp6} \\ & \quad \ge \frac{1}{4 f_A(S)} \Big(f_A(S) - f_A(T) \Big)^2 \label{eq:temp8} \end{align} (For columns $j$ with $f_{A_j}(T) \ge f_{A_j}(S)$ we have $\delta_j = 1$, so the corresponding term in \eqref{eq:temp3} is zero and the inequality holds trivially, since marginal gains are nonnegative; for the remaining columns we apply Lemma~\ref{lem:one-vector} with $u = A_j$.) To get \eqref{eq:temp5}, we used Jensen's inequality ($\mathbb{E}[X^2] \geq ( \mathbb{E}[X])^2$) treating $\frac{f_{A_j}(S)}{f_{A}(S)}$ as a probability distribution over indices $j$. Thus it follows that there exists an index $i$ for which the gain is at least a $\frac{1}{|S|}$ fraction of this sum, proving Lemma~\ref{lem:large-gain}. \begin{proof}[Proof of Lemma~\ref{lem:one-vector}] Let us first analyze the quantity $f_u(T \cup v_i) - f_u(T)$, for some $v_i \in S$.
As mentioned earlier, we may assume the $v_i$ are normalized. If $v_i \in \text{span}(T)$, this quantity is $0$, so we can assume that such $v_i$ have been removed from $S$. Now, adding $v_i$ to $T$ gives a gain because of the component of $v_i$ orthogonal to $T$, i.e., $v_i - \Pi_T v_i$, where $\Pi_T$ denotes the projector onto $\text{span}(T)$. Define \[ v_i' = \frac{v_i - \Pi_T v_i}{\norm{v_i - \Pi_T v_i}_2}.\] By definition, $\text{span}(T \cup v_i) = \text{span}(T \cup v_i')$. Thus the projection of a vector $u$ onto $\text{span}(T \cup v_i')$ is $\Pi_T u + \iprod{u, v_i'} v_i'$, which is a vector whose squared length is $\norm{\Pi_T u}_2^2 + \iprod{u, v_i'}^2 = f_u(T) + \iprod{u, v_i'}^2$. This implies that \begin{equation}\label{eq:gain-single} f_u(T \cup v_i ) - f_u(T) = \iprod{u, v_i'}^2. \end{equation} Thus, to show the lemma, we need a lower bound on $\sum_i \iprod{u, v_i'}^2$. Let us start by observing that a more explicit expression for $f_u(S)$, the squared length of the projection of $u$ onto $\text{span}(S)$, is $f_u(S) = \max_{x \in \text{span}(S), \norm{x}_2 = 1} \iprod{u, x}^2$. Let $x = \sum_{i=1}^k \alpha_i v_i$ be a maximizer. Since $\norm{x}_2=1$, by the definition of the smallest squared singular value, we have $\sum_i \alpha_i^2 \le \frac{1}{\sigma_{\min}(S)}$. Now, decomposing $x = \Pi_T x + x'$, we have \[ f_u(S) = \langle x, u \rangle^2 = \langle x' + \Pi_T x, \; u \rangle^2 = (\langle x', u \rangle + \langle \Pi_T x, u \rangle)^2.\] Thus (since the worst case is when all signs align), \begin{align} |\iprod{ x', u}| &\ge \sqrt{f_u(S)} - |\langle \Pi_T x, u \rangle| \ge \sqrt{f_u(S)} - \sqrt{f_u(T)} \notag \\ &= \frac{f_u(S) - f_u(T)}{\sqrt{f_u(S)}+ \sqrt{f_u(T)}} \ge \frac{f_u(S) - f_u(T)}{2\sqrt{f_u(S)}}, \label{eq:dotprod-lb2} \end{align} where we have used the fact that $|\iprod{\Pi_T x, u}|^2 \le f_u(T)$, which is true from the definition of $f_u(T)$ (and since $\Pi_T x$ is a vector of length $\le 1$ in $\text{span}(T)$).
Now, because $x = \sum_i \alpha_i v_i$, we have $x' = x - \Pi_T x = \sum_i \alpha_i(v_i - \Pi_T v_i) = \sum_i \alpha_i\norm{v_i - \Pi_Tv_i}_2v_i'$. Thus, \begin{align*} \iprod{x', u}^2 &= \big( \sum_i \alpha_i \norm{v_i - \Pi_Tv_i}_2 \iprod{v_i' , u} \big)^2 \\ &\le \big( \sum_i \alpha_i^2 \norm{v_i - \Pi_Tv_i}_2^2 \big) \big( \sum_i \iprod{v_i', u}^2 \big) \\ &\le \big( \sum_i \alpha_i^2 \big) \big( \sum_i \iprod{v_i', u}^2 \big), \end{align*} where we have used the Cauchy-Schwarz inequality, and then the fact that $\|v_i - \Pi_Tv_i\|_2 \leq 1$ (because the $v_i$ are unit vectors). Finally, we know that $\sum_i \alpha_i^2 \le \frac{1}{\sigma_{\min}(S)}$, which implies \[ \sum_i \iprod{v_i', u}^2 \ge \sigma_{\min}(S) \iprod{x', u}^2 \ge \sigma_{\min}(S) \frac{(f_u(S)- f_u(T))^2}{4f_u(S)}.\] Combined with~\eqref{eq:gain-single}, this proves the lemma. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:greedy-main}] For notational convenience, let $\sigma = \sigma_{\min}(OPT_k)$ and $F = f_A(OPT_k)$. Define $\Delta_0 = F$, $\Delta_1 = \frac{\Delta_0}{2}$, $\dots$, $\Delta_{i+1} = \frac{\Delta_i}{2}$, stopping at the first $N$ with $\Delta_N \leq \varepsilon F$. Note that the initial gap is $f_A(OPT_k) - f_A(T_0) = \Delta_0$. We show that it takes at most $\frac{8kF}{\sigma \Delta_i}$ iterations (i.e., additional columns selected) to reduce the gap from $\Delta_i$ to $\frac{\Delta_i}{2} = \Delta_{i+1}$. To prove this, we invoke Lemma~\ref{lem:large-gain} to see that the gap filled by $\frac{8kF}{\sigma \Delta_i}$ iterations is at least $\frac{8kF}{\sigma \Delta_i} \cdot \sigma \frac{(\frac{\Delta_i}{2})^2}{4kF} = \frac{\Delta_i}{2} = \Delta_{i+1}$. Thus the total number of iterations $r$ required to get a gap of at most $\Delta_N \leq \varepsilon F$ is \[ r \leq \sum_{i=0}^{N-1} \frac{8kF}{\sigma \Delta_i} = \frac{8kF}{\sigma} \sum_{i=0}^{N-1} \frac{2^{i-N+1}}{\Delta_{N-1}} < \frac{16k}{\varepsilon \sigma}, \] where the last step uses $\Delta_{N-1} > \varepsilon F$ and $\sum_{i=0}^{N-1} 2^{i-N+1} < 2$.
Therefore, after $r < \frac{16k}{\varepsilon \sigma}$ iterations, we have $f_A(OPT_k) - f_A(T_r) \leq \varepsilon f_A(OPT_k)$. Rearranging proves the theorem. \end{proof} \section{Distributed Greedy Algorithm} \label{section-4} We now analyze the distributed version of the greedy algorithm discussed earlier. We show that in one {\em round}, we find a set of size $O(k)$ as before that has objective value $\Omega(f(OPT_k)/\kappa)$, where $\kappa$ is a condition number (defined below). We also combine this with our earlier ideas to show that $O(\kappa / \varepsilon)$ {\em rounds} of \textsc{Distgreedy} yield a $(1-\varepsilon)$ approximation (Theorem~\ref{thm:core-set-2}). \subsection{Analyzing one round} We consider an instance of $\textsf{GCSS}(A, B, k)$, and let $OPT$ denote an optimum set of $k$ columns. Let $\ell$ denote the number of machines available. The columns of $B$ are partitioned across machines, with machine $i$ receiving columns $T_i$. It runs $\textsc{Greedy}$ as explained earlier and outputs $S_i \subset T_i$ of size $k' = \frac{32k}{\sigma_{\min}(OPT)}$. Finally, all the $S_i$ are moved to one machine, and we run $\textsc{Greedy}$ on their union to output a set $S$ of size $k'' = \frac{12k}{\sigma_{\min}(OPT)}$. Let us define $\kappa(OPT) = \frac{\sigma_{\max}(OPT)}{\sigma_{\min}(OPT)}$, where $\sigma_{\max}(OPT)$ denotes the maximum squared singular value. \begin{thm}\label{thm:distributed-main} Consider running $\textsc{Distgreedy}$ on an instance of $\textsf{GCSS}(A, B, k)$.
We have \[ \mathbb{E}[ \max\{ f_A(S), \max_i \{f_A(S_i)\}\}] \ge \frac{f(OPT)}{8\cdot \kappa(OPT)}.\] \end{thm} The key to our proof is the following pair of definitions: \begin{align*} OPT_i^S &= \{x \in OPT \; : \; x \in \textsc{Greedy}(A, T_i \cup x, k') \}\\ OPT_i^{NS} &= \{x \in OPT \; : \; x \not \in \textsc{Greedy}(A, T_i \cup x, k') \} \end{align*} In other words, $OPT_i^S$ contains all the vectors in $OPT$ that would have been selected by machine $i$ if they had been added to its input set $T_i$. By definition, the sets $(OPT_i^S, OPT_i^{NS})$ form a partition of $OPT$ for every $i$. {\bf Proof outline.} Consider any partitioning $T_1, \dots, T_\ell$, and consider the sets $OPT_i^{NS}$. Suppose one of them (say the $i$th) has a large value of $f_A(OPT_i^{NS})$. Then we claim that $f_A(S_i)$ is also large. The reason is that the greedy algorithm does {\em not} choose to pick the elements of $OPT_i^{NS}$ (by definition) -- this can only happen if it ended up picking vectors that are ``at least as good''. This is made formal in Lemma~\ref{lem:opt-ns}. Thus, we can restrict to the case in which {\em none} of the $f_A(OPT_i^{NS})$ is large. In this case, Lemma~\ref{lem:additivity} shows that $f_A(OPT_i^{S})$ must be large for each $i$. Intuitively, this means that most of the vectors in $OPT$ will, in fact, be picked by $\textsc{Greedy}$ (on the corresponding machines), and will be considered when computing $S$. The caveat is that we might be unlucky: every $x \in OPT$ might have been sent to a machine $j$ for which $x \notin OPT_j^{S}$. We show that this happens with low probability, and this is where the random partitioning is crucial (Lemma~\ref{lem:opt-s}).
This implies that either $S$ or one of the $S_i$ has a large value of $f_A(\cdot)$. Let us now state two lemmas, deferring their proofs to Sections~\ref{app:opt-ns} and~\ref{app:additivity} respectively. \begin{lemma} \label{lem:opt-ns} For $S_i$ of size $k' = \frac{32 k}{\sigma_{\min}(OPT)}$, we have \[ f_A(S_i) \geq \frac{f_A(OPT_i^{NS})}{2} ~\text{ for all $i$.}\] \end{lemma} \begin{lemma} \label{lem:additivity} For any matrix $A$, and any partition $(I, J)$ of $OPT$: \begin{equation}\label{eq:to-prove-additivity} f_A(I) + f_A(J) \geq \frac{f_A(OPT)}{2\kappa(OPT)}. \end{equation} \end{lemma} Our final lemma is relevant when none of the $f_A(OPT_i^{NS})$ is large and, thus, $f_A(OPT_i^{S})$ is large for {\em all} $i$ (due to Lemma~\ref{lem:additivity}). In this case, Lemma~\ref{lem:opt-s} implies that the expected value of $f(S)$ is large. Note that $\{T_i\}$ is a random partition, so the $T_i$, the $OPT_i^{S}$, the $OPT_i^{NS}$, the $S_i$, and $S$ are all random variables. However, all of these values are fixed once the partition $\{T_i\}$ is fixed. In what follows, we write $f(\cdot)$ to mean $f_A(\cdot)$. \begin{lemma}\label{lem:opt-s} For a random partitioning $\{T_i\}$, and $S$ of size $k'' = \frac{12 k}{\sigma_{\min}(OPT)}$, we have \begin{equation}\label{eq:main-lem-to-show} \mathbb{E}[f(S)] \ge \frac{1}{2}\mathbb{E} \left[ \frac{\sum_{i=1}^{\ell} f(OPT_i^S)}{\ell}\right].
\end{equation} \end{lemma} \begin{proof} At a high level, the intuition behind the analysis is that many of the vectors in $OPT$ are selected in the first phase, i.e., appear in $\cup_i S_i$. For $x \in OPT$, let $I_x$ denote the indicator for the event $x \in \cup_i S_i$. Suppose we have a partition $\{T_i\}$. If $x$ is sent to a machine $i$ for which $x \in OPT_i^{S}$, then by definition $x \in S_i$. The key observation (see the definitions) is that the event $x \in OPT_i^S$ does not depend on where $x$ falls in the partition. In particular, we can think of partitioning all the elements except $x$ (at which point we know whether $x \in OPT_i^S$ for every $i$), and {\em then} randomly placing $x$. Thus \begin{equation}\label{eq:n14} \mathbb{E}[ I_x ] = \mathbb{E} \left[ \frac{1}{\ell} \sum_{i=1}^{\ell} [[x \in OPT_i^S]] \right], \end{equation} where $[[~\cdot~]]$ denotes the indicator. We now use this observation to analyze $f(S)$. Consider the execution of the greedy algorithm on $\cup_i S_i$, and suppose $V^t$ denotes the set of vectors picked in the first $t$ steps (so $V^t$ has $t$ vectors). The main idea is to give a lower bound on \begin{equation} \label{eq:diff-expectations} \mathbb{E}[ f(V^{t+1}) - f(V^t) ], \end{equation} where the expectation is over the partitioning $\{T_i\}$. For convenience, let us denote by $Q$ the RHS of \eqref{eq:main-lem-to-show}. The trick is to show that for {\em any} $V^t$ such that $f(V^t) \le Q$, the expectation in~\eqref{eq:diff-expectations} is large. One lower bound on $f(V^{t+1}) - f(V^t)$ is (where $I_x$ is the indicator as above) \[\frac{1}{k} \sum_{x\in OPT} I_x \big( f(V^t \cup x) - f(V^t) \big).\] Now for every $V$, we can use~\eqref{eq:n14} to obtain \begin{align} \mathbb{E}[ &f(V^{t+1}) - f(V^t) | V^t = V] \notag \\ &\ge \frac{1}{k\ell} \!\! \sum_{x \in OPT} \!\!\!\! \mathbb{E} \left[ \sum_{i=1}^\ell [[x \in OPT_i^S]]\right] \big( f(V \cup x) \!-\!
f(V) \big) \notag \\ &= \frac{1}{k\ell} \mathbb{E} \left[ \sum_{i=1}^\ell \sum_{x \in OPT_i^S} \big( f(V \cup x) - f(V) \big) \right] \,. \notag \end{align} Now, using~\eqref{eq:start}-\eqref{eq:temp6}, we can bound the inner sum from below by \[ \sigma_{\min}(OPT_i^S) \frac{ ( \max\{ 0, f(OPT_i^S) - f(V)\} )^2}{4f(OPT_i^S)} \,.\] Next, we use $\sigma_{\min}(OPT_i^S) \ge \sigma_{\min}(OPT)$ and the identity that for any nonnegative reals $a, b$ with $a > 0$: $(\max\{ 0, a-b \})^2/a \ge a/2 - 2b/3$. Together, these imply \begin{align*} \mathbb{E}[ & f(V^{t+1}) - f(V^t) | V^t = V ] \\ &\ge \frac{\sigma_{\min}(OPT)}{4k\ell} \mathbb{E}\left[ \sum_{i=1}^\ell \left( \frac{f(OPT_i^S)}{2} - \frac{2 f(V)}{3} \right) \right], \end{align*} and consequently $\mathbb{E}[ f(V^{t+1}) - f(V^t)] \ge \alpha \big(Q - \frac{2}{3}\mathbb{E}[f(V^t)]\big)$ for $\alpha = \sigma_{\min}(OPT)/4k$. If for some $t$ we have $\mathbb{E}[f(V^t)] \geq Q$, the proof is complete, because $f$ is monotone and $V^t \subseteq S$. Otherwise, $\mathbb{E}[f(V^{t+1}) - f(V^t)]$ is at least $\alpha Q/3$ for each of the $k'' = 12 k/\sigma_{\min}(OPT) = 3/\alpha$ values of $t$. We conclude that $\mathbb{E}[f(S)]$ is at least $(\alpha Q/3) \times (3/\alpha) = Q$, which completes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:distributed-main}] If $f_A( OPT_i^{NS}) \ge \frac{ f(OPT)}{4\kappa(OPT)}$ for some $i$, then we are done, because Lemma~\ref{lem:opt-ns} implies that $f_A(S_i)$ is large enough. Otherwise, by Lemma~\ref{lem:additivity}, $f_A(OPT_i^{S}) \ge \frac{ f(OPT)}{4\kappa(OPT)}$ for all $i$. Now we can use Lemma~\ref{lem:opt-s} to conclude that $\mathbb{E}[ f_A(S) ] \ge \frac{ f(OPT)}{8\kappa(OPT)}$, completing the proof.
\end{proof} \subsection{Multi-round algorithm} We now show that repeating the above algorithm achieves a $(1-\varepsilon)$-factor approximation. We propose a framework with $r$ epochs, for some integer $r>0$. In each epoch $t \in [r]$, we run the $\textsc{Distgreedy}$ algorithm to select a set $S^t$. The only thing that changes across epochs is the objective function: in epoch $t$, the algorithm selects columns based on the function $f^t$ defined by $f^t(V) = f_A(V \cup S^1 \cup S^2 \cup \cdots \cup S^{t-1})$. Note that $f^1$ is the same as $f_A$. The final solution is the union of the epoch solutions, $\cup_{t=1}^r S^t$. \begin{thm}\label{thm:core-set-2} For any $\varepsilon <1$, the expected value of the solution of the $r$-epoch $\textsc{Distgreedy}$ algorithm, for $r = O(\kappa(OPT)/\varepsilon)$, is at least $(1-\varepsilon)f(OPT)$. \end{thm} The proof is provided in Section~\ref{app:core-set-2} of the appendix. {\em Necessity of Random Partitioning.} We point out that the random partitioning step of our algorithm is crucial for the $\textsf{GCSS}(A, B, k)$ problem. We adapt the instance from~\cite{VahabPODS2014} and show that even if each machine can compute $f_A(\cdot)$ exactly, and is allowed to output $\text{poly}(k)$ columns, it cannot compete with the optimum under an adversarial partition. Intuitively, this is because the partition of the columns of $B$ could ensure that in each part $i$, the best way of covering $A$ involves picking some set of vectors $S_i$, where the $S_i$ for different $i$ overlap heavily, while the global optimum should use different $i$ to capture different {\em parts} of the space to be covered. (See Theorem~\ref{thm:rand-part} in Appendix~\ref{app:rand-part} for details.) \section{Further optimizations for \textsc{Greedy}} \label{section-5} We now elaborate on some of the techniques discussed in Section~\ref{section-2} for improving the running time of $\textsc{Greedy}$.
We first assume that we left-multiply both $A$ and $B$ by a random Gaussian matrix of dimension $r \times m$, for $r \approx k \log n/\varepsilon^2$. Working with the new instance suffices for the purposes of $(1-\varepsilon)$ approximation to CSS (for picking $O(k)$ columns). (Details in the Appendix, Section~\ref{app:random-projections}.) \subsection{Projection-Cost Preserving Sketches} Marginal gain evaluations of the form $f_A(S \cup v) - f_A(S)$ require summing the marginal gain of $v$ over each column of $A$. When $A$ has a large number of columns, this can be very expensive. To deal with this, we use a {\em sketch} of $A$ instead of $A$ itself. This idea has been explored in several recent works; we use the following notation and result: \begin{defin}[\cite{Cohen}] \label{defin:pcps} For a matrix $A \in \mathbb{R}^{m \times n}$, $A' \in \mathbb{R}^{m \times n'}$ is a \emph{rank-$k$ Projection-Cost Preserving Sketch (PCPS)} with error $0 \leq \varepsilon < 1$ if for any set of $k$ vectors $S$, we have: $(1 - \varepsilon) f_A(S) \leq f_{A'}(S) + c \leq (1 + \varepsilon) f_A(S)$ where $c \geq 0$ is a constant that may depend on $A$ and $A'$ but is independent of $S$. \end{defin} \begin{thm}[Theorem 12 of \cite{Cohen}]\label{thm:pcps} Let $R$ be a random matrix with $n$ rows and $n' = O(\frac{k + \log\frac{1}{\delta}}{\varepsilon^2})$ columns, where each entry is set independently and uniformly to $\pm \sqrt{\frac{1}{n'}}$. Then for any matrix $A \in \mathbb{R}^{m \times n}$, with probability at least $1 - O(\delta)$, $AR$ is a rank-$k$ PCPS for $A$. \end{thm} Thus, we can use a PCPS to sketch the matrix $A$ down to roughly $k/\varepsilon^2$ columns, and use it to compute $f_A(S)$ to a $(1\pm \varepsilon)$ accuracy for any $S$ of size $\le k$. This is also used in our distributed algorithm, where we send the sketch to every machine.
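As a concrete illustration, the sketch construction above is a few lines of NumPy. The snippet below is our own minimal sketch, not the paper's implementation: the constant in $n'$ is chosen for illustration, and we take $f_A(S)$ to be the squared Frobenius mass of $A$'s columns captured by $\mathrm{span}(S)$.

```python
import numpy as np

def f(A, S):
    # f_A(S): squared Frobenius norm of the projection of A's columns
    # onto span(S); S is an m x |S| matrix of selected columns.
    Q, _ = np.linalg.qr(S)
    return np.linalg.norm(Q.T @ A) ** 2

def pcps_sketch(A, k, eps, rng):
    # Random-sign sketch A' = A R with n' = O(k / eps^2) columns;
    # each entry of R is +/- sqrt(1/n').  The constant 4 is illustrative.
    n = A.shape[1]
    n_prime = int(np.ceil(4 * k / eps ** 2))
    R = rng.choice([-1.0, 1.0], size=(n, n_prime)) / np.sqrt(n_prime)
    return A @ R

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 2000))
S = A[:, :5]                      # any candidate set of 5 columns
A_sk = pcps_sketch(A, k=5, eps=0.25, rng=rng)
# A_sk has 320 columns instead of 2000, yet the projection cost
# ||A - P_S A||_F^2 is approximately preserved.
```

With such a sketch, an evaluation of $f_{A'}(S)$ touches $O(k/\varepsilon^2)$ columns rather than $n$, and the same sketch can be broadcast once to every machine in the distributed setting.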
\subsection{Lazier-than-lazy Greedy} The natural implementation of $\textsc{Greedy}$ requires $O(nk)$ evaluations of $f(\cdot)$, since we compute the marginal gain of all $n$ candidate columns in each of the $k$ iterations. For submodular functions, one can do better: the recently proposed $\textsc{Lazier-than-lazy Greedy}$ algorithm obtains a $1 - \frac{1}{e} - \delta$ approximation with only a linear number $O(n \log (1/\delta))$ of marginal gain evaluations \cite{Mirzasoleiman}. We show that a similar result holds for \textsf{GCSS}, even though our cost function $f(\cdot)$ is not submodular. The idea is as follows. Let $T$ be the current solution set. To find the next element to add to $T$, we draw a subset of size $\frac{n_B \log (1/\delta)}{k}$ uniformly at random from the columns in $B \setminus T$. We then take from this set the column with largest marginal gain, add it to $T$, and repeat. We show this gives the following guarantee (details in Appendix Section~\ref{app:thm-lazier-than-lazy}). \begin{thm} \label{thm-lazier-than-lazy:main} Let $A\in \mathbb{R}^{m \times n_A}$ and $B \in \mathbb{R}^{m \times n_B}$. Let $OPT_k$ be the set of columns from $B$ that maximizes $f_A(S)$ subject to $|S| = k$. Let $\varepsilon, \delta > 0$ be any constants such that $\varepsilon + \delta \leq 1$. Let $T_r$ be the set of columns output by $\textsc{Lazier-than-lazy Greedy} (A, B, r)$, for $r = \frac{16k}{\varepsilon \sigma_{\min}(OPT_k)}$. Then we have: $$\mathbb{E}[f_A(T_r)] \geq (1 - \varepsilon - \delta) f_A(OPT_k)$$ Further, this algorithm evaluates marginal gain only a linear number $\frac{16n_B \log (1/\delta)}{\varepsilon \sigma_{\min}(OPT_k)}$ of times. \end{thm} Note that this guarantee is nearly identical to our analysis of $\textsc{Greedy}$ in Theorem \ref{thm:greedy-main}, except that it holds in expectation.
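The sampling loop described above can be sketched in a few lines. The following is our own illustrative NumPy implementation, not the paper's code: it takes $f_A(T)$ to be the squared Frobenius mass of $A$ captured by $\mathrm{span}(T)$, and it maintains residuals of $A$ and $B$ so that each marginal-gain evaluation is a single matrix-vector product.

```python
import numpy as np

def lazier_greedy(A, B, r, delta, k, rng):
    # Each iteration draws ~ n_B * log(1/delta) / k random candidate
    # columns of B and adds the one with the largest marginal gain.
    A_res, B_res = A.astype(float).copy(), B.astype(float).copy()
    n_B = B.shape[1]
    sample = max(1, int(np.ceil(n_B * np.log(1 / delta) / k)))
    chosen, gains = [], []
    for _ in range(r):
        pool = [j for j in range(n_B) if j not in chosen]
        if not pool:
            break
        cand = rng.choice(pool, size=min(sample, len(pool)), replace=False)
        best_j, best_gain = None, -1.0
        for j in cand:
            v = B_res[:, j]
            nv = np.linalg.norm(v)
            if nv < 1e-12:            # column already spanned; zero gain
                continue
            # gain = increase in || proj A ||_F^2 from adding v
            gain = np.linalg.norm(A_res.T @ (v / nv)) ** 2
            if gain > best_gain:
                best_j, best_gain = j, gain
        if best_j is None:
            break
        vhat = B_res[:, best_j] / np.linalg.norm(B_res[:, best_j])
        A_res -= np.outer(vhat, vhat @ A_res)   # remove chosen direction
        B_res -= np.outer(vhat, vhat @ B_res)
        chosen.append(int(best_j))
        gains.append(best_gain)
    return chosen, gains

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 40))
B = rng.standard_normal((30, 60))
chosen, gains = lazier_greedy(A, B, r=10, delta=0.1, k=10, rng=rng)
```

The residual update after each pick is the same trick discussed in Section~\ref{sec:app:marginal}: once a column is selected it is never removed, so its direction can be projected out of all remaining columns immediately.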
The proof strategy is very similar to that of Theorem \ref{thm:greedy-main}, namely showing that the value of $f(\cdot)$ increases significantly in every iteration (see Appendix Section~\ref{app:thm-lazier-than-lazy}). {\bf Calculating marginal gain faster.} We defer the discussion to Appendix Section~\ref{sec:app:marginal}. \section{Experimental results}\label{section-6} \begin{figure*} \caption{A comparison of reconstruction accuracy, model classification accuracy and runtime of various column selection methods (with PCA provided as an upper bound). The runtime plot shows the relative speedup over the naive GREEDY algorithm.} \label{fig} \end{figure*} In this section we present an empirical investigation of the GREEDY, GREEDY++ and \textsc{Distgreedy}\ algorithms. Additionally, we compare with several baselines: \\ {\bf Random:} The simplest imaginable baseline: this method selects columns randomly. \\ {\bf 2-Phase:} The two-phased algorithm of \cite{Boutsidis2}, which operates by first sampling $\Theta(k \log k)$ columns based on properties of the top-$k$ right singular space of the input matrix (this requires computing a top-$k$ SVD), and then selecting exactly $k$ columns via a deterministic procedure. The overall complexity is dominated by the top-$k$ SVD, which is $O( \min\{mn^2, m^2n\})$. \\ {\bf PCA:} The columns of the rank-$k$ PCA projection matrix serve as an upper bound on performance, as they explicitly minimize the Frobenius reconstruction criterion. Note this method only serves as an upper bound and does not fall into the framework of column subset selection.
We evaluate these algorithms on two datasets: one with a small set of columns (mnist), which is used to compare both scalable and non-scalable methods, and a sparse dataset with a large number of columns (news20.binary), which is meant to demonstrate the scalability of the GREEDY core-set algorithm.\footnote{Both datasets can be found at: www.csie.ntu.edu.tw/$\sim$cjlin/libsvmtools/datasets/multiclass.html.} Finally, we are also interested in the effect of column selection as a preprocessing step for supervised learning methods. To that end, we train a linear SVM model, using the LIBLINEAR library \citep{fan2008}, on the subselected columns (features) and measure the effectiveness of the model on a held-out test set. For both datasets we report test error for the best choice of regularization parameter $c \in \{10^{-3}, \ldots, 10^4\}$. We run GREEDY++ and \textsc{Distgreedy}\ with $\frac{n}{k}\log(10)$ marginal gain evaluations per iteration, and the distributed algorithm uses $s = \sqrt{\frac{n}{k}}$ machines with each machine receiving $\frac{n}{s}$ columns. \subsection{Small scale dataset (mnist)} We first consider the MNIST digit recognition task, which is a ten-class classification problem. There are $n$ = 784 input features (columns) that represent pixel values from the $28\times28$-pixel images. We use $m$ = 60,000 instances to train with and 10,000 instances for our test set. From Figure~\ref{fig} we see that all column sampling methods, apart from Random, select columns that provide approximately the same reconstruction accuracy and are able to reach within 1\% of the performance of PCA after sampling 300 columns. We also see a very similar trend with respect to classification accuracy. It is notable that, in practice, the core-set version of GREEDY incurs almost no additional error (apart from at the smallest values of $k$) when compared to the standard GREEDY algorithm.
Finally, we also show the relative speed up of the competitive methods over the standard GREEDY algorithm. In this small dataset regime, we see that the core-set algorithm does not offer an improvement over the single-machine GREEDY++, and in fact the 2-Phase algorithm is the fastest. This is primarily due to the overhead of the distributed core-set algorithm and the fact that it requires two greedy selection stages (i.e., map and reduce). Next, we consider a dataset that is large enough that a distributed model is in fact necessary. \subsection{Large scale dataset (news20.binary)} \begin{table} \begin{small} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \bf n & \bf Rand & \bf 2-Phase & \bf $\textsc{Distgreedy}$ & \bf PCA \\ \hline \hline 500 & 54.9 & 81.8 (1.0) & 80.2 (72.3) & 85.8 (1.3) \\ \hline 1000 & 59.2 & 84.4 (1.0) & 82.9 (16.4) & 88.6 (1.4)\\ \hline 2500 & 67.6 & 87.9 (1.0) & 85.5 (2.4) & 90.6 (1.7) \\ \hline \end{tabular} \end{center} \end{small} \caption{A comparison of the classification accuracy of selected features. Also, the relative speedup over the 2-Phase algorithm for selecting features is shown in parentheses. } \label{table} \end{table} In this section, we show that the $\textsc{Distgreedy}$ algorithm can indeed scale to a dataset with a large number of columns. The news20.binary dataset is a binary class text classification problem, where we start with $n$ = 100,000 sparse features (0.033\% non-zero entries) that represent text trigrams, use $m$ = 14,996 examples to train with and hold out 5,000 examples to test with. We compare the classification accuracy and column selection runtime of the naive random method, the 2-Phase algorithm, as well as PCA (which serves as an upper bound on performance) to the $\textsc{Distgreedy}$ algorithm.
The results are presented in Table~\ref{table}, which shows that $\textsc{Distgreedy}$ and 2-Phase both perform significantly better than random sampling and come relatively close to the PCA upper bound in terms of accuracy. However, we also find that $\textsc{Distgreedy}$ can be orders of magnitude faster than the 2-Phase algorithm. This is in large part because the 2-Phase algorithm suffers from the bottleneck of computing a top-$k$ SVD. An approximate SVD method could be used instead; however, that was outside the scope of this preliminary empirical investigation. In conclusion, we have demonstrated that \textsc{Distgreedy}\ is able to scale to larger sized datasets while still selecting effective features. \appendix \section{Appendix} \subsection{Proof of Lemma~\ref{lem:opt-ns}}\label{app:opt-ns} \begin{proof} Let us fix some machine $i$. The main observation is that running greedy with $T_i$ is the same as running it with $T_i \cup OPT_i^{NS}$ (because by definition, the added elements are not chosen). Applying Theorem~\ref{thm:greedy-main}\footnote{To be precise, Theorem~\ref{thm:greedy-main} is presented as comparing against the optimum set of $k$ columns. However, an identical argument (simply stop at the last line in the proof of Theorem~\ref{thm:greedy-main}) shows the same bounds for any (potentially non-optimal) set of $k$ columns. This is the version we use here.} with $B = T_i \cup OPT_i^{NS}$ and $\varepsilon = \frac{1}{2}$, we have that if $ k' \ge \frac{32 |OPT_i^{NS}|}{\sigma_{\min}(OPT_i^{NS})}$, then $f_A(S_i) \geq \frac{f_A(OPT_i^{NS})}{2}$. Now since $OPT_i^{NS}$ is a subset of $OPT$, we have that $OPT_i^{NS}$ is of size at most $|OPT| = k$, and also $\sigma_{\min}(OPT_i^{NS}) \ge \sigma_{\min}(OPT)$. Thus the above bound certainly holds whenever $k' \ge \frac{32 k}{\sigma_{\min}(OPT)}$.
\end{proof} \subsection{Proof of Lemma~\ref{lem:additivity}} \label{app:additivity} \begin{proof}[Proof of Lemma~\ref{lem:additivity}] As before, we will first prove the inequality for one column $u$ instead of $A$; summing over the columns gives the result. Suppose $OPT = \{v_1, \dots, v_k\}$, and let us abuse notation slightly, and use $I, J$ to also denote subsets of indices that they correspond to. Now, by the definition of $f$, there exists an $x = \sum_i \alpha_i v_i$, such that $\norm{x}=1$, and $\iprod{x, u}^2 = f_u(OPT)$. Let us write $x = x_I + x_J$, where $x_I = \sum_{i \in I} \alpha_i v_i$. Then, \begin{align*} \iprod{x, u}^2 &= (\iprod{x_I, u} + \iprod{x_J, u})^2 \le 2(\iprod{x_I, u}^2 + \iprod{x_J, u}^2 ) \\ &\le 2(\norm{x_I}^2 f_u(I) + \norm{x_J}^2 f_u(J) ) \\ &\le 2(\norm{x_I}^2 + \norm{x_J}^2) (f_u(I) + f_u(J)). \end{align*} Now, we have \[ \norm{x_I}^2 \le \sigma_{\max}(I)(\sum_{i \in I} \alpha_i^2),\] from the definition of $\sigma_{\max}$, and we clearly have $\sigma_{\max}(I)\le \sigma_{\max}(OPT)$, as $I$ is a subset. Using the same argument with $J$, we have \[ \norm{x_I}^2 + \norm{x_J}^2 \le \sigma_{\max}(OPT) (\sum_i \alpha_i^2). \] Now, since $\norm{x}=1$, the definition of $\sigma_{\min}$ gives us that $\sum_i \alpha_i^2 \le 1/\sigma_{\min}(OPT)$, thus completing the proof. \end{proof} \subsection{Tight example for the bound in Theorem~\ref{thm:greedy-main}} \label{app:tight-ex} We show an example in which we have a collection of (nearly unit) vectors such that: \enum{ \item Two of them can exactly represent a target vector $u$ (i.e., $k=2$). \item The $\sigma_{min}$ of the matrix with these two vectors as columns is $\sim \theta^2$, for some parameter $\theta <1$. \item The greedy algorithm, to achieve an error $\le \epsilon$ in the squared-length norm, will require $\Omega(\frac{1}{\theta^2 \epsilon})$ steps. } The example also shows that using the greedy algorithm, we cannot expect to obtain a multiplicative guarantee on the {\em error}.
In the example, the optimal error is zero, but as long as the full set of vectors is not picked, the error of the algorithm will be non-zero. \paragraph{The construction.} Suppose $e_0, e_1, \dots, e_n$ are orthogonal vectors. The vectors in our collection are the following: $e_1$, $\theta e_0 +e_1$, and $2\theta e_0 + e_j$, for $j \ge 2$. Thus we have $n+1$ vectors. The target vector $u$ is $e_0$. Clearly we can write $e_0$ as a linear combination of the first two vectors in our collection. Let us now see what the greedy algorithm does. In the first step, it picks the vector that has maximum squared-inner product with $e_0$. This will be $2\theta e_0 + e_2$ (breaking ties arbitrarily). We claim inductively that the algorithm never picks the first two vectors of our collection. This is clear because $e_1, e_2, \dots, e_n$ are all orthogonal, and the first two vectors have a strictly smaller component along $e_0$, which is what matters for the greedy choice (it is an easy calculation to make this argument formal). Thus after $t$ iterations, we will have picked $2\theta e_0 + e_2, 2\theta e_0 + e_3, \dots, 2\theta e_0 + e_{t+1}$. Let us call them $v_1, v_2, \dots, v_t$ respectively. Now, what is the unit vector in the span of these vectors that has the largest squared dot-product with $e_0$? It is a simple calculation to find the best linear combination of the $v_i$ -- all the coefficients need to be equal. Thus the best unit vector is a normalized version of $(1/t) (v_1 + \dots + v_t)$, which is \[ v = \frac{ 2\theta e_0 + \frac{1}{t}(e_2 + e_3 +\dots + e_{t+1})}{\sqrt{\frac{1}{t} + 4\theta^2}}. \] For this $v$, to have $\iprod{u, v}^2 \ge 1-\epsilon$, we must have \[\frac{4\theta^2}{\frac{1}{t} + 4\theta^2} \ge 1- \epsilon, \] which rearranges to $\frac{1}{4t\theta^2} \le \frac{\epsilon}{1-\epsilon}$, and thus (for $\epsilon \le 1/2$) requires $\frac{1}{4t\theta^2} \le 2\epsilon$, i.e., $t \ge \frac{1}{8\theta^2\epsilon}$.
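This behaviour is easy to verify numerically. The snippet below is our own sketch (the values of $\theta$ and $n$ are arbitrary choices): it builds the collection, runs greedy against the target $u = e_0$, and checks that greedy avoids the two vectors that span $e_0$ exactly, achieving value $\frac{4\theta^2}{1/t + 4\theta^2}$ after $t$ steps.

```python
import numpy as np

def greedy_on_tight_example(theta, n, steps):
    # Collection: e_1, theta*e_0 + e_1, and 2*theta*e_0 + e_j for j >= 2.
    # Target u = e_0.  Greedy repeatedly adds the column whose normalized
    # residual has the largest squared inner product with u's residual.
    dim = n + 1
    cols = np.zeros((dim, n + 1))
    cols[1, 0] = 1.0                             # e_1
    cols[0, 1], cols[1, 1] = theta, 1.0          # theta*e_0 + e_1
    for j in range(2, n + 1):
        cols[0, j], cols[j, j] = 2 * theta, 1.0  # 2*theta*e_0 + e_j
    u = np.zeros(dim)
    u[0] = 1.0
    u_res, cols_res = u.copy(), cols.copy()
    picked, values = [], []
    for _ in range(steps):
        norms = np.linalg.norm(cols_res, axis=0)
        safe = np.maximum(norms, 1e-12)
        gains = np.where(norms > 1e-9,
                         (cols_res.T @ u_res) ** 2 / safe ** 2, 0.0)
        j = int(np.argmax(gains))
        vhat = cols_res[:, j] / norms[j]
        u_res = u_res - (vhat @ u_res) * vhat
        cols_res = cols_res - np.outer(vhat, vhat @ cols_res)
        picked.append(j)
        values.append(1.0 - np.linalg.norm(u_res) ** 2)  # f_u of span so far
    return picked, values

theta = 0.3
picked, values = greedy_on_tight_example(theta, n=10, steps=5)
# Greedy keeps choosing the 2*theta*e_0 + e_j columns (indices >= 2),
# never the pair {e_1, theta*e_0 + e_1} that spans e_0 exactly.
```

The run confirms the calculation above: the captured value after $t$ steps matches $\frac{4\theta^2}{1/t + 4\theta^2}$ to machine precision, so the error decays only like $1/(4\theta^2 t)$.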
\subsection{Proof of Theorem~\ref{thm-lazier-than-lazy:main}} \label{app:thm-lazier-than-lazy} The key ingredient in our argument is that in every iteration, we obtain large marginal gain in expectation. This is formally stated in the following lemma. \begin{lemma} \label{lem:large-gain-lazier-than-lazy} Let $S, T$ be two sets of columns from $B$, with $f_A(S) \geq f_A(T)$. Let $|S| \leq k$, and let $R$ be a subset of size $\frac{n_B \log \frac{1}{\delta}}{k}$ drawn uniformly at random from the columns of $B \setminus T$. Then the expected gain in an iteration of $\textsc{Lazier-than-lazy Greedy}$ is at least $(1 - \delta) \sigma_{\min}(S) \frac{(f_A(S) - f_A(T))^2}{4kf_A(S)}$. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:large-gain-lazier-than-lazy}] The first part of the proof is nearly identical to the proof of Lemma 2 in \cite{Mirzasoleiman}. We repeat the details here for the sake of completeness. Intuitively, we would like the random sample $R$ to include vectors we have not seen in $S \setminus T$. In order to lower bound the probability that $R \cap (S \setminus T) \neq \emptyset$, we first upper bound the probability that $R \cap (S \setminus T) = \emptyset$. \begin{align} \mathbb{P}\{R \cap (S \setminus T) = \emptyset\} &= \Big(1 - \frac{|S \setminus T|}{n_B - |T|}\Big)^{\frac{n_B \log(\frac{1}{\delta})}{k}} \label{Stochastic greedy lemma eq 1} \\ &\leq e^{-\frac{n_B \log(\frac{1}{\delta})}{k} \frac{|S \setminus T|}{n_B - |T|}} \\ &\leq e^{-\frac{\log(\frac{1}{\delta})}{k} |S \setminus T|} \end{align} where we have used the fact that $1 - x \leq e^{-x}$ for all $x \in \mathbb{R}$.
Recalling that $\frac{|S\setminus T|}{k} \in [0, 1]$, we have: \begin{align} \mathbb{P}\{R \cap (S \setminus T) \neq \emptyset\} &\geq 1 - e^{-\frac{\log(\frac{1}{\delta})}{k} |S \setminus T|} \\ &\geq (1 - e^{-\log(\frac{1}{\delta})}) \frac{|S \setminus T|}{k} \\ &= (1 - \delta) \frac{|S \setminus T|}{k} \label{Stochastic greedy lemma eq 2} \end{align} The next part of the proof relies on techniques developed in the proof of Theorem 1 in \cite{Mirzasoleiman}. For notational convenience, define $\Delta(v|T) = f_A(T \cup v) - f_A(T)$ to be the marginal gain of adding $v$ to $T$. Using the above calculations, we may lower bound the expected gain $\mathbb{E}[\max_{v \in R} \Delta(v | T)]$ in an iteration of $\textsc{Lazier-than-lazy Greedy}$ as follows: \begin{align} & \mathbb{E} \big[\max_{v \in R} \Delta(v | T)\big] \\ &\geq (1 - \delta) \frac{|S \setminus T|}{k} \cdot \mathbb{E}[\max_{v \in R} \Delta(v | T) \; \Big| \; R \cap (S \setminus T) \neq \emptyset] \label{Stochastic greedy lemma eq 3} \\ &\geq (1 - \delta) \frac{|S \setminus T|}{k} \cdot \mathbb{E}[\max_{v \in R \cap (S \setminus T)} \Delta(v | T) \; \Big| \; R \cap (S \setminus T) \neq \emptyset] \label{Stochastic greedy lemma eq 4} \\ &\geq (1 - \delta) \frac{|S \setminus T|}{k} \cdot \frac{\sum_{v \in S \setminus T} \Delta(v | T)}{|S \setminus T|} \label{Stochastic greedy lemma eq 5} \\ &\geq (1 - \delta) \sigma_{min}(S) \frac{(f_A(S)- f_A(T))^2}{4kf_A(S)} \label{Stochastic greedy lemma eq 6} \end{align} Equation \eqref{Stochastic greedy lemma eq 3} is due to conditioning on the event that $R \cap (S \setminus T) \neq \emptyset$, and lower bounding the probability that this happens with Equation \eqref{Stochastic greedy lemma eq 2}. Equation \eqref{Stochastic greedy lemma eq 5} is due to the fact that each element of $S \setminus T$ is equally likely to be in $R$ (since $R$ is chosen uniformly at random), so the conditional maximum is at least the average over $S \setminus T$.
Equation \eqref{Stochastic greedy lemma eq 6} is a direct application of equation \eqref{eq:temp8} because $\sum_{v \in S \setminus T} \Delta(v | T) = \sum_{v \in S} \Delta(v | T)$. \end{proof} We are now ready to prove Theorem~\ref{thm-lazier-than-lazy:main}. The proof technique is similar to that of Theorem~\ref{thm:greedy-main}. \begin{proof}[Proof of Theorem~\ref{thm-lazier-than-lazy:main}] For each $i \in \{0, \dots, r\}$, let $T_i$ denote the set of $i$ columns output by $\textsc{Lazier-than-lazy Greedy}(A, B, i)$. We adopt the same notation for $F$ as in the proof of Theorem \ref{thm:greedy-main}. We also use a similar construction of $\{\Delta_0, \dots, \Delta_N\}$ except that we stop when $\Delta_N \leq \frac{\varepsilon}{1 - \delta}F$. We first demonstrate that it takes at most $\frac{8kF}{(1 - \delta) \sigma_{min}(OPT_k) \Delta_i}$ iterations to reduce the gap from $\Delta_i$ to $\frac{\Delta_i}{2} = \Delta_{i+1}$ in expectation. To prove this, we invoke Lemma \ref{lem:large-gain-lazier-than-lazy} on each $T_i$ to see that the expected gap filled by $\frac{8kF}{(1 - \delta) \sigma_{min}(OPT_k) \Delta_i}$ iterations is lower bounded by $\frac{8kF}{(1 - \delta) \sigma_{min}(OPT_k) \Delta_i} \cdot (1 - \delta) \frac{\sigma_{min}(OPT_k)(\frac{\Delta_i}{2})^2}{4kF} = \frac{\Delta_i}{2} = \Delta_{i+1}$. Thus the total number of iterations $r$ required to decrease the gap to at most $\Delta_N \leq \frac{\varepsilon}{1 - \delta} F$ in expectation is: \begin{align} r &\leq \sum_{i=0}^{N-1} \frac{8kF}{(1 - \delta)\sigma_{min}(OPT_k) \Delta_i} \\ &= \frac{8kF}{(1 - \delta)\sigma_{min}(OPT_k)} \sum_{i=0}^{N-1} \frac{2^{i-N+1}}{\Delta_{N-1}} \label{Stochastic greedy theorem eq 2} \\ &< \frac{16k}{\varepsilon \sigma_{min}(OPT_k)} \label{Stochastic greedy theorem eq 3} \end{align} where equation \eqref{Stochastic greedy theorem eq 3} is because $\Delta_{N-1} > \frac{\varepsilon}{1 - \delta} F$ and $\sum_{i=0}^{N-1} 2^{i-N+1} < 2$.
Therefore, after $r \geq \frac{16k}{\varepsilon \sigma_{min}(OPT_k)}$ iterations, we have that: \begin{align} f_A(OPT_k) - \mathbb{E}[f_A(T_r)] &\leq \frac{\varepsilon}{1 - \delta} f_A(OPT_k) \\ &\leq (\varepsilon + \delta)f_A(OPT_k) \end{align} because $\varepsilon + \delta \leq 1$. Rearranging proves the theorem. \end{proof} \subsection{Random Projections to reduce the number of rows} \label{app:random-projections} Suppose we have a set of vectors $A_1, A_2, \dots, A_n$ in $\mathbb{R}^m$, and let $\varepsilon, \delta$ be given accuracy parameters. For an integer $1 \le k \le n$, we say that a vector $x$ is in the $k$-span of $A_1, \dots, A_n$ if we can write $x = \sum_j \alpha_j A_j$, with at most $k$ of the $\alpha_j$ non-zero. The main result of this section is the following. \begin{thm} \label{thm:random-projections} Let $1\le k \le n$ be given, and set $d = O^*(\frac{k \log (\frac{n}{\delta \varepsilon})}{\varepsilon^2})$, where we use $O^*(\cdot)$ to omit $\log \log$ terms. Let $G \in \mathbb{R}^{d \times m}$ be a matrix with entries drawn independently from $\mathcal{N}(0,1)$. Then with probability at least $1-\delta$, for {\em all} vectors $x$ that are in the $k$-span of $A_1, A_2, \dots, A_n$, we have \[ (1-\varepsilon)\norm{x}_2 \le \frac{1}{\sqrt{d}} \norm{Gx}_2 \le (1+\varepsilon) \norm{x}_2. \] \end{thm} The proof is a standard $\varepsilon$-net argument that is similar to the proof of Lemma 10 in \cite{Sarlos}. Before giving the proof, we first state the celebrated lemma of Johnson and Lindenstrauss. \begin{thm}\label{thm:jl-classic}\cite{Johnson} Let $x \in \mathbb{R}^m$, and let $G \in \mathbb{R}^{d\times m}$ be a matrix with entries drawn independently from $\mathcal{N}(0,1)$. Then for any $\varepsilon > 0$, we have \[ \Pr\left[ (1-\varepsilon)\norm{x}_2 \le \frac{1}{\sqrt{d}} \norm{Gx}_2 \le (1+\varepsilon) \norm{x}_2 \right] \ge 1- e^{-\varepsilon^2d/4}. \] \end{thm} Now we prove Theorem~\ref{thm:random-projections}.
\begin{proof}[Proof of Theorem~\ref{thm:random-projections}] The proof is a simple `net' argument for unit vectors in the $k$-span of $A_1, \dots, A_n$. The details are as follows. First note that since the statement is scaling invariant, it suffices to prove it for {\em unit} vectors $x$ in the $k$-span. Next, note that it suffices to prove it for vectors in a $\gamma$-net for the unit vectors in the $k$-span, for a small enough $\gamma$. To recall, a $\gamma$-net for a set of vectors $S$ is a finite subset $\mathcal{N}_\gamma$ with the property that for all $x \in S$, there exists a $u \in \mathcal{N}_\gamma$ such that $\norm{x - u}_2 \le \gamma$. Suppose we fix some $\gamma$-net for the set of unit vectors in the $k$-span of $A_1, \dots, A_n$, and suppose we show that for all $u \in \mathcal{N}_\gamma$, we have \begin{equation} (1-\varepsilon/2)\norm{u}_2 \le \frac{1}{\sqrt{d}} \norm{Gu}_2 \le (1+\varepsilon/2) \norm{u}_2.\label{eq:eps-net-needed} \end{equation} Now consider any $x$ in the $k$-span. By definition, we can write $x = u +w$, where $u \in \mathcal{N}_\gamma$ and $\norm{w} <\gamma$. Thus we have \begin{equation} \norm{Gu}_2 - \gamma \norm{G}_{2} \le \norm{Gx}_2 \le \norm{Gu}_2 + \gamma \norm{G}_2, \label{eq:eps-net-bound} \end{equation} where $\norm{G}_2$ is the spectral norm of $G$. From now on, let us set $\gamma = \frac{\varepsilon}{4\sqrt{d} \log(4/\delta)}$. Now, whenever $\norm{G}_2 < 2\sqrt{d}\log (4/\delta)$, equation~\eqref{eq:eps-net-bound} implies \[ \norm{Gu}_2 - \frac{\varepsilon}{2} \le \norm{Gx}_2 \le \norm{Gu}_2 + \frac{\varepsilon}{2}.\] The proof follows from showing the following two statements: (a) there exists a net $\mathcal{N}_\gamma$ (for the above choice of $\gamma$) such that Eq.~\eqref{eq:eps-net-needed} holds for all $u \in \mathcal{N}_\gamma$ with probability $\ge 1- \delta/2$, and (b) we have $\norm{G}_2 < 2\sqrt{d}\log(4/\delta)$ w.p. at least $1-\delta/2$.
Once we have (a) and (b), the discussion above completes the proof. We also note that (b) follows from the concentration inequalities on the largest singular value of random matrices \cite{Rudelson}. Thus it only remains to prove (a). For this, we use the well-known result that for every $k$-dimensional space, the set of unit vectors in the space has a $\gamma$-net (in $\ell_2$ norm, as above) of size at most $(4/\gamma)^k$ \cite{Vershynin}. In our setting, there are $\binom{n}{k}$ such spaces we need to consider (i.e., the span of every possible $k$-subset of $A_1, \dots, A_n$). Thus there exists a $\gamma$-net for the unit vectors in the $k$-span, which has a size at most \[ \binom{n}{k} \cdot \left( \frac{4}{\gamma} \right)^k < \left( \frac{4n}{\gamma} \right)^k, \] where we used the crude bound $\binom{n}{k} < n^k$. Now Theorem~\ref{thm:jl-classic} implies that for any (given) $u \in \mathcal{N}_\gamma$ (replacing $\varepsilon$ by $\varepsilon/2$ and noting $\norm{u}_2 = 1$), that \[ \Pr \left[ 1- \frac{\varepsilon}{2} \le \frac{1}{\sqrt{d}} \norm{Gu}_2 \le 1+\frac{\varepsilon}{2} \right] \ge 1- e^{-\varepsilon^2 d/16}.\] Thus by a union bound, the above holds for all $u \in \mathcal{N}_\gamma$ with probability at least $1- |\mathcal{N}_\gamma|e^{-\varepsilon^2 d/16}$. For our choice of $d$, it is easy to verify that this quantity is at least $1-\delta/2$. This completes the proof of the theorem. \end{proof} \subsection{Efficient Calculation of Marginal Gain}\label{sec:app:marginal} A naive implementation of calculating the marginal gain $f_A(S \cup v) - f_A(S)$ takes $O(mk^2 + kmn_A)$ floating-point operations (FLOPs) where $|S| = O(k)$. The first term is from performing the Gram-Schmidt orthonormalization of $S$, and the second term is from calculating the projection of each of $A$'s columns onto span$(S)$.
However, it is possible to significantly reduce the marginal gain calculations to $O(mn_A)$ FLOPs in $\textsc{Greedy}$ by introducing $k$ updates, each of which takes $O(mn_A + mn_B)$ FLOPs. This idea was originally proven correct by \cite{Farahat2}, but we discuss it here for completeness. The simple yet critical observation is that $\textsc{Greedy}$ permanently keeps a column $v$ once it selects it. So when we select $v$, we immediately update all columns of $A$ and $B$ by removing their projections onto $v$. This allows us to calculate marginal gains in future iterations without having to consider $v$ again. \subsection{Proof of Theorem~\ref{thm:core-set-2}} \label{app:core-set-2} \begin{proof} Let $C^t$ be the union of first $t$ solutions: $\cup_{j=1}^t S^{j}$. The main observation is that to compute $f^{t+1}(V)$, we can think of first subtracting off the components of the columns of $A$ along $C^t$ to obtain $A'$, and simply computing $f_{A'}(V)$. Now, a calculation identical to the one in Eq.~\eqref{eq:dotprod-lb2} followed by the ones in Eq.~\eqref{eq:start}-\eqref{eq:temp8} (to go from one vector to the matrix) implies that $f_{A'}(OPT) \ge \big( \sqrt{f_{A}(OPT)} - \sqrt{f_{A}(C^t)}\big)^2$. Now we can complete the proof as in the proof of Theorem~\ref{thm:greedy-main}. \iffalse For any $t \geq 1$, we use $OPT$ as the benchmark, and note that $f^t(OPT) = f^t(OPT \cup A^{t-1}) \geq f(OPT)$. On the other hand, $f(\emptyset)$ is equal to $f(A^{t-1})$. So there is a gap of at least $f(OPT) - f(A^{t-1})$ to be exploited. Using Theorem~\ref{thm:core-set-1}, we know that in epoch $t$ we find a set $S^t$ with expected $\mathbb{E}[f^t(S^t)]$ at least $\Omega(\sigma_{min}(OPT) (f(OPT) - f(A^{t-1})))$. This can be rewritten as: $$ \mathbb{E}[f(A^t)] \geq (1-\sigma)\mathbb{E}[f(A^{t-1})] + \sigma f(OPT) $$ where $\sigma$ is $\Omega(\sigma_{min}(OPT))$.
By monotonicity of $f$ and induction, we can prove that $\mathbb{E}[f(A^t)]$ is at least $[1 - (1 - \Omega(\sigma_{min}(OPT)))^r] f(OPT)$ which completes the proof. \fi \end{proof} \subsection{Necessity of Random Partitioning} \label{app:rand-part} We will now make the intuition formal. We consider the $\textsf{GCSS}(A, B, k)$ problem, in which we wish to cover the columns of $B$ using columns of $A$. Our lower bound holds not just for the greedy algorithm, but for any {\em local} algorithm we use -- i.e., any algorithm that is not aware of the entire matrix $B$ and only works with the set of columns it is given and outputs a poly$(k)$ sized subset of them. \begin{thm} \label{thm:rand-part} For any square integer $k \geq 1$ and constants $\beta, c >0$, there exist two matrices $A$ and $B$, and a partitioning $(B_1, B_2, \dots, B_\ell)$ of $B$ with the following property. Consider any local, possibly randomized, algorithm that takes input $B_i$ and outputs $O(k^{\beta})$ columns $S_i$. Now pick $ck$ elements $S^*$ from $\cup_i S_i$ to maximize $f_A(S^*)$. Then the expected value of $f_A (S^*)$ (over the randomization of the algorithm) is at most $O\left( \frac{c\beta \log k}{\sqrt{k}} \right) f_A(OPT_k)$. \end{thm} \begin{proof} Let $k = a^2$, for some integer $a$. We consider matrices with $a^2 + a^3$ rows. Our target matrix $A$ will be a single vector containing all $1$'s. The coordinates (rows) are divided into sets as follows: $X = \{1, 2, \cdots, a^2\}$, and for $1 \leq i \leq a^2$, $Y_i$ is the set $\{a^2 + (i-1) \cdot a + 1, a^2 + (i-1) \cdot a + 2, \cdots, a^2 + i \cdot a\}$. Thus we have $a^2 + 1$ blocks, $X$ of size $a^2$ and the rest of size $a$. Now, let us describe the matrix $B_i$ that is sent to machine $i$.
It consists of all possible $a$-sized subsets of $X \cup Y_i$.\footnote{As described, it has an exponential in $a$ number of columns, but there are ways to deal with this.} Thus we have $\ell = a^2$ machines, each of which gets $B_i$ as above. Let us consider what a local algorithm would output given $B_i$. Since the instance is extremely symmetric, it will simply pick $O(k^{\beta})$ sets, such that all the elements of $X \cup Y_i$ are {\em covered}, i.e., the vector $A$ restricted to these coordinates is spanned by the vectors picked. But the key is that the algorithm cannot distinguish $X$ from $Y_i$! Thus we have that any set in the cover has at most $O(\beta \log a)$ overlap with the elements of $Y_i$.\footnote{To formalize this, we need to use Yao's minmax lemma and consider the uniform distribution.} Now, we have sets $S_i$, all of which have $O(\beta \log a)$ overlap with the corresponding $Y_i$. It is now easy to see that if we select at most $ck$ sets from $\cup_i S_i$, we can cover at most $O(c \beta a^2 \log a)$ of the $a^3$ coordinates in $\cup_i Y_i$. The optimal way to span $A$ is to pick precisely the indicator vectors for $Y_i$, which will cover a $(1-(1/a))$ fraction of the mass of $A$. Noting that $k = a^2$, we have the desired result. \iffalse We construct the set of columns $A_i$ based on $X$ and $Y_i$ as follows. We put $a^2 + a^3$ rows in $A$ one for each number in $X \cup Y_1 \cup Y_2 \cdots Y_{a^2}$. We set the number of machines $m$ to be equal to $k = a^2$. For any size $a$ subset $S \subset X \cup Y_i$, we put an indicator column $1_S$ in $A_i$ with $a$ ones in entries that belong to $S$, and zeros elsewhere. Finally we set $B$ to be just one column with all its entries equal to one. The symmetry we observe in each machine makes any algorithm unable to distinguish between entries in $X$ and entries in $Y_i$. Therefore in expectation every selected column will have at most $log(a)$ non-zero entries in $Y_i$.
Therefore we can say with high probability each column in $S_i$ has at most $log(a)$ non-zero entries in $Y_i$ since $S_i$ has size at most $k^{\beta}$ (polynomial in $k$). So in the pool of selected columns $\cup_{i=1}^m S_i$ with high probability each column has at most $log(a) \leq log(k)$ non-zero entries in $\cup_{i=1}^m Y_i$. So any subset of $c \cdots k$ columns among the selected columns will not cover more than $c \cdot k log(k)$ entries of the $a^3 = k\sqrt{k}$ entries of $\cup_{i=1}^m Y_i$. The proof completes by observing that $1-\frac{1}{k}$ fraction of entries of $B$ are in $\cup_{i=1}^m Y_i$. \fi \end{proof} \end{document}
\begin{document} \title{A slicing obstruction from the $\frac{10}{8}$ theorem} \begin{abstract} From Furuta's $\frac{10}{8}$ theorem, we derive a smooth slicing obstruction for knots in $S^3$ using a spin $4$-manifold whose boundary is $0$-surgery on a knot. We show that this obstruction is able to detect torsion elements in the smooth concordance group and find topologically slice knots which are not smoothly slice. \end{abstract} \maketitle \section{Introduction}\label{section:1} A knot $K$ in $S^3$ is \emph{smoothly slice} if it bounds a disk that is smoothly embedded in the four-ball. Although detecting whether or not a knot is slice is typically not an easy task, there are various known ways to obstruct sliceness. For instance, the Alexander polynomial of a slice knot factors, up to a unit, as $f(t)f(t^{-1})$, and the averaged signature function of the knot vanishes (see, for instance, \cite[Chapter~8]{Lickorish1997}). Also in recent years, modern techniques in low-dimensional topology have been applied to produce obstructions. Examples include the $\tau$-invariant \cite{Ozsvath2003, Rasmussen2003}, $\epsilon$ \cite{Hom2014} and $\Upsilon$ \cite{Ozsvath2014} invariants, all coming from Heegaard Floer homology \cite{Ozsvath2004a, Ozsvath2013}, and the $s$-invariant \cite{Rasmussen2010} from Khovanov homology \cite{Khovanov2000}. In this paper we introduce a new obstruction using techniques in handlebody theory. We call a $4$-manifold a \emph{2-handlebody} if it may be obtained by attaching $2$-handles to $D^4$. The main ingredient is the following: {\thm\label{thm:slicing} Let $K \subset S^3$ be a smoothly slice knot and $X$ be a spin 2-handlebody with $\partial X = S^3_0(K)$. Then either $b_2(X)=1$ or \[ 4b_2(X) \ge 5|\sigma(X)|+12. \]} A key tool used in the proof of Theorem~\ref{thm:slicing} is Furuta's $10/8$ theorem \cite{Furuta2001}. Our theorem can be regarded as an analogue of his theorem for manifolds with certain types of boundary.
Ideas similar to those in this paper were used by Bohr and Lee in \cite{Lee2001}, via the branched double cover of a knot. Given a knot $K$, we construct a spin 4-manifold $X$ such that $\partial X = S^3_0(K)$. If we think of $0$-surgery on $K$ as the boundary of the manifold given by a single 2-handle attached to $D^4$ along $K$, the spin structures on $S^3_0(K)$ are in one-to-one correspondence with characteristic sublinks in this Kirby diagram. (See Section~\ref{section:2} for the relevant definitions.) The 0-framed knot $K$ represents a spin structure which does not extend over this $4$-manifold. We may alter the $4$-manifold, without changing the boundary $3$-manifold, by a sequence of blow ups, blow downs and handle slides, until the characteristic link corresponding to this spin structure is the empty sublink; the manifold we obtain is then a spin 4-manifold. Now if $b_2$ and $\sigma$ of the resulting four-manifold violate the inequality of Theorem~\ref{thm:slicing}, $K$ is not smoothly slice. The reason we are interested in the obstruction obtained from Theorem~\ref{thm:slicing} is twofold. First, we show in Section~\ref{sec:4} that our obstruction is able to detect torsion elements in the concordance group; in particular, the obstruction detects the non-sliceness of the figure eight knot. Second, we show that the obstruction is capable of detecting the smooth non-sliceness of topologically slice knots. We remind the reader that a topologically slice knot is a knot in $S^3$ which bounds a locally flat disk in $D^4$. All the algebraic concordance invariants (e.g. the signature function) vanish for a topologically slice knot. \section{The Slicing Obstruction}\label{section:2} In this section we prove Theorem \ref{thm:slicing} and describe how to produce the spin manifolds used to give slicing obstructions. The argument uses Furuta's $10/8$ Theorem. {\thm\cite[Theorem~1]{Furuta2001} Let $W$ be a closed, spin, smooth 4-manifold with an indefinite intersection form.
Then \[4b_2(W) \geq 5 |\sigma(W)| +8.\]} Note that, by Donaldson's diagonalisation theorem \cite{Donaldson1987}, a closed, smooth, spin manifold $W$ can have a definite intersection form only if $b_2(W)=0$. \begin{proof}[Proof of Theorem \ref{thm:slicing}] We start by noting that when $K$ is smoothly slice, $S^3_0(K)$ smoothly embeds in $S^4$. (See \cite{Gilmer-Livingston}, for example.) The embedding splits $S^4$ into two spin manifolds $U$ and $V$ with common boundary $S^3_0(K)$. Since $S^3_0(K)$ has the same integral homology as $S^1 \times S^2$, a straightforward argument using the Mayer-Vietoris sequence shows that $U$ and $V$ have the same homology as $S^2 \times D^2$ and $S^1 \times D^3$, respectively. In particular, both spin structures on the three-manifold extend over $V$. Now, as in \cite[Lemma~5.6]{Donald2015}, if $X$ is a spin 2-handlebody with $\partial X = \partial V$, let $W=X \cup_{S^3_0(K)} -V$. This will be spin, and $\sigma(W) = \sigma(X)$ since $\sigma(V)=0$. In addition, we have $\chi(W)=\chi(X)=1+b_2(X)$. Since $H_1(W,X;\mathbb{Q}) \cong H_1(V,S^3_0(K);\mathbb{Q})=0$, it follows from the exact sequence for the pair $(W,X)$ that $b_1(W)=b_3(W)=0$. Therefore $b_2(W) = b_2(X) -1$. The result follows by applying Furuta's theorem in the case $b_2(X) >1$: then $b_2(W)>0$, so by the remark above the intersection form of $W$ is indefinite. \end{proof} The rest of this section provides the background needed to apply the obstruction of Theorem~\ref{thm:slicing}. We refer the reader to \cite{Gompf1999} for a more detailed discussion on spin manifolds and characteristic links. {\defn \label{spin} A manifold $X$ has a spin structure if its stable tangent bundle $TX\oplus \epsilon^k$, where $\epsilon^k$ denotes a trivial bundle, admits a trivialization over the 1-skeleton of $X$ which extends over the 2-skeleton. A spin structure is a homotopy class of such trivializations. } \\ It can be shown that the definition does not depend on $k$ for $k \ge 1$.
An oriented manifold $X$ admits a spin structure if and only if its second Stiefel-Whitney class vanishes, that is, $\omega_2(X)=0$. An oriented 3-manifold always admits a spin structure, since its tangent bundle is trivial. We remind the reader that any closed, connected, spin $3$-manifold $(Y, \mathfrak{s})$ is the spin boundary of a $4$-dimensional spin $2$-handlebody. A constructive proof is given in \cite{Kaplan1979}. As described in Section~\ref{section:1}, we are interested in $0$-surgery on knots. The resulting three-manifold is spin with two spin structures $\mathfrak{s}_0, \mathfrak{s}_1$. Note that one of the spin structures, $\mathfrak{s}_0$, extends to the 4-manifold obtained by attaching a $0$-framed $2$-handle to $D^4$ along the knot. There is another $2$-handlebody, besides the one with a single $2$-handle that $\mathfrak{s}_0$ extends over, which is also bounded by $S^3_0(K)$ and over which $\mathfrak{s}_1$ extends. We explain how to construct such a four-manifold in what follows. {\defn\label{charactersitic} Let $L=\{K_1, ..., K_m\}$ be a framed, oriented link in $S^3$. The linking number $lk(K_i, K_j)$ is defined as the linking number of the two components if $i \neq j$ and as the framing on $K_i$ if $i=j$. A characteristic link $L'\subset L$ is a sublink such that for each $K_i$ in $L$, $lk(K_i, K_i)$ is congruent mod 2 to the total linking number $lk(K_i, L')$.} \\ Note that the characteristic links are independent of the choice of orientation of $L$. A framed link is a Kirby diagram for a $2$-handlebody $X$, and the characteristic links are in one-to-one correspondence with spin structures on $\partial X$. The link components form a natural basis for $H_2(X)$ and the intersection form is given by the linking numbers $lk$. The empty link is characteristic if and only if this form is even and, since $2$-handlebodies are simply connected, this occurs if and only if $X$ is spin.
A non-empty characteristic link corresponds to a spin structure on the boundary which does not extend over the $2$-handlebody. We can remove a characteristic link by modifying the Kirby diagram by handle-slide, blow up and blow down moves until it becomes the empty sublink. These do not change the boundary $3$-manifold, but the latter two change the $4$-manifold. This process produces a spin $4$-manifold over which the given spin structure extends. For convenience, we briefly recall how these moves change the framings in the link and how they affect a characteristic link. When a component $K_1$ with framing $n_1$ is slid over $K_2$ with framing $n_2$, the new component will be a band sum of $K_1$ and a parallel copy of $K_2$. It will have framing $n_1 + n_2 + 2 lk(K_1,K_2)$, where this linking number is computed using orientations on $K_1$ and $K_2$ induced by the band. The new component will represent the class of $K_1 + K_2$ in $H_2(X)$. Consequently, if $K_1$ and $K_2$ were part of a characteristic link before the slide, the new component will replace them in the new diagram. The most basic blow up move adds a split unknot with framing $\pm 1$. Each characteristic link will change simply by adding this extra component. A general blow up across $r$ parallel strands consists of first adding a split component and then sliding each of the $r$ strands over it. Therefore, if the linking number of the blow up circle with a component of the Kirby diagram is $p$, blowing up positively (respectively negatively) changes the framing on that component by $p^2$ (respectively $-p^2$). If a blow up curve links a characteristic link non-trivially mod 2, then it does not add any components to the characteristic link. However, if the blow up curve circles an even number $2k$ of strands of a characteristic link, it will be added to the characteristic link. Example~\ref{eg:figureeight} (more specifically, Figure~\ref{fig8b} and Figure~\ref{fig8c}) illustrates this. A blow down is the reverse move.
Blowing down a component of a characteristic link removes it. Note that during the process of removing a characteristic link, we do not need to keep track of the whole Kirby diagram. Instead, we need only keep the information about the characteristic link and its framings, along with $b_2$ and $\sigma$. This is straightforward to do by counting the number of blow ups and blow downs with their signs. \subsection{Obtaining a spin 4-manifold bounded by $S^3_0(K)$} The argument above suggests that Theorem~\ref{thm:slicing} can give slicing obstructions for a knot $K$ that can be ``efficiently'' unknotted by a sequence of blow-ups. If the characteristic link is an unknot, the framing can be transformed to $\pm 1$ by further blow ups (along meridians) and then we may blow down to get an empty characteristic link. We finish this section by showing how Theorem~\ref{thm:slicing} can be used to prove that positive $(p, kp\pm1)$ torus knots are not smoothly slice for odd $p \ge 3$ \footnote{There are many ways to show that positive torus knots are not smoothly slice. Our goal in presenting this example is to show that our obstruction works well with \emph{generalized twisted torus knots}, which are, roughly speaking, torus knots where there are full-twists between adjacent strands. See Figure~\ref{fig:k6} for an example of a generalized twisted torus knot.} . Given a zero framed positive $(p, kp\pm1)$ torus knot, we first blow up $k$ times negatively around $p$ parallel strands. Each will introduce a negative full twist and, since $p$ is odd, the characteristic link will be a $-kp^2$ framed unknot. Blowing up $kp^2-1$ times positively along meridians and blowing down once negatively will give us a spin manifold $X$. This sequence used $k$ negative blow ups, $kp^2-1$ positive blow ups and one negative blow down so we see $b_2(X)= 1+k+kp^2-1-1= kp^2+k-1$ and $\sigma(X) = -k+kp^2-1+1=kp^2-k$. Now, $4b_2(X)-5|\sigma(X)|-12= -kp^2+9k-16<0$, and so such knots are not slice. 
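As a sanity check (ours, not part of the paper), the bookkeeping for $b_2$ and $\sigma$ in the torus-knot computation above can be replayed in a few lines of code, confirming that $4b_2(X)-5|\sigma(X)|-12=-kp^2+9k-16<0$ for odd $p\ge3$:

```python
def torus_knot_bookkeeping(p, k):
    """Replay the blow-up sequence from the text for the 0-framed
    (p, kp +/- 1) torus knot and return (b2, sigma) of the spin manifold X."""
    b2, sigma = 1, 0                   # one 0-framed 2-handle along the knot
    b2, sigma = b2 + k, sigma - k      # k negative blow-ups across p strands
    n = k * p**2 - 1                   # positive meridional blow-ups
    b2, sigma = b2 + n, sigma + n
    b2, sigma = b2 - 1, sigma + 1      # one negative blow-down
    return b2, sigma

for p in (3, 5, 7, 9):                 # odd p >= 3
    for k in (1, 2, 3):
        b2, sigma = torus_knot_bookkeeping(p, k)
        assert (b2, sigma) == (k * p**2 + k - 1, k * p**2 - k)
        obstruction = 4 * b2 - 5 * abs(sigma) - 12
        assert obstruction == -k * p**2 + 9 * k - 16
        assert obstruction < 0         # the inequality of Theorem 1 fails
```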
\section{Examples}\label{sec:4} The obstruction from Theorem \ref{thm:slicing} is able to detect knots with order two in the smooth concordance group and can also be used to obstruct topologically slice knots from being smoothly slice. This section describes examples which illustrate each of these properties. \subsection{Figure eight knot} \begin{examp}\label{eg:figureeight} The knot $4_1$ is not slice. \end{examp} This knot is shown in Figure \ref{fig8a}. Start with the manifold obtained by attaching a $0$-framed $2$-handle to $D^4$ along $4_1$. Blow up the manifold twice as indicated in Figure \ref{fig8b}. Sliding one of the two blow up curves over the other results in the diagram in Figure~\ref{fig8c}. The characteristic link is a split link whose components are a $0$-framed trefoil and a $-2$-framed unknot. Figure \ref{fig8d} shows just the characteristic link. Blowing up negatively once more changes the characteristic link to a $2$-component unlink with framings $-2$ and $-9$ as in Figure \ref{fig8e}. This is inside a $4$-manifold with signature $-3$ and second Betti number $4$. Positively blowing up meridians nine times changes both framings in the characteristic link to $-1$ and blowing down each of them results in a spin manifold. Counting blow-up and blow-down moves, we see that the signature of this spin manifold is $+8$ and the second Betti number is $11$. Theorem \ref{thm:slicing} then applies. \begin{figure} \caption{A sequence of blow up and blow downs showing that $S^3_0(4_1)$ bounds a spin manifold with $b_2=11$ and $\sigma=8$. The characteristic link at each stage is specified by darker curves.} \label{fig8} \label{fig8a} \label{fig8b} \label{fig8c} \label{fig8d} \label{fig8e} \end{figure} This example shows that Theorem \ref{thm:slicing} may obstruct sliceness of $K$ but not of $K\#K$. The following result describes how the obstruction behaves with respect to connected sums. 
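Before stating it, here is a quick numerical tally (ours, not part of the paper) of the move counts in Example \ref{eg:figureeight}, together with the connected-sum arithmetic showing why the bound is not violated for $4_1 \# 4_1$:

```python
# Example (figure eight): track (b2, sigma) through the moves in the text.
b2, sigma = 1, 0                     # 0-framed 2-handle along 4_1
b2, sigma = b2 + 3, sigma - 3        # three negative blow-ups
assert (b2, sigma) == (4, -3)        # as stated in the text
b2, sigma = b2 + 9, sigma + 9        # nine positive meridional blow-ups
b2, sigma = b2 - 2, sigma + 2        # blow down the two -1-framed unknots
assert (b2, sigma) == (11, 8)
assert 4 * b2 < 5 * abs(sigma) + 12  # 44 < 52: 4_1 is not smoothly slice

# Connected sum 4_1 # 4_1 (see the proposition on connected sums below):
# b2 = 11 + 11 + 1 and sigma = 8 + 8.
b2_sum, sigma_sum = 2 * 11 + 1, 2 * 8
assert 4 * b2_sum >= 5 * abs(sigma_sum) + 12  # 92 >= 92: no obstruction
```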
For any knot $K$, let $\mathfrak{s}_1$ denote the spin structure on $S^3_0(K)$ which does not extend over the $4$-manifold produced by attaching a $0$-framed $2$-handle to $D^4$ along $K$. \begin{prop}\label{prop:sums} Let $K_1, K_2$ be knots and $X_i$ be a smooth spin 2-handlebody with boundary $(S^3_0(K_i), \mathfrak{s}_1)$ for $i=1,2$. There is a smooth spin 2-handlebody $X$ with $\partial X = (S^3_0(K_1\# K_2), \mathfrak{s}_1)$, $\sigma(X) = \sigma(X_1)+\sigma(X_2)$ and $b_2(X) = b_2(X_1) + b_2(X_2) + 1$. \end{prop} \begin{proof} Let $W$ be the 2-handle cobordism from $Y=S^3_0(K_1) \# S^3_0(K_2)$ to $S^3_0(K_1\#K_2)$ illustrated in Figure \ref{nicecobordism}. Let $X$ be the manifold constructed by attaching $W$ to $X_1 \natural X_2$ along $Y$. \begin{figure} \caption{ {$2$-handle cobordism $W:S^3_0(K_1) \# S^3_0(K_2) \to S^3_0(K_1\#K_2)$.}} \label{nicecobordism} \end{figure} The characteristic link for the spin structure $\mathfrak{s}_1$ in $Y$ is the knot $K_1 \# K_2$ and, since the new 2-handle has linking zero with this component, there is a spin structure on $W$ which restricts to $\mathfrak{s}_1 \# \mathfrak{s}_1$ on $Y$ and $\mathfrak{s}_1$ on $S^3_0(K_1\#K_2)$. Consequently, $X$ extends the correct spin structure on its boundary. It is easy to see that $\sigma(W) = 0$ and so $\sigma(X) = \sigma(X_1) + \sigma(X_2)$. Since $X_1, X_2$ and $X$ are all 2-handlebodies \[b_2(X) = \chi(X) -1 = \chi(X_1 \natural X_2) +\chi(W) -1 = 1+b_2(X_1) + b_2(X_2).\] \end{proof} \begin{rmk} The signature of any spin manifold with spin boundary $(S^3_0(K),\mathfrak{s}_1)$ is $8 \operatorname{Arf} K \mod 16$, where $\operatorname{Arf} K$ is the Arf invariant of the knot $K$. (See \cite{Saveliev2002}.) Note that after removing the characteristic link, to get to a spin manifold bounded by the $0$-surgery on $K$, the signature must be a multiple of $8$. \end{rmk} \subsection{A topologically slice example} Let $K$ be the knot shown in Figure \ref{fig:k6}. 
A straightforward calculation of the Alexander polynomial shows that $\Delta_K(t)=1$ and so $K$ is topologically slice. See~\cite[11.7B~Theorem]{Freedman1990}, \cite[Theorem~7]{Freedman1984}. See also~\cite[Appendix~A]{Garoufalidis2004} and \cite{Cha2014}. \begin{figure}\label{fig:k6} \end{figure} \begin{examp} $K$ is not smoothly slice. \end{examp} Add a $0$-framed $2$-handle to $\partial D^4$ along $K$ and then blow up three times around the sets of strands indicated in Figure \ref{fig:k6BU}. Blow up negatively across nine strands on the top and positively across five and seven strands on the bottom of the diagram. This gives a manifold with signature $1$ and second Betti number $4$. The characteristic link has one component, as shown in Figure \ref{fig:k6BU2}, with framing $-7$. An isotopy verifies that this knot is $4_1$. \begin{figure} \caption{ {$K$ can be simplified by blowing up along the blue curves with appropriate signs. Note that none of the blue curves will be part of the characteristic link.}} \label{fig:k6BU} \end{figure} \begin{figure} \caption{ {Characteristic link is a $-7$-framed figure-eight. }} \label{fig:k6BU2} \end{figure} Following the procedure from Example \ref{eg:figureeight}, we may blow up negatively three times to produce a characteristic link which is a two-component unlink with framings $-2$ and $-16$ in a manifold with $\sigma=-2$ and $b_2=7$. Blow up meridional curves of this unlink until the framing coefficients are both $-1$, then blow down the resulting $-1$-framed unlink. This yields a spin manifold with signature $16$ and second Betti number $21$. Therefore, by Theorem~\ref{thm:slicing}, $K$ is non-slice. Note that Figure \ref{fig:k6} presents $K$ as a generalized twisted torus knot. It is the closure of a braid formed by taking a $(9,8)$ torus knot and then adding negative full twists on seven strands, then on non-adjacent sets of three strands and finally a pair of negative clasps. 
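The same style of bookkeeping (our illustrative sketch) confirms the counts quoted in this example:

```python
# Track (b2, sigma) through the moves described for the knot K of Figure fig:k6.
b2, sigma = 1, 0                     # 0-framed 2-handle along K
b2, sigma = b2 + 3, sigma + 1        # one negative and two positive blow-ups
assert (b2, sigma) == (4, 1)         # as stated in the text
b2, sigma = b2 + 3, sigma - 3        # three negative blow-ups, as for 4_1
assert (b2, sigma) == (7, -2)
b2, sigma = b2 + 16, sigma + 16      # positive meridional blow-ups: -2 -> -1, -16 -> -1
b2, sigma = b2 - 2, sigma + 2        # blow down the two -1-framed unknots
assert (b2, sigma) == (21, 16)
assert 4 * b2 < 5 * abs(sigma) + 12  # 84 < 92: K is not smoothly slice
```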
The obstruction from Theorem \ref{thm:slicing} is generally easier to apply to knots like this because they can be unknotted efficiently by blowing up to remove full twists. For many twisted torus knots this provides a slicing obstruction which is often more easily computable than the signature function. It would be interesting to find other examples where this obstruction applies. It may be able to obstruct smooth sliceness for Whitehead doubles. To apply Theorem~\ref{thm:slicing}, we need the sequence of blow-up moves to predominantly involve blow-ups of the same sign. However, at least for the standard diagrams of Whitehead doubles, it is not easy to see how to do this. Similarly, it should be possible to detect other torsion elements of the knot concordance group. Example~\ref{eg:figureeight} demonstrates this in principle but it would be interesting to obtain new examples of torsion elements. \end{document}
\begin{document} \title{Generalized perfect numbers} \twoauthors{ Antal Bege }{ Sapientia--Hungarian University of Transilvania\\ Department of Mathematics and Informatics,\\ T\^argu Mure\c{s}, Romania }{[email protected] }{ Kinga Fogarasi } { Sapientia--Hungarian University of Transilvania\\ Department of Mathematics and Informatics,\\ T\^argu Mure\c{s}, Romania }{[email protected] } \short{ A. Bege, K. Fogarasi }{ Generalized perfect numbers } \begin{abstract} Let $\sigma(n)$ denote the sum of positive divisors of the natural number $n$. A natural number is perfect if $\sigma(n) = 2n$. This concept has already been generalized in the form of superperfect numbers $\sigma^2(n) = \sigma (\sigma (n)) = 2n$ and hyperperfect numbers $\sigma(n) = \frac{k+1}{k} n + \frac{k-1}{k}.$ \\ In this paper some new ways of generalizing perfect numbers are investigated, numerical results are presented and some conjectures are established. \end{abstract} \section{Introduction} For the natural number $n$ we denote the sum of positive divisors by \[ \sigma(n)=\sum\limits_{d\mid n} d. \] \begin{definition} A positive integer $n$ is called a perfect number if it is equal to the sum of its proper divisors. Equivalently, \[ \sigma (n) = 2n. \] \end{definition} \begin{example} The first few perfect numbers are: $6, 28, 496, 8128, \dots$ (Sloane's A000396 \cite{11}), since \begin{eqnarray} 6 &=& 1 + 2 + 3 \nonumber\\ 28 &=& 1 + 2 + 4 + 7 + 14\nonumber\\ 496 &=& 1 + 2 + 4 + 8 + 16 + 31 + 62 + 124 + 248\nonumber \end{eqnarray} Euclid discovered that the first four perfect numbers are generated by the formula $2^{n-1} (2^n -1)$. He also noticed that $2^n-1$ is a prime number in each of these instances, and in Proposition IX.36 of the ``Elements'' proved that the formula gives an even perfect number whenever $2^n-1$ is prime.
\\ Several wrong assumptions were made, based on the four known perfect numbers: \begin{itemize} \item[$\bullet$] Since the formula $2^{n-1} (2^n-1)$ gives the first four perfect numbers for $n = 2, 3, 5,$ and 7 respectively, the fifth perfect number would be obtained when $n = 11$. However $2^{11} - 1 = 23 \cdot 89$ is not prime, therefore this does not yield a perfect number. \item[$\bullet$] The fifth perfect number would have five digits, since the first four have 1, 2, 3, and 4 digits respectively; in fact, it has 8 digits. \item[$\bullet$] The perfect numbers would alternately end in 6 or 8. The fifth perfect number does indeed end in a 6, but so does the sixth, therefore the alternation is disturbed. \end{itemize} \end{example} In order for $2^n-1$ to be prime, $n$ must itself be prime. \begin{definition} A \textbf{Mersenne prime} is a prime number of the form \[ M_n = 2^{p_n} - 1, \] where $p_n$ is itself a prime number. \end{definition} Perfect numbers are intimately connected with these primes, since there is a concrete one-to-one association between \emph{even} perfect numbers and Mersenne primes. The fact that Euclid's formula gives all possible even perfect numbers was proved by Euler two millennia after the formula was discovered. \\ Only 46 Mersenne primes are known by now (November, 2008 \cite{9}), which means there are 46 known even perfect numbers. There is a conjecture that there are infinitely many perfect numbers. The search for new ones is the goal of a distributed search program via the Internet, named GIMPS (Great Internet Mersenne Prime Search), in which hundreds of volunteers use their personal computers to perform pieces of the search. \\ It is not known whether any \emph{odd} perfect numbers exist, although numbers up to $10^{300}$ (R. Brent, G. Cohen, H. J. J. te Riele \cite{brent1}) have been checked without success.
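Euclid's formula, and the absence of small odd perfect numbers, can be confirmed with a short brute-force sketch (ours, for illustration only):

```python
def sigma(n):
    """Sum of the positive divisors of n (trial division up to sqrt(n))."""
    s, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            s += d
            if d != n // d:
                s += n // d
        d += 1
    return s

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Every perfect number below 10^4 is even and of Euclid's form
# 2^(n-1) * (2^n - 1) with 2^n - 1 a Mersenne prime.
perfect = [m for m in range(2, 10_000) if sigma(m) == 2 * m]
assert perfect == [6, 28, 496, 8128]
assert perfect == [2**(n - 1) * (2**n - 1) for n in (2, 3, 5, 7)]
assert all(is_prime(2**n - 1) for n in (2, 3, 5, 7))
assert not any(m % 2 for m in perfect)  # no odd perfect numbers below 10^4
```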
There is also a distributed search project for this problem, whose goal is to increase the lower bound beyond the limit above. Despite this lack of knowledge, various results have been obtained concerning odd perfect numbers: \begin{itemize} \item[$\bullet$] Any odd perfect number must be of the form $12 m + 1$ or $36m + 9$. \item[$\bullet$] If $n$ is an odd perfect number, it has the following form: \[ n = q^\alpha p_1^{2e_1} \dots p_k^{2e_k}, \] where $q, p_1, \dots, p_k$ are distinct primes and $q \equiv \alpha \equiv 1 \pmod{4}$. (see L. E. Dickson \cite{dickson1}) \item[$\bullet$] In the above factorization, $k$ is at least 8, and if 3 does not divide $n$, then $k$ is at least 11. \item[$\bullet$] The largest prime factor of an odd perfect number $n$ is greater than $10^8$ (see T. Goto, Y. Ohno \cite{goto1}), the second largest prime factor is greater than $10^4$ (see D. Iannucci \cite{ianucci1}), and the third one is greater than $10^2$ (see D. Iannucci \cite{ianucci2}). \item[$\bullet$] If any odd perfect numbers exist in the form \[ n = q^\alpha p_1^{2e_1} \dots p_k^{2e_k}, \] they have at least 75 prime factors in total, that is, $\alpha + 2 \sum\limits_{i=1}^k e_i \ge 75.$ (see K. G. Hare \cite{4}) \end{itemize} D. Suryanarayana introduced the notion of a superperfect number in 1969 \cite{8}; the definition follows. \begin{definition} A positive integer $n$ is called a \textbf{superperfect number} if \[ \sigma (\sigma (n)) = 2n. \] \end{definition} Some properties concerning superperfect numbers: \begin{itemize} \item[$\bullet$] Even superperfect numbers are $2^{p-1}$, where $2^p -1$ is a Mersenne prime. \item[$\bullet$] If any odd superperfect numbers exist, they are square numbers (G. G. Dandapat \cite{dandapat1}) and either $n$ or $\sigma(n)$ is divisible by at least three distinct primes. (see H. J.
Kanold \cite{5}) \end{itemize} \section{Hyperperfect numbers} Minoli and Bear \cite{minoli1} introduced the concept of $k$-hyperperfect numbers, and they conjectured that there are $k$-hyperperfect numbers for every $k$. \begin{definition} A positive integer $n$ is called a \textbf{$k$-hyperperfect number} if \[ n = 1 + k[\sigma (n) - n-1]; \] rearranging gives \[ \sigma (n) = \frac{k+1}{k} n + \frac{k-1}{k}. \] \end{definition} \noindent We remark that a number is perfect if and only if it is 1-hyperperfect. In the paper of J. S. McCranie \cite{6}, all hyperperfect numbers less than $10^{11}$ have been computed. \begin{example} The table below shows some $k$-hyperperfect numbers for different values of $k$: \\ \begin{center} \begin{tabular}{|c|l|} \hline $\textbf{k}$ & $\mathbf{k}$-\textbf{hyperperfect} number \\ \hline 1 & 6, 28, 496, 8128, ... \\ 2 & 21, 2133, 19521, 176661, ... \\ 3 & 325, ... \\ 4 & 1950625, 1220640625, ... \\ 6 & 301, 16513, 60110701, ... \\ 10 & 159841, ... \\ 12 & 697, 2041, 1570153, 62722153, ... \\ \hline \end{tabular} \end{center} \end{example} \noindent Some results concerning hyperperfect numbers: \begin{itemize} \item[$\bullet$] If $k > 1$ is an odd integer and $p = (3k+1)/2$ and $q = 3k + 4$ are prime numbers, then $p^2q$ is $k$-hyperperfect; J. S. McCranie \cite{6} conjectured in 2000 that all $k$-hyperperfect numbers for odd $k > 1$ are of this form, but the hypothesis has not been proven so far. \item[$\bullet$] If $p$ and $q$ are distinct odd primes such that $k (p+q) = pq - 1$ for some integer $k$, then $n = pq$ is $k$-hyperperfect. \item[$\bullet$] If $k > 0$ and $p = k+1$ is prime, then for all $i > 1$ such that $q = p^i - p +1$ is prime, $n = p^{i-1} q$ is $k$-hyperperfect (see H. J. J. te Riele \cite{teriele1}, J. C. M. Nash \cite{7}).
\end{itemize} We have proposed some other forms of generalization, different from $k$-hyperperfect numbers, and we have also examined \textbf{super-hyperperfect numbers} (``super'' in the same sense as for superperfect numbers): \begin{eqnarray} && \sigma (\sigma (n)) = \frac{k+1}{k} n + \frac{k-1}{k} \nonumber \\ && \sigma (n) = \frac{2k-1}{k} n + \frac{1}{k} \nonumber \\ && \sigma (\sigma (n)) = \frac{2k-1}{k} n + \frac{1}{k} \nonumber \\ && \sigma (n) = \frac{3}{2} (n+1) \nonumber \\ && \sigma (\sigma (n)) = \frac{3}{2} (n+1) \nonumber \end{eqnarray} \section{Numerical results} To find numerical solutions of the above equations we used the ANSI C programming language, Maple and Octave. Small programs written in C were very useful for going through the smaller numbers up to $10^7$, and for the rest we used the other two programs. In this section the small numerical results are presented only in the cases where solutions were found. 3.1. Super-hyperperfect numbers. The table below shows the results we have reached: \begin{table}[htbp] \centering \begin{tabular}{|c|l|} \hline $\textbf{k}$ & \hspace{2cm} \textbf{n} \\ \hline 1 & $2, 2^2, 2^4, 2^6, 2^{12}, 2^{16}, 2^{18}$ \\ 2 & $3^2, 3^6, 3^{12}$ \\ 4 & $5^2$ \\ \hline \end{tabular} \end{table} 3.2. $\sigma(n) = \frac{2k-1}{k} n + \frac{1}{k}$ For $k = 2:$ \begin{table}[htbp] \centering \begin{tabular}{|c|l|} \hline $\textbf{n}$ & \textbf{prime factorization} \\ \hline 21 & $3 \cdot 7 = 3(3^2 - 2)$ \\ 2133 & $3^3 \cdot 79 = 3^3 \cdot (3^4 - 2)$ \\ 19521 & $3^4 \cdot 241 = 3^4 \cdot (3^5 - 2)$ \\ 176661 & $3^5 \cdot 727 = 3^5 \cdot (3^6 - 2)$ \\ \hline \end{tabular} \end{table} We have performed searches for $k = 3$ and $k = 5$ too, but we have not found any solutions. 3.3.
$\sigma (\sigma (n)) = \frac{2k-1}{k} n + \frac{1}{k}$ For $k=2:$ \begin{table}[htbp] \centering \begin{tabular}{|c|l|} \hline $\textbf{n}$ & \textbf{prime factorization} \\ \hline 9 & $3^2$ \\ 729 & $3^6$ \\ 531441 & $3^{12}$ \\ \hline \end{tabular} \end{table} We have performed searches for $k = 3$ and $k = 5$ too, but we have not found any solutions. 3.4. $\sigma (n) = \frac{3}{2} (n+1)$ \begin{table}[htbp] \centering \begin{tabular}{|c|l|} \hline $\textbf{n}$ & \textbf{prime factorization} \\ \hline 15 & $3\cdot 5$ \\ 207 & $3^2 \cdot 23$ \\ 1023 & $3\cdot 11 \cdot 31$ \\ 2975 & $5^2 \cdot 7 \cdot 17$ \\ 19359 & $3^4 \cdot 239$ \\ 147455 & $5\cdot 7 \cdot 11 \cdot 383$ \\ 1207359 & $3^3 \cdot 97 \cdot 461$ \\ 5017599 & $3^3 \cdot 83 \cdot 2239$\\ \hline \end{tabular} \end{table} \section{Results and conjectures} \begin{proposition} If $n = 3^{k-1} (3^k - 2)$ where $3^k - 2$ is prime, then $n$ is a 2-hyperperfect number. \end{proposition} \begin{proof} Since the divisor function $\sigma$ is multiplicative, and for a prime $p$ and a prime power $p^\alpha$ we have \[ \sigma (p) = p+1 \] and \[ \sigma (p^\alpha) = \frac{p^{\alpha +1} - 1}{p-1}, \] it follows that \begin{eqnarray} \sigma (n) &=& \sigma (3^{k-1} (3^k - 2)) = \sigma (3^{k-1}) \cdot \sigma (3^k - 2) = \frac{3^{(k-1)+1} -1}{3-1} \cdot (3^k - 2+1) = \nonumber\\ &=& \frac{(3^k - 1) \cdot (3^k -1 )}{2} = \frac{3^{2k} - 2\cdot 3^k + 1}{2} = \frac{3}{2} 3^{k-1} (3^k - 2) + \frac{1}{2}.\nonumber \end{eqnarray} \end{proof} \begin{conjecture} All 2-hyperperfect numbers are of the form $n = 3^{k-1} (3^k - 2),$ where $3^k - 2$ is prime. \end{conjecture} \noindent In support of this conjecture, we searched for primes that can be written as $3^k - 2$.
We have reached the following results: \begin{center} \begin{tabular}{|c|c|} \hline \textbf{ \# } & $k$ \textbf{ for which } $3^k - 2$ \textbf{ is prime} \\ \hline 1 & 2 \\ 2 & 4 \\ 3 & 5 \\ 4 & 6 \\ 5 & 9 \\ 6 & 22 \\ 7 & 37 \\ 8 & 41 \\ 9 & 90 \\ 10 & 102 \\ 11 & 105 \\ 12 & 317 \\ 13 & 520 \\ 14 & 541 \\ 15 & 561 \\ 16 & 648 \\ 17 & 780 \\ 18 & 786 \\ 19 & 957 \\ 20 & 1353 \\ 21 & 2224 \\ 22 & 2521 \\ 23 & 6184 \\ 24 & 7989 \\ 25 & 8890 \\ 26 & 19217 \\ 27 & 20746 \\ \hline \end{tabular} \end{center} \noindent Therefore the last result we reached is $3^{20745} (3^{20746} - 2)$, which has 19796 digits. \noindent If we consider the super-hyperperfect numbers given by the special form $\sigma (\sigma (n)) = \frac{3}{2} n + \frac{1}{2}$, we can prove the following result. \begin{proposition} If $n = 3^{p-1}$ where $p$ and $(3^p -1)/2$ are primes, then $n$ is a super-hyperperfect number. \end{proposition} \begin{proof} \begin{eqnarray*} \sigma (\sigma(n)) &=& \sigma (\sigma (3^{p-1})) = \sigma \left(\frac{3^p -1}{2} \right) = \frac{3^p -1}{2} + 1 =\\ &=& \frac{3}{2} \cdot 3^{p-1} + \frac{1}{2}=\frac{3}{2}n+\frac{1}{2}. \end{eqnarray*} \end{proof} \begin{conjecture} All solutions of this generalization are of the form $3^{p-1}$, where $p$ and $(3^p -1)/2$ are primes. \end{conjecture} In support of this conjecture, we searched for primes $p$ for which $(3^p-1)/2$ is also prime. We have reached the following results: \begin{center} \begin{tabular}{|c|c|} \hline \textbf{ \# } & $p-1$, \textbf{where} $p$ \textbf{and} $(3^p - 1)/2$ \textbf{are prime} \\ \hline 1 & 2 \\ 2 & 6 \\ 3 & 12 \\ 4 & 540 \\ 5 & 1090 \\ 6 & 1626 \\ 7 & 4176 \\ 8 & 9010 \\ 9 & 9550 \\ \hline \end{tabular} \end{center} \noindent Therefore the last result we reached is $3^{9550}$, which has 4556 digits.
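Both propositions, and the first entries of the tables above, are easy to verify numerically; the following sketch (ours, not part of the paper) checks them for small parameters:

```python
def sigma(n):
    """Sum of the positive divisors of n."""
    s, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            s += d
            if d != n // d:
                s += n // d
        d += 1
    return s

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Proposition: n = 3^(k-1) (3^k - 2) with 3^k - 2 prime is 2-hyperperfect,
# i.e. sigma(n) = (3n + 1)/2; k = 2, 4, 5, 6 give the table entries
# 21, 2133, 19521, 176661.
for k in (2, 4, 5, 6):
    assert is_prime(3**k - 2)
    n = 3**(k - 1) * (3**k - 2)
    assert 2 * sigma(n) == 3 * n + 1

# Proposition: n = 3^(p-1) with p and (3^p - 1)/2 prime is super-hyperperfect,
# i.e. sigma(sigma(n)) = (3n + 1)/2; p = 3, 7, 13 correspond to the
# table entries p - 1 = 2, 6, 12.
for p in (3, 7, 13):
    assert is_prime(p) and is_prime((3**p - 1) // 2)
    n = 3**(p - 1)
    assert 2 * sigma(sigma(n)) == 3 * n + 1
```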
\rightline{\emph{Received: November 9, 2008}} \end{document}
\begin{document} \title{\bf A Generalized Heckman Model With Varying Sample Selection Bias and Dispersion Parameters} \author[1,3]{Fernando de S. Bastos\thanks{E-mail: \texttt{[email protected]}}} \affil[1]{\it Instituto de Ci\^encias Exatas e Tecnol\'ogicas, Universidade Federal de Vi\c cosa, Florestal, Brazil} \author[2,3]{Wagner Barreto-Souza\thanks{E-mail: \texttt{[email protected]}}} \affil[2]{\it Statistics Program, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia} \affil[3]{\it Departamento de Estat\'istica, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil} \author[2]{Marc G. Genton\thanks{E-mail: \texttt{[email protected]}}} \date{} \maketitle \begin{abstract} Many proposals have emerged as alternatives to the Heckman selection model, mainly to address the non-robustness of its normal assumption. The 2001 Medical Expenditure Panel Survey data is often used to illustrate this non-robustness of the Heckman model. In this paper, we propose a generalization of the Heckman sample selection model by allowing the sample selection bias and dispersion parameters to depend on covariates. We show that the non-robustness of the Heckman model may be due to the assumption of the constant sample selection bias parameter rather than the normality assumption. Our proposed methodology allows us to understand which covariates are important to explain the sample selection bias phenomenon rather than to only form conclusions about its presence. We explore the inferential aspects of the maximum likelihood estimators (MLEs) for our proposed generalized Heckman model. More specifically, we show that this model satisfies some regularity conditions such that it ensures consistency and asymptotic normality of the MLEs. Proper score residuals for sample selection models are provided, and model adequacy is addressed. 
Simulated results are presented to check the finite-sample behavior of the estimators and to verify the consequences of not considering varying sample selection bias and dispersion parameters. We show that the normal assumption for analyzing medical expenditure data is suitable and that the conclusions drawn using our approach are coherent with findings from prior literature. Moreover, we identify which covariates are relevant to explain the presence of sample selection bias in this important dataset.
\end{abstract}
\noindent {\it \textbf{Keywords}:} Asymptotics; Heteroscedasticity; Regularity conditions; Score residuals; Varying sample selection bias.
\doublespacing
\section{Introduction}
\label{sec:intro}
\noindent \citet{heckman1974,heckman1976} introduced a model for dealing with the sample selection bias problem with the aid of a bivariate normal distribution to relate the outcome of interest and a selection rule. A semiparametric alternative to this model, known as Heckman’s two-step method, was proposed by \cite{heckman1979} to handle the non-robustness of the normal distribution in the presence of outliers. The most discussed problem regarding the Heckman model is its sensitivity to the normality assumption on the errors. Misspecification of the error distribution leads to inconsistent maximum likelihood estimators, yielding biased estimates \citep{TsayPin}. On the other hand, when the error terms are correctly specified, estimation by maximum likelihood or by procedures based on the likelihood produces consistent and efficient estimators \citep{leung1996choice,enders2010applied}. However, even when the shape of the error density is correctly specified, heteroscedasticity of the error terms can cause inconsistencies in the parameter estimates, as shown by \cite{hurd1979estimation} and \cite{arabmazar1981further}.
In response to this concern, \cite{donald1995two} discussed how heteroscedasticity in sample selection models is relatively neglected and provided two reasons to motivate the importance of taking it into account in practice. The first reason is that the data used to fit sample selection models typically comprise large databases, in which heterogeneity is commonly found. The second reason is that the parameter estimates obtained by fitting the usual selection models may in some cases be more severely affected by heteroscedasticity than by an incorrect distribution of the error terms \citep{powell1986symmetrically}. Nevertheless, even though there is a large body of recent research on sample selection models, few studies have been carried out to correct or minimize the impact of heteroscedasticity. This is one of the aims of our paper. \cite{Siddhartha} proposed a semiparametric model for data with sample selection bias. They considered nonparametric functions in their model, which allowed great flexibility in the way covariates affect the response variables. They also presented a Bayesian method for the analysis of such models. Subsequently, \cite{Wiesenfarth} introduced another general estimation method based on Markov chain Monte Carlo simulation techniques and used a simultaneous equation system that incorporates Bayesian versions of penalized smoothing splines. Recent works on sample selection models have aimed to provide robust alternatives to the Heckman model. In this direction, \cite{marchenkoGenton} proposed a Student-$t$ sample selection model for dealing with the robustness to the normal assumption in the Heckman model. \cite{zhelonkin2013robustness} proposed a modified robust semiparametric alternative based on Heckman's two-step estimation method. They proved the asymptotic normality of the proposed estimators and provided the asymptotic covariance matrix.
To deal with departures from normality due to skewness, \cite{ogundimu2016sample} introduced the skew-normal sample selection model to mitigate the effect of skewness remaining after applying a logarithmic transformation to the outcome variable. Another direction that has been explored in the last few years is the modeling of discrete data with sample selection. For instance, \cite{MARRA2016110} introduced sample selection models for count data, potentially allowing for the use of any discrete distribution, non-Gaussian dependencies between the selection and outcome equations, and flexible covariate effects. The modeling of zero-inflated count data with sample selection bias is discussed by \cite{Wysmar2018}. \cite{Beili} considered the semiparametric identification and estimation of a heteroscedastic binary choice model with endogenous dummy regressors, assuming no parametric restriction on the distribution of the error term. Their framework accommodates general multiplicative heteroscedasticity in both the selection and outcome equations, as well as multiple discrete endogenous regressors. A class of sample selection models for discrete and other non-Gaussian outcomes was recently proposed by \cite{azzalinietal}. \cite{Giampiero} introduced a generalized additive model for location, scale and shape which accounts for non-random sample selection. \cite{Taeyoung} proposed a Bayesian methodology to correct the estimation bias of sample selection models, based on a semiparametric Bernstein polynomial regression model that incorporates the sample selection scheme into a stochastic monotone trend constraint, variable selection, and robustness against departures from the normality assumption. In the aforementioned papers, the solution to deal with departures from normality for continuous outcomes is to assume robust alternatives such as the Student-$t$ or skew-normal distributions. Another common approach is to consider nonparametric structures for the density of the error terms.
Our proposal goes in a different direction, one which has not yet been explored in the literature. In this paper, we propose a generalization of the Heckman sample selection model by allowing the sample selection bias and dispersion parameters to depend on covariates. We show that the non-robustness of the Heckman model may be due to the assumption of a constant sample selection bias parameter rather than to the normality assumption. Our proposed methodology allows us to understand which covariates are important to explain the sample selection bias phenomenon rather than only to draw conclusions about its presence. It is worth mentioning that our methodology can be straightforwardly adapted to existing sample selection models such as those proposed by \cite{marchenkoGenton}, \cite{ogundimu2016sample}, and \cite{Wysmar2018}. We now highlight other contributions of our paper:
\begin{itemize}
\item We explore the inferential aspects of the maximum likelihood estimators (MLEs) for our proposed generalized Heckman model. More specifically, we show that this model satisfies regularity conditions ensuring consistency and asymptotic normality of the MLEs. In particular, we show that the Heckman model satisfies the regularity conditions, which is a new finding.
\item A proper residual for sample selection models is proposed as a byproduct of our asymptotic analysis. This is another relevant contribution of our paper, since this point has not yet been thoroughly addressed.
\item We develop an \texttt{R} package for fitting our proposed generalized Heckman model. It also includes the Student-$t$ and skew-normal sample selection models, which had not been implemented in \texttt{R} \citep{SoftwareR} before. This makes the paper replicable and facilitates the use of our generalized Heckman model by practitioners.
\item We show that the normal assumption for analyzing medical expenditure data is suitable and that the conclusions drawn using our approach are consistent with findings from prior literature. Moreover, we identify which covariates are relevant for explaining the presence of sample selection bias in this important dataset.
\end{itemize}
This paper is organized as follows. In Section \ref{HeckmanGen} we define the generalized Heckman (GH) sample selection model and discuss estimation of its parameters through the maximum likelihood method. Furthermore, diagnostic tools and residual analysis are discussed. Section \ref{asymptotics} is devoted to showing that the GH model satisfies regularity conditions that ensure consistency and asymptotic normality of the maximum likelihood estimators. In Section \ref{simulations}, we present Monte Carlo simulation results for evaluating the performance of the maximum likelihood estimators of our proposed model and for checking the behavior of other existing methodologies under misspecification. In Section \ref{3:application} we apply our generalized Heckman model to the data on ambulatory expenditures from the 2001 Medical Expenditure Panel Survey and show that our methodology overcomes an existing problem in a simple way. Concluding remarks are given in Section \ref{concluding_remarks}. This paper is accompanied by Supplementary Material, which can be obtained from the authors upon request.
\section{Generalized Heckman Model}\label{HeckmanGen}
Assume that $\{(Y_{1i}^{*},Y_{2i}^{*})\}_{i=1}^n$ are linearly related to covariates $\pmb{x}_{i}\in \mathbb{R}^{p}$ and $\pmb{w}_{i}\in \mathbb{R}^{q}$ through the following regression structures:
\begin{align}
Y_{1i}^{*}&=\mu_{1i}+\epsilon_{1i}, \label{eq_reg_heckman} \\
Y_{2i}^{*}&=\mu_{2i}+\epsilon_{2i}, \label{eq_sel_heckman}
\end{align}
where $\mu_{1i}=\pmb{x}_{i}^\top\pmb\beta$, $\mu_{2i}=\pmb w_{i}^\top\pmb\gamma$, $\pmb{\beta}=(\beta_1,\ldots,\beta_p)^\top\in \mathbb{R}^{p}$ and $\pmb{\gamma}=(\gamma_1,\ldots,\gamma_q)^\top\in \mathbb{R}^{q}$ are vectors of unknown parameters with associated covariate vectors $\pmb x_i$ and $\pmb w_i$, for $i=1,\ldots,n$, and $\{(\epsilon_{1i},\epsilon_{2i})\}_{i=1}^n$ is a sequence of independent bivariate normal random vectors. More specifically, we suppose that
\begin{align}\label{2:disterro2}
\begin{pmatrix} \epsilon_{1i}\\ \epsilon_{2i} \end{pmatrix} &\sim \mathcal{N}_2 \begin{bmatrix} \begin{pmatrix} 0\\ 0 \end{pmatrix}\!\!,& \begin{pmatrix} \sigma_{i}^{2} & \rho_{i}\sigma_{i} \\ \rho_{i}\sigma_{i} & 1 \end{pmatrix} \end{bmatrix},
\end{align}
with the following regression structures for the sample selection bias and dispersion parameters:
\begin{eqnarray}\label{addreg}
\mbox{arctanh}\,\rho_i=\pmb v_i^\top\pmb\kappa \quad \mbox{and}\quad \log\sigma_i=\pmb e_i^\top\pmb\lambda,
\end{eqnarray}
where $\pmb{\lambda}=(\lambda_{1},\ldots,\lambda_{r})^\top\in \mathbb{R}^{r}$ and $\pmb{\kappa}=(\kappa_{1},\ldots,\kappa_{s})^\top\in \mathbb{R}^{s}$ are parameter vectors, with associated covariate vectors $\pmb v_i$ and $\pmb e_i$, for $i=1,\ldots,n$. The $\mbox{arctanh}$ (inverse hyperbolic tangent) link function used for the sample selection bias parameter ensures that it belongs to the interval $(-1,1)$. The variable $Y_{1i}^{*}$ is observed only if $Y_{2i}^{*}>0$, while $Y_{2i}^{*}$ itself is latent: we only know whether it is greater than 0 or not.
Equation (\ref{eq_reg_heckman}) is the outcome equation of primary interest, and equation (\ref{eq_sel_heckman}) is the selection equation. In practice, we observe the variables
\begin{align}
U_{i}&=I\{Y_{2i}^{*}>0\},\\
Y_{i}&=Y_{1i}^{*}U_{i},\label{mod_heckman_ind}
\end{align}
for $i=1,\ldots,n$, where $I\{Y_{2i}^{*}>0\}=1$ if $Y_{2i}^{*}>0$ and equals 0 otherwise. Our generalized Heckman sample selection model is defined by (\ref{eq_reg_heckman})--(\ref{mod_heckman_ind}). The classic Heckman model is obtained by assuming constant sample selection bias and dispersion parameters in (\ref{addreg}). The mixed distribution of $Y_{i}$ is composed of the discrete component
\begin{align}
P(U_i=u_i)=\Phi(\pmb{w}_{i}^\top\boldsymbol{\gamma})^{u_i}\Phi(-\pmb{w}_{i}^\top\boldsymbol{\gamma})^{1-u_i},\quad u_i=0,1,
\end{align}
and of a continuous part given by the conditional density function
\begin{align} \label{1:dens_heckman}
f(y_{i}|U_i=1)&=\dfrac{1}{\sigma_{i}}\phi\left(\dfrac{y_{i}-\pmb{x}_{i}^\top\boldsymbol{\beta}}{\sigma_{i}}\right) \dfrac{\Phi\left(\dfrac{\rho_{i}}{\sqrt{1-\rho_{i}^{2}}}\left(\dfrac{y_{i}-\pmb{x}_{i}^\top\boldsymbol{\beta}}{\sigma_{i}}\right)+ \dfrac{\pmb{w}_{i}^\top\boldsymbol{\gamma}}{\sqrt{1-\rho_{i}^{2}}}\right)}{\Phi(\pmb{w}_{i}^\top\boldsymbol{\gamma})},
\end{align}
where $\phi(\cdot)$ and $\Phi(\cdot)$ denote the density and cumulative distribution functions of the standard normal distribution, respectively. More details are provided in the Supplementary Material. Let $\boldsymbol{\theta}=(\pmb{\beta}^{\top},\pmb{\gamma}^{\top},\pmb{\lambda}^{\top},\pmb{\kappa}^{\top})^{\top}$ be the parameter vector.
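As a sanity check, the conditional density (\ref{1:dens_heckman}) integrates to one over $y_i$. The Python sketch below verifies this numerically for a single observation; the parameter values are purely illustrative, and the normal CDF is built from the error function:

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def f_cond(y, mu1, mu2, sigma, rho):
    """Conditional density f(y | U = 1) of the generalized Heckman model."""
    z = (y - mu1) / sigma
    zeta = (mu2 + rho * z) / math.sqrt(1 - rho**2)
    return phi(z) / sigma * Phi(zeta) / Phi(mu2)

# illustrative parameter values for one observation
mu1, mu2, sigma, rho = 1.1, 0.9, math.exp(-0.4), math.tanh(0.3)

# midpoint rule over mu1 +/- 10 sigma: the density should integrate to 1
lo, hi, m = mu1 - 10 * sigma, mu1 + 10 * sigma, 40000
h = (hi - lo) / m
mass = sum(f_cond(lo + (k + 0.5) * h, mu1, mu2, sigma, rho)
           for k in range(m)) * h
assert abs(mass - 1.0) < 1e-6
```

The check relies on the identity $E[\Phi(a+bZ)]=\Phi(a/\sqrt{1+b^2})$ for $Z\sim\mathcal{N}(0,1)$, which is what makes the skew-looking factor $\Phi(\zeta)/\Phi(\mu_2)$ integrate out.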
The log-likelihood function is given by
\begin{align}\label{likelihoodGEN}
\mathcal{L}(\boldsymbol{\theta})&=\sum_{i=1}^{n} \left\{u_{i}\log{f(y_{i}|u_{i}=1)}+u_{i}\log{\Phi(\mu_{2i})}+ (1-u_{i})\log{\Phi(-\mu_{2i})}\right\}\nonumber\\
\displaystyle &=\sum_{i=1}^{n}u_{i}\left\{\log{\Phi(\zeta_{i})}+ \log{\phi(z_{i})}-\log{\sigma_{i}}\right\}+\sum_{i=1}^{n}(1-u_{i})\log{\Phi(-\mu_{2i})},
\end{align}
where $u_{i}=1$ if $y_{i}$ is observed and $u_{i}=0$ otherwise, $z_{i}\equiv\dfrac{y_{i}-\mu_{1i}}{\sigma_{i}}$, and $\zeta_{i}\equiv\dfrac{\mu_{2i}+\rho_iz_{i}}{\sqrt{1-\rho_i^2}}$, for $i=1,\ldots,n$. Expressions for the score function ${\bf S}_{\boldsymbol{\theta}}=\dfrac{\partial \mathcal{L}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}$ and the respective Hessian matrix are presented in the Supplementary Material. The maximum likelihood estimators (MLEs) are obtained as the solution of the non-linear system of equations ${\bf S}_{\boldsymbol{\theta}}={\bf 0}$, which does not have an explicit analytic form. We use the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton algorithm, via the \texttt{optim} function of the software \texttt{R} \citep{SoftwareR}, to maximize the log-likelihood function. It is important to emphasize that the maximum likelihood method may suffer from possible multicollinearity problems when the selection equation has the same covariates as the regression equation (see, for example, \cite{marchenkoGenton}). To reduce the impact of this problem on parameter estimation, the exclusion restriction is suggested in the literature. According to this approach, at least one significant covariate included in the selection equation should not be included in the primary regression. The interested reader can find more details on the exclusion restriction procedure for the Heckman sample selection model in \cite{heckman1976}, \cite{Leung2000} and \cite{Newey2009}.
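The two lines of (\ref{likelihoodGEN}) are algebraically identical, since $\log f(y_i|u_i=1)+\log\Phi(\mu_{2i})=\log\Phi(\zeta_i)+\log\phi(z_i)-\log\sigma_i$. A short Python sketch confirming this identity per observation, at illustrative parameter values:

```python
import math

def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def term_line2(y, u, mu1, mu2, sigma, rho):
    """i-th log-likelihood contribution, second line of the display."""
    if u == 0:
        return math.log(Phi(-mu2))
    z = (y - mu1) / sigma
    zeta = (mu2 + rho * z) / math.sqrt(1 - rho**2)
    return math.log(Phi(zeta)) + math.log(phi(z)) - math.log(sigma)

def term_line1(y, u, mu1, mu2, sigma, rho):
    """Same contribution via the conditional density (first line)."""
    if u == 0:
        return math.log(Phi(-mu2))
    z = (y - mu1) / sigma
    zeta = (mu2 + rho * z) / math.sqrt(1 - rho**2)
    f = phi(z) / sigma * Phi(zeta) / Phi(mu2)
    return math.log(f) + math.log(Phi(mu2))

# both formulations agree for censored and non-censored observations
for y, u in [(0.3, 1), (1.7, 1), (0.0, 0)]:
    a = term_line1(y, u, 1.1, 0.9, 0.7, 0.3)
    b = term_line2(y, u, 1.1, 0.9, 0.7, 0.3)
    assert abs(a - b) < 1e-12
```

In practice one would maximize the sum of such terms over $\boldsymbol\theta$ with a quasi-Newton routine, as done in the paper with \texttt{optim} in \texttt{R}.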
We now discuss diagnostic techniques, which have been proposed to detect observations that could exert some influence on the parameter estimates or on inference in general. Next, for the generalized Heckman model, we describe the generalized Cook distance (GCD), and in the next section, we propose the score residual. Cook's distance is a method commonly used in statistical modeling to evaluate changes in the estimated parameter vector when observations are deleted. It allows us to assess the effect of each observation on the estimated parameters. The methodology proposed by \cite{RDCook} suggests the deletion of each observation and the evaluation of the log-likelihood function without that case. According to \cite{XIE20074692}, the generalized Cook distance (GCD) is defined by
\begin{align*}
\mathrm{GCD}_{i}(\boldsymbol{\theta})=\left(\widehat{\boldsymbol{\theta}}-\widehat{\boldsymbol{\theta}}_{(i)}\right)^\top\boldsymbol{M} \left(\widehat{\boldsymbol{\theta}}-\widehat{\boldsymbol{\theta}}_{(i)}\right), \quad i=1, \ldots, n,
\end{align*}
where $\boldsymbol{M}$ is a nonnegative definite matrix, which weights the elements of the difference $\widehat{\boldsymbol{\theta}}-\widehat{\boldsymbol{\theta}}_{(i)}$, and $\widehat{\boldsymbol{\theta}}_{(i)}$ is the MLE of $\boldsymbol\theta$ when the $i$th observation is removed. Many choices for $\boldsymbol{M}$ were considered by \cite{cook1982residuals}. We use the inverse of the asymptotic variance-covariance matrix, $\boldsymbol{M}=-\ddot{\mathcal{L}}(\widehat{\boldsymbol{\theta}}).$ To determine whether the $i$th case is potentially influential on inference about $\boldsymbol\theta$, we check if its associated GCD value is greater than $2p/n$. If so, the point is flagged as a possible influential observation. We illustrate the usage of the GCD in the analysis of the medical expenditure data in Section~\ref{3:application}.
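Given the leave-one-out estimates $\widehat{\boldsymbol\theta}_{(i)}$ and a weight matrix $\boldsymbol M$, computing and thresholding the GCD is only a few lines. The Python sketch below uses made-up numbers: two hypothetical parameters ($p=2$), two leave-one-out fits, $n=100$ observations, and the identity matrix standing in for the observed information:

```python
def generalized_cook(theta_hat, theta_loo, M):
    """GCD_i = (theta_hat - theta_(i))^T M (theta_hat - theta_(i))."""
    out = []
    for t in theta_loo:
        d = [a - b for a, b in zip(theta_hat, t)]
        Md = [sum(M[r][c] * d[c] for c in range(len(d))) for r in range(len(d))]
        out.append(sum(di * mdi for di, mdi in zip(d, Md)))
    return out

# toy illustration with made-up estimates
theta_hat = [1.1, 0.7]
theta_loo = [[1.1, 0.7],   # deleting observation 1 changes nothing
             [1.4, 0.5]]   # deleting observation 2 shifts the estimate
M = [[1.0, 0.0], [0.0, 1.0]]  # stands in for the observed information matrix

gcd = generalized_cook(theta_hat, theta_loo, M)
p, n = 2, 100                 # hypothetical p parameters, n observations
flagged = [i for i, g in enumerate(gcd) if g > 2 * p / n]
assert flagged == [1]         # only the second deletion exceeds the 2p/n cutoff
```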
Regarding the residual analysis, in the next section we propose a proper residual for sample selection models, which is one of the aims of this paper.
\section{Asymptotic Properties and Score Residuals}\label{asymptotics}
Our aim in this section is to show that, under some conditions, our proposed generalized Heckman sample selection model satisfies the regularity conditions stated by \cite{cox1979theoretical}. As a consequence, the maximum likelihood estimators discussed in the previous section are consistent and asymptotically normally distributed. As a by-product of our findings here, we propose a score residual that is approximately normally distributed. Proofs of the theorems stated in this section can be found in the Appendix. Let $\boldsymbol\Theta$ be the parameter space and $\ell_i(\boldsymbol\theta)= u_{i}\left\{\log{\Phi(\zeta_{i})}+\log{\phi(z_{i})}-\log{\sigma_{i}}\right\}+(1-u_{i})\log{\Phi(-\mu_{2i})}$ be the contribution of the $i$th observation to the log-likelihood function, where $\zeta_{i}$ retains its definition from the previous section, for $i=1,\ldots,n$.
\begin{theorem}\label{regcond}
The score function associated with the generalized Heckman model has mean zero and satisfies the identity $E\left({\bf S}_{\boldsymbol{\theta}}{\bf S}_{\boldsymbol{\theta}}^\top\right)=-E\left(\partial {\bf S}^\top_{\boldsymbol{\theta}}/\partial\boldsymbol\theta\right)$.
\end{theorem}
We now propose a new residual for sample selection models, inspired by Theorem \ref{regcond}.
From (\ref{scorebeta}), we define the ordinary score residual by $s_i=z_i-\dfrac{\rho_i}{\sqrt{1-\rho_i^2}}\dfrac{\phi(\zeta_{i})}{\Phi(\zeta_{i})}$ for the non-censored observations (where $u_i=1$) and the standardized score residual by
\begin{eqnarray*}
S_i=\dfrac{s_i-E(s_i|U_i=1)}{\sqrt{\mbox{Var}(s_i|U_i=1)}}=\dfrac{z_i-\dfrac{\rho_i}{\sqrt{1-\rho_i^2}}\dfrac{\phi(\zeta_{i})}{\Phi(\zeta_{i})}}{\sqrt{1+\mu_{2i}\rho_i^2\dfrac{\phi(\mu_{2i})}{\Phi(\mu_{2i})}+\dfrac{\rho_i^2}{1-\rho_i^2}\Psi_i}},
\end{eqnarray*}
where $\Psi_i=E\left(\dfrac{\phi^2(\zeta_i)}{\Phi^2(\zeta_i)}\big|U_i=1\right)=\dfrac{1}{\sigma_i\Phi(\mu_{2i})}\displaystyle\int_{-\infty}^\infty\phi(z_i)\phi(\zeta_i)/\Phi(\zeta_i)dy_i$, for $i=1,\ldots,n$ such that $u_i=1$. Alternatively, a score residual based on all observations (including the censored ones) can be defined by
\begin{eqnarray}\label{score_residual}
S^*_i=\dfrac{s_i-E(s_i)}{\sqrt{\mbox{Var}(s_i)}}=\dfrac{z_i-\dfrac{\rho_i}{\sqrt{1-\rho_i^2}}\dfrac{\phi(\zeta_{i})}{\Phi(\zeta_{i})}}{\sqrt{\Phi(\mu_{2i})\left(1+\mu_{2i}\rho_i^2\dfrac{\phi(\mu_{2i})}{\Phi(\mu_{2i})}+\dfrac{\rho_i^2}{1-\rho_i^2}\Psi_i\right)}},
\end{eqnarray}
for $i=1,\ldots,n$. In practice, we replace the unknown parameters by their maximum likelihood estimates. The evaluation of the goodness-of-fit of our proposed generalized Heckman model will be performed through the score residual analysis. This approach identifies discrepant observations and also makes it possible to assess whether there are serious departures from the assumptions inherent to the model. If the model is appropriate, plots of residuals versus predicted values should behave randomly around zero. Alternatively, a common approach is to build residual plots with simulated envelopes \citep{atkinson1985}. In this case, it is not necessary to know the distribution of the residuals; they just need to lie within the region formed by the envelopes, thus indicating a good fit.
Otherwise, residuals outside the envelopes are possible outliers or indicate that the model is not properly specified. We will apply the proposed score residual (\ref{score_residual}) to the Medical Expenditure Panel Survey (MEPS) data analysis. As will be shown, the residual analysis indicates that the normal assumption for the data is suitable, in contrast with the non-robustness of the Heckman model reported in the sample selection literature. We now establish the consistency and asymptotic normality of the MLEs for our proposed generalized Heckman model. For this, we need to assume some usual regularity conditions.
\noindent (C1) The parameter space $\boldsymbol\Theta$ is closed and compact, and the true parameter value, say $\boldsymbol\theta_0$, is an interior point of $\boldsymbol\Theta$.
\noindent (C2) The covariates are a sequence of iid random vectors, and ${\bf F}_n$ is the information matrix conditional on the covariates.
\noindent (C3) $E({\bf F}_n)$ is well-defined and positive definite, and $E\left(\max_{\boldsymbol\theta\in\boldsymbol\Theta}\|{\bf F}_n\|\right)<\infty$, where $\|\cdot\|$ is the Frobenius norm.
\begin{remark}
Conditions (C2) and (C3) enable us to apply a multivariate central limit theorem for iid random vectors to establish the asymptotic normality of the MLEs. These conditions are discussed, for instance, in \cite{fahkau1985}.
\end{remark}
\begin{theorem}\label{asympt_results}
Under Conditions (C1)--(C3), the maximum likelihood estimator $\widehat{\boldsymbol\theta}$ of $\boldsymbol\theta$ for the generalized Heckman model is consistent and satisfies the weak convergence ${\bf F}_n^{1/2}\left(\widehat{\boldsymbol\theta}-\boldsymbol\theta_0\right)\stackrel{d}{\longrightarrow}{\cal N}({\bf0},\bf{I})$, where $\bf{I}$ is the identity matrix, ${\bf F}_n$ is the conditional information matrix, and $\boldsymbol\theta_0$ denotes the true parameter vector value.
\end{theorem}
\begin{remark}
An important consequence of Theorem \ref{asympt_results} is that the classic Heckman model is regular under Conditions (C1)--(C3). Therefore, the MLEs for this model are consistent and asymptotically normally distributed, which is a new finding of this paper.
\end{remark}
\section{Monte Carlo Simulations}\label{simulations}
\subsection{Simulation design}
In this section, we develop Monte Carlo simulation studies to evaluate and compare the performance of the maximum likelihood estimators under the generalized Heckman, classic Heckman, Heckman-Skew, and Heckman-$t$ models when the assumption of either a constant sample selection bias parameter or constant dispersion is not satisfied. To do this, six different scenarios with relevant characteristics were considered for a more detailed evaluation. In Scenarios 1 and 2, we use models with both varying dispersion and varying correlation (sample selection bias parameter), (I) with and (II) without the exclusion restriction. For Scenarios 3--6, the exclusion restriction is considered. More specifically, in Scenarios 3, 4, and 5, we have specified, respectively: (III) constant dispersion and varying correlation; (IV) varying dispersion and constant correlation; (V) both constant dispersion and constant correlation. To evaluate the sensitivity of the parameter estimation of the selection models under high censoring, in Scenario 6 we simulated from the generalized Heckman model with (VI) both sample selection bias and dispersion parameters varying and an average censoring rate of $50\%.$ Scenario 1 aims to evaluate the performance of the generalized Heckman model and compare it with its competitors when the assumption of constant sample selection bias and dispersion parameters is not satisfied. Scenario 2 is devoted to demonstrating that, despite the absence of the exclusion restriction, our model can yield satisfactory parameter estimates.
Scenarios 3 and 4 aim to justify the importance of modeling the correlation and dispersion parameters, respectively, through covariates. Scenario 5 illustrates some problems that the generalized Heckman model can face, just as the classic Heckman model does. Finally, Scenario 6 was included to demonstrate the sensitivity of selection models to high correlation and high censoring. We here present the results from Scenario~1. The remaining results are presented in the Supplementary Material. All scenarios were based on the following regression structures:
\begin{align}
\mu_{1i}&=\beta_{0}+\beta_{1}x_{1i}+\beta_{2}x_{2i},\label{eq_sel_heckman_psim1} \\
\mu_{2i}&=\gamma_{0}+\gamma_{1}x_{1i}+\gamma_{2}x_{2i}+\gamma_{3}x_{3i},\label{eq_sel_heckman_psim2}\\
\log\sigma_{i}&=\lambda_{0}+\lambda_{1}x_{1i},\label{eq_sel_heckman_psim3}\\
\hspace{2pt}\mathrm{arctanh}\,\rho_{i}&=\kappa_{0}+\kappa_{1}x_{1i}, \label{eq_sel_heckman_psim4}
\end{align}
for $i=1,\ldots,n$. All covariates were generated from a standard normal distribution and were kept constant throughout the experiment. The responses were generated from the generalized Heckman model according to each of the six configurations. We set the sample sizes $n=500,1000,2000$ and $N=1000$ Monte Carlo replicates. Pilot simulations showed that the choice of parameters used in the simulations does not affect the results, as long as the same average percentage of censoring is maintained. We would like to highlight that there is no \texttt{R} package for fitting the Heckman-$t$ and Heckman skew-normal models. Therefore, we developed an \texttt{R} package (to be submitted) capable of fitting our proposed generalized Heckman model as well as the sample selection models by \cite{marchenkoGenton} and \cite{ogundimu2016sample}.
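Simulating one dataset from these regression structures is straightforward; the Python sketch below uses the Scenario 1 parameter values of the next subsection, an arbitrary seed, and the fact that $(\epsilon_{1i},\epsilon_{2i})$ can be drawn by conditioning $\epsilon_{1i}$ on $\epsilon_{2i}$:

```python
import math
import random

random.seed(42)  # arbitrary seed
n = 1000
beta = (1.1, 0.7, 0.1)
gamma = (0.9, 0.5, 1.1, 0.6)
lam = (-0.4, 0.7)
kap = (0.3, 0.5)

u, y = [], []
for _ in range(n):
    x1, x2, x3 = (random.gauss(0, 1) for _ in range(3))
    mu1 = beta[0] + beta[1] * x1 + beta[2] * x2
    mu2 = gamma[0] + gamma[1] * x1 + gamma[2] * x2 + gamma[3] * x3
    sigma = math.exp(lam[0] + lam[1] * x1)       # log link for dispersion
    rho = math.tanh(kap[0] + kap[1] * x1)        # arctanh link inverted
    # (eps1, eps2) with Var(eps1) = sigma^2, Var(eps2) = 1, Corr = rho
    e2 = random.gauss(0, 1)
    e1 = sigma * (rho * e2 + math.sqrt(1 - rho**2) * random.gauss(0, 1))
    ui = 1 if mu2 + e2 > 0 else 0                # selection indicator U_i
    u.append(ui)
    y.append((mu1 + e1) if ui == 1 else 0.0)     # Y_i = Y*_1i U_i

assert all(yi == 0.0 for yi, ui in zip(y, u) if ui == 0)
assert 0 < sum(u) < n  # some observations selected, some censored
```

The conditional construction $\epsilon_{1}=\sigma(\rho\,\epsilon_{2}+\sqrt{1-\rho^{2}}\,Z)$ with independent standard normals reproduces exactly the covariance matrix in (\ref{2:disterro2}).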
In the maximization procedure to estimate the parameters based on the \texttt{optim} function, we consider as initial values for $\pmb{\beta}$, $\pmb{\gamma}$, $\lambda_{0}$, and $\kappa_{0}$ the maximum likelihood estimates from the classic Heckman model. For the remaining parameters, we set $\lambda_{i}=0$ and $\kappa_{j}=0$ for $i=1,\ldots,r$ and $j=1,\ldots,s$. The initial values for the degrees of freedom of the Heckman-$t$ model and the skewness parameter of the Heckman-Skew model were set to $1$ and $2$, respectively. These values were chosen after some pilot simulations.
\subsection{Scenario 1: Varying sample selection bias and dispersion parameters}
Here, we consider (\ref{eq_sel_heckman_psim1})-(\ref{eq_sel_heckman_psim4}) with $\pmb\beta=(1.1,0.7,0.1)^\top$, $\pmb\gamma=(0.9,0.5,1.1,0.6)^\top$, $\pmb\lambda=(-0.4,0.7)^\top$, and $\pmb\kappa=(0.3,0.5)^\top$. The regressors are kept fixed throughout the simulations with $x_{1i}, x_{2i}, x_{3i}\overset{iid}{\sim} \mathcal{N}(0,1)$ for all $i=1,\ldots,n.$ In Table \ref{2:res_cen3_tab1}, we present the empirical mean and root mean square error (RMSE) of the maximum likelihood estimates of the parameters based on the generalized Heckman, classic Heckman, Student-$t$, and skew-normal sample selection models under Scenario 1. From this table, we observe a good performance of the MLEs based on the generalized Heckman model, even for estimating the parameters related to the sample selection bias and dispersion. The bias and the RMSE under this model decrease for all the estimates as the sample size increases, thereby suggesting the consistency of the MLEs, which is in line with our Theorem \ref{asympt_results}. On the other hand, even with the regression structures for $\pmb\beta$ and $\pmb\gamma$ correctly specified, we see that the MLEs for these parameters based on the classic Heckman, skew-normal, and Student-$t$ models do not provide satisfactory estimates, even for a large sample.
This illustrates the importance of considering covariates for the sample selection bias and dispersion parameters. The mean estimates of the degrees of freedom and skewness for the Student-$t$ and skew-normal sample selection models were $2.4$ and $0.8$, respectively. The above comments are also supported by Figures \ref{2:cen3bp1}, \ref{2:cen3bp2}, and \ref{2:cen3bp3}, where boxplots of the parameter estimates are presented for sample sizes $n=500$, $n=1000$, and $n=2000$, respectively. We did not present the boxplots of the estimates of $\gamma_{1}$, $\gamma_{2}$, and $\beta_{1}$ since they behaved similarly to other boxplots. \begin{table} \centering \caption{Empirical mean and root mean square error (RMSE) of the maximum likelihood estimates of the parameters based on the generalized Heckman, classic Heckman, Student-$t$, and skew-normal sample selection models under Scenario 1.} \begin{tabular}{cccccc@{\hspace{-0.15ex}}ccc@{\hspace{-0.15ex}}ccc@{\hspace{-0.15ex}}cc} \hline \hline &&&\multicolumn{2}{c}{Generalized}&&\multicolumn{2}{c}{Classic}&&\multicolumn{2}{c}{Heckman}&&\multicolumn{2}{c}{Heckman} \\ &&&\multicolumn{2}{c}{Heckman}&&\multicolumn{2}{c}{Heckman}&&\multicolumn{2}{c}{Skew-Normal}&&\multicolumn{2}{c}{Student-$t$} \\ \cline{4-5} \cline{7-8} \cline{10-11}\cline{13-14} \multicolumn{2}{c}{Parameters}& $n$ & mean & RMSE& & mean & RMSE & & mean & RMSE & & mean & RMSE\\ \hline \hline \multirow{3}{*}{$\gamma_{0}$}& \multirow{3}{*}{0.9}&500 & 0.912 & 0.093 & & 0.880 & 0.095 & & 0.854 & 0.364 & & 1.300 & 0.435 \\ &&1000 & 0.905 & 0.063 & & 0.878 & 0.069 & & 0.820 & 0.364 & & 1.293 & 0.409 \\ &&2000 & 0.903 & 0.042 & & 0.877 & 0.048 & & 0.804 & 0.348 & & 1.286 & 0.395 \\ \hline \multirow{3}{*}{$\gamma_{1}$}& \multirow{3}{*}{0.5} &500 & 0.507 & 0.081 & & 0.555 & 0.104 & & 0.527 & 0.095 & & 0.652 & 0.200 \\ &&1000 & 0.506 & 0.056 & & 0.558 & 0.085 & & 0.530 & 0.075 & & 0.663 & 0.185 \\ &&2000 & 0.504 & 0.040 & & 0.555 & 0.070 & & 0.529 & 0.057 & & 0.662 & 
0.174 \\ \hline \multirow{3}{*}{$\gamma_{2}$}& \multirow{3}{*}{1.1} &500 & 1.118 & 0.104 & & 1.053 & 0.125 & & 1.001 & 0.169 & & 1.540 & 0.480 \\ &&1000 & 1.109 & 0.074 & & 1.046 & 0.099 & & 0.978 & 0.162 & & 1.522 & 0.441 \\ &&2000 & 1.107 & 0.053 & & 1.033 & 0.090 & & 0.980 & 0.146 & & 1.515 & 0.425 \\ \hline \multirow{3}{*}{$\gamma_{3}$}& \multirow{3}{*}{0.6}&500 & 0.606 & 0.084 & & 0.556 & 0.106 & & 0.531 & 0.127 & & 0.848 & 0.288 \\ &&1000 & 0.604 & 0.065 & & 0.539 & 0.098 & & 0.504 & 0.130 & & 0.828 & 0.253 \\ &&2000 & 0.602 & 0.040 & & 0.543 & 0.074 & & 0.513 & 0.106 & & 0.833 & 0.242 \\ \hline \multirow{3}{*}{$\beta_{0}$} & \multirow{3}{*}{1.1} &500 & 1.105 & 0.068 & & 0.998 & 0.261 & & 0.496 & 0.734 & & 1.112 & 0.103 \\ &&1000 & 1.102 & 0.044 & & 0.954 & 0.273 & & 0.553 & 0.701 & & 1.108 & 0.078 \\ &&2000 & 1.099 & 0.029 & & 0.892 & 0.230 & & 0.498 & 0.697 & & 1.096 & 0.051 \\ \hline \multirow{3}{*}{$\beta_{1}$} & \multirow{3}{*}{0.7} &500 & 0.702 & 0.036 & & 0.882 & 0.219 & & 0.739 & 0.156 & & 0.787 & 0.106 \\ &&1000 & 0.701 & 0.020 & & 0.860 & 0.191 & & 0.753 & 0.158 & & 0.777 & 0.087 \\ &&2000 & 0.700 & 0.014 & & 0.881 & 0.191 & & 0.770 & 0.154 & & 0.774 & 0.079 \\ \hline \multirow{3}{*}{$\beta_{2}$} & \multirow{3}{*}{0.1} &500 & 0.098 & 0.048 & & 0.140 & 0.196 & & 0.040 & 0.225 & & 0.101 & 0.078 \\ &&1000 & 0.099 & 0.029 & & 0.174 & 0.196 & & 0.069 & 0.237 & & 0.104 & 0.057 \\ &&2000 & 0.100 & 0.019 & & 0.221 & 0.147 & & 0.105 & 0.193 & & 0.113 & 0.042 \\ \hline \multirow{3}{*}{$\lambda_{0}$}& \multirow{3}{*}{ $-$0.4} &500 & $-$0.400 & 0.042 & & 0.174 & 0.577 & & 0.366 & 0.771 & & $-$0.476 & 0.113 \\ &&1000 & $-$0.402 & 0.028 & & 0.182 & 0.584 & & 0.339 & 0.743 & & $-$0.470 & 0.092 \\ &&2000 & $-$0.401 & 0.019 & & 0.182 & 0.582 & & 0.331 & 0.734 & & $-$0.463 & 0.075 \\ \hline \multirow{3}{*}{$\lambda_{1}$}& \multirow{3}{*}{0.7} &500 & 0.699 & 0.042 & & $-$ & $-$ & & $-$ & $-$ & & $-$ & $-$ \\ &&1000 & 0.700 & 0.031 & & $-$ & $-$ & & $-$ & $-$ & & $-$ & 
$-$ \\ &&2000 & 0.701 & 0.020 & & $-$ & $-$ & & $-$ & $-$ & & $-$ & $-$ \\ \hline \multirow{3}{*}{$\kappa_{0}$} & \multirow{3}{*}{0.3 } &500 & 0.314 & 0.248 & & 0.507 & 0.688 & & 0.133 & 0.846 & & 0.307 & 0.300 \\ &&1000 & 0.311 & 0.154 & & 0.588 & 0.714 & & 0.223 & 0.918 & & 0.302 & 0.235 \\ &&2000 & 0.309 & 0.106 & & 0.811 & 0.608 & & 0.307 & 0.814 & & 0.336 & 0.158 \\ \hline \multirow{3}{*}{$\kappa_{1}$} & \multirow{3}{*}{0.5} &500 & 0.573 & 0.240 & & $-$ & $-$ & & $-$ & $-$ & & $-$ & $-$\\ &&1000 & 0.531 & 0.135 & & $-$ & $-$ & & $-$ & $-$ & & $-$ & $-$ \\ &&2000 & 0.510 & 0.092 & & $-$ & $-$ & & $-$ & $-$ & & $-$ & $-$ \\ \hline \end{tabular} \label{2:res_cen3_tab1} \end{table} \begin{sidewaysfigure} \includegraphics[width=\textwidth]{plot500.pdf} \caption{Boxplots of the maximum likelihood estimates of the parameters (a) $\gamma_{0},$ (b) $\gamma_{1},$ (c) $\beta_{0},$ (d) $\beta_{1},$ (e) $\lambda_{0},$ (f) $\lambda_{1}$ and (g) $\kappa_{0}$ and (h) $\kappa_{1}$ based on the (i) generalized Heckman, (ii) classic Heckman, (iii) Heckman-Skew, and (iv) Heckman-$t$ sample selection models. Sample size $n=500.$} \label{2:cen3bp1} \end{sidewaysfigure} \begin{sidewaysfigure} \includegraphics[width=\textwidth]{plot1000.pdf} \caption{Boxplots of the maximum likelihood estimates of the parameters (a) $\gamma_{0},$ (b) $\gamma_{1},$ (c) $\beta_{0},$ (d) $\beta_{1},$ (e) $\lambda_{0},$ (f) $\lambda_{1}$ and (g) $\kappa_{0}$ and (h) $\kappa_{1}$ based on the (i) generalized Heckman, (ii) classic Heckman, (iii) Heckman-Skew, and (iv) Heckman-$t$ sample selection models. 
Sample size $n=1000.$} \label{2:cen3bp2} \end{sidewaysfigure} \begin{sidewaysfigure} \includegraphics[width=\textwidth]{plot2000.pdf} \caption{Boxplots of the maximum likelihood estimates of the parameters (a) $\gamma_{0},$ (b) $\gamma_{1},$ (c) $\beta_{0},$ (d) $\beta_{1},$ (e) $\lambda_{0},$ (f) $\lambda_{1}$ and (g) $\kappa_{0}$ and (h) $\kappa_{1}$ based on the (i) generalized Heckman, (ii) classic Heckman, (iii) Heckman-Skew, and (iv) Heckman-$t$ sample selection models. Sample size $n=2000.$} \label{2:cen3bp3} \end{sidewaysfigure} We now provide some simulations to evaluate the size and power of likelihood ratio, gradient, and Wald tests. We consider Scenario 1 and present the empirical significance level of the tests in Table \ref{2:tamanhoTesteCen1} for nominal significance levels at 1\%, 5\%, and 10\%. Under the null hypothesis of absence of sample selection bias ($\rho_i=0$ for all $i$, that is $\kappa_0=\kappa_1=0$), the likelihood ratio, gradient, and Wald tests presented empirical values close to the nominal values only under the generalized Heckman model. For the other models, the type-I error was inflated and indicated the presence of selection bias. This suggests that the tests should be used with caution to test parameters of the sample selection models and that some confounding can be caused due to either varying sample selection bias or heteroskedasticity. It is important to point out that, even for the generalized Heckman model, the Wald test presents a considerable inflated type-I error for $n=500$. 
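The empirical significance levels in the table below are the percentage of Monte Carlo replications rejecting $H_0$ at the nominal level; a minimal sketch of this computation, with hypothetical $p$-values:

```python
def empirical_rejection_rate(p_values, alpha):
    """Percentage of Monte Carlo replications rejecting H0 at nominal level alpha."""
    return 100.0 * sum(1 for p in p_values if p <= alpha) / len(p_values)

# Hypothetical p-values from repeated simulations under H0 (kappa_0 = kappa_1 = 0).
p_values = [0.003, 0.21, 0.048, 0.76, 0.09, 0.55, 0.012, 0.33, 0.61, 0.04]
size_at_5pct = empirical_rejection_rate(p_values, 0.05)
```

For a well-calibrated test, this percentage should be close to the nominal level $\alpha$ when the $p$-values come from simulations under $H_0$.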
\begin{table} \centering \caption{Empirical significance level of the likelihood ratio (LR), gradient (G) and Wald (W) tests for $H_{0}:\rho=0$.} \begin{tabular}{c|ccccccccccccccccc} \hline\hline &\multicolumn{3}{c}{Generalized} &&\multicolumn{3}{c}{Classic}&& \multicolumn{3}{c}{Heckman}&& \multicolumn{3}{c}{Heckman} \\ &\multicolumn{3}{c}{Heckman} &&\multicolumn{3}{c}{Heckman} && \multicolumn{3}{c}{Skew-normal} && \multicolumn{3}{c}{Student-$t$} \\ \cline{2-4} \cline{6-8} \cline{10-12} \cline{14-16} $n$ & LR & G & W && LR & G & W && LR & G & W && LR & G & W \\ \hline \hline \cline{1-16} \multicolumn{16}{c}{$\alpha=1\%$} \\ \hline \hline $500 $ & 1.5& 1.1& 3.5&& 30.3& 1.6& 61.4&& 41.9& 6.4& 71.3&& 2.8& 1.5& 4.3\\ $1000$ & 0.9& 0.7& 2.6&& 72.4& 7.7& 92.7&& 82.7& 26.2& 95.6&& 3.4& 2.5& 4.0\\ $2000$ & 1.1& 0.7& 1.9&& 85.9& 16.5& 96.8&& 91.0& 52.4& 99.0&& 3.2& 2.6& 3.9\\ \hline \hline \multicolumn{16}{c}{$\alpha=5\%$} \\ \hline \hline $500 $ & 6.2& 5.6& 12.0 && 50.5& 12.5& 69.9&& 61.7& 26.9& 78.3&& 9.9& 7.7& 11.6\\ $1000$ & 4.9& 3.7& 6.9&& 86.9& 27.1& 95.5&& 89.9& 49.7& 97.8&& 10.1& 8.6& 11.8\\ $2000$ & 5.9& 5.1& 7.4&& 94.2& 40.8& 97.7&& 94.5& 68.3& 99.5&& 10.2& 9.4& 10.6\\ \hline \hline \multicolumn{16}{c}{$\alpha=10\%$} \\ \hline \hline $500 $ & 13.0& 10.5& 19.3&& 59.8& 25.2& 74.4&& 69.8& 40.8& 82.2&& 16.2& 14.8& 18.4\\ $1000$ & 9.4& 8.2& 13.7&& 91.5& 41.4& 96.1&& 93.3& 61.1& 98.2&& 16.9& 15.4& 18.0\\ $2000$ &11.2& 10.8& 12.6&& 96.3& 55.7& 98.3&& 95.5& 74.8& 99.6&& 16.1& 15.8& 16.4\\ \hline \hline \end{tabular} \label{2:tamanhoTesteCen1} \end{table} In Table \ref{2:Poder1TesteCen1}, we present the empirical power of the likelihood ratio, gradient and Wald tests (in percentage) for simulated data according to Scenario 1 under generalized Heckman, classic Heckman, Heckman-Skew, and Heckman-$t$ models, with significance level at $1\%, 5\%$ and $10\%$. 
From these results, we can observe that the tests, under the generalized Heckman model, provide high power, mainly when the sample size increases. On the other hand, since tests based on the other models do not provide the correct nominal significance level, the power of tests in these cases are not really comparable. \begin{table} \centering \caption{Empirical power of the likelihood ratio (LR), gradient (G) and Wald (W) tests (in percentage) for simulated data according to Scenario 1 under generalized Heckman, classic Heckman, Heckman-Skew, and Heckman-$t$ models, with significance level at $1\%, 5\%$ and $10\%$.} \begin{tabular}{c|ccccccccccccccccc} \hline\hline &\multicolumn{3}{c}{Generalized} &&\multicolumn{3}{c}{Classic}&& \multicolumn{3}{c}{Heckman}&& \multicolumn{3}{c}{Heckman} \\ &\multicolumn{3}{c}{Heckman} &&\multicolumn{3}{c}{Heckman} && \multicolumn{3}{c}{Skew-normal} && \multicolumn{3}{c}{Student-$t$} \\ \cline{2-4} \cline{6-8} \cline{10-12} \cline{14-16} $n$ & LR & G & W && LR & G & W && LR & G & W && LR & G & W \\ \hline \hline \cline{1-16} \multicolumn{16}{c}{$\alpha=1\%$} \\ \hline \hline $500 $ &71.7&68.4&75.5&&58.4& 8.9&81.9&&45.7& 4.5&79.7&&14.6& 9.3&18.4\\ $1000$ &95.3&95.2&96.9&&93.7&30.8&99.2&& 88.0& 14.0&98.4&&20.9& 16.0&26.6\\ $2000$ &99.9&99.9&99.9&&99.6&74.4& 100&&98.2&28.4&99.4&&48.3& 46.0&52.1\\ \hline \hline \multicolumn{16}{c}{$\alpha=5\%$} \\ \hline \hline $500 $ &88.8&87.8&91.6&&74.1&32.6&89.3&&69.2&20.1&86.3&&29.6&25.1&33.6\\ $1000$ & 99&98.9&99.4&&98.1&57.4&99.9&&94.5&33.5& 99&&41.9&37.9&46.2\\ $2000$ & 100& 100& 100&&99.9& 87.0& 100&&98.9&46.8&99.7&&68.8&66.9&70.9\\ \hline \hline \multicolumn{16}{c}{$\alpha=10\%$} \\ \hline \hline $500 $ &94.3&93.1&95.5&&82.4&48.4&91.9&&78.2&34.9&89.8&&39.8&37.1&43.1\\ $1000$ &99.8&99.8&99.8&&99.1&69.6& 100&& 96&45.5& 99&&52.4&49.4&54.7\\ $2000$ & 100& 100& 100&& 100&91.6& 100&&99.2&55.9&99.7&&78.4&77.8&79.4\\ \hline \hline \end{tabular} \label{2:Poder1TesteCen1} \end{table} \section{MEPS Data 
Analysis}\label{3:application} We present an application of the proposed model to a set of real data. Consider the outpatient expense data of the 2001 Medical Expenditure Panel Survey (MEPS), available in the \texttt{R} software in the package \texttt{ssmrob} \citep{ssmrob}. These data were also used by \cite{cameron2009microeconometrics}, \cite{marchenkoGenton}, and \cite{zhelonkin2013robustness} to fit the classic Heckman model, the Heckman-$t$ model, and the robust version of the two-step method, respectively. The MEPS is a set of large-scale surveys of families, individuals, and their medical providers (doctors, hospitals, pharmacies, etc.) in the United States. It contains data on the health services Americans use, how often they use them, the cost of these services, and how they are paid for, as well as data on the cost and reach of the health insurance available to American workers. The sample is restricted to persons aged between 21 and 64 years and contains a response variable with $3,328$ observations of outpatient costs, of which 526 (15.8\%) correspond to unobserved expenditure values, recorded as zero expenditure. It also includes the following explanatory variables: \texttt{Age} represents age measured in tens of years; \texttt{Fem} is an indicator variable for gender (female receives value 1); \texttt{Educ} represents years of schooling; \texttt{Blhisp} is an indicator for ethnicity (black or Hispanic receive a value of 1); \texttt{Totchr} is the total number of chronic diseases; \texttt{Ins} is the insurance status; and \texttt{Income} denotes the individual income. The variable of interest $Y_{1i}^{*}$ represents the log-expenditure on medical services of the $i$th individual. We consider the logarithm of the expenditure since the expenditure itself is highly skewed (see Figure~\ref{histogram}, where plots of the expenditure and log-expenditure are presented). The variable $Y_{2i}^{*}$, denoting the willingness of the $i$th individual to spend, is not observed.
We only observe $U_i=I\{Y_{2i}^{*}>0\}$, which represents the decision of the $i$th individual to spend or not on medical care. \begin{figure} \caption{Histogram of the medical expense data (a) and of the log medical expense (b).} \label{histogram} \end{figure} According to \cite{cameron2009microeconometrics} and \cite{zhelonkin2013robustness}, it is natural to fit a sample selection model to such data, since the willingness to spend $(Y_{2}^{*})$ is likely to be related to the expense amount $(Y_{1}^{*}).$ However, after fitting the classic Heckman model and using the Wald statistic for testing $H_{0}:\rho=0$ against $H_{1}:\rho\neq 0,$ the conclusion is that there is no statistical evidence $(p\textrm{-value}>0.1)$ to reject $H_{0},$ that is, no sample selection bias. \cite{cameron2009microeconometrics} questioned this conclusion about the absence of sample selection bias, and \cite{marchenkoGenton} argued that a more robust model could reveal the presence of sample selection bias in the data; these authors proposed a Student-$t$ sample selection model to deal with this problem. However, as will be illustrated in this application, this shortcoming of the classic Heckman model can be due to the assumption of constant sample selection bias and constant dispersion parameters rather than to the normality assumption itself.
After a preliminary analysis, we consider the following regression structures for our proposed generalized Heckman model: \begin{align*} \mu_{1i}&= \beta_{0}+ \beta_{1}\texttt{Age}_{i}+ \beta_{2}\texttt{Fem}_{i}+ \beta_{3}\texttt{Educ}_{i}+ \beta_{4}\texttt{Blhisp}_{i}+\beta_{5}\texttt{Totchr}_{i}+\beta_{6}\texttt{Ins}_{i}, \\ \mu_{2i}&=\gamma_{0}+\gamma_{1}\texttt{Age}_{i}+\gamma_{2}\texttt{Fem}_{i}+\gamma_{3}\texttt{Educ}_{i}+\gamma_{4}\texttt{Blhisp}_{i}+\gamma_{5}\texttt{Totchr}_{i}+\gamma_{6}\texttt{Ins}_{i}+\gamma_{7}\texttt{Income}_{i}, \\ \log{\sigma_{i}} &=\lambda_{0}+\lambda_{1}\texttt{Age}_{i}+\lambda_{2}\texttt{Totchr}_{i}+\lambda_{3}\texttt{Ins}_{i},\\ \mbox{arctanh}\,\rho_{i}&=\kappa_{0}+\kappa_{1}\texttt{Fem}_{i}+\kappa_{2}\texttt{Totchr}_{i}, \end{align*} for $i=1,\ldots, 3328$. The selection equation includes the same covariates as the primary equation plus the additional covariate \texttt{Income}, so that the exclusion restriction is in force. In Table \ref{2:dados_reais_tab1}, we present a summary of the fits of the classic Heckman and generalized Heckman models. From this table, we observe that the covariates \texttt{Fem} and \texttt{Totchr} are significant for explaining the sample selection bias at any usual significance level. We perform a likelihood ratio (LR) test for checking the absence ($H_0:\,\kappa_0=\kappa_1=\kappa_2=0$) or presence of sample selection bias. The LR statistic was $28.16$, with corresponding $p$-value equal to $3\times10^{-6}$. Therefore, our proposed generalized Heckman model is able to detect the presence of sample selection bias even under the normality assumption. We also performed the gradient and Wald tests, which confirmed this conclusion. Further, the covariates \texttt{Age}, \texttt{Totchr}, and \texttt{Ins} were significant for the dispersion parameter.
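Under $H_0:\kappa_0=\kappa_1=\kappa_2=0$ the LR statistic is asymptotically chi-square with 3 degrees of freedom (one per restricted parameter), so the reported $p$-value can be reproduced from the closed-form chi-square survival function; a quick sketch using only the standard library:

```python
import math

def chi2_sf_df3(x):
    # Survival function of a chi-square with 3 degrees of freedom:
    # P(X > x) = erfc(sqrt(x/2)) + sqrt(2x/pi) * exp(-x/2)
    return math.erfc(math.sqrt(x / 2.0)) + math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0)

lr_stat = 28.16                 # LR statistic reported in the text
p_value = chi2_sf_df3(lr_stat)  # approximately 3e-6, matching the reported value
```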
For the selection equation, the covariate \texttt{Income} is not significant (at the 5\% significance level) under the generalized Heckman model, in contrast with the classic Heckman model. Nevertheless, it is important to keep it in order to satisfy the exclusion restriction. Regarding the primary equation, we observe that the covariate \texttt{Ins} is significant only under the generalized Heckman model. Another interesting point is that \texttt{Educ}, which is marginally significant under the classic Heckman model, is clearly non-significant for the primary equation under our proposed generalized Heckman model. \begin{table} \centering \caption{Summary fits of the classic Heckman model (HM) and generalized Heckman model (GHM). The GHM summary fit contains estimates with their respective standard errors, z-value, $p\textrm{-value}$, and inferior and superior bounds of the $95\%$ confidence interval.} \begin{tabular}{lcccccccc} \hline \multicolumn{9}{c}{\texttt{Selection Equation}}\\ \hline covariates & HM-est. & p-value & GHM-est. & stand. error & z-value & p-value & Inf. & Sup. \\ \hline \texttt{Intercept} & $-$0.676 & 0.000 & $-$0.590 & 0.187 & $-$3.162 & 0.002 & $-$0.956 & $-$0.224 \\ \texttt{Age} & 0.088 & 0.001 & 0.086 & 0.026 & 3.260 & 0.001 & 0.034 & 0.138 \\ \texttt{Fem} & 0.663 & 0.000 & 0.630 & 0.060 & 10.544 & 0.000 & 0.513 & 0.747 \\ \texttt{Educ} & 0.062 & 0.000 & 0.057 & 0.011 & 4.984 & 0.000 & 0.035 & 0.079 \\ \texttt{Blhisp} & $-$0.364 & 0.000 & $-$0.337 & 0.060 & $-$5.644 & 0.000 & $-$0.454 & $-$0.220 \\ \texttt{Totchr} & 0.797 & 0.000 & 0.758 & 0.069 & 11.043 & 0.000 & 0.624 & 0.893 \\ \texttt{Ins} & 0.170 & 0.007 & 0.173 & 0.061 & 2.825 & 0.005 & 0.053 & 0.293 \\ \texttt{Income} & 0.003 & 0.040 & 0.002 & 0.001 & 1.837 & 0.066 & 0.000 & 0.005 \\ \hline \multicolumn{9}{c}{\texttt{Primary Equation}}\\ \hline covariates & HM-est. & p-value & GHM-est. & stand. error & z-value & p-value & Inf. & Sup.
\\ \hline \texttt{Intercept} & 5.044 & 0.000 & 5.704 & 0.193 & 29.553 & 0.000 & 5.326 & 6.082 \\ \texttt{Age} & 0.212 & 0.000 & 0.184 & 0.023 & 7.848 & 0.000 & 0.138 & 0.230 \\ \texttt{Fem} & 0.348 & 0.000 & 0.250 & 0.059 & 4.252 & 0.000 & 0.135 & 0.365 \\ \texttt{Educ} & 0.019 & 0.076 & 0.001 & 0.010 & 0.129 & 0.897 & $-$0.019 & 0.021 \\ \texttt{Blhisp} & $-$0.219 & 0.000 & $-$0.128 & 0.058 & $-$2.221 & 0.026 & $-$0.242 & $-$0.015 \\ \texttt{Totchr} & 0.540 & 0.000 & 0.431 & 0.031 & 14.113 & 0.000 & 0.371 & 0.490 \\ \texttt{Ins} & $-$0.030 & 0.557 & $-$0.103 & 0.051 & $-$1.999 & 0.046 & $-$0.203 & $-$0.002 \\ \hline \multicolumn{9}{c}{\texttt{Dispersion Parameter}}\\ \hline covariates & HM-est. & p-value & GHM-est. & stand. error & z-value & p-value & Inf. & Sup. \\ \hline \texttt{Intercept} & 1.271 & $-$ & 0.508 & 0.057 & 8.853 & 0.000 & 0.396 & 0.621 \\ \texttt{Age} &$-$ &$-$ & $-$0.025 & 0.013 & $-$1.986 & 0.047 & $-$0.049 & 0.000 \\ \texttt{Totchr} &$-$ &$-$ & $-$0.105 & 0.019 & $-$5.475 & 0.000 & $-$0.142 & $-$0.067 \\ \texttt{Ins} &$-$ &$-$ & $-$0.107 & 0.028 & $-$3.864 & 0.000 & $-$0.161 & $-$0.053 \\ \hline \multicolumn{9}{c}{\texttt{Sample Selection Bias Parameter}}\\ \hline covariates & HM-est. & p-value & GHM-est. & stand. error & z-value & p-value & Inf. & Sup. \\ \hline \texttt{Intercept} & $-$0.131 & 0.375 & $-$0.648 & 0.114 & $-$5.668 & 0.000 & $-$0.872 & $-$0.424 \\ \texttt{Fem} & $-$ &$-$ & $-$0.403 & 0.136 & $-$2.973 & 0.003 & $-$0.669 & $-$0.137 \\ \texttt{Totchr} & $-$ & $-$ & $-$0.438 & 0.186 & $-$2.353 & 0.019 & $-$0.803 & $-$0.073 \\ \hline \end{tabular} \label{2:dados_reais_tab1} \end{table} We conclude this application by checking the goodness-of-fit of the fitted generalized sample selection Heckman model. In Figure \ref{QQplot}, we provide the QQ-plot of the score residuals given in (\ref{score_residual}) with simulated envelopes, and also a Cook distance plot for detecting global influence. 
Based on this last plot, we do not detect any outlying observation, since all points are below the reference line $2p/n=0.013$. Nevertheless, we investigated whether the highlighted point \#2602 (above the line $8\times 10^{-4}$) is influential. We refitted our model without this observation and obtained neither appreciable changes in the parameter estimates nor different conclusions about the significance of the covariates. Regarding the QQ-plot of the score residuals, we observe a good performance of our model, since 96\% of the points are inside the envelope. This confirms that the normality assumption is adequate for this particular dataset and that our generalized Heckman model is suitable for the MEPS data analysis. \begin{figure} \caption{QQ-plot and its simulated envelope for the score residuals (left) and index plot of the GCD (right) for the generalized Heckman model fitted to the medical expenditure panel survey data.} \label{QQplot} \end{figure} \section{Concluding Remarks}\label{concluding_remarks} In this paper, a generalization of the Heckman model was proposed by allowing both the sample selection bias and dispersion parameters to vary across covariates. We showed that the proposed model satisfies certain regularity conditions that ensure consistency and asymptotic normality of the maximum likelihood estimators. Furthermore, a proper score residual for sample selection models was proposed. These findings are new contributions to this topic. The MEPS data analysis based on the generalized Heckman model showed that the normality assumption is suitable for these data, in contrast with existing findings in the literature. Future research should address (i) generalization of other sample selection models, such as the Student-$t$ and skew-normal models, to allow varying sample selection bias and dispersion parameters; (ii) proposal of proper residuals for other sample selection models; and (iii) a deeper study of influence analysis.
An \texttt{R} package for fitting our proposed generalized Heckman, Student-$t$, and skew-normal models has been developed and will be available soon. \section*{Appendix} \noindent {\it Proof of Theorem \ref{regcond}}. We show here the results for the derivatives with respect to $\boldsymbol\beta$. The results involving the other derivatives follow similarly and are therefore omitted. For $i=1,\ldots,n$ and $j=1,\ldots,p$, we have that \begin{eqnarray}\label{scorebeta} \dfrac{\partial\ell_i}{\partial\beta_j}=\left\{-\dfrac{\rho_i}{\sqrt{1-\rho_i^2}}\dfrac{\phi(\zeta_{i})}{\Phi(\zeta_{i})}+z_i\right\}x_{ij}u_i/\sigma_i. \end{eqnarray} By using basic properties of conditional expectation, it follows that $E\left(\dfrac{\partial\ell_i}{\partial\beta_j}\right)=E\left[E\left(\dfrac{\partial\ell_i}{\partial\beta_j}\big|U_i\right)\right]$, and it is immediate that $E\left(\dfrac{\partial\ell_i}{\partial\beta_j}\big|U_i=0\right)=0$. Let us now compute the conditional expectations involved in $E\left(\dfrac{\partial\ell_i}{\partial\beta_j}\big|U_i=1\right)$. Here it is worth recalling the notation $z_i=\dfrac{y_i-\mu_{1i}}{\sigma_i}$ and $\zeta_i=\dfrac{\mu_{2i}+z_i\rho_i}{\sqrt{1-\rho_i^2}}$, for $i=1,\ldots,n$. We now use the conditional density function given in (\ref{1:dens_heckman}) to obtain that \begin{eqnarray*} E\left(\dfrac{\phi(\zeta_{i})}{\Phi(\zeta_{i})}\big|U_i=1\right)&=&\dfrac{1}{\sigma_i\Phi(\mu_{2i})}\int_{-\infty}^\infty\phi(\zeta_i)\phi(z_i)dy_i=\dfrac{1}{2\pi\sigma_i\Phi(\mu_{2i})}\int_{-\infty}^\infty\exp\{-(\zeta_i^2+z_i^2)/2\}dy_i\\ &=&\dfrac{e^{-\mu_{2i}^2/2}}{2\pi\sigma_i\Phi(\mu_{2i})}\int_{-\infty}^\infty\exp\left\{-\dfrac{(y_i-\mu_{1i}+\sigma_i\mu_{2i}\rho_i)^2}{2\sigma_i^2(1-\rho_i^2)}\right\}dy_i= \sqrt{1-\rho_i^2}\dfrac{\phi(\mu_{2i})}{\Phi(\mu_{2i})}, \end{eqnarray*} where the last equality follows by identifying a normal kernel in the integral.
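The normal-kernel identity above can be verified numerically; the following sketch (with arbitrary illustrative values of $\mu_{2i}$ and $\rho_i$, not taken from the data analysis) compares a trapezoidal quadrature of the integral with the closed form $\sqrt{1-\rho_i^2}\,\phi(\mu_{2i})/\Phi(\mu_{2i})$:

```python
import math

def phi(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

mu2, rho = 0.3, 0.4                # illustrative (hypothetical) values
s = math.sqrt(1.0 - rho * rho)

# Trapezoidal quadrature of E[phi(zeta)/Phi(zeta) | U = 1]
#   = (1 / Phi(mu2)) * integral of phi((mu2 + rho * z) / s) * phi(z) dz
n, a, b = 100000, -12.0, 12.0
h = (b - a) / n
total = 0.0
for k in range(n + 1):
    z = a + k * h
    w = 0.5 if k in (0, n) else 1.0
    total += w * phi((mu2 + rho * z) / s) * phi(z)
numeric = h * total / Phi(mu2)

closed_form = s * phi(mu2) / Phi(mu2)  # the identity derived above
```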
On the other hand, we use the fact that $Y_i$ given $U_i=1$ has mean equal to $\mu_{1i}+\rho_i\sigma_i\dfrac{\phi(\mu_{2i})}{\Phi(\mu_{2i})}$ (more details are given in the Supplementary Material) and get \begin{eqnarray*} E\left(Z_i\big|U_i=1\right)=-\dfrac{\mu_{1i}}{\sigma_i}+\dfrac{1}{\sigma_i}E(Y_i|U_i=1)=-\dfrac{\mu_{1i}}{\sigma_i}+\dfrac{1}{\sigma_i}\left(\mu_{1i}+\rho_i\sigma_i\dfrac{\phi(\mu_{2i})}{\Phi(\mu_{2i})}\right)=\rho_i\dfrac{\phi(\mu_{2i})}{\Phi(\mu_{2i})}. \end{eqnarray*} With the results above, we obtain that $E\left(\dfrac{\partial\ell_i}{\partial\beta_j}\big|U_i\right)=0$ almost surely and therefore $E\left(\dfrac{\partial\ell_i}{\partial\beta_j}\right)=0$ for $j=1,\ldots,p$. We now concentrate our attention on proving the identity stated in the theorem. It follows that \begin{eqnarray*} \dfrac{\partial^2\ell_i}{\partial\beta_j\partial\beta_l}=-\left\{\dfrac{\rho_i^2}{1-\rho_i^2}\left[\zeta_i\dfrac{\phi(\zeta_i)}{\Phi(\zeta_i)}+\dfrac{\phi^2(\zeta_i)}{\Phi^2(\zeta_i)}\right]+1\right\}\dfrac{x_{ij}x_{il}}{\sigma_i^2}u_i \end{eqnarray*} and \begin{eqnarray*} \dfrac{\partial\ell_i}{\partial\beta_j}\dfrac{\partial\ell_i}{\partial\beta_l}=\left\{z_i^2-2z_i\dfrac{\rho_i}{\sqrt{1-\rho_i^2}}\dfrac{\phi(\zeta_i)}{\Phi(\zeta_i)}+\dfrac{\rho_i^2}{1-\rho_i^2}\dfrac{\phi^2(\zeta_i)}{\Phi^2(\zeta_i)}\right\}\dfrac{x_{ij}x_{il}}{\sigma_i^2}u_i, \end{eqnarray*} where we have used that $u_i^2=u_i$ (since $u_i\in\{0,1\}$) in the last equality. It is immediate that $E\left(\dfrac{\partial^2\ell_i}{\partial\beta_j\partial\beta_l}\big|U_i=0\right)=-E\left(\dfrac{\partial\ell_i}{\partial\beta_j}\dfrac{\partial\ell_i}{\partial\beta_l}\big|U_i=0\right)=0$. Proceeding in a similar way as before, after some algebra we obtain that \begin{eqnarray*} E\left(\zeta_i\dfrac{\phi(\zeta_i)}{\Phi(\zeta_i)}\big|U_i=1\right)=\mu_{2i}(1-\rho_i^2)\dfrac{\phi(\mu_{2i})}{\Phi(\mu_{2i})}\quad\mbox{and}\quad E\left(Z_i^2\big|U_i=1\right)=1-\mu_{2i}\rho_i^2\dfrac{\phi(\mu_{2i})}{\Phi(\mu_{2i})}.
\end{eqnarray*} By combining these results, we have that \begin{eqnarray*} -E\left(\dfrac{\partial^2\ell_i}{\partial\beta_j\partial\beta_l}\big|U_i=1\right)&=&E\left(\dfrac{\partial\ell_i}{\partial\beta_j}\dfrac{\partial\ell_i}{\partial\beta_l}\big|U_i=1\right)\\ &=&\left\{1+\mu_{2i}\rho_i^2\dfrac{\phi(\mu_{2i})}{\Phi(\mu_{2i})}+\dfrac{\rho_i^2}{1-\rho_i^2}E\left(\dfrac{\phi^2(\zeta_i)}{\Phi^2(\zeta_i)}\big|U_i=1\right)\right\}\dfrac{x_{ij}x_{il}}{\sigma_i^2}. \end{eqnarray*} Since the conditional expectations coincide, the marginal expectations also coincide, thus giving the desired result. $\square$ \noindent {\it Proof of Theorem \ref{asympt_results}}. Conditions (C1)--(C3) and Theorem \ref{regcond} give us the consistency of the MLEs. To establish the asymptotic normality of the estimators, we need to show that the third derivatives of the log-likelihood function are bounded by integrable functions not depending on the parameters. We show here that this is possible for the derivatives involving the $\boldsymbol\beta$'s. The other cases follow in a similar way, as discussed in the proof of Theorem \ref{regcond}, and are therefore omitted. By computing the third derivatives with respect to the $\boldsymbol\beta$'s and using the triangle inequality, we have that \begin{eqnarray*} \left|\dfrac{\partial^3\ell_i}{\partial\beta_j\partial\beta_l\partial\beta_k}\right|&\leq&\dfrac{\rho_i^2}{\sigma_i^3(1-\rho_i^2)^{3/2}}\left\{(1+z_i^2)\dfrac{\phi(\zeta_i)}{\Phi(\zeta_i)}+\zeta_i^2\dfrac{\phi^2(\zeta_i)}{\Phi^2(\zeta_i)}+2\zeta_i\dfrac{\phi^2(\zeta_i)}{\Phi^2(\zeta_i)}+2\dfrac{\phi^2(\zeta_i)}{\Phi^3(\zeta_i)}\right\}x_{ij}x_{il}x_{ik}\\ &\equiv& g_i(\boldsymbol{\theta})\leq g_i(\boldsymbol{\theta}^*), \end{eqnarray*} for $j,l,k=1,\ldots,p$, where $\boldsymbol{\theta}^*=\mbox{argmax}_{\boldsymbol{\theta}\in\boldsymbol\Theta}g_i(\boldsymbol{\theta})$, which is well-defined due to Assumption (C1).
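A classical Gaussian tail lower bound, $\left(\tfrac{1}{x}-\tfrac{1}{x^{3}}\right)e^{-x^{2}/2}\leq\int_{x}^{\infty}e^{-y^{2}/2}\,dy$ for $x>0$, is invoked next to control the Mills-type ratios appearing in these bounds. Since $\int_{x}^{\infty}e^{-y^{2}/2}\,dy=\sqrt{\pi/2}\,\operatorname{erfc}(x/\sqrt{2})$, the bound is easy to check numerically:

```python
import math

def gauss_tail(x):
    # integral from x to infinity of exp(-y^2 / 2) dy = sqrt(pi/2) * erfc(x / sqrt(2))
    return math.sqrt(math.pi / 2.0) * math.erfc(x / math.sqrt(2.0))

def lower_bound(x):
    return (1.0 / x - 1.0 / x ** 3) * math.exp(-0.5 * x * x)

# The lower bound holds for every x > 0 (it is negative, hence trivial, for x < 1).
holds = all(lower_bound(x) <= gauss_tail(x) for x in (0.5, 1.0, 1.5, 2.0, 3.0, 5.0, 10.0))
```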
We now need to show that the expectations of the terms in $g_i(\boldsymbol{\theta}^*)$ are finite. Let us show that $E_{\boldsymbol\theta_0}\left(\zeta_i^{*2}\dfrac{\phi^2(\zeta_i^*)}{\Phi^2(\zeta_i^*)}\right)<\infty$, where $E_{\boldsymbol\theta_0}(\cdot)$ denotes the expectation with respect to the true parameter vector value $\boldsymbol\theta_0$ and $\zeta_i^*$ is defined as $\zeta_i$ with $\boldsymbol{\theta}$ replaced by $\boldsymbol{\theta}^*$. The proofs for the remaining terms follow from this one or in a similar way. For $\zeta_i^*\geq-\sqrt2$, we have $\Phi(\zeta_i^*)\geq\Phi(-\sqrt2)$ and, since $x\mapsto x^2\phi^2(x)$ is maximized at $|x|=1$, it follows that $\zeta_i^{*2}\phi^2(\zeta_i^*)/\Phi^2(\zeta_i^*)\leq \phi^2(1)\Phi^{-2}(-\sqrt2)$. Now, consider $\zeta_i^*<-\sqrt2$ and write $x=-\zeta_i^*>\sqrt2$. Theorem 1.2.6 from \cite{durrett} gives us the following inequality for $x>0$: \begin{eqnarray*} \left(\dfrac{1}{x}-\dfrac{1}{x^3}\right)e^{-x^2/2}\leq \int_x^\infty e^{-y^2/2}dy. \end{eqnarray*} Using this inequality together with the symmetry relation $\Phi(\zeta_i^*)=1-\Phi(x)$, we obtain for $x>\sqrt2$ that $\dfrac{\phi(\zeta_i^*)}{\Phi(\zeta_i^*)}=\dfrac{\phi(x)}{1-\Phi(x)}\leq \dfrac{x^{3}}{x^{2}-1}\leq 2x$, and hence $\dfrac{\phi^2(\zeta_i^*)}{\Phi^2(\zeta_i^*)}\leq 4\zeta_i^{*2}$. These results imply that \begin{eqnarray*} E_{\boldsymbol\theta_0}\left(\zeta_i^{*2}\dfrac{\phi^2(\zeta_i^*)}{\Phi^2(\zeta_i^*)}\right)&=&E_{\boldsymbol\theta_0}\left(\zeta_i^{*2}\dfrac{\phi^2(\zeta_i^*)}{\Phi^2(\zeta_i^*)}I\left\{\zeta_i^*\geq-\sqrt2\right\}\right)+E_{\boldsymbol\theta_0}\left(\zeta_i^{*2}\dfrac{\phi^2(\zeta_i^*)}{\Phi^2(\zeta_i^*)}I\left\{\zeta_i^*<-\sqrt2\right\}\right)\\ &\leq& \phi^2(1)\Phi^{-2}(-\sqrt2)+4E_{\boldsymbol\theta_0}\left(\zeta_i^{*4}\right)<\infty, \end{eqnarray*} with $E_{\boldsymbol\theta_0}\left(\zeta_i^{*4}\right)<\infty$ being proved along the same lines as the first two moments presented in the Supplementary Material, which completes the proof of the desired result. $\square$ \end{document}
\begin{document} \definecolor{light-gray}{gray}{0.95} \lstset{columns=fullflexible, basicstyle=\ttfamily, backgroundcolor=\color{white},xleftmargin=0.5cm,frame=lr,framesep=8pt,framerule=0pt} \begin{frontmatter} \title{Pseudo Bayesian Mixed Models under Informative Sampling} \runtitle{Pseudo Bayesian Mixed Models} \author{\fnms{Terrance D.} \snm{Savitsky}\thanksref{t1}\ead[label=e1]{[email protected]}} \and \author{\fnms{Matthew R.} \snm{Williams}\thanksref{t2,t3}\ead[label=e2]{[email protected]}} \thankstext{t1}{U.S. Bureau of Labor Statistics, Office of Survey Methods Research, Washington, DC, USA, [email protected] } \thankstext{t2}{National Center for Science and Engineering Statistics, National Science Foundation, Alexandria, VA, USA, [email protected] } \thankstext{t3}{Corresponding Author} \runauthor{Savitsky and Williams} \begin{abstract} When random effects are correlated with survey sample design variables, the usual approach of employing individual survey weights (constructed to be inversely proportional to the unit survey inclusion probabilities) to form a pseudo-likelihood no longer produces asymptotically unbiased inference. We construct a weight-exponentiated formulation for the random effects distribution that achieves approximately unbiased inference for generating hyperparameters of the random effects. We contrast our approach with frequentist methods that rely on numerical integration to reveal that only the Bayesian method achieves both unbiased estimation with respect to the sampling design distribution and consistency with respect to the population generating distribution. Our simulations and real data example for a survey of business establishments demonstrate the utility of our approach across different modeling formulations and sampling designs. 
This work serves as a capstone for recent developmental efforts that combine traditional survey estimation approaches with the Bayesian modeling paradigm and provides a bridge across the two rich but disparate sub-fields. \end{abstract} \begin{keyword} \kwd{Labor force dynamics} \kwd{Markov chain Monte Carlo} \kwd{Pseudo-Posterior distribution} \kwd{Survey sampling} \kwd{Weighted likelihood} \end{keyword} \end{frontmatter} \section{Introduction} Hierarchical Bayesian models provide a flexible and powerful framework for social science and economic data, which often include nested units of analysis such as industry, geography, and individual. Yet, social science and economic data are commonly acquired from a survey sampling procedure. It is typically the case that the underlying survey sampling design distribution governing the procedure induces a correlation between the response variable(s) of interest and the survey sampling inclusion probabilities assigned to units in the target population from which the sample was taken. Survey sampling designs where there is a correlation between the response variable and the sampling inclusion probabilities are referred to as informative and will result in the distribution of the response variable in observed samples being different from that from the underlying population about which we seek to perform model-based inference. Sample designs may also be informative when the inclusion probabilities for groups are correlated with the corresponding latent random effects. The resulting distribution of random effects in the sample is different from that of the population of random effects. The current literature for Bayesian methods has partially addressed population model estimation of survey data under informative sampling designs through the use of survey sampling weights to obtain consistent estimates of fixed effects or top level global parameters. 
Yet the survey statistics literature suggests that parameters related to random effects, or local parameters, are still potentially estimated with bias. The possibility of survey-induced bias in the estimation of random effects severely limits the applicability of the full suite of Bayesian models to complex social and economic data. This paper proposes a Bayesian survey sample-weighted, plug-in framework for the simultaneous estimation of fixed effects and generating hyperparameters (e.g., variance) of random effects that is unbiased with respect to the distribution over samples and asymptotically consistent with respect to the population distribution. \subsection{Informative Sampling Designs} Survey sampling designs that induce a correlation between the response variable of interest, on the one hand, and the survey sample inclusion probabilities, on the other hand, are deemed ``informative'' and produce samples that express a different balance of information than that of the underlying population; thus, estimation methods that do not incorporate sample design information lead to incorrect inferences. For example, the U.S. Bureau of Labor Statistics (BLS) administers the Job Openings and Labor Turnover Survey (JOLTS) to business establishments for the purpose of estimating labor force dynamics, such as the total number of hires, separations, and job openings for area-indexed domains. The units are business establishments, and their inclusion probabilities are set to be proportional to their total employment (as obtained on a lagged basis from a census instrument). Since the numbers of hires, separations, and job openings for establishments are expected to be correlated with the number of employees, this survey sampling design induces informativeness, so that hiring, separations, and openings would be expected to be larger in the samples than in the underlying population.
\subsection{Bayesian Models for Survey Data} There is a growing and rich literature on the employment of survey sampling weights (constructed to be inversely proportional to unit inclusion probabilities) to correct the population model estimated on the observed survey sample and produce asymptotically unbiased estimation. Some recent papers focus on the use of Bayesian modeling for the specific purpose of producing mean and total statistics under either empirical or nonparametric likelihoods, but these methods do not allow the data analyst to specify their own population model for the purpose of parameter estimation and prediction \citep{kunihama:2014,wu:2010,si2015}. In particular, the set-up for our paper is one where the data analyst has specified a particular Bayesian hierarchical model for the population (from which the sample was taken) under which they wish to perform inference from data taken from a complex sampling design. Having to specify a model that is specific to the realized sample, but unrelated to the population model constructed by the data analyst, does not allow them to conduct the inference they seek. \citet{2015arXiv150707050S} and \citet{2018dep} complement these efforts by employing a pseudo-posterior to allow the data analyst to estimate a population model of their choice on an informative sample taken from that population. The pseudo-likelihood exponentiates each unit likelihood contribution by its sampling weight, which re-balances the information in the observed sample to approximate that in the population. The use of the pseudo-posterior may be situated in the more general class of approximate or composite likelihoods \citep{2009arXiv0911.5357R}. All of the above Bayesian approaches that allow analyst specification of the underlying population generating model to be estimated on the observed informative sample \emph{only} address models with fixed or global effects, not random effects.
Yet, it is routine in Bayesian modeling to employ one or more sets of random effects under prior formulations designed to capture complex covariance structures. Hierarchical specifications make such population models readily estimable. \subsection{Extending the Pseudo-Posterior to Mixed Effects Models} There are two survey designs considered in this paper: 1. Clusters or groups of units are sampled in a first stage, followed by the sampling of nested units in a second stage. We refer to this procedure as the ``direct'' sampling of clusters or groups; 2. Units are sampled in a single stage without directly sampling the clusters or groups in which they naturally nest (e.g., geography). We refer to the case where groups used in the population model are not included in the sampling design as ``indirect'' sampling of groups, since a group is included in the sample to the extent that a nested unit is directly sampled. This paper extends the approaches of \citet{2015arXiv150707050S} and \citet{2018dep} from global-only parameter models to mixed effects (global and local parameter) models by exponentiating both the data likelihood contributions \emph{and} the group-indexed random effects prior distributions by sampling weights (an approach that we label ``double-weighting''), which is multiplied, in turn, by the joint prior distribution for the population model parameters to form a joint pseudo-posterior distribution with respect to the observed data and the random effects for the sampled groups. Our augmented (by sample-weighting the prior for random effects) pseudo-posterior method, introduced in the next section, is motivated by a data analyst who specifies a population generating (super population) model that includes group-indexed random effects for which they desire to perform inference. The observed data are generated under an informative sampling design, so that simply estimating the population model on the observed sample will produce biased parameter estimates.
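As a concrete illustration of double-weighting, the following sketch (a hypothetical one-way normal mixed model, not the JOLTS specification) evaluates a log pseudo-posterior kernel in which unit likelihood contributions are exponentiated by unit sampling weights and group random-effect prior contributions are exponentiated by group weights:

```python
import math

def log_normal_pdf(x, mean, sd):
    return -0.5 * math.log(2.0 * math.pi * sd * sd) - 0.5 * ((x - mean) / sd) ** 2

def double_weighted_kernel(y, group, u, w_unit, w_group, beta, sigma, tau):
    """Log pseudo-likelihood kernel for y_i ~ N(beta + u_{g(i)}, sigma^2), u_g ~ N(0, tau^2)."""
    # Unit-level contributions, exponentiated by unit sampling weights
    lp = sum(w_unit[i] * log_normal_pdf(y[i], beta + u[group[i]], sigma)
             for i in range(len(y)))
    # Group-level random-effect prior contributions, exponentiated by group weights
    lp += sum(w_group[g] * log_normal_pdf(u[g], 0.0, tau)
              for g in range(len(u)))
    return lp  # adding the log prior for (beta, sigma, tau) gives the log pseudo-posterior
```

With all weights equal to 1 this reduces to the ordinary (unweighted) augmented likelihood; in an MCMC scheme the random effects `u` would be co-sampled with the model parameters.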
Our augmented pseudo-posterior model estimator uses survey sampling weights to perform a relatively minor adjustment to the augmented model likelihood such that parameter draws taken on the observed informative sample approximate inference with respect to the population generating distribution (and the resulting parameter estimates are consistent for that distribution). We demonstrate that our pseudo-posterior formulation achieves both unbiasedness with respect to the two-stage survey sampling design distribution \emph{and} consistency with respect to the population generating (super population) model for the observed response variable and the latent cluster/group random effects under both direct and indirect sampling of groups. Our weighted pseudo-posterior is relatively simple in construction because we co-sample the random effects and model parameters in our numerical Markov chain Monte Carlo (MCMC) scheme and marginalize out the random effects \emph{after} estimation. \begin{comment} , $P^{\pi}(\bm{\delta})$ for sampled inclusion indicators $\delta_{h}, \delta_{\ell | h},\delta_{h\ell }\in \{0,1\}$ with $\pi_{h} = Pr(\delta_{h} = 1)$, $\pi_{\ell | h} = Pr(\delta_{\ell | h} = 1)$, and $\pi_{h\ell} = \pi_{\ell | h} \pi_{h}$, \emph{and} consistency with respect to the population generating (super population) model, $P_{\theta_{0}, \phi_{0}}(\bm{y}, \bm{u})$ for response $y_{h\ell}$ and cluster random effects $u_{h}$ under both direct and indirect sampling of groups. \end{comment} \begin{comment} We specify and evaluate alternative forms for induced group inclusion probabilities used to formulate weights for group-indexed random effects distributions in the case of indirect group sampling. We both formalize and extend \citet{doi:10.1081/STA-120017802} that discusses, but does not implement, the possibility for inducing group-indexed weights under indirect sampling of groups.
We discuss the extensions of frequentist consistency for our Bayesian survey sampling-weighted mixed effects models under each of direct and indirect sampling of groups by generalizing \citet{2018dep}. \end{comment} The case of indirect sampling of groups is particularly important in Bayesian estimation as it is common to specify multiple random effects terms that parameterize a complex covariance structure because the random effects terms are readily estimable in an MCMC scheme. \begin{comment} A recent publication of \citet{doi:10.1111/rssa.12362} addresses frequentist estimation of a population mean under informative non-response from a population model that includes unit-indexed random effects. The score function for non-responding units is replaced with it’s expectation with respect to the distribution for non-responding units and then uses a Bayes' rule formula to replace that expectation with respect a ratio of expectations with respect to distribution for responding units. The approach next uses expectation conditioning to compute a double integral where the first conditions on observation indexed random effects and the second integrates the random effects with respect to the distribution for the observed sample units. Because the approach focuses on estimation of the population model and produces a complex expression (composed of a ratio of expectations), they simply replace the population estimates for random effects with those computed from the sample as a plug-in. Porting this method for Bayesian estimation would require numerical computation of complicated integrals on every iteration of the posterior sampler and it would not address more general inference beyond the population mean. 
A major advantage of our pseudo-posterior construction under double-weighting is that the data analyst is only required to make minor changes to the posterior sampler designed for their population model to insert weights into the computations of the full conditional distributions. \end{comment} The remainder of the paper develops our proposed double-weighting methods for estimation of mixed effects models under both direct and indirect sampling of groups on data acquired under informative sampling in Section~\ref{doublepseudo}. Simulation studies are conducted in Section~\ref{simulation} that compare our proposed method to the usual case of likelihood weighting under direct sampling of groups. Section~\ref{application} applies our double-weighting method to estimate the number of hires for business establishments, employing industry-indexed random effects in the population model; we reveal that our double-weighting approach produces more heterogeneous random effects estimates than does the usual practice, better approximating the population from the observed sample. We offer a concluding discussion in Section~\ref{discussion}. \section{Mixed Effects Pseudo Posterior Distribution}\label{doublepseudo} \begin{comment} \subsection{Fixed Effects Pseudo Posterior} \citet{2015arXiv150707050S} and \citet{2017pair} construct a sampling-weighted pseudo-likelihood that provides a noisy approximation to the population likelihood that when convolved with a prior distribution induces a pseudo-posterior distribution for \emph{fixed} effects models. We begin by briefly reviewing this construction before we proceed to extend their formulation to a linear, mixed effects population model construction that employs random effects.
Suppose there exists a Lebesgue measurable population-generating density, $\pi\left(y\vert\theta\right)$, indexed by parameters, $\theta \in \Theta$, where values for a response variable of interest, $y_{1},\ldots,y_{N} \in P_{\theta_{0}},~\theta_{0}\in\Theta$. Let $\delta_{i} \in \{0,1\}$ denote the sample inclusion indicator for unit, $i = 1,\ldots,N$, from a population, $U$. The density for the \emph{observed} sample, $S = (1,\ldots,n)$ (where the index of units drawn from the population, $U$, are re-labeled sequentially without loss of generality) is denoted by, $f\left(y\vert\theta\right) := \pi\left(y\vert \delta = 1,\theta\right)$. The plug-in estimator for posterior density under the analyst-specified model for $\theta \in \Theta$ is \begin{equation} f^{\pi}\left(\theta\vert \mathbf{y}\right) \propto \left[\mathop{\prod}_{j = 1}^{n}f\left(y_{j}\vert \theta\right)^{w_{j}}\right]f\left(\theta\right), \label{pseudolike} \end{equation} where $\mathop{\prod}_{j = 1}^{n}f\left(y_{j}\vert \theta\right)^{w_{j}}$ denotes the pseudo-likelihood for observed sample responses, $\mathbf{y}$. The joint prior density on model space assigned by the analyst is denoted by $f\left(\theta\right)$. The sampling weights, $\{w_{j} \propto 1/\pi_{j}\}$, are inversely proportional to unit inclusion probabilities and normalized to sum to the sample size, $n$. Let $f^{\pi}(\cdot\vert \mathbf{y})$ denote the noisy approximation to (the population) posterior distribution, $f(\cdot\vert\mathbf{y})$, based on the data, $\mathbf{y}$, and sampling weights, $\{\mathbf{w}\}$, confined to those units \emph{observed} in the sample, $S$. \citet{2017pair} demonstrate asymptotic consistency of the pseudo-posterior distribution with respect to the joint distribution, $(P_{\theta_{0}},P^{\pi})$, that governs population generation and the taking of an informative sample, $S$, from the underlying population, $U$. 
\end{comment} This paper focuses on sampling units naturally collected into a population of groups, defined, for example, by geography. There is typically a dependence among the response values for units within each group, such that units are more similar within than between groups. Sampling designs are typically constructed as multi-stage, where the collection of groups in the population is first sampled, followed by the sequential taking of a sub-sample of units from the population of units within \emph{selected} or sampled groups. By contrast, an alternative set of sampling designs may proceed to draw a sample from the population of units in a \emph{single} stage, such that the groups are included in the sample, indirectly, when one or more of their member units are selected. These two sampling designs (sampling groups, followed by sampling units within groups in a multi-stage design, versus a single-stage sampling of units without directly sampling their member groups) lead us to design two formulations for extending the pseudo-posterior distribution of \citet{2018dep}. The pseudo-posterior exponentiates the likelihood contributions of the \emph{single level} fixed effects model (that does not utilize random effects) by the survey weights for the observed sample units, which are inversely proportional to their probabilities of being selected into the sample, $w_{i} \propto 1/\pi_{i}$: \begin{equation}\label{eq:pseudopost} f^{\pi}\left(\theta\vert \mathbf{y},\tilde{\mathbf{w}}\right) \propto \left[\textcolor{red}{\mathop{\prod}_{i = 1}^{n}f\left(y_{i}\vert \theta\right)^{\tilde{w}_{i}}}\right]f\left(\theta\right) \end{equation} where the normalized weights, $\tilde{w}_{i} = n w_{i}/\mathop{\sum}_{i' \in S} w_{i'}$, sum to the sample size.
The sum of the weights directly affects the amount of posterior uncertainty in the pseudo-posterior distribution for $\bm{\theta}$, so normalizing the weights to sum to the sample size regulates that uncertainty. Equation~\ref{eq:pseudopost} is a noisy approximation to the true (but not fully known) joint distribution of the population model, $P_{\theta_{0}}(\mathbf{y})$, and the sampling process, $P^{\pi}(\bm{\delta})$, where $\bm{\delta}$ denotes a vector of sample design inclusion indicators for units and groups (governed by $P^{\pi}$), formally defined for the 2-stage class of sampling designs considered in this paper in the upcoming section. The sample-weighted pseudo-posterior estimator constructed on the observed (informative) sample leads to consistent estimation of the population generating (super-population) parameters, $\theta$, as the sample size, $n$, grows. The pseudo-posterior construction requires only a minor change to the population model specified by the data analyst on which they wish to perform inference: each unit-indexed likelihood contribution is weighted by its associated marginal sampling weight. In particular, the data analyst may specify the population distributions, $f\left(y_{i}\vert \theta\right)$, and priors, $f(\bm{\theta})$; for example, for the count data that we work with in the sequel, they may specify a Poisson likelihood with mean, $\bm{\mu}$, for which they define a latent regression model formulation. The data analyst is interested in performing inference for the generating parameters under the population generating distribution and not the distribution of the observed sample. Under informative sampling the two distributions are different, and the pseudo-posterior corrects the distribution of the observed sample back to the population of interest.
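To make the construction concrete, the following minimal sketch (our own illustration, not the paper's code; the Poisson likelihood with log-mean $\theta$ and the Gaussian log-prior are assumed for concreteness) evaluates the log kernel of the pseudo-posterior in Equation~\ref{eq:pseudopost} with weights normalized to sum to the sample size:

```python
import numpy as np

def normalized_weights(w):
    """Scale raw weights w_i (proportional to 1/pi_i) to sum to the sample size n."""
    w = np.asarray(w, dtype=float)
    return w * (len(w) / w.sum())

def log_pseudo_posterior(theta, y, w_tilde, log_prior):
    """Log kernel of the pseudo-posterior: sum_i w~_i * log f(y_i | theta)
    + log f(theta), for an assumed Poisson model with log-mean theta."""
    lam = np.exp(theta)
    loglik = y * theta - lam          # Poisson log-density, dropping log(y_i!)
    return np.sum(w_tilde * loglik) + log_prior(theta)

y = np.array([2.0, 0.0, 3.0, 1.0, 4.0])
w = np.array([10.0, 2.0, 5.0, 1.0, 8.0])   # raw weights, w_i proportional to 1/pi_i
w_tilde = normalized_weights(w)            # sums to n = 5
print(log_pseudo_posterior(np.log(2.0), y, w_tilde, lambda t: -0.5 * t ** 2))
```

With equal inclusion probabilities, each $\tilde{w}_{i} = 1$ and the expression reduces to the ordinary unweighted log posterior kernel.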
We demonstrate in the sequel that the formulation in Equation~\ref{eq:pseudopost} can be extended to multi-level models by exponentiating \emph{both} the likelihood (conditioned on the random effects) \emph{and} the prior distribution for random effects by sampling weights. \subsection{Mixed Effects Posterior Under \emph{Direct} Sampling of Population Groups}\label{sec:direct} Assign units, $i \in (1,\ldots,N)$, that index a population, $U$, to groups, $h \in (1,\ldots,G_{U})$, where each population group, $h$, nests units, $U_{h} = \{1,\ldots,N_{h}\}$, such that $N = \lvert U \rvert = \mathop{\sum}_{h = 1}^{G_{U}}N_{h}$, with $N_{h} = \lvert U_{h} \rvert$. Construct a 2-stage informative sampling design whose first stage takes a direct sample of the $G_{U}$ groups, where $\pi_{h}\in(0,1]$ denotes the marginal sample inclusion probability for group, $h \in (1,\ldots,G_{U})$. Let $g \in (1,\ldots,G_{S})$ index the \emph{sampled} groups, where $G_{S}$ denotes the number of observed groups from the population of $G_{U}$ groups, with $G_{S} \leq G_{U}$. Our first result defines a pseudo-posterior estimator on the observed sample for our population model that includes group-indexed random effects in the case where we \emph{directly} sample groups, followed by the sampling of units nested within selected groups, in a multistage survey sampling design. Our goal is to achieve unbiased inference for $(\theta,\phi)$ (where $\theta$ denotes fixed effects for generating population responses, $y$, and $\phi$ denotes the generating parameters of random effects, $u$, for the population), estimated on our observed sample taken under an informative survey sampling design. We assume that the random effects, $u$, are indexed by group and are conditionally independent given the generating parameter, $\phi$.
Multistage designs that sample groups or clusters, followed by the further sampling of nested units, are commonly used for convenience to mitigate administration costs where in-person interviews are required, and also where a sampling frame of end-stage units is constructed only after sampling groups in the first stage. The second stage of the survey sampling design takes a sample, $S_{g} \subset U_{g}$, from the $N_{g}$ second stage units of each sampled group, $g \in (1,\ldots,G_{S})$. The second stage units are sampled with conditional inclusion probabilities, $\pi_{\ell\mid g} \in (0,1]$ for $\ell = 1,\ldots,N_{g}$, conditioned on inclusion of group, $g$. Let $j \in (1,\ldots,n_{g})$ index the sampled or observed second stage units linked to or nested within sampled group, $g \in (1,\ldots,G_{S})$. Denote the marginal group survey sampling weight, $w_{g} \propto 1/\pi_{g}$, for $\pi_{g} \in (0,1]$. Denote the marginal unit survey sampling weight, $w_{g j} \propto 1/\pi_{g j}$, for $\pi_{g j} \in (0,1]$, the joint inclusion probability for unit, $j$, nested in group, $g$, both selected into the sample. The group marginal inclusion probabilities and conditional unit inclusion probabilities under our 2-stage survey sampling design are governed by the distribution, $P^{\pi}$.
\begin{theorem}\label{th:direct} Under a proper prior specification, $f(\bm{\theta})f(\bm{\phi})$, the following pseudo-posterior estimator achieves approximately unbiased inference for the super-population (population generating) model, $f(\bm{\theta},\bm{\phi}|\mathbf{y})$, with respect to the distribution governing the taking of samples from an underlying finite population, $P^{\pi}$, \begin{equation}\label{eq:sampdirmodel} f^{\pi}\left(\theta, \phi \vert \mathbf{y} \right) \propto \left[\mathop{\int}_{\mathbf{u} \in \mathcal{U}} \left\{ \textcolor{blue}{\mathop{\prod}_{g\in S}} \left( \textcolor{red}{\mathop{\prod}_{j \in S_{g}} f\left(y_{g j}\mid u_{g}, \theta \right)^{w_{g j}}} \right) \textcolor{blue}{f\left(u_{g}\mid \phi \right)^{w_g}} \right\} d\mathbf{u} \right]f(\theta)f(\phi), \end{equation} where $f^{\pi}(\cdot)$ denotes a sampling-weighted pseudo-distribution and $j \in S_{g}$ indexes the subset of units, $j \in (1,\ldots,n_{g} = \lvert S_{g}\rvert)$, linked to group, $g\in (1,\ldots,G_{S})$. Parameters, $(\bm{\theta},\bm{\phi})$, index the super-population model posterior distribution, $f(\bm{\theta},\bm{\phi}|\mathbf{y})$, that is the target for estimation. The integral for the vector $\mathbf{u} = (u_{1},\ldots,u_{G_{S}})$ is taken over its support, $\mathcal{U}$, for each component, $u_{g} \in \mathbf{u}$. \end{theorem} We employ a pseudo-likelihood for the first level of the model for sampled observations, $y_{gj}$, within sampled clusters, $g$, by exponentiating each likelihood contribution by the sampling weight, $w_{gj}$. This provides a noisy approximation to the first stage likelihood. For the second level model (or prior) for the random effects, $u_{g}$, we exponentiate the distribution by its corresponding sampling weight, $w_{g}$. This provides a noisy approximation to the population distribution of the random effects.
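As a sketch of how the double-weighted kernel inside Equation~\ref{eq:sampdirmodel} is evaluated at co-sampled random effects (one MCMC sweep, before marginalizing $\mathbf{u}$), under illustrative assumptions of our own (a Poisson likelihood with log-mean $\theta + u_{g}$ and $u_{g} \sim N(0,\phi^{2})$; all names are ours):

```python
import numpy as np

def log_double_weighted(theta, phi, u, y_by_group, w_unit_by_group, w_group):
    """Double-weighted log pseudo-likelihood evaluated at co-sampled u:
      sum_g [ sum_{j in S_g} w_gj * log f(y_gj | u_g, theta)
              + w_g * log f(u_g | phi) ].
    Assumed forms (ours): Poisson likelihood with log-mean theta + u_g,
    and u_g ~ N(0, phi^2)."""
    total = 0.0
    for g, (y, w) in enumerate(zip(y_by_group, w_unit_by_group)):
        eta = theta + u[g]
        loglik = y * eta - np.exp(eta)            # Poisson log f, up to log(y!)
        total += np.sum(w * loglik)               # unit-weighted likelihood terms
        log_prior_u = -0.5 * (u[g] / phi) ** 2 - np.log(phi)  # N(0, phi^2), up to const
        total += w_group[g] * log_prior_u         # group-weighted random effects prior
    return total

# Hypothetical two-group sample with unit weights w_gj and group weights w_g.
y_by_group = [np.array([1.0, 3.0]), np.array([2.0])]
w_unit = [np.array([2.0, 1.5]), np.array([4.0])]
w_group = np.array([1.8, 3.2])
u_draw = np.array([0.1, -0.2])                    # one co-sampled draw of u
print(log_double_weighted(0.3, 1.0, u_draw, y_by_group, w_unit, w_group))
```

Setting all unit and group weights to one recovers the unweighted complete-data log-likelihood of the population model.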
Both approximations are needed because the distributions of both the responses and the random effects in the sample can differ substantially from those in the corresponding population due to the informative sampling design at both the cluster ($g$) and the within-cluster ($j \mid g$) stages, where the latter notation denotes the sampling of unit $j$ conditioned on the inclusion of group $g$. Under our augmented (by weighting the prior of the group-indexed random effects) pseudo-likelihood of Equation~\ref{eq:sampdirmodel}, $f\left(y_{g j}\mid u_{g}, \theta \right)$ and $f\left(u_{g}\mid \phi \right)$ are not restricted; for example, we select a Poisson distribution for the observed data likelihood, $f\left(y_{g j}\mid u_{g}, \theta \right)$, for our simulation study and application in the sequel. Similarly, the form of the random effects prior distribution is not restricted under our construction, though it is most commonly defined as Gaussian under a GLM specification. Replacing the single Gaussian with a mixture of Gaussian distributions would also fit our set-up. Our approach also readily incorporates additional levels of random effects with no conceptual changes. \begin{proof} We first construct the complete joint model for the finite population, $U$, as if the random effects, $(u_{h})$, were directly observed, \begin{equation}\label{eq:popmodel} f_{U}\left(\theta, \phi \vert \mathbf{y},\mathbf{u} \right) \propto \left[\mathop{\prod}_{h = 1}^{G_{U}} \left(\mathop{\prod}_{\ell= 1}^{N_{h}} f\left(y_{h \ell}\mid u_{h}, \theta \right)\right) f\left(u_{h}\mid \phi \right) \right]f(\theta)f(\phi). \end{equation} Under a complex sampling design, we specify random sample inclusion indicators for groups, $\delta_h \in \{0,1\}$, with marginal probabilities, $\pi_{h} = P(\delta_h= 1)$, for $h \in (1,\ldots,G_{U})$, governed by $P^{\pi}$.
We further specify random sample inclusion indicator, $\delta_{\ell \mid h} = (\delta_{\ell} \mid \delta_h = 1) \in \{0,1\}$, with probability $\pi_{\ell\mid h} = P(\delta_{\ell \mid h}= 1)$, for unit $\ell \in (1,\ldots,N_{h})$, conditioned on the inclusion of group, $h$, such that the indicator for the joint sampling of unit $\ell$ nested within group $h$ is denoted as $\delta_{h\ell} = \delta_{\ell \mid h} \times \delta_{h}$, with the associated marginal inclusion probability, $\pi_{h\ell} = P(\delta_{h\ell}=1)$. The taking of an observed sample is governed by the survey sampling distribution, $P^{\pi}$ (as contrasted with $P_{\theta,\phi}$, the population generation distribution for $(y,u)$). The pseudo-likelihood with respect to the joint distribution, $(P^{\pi},P_{\theta,\phi})$, is then constructed by exponentiating components of the likelihood in the population such that the \emph{expected value} of the survey sample pseudo log-likelihood function with respect to $P^{\pi}$ equals that of the log-likelihood for the entire population (and thus the score functions also match in expectation). Let $\ell_{U}\left(\mathbf{y}, \mathbf{u} \vert \theta, \phi \right) \equiv \log f_{U}\left(\theta, \phi \vert \mathbf{y},\mathbf{u} \right)$ denote the population model log-likelihood. 
Applying this approach to the log-likelihood of the joint model, above, leads to the following pseudo-likelihood formulation: \begin{align} \ell ^{\pi}_{U}\left(\mathbf{y}, \mathbf{u} \vert \theta, \phi \right) &\propto \mathop{\sum}_{h=1}^{G_{U}}\left[ \left(\mathop{\sum}_{\ell = 1}^{N_{h}} \left(\frac{\delta_{\ell\mid h}}{\pi_{\ell\mid h}}\right)\left(\frac{\delta_h}{\pi_h}\right) \ell \left(y_{h \ell}\mid u_{h}, \theta \right)\right) + \left(\frac{\delta_h}{\pi_h}\right) \ell \left(u_{h}\mid \phi \right)\right]\\ & = \mathop{\sum}_{h=1}^{G_{U}}\left[ \left(\mathop{\sum}_{\ell = 1}^{N_{h}} \left(\frac{\delta_{h \ell}}{\pi_{h \ell}}\right) \ell \left(y_{h \ell}\mid u_{h}, \theta \right)\right) + \left(\frac{\delta_h}{\pi_h}\right) \ell \left(u_{h}\mid \phi \right)\right]\label{estimator} \end{align} where $P^{\pi}$ governs all possible samples, $(\delta_{h},\delta_{\ell\mid h})_{\ell\in U_{h}, h=1,\ldots,G_{U}}$, taken from the population, $U$, and, as above, the joint group-unit inclusion indicator, $\delta_{h \ell} = \delta_{h}\times\delta_{\ell\mid h}$, has $\pi_{h \ell} = P(\delta_{h \ell}= 1) = P(\delta_{h}=1,\delta_{\ell\mid h} = 1)$. For each \emph{observed} sample, $\ell ^{\pi}_{U}\left(\mathbf{y}, \mathbf{u} \vert \theta, \phi \right) = \ell ^{\pi}_{S}\left(\mathbf{y}, \mathbf{u} \vert \theta, \phi \right)$, where \begin{equation}\label{estimatorsample} \ell ^{\pi}_{S}\left(\mathbf{y}, \mathbf{u} \vert \theta, \phi \right) = \mathop{\sum}_{g=1}^{G_{S}}\left[ \left(\mathop{\sum}_{j \in S_{g}} w_{g j} \ell \left(y_{g j}\mid u_{g}, \theta \right)\right) + w_g \ell \left(u_{g}\mid \phi \right)\right] \end{equation} and $w_{g j} \propto \pi_{g j}^{-1}$ and $w_g \propto \pi_{g}^{-1}$.
Our estimator in Equation~\ref{estimator} is unbiased with respect to $P^{\pi}$, \begin{equation} \mathbb{E}^{\pi}\left[\ell ^{\pi}_{U}\left(\mathbf{y}, \mathbf{u} \vert \theta, \phi \right) \,\middle\vert\, \mathbf{y}, \mathbf{u}\right] = \ell_{U}\left(\mathbf{y}, \mathbf{u} \vert \theta, \phi \right)\label{unbiased}, \end{equation} where the expectation, $\mathbb{E}^{\pi}(\cdot)$, is taken with respect to the survey sampling distribution, $P^{\pi}$, that governs the survey sampling inclusion indicators, $\{\delta_{h \ell}, \delta_h\}$, conditional on the data, $\{\mathbf{y}, \mathbf{u}\}$, generated by $P_{\theta,\phi}$. The equality in Equation~\ref{unbiased} is achieved since $\mathbb{E}^{\pi}(\delta_{h \ell}) = \pi_{h \ell}$ and $\mathbb{E}^{\pi}(\delta_{h}) = \pi_{h}$. Thus, we use the following sampling-weighted model approximation to the complete population model of Equation~\ref{eq:popmodel}: \begin{equation}\label{eq:sampcompletemodel} f^{\pi}\left(\theta, \phi \vert \mathbf{y}, \mathbf{u}\right) \propto \left[\mathop{\prod}_{g \in S}\left(\mathop{\prod}_{j \in S_{g}} f\left(y_{g j}\mid u_{g}, \theta \right)^{w_{g j}}\right) f\left(u_{g}\mid \phi \right)^{w_g} \right]f(\theta)f(\phi). \end{equation} We can then construct a sampling-weighted version of the observed model: \begin{equation}\label{eq:sampobsmodel} f^{\pi}\left(\theta, \phi \vert \mathbf{y} \right) \propto \left[\mathop{\int}_{\mathbf{u} \in \mathcal{U}} \left\{ \mathop{\prod}_{g \in S} \left(\mathop{\prod}_{j \in S_{g}} f\left(y_{g j}\mid u_{g}, \theta \right)^{w_{g j}}\right) f\left(u_{g}\mid \phi \right)^{w_g} \right\} d \mathbf{u} \right]f(\theta)f(\phi).
\end{equation} The passage from Equation~\ref{eq:sampcompletemodel} to Equation~\ref{eq:sampobsmodel} is possible because we co-estimate $\mathbf{u}$ with $(\theta,\phi)$ and then perform the integration step to marginalize over $\mathbf{u}$ \emph{after} estimation. \end{proof} Theorem~\ref{th:direct} requires the exponentiation of the prior contributions for the sampled random effects, $(u_{g})$, by the sampling weight, $w_{g} \propto 1/\pi_{g}$, in order to achieve approximately unbiased inference for $\phi$; it is not enough to exponentiate each data likelihood contribution, $f\left(y_{g j}\mid u_{g}, \theta \right)$, by a unit (marginal) sampling weight, $w_{g j}$. This formulation is specified for \emph{any} population generating model, $P_{\theta,\phi}$. Our result may be readily generalized to survey sampling designs of more than two stages in which each collection of later stage groups is nested in earlier stage groups (such as households of units nested within geographic PSUs). The proposed method under direct sampling in Equation~\ref{eq:sampdirmodel} is categorized as a plug-in estimator that exponentiates the likelihood contributions for nested units by the unit-level marginal sampling weights \emph{and}, in turn, exponentiates the prior distribution for cluster-indexed random effects by the cluster (or PSU) marginal sampling weights. Samples from the joint pseudo-posterior distribution over the parameters are interpreted as samples from the underlying (latent) population generating model, since the augmented (by the weighted random effects prior distribution) pseudo-likelihood estimated on the observed sample provides a noisy approximation to the population generating likelihood. Although the augmented pseudo-likelihood is unbiased with respect to the distribution over samples, it will not, generally, produce correct uncertainty quantification.
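The design-unbiasedness in Equation~\ref{unbiased} reduces to $\mathbb{E}^{\pi}[\delta/\pi] = 1$ term by term. A minimal numerical check of this (ours, with made-up contributions standing in for the log-likelihood terms and independent Bernoulli inclusion of units, i.e., Poisson sampling):

```python
import numpy as np

rng = np.random.default_rng(7)

# Fixed "population" of log-likelihood contributions (y, u held fixed).
ell = rng.normal(size=200)               # stand-ins for ell(y_hl | u_h, theta)
pi = rng.uniform(0.1, 0.9, size=200)     # unit inclusion probabilities under P^pi

pop_total = ell.sum()                    # the population log-likelihood

# Average the inverse-probability-weighted sample sums over many draws of delta.
reps = 20000
est = np.empty(reps)
for r in range(reps):
    delta = rng.random(200) < pi         # delta_h ~ Bernoulli(pi_h), independent
    est[r] = np.sum((delta / pi) * ell)  # sum_h (delta_h / pi_h) * ell_h

print(est.mean(), pop_total)             # close: E[delta/pi] = 1 for each unit
```

The weighted sample sum matches the population sum on average, which is exactly the separability argument used in the proof: each weight term, $\delta/\pi$, factors away from its log-likelihood term.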
In particular, the credibility intervals will be too optimistic relative to valid frequentist confidence intervals because the plug-in method is not fully Bayesian: it does not model uncertainty in the sampling weights themselves. Although the employment of random effects captures dependence among nested units, the warping and scaling induced by the sampling weights will cause Bartlett's second identity to fail, such that the asymptotic hyperparameter covariance matrix for our plug-in mixed effects model will not equal the sandwich form of the asymptotic covariance matrix for the MLE. The consequence of this lack of equality is that the model credibility intervals, without adjustment, will not contract on valid frequentist confidence intervals. Recent work by \citet{leonnovelo2021fully} jointly models the unit level marginal sampling weights and the response variable and includes group-indexed random effects parameters in the joint model. They demonstrate correct uncertainty quantification because the asymptotic covariance matrix of their fully Bayesian model (that also co-models the sampling weights) is equal to that for the MLE. Their method, however, specifies an exact likelihood for the observed sample that is complicated and requires a closed-form solution for an integral, which restricts the class of models that may be considered. The approach also requires a different model formulation than that specified for the population and of interest to the data analyst. By contrast, our plug-in augmented pseudo-posterior distribution requires only a minor change to the underlying population model specified by the data analyst and may be easily adapted to complicated population models. Correct uncertainty quantification for $(\bm{\theta},\bm{\phi})$ may be achieved by using the method of \citet{2018arXiv180711796W} to post-process the MCMC parameter draws, replacing the pseudo-posterior covariance with the sandwich form of the MLE.
This paper, by contrast, focuses on providing unbiased point estimation for mixed effects models as an extension of \citet{2015arXiv150707050S} because \citet{2018arXiv180711796W} may be readily applied, post sampling. \begin{comment} There are two related papers \citep{pfeffmix:1998, rh:1998} that address unbiased inference for linear mixed models under a continuous and dichotomous response, respectively, in the frequentist setting where the random effects utilized in the population model index groups used to draw nested units in a multistage sampling design where, in the case of a 2-stage sampling design, each group in the population is assigned an inclusion probability. In a set-up where the inclusion probabilities for the random effects are informative, both papers accomplish estimation by multiplying the logarithm of the distribution for each group-indexed random effect by a weight set to be inversely proportional to the group inclusion probability. The log-likelihood contribution for each unit, which is nested in a group, is multiplied by a weight set to be inversely proportional to the conditional inclusion probability for that unit, given that its group was sampled in an earlier stage. Each paper evaluates alternatives for normalizing the collection of conditional units weights in each group in order to reduce the bias for estimation of the generating random effects variance in the case of a small number of units linking to each group, though they both recommend normalizing to the sum of within group sample size. \end{comment} Our pseudo-likelihood in Equation~\ref{estimator} is jointly conditioned on $(\mathbf{y},\mathbf{u})$, such that the random weights, $\left(\delta_{h \ell}/\pi_{h \ell}\right)$, are specified in linear summations. This linear combination of weights times log-likelihoods ensures (design) unbiasedness with respect to $P^{\pi}$ because the weight term is separable from the population likelihood term. 
We may jointly condition on $(\mathbf{y},\mathbf{u})$ in our Bayesian set-up because we co-sample $(\mathbf{u},\theta,\phi)$, numerically, in our MCMC, such that the integration step over $\mathbf{u}$ is applied \emph{after} co-estimation. In other words, we accomplish estimation in our MCMC by sampling $\mathbf{u}$ jointly with $(\theta,\phi)$ on each MCMC iteration and then ignoring $\mathbf{u}$ to perform marginal inferences on $\theta$ and $\phi$, which is a common approach with Bayesian hierarchical models. By contrast, \citet{pfeffmix:1998}, \citet{rh:1998}, and others \citep[for example see][]{kim2017statistical} specify the following integrated likelihood under frequentist estimation for an observed sample where units are nested within groups, \begin{equation} \ell^{\pi}(\theta,\phi) = \mathop{\sum}_{g=1}^{G_{S}} w_{g} \ell_{g}^{\pi}(\theta,\phi), \end{equation} where $\ell_{g}^{\pi}(\theta,\phi) = \log L_{g}^{\pi}(\theta,\phi)$ for \begin{equation}\label{eq:frequnitlike} L_{g}^{\pi}(\theta,\phi) = \mathop{\int}_{u_{g} \in \mathcal{U}}\exp\left[\mathop{\sum}_{j \in S_{g}} w_{j\mid g}\ell \left(y_{g j}\mid u_{g}, \theta \right)\right]f\left(u_{g}\mid \phi \right)du_{g}, \end{equation} which will \emph{not}, generally, be unbiased with respect to the distribution governing the taking of samples for the population likelihood because the unit level conditional weights, $(w_{j\mid g})_{j}$, are nested inside an exponential function: replacing $w_{j\mid g}$ with $\delta_{\ell\mid h}/\pi_{\ell \mid h}$ inside the exponential and summing over the population groups and nested units does not produce separable sampling design terms that each integrate to $1$ with respect to $P^{\pi}$, conditioned on the generated population \citep{yi:2016}. The non-linear specification in Equation~\ref{eq:frequnitlike} results from an estimation procedure that integrates out $\mathbf{u}$ \emph{before} pseudo-maximum likelihood point estimation of $(\theta,\phi)$.
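The bias from placing the weights inside the exponential of Equation~\ref{eq:frequnitlike} can be seen numerically. In this sketch of ours (made-up log-likelihood contributions, independent Bernoulli inclusion), the linear weighted sum is design-unbiased, but its exponential is not, by Jensen's inequality:

```python
import numpy as np

rng = np.random.default_rng(3)
ell = np.array([-0.4, -1.2, -0.7])   # within-group log-likelihood terms
pi = np.array([0.5, 0.5, 0.5])       # unit inclusion probabilities

reps = 50000
lin = np.empty(reps)
expd = np.empty(reps)
for r in range(reps):
    d = rng.random(3) < pi               # delta_j ~ Bernoulli(pi_j)
    s = np.sum((d / pi) * ell)           # weighted sum: design-unbiased for sum(ell)
    lin[r] = s
    expd[r] = np.exp(s)                  # weights inside exp: no longer unbiased

print(lin.mean(), ell.sum())             # close
print(expd.mean(), np.exp(ell.sum()))    # differ (upward bias, by Jensen)
```

Here $\mathbb{E}^{\pi}[\exp\{\sum_{j}(\delta_{j}/\pi_{j})\ell_{j}\}] = \prod_{j}\{(1-\pi_{j}) + \pi_{j}e^{\ell_{j}/\pi_{j}}\} \neq e^{\sum_{j}\ell_{j}}$, which is the separability failure described above.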
This design biasedness (with respect to $P^{\pi}$) is remedied for pseudo-maximum likelihood estimation by \citet{yi:2016} with their alternative formulation, \begin{align} \ell^{\pi}(\theta,\phi) &=\mathop{\sum}_{g=1}^{G_{S}} w_{g} \mathop{\sum}_{j < k; j,k\in S_{g}} w_{j,k\mid g} \ell_{gj,k}(\theta,\phi)\label{composite}\\ \ell_{gj,k}(\theta,\phi) &= \log \left\{\mathop{\int}_{u_{g} \in \mathcal{U}}f(y_{g j}\mid u_{g},\theta)f(y_{g k}\mid u_{g},\theta)f(u_{g}\mid\phi)du_{g} \right\}, \end{align} where $w_{j,k\mid g} \propto 1/\pi_{j,k\mid g}$ denotes the joint inclusion probability for units $(j,k)$, both nested in group, $g$, conditioned on the inclusion of group, $g$, in the observed sample. Equation~\ref{composite} specifies an integration over $u_{g}$ for \emph{each} $f(y_{g j}\mid u_{g},\theta)f(y_{g k}\mid u_{g},\theta)$ pair, which allows the design weights to enter in a linear construction outside of each integral. This set-up establishes linearity for inclusion of design weights, resulting in unbiasedness with respect to the distribution governing the taking of samples for computation of the pseudo-maximum likelihood estimate, though under the requirement that pairwise unit sampling weights be published to the data analyst or estimated by them. Yet, the marginalization of the random effects \emph{before} applying the group weight, $w_{g}$, fails to fully correct for the prior distribution for $u_{g}$. We show in the sequel that $\bm{\phi}$ is estimated with bias by \citet{yi:2016} due to this integration of the random effects being performed on the unweighted prior of $u_g$. Our method, by contrast, weights the prior for $u_{g}$ and performs the integration of $u_{g}$ numerically in our MCMC (after sampling $u_{g}$). 
\subsection{Mixed Effects Posterior Under \emph{Indirect} Sampling of Population Groups}\label{sec:indirect} Bayesian model specifications commonly employ group-level random effects (often for multiple simultaneous groupings) to parameterize a complex marginal covariance structure. Those groups are often not \emph{directly} sampled by the survey sampling design. We, next, demonstrate that weighting the prior contributions for the group-indexed random effects is \emph{still} required, even when the groups are not directly sampled, in order to achieve unbiased inference for the generating parameters of the random effects, $\phi$. Again, as throughout, we assume the group-indexed random effects are conditionally independent given generating parameter, $\phi$. We focus our result on a simple, single-stage sampling design, which may be readily generalized, where we reveal that the group-indexed survey sampling weights are constructed from unit marginal inclusion probabilities. Constructing sampled group weights from those of member units appeals to intuition because groups are included in the observed sample only if a member unit is selected under our single-stage survey sampling design. Suppose the same population set-up as for Theorem~\ref{th:direct}, with population units, $\ell \in U_{h}$, linked to groups, $h \in (1,\ldots,G_{U})$, where each unit, $(h,\ell)$, maps to $i \in (1,\ldots,N)$. We now construct a \emph{single}-stage sampling design that directly samples each $(h,\ell)$ unit with marginal inclusion probability, $\pi_{h \ell}$, governed by $P^{\pi}$. Group, $g \in G_{S}$, is \emph{indirectly} sampled based on whether there is \emph{any} linked unit, $(g, j)$, observed in the sample.
\begin{theorem}\label{th:indirect} The following pseudo-posterior estimator achieves approximately unbiased inference with respect to $P^{\pi}$, \begin{comment} \begin{dmath}\label{eq:sampindirmodel} f^{\pi}\left(\theta, \phi \vert \mathbf{y} \right) \propto \left[\mathop{\prod}_{g\in S}\mathop{\int}_{u_{g} \in \mathcal{U}} \left\{\left(\mathop{\prod}_{j \in S_{g}} f\left(y_{g j}\mid u_{g}, \theta \right)^{w_{g j}}\right) f\left(u_{g}\mid \phi \right)^{w_{g} = \frac{1}{N_{g}}\mathop{\sum}_{j\in S_{g}} w_{g j}} \right\} du_{g} \right]f(\theta)f(\phi), \end{dmath} \end{comment} \begin{equation}\label{eq:sampindirmodel} \begin{split} f^{\pi}\left(\theta, \phi \vert \mathbf{y} \right) \propto &\left[\mathop{\int}_{\mathbf{u} \in \mathcal{U}} \left\{ \textcolor{blue}{ \mathop{\prod}_{g\in S}} \left( \textcolor{red}{\mathop{\prod}_{j \in S_{g}} f\left(y_{g j}\mid u_{g}, \theta \right)^{w_{g j}}} \right)\right.\right.\\ &\left.\left.\vphantom{ \mathop{\int}_{u_{g} \in \mathcal{U}}\left(\mathop{\prod}_{j \in S_{g}} f\left(y_{g j}\mid u_{g}, \theta \right)^{w_{g j}}\right) } \textcolor{blue}{ f\left(u_{g}\mid \phi \right)^{w_g = \frac{1}{N_{g}}\displaystyle\mathop{\sum}_{j\in S_{g}}w_{g j}}} \right\} d \mathbf{u} \right]f(\theta)f(\phi), \end{split} \end{equation} where $w_{g j} \propto 1/\pi_{gj}$. \end{theorem} \begin{proof} We proceed as in Theorem~\ref{th:direct} by supposing the population $U$ of units and associated group-indexed random effects, $(u_{h})$, were fully observed. We first construct the likelihood for the fully observed population. 
\begin{align} f_{U}\left(\theta, \phi \vert \mathbf{y},\mathbf{u} \right) &\propto \left[\mathop{\prod}_{h = 1}^{G_{U}} \left(\mathop{\prod}_{\ell= 1}^{N_{h}} f\left(y_{h \ell}\mid u_{h}, \theta \right)\right) f\left(u_{h}\mid \phi \right) \right]f(\theta)f(\phi)\label{eq:popunitindirect}\\ &=\left[\mathop{\prod}_{h = 1}^{G_{U}} \mathop{\prod}_{\ell= 1}^{N_{h}} \left\{f\left(y_{h \ell}\mid u_{h}, \theta \right) f\left(u_{h}\mid \phi \right)^{\frac{1}{N_{h}}} \right\} \right]f(\theta)f(\phi)\label{eq:popindirect}. \end{align} We proceed to formulate the pseudo-likelihood for all possible random samples taken from $U$, $f^{\pi}_{U}(\cdot)$, governed jointly by $(P^{\pi},P_{\theta,\phi})$, from which we render the pseudo-likelihood for any sample, $f^{\pi}(\cdot)$, which is constructed to be unbiased with respect to the distribution governing the taking of samples for the population model of Equation~\ref{eq:popindirect} under $P^{\pi}$, \begin{align}\label{eq:popindmodel} f^{\pi}_{U}\left(\theta, \phi \vert \mathbf{y}, \mathbf{u}\right) &\propto \left[\mathop{\prod}_{h=1}^{G_{U}}\mathop{\prod}_{\ell \in U_{h}} \left\{f\left(y_{h \ell}\mid u_{h}, \theta \right)f\left(u_{h}\mid \phi \right)^{\frac{1}{N_{h}}}\right\}^{\frac{\delta_{h \ell}}{\pi_{h \ell}}} \right]f(\theta)f(\phi)\\ &=\left[\mathop{\prod}_{h=1}^{G_{U}}f\left(u_{h}\mid \phi \right)^{\frac{1}{N_{h}}\mathop{\sum}_{\ell\in U_{h}}\frac{\delta_{h \ell}}{\pi_{h \ell}}}\mathop{\prod}_{\ell \in U_{h}} f\left(y_{h \ell}\mid u_{h}, \theta \right)^{\frac{\delta_{h \ell}}{\pi_{h \ell}}}\right]f(\theta)f(\phi).
\end{align} This pseudo-posterior reduces to the following expression for the observed sample, \begin{equation} f^{\pi}\left(\theta, \phi \vert \mathbf{y}, \mathbf{u}\right) \propto \left[\mathop{\prod}_{g=1}^{G_{S}}f\left(u_{g}\mid \phi \right)^{\frac{1}{N_{g}}\mathop{\sum}_{j\in S_{g}}w_{g j}}\mathop{\prod}_{j \in S_{g}} f\left(y_{g j}\mid u_{g}, \theta \right)^{w_{g j}}\right]f(\theta)f(\phi), \end{equation} where $\pi_{g j} = P(\delta_{g j} = 1)$ (under $P^{\pi}$), $w_{g j} \propto 1/\pi_{g j}$ and $N_{g}$ denotes the number of units in the population linked to group, $g \in(1,\ldots,G_{S})$, observed in the sample. We set $w_{g} := 1/N_{g}\times \mathop{\sum}_{j\in S_{g}}w_{g j}$ and the result is achieved. \end{proof} This result derives from eliciting group-indexed weights from unit inclusion probabilities for units linked to the groups. While the resulting pseudo-posterior estimators look very similar across the two theorems, the sampling designs are very different from one another in that groups are not directly sampled in this latter case, which is revealed in their differing formulations for $w_{g}$. The averaging of unit weights formulation for $w_{g}$ naturally arises under the derivation of Equation~\ref{eq:popindirect} when sampling units, rather than groups, under a model that utilizes group-indexed random effects to capture the within-group dependence that naturally arises among units in the population. Exponentiating the augmented pseudo-likelihood of Equation~\ref{eq:popunitindirect} by the survey sampling weights anticipates the integration of the random effects to produce an observed data pseudo-likelihood. We may intuit this form for $w_{g}$ as proportional to the average importance of units nested in each group, $g$. It bears mention that in the indirect sampling case, there is no probability of group selection defined for a single-stage design.
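A minimal sketch of this double-weighted pseudo-posterior, written as an unnormalized log density over $(u,\theta,\phi)$ as one would hand to an MCMC sampler; we assume, for illustration, a Poisson data model with $\log\mu_{gj} = \beta_{0} + u_{g}$ and a Gaussian prior for $u_{g}$, with flat hyperpriors omitted (names and data are ours):

```python
import math
import numpy as np

def norm_logpdf(z, sd):
    return -0.5 * (z / sd) ** 2 - math.log(sd) - 0.5 * math.log(2.0 * math.pi)

def log_pseudo_posterior(u, beta0, sigma_u, data):
    """Unnormalized log f^pi(theta, phi, u | y): unit-weighted likelihood plus
    group-weighted prior for each u_g (illustrative Poisson data model)."""
    lp = 0.0
    for g, d in enumerate(data):  # d: dict with keys 'y', 'w_unit', 'w_g'
        mu = math.exp(beta0 + u[g])
        loglik = d["y"] * math.log(mu) - mu  # Poisson kernel, constants dropped
        lp += float(np.dot(d["w_unit"], loglik))     # w_gj on likelihood terms
        lp += d["w_g"] * norm_logpdf(u[g], sigma_u)  # w_g on the u_g prior
    return lp

# One observed group with a single unit, purely for illustration:
data = [{"y": np.array([1.0]), "w_unit": np.array([2.0]), "w_g": 1.5}]
print(log_pseudo_posterior([0.0], 0.0, 1.0, data))
```

Sampling $(u,\beta_{0},\sigma_{u})$ from this target and discarding $u$ performs the integration over $\mathbf{u}$ numerically, as described above.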
In practice, it is not common for the data analyst to know the population group sizes, $(N_{g})$, for the groups, $g\in (1,\ldots,G_{S})$, observed in the sample, so one estimates an $\hat{N}_{g}$ to replace $N_{g}$ in Equation~\ref{eq:sampindirmodel}. Under a single-stage sampling design where the groups are indirectly sampled through inclusions of nested units into the observed sample, we assume that we only have availability of the marginal unit inclusion sampling weights, $(w_{g j})$. The group population size, $N_{g}$, needed for the sum-weights method of Equation~\ref{eq:sampindirmodel}, may be estimated by $\hat{N}_{g} = \mathop{\sum}_{j = 1}^{n_{g}}w_{j\mid g}$. To approximate $w_{j\mid g}$, we first utilize the sum-probabilities result to estimate, $\hat{w}_{g} = 1/\hat{\tilde{\pi}}_{g}$, and proceed to extract $(w_{j\mid g})$ from $w_{gj} \approx w_{g}w_{j\mid g}$. If we invert the resultant group-indexed weight, $w_{g}= 1/N_{g}\times\mathop{\sum}_{j \in S_{g}}w_{g j}$, for the case where groups are not directly sampled, we may view the inverse of the group $g$ weight, $\tilde{\pi}_{g} = 1/w_{g}$, as a ``pseudo'' group inclusion probability, since we don't directly sample groups. One may envision other formulations for the pseudo group inclusion probabilities, $\tilde{\pi}_{g}$, that we may, in turn, invert to formulate alternative group-indexed survey sampling weights, $(w_{g})$. Please see Appendix \ref{sec:pseudoprobs}, where we develop other methods, in addition to sum-weights, for computing $\tilde{\pi}_{g}$. In application, we normalize the by-group survey sampling weights, $(w_{g})_{g = 1,\ldots,G_{S}}$, to sum to the number of observed groups in the sample, $G_{S}$, and normalize the unit weights, $(w_{gj})_{j = 1,\ldots,n_{g}}$, to sum to the overall sample size, $n$.
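The sum-weights estimate of $w_{g}$ and the normalizations just described can be sketched as small helpers (function names are ours, not the paper's):

```python
import numpy as np

def group_weight_sum_weights(w_gj, w_j_given_g):
    """Sum-weights method: hat N_g = sum_j w_{j|g}, then
    w_g = (1 / hat N_g) * sum_j w_{gj}."""
    N_hat = float(np.sum(w_j_given_g))
    return float(np.sum(w_gj)) / N_hat

def normalize_weights(group_w, unit_w_by_group):
    """Normalize group weights to sum to G_S and unit weights to sum to n,
    the normalizations applied before estimation."""
    group_w = np.asarray(group_w, dtype=float)
    group_w = group_w * (len(group_w) / group_w.sum())
    n = sum(len(w) for w in unit_w_by_group)
    total = sum(float(np.sum(w)) for w in unit_w_by_group)
    unit_w_by_group = [np.asarray(w, dtype=float) * (n / total)
                       for w in unit_w_by_group]
    return group_w, unit_w_by_group
```

For example, two sampled units with $w_{gj} = 4$ each and $w_{j\mid g} = 2$ each give $\hat{N}_{g} = 4$ and $w_{g} = 2$.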
These normalizations regulate uncertainty quantification for posterior estimation of $(u_{g})$ and the global parameters, $(\phi,\theta)$, by encoding an effective number of observed groups and units, which, in turn, regulates the estimated pseudo-posterior variance of $(\phi,\theta)$. (In practice, these normalizations often produce somewhat optimistic credibility intervals due to dependence induced by the survey sampling design; \citet{2018arXiv180711796W} provide an algorithm that adjusts pseudo-posterior draws to incorporate this dependence.) We refer to our proposed procedure of weight exponentiating both the data likelihood contributions and the prior distributions of the $(u_{g})$ as ``double-weighting'', as mentioned in the introduction, to be contrasted with the usual approach of ``single-weighting'' of \citet{2018dep}, developed for models with global effects parameters. \begin{comment} \subsection{Frequentist Consistency of the Pseudo-Posterior Estimators} The frequentist consistency of our estimators of Theorems~\ref{th:direct} and \ref{th:indirect} with respect to the joint distribution of population generation and the taking of a sample, $(P_{\theta_{0},\phi_{0}},P^{\pi})$, is readily shown under the same conditions and contraction rate as specified in \citet{2018dep}. Let $\nu \in \mathbb{Z}^{+}$ index a collection of populations, $U_{\nu}$, such that $N_{\nu^{'}} > N_{\nu}$, for $\nu^{'} > \nu$. $\nu$ controls the asymptotics such that for each increment of $\nu$ we formulate new units and generate $(y_{\nu},\pi_{\nu})$. Let $p_{\lambda}$ denote a population model density with respect to parameters, $\lambda$.
Under direct sampling of groups, we construct the estimator, \begin{equation}\label{eq:theory1} \mathop{\prod}_{h=1}^{G_{\nu}}p_{\phi}\left(u_{\nu h}\right)^{\frac{\delta_{\nu h}}{\pi_{\nu h}}} \times \prod_{\ell = 1}^{N_{\nu g}}p_{\theta}\left(y_{\nu h\ell}\mid u_{\nu h}\right)^{\frac{\delta_{\nu h \ell}}{\pi_{\nu h \ell}}} , \end{equation} which may be readily shown achieves the contraction rate specified in Theorem 1 of \citet{2018dep}. We note that Condition (A4) of \citet{2018dep}, which imposes an upper bound on the inverse of the marginal unit inclusion probability, $1/\pi_{\nu h\ell} = 1/(\pi_{\nu\ell\mid h}\pi_{\nu h}) < \gamma$ serves to bound $(\pi_{\nu h},\pi_{\nu\ell \mid h}) > 0 $ (away from $0$). Condition (A5.2) constructs blocks in which unit pairwise dependencies induced by the survey sampling design are allowed to never attenuate with increasing $\nu$, so long as the blocks grow at less than $\mathcal{O}(N_{\nu})$ and the dependence among units induced by the sampling design \emph{between} the blocks attenuates to $0$. We meet this condition by defining the population groups, $h \in (1,\ldots,G_{U})$, as the blocks, so that asymptotic dependence within the groups is allowed, but asymptotic independence must be achieved between groups. This is a very weak condition that is met by nearly all sampling designs used, in practice, under direct sampling of groups. Under the indirect sampling of groups, we formulate the estimator, \begin{equation}\label{eq:theory2} \mathop{\prod}_{h=1}^{G_{\nu}}\prod_{\ell = 1}^{N_{\nu g}}\left[p_{\phi}\left(u_{\nu h}\right)^{\frac{1}{N_{\nu h}}} \times p_{\theta}\left(y_{\nu h\ell}\mid u_{\nu h}\right)\right]^{\frac{\delta_{\nu h \ell}}{\pi_{\nu h \ell}}}. \end{equation} This estimator is only a slight generalization from that of \citet{2018dep} and achieves consistency under the specified conditions. 
We note that the prior contributions, $p_{\phi}(\cdot)$, in both Equations~\ref{eq:theory1} and \ref{eq:theory2} are treated as likelihoods under a population generating model governed by $\phi$. \end{comment} \section{Simulation study}\label{simulation} Our simulation study in the sequel focuses on a count data response rather than the usual continuous response, both because count data are the most common data type for the employment data collected by BLS and because our Bayesian construction is readily estimable under any response data type. We generate a count data response variable, $y$, for a population of size, $N = 5000$ units, where the logarithm of the generating mean parameter, $\mu$, is constructed to depend on a size predictor, $x_{2}$, in both fixed and random effects terms; in this way, we construct both fixed and random effects to be informative, since our proportion-to-size survey sampling design sets unit inclusion probabilities to be proportional to $x_{2}$. We generate a population of responses using, \begin{align}\label{eq:margmodel} y_{i} &\sim \mathcal{P}\left(\mu_{i}\right) \nonumber\\ \log\mu_{i} &= \beta_{0} + x_{1i}\beta_{1} + x_{2i}\beta_{2} + \left[1,x_{2i}\right]\bm{\gamma}_{h\{i\}}, \end{align} where $\mathcal{P}(\cdot)$ denotes the Poisson distribution, $x_{1i} \sim \mathcal{N}\left(0,1\right)$ is the inferential predictor of interest to the data analyst and $x_{2i} \sim \mathcal{E}\left(1/2.5\right)$ (where $\mathcal{E}(\cdot)$ denotes the Exponential distribution) is the size variable, which is generated from a skewed distribution to reflect real-world survey data, particularly for employment counts. The expression, $h\{i\}$, denotes the group $h \in (1,\ldots,G_{U})$ linked to unit $i\in (1,\ldots,N)$. We generate the $2\times 1$ vectors, $\bm{\gamma}_{h} \sim \mathcal{N}_{2}\left(\mathbf{0},\mbox{diag}(\bm{\sigma})\, R\, \mbox{diag}(\bm{\sigma})\right)$, for a $2\times 2$ correlation matrix, $R$, and $\bm{\sigma} = (1.0,0.5)^{'}$.
We set $R = \mathbb{I}_{2}$, where $\mathbb{I}_{2}$ denotes the identity matrix of size $2$. Finally, we set $\bm{\beta} = \left(\beta_{0}, \beta_{1},\beta_{2}\right)^{'} = (0.0, 1.0, 0.5)^{'}$, where we choose the coefficient of $x_{2}$ to be lower than that for $x_{1}$ so that the design is moderately informative, which is conservative. The allocation of units, $i = 1,\ldots,N$, to groups, $h = 1,\ldots,G_{U}$, is performed by sorting the units, $i$, based on the size variable, $x_{2}$. This allocation procedure constructs size-based groups that accord well with survey designs that define groups as geographic regions, for convenience, where more homogeneity is expected within groups than between groups. The population size for each group, $N_{h}$, is fixed under direct sampling of groups; for example, $N_{h} = 4$ in the case of $G_{U} = 1250$, which produces $N = 5000$ units, so the number of population units per group is constructed as $(4, 10, 25, 50, 100)$ for population group sizes, $G_{U} = (1250,500,200,100,50)$, respectively. \begin{comment} In the case where we conduct an indirect sampling of groups by directly sampling units in a single-stage pps design, the number of population units per group, $N_{h}$, is set to randomly vary among the $G_{U}$ population groups using a log-normal distribution centered on the $(4, 10, 25, 50, 100)$ units per group used in the case of direct sampling, with a variance of $0.5$. In the case of $G_{U} = 1250$, this produces a right-skewed distribution of the number of units in each group, ranging from approximately $1$ to $40$ units per group and the total number of population units per group is restricted to sum to $N = 5000$. We sort the groups such that groups with larger-sized units are assigned relatively fewer units and groups with smaller-sized units are assigned relatively more units.
This set-up of assigning more units to smaller-sized groups mimics the estimation of employment counts among business establishments analyzed in our application in the sequel, where there are relatively few establishments with a large number of employees (e.g., $> 50$) (which is the size variable), while there are, by contrast, many more establishments (small businesses) that have a small number of employees (e.g., $< 10$). \end{comment} Although the population response, $y$, is generated with $\mu = f(x_1,x_2)$, we estimate the marginal model for the \emph{population}, $\mu = f(x_1)$, to which we will compare estimated results on samples taken from the population to assess bias and mean-squared error (MSE). We use $x_{2}$ in the generation of the population values for $y$ because the survey sampling inclusion probabilities are set proportionally to $x_{2}$, which instantiates the informativeness of the sampling design. In practice, however, the analyst does not have access to $x_{2}$ for the population units or, more generally, to all the information used to construct the survey sampling distribution that sets inclusion probabilities for all units in the population. The marginal estimation model under exclusion of the size variable, $x_{2}$, is specified as: \begin{align} \label{eq:estmodel} y_{i} &\sim \mathcal{P}\left(\mu_{i}\right) \nonumber\\ \log\mu_{i} &= \beta_{0} + x_{1i}\beta_{1} + u_{h\{i\}}\\ u_{h} & \sim \mathcal{N}\left(0,\sigma_{u}^{2}\right) \nonumber \end{align} where now $u_{h}$ is an intercept random effect, $h = 1,\ldots,G_{U}$. Our goal is to estimate the global parameters, $(\beta_{0},\beta_{1},\sigma_{u}^{2})$, from informative samples of size, $n = 500$, taken from the population (of size, $N = 5000$).
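The population generation just described can be sketched as follows (a hypothetical re-implementation, not the paper's simulation code; we clip $\log\mu$ purely to keep the Poisson draws numerically stable in the sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
N, G_U = 5000, 500                        # population units and groups
beta = np.array([0.0, 1.0, 0.5])          # (beta_0, beta_1, beta_2)
sigma = np.array([1.0, 0.5])              # std devs of random intercept/slope

x1 = rng.normal(size=N)                   # inferential predictor
x2 = rng.exponential(scale=2.5, size=N)   # size variable, E(1/2.5)

# Allocate units to groups by sorting on the size variable x2.
order = np.argsort(x2)
group = np.empty(N, dtype=int)
group[order] = np.repeat(np.arange(G_U), N // G_U)

gamma = rng.normal(size=(G_U, 2)) * sigma  # R = I_2: independent effects
log_mu = (beta[0] + x1 * beta[1] + x2 * beta[2]
          + gamma[group, 0] + x2 * gamma[group, 1])
y = rng.poisson(np.exp(np.clip(log_mu, None, 20.0)))  # sketch-only guard
```

The sorted allocation yields exactly $N/G_{U}$ units per group, mirroring the fixed $N_{h}$ used under direct sampling of groups.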
We utilize the following simulation algorithm: \begin{enumerate} \item Each Monte Carlo iteration of our simulator (that we run for $B = 300$ iterations) generates the population $(y_{i},x_{1i},x_{2i})_{i=1}^{N}$ from Equation~\ref{eq:margmodel}, on which we estimate the marginal population model of Equation~\ref{eq:estmodel} to determine the population true values for $(\mu_{i},\sigma_{u}^{2})$. \item Our simulation study focuses on the direct sampling of groups, followed by a sub-sampling of units within the selected groups. We use a proportion-to-size design to directly sample from the $G_{U}$ groups in the first stage, where the group inclusion probabilities are $\pi_{h} \propto \frac{1}{N_{h}}\mathop{\sum}_{i\in U_{h}}x_{2i}$. We draw a sample of groups in the first stage and observe $G_{S} < G_{U}$ groups. In particular, a fixed sample of total size $n = 500$ is taken, where the number of groups sampled is $G_{S} = n/(Nf) \times G_{U}$. \item The second stage size-based sampling of units is accomplished with inclusion probabilities, $\pi_{\ell\mid g} \propto x_{2\ell}$ for $\ell \in (1,\ldots,N_{g})$. We perform a further sub-sampling of $f\%$ of population units in the selected $G_{S}$ groups. \item Estimation is performed for $(\beta_{0},\beta_{1},\sigma_{u}^{2})$ from the observed sample of $n = 500$ under three alternatives: \begin{enumerate} \item Single-weighting, where we solely exponentiate the likelihood contributions for $(y_{g j})$ by sampling weights, $(w_{g j} \propto 1/\pi_{g j})$ (and don't weight the prior for the random effects, $(u_{g})$); \item Double-weighting, where we exponentiate \emph{both} the likelihood for the $(y_{g j})$ by sampling weights, $(w_{g j})$, and also exponentiate the prior distribution for $u_{g}$ by weight, $w_{g} \propto 1/\pi_{g}$ (for each of $g = 1,\ldots,G_{S}$).
We compute the marginal unit weights used in both single- and double-weighting as $w_{g j} \propto 1/\pi_{g j}$, where $\pi_{g j}$ is the marginal inclusion probability, formulated as $\pi_{g j} = \pi_{g}\pi_{j\mid g}$ for $j = 1,\ldots,n_{g}$ for each group, $g \in (1,\ldots,G_{S})$, in the case of direct sampling of groups. \item SRS, where in the case of direct sampling of groups, we take a simple random (equal probability) sample of groups in a first stage, followed by a simple random sample of units within selected groups. We take the SRS sample from the same population as is used to take the two-stage, pps informative sample. The inclusion of model estimation under (a non-informative) SRS is intended to serve as a gold standard against which we may judge the bias and MSE performance of single- and double-weighting under informative sampling. \end{enumerate} \end{enumerate} \begin{comment} Our second simulation study conducts an indirect sampling of groups by directly sampling units in a single stage design such that groups are included in the sample to the extent that at least one of their member units is sampled. As in the case of the direct sampling of groups, a proportion to size design is used with the marginal inclusion probability, $\pi_{i} \propto x_{2i},~i = 1,\ldots,N$. Group inclusion probabilities for double-weighting are composed using the sum-weights method as described in Section~\ref{sec:indirect}. Please see Appendix \ref{sec:simstudy} for description for comparative simulation results for other methods (in addition to sum-weights) for computing pseudo inclusion probabilities and associated weights for estimation. \end{comment} We use Stan \citep{stan:2015} to estimate the double-weighted mixed effects model of Equation~\ref{eq:sampcompletemodel} for the specific case of the Poisson likelihood that we use in our simulations and the application that follows.
We fully specify our Stan probability model for the Poisson likelihood under double-weighting in Appendix~\ref{sec:stanscript}. In particular, we specify a multivariate Gaussian joint prior distribution for the $K\times 1$ vector of $\bm{\beta}$ coefficients, with a vector of standard deviation parameters, $\sigma_{\beta}$, drawn from a truncated Cauchy prior. The associated correlation matrix for the multivariate Gaussian prior for $\bm{\beta}$ is drawn from a prior distribution that is uniform over the space of $K \times K$ correlation matrices. The prior for the standard deviation parameter of the random effects, $\sigma_{u}$, is also specified as a truncated Cauchy distribution. These prior distributions are designed to be weakly informative by placing large probability mass over a large support region, while expressing a mode to promote regularity and a stable posterior geometry that is readily explored under Stan's Hamiltonian Monte Carlo scheme. The single-weighting case is achieved as a special, simplified case of the double-weighting model. Please see Appendix \ref{sec:simstudy} for results of a second simulation study under the \emph{indirect} sampling of groups. \subsection{Informative Random Effects Under Direct Sampling of Groups} \begin{figure} \caption{Each plot panel compares distributions of $\overline{(y-\exp(x\beta))}_{g}$ for each of a synthetic population and a single sample taken from that population, faceted by a sequence for the number of population groups, $G_{U}$. The resulting violin plots present each distribution within $95\%$ quantiles.} \label{fig:biasgrp} \end{figure} \afterpage{\FloatBarrier} To make concrete the notion of informative random effects, we generate a single population and subsequently take a single, informative sample of groups from that population under a proportion-to-size design, using the procedures for population generation and the direct sampling of groups, described above.
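The two-stage, proportion-to-size sampling step can be sketched as follows (an illustrative Poisson-sampling, i.e., independent Bernoulli, stand-in for a pps design; all function and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(11)

def pps_two_stage(x2, group, n_groups, f=0.5):
    """Stage 1: sample groups with pi_h proportional to the group mean of x2.
    Stage 2: sample about f of member units with pi_{j|g} proportional to x2."""
    G_U = group.max() + 1
    gbar = np.array([x2[group == h].mean() for h in range(G_U)])
    pi_h = np.minimum(1.0, n_groups * gbar / gbar.sum())
    sampled = np.flatnonzero(rng.random(G_U) < pi_h)

    units, w_unit, w_group = [], [], []
    for g in sampled:
        idx = np.flatnonzero(group == g)
        m = max(1, int(round(f * len(idx))))
        pi_j = np.minimum(1.0, m * x2[idx] / x2[idx].sum())
        keep = rng.random(len(idx)) < pi_j
        units.append(idx[keep])
        w_unit.append(1.0 / (pi_h[g] * pi_j[keep]))  # w_gj = 1/(pi_g pi_{j|g})
        w_group.append(1.0 / pi_h[g])                # w_g = 1/pi_g
    return sampled, units, w_unit, w_group
```

The returned $(w_{g j})$ and $(w_{g})$ are exactly the inputs required for single- and double-weighting under the direct sampling of groups.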
The size for each population is $N = 5000$ and the sample size is $n = 500$. We, next, average the item responses, $y$, in each group after centering by removing the fixed effects observed in the sample (excluding $x_{2}$). For illustration, the computed $\left(\overline{(y-\exp(x\beta))}_{g}\right)$ will be used as a naive indicator of the distribution of the random effects, $\exp(u_{g})$. Each plot panel in Figure~\ref{fig:biasgrp} compares the distributions of this group-indexed centered mean statistic between the generated population and the resulting informative sample for a population. A collection of plot panels for a sequence of populations with $G_{U} = (1250, 500, 100)$ population groups is presented, from left-to-right. Fixing a plot panel, each violin distribution plot includes horizontal lines for the $(0.25,0.5,0.75)$ quantiles. We see that, under a proportion-to-size design, the distributions for the centered, group mean statistic in the sample are different from those for the underlying populations and skew larger. This upward skewness in each sample indicates that performing population estimation on the observed sample will induce bias for the random effects variance, $\phi$, without correction of the group-indexed random effects distribution in the sample, which we accomplish by weighting the distribution over random effects back to that for the population. \begin{comment} We begin our simulation study by assessing estimation bias for $\phi$, the generating hyperparameter of the random effects from Equation~\ref{eq:sampdirmodel}, that arises from informative random effects under the canonical case of unit-indexed random effects, where $G_{U} = N$ (the number of units in the population, $U$), such that the number of units per group is $N_{h} = 1$ for all groups, which reduces to setting $\pi_{i} \propto x_{2i}$, a single stage, proportion-to-size design, with size variable, $x_{2} \sim \mathcal{E}(1)$.
We compare the use of single- and double-weighting utilizing bias and MSE statistics (based on the true population values from the marginal model) for the generating random effects variance, $\phi \equiv\sigma_{u}^{2}$ and fixed effects, $\left(\beta_{0}, \beta_{1}\right)$. Our results show in Figure~\ref{fig:unitre} and associated Table~\ref{tab:unitre} reveal a strong bias under single-weighting for the generating random effects variance, $\sigma_{u}^{2}$, that is substantially removed under double-weighting. This is a consequential result because, as with fixed effects, we lack \emph{a priori} knowledge of the informativeness of the sampling design for any data set acquired under a complex survey sampling design, so this result suggests the necessity to exponentiate the prior contribution for each random effect, $u_{i}$. We also note a bias reduction in the intercept parameter, $\beta_{0}$, for double-weighting. Bias may arise in $\beta_{0}$ due to a location non-identifiability when the mean of the random effects, $(u_{i})$, are biased away from $0$. \begin{figure} \caption{Distributions and quantiles (25\%, 50\%, 75\%) of Posterior Means for random effects generating variance, $\sigma_{u}^{2}$ for Unit-Indexed Random Effects model. Double- and Single-weighting under informative sampling across B = \# simulations compared to simple random sampling (SRS).} \label{fig:unitre} \end{figure} \afterpage{\FloatBarrier} \rowcolors{2}{gray!6}{white} \begin{table}[!h] \caption{\label{tab:unitre}Bias and Mean Square Error of Posterior Means for Unit-Indexed Random Effects model. 
Double- and Single-weighting under informative sampling across B = 300 iterations compared to simple random sampling (SRS).} \centering \begin{tabular}[t]{r|r|r|l|l} \hiderowcolors \toprule $\beta_{0}$ & $\beta_{1}$ & $\sigma_{u}^{2}$ & Statistic & Method\\ \midrule \showrowcolors 0.15 & 0.02 & 0.98 & bias & Single-weighting\\ 0.03 & 0.01 & 1.15 & MSE & Single-weighting\\ \hline 0.01 & -0.03 & -0.12 & bias & Double-weighting\\ 0.01 & 0.01 & 0.05 & MSE & Double-weighting\\ \hline -0.04 & -0.01 & 0.06 & bias & SRS\\ 0.01 & 0.01 & 0.13 & MSE & SRS\\ \bottomrule \end{tabular} \end{table} \rowcolors{2}{white}{white} \afterpage{ } \end{comment} \subsection{Varying Number of Population Groups, $G_{U}$}\label{sim:varygroups} We assess bias for a population model constructed using group-indexed random effects, where each group links to multiple units. Our results presented in Figure~\ref{fig:groupre} compare our double-weighting method to single-weighting in the case where we conduct a proportion-to-size \emph{direct} sampling of groups and, subsequently, sub-sample $f = 50\%$ of member units within groups. We include an SRS sample of groups and units within selected groups taken from the same population. The results reveal that bias is most pronounced in the case of a relatively larger number of groups, e.g., $G_{U}=(1250,500)$ for $N=5000$, where each group links relatively few units. By contrast, as the number of groups decreases, fixing the population size, $N$, the number of units linking to each group increases, which will have the effect of reducing variation in the resulting sampling weights among the groups until, in the limit, there is a single group (with $\pi_{g} = 1$). The relative bias of single-weighting, therefore, declines as the number of groups declines (and units per group increases), such that residual bias in $\sigma_{u}^{2}$ for $G_{U}=100$ is dominated by increasing variability (because we sample fewer groups) for all methods.
We, nevertheless, detect a small decrease in bias when we use double-weighting. We include Table~\ref{tab:groupre}, which presents the bias in the posterior mean estimates of $(\beta_{0},\beta_{1},\sigma_{u}^{2})$ and confirms the reduction in bias for $\sigma_{u}^{2}$ under double-weighting for $G_{U} = 100$. Our set-up may be viewed as more likely to induce bias because we assign units to groups by sorting units on the values of the size variable, $x_{2} \sim \mathcal{E}(1/2.5)$, for allocation to groups. Our proportion-to-size sampling design selects groups based on the mean size variable for each group, $\bar{x}_{2}$. This set-up will tend to accentuate the variance in the resulting group-indexed size variable (and, hence, the resulting survey sample inclusion probabilities) as compared to a random allocation of units to groups. Our simulation set-up is, nevertheless, realistic because many surveys are characterized by relatively homogeneous clusters; for example, the Current Employment Statistics survey (administered by the Bureau of Labor Statistics) uses geographically-indexed metropolitan statistical areas (MSAs), which may be viewed as clusters, and the more highly populated MSAs tend to contain larger (higher number of employees) establishments. \begin{figure} \caption{Direct sampling of groups: Monte Carlo distributions and quantiles (0.5\%, 25\%, 50\%, 75\%, 99.5\%) for $B = 300$ iterations of differences between Posterior Means and truth under Single- and Double-weighting schema as compared to SRS for varying number of random effect groups, $G_{U}$, under $x_{2}\sim \mathcal{E}(1/2.5)$ for $N = 5000$ and $n=500$.
Parameters $(\beta_{0},\beta_{1},\sigma_{u}^{2})$ along columns and number of population groups, $G_{U}$, in descending order along the rows.} \label{fig:groupre} \end{figure} \afterpage{ } \rowcolors{2}{gray!6}{white} \begin{table}[!h] \caption{\label{tab:groupre}Difference of Single- and Double-weighting with Increasing Number of Groups, $G_{U}$, under $x_{2}\sim \mathcal{E}(1/2.5)$} \centering \begin{tabular}[t]{r|r|r|l|l|r} \hiderowcolors \toprule $\beta_{0}$ & $\beta_{1}$ & $\sigma_{u}^{2}$ & Statistic & Method & $G_{U}$\\ \midrule \showrowcolors 0.03 & 0.03 & 0.25 & bias & Single-weighting & 1250\\ -0.06 & 0.03 & 0.06 & bias & Double-weighting & 1250\\ -0.05 & 0.01 & 0.02 & bias & SRS & 1250\\ \hline 0.15 & -0.13 & 0.19 & bias & Single-weighting & 500\\ 0.04 & -0.13 & 0.02 & bias & Double-weighting & 500\\ 0.04 & -0.12 & 0.04 & bias & SRS & 500\\ \hline 0.10 & -0.02 & 0.26 & bias & Single-weighting & 200\\ -0.01 & -0.02 & 0.09 & bias & Double-weighting & 200\\ -0.02 & 0.00 & 0.05 & bias & SRS & 200\\ \hline 0.09 & -0.09 & 0.31 & bias & Single-weighting & 100\\ -0.01 & -0.10 & 0.11 & bias & Double-weighting & 100\\ -0.03 & -0.10 & 0.07 & bias & SRS & 100\\ \hline -0.06 & -0.03 & 0.44 & bias & Single-weighting & 50\\ -0.14 & -0.03 & 0.29 & bias & Double-weighting & 50\\ -0.13 & -0.02 & 0.27 & bias & SRS & 50\\ \bottomrule \end{tabular} \end{table} \rowcolors{2}{white}{white} \afterpage{\FloatBarrier} We next compare our double-weighting approach to the best available method in the literature, the pairwise composite likelihood method of \citet{yi:2016}, specified in Equation~\ref{composite}, which we refer to as ``pair-integrated''. We compare both methods in the case of relatively few units linked to each group (e.g., $G_{U} = 500, 1250$) because \citet{yi:2016} demonstrate superior bias removal properties as compared to \citet{rh:1998} in this set-up.
We exclude smaller values of $G$ because, as the number of individuals within each group grows, the number of pairwise terms to include in the pair-integrated method grows quadratically. Our simulation set-up conducts a first-stage proportion-to-size sampling of groups in exactly the same manner as the previous simulation study. We additionally include an SRS of groups and, in turn, units within groups, as a benchmark. The custom R code to implement the ``pair-integrated'' point estimation can be found in Appendix \ref{sec:pairwisecode}. Figure \ref{fig:groupre_pair} presents the Monte Carlo distributions for parameter estimates, where the columns denote parameters, $(\beta_{0},\beta_{1},\sigma_{u}^{2})$, and the rows denote the number of population groups, $G_{U}$. The results demonstrate that double-weighting leads to unbiased estimation of both the fixed effects parameters and the random effects variance relative to using a two-stage SRS sample. By contrast, the pair-integrated method demonstrates both bias and elevated variability for the random effects variance, $\sigma_{u}^{2}$, which is exactly the set-up where it is expected to perform relatively well. This bias for pair-integrated in the random effects variance also induces bias for the fixed effects intercept, $\beta_{0}$. As mentioned in Section~\ref{sec:direct}, the pair-integrated method integrates out the random effects (from the unweighted prior distribution) \emph{before} applying the group weights, which fails to fully correct for the informative sampling of groups. Our method, by contrast, weights the prior distribution for the random effects and integrates them out numerically through our MCMC (which samples the random effects). It bears mention that \citet{yi:2016} only evaluate informative sampling of units within groups, but not the informative sampling of the groups themselves, which may be why the estimation bias for $\sigma_{u}^{2}$ was not discovered.
\begin{figure} \caption{Direct sampling of groups: Monte Carlo distributions and quantiles (0.5\%, 25\%, 50\%, 75\%, 99.5\%) for $B = 300$ iterations within a $99\%$ interval for difference of Posterior Means and truth under Double-weighting, Pair-integrated estimation and Simple Random Sampling (SRS). Parameters $(\beta_{0},\beta_{1},\sigma_{u}^{2})$ (columns); Varying number of random effects, $G_{U}$ (rows), under $x_{2}\sim \mathcal{E}(1/2.5)$, for a population of $N = 5000$.} \label{fig:groupre_pair} \end{figure} \afterpage{ } \begin{comment} \subsection{Sampling Units Rather than Groups (under Indirect Sampling of Groups)}\label{sim:indirectsamp} This next Monte Carlo simulation study implements case $3$ from Section~\ref{doublepseudo} that addresses sampling designs where the population group identifiers used to generate the response variable of interest are \emph{not} directly sampled by the survey sampling design. The synthetic population (for each Monte Carlo iteration) utilizes group-indexed random effects under size-based assignment of population units to groups under each alternative for total number of groups, $G_{U}$, as in Section~\ref{sim:varygroups}. In this study, however, we randomly vary the number of population units assigned to each group with the mean values for each $G_{U}$ set to be equal to the fixed number of units per group used in Section~\ref{sim:varygroups}. We allocate a relatively higher number of units to those groups with smaller-sized units under each group size, $G_{U}$, to mimic our application. The survey sampling design employed here is a \emph{single-stage}, proportion-to-size design that directly samples the units (not the groups) with unit inclusion probabilities proportional to the size variable, $x_{2} \sim \mathcal{E}(1/3.5)$. 
Each plot panel in Figure~\ref{fig:inducegrpre} shows the distributions over Monte Carlo simulations for estimates of the generating variance, $\sigma_{u}^{2}$, of the random effects, $(u_{g})$, under each of the following weighting methods: single-weighting, sum-weights double-weighting (Equation~\ref{eq:sampindirmodel}), and SRS (no weighting under simple random sampling of the same population from which the pps sample was taken). The panels are ordered from left-to-right for a sequence of $G_{U} = \left(1250,500,200,100,50\right)$. As earlier mentioned, the number of population units per group, $N_{h}$, is set to randomly vary under a lognormal distribution, though there will on average be more units sampled per group from synthetic populations with a smaller number of population groups, $G_{U}$, than there will be units per group sampled under a larger number of population groups. The sum-weights method for accomplishing double-weighting generally performs better than single-weighting for all group sizes. When the number of population groups, $G_{U}$, is small, however, noise induced by sampling error results in double-weighting under-performing compared to SRS. Yet, as the number of units per group increases with $G_{U} = 500$, we see that sum-weights outperforms SRS, which is expected because the pps design is generally more efficient such that the contraction rate of the estimator on the truth will be faster for pps (occur at a lower sample size). Table~\ref{tab:inducegrpre} presents the relative bias, defined as the bias divided by the true value, and the normalized root MSE, defined as the square root of MSE divided by the true value, for the regression coefficients, $(\beta_{0},\beta_{1})$, to accompany Figure~\ref{fig:inducegrpre}. 
We show the relative bias and normalized RMSE quantities in this study because the true values of the marginal model, $\sigma_{u}^{2} = \left(0.578,0.349,0.216,0.169,0.136\right)$, varies over the sequence of sizes for $G_{U}$. As in the case of direct sampling of groups, there is an association between the amount bias in estimation of $\sigma_{u}^{2}$ and in the intercept coefficient, $\beta_{0}$. \begin{figure} \caption{Indirect sampling of groups: Monte Carlo distributions and quantiles (0.5\%, 25\%, 50\%, 75\%, 99.5\%) for $B = 300$ iterations for $\sigma_{u}^{2}$ for difference of posterior means and truth under alternative weighting schema for varying number of groups, $G$. Population of $N = 5000$ and sample of $n = 500$. In each plot panel, from left-to-right is the Single-weighting method, Sum-weights double-weighting method of Equation~\ref{eq:sampindirmodel} and Simple-random sampling (SRS).} \label{fig:inducegrpre} \end{figure} \afterpage{\FloatBarrier} \rowcolors{2}{gray!6}{white} \begin{table}[!h] \caption{\label{tab:inducegrpre}Normalized Bias and RMSE for Double-weighting as compared to Single-weighting and SRS for Increasing Units Per Random Effect Under \emph{Indirect} Sampling of Groups for $B = 300$ iterations with population of $N = 5000$ and sample of $n = 500$.} \centering \begin{adjustbox}{width=0.8\textwidth} \small \begin{tabular}{l|r|r|r|r|r} \toprule \multicolumn{2}{c}{ } & \multicolumn{2}{c}{Relative Bias} & \multicolumn{2}{c}{Normalized RMSE} \\ \cmidrule(l{3pt}r{3pt}){3-4} \cmidrule(l{3pt}r{3pt}){5-6} Model & G & beta\_0 & beta\_1 & beta\_0 & beta\_1\\ \midrule Single-weighting & 1250 & -0.55 & 0.00 & 0.58 & 0.11\\ \rowcolor{gray!6} SRS & 1250 & -0.58 & 0.00 & 0.61 & 0.11\\ \rowcolor{gray!6} Sum-weights & 1250 & -0.52 & -0.01 & 0.55 & 0.11\\ \addlinespace \rowcolor{gray!6} Single-weighting & 500 & -0.45 & 0.01 & 0.48 & 0.14\\ SRS & 500 & -0.46 & 0.00 & 0.49 & 0.11\\ Sum-weights & 500 & -0.41 & 0.00 & 0.45 & 0.13\\ \addlinespace 
Single-weighting & 200 & -0.26 & 0.00 & 0.34 & 0.12\\ \rowcolor{gray!6} SRS & 200 & -0.29 & 0.00 & 0.36 & 0.11\\ \rowcolor{gray!6} Sum-weights & 200 & -0.23 & 0.00 & 0.33 & 0.12\\ \addlinespace \rowcolor{gray!6} Single-weighting & 100 & -0.13 & -0.02 & 0.30 & 0.13\\ SRS & 100 & -0.19 & 0.00 & 0.33 & 0.12\\ Sum-weights & 100 & -0.13 & -0.02 & 0.30 & 0.13\\ \addlinespace Single-weighting & 50 & -0.07 & 0.00 & 0.39 & 0.11\\ \rowcolor{gray!6} SRS & 50 & -0.08 & -0.01 & 0.39 & 0.11\\ \rowcolor{gray!6} Sum-weights & 50 & -0.07 & 0.00 & 0.39 & 0.11\\ \bottomrule \end{tabular} \end{adjustbox} \end{table} \rowcolors{2}{white}{white} \afterpage{\FloatBarrier} We note that under $G_{U} = 50$ the results in Figure~\ref{fig:inducegrpre} show that the performance for the sum-weights double-weighting methods collapses onto those for SRS and single-weighting. This result is explained by the de facto inclusion of all $G_{U} = 50$ groups in \emph{every sample} across the Monte Carlo iterations. The inclusion of all population groups in the observed sample under $G_{U} = 50$ is not surprising as there are many units per group, so it is likely that at least one unit in \emph{every} group will be sampled in each Monte Carlo iteration. In the case that every sample contains a member of every group, there is no weighting of the random effects distributions needed and we see in Figure~\ref{fig:inducegrpre} that all of the double-weighting methods perform nearly the same as single-weighting and SRS. 
\begin{figure} \caption{Monte Carlo distribution for number of sampled groups, $G_{S}$, over $B=300$ iterations.} \label{fig:numgrps} \end{figure} \afterpage{\FloatBarrier} \end{comment} \begin{comment} Lastly, it is often the case that while the data analyst may know aspects of the survey sampling design, such as the groups sampled in a multistage design, they do \emph{not} have access to marginal stagewise group inclusion probabilities or conditional last stage unit inclusion probabilities; rather, they may only have last stage marginal weights. So we next compare the performances under use of our \emph{induced} group weighting methods (Product-complement, Sum-probabilities, Sum-weights) based on pseudo group inclusion probabilities formulated when the groups are not directed sampled. In this case, however, the groups \emph{are} directly sampled and we label with ``Double-weighting", the usual method that employs both group-level weights and unit-level conditional weights as outlined in Theorem 1. Our goal is to determine whether our weighting approach formulated in the case of indirect sampling of groups is able to perform better than single-weighting when the groups are directly sampled, but stagewise sampling weights are not available. We follow the same simulation procedure as outlined in Section~\ref{sim:varygroups} to directly sample groups (and to take all units), but we generate our response using $x_{2}\sim \mathcal{E}(1/3.5)$. Figure~\ref{fig:sampgrpsindest} illustrates that, while the resulting double-weighting Sum-weights indirect estimator performs about as well as Double-weighting (formulated using group-indexed weights under direct sampling of groups), both of which handily outperform the other two indirect estimators (Product-complement, Sum-probabilities) and Single-weighting. 
\begin{figure} \caption{Monte Carlo distributions for $B = 300$ iterations within a $99\%$ interval for difference of Posterior Means and truth under direct sampling of groups to compare Single-weighting with two cases of Double-weighting: 1. Direct estimator (``Double-weighting"); 2. Indirect estimator (``Product-complement",``Sum-probabilities",``Sum-weights").} \label{fig:sampgrpsindest} \end{figure} \afterpage{\FloatBarrier} \end{comment} We briefly comment on the simulation study for the indirect sampling of groups detailed in Appendix \ref{sec:simstudy}. The results accord with those for the direct sampling of groups, where double-weighting outperforms single-weighting. When the number of population groups, $G_{U}$, is small, however, noise induced by sampling error results in double-weighting under-performing compared to SRS. Yet, as the number of units per group increases with $G_{U} = 500$, the sum-weights approach outperforms SRS, which is expected because the pps design is generally more efficient such that the contraction rate of the estimator on the truth will be faster for pps (occur at a lower sample size). Lastly, we note that while we have focused on a simple Poisson random effects formulation, our survey-weighted pseudo Bayesian posterior method readily extends to any number of levels and simultaneous employment of multiple sets of random effects without any modification to the approach. Competitor methods, by contrast, are not readily estimable under such extensions. \section{Application}\label{application} We compare single- and double-weighting under a generalized linear mixed effects model estimated on a dataset published by the Job Openings and Labor Turnover Survey (JOLTS), which is administered by BLS on a monthly basis to a randomly-selected sample from a frame composed of non-agricultural U.S. private (business) and public establishments. JOLTS focuses on the demand side of U.S. labor force dynamics and measures job hires, separations (e.g.
quits, layoffs and discharges) and openings. We construct a univariate count data population estimation model with our response, $y$, defined to be the number of hires. We formulate the associated log mean with, \begin{equation} \log~\mu_{i} = \mathbf{x}_{i}^{'}\bm{\beta} + u_{g\{i\}}, \end{equation} where groups, $g = 1,\ldots,(G=892)$, denote industry groupings (defined as $6$-digit North American Industry Classification System (NAICS) codes) that collect the participating business establishments. We expect a within-industry dependence among the hiring levels for business establishments since there are common, industry-driven economic factors that impact member establishments. We construct the fixed effects predictors, $\mathbf{x} = \left[1,\text{ownership status},\text{region}\right]$, which are categorical predictors where ownership status holds four levels, $1.$ Private; $2.$ Federal government; $3.$ State government; $4.$ Local government. The region predictor holds four levels, $1.$ Northeast; $2.$ South; $3.$ Midwest; $4.$ West. Private and Northeast are designated as the reference levels. The JOLTS sampling design assigns inclusion probabilities (under sampling without replacement) to establishments to be proportional to the number of employees for each establishment (as obtained from the Quarterly Census of Employment and Wages (QCEW)). This design is informative in that the number of employees for an establishment will generally be correlated with the number of hires, separations and openings. We perform our modeling analysis on a May $2012$ data set of $n = 9743$ responding establishments. We \emph{a priori} expect the random effects, $(u_{g})$, to be informative since larger-sized establishments would be expected to express larger variances in their hiring levels. We choose the sum-weights method for inducing industry-level weights (from Equation~\ref{eq:sampindirmodel}) to construct our double-weighted estimation model on the observed sample.
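To make the weighting scheme concrete, the double-weighted pseudo log-posterior kernel for this Poisson mixed model can be sketched as follows (a minimal Python illustration with our own helper names; the actual estimation uses the Stan model in Appendix~\ref{sec:stanscript}). Unit weights exponentiate the likelihood contributions, while group weights exponentiate the random-effects prior contributions:

```python
import math

def log_pois(y, log_mu):
    # Poisson log-pmf under the log-mean parameterization
    return y * log_mu - math.exp(log_mu) - math.lgamma(y + 1)

def log_norm(u, sigma):
    # mean-zero normal log-density for a random effect
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - u ** 2 / (2 * sigma ** 2)

def double_weighted_logpost(y, X, beta, u, group, w_unit, w_group, sigma_u):
    # Unit weights w_unit exponentiate the Poisson likelihood terms;
    # group weights w_group exponentiate the random-effects prior terms.
    lp = 0.0
    for i, y_i in enumerate(y):
        log_mu = sum(x * b for x, b in zip(X[i], beta)) + u[group[i]]
        lp += w_unit[i] * log_pois(y_i, log_mu)
    for g, u_g in enumerate(u):
        lp += w_group[g] * log_norm(u_g, sigma_u)
    return lp
```

Setting all weights to $1$ recovers the ordinary (unweighted) log-posterior kernel; doubling a group weight doubles only that group's prior contribution.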
\begin{figure} \caption{Distributions and quantiles (25\%, 50\%, 75\%) of estimated posterior mean values for random effects, $u_{g},~ g = 1,\ldots,(G=892)$, for the JOLTS application under Single- and Double-weighting.} \label{fig:joltsu} \end{figure} \afterpage{\FloatBarrier} \begin{figure} \caption{Distributions and quantiles (25\%, 50\%, 75\%) of posterior samples for $\sigma_{u}^{2}$, the generating variance for random effects, and a single random effect parameter, $u_{i}$, for the JOLTS application, under Single- and Double-weighting.} \label{fig:joltsig} \end{figure} \afterpage{\FloatBarrier} The more diffuse distribution over the $G = 892$ posterior mean values for random effects, $(u_{g})$, under double-weighting than single-weighting shown in Figure~\ref{fig:joltsu} demonstrates that co-weighting the likelihood and random effects distribution produces notably different inference for the group-indexed random effects; in particular, the observed sample is more homogeneous in the number of hires because inclusion probabilities are set to concentrate, or over-sample, large-sized establishments relative to the more size-diverse population of establishments. So the weighting of the random effects distributions in the observed sample produces a distribution over the posterior mean values for the random effects that better reflects the size-diversity of establishments in the underlying population from which the sample was taken. Figure~\ref{fig:joltsig} presents the estimated pseudo-posterior distributions for the generating random effects variance, $\sigma_{u}^{2}$, and also a single random effect parameter, $u_{i}$, under both single- and double-weighting. This figure reinforces the observed result for the random effects where the observed hiring levels in the survey data are more homogeneous than those in the underlying population, which induces a larger posterior variation in the estimated random effects parameters for double-weighting.
\section{Discussion}\label{discussion} In this work, we demonstrate the existence of biased estimation of both fixed \emph{and} random effects parameters when performing inference on data generated from a complex survey sample. This risk is largely unrecognized in the Bayesian literature. The current remedies come from the survey literature and are motivated from a frequentist perspective. They provide an incomplete and somewhat ad hoc approach to the solution. We present a principled development of the ``double-weighting'' approach based on the joint distribution of the population generating model of inferential interest and the complex sampling design represented by sample inclusion indicators. We exploit the latent variable formulation of mixed models and their related posterior sampling techniques to avoid the awkward numerical integration required for frequentist solutions. We show that this simplicity also leads to reductions in bias. \begin{comment} Through Monte Carlo simulations, we demonstrate the effectiveness of double-weighting for a variety of modeling and sampling situations such as unit-indexed random effects and group-indexed random effects models. These scenarios were chosen because they correspond to common situations in practice, as demonstrated by the JOLTS example. However, it should be clear that this approach also applies to more complex hierarchical models in general. For example, we could consider more levels of sampling and modeling and utilize more complex distributions for random effects, such as non-parametric process mixtures. In such cases, we might replace the term ``double-weighting'' with ``hierarchical-weighting''. \end{comment} This work culminates a recent line of development in combining traditional survey estimation approaches with the Bayesian modeling paradigm.
The pseudo-posterior framework simultaneously offers complex survey data analysis to Bayesian modelers and the full suite of hierarchical Bayesian methods to those well-versed in traditional fixed effects analysis of survey data. \appendix \section{Alternative Pseudo Group Inclusion Probabilities Under Indirect Sampling}\label{sec:pseudoprobs} If we invert the resultant group-indexed weight, $w_{g}= 1/N_{g}\times\mathop{\sum}_{j \in S_{g}}w_{g j}$, from Equation~\ref{eq:sampindirmodel}, where groups are not directly sampled, we may view the inverse of the group $g$ weight, $\tilde{\pi}_{g} = 1/w_{g}$, as a ``pseudo'' group inclusion probability. The construction for one form of $\tilde{\pi}_{g}$ motivates our consideration of other formulations for the pseudo group inclusion probabilities that we may, in turn, invert to formulate alternative group-indexed survey sampling weights, $(w_{g})$. The resulting $w_{g}$ of Equation~\ref{eq:sampindirmodel} requires either knowledge of $N_{g}$ or a method for its approximation. The sum of nested unit weights is, further, a sum of inverse inclusion probabilities of member units in each group, which may be overly dominated by units with small unit inclusion probabilities. Our first alternative more directly constructs a pseudo group inclusion probability as the union of probabilities for inclusion of any member unit in the observed sample (in which case the group will be represented in the sample) and does not require estimation of population quantities, such as $N_{g}$.
Under a weak assumption of nearly independent sampling within groups, this alternative is constructed as, \begin{align} \tilde{\pi}_{g} &= \mathop{\sum}_{\ell = 1}^{N_{g}}\pi_{\ell}\label{eq:sprpop}\\ \hat{\tilde{\pi}}_{g} &= \mathop{\sum}_{\ell = 1}^{N_{g}}\frac{\delta_{\ell}}{\pi_{\ell}} \times \pi_{\ell}\\ &= \mathop{\sum}_{j=1}^{n_{g}} w_{j} \times \pi_{j}\label{eq:sprsamp} \end{align} where $\pi_{\ell}$ denotes the marginal inclusion probability for unit $\ell \in (1,\ldots,N_{g})$; we recall that $N_{g}$ denotes the number of units linked to group, $g \in(1,\ldots,G_{U})$, in the population of groups. We may estimate the pseudo group inclusion probabilities in the observed sample by making the same walk from population-to-observed-sample as is done in Equation~\ref{estimator} to Equation~\ref{estimatorsample}, by including unit sampling weights, $(w_{j})_{j \in S_{g}}$ ($S_{g} = \{1,\ldots,n_{g}\}$). We normalize the $(w_{j})_{j \in S_{g}}$ to sum to $1$ as our intent is to re-balance the information (among sampled units) within a group to approximate that of the population of units within the group. While this estimator has the undesirable property of possibly producing $\hat{\tilde{\pi}}_{g} > 1$, we utilize this quantity to weight the random effects prior density contributions with $w_{g} \propto 1/\tilde{\pi}_{g}$, so we focus on the effectiveness of estimation bias removal for generating hyperparameters of the $(u_{g})_{g \in G_{U}}$. We label this method as the ``sum-probabilities'' method, in contrast to the ``sum-weights'' method with which we label the result of Equation~\ref{eq:sampindirmodel}. \begin{comment} Under a single stage sampling design where the groups are indirectly sampled through inclusions of nested units into the observed sample, we assume that we only have availability of the marginal unit inclusion sampling weights, $(w_{g j})$.
The group population size, $N_{g}$, needed for the sum-weights method of Equation~\ref{eq:sampindirmodel}, may be estimated by $\hat{N}_{g} = \mathop{\sum}_{j = 1}^{n_{g}}w_{j\mid g}$. To approximate $w_{j\mid g}$, we first utilize the sum-probabilities result to estimate, $\hat{w}_{g} = 1/\hat{\tilde{\pi}}_{g}$, and proceed to extract $(w_{j\mid g})$ from $w_{gj} \approx w_{g}w_{j\mid g}$. \end{comment} Our second alternative for estimation of a pseudo group inclusion probability is designed to ensure $\tilde{\pi}_{g} \leq 1$ by using a product-complement approach that computes the union of member unit probabilities for a group, indirectly, by first computing its complement and subtracting that from $1$. To construct this estimator, we assume that units, $j \in S$, are sampled independently \emph{with} replacement, which is a tenable assumption when drawing a small sample from a large population of units. Let $\pi^{(1)}_{j}$ denote the probability of selecting unit $j$ in a sample of size $1$ (i.e., a single draw). Then we may construct the marginal inclusion probability, $\pi_{j}$, for a sample of size $n = \vert S\vert$ as the complement of the probability that unit $j$ does not appear in any of the $n$ draws, \begin{equation}\label{eq:pwr} \pi_{j} = 1- \left(1-\pi^{(1)}_{j}\right)^{n}, \end{equation} where $\mathop{\sum}_{j\in U}\pi^{(1)}_{j} = 1$. By extension, $0 < \tilde{\pi}^{(1)}_{g} = \mathop{\sum}_{j \in U_{g}}\pi^{(1)}_{j} \leq 1$, where $\tilde{\pi}^{(1)}_{g}$ denotes the pseudo inclusion probability for group $g \in (1,\ldots,G_{U})$ for a sample of size $1$ and is composed as the union of size-$1$ probabilities for member units.
The expression for the pseudo group inclusion probability derives from the underlying sampling of members with replacement, \begin{equation}\label{eq:gpwr} \tilde{\pi}_{g} = 1 - \left(1-\tilde{\pi}^{(1)}_{g}\right)^{n} = 1 - \left(1-\mathop{\sum}_{j=1}^{N_{g}}\pi^{(1)}_{j}\right)^{n}, \end{equation} where we exponentiate the complement term, $\left(1-\tilde{\pi}^{(1)}_{g}\right)$, by the number of draws of units, $n$, (rather than $G_{S}$, the number of groups represented in the observed sample) because we do not directly sample groups. We solve for $\pi^{(1)}_{j}$ using Equation~\ref{eq:pwr}, $\pi^{(1)}_{j} = 1 - \left(1-\pi_{j}\right)^{(1/n)}$, and substitute into Equation~\ref{eq:gpwr} to obtain, \begin{align} \tilde{\pi}_{g} &= 1 - \left(1-\mathop{\sum}_{j=1}^{N_{g}}\left(1-\left(1-\pi_{j}\right)^{(1/n)}\right)\right)^{n}\\ \hat{\tilde{\pi}}_{g} &= 1 - \left(1-\mathop{\sum}_{j=1}^{N_{g}}\frac{\delta_{j}}{\pi_{j}}\left(1-\left(1-\pi_{j}\right)^{(1/n)}\right)\right)^{n}\\ &= 1 - \left(1-\mathop{\sum}_{\ell=1}^{n_{g}}w_{\ell}\left(1-\left(1-\pi_{\ell}\right)^{(1/n)}\right)\right)^{n}\label{eq:pcprsamp}, \end{align} where, as with the sum-probabilities formulation, we normalize the unit weights within each group, $(w_{\ell})_{\ell \in S_{g}}$, to sum to $1$. We label this method as ``product-complement''. \section{Simulation Study Results for Alternative Pseudo Group Inclusion Probabilities}\label{sec:simstudy} We present the results for the simulation study that samples units, rather than groups, for the expanded set of methods developed in Appendix \ref{sec:pseudoprobs} for the pseudo group inclusion probabilities.
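The pseudo group inclusion probability constructions of Appendix~\ref{sec:pseudoprobs} used in this study can be sketched numerically as follows (a minimal Python illustration; function names and toy inputs are ours, under the normalized-weights convention described there):

```python
def normalize(weights):
    # normalize unit weights within a group to sum to 1, as described in the text
    total = sum(weights)
    return [w / total for w in weights]

def sum_weights_prob(unit_weights, N_g):
    # Sum-weights: invert w_g = (1/N_g) * sum of sampled unit weights
    # (requires knowledge, or an estimate, of the group population size N_g)
    return N_g / sum(unit_weights)

def sum_probabilities_prob(unit_pi):
    # Sum-probabilities: pi_g-hat = sum_j w_j * pi_j over sampled member units,
    # with unit weights w_j proportional to 1/pi_j, normalized to sum to 1
    w = normalize([1.0 / p for p in unit_pi])
    return sum(wi * pi for wi, pi in zip(w, unit_pi))

def product_complement_prob(unit_pi, n):
    # Product-complement: recover single-draw probabilities
    # pi1_j = 1 - (1 - pi_j)^(1/n), then form the with-replacement union
    # 1 - (1 - sum_j w_j * pi1_j)^n, which lies in (0, 1]
    w = normalize([1.0 / p for p in unit_pi])
    inner = sum(wi * (1.0 - (1.0 - pi) ** (1.0 / n)) for wi, pi in zip(w, unit_pi))
    return 1.0 - (1.0 - inner) ** n
```

Each function returns a pseudo probability that may then be inverted to form the group weight, $w_{g} \propto 1/\tilde{\pi}_{g}$, used to exponentiate the random effects prior contribution.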
We recall that under this single-stage sampling of units, groups are not directly sampled under the survey sampling design and are included to the extent that one or more member units are sampled. The synthetic population (for each Monte Carlo iteration) utilizes group-indexed random effects under size-based assignment of population units to groups under each alternative for the total number of groups, $G_{U}$. In this study, we randomly vary the number of population units assigned to each group with the mean values for each $G_{U}$ set to be equal to the fixed number of units per group. We allocate a relatively higher number of units to those groups with smaller-sized units under each group size, $G_{U}$, to mimic our application. The number of population units per group, $N_{h}$, is set to randomly vary among the $G_{U}$ population groups using a log-normal distribution centered on the $(4, 10, 25, 50, 100)$ units per group used in the case of direct sampling, with a variance of $0.5$. In the case of $G_{U} = 1250$, this produces a right-skewed distribution of the number of units in each group, ranging from approximately $1$ to $40$ units per group, and the total number of population units is restricted to sum to $N = 5000$. We sort the groups such that groups with larger-sized units are assigned relatively fewer units and groups with smaller-sized units are assigned relatively more units. This set-up of assigning more units to smaller-sized groups mimics the estimation of employment counts among business establishments analyzed in our application in Section~\ref{application}, where there are relatively few establishments with a large number of employees (e.g., $> 50$) (which is the size variable), while there are, by contrast, many more establishments (small businesses) that have a small number of employees (e.g., $< 10$).
The survey sampling design employed here is a \emph{single-stage}, proportion-to-size design that directly samples the units (not the groups) with unit inclusion probabilities proportional to the size variable, $x_{2} \sim \mathcal{E}(1/(m_{2}=3.5))$. Each Monte Carlo iteration of our simulator (that we run for $B = 300$ iterations) generates the population $(y_{i},x_{1i},x_{2i})_{i=1}^{N}$ and assigns group and unit inclusion probabilities for the population in the case of direct sampling of groups, or assigns unit inclusion probabilities in the case of indirect sampling. A sample of $n = 500$ is then taken and estimation is performed for $(\beta_{0},\beta_{1},\sigma_{u}^{2})$ from the observed sample under three alternatives: \begin{enumerate} \item Single-weighting, where we solely exponentiate the likelihood contributions for $(y_{g j})$ by sampling weights, $(w_{g j} \propto 1/\pi_{g j})$ (and do not weight the prior for the random effects, $(u_{g})$); \item Double-weighting, where we exponentiate \emph{both} the likelihood for the $(y_{g j})$ by sampling weights, $(w_{g j})$, and also the prior distribution for $u_{g}$ by weight, $w_{g} \propto 1/\pi_{g}$ (for each of $g = 1,\ldots,G_{S}$). We estimate $\pi_{g}$ using each of the three methods presented in Appendix \ref{sec:pseudoprobs}: ``sum-weights'', ``sum-probabilities'', and ``product-complement''; \item SRS, where we take a single-stage simple random sample of units. \end{enumerate} The inclusion of model estimation under (a non-informative) SRS is intended to serve as a gold standard against which we may judge the bias and MSE performance of single- and double-weighting under informative sampling.
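The single-stage, unit-level proportion-to-size selection described above can be sketched as follows (a minimal illustration with our own variable names, using Poisson sampling as a simplified stand-in for the without-replacement pps design):

```python
import random

random.seed(42)
N, n = 5000, 500  # population and expected sample sizes

# size variable x2 ~ Exponential with mean 3.5, as in the single-stage design
x2 = [random.expovariate(1 / 3.5) for _ in range(N)]

# proportion-to-size inclusion probabilities, scaled to an expected sample size n
total = sum(x2)
pi = [min(1.0, n * x / total) for x in x2]

# Poisson sampling: include unit i independently with probability pi[i];
# sampled units carry sampling weights w_i proportional to 1/pi_i
sample = [i for i in range(N) if random.random() < pi[i]]
weights = [1.0 / pi[i] for i in sample]
```

Because larger-sized units receive larger inclusion probabilities, the design is informative whenever the response depends on size, which is the setting the single-, double-, and SRS alternatives above are designed to probe.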
Each plot panel in Figure~\ref{fig:inducegrprealts} shows the distributions over Monte Carlo simulations for estimates of the generating variance, $\sigma_{u}^{2}$, of the random effects, $(u_{g})$, under each of the following weighting methods: single-weighting, product-complement double-weighting (Equation~\ref{eq:pcprsamp}), sum-probabilities double-weighting (Equation~\ref{eq:sprsamp}), sum-weights double-weighting (Equation~\ref{eq:sampindirmodel}), and SRS (no weighting under simple random sampling of the same population from which the pps sample was taken). The panels are ordered from left-to-right for a sequence of $G_{U} = \left(1250,500,200,100,50\right)$. As earlier mentioned, the number of population units per group, $N_{h}$, is set to randomly vary under a lognormal distribution, though there will on average be more units sampled per group from synthetic populations with a smaller number of population groups, $G_{U}$, than there will be units per group sampled under a larger number of population groups. The sum-probabilities and sum-weights methods for accomplishing double-weighting generally perform nearly identically to one another and better than single-weighting for all group sizes. Since sum-probabilities and sum-weights perform nearly identically, one may prefer the former because it does not require estimation of $N_{g}$, as the latter does. Table~\ref{tab:inducegrprealts} presents the relative bias, defined as the bias divided by the true value, and the normalized root MSE, defined as the square root of MSE divided by the true value, for the regression coefficients, $(\beta_{0},\beta_{1})$, to accompany Figure~\ref{fig:inducegrprealts}. We show the relative bias and normalized RMSE quantities in this study because the true values of the marginal model, $\sigma_{u}^{2} = \left(0.578,0.349,0.216,0.169,0.136\right)$, vary over the sequence of sizes for $G_{U}$.
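The reported metrics can be computed directly from the Monte Carlo estimates (a minimal sketch; function names are ours):

```python
def relative_bias(estimates, truth):
    # bias (mean estimate minus truth) divided by the true value
    return (sum(estimates) / len(estimates) - truth) / truth

def normalized_rmse(estimates, truth):
    # square root of the mean squared error, divided by the true value
    mse = sum((e - truth) ** 2 for e in estimates) / len(estimates)
    return mse ** 0.5 / truth
```

For example, estimates of $1.1$ and $0.9$ against a true value of $1.0$ give a relative bias of $0$ and a normalized RMSE of $0.1$.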
As in the case of direct sampling of groups, there is an association between the amount of bias in estimation of $\sigma_{u}^{2}$ and in the intercept coefficient, $\beta_{0}$. \begin{figure} \caption{Indirect sampling of groups: Monte Carlo distributions and quantiles (0.5\%, 25\%, 50\%, 75\%, 99.5\%) for $B = 300$ iterations for $\sigma_{u}^{2}$ for difference of posterior means and truth under alternative weighting schema for varying number of groups, $G$. Population of $N = 5000$ and sample of $n = 500$. In each plot panel, from left-to-right is the Single-weighting method, Product Complement double-weighting method of Equation~\ref{eq:pcprsamp}, Sum-probabilities double-weighting method of Equation~\ref{eq:sprsamp}, Sum-weights double-weighting method of Equation~\ref{eq:sampindirmodel} and Simple-random sampling (SRS).} \label{fig:inducegrprealts} \end{figure} \afterpage{\FloatBarrier} \rowcolors{2}{gray!6}{white} \begin{table}[!h] \caption{\label{tab:inducegrprealts} Normalized Bias and RMSE for Double-weighting methods as compared to Single-weighting and SRS for Increasing Units Per Random Effect Under \emph{Indirect} Sampling of Groups for $B = 300$ iterations with population of $N = 5000$ and sample of $n = 500$.} \centering \begin{adjustbox}{width=0.6\textwidth} \begin{tabular}{l|r|r|r|r|r} \toprule \multicolumn{2}{c}{ } & \multicolumn{2}{c}{Relative Bias} & \multicolumn{2}{c}{Normalized RMSE} \\ \cmidrule(l{3pt}r{3pt}){3-4} \cmidrule(l{3pt}r{3pt}){5-6} Model & $G_{U}$ & $\beta_{0}$ & $\beta_{1}$ & $\beta_{0}$ & $\beta_{1}$\\ \midrule \rowcolor{gray!6} Product-complement & 1250 & -0.54 & -0.01 & 0.58 & 0.11\\ Single-weighting & 1250 & -0.55 & 0.00 & 0.58 & 0.11\\ \rowcolor{gray!6} SRS & 1250 & -0.58 & 0.00 & 0.61 & 0.11\\ Sum-probabilities & 1250 & -0.52 & -0.01 & 0.55 & 0.11\\ \rowcolor{gray!6} Sum-weights & 1250 & -0.52 & -0.01 & 0.55 & 0.11\\ \addlinespace Product-complement & 500 & -0.45 & 0.01 & 0.48 & 0.13\\ \rowcolor{gray!6} Single-weighting & 500 & -0.45 & 0.01 & 0.48 &
0.14\\ SRS & 500 & -0.46 & 0.00 & 0.49 & 0.11\\ \rowcolor{gray!6} Sum-probabilities & 500 & -0.41 & 0.00 & 0.45 & 0.13\\ Sum-weights & 500 & -0.41 & 0.00 & 0.45 & 0.13\\ \addlinespace \rowcolor{gray!6} Product-complement & 200 & -0.26 & 0.00 & 0.34 & 0.12\\ Single-weighting & 200 & -0.26 & 0.00 & 0.34 & 0.12\\ \rowcolor{gray!6} SRS & 200 & -0.29 & 0.00 & 0.36 & 0.11\\ Sum-probabilities & 200 & -0.23 & 0.00 & 0.33 & 0.12\\ \rowcolor{gray!6} Sum-weights & 200 & -0.23 & 0.00 & 0.33 & 0.12\\ \addlinespace Product-complement & 100 & -0.13 & -0.02 & 0.30 & 0.13\\ \rowcolor{gray!6} Single-weighting & 100 & -0.13 & -0.02 & 0.30 & 0.13\\ SRS & 100 & -0.19 & 0.00 & 0.33 & 0.12\\ \rowcolor{gray!6} Sum-probabilities & 100 & -0.13 & -0.02 & 0.30 & 0.13\\ Sum-weights & 100 & -0.13 & -0.02 & 0.30 & 0.13\\ \addlinespace \rowcolor{gray!6} Product-complement & 50 & -0.07 & 0.00 & 0.39 & 0.11\\ Single-weighting & 50 & -0.07 & 0.00 & 0.39 & 0.11\\ \rowcolor{gray!6} SRS & 50 & -0.08 & -0.01 & 0.39 & 0.11\\ Sum-probabilities & 50 & -0.07 & 0.00 & 0.39 & 0.11\\ \rowcolor{gray!6} Sum-weights & 50 & -0.07 & 0.00 & 0.39 & 0.11\\ \bottomrule \end{tabular} \end{adjustbox} \end{table} \rowcolors{2}{white}{white} \afterpage{\FloatBarrier} \section{Stan Code for Estimating Poisson Mixed Effects Model }\label{sec:stanscript} We next present the Stan \citep{stan:2015} script that enumerates the probability model specification of the Poisson likelihood and associated prior distributions. We utilized Stan to estimate the mixed effects Poisson model implemented in Sections~\ref{simulation} and \ref{application}. 
{\small \begin{lstlisting}[language=R]
functions{
  real wt_pois_lpmf(int[] y, vector mu, vector weights, int n){
    real check_term;
    check_term = 0.0;
    for( i in 1:n )
    {
      check_term = check_term + weights[i] * poisson_log_lpmf(y[i] | mu[i]);
    }
    return check_term;
  }
  real normalwt_lpdf(vector y, vector mu, real sigma, vector weights, int n)
  {
    real check_term;
    check_term = 0.0;
    for( i in 1:n )
    {
      check_term = check_term + weights[i] * normal_lpdf(y[i] | mu[i], sigma);
    }
    return check_term;
  }
} /* end functions{} block */
data {
  int<lower=1> n; // number of observations
  int<lower=1> K; // number of linear predictors
  int<lower=1> G; // number of random effects groups
  int<lower=1> s_re[n]; // assignment of groups to items
  int<lower=0> y[n]; // Response variable
  vector<lower=0>[n] weights; // observation-indexed (sampling) weights
  vector<lower=0>[G] weights_re; // group-indexed (sampling) weights
  matrix[n, K] X; // coefficient matrix
}
transformed data{
  vector<lower=0>[G] zeros_g;
  vector<lower=0>[K] zeros_beta;
  zeros_beta = rep_vector(0,K);
  zeros_g = rep_vector(0,G);
} /* end transformed data block */
parameters{
  vector[K] beta; /* regression coefficients from linear predictor */
  vector<lower=0>[K] sigma_beta; /* scale parameters for beta */
  cholesky_factor_corr[K] L_beta; /* cholesky factor of correlation matrix for Sigma_beta */
  vector[G] u;
  real<lower=0> sigma_u;
}
transformed parameters{
  real<lower=0> sigma_u2;
  vector[n] mu;
  vector[n] fixed_effects;
  fixed_effects = X * beta;
  mu = fixed_effects + u[s_re];
  sigma_u2 = pow(sigma_u,2);
} /* end transformed parameters block */
model{
  L_beta ~ lkj_corr_cholesky(6);
  sigma_u ~ student_t(3,0,1);
  sigma_beta ~ student_t(3,0,1);
  /* Implement beta ~ N_{K}(0, Sigma_beta) */
  /* K x 1 vector */
  beta ~ multi_normal_cholesky( zeros_beta,
           diag_pre_multiply(sigma_beta,L_beta) );
  /* directly update the log-probability for sampling */
  u ~ normalwt(zeros_g, sigma_u, weights_re, G); // weighting the random effects
  target += wt_pois_lpmf(y | mu, weights, n);
} /* end model{} block */
\end{lstlisting} } \section{R Code for Pairwise Integrated Poisson Mixed Effects Model}\label{sec:pairwisecode} We next present the R \citep{R} scripts that implement the pairwise integrated likelihood approach of \citet{yi:2016} for Poisson models. We utilized this model in Section~\ref{simulation}. {\small \begin{lstlisting}[language=R]
library(fastGHQuad)

## Poisson model integrated likelihood
# joint density - does not include exp(-u^2), which is supplied by ghQuad
# change of variables z = sqrt(2)*u*sig.z,
# where z ~ N(0, sig.z) is the random effect
pwpois.GH <- function(u, x1, x2, mu1, mu2, sig.z){
  # first argument is what is integrated out
  d <- dpois(x1, mu1*exp(sqrt(2)*u*sig.z))*
    dpois(x2, mu2*exp(sqrt(2)*u*sig.z))*1/sqrt(pi)
  return(d)
}

# see ghQuad help for more details
intpois.ghfs <- function(x1, x2, mu1, mu2, sig.z){
  # scalar arguments for each
  intz <- ghQuad(pwpois.GH, rule = gaussHermiteData(5),
                 x1 = x1, x2 = x2, mu1 = mu1, mu2 = mu2, sig.z = sig.z)
  return(intz)
}

## Weighted log-likelihood
# assume y1 and y2 are vectors of pairs
# with corresponding matrix predictors X1 and X2 and vector
# of pairwise weights pwt
wt_comp_pois_ll <- function(par, y1, y2, X1, X2, pwt){
  # need vector of parameters par as input for optim
  k <- length(par)
  beta <- par[1:(k-1)]
  # have log(sig) as parameter to keep bounds -Inf to Inf in optim
  sig <- exp(par[k])
  mu1 <- exp(X1 %*% beta)
  mu2 <- exp(X2 %*% beta)
  ll <- 0
  for(i in seq_along(y1)){
    inti <- intpois.ghfs(y1[i], y2[i], mu1[i], mu2[i], sig)
    ll <- ll + pwt[i]*max(log(inti), -743) # truncate to keep above -Inf
  }
  return(-ll) # need to minimize -LL
}

## Input into optim
# Example with 3 parameters (Beta1, Beta2, Sigma_u)
## starting values
testpar <- c(rnorm(1,0,.5), rnorm(1,0,.5), log(rgamma(1,3,rate=3)))
est1 <- optim(par = testpar, fn = wt_comp_pois_ll, method = "BFGS",
              y1 = y1, y2 = y2, X1 = X1, X2 = X2, pwt = pwt)
## extract parameter estimates
pars <- est1$par
pars[3] <- exp(pars[3]) # back-transform to sigma in (0, Inf)
beta_hat <- pars[1:2]
sigmau2_hat <- pars[3]^2
\end{lstlisting} } \end{document}
\begin{document} \title[]{Edgeworth expansions for independent bounded integer valued random variables.} \vskip 0.1cm \author{Dmitry Dolgopyat and Yeor Hafouta} \dedicatory{ } \maketitle \begin{abstract} We obtain asymptotic expansions for local probabilities of partial sums for uniformly bounded independent but not necessarily identically distributed integer-valued random variables. The expansions involve products of polynomials and trigonometric polynomials. Our results do not require any additional assumptions. As an application of our expansions we find necessary and sufficient conditions for the classical Edgeworth expansion. It turns out that there are two possible obstructions for the validity of the Edgeworth expansion of order $r$. First, the distance between the distribution of the underlying partial sums modulo some $h\in {\mathbb N}$ and the uniform distribution could fail to be $o(\sigma_N^{1-r})$, where $\sigma_N$ is the standard deviation of the partial sum. Second, this distribution could have the required closeness but this closeness is unstable, in the sense that it could be destroyed by removing finitely many terms. In the first case, the expansion of order $r$ fails. In the second case it may or may not hold depending on the behavior of the derivatives of the characteristic functions of the summands whose removal causes the break-up of the uniform distribution. We also show that a quantitative version of the classical Prokhorov condition (for the strong local central limit theorem) is sufficient for Edgeworth expansions, and moreover this condition is, in some sense, optimal. \end{abstract} \tableofcontents \section{Introduction.}\label{SecInt} Let $X_1,X_2,...$ be a uniformly bounded sequence of independent integer-valued random variables. Set $S_N=X_1+X_2+...+X_N$, $V_N=V(S_N)=\text{Var}(S_N)$ and ${\sigma}_N=\sqrt{V_N}$. Assume also that $V_N\to\infty$ as $N\to\infty$. 
Then the central limit theorem (CLT) holds true, namely the distribution of $(S_N-{\mathbb E}(S_N))/{\sigma}_N$ converges to the standard normal distribution as $N\to\infty$. Recall that the local central limit theorem (LLT) states that, uniformly in $k$, we have \[ {\mathbb P}(S_N=k)=\frac1{\sqrt{2\pi}{\sigma}_N}e^{-\left(k-{\mathbb E}(S_N)\right)^2/2V_N}+o({\sigma}_N^{-1}). \] This theorem is also a classical result, and it has its origins in the De Moivre-Laplace theorem. The stable local central limit theorem (SLLT) states that the LLT holds true for any integer-valued square integrable independent sequence $X_1',X_2',...$ which differs from $X_1,X_2,...$ by a finite number of elements. We recall a classical result due to Prokhorov. \begin{theorem} \cite{Prok} \label{ThProkhorov} The SLLT holds iff for each integer $h>1$, \begin{equation}\label{Prokhorov} \sum_n {\mathbb P}(X_n\neq m_n \text{ mod } h)=\infty \end{equation} where $m_n=m_n(h)$ is the most likely residue of $X_n$ modulo $h$. \end{theorem} We refer the readers to \cite{Rozanov, VMT} for extensions of this result to the case when the $X_n$'s are not necessarily bounded (for instance, the result holds true when $\displaystyle \sup_n\|X_n\|_{L^3}<\infty$). Related results for local convergence to more general limit laws are discussed in \cite{D-MD, MS74}. The above result provides a necessary and sufficient condition for the SLLT. It turns out that the difference between the LLT and the SLLT is not that big. \begin{proposition} \label{PrLLT-SLLT} Suppose $S_N$ obeys the LLT. Then for each integer $h\geq2$ at least one of the following conditions occurs: either (a) $\displaystyle \sum_n {\mathbb P}(X_n\neq m_n(h) \text{ mod } h)=\infty$; or (b) $\exists j_1, j_2, \dots, j_k$ with $k<h$ such that $\displaystyle \sum_{s=1}^k X_{j_s}$ mod $h$ is uniformly distributed. In that case for all $N\geq \max(j_1, \dots, j_k)$ we have that $S_N$ mod $h$ is uniformly distributed.
\end{proposition} Since we could not find this result in the literature, we include the proof in Section \ref{FirstOrder}. Next, we provide necessary and sufficient conditions for the regular LLT. We need some additional notation. Let $\displaystyle K=\sup_n\|X_n\|_{L^\infty}$. Call $t$ \textit{resonant} if $t=\frac{2\pi l}{m}$ with $0<m\leq 2K$ and $0\leq l<m.$ \begin{theorem} \label{ThLLT} The following conditions are equivalent: (a) $S_N$ satisfies the LLT; (b) For each $\xi\in {\mathbb R}\setminus {\mathbb Z}$, $\displaystyle \lim_{N\to\infty} {\mathbb E}\left(e^{2\pi i \xi S_N}\right)=0$; (c) For each non-zero resonant point $\xi$, $\displaystyle \lim_{N\to\infty} {\mathbb E}\left(e^{2\pi i \xi S_N}\right)=0$; (d) For each integer $h$ the distribution of $S_N$ mod $h$ converges to uniform. \end{theorem} The proof of this result is also given in Section \ref{FirstOrder}. We refer the readers to \cite{Do, DS} for related results in more general settings. The local limit theorem deals with the approximation of $P(S_N=k)$ up to an error term of order $o({\sigma}_N^{-1})$. Given $r\geq1$, the Edgeworth expansion of order $r$ holds true if there are polynomials $P_{b, N}$, whose coefficients are uniformly bounded in $N$ and whose degrees do not depend on $N$, so that uniformly in $k\in{\mathbb Z}$ we have that \begin{equation}\label{EdgeDef} {\mathbb P}(S_N=k)=\sum_{b=1}^r \frac{P_{b, N} (k_N)}{\sigma_N^b}\mathfrak{g}(k_N)+o(\sigma_N^{-r}) \end{equation} where $k_N=\left(k-{\mathbb E}(S_N)\right)/{\sigma}_N$ and $\mathfrak{g}(u)=\frac{1}{\sqrt{2\pi}} e^{-u^2/2}. $ In Section \ref{Sec5} we will show, in particular, that Edgeworth expansions of any order $r$ are unique up to terms of order $o({\sigma}_N^{-r}),$ and so the case $r=1$ coincides with the LLT.
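The definition \eqref{EdgeDef} is easy to probe numerically in the simplest iid case. The following Python sketch (an illustration we add for concreteness; it is not part of the original argument, and the function name is ours) compares ${\mathbb P}(S_N=k)$ for iid fair Bernoulli summands, i.e. $S_N\sim\mathrm{Binomial}(N,1/2)$, with the $r=1$ (LLT) term $\mathfrak{g}(k_N)/\sigma_N$:

```python
import math

def llt_error(N):
    """Worst-case local CLT error max_k |P(S_N = k) - g(k_N)/sigma_N|
    for S_N ~ Binomial(N, 1/2), i.e. iid X_n uniform on {0, 1}."""
    mean, sigma = N / 2.0, math.sqrt(N) / 2.0
    err = 0.0
    for k in range(N + 1):
        p_exact = math.comb(N, k) / 2**N          # exact point probability
        k_N = (k - mean) / sigma
        p_gauss = math.exp(-k_N**2 / 2) / (math.sqrt(2 * math.pi) * sigma)
        err = max(err, abs(p_exact - p_gauss))
    return err

# sigma_N * (error) -> 0, i.e. the error is o(sigma_N^{-1}) as the LLT asserts
for N in (100, 400, 1600):
    print(N, math.sqrt(N) / 2 * llt_error(N))
```

Here the third cumulant of the summands vanishes, so $\sigma_N$ times the error in fact decays at the rate $\sigma_N^{-1}$ rather than merely tending to $0$.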
Edgeworth expansions for discrete (lattice-valued) random variables have been studied in the literature for iid random variables \cite[Theorem 4.5.4]{IL}, \cite[Chapter VII]{Pet75} (see also \cite[Theorem 5]{Ess}), homogeneous Markov chains \cite[Theorems 2-4]{Nag2}, decomposable statistics \cite{MRM}, or { dynamical systems \cite{FL} with good spectral properties such as expanding maps. Papers \cite{Bor16, GW17} discuss the rate of convergence in the LLT. Results for non-lattice variables were obtained in \cite{Feller, BR76, CP, Br} (which considered random vectors) and \cite{FL} (see also \cite{Ha} for corresponding results for random expanding dynamical systems). In this paper we obtain analogues of Theorems \ref{ThProkhorov} and \ref{ThLLT} for higher order Edgeworth expansions for independent but not identically distributed integer-valued uniformly bounded random variables. We begin with the following result. \begin{theorem} \label{ThEdgeMN} Let $\displaystyle K=\sup_j\|X_j\|_{L^\infty}.$ For each $r\in{\mathbb N}$ there is a constant $R\!\!=\!\!R(r, K)$ such that the Edgeworth expansion of order $r$ holds~if \[ M_N:=\min_{2\leq h\leq 2K}\sum_{n=1}^N{\mathbb P}(X_n\neq m_n(h) \text{ mod } h)\geq R\ln V_N. \] In particular, $S_N$ obeys Edgeworth expansions of all orders if $$ \lim_{N\to\infty} \frac{M_N}{\ln V_N}=\infty. $$ \end{theorem} The number $R(r,K)$ can be chosen according to Remark \ref{R choice}. This theorem is a quantitative version of Prokhorov's Theorem \ref{ThProkhorov}. We observe that logarithmic (in $V_N$) growth of various non-periodicity characteristics of the individual summands is often used in the theory of local limit theorems (see e.g. \cite{Mal78, MS70, MPP}). We will see from the examples of Section \ref{ScExamples} that this result is close to optimal. However, to justify the optimality we need to understand the conditions necessary for the validity of the Edgeworth expansion.
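To make the quantity $M_N$ of Theorem \ref{ThEdgeMN} concrete, the following Python sketch (our illustration; all helper names are ours) computes it for a borderline sequence in which $X_n\in\{0,1,2\}$ is even except with probability $1/n$. The minimum over $h$ is then attained at $h=2$, and $M_N$ grows only like $\ln N$, which is exactly the regime where the condition $M_N\geq R\ln V_N$ becomes delicate:

```python
import math

def residue_probs(pmf, h):
    """P(X = j mod h), j = 0..h-1, for a pmf given as {value: probability}."""
    probs = [0.0] * h
    for v, p in pmf.items():
        probs[v % h] += p
    return probs

def M_N(pmfs, K):
    """min over 2 <= h <= 2K of sum_n P(X_n != m_n(h) mod h),
    where m_n(h) is the most likely residue of X_n modulo h."""
    return min(
        sum(1.0 - max(residue_probs(pmf, h)) for pmf in pmfs)
        for h in range(2, 2 * K + 1)
    )

# X_n takes the even values 0, 2 with probability (1 - 1/n)/2 each and the
# odd value 1 with probability 1/n; here K = 2, so h ranges over 2, 3, 4
# and the minimum is attained at h = 2.
pmfs = [{0: (1 - 1/n) / 2, 2: (1 - 1/n) / 2, 1: 1/n} for n in range(1, 1001)]
print(M_N(pmfs, K=2))  # ~ sum_{n >= 2} 1/n ~ ln N: only logarithmic growth
```

The harmonic-like growth of $M_N$ in this example is what makes the logarithmic threshold in Theorem \ref{ThEdgeMN} the natural scale to compare against.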
\begin{theorem}\label{r Char} For any $r\geq1$, the Edgeworth expansion of order $r$ holds if and only if for any nonzero resonant point $t$ and $0\leq\ell<r$ we have \[ \bar \Phi_{N}^{(\ell)}(t)=o\left({\sigma}_N^{\ell+1-r}\right), \] where $\bar\Phi_{N}(x)={\mathbb E}[e^{ix (S_N-{\mathbb E}(S_N))}]$ and $\bar\Phi_{N}^{(\ell)}(\cdot)$ is its $\ell$-th derivative. \end{theorem} This result generalizes Theorem \ref{ThLLT}; however, in contrast with that theorem, in the case $r>1$ we also need to take into account the behavior of the derivatives of the characteristic function at nonzero resonant points. The values of the characteristic function at the resonant points $2\pi l/m$ have a clear probabilistic meaning. Namely, they control the rate of equidistribution modulo $m$ (see part (d) of Theorem \ref{ThLLT} or Lemma \ref{LmUnifFourier}). Unfortunately, the probabilistic meaning of the derivatives is less clear, so it is desirable to characterize the validity of the Edgeworth expansions of orders higher than 1 without considering the derivatives. Example \ref{ExUniform} shows that this is impossible without additional assumptions. Some of the reasonable additional conditions are presented below. We start with the expansion of order 2. \begin{theorem}\label{Thm SLLT VS Ege} Suppose $S_N$ obeys the SLLT. Then the following are equivalent: (a) The Edgeworth expansion of order 2 holds; (b) $|\Phi_N(t)|=o(\sigma_N^{-1})$ for each nonzero resonant point $t$; (c) For each $h\leq 2K$ the distribution of $S_N$ mod $h$ is $o(\sigma_N^{-1})$ close to uniform. \end{theorem} Corollary \ref{CorNoDer} provides an extension of Theorem \ref{Thm SLLT VS Ege} for expansions of an arbitrary order $r$ under the additional assumption that $\displaystyle \varphi:=\min_{t\in{\mathcal{R}}}\inf_{n}|\phi_n(t)|>0$, where ${\mathcal{R}}$ is the set of all nonzero resonant points.
The latter condition implies, in particular, that for each $\ell$ there is a uniform lower bound on the distance between the distribution of $X_{n_1}+X_{n_2}+\dots +X_{n_\ell}$ \text{mod }$m$ and the uniform distribution, when $(n_1, n_2,\dots ,n_\ell) \in{\mathbb N}^\ell$ and $m\geq2$. Next we discuss an analogue of Theorem \ref{ThProkhorov} for expansions of order higher than 2. It requires a stronger condition which uses additional notation. Given $j_1, j_2,\dots,j_s$ with $j_l\in [1, N]$ we write $$ S_{N; j_1, j_2, \dots,j_s}=S_N-\sum_{l=1}^s X_{j_l}. $$ Thus $S_{N; j_1, j_2, \dots ,j_s}$ is a partial sum of our sequence with $s$ terms removed. We will say that $\{X_n\}$ {\em admits an Edgeworth expansion of order $r$ in a superstable way} (which will be denoted by $\{X_n\}\in EeSs(r)$) if for each ${\bar s}$ and each sequence $j_1^N, j_2^N,\dots ,j_{s_N}^N$ with $s_N\leq {\bar s}$ there are polynomials $P_{b, N}$ whose coefficients are $O(1)$ in $N$ and whose degrees do not depend on $N$ so that uniformly in $k\in{\mathbb Z}$ we have that \begin{equation}\label{EdgeDefSS} {\mathbb P}(S_{N; j_1^N, j_2^N, \dots, j_{s_N}^N}=k)=\sum_{b=1}^r \frac{P_{b, N} (k_N)}{\sigma_N^b} \mathfrak{g}(k_N)+o(\sigma_N^{-r}) \end{equation} and the estimates in $O(1)$ and $o(\sigma_N^{-r})$ are uniform in the choice of the tuples $j_1^N, \dots ,j_{s_N}^N.$ That is, by removing a finite number of terms we cannot destroy the validity of the Edgeworth expansion (even though the coefficients of the underlying polynomials will of course depend on the choice of the removed terms). Let $\Phi_{N; j_1, j_2,\dots, j_s}(t)$ be the characteristic function of $S_{N; j_1, j_2,\dots, j_s}.$ \begin{remark} We note that in contrast with the SLLT, in the definition of the superstable Edgeworth expansion one is only allowed to remove old terms, but not to add new ones.
This difference in the definition is not essential, since adding terms with sufficiently many moments (in particular, adding bounded terms) does not destroy the validity of the Edgeworth expansion. See the proof of Theorem \ref{Thm Stable Cond} (i) or the second part of Example \ref{ExNonAr}, starting with equation \eqref{Convolve}, for details. \end{remark} \begin{theorem}\label{Thm Stable Cond} (1) $S_N\in EeSs(1)$ (that is, $S_N$ satisfies the LLT in a superstable way) if and only if it satisfies the SLLT. (2) For arbitrary $r\geq 1$ the following conditions are equivalent: (a) $\{X_n\}\in EeSs(r)$; (b) For each $j_1^N, j_2^N,\dots ,j_{s_N}^N$ and each nonzero resonant point $t$ we have $\Phi_{N; j_1^N, j_2^N,\dots, j_{s_N}^N}(t)=o(\sigma_N^{1-r});$ (c) For each $j_1^N, j_2^N,\dots ,j_{s_N}^N$, and each $h \leq 2K$ the distribution of $S_{N; j_1^N, j_2^N,\dots, j_{s_N}^N}$ mod $h$ is $o(\sigma_N^{1-r})$ close to uniform. \end{theorem} To prove the above results we will show that for any order $r$, we can always approximate $\mathbb{P}(S_N=k)$ up to an error $o(\sigma_N^{-r})$ provided that instead of polynomials we use products of regular and trigonometric polynomials. Those products allow us} to take into account the possible oscillatory behavior of $P(S_N=k)$ when $k$ belongs to different residues mod $h$, where $h$ is the denominator of a {\em resonant frequency.} When $M_N\geq R\ln V_N$ for $R$ large enough, the new expansion coincides with the usual Edgeworth expansion. We thus derive that the condition $M_N\geq R\ln V_N$ is in a certain sense optimal. \section{Main result}\label{Main R} Let $X_1,X_2,...$ be a sequence of independent integer-valued random variables. For each $N\in{\mathbb N}$ we set $\displaystyle S_N=\sum_{n=1}^N X_n$ and $V_N=\text{Var}(S_N)$. We assume in this paper that $\displaystyle \lim_{N\to\infty}V_N=\infty$ and that there is a constant $K$ such that $$\sup_n \|X_n\|_{L^\infty}\leq K.
$$ Denote $\sigma_N=\sqrt{V_N}.$ For each positive integer $m$, let $ q_n(m)$ be the second largest among $$\displaystyle \sum_{l\equiv j \text{ mod }m} {\mathbb P}(X_n=l)={\mathbb P}(X_n\equiv j \text{ mod }m),\,j=1,2,...,m$$ and let $j_n(m)$ be the corresponding residue class. Set $$ M_N(m)=\sum_{n=1}^N q_n(m)\quad\text{and}\quad M_N=\min_{2\leq m\leq 2K} M_N(m).$$ \begin{theorem}\label{IntIndThm} There exist $J=J(K)<\infty$ and polynomials $P_{a, b, N}$, where $a\in \{0, \dots ,J-1\},$ $b\in \mathbb{N}$, with degrees depending only on $b$ but not on $a, K$ or on any other characteristic of $\{X_n\}$, such that the coefficients of $P_{a,b, N}$ are uniformly bounded in $N$, and, for any $r\geq1$, uniformly in $k\in{\mathbb Z}$ we have $${\mathbb P}(S_N=k)-\sum_{a=0}^{J-1} \sum_{b=1}^r \frac{P_{a, b, N} ((k-a_N)/\sigma_N)}{\sigma_N^b} \mathfrak{g}((k-a_N)/\sigma_N) e^{2\pi i a k/J} =o(\sigma_N^{-r}) $$ where $a_N={\mathbb E}(S_N)$ and $\mathfrak{g}(u)=\frac{1}{\sqrt{2\pi}} e^{-u^2/2}. $ Moreover, $P_{0,1,N}\equiv1$, and given $K, r$, there exists $R=R(K,r)$ such that if $M_N\geq R \ln V_N$ then we can choose $P_{a, b, N}=0$ for $a\neq 0.$ \end{theorem} We refer the readers to (\ref{r=1}) for more details on these expansions in the case $r=1$, and to Section \ref{FirstOrder} for a discussion of the relations with local limit theorems. The resulting expansions in the case $r=2$ are given in (\ref{r=2'}). We note that the constants $J(K)$ and $R(K,r)$ can be recovered from the proof of Theorem \ref{IntIndThm}. \begin{remark} Since the coefficients of the polynomials $P_{a,b,N}$ are uniformly bounded, the terms corresponding to $b=r+1$ are of order $O({\sigma}_N^{-(r+1)})$ uniformly in $k$. Therefore, in the $r$-th order expansion we actually get that the error term is $O({\sigma}_N^{-(r+1)})$.
\end{remark} \begin{remark} In fact, the coefficients of the polynomials $P_{a,b,N}$ for $a>0$ are bounded by a constant times $(1+M_N^{q})e^{-c_0 M_N}$, where $c_0>0$ depends only on $K$ and $q\geq 0$ depends only on $r$ and $K$. Therefore, these coefficients are small when $M_N$ is large. When $M_N\geq R(r,K)\ln V_N$ these coefficients become of order $o({\sigma}^{-r}_N).$ Therefore, they only contribute to the error term, and so we can replace them by $0$, as stated in Theorem \ref{IntIndThm}. \end{remark} \begin{remark} As in the derivation of the classical Edgeworth expansion, the main idea of the proof of Theorem \ref{IntIndThm} is the stationary phase analysis of the characteristic function. However, in contrast with the iid case there may be resonances other than 0 which contribute to the oscillatory terms in the expansion. Another interesting case where the classical Edgeworth analysis fails is the case of iid terms where the summands are non-arithmetic but take only finitely many values. It is shown in \cite{DF} that in that case, the leading correction to the Edgeworth expansion also comes from resonances. However, in the case studied in \cite{DF} the geometry of resonances is more complicated, so in contrast to our Theorem \ref{IntIndThm}, \cite{DF} does not get the expansion of all orders. \end{remark} \section{Edgeworth expansions under a quantitative Prokhorov condition.} \label{ScEdgeLogProkh} In this section we prove Theorem \ref{ThEdgeMN}. In the course of the proof we obtain estimates of the characteristic function on intervals not containing resonant points, which will also play an important role in the proof of Theorem \ref{IntIndThm}.
The proof of Theorem \ref{IntIndThm} will be completed in Section \ref{ScGEE}, where we analyze the additional contributions coming from nonzero resonant points which appear in the case $M_N\leq R \ln {\sigma}_N.$ Those contributions constitute the source of the trigonometric polynomials in the generalized Edgeworth expansions. \subsection{Characteristic function near 0.} Here we recall some facts about the behavior of the characteristic function near 0, which will be useful in the proofs of Theorems \ref{ThEdgeMN} and \ref{IntIndThm}. The first result holds for general uniformly bounded sequences $\{X_n\}$ (which are not necessarily integer-valued). \begin{proposition}\label{PropEdg} Suppose that $\displaystyle \lim_{N\to\infty} {\sigma}_N=\infty$, where ${\sigma}_N\!=\!\sqrt{V_N}= \sqrt{V(S_N)}$. Then for $k=1,2,3,...$ there exists a sequence of polynomials $(A_{k,N})_N$ whose degree $d_k$ depends only on $k$ so that for any $r\geq 1$ there is ${\delta}_r>0$ such that for all $N\geq1$ and $t\in[-{\delta}_r{\sigma}_N,{\delta}_r{\sigma}_N]$, \begin{equation}\label{FinStep.0} {\mathbb E}\left(e^{it(S_N-{\mathbb E}(S_N))/{\sigma}_N}\right)=e^{-t^2/2}\left(1+\sum_{k=1}^r \frac{A_{k,N}(t)}{{\sigma}_N^k}+\frac{t^{r+1}}{{\sigma}_N^{r+1}} O(1)\right). \end{equation} Moreover, the coefficients of $A_{k,N}$ are algebraic combinations of moments of the $X_m$'s and they are uniformly bounded in $N$. Furthermore, \begin{equation}\label{A 1,n .1} A_{1,N}(t)= -\frac{i}{6}{\gamma}_N t^3\,\,\text{ and }\,\,A_{2,N}(t)=\Lambda_{4}(\bar S_N){\sigma}_N^{-2}\frac{t^4}{4!}-\frac{1}{36}{\gamma}_N^2 t^6 \end{equation} where $\bar S_N=S_N-{\mathbb E}(S_N)$, ${\gamma}_N={\mathbb E}[(\bar S_N)^3]/{\sigma}_N^2$ and $\Lambda_{4}(\bar S_N)$ is the fourth cumulant of $\bar S_N$. \end{proposition} The proof is quite standard, so we just sketch the argument.
The idea is to fix some $B_2>B_1>0$, and to partition $\{1,...,N\}$ into intervals $I_1,...,I_{m_N}$ so that $\displaystyle B_1\leq \text{Var}(S_{I_l})\leq B_2 $ where for each $l$ we set $\displaystyle S_{I_l}=\sum_{j\in I_l}X_j$. It is clear that $m_N/{\sigma}_N^2$ is bounded away from $0$ and $\infty$ uniformly in $N$. Recall next that there are constants $C_p$, $p\geq2$ so that for any $n\geq1$ and $m\geq 0$ we have \begin{equation} \label{CenterMoments} \left\|\sum_{j=n}^{n+m}\big(X_j-{\mathbb E}(X_j)\big)\right\|_{L^p}\leq C_p\left(1+\left\|\sum_{j=n}^{n+m}\big(X_j-{\mathbb E}(X_j)\big)\right\|_{L^2}\right). \end{equation} This is a consequence of the multinomial theorem and some elementary estimates, and we refer the readers to either Lemma 2.7 in \cite{DS} or Theorem 6.17 in \cite{Pel} for such a result in much more general settings. Using the latter estimates we get that the $L^p$-norms of $S_{I_l}$ are uniformly bounded in $l$. This reduces the problem to the case when the variance of $X_n$ is uniformly bounded from below, and all the moments of $X_n-{\mathbb E}(X_n)$ are uniformly bounded. In this case, the proposition follows by considering the Taylor expansion of the function $\ln {\mathbb E}\big(e^{it(S_N-{\mathbb E}(S_N))/{\sigma}_N}\big)+\frac12t^2$, see \cite[\S XVI.6]{Feller}. \begin{proposition} \label{PrHalf} Given a square integrable random variable $X$, let $\bar{X}=X-{\mathbb E}(X).$ Then for each $h\in\mathbb{R}$ we have $$\left|{\mathbb E}(e^{ih \bar X})-1\right|\leq \frac12 h^2V(X).$$ \end{proposition} \begin{proof} Set $\varphi(h)={\mathbb E}(e^{ih \bar X})$. Then by the integral form of the second-order Taylor remainder we have $$ |\varphi(h)-\varphi(0)-h\varphi'(0)|=|\varphi(h)-\varphi(0)|=\left|\int_0^h(t-h)\varphi''(t)dt\right| \leq V(X)\int_{0}^{|h|}(|h|-t)dt= \frac12 h^2V(X). $$ \end{proof} \subsection{Non-resonant intervals.} \label{SSNonRes} As in almost all proofs of the LLT, the starting point in the proof of Theorem \ref{ThEdgeMN} (and Theorem \ref{IntIndThm}) is that for $k, N\in{\mathbb N}$ we have \begin{equation} \label{EqDual} 2\pi {\mathbb P}(S_N=k)=\int_{0}^{2\pi} e^{-itk}{\mathbb E}(e^{it S_N})dt. \end{equation} Denote ${\mathbb T}={\mathbb R}/2\pi {\mathbb Z}.$ Let $$ \Phi_N(t)={\mathbb E}(e^{it S_N})=\prod_{n=1}^N \phi_n(t) \quad \text{where}\quad \phi_n(t)={\mathbb E}(e^{it X_n}).$$ Divide ${\mathbb T}$ into intervals $I_j$ of small size $\delta$ such that each interval contains at most one resonant point, and this point lies strictly inside $I_j$; we call an interval resonant if it contains a resonant point. Then \begin{equation}\label{SplitInt} 2\pi {\mathbb P}(S_N=k)=\sum_{j}\int_{I_j} e^{-itk}{\mathbb E}(e^{it S_N})dt. \end{equation} We will consider the integrals appearing in the above sum individually. \begin{lemma}\label{Step1} There are constants $C,c>0$ which depend only on ${\delta}$ and $K$ so that for any non-resonant interval $I_j$ and $N\geq1$ we have $$ \int_{I_j} |\Phi_N(t)| dt\leq C e^{-c V_N}. $$ \end{lemma} \begin{proof} Let ${\hat q}_n, {\bar q}_n$ be the largest and the second largest values of ${\mathbb P}(X_n=j)$ and let ${\hat j}_n, {\bar j}_n$ be the corresponding values of $j$. Note that \begin{equation} \label{phiNSum} \phi_n(t)={\hat q}_n e^{it {\hat j}_n}+{\bar q}_n e^{it {\bar j}_n}+\sum_{l\neq {\hat j}_n, {\bar j}_n} {\mathbb P}(X_n=l) e^{i t l}. \end{equation} Since $I_j$ is non-resonant, the angle between $e^{it {\hat j}_n}$ and $e^{it {\bar j}_n}$ is uniformly bounded from below. Indeed, if this were not the case we would have $t {\bar j}_n-t{\hat j}_n\approx 2\pi l_n$ for some $l_n\in{\mathbb Z}.$ Then $t\approx \frac{2\pi l_n}{m_n}$ where $m_n={\bar j}_n-{\hat j}_n$, contradicting the assumption that $I_j$ is non-resonant.
Accordingly $\exists c_1>0$ such that $\displaystyle \left|e^{it {\hat j}_n}+e^{it {\bar j}_n}\right|\leq 2-c_1. $ Therefore $$ \left|{\hat q}_n e^{it {\hat j}_n}+{\bar q}_n e^{it {\bar j}_n}\right| \leq { ({\hat q}_n-{\bar q}_n)+{\bar q}_n \left| e^{it {\hat j}_n}+e^{it {\bar j}_n} \right|} \leq {\hat q}_n+{\bar q}_n-2c_1 {\bar q}_n. $$ Plugging this into \eqref{phiNSum}, we conclude that $|\phi_n(t)|\leq 1-2c_1 {\bar q}_n$ for $t\in I_j.$ Multiplying these estimates over $n$ and using that $1-x\leq e^{-x},\, x>0$, we get $$ |\Phi_N(t)|\leq e^{-2c_1\sum_n {\bar q}_n}. $$ Since $V(X_n)\leq c_2 {\bar q}_n$ for a suitable constant $c_2$ we can rewrite the preceding as \begin{equation}\label{NonResDec} |\Phi_N(t)|\leq e^{-c_3V_N},\, c_3>0. \end{equation} Integrating over $I_j$ we obtain the result. \end{proof} \subsection{Prokhorov estimates} Next we consider the case where $I_j$ contains a nonzero resonant point $t_j=\frac{2\pi l}{m}.$ \begin{lemma}\label{Step2} There is a constant $c_0$ which depends only on $K$ so that for any nonzero resonant point $t_j=2\pi l/m$ we have \begin{equation}\label{Roz0} \sup_{t\in I_j}|{\mathbb E}(e^{it S_N})|\leq e^{-c_0 M_N(m)}. \end{equation} Thus, for any $r\geq1$ there is a constant $R=R(r,K)$ such that if $M_N(m)\geq R \ln V_N$, then the integral $\int_{I_j} e^{-itk}{\mathbb E}(e^{it S_N})dt$ is $o(\sigma_N^{-r})$ uniformly in $k$, and so it only contributes to the error term. \end{lemma} \begin{proof} The estimate (\ref{Roz0}) follows from the arguments in \cite{Rozanov}, but for readers' convenience we recall its proof. Let $X$ be an integer-valued random variable so that $\|X\|_{L^\infty}\leq K$. Let $t_0=2\pi l/m$ be a nonzero resonant point, where $\gcd(l,m)=1$. Let $t\in{\mathbb T}$ be so that \begin{equation} \label{Neart0} |t-t_0|\leq{\delta}, \end{equation} where ${\delta}$ is a small positive number. Let $\phi_X(\cdot)$ denote the characteristic function of $X$. 
Since $x\leq e^{x-1}$ for any real $x$ we have \[ |\phi_X(t)|^2\leq e^{|\phi_X(t)|^2-1}. \] Next, we have \[ |\phi_X(t)|^2-1=\phi_X(t)\phi_X(-t)-1=\sum_{j=-2K}^{2K}\tilde P_j \left[\cos(tj)-1\right] \] where \[ \tilde P_j=\sum_{s}{\mathbb P}(X=s){\mathbb P}(X=j+s). \] Fix some $-2K\leq j\leq 2K$. We claim that if ${\delta}$ in \eqref{Neart0} is small enough and $j\not\equiv 0\text{ mod }m$ then for each integer $w$ we have $|t-2\pi w/j|\geq {\varepsilon}_0$ for some ${\varepsilon}_0>0$ which depends only on $K$. This follows from the fact that $-2K\leq j\leq 2K$ and that $2\pi w/j\not=t_0$ (and there is a finite number of resonant points). Therefore, \[ \cos(tj)-1\leq -{\delta}_0 \] for some ${\delta}_0>0$. On the other hand, if $j=km$ for some integer $k$ then with $w=lk$ we have \begin{eqnarray*} \cos(tj)-1=-2\sin^2(tj/2)=-2\sin^2\left((tj-2\pi w)/2\right)\\=-2\sin^2\left(j(t-t_0)/2\right)\leq -{\delta}_1(t-t_0)^2 \end{eqnarray*} for some ${\delta}_1>0$ (assuming that $|t-t_0|$ is small enough). We conclude that \[ |\phi_X(t)|^2-1\leq -{\delta}_0\sum_{j\in A}\tilde P_j-{\delta}_1(t-t_0)^2\sum_{j\in B}\tilde P_j \] where $A=A(X)$ is the set of $j$'s between $-2K$ and $2K$ so that $j\not\equiv 0\text{ mod }m$ and $B=B(X)$ is its complement in ${\mathbb Z}\cap[-2K,2K]$. Let $s_0$ be the most likely residue of $X$ mod $m$ and $s_1$ be the second most likely residue class. Since $$ {\mathbb P}(X\equiv s_0 \text{ mod }m)\geq \frac{1}{m} \quad\text{and}\quad {\mathbb P}(X\equiv s_1 \text{ mod }m)=q_m(X) $$ it follows that $\displaystyle \sum_{j\in A} \tilde P_j\geq \frac{q_m(X)}{m}.$ Combining this with the trivial bound $\displaystyle \sum_{j\in B} \tilde P_j\geq {\mathbb P}^2(X\equiv s_0\text{ mod }m)\geq \frac{1}{m^2}$ we obtain $$ |\phi_X(t)|\leq \exp\left[-\frac12\left(\frac{{\delta}_0 q_m(X)}{m} + \frac{{\delta}_1 (t-t_0)^2}{m^2}\right)\right].
$$ Applying the above with $t_0=t_j$ and $X=X_n$, $1\leq n\leq N$, we get that \begin{equation}\label{CharEstRoz} |\Phi_N(t)|\leq e^{-c_0M_N(m)-\bar{c}_0 N(t-t_j)^2 }\leq e^{-c_0M_N(m)} \end{equation} where $c_0, \bar c_0>0$ are constants. \end{proof} \begin{remark} Using the first inequality in \eqref{CharEstRoz} and arguing as in \cite[page 264]{Rozanov}, we can deduce that there are positive constants $C, c_1, c_2$ such that \begin{equation}\label{RozArg} \int_{I_j}|{\mathbb E}(e^{it S_N})|dt\leq C\left(e^{-c_1{\sigma}_N}+\frac{e^{-c_2M_N(m)}}{{\sigma}_N}\right). \end{equation} This estimate plays an important role in the proof of the SLLT in \cite{Rozanov}, but for our purposes the weaker bound \eqref{Roz0} is enough. Note also that in order to prove \eqref{Roz0} we could have just used the trivial inequality $\cos(tj)-1\leq 0$ when $j\equiv 0\text{ mod }m$, but we have decided to present this part from \cite{Rozanov} in full. \end{remark} \begin{remark}\label{R choice} Let $d_{{\mathcal{R}}}$ be the minimal distance between two different resonant points. Then, when ${\delta}<2d_{{\mathcal{R}}}$, we can take ${\delta}_0=1-\cos(d_{{\mathcal{R}}})$ in the proof of Lemma \ref{Step2}. Therefore, we can take $c_0=\frac{1-\cos(d_{{\mathcal{R}}})}{4K}$ in \eqref{Roz0}. Hence Lemma \ref{Step2} holds with $R(r,K)=\frac{r+1}{2c_0}.$ \end{remark} \vskip0.2cm \subsection{Proof of Theorem \ref{ThEdgeMN}} \label{Cmplt1} Fix some $r\geq1$. Lemmas \ref{Step1} and \ref{Step2} show that if $M_N\geq R(r,K)\ln V_N$, then all the integrals on the right-hand side of \eqref{SplitInt} are of order $o({\sigma}_N^{-r})$, except for the one corresponding to the resonant point $t_j=0$. That is, for any ${\delta}>0$ small enough, uniformly in $k$ we have $$2\pi {\mathbb P}(S_N=k)=\int_{-{\delta}}^{\delta} e^{-ih k}\Phi_N(h)dh+o({\sigma}_N^{-r}). $$ In order to complete the proof of Theorem \ref{ThEdgeMN}, we need to expand the above integral.
Making a change of variables $h\to h/{\sigma}_N$ and using Proposition \ref{PropEdg}, we conclude that if ${\delta}$ is small enough then $$ \int_{-{\delta}}^{\delta} e^{-ih k}\Phi_N(h)dh=$$ $${\sigma}_N^{-1}\int_{-{\delta}{\sigma}_N}^{{\delta}{\sigma}_N}e^{-ihk_N}e^{-h^2/2}\left(1+\sum_{u=1}^r \frac{A_{u,N}(h)}{{\sigma}_N^u}+\frac{h^{r+1}}{{\sigma}_N^{r+1}} O(1)\right)dh $$ where $k_N=\left(k-{\mathbb E}(S_N)\right)/{\sigma}_N$. Since the coefficients of the polynomials $A_{u,N}$ are uniformly bounded in $N$, we can just replace the above integral with the corresponding integral over all ${\mathbb R}$ (i.e. replace $\pm{\delta}{\sigma}_N$ with $\pm\infty$). Now the Edgeworth expansions are achieved using that for any nonnegative integer $q$ we have that $(it)^qe^{-t^2/2}$ is the Fourier transform of the $q$-th derivative of $\textbf{n}(t)=\frac{1}{\sqrt{2\pi}}e^{-t^2/2}$ and that for any real $a$, \begin{equation}\label{Fourir} \int_{-\infty}^\infty e^{-iat}\widehat{\textbf{n}^{(q)}}(t) dt=\textbf{n}^{(q)}(a)=\frac{1}{\sqrt{2\pi}}(-1)^{q}H_q(a)e^{-a^2/2} \end{equation} where $H_q(a)$ is the $q$-th Hermite polynomial. \section{Generalized Edgeworth expansions: Proof of Theorem~\ref{IntIndThm}} \label{ScGEE} \subsection{Contributions of resonant intervals.} Let $r\geq1$. As in the proof of Theorem \ref{ThEdgeMN}, our starting point is the equality \begin{equation} \label{EqDual} 2\pi {\mathbb P}(S_N=k)=\int_{0}^{2\pi} e^{-itk}{\mathbb E}(e^{it S_N})dt=\sum_{j}\int_{I_j} e^{-itk}{\mathbb E}(e^{it S_N})dt \end{equation} which holds for any $k\in{\mathbb N}$. We will consider the integrals appearing in the above sum individually. By Lemma \ref{Step1} the integrals over non-resonant intervals are of order $o({\sigma}_N^{-r})$, and so they can be disregarded. Moreover, in \S \ref{Cmplt1} we have expanded the integral over the resonant interval containing $0$.
Now we will see that in the case $M_N< R(r,K) \ln V_N$ the contribution of nonzero resonant points need not be negligible. Let $t_j=\frac{2\pi l}{m}$ be a nonzero resonant point so that $M_N(m)<R(r,K) \ln V_N$ and let $I_j$ be the resonant interval containing it. Theorem~\ref{IntIndThm} will follow from an appropriate expansion of the integral $$\int_{I_j} e^{-itk}{\mathbb E}(e^{it S_N})dt.$$ We need the following simple result, which for readers' convenience is formulated as a lemma. \begin{lemma}\label{EpsLem} There exists ${\bar\varepsilon}>0$ so that for each $n\geq1$ with $q_n(m)\leq {\bar\varepsilon}$ we have $|\phi_n(t_j)|\geq\frac12$. In fact, we can take ${\bar\varepsilon}=\frac1{4m}$. \end{lemma} \begin{proof} Recall that $t_j=2\pi l/m$. The lemma follows since for any random variable $X$ we have $\displaystyle |{\mathbb E}(e^{i t_j X})|=$ $$\left|e^{it_js(m,X)}-\sum_{u\not\equiv s(m,X)\text{ mod m}}\big(e^{it_j s(m,X)}-e^{it_j u}\big)P(X\equiv u\text{ mod } m)\right| $$ $$ \geq 1-2mq(m,X) $$ where $s(m,X)$ is the most likely value of $X\text{ mod }m$ and $q(m,X)$ is the second largest value among $P(X\equiv u\text{ mod } m)$, $u=0,1,2,...,m-1$. Therefore, we can take ${\bar\varepsilon}=\frac1{4m}$. \end{proof} Next, set ${\bar\varepsilon}=\frac{1}{8K}$ and let $N_0=N_0(N,t_j,{\bar\varepsilon})$ be the number of all $n$'s between $1$ and $N$ so that $q_n(m)>{\bar\varepsilon}$. Then $N_0\leq \frac{R \ln V_N}{{\bar\varepsilon}}$ because $M_N(m)\leq R \ln V_N.$ By permuting the indexes $n=1,2,...,N$ if necessary we can assume that $q_n(m)$ is non-increasing.
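The bound $|{\mathbb E}(e^{it_jX})|\geq 1-2mq(m,X)$ from the proof of Lemma \ref{EpsLem} is easy to confirm numerically. The sketch below (an illustration with randomly generated laws, not part of the argument) tests it over all nonzero resonant points $t_j=2\pi l/m$:

```python
import numpy as np

# Illustrative check of |E exp(i t_j X)| >= 1 - 2*m*q(m, X) for random laws
# on {0,...,6} and all nonzero resonant points t_j = 2*pi*l/m.
rng = np.random.default_rng(1)
support = np.arange(7)
for _ in range(200):
    p = rng.random(7)
    p /= p.sum()                             # a random probability vector
    for m in (2, 3, 4, 5):
        residue_mass = np.array([p[support % m == u].sum() for u in range(m)])
        q = np.sort(residue_mass)[-2]        # second largest residue mass
        for l in range(1, m):
            t = 2 * np.pi * l / m
            phi = np.sum(p * np.exp(1j * t * support))
            assert abs(phi) >= 1 - 2 * m * q - 1e-12
```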
After this permutation, $N_0$ is the largest index such that $q_{N_0}(m)>{\bar\varepsilon}$. Decompose \begin{equation} \label{NPert-Pert} \Phi_N(t)=\Phi_{N_0}(t) \Phi_{N_0, N}(t) \end{equation} where $\displaystyle \Phi_{N_0, N}(t)=\prod_{n=N_0+1}^N \phi_n(t).$ \begin{lemma}\label{Step3} If the length ${\delta}$ of $I_j$ is small enough then for any $t=t_j+h\in I_j$ and $N\geq1$ we have \[ \Phi_{N_0,N}(t)=\Phi_{N_0,N}(t_j)\Phi_{N_0,N}(h) \Psi_{N_0,N}(h) \] where $$ \Psi_{N_0,N}(h)=\exp\left[O(M_N(m))\sum_{u=1}^\infty (O(1))^u h^u\right]. $$ \end{lemma} \begin{proof} Denote $$ \mu_n={\mathbb E}(X_n), \quad {\bar X}_n=X_n-\mu_n, \quad {\bar\phi}_n(t)={\mathbb E}(e^{it {\bar X}_n}).$$ Let $j_n(m)$ be the most likely residue mod $m$ for $X_n.$ Decompose $$ {\bar X}_n=s_n+Y_n+Z_n$$ where $Z_n\in m{\mathbb Z}$, $s_n=j_n(m)-\mu_n$, so that ${\mathbb P}(Y_n\neq 0)\leq m q_n(m).$ Then for $t=t_j+h$, \begin{equation} \label{IndCharTaylor} {\bar\phi}_n(t)=e^{i t_j s_n} {\mathbb E}\left(e^{i t_j Y_n} e^{ih {\bar X}_n}\right)={\bar\phi}_n(t_j) \psi_n(h) \end{equation} where $$ \psi_n(h)= \left(1+ \frac{i h {\mathbb E}(e^{i t_j Y_n} {\bar X}_n)-\frac{h^2}{2} {\mathbb E}(e^{i t_j Y_n} ({\bar X}_n)^2)+\dots}{{\mathbb E}(e^{i t_j Y_n})}\right) .
$$ Next, using that for any $x\in(-1,1)$ we have $$1+x=e^{\ln (1+x)}=e^{x-x^2/2+x^3/3-...}$$ we obtain that for $h$ close enough to $0$, \begin{equation}\label{Expand} \psi_n(h)= \exp\left(\sum_{k=1}^\infty\frac{(-1)^{k+1}}{k}\left(\frac1{{\mathbb E}(e^{it_jY_n})}\sum_{q=1}^\infty \frac{(ih)^q}{q!}{\mathbb E}(e^{it_j Y_n}(\bar X_n)^q)\right)^k\right) \end{equation} \begin{eqnarray} =\exp\left(\sum_{k=1}^\infty\frac{(-1)^{k+1}}{k}\sum_{1\leq j_1,...,j_k}\frac1{({\mathbb E}(e^{it_jY_n}))^k}\prod_{r=1}^{k}\frac{(ih)^{j_r}}{j_r!}{\mathbb E}(e^{it_j Y_n}(\bar X_n)^{j_r})\right)\nonumber\\= \exp\left(\sum_{u=1}^\infty\left(\sum_{k=1}^{u}\frac{(-1)^{k+1}}{k}\sum_{j_1+...+j_k=u}\,\prod_{r=1}^{k}\frac{{\mathbb E}(e^{it_j Y_n}(\bar X_n)^{j_r})}{{\mathbb E}(e^{it_j Y_n})j_r!}\right)(ih)^u\right).\nonumber \end{eqnarray} Observe next that \[ {\mathbb E}[e^{it_j Y_n}(\bar X_n)^{j_r}]={\mathbb E}\left[(e^{it_j Y_n}-1)\big((\bar X_n)^{j_r}-{\mathbb E}[(\bar X_n)^{j_r}]\big)\right]+ {\mathbb E}[(\bar X_n)^{j_r}]{\mathbb E}(e^{it_j Y_n}) \] and so with $C=2K$, we have \[ \frac{{\mathbb E}[e^{it_j Y_n}(\bar X_n)^{j_r}]}{{\mathbb E}(e^{it_j Y_n})}=O(q_n(m))O(C^{j_r})+{\mathbb E}[(\bar X_n)^{j_r}]. \] Plugging this into (\ref{Expand}) and using that for $h$ small enough, \begin{equation*} \exp\left[ \sum_{u=1}^\infty\left(\sum_{k=1}^u\frac{(-1)^{k+1}}k\sum_{j_1+...+j_k=u}\prod_{r=1}^k\frac{{\mathbb E}({\bar X}_n^{j_r})}{j_r!}\right)(ih)^u\right]={\mathbb E}\left(e^{ih {\bar X}_n}\right) \end{equation*} we conclude that $$ \psi_n(h)={\mathbb E}(e^{ih\bar X_n}) \exp\left[\sum_{u=1}^\infty(O(1))^uO(q_n(m))h^u\right].$$ Therefore, \[ \Phi_{N_0,N}(t)=\Phi_{N_0,N}(t_j)\Phi_{N_0,N}(h) \Psi_{N_0,N}(h) \] where $\displaystyle \Psi_{ N_0, N}(h)=\exp\left[O(M_N(m))\sum_{u=1}^\infty (O(1))^u h^u\right]. 
$ \end{proof} \begin{remark}\label{CoeffRem} We will see in \S\ref{Fin} that the coefficients of the polynomials appearing in Theorem~\ref{IntIndThm} depend on the coefficients of the power series $\Psi_{N_0, N}(h)$ (see, in particular, \eqref{Cj(k)}). The first term in this series is $\displaystyle ih \sum_{n={N_0+1}}^N a_{n,j}$, where \begin{equation}\label{a n,j} a_{n,j}=\frac{{\mathbb E}[(e^{it_j Y_n}-1)\bar X_n]}{{\mathbb E}(e^{i t_j Y_n})}=\frac{{\mathbb E}(e^{it_j X_n}\bar X_n)}{{\mathbb E}(e^{it_j X_n})} \end{equation} while the second term is $\displaystyle \frac{h^2}{2} \sum_{n={N_0+1}}^N b_{n,j}$, where \begin{equation} \label{SecondTerm} b_{n,j}=\frac{{\mathbb E}[(e^{it_j Y_n}-1)\bar X_n]^2}{{\mathbb E}(e^{i t_j Y_n})^2}-\frac{{\mathbb E}[(e^{it_j Y_n}-1)((\bar X_n)^2-V(X_n))]}{{\mathbb E}(e^{i t_j Y_n})} \end{equation} $$ =a_{n,j}^2-\frac{{\mathbb E}\big(e^{it_j X_n}(\bar X_n)^2\big)}{{\mathbb E}(e^{it_j X_n})}. $$ In Section \ref{Sec2nd} we will use (\ref{a n,j}) to compute the coefficients of the polynomials from Theorem \ref{IntIndThm} in the case $r=2$, and \eqref{SecondTerm} is one of the main ingredients for the computation in the case $r=3$ (which will not be explicitly discussed in this manuscript). \end{remark} The next step in the proof of Theorem \ref{IntIndThm} is the following. \begin{lemma}\label{Step4} For $t=t_j+h\in I_j$ we can decompose \begin{equation}\label{S4} \Phi_{N_0}(t)=\Phi_{N_0}(t_j+h)=\sum_{l=0}^L \frac{\Phi_{N_0}^{(l)}(t_j)}{l!} h^l+O\left((h \ln V_N)^{L+1}\right). \end{equation} \end{lemma} \begin{proof} The lemma follows from the observation that the derivatives of $\Phi_{N_0}$ satisfy $|\Phi_{N_0}^{(k)}(t)|\leq O(N_0^k)\leq (C \ln V_N)^k$. \end{proof} \subsection{Completing the proof}\label{Fin} Recall \eqref{EqDual} and consider a resonant interval $I_j$ which does not contain $0$ such that $M_N(m)\leq R\ln{\sigma}_N$. Set $U_j=[-u_j,v_j]=I_j-t_j$. Let $N_0$ be as described below Lemma \ref{EpsLem}. 
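The equality of the two expressions for $a_{n,j}$ in \eqref{a n,j} can also be verified numerically. The sketch below is illustrative only: the law on $\{0,\dots,5\}$ is arbitrary, and $Y=(X-j_0)\bmod m$ is one admissible realization of the decomposition ${\bar X}=s+Y+Z$ (with $Z\in m{\mathbb Z}$ and $Y=0$ exactly on the most likely residue class):

```python
import numpy as np

# Compare the two expressions for a_{n,j} (illustrative law on {0,...,5},
# m = 3, t_j = 2*pi/3; Y = (X - j0) mod m is one admissible choice).
p = np.array([0.05, 0.4, 0.1, 0.15, 0.2, 0.1])   # arbitrary example law
x = np.arange(6)
m = 3
tj = 2 * np.pi / m
mu = np.sum(p * x)
xbar = x - mu
residue_mass = np.array([p[x % m == u].sum() for u in range(m)])
j0 = int(residue_mass.argmax())                  # most likely residue mod m
Y = (x - j0) % m                                 # Y = 0 iff X = j0 mod m
lhs = np.sum(p * (np.exp(1j * tj * Y) - 1) * xbar) / np.sum(p * np.exp(1j * tj * Y))
rhs = np.sum(p * np.exp(1j * tj * x) * xbar) / np.sum(p * np.exp(1j * tj * x))
assert abs(lhs - rhs) < 1e-12
```

The two expressions agree because $e^{it_jX}=e^{it_jj_0}e^{it_jY}$ (the $Z$-part contributes a factor $1$), the common phase cancels in the ratio, and ${\mathbb E}({\bar X})=0$.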
Denote \begin{equation} \label{SNN0} S_{N_0,N}=S_N-S_{N_0},\, S_0=0, \end{equation} $$V_{N_0,N}=\text{Var}(S_N-S_{N_0})=V_N-V_{N_0}\quad\text{and} \quad {\sigma}_{N_0,N}=\sqrt{V_{N_0,N}}.$$ Then \begin{equation}\label{Vars} V_{N_0,N}=V_N+O(\ln V_N)=V_N(1+o(1)). \end{equation} Denote $h_{N_0,N}=h/{\sigma}_{N_0,N}.$ By \eqref{FinStep.0}, if $|h_{N_0,N}|$ is small enough then \begin{equation}\label{FinStep} {\mathbb E}(e^{ih_{N_0,N} S_{N_0,N}})= \end{equation} $$e^{ih_{N_0,N}{\mathbb E}(S_{N_0,N})}e^{-h^2/2}\left(1+\sum_{k=1}^r \frac{A_{k,N_0,N}(h)}{{\sigma}_{N_0,N}^k}+\frac{h^{r+1}}{{\sigma}_{N_0,N}^{r+1}} O(1)\right)$$ where $A_{k,N_0,N}$ are polynomials with bounded coefficients (the degree of $A_{k,N_0,N}$ depends only on $k$). Let us now evaluate $\displaystyle \int_{I_j}e^{-itk}{\mathbb E}(e^{it S_N})dt. $ By Lemma \ref{Step3}, \begin{equation} \label{ResInt} \int_{I_j}e^{-itk}\Phi_N(t)dt= \end{equation} $$ e^{-it_j k}\Phi_{N_0,N}(t_j)\int_{U_j}e^{-ihk}\Phi_{N_0}(t_j+h)\Phi_{N_0,N}(h) \Psi_{N_0, N}(h)\; dh. $$ Therefore, it is enough to expand the integral on the RHS of \eqref{ResInt}. Fix a large positive integer $L$ and plug \eqref{S4} into \eqref{ResInt}. Note that for $N$ large enough, $h_0$ small enough and $|h|\leq h_0$, Proposition \ref{PrHalf} and \eqref{Vars} show that there exist positive constants $c_0, c$ such that \begin{equation}\label{expo} |\Phi_{N_0,N}(h)|=|{\mathbb E}(e^{ih S_{N_0,N}})|\leq e^{-c_0(V_N-V_{N_0})h^2}\leq e^{-cV_N h^2}. \end{equation} Thus, the contribution coming from the term $O\left((h \ln V_N)^{L+1}\right)$ in the right hand side of (\ref{S4}) is at most of order \[ V_N^{R{\delta}}(\ln V_N)^{L+1}\int_{-\infty}^\infty h^{L+1}e^{-c V_N h^2}dh \] where ${\delta}$ is the diameter of $I_j$.
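A sub-Gaussian bound of the form \eqref{expo} can be illustrated numerically. The sketch below is not part of the proof: it takes i.i.d. steps uniform on $\{0,1,2\}$ (so $V_N=2N/3$) and the constant $c=1/10$, our illustrative choice, which happens to work for $|h|\leq 1/2$:

```python
import numpy as np

# Minimal numeric version of the bound (X_n i.i.d. uniform on {0,1,2},
# so V_N = 2N/3); the constant c = 1/10 is an illustrative choice,
# valid here for |h| <= 1/2.
N = 50
V_N = 2 * N / 3
for h in np.linspace(-0.5, 0.5, 101):
    phi = np.mean(np.exp(1j * h * np.arange(3)))   # one-step char. function
    assert abs(phi) ** N <= np.exp(-0.1 * V_N * h ** 2) + 1e-12
```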
Changing variables $x={\sigma}_N h$, where ${\sigma}_N=\sqrt{V_N}$ we get that the latter term is of order $(\ln V_N)^{L+1}{\sigma}_{N}^{-(L+1-2R{\delta})}$ and so when $L$ is large enough we get that this term is $o({\sigma}_N^{-r-1})$ (alternatively, we can take $L=r$ and ${\delta}$ to be sufficiently small). This means that it is enough to expand each integral of the form \begin{equation} \label{HLInt} \int_{U_j}e^{-ih k}h^l\Phi_{N_0,N}(h) \Psi_{N_0, N}(h) dh \end{equation} where $l=0,1,...,L$ (after changing variables the above integral is divided by ${\sigma}_{N_0,N}^{l+1}$). Next, Lemma \ref{Step3} shows that for any $d\in {\mathbb N}$ we have \begin{equation} \label{EqOrderD} \Psi_{N_0, N}(h) =1+\sum_{u=1}^{d}C_{u,N}h^u+h^{d+1}O(1+M_N(m)^{d+1})|V_N|^{O(|h|)}, \end{equation} where $C_{u,N}=C_{u,N, t_j}$ are $O(M_{N}^u(m))=O((\ln V_N)^{u})$. Note that, with $a_{n,j}$ and $b_{n,j}$ defined in Remark \ref{CoeffRem}, we have \begin{equation}\label{C 1 N} C_{1,N}=i\sum_{n=N_0+1}^{N}a_{n,j} \end{equation} and \[ C_{2,N}=\frac12\sum_{n=N_0+1}^{N}b_{n,j}-\frac 12\left(\sum_{n=N_0+1}^{N}a_{n,j}\right)^2. \] Take $d$ large enough and plug \eqref{EqOrderD} into \eqref{HLInt}. Using (\ref{expo}), we get again that the contribution of the term $$h^{d+1}O(1+M_N(m)^{d+1})|V_N|^{O(|h|)}h^l\Phi_{N_0,N}(h)$$ to the above integral is $o({\sigma}_N^{-r})$. Thus, it is enough to expand each term of the form \[ \int_{U_j}e^{-ih k}h^{q}\Phi_{N_0,N}(h)dh \] where $0\leq q\leq L+d$. Using (\ref{FinStep}) and making the change of variables $h\to h/{\sigma}_{N_0,N}$ it is enough to expand \begin{equation}\label{MainInt} \int_{-\infty}^\infty e^{-ih (k-{\mathbb E}[S_{N_0,N}])/{\sigma}_{N_0,N}}h^qe^{-h^2/2}\left(1+\sum_{w=1}^r \frac{A_{w,N_0,N}(h)}{{\sigma}_{N_0,N}^w}+\frac{h^{r+1}}{{\sigma}_{N_0,N}^{r+1}} O(1)\right) \frac{dh}{{\sigma}_{N_0, N}}.
\end{equation} This is achieved by using that $(it)^qe^{-t^2/2}$ is the Fourier transform of the $q$-th derivative of $\textbf{n}(t)=\frac{1}{\sqrt{2\pi}}e^{-t^2/2}$ and that for any real $a$, \begin{equation} \int_{-\infty}^\infty e^{-iat}\widehat{\textbf{n}^{(q)}}(t) dt=\textbf{n}^{(q)}(a)=\frac{1}{\sqrt{2\pi}}(-1)^{q}H_q(a)e^{-a^2/2} \end{equation} where $H_q(a)$ is the $q$-th Hermite polynomial (cf. \eqref{Fourir}). Note that in the above expansion we get polynomials in the variable $\displaystyle k_{N_0,N}=\frac{k-{\mathbb E}[S_N-S_{N_0}]}{{\sigma}_{N_0,N}}$, not in the variable $k_N=\frac{k-{\mathbb E}(S_N)}{{\sigma}_N}$. Since $k_{N_0,N}=k_N{\alpha}_{N_0,N}+O(\ln {\sigma}_N/{\sigma}_N)$, where ${\alpha}_{N_0,N}={\sigma}_{N}/{\sigma}_{N_0,N}=O(1)$, the binomial theorem shows that such polynomials can be rewritten as polynomials in the variable $k_N$ whose coefficients are uniformly bounded in $N$. We also remark that in the above expansions we get the exponential terms \[ e^{-\frac{(k-a_{N_0,N})^2}{2(V_N-V_{N_0})}} \quad\text{where}\quad a_{N_0,N}={\mathbb E}[S_N-S_{N_0}] \] and not $e^{-(k-a_N)^2/2V_N}$ (as claimed in Theorem \ref{IntIndThm}). In order to address this, fix some ${\varepsilon}<1/2$. Note that for $|k-a_{N_0,N}|\geq V_N^{\frac12+{\varepsilon}}$ we have \[ e^{-\frac{(k-a_{N_0,N})^2}{2(V_N-V_{N_0})}}=o(e^{-cV_N^{2{\varepsilon}}}) \quad \text{and} \quad e^{-\frac{(k-a_{N_0,N})^2}{2V_N}}=o(e^{-cV_N^{2{\varepsilon}}}) \text{ for some }c>0. \] Since both terms are $o({\sigma}_N^{-s})$ for any $s$, it is enough to explain how to replace $\displaystyle e^{-\frac{(k-a_{N_0,N})^2}{2(V_N-V_{N_0})}} $ with $\displaystyle e^{-\frac{(k-a_{N})^2}{2V_N}}$ when $|k-a_{N_0,N}|\leq V_N^{\frac12+{\varepsilon}}$ (in which case $\displaystyle |k-a_N|=O(V_N^{\frac12+{\varepsilon}})$).
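The Hermite-polynomial identity \eqref{Fourir} used in this step, $\textbf{n}^{(q)}(a)=\frac{1}{\sqrt{2\pi}}(-1)^qH_q(a)e^{-a^2/2}$, is easy to verify numerically (the sketch below approximates the derivative by central finite differences and uses the probabilists' Hermite polynomials):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e

# Numeric verification (illustrative) that the q-th derivative of
# n(t) = exp(-t^2/2)/sqrt(2*pi) equals (-1)^q * H_q(t) * n(t),
# H_q being the probabilists' Hermite polynomial.
def n(t):
    return np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)

def n_deriv(q, a, h=1e-3):
    # q-th order central finite difference (adequate for small q)
    return sum((-1) ** k * math.comb(q, k) * n(a + (q / 2 - k) * h)
               for k in range(q + 1)) / h ** q

for q in range(4):
    for a in (-1.0, 0.0, 0.7):
        Hq = hermite_e.hermeval(a, [0] * q + [1])   # H_q(a)
        assert abs(n_deriv(q, a) - (-1) ** q * Hq * n(a)) < 1e-4
```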
For such $k$'s we can write \begin{equation}\label{ExpTrans1} \exp\left[-\frac{(k-a_{N_0,N})^2}{2(V_N-V_{N_0})}\right]= \end{equation} $$ \exp\left[-\frac{(k-a_{N_0,N})^2}{2V_N}\right]\; \exp\left[-\frac{(k-a_{N_0,N})^2 V_{N_0}}{2V_N(V_N-V_{N_0})}\right]. $$ Since $\displaystyle \frac{(k-a_{N_0,N})^2 V_{N_0}}{2V_N(V_N-V_{N_0})} =O\left(V_N^{-(1-3{\varepsilon})}\right)$, for any $d_1$ we have \begin{equation}\label{ExTrans1.1} \exp\left[-\frac{(k-a_{N_0,N})^2 V_{N_0}}{2V_N(V_N-V_{N_0})}\right]= \end{equation} $$\sum_{j=0}^{d_1}\frac{(-1)^j V_{N_0}^j}{2^j(V_N-V_{N_0})^j j!} \left(\frac{(k-a_{N_0,N})^2}{{\sigma}_N^2}\right)^j+O(V_N^{-(d_1+1)(1-3{\varepsilon})}).$$ Note that (using the binomial formula) the first term on the above right hand side is a polynomial in the variable $(k-a_{N})/{\sigma}_N$ whose coefficients are uniformly bounded in $N$. Next we analyze the first factor in the RHS of \eqref{ExpTrans1}. As before, it is enough to consider $k$'s such that $|k-a_N|\leq V_N^{\frac12+{\varepsilon}}$ for a sufficiently small ${\varepsilon}.$ We have \begin{equation}\label{Centring} \exp\left[-\frac{(k-a_{N_0,N})^2}{2V_N}\right]= \end{equation} $$ \exp\left[-\frac{(k-a_N)^2}{2V_N}\right] \exp\left[-\frac{2(k-a_N)a_{N_0}+a_{N_0}^2}{2V_N}\right]. $$ Note that $\frac{2(k-a_N)a_{N_0}+a_{N_0}^2}{2V_N} =k_N\beta_{N_0,N}+\theta_{N_0,N}$, where $$\beta_{N_0,N}=\frac{a_{N_0}}{{\sigma}_N}=O\left(\frac{\ln {\sigma}_N}{{\sigma}_N}\right) \;\text{ and }\; \theta_{N_0,N}=\frac{a_{N_0}^2}{2V_N}=O\left(\frac{\ln^2{\sigma}_N}{V_N}\right).$$ Approximating $e^{-\frac{2(k-a_N)a_{N_0}+a_{N_0}^2}{2V_N}}$ by a polynomial of a sufficiently large degree $d_2$ in the variable $\frac{2(k-a_N)a_{N_0}+a_{N_0}^2}{2V_N}$ completes the proof of existence of polynomials $P_{a,b,N}$ claimed in the theorem (the Taylor remainder in the last approximation is of order $\displaystyle O\left(V_N^{-d_2(\frac12-{\varepsilon})}\right)$, so we can take $d_2=4(r+1)$ assuming that ${\varepsilon}$ is small enough).
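The splitting \eqref{ExpTrans1} is an exact algebraic identity, and a short Taylor expansion of the second factor already gives high accuracy. A numeric sketch (the magnitudes of $V_N$, $V_{N_0}$ and $k-a_{N_0,N}$ below are made up, chosen only so that $k-a_{N_0,N}\sim V_N^{1/2+{\varepsilon}}$):

```python
import math

# Illustrative check of the exponential splitting and its truncated Taylor
# series; V, V0, c are made-up magnitudes with c of order V^(1/2 + eps).
V, V0, c = 10_000.0, 12.0, 150.0
lhs = math.exp(-c ** 2 / (2 * (V - V0)))
split = math.exp(-c ** 2 / (2 * V)) * math.exp(-c ** 2 * V0 / (2 * V * (V - V0)))
assert abs(lhs - split) < 1e-12                 # the splitting is exact

x = -c ** 2 * V0 / (2 * V * (V - V0))           # small correction exponent
taylor = sum(x ** j / math.factorial(j) for j in range(4))
assert abs(math.exp(x) - taylor) < abs(x) ** 4  # a few terms suffice
```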
Finally, let us show that the coefficients of the polynomials $P_{a,b,N}$ constructed above are uniformly bounded in $N$. In fact, we will show that for each nonzero resonant point $t_j=2\pi l/m$, the coefficients of the polynomials coming from integration over $I_j$ are of order $$O\left((1+M_N^{q_0}(m))e^{-c_0M_N(m)}\right)$$ for some $c_0>0$, where $q_0=q_0(r)$ depends only on $r$. Observe that the additional contribution to the coefficients of the polynomials coming from the transition between the variables $k_N$ and $k_{N_0,N}$ is uniformly bounded in $N$. Hence we only need to show that the coefficients of the (original) polynomials in the variable $k_{N_0,N}$ are uniformly bounded in $N$. The possible largeness of these coefficients can only come from the terms $C_{u,N,t_j}$, for $u=0,1,2,...,d$ which are of order $M_N^u(m)$, respectively. However, the corresponding terms are multiplied by terms of the form $\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(\ell)}(t_j)$ for certain $\ell$'s which are uniformly bounded in $N$ (see also \eqref{Cj(k)}). We conclude that there are constants $W_j\in{\mathbb N}$ and $a_j\in{\mathbb N}$ which depend only on $t_j$ and $r$ so that the coefficients of the resulting polynomials are composed of a sum of at most $W_j$ terms of order $(M_N(m))^{a_j}\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(\ell)}(t_j)$, where $\ell\leq E(r)$ for some $E(r)\in{\mathbb N}$. Next, we have \begin{equation} \label{DerComb} \Phi_{N_0}^{(\ell)}(t_j)\Phi_{N_0, N}(t_j)= \end{equation} $$\sum_{n_1,\dots, n_k\leq N_0; \atop \ell_1+\dots+\ell_k=\ell} \gamma_{\ell_1,\dots, \ell_k} \left(\prod_{q=1}^k \phi_{n_q}^{(\ell_q)}(t_j) \right) \left[\prod_{n\leq N, \; n\notin\{n_1,\dots,n_k\}} \phi_n(t_j)\right]$$ where $\gamma_{\ell_1,\dots, \ell_k}$ are bounded coefficients of combinatorial nature. Using (\ref{Roz0}) we see that for each $n_1,\dots, n_k$ the product in the square brackets is at most $C e^{-c_0 M_N(m)+O(1)}$ for some $C, c_0>0$.
Hence $$|\Phi_{N_0}^{(\ell)}(t_j)\Phi_{N_0, N}(t_j)|\leq \hat{C} N_0^{\ell} \; e^{-c_0 M_N(m)},\,\,\hat C>0.$$ Now, observe that the definition of $N_0$ gives $M_N(m)\geq {\varepsilon}_0 N_0$, ${\varepsilon}_0>0$. Therefore $\displaystyle |\Phi_{N_0}^{(\ell)}(t_j)\Phi_{N_0, N}(t_j)|\leq C_0 M_N^\ell (m)e^{-c_0 M_N(m)}$, and so each one of the above coefficients is of order $M_N^{\ell'}(m)e^{-c_0 M_N(m)}$ for some $\ell'$ which does not depend on $N$. \qed \begin{remark} The transition between the variables $k_{N_0,N}$ and $k_{N}$ changes the monomials of the polynomials $P_{a,b,N}$, $a\not=0$ coming from integration over $I_j$, for $t_j\not=0$ into monomials of the form $\frac{c_N a_{N_0}^{j_1}{\sigma}_{N_0}^{j_2}k_N^{j_3}}{{\sigma}_N^{u}}$ for some bounded sequence $(c_N)$, $j_1,j_2,j_3\geq0$ and $u\in{\mathbb N}$. As we have explained, the coefficients of these monomials are uniformly bounded. Still, it seems more natural to consider such monomials as part of the polynomial $P_{a,b+u,N}$. In this case we still get polynomials with bounded coefficients since $a_{N_0}$ and ${\sigma}_{N_0}$ are both $O(N_0)$, $N_0=O(M_{N}(m))$ and $c_N$ contains a term of the form $\Phi_{N_0}^{(\ell)}(t_j)\Phi_{N_0, N}(t_j)$. \end{remark} \begin{remark} As can be seen from the proof, the resulting expansions might contain terms corresponding to ${\sigma}_N^{-s}$ for $s>r$. Such terms can be disregarded. For $\frac{|k-a_N|}{{\sigma}_N}\leq V_N^{\varepsilon}$ this follows because the coefficients of our expansions are $O(1)$ and for $\frac{|k-a_N|}{{\sigma}_N}\geq V_N^{\varepsilon}$ this follows from \eqref{expo}. In practice, some of the polynomials $P_{a,b, N}$ with $b\leq r$ might have coefficients which are $\displaystyle o({\sigma}_N^{b-r})$ (e.g. when $b+u>r$ in the last remark) so they also can be disregarded. The question of when the terms $P_{a,b, N}$ may be disregarded is at the heart of the proof of Theorem \ref{r Char} given in the next section.
\end{remark} \subsection{A summary} The proofs of Proposition \ref{PrLLT-SLLT}, Theorem \ref{ThLLT} and Theorem \ref{r Char} will be based on careful analysis of the formulas of the polynomials from Theorem \ref{IntIndThm}. For this purpose, it will be helpful to summarize the main conclusions from the proof of Theorem \ref{IntIndThm}. Let $r\geq1$ and $t_j=2\pi l/m$ be a nonzero resonant point. Then the arguments in the proof of Theorem \ref{IntIndThm} yield that the contribution to the expansion coming from $t_j$ is \begin{equation}\label{Cj(k)} \textbf{C}_j(k):= \end{equation} $$ e^{-it_j k}\Phi_{N_0,N}(t_j)\sum_{s\leq r-1}\left(\sum_{u+l=s}\frac{\Phi_{N_0}^{(l)}(t_j)C_{u,N}}{l!}\right)\int_{U_j}e^{-ihk}h^{s}\Phi_{N_0,N}(h)dh $$ where $U_j=I_j-t_j$, $C_{u, N}$ are given by \eqref{EqOrderD} and $C_{0,N}=1$. When $t_j=0$ then it is sufficient to consider only $s=0$, $N_0=0$ and the contribution is just the integral \[ \int_{-{\delta}}^{\delta} e^{-ihk}\Phi_{N}(h)dh \] where ${\delta}$ is small enough. As in (\ref{MainInt}), changing variables we can replace the integral corresponding to $h^s$ with \begin{eqnarray}\label{s} {\sigma}_{N_0,N}^{-s-1}\int_{-\infty}^\infty e^{-ih (k-{\mathbb E}[S_{N_0,N}])/{\sigma}_{N_0,N}}h^s e^{-h^2/2}\\\times\left(1+\sum_{w=1}^r \frac{A_{w,N_0,N}(h)}{{\sigma}_{N_0,N}^w}+\frac{h^{r+1}}{{\sigma}_{N_0,N}^{r+1}} O(1)\right)dh.\nonumber \end{eqnarray} Once that was established, the proof was completed using \eqref{Fourir} and some estimates whose purpose was to make the transition between the variables $k_{N_0,N}$ and $k_N$. \section{Uniqueness of trigonometric expansions.}\label{Sec5} In several proofs we will need the following result. \begin{lemma}\label{Lemma} Let $r\geq1$ and $d\geq0$. Set ${\mathcal{R}}_0={\mathcal{R}}\cup\{0\}$ where ${\mathcal{R}}$ is the set of nonzero resonant points.
For any $t_j\in{\mathcal{R}}_0$, let $A_{0,N}(t_j)$,...,$A_{d,N}(t_j)$ be sequences so that, uniformly in $k$ such that $\displaystyle k_N=\frac{k-{\mathbb E}(S_N)}{{\sigma}_N}=O(1)$ we have \[ \sum_{t_j\in{\mathcal{R}}_0}e^{-it_j k}\left(\sum_{m=0}^{d}k_N^m A_{m,N}(t_j)\right) =o({\sigma}_N^{-r}). \] Then for all $m$ and $t_j$ \begin{equation}\label{A def} A_{m,N}(t_j)=o({\sigma}_N^{-r}). \end{equation} In particular the polynomials from the definition of the (generalized) Edgeworth expansions are unique up to terms of order $o({\sigma}_N^{-r})$. \end{lemma} \begin{proof} The proof is by induction on $d$. Let us first set $d=0$. Then, for any $k\in{\mathbb N}$ we have \begin{equation}\label{d=0} \sum_{t_j\in{\mathcal{R}}_0}e^{-it_j k}A_{0,N}(t_j)=o({\sigma}_N^{-r}). \end{equation} Let $T$ be the number of nonzero resonant points, and let us relabel them as $\{x_1,...,x_T\}$. Consider the vector $$\mathfrak{A}_N=(A_{0,N}(0), A_{0,N}(x_1),...,A_{0,N}(x_T)).$$ Let ${\mathcal V}$ be the transpose of the Vandermonde matrix of the distinct numbers ${\alpha}_j=e^{-ix_j}, j=0,1,2,...,T$ where $x_0:=0$. Then ${\mathcal V}$ is invertible and by considering $k=0,1,2,...,T$ in (\ref{d=0}) we see that (\ref{d=0}) holds true if and only if \[ \mathfrak{A}_N={\mathcal V}^{-1}o({\sigma}_N^{-r})=o({\sigma}_N^{-r}). \] Alternatively, let $Q$ be the least common multiple of the denominators of $t_j\in{\mathcal{R}}$. Let $a_N(p)=A_{0,N}(2\pi p/Q)$ if $2\pi p/Q$ is a resonant point and $0$ otherwise. Then for $m=0,1,...,Q-1$ we have \[ \hat a_N(m):=\sum_{p=0}^{Q-1}a_N(p)e^{-2\pi i pm/Q}=o({\sigma}_N^{-r}). \] Therefore, by the inversion formula of the discrete Fourier transform, \[ a_N(p)=Q^{-1}\sum_{m=0}^{Q-1} \hat a_N(m) e^{2\pi i m p/Q}=o({\sigma}_N^{-r}). \] Assume now that the theorem is true for some $d\geq 0$ and any sequences $A_{0,N}(t_j),...,A_{d,N}(t_j)$.
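The discrete-Fourier inversion underlying the base case $d=0$ can be demonstrated in a few lines (an illustrative sketch with $Q=6$ and random coefficients): the values $A_{0,N}(2\pi p/Q)$ are linear combinations of the exponential sums over $k=0,\dots,Q-1$, so if all the sums are small then so is every coefficient.

```python
import numpy as np

# Illustrative version of the base case d = 0: the values A_0(t_j) at the
# points t_j = 2*pi*p/Q are recovered from the exponential sums over
# k = 0,...,Q-1 by the inverse discrete Fourier transform.
rng = np.random.default_rng(2)
Q = 6
a = rng.normal(size=Q) + 1j * rng.normal(size=Q)       # A_0 at 2*pi*p/Q
sums = np.array([sum(a[p] * np.exp(-2j * np.pi * p * k / Q) for p in range(Q))
                 for k in range(Q)])
recovered = np.array([np.sum(sums * np.exp(2j * np.pi * np.arange(Q) * p / Q)) / Q
                      for p in range(Q)])
assert np.allclose(recovered, a)
```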
Let $A_{0,N}(t_j),...,A_{d+1,N}(t_j)$ be sequences so that uniformly in $k$ such that $\displaystyle k_N:=\frac{k-{\mathbb E}(S_N)}{{\sigma}_N}=O(1)$ we have \begin{equation}\label{I} \sum_{t_j\in{\mathcal{R}}_0}e^{-it_j k}\left(\sum_{m=0}^{d+1}k_N^m A_{m,N}(t_j)\right)=o({\sigma}_N^{-r}). \end{equation} Let us replace $k$ with $k'=k+[{\sigma}_N]Q$, where $Q$ is the least common multiple of all the denominators of the nonzero $t_j$'s. Then $e^{-it_jk}=e^{-it_j k'}$. Thus, \[ \sum_{t_j\in{\mathcal{R}}_0}e^{-it_j k}\left(\sum_{m=0}^{d+1}(k_N'^m-k_N^{m})A_{m,N}(t_j)\right)=o({\sigma}_N^{-r}). \] Set $L_N=[{\sigma}_N]Q/{\sigma}_N\thickapprox Q$. Then, since $k_N'=k_N+L_N$, the binomial theorem shows that the LHS above equals \[ L_N\sum_{t_j\in{\mathcal{R}}_0}e^{-it_j k}\left(\sum_{s=0}^{d}k_N^s{\mathcal A}_{s,N}(t_j)\right) \] where \[ {\mathcal A}_{s,N}(t_j)=\sum_{m=s+1}^{d+1}\binom{m}{s}A_{m,N}(t_j)L_N^{m-s-1}. \] By the induction hypothesis we get that \[ {\mathcal A}_{s,N}(t_j)=o({\sigma}_N^{-r}) \] for any $s=0,1,...,d$. In particular \[ {\mathcal A}_{d,N}(t_j)=(d+1)A_{d+1,N}(t_j)=o({\sigma}_N^{-r}), \] so that $A_{d+1,N}(t_j)=o({\sigma}_N^{-r})$. Substituting this into \eqref{I} we can disregard the last term $A_{d+1,N}(t_j)$. Using the induction hypothesis with $A_{0,N}(t_j),A_{1,N}(t_j),...,A_{d,N}(t_j)$ we obtain \eqref{A def}. \end{proof} \section{First order expansions}\label{FirstOrder} In this section we will consider the case $r=1$. By \eqref{Cj(k)} and \eqref{s}, we see that the contribution coming from the integral over $I_j$ is \[ {\sigma}_{N_0,N}^{-1}e^{-it_j k}\Phi_{N}(t_j)\sqrt {2\pi} e^{-k_{N_0,N}^2/2}+o({\sigma}_N^{-1}) \] where $k_{N_0,N}=(k-{\mathbb E}(S_{N_0,N}))/{\sigma}_{N_0,N}$. Now, using the arguments at the end of the proof of Theorem \ref{IntIndThm} when $r=1$ we can just replace $e^{-k_{N_0,N}^2/2}$ with $e^{-(k-{\mathbb E}(S_N))^2/2V_N}$ (since it is enough to consider the case when $k_{N_0,N}$ and $k_N$ are of order $V_N^{\varepsilon}$).
Therefore, taking into account that ${\sigma}_{N_0,N}^{-1}-{\sigma}_{N}^{-1}=O({\sigma}_N^{-2}N_0)$ we get that \begin{equation}\label{r=1} \sqrt{2\pi}\,{\mathbb P}(S_N=k)= \end{equation} $$\left(1+\sum_{t_j\in {\mathcal{R}}}e^{-it_j k}\Phi_{N}(t_j)\right){\sigma}_{N}^{-1}e^{-(k-{\mathbb E}[S_N])^2/2V_N}+o(\sigma_N^{-1}). $$ Here ${\mathcal{R}}$ is the set of all nonzero resonant points $t_j=2\pi l_j/m_j$. Indeed, the contribution of the resonant points satisfying $M_N(m_j)\leq R(r,K)\ln V_N$ is analyzed in \S \ref{Fin}. The contribution of the other nonzero resonant points $t$ is $o({\sigma}_N^{-1})$ due to \eqref{Roz0} in Section \ref{ScEdgeLogProkh}. In particular, \eqref{Roz0} implies that $\Phi_N(t)=o({\sigma}_N^{-1})$ so adding the points with $M_N(m_j)\geq R(r,K)\ln V_N$ only changes the sum in the RHS of \eqref{r=1} by $o(\sigma_N^{-1}). $ \begin{corollary}\label{FirstOrCor} The local limit theorem holds if and only if $\displaystyle \max_{t\in {\mathcal{R}}}|\Phi_N(t)|=o(1)$. \end{corollary} \begin{proof} It follows from (\ref{r=1}) that the LLT holds true if and only if for any $k$ we have $$ \sum_{t_j\in {\mathcal{R}}}e^{-it_j k}\Phi_{N}(t_j)=o(1). $$ Now, the corollary follows from Lemma \ref{Lemma}. \end{proof} Before proving Theorem \ref{ThLLT} we recall a standard fact which will also be useful in the proofs of Theorems \ref{Thm SLLT VS Ege} and \ref{Thm Stable Cond}. \begin{lemma} \label{LmUnifFourier} Let $\{\mu_N\}$ be a sequence of probability measures on ${\mathbb Z}/m{\mathbb Z}$ and $\{\gamma_N\}$ be a positive sequence.
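The first-order expansion \eqref{r=1} can be tested on the simplest example where the LLT fails: $X_n$ i.i.d. uniform on $\{0,2\}$. Here $\Phi_N(\pi)=1$, while $\Phi_N$ at the remaining nonzero resonant points tends to $0$, so \eqref{r=1} predicts twice the Gaussian density at even $k$ and $0$ at odd $k$. A numeric sketch (illustrative only; $S_N=2\cdot\mathrm{Binomial}(N,1/2)$ makes the exact probabilities computable):

```python
import math

# First-order expansion in a case where the LLT fails: X_n i.i.d. uniform
# on {0, 2}.  The prediction from the expansion is
#   P(S_N = k) ~ (1 + (-1)^k) * n(k_N) / sigma_N,
# i.e. twice the Gaussian density at even k, zero at odd k.
N = 400
sigma = math.sqrt(N)            # Var(X_n) = 1
mean = N                        # E[S_N] = N

def exact(k):                   # S_N = 2 * Binomial(N, 1/2)
    return 0.0 if k % 2 else math.comb(N, k // 2) / 2 ** N

def predicted(k):
    kN = (k - mean) / sigma
    return (1 + (-1) ** k) * math.exp(-kN ** 2 / 2) / (math.sqrt(2 * math.pi) * sigma)

for k in range(N - 10, N + 11):
    assert abs(exact(k) - predicted(k)) < 0.1 / sigma
```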
Then $\mu_N(a)=\frac{1}{m}+O(\gamma_N)$ for all $a\in {\mathbb Z}/m{\mathbb Z}$ if and only if $\hat\mu_N(b)=O(\gamma_N)$ for all $b\in \left({\mathbb Z}/m{\mathbb Z}\right)\setminus \{0\}$ where $\hat\mu$ is the Fourier transform of $\mu.$ \end{lemma} \begin{proof} If $\mu_N(a)=\frac{1}{m}+O(\gamma_N)$ then $$ \hat\mu_N(b)=\sum_{a=0}^{m-1} \mu_N(a) e^{2\pi i ab/m}= \sum_{a=0}^{m-1} \frac{1}{m} e^{2\pi iab/m}+O(\gamma_N)=O(\gamma_N).$$ Next $\hat\mu_N(0)=1$ since $\mu_N$ are probability measures. Hence if $\hat\mu_N(b)=O(\gamma_N)$ for all $b\in \left({\mathbb Z}/m{\mathbb Z}\right)\setminus \{0\}$ then $$ \mu_N(a)=\frac{1}{m} \sum_{b=0}^{m-1} \hat\mu_N(b) e^{-2\pi i ba/m}= \frac{1}{m} \left[1+\sum_{b=1}^{m-1} \hat\mu_N(b) e^{-2\pi i ba/m}\right]=\frac{1}{m}+O(\gamma_N)$$ as claimed. \end{proof} \begin{proof}[Proof of Theorem \ref{ThLLT}] The equivalence of conditions (b) and (c) comes from the fact that for non-resonant points the characteristic function decays faster than any power of $\sigma_N$ (see \eqref{NonResDec}). The equivalence of (a) and (c) is due to Corollary \ref{FirstOrCor}. Finally, the equivalence between (c) and (d) comes from Lemma \ref{LmUnifFourier}. \end{proof} \begin{remark} Theorem \ref{ThLLT} can also be deduced from \cite[Corollary 1.4]{Do}.
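Lemma \ref{LmUnifFourier} in code, on a perturbed uniform law (an illustrative sketch; the bound $2m\gamma$ below is a crude explicit constant standing in for the $O(\gamma_N)$'s):

```python
import numpy as np

# Illustrative check of the lemma: perturb the uniform law on Z/mZ by
# O(gamma); then all nonzero Fourier coefficients are O(gamma), and
# every mu(a) is within O(gamma) of 1/m.
m, gamma = 5, 1e-3
rng = np.random.default_rng(3)
mu = np.full(m, 1 / m) + gamma * rng.uniform(-1, 1, size=m)
mu[-1] = 1 - mu[:-1].sum()                 # keep it a probability vector
hat = np.array([np.sum(mu * np.exp(2j * np.pi * np.arange(m) * b / m))
                for b in range(m)])
assert abs(hat[0] - 1) < 1e-12             # hat(0) = 1 always
assert all(abs(hat[b]) <= 2 * m * gamma for b in range(1, m))
assert all(abs(mu[a] - 1 / m) <= 2 * m * gamma for a in range(m))
```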
Indeed the corollary says that either the LLT holds or there is an integer $h\in (0, 2K)$ and a bounded sequence $\{a_N\}$ such that the limit $$ {\mathbf{p}}(j)=\lim_{N\to\infty} {\mathbb P}(S_N-a_N=j \text{ mod } h) $$ exists and moreover if $k-a_N\equiv j \text{ mod }h$ then $$ \sigma_N {\mathbb P}(S_N=k)={\mathbf{p}}(j) h \mathfrak{g}\left(\frac{k-{\mathbb E}(S_N)}{\sigma_N}\right)+o\left(1\right).$$ Thus in the second case the LLT holds iff ${\mathbf{p}}(j)=\frac{1}{h}$ for all $j$ which is equivalent to $S_N$ being asymptotically uniformly distributed mod $h$ and also to the Fourier transform of ${\mathbf{p}}(j)$ regarded as the measure on ${\mathbb Z}/(h{\mathbb Z})$ being the $\delta$ measure at 0. Thus the conditions (a), (c) and (d) of the theorem are equivalent. Also by the results of \cite[Section 2]{Do} (see also \cite[\S 3.3.2]{DS}) if ${\mathbb E}\left(e^{i\xi S_N}\right)$ does not converge to 0 for some nonzero $\xi$ then $\displaystyle \left(\frac{2\pi}{\xi} \right) {\mathbb Z}\bigcap 2\pi {\mathbb Z}$ is a lattice in ${\mathbb R}$ which implies that $\xi$ is resonant, so condition (b) of the theorem is also equivalent to the other conditions. \end{remark} \begin{proof}[Proof of Proposition \ref{PrLLT-SLLT}.] Let $S_N$ satisfy LLT. Fix $m\in \mathbb{N}$ and suppose that $\displaystyle \sum_n q_n(m)<\infty.$ Let $s_n$ be the most likely residue of $X_n$ mod $m$. Then for $t=\frac{2\pi l}{m}$ we have $$\phi_n(t)=e^{i t s_n}-\sum_{j\not\equiv s_n\; \text{mod}\; m} {\mathbb P}(X_n\equiv j\; \text{mod m})\left(e^{its_n}-e^{itj}\right),$$ so that $1\geq |\phi_n(t)|\geq 1-2 m q_n(m).$ It follows that for each ${\varepsilon}>0$ there is $N({\varepsilon})$ such that $\displaystyle \left|\prod_{n=N({\varepsilon})+1}^\infty \phi_n(t)\right|>1-{\varepsilon}. $ Applying this for ${\varepsilon}=\frac{1}{2}$ we have \begin{equation} \label{PhiNHalf} \liminf_{N\to\infty} \left|\Phi_{N(1/2), N}(t)\right|\geq\frac{1}{2}>0.
\end{equation} On the other hand the LLT implies that \begin{equation} \label{PhiLim} \lim_{N\to\infty} \Phi_N(t)=0. \end{equation} Since $\Phi_N=\Phi_{N(1/2)} \Phi_{N(1/2), N}$, \eqref{PhiNHalf} and \eqref{PhiLim} imply that $\Phi_{N(1/2)}(t)=0.$ Since $\displaystyle \Phi_{N(1/2)}\left(\frac{2\pi l}{m}\right)=\prod_{n=1}^{N(1/2)} \phi_n\left(\frac{2\pi l}{m}\right)$ we conclude that there exists $n_l\leq N(1/2)$ such that $\phi_{n_l}(\frac{2\pi l}{m})=0.$ Hence $Y=X_{n_1}+X_{n_2}+\dots+X_{n_{m-1}}$ satisfies ${\mathbb E}\left(e^{2\pi i (k/m)Y}\right)=0$ for $k=1,\dots,m-1.$ By Lemma \ref{LmUnifFourier} both $Y$ and $S_N$ for $N\geq N(1/2)$ are uniformly distributed mod $m$. This proves the proposition. \end{proof} \section{Characterizations of Edgeworth expansions of all orders.} \label{ScCharacterization} \subsection{Derivatives of the non-perturbative factor.} Next we prove the following result. \begin{proposition}\label{Thm} Fix $r\geq1,$ and assume that $M_N\leq R(r, K)\ln{\sigma}_N$ (possibly along a subsequence). Then Edgeworth expansions of order $r$ hold true (i.e. (\ref{EdgeDef}) holds for such $N$'s) iff for each $t_j\in{\mathcal{R}}$ and $0\leq\ell<r$ (along the underlying subsequence) we have \begin{equation}\label{Cond} {\sigma}_N^{r-\ell-1}\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(\ell)}(t_j)=o(1). \end{equation} \end{proposition} \begin{proof} First, in view of \eqref{Cj(k)} and \eqref{expo}, it is clear that the condition (\ref{Cond}) is sufficient for expansions of order $r$. Let us now prove that the condition (\ref{Cond}) is necessary for the expansion of order $r.$ We will use induction on $r$. For $r=1$ (see \eqref{r=1}) our expansions read \[ {\mathbb P}(S_N=k)={\sigma}_N^{-1}e^{-k_N^2/2} \left[1+\sum_{t_j\in {\mathcal{R}}}e^{-it_j k}\Phi_N(t_j)\right]+o({\sigma}_N^{-1}).
\] Therefore if \[ {\mathbb P}(S_N=k)={\sigma}_N^{-1}e^{-k_N^2/2}P_N(k_N)+o({\sigma}_N^{-1}) \] for some polynomial $P_N$ then Lemma \ref{Lemma} tells us that, in particular, $\Phi_{N}(t_j)=o(1)$ for each $t_j\in{\mathcal{R}}$. Let us assume now that the necessity part in Proposition \ref{Thm} holds for $r'=r-1$ and prove that it holds for $r$. We will use the following lemma. \begin{lemma}\label{LemInd} Assume that for some $t_j\in{\mathcal{R}}$, \begin{equation}\label{Ind} {\sigma}_{N}^{r-2-l}\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(l)}(t_j)=o(1),\, l=0,1,...,r-2. \end{equation} Then, up to an $o({\sigma}_N^{-r})$ error term, the contribution of $t_j$ to the generalized Edgeworth expansions of order $r$ is \begin{equation}\label{Val0} e^{-it_j k}e^{-k_{N}^2/2}\left(\frac{\Phi_{N}(t_j)}{{\sigma}_N}+\sum_{q=2}^{r}\frac{\mathscr H_{N,q}(k_{N})}{{\sigma}_N^{q}}\right) \end{equation} with \begin{equation} \label{DefCH} \mathscr H_{N,q}(x)=\mathscr H_{N,q}(x;t_j)={\mathcal H}_{N,q,1}(x)+{\mathcal H}_{N,q,2}(x)+{\mathcal H}_{N,q,3}(x)+{\mathcal H}_{N,q,4}(x) \end{equation} where $$ {\mathcal H}_{N, q,1}(x)= \frac{(i)^{q-1}H_{q-1}(x)\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(q-1)}(t_j)}{(q-1)!},$$ $$ {\mathcal H}_{ N,q, 2}(x)= \frac{(i)^{q-1}H_{q-1}(x)\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(q-2)}(t_j)C_{1,N,t_j}}{(q-2)!}, $$ $${\mathcal H}_{N,q,3}(x)= \frac{a_{N_0}(i)^{q-2}H'_{q-2}(x)\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(q-2)}(t_j)}{(q-2)!},$$ $$ {\mathcal H}_{N,q,4}(x)=-\frac{xa_{N_0}(i)^{q-2}H_{q-2}(x)\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(q-2)}(t_j)}{(q-2)!}, $$ and $H_q$ are Hermite polynomials. Here $C_{1,N,t_j}$ is given by \eqref{C 1 N} when $M_N(m)\leq R(r,K)\ln{\sigma}_N$, and $C_{1,N,t_j}=0$ when $M_N(m)>R(r,K)\ln{\sigma}_N.$ (Note that in either case $C_{1,N,t_j}=O(M_N(m))=O(\ln{\sigma}_N)$.)
As a consequence, when the Edgeworth expansions of order $r$ hold true and \eqref{Ind} holds, then uniformly in $k$ such that $k_N=O(1)$ we have \begin{equation}\label{Val} \frac{\Phi_{N}(t_j)}{{\sigma}_N}+\sum_{q=2}^{r}\frac{\mathscr H_{N,q}(k_{N};t_j)}{{\sigma}_N^{q}}=o({\sigma}_N^{-r}). \end{equation} \end{lemma} The proof of the lemma will be given in \S \ref{SSSummingUp} after we finish the proof of Proposition \ref{Thm}. By the induction hypothesis the condition \eqref{Ind} holds true. Let us prove now that for $\ell=0,1,2,...,r-1$ and $t_j\in{\mathcal{R}}$ we have \[ \Phi_{N_0,N}(t_j)\Phi_{N_0}^{(\ell)}(t_j)=o({\sigma}_N^{-r+1+\ell}). \] Let us write \[ \frac{\Phi_{N}(t_j)}{{\sigma}_N}+\sum_{q=2}^{r}\frac{\mathscr H_{N,q}(k_{N})}{{\sigma}_N^{q}}=\sum_{m=0}^{r-1}k_N^m A_{m,N}(t_j). \] Applying Lemmas \ref{Lemma} and \ref{LemInd} we get that \[ A_{m,N}(t_j)=o({\sigma}_N^{-r}) \] for each $0\leq m\leq r-1$ and $t_j\in{\mathcal{R}}$. Fix $t_j\in{\mathcal{R}}$. Using Lemma \ref{LemInd} and the facts that the Hermite polynomials $H_{u}$ have the same parity as $u$ and that their leading coefficient is $1$, we have \begin{eqnarray} \label{AEq1} A_{r-1,N}(t_j)={\sigma}_N^{-r}(i)^{r-2}\Bigg(i\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(r-1)}(t_j)/(r-1)\\+\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(r-2)}(t_j)(iC_{1,N}-a_{N_0})\Bigg)=o({\sigma}_N^{-r}) \notag \end{eqnarray} and \begin{eqnarray} \label{AEq2} \quad A_{r-2,N}(t_j)={\sigma}_N^{-r+1}(i)^{r-3}\Bigg(i\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(r-2)}(t_j)/(r-2)\\+\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(r-3)}(t_j)(iC_{1,N}-a_{N_0})\Bigg)=o({\sigma}_N^{-r}). \notag \end{eqnarray} Since $\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(r-3)}(t_j)=o({\sigma}_N^{-1})$, \eqref{AEq2} yields \[ \Phi_{N_0,N}(t_j)\Phi_{N_0}^{(r-2)}(t_j)=o({\sigma}_N^{-1}\ln{\sigma}_N). \] Plugging this into \eqref{AEq1} we get \[ \Phi_{N_0,N}(t_j)\Phi_{N_0}^{(r-1)}(t_j)=o(1). \] Therefore we can just disregard $\mathscr H_{N,r}(k_N;t_j)$ since its coefficients are of order $o({\sigma}_N^{-r})$.
Since the term $\mathscr H_{N,r}(k_N;t_j)$ no longer appears, repeating the above arguments with $r-1$ in place of $r$ we have \begin{eqnarray*} A_{r-3,N}(t_j)={\sigma}_N^{-r+2}(i)^{r-4}\Bigg(i\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(r-3)}(t_j)/(r-3)\\+\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(r-4)}(t_j)(iC_{1,N}-a_{N_0})\Bigg)=o({\sigma}_N^{-r}). \end{eqnarray*} Since $\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(r-4)}(t_j)=o({\sigma}_N^{-2})$, the above asymptotic equality yields that \[ \Phi_{N_0,N}(t_j)\Phi_{N_0}^{(r-3)}(t_j)=o({\sigma}_N^{-2}\ln{\sigma}_N). \] Plugging this into \eqref{AEq2} we get \[ \Phi_{N_0,N}(t_j)\Phi_{N_0}^{(r-2)}(t_j)=o({\sigma}_N^{-1}). \] Hence, we can also disregard the term $\mathscr H_{N,r-1}(k_N;t_j)$. Proceeding this way we get that $\displaystyle \Phi_{N_0,N}(t_j)\Phi_{N_0}^{(\ell)}(t_j)=o({\sigma}_N^{\ell+1-r}) $ for any $0\leq \ell<r$. \end{proof} Before proving Lemma \ref{LemInd}, let us state the following result, which is a consequence of Proposition \ref{Thm} and \eqref{Roz0}. \begin{corollary}\label{CorNoDer} Suppose that for each nonzero resonant point $t$ we have $\displaystyle \inf_{n}|\phi_n(t)|>0$. Then for any $r$, the sequence $S_N$ obeys Edgeworth expansions of order $r$ if and only if $\Phi_N(t)=o({\sigma}_N^{1-r})$ for each nonzero resonant point $t$. \end{corollary} \subsection{Proof of Lemma \ref{LemInd}.} \label{SSSummingUp} \begin{proof} First, because of \eqref{Ind}, for each $0\leq s\leq r-1,$ the terms indexed by $l<s-1$ in \eqref{Cj(k)} are of order $o({\sigma}_N^{-r})$ and so they can be disregarded. Therefore, we only need to consider the terms indexed by $l=s$ and $l=s-1$. For such $l$, using again \eqref{Ind} we can disregard all the terms in \eqref{s} indexed by $w\geq1$, since the resulting terms are of order $o({\sigma}_N^{-w-r}\ln{\sigma}_N)=o({\sigma}_N^{-r})$.
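For the reader's convenience, here is the elementary estimate behind the replacement of ${\sigma}_{N_0,N}^{-1}$ by ${\sigma}_{N}^{-1}$ performed in the next step (using that, by the independence of $S_{N_0}$ and $S_{N_0,N}$, one has $V_N=V_{N_0}+V_{N_0,N}$, i.e. ${\sigma}_N^2-{\sigma}_{N_0,N}^2=V_{N_0}$):
\[
{\sigma}_{N_0,N}^{-1}-{\sigma}_{N}^{-1}
=\frac{{\sigma}_N-{\sigma}_{N_0,N}}{{\sigma}_N\,{\sigma}_{N_0,N}}
=\frac{V_{N_0}}{{\sigma}_N\,{\sigma}_{N_0,N}\,({\sigma}_N+{\sigma}_{N_0,N})}
=O\left(\frac{V_{N_0}}{{\sigma}_N^{3}}\right),
\]
since ${\sigma}_{N_0,N}$ and ${\sigma}_N$ are comparable.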
Now, since ${\sigma}_{N_0,N}^{-1}-{\sigma}_{N}^{-1}=O(V_{N_0}/{\sigma}_N^3)$ we can replace ${\sigma}_{N_0,N}^{-1}$ with ${\sigma}_{N}^{-1}$ in \eqref{Cj(k)}, as the remaining terms are of order $o({\sigma}_N^{-r-1})$. Therefore, using \eqref{Fourir} we get the following contributions from $t_j\in{\mathcal{R}}$, \begin{equation*} e^{-it_j k}\; e^{-k_{N_0,N}^2/2}\left(\frac{\Phi_{N}(t_j)}{{\sigma}_N}+\sum_{q=2}^{r}\frac{{\mathcal H}_{N,q}(k_{N_0,N})}{{\sigma}_N^{q}}\right) \end{equation*} where ${\mathcal H}_{N,q}(x)={\mathcal H}_{N,q,1}(x)+{\mathcal H}_{N,q,2}(x)$ and ${\mathcal H}_{N,q,j}, j=1,2$ are defined after \eqref{DefCH}. Note that when $x=O(1)$ and $q<r$, \begin{equation}\label{Order} \frac{{\mathcal H}_{N,q,1}(x)}{{\sigma}_N^q}=o({\sigma}_N^{-r+1})\,\,\text { and }\,\,\frac{{\mathcal H}_{N,q,2}(x)}{{\sigma}_N^q}=o({\sigma}_N^{-r}\ln{\sigma}_N), \end{equation} while when $q=r$, \begin{equation}\label{Order.1} \frac{{\mathcal H}_{N,r,1}(x)}{{\sigma}_N^r}=O({\sigma}_N^{-r}\ln{\sigma}_N)\,\,\text { and }\,\,\frac{{\mathcal H}_{N,r,2}(x)}{{\sigma}_N^r}=o({\sigma}_N^{-r}\ln{\sigma}_N). \end{equation} Next \[ k_{N_0,N}=(1+\rho_{N_0,N})k_N+\frac{a_{N_0}}{{\sigma}_N}+\theta_{N_0,N} \] where $\rho_{N_0,N}={\sigma}_{N}/{\sigma}_{N_0,N}-1=O(\ln{\sigma}_N/{\sigma}^2_N)$ and $$\theta_{N_0,N}=a_{N_0}\left(\frac{1}{{\sigma}_{N_0,N}}-\frac{1}{{\sigma}_N}\right)=O(\ln^2{\sigma}_N/{\sigma}_N^3).$$ Hence, when $|k_{N_0,N}|\leq{\sigma}_N^{{\varepsilon}}$ (and so $\displaystyle k_N=O({\sigma}_N^{{\varepsilon}})$) for some ${\varepsilon}>0$ small enough, then for each $m\geq 1$ we have \[ k_{N_0,N}^m=k_{N}^m+mk_{N}^{m-1}a_{N_0}/{\sigma}_N+o({\sigma}_N^{-1}). \] Therefore, \eqref{Order} and \eqref{Order.1} show that upon replacing $H_{q-1}(k_{N_0,N})$ with $H_{q-1}(k_{N})$ the only additional term is \[ \frac{a_{N_0}(i)^{q-1}H'_{q-1}(k_N)\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(q-1)}(t_j)}{(q-1)!{\sigma}_N^{q+1}} \] for $q=2,3,...,r-1$.
We thus get that the contribution of $t_j$ is \begin{equation*} e^{-it_j k} \;e^{-k_{N_0,N}^2/2}\left(\frac{\Phi_{N}(t_j)}{{\sigma}_N}+\sum_{q=2}^{r}\frac{{\mathcal C}_{N,q}(k_{N})}{{\sigma}_N^{q}}\right) \end{equation*} where \begin{equation*} {\mathcal C}_{N,q}(x)={\mathcal H}_{N,q}(x)+ \frac{a_{N_0}(i)^{q-2}H'_{q-2}(x)\Phi_{N_0,N}(t_j) \Phi_{N_0}^{(q-2)}(t_j)}{(q-2)!}. \end{equation*} Note that ${\mathcal C}_{N,2}(\cdot)={\mathcal H}_{N,2}(\cdot)$. Finally, we can replace $e^{-k_{N_0,N}^2/2}$ with \[ (1-k_Na_{N_0}/{\sigma}_N)e^{-k_{N}^2/2} \] since all other terms in the transition from $e^{-k_{N_0,N}^2/2}$ to $e^{-k_{N}^2/2}$ are of order $o({\sigma}_N^{-1})$ (see (\ref{ExpTrans1}) and (\ref{ExTrans1.1})). The factor $-k_Na_{N_0}/{\sigma}_N$ shifts the $u$-th order term, multiplied by $-k_Na_{N_0}$, to the $(u+1)$-th order term, for $u=1,2,...,r-1$. Next, relying on \eqref{Order} and \eqref{Order.1}, we see that after being multiplied by $k_Na_{N_0}/{\sigma}_N$, the second term ${\mathcal H}_{N,q,2}(k_N)$ from the definition of ${\mathcal H}_{N,q}(k_N)$ is of order ${\sigma}_N^q\, o({\sigma}_N^{-r-1}\ln^2{\sigma}_N)$, and so this product can be disregarded. Similarly, we can ignore the additional contribution coming from multiplying the second term from the definition of ${\mathcal C}_{N,q}(k_N)$ by $-k_Na_{N_0}/{\sigma}_N$ (since this term is of order ${\sigma}_N^q\, o({\sigma}_N^{-r}\ln{\sigma}_N)$). We conclude that, up to a term of order $o({\sigma}_N^{-r})$, the total contribution of $t_j$ is \begin{equation*} e^{-it_j k} \; e^{-k_{N}^2/2}\left(\frac{\Phi_{N}(t_j)}{{\sigma}_N}+\sum_{q=2}^{r}\frac{\mathscr H_{N,q}(k_{N}; t_j)}{{\sigma}_N^{q}}\right) \end{equation*} where $\displaystyle \mathscr H_{N,q}(x;t_j)={\mathcal C}_{N,q}(x)-\frac{x a_{N_0}(i)^{q-2}H_{q-2}(x) \Phi_{N_0,N}(t_j)\Phi_{N_0}^{(q-2)}(t_j)}{(q-2)!}, $ which completes the proof of \eqref{Val0}. Next we prove \eqref{Val}.
On the one hand, by assumption we have Edgeworth expansions of order $r$, and, on the other hand, we have the expansions from Theorem \ref{IntIndThm}. Therefore, the difference between the two must be $o({\sigma}_N^{-r})$. Since the usual Edgeworth expansions contain no terms corresponding to nonzero resonant points, applying Lemma \ref{Lemma} and \eqref{Val0} we obtain \eqref{Val}. \end{proof} Note that the formulas of Lemma \ref{LemInd} together with the already proven Proposition \ref{Thm} give the following result. \begin{corollary} \label{CrFirstNonEdge} Suppose that $\mathbb{E}(S_N)$ is bounded, $S_N$ admits the Edgeworth expansion of order $r-1$, and either (a) for some ${\bar\varepsilon}\leq 1/(8K)$ we have $N_0=N_0(N,t_j,{\bar\varepsilon})=0$ for each nonzero resonant point $t_j$, or (b) $\displaystyle \varphi:=\min_{t\in{\mathcal{R}}}\inf_{n}|\phi_n(t)|>0$. Then $$ \sqrt{2\pi} \mathbb{P}(S_N=k)$$ $$=e^{-k_N^2/2} \left[{\mathcal E}_r(k_N)+ \sum_{t_j\in{\mathcal{R}}}\left(\frac{\Phi_N(t_j)}{{\sigma}_N}+\frac{ik_N C_{1,N,t_j}\Phi_N(t_j)}{{\sigma}_N^2}\right) e^{-i t_j k}\right]+o(\sigma_N^{-r})$$ where ${\mathcal E}_r(\cdot)$ is the Edgeworth polynomial of order $r$ (i.e. the contribution of $t=0$), and we recall that $$iC_{1,N,t_j}=-\sum_{n=1}^N\frac{{\mathbb E}(e^{it_j X_n}\bar X_n)}{{\mathbb E}(e^{it_j X_n})}.$$ \end{corollary} \begin{proof} Part (a) holds since under the assumption that $N_0=0$ all terms ${\mathcal H}_{N,q,j}$ in \eqref{DefCH} except ${\mathcal H}_{N, 2, 2}$ vanish. Part (b) holds since in this case the argument proceeds similarly to the proof of Theorem \ref{IntIndThm} if we set $N_0=0$ for any $t_j$ (since we only needed $N_0$ to obtain a positive lower bound on $|\phi_n(t_j)|$ for $t_j\in{\mathcal{R}}$ and $N_0<n\leq N$).
\end{proof} \begin{remark} Observe that $\displaystyle {\sigma}_N^{-1}\gg |C_{1, N, {t_j}}|{\sigma}_N^{-2},$ so if the conditions of the corollary are satisfied but $|\Phi_N(t_j)|\leq c {\sigma}_N^{1-r}$ (possibly along a subsequence), then the leading correction to the Edgeworth expansion comes from $$ e^{-k_N^2/2} \sum_{t_j\in{\mathcal{R}}}\left(\frac{\Phi_N(t_j)}{{\sigma}_N} \right).$$ Thus Corollary \ref{CrFirstNonEdge} strengthens Corollary \ref{CorNoDer} by computing the leading correction to the Edgeworth expansion when the expansion does not hold. \end{remark} \subsection{Proof of Theorem \ref{r Char}} We will use the following. \begin{lemma}\label{Lem} Let $t_j$ be a nonzero resonant point, $r>1$ and suppose that $M_N\leq R\ln{\sigma}_N$, $R=R(r,K)$ and that $|{\mathbb E}(S_N)|=O(\ln{\sigma}_N)$. Then (\ref{Cond}) holds for all $0\leq \ell<r$ if and only if \begin{equation}\label{CondDer} |\Phi_{N}^{(\ell)}(t_j)|=o\left({\sigma}_N^{-r+\ell+1}\right) \end{equation} for all $0\leq \ell<r$. \end{lemma} \begin{proof} Let us first assume that (\ref{Cond}) holds. Recall that \begin{equation} \label{PhiNProduct} \Phi_N(t)=\Phi_{N_0}(t) \Phi_{N_0, N}(t) \end{equation} with \begin{equation} \label{TripleProduct} \Phi_{N_0, N}(t)=\Phi_{N_0, N} (t_j)\Phi_{N_0, N}(h) \Psi_{N_0, N}(h) \end{equation} where $t=t_j+h$ and \begin{equation} \label{TripleFactorization} \Psi_{N_0, N}(h)=\exp\left[O(M_N(m))\sum_{u=1}^\infty (O(1))^u h^u\right]. \end{equation} For $\ell=0$ the result reduces to \eqref{PhiNProduct}. For larger $\ell$'s we have \begin{equation}\label{Relation} \Phi_{N}^{(\ell)}(t_j)=\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(\ell)}(t_j)+\sum_{k=0}^{\ell-1}\binom{\ell}{k}\Phi_{N_0,N}^{(\ell-k)}(t_j)\Phi_{N_0}^{(k)}(t_j). \end{equation} Fix some $k<\ell$. 
Then by \eqref{TripleProduct}, \begin{eqnarray*} \Phi_{N_0, N}^{(\ell-k)}(t_j)=\Phi_{N_0,N}(t_j)\sum_{u=0}^{\ell-k}\binom{\ell-k}{u}\Phi_{N_0,N}^{(u)}(0)\Psi_{N_0,N}^{(\ell-k-u)}(0)\\=O(\ln^{\ell-k}{\sigma}_N)\Phi_{N_0,N}(t_j) \end{eqnarray*} where we have used that $\bar S_{N_0,N}=S_{N_0,N}-{\mathbb E}(S_{N_0,N})$ satisfies $|{\mathbb E}[(\bar S_{N_0,N})^q]|\leq C_q{\sigma}_{N_0,N}^{2q}$ (see \eqref{CenterMoments}). Therefore \begin{equation}\label{Thereofre} \Phi_{N_0,N}^{(\ell-k)}(t_j)\Phi_{N_0}^{(k)}(t_j)=O(\ln^{\ell-k}{\sigma}_N)\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(k)}(t_j). \end{equation} Finally, by (\ref{Cond}) we have \[ \Phi_{N_0,N}(t_j)\Phi_{N_0}^{(k)}(t_j)=o({\sigma}_N^{k+1-r}) \] and so, since $k<\ell$, \[ \Phi_{N_0,N}^{(\ell-k)}(t_j)\Phi_{N_0}^{(k)}(t_j)=o({\sigma}_N^{\ell+1-r}). \] This completes the proof that \eqref{CondDer} holds. Next, suppose that (\ref{CondDer}) holds for each $0\leq\ell<r$. Let us prove by induction on $\ell$ that \begin{equation}\label{Above} |\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(\ell)}(t_j)|=o\left({\sigma}_N^{-r+\ell+1}\right). \end{equation} For $\ell=1$ this follows from \eqref{Relation}. Now take $\ell>1$ and assume that \eqref{Above} holds with $k$ in place of $\ell$ for each $k<\ell$. Then by (\ref{Relation}), (\ref{Thereofre}) and the induction hypothesis we get that \[ \Phi_{N}^{(\ell)}(t_j)=\Phi_{N_0,N}(t_j)\Phi_{N_0}^{(\ell)}(t_j)+o({\sigma}_N^{\ell+1-r}). \] By assumption we have $\Phi_{N}^{(\ell)}(t_j)=o({\sigma}_N^{\ell+1-r})$ and hence \[ \Phi_{N_0,N}(t_j)\Phi_{N_0}^{(\ell)}(t_j)=o({\sigma}_N^{\ell+1-r}) \] as claimed. \end{proof} Theorem \ref{r Char} in the case $M_N\leq R\ln \sigma_N$ follows now by first replacing $X_n$ with $X_n-c_n$, where $(c_n)$ is a bounded sequence of integers so that ${\mathbb E}[S_N-C_N]=O(1)$, where \begin{equation} \label{CNInteger} C_N=\sum_{j=1}^N c_j \end{equation} (see Lemma 3.4 in \cite{DS}), and then applying Lemma~\ref{Lem} and Proposition~\ref{Thm}.
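To spell out the bookkeeping behind this reduction: since $C_N$ given by \eqref{CNInteger} is a deterministic integer, we have ${\mathbb P}(S_N=k)={\mathbb P}(S_N-C_N=k-C_N)$ and $\big((k-C_N)-{\mathbb E}(S_N-C_N)\big)/{\sigma}_N=k_N$, while for every resonant point $t_j$ the characteristic function of $S_N-C_N$ at $t_j$ equals $e^{-it_jC_N}\Phi_N(t_j)$, so that
\[
e^{-it_j(k-C_N)}\,e^{-it_jC_N}\Phi_N(t_j)=e^{-it_jk}\,\Phi_N(t_j).
\]
Hence each term in the expansion for the centered sequence coincides with the corresponding term for $S_N$, and the expansion transfers back verbatim.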
It remains to consider the case when $M_N(m)\geq {\bar R} \ln \sigma_N$ where ${\bar R}$ is large enough. In that case, by Theorem \ref{ThEdgeMN}, the Edgeworth expansion of order $r$ holds true, and so, after the reduction to the case when ${\mathbb E}(S_N)$ is bounded, it is enough to show that $\displaystyle \Phi_N^{(\ell)}(t_j)=o\left(\sigma_N^{-r+\ell+1}\right)$ for all $0\leq \ell<r.$ By the arguments of Lemma \ref{Lem} it suffices to show that for each $0\leq \ell<r$ we have $\Phi_{N_0}^{(\ell)}(t_j)\Phi_{N_0, N}(t_j) =o(\sigma_N^{-r})$. To this end we write $$\Phi_{N_0}^{(\ell)}(t_j)\Phi_{N_0, N}(t_j) =\sum_{\substack{n_1,\dots, n_k\leq N_0\\ \ell_1+\dots+\ell_k=\ell}} \gamma_{\ell_1,\dots, \ell_k} \left(\prod_{q=1}^k \phi_{n_q}^{(\ell_q)}(t_j) \right) \left[\prod_{n\leq N, \; n\notin\{n_1,\dots,n_k\}} \phi_n(t_j)\right]$$ where $\gamma_{\ell_1,\dots, \ell_k}$ are bounded coefficients of combinatorial nature. Using (\ref{Roz0}) we see that for each $n_1,\dots, n_k$ the product in the square brackets is at most $C e^{-c_0 M_N(m)+O(1)}$ for some $C, c_0>0$. Hence $$|\Phi_{N_0}^{(\ell)}(t_j)\Phi_{N_0, N}(t_j)|\leq \hat{C} N_0^{\ell} \; e^{-c M_N(m)}.$$ It remains to observe that the definition of $N_0$ gives $M_N(m)\geq \hat{\varepsilon} N_0.$ Therefore $\displaystyle |\Phi_{N_0}^{(\ell)}(t_j)\Phi_{N_0, N}(t_j)|\leq C^* M_N^\ell (m) \; e^{-c M_N(m)}=o(\sigma_N^{-r})$ provided that $M_N\geq {\bar R}\ln \sigma_N$ for ${\bar R}$ large enough. \qed \section{Edgeworth expansions and uniform distribution.} \subsection{Proof of Theorem \ref{Thm SLLT VS Ege}} \label{SSEdgeR=2} In view of Proposition \ref{Thm} with $r=2$, it is enough to show that if $\Phi_{N}(t_j)=o(\sigma_N^{-1})$ then the SLLT implies that \begin{equation} \label{C11-C02} |\Phi_{N_0,N}(t_j)\Phi_{N_0}'(t_j)|=o(1) \end{equation} for any non-zero resonant point $t_j$ (note that the equivalence of conditions (b) and (c) of the theorem follows from Lemma \ref{LmUnifFourier}).
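The mechanism behind Lemma \ref{LmUnifFourier} is the discrete Fourier inversion formula on ${\mathbb Z}/m{\mathbb Z}$; as a sketch,
\[
{\mathbb P}(S_N\equiv j\; \text{mod}\; m)
=\frac1m\sum_{l=0}^{m-1}e^{-2\pi i jl/m}\,\Phi_N\left(\frac{2\pi l}{m}\right)
=\frac1m+\frac1m\sum_{l=1}^{m-1}e^{-2\pi i jl/m}\,\Phi_N\left(\frac{2\pi l}{m}\right),
\]
so $S_N$ is asymptotically uniformly distributed mod $m$ if and only if $\Phi_N\left(\frac{2\pi l}{m}\right)\to0$ for $l=1,\dots,m-1$.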
Denote $\displaystyle \Phi_{N; k}(t)=\prod_{l\neq k, l\leq N} \phi_l(t)$. Let us first assume that $\phi_k(t_j)\not=0$ for all $1\leq k\leq N$. Then $\phi_k'(t_j) \Phi_{N; k}(t_j)=\phi_k'(t_j)\Phi_N(t_j)/\phi_k(t_j)$. Let ${\varepsilon}_N=\frac{\ln{\sigma}_N}{{\sigma}_N}$. If for all $1\leq k\leq N_0$ we have $|\phi_k(t_j)|\geq {\varepsilon}_N$ then $$ \left|\Phi_{N_0,N}(t_j)\Phi_{N_0}'(t_j)\right|=\left|\sum_{k=1}^{N_0}\phi_k'(t_j) \Phi_{N; k}(t_j)\right|\leq|\Phi_N(t_j)|\sum_{k=1}^{N_0}|\phi'_k(t_j)/\phi_k(t_j)| $$ $$ \leq C{\varepsilon}_N^{-1} N_0 |\Phi_N(t_j)| \leq C'{\sigma}_N|\Phi_N(t_j)|\to 0 \text{ as }N\to\infty $$ where we have used that $N_0=O(\ln V_N)$. Next suppose there is at least one $1\leq k\leq N_0$ such that $|\phi_k(t_j)|<{\varepsilon}_N$. Let us pick some $k=k_N$ with the latter property. Then for any $k\not=k_N$, $1\leq k\leq N_0$ we have \[ |\phi_k'(t_j) \Phi_{N; k}(t_j)|\leq C|\phi_{k_N}(t_j)|<C{\varepsilon}_N. \] Therefore, \[ \left|\sum_{k\not=k_N,\,1\leq k\leq N_0}\phi_k'(t_j) \Phi_{N; k}(t_j)\right|\leq \frac{C'\ln^2 {\sigma}_N}{{\sigma}_N}=o(1). \] It follows that \begin{equation} \label{SingleTerm} \Phi_{N_0,N}(t_j)\Phi_{N_0}'(t_j)=\Phi_{N; k_N}(t_j) \phi_{k_N}'(t_j)+o(1). \end{equation} Next, in the case when $\phi_{k_0}(t_j)=0$ for some $1\leq k_0\leq N_0$, \eqref{SingleTerm} clearly holds true with $k_N=k_0$ since all the other terms vanish. In summary, either \eqref{C11-C02} holds or we have \eqref{SingleTerm}. In the latter case, using (\ref{Roz0}) we obtain \begin{equation} \label{PhiKBound} \left|\mathbb{E}\left(e^{i t_j S_{N; k_N}}\right)\right|\leq e^{-c_2\sum_{s\not=k_N, 1\leq s\leq N}q_s(m)}=e^{-c_2(M_{N}(m)-q_{k_N}(m))} \end{equation} where $S_{N; k}=S_N-X_k$, and $c_2>0$ depends only on $K$. Since the SLLT holds true, $M_{N}(m)$ converges to $\infty$ as $N\to\infty$. Taking into account that $0\leq q_{k_N}(m)\leq1$ we get that the left hand side of \eqref{PhiKBound} converges to $0$, proving \eqref{C11-C02}.
\qed \subsection{Proof of Theorem \ref{Thm Stable Cond}} We start with the proof of part (1). Assume that the LLT holds true in a superstable way. Let $X_1',X_2',...$ be a square integrable, integer-valued independent sequence which differs from $X_1,X_2,...$ in a finite number of elements. Then there is $n_0\in{\mathbb N}$ so that $X_n=X'_n$ for any $n>n_0$. Set $\displaystyle S_N'=\sum_{n=1}^N X'_n$, $Y=S'_{n_0}$ and $Y_N=Y{\mathbb I}(|Y|<{\sigma}_N^{1/2+{\varepsilon}})$, where ${\varepsilon}>0$ is a small constant. By the Markov inequality we have $$ {\mathbb P}(|Y|\geq {\sigma}_N^{1/2+{\varepsilon}})={\mathbb P}(|Y|^2\geq {\sigma}_N^{1+2{\varepsilon}})\leq\|Y\|_{L^2}^2{\sigma}_N^{-1-2{\varepsilon}}=o({\sigma}_N^{-1}). $$ Therefore, for any $k\in{\mathbb N}$ and $N>n_0$ we have \begin{eqnarray*} {\mathbb P}(S'_N=k)={\mathbb P}(S_{N;1,2,...,n_0}+Y_N=k)+o({\sigma}_N^{-1})\\={\mathbb E}[{\mathbb P}(S_{N;1,2,...,n_0}=k-Y_N|X_1',...,X_{n_0}')]+o({\sigma}_N^{-1})\\= {\mathbb E}[P_{N;1,...,n_0}(k-Y_N)]+o({\sigma}_N^{-1}) \end{eqnarray*} where $P_{N;1,...,n_0}(s)={\mathbb P}(S_{N;1,2,...,n_0}=s)$ for any $s\in{\mathbb Z}$. Since the LLT holds true in a superstable way, we have, uniformly in $k$ and in the realizations of $X_1',...,X_{n_0}'$, that $$ P_{N;1,...,n_0}(k-Y_N)=\frac{e^{-(k-Y_N-{\mathbb E}(S_N))^2/(2V_N)}}{\sqrt{2\pi}{\sigma}_N}+o({\sigma}_N^{-1}). $$ Therefore, \begin{equation}\label{Above1} {\mathbb P}(S'_N=k)= \end{equation} $$ \frac{e^{-(k-{\mathbb E}(S_N))^2/2V_N}}{\sqrt{2\pi}{\sigma}_N} {\mathbb E}\big(e^{-(k-{\mathbb E}(S_N))Y_N/V_N+Y_N^2/(2V_N)}\big)+o({\sigma}_N^{-1}). $$ Next, since $|Y_N|\leq {\sigma}_N^{1/2+{\varepsilon}}$ we have that $\|Y_N^2/(2V_N)\|_{L^\infty}\leq {\sigma}_{N}^{2{\varepsilon}-1}$, and so when ${\varepsilon}<1/2$ we have $\|Y_N^2/(2V_N)\|_{L^\infty}=o(1)$. Recall that $k_N=(k-{\mathbb E}(S_N))/{\sigma}_N$.
Suppose first that $|k_N|\geq {\sigma}_N^{{\varepsilon}}$ with ${\varepsilon}<1/4.$ Since $$\big|(k-{\mathbb E}(S_N))Y_N/V_N\big|\leq |k_N|{\sigma}_N^{{\varepsilon}-\frac12},$$ we get that the RHS of \eqref{Above1} is $o({\sigma}_N^{-1})$ (uniformly in such $k$'s). On the other hand, if $|k_N|<{\sigma}_N^{{\varepsilon}}$ then $${\mathbb E}\big(e^{-(k-{\mathbb E}(S_N))Y_N/V_N+Y_N^2/(2V_N)}\big)=1+o(1)$$ (uniformly in that range of $k$'s). We conclude that, uniformly in $k$, we have $$ {\mathbb P}(S'_N=k)=\frac{e^{-(k-{\mathbb E}(S_N))^2/(2V_N)}}{\sqrt{2\pi}{\sigma}_N}+o({\sigma}_N^{-1}). $$ Lastly, since $\displaystyle \sup_{N}|{\mathbb E}(S_N)-{\mathbb E}(S_N')|<\infty$ and $\displaystyle \sup_{N}|\text{Var}(S_N)-\text{Var}(S_N')|<\infty,$ $$ {\mathbb P}(S'_N=k)=\frac{e^{-(k-{\mathbb E}(S'_N))^2/(2V'_N)}}{\sqrt{2\pi}{\sigma}'_N}+o(1/{\sigma}_N') $$ where $V_N'=\text{Var}(S_N')$ and ${\sigma}_N'=\sqrt{V_N'}$. Conversely, if the SLLT holds then $M_N(h)\to \infty$ for each $h\geq2$. Now if $t$ is a nonzero resonant point with denominator $h$ then \eqref{Roz0} gives $$ |\Phi_{N; j_1^{N}, j_2^{N},\dots ,j_{s_N}^{N}}(t)|\leq C e^{-c M_N(h)+\bar C {\bar s}},\, \,C,\bar C>0 $$ for any choice of $j_1^N,...,j_{s_N}^N$ and $\bar s$ with $s_N\leq \bar s$. Since the RHS tends to 0 as $N\to\infty$, $\{X_n\}\in SsEe(1)$, completing the proof of part (1). For part (2) we only need to show that (a) is equivalent to (b), as the equivalence of (b) and (c) comes from Lemma \ref{LmUnifFourier}. By replacing again $X_n$ with $X_n-c_n$ it is enough to prove the equivalence in the case when ${\mathbb E}(S_N)=O(1)$. The proof that (a) and (b) are equivalent consists of two parts. The first part is the following statement, whose proof is a straightforward adaptation of the proof of Theorem \ref{r Char} and is therefore omitted.
\begin{proposition} $\{X_n\}\in SsEe(r)$ if and only if for each ${\bar s}$, each sequence $j_1^N, j_2^N, \dots ,j_{s_N}^N$ with $s_N\leq {\bar s}$, each $\ell<r$ and each $t\in{\mathcal{R}}$ we have \begin{equation} \label{PhiDerFM} \Phi^{(\ell)}_{N; j_1^N, j_2^N, \dots, j_{s_N}^N}(t)= o(\sigma_N^{\ell+1-r}). \end{equation} \end{proposition} Note that the above proposition shows that the condition $\displaystyle \Phi_{N; j_1^N, j_2^N, \dots ,j_{s_N}^N}(t)= o(\sigma^{1-r}_{N})$ is necessary. The second part of the argument is to show that if $$ \Phi_{N; j_1^N, j_2^N, \dots ,j_{s_N}^N}(t)= o(\sigma^{1-r}_{N}) $$ holds for every finite modification of $S_N$ with $s_N\leq {\bar s}+\ell$ (uniformly) then \eqref{PhiDerFM} holds for every modification with $s_N\leq {\bar s}$, so that the condition $\displaystyle \Phi_{N; j_1^N, j_2^N, \dots ,j_{s_N}^N}(t)= o(\sigma^{1-r}_{N})$ is also sufficient. To this end we introduce some notation. Fix a nonzero resonant point $t=\frac{2\pi l}{m}.$ Let $\check\Phi_{N}$ be the characteristic function of the sum $\check S_{N}$ of all $X_n$'s such that $1\leq n\leq N$, $n\not\in\{j_1^{N},j_2^{N},...,j_{s_N}^N\}$ and $q_{n}(m)\geq\bar\epsilon$. Let $\check N$ be the number of terms in $\check S_N.$ Denote $\tilde S_{N}=S_{N;j_1^N,j_2^N,\dots,j_{s_N}^{N}}-\check S_{N}$ and let $\tilde\Phi_N(t)$ be the characteristic function of $\tilde S_N.$ Similarly to the proof of Theorem \ref{r Char} it suffices to show that for each $\ell<r$ $$ \left| \check \Phi_N^{(\ell)}(t) \tilde \Phi_N(t)\right|=o(\sigma_N^{1+\ell-r}) $$ and, moreover, we can assume that $M_N(m)\leq {\bar R} \ln \sigma_N$ so that $\check N=O(\ln \sigma_N).$ We have (cf.
\eqref{DerComb}), $$ \check \Phi_N^{(\ell)}(t) \tilde \Phi_N(t) =\sum_{\substack{n_1, \dots ,n_k;\\ \ell_1+\dots+\ell_k=\ell}} \gamma_{\ell_1,\dots, \ell_k} \prod_{w=1}^k \phi_{n_w}^{(\ell_w)}(t) \prod_{n\not\in\{n_1,n_2, \dots ,n_k, j_1^N,j_2^N, \dots, j_{s_N}^N\}} \phi_n(t)$$ where the summation is over all tuples $n_1,n_2, \dots ,n_k$ such that $q_{n_w}(m)\geq\bar\epsilon$. Note that the absolute value of each term in the above sum is bounded by $C |\Phi_{N; n_1, \dots ,n_k, j_1^N,\dots, j_{s_N}^N}(t)|=o(\sigma_N^{1-r}).$ It follows that the whole sum is $$ o\left(\sigma_N^{1-r} \check N^\ell\right) = o\left(\sigma_N^{1-r} \ln^\ell \sigma_N \right), $$ completing the proof. \qed \begin{remark} Lemma \ref{LmUnifFourier} and Theorem \ref{r Char} show that the convergence to the uniform distribution on any factor $\mathbb{Z}/h \mathbb{Z}$ with speed $o({\sigma}_N^{1-r})$ is necessary for the Edgeworth expansion of order $r.$ This is quite intuitive. Indeed, denoting by $\mathscr E_r$ the Edgeworth function of order $r$ (i.e. the contribution from zero), it is a standard result from numerical integration (see, for instance, \cite[Lemma A.2]{DNP}) that for each $s\in \mathbb{N}$ and each $j\in \mathbb{Z}$ $$ \sum_{k\in \mathbb{Z}} \frac{h}{{\sigma}_N}\, \mathscr E_r\left(\frac{j+hk}{{\sigma}_N}\right) =\int_{-\infty}^\infty \mathscr E_r(x) dx+o\left({\sigma}_N^{-s}\right)= 1+o\left({\sigma}_N^{-s}\right) $$ where in the last equality we have used that the non-constant Hermite polynomials have zero mean with respect to the standard normal law (since they are orthogonal to the constant functions).
However, using this result to show that $$\displaystyle \sum_{k\in \mathbb{Z}} \mathbb{P}(S_N=j+kh)=\frac{1}{h}+o\left({\sigma}_N^{1-r}\right) $$ requires good control on large values of $k.$ While it appears possible to obtain such control using large deviations theory, it seems simpler to estimate the convergence rate towards uniform distribution from our generalized Edgeworth expansion. \end{remark} \section{Second order expansions}\label{Sec2nd} In this section we will compute the polynomials in the general expansions in the case $r=2$. First, let us introduce some notation which depends on a resonant point $t_j.$ Let $t_j=2\pi l_j/m_j$ be a nonzero resonant point such that $M_N(m_j)\leq R(2,K)\ln V_N$ where $R(2,K)$ is specified in Remark \ref{R choice}. Let $\check\Phi_{N,j}$ be the characteristic function of the sum $\check S_{N,j}$ of all $X_n$'s such that $1\leq n\leq N$ and $q_{n}(m_j)\geq\bar\epsilon=\frac1{8K}$. Note that $\check S_{N,j}$ was previously denoted by $S_{N_0}$. Let $\tilde S_{N,j}=S_N-\check S_{N,j}$ and denote by $\tilde \Phi_{N,j}$ its characteristic function. (In previous sections we denoted the same expression by $S_{N_0,N}$, but here we want to emphasize the dependence on $t_j$.) Let $\gamma_{N,j}$ be the ratio between the third moment of $\tilde S_{N,j}-{\mathbb E}(\tilde S_{N,j})$ and its variance. Recall that by \eqref{CenterMoments} $|\gamma_{N,j}|\leq C$ for some $C$. Also, let $C_{1,N,t_j}$ be given by (\ref{C 1 N}), with the indices rearranged so that the $n$'s with $q_n(m)\geq\bar{\varepsilon}$ are the first $N_0$ ones ($C_{1,N,t_j}$ is at most of order $M_N(m)=O(\ln V_{N})$). For the sake of convenience, when either $t_j=0$ or $M_N(m_j)\geq R(2,K)\ln V_N$ we set $C_{1,N,t_j}=0$, $\tilde S_{N,j}=S_N$ and $\check S_{N,j}=0$. In this case $\tilde\Phi_{N,j}=\Phi_{N}$ and $\check\Phi_{N,j}\equiv 1$.
Also denote $k_N=(k-{\mathbb E}(S_N))/{\sigma}_N,$ $\bar S_N=S_N-{\mathbb E}(S_N)$, and $\gamma_N={\mathbb E}(\bar S_N^3)/V_N$ ($\gamma_N$ is bounded). \begin{proposition}\label{2 Prop} Uniformly in $k$, we have \begin{eqnarray}\label{r=2'} \sqrt {2\pi}{\mathbb P}(S_N=k)=\Big(1+\sum_{t_j\in {\mathcal{R}}}e^{-it_j k}\Phi_{N}(t_j)\Big){\sigma}_{N}^{-1}e^{-k_N^2/2}\\-{\sigma}_{N}^{-2}e^{-k_N^2/2}\left(\gamma_N k_N^3/6+\sum_{t_j\in {\mathcal{R}}}e^{-it_j k}\tilde\Phi_{N,j}(t_j)P_{N,j}(k_N)\right)\nonumber\\+o(\sigma_N^{-2})\nonumber \end{eqnarray} where $$ P_{N,j}(x)=\big(\check\Phi_{N,j}(t_j)(iC_{1,N,t_j}-{\mathbb E}(\check S_{N,j}))+i\check\Phi_{N,j}'(t_j)\big)x+\check\Phi_{N,j}(t_j)\gamma_{N,j}x^3/6. $$ \end{proposition} \begin{proof} Let $t_j=\frac{2\pi l}{m}$ be a resonant point with $M_N(m)\leq R(2,K) \ln V_N.$ Recall that $\textbf{C}_j(k)$ are given by \eqref{Cj(k)}. First, in order to compute the term corresponding to ${\sigma}_{N_0,N}^{-2}$ we only need to consider the case $s\leq1$ in (\ref{s}).
Using \eqref{A 1,n .1} we end up with the following contribution of the interval containing $t_j$, $$ \sqrt{(2\pi)^{-1}}e^{-it_j k}\tilde \Phi_{N,j}(t_j){\sigma}_{N,j}^{-1}\Bigg(\int_{-\infty}^{\infty}e^{-ih(k-{\mathbb E}[\tilde S_{N,j}])/{\sigma}_{N,j}}e^{-h^2/2}dh$$ $$+{\sigma}_{N,j}^{-1}\int_{-\infty}^\infty e^{-ih(k-{\mathbb E}[\tilde S_{N,j}])/{\sigma}_{N,j}}\left(\frac{ih^3}{6}{\mathbb E}\left[\big(\tilde S_{N,j}-{\mathbb E}(\tilde S_{N,j})\big)^3\right] {\sigma}_{N,j}^{-3}\right)dh$$ $$+{\sigma}_{N,j}^{-1}(C_{1,N,t_j}\check\Phi_{N,j}(t_j)+\check\Phi_{N,j}'(t_j))\int_{-\infty}^\infty e^{-ih(k-{\mathbb E}(\tilde S_{N,j}))/{\sigma}_{N,j}}he^{-h^2/2}dh\Bigg)$$ $$=e^{-it_j k}\tilde\Phi_{N,j}(t_j)\sqrt{(2\pi)^{-1}} e^{-k_{N,j}^2/2}{\sigma}_{N,j}^{-1}\Big(\check\Phi_{N,j}(t_j)+i\big(C_{1,N,t_j}\check\Phi_{N,j}(t_j)+\check\Phi_{N,j}'(t_j)\big)$$ $$\times k_{N,j}{\sigma}_{N,j}^{-1}+\check\Phi_{N,j}(t_j)(k_{N,j}^3-3k_{N,j})\gamma_{N,j}{\sigma}_{N,j}^{-1}/6\Big) $$ where ${\sigma}_{N,j}=\sqrt{V(\tilde S_{N,j})}$, $k_{N,j}=(k-{\mathbb E}(\tilde S_{N,j}))/{\sigma}_{N,j}$ and $\gamma_{N,j}= \frac{{\mathbb E}[(\tilde S_{N,j}-{\mathbb E}(\tilde S_{N,j}))^3]}{{\sigma}_{N,j}^2}$ (which is uniformly bounded). As before we shall only consider the case where $|k_N|\leq V_N^{\varepsilon}$ with ${\varepsilon}=0.01$, since otherwise both the LHS and the RHS of \eqref{r=2'} are $O({\sigma}_N^{-r})$ for all $r.$ Then, the last display can be rewritten as $I+I\!\!I$ where \begin{equation} I=\frac{e^{-i t_j k }}{\sqrt{2\pi} {\sigma}_{N,j}}\; e^{-k_{N,j}^2/2}\; \Phi_N(t_j); \end{equation} $$ I\!\!I =\frac{e^{-i t_j k }}{\sqrt{2\pi} {\sigma}^2_{N,j}}\; e^{-k_{N,j}^2/2}\times$$ $$ \left[\Phi_N(t_j) \left(i C_{1, N,t_j} k_{N,j}+\frac{\gamma_{N,j}}{6} \left(k_{N,j}^3-3 k_{N,j}\right)\right)+i\check\Phi_{N,j}'(t_j) \tilde\Phi_{N,j}(t_j) k_{N,j} \right].
$$ In the region $|k_N|\leq V_N^{\varepsilon}$ we have $$ I= \frac{e^{-i t_j k }}{\sqrt{2\pi} {\sigma}_{N}}\; e^{-k_{N}^2/2}\; \left[1-q_{N,j} k_N \right]\Phi_N(t_j)+ o\left({\sigma}_N^{-2}\right)$$ where $$q_{N,j}={\mathbb E}(\check S_{N,j})/{\sigma}_{N,j}={\mathbb E}(\check S_{N,j})/{\sigma}_N+O(\ln{\sigma}_N/{\sigma}_N^3)= O(\ln{\sigma}_N/{\sigma}_N) $$ while $$ I\!\!I=\frac{e^{-i t_j k }}{\sqrt{2\pi} {\sigma}^2_{N}}\; e^{-k_{N}^2/2}\times$$ $$ \left[\Phi_N(t_j) \left(i C_{1, N,t_j} k_{ N}+\frac{{\gamma}_{N,j} \left(k_{N}^3-3 k_{N}\right)}{6} \right)+i\check\Phi_{N,j}'(t_j) \tilde\Phi_{N,j}(t_j) k_{N} \right]$$ $$+o\left({\sigma}_N^{-2}\right).$$ This yields (\ref{r=2'}) with ${\mathcal{R}}_N$ in place of ${\mathcal{R}}$, where ${\mathcal{R}}_N$ is the set of nonzero resonant points $t_j=2\pi l/m$ such that $M_N(m)\leq R(2,K)\ln V_N$. Next, \eqref{Roz0} shows that if $M_N(m)\geq R(2,K)\ln V_N$ then \[ \sup_{t\in I_j}|\Phi_{N}(t)|\leq e^{-c_0M_N(m)}=o({\sigma}_N^{-2}) \] and so the contribution of $I_j$ to the right hand side of \eqref{r=2'} is $o({\sigma}_N^{-2}).$ Finally, the contribution coming from $t_j=0$ is \[ e^{-k_N^2/2}\left({\sigma}_N^{-1}-{\sigma}_N^{-2}\gamma_N k_N^3/6\right) \] and the proof of the proposition is complete. \end{proof} \begin{remark}\label{Alter 2nd Order} Suppose that $M_N(m)\geq R(2,K)\ln V_N$ and let $N_0$ be the number of $n$'s between $1$ and $N$ such that $q_n(m)\geq \frac{1}{8K}$.
Then using (\ref{Roz0}) we also have $$ |\check\Phi'_{N,j}(t_j)\tilde \Phi_{N,j}(t_j)|\leq\sum_{n\in{\mathcal B}_{N,{\bar\varepsilon}}(m)}|{\mathbb E}[X_ne^{it_j X_n}]|\cdot|\Phi_{N;n}(t_j)| $$ $$ \leq CN_0(N,t_j,{\bar\varepsilon})e^{-c_0 M_{N}(m)}\leq C'M_{N}(m)e^{-c_0 M_{N}(m)}, $$ where $${\mathcal B}_{N,{\bar\varepsilon}}(m)=\{1\leq n\leq N: q_n(m)>{\bar\varepsilon}\}.$$ Since $M_N(m)\geq R(2,K)\ln V_N$, for any $0<c_1<c_0$, when $N$ is large enough we have $$M_{N}(m)e^{-c_0 M_{N}(m)}\leq C_1e^{-c_1M_N(m)}=o({\sigma}_N^{-2}).$$ Similarly, $|{\mathbb E}(\check S_{N,j})\Phi_N(t_j)|=o({\sigma}_N^{-2})$ and $$C_{1,N,t_j}\Phi_{N}(t_j)=O(M_N(m))\Phi_{N}(t_j)=o({\sigma}_N^{-2}).$$ Therefore we get (\ref{r=2'}) when $\tilde S_{N,j}$ and $\check S_{N,j}$ are defined in the same way as in the case $M_N(m)\leq R(2,K)\ln V_N$. \end{remark} Under additional assumptions the order 2 expansion can be simplified. \begin{corollary} \label{CrR2-SLLT} If $S_N$ satisfies the SLLT then $$ \sqrt {2\pi}{\mathbb P}(S_N=k)=\frac{e^{-k_N^2/2}}{{\sigma}_N} \left(1+\sum_{t_j\in {\mathcal{R}}}e^{-it_j k}\Phi_{N}(t_j) -\frac{\gamma_N k_N^3}{6{\sigma}_N}\right) +o(\sigma_N^{-2}). $$ \end{corollary} \begin{proof} The estimates of \S \ref{SSEdgeR=2} together with \eqref{Roz0} show that if $S_N$ satisfies the SLLT then for all $j$ $$(1+M_N(m))\tilde\Phi_{N,j} (t_j) \check \Phi_{N,j}(t_j)=o(1) \text{ and } \tilde\Phi_{N,j} (t_j) \check \Phi'_{N,j}(t_j)=o(1).$$ Thus all terms in the second line of \eqref{r=2'} except the first one make a negligible contribution, and so they can be omitted. \end{proof} Next, assume that $S_N$ satisfies the LLT but not the SLLT. According to Proposition \ref{PrLLT-SLLT}, in this case there exists $m$ such that $M_N(m)$ is bounded and for $k=1,\dots ,m-1$ there exists $n=n(k)$ such that $\phi_{n}(2\pi k/m)=0$.
Let ${\mathcal{R}}_s$ denote the set of nonzero resonant points $t_j=\frac{2\pi k}{m}$ such that $M_N(m)$ is bounded and $\phi_{\ell_j}(t_j)=0$ for a unique $\ell_j.$ \begin{corollary}\label{CrR2Z} Uniformly in $k$, we have \begin{eqnarray*} \sqrt {2\pi}{\mathbb P}(S_N=k)=\Big(1+\sum_{t_j\in {\mathcal{R}}}e^{-it_j k}\Phi_{N}(t_j)\Big){\sigma}_{N}^{-1}e^{-k_N^2/2}\\-{\sigma}_{N}^{-2}e^{-k_N^2/2}\left(\gamma_N k_N^3/6+ \sum_{t_j\in {\mathcal{R}}_s}i e^{-it_j k}\Phi_{N; \ell_j}(t_j) \phi_{\ell_j}'(t_j) k_N \right)+o(\sigma_N^{-2}).\nonumber \end{eqnarray*} \end{corollary} \begin{proof} As in the proof of Corollary \ref{CrR2-SLLT} we see that the contribution of the terms corresponding to $k/m$ with $M_N(m)\to\infty$ is negligible. Next, for terms in ${\mathcal{R}}_s$ the only non-zero term in the second line of \eqref{r=2'} corresponds to $\Phi_{N; \ell_j}(t_j) \phi_{\ell_j}'(t_j)$, while for the resonant points such that $\phi_{\ell}(t_j)=0$ for two different $\ell$'s all terms vanish. \end{proof} \section{Examples.} \label{ScExamples} \begin{example} \label{ExNonAr} Suppose $X_n$ are iid integer-valued with step $h>1.$ That is, there is $s\in {\mathbb Z}$ such that ${\mathbb P}(X_n\in s+h {\mathbb Z})=1$ and $h$ is the smallest number with this property. In this case \cite[Theorem 4.5.4]{IL} (see also \cite[Theorem 5]{Ess}) shows that there are polynomials $P_b$ such that \begin{equation} \label{EssEdge} {\mathbb P}(S_N=k)= \sum_{b=1}^r \frac{P_{b} ((k-{\mathbb E}[S_N])/\sigma_N)}{\sigma_N^b} \mathfrak{g}\left(\frac{k-{\mathbb E}(S_N)}{\sigma_N}\right) +o(\sigma_N^{-r}) \end{equation} for all $k\in sN+h{\mathbb Z}.$ Then $$ \sum_{a=0}^{h-1} \sum_{b=1}^r e^{2\pi i a(k-sN)/h} \frac{P_{b} ((k-{\mathbb E}[S_N])/\sigma_N)}{\sigma_N^b} \mathfrak{g}((k-{\mathbb E}(S_N))/\sigma_N)$$ provides an $\displaystyle o(\sigma_N^{-r})$ approximation to ${\mathbb P}(S_N=k)$ which is valid for {\em all} $k\in {\mathbb Z}.$ Next, let ${\bar S}_N=X_0+S_N$ where $X_0$ is bounded and arithmetic with step 1.
Then using the identity \begin{equation} \label{Convolve} {\mathbb P}({\bar S}_N=k)=\sum_{u\equiv k-s N\text{ mod } h} {\mathbb P}(X_0=u) {\mathbb P}(S_N=k-u), \end{equation} \noindent invoking \eqref{EssEdge} and expanding $\displaystyle \mathfrak{g}\left(\frac{k-u-{\mathbb E}(S_N)}{\sigma_N}\right)$ in a Taylor series about $\frac{k-{ {\mathbb E}(S_N)}}{\sigma_N}$ we conclude that there are polynomials $P_{b,j}$ such that for $k\in j+h{\mathbb Z}$, $$ {\mathbb P}({\bar S}_N=k)= \sum_{b=1}^r \frac{P_{b, j} ((k-{\mathbb E}[S_N])/\sigma_N)}{\sigma_N^b} \mathfrak{g}\left(\frac{k-{\mathbb E}(S_N)}{\sigma_N}\right) +o(\sigma_N^{-r}). $$ Again $$ \sum_{a=0}^{h-1} \sum_{j=0}^{h-1} e^{2\pi i a (k-j)/h} \sum_{b=1}^r \frac{P_{b,j} ((k-{\mathbb E}[S_N])/\sigma_N)}{\sigma_N^b} \mathfrak{g}\left(\frac{k-{\mathbb E}(S_N)}{\sigma_N}\right) $$ provides the oscillatory expansion valid for all integers. \end{example} \begin{example} \label{ExUniform} Our next example is a small variation of the previous one. Fix a positive integer $m.$ Let $X'$ be a random variable such that $X'$ mod $m$ is uniformly distributed. Then its characteristic function satisfies $\phi_{X'}(\frac{2\pi a}{m})=0$ for $a=1, \dots, m-1.$ We also assume that $\phi_{X'}'(\frac{2\pi a}{m})\neq 0$ for $a$ as above (for example, one can suppose that $X'$ takes the values $Lm$, $1, 2, \dots ,m-1$ with equal probabilities, where $L$ is a large integer). Let $X''$ take values in $m{\mathbb Z}$ and have zero mean. We also assume that $X''$ is not supported on $m_0{\mathbb Z}$ for any larger $m_0$. Then $q(X'',m_0)>0$ for any $m_0\not=m$. Fix $r\in \mathbb{N}$ and let $$X_n=\begin{cases} X' & n\leq r, \\ X'' & n>r. \end{cases}$$ Then $M_N(m_0)$ grows linearly fast in $N$ if $m_0\not=m$ and $M_N(m)$ is bounded in $N$.
We claim that $S_N$ admits the Edgeworth expansion of order $r$ but does not admit the Edgeworth expansion of order $r+1.$ The first statement holds due to Theorem \ref{r Char}, since $\Phi^{(\ell)}_N(\frac{2\pi a}{m})=0$ for each $a\in {\mathbb Z}$ and each $\ell<r.$ On the other hand, since $\Phi_{N}^{(\ell)}(\frac{2\pi a}m)=0$ for any $\ell<r$, using Lemma \ref{Lem} we see that the conditions of Lemma \ref{LemInd} are satisfied with $r+1$ in place of $r$. Moreover, with $t_j=2\pi a/m$, $a\not=0$, we have ${\mathcal H}_{N,q,s}(x,t_j)\equiv0$ for any $q\leq r+1$ and $s=2,3,4$, while ${\mathcal H}_{N,q,w}(x,t_j)\equiv0$ for any $q\leq r$ and $w=1,2,3,4$. Furthermore, when $N\geq r$ we have \begin{eqnarray*} {\mathcal H}_{N,r+1,1}(x;t_j)=\frac{i^{r}H_{r}(x)\big(\phi_{X''}(2\pi a/m)\big)^{N-r}\Phi_{r}^{(r)}(2\pi a/m)}{r!}\\=i^{r}H_{r}(x)\big(\phi_{X'}'(2\pi a/m)\big)^r. \end{eqnarray*} We conclude that $$ {\mathbb P}(S_N=k)$$ $$=\frac{e^{-k_N^2/2}}{\sqrt{2\pi}} \left[{\mathcal E}_{r+1}(k_N)+ \frac{i^r}{{\sigma}_N^{r+1}} \sum_{a=1}^{m-1} e^{-2\pi i ak/m} \left(\phi_{X'}'\left(\frac{2\pi a}{m}\right) \right)^r H_{r}(k_N) \right] $$ $$ +o({\sigma}_N^{-r-1}) $$ where ${\mathcal E}_{r+1}$ is the Edgeworth polynomial (i.e., the contribution of $0$) and $H_{r}(x)$ is the Hermite polynomial. Observe that since the uniform distribution on ${\mathbb Z}/m{\mathbb Z}$ is shift invariant, $S_N$ is uniformly distributed mod $m$ for all $N\in {\mathbb N}.$ This shows that for $r\geq 1$, one cannot characterize Edgeworth expansions just in terms of the distributions of $S_N$ mod $m$, so the additional assumptions in Theorems \ref{Thm SLLT VS Ege} and \ref{Thm Stable Cond} are necessary. Next, consider a more general case where for each $n$, $X_n$ is equal in law to either $X'$ or $X''$; however, now we assume that $X'$ appears infinitely often.
In this case $S_N$ obeys Edgeworth expansions of all orders since for large $N$, $\Phi_N(t)$ has zeroes of arbitrarily high order at all points of the form $\frac{2\pi a}{m},$ $ a=1, \dots, m-1.$ In fact, the Edgeworth expansions hold in a superstable way since removing a finite number of terms does not make the order of the zero fall below $r.$ \end{example} \begin{example}\label{Eg1} Let $\mathfrak{p}_n=\min(1, \frac{\theta}{n})$ and let $X_n$ take the value $0$ with probability $\mathfrak{p}_n$ and the values $\pm 1$ each with probability $\frac{1-\mathfrak{p}_n}{2}.$ In this example the only non-zero resonant point is $\pi=2\pi\times \frac{1}{2}.$ Then for small $\theta$ the contributions of $P_{1, b, N}$ (the only non-zero $a$ is $1$) are significant and as a result $S_N$ does not admit the ordinary Edgeworth expansion. Increasing $\theta$ we can make $S_N$ admit Edgeworth expansions of higher and higher orders. Namely, we get that for large $n,$ $\displaystyle \phi_n(\pi)=\frac{2\theta}{n}-1. $ Accordingly $$ \ln (-\phi_n(\pi))=-\frac{2\theta}{n}+O\left(\frac{1}{n^2}\right).
$$ Now the asymptotic relation $$ \sum_{n=1}^N \frac{1}{n}=\ln N+{\mathfrak{c}}+O\left(\frac{1}{N}\right),$$ where ${\mathfrak{c}}$ is the Euler--Mascheroni constant, implies that there is a constant $\Gamma(\theta)$ such that $$ \Phi_N(\pi)=\frac{(-1)^N e^{\Gamma(\theta)}}{N^{2\theta}}\left(1+O(1/N)\right).$$ Therefore $S_N$ admits the Edgeworth expansions of order $r$ iff $\displaystyle \theta>\frac{r-1}{4}.$ Moreover, if $\displaystyle \theta\in \left(\frac{r-2}{4}, \frac{r-1}{4}\right],$ then Corollary \ref{CrFirstNonEdge} shows that $$ \mathbb{P}(S_N=k)=\frac{e^{-k_N^2/2}}{\sqrt{2\pi}} \left[{\mathcal E}_{r}(k_N)+ \frac{(-1)^{N+k} e^{\Gamma(\theta) }}{N^{2\theta+(1/2)}}+O\left(\frac{1}{N^{2\theta+1} }\right)\right] $$ where ${\mathcal E}_r$ is the Edgeworth polynomial of order $r.$ In particular, if $\theta\in (0, 1/4)$ then using that \begin{equation} \label{VarNearN} V_N=N+O(\ln N)=N\left(1+O\left(\frac{\ln N}{N}\right)\right) \end{equation} and hence \begin{equation} \label{SDNearN} \sigma_N=\sqrt{N}\left(1+O\left(\frac{\ln N}{N}\right)\right) \end{equation} we conclude that $$ \mathbb{P}(S_N=k)=\frac{e^{-k^2/(2N)}}{\sqrt{2\pi}} \left[\frac{1}{\sqrt{N}}+ \frac{(-1)^{N+k} e^{\Gamma(\theta) }}{N^{2\theta+(1/2)}}+O\left(\frac{1}{N^{2\theta+1}}\right)\right]. $$ Next, take $\mathfrak{p}_n=\min\left(1, \frac{\theta}{n^2}\right)$. Then the SLLT does not hold, since the Prokhorov condition fails. Instead we have (\ref{r=1}) with ${\mathcal{R}}=\{\pi\}$. Namely, uniformly in $k$ we have \begin{equation*} \sqrt {2\pi}\,{\mathbb P}(S_N=k)=\left(1 +(-1)^k\prod_{u=1}^{N}\left(2\mathfrak{p}_u-1\right)\right){\sigma}_{N}^{-1}e^{-k^2/(2V_N)}+o(\sigma_N^{-1}). \end{equation*} Next, $\mathfrak{p}_u$ is summable and moreover \[ \prod_{u=1}^{N}(2\mathfrak{p}_u-1)=(-1)^N U(1+O(1/N)) \] where $\displaystyle U=\prod_{u=1}^{\infty}(1-2\mathfrak{p}_u)$.
We conclude that \begin{equation} \label{A1Ex3} \sqrt {2\pi}\,{\mathbb P}(S_N=k)=\left(1+(-1)^{k+N}U\right){\sigma}_{N}^{-1}e^{-k^2/(2V_N)}+ O\left(\sigma_N^{-2}\right) \end{equation} uniformly in $k$. In this case the usual LLT holds true if and only if $U=0$, in agreement with Proposition \ref{PrLLT-SLLT}. In fact, in this case we have a faster rate of convergence. To see this, we consider expansions of order $2$ for $\mathfrak{p}_n$ as above. We observe that $q_n(2)=\mathfrak{p}_n$ for large $n.$ Thus \[ |{\mathbb E}(e^{\pi iX_n})|=1-2 \mathfrak{p}_n \] and so $|{\mathbb E}(e^{\pi i X_n})|\geq \frac12$ when $n\geq N_{\theta}$ for some minimal $N_{\theta}$. Therefore, we can take $N_0=N_{\theta}$. Note also that we have $Y_n=(X_n\text{ mod }2)-1$. We conclude that for $n>N_0$ we have \[ a_{n}=a_{n,j}=\frac{{\mathbb E}[((-1)^{Y_n}-1)X_n]}{{\mathbb E}[(-1)^{Y_n}]}=0 \] and so the term $C_{1,N}$ vanishes. Next, we observe that \[ \gamma_{N,j}=\frac{\sum_{n=N_0+1}^{N}{\mathbb E}(X_n^3)}{\sum_{n=N_0+1}^{N}(1-\mathfrak{p}_n)}=0. \] Finally, we note that ${\mathbb E}[(-1)^{X_n}X_n]=0$, and hence $\Phi_{N_0}'(\pi)=0$. Therefore, the second term in (\ref{r=2'}) vanishes and we have \begin{equation*} \sqrt {2\pi}\,{\mathbb P}(S_N=k)=\left(1 +(-1)^{k+N} U \right){\sigma}_{N}^{-1}e^{-k^2/(2V_N)}+ O\left(\sigma_N^{-3}\right). \end{equation*} Taking into account \eqref{VarNearN} and \eqref{SDNearN} we obtain $$ \sqrt {2\pi}\,{\mathbb P}(S_N=k)=\frac{1 +(-1)^{k+N} U}{\sqrt{N}} \; e^{-k^2/(2N)}+ O\left(N^{-3/2}\right). $$ In particular, \eqref{A1Ex3} holds with the stronger rate $O\left(\sigma_N^{-3}\right)$. \end{example} \begin{example}\label{NonSym r=2 Eg} The last example exhibited significant simplifications. Namely, there was only one resonant point, and, in addition, the second term vanished due to the symmetry. We now show how a similar analysis can be performed when these simplifications are not present.
Let us assume that $X_n$ takes the values $-1,0$ and $3$ with probabilities $a_n,b_n$ and $c_n$ so that $a_n+b_n+c_n=1$. Let us also assume that $ b_n<\frac18$ and that $a_n,c_n\geq \rho>0$ for some constant $\rho$. Then $$V(X_n)=9(c_n-c_n^2)+6a_nc_n+(a_n-a_n^2)\geq 6 \rho^2$$ \noindent and so $V_N$ grows linearly fast in $N$. Next, since we can take $K=3$, the denominators $m$ of the nonzero resonant points can only be $2,3,4,5$ or $6$. An easy check shows that for $m=3,5,6$ we have $q_n(m)\geq \rho$, and that for $m=2,4$ we have $q_n(m)=b_n$. Therefore, for $m=3,5,6$ we have $M_N(m)\geq \rho N$, and so we can disregard all the nonzero resonant points except for $\pi/2,\pi$ and $3\pi/2$. For the latter points we have \begin{equation} \label{Res1} \phi_n\left(\frac{\pi}{2}\right)=b_n-i(1-b_n), \end{equation} \begin{equation} \label{Res2} \phi_n(\pi)=2b_n-1,\quad \phi_n\left(\frac{3\pi}{2}\right)=b_n+i(1-b_n). \end{equation} Hence, denoting $\eta_n=b_n(1-b_n),$ we have \begin{equation*} \left|\phi_n\left(\frac{\pi}{2}\right)\right|^2=\left|\phi_n\left(\frac{3\pi}{2}\right)\right|^2=1-2\eta_n, \quad \big|\phi_n(\pi)\big|^2=1-4\eta_n. \end{equation*} Since we suppose that $\eta_n\leq b_n<\frac{1}{8}$ it follows that $1-4\eta_n\geq \frac12$. Then for the above three resonant points we can take $N_0=0$. Now Proposition~\ref{Thm} and a simple calculation show that for any $r$ we get the Edgeworth expansions of order $r$ if and only if \[ \prod_{n=1}^N(1-2\eta_n)= o\left(N^{1-r}\right). \] Let us focus for the moment on the case when $b_n=\gamma/n$ for $n$ large enough where $\gamma>0$ is a constant. 
Rewriting \eqref{Res1}, \eqref{Res2} as \begin{equation} \label{PhiNRatio} \frac{\phi_n\left(\frac{\pi}{2}\right)}{-i}=(1-b_n)+ib_n, \quad \frac{\phi_n\left(\frac{3\pi}{2}\right)}{i}=(1-b_n)-ib_n, \quad \end{equation} $$ -\phi_n(\pi)=1-2 b_n $$ and, using that the condition $b_n<\frac{1}{8}$ implies that $\phi_n(t)\neq 0$ for all $n\in \mathbb{N}$ and all $t\in \left\{\frac{\pi}{2}, \pi, \frac{3\pi}{2}\right\}$, we conclude similarly to Example \ref{Eg1} that there are non-zero complex numbers $\kappa_1, \kappa_3$ and a non-zero real number $\kappa_2$ such that $$ \Phi_N\left(\frac{\pi}{2}\right)=\frac{(-i)^N \kappa_1}{N^\gamma} e^{i \gamma \ln N } \left(1+O\left(\frac1N\right)\right), $$ $$ \Phi_N\left(\frac{3\pi}{2}\right)=\frac{i^N \kappa_3}{N^\gamma} e^{-i \gamma \ln N } \left(1+O\left(\frac1N\right)\right), $$ $$ \Phi_N(\pi)=(-1)^N \frac{\kappa_2e^{2\gamma w_N}}{N^{2\gamma}} \left(1+O\left(\frac1N\right)\right). $$ It follows that $S_N$ admits the Edgeworth expansion of order $r$ iff $\gamma>\frac{r-1}{2}.$ In fact, if $\frac{r-1}{2}<\gamma\leq \frac{r}{2}$ then Corollary \ref{CrFirstNonEdge} shows that $$ \mathbb{P}(S_N=k)= \frac{e^{-k_N^2/2}}{\sqrt{2\pi}} \Bigg[{\mathcal E}_{r}(k_N)+ \frac{\kappa_1 e^{i\gamma \ln N}} {N^\gamma {\sigma}_N} + \frac{\kappa_3 e^{-i\gamma \ln N}} {N^\gamma {\sigma}_N} +O\left(N^{-\eta}\right)\Bigg] $$ where ${\mathcal E}_r$ is the Edgeworth polynomial of order $r$ and $\eta=\min\left(2\gamma, \frac{r}{2}\right)+\frac{1}{2}.$ To give a specific example, let us suppose that $\frac{1}{2}\leq \gamma<1$ and that ${\mathbb E}(X_n)=0$, which means that \begin{equation} \label{Ex4-A-C} a_n=\frac{3(1-b_n)}{4}, \quad c_n=\frac{1-b_n}{4}.
\end{equation} Then \begin{equation} \label{Ex4M2M3} V_N=3N-3\gamma \ln N+O(1), \quad {\mathbb E}(S_N^3)=6N-6\gamma \ln N+O(1), \end{equation} so Proposition \ref{2 Prop} gives $$ \sqrt{2\pi} \mathbb{P}(S_N=k)=$$ $$e^{-k^2/(6N)} \left[\frac{1}{\sqrt{3N}} \left(1+\frac{\kappa_1 i^{k-N} e^{i\gamma \ln N} +\kappa_3 i^{N-k} e^{-i\gamma \ln N}} {N^\gamma}\right)- \frac{k^3}{81\sqrt{3 N^5}} \right] $$ $$ +O\left(N^{-3/2}\right). $$ Next, let us provide the second-order trigonometric expansions under the sole assumption that $1-4\eta_n\geq \frac12$ and $a_n,c_n\geq \rho$. As we have mentioned, we only need to consider the nonzero resonant points $\pi/2,\pi,3\pi/2$, and for these points we have $N_0=0$. Therefore, the term involving the derivative in the right hand side of (\ref{r=2'}) vanishes. Now, a direct calculation shows that \[ C_{1,N,\pi}=\sum_{n=1}^{N}\frac{{\mathbb E}(e^{i\pi X_n}\bar X_n)}{{\mathbb E}(e^{i\pi X_n})}=2\sum_{n=1}^{N}\frac{(a_n-3c_n)b_n}{2b_n-1} \] and $$ C_{1,N,\pi/2}=\sum_{n=1}^N\frac{(a_n-3c_n)(1+i)b_n}{b_n-i(1-b_n)},\quad C_{1,N,3\pi/2}=\sum_{n=1}^N\frac{(a_n-3c_n)(1-i)b_n}{b_n+i(1-b_n)}. $$ Note that $3c_n-a_n={\mathbb E}(X_n)$. Set $$ \Gamma_{1,N}=\prod_{n=1}^{N}(b_n-i(1-b_n)), \quad \Gamma_{2,N}=\prod_{n=1}^N(2b_n-1), \quad \Gamma_{3,N}=\prod_{n=1}^{N}(b_n+i(1-b_n)). $$ Then $\Gamma_{s,N}={\mathbb E}(e^{\frac{s\pi i}{2} S_N})$. We also set $$ \Theta_{s,N}=C_{1,N,s\pi/2}\Gamma_{s,N},\,s=1,2,3 $$ and $$\Gamma_{N}(k)=\sum_{j=1}^{3}e^{-j\pi i k/2}\Gamma_{j,N}, \quad \Theta_{N}(k)=\sum_{j=1}^{3}e^{-j\pi i k/2}\Theta_{j,N}.$$ Then by Proposition \ref{2 Prop} and Remark \ref{Alter 2nd Order}, uniformly in $k$ we have \begin{equation}\label{EgGen} \sqrt{2\pi}\,{\mathbb P}(S_N=k)={\sigma}_N^{-1}\left(1+\Gamma_N(k)\right)e^{-k_N^2/2} \end{equation} $$ -{\sigma}_N^{-2}\left(k_N^3 T_N\big(1+\Gamma_{N}(k)\big)+ik_N\Theta_{N}(k)\right)e^{-k_N^2/2}+o({\sigma}_N^{-2}) $$ where $T_N=\frac{\displaystyle \sum_{n=1}^{N} {\mathbb E}(\bar X_n^3)}{6V_N}$ and $\bar X_n=X_n-{\mathbb E}(X_n)$.
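The identity $\Gamma_{s,N}={\mathbb E}(e^{\frac{s\pi i}{2} S_N})=\prod_{n=1}^N\phi_n(s\pi/2)$ used above is easy to check numerically. The following Python sketch (the sequence $b_n$ below is an arbitrary illustrative choice with $b_n<\frac18$, not taken from the text) compares the product of the single-site values \eqref{Res1}, \eqref{Res2} with the same quantity computed from the exact law of $S_N$:

```python
import numpy as np

def site_law(b):
    # X_n takes the values -1, 0, 3 with probabilities a_n, b_n, c_n;
    # here we use the zero-mean choice a_n = 3(1-b_n)/4, c_n = (1-b_n)/4.
    return {-1: 3 * (1 - b) / 4, 0: b, 3: (1 - b) / 4}

def char_product(bs, t):
    # Gamma_{s,N}-type quantity as a product of single-site
    # characteristic function values (by independence).
    out = 1.0 + 0.0j
    for b in bs:
        out *= sum(p * np.exp(1j * t * x) for x, p in site_law(b).items())
    return out

def char_direct(bs, t):
    # The same quantity computed from the exact law of S_N,
    # obtained by convolving the site laws.
    dist = {0: 1.0}
    for b in bs:
        new = {}
        for s, p in dist.items():
            for x, q in site_law(b).items():
                new[s + x] = new.get(s + x, 0.0) + p * q
        dist = new
    return sum(p * np.exp(1j * t * k) for k, p in dist.items())

bs = [0.1 / (n + 1) for n in range(8)]   # b_n < 1/8, as required
for s in (1, 2, 3):                      # the resonant points s*pi/2
    t = s * np.pi / 2
    assert abs(char_product(bs, t) - char_direct(bs, t)) < 1e-12
```

In particular, at $t=\pi$ the product reduces to $\prod_n(2b_n-1)$, in agreement with \eqref{Res2}.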
Let us now consider a more specific situation. Namely, we suppose that $b_n=\frac{\gamma}{n^{3/2}}$ for large $n$ and that ${\mathbb E}(X_n)=0.$ Then \eqref{Ex4-A-C} shows that $C_{1, N, s\pi/2}=0.$ Next, \eqref{PhiNRatio} gives $$ \frac{\Phi_N\left(\frac{\pi}{2}\right)}{(-i)^N}= \prod_{n=1}^N \left[(1-b_n)+ib_n\right]= \frac{{\bar\kappa}_1}{\displaystyle \prod_{n=N+1}^\infty\left[(1-b_n)+ib_n\right]}$$ $$= {\bar\kappa}_1 \left(1+\frac{2 \gamma(1-i)}{\sqrt{N}}+O\left(\frac{1}{N}\right)\right) $$ where $\displaystyle {\bar\kappa}_1=\prod_{n=1}^\infty \left[(1-b_n)+ib_n\right].$ Likewise $$ \frac{\Phi_N\left(\frac{3\pi}{2}\right)}{i^N}= {\bar\kappa}_3 \left(1+\frac{2 \gamma(1+i)}{\sqrt{N}}+O\left(\frac{1}{N}\right)\right) $$ and $$ \frac{\Phi_N\left(\pi\right)}{(-1)^N}= {\bar\kappa}_2 \left(1+\frac{4 \gamma}{\sqrt{N}}+O\left(\frac{1}{N}\right)\right). $$ Taking into account \eqref{Ex4M2M3} we can reduce \eqref{EgGen} to the following expansion $$ \sqrt{2\pi}{\mathbb P}(S_N=k)=e^{-k^2/(6N)} \left[ \frac{1}{\sqrt{3N}} \left(1+\sum_{s=1}^3 {\bar\kappa}_s i^{s(k-N)} \right)\right. $$ $$\left.+\frac{1}{N}\left( -\frac{\tilde k_N^3}{3} + \sum_{s=1}^3 {\bar\kappa}_s i^{s(k-N)} \left(\frac{2\gamma(1-i^{-s}) }{\sqrt{3}}-\frac{\tilde k_N^3}{3} \right)\right) \right]+O\left(\frac{1}{N^{3/2}}\right) $$ where $\tilde k_N=k/\sqrt{3N}$. \end{example} \begin{example} Let $X'$ take the values $\pm 1$ with probability $\frac{1}{2}$ each, let $X''$ take the values $0$ and $1$ with probability $\frac{1}{2}$ each, and let $X^{\delta}$, ${\delta}\in[0,1]$, be the mixture of $X'$ and $X''$ with weights $\delta$ and $1-\delta.$ Thus $X^\delta$ takes the value $-1$ with probability $\frac{\delta}{2},$ the value $0$ with probability $\frac{1-\delta}{2}$, and the value $1$ with probability $\frac{1}{2}$. Therefore, ${\mathbb E}(e^{\pi i X^\delta})=-{\delta}$. We suppose that $X_{2m}$ and $X_{2m-1}$ have the same law, which we call the law of $Y_m.$ The distribution of $Y_m$ is defined as follows.
Set $k_j=3^{3^j}$, and let $Y_{k_j}$ have the same distribution as $X^{{\delta}_j}$ where ${\delta}_j=\frac{1}{\sqrt{k_{j+1}}}$. When $m\not\in\{k_j\}$ we let $Y_m$ have the distribution of $X'$. It is clear that $V_N$ grows linearly fast in $N$. Note also that ${\mathbb E}(e^{\pi i Y_m})=-{\delta}_j$ when $m=k_j$ for some $j$, and otherwise ${\mathbb E}(e^{\pi i Y_m})=-1$. Now, take $N\in{\mathbb N}$ such that $N>2k_2$, and let $J_N$ be such that $2k_{J_N}\leq N<2k_{J_N+1}$. Then $$ |\Phi_{N}(\pi)|\leq \prod_{j=1}^{J_N}(k_{j+1})^{-1}. $$ Since $k_{J_N}\leq \frac{N}2<k_{J_N+1}$ and $k_j=(k_{j+1})^{1/3}$ we have $k_{J_N+1}^{-1}\leq 2N^{-1}$ and $k_{J_N+1-m}^{-1}\leq 2^{3^{-m}} N^{-3^{-m}}$ for any $0<m\leq J_N$. Denote $\displaystyle \alpha_N=\sum_{j=1}^{J_N-1}3^{-j}.$ Since $\alpha_N>1/3$ we get that \[ |\Phi_{N}(\pi)|\leq 2^{3/2}N^{-1-\alpha_N} =o(N^{-1-1/3}). \] Similarly, for each $j_1, j_2\leq N$, \begin{equation}\label{2nd} |\Phi_{N: j_1}(\pi)|\leq 2^{3/2} N^{-1/2-\alpha_N}=o(N^{-1/2-1/3}) \end{equation} and \begin{equation}\label{3rd} |\Phi_{N: j_1,j_2}(\pi)|\leq 2^{3/2} N^{-\alpha_N}=o(N^{-1/3}). \end{equation} Indeed, the largest possible values are obtained for $j_1=2k_{J_N}$ (or $j_1=2k_{J_N+1}-1$ if it is smaller than $N+1$) and $j_2=2k_{J_N}-1$ (or $j_2=2k_{J_{N}}$). Using the same estimates as in the proof of Theorem \ref{Thm Stable Cond} we conclude from (\ref{2nd}) that $\displaystyle \Phi_N'(\pi)=o\left(1/\sqrt{N}\right)$ and we conclude from \eqref{3rd} that $\displaystyle \Phi_N''(\pi)=o(1).$ It follows from Lemma \ref{Lem} and Proposition \ref{Thm} that $S_N$ satisfies an Edgeworth expansion of order 3. The same conclusion holds if we remove a finite number of terms from the beginning of the sequence $\{X_n\}$ because the smallness of $\Phi_N(\pi)$ comes from the terms $X_{2 k_{j}-1} $ and $X_{2 k_j}$ for arbitrarily large $j$'s.
On the other hand $$ \left|\Phi_{2k_j; 2k_j, 2k_j-1, 2k_{j-1}, 2k_{j-1}-1}(\pi)\right|= \prod_{s=2}^{j-1} \left(3^{-3^{s}}\right)$$ $$= 3^{-(3^j-9)/2}=\frac{3^{9/2}}{\sqrt{k_j}} \gg \frac{1}{k_j}=3^{-3^j}. $$ It follows that $S_{2k_j; 2k_j, 2k_j-1, 2k_{j-1}, 2k_{j-1}-1}$ does not obey the Edgeworth expansion of order 3. Accordingly, stable Edgeworth expansions need not be superstable if $r=3.$ A similar argument allows one to construct examples showing that these notions are different for all $r>2.$ \end{example} \section{Extension for uniformly bounded integer-valued triangular arrays} In this section we describe our results for arrays of independent random variables. We refer to \cite{Feller}, \cite{Mu84, Mu91} and \cite{Dub}, \cite{VS}, \cite{Pel1} and \cite{DS} for results on triangular arrays of inhomogeneous Markov chains. Examples where Markov arrays appear naturally include the theory of large deviations for inhomogeneous systems (see \cite{SaSt91, PR08, FH} and references therein), random walks in random scenery \cite{CGPPS, GW17}, and statistical mechanics \cite{LPRS}. Let $X_n^{(N)},\,1\leq n\leq L_N$ be a triangular array such that for each fixed $N$, the random variables $X_n^{(N)}$ are independent and integer-valued. Moreover, we assume that $$K:=\sup_{N}\sup_{n}\|X_n^{(N)}\|_{L^\infty}<\infty.$$ For each $N$ we set $\displaystyle S_N=\sum_{n=1}^{L_N}X_n^{(N)}$. Let $V_N=\text{Var}(S_N)$. We assume that $V_N\to\infty$, so that, by the Lindeberg--Feller theorem, the sequence $(S_N-{\mathbb E}(S_N))/{\sigma}_N$ obeys the CLT, where ${\sigma}_N=\sqrt{V_N}$. We say that the array $X_n^{(N)}$ obeys the SLLT if for any $k$ the LLT holds true for any uniformly square integrable array $Y_n^{(N)},\,1\leq n\leq L_N$, such that $Y_n^{(N)}=X_n^{(N)}$ for all but $k$ indices $n$. Set $$ M_N:=\min_{2\leq h\leq 2K}\sum_{n=1}^{L_N}{\mathbb P}(X_n^{(N)}\neq m_n^{(N)}(h) \text{ mod } h) $$ where $m_n^{(N)}(h)$ is the most likely value of $X_n^{(N)}$ modulo $h$.
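For concreteness, $M_N$ can be computed directly from the site laws: for each $2\leq h\leq 2K$ one finds the most likely residue $m_n^{(N)}(h)$ of each $X_n^{(N)}$ modulo $h$ and sums the complementary probabilities. A minimal Python sketch (the toy array below is a hypothetical illustration, not an example from the text):

```python
from collections import defaultdict

def residue_law(law, h):
    # Push a law (dict: value -> probability) forward to Z/hZ.
    out = defaultdict(float)
    for x, p in law.items():
        out[x % h] += p
    return out

def M_N(laws, K):
    # M_N = min over 2 <= h <= 2K of sum_n P(X_n != m_n(h) mod h),
    # where m_n(h) is the most likely residue of X_n mod h.
    best = float("inf")
    for h in range(2, 2 * K + 1):
        total = sum(1.0 - max(residue_law(law, h).values()) for law in laws)
        best = min(best, total)
    return best

# Toy array: each X_n takes values in {-1, 0, 1}, so K = 1 and only h = 2 matters.
laws = [{-1: 0.25, 0: 0.5, 1: 0.25} for _ in range(10)]
print(M_N(laws, 1))  # each site contributes 1 - 0.5 = 0.5, giving M_N = 5.0
```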
Observe now that the proofs of Proposition \ref{PropEdg} and Lemmas \ref{Step1}, \ref{Step2}, \ref{Step3} and \ref{Step4} proceed in exactly the same way for arrays. Therefore, all the arguments in the proof of Theorem \ref{IntIndThm} apply to arrays instead of a fixed sequence $X_n$. That is, we have \begin{theorem}\label{IntIndThmAr} There exist $J=J(K)<\infty$ and polynomials $P_{a, b, N}$ with degrees depending only on $a$ and $b$, whose coefficients are uniformly bounded in $N$, such that for any $r\geq1$, uniformly in $k\in{\mathbb Z}$ we have $${\mathbb P}(S_N=k)-\sum_{a=0}^{J-1} \sum_{b=1}^r \frac{P_{a, b, N} ((k-a_N)/\sigma_N)}{\sigma_N^b} \mathfrak{g}((k-a_N)/\sigma_N) e^{2\pi i a k/J} =o(\sigma_N^{-r}) $$ where $a_N={\mathbb E}(S_N)$ and $\mathfrak{g}(u)=\frac{1}{\sqrt{2\pi}} e^{-u^2/2}. $ Moreover, $P_{0,1,N}\equiv1$ and given $K, r$, there exists $R=R(K,r)$ such that if $M_N\geq R \ln V_N$ then we can choose $P_{a, b, N}=0$ for $a\neq 0.$ \end{theorem} All the formulas for the coefficients of the polynomials $P_{a,b,N}$ remain the same in the array setup. In particular, we get that, uniformly in $k$ we have \begin{equation}\label{r=1 Ar} {\mathbb P}(S_N=k)=\left(1+\sum_{t\in{\mathcal{R}}}e^{-itk}\Phi_N(t)\right)e^{-k_N^2/2}{\sigma}_N^{-1}+o({\sigma}_N^{-1}) \end{equation} where $\Phi_N(t)={\mathbb E}(e^{it S_N})$. Next, our version of Proposition \ref{PrLLT-SLLT} for arrays is as follows. \begin{proposition} \label{PrLLT-SLLT Ar} Suppose $S_N$ obeys the LLT. Then for each integer $h\geq2$, at least one of the following conditions occurs: \vskip0.2cm either (a) $\displaystyle \lim_{N\to\infty}\sum_{n=1}^{L_{N}} {\mathbb P}(X_n^{(N)}\neq m_n^{(N)}(h) \text{ mod } h)=\infty$.
\vskip0.2cm or (b) there exists a subsequence $N_k$, numbers $s\in{\mathbb N}$ and ${\varepsilon}_0>0$, and indices $1\leq j_1^{k},...,j_{s_k}^{k}\leq L_{N_k}$, $s_k\leq s$, so that the distribution of $\displaystyle \sum_{u=1}^{s_k}X^{(N_k)}_{j_u}$ converges to uniform $\text{mod }h$, and the distance between the distribution of $\displaystyle S_{N_k}-\sum_{q=1}^{s_k}X^{(N_k)}_{j_q}$ and the uniform distribution $\text{mod }h$ is at least ${\varepsilon}_0$. \end{proposition} \begin{proof} First, by \eqref{r=1 Ar} and Lemma \ref{Lemma}, if the LLT holds then for any nonzero resonant point $t$ we have $\displaystyle \lim_{N\to\infty}|\Phi_N(t)|=0$. Now, if (a) does not hold true then there is a subsequence $N_k$ so that $\displaystyle \sum_{n=1}^{L_{N_k}}q(X_n^{(N_k)},h)\leq C$, where $C$ is some constant. Set $\displaystyle q_n^{(N_k)}(h)=q\left(X_n^{(N_k)},h\right)$. Then there are at most $8hC$ $n$'s between $1$ and $L_{N_k}$ so that $q_n^{(N_k)}(h)>\frac1{8 h}$. Let us denote these $n$'s by $n_{1,k},...,n_{s_k,k}$, $s_k\leq 8h C$. Next, for any $n$ and a nonzero resonant point $t=2\pi l/h$ we have \begin{equation} \label{PhiQArr} |\phi_n^{(N_k)}(t)|\geq 1-2hq_n^{(N_k)}(h) \geq e^{-2\gamma h q_n^{(N_k)}(h)} \end{equation} where $\phi_n^{(N_k)}$ is the characteristic function of $X_n^{(N_k)}$ and $\gamma$ is such that for $\theta\in [0, 1/4]$ we have $1-\theta\geq e^{-\gamma \theta}.$ We thus get that \begin{equation}\label{C1.} \prod_{n\not\in\{n_{u,k}\}}|\phi_{n}^{(N_k)}(t)|\geq\prod_{n\not\in\{n_{u,k}\}}(1-2hq_n^{(N_k)}(h))\geq C_0 \end{equation} where $C_0>0$ is some constant. Therefore, $$ |\Phi_{N_k}(t)|\geq \prod_{u=1}^{s_k}|\phi_{n_{u,k}}^{(N_k)}(t)|\cdot C_0 $$ and so we must have \begin{equation}\label{C2.} \lim_{k\to\infty}\prod_{u=1}^{s_k}|\phi_{n_{u,k}}^{(N_k)}(t)|=0. \end{equation} Now (b) follows from \eqref{C1.}, \eqref{C2.} and Lemma \ref{LmUnifFourier}. \end{proof} Using (\ref{r=1 Ar}) we can now prove a version of Theorem \ref{ThProkhorov} for arrays.
\begin{theorem}\label{ThProkhorovAr} The SLLT holds iff for each integer $h>1$, \begin{equation}\label{Prokhorov Ar} \lim_{N\to\infty}\sum_{n=1}^{L_N} {\mathbb P}(X_n^{(N)}\neq m_n \text{ mod } h)=\infty \end{equation} where $m_n=m_n^{(N)}(h)$ is the most likely residue of $X_n^{(N)}$ modulo $h$. \end{theorem} \begin{proof} First, the arguments in the proof of (\ref{Roz0}) show that there are constants $c_0,C>0$ so that for any nonzero resonant point $t=2\pi l/h$ we have \begin{equation}\label{ROZ} |\Phi_N(t)|\leq Ce^{-c_0M_N(h)},\quad \text{where}\quad M_N(h):=\sum_{n=1}^{L_N}q(X_n^{(N)},h). \end{equation} Let us assume that \eqref{Prokhorov Ar} holds for all integers $h>1$. Consider $s_N$-tuples $1\leq j_1^N,...,j_{s_N}^N\leq L_N$, where $s_N\leq \bar s$ is bounded in $N$. Then by applying \eqref{ROZ} with $\displaystyle \tilde S_N=S_N-\sum_{l=1}^{s_N}X_{j_l^N}^{(N)}$ we have \begin{equation}\label{TILD} \lim_{N\to\infty}|{\mathbb E}(e^{it\tilde S_N})|=0. \end{equation} Now, arguing as in the proof of Theorem \ref{Thm Stable Cond}(1), given a uniformly square integrable array $Y_n^{(N)}$ as in the definition of the SLLT, we still have \eqref{r=1 Ar}, even though the new array is not necessarily uniformly bounded. Applying (\ref{TILD}) we see that for any nonzero resonant point $t$ we have $$ \lim_{N\to\infty}\left|{\mathbb E}\left(\exp\left[it \sum_{n=1}^{L_N}Y_n^{(N)}\right]\right)\right|=0 $$ and so $\displaystyle S_N^Y:=\sum_{n=1}^{L_N}Y_n^{(N)}$ satisfies the LLT. Now let us assume that $M_N(h)\not\to\infty$ for some $2\leq h\leq 2K$ (it is not difficult to see that \eqref{Prokhorov Ar} holds for any $h>2K$). In other words, after taking a subsequence we have that $M_{N_k}(h)\leq L$ for some $L<\infty.$ The proof of Proposition \ref{PrLLT-SLLT Ar} shows that there is $s<\infty$ such that after possibly removing terms $n_{1, k}, n_{2, k},\dots ,n_{s_k, k}$ with $s_k\leq s$ we can obtain that $q_n^{(N_k)}(h) \leq \frac{1}{8h},$ $n\not\in\{n_{j, k}\}$.
In this case \eqref{PhiQArr} shows that for each $\ell$ $$ |\Phi_{N_k; n_{1, k} , \dots, n_{s_k, k}}(2\pi\ell/h)|\geq e^{-2\gamma L}. $$ By Proposition \ref{PrLLT-SLLT Ar}, $S_{N_k; n_{1, k} , \dots, n_{s_k, k}}$ does not satisfy the LLT. \end{proof} Next, all the other arguments in our paper proceed similarly for arrays since they essentially rely only on the specific structure of the polynomials from Theorem \ref{IntIndThm}. For the sake of completeness, let us formulate the main (remaining) results here. \begin{theorem}\label{ThLLT Ar} The following conditions are equivalent: (a) $S_N$ satisfies LLT; (b) For each $\xi\in {\mathbb R}\setminus {\mathbb Z}$, $\displaystyle \lim_{N\to\infty} {\mathbb E}\left(e^{2\pi i \xi S_N}\right)=0$; (c) For each non-zero resonant point $\xi$, $\displaystyle \lim_{N\to\infty} {\mathbb E}\left(e^{2\pi i \xi S_N}\right)=0$; (d) For each integer $h$ the distribution of $S_N$ mod $h$ converges to uniform. \end{theorem} \begin{theorem} \label{ThEdgeMN Ar} For each $r$ there is $R=R(r, K)$ such that the Edgeworth expansion of order $r$ holds true if $M_N\geq R\ln V_N$. In particular, $S_N$ obeys Edgeworth expansions of all orders if $$ \lim_{N\to\infty} \frac{M_N}{\ln V_N}=\infty. $$ \end{theorem} \begin{theorem}\label{r Char Ar} For any $r\geq1$, the Edgeworth expansion of order $r$ holds if and only if for any nonzero resonant point $t$ and $0\leq\ell<r$ we have \[ \bar \Phi_{N}^{(\ell)}(t)=o\left({\sigma}_N^{\ell+1-r}\right) \] where $\bar\Phi_{N}(x)={\mathbb E}[e^{ix (S_N-{\mathbb E}(S_N))}]$. \end{theorem} \begin{theorem}\label{Thm SLLT VS Ege Ar } Suppose $S_N$ obeys the SLLT. Then the following are equivalent: (a) Edgeworth expansion of order 2 holds; (b) $|\Phi_N(t)|=o(\sigma_N^{-1})$ for each nonzero resonant point $t$; (c) For each $h\leq 2K$ the distribution of $S_N$ mod $h$ is $o(\sigma_N^{-1})$ close to uniform. 
\end{theorem} Next, we say that an array $\{X_n^{(N)}\}$ {\em admits an Edgeworth expansion of order $r$ in a superstable way} (denoted by $\{X_n^{(N)}\}\in EeSs(r)$) if for each ${\bar s}$ and each sequence $j_1^N, j_2^N,\dots ,j_{s_N}^N$ with $s_N\leq {\bar s}$ and $j_i^N\leq L_N$ there are polynomials $P_{b, N}$ whose coefficients are $O(1)$ in $N$ and whose degrees do not depend on $N$, so that uniformly in $k\in{\mathbb Z}$ we have \begin{equation}\label{EdgeDefSS Ar} {\mathbb P}(S_{N; j_1^N, j_2^N, \dots,j_{s_N}^N}=k)=\sum_{b=1}^r \frac{P_{b, N} (k_N)}{\sigma_N^b} \mathfrak{g}(k_N)+o(\sigma_N^{-r}) \end{equation} and the estimates in $O(1)$ and $o(\sigma_N^{-r})$ are uniform in the choice of the tuples $j_1^N, \dots ,j_{s_N}^N.$ Let $\Phi_{N; j_1, j_2,\dots, j_s}(t)$ be the characteristic function of $S_{N; j_1, j_2,\dots, j_s}.$ \begin{theorem}\label{Thm Stable Cond Ar} (1) $S_N\in EeSs(1)$ (that is, $S_N$ satisfies the LLT in a superstable way) if and only if it satisfies the SLLT. (2) For arbitrary $r\geq 1$ the following conditions are equivalent: (a) $\{X_n^{(N)}\}\in EeSs(r)$; (b) For each $j_1^N, j_2^N,\dots ,j_{s_N}^N$ and each nonzero resonant point $t$ we have $\Phi_{N; j_1^N, j_2^N,\dots, j_{s_N}^N}(t)=o(\sigma_N^{1-r});$ (c) For each $j_1^N, j_2^N,\dots ,j_{s_N}^N$, and each $h\leq 2K$ the distribution of $S_{N; j_1^N, j_2^N,\dots, j_{s_N}^N}$ mod $h$ is $o(\sigma_N^{1-r})$ close to uniform. \end{theorem} \end{document}
\begin{document} \title{Areas of triangles and $\text{SL}_2(R)$ actions} \begin{abstract} In Euclidean space, one can use the dot product to give a formula for the area of a triangle in terms of the coordinates of each vertex. Since this formula involves only addition, subtraction, and multiplication, it can be used as a definition of area in $R^2$, where $R$ is an arbitrary ring. The result is a quantity associated with triples of points which is still invariant under the action of $\text{SL}_2(R)$. One can then look at a configuration of points in $R^2$ in terms of the triangles determined by pairs of points and the origin, considering two such configurations to be of the same type if corresponding pairs of points determine the same areas. In this paper we consider the cases $R=\mathbb{F}_q$ and $R=\mathbb{Z}/p^\ell \mathbb{Z}$, and prove that sufficiently large subsets of $R^2$ must produce a positive proportion of all such types of configurations. \end{abstract} \section{Introduction} There are several interesting combinatorial problems asking whether a sufficiently large subset of a vector space over a finite field must generate many different objects of some type. The best-known example is the Erd\H{o}s-Falconer problem, which asks whether such a set must contain all possible distances, or at least a positive proportion of distances. More precisely, given $E\subset\mathbb{F}_q^d$ we define the distance set \[ \Delta(E)=\{(x_1-y_1)^2+\cdots +(x_d-y_d)^2:x,y\in E\}. \] Obviously, $\Delta(E)\subset\mathbb{F}_q$. The Erd\H{o}s-Falconer problem asks for an exponent $s$ such that $\Delta(E)=\mathbb{F}_q$, or more generally $|\Delta(E)|\gtrsim q$, whenever $|E|\gtrsim q^s$. (Throughout, the notation $X\lesssim Y$ means there is a constant $C$ such that $X\leq CY$, $X\approx Y$ means $X\lesssim Y$ and $Y\lesssim X$, and $O(X)$ denotes a quantity that is $\lesssim X$.)
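For a concrete feel for this definition, $\Delta(E)$ can be computed by brute force when the field is tiny. The snippet below is an illustration only; the choices $q=3$ and $E=\mathbb{F}_3^2$ (the whole plane) are assumptions made to keep the enumeration small, and every element of $\mathbb{F}_3$ is indeed attained as a distance.

```python
# Brute-force computation of the finite-field distance set Delta(E).
# Illustration only: q = 3 and E = F_3^2 are arbitrary small choices.
q = 3
E = [(x1, x2) for x1 in range(q) for x2 in range(q)]

def dist(x, y):
    # (x1 - y1)^2 + (x2 - y2)^2 reduced mod q
    return sum((a - b) ** 2 for a, b in zip(x, y)) % q

Delta = {dist(x, y) for x in E for y in E}
print(sorted(Delta))  # every element of F_3 occurs: [0, 1, 2]
```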
In \cite{IR}, Iosevich and Rudnev proved that $\Delta(E)=\mathbb{F}_q$ if $|E|\gtrsim q^{\frac{d+1}{2}}$. In \cite{Sharpness} it is proved by Hart, Iosevich, Koh, and Rudnev that the exponent $\frac{d+1}{2}$ cannot be improved in odd dimensions, although it has been improved to $4/3$ in the $d=2$ case (first in \cite{WolffExponent} in the case $q\equiv 3 \text{ (mod 4)}$ by Chapman, Erdogan, Hart, Iosevich, and Koh, then in general in \cite{GroupAction} by Bennett, Hart, Iosevich, Pakianathan, and Rudnev). Several interesting variants of the distance problem have been studied as well. A result of Pham, Phuong, Sang, Valculescu, and Vinh studies the problem when distances between pairs of points are replaced with distances between points and lines in $\mathbb{F}_q^2$; they prove that if sets $P$ and $L$ of points and lines, respectively, satisfy $|P||L|\gtrsim q^{8/3}$, then they determine a positive proportion of all distances \cite{Lines}. Birklbauer, Iosevich, and Pham proved an analogous result about distances determined by points and hyperplanes in $\mathbb{F}_q^d$ \cite{Planes}.\\ We can replace distances with dot products and ask the analogous question. Let \[ \Pi(E)=\{x_1y_1+\cdots +x_dy_d:x,y\in E\}, \] and again ask for an exponent $s$ such that $|E|\gtrsim q^s$ implies $\Pi(E)$ contains all of $\mathbb{F}_q$ (or at least a positive proportion of it). Hart and Iosevich prove in \cite{HI} that the exponent $s=\frac{d+1}{2}$ works for this question as well. The proof is quite similar to the proof of the same exponent in the Erd\H{o}s-Falconer problem; in each case, the authors consider a function which counts, for each $t\in\mathbb{F}_q$, the number of representations of $t$ as, respectively, a distance and a dot product determined by the set $E$. These representation functions are then studied using techniques from Fourier analysis.
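As with the distance set, $\Pi(E)$ is easy to enumerate in a toy example. Here $q=5$ and $E=\mathbb{F}_5^2\setminus\{0\}$ are arbitrary small choices (not from the results above), and $E$ is a proper subset of the plane whose dot-product set is nevertheless all of $\mathbb{F}_5$.

```python
# Brute-force computation of the dot-product set Pi(E) in F_q^2.
# Illustration only: q = 5 and the choice of E are arbitrary assumptions.
q = 5
E = [(x1, x2) for x1 in range(q) for x2 in range(q) if (x1, x2) != (0, 0)]

def dot(x, y):
    return (x[0] * y[0] + x[1] * y[1]) % q

Pi = {dot(x, y) for x in E for y in E}
print(sorted(Pi))  # all of F_5: [0, 1, 2, 3, 4]
```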
\\ Another interesting variant of this problem was studied in \cite{Angles}, where Lund, Pham, and Vinh defined the angle between two vectors in analogy with the usual geometric interpretation of the dot product. Namely, given vectors $x$ and $y$, they consider the quantity \[ s(x,y)=1-\frac{(x\cdot y)^2}{\|x\|\|y\|}, \] where $\|x\|=x_1^2+\cdots +x_d^2$ is the finite field distance defined above. Note that since we cannot always take square roots in finite fields, the finite field distance corresponds to the square of the Euclidean distance; therefore, $s(x,y)$ above is the correct finite field analogue of $\sin^2\theta$, where $\theta$ is the angle between the vectors $x$ and $y$. This creates a variant of the dot product problem, since one can obtain different dot products from the same angle by varying length. The authors go on to prove that the exponent $\frac{d+2}{2}$ guarantees a positive proportion of angles. \\ It is of interest to generalize these types of results to point configurations. By a $(k+1)$-point configuration in $\mathbb{F}_q^d$, we simply mean an element of $(\mathbb{F}_q^d)^{k+1}$. Throughout, we will use superscripts to denote different vectors in a given configuration, and subscripts to denote the coordinates of each vector. For example, a $(k+1)$-point configuration $x$ is made up of vectors $x^1,...,x^{k+1}$, each of which has coordinates $x_1^i,\dots,x_d^i$ (in our setting $d=2$, so $x^i=(x_1^i,x_2^i)$). Given a set $E\subset\mathbb{F}_q^d$, we can consider $(k+1)$-point configurations in $E$ (i.e., elements of $E^{k+1}$) and ask whether $E$ must contain a positive proportion of all configurations, up to some notion of equivalence. For example, we may view $(k+1)$-point configurations as simplices, and our notion of equivalence is geometric congruence; any two simplices are congruent if there is a translation and a rotation that maps one onto the other.
Since a $2$-point configuration is simply a pair of points, and two such configurations are congruent if and only if the corresponding distances are equal, congruence classes simply correspond to distances. Hence, the Erd\H{o}s-Falconer distance problem may be viewed as simply the $k=1$ case of the simplex congruence problem. In \cite{Ubiquity}, Hart and Iosevich prove that $E$ contains the vertices of a congruent copy of every non-degenerate simplex (non-degenerate here means the points are in general position) whenever $|E|\gtrsim q^{\frac{kd}{k+1}+\frac{k}{2}}$. However, in order for this result to be non-trivial the exponent must be $<d$, and that only happens when $\binom{k+1}{2}<d$. So, the result is limited to fairly small configurations. This result is improved in \cite{GroupAction} by Bennett, Hart, Iosevich, Pakianathan, and Rudnev, who prove that for any $k\leq d$ a set $E\subset\mathbb{F}_q^d$ determines a positive proportion of all congruence classes of $(k+1)$-point configurations provided $|E|\gtrsim q^{d-\frac{d-1}{k+1}}$. This exponent is clearly non-trivial for all $k$. In \cite{Me}, I extended this result to the case $k\geq d$. \\ In this paper, we consider a different notion of equivalence. We will consider the problem over both finite fields and rings of integers modulo powers of primes, so I will define the equivalence relation in an arbitrary ring. \begin{dfn} Let $R$ be a ring, and let $E\subset R^2$. We define an equivalence relation $\sim$ on $E^{k+1}$ by $(x^1,...,x^{k+1})\sim (y^1,...,y^{k+1})$ (or more briefly $x\sim y$) if and only if for each pair $i,j$ we have $x^i\cdot x^{j\perp}=y^i\cdot y^{j\perp}$. Define $\mathcal{C}_{k+1}(E)$ to be the set of equivalence classes of $E^{k+1}$ under this relation. \end{dfn} In the Euclidean setting, $\frac{1}{2} |x\cdot y^\perp|$ is the area of the triangle with vertices $0,x,y$.
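The invariance behind this definition is elementary: an element of $\text{SL}_2$ has determinant $1$, so applying it to both vectors preserves $x\cdot y^\perp$. A quick numerical sanity check (an illustration only; the modulus $q=7$, the matrix $g$, and the configuration below are arbitrary choices, not data from the paper) confirms this.

```python
# Verifies that x . y^perp = x1*y2 - x2*y1 is preserved when a single
# g in SL_2 acts on every vector of a configuration. All choices below
# (q, g, config) are arbitrary illustrations.
q = 7

def area(x, y):
    # the quantity x . y^perp (twice the signed triangle area), mod q
    return (x[0] * y[1] - x[1] * y[0]) % q

def act(g, x):
    return ((g[0][0] * x[0] + g[0][1] * x[1]) % q,
            (g[1][0] * x[0] + g[1][1] * x[1]) % q)

g = ((2, 3), (1, 2))              # det = 2*2 - 3*1 = 1, so g lies in SL_2(F_7)
config = [(1, 0), (2, 5), (3, 4)]
moved = [act(g, x) for x in config]

before = [area(x, y) for x in config for y in config]
after = [area(x, y) for x in moved for y in moved]
assert before == after  # config ~ moved in the sense of the definition
```

Running the same check with a determinant $-1$ matrix generally fails, which is why the relation is tied to $\text{SL}_2$ rather than all of $\text{GL}_2$.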
So, we may view each pair of points in a $(k+1)$-point configuration as determining a triangle with the origin, and we consider two such configurations to be equivalent if the triangles they determine all have the same areas. As we will prove in section $2$, this equivalence relation is closely related to the action of $\text{SL}_2(R)$ on tuples of points; except for some degenerate cases, two configurations are equivalent if and only if there is a unique $g$ mapping one to the other. This allows us to analyze the problem in terms of this action; in section 2, we define a counting function $f(g)$ and reduce matters to estimating the sum $\sum_g f(g)^{k+1}$. In section 3, we show how to turn an estimate for $\sum_g f(g)^2$ into an estimate for $\sum_g f(g)^{k+1}$. Since we already understand the $k=1$ case (it is essentially the same as the dot product problem discussed above), this reduction allows us to obtain a non-trivial result. Our first theorem is as follows. \begin{thm} \label{MT1} Let $q$ be a power of an odd prime, and let $E\subset\mathbb{F}_q^2$ satisfy $|E|\gtrsim q^s$, where $s={2-\frac{1}{k+1}}$. Then $\mathcal{C}_{k+1}(E)\gtrsim \mathcal{C}_{k+1}(\mathbb{F}_q^2)$. \end{thm} In addition to proving this theorem, we will consider the case where the finite field $\mathbb{F}_q$ is replaced by the ring $\mathbb{Z}/p^\ell\mathbb{Z}$. The structure of the proof is largely the same; the dot product problem over such rings is studied in \cite{CIP}, giving us the $k=1$ case, and the machinery which lifts that case to arbitrary $k$ works the same way. However, many details in the proofs are considerably more complicated. The theorem is as follows. \begin{thm} \label{MT2} Let $p$ be an odd prime, let $\ell\geq 1$, and let $E\subset(\mathbb{Z}/p^\ell\mathbb{Z})^2$ satisfy $|E|\gtrsim \ell^{\frac{2}{k+1}}p^{\ell s}$, where $s=2-\frac{1}{\ell(k+1)}$. 
Then $\mathcal{C}_{k+1}(E)\gtrsim \mathcal{C}_{k+1}((\mathbb{Z}/p^\ell\mathbb{Z})^2).$ \end{thm} We first note that, as we would expect, Theorem \ref{MT2} coincides with Theorem \ref{MT1} in the case $\ell=1$. We also note that, for fixed $p$ and $k$, the exponent in Theorem \ref{MT2} is always less than 2, but it tends to $2$ as $\ell\to\infty$. This does not happen in the finite field case, where the exponent depends on $k$ but not on the size of the field. \\ Finally, we want to state the extent to which these results are sharp. There are examples which show that the exponent must tend to $2$ as $k\to\infty$ in the finite field case, and as either $k\to\infty$ or $\ell\to\infty$ in the $\mathbb{Z}/p^\ell\mathbb{Z}$ case. \begin{thm}[Sharpness] We have the following: \begin{enumerate}[i] \item For any $s<2-\frac{2}{k+1}$, there exists $E\subset\mathbb{F}_q^2$ such that $|E|\approx q^s$ and $\mathcal{C}_{k+1}(E)=o(\mathcal{C}_{k+1}(\mathbb{F}_q^2))$. \item For any $s<2-\min\left(\frac{2}{k+1},\frac{1}{\ell}\right)$, there exists $E\subset(\mathbb{Z}/p^\ell\mathbb{Z})^2$ such that $|E|\approx p^{\ell s}$ and $\mathcal{C}_{k+1}(E)=o(\mathcal{C}_{k+1}((\mathbb{Z}/p^\ell\mathbb{Z})^2))$. \end{enumerate} \end{thm} \section{Characterization of the equivalence relation in terms of the $\text{SL}_2(R)$ action} Our main tool in reducing the problem of $(k+1)$-point configurations to the $k=1$ case is the fact that we can express the equivalence relation in terms of the action of the special linear group; with some exceptions, tuples $x$ and $y$ are equivalent if and only if there exists a unique $g\in \text{SL}_2$ such that for each $i$, we have $y^i=gx^i$. In order to use this, we need to bound the number of exceptions to this rule. This is easy in the finite field case, and a little more tricky in the $\mathbb{Z}/p^\ell\mathbb{Z}$ case. The goal of this section is to describe and bound the number of exceptional configurations in each case. We begin with a definition.
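As a numerical aside: the group orders $|\text{SL}_2(\mathbb{F}_q)|=q^3-q$ and $|\text{SL}_2(\mathbb{Z}/p^\ell\mathbb{Z})|=p^{3\ell}-p^{3\ell-2}$, proved below in Lemma \ref{SL2size}, can be confirmed by direct enumeration for small moduli (the moduli in this sketch are arbitrary small choices, not values from the paper).

```python
# Direct enumeration of |SL_2(Z/nZ)|: count solutions of ad - bc = 1 mod n.
# Illustration only; the moduli below are arbitrary small choices.
def sl2_order(n):
    return sum(1
               for a in range(n) for b in range(n)
               for c in range(n) for d in range(n)
               if (a * d - b * c) % n == 1)

q = 3        # the prime field F_3: expect q^3 - q = 24
p, l = 3, 2  # the ring Z/9Z: expect p^(3l) - p^(3l-2) = 729 - 81 = 648
print(sl2_order(q), sl2_order(p ** l))  # 24 648
```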
\begin{dfn} Let $R$ be a ring. A configuration $x=(x^1,...,x^{k+1})\in (R^2)^{k+1}$ is called \textbf{good} if there exist two indices $i,j$ such that $x^i\cdot x^{j\perp}$ is a unit. A configuration is \textbf{bad} if it is not good. \end{dfn} As we will see, the good configurations are precisely those for which equivalence is determined by the action of $\text{SL}_2(R)$. To prove this, we will need the following theorems about determinants of matrices over rings, which can be found in \cite{DF}, section 11.4. \begin{thm} \label{DF1} Let $R$ be a ring, let $A_1,...,A_n$ be the columns of an $n\times n$ matrix $A$ with entries in $R$. Fix an index $i$, and let $A'$ be the matrix obtained from $A$ by replacing column $A_i$ by $c_1A_1+\cdots+c_nA_n$, for some $c_1,...,c_n\in R$. Then $\det(A')=c_i\det(A)$. \end{thm} \begin{thm} \label{DF2} Let $R$ be a ring, and let $A$ be an $n\times n$ matrix with entries in $R$. The matrix $A$ is invertible if and only if $\det(A)$ is a unit in $R$. \end{thm} \begin{thm} \label{DF3} Let $R$ be a ring, and let $A$ and $B$ be $n\times n$ matrices with entries in $R$. Then $\det(AB)=\det(A)\det(B)$. \end{thm} We are now ready to prove that equivalence of good configurations is given by the action of the special linear group. \begin{lem} \label{SL2} Let $R$ be a ring, and let $x,y$ be good configurations such that $x^i\cdot x^{j\perp}=y^i\cdot y^{j\perp}$ for every pair of indices $i,j$. Then there exists a unique $g\in \text{SL}_2(R)$ such that $y^i=gx^i$ for each $i$. \end{lem} \begin{proof} Because $x$ and $y$ are good, there exist indices $i$ and $j$ such that $x^i\cdot x^{j\perp}$ is a unit; equivalently, the determinant of the $2\times 2$ matrix with columns $x^i$ and $x^j$ is a unit. Denote this matrix by $(x^i\ x^j)$. By theorem \ref{DF2}, this matrix is invertible. Let \[ g=(y^i\ y^j)(x^i\ x^j)^{-1}. \] Since $g(x^i\ x^j)=(gx^i\ gx^j)$, it follows that $y^i=gx^i$ and $y^j=gx^j$. 
Also note that by Theorem \ref{DF3}, we have $\det(g)=1$. Let $n$ be any other index. We want to write $x^n=ax^i+bx^j$; this amounts to solving the matrix equation \[ \begin{pmatrix} x_1^i & x_1^j \\ x_2^i & x_2^j \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} =x^n. \] Since we have already established that the matrix $(x^i\ x^j)$ is invertible, we can solve for $a$ and $b$. Similarly, let $y^n=a'y^i+b'y^j$. By Theorem \ref{DF1}, we have $\det(x^i\ x^n)=b\det(x^i\ x^j)$ and $\det(y^i\ y^n)=b'\det(y^i\ y^j)$. It follows that $b=b'$, and an analogous argument yields $a=a'$. Therefore, \[ gx^n=g(ax^i+bx^j)=agx^i+bgx^j=ay^i+by^j=y^n. \] So, we have established existence. To prove uniqueness, note that $g$ must satisfy $g(x^i\ x^j)=(y^i\ y^j)$, and since $(x^i\ x^j)$ is invertible we can solve for $g$. \end{proof} Now that we know that good tuples allow us to use the machinery we need, we must prove that the bad tuples are negligible. \begin{lem} \label{countbad} Let $R$ be a ring and let $E\subset R^2$. We have the following: \begin{enumerate}[i] \item If $R=\mathbb{F}_q$, then $E^{k+1}$ contains $\lesssim q^k|E|$ bad tuples. In particular, if $|E|\gtrsim q^{1+\e}$ for any constant $\e>0$, the number of bad tuples in $E^{k+1}$ is $o(|E|^{k+1})$. \item If $R=\mathbb{Z}/p^\ell \mathbb{Z}$, the number of bad tuples in $(R^2)^{k+1}$ is $\lesssim p^{(2\ell-1)(k+1)+1}$. In particular, if $|E|\gtrsim p^{2\ell-1+\frac{1}{k+1}+\e}$ for any constant $\e>0$, then the number of bad tuples in $E^{k+1}$ is $o(|E|^{k+1})$. \end{enumerate} \end{lem} \begin{proof} We first prove the first claim. Since the only non-unit of $\mathbb{F}_q$ is 0, a bad tuple must consist of $k+1$ points which all lie on a line through the origin. Therefore, we may choose $x^1$ to be anything in $E$, after which the next $k$ points must be chosen from the $q$ points on the line through the origin and $x^1$.
\\ To prove the second claim, first observe that the number of tuples where at least one coordinate is a non-unit is $p^{2(\ell-1)(k+1)}$, which is less than the claimed bound. So, it suffices to bound the set of bad tuples where all coordinates are units. Let $B$ be this set. Define \[ \psi(x_1^1,x_2^1,\cdots ,x_1^{k+1},x_2^{k+1})=(p^{\ell-1}x_1^1,x_2^1,\cdots ,p^{\ell-1}x_1^{k+1},x_2^{k+1}). \] If $x\in B$, then $x^i\cdot x^{j\perp}$ is a non-unit, meaning it is divisible by $p$, and \[ (p^{\ell-1}x_1^i,x_2^i)\cdot (p^{\ell-1}x_1^j,x_2^j)^\perp=p^{\ell-1}\,x^i\cdot x^{j\perp}=0. \] Therefore, $\psi$ maps bad tuples $x$ to tuples $y$ with $y^i\cdot y^{j\perp}=0$, or $y_1^iy_2^j-y_1^jy_2^i=0$. Rearranging, using the fact that the second coordinate of each $y^i$ is a unit, we conclude that $\frac{y_1^i}{y_2^i}$ is a constant independent of $i$ which is divisible by $p^{\ell-1}$. In other words, each $y^i$ is on a common line through the origin and a point $(n,1)$ where $p^{\ell-1}|n$. There are $p$ such lines, and once we fix a line there are $p^{\ell(k+1)}$ choices of tuples $y$. Therefore, $|\psi(B)|\leq p\cdot p^{\ell(k+1)}$. Finally, we observe that the map $\psi$ is $p^{(\ell-1)(k+1)}$-to-1. This gives us the claimed bound on $|B|$. \end{proof} \begin{lem} \label{flemma} Let $R$ be either $\mathbb{F}_q$ or $\mathbb{Z}/p^\ell\mathbb{Z}$. Let $E\subset R^2$, and let $G\subset E^{k+1}$ be the set of good tuples. Suppose $|E|\gtrsim q^{1+\e}$ if $R=\mathbb{F}_q$ and $|E|\gtrsim p^{2\ell-1+\frac{1}{k+1}+\e}$ if $R=\mathbb{Z}/p^\ell\mathbb{Z}$. For $g\in \text{SL}_2(R)$, define $f(g)=\sum_x E(x)E(gx)$. Then \[ |E|^{2(k+1)}\lesssim \mathcal{C}_{k+1}(E)\sum_{g\in\text{SL}_2(R)} f(g)^{k+1}. \] \end{lem} \begin{proof} By Cauchy-Schwarz, we have \[ |G|^2\leq |G/\sim|\cdot |\{(x,y)\in G\times G:x\sim y\}|. \] By assumption and Lemma \ref{countbad}, $|E|^{k+1}\approx |G|$, and therefore the left hand side above is $\approx |E|^{2(k+1)}$.
Since $G\subset E^{k+1}$ the right hand side above is $\leq \mathcal{C}_{k+1}(E)|\{(x,y)\in G\times G:x\sim y\}|$. It remains to prove $|\{(x,y)\in G\times G: x\sim y\}|\leq \sum_{g\in\text{SL}_2(R)} f(g)^{k+1}$. By lemma \ref{SL2}, \[ |\{(x,y)\in G\times G: x\sim y\}|=\sum_{x,y\in G}\sum_{\substack{g \\ y=gx}} 1. \] By extending the sum over $G$ to one over all of $E^{k+1}$, we bound the above sum by \begin{align*} &\sum_{x,y\in E^{k+1}}\sum_{\substack{g \\ y=gx}} 1 \\ =&\sum_x E(x^1)\cdots E(x^{k+1})\sum_g E(gx^1)\cdots E(gx^{k+1}) \\ =&\sum_g\left(\sum_{x^1} E(x^1)E(gx^1)\right)^{k+1} \\ =&\sum_g f(g)^{k+1} \end{align*} \end{proof} \section{Lifting $L^2$ estimates to $L^{k+1}$ estimates} In both the case $R=\mathbb{F}_q$ and $R=\mathbb{Z}/p^\ell\mathbb{Z}$, results are known for pairs of points, which is essentially the $k=1$ case. The finite field version was studied in \cite{HI}, and the ring of integers modulo $p^\ell$ was studied in \cite{CIP}. In section 2, we defined a function $f$ on $\text{SL}_2(R)$ and related the number of equivalence classes determined by a set to the sum $\sum_g f(g)^{k+1}$. Since results are known for the $k=1$ case, we have information about the sum $\sum_g f(g)^2$. We wish to turn that into a bound for $\sum_g f(g)^{k+1}$. This is achieved with the following lemma. \begin{lem} \label{induction} Let $S$ be a finite set, and let $F:S\to \mathbb{R}_{\geq 0}$. Let \[ A=\frac{1}{|S|}\sum_{x\in S}F(x) \] denote the average value of $F$, and \[ M=\sup_{x\in S}F(x) \] denote the maximum. Finally, suppose \[ \sum_{x\in S}F(x)^2=A^2|S|+R. \] Then there exist constants $c_k$, depending only on $k$, such that \[ \sum_{x\in S}F(x)^{k+1}\leq c_k(M^{k-1}R+A^{k+1}|S|). \] \end{lem} \begin{proof} We proceed by induction. For the base case, let $c_1=1$ and observe that the claimed bound is the one we assumed for $\sum_x F(x)^2$. 
Now, let $\{c_k\}$ be any sequence such that $k\binom{k}{j}c_j\leq c_k$ holds for all $j<k$; for example, $c_k=2^{k^2}$ works. Now, suppose the claimed bound holds for all $1\leq j<k$, and also observe that the bound is trivial for $j=0$. By direct computation, we have \begin{align*} &\sum_{x\in S}(F(x)-A)^2 \\ =&\sum_{x\in S}F(x)^2-2A\sum_{x\in S}F(x)+A^2|S| \\ =&\sum_{x\in S}F(x)^2-A^2|S| \\ =&R. \end{align*} We also have \[ \sum_{x\in S}F(x)^{k+1}=\sum_{x\in S}(F(x)-A)^kF(x)+\sum_{j=0}^{k-1}\binom{k}{j}(-1)^{k-j+1}A^{k-j}\sum_{x\in S}F(x)^{j+1}. \] To bound the first term, we simply use the trivial bound. Since $F(x)\leq M$ for all $x$, $A\leq M$, and $F(x),A\geq 0$, we conclude $|F(x)-A|\leq M$ for each $x$. Therefore, \[ \sum_{x\in S}(F(x)-A)^kF(x)\leq M^{k-1}\sum_{x\in S}(F(x)-A)^2=M^{k-1}R. \] To bound the second term, we use the inductive hypothesis and the triangle inequality. We have \begin{align*} &\left|\sum_{j=0}^{k-1}\binom{k}{j}(-1)^{k-j+1}A^{k-j}\sum_{x\in S}F(x)^{j+1}\right| \\ \leq &k\cdot\sup_{0\leq j<k} \binom{k}{j}A^{k-j}\sum_{x\in S}F(x)^{j+1} \\ \leq &k\cdot\sup_{0\leq j<k} \binom{k}{j}A^{k-j}c_j(M^{j-1}R+A^{j+1}|S|) \\ \leq & c_k \cdot\sup_{0\leq j<k}(A^{k-j}M^{j-1}R+A^{k+1}|S|) \end{align*} Since $A\leq M$, it follows that $A^{k-j}M^{j-1}R\leq M^{k-1}R$ for any $j<k$, so the claimed bound holds. \end{proof} \section{Some lemmas about the action of $\text{SL}_2(R)$} \begin{lem} \label{action} Let $G$ be a finite group acting transitively on a finite set $X$. Define $\ph:X\times X\to\mathbb{N}$ by $\ph(x,y)=|\{g\in G:gx=y\}|$. We have \[ \ph(x,y)=\frac{|G|}{|X|} \] for every pair $x,y$. If $h:X\to\mathbb{C}$ and $x_0\in X$, then \[ \sum_{g\in G} h(gx_0)=\frac{|G|}{|X|}\sum_{x\in X}h(x). \] \end{lem} \begin{proof} The second statement follows from the first by a simple change of variables. To prove the first, we have \[ \sum_{x,y\in X}\ph(x,y)=\sum_{g\in G}\sum_{\substack{x,y\in X \\ gx=y}}1. 
\] On the right, for any fixed $g$, one can choose any $x$ and there is a unique corresponding $y$, so the inner sum is $|X|$ and the right hand side is therefore $|G||X|$. On the other hand, $\ph$ is constant. To prove this, let $x,y,z,w\in X$ and let $h_1,h_2\in G$ such that $h_1x=z$ and $h_2w=y$. This means for any $g$ with $gz=w$, we have $(h_2gh_1)x=y$, so $\ph(z,w)\leq \ph(x,y)$. By symmetry, equality holds. If $c$ is the constant value of $\ph(x,y)$, the left hand side above must be $c|X|^2$, and therefore $c=\frac{|G|}{|X|}$ as claimed. \end{proof} \begin{lem} \label{SL2size} We have $|\text{SL}_2(\mathbb{F}_q)|=q^3-q$ and $|\text{SL}_2(\mathbb{Z}/p^\ell\mathbb{Z})|=p^{3\ell}-p^{3\ell-2}$. \end{lem} \begin{proof} We are counting solutions to the equation $ad-bc=1$ where $a,b,c,d\in\mathbb{F}_q$. We consider two cases. If $a$ is zero, then $d$ can be anything, and we must have $bc=-1$. This means $b$ can be anything non-zero, and $c$ is determined. So, there are $q^2-q$ solutions with $a=0$. With $a\neq 0$, $b$ and $c$ can be anything, and $d$ is determined, giving $q^3-q^2$ solutions in this case. So, there are $(q^3-q^2)+(q^2-q)=q^3-q$ total solutions. \\ Next, we want to count solutions to $ad-bc=1$ with $a,b,c,d\in\mathbb{Z}/p^\ell\mathbb{Z}$. The arguments are essentially the same as in the proof of the finite field case, but slightly more complicated because there are non-zero elements which are still not units. We again consider separately two cases according to whether $a$ is a unit or not. If $a$ is a unit, then $b,c$ can be anything and then $d$ is determined, so there are $(p^{\ell}-p^{\ell-1})p^{2\ell}$ such solutions. If $a$ is not a unit, then $b$ and $c$ must be units, as otherwise $1$ would be divisible by $p$. So there are $p^{\ell-1}$ choices for $a$, $p^{\ell}$ choices for $d$, $p^{\ell}-p^{\ell-1}$ for $b$, and $c$ is determined. Putting this together, we get the claimed number of solutions.
\end{proof} \section{Proof of Theorem \ref{MT1}} We are now ready to prove theorem \ref{MT1}. \begin{proof} First observe that good tuples are equivalent to $\approx q^{3}$ distinct tuples, so there are $\approx q^{2k-1}$ equivalence classes of good tuples. Since the only non-unit in the finite field case is 0, the bad tuples are all in the same equivalence class. So, our goal is to prove $\mathcal{C}_{k+1}(E)\gtrsim q^{2k-1}$. We first must prove the estimate \[ \sum_g f(g)^2=\frac{|E|^4}{q}+O(q^2|E|^2). \] We expand the sum on the left hand side and change variables to obtain \[ \sum_g f(g)^2=\sum_{x^1,x^2,y^1,y^2}E(x^1)E(x^2)E(y^1)E(y^2)\left(\sum_{\substack{g \\ gx=y}}1\right). \] We first observe we may ignore the pairs $x,y$ which are on a line through the origin. This is because if $x^2=tx^1$ and $y^2=sy^1$, there will exist $g$ with $gx=y$ if and only if $t=s$, in which case there are $\approx q$ choices for $g$. So, we have $|E|$ choices for $x^1$ and $y^1$, $q$ choices for $t$, and $\approx q$ choices for $g$ giving an error of $O(q^2|E|^2)$, as claimed. For all other pairs $x,y$, the inner sum in $g$ is 1 if $x\sim y$ and 0 otherwise. Therefore, if $\nu(t)=|\{(x,y)\in E\times E:x\cdot y^\perp=t\}|$, we have \[ \sum_g f(g)^2=O(|E|^2q^2)+\sum_t\sum_{\substack{x^1, x^2, y^1, y^2 \\ x^1\cdot x^{2\perp}=t \\ y^1\cdot y^{2\perp}=t}}E(x^1)E(x^2)E(y^1)E(y^2)=O(|E|^2q^2) +\sum_t\nu(t)^2. \] The proof of theorem 1.4 in \cite{HI} shows that $\nu(t)=\frac{|E|^2}{q}+O(|E|q^{1/2})$, so this gives \[ \sum_t \nu(t)^2-\frac{|E|^4}{q}=\sum_t \left(\nu(t)-\frac{|E|^2}{q}\right)^2=O(|E|^2q^2), \] which proves the equation above. We now apply lemma \ref{induction} with $F=f$. Lemmas \ref{SL2size} and \ref{action} imply \[ A=\frac{1}{|\text{SL}_2(\mathbb{F}_q)|}\sum_xE(x)\sum_gE(gx)=\frac{1}{(q^2-1)}|E|^2=\frac{|E|^2}{q^2}+O\left(\frac{|E|^2}{q^4}\right) \] and \[ |S|=q^3+O(q^2). 
\] Putting this together gives \[ A^2|S|=\frac{|E|^4}{q}+O\left(\frac{|E|^4}{q^2}\right), \] and therefore \[ \sum_g f(g)^2=A^2|S|+R \] with $R=O(q^2|E|^2)$. Finally, we observe that $f$ has maximum $M\leq |E|$. Therefore, lemma \ref{induction} gives \[ \sum_g f(g)^{k+1}\lesssim q^2|E|^{k+1}+\frac{|E|^{2(k+1)}}{q^{2k-1}}. \] Together with lemma \ref{flemma}, this gives \[ |E|^{2(k+1)}\lesssim \mathcal{C}_{k+1}(E)\left(q^2|E|^{k+1}+\frac{|E|^{2(k+1)}}{q^{2k-1}}\right). \] If the second term on the right is bigger, we get the result for free. If the first term is bigger, we get \[ \mathcal{C}_{k+1}(E)\gtrsim \frac{|E|^{k+1}}{q^2}. \] This will be $\gtrsim q^{2k-1}$ when $|E|\gtrsim q^{2-\frac{1}{k+1}}$, as claimed. \end{proof} \section{Size of $\mathcal{C}_{k+1}((\mathbb{Z}/p^\ell\mathbb{Z})^2)$} Since $|\text{SL}_2(R)|\approx |R|^3$, we expect each tuple in $(R^2)^{k+1}$ to be equivalent to $\approx |R|^3$ other tuples, and therefore we expect the number of congruence classes to be $\approx |R|^{2k-1}$. In the finite field case, this was proved as the first step of the proof of Theorem \ref{MT1}, but the proof in the case $R=\mathbb{Z}/p^\ell\mathbb{Z}$ is more complicated, so we will prove it here, separately from the proof of Theorem \ref{MT2} in the next section. \begin{thm} \label{target} We have $\mathcal{C}_{k+1}((\mathbb{Z}/p^\ell\mathbb{Z})^2)\approx (p^\ell)^{2k-1}$. More precisely, the good $(k+1)$-point configurations of $(\mathbb{Z}/p^\ell\mathbb{Z})^2$ determine $\approx (p^\ell)^{2k-1}$ classes, and the bad configurations determine $o((p^\ell)^{2k-1})$ classes. \end{thm} \begin{proof} We first establish that there are $\approx p^{\ell(2k-1)}$ classes of good tuples. This is easy; if $x$ is a good tuple, we have seen the map $g\mapsto gx$ is injective, so each class has size $\approx p^{3\ell}$ and there are $p^{2\ell (k+1)}$ tuples, meaning there are $\approx p^{2\ell(k+1)-3\ell}$ classes. \\ It remains to bound the number of bad classes.
We first establish the $k=1,2$ cases. When $k=1$, we want to prove there are $o(p^\ell)$ equivalence classes. This is clear, because in the $k=1$ case we are looking at pairs $(x^1,x^2)$ whose class is determined by the scalar $x^1\cdot x^{2\perp}$. The classes therefore correspond to the underlying set of scalars in $\mathbb{Z}/p^\ell\mathbb{Z}$, and the bad classes correspond to non-units. In the $k=2$ case, we are looking at triples $(x^1,x^2,x^3)$ whose class is determined by the three scalars $(x^1\cdot x^{2\perp},x^2\cdot x^{3\perp},x^3\cdot x^{1\perp})$. So, the space of equivalence classes can be identified with a subset of $(\mathbb{Z}/p^\ell\mathbb{Z})^3$, and the bad classes correspond to triples of non-units. \\ For $k\geq 3$, we use the following theorem, which is really just a more specific version of Theorem \ref{DF2}, also found in \cite{DF}, chapter 11. \\ \begin{thm*}[\ref{DF2}'] For any $2\times 2$ matrix $A$, there exists a $2\times 2$ matrix $B$ with $AB=BA=(\det(A))I_2$, where $I_2$ is the $2\times 2$ identity matrix. \end{thm*} We also make a more specific version of the definition of good and bad tuples. Namely, let $x$ be a $(k+1)$-point configuration in $(\mathbb{Z}/p^\ell\mathbb{Z})^2$, and let $m\leq \ell$ be maximal with respect to the property that $p^m$ divides $x^i\cdot x^{j\perp}$ for every pair of indices $(i,j)$. We say that $x$ is \textbf{$m$-bad}. Observe that according to our previous definition, good tuples are $0$-bad and bad tuples are $m$-bad for some $m>0$. Also observe that $m$-badness is preserved by equivalence, so we may define $m$-bad equivalence classes analogously. An easy variant of the argument in Lemma \ref{countbad} shows that the number of $m$-bad tuples is $\lesssim p^{(2\ell-m)(k+1)+m}$; note that this bound can be rewritten as $p^{\ell(2k-1)+3\ell-km}$. We claim that every $m$-bad equivalence class has at least $p^{3\ell-2m}$ elements.
It follows from the claim that there are $\lesssim p^{\ell(2k-1)+(2-k)m}$ $m$-bad classes, and since we may assume $k\geq 3$ the theorem follows from here. To prove the claim, note that the equivalence class containing $x$ also contains $gx$ for any $g\in\text{SL}_2(\mathbb{Z}/p^\ell\mathbb{Z})$, so for a lower bound on the size of a class we need to determine the size of the image of the map $g\mapsto gx$. First note that we may assume without loss of generality that each coordinate of $x^1$ is a unit. This is because given $x$ we can shift any factor of $p$ from $x^1$ onto each other vector $x^i$ and obtain another representative of the same equivalence class. Next, observe that if $x$ is $m$-bad and $gx=hx$, then by Theorem \ref{DF2}' we have $p^mg=p^mh$. It follows that $h=g+p^{\ell-m}A$ for some matrix $A$ with entries between $0$ and $p^m-1$. Using the fact that \[ \det(A+B)=\det(A)+\det(B)+\mathcal{B}(A,B), \] where $\mathcal{B}$ is bilinear, we conclude that if $h=g+p^{\ell-m}A$ and $\det(g)=\det(h)=1$, we must have \[ 0=p^{2(\ell-m)}\det(A)+p^{\ell-m}\mathcal{B}(g,A). \] Let $p^{m'}$ be the largest power of $p$ dividing every entry of $A$. Since the entries of $g$ cannot all be divisible by $p$, it follows that $\ell-m+m'$ is the maximal power of $p$ which divides the second term above. Since the first term is divisible by $p^{2(\ell-m+m')}$, it follows that both terms must be 0 for the equation to hold. In particular, we must have $\mathcal{B}(g,A)=0$. Since at least one entry of $g$ must be a unit, we can solve for one entry of $A$ in terms of the others. Now observe that in order to have $gx=hx$, we must have $p^{\ell-m}Ax=0$. In particular, $p^{\ell-m}Ax^1=0$. Since each coordinate of $x^1$ is a unit, we may solve for another entry of the matrix $A$. This means there are at most $p^{2m}$ choices for $A$, and hence the map $g\mapsto gx$ is at most $p^{2m}$-to-one. It follows that $m$-bad classes have at least $p^{3\ell-2m}$ elements, as claimed.
\end{proof} \section{Proof of Theorem \ref{MT2}} \begin{proof} In keeping with the rest of this paper, the proof of the $\mathbb{Z}/p^\ell\mathbb{Z}$ case is essentially the same as the finite field case, but more complicated casework is required to deal with non-units. By our work in the previous section, our goal is to show $\mathcal{C}_{k+1}(E)\gtrsim (p^\ell)^{2k-1}$. Following the line of reasoning in the proof of Theorem \ref{MT1}, we want to establish the estimate \[ \sum_g f(g)^2=\frac{|E|^4}{p^\ell}+O(\ell^2|E|^2(p^\ell)^{3-\frac{1}{\ell}}). \] We have, after a change of variables, \[ \tag{$*$} \sum_g f(g)^2=\sum_{x^1,x^2,y^1}E(x^1)E(x^2)E(y^1)\sum_{\substack{g \\ gx^1=y^1}}E(gx^2). \] We first want to throw away terms where $x^1,y^1$ have non-units in their first coordinates. Note that there are $\approx p^{4\ell-2}$ such pairs. For each, there are $|E|$ many choices for $x^2$. We claim that there are $\leq p^\ell$ choices of $g$ which map $x^1$ to $y^1$ under this constraint. It follows from this claim that those terms contribute $\lesssim p^{5\ell-2}|E|$ to ($*$), which is less than the claimed error term. To prove the claim, observe that we are counting solutions to the system of equations \begin{align*} ax_1^1+bx_2^1&=y_1^1 \\ cx_1^1+dx_2^1&=y_2^1\\ ad-bc&=1 \end{align*} in $a,b,c,d$. Since $x_1^1$ is a unit, we can solve the first two equations for $a$ and $c$, respectively. Plugging these solutions into the third equation yields \[ 1=\frac{y_1^1}{x_1^1}d-\frac{y_2^1}{x_1^1}b. \] Since $y_1^1$ is a unit, for every $b$ there is a unique $d$ satisfying the equation. This proves the claim. Now, we want to remove all remaining terms from ($*$) corresponding to $x^1,x^2$ where $x^1\cdot x^{2\perp}$ is not a unit. To bound this contribution, we observe that for any such pair, we can write $x^2=tx^1+k$, where $0\leq t<p$ and $k$ is a vector where both entries are non-units.
Therefore, there are $\leq |E|^2$ choices for $(x^1,y^1)$, there are $\leq p^{2\ell-1}$ choices for $x^2$, and there are $\leq p^\ell$ choices for $g$ as before. This gives the bound $|E|^2p^{3\ell-1}$, smaller than the claimed error term. This means, up to the error term, ($*$) can be written as \[ \sum_{\substack{x^1,x^2,y^1,y^2 \\ x^1\cdot x^{2\perp}=y^1\cdot y^{2\perp}}}E(x^1)E(x^2)E(y^1)E(y^2)=\sum_t \nu(t)^2, \] where $\nu(t)=|\{(x,y)\in E\times E:x\cdot y^\perp=t\}|$. This function was studied in \cite{CIP}; in that paper, it is proved that $\nu(t)=\frac{|E|^2}{p^\ell}+O(\ell|E|(p^\ell)^{\frac{1}{2}(2-\frac{1}{\ell})})$, leading to the claimed estimate for $\sum_g f(g)^2$, using the same reasoning as in the proof of Theorem \ref{MT1}. Applying Lemma \ref{induction} and Lemma \ref{flemma} with $A\approx \frac{|E|^2}{p^{2\ell}},|S|\approx p^{3\ell},M\leq |E|, R=O(\ell^2 |E|^2(p^\ell)^{3-\frac{1}{\ell}})$ gives \[ |E|^{2(k+1)}\lesssim \mathcal{C}_{k+1}(E)\left(\ell^2|E|^{k+1}(p^\ell)^{3-\frac{1}{\ell}}+\frac{|E|^{2(k+1)}}{p^{\ell(2k-1)}}\right). \] If the second term on the right is bigger, we get the result for free. If the first term is bigger, we have \[ \mathcal{C}_{k+1}(E)\gtrsim \frac{|E|^{k+1}}{\ell^2p^{3\ell-1}}. \] If $|E|\gtrsim \ell^{\frac{2}{k+1}}p^{\ell s}$, then this is $\gtrsim p^{\ell s(k+1)-3\ell+1}$, which is $\gtrsim p^{\ell(2k-1)}$ when $s\geq 2-\frac{1}{\ell(k+1)}$.
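As an illustrative sanity check of the transporter count used above (not part of the proof), one can verify by brute force, for small moduli, that when the first coordinates of $x^1$ and $y^1$ are units there are exactly $p^\ell$ matrices $g\in SL_2(\mathbb{Z}/p^\ell\mathbb{Z})$ with $gx^1=y^1$; the helper name below is ours.

```python
# Brute-force check: count g in SL_2(Z/p^ell Z) with g x = y, for vectors
# x, y whose first coordinates are units mod p^ell.  The counting argument
# in the proof predicts exactly p^ell such matrices.
from itertools import product

def count_transporters(p, ell, x, y):
    n = p ** ell
    count = 0
    for a, b, c, d in product(range(n), repeat=4):
        if (a * d - b * c) % n != 1:   # restrict to SL_2(Z/p^ell Z)
            continue
        if (a * x[0] + b * x[1]) % n == y[0] % n and \
           (c * x[0] + d * x[1]) % n == y[1] % n:
            count += 1
    return count

print(count_transporters(3, 2, (1, 0), (1, 0)))  # 9 = p^ell
print(count_transporters(3, 2, (1, 1), (2, 1)))  # 9 = p^ell
```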
This means that \[ \mathcal{C}_{k+1}(E)\lesssim q^{-1}|E|^{k+1}\approx q^{s(k+1)-1}=o(q^{2k-1}), \] where in the last step we use the assumed bound on $s$. \\ Now, consider the $\mathbb{Z}/p^\ell\mathbb{Z}$ case. Let $1\leq s<2-\min\left(\frac{2}{k+1},\frac{1}{\ell}\right)$. We consider two different examples, according to which of $\frac{2}{k+1}$ or $\frac{1}{\ell}$ is smaller. In the first case, the example that works for finite fields also works here; circles still have size $\approx p^\ell$, so nothing is changed. In the second, let \[ E=\{(t+pn,t+pm):0\leq t<p,\ 0\leq m,n< p^{\ell-1}\}. \] Clearly $|E|=p^{2\ell -1}=(p^\ell)^{2-\frac{1}{\ell}}$, but it is also easy to check that $x\cdot y^\perp$ is never a unit for any $x,y\in E$. Therefore, every configuration of points in $E$ is bad, so $\mathcal{C}_{k+1}(E)=o(p^{\ell(2k-1)})$. \end{proof} \end{document}
\begin{document} \author{Olivia Dumitrescu} \title{Plane curves with prescribed triple points: a toric approach} \maketitle \begin{abstract} We will use toric degenerations of the projective plane ${{\mathbb{P}}^ 2}$ to give a new proof of the triple point interpolation problem in the projective plane. We also give a complete list of toric surfaces that are useful as components in this degeneration. \end{abstract} \section{Introduction} Let ${\mathcal{L}}_{d}(m_{1},...,m_{r})$ denote the linear system of curves in ${{\mathbb{P}}^ 2}$ of degree $d$ that pass through $r$ points $P_{1},...,P_{r}$ with multiplicity at least $m_{i}$ at $P_{i}$. A natural question is to compute the projective dimension of the linear system ${\mathcal{L}}$. The virtual dimension of $\mathcal{L}$ is $$\ v ({\mathcal{L}}_{d}(m_1,...,m_r)):= \binom {d+2} {2}-\sum^{r}_{i=1} \binom {m_{i}+1} {2}-1 $$ and the expected dimension is $\ e ({\mathcal{L}}):=\max\{v ({\mathcal{L}}),-1\}.$ There are some elementary cases for which $\dim({\mathcal{L}}) \neq \ e ({\mathcal{L}}).$ A linear system for which $\dim {\mathcal{L}}> e ({\mathcal{L}})$ is called a special linear system. However, if we consider the homogeneous case when all multiplicities are equal, $m_1=...=m_r=m$, the linear system (denoted by ${\mathcal{L}}_{d}(m^r)$) is expected to be non-special when $r$ is large enough. In this paper we will only consider the case $m=3$; the virtual dimension becomes $v({\mathcal{L}}_{d}(3^r))=\frac{d(d+3)}{2}-6r$. In \cite{CDM07} the authors used a toric degeneration of the Veronese surface into a union of projective planes for the double point interpolation problem, i.e. $m=2$. This paper extends the degeneration used in \cite{CDM07} to the triple point interpolation problem. A triple point in the projective plane imposes six conditions, so in this paper we will classify the toric surfaces $(X, {\mathcal{L}})$ with $h^{0}(X, {\mathcal{L}})=6$ (see Theorem \ref{Classification}).
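As a quick numerical illustration of the virtual dimension formula above (a sketch, not part of the argument; the helper name is ours):

```python
# Virtual dimension v(L_d(m_1,...,m_r)) = binom(d+2,2) - sum_i binom(m_i+1,2) - 1.
from math import comb

def virtual_dim(d, mults):
    return comb(d + 2, 2) - sum(comb(m + 1, 2) for m in mults) - 1

# For homogeneous triple points this equals d(d+3)/2 - 6r:
print(virtual_dim(4, [3, 3]))   # 15 - 12 - 1 = 2
print(virtual_dim(5, [3] * 3))  # 21 - 18 - 1 = 2
print(virtual_dim(4, [2] * 5))  # -1, the system L_4(2^5)
```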
In particular we will analyse the ones for which the linear system becomes empty when imposing a triple point, call them $Y_{i}$. We will then use a toric degeneration of the embedded projective space via a linear system ${\mathcal{L}}$ into a union of planes, quadrics and $r$ disjoint toric surfaces $Y_{i}$. On each surface $Y_{i}$ we will place one triple point and by a semicontinuity argument we will prove the non-speciality of ${\mathcal{L}}$, see Theorem \ref {triple plane}.\\\\ We remark that this result can be generalized to any dimension, i.e. a list of toric varieties becoming empty when imposing a multiplicity $m$ point could be described in a similar way. However, the combinatorial degeneration and the construction of the lifting function are not yet well understood. In \cite{Len08} T. Lenarcik used an algebraic approach to study triple point interpolation in ${\mathbb{P}}^ {1}\times {\mathbb{P}}^ {1}$; however the list of the algebraic polygons is slightly different from ours and the connection with toric degenerations is not explicit. Toric degenerations of three dimensional projective space have been used by S. Brannetti to give a new proof of the Alexander-Hirschowitz theorem in dimension three in \cite{Brann08}. Degenerations of $n$-dimensional projective space have been used by E. Postinghel to give a new proof of the Alexander-Hirschowitz theorem in any dimension in \cite{Postinghel}. \indent \section{Toric varieties and toric degenerations.}\label{sec:toric} We recall a few basic facts about toric degenerations of projective toric varieties. We refer to \cite {hu} and \cite{WF} for more information on the subject and to \cite {gath} for relations with tropical geometry.
The datum of a pair $(X,{\mathcal{L}})$, where $X$ is a projective, $n$-dimensional toric variety and ${\mathcal{L}}$ is a base point free, ample line bundle on $X$, is equivalent to the datum of an $n$-dimensional convex polytope ${\mathcal{P}}$ in ${\mathbb{R}}^ n$, determined up to translation and the action of $SL_{n}^{\pm}({\mathbb{Z}})$. We consider a polytope ${\mathcal{P}}$ and a \emph{subdivision} ${\mathcal{D}}$ of ${\mathcal{P}}$ into convex subpolytopes, i.e. a finite family of $n$-dimensional convex polytopes whose union is ${\mathcal{P}}$ and such that any two of them intersect only along a face (which may be empty). Such a subdivision is called \emph{regular} if there is a positive convex function $F$ defined on ${\mathcal{P}}$ that is linear on each subpolytope of ${\mathcal{D}}$. Such a function $F$ will be called a lifting function. Regular subdivisions correspond to degenerations of toric varieties. We will now prove a technical lemma that will enable us to easily demonstrate the existence of a lifting function when we need one. Let $X$ be a toric surface and ${\mathcal{P}}$ be its associated polygon. Let $L$ be a line, containing no integer point, that separates two disjoint subpolygons ${\mathcal{P}_1}$ and ${\mathcal{P}_2}$ of ${\mathcal{P}}$, and let $X_{1}$ and $X_{2}$ be their corresponding toric surfaces. \begin{lemma} The toric variety $X$ degenerates into a union of toric surfaces, two of which, $X_{1}$ and $X_{2}$, are skew. \end{lemma} \begin{proof} We consider the convex piecewise linear function given by $$f(x,y)=\max \{0, L(x,y) \},$$ where $L(x,y)=0$ is an equation of the line $L$. Consider the image under $f$ of the points on the boundary of the polygons ${\mathcal{P}_{1}}$ and ${\mathcal{P}_{2}}$. Change the function $f$ by interposing the convex hull of the boundary points separated by $L$ between the two linear pieces $l_{1}$ and $l_{2}$ (as in Figure \ref{lift1}). The function is still convex and piecewise linear, therefore we get a regular subdivision.
We now consider the toric varieties associated to each polygon, and since ${\mathcal{P}_{1}}$ and ${\mathcal{P}_{2}}$ are disjoint, we obtain that two of the toric surfaces appearing in the degeneration, namely $X_1$ and $X_2$, are skew. \end{proof} For example, in the picture below we have four polygons, two of which are disjoint. The corresponding degeneration will contain four toric varieties, two of them, $X_{1}$ and $X_{2}$, being skew. \begin{figure}\label{lift1} \end{figure} It is easy to see how we could iterate this process. Let $M$ be a line cutting the polygon associated to $X_{2}$ and not containing any of its integer points. Then $X_{2}$ degenerates into a union of toric surfaces, two of which, $Y_{2}$ and $Y_{3}$, are skew. In this case, we conclude that $X$ degenerates into nine toric surfaces, three of which, $X_{1}$, $Y_{2}$ and $Y_{3}$, are skew, as Figure \ref{lift2} indicates. \begin{figure}\label{lift2} \end{figure} Later on, we will ignore the varieties lying in between the disjoint ones; they are only important for the degeneration and not for the analysis itself. \section{The Classification of polygons.} We recall that the group $SL_2^{\pm}({\mathbb{Z}})$ acts on the column vectors of ${\mathbb{R}}^ 2$ by left multiplication. This induces an action of $SL_2^{\pm}({\mathbb{Z}})$ on the set of convex polygons ${\mathcal{P}}$ by acting on their enclosed points ($SL_2^{-}({\mathbb{Z}})$ corresponds to orientation reversing lattice equivalences). Obviously, ${\mathbb{Z}^{2}}$ acts on vectors of ${\mathbb{R}}^ 2$ by translation. Next, we will classify all convex polygons enclosing six lattice points modulo the actions described above. We first start with a definition.
\begin{definition} We say the polygon ${\mathcal{P}}$ is in \emph{standard position} if \begin{my_enumerate} \item It contains $O=(0,0)$ as a vertex \item $OS$ is an edge, where $S=(m,0)$ and $m$ is the largest edge length \item $OP$ is an edge, where $P=(p,q)$ and $0\leq p <q$ \end{my_enumerate} \end{definition} \begin{remark} Every polygon is equivalent to one in standard position. \end{remark} Indeed, we first choose the longest edge and then we translate one of its vertices to the origin. We then rotate the polygon to put the longest edge on the positive side of the $x$ axis, and shift it so that the adjacent edge lies in the upper half of the first quadrant. Indeed, if $OP$ is an edge with $P=(s,q)$ and $s\geq q$, then $s=aq+p$ for some integer $a$ and $0\leq p<q$, so we shift left by $a$. We will call this procedure normalization.\\ It is easy to see that the standard position of a polygon may not be unique: it depends on the choice of the longest edge and on the choice of the special vertex that becomes the origin. We can now present the classification of the polygons in standard position according to $m$ (the maximal number of integral points lying on an edge of the polygon) and according to their number of edges, $n$. Obviously, the polygons ${\mathcal{P}}$ will have at most six edges and at most five points on an edge. We leave the elementary details to the reader; we only remark that Pick's lemma is useful (for more details see \cite{Dum10}). \begin{proposition}\label{Classification} Any polygon enclosing six lattice points is equivalent to exactly one from the following list \end{proposition} We now recall that any rational convex polygon ${\mathcal{P}}$ in ${\mathbb{R}^{n}}$ enclosing a fixed number of integer lattice points defines an $n$-dimensional projective toric variety $X_{\mathcal{P}}$ endowed with an ample line bundle whose sections correspond to the integer points of the polygon.
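A brute-force lattice point count (illustrative only; the helper names are ours) confirms, for instance, that the triangle with vertices $(0,0)$, $(2,0)$, $(0,2)$, the polygon of $({\mathbb{P}}^2,\mathcal{O}_{{\mathbb{P}}^2}(2))$, encloses exactly six lattice points:

```python
# Count the lattice points enclosed by a convex polygon given by its
# vertices in counterclockwise order: a point is inside or on the boundary
# iff it lies on the left of (or on) every directed edge.
def encloses(verts, pt):
    n = len(verts)
    for i in range(n):
        (x1, y1), (x2, y2) = verts[i], verts[(i + 1) % n]
        cross = (x2 - x1) * (pt[1] - y1) - (y2 - y1) * (pt[0] - x1)
        if cross < 0:
            return False
    return True

def lattice_points(verts):
    xs = [v[0] for v in verts]
    ys = [v[1] for v in verts]
    return [(x, y)
            for x in range(min(xs), max(xs) + 1)
            for y in range(min(ys), max(ys) + 1)
            if encloses(verts, (x, y))]

print(len(lattice_points([(0, 0), (2, 0), (0, 2)])))  # 6
```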
We get the following result. \begin{corollary} Any toric surface endowed with an ample line bundle with six sections is completely described by exactly one of the polygons from the above list. \end{corollary} \section{Triple Point Analysis.} We first observe that six, the number of integer points enclosed by the polygon, is exactly the number of conditions imposed by a triple point. We will now determine all polygons from Proposition \ref{Classification} for which the corresponding linear system becomes empty when imposing a triple point. There are two methods for testing the emptiness of these linear systems, an algebraic method and a geometric method; we briefly describe them below. For the algebraic approach, checking that a linear system is non-empty when imposing a triple point reduces to showing that the conditions imposed by a triple point in ${\mathbb{P}^{2}}$ are dependent. For this, one needs to look at the rank of the six by six matrix whose first column represents the sections of the line bundle and whose other five columns represent all first and second derivatives in $x$ and $y$. We conclude that the six conditions are dependent if and only if the determinant of the matrix is identically zero. The geometric method for testing when a planar linear system is empty is to find it explicitly and show that it contains no curve, using ${\mathbb{P}^{2}}$ as a minimal model for the surface $X$ and writing its resolution of singularities. \begin{remark}\label{Elimination} The corresponding linear systems of the following polygons are non-empty when imposing a base point with multiplicity three. \end{remark} \begin{proof} It is easy to check that the algebraic conditions imposed by at least four sections on a line are always dependent. Indeed, we have two possible cases: the line of sections is an edge, or it is enclosed by the polygon.
In the first case, we can only have sections on two levels, so the vanishing of the second derivative in $y$ gives a dependent condition (the same argument applies for the case $m=3$ representing the embedded ${\mathbb{P}^{1}}\times {\mathbb{P}^{1}}$). In the second case we notice that the vanishing of the first derivative in $y$ and the second derivative in $x$ and $y$ give two linearly dependent conditions. \end{proof} We will use Remark \ref{Elimination} to eliminate the polygons that do not have the desired property, and we obtain five polygons for which we will study the corresponding algebraic surfaces and linear systems using toric geometry methods. For any polygon, consider the fan obtained by dualizing the polygon's angles; in case the toric variety obtained by gluing the cones is singular, take its resolution of singularities. In this way we obtain all the toric surfaces, using ${\mathbb{P}^{2}}$ as a minimal model. The base points of the associated linear system need not be general. In general, we will use the notations ${\mathcal{L}}_{d}([1,1]), {\mathcal{L}}_{d}([2,1]), {\mathcal{L}}_{d}([1,1,1])$ for linear systems of degree $d$ that pass through a base point with an assigned tangent, a double point with an assigned tangent, or a point with an assigned flex direction. For example, ${\mathcal{L}}_{4}([1,1,1]^3)$ represents quartics with three base points that are flex to the line joining any two of them. Since the base points are special, the linear systems need a different analysis. We conclude the emptiness of each linear system with a triple point by applying birational transformations and splitting off $-1$ curves. The last column of the table indicates the geometric conditions corresponding to the infinitely near multiplicities. We obtain the following result: \begin{lemma}\label {Empty poly} The linear systems corresponding to the following polygons become empty after imposing a triple point.
\end{lemma} All the linear systems from the table become empty when imposing a triple point. For example, the fourth polygon in the above table describes a projective plane embedded by the linear system of conics, ${\mathcal{L}}_{2}$; by imposing a triple point ${\mathcal{L}}_{2}$ becomes empty. One can obtain more polygons with an empty linear system by rotating the main ones or by shifting them by integer vectors. \section{Triple points in ${\mathbb{P}}^ 2$.} We denote by $V_d$ the image of the Veronese embedding $v_d:\mathbb{P}^2 \to \mathbb{P}^{d(d+3)/2}$ that transforms the plane curves of degree $d$ to hyperplane sections of the Veronese variety $V_d$. We degenerate $V_d$ into a union of disjoint surfaces and ordinary planes, and we place one point on each of the disjoint surfaces. The surfaces are chosen so that the restriction of a hyperplane section to each one of them is a linear system that becomes empty when we impose a triple point. We conclude that any hyperplane section of $V_d$ needs to contain all the disjoint surfaces, and in particular all of the coordinate points of the ambient projective space covered in this way. Therefore, if $V_{d}$ degenerates exactly into a union of disjoint special surfaces and planes (or quadrics) with no points left over, we conclude that the desired linear system is empty, and therefore it has the expected dimension. Using semicontinuity this argument can easily be extended to any degeneration as in \cite{CDM07}. In order to give an inductive proof for triple points in the projective plane we will first analyze triple points in ${\mathbb{P}}^ 1\times{\mathbb{P}}^ 1$. We will only prove the most difficult case: that linear systems in ${\mathbb{P}}^ 1\times{\mathbb{P}}^ 1$ with virtual dimension $-1$ are empty. The general case follows by induction, and was already proved in a similar way using algebraic methods by T. Lenarcik in \cite{Len08}. \begin{lemma}\label{bidegree $(5,n)$} Fix $n\geq 3$.
Then linear systems of bidegree $(5,n)$ with $n\neq 4$, of bidegree $(11,n)$, of bidegree $(2, 4n-9)$, and of bidegree $(8,2n-3)$, with an arbitrary number of triple points, have the expected dimension. \end{lemma} \begin{proof} \begin{itemize} \item For any linear system of bidegree $(5,n)$ we find a set of $n+1$ pairwise skew surfaces and we place each of the $n+1$ triple points on one of the surfaces. We denote the degenerations presented below by $C_{5}^{5}$, $C_{5}^{6}$, $C_{5}^{8}$ and $C_{5}^{3}$. For every $n>2$, $n\neq 4$, take $i\in \{3,5,6,8 \}$ such that $\frac{n-i}{4}$ is an integer, $k$. For any such $n$ we consider the degeneration $C_{5}^{n}=C_{5}^{i}+k C_{5}^{3}$, where the sum of two blocks means attaching the two disjoint blocks together along the edge of length $5$. \item For linear systems of bidegree $(11,n)$ and $2n+2$ triple points we find $2n+2$ pairwise skew surfaces. We denote the degenerations presented below by $C_{11}^{2}$, $C_{11}^{3}$, and $C_{11}^{4}.$ For every $n>2$ take $i\in \{2,3,4\}$ such that $\frac{n-i}{3}$ is an integer, $k$. For any such $n$ we consider the degeneration $C_{11}^{n}=C_{11}^{i}+k C_{11}^{2}.$ \item For curves of bidegree $(2,4n-9)$ we consider the degeneration $C_{2}^{4n-9}$ given by $(n-2)C_{2}^{3}$ (in particular, $C_{2}^{11}=3C_{2}^{3}$), and for $C_{8}^{2n-3}$ we use combinations of $C_{8}^{3}$ and $C_{8}^{5}$. \end{itemize} \end{proof} \begin{corollary}\label{main} Linear systems in ${\mathbb{P}}^ {1}\times {\mathbb{P}}^ {1}$ with triple points of virtual dimension $-1$ are empty. \end{corollary} \begin{proof} We have to prove the statement for linear systems of bidegree $(6k-1, n)$ and $(3k-1, 2n-1).$ For the bidegree $(6k-1, n)$ we distinguish two cases. If $k$ is even, $k=2k'$, we use the degeneration $C_{12k'-1}^{n}=k'C_{11}^{n}$; while if $k$ is odd, of the form $2k'+1$, we use $C_{12k'+5}^{n}=C_{5}^{n}+k'C_{11}^{n}$, for $n\neq 4$.
For $n=4$ we use the following degeneration for $C_{17}^{4}$, and we generalize this case by adding $C_{11}^{4}$ blocks. For the bidegree $(3k-1, 2n-1)$ we reduce to the case when $k$ is odd, of the form $2k'+1$, and depending on the parity of $k'$: if $6k'=6+12r$ we use the degeneration $C_{8}^{2n-1}+rC_{11}^{2n-1}$, while if $6k'=12r$ we use $C_{5}^{2n-1}+C_{8}^{2n-1}+(r-1)C_{11}^{2n-1}$. \end{proof} We can now obtain the following result \begin{theorem}\label{triple plane} $\mathcal{L}_d(3^n)$ has the expected dimension whenever $d \geq 5$. \end{theorem} \begin{proof} It is enough to prove the theorem for the number of triple points for which the virtual dimension is $-1$; in that case we claim that the linear system is empty. Note that $\binom{d+2}{2}\equiv 0$ mod $6$ if $d\equiv \{2,7, 10, 11\}$ mod $12$; $\binom{d+2}{2}\equiv 1$ mod $6$ if $d\equiv \{0,9\}$ mod $12$; $\binom{d+2}{2}\equiv 3$ mod $6$ if $d\equiv \{1, 4, 5, 8\}$ mod $12$ and $\binom{d+2}{2}\equiv 4$ mod $6$ if $d\equiv \{3, 6\}$ mod $12$. We will use the induction step $V_{12(k+1)+j}=V_{12k+j}+kC^{11}_{11}+C^{11}_{j+1}+V_{10}$ with $j=1,...,12$, $k\geq 0$, $(i,j)\neq (1,4)$ and to finish the proof we present the degenerations of $V_{j}$ if $j\leq 12.$ \end{proof} \begin{remark} {\rm Notice that ${\mathcal{L}_4(3^2)}$ consists of quartics with two triple points and the expected dimension is $2$. This linear system has a fixed part, the double line through the two points, and a movable part ${\mathcal{L}_2(1^2)}$, i.e. conics through two points, which has dimension $3$. A simple argument shows that if $d=4$, the linear system ${\mathcal{L}}$ is $-1$--special (we have a $-1$--curve, the line connecting the 2 points, splitting off twice) and therefore special.\\ \indent One could mention that the case $d=4$ is also special for the double point interpolation problem, since ${\mathcal{L}_4(2^5)}$ is expected to be empty but consists of the double conic determined by the $5$ general points.
} \end{remark} \end{document}
\begin{document} \title{ A Unified Convergence Rate Analysis of The Accelerated Smoothed Gap Reduction Algorithm } \titlerunning{ A Unified Convergence Rate Analysis of The ASGARD Algorithm } \author{Quoc Tran-Dinh$^{*}$} \authorrunning{Q. Tran-Dinh} \institute{Quoc Tran-Dinh \at Department of Statistics and Operations Research\\ The University of North Carolina at Chapel Hill\\ 333 Hanes Hall, UNC-CH, Chapel Hill, NC27599. \\ \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} In this paper, we develop a unified convergence analysis framework for the \textit{Accelerated Smoothed GAp ReDuction algorithm} (ASGARD) introduced in \cite[\textit{Tran-Dinh et al}]{TranDinh2015b}. Unlike \cite{TranDinh2015b}, the new analysis covers three settings in a single algorithm: general convexity, strong convexity, and strong convexity and smoothness. Moreover, we establish the convergence guarantees on three criteria: (i) gap function, (ii) primal objective residual, and (iii) dual objective residual. Our convergence rates are optimal (up to a constant factor) in all cases. While the convergence rate on the primal objective residual for the general convex case has been established in \cite{TranDinh2015b}, we prove additional convergence rates on the gap function and the dual objective residual. The analysis for the last two cases is completely new. Our results provide a complete picture on the convergence guarantees of ASGARD. Finally, we present four different numerical experiments on a representative optimization model to verify our algorithm and compare it with Nesterov's smoothing technique. \keywords{ Accelerated smoothed gap reduction \and primal-dual algorithm \and Nesterov's smoothing technique \and convex-concave saddle-point problem. 
} \subclass{90C25 \and 90C06 \and 90-08} \end{abstract} \section{Introduction}\label{sec:intro} We consider the following classical convex-concave saddle-point problem: \begin{equation}\label{eq:saddle_point_prob} \min_{x\in\mathbb{R}^p}\max_{y\in\mathbb{R}^n}\Big\{ \mathcal{L}(x, y) := f(x) + \iprods{Kx, y} - g^{*}(y) \Big\}, \end{equation} where $f : \mathbb{R}^p \to \mathbb{R}\cup\{+\infty\}$ and $g : \mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ are proper, closed, and convex, $K : \mathbb{R}^p \to \mathbb{R}^n$ is a linear operator, and $g^{*}(y) := \sup_{x}\set{ \iprods{y, x} - g(x)}$ is the Fenchel conjugate of $g$. The convex-concave saddle-point problem \eqref{eq:saddle_point_prob} can be written in the primal and dual forms. The primal problem is defined as \begin{equation}\label{eq:primal_prob} F^{\star} := \min_{x\in\mathbb{R}^p}\Big\{F(x) := f(x) + g(Kx) \Big\}. \end{equation} The corresponding dual problem is written as \begin{equation}\label{eq:dual_prob} D^{\star} := \min_{y\in\mathbb{R}^n}\Big\{ D(y) := f^{*}(-K^{\top}y) + g^{*}(y) \Big\}. \end{equation} Clearly, both the primal problem \eqref{eq:primal_prob} and its dual form \eqref{eq:dual_prob} are convex. \noindent\textit{\textbf{Motivation.}} In \cite{TranDinh2015b}, two accelerated smoothed gap reduction algorithms are proposed to solve \eqref{eq:primal_prob} and its special constrained convex problem. Both algorithms achieve optimal sublinear convergence rates (up to a constant factor) in the sense of black-box first-order oracles \cite{Nemirovskii1983,Nesterov2004}, when $f$ and $g$ are convex, and when $f$ is strongly convex and $g$ is just convex, respectively. The first algorithm in \cite{TranDinh2015b} is called ASGARD (\textit{Accelerated Smoothed GAp ReDuction}). To the best of our knowledge, except for a special case \cite{TranDinh2012a}, ASGARD was the first primal-dual first-order algorithm that achieves a non-asymptotic optimal convergence rate on the last primal iterate. 
ASGARD is also different from the alternating direction method of multipliers (ADMM) and its variants, as it does not require solving complex subproblems but only uses the proximal operators of $f$ and $g^{*}$. However, ASGARD (i.e. \cite[Algorithm 1]{TranDinh2015b}) only covers the general convex case, while needing only one proximal operation of $f$ and of $g^{*}$ per iteration. To handle the strong convexity of $f$, a different variant, called ADSGARD, is developed in \cite{TranDinh2015b}, but it requires two proximal operations of $f$ per iteration. Therefore, the following natural question arises: \begin{itemize} \item[] ``\textit{Can we develop a unified variant of ASGARD that covers three settings: general convexity, strong convexity, and strong convexity and smoothness?}'' \end{itemize} \noindent\textit{\textbf{Contribution.}} In this paper, we affirmatively answer this question by developing a unified variant of ASGARD that covers the following three settings: \begin{itemize} \item[] Case 1: Both $f$ and $g^{*}$ in \eqref{eq:saddle_point_prob} are only convex, but not strongly convex. \item[] Case 2: Either $f$ or $g^{*}$ is strongly convex, but not both $f$ and $g^{*}$. \item[] Case 3: Both $f$ and $g^{*}$ are strongly convex. \end{itemize} The new variant requires only one proximal operation of $f$ and of $g^{*}$ at each iteration, as in the original ASGARD and existing primal-dual methods, e.g., in \cite{boct2020variable,Chambolle2011,Chen2013a,Condat2013,Esser2010,connor2014primal,Goldstein2013,vu2013variable}. Our algorithm reduces to ASGARD in Case 1, but uses a different update rule for $\eta_k$ (see Step~\ref{step:i2} of Algorithm~\ref{alg:A1}) compared to ASGARD.
In Case 2 and Case 3, our algorithm is completely new by incorporating the strong convexity parameter $\mu_f$ of $f$ and/or $\mu_{g^{*}}$ of $g^{*}$ in the parameter update rules to achieve optimal $\BigO{1/k^2}$ sublinear and $(1 - \BigO{1/\sqrt{\kappa_F}})$ linear rates, respectively, where $k$ is the iteration counter and $\kappa_F := \norms{K}^2/(\mu_f\mu_{g^{*}})$. In terms of convergence guarantees, we establish that, in all cases, our algorithm achieves optimal rates on the last primal sequence $\sets{x^k}$ and an averaging dual sequence $\set{\tilde{y}^k}$. Moreover, our convergence guarantees are on three different criteria: (i) the gap function for \eqref{eq:saddle_point_prob}, (ii) the primal objective residual $F(x^k) - F^{\star}$ for \eqref{eq:primal_prob}, and (iii) the dual objective residual $D(\tilde{y}^k) - D^{\star}$ for \eqref{eq:dual_prob}. Our paper therefore provides a unified and full analysis on convergence rates of ASGARD for solving three problems \eqref{eq:saddle_point_prob}, \eqref{eq:primal_prob}, and \eqref{eq:dual_prob} simultaneously. We emphasize that primal-dual first-order methods for solving \eqref{eq:saddle_point_prob}, \eqref{eq:primal_prob}, and \eqref{eq:dual_prob}, and their convergence analysis have been well studied in the literature. To avoid overloading this paper, we refer to our recent works \cite{TranDinh2015b,tran2019non} for a more thorough discussion and comparison between existing methods. Hitherto, there have been numerous papers studying convergence rates of primal-dual first-order methods, including \cite{boct2020variable,Chambolle2011,chambolle2016ergodic,Chen2013a,Davis2014a,He2012,valkonen2019inertial}. However, the best known and optimal rates are only achieved via averaging or weighted averaging sequences, which are also known as ergodic rates. The convergence rates on the last iterate sequence are often slower and suboptimal. 
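As a toy numerical illustration of the ergodic-versus-last-iterate distinction discussed above (an assumed example, not this paper's algorithm or problem), averaged iterates can lose sparsity that the last iterate attains exactly; here we run proximal gradient (ISTA) on a tiny $\ell_1$-regularized problem whose solution is the soft-thresholding of the data:

```python
# min_x 0.5*||x - b||^2 + lam*||x||_1 has solution soft(b, lam); ISTA's last
# iterate reaches the sparse solution exactly, while the running ("ergodic")
# average keeps a nonzero second coordinate from the early dense iterates.
def soft(v, t):
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

b, lam, step = [1.0, 0.3], 0.5, 0.3
x = list(b)                       # start at b so the early iterates are dense
avg = [0.0, 0.0]
for k in range(1, 101):
    grad = [xi - bi for xi, bi in zip(x, b)]
    x = soft([xi - step * gi for xi, gi in zip(x, grad)], step * lam)
    avg = [((k - 1) * ai + xi) / k for ai, xi in zip(avg, x)]

print(x)    # last iterate: second coordinate is exactly 0.0 (sparse)
print(avg)  # running average: second coordinate remains positive
```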
Recently, the optimal convergence rates of the last iterates have been studied for primal-dual first-order methods, including \cite{TranDinh2015b,tran2019non,valkonen2019inertial}. As pointed out in \cite{Chambolle2011,Davis2015,sabach2020faster,TranDinh2015b,tran2019non}, the last iterate convergence guarantee is very important in various applications to maintain some desirable structures of the final solutions such as sparsity, low-rankness, or sharp-edgedness in images. This also motivates us to develop ASGARD. \noindent\textit{\textbf{Paper outline.}} The rest of this paper is organized as follows. Section~\ref{sec:preliminaries} recalls some basic concepts, states our assumptions, and characterizes the optimality condition of \eqref{eq:saddle_point_prob}. Section~\ref{sec:alg_scheme} presents our main results on the algorithm and its convergence analysis. Section~\ref{sec:num_exp} provides a set of experiments to verify our theoretical results and compare our method with Nesterov's smoothing scheme in \cite{Nesterov2005c}. Some technical proofs are deferred to the appendix. \section{Basic Concepts, Assumptions, and Optimality Condition}\label{sec:preliminaries} We are working with Euclidean spaces, $\mathbb{R}^p$ and $\mathbb{R}^n$, equipped with the standard inner product $\iprods{\cdot,\cdot}$ and the Euclidean norm $\norms{\cdot}$. We will use the Euclidean norm for the entire paper. Given a proper, closed, and convex function $f$, we use $\dom{f}$ and $\partial{f}(x)$ to denote its domain and its subdifferential at $x$, respectively. We also use $\nabla{f}(x)$ for a subgradient or the gradient (if $f$ is differentiable) of $f$ at $x$. We denote by $f^{*}(y) := \sup\set{\iprod{y, x} - f(x) : x\in\dom{f}}$ the Fenchel conjugate of $f$. We denote by $ \mathrm{ri}(\mathcal{X})$ the relative interior of $\mathcal{X}$. 
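To make the conjugate notation concrete, consider a toy scalar instance of the primal-dual pair (an assumed example, not from the paper; all function choices below are ours), which also verifies the strong duality relation $F^{\star} = -D^{\star}$:

```python
# Scalar instance: f(x) = x^2/2, g(u) = (u - b)^2/2, K = k.  Then
# f*(z) = z^2/2 and g*(y) = y^2/2 + b*y, so
#   F(x) = x^2/2 + (k*x - b)^2/2,   D(y) = (k*y)^2/2 + y^2/2 + b*y,
# and strong duality gives F_star = -D_star.
def primal_dual_values(k, b):
    F = lambda x: 0.5 * x ** 2 + 0.5 * (k * x - b) ** 2
    D = lambda y: 0.5 * (k * y) ** 2 + 0.5 * y ** 2 + b * y
    x_star = k * b / (1 + k ** 2)   # solves F'(x) = (1 + k^2)*x - k*b = 0
    y_star = -b / (1 + k ** 2)      # solves D'(y) = (1 + k^2)*y + b = 0
    return F(x_star), D(y_star)

F_star, D_star = primal_dual_values(2.0, 3.0)
print(F_star, D_star)  # approximately 0.9 and -0.9, i.e. F_star = -D_star
```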
A function $f$ is called $M_f$-Lipschitz continuous if $\vert f(x) - f(\tilde{x})\vert \leq M_f\norms{x - \tilde{x}}$ for all $x, \tilde{x} \in \dom{f}$, where $M_f \in [0, +\infty)$ is called a Lipschitz constant. A proper, closed, and convex function $f$ is $M_f$-Lipschitz continuous if and only if $\partial{f}(\cdot)$ is uniformly bounded by $M_f$ on $\dom{f}$. For a smooth function $f$, we say that $f$ is $L_f$-smooth (or Lipschitz gradient) if for any $x, \tilde{x}\in\dom{f}$, we have $\Vert\nabla{f}(x) - \nabla{f}(\tilde{x})\Vert \leq L_f\norms{x - \tilde{x}}$, where $L_f \in [0, +\infty)$ is a Lipschitz constant. A function $f$ is called $\mu_f$-strongly convex with a strong convexity parameter $\mu_f \geq 0$ if $f(\cdot) - \frac{\mu_f}{2}\norms{\cdot}^2$ remains convex. For a proper, closed, and convex function $f$, $\textrm{prox}_{\gamma f}(x) := \mathrm{arg}\!\min\big\{f(\tilde{x}) + \tfrac{1}{2\gamma}\norms{\tilde{x} - x}^2 : \tilde{x}\in\dom{f} \big\}$ is called the proximal operator of $\gamma f$, where $\gamma > 0$. \subsection{\textbf{Basic assumptions and optimality condition}}\label{subsec:primal_dual_formulation} In order to show the relationship between \eqref{eq:saddle_point_prob}, \eqref{eq:primal_prob} and its dual form \eqref{eq:dual_prob}, we require the following assumptions. \begin{assumption}\label{as:A1} The following assumptions hold for \eqref{eq:saddle_point_prob}. \begin{itemize} \item[$\mathrm{(a)}$] Both functions $f$ and $g$ are proper, closed, and convex on their domain. \item[$\mathrm{(b)}$] There exists a saddle-point $z^{\star} := (x^{\star}, y^{\star})$ of $\mathcal{L}$ defined in \eqref{eq:saddle_point_prob}, i.e.: \begin{equation}\label{eq:saddle_point} \mathcal{L}(x^{\star}, y) \leq \mathcal{L}(x^{\star}, y^{\star}) \leq \mathcal{L}(x, y^{\star}), \quad \forall (x, y)\in\dom{f}\times\dom{g^{*}}. \end{equation} \item[$\mathrm{(c)}$] The Slater condition $0 \in \mathrm{ri}(\dom{g} - K\dom{f})$ holds. 
\end{itemize} \end{assumption} Assumption~\ref{as:A1} is standard in convex-concave saddle-point settings. Under Assumption~\ref{as:A1}, strong duality holds, i.e. $F^{\star} = \mathcal{L}(x^{\star}, y^{\star}) = -D^{\star}$. To characterize a saddle-point of \eqref{eq:saddle_point_prob}, we define the following gap function: \begin{equation}\label{eq:gap_func} \mathcal{G}_{\mathcal{X}\times\mathcal{Y}}(x, y) := \sup\big\{ \mathcal{L}(x, \tilde{y}) - \mathcal{L}(\tilde{x}, y) : \tilde{x}\in\mathcal{X}, \ \tilde{y}\in\mathcal{Y} \big\}, \end{equation} where $\mathcal{X}\subseteq\dom{f}$ and $\mathcal{Y}\subseteq\dom{g^{*}}$ are nonempty, closed, and convex subsets such that $\mathcal{X}\times\mathcal{Y}$ contains a saddle-point $(x^{\star},y^{\star})$ of \eqref{eq:saddle_point_prob}. Clearly, we have $\mathcal{G}_{\mathcal{X}\times\mathcal{Y}}(x, y) \geq 0$ for all $(x, y)\in\mathcal{X}\times\mathcal{Y}$. Moreover, if $(x^{\star}, y^{\star})$ is a saddle-point of \eqref{eq:saddle_point_prob} in $\mathcal{X}\times \mathcal{Y}$, then $\mathcal{G}_{\mathcal{X}\times\mathcal{Y}}(x^{\star}, y^{\star}) = 0$. \subsection{\textbf{Smoothing technique for $g$}}\label{subsec:proximal_smoothing} We first smooth $g$ in \eqref{eq:primal_prob} using Nesterov's smoothing technique \cite{Nesterov2005c} as \begin{equation}\label{eq:smoothed_g} g_{\beta}(u, \dot{y}) := \max_{y\in\mathbb{R}^n}\set{\iprods{u, y} - g^{*}(y) - \tfrac{\beta}{2}\norms{y - \dot{y}}^2}, \end{equation} where $g^{*}$ is the Fenchel conjugate of $g$, $\beta > 0$ is a smoothness parameter, and $\dot{y}$ is a given proximal center. We denote by $\nabla_ug_{\beta}(u, \dot{y}) = \textrm{prox}_{g^{*}/\beta}(\dot{y} + \frac{1}{\beta}u)$ the gradient of $g_{\beta}$ w.r.t. $u$. Given $g_{\beta}$ defined by \eqref{eq:smoothed_g}, we can approximate $F$ in \eqref{eq:primal_prob} by \begin{equation}\label{eq:F_beta} F_{\beta}(x, \dot{y}) := f(x) + g_{\beta}(Kx, \dot{y}).
\end{equation} The following lemma provides two key inequalities to link $F_{\beta}$ to $\mathcal{L}$ and $D$. \begin{lemma}\label{le:basic_properties} Let $F_{\beta}$ be defined by \eqref{eq:F_beta} and $(x^{\star}, y^{\star})$ be a saddle-point of \eqref{eq:saddle_point_prob}. Then, for any $x, \bar{x}\in\dom{f}$ and $\dot{y}, \tilde{y}, y \in\dom{g^{*}}$, we have \begin{equation}\label{eq:key_est01} \begin{array}{lcl} \mathcal{L}(\bar{x}, y) & \leq & F_{\beta}(\bar{x}, \dot{y}) + \frac{\beta}{2}\norms{y - \dot{y}}^2, \\ D(\tilde{y}) - D(y^{\star}) & \leq & F_{\beta}(\bar{x}, \dot{y}) - \mathcal{L}(\tilde{x}, \tilde{y}) + \frac{\beta}{2}\norms{y^{\star} - \dot{y}}^2, \quad \forall\tilde{x} \in \partial{f^{*}}(-K^{\top}\tilde{y}). \end{array} \end{equation} \end{lemma} \begin{proof} Using the definition of $\mathcal{L}$ in \eqref{eq:saddle_point_prob} and of $F_{\beta}$ in \eqref{eq:F_beta}, we have $\mathcal{L}(\bar{x}, y) = f(\bar{x}) + \iprods{K\bar{x}, y} - g^{*}(y) \leq f(\bar{x}) + \sup_{y}\sets{\iprods{K\bar{x}, y} - g^{*}(y) - \frac{\beta}{2}\norms{y - \dot{y}}^2} + \frac{\beta}{2}\norms{y - \dot{y}}^2 = F_{\beta}(\bar{x}, \dot{y}) + \frac{\beta}{2}\norms{y - \dot{y}}^2$, which proves the first line of \eqref{eq:key_est01}. Next, for any $\tilde{x} \in \partial{f^{*}}(-K^{\top}\tilde{y})$, we have $f^{*}(-K^{\top}\tilde{y}) = \iprods{-K^{\top}\tilde{y}, \tilde{x}} - f(\tilde{x})$ by Fenchel's equality \cite{Bauschke2011}. Hence, $D(\tilde{y}) = f^{*}(-K^{\top}\tilde{y}) + g^{*}(\tilde{y}) = -\mathcal{L}(\tilde{x}, \tilde{y})$. On the other hand, by \eqref{eq:saddle_point}, we have $\mathcal{L}(\bar{x}, y^{\star}) \geq \mathcal{L}(x^{\star}, y^{\star}) = -D(y^{\star})$. 
Combining these two expressions and the first line of \eqref{eq:key_est01}, we obtain $D(\tilde{y}) - D(y^{\star}) \leq \mathcal{L}(\bar{x}, y^{\star}) - \mathcal{L}(\tilde{x},\tilde{y}) \leq F_{\beta}(\bar{x}, \dot{y}) - \mathcal{L}(\tilde{x}, \tilde{y}) + \frac{\beta}{2}\norms{\dot{y} - y^{\star}}^2$. $\square$ \end{proof} \section{New ASGARD Variant and Its Convergence Guarantees}\label{sec:alg_scheme} In this section, we derive a new and unified variant of ASGARD in \cite{TranDinh2015b} and analyze its convergence rate guarantees for three settings. \subsection{\textbf{The derivation of the algorithm and one-iteration analysis}} Given $\dot{y} \in\mathbb{R}^n$ and $\hat{x}^k \in\mathbb{R}^p$, the main step of ASGARD consists of one primal update and one dual update as follows: \begin{equation}\label{eq:prox_grad_step0} \left\{\begin{array}{lcl} y^{k+1} & := & \textrm{prox}_{g^{*}/\beta_k}\big(\dot{y} + \tfrac{1}{\beta_k}K\hat{x}^k \big), \\ x^{k+1} & := & \textrm{prox}_{f/L_k}\big(\hat{x}^k - \tfrac{1}{L_k}K^{\top}y^{k+1} \big), \end{array}\right. \end{equation} where $\beta_k > 0$ is the smoothness parameter of $g$, and $L_k > 0$ is an estimate of the Lipschitz constant of $\nabla{g_{\beta_k}}$. Here, \eqref{eq:prox_grad_step0} serves as the basic step of various primal-dual first-order methods in the literature, including \cite{Chambolle2011,He2012,TranDinh2015b}. However, instead of updating $\dot{y}$, ASGARD fixes it for all iterations. The following lemma serves as a key step for our analysis in the sequel. Since its statement and proof are rather different from \cite[Lemma 2]{TranDinh2015b}, we provide its proof in Appendix~\ref{apdx:le:descent_pro}. \begin{lemma}[\cite{TranDinh2015b}]\label{le:descent_pro} Let $(x^{k+1}, y^{k+1})$ be generated by \eqref{eq:prox_grad_step0}, $F_{\beta}$ be defined by \eqref{eq:F_beta}, and $\mathcal{L}$ be given by \eqref{eq:saddle_point_prob}.
Then, for any $x\in\dom{f}$ and $\tau_k \in [0,1]$, we have \begin{equation}\label{eq:key_est00_a} \hspace{-2ex} \arraycolsep=0.1em \begin{array}{lcl} F_{\beta_k}(x^{k+1},\dot{y}) & \leq & (1 - \tau_k) F_{\beta_{k-1}}(x^k, \dot{y}) + \tau_k\mathcal{L}(x, y^{k+1}) \\ && + {~} \frac{L_k\tau_k^2}{2} \big\Vert \tfrac{1}{\tau_k}[\hat{x}^k - (1-\tau_k)x^k] - x \big\Vert^2 - \frac{\mu_f(1-\tau_k)\tau_k}{2}\norms{x - x^k}^2 \\ && - {~} \frac{\tau_k^2}{2}\left( L_k + \mu_f \right) \big\Vert \tfrac{1}{\tau_k}[x^{k+1} - (1-\tau_k)x^k] - x \big\Vert^2 \\ && - {~} \frac{L_k}{2}\norms{x^{k+1} - \hat{x}^k}^2 + \frac{1}{2(\mu_{g^{*}}+\beta_k)}\Vert K(x^{k+1} - \hat{x}^k)\Vert^2 \\ && - {~} \frac{1-\tau_k}{2}\left[ \tau_k\beta_k - (\beta_{k-1} - \beta_k)\right]\norms{ \nabla_ug_{\beta_k}(Kx^k,\dot{y}) - \dot{y}}^2. \end{array} \hspace{-8ex} \end{equation} \end{lemma} \noindent Together with the primal-dual step \eqref{eq:prox_grad_step0}, we also apply Nesterov's accelerated step to $\hat{x}^k$ and an averaging step to $\tilde{y}^k$ as follows: \begin{equation}\label{eq:prox_grad_scheme} \left\{\begin{array}{lcl} \hat{x}^{k+1} & := & x^{k+1} + \eta_{k+1}(x^{k+1} - x^k), \\ \tilde{y}^{k+1} &:= & (1-\tau_k)\tilde{y}^k + \tau_ky^{k+1}, \end{array}\right. \end{equation} where $\tau_k \in (0, 1)$ and $\eta_{k+1} > 0$ will be determined in the sequel. To analyze the convergence of the new ASGARD variant, we define the following Lyapunov function (also called a potential function): \begin{equation}\label{eq:lyapunov_func1} \begin{array}{lcl} \mathcal{V}_k(x) & := & F_{\beta_{k-1}}(x^k, \dot{y}) - \mathcal{L}(x, \tilde{y}^k) \\ && + {~} \tfrac{(L_{k-1} + \mu_f)\tau_{k-1}^2}{2}\norms{\tfrac{1}{\tau_{k-1}}[x^k - (1 - \tau_{k-1})x^{k-1}] - x}^2. \end{array} \end{equation} The following lemma provides a key recursive estimate to analyze the convergence of \eqref{eq:prox_grad_step0} and \eqref{eq:prox_grad_scheme}, whose proof is given in Appendix~\ref{apdx:le:maintain_gap_reduction1}. 
\begin{lemma}\label{le:maintain_gap_reduction1} Let $(x^{k}, \hat{x}^k, y^k, \tilde{y}^{k})$ be updated by \eqref{eq:prox_grad_step0} and \eqref{eq:prox_grad_scheme}. Given $\beta_0 > 0$, $\tau_k, \tau_{k+1}\in (0, 1]$, let $\beta_k$, $L_k$, and $\eta_{k+1}$ be updated by \begin{equation}\label{eq:Lk_m_k} \beta_k := \frac{\beta_{k-1}}{1+\tau_k}, \quad L_k := \frac{\norms{K}^2}{\beta_k + \mu_{g^{*}}}, \quad\text{and}\quad \eta_{k+1} := \frac{(1-\tau_{k})\tau_{k}}{\tau_{k}^2 + m_{k+1}\tau_{k+1}}, \end{equation} where $m_{k+1} := \frac{L_{k+1} + \mu_f}{L_{k} + \mu_f}$. Suppose further that $\tau_k \in (0, 1]$ satisfies \begin{equation}\label{eq:param_cond} \hspace{-1ex} \arraycolsep=0.2em \left\{\begin{array}{llcl} &(L_{k-1} + \mu_f)(1-\tau_k)\tau_{k-1}^2 + \mu_f(1-\tau_k)\tau_k &\geq & L_k\tau_k^2, \\ &(L_{k-1} + \mu_f)(L_k + \mu_f)\tau_k\tau_{k-1}^2 + (L_k + \mu_f)^2\tau_k^2 & \geq & (L_{k-1} + \mu_f)L_k\tau_{k-1}^2. \end{array}\right. \hspace{-1ex} \end{equation} Then, for any $x\in\dom{f}$, the Lyapunov function $\mathcal{V}_k$ defined by \eqref{eq:lyapunov_func1} satisfies \begin{equation}\label{eq:key_est1_ncvx} \mathcal{V}_{k+1}(x) \leq (1-\tau_k)\mathcal{V}_{k}(x). \end{equation} \end{lemma} \noindent\textbf{The unified ASGARD algorithm.} Our next step is to expand \eqref{eq:prox_grad_step0}, \eqref{eq:prox_grad_scheme}, and \eqref{eq:Lk_m_k} algorithmically to obtain a new ASGARD variant (called ASGARD+) as presented in Algorithm~\ref{alg:A1}. \begin{algorithm}[ht!]\caption{(New Accelerated Smoothed GAp ReDuction (ASGARD+))}\label{alg:A1} \normalsize \begin{algorithmic}[1] \State{\bfseries Initialization:} \label{step:i0a} Choose $x^0 \in\dom{f}$, $\tilde{y}^0\in\dom{g^{*}}$, and $\dot{y}\in \mathbb{R}^n$. \State\label{step:i0b} Choose $\tau_0 \in (0, 1]$ and $\beta_0 > 0$. Set $L_0 := \frac{\norms{K}^2}{\mu_{g^{*}} + \beta_0}$ and $\hat{x}^0 := x^0$. 
\State{\bfseries For $k := 0, 1, \cdots, k_{\max}$ do} \State\hspace{2ex}\label{step:i1} Update $\tau_{k+1}$ as in Theorem~\ref{th:convergence1}, \ref{th:convergence_1a}, or \ref{th:convergence_1b}, and update $\beta_{k+1} := \frac{\beta_{k}}{1+\tau_{k+1}}$. \State\hspace{2ex}\label{step:i2} Let $L_{k+1} := \frac{\norms{K}^2}{\mu_{g^{*}} + \beta_{k+1}}$, $m_{k+1} := \frac{L_{k+1}+\mu_f}{L_{k} + \mu_f}$, and $\eta_{k+1} := \frac{(1-\tau_{k})\tau_{k}}{\tau_{k}^2 + m_{k+1}\tau_{k+1}}$. \State\hspace{2ex}\label{step:i3} Update \begin{equation*} \left\{\begin{array}{lcl} y^{k+1} & := & \textrm{prox}_{g^{*}/\beta_k}\big(\dot{y} + \tfrac{1}{\beta_k}K\hat{x}^k \big), \\ x^{k+1} & := & \textrm{prox}_{f/L_k}\big(\hat{x}^k - \tfrac{1}{L_k}K^{\top}y^{k+1} \big), \\ \hat{x}^{k+1} & := & x^{k+1} + \eta_{k+1}(x^{k+1} - x^k), \\ \tilde{y}^{k+1} & := & (1-\tau_k)\tilde{y}^k + \tau_ky^{k+1}. \end{array}\right. \end{equation*} \State{\bfseries EndFor} \end{algorithmic} \end{algorithm} Compared to the original ASGARD in \cite{TranDinh2015b}, Algorithm~\ref{alg:A1} requires one additional averaging dual step on $\tilde{y}^k$ at Step~\ref{step:i3} to obtain the dual convergence. Note that Algorithm~\ref{alg:A1} also incorporates the strong convexity parameters $\mu_f$ of $f$ and $\mu_{g^{*}}$ of $g^{*}$ to cover three settings: general convexity ($\mu_f = \mu_{g^{*}} = 0$), strong convexity ($\mu_f > 0$ and $\mu_{g^{*}} = 0$), and strong convexity and smoothness ($\mu_f > 0$ and $\mu_{g^{*}} > 0$). Moreover, $L_k$ and the momentum step-size $\eta_{k+1}$ differ from those in \cite{TranDinh2015b} by incorporating $\mu_{g^{*}}$ and $\mu_f$. The per-iteration complexity of ASGARD+ remains the same as that of ASGARD except for the averaging dual update $\tilde{y}^k$. However, this step is not required if we only solve \eqref{eq:primal_prob}.
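As a minimal end-to-end illustration of Algorithm~\ref{alg:A1} (ours, under assumed problem data, not the authors' code), the following Python sketch runs the loop for the general convex case $\mu_f = \mu_{g^{*}} = 0$ with the cubic-equation update of $\tau_k$ from Theorem~\ref{th:convergence1}, applied to the hypothetical toy problem $\min_x \lambda\norms{x}_1 + \norms{Kx - b}_1$, for which $\textrm{prox}_{g^{*}/\beta}$ is a shifted clipping to $[-1,1]$ and $\textrm{prox}_{f/L}$ is soft-thresholding.

```python
# A self-contained sketch (ours) of the ASGARD+ loop for the general convex
# case (mu_f = mu_{g*} = 0) on the toy problem
#     min_x  lam*||x||_1 + ||K x - b||_1,
# i.e. f(x) = lam*||x||_1 (prox = soft-thresholding) and g(u) = ||u - b||_1,
# whose conjugate's prox is a shifted clipping to [-1, 1].
import numpy as np

def next_tau(tau):
    """Root in (0, 1) of t^3 + t^2 + tau^2*t - tau^2 = 0, by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mid**3 + mid**2 + tau**2 * mid - tau**2 < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def asgard_plus(K, b, lam, beta0, iters):
    n, p = K.shape
    x, x_hat = np.zeros(p), np.zeros(p)
    y_tilde, y_dot = np.zeros(n), np.zeros(n)
    tau, beta = 1.0, beta0
    normK2 = np.linalg.norm(K, 2) ** 2                 # spectral norm squared
    for _ in range(iters):
        L = normK2 / beta                              # L_k = ||K||^2 / beta_k
        y = np.clip(y_dot + (K @ x_hat - b) / beta, -1.0, 1.0)      # dual prox
        z = x_hat - K.T @ y / L
        x_next = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # primal prox
        tau_next = next_tau(tau)                       # cubic-equation tau update
        beta_next = beta / (1.0 + tau_next)
        m_next = beta / beta_next                      # m_{k+1} = L_{k+1}/L_k here
        eta = (1.0 - tau) * tau / (tau**2 + m_next * tau_next)
        y_tilde = (1.0 - tau) * y_tilde + tau * y      # dual averaging
        x_hat = x_next + eta * (x_next - x)            # momentum step
        x, tau, beta = x_next, tau_next, beta_next
    return x, y_tilde
```

Note that, as in the algorithm, the proximal center $\dot{y}$ stays fixed (here at zero) while $\beta_k$ shrinks and $L_k$ grows along the iterations.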
We highlight that if we apply a new approach from \cite{tran2019non} to \eqref{eq:saddle_point_prob}, then we can also update the proximal center $\dot{y}$ at each iteration. \subsection{\textbf{Case 1: Both $f$ and $g^{*}$ are just convex ($\mu_f = \mu_{g^{*}} = 0$)}}\label{subsec:general_cvx_ccv} The following theorem establishes convergence rates of Algorithm~\ref{alg:A1} for the general convex case where both $f$ and $g^{*}$ are just convex. \begin{theorem}\label{th:convergence1} Suppose that Assumption~\ref{as:A1} holds and both $f$ and $g^{*}$ are only convex, i.e. $\mu_f = \mu_{g^{*}} = 0$. Let $\sets{(x^k, \tilde{y}^k)}$ be generated by Algorithm~\ref{alg:A1} for solving \eqref{eq:saddle_point_prob}, where $\tau_0 := 1$ and $\tau_{k+1}$ is the unique solution of the cubic equation $\tau^3 + \tau^2 + \tau_k^2\tau - \tau_k^2 = 0$ in $\tau$, which always exists. Then, for all $k \geq 1$, we have: \begin{itemize} \item[$\mathrm{(a)}$] The gap function $\mathcal{G}_{\mathcal{X}\times\mathcal{Y}}$ defined by \eqref{eq:gap_func} satisfies \begin{equation}\label{eq:main_result1} \mathcal{G}_{\mathcal{X}\times\mathcal{Y}}(x^k, \tilde{y}^k) \leq \frac{\norms{K}^2}{2\beta_0k} \sup_{x\in\mathcal{X}}\norms{x^0 - x}^2 + \frac{\beta_0}{k+1}\sup_{y\in\mathcal{Y}}\norms{y - \dot{y}}^2. \end{equation} \item[$\mathrm{(b)}$] If $g$ is $M_g$-Lipschitz continuous on $\dom{g}$, then for \eqref{eq:primal_prob}, it holds that \begin{equation}\label{eq:main_result1b} F(x^k) - F^{\star} \leq \frac{\norms{K}^2}{2\beta_0 k}\norms{x^0 - x^{\star}}^2 + \frac{\beta_0}{k+1}(\norms{\dot{y}} + M_g)^2. \end{equation} \item[$\mathrm{(c)}$] If $f^{*}$ is $M_{f^{*}}$-Lipschitz continuous on $\dom{f^{*}}$, then for \eqref{eq:dual_prob}, it holds that \begin{equation}\label{eq:main_result1c} D(\tilde{y}^k) - D^{\star} \leq \frac{\norms{K}^2}{2\beta_0 k}(\norms{x^0} + M_{f^{*}})^2 + \frac{\beta_0}{k+1}\norms{\dot{y} - y^{\star}}^2. 
\end{equation} \end{itemize} \end{theorem} \begin{proof} First, since $\mu_f = \mu_{g^{*}} = 0$, $L_k = \frac{\norms{K}^2}{\beta_k}$, and $(1+\tau_k)\beta_k = \beta_{k-1}$, the two conditions of \eqref{eq:param_cond} respectively reduce to \begin{equation*} (1-\tau_k)\tau_{k-1}^2 \geq (1+\tau_k)\tau_k^2 \quad\text{and}\quad (1 + \tau_k)\tau_k^2 \geq \tau_{k-1}^2(1-\tau_k). \end{equation*} These conditions hold if $\tau_k^3 + \tau_k^2 + \tau_{k-1}^2\tau_k - \tau_{k-1}^2 = 0$. We first choose $\tau_0 := 1$, and update $\tau_k$ by solving the cubic equation $\tau^3 + \tau^2 + \tau_{k-1}^2\tau - \tau_{k-1}^2 = 0$ for $k\geq 1$. Note that this equation has a unique positive real solution $\tau_k \in (0, 1)$ due to Lemma~\ref{le:tech_lemma}(b). Moreover, we have $\prod_{i=1}^k(1-\tau_i) \leq \frac{1}{k+1}$ and $\beta_k \leq \frac{2\beta_0}{k+2}$. Next, by induction, \eqref{eq:key_est1_ncvx} leads to $\mathcal{V}_k(x) \leq \left[\prod_{i=1}^{k-1}(1-\tau_i)\right]\mathcal{V}_1(x) \leq \frac{1}{k}\mathcal{V}_1(x)$, where we have used $\prod_{i=1}^{k-1}(1-\tau_i) \leq \frac{1}{k}$ from Lemma~\ref{le:tech_lemma}(b). However, from \eqref{eq:key_est00_proof1} in the proof of Lemma~\ref{le:maintain_gap_reduction1} and $\tau_0 = 1$, we have $\mathcal{V}_1(x) \leq (1-\tau_0)\mathcal{V}_0(x) + \frac{L_0\tau_0^2}{2}\norms{\hat{x}^0 - x}^2 = \frac{\norms{K}^2}{2\beta_0}\norms{x^0 - x}^2$. Hence, we eventually obtain \begin{equation}\label{eq:proof_th1_01} \mathcal{V}_k(x) \leq \frac{\norms{K}^2}{2\beta_0k}\norms{x^0 - x}^2.
\end{equation} Using \eqref{eq:key_est01} from Lemma~\ref{le:basic_properties} and $\beta_{k-1} \leq \frac{2\beta_0}{k+1}$ from Lemma~\ref{le:tech_lemma}(b), we get \begin{equation*} \begin{array}{lcl} \mathcal{L}(x^k, y) - \mathcal{L}(x, \tilde{y}^k) & \overset{\tiny\eqref{eq:key_est01}}{\leq} & F_{\beta_{k-1}}(x^k, \dot{y}) - \mathcal{L}(x,\tilde{y}^k) + \frac{\beta_{k-1}}{2}\norms{y- \dot{y}}^2 \\ & \overset{\tiny\eqref{eq:lyapunov_func1}}{\leq} & \mathcal{V}_k(x) + \frac{\beta_{k-1}}{2}\norms{y - \dot{y}}^2 \\ & \overset{\tiny\eqref{eq:proof_th1_01}}{\leq} & \frac{\norms{K}^2}{2\beta_0k}\norms{x^0 - x}^2 + \frac{\beta_0}{k+1}\norms{\dot{y} - y}^2. \end{array} \end{equation*} Taking the supremum over $\mathcal{X}$ and $\mathcal{Y}$ on both sides of the last estimate and using \eqref{eq:gap_func}, we obtain \eqref{eq:main_result1}. Now, since $F_{\beta_{k-1}}(x^k,\dot{y}) - F^{\star} \overset{\tiny\eqref{eq:saddle_point}}{\leq} F_{\beta_{k-1}}(x^k,\dot{y}) - \mathcal{L}(x^{\star}, \tilde{y}^k) \overset{\tiny\eqref{eq:lyapunov_func1}}{\leq} \mathcal{V}_k(x^{\star})$, combining this inequality and \eqref{eq:proof_th1_01}, we get \begin{equation*} F_{\beta_{k-1}}(x^k,\dot{y}) - F^{\star} \leq \frac{\norms{K}^2}{2\beta_0k}\norms{x^0 - x^{\star}}^2. \end{equation*} On the other hand, since $g$ is $M_g$-Lipschitz continuous, we have \begin{equation*} \sup\set{ \norms{y-\dot{y}} : y\in\partial{g}(Kx^k)} \leq \norms{\dot{y}} + \sup\sets{\norms{y} : \norms{y} \leq M_g} = \norms{\dot{y}} + M_g. \end{equation*} Hence, by \eqref{eq:key_bound_apdx3} of Lemma~\ref{le:g_properties}, we have $F(x^k) \leq F_{\beta_{k-1}}(x^k, \dot{y}) + \frac{\beta_{k-1}}{2}(\norms{\dot{y}} + M_g)^2 \leq F_{\beta_{k-1}}(x^k, \dot{y}) + \frac{\beta_0}{k+1}(\norms{\dot{y}} + M_g)^2$. Combining both estimates, we obtain \eqref{eq:main_result1b}.
Finally, using \eqref{eq:key_est01}, we have \begin{equation*} \begin{array}{lcl} D(\tilde{y}^k) - D^{\star} & \leq & F_{\beta_{k-1}}(x^k, \dot{y}) - \mathcal{L}(\tilde{x}^k, \tilde{y}^k) + \frac{\beta_{k-1}}{2}\norms{\dot{y} - y^{\star}}^2 \\ &\overset{\tiny\eqref{eq:lyapunov_func1}}{\leq} & \mathcal{V}_k(\tilde{x}^k) + \frac{\beta_0}{k+1}\norms{ \dot{y} - y^{\star}}^2 \\ &\overset{\tiny\eqref{eq:proof_th1_01}}{\leq} & \frac{\norms{K}^2}{2\beta_0k} (\norms{x^0} + M_{f^{*}})^2 + \frac{\beta_0}{k+1}\norms{\dot{y} - y^{\star}}^2, \end{array} \end{equation*} which proves \eqref{eq:main_result1c}. Here, since $\tilde{x}^k \in \partial{f^{*}}(-K^{\top}\tilde{y}^k)$, we have $\norms{x^0 - \tilde{x}^k} \leq \norms{\tilde{x}^k} + \norms{x^0} \leq M_{f^{*}} + \norms{x^0}$, which has been used in the last inequality. $\square$ \end{proof} \subsection{\textbf{Case 2: $f$ is strongly convex and $g^{*}$ is convex ($\mu_f > 0$ and $\mu_{g^{*}} = 0$)}}\label{subsect:f_or_gstar_strongly_convex} Next, we consider the case when only $f$ or $g^{*}$ is strongly convex. Without loss of generality, we assume that $f$ is strongly convex with a strong convexity parameter $\mu_f > 0$, but $g^{*}$ is only convex with $\mu_{g^{*}} = 0$. Otherwise, we switch the roles of $f$ and $g^{*}$ in Algorithm~\ref{alg:A1}. The following theorem establishes an optimal $\BigO{1/k^2}$ convergence rate (up to a constant factor) of Algorithm~\ref{alg:A1} in this case (i.e. $\mu_f > 0$ and $\mu_{g^{*}} = 0$). \begin{theorem}\label{th:convergence_1a} Suppose that Assumption~\ref{as:A1} holds and that $f$ is strongly convex with a strong convexity parameter $\mu_f > 0$, but $g^{*}$ is just convex (i.e., $\mu_{g^{*}} = 0$). Let us choose $\tau_0 := 1$ and $\beta_0 \geq \frac{0.382\norms{K}^2}{\mu_f}$. Let $\sets{(x^k, \tilde{y}^k)}$ be generated by Algorithm~\ref{alg:A1} using the update rule $\tau_{k+1} := \frac{\tau_k}{2}\big(\sqrt{\tau_k^2 + 4} - \tau_k\big)$ for $\tau_k$.
Then, we have: \begin{itemize} \item[$\mathrm{(a)}$] The gap function $\mathcal{G}_{\mathcal{X}\times\mathcal{Y}}$ defined by \eqref{eq:gap_func} satisfies \begin{equation}\label{eq:main_est300} \mathcal{G}_{\mathcal{X}\times\mathcal{Y}}(x^k, \tilde{y}^k) \leq \frac{2\norms{K}^2}{\beta_0(k+1)^2}\sup_{x\in\mathcal{X}}\norms{x^0 -x}^2 + \frac{10\beta_0}{(k+3)^2} \sup_{y\in\mathcal{Y}}\norms{\dot{y} - y}^2. \end{equation} \item[$\mathrm{(b)}$] If $g$ is $M_g$-Lipschitz continuous on $\dom{g}$, then for \eqref{eq:primal_prob}, it holds that \begin{equation}\label{eq:main_est300b} F(x^k) - F^{\star} \leq \frac{2\norms{K}^2\norms{x^0 - x^{\star}}^2}{\beta_0(k+1)^2} + \frac{10\beta_0(\norms{\dot{y}} + M_g)^2}{(k+3)^2}. \end{equation} \item[$\mathrm{(c)}$] If $f^{*}$ is $M_{f^{*}}$-Lipschitz continuous on $\dom{f^{*}}$, then for \eqref{eq:dual_prob}, it holds that \begin{equation}\label{eq:main_est300c} D(\tilde{y}^k) - D^{\star} \leq \frac{2\norms{K}^2(\norms{x^0} + M_{f^{*}})^2}{\beta_0(k+1)^2} + \frac{10\beta_0\norms{\dot{y} - y^{\star}}^2}{(k+3)^2}. \end{equation} \end{itemize} \end{theorem} \begin{proof} Since $\mu_{g^{*}} = 0$ and $\tau_k^2 = \tau_{k-1}^2(1 - \tau_k)$ by the update rule of $\tau_k$, $\beta_k = \frac{\beta_{k-1}}{1+\tau_k}$, and $L_k = \frac{\norms{K}^2}{\beta_k}$, the first condition of \eqref{eq:param_cond} is equivalent to $\beta_{k-1} \geq \frac{\norms{K}^2\tau_k^2}{\mu_f}$. However, since $\beta_{k-1} \geq \frac{\beta_0\tau_{k}^2}{\tau_1^2}$ due to Lemma~\ref{le:tech_lemma}(a), and $\tau_1 = \frac{\sqrt{5}-1}{2} \approx 0.618$, $\beta_{k-1} \geq \frac{\norms{K}^2\tau_k^2}{\mu_f}$ holds if $\beta_0 \geq \frac{\norms{K}^2\tau_1^2}{\mu_f}$ (note that $\tau_1^2 = \frac{3-\sqrt{5}}{2} < 0.382$). Thus we can choose $\beta_0 \geq \frac{0.382\norms{K}^2}{\mu_f}$ to guarantee the first condition of \eqref{eq:param_cond}.
Similarly, using $m_k = \frac{L_k + \mu_f}{L_{k-1} + \mu_f} \geq 1$, the second condition of \eqref{eq:param_cond} is equivalent to $m_k^2\tau_k^2 + m_k\tau_k\tau_{k-1}^2 \geq \frac{L_k\tau_{k-1}^2}{L_{k-1} + \mu_f}$. Since $\frac{L_k}{L_{k-1} + \mu_f} \leq m_k$, the last condition holds if $m_k^2\tau_k^2 + m_k\tau_k\tau_{k-1}^2 \geq m_k\tau_{k-1}^2$. Using again $\tau_k^2 = \tau_{k-1}^2(1-\tau_k)$, this condition becomes $m_k\tau_k^2 \geq \tau_{k-1}^2(1-\tau_k) = \tau_k^2$. This always holds true since $m_k \geq 1$. Therefore, the second condition of \eqref{eq:param_cond} is satisfied. As a result, we have the recursive estimate \eqref{eq:key_est1_ncvx}, i.e.: \begin{equation}\label{eq:recursive_form} \mathcal{V}_{k+1}(x) \leq (1-\tau_k)\mathcal{V}_{k}(x). \end{equation} From \eqref{eq:recursive_form}, Lemma~\ref{le:tech_lemma}(a), \eqref{eq:key_est00_proof1}, and noting that $\hat{x}^0 := x^0$ and $\tau_0 := 1$, we have \begin{equation*} \arraycolsep=0.2em \begin{array}{lcl} \mathcal{V}_k(x) & \leq & \big[\prod_{i=1}^{k-1}(1-\tau_i)\big]\mathcal{V}_1(x) = \Theta_{1,k-1}\mathcal{V}_1(x) \leq \frac{4}{(k+1)^2}\mathcal{R}_0(x), \end{array} \end{equation*} where $\mathcal{R}_0(x) := \frac{\Vert K\Vert^2}{2\beta_0}\Vert x^0 - x\Vert^2$. Similar to the proof of Theorem~\ref{th:convergence1}, using the last inequality and $\beta_{k-1} \leq \frac{4\beta_0\tau_0^2}{\tau_2^2[\tau_0(k+1) + 2]^2} \leq \frac{20\beta_0}{(k+3)^2}$ from Lemma~\ref{le:tech_lemma}(a), we obtain the bounds \eqref{eq:main_est300}, \eqref{eq:main_est300b}, and \eqref{eq:main_est300c}, respectively. $\square$ \end{proof} \begin{remark}\label{re:compared} The variant of Algorithm~\ref{alg:A1} in Theorem~\ref{th:convergence_1a} is completely different from \cite[Algorithm 2]{TranDinh2015b} and \cite[Algorithm 2]{tran2019non}, as it requires only one evaluation of $\textrm{prox}_{f/L_k}(\cdot)$ per iteration as opposed to two proximal operations of $f$ as in \cite{TranDinh2015b,tran2019non}.
\end{remark} \subsection{\textbf{Case 3: Both $f$ and $g^{*}$ are strongly convex ($\mu_f > 0$ and $\mu_{g^{*}} > 0$)}}\label{sec:both_f_gstar_scvx} Finally, we assume that both $f$ and $g^{*}$ are strongly convex with strong convexity parameters $\mu_f > 0$ and $\mu_{g^{*}} > 0$, respectively. Then, the following theorem proves the optimal linear rate (up to a constant factor) of Algorithm~\ref{alg:A1}. \begin{theorem}\label{th:convergence_1b} Suppose that Assumption~\ref{as:A1} holds and both $f$ and $g^{*}$ in \eqref{eq:saddle_point_prob} are strongly convex with $\mu_f > 0$ and $\mu_{g^{*}} > 0$, respectively. Let $\sets{(x^k, \tilde{y}^k)}$ be generated by Algorithm~\ref{alg:A1} using $\tau_k := \tau = \frac{1}{\sqrt{1 + \kappa_F}}\in (0, 1)$ and $\beta_0 > 0$, where $\kappa_F := \frac{\norms{K}^2}{\mu_f\mu_{g^{*}}}$. Then, the following statements hold: \begin{itemize} \item[$\mathrm{(a)}$] The gap function $\mathcal{G}_{\mathcal{X}\times\mathcal{Y}}$ defined by \eqref{eq:gap_func} satisfies \begin{equation}\label{eq:key_est40} \mathcal{G}_{\mathcal{X}\times\mathcal{Y}}(x^k,\tilde{y}^k) \leq (1-\tau)^k\bar{\mathcal{R}}_p + \frac{1}{2(1+\tau)^k}\sup_{y\in\mathcal{Y}}\norms{\dot{y} - y}^2, \end{equation} where $\bar{\mathcal{R}}_p := {\displaystyle\sup_{x\in\mathcal{X}}}\Big\{ (1-\tau)\big[\mathcal{F}_{\beta_0}(x^0) - \mathcal{L}(x, \tilde{y}^0)\big] + \frac{\norms{K}^2\tau^2}{2(\mu_{g^{*}} + \beta_{0})} \norms{x^0 - x}^2 \Big\}$. \item[$\mathrm{(b)}$] If $g$ is $M_g$-Lipschitz continuous on $\dom{g}$, then for \eqref{eq:primal_prob}, it holds that \begin{equation}\label{eq:key_est40b} F(x^k) - F^{\star} \leq (1-\tau)^k\bar{\mathcal{R}}_p^{\star} \ + \ \frac{\beta_0}{2(1+\tau)^k} \left(\norms{\dot{y}} + M_g\right)^2, \end{equation} where $\bar{\mathcal{R}}_p^{\star} := (1-\tau)\big[ \mathcal{F}_{\beta_0}(x^0) - \mathcal{L}(x^{\star}, \tilde{y}^0) \big] + \frac{\norms{K}^2\tau^2}{2(\mu_{g^{*}} + \beta_{0})}\norms{x^0 - x^{\star}}^2$. 
\item[$\mathrm{(c)}$] If $f^{*}$ is $M_{f^{*}}$-Lipschitz continuous on $\dom{f^{*}}$, then for \eqref{eq:dual_prob}, it holds that \begin{equation}\label{eq:key_est40c} D(\tilde{y}^k) - D^{\star} \leq (1-\tau)^k\bar{\mathcal{R}}_d^{\star} \ + \ \frac{\beta_0\norms{\dot{y} - y^{\star}}^2}{2(1+\tau)^k}, \end{equation} where $\bar{\mathcal{R}}_d^{\star} := (1-\tau)\big[\mathcal{F}_{\beta_0}(x^0) - D(\tilde{y}^0)\big] + \frac{\norms{K}^2\tau^2}{2(\mu_{g^{*}} + \beta_{0})}\left(\norms{x^0} + M_{f^{*}}\right)^2$. \end{itemize} \end{theorem} \begin{proof} Since $\tau_k = \tau = \frac{1}{\sqrt{1+\kappa_F}} = \sqrt{\frac{\mu_f\mu_{g^{*}}}{\norms{K}^2 + \mu_f\mu_{g^{*}}}} \in (0, 1)$ and $\beta_{k-1} = (1+\tau)\beta_k$, after a few elementary calculations, we can show that the first condition of \eqref{eq:param_cond} automatically holds. The second condition of \eqref{eq:param_cond} is equivalent to $m_k\tau + m_k^2 \geq \frac{L_k}{L_{k-1} + \mu_f}$. Since $m_k \geq \frac{L_k}{L_{k-1} + \mu_f}$, this condition holds if $m_k\tau + m_k^2 \geq m_k$, which is equivalent to $\tau + m_k \geq 1$. This obviously holds true since $\tau > 0$ and $m_k \geq 1$. From \eqref{eq:key_est1_ncvx} of Lemma~\ref{le:maintain_gap_reduction1}, we have $\mathcal{V}_{k+1}(x) \leq (1-\tau)\mathcal{V}_{k}(x)$. Therefore, by induction and using again \eqref{eq:key_est00_proof1}, we get \begin{equation}\label{eq:recursive_form_scvx} \mathcal{V}_k(x) \leq (1-\tau)^k\mathcal{V}_1(x) \leq (1-\tau)^k\bar{\mathcal{R}}_p(x), \end{equation} where $\bar{\mathcal{R}}_p(x) := (1-\tau)\big[ \mathcal{F}_{\beta_0}(x^0) - \mathcal{L}(x,\tilde{y}^0)\big] + \frac{\norms{K}^2\tau^2}{2(\mu_{g^{*}} + \beta_{0})}\norms{x^0 - x}^2$.
Now, since $\beta_{k-1} = \frac{\beta_0}{(1+\tau)^k}$ due to the update rule of $\beta_k$, by \eqref{eq:key_est01}, we have \begin{equation*} \mathcal{L}(x^k, y) - \mathcal{L}(x, \tilde{y}^k) \leq \mathcal{V}_k(x) + \frac{\beta_{k-1}}{2}\norms{\dot{y} - y}^2 \leq (1-\tau)^k\bar{\mathcal{R}}_p(x) + \frac{\beta_0\norms{\dot{y} - y}^2}{2(1+\tau)^k}. \end{equation*} This implies \eqref{eq:key_est40}. The estimates \eqref{eq:key_est40b} and \eqref{eq:key_est40c} can be proved similarly as in Theorem~\ref{th:convergence1}, and we omit the details here. $\square$ \end{proof} \begin{remark}\label{re:cond_number} Since $g^{*}$ is $\mu_{g^{*}}$-strongly convex, it is well known that $g\circ K$ is $\frac{\norms{K}^2}{\mu_{g^{*}}}$-smooth. Hence, $\kappa_F := \frac{\norms{K}^2}{\mu_f\mu_{g^{*}}}$ is the condition number of $F$ in \eqref{eq:primal_prob}. Theorem~\ref{th:convergence_1b} shows that Algorithm~\ref{alg:A1} can achieve a $\big(1 - \frac{1}{(1+\sqrt{2})\sqrt{\kappa_F}}\big)^k$-linear convergence rate. Consequently, it also achieves $\BigO{ \sqrt{\kappa_F}\log\left(\frac{1}{\varepsilon}\right)}$ oracle complexity to obtain an $\varepsilon$-primal-dual solution $(x^k,\tilde{y}^k)$. This linear rate and complexity are optimal (up to a constant factor) under the given assumptions in Theorem~\ref{th:convergence_1b}. However, Algorithm~\ref{alg:A1} is very different from existing accelerated proximal gradient methods, e.g., \cite{Beck2009,Nesterov2007,tseng2008accelerated} for solving \eqref{eq:primal_prob} since our method uses the proximal operator of $g^{*}$ (and therefore, the proximal operator of $g$) instead of the gradient of $g$ as in \cite{Beck2009,Nesterov2007,tseng2008accelerated}.
\end{remark} \begin{remark}\label{re:optimal_rate} The $\BigO{1/k}$, $\BigO{1/k^2}$, and linear convergence rates in Theorems~\ref{th:convergence1}, \ref{th:convergence_1a}, and \ref{th:convergence_1b}, respectively, are already optimal (up to a constant factor) under the given assumptions, as discussed, e.g., in \cite{ouyang2018lower,tran2019non}. The primal convergence rate on $\sets{F(x^k) - F^{\star}}$ has been proved in \cite[Theorem~4]{TranDinh2015b}, but only for the case $\BigO{1/k}$. The convergence rates on $\sets{\mathcal{G}_{\mathcal{X}\times\mathcal{Y}}(x^k,\tilde{y}^k)}$ and $\sets{D(\tilde{y}^k) - D^{\star}}$ are new. Moreover, the convergence of the primal sequence is on the last iterate $x^k$, while the convergence of the dual sequence is on the averaging iterate $\tilde{y}^k$. \end{remark} \section{Numerical Experiments}\label{sec:num_exp} In this section, we provide four numerical experiments to verify the theoretical convergence aspects and the performance of Algorithm~\ref{alg:A1}. Our algorithm is implemented in Matlab R2019b running on a MacBook laptop with a 2.8GHz Quad-Core Intel Core i7 and 16GB of RAM. We also compare our method with Nesterov's smoothing technique in \cite{Nesterov2005c} as a baseline. We emphasize that our experiments below exactly follow the parameter update rules stated in Theorems~\ref{th:convergence1} and \ref{th:convergence_1a}, without any parameter tuning. To further improve the practical performance of Algorithm~\ref{alg:A1}, one can exploit the restarting strategy in \cite{TranDinh2015b}, whose theoretical guarantee is established in \cite{tran2018adaptive}.
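The parameter sequences used in Theorems~\ref{th:convergence1}--\ref{th:convergence_1b} are cheap to compute. As an illustration (ours, not part of the paper's experiments), the following Python sketch generates the sequence $\tau_{k+1} = \frac{\tau_k}{2}\big(\sqrt{\tau_k^2 + 4} - \tau_k\big)$ from Theorem~\ref{th:convergence_1a}, which satisfies $\tau_{k+1}^2 = \tau_k^2(1-\tau_{k+1})$ and decays like $\BigO{1/k}$, driving the $\BigO{1/k^2}$ decay of $\beta_k$.

```python
# A sketch (ours) of the step-size rule from the strongly convex case:
# tau_{k+1} = tau_k*(sqrt(tau_k^2 + 4) - tau_k)/2, the positive root of
# t^2 + tau_k^2*t - tau_k^2 = 0, so tau_{k+1}^2 = tau_k^2*(1 - tau_{k+1}).
import math

def tau_sequence(tau0, n):
    """Return [tau_0, tau_1, ..., tau_n] under the quadratic update rule."""
    seq = [tau0]
    for _ in range(n):
        t = seq[-1]
        seq.append(0.5 * t * (math.sqrt(t**2 + 4.0) - t))
    return seq

seq = tau_sequence(1.0, 1000)
assert abs(seq[1] - (math.sqrt(5.0) - 1.0) / 2.0) < 1e-12   # tau_1 ~ 0.618
# O(1/k) decay, checked loosely at k = 1000
assert 1.0 / 1002 < seq[1000] < 2.0 / 1001
```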
The nonsmooth and convex optimization problem we use for our experiments is the following representative model: \begin{equation}\label{eq:LAD} \min_{x\in\mathbb{R}^p}\Big\{ F(x) := \Vert Kx - b\Vert_2 + \lambda\norms{x}_1 + \frac{\rho}{2}\norms{x}_2^2 \Big\}, \end{equation} where $K\in\mathbb{R}^{n\times p}$ is a given matrix, $b\in\mathbb{R}^n$ is a given vector, and $\lambda > 0$ and $\rho \geq 0$ are two given regularization parameters. The norm $\norms{\cdot}_2$ is the $\ell_2$-norm (or Euclidean norm). If $\rho = 0$, then \eqref{eq:LAD} reduces to the square-root LASSO model proposed in \cite{Belloni2011}. If $\rho > 0$, then \eqref{eq:LAD} becomes a square-root regression problem with an elastic net regularizer similar to \cite{zou2005regularization}. Clearly, if we define $g(y) := \norms{y - b}_2$ and $f(x) := \lambda\norms{x}_1 + \frac{\rho}{2}\norms{x}_2^2$, then \eqref{eq:LAD} can be cast into \eqref{eq:primal_prob}. To generate the input data for our experiments, we first generate $K$ with i.i.d. standard Gaussian entries and either uncorrelated or $50\%$ correlated columns. Then, we generate an observed vector $b$ as $b := Kx^{\natural} + \mathcal{N}(0,\sigma)$, where $x^{\natural}$ is a predefined sparse vector and $\mathcal{N}(0, \sigma)$ stands for zero-mean Gaussian noise with variance $\sigma = 0.05$. The regularization parameter $\lambda$ to promote sparsity is chosen as suggested in \cite{Belloni2011}, and the parameter $\rho$ is set to $\rho := 0.1$. We fix the problem size at $p := 1000$ and $n := 350$ and choose the number of nonzero entries of $x^{\natural}$ to be $s := 100$. Then, for each experiment, we generate $30$ instances of the same size but with different input data $(K, b)$.
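The data-generation procedure just described can be sketched in a few lines of Python (ours; the dimensions are scaled down from $(n, p, s) = (350, 1000, 100)$ for illustration):

```python
# A sketch (ours) of the synthetic data generation for model (36):
# i.i.d. standard Gaussian K, an s-sparse ground truth x_nat, and
# noisy observations b = K @ x_nat + noise.
import numpy as np

def make_instance(n, p, s, sigma=0.05, seed=0):
    """Generate one problem instance (K, b, x_nat) of the square-root model."""
    rng = np.random.default_rng(seed)
    K = rng.standard_normal((n, p))                 # uncorrelated columns
    x_nat = np.zeros(p)
    support = rng.choice(p, size=s, replace=False)  # random sparsity pattern
    x_nat[support] = rng.standard_normal(s)
    b = K @ x_nat + sigma * rng.standard_normal(n)  # noisy observations
    return K, b, x_nat

K, b, x_nat = make_instance(n=35, p=100, s=10)
```

Correlated columns (the $50\%$ correlated setting) would require an extra mixing step not shown here.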
For Nesterov's smoothing method, by following \cite{Nesterov2005c}, we smooth $g$ as \begin{equation*} g_{\gamma}(y) := \max_{v\in\mathbb{R}^n}\Big\{ \iprods{y - b, v} - \frac{\gamma}{2}\norms{v}^2: \norms{v}_2 \leq 1 \Big\}, \end{equation*} where $\gamma > 0$ is a smoothness parameter. In order to correctly choose $\gamma$ for Nesterov's smoothing method, we first solve \eqref{eq:LAD} with CVX \cite{Grant2004} using Mosek with high precision to obtain a highly accurate solution $x^{\star}$ of \eqref{eq:LAD}. Then, we set $\gamma^{*} := \frac{\sqrt{2}\norms{K}\norms{x^0 - x^{\star}}}{k_{\max}\sqrt{D_{\mathcal{V}}}}$ by minimizing its theoretical bound from \cite{Nesterov2005c} w.r.t. $\gamma > 0$, where $D_{\mathcal{V}} := \frac{1}{2}$ is the prox-diameter of the unit $\ell_2$-norm ball, and $k_{\max}$ is the maximum number of iterations. For Algorithm~\ref{alg:A1}, using \eqref{eq:main_result1b} and $\dot{y} := 0$, we can set $\beta_0 = \beta^{*} := \frac{\norms{K}\norms{x^0 - x^{\star}}}{M_g}$ by minimizing the right-hand side of \eqref{eq:main_result1b} w.r.t. $\beta_0 > 0$, where $M_g := 1$. We choose $k_{\max} := 5000$ for all experiments. To see the effect of the smoothness parameters $\gamma$ and $\beta_0$ on the performance of both algorithms, we also consider two variants by increasing or decreasing these parameters by a factor of $10$, respectively. More specifically, we set them as follows. \begin{itemize} \itemsep=0.1em \item For Nesterov's smoothing scheme, we consider two additional variants by setting $\gamma := 10\gamma^{*}$ and $\gamma := 0.1\gamma^{*}$, respectively. \item For Algorithm~\ref{alg:A1}, we also consider two other variants with $\beta_0 := 10\beta^{*}$ and $\beta_0 := 0.1\beta^{*}$, respectively. \end{itemize} We first conduct two different experiments for the square-root LASSO model (i.e. setting $\rho := 0$ in \eqref{eq:LAD}). In this case, the underlying optimization problem is non-strongly convex and fully nonsmooth.
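For this model, both proximal operators needed by Algorithm~\ref{alg:A1} are available in closed form: with $g(y) = \norms{y - b}_2$, the conjugate $g^{*}(v) = \iprods{b, v}$ plus the indicator of the $\ell_2$ unit ball, so $\textrm{prox}_{g^{*}/\beta}$ is a shifted projection onto that ball, while $\textrm{prox}_{f/L}$ for $f(x) = \lambda\norms{x}_1 + \frac{\rho}{2}\norms{x}_2^2$ is soft-thresholding followed by a scaling. A Python sketch of both (ours):

```python
# A sketch (ours) of the two proximal operators for model (36), assuming
# g(y) = ||y - b||_2 and f(x) = lam*||x||_1 + (rho/2)*||x||_2^2.
import numpy as np

def prox_g_conj(z, b, beta):
    """prox of g*/beta at z, with g*(v) = <b, v> + indicator(||v||_2 <= 1):
    project z - b/beta onto the l2 unit ball."""
    v = z - b / beta
    nv = np.linalg.norm(v)
    return v if nv <= 1.0 else v / nv

def prox_f(z, lam, rho, L):
    """prox of f/L at z: soft-threshold by lam/L, then scale by 1/(1 + rho/L)."""
    st = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return st / (1.0 + rho / L)
```

Both operators cost $\BigO{n}$ and $\BigO{p}$ respectively, so the per-iteration cost is dominated by the two matrix-vector products with $K$ and $K^{\top}$.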
\begin{itemize} \itemsep=0.1em \item \textit{Experiment 1:} We test Algorithm \ref{alg:A1} (abbreviated by \texttt{Alg. 1}) and Nesterov's smoothing method (abbreviated by \texttt{Nes. Alg.}) on $30$ problem instances with uncorrelated columns of $K$. Since both algorithms have essentially the same per-iteration complexity, we report the relative primal objective residual $\frac{F(x^k) - F(x^{\star})}{\max\sets{1, \vert F(x^{\star})\vert}}$ against the number of iterations. \item \textit{Experiment 2:} We conduct the same test on another set of $30$ problem instances, but using $50\%$ correlated columns in the input matrix $K$. \end{itemize} The results of both experiments are depicted in Figure~\ref{fig:sqrtLASSO_exp1}, where the left plot is for \textit{Experiment 1} and the right plot is for \textit{Experiment 2}. The solid line of each curve represents the mean over the $30$ problem samples, and the corresponding shaded area shows their spread (i.e. the area between the lowest and the highest deviations from the mean). \begin{figure} \caption{The convergence behavior of Algorithm~\ref{alg:A1} and Nesterov's smoothing scheme on $30$ problem instances of \eqref{eq:LAD} (the non-strongly convex case). Left plot: uncorrelated columns in $K$, and Right plot: $50\%$ correlated columns in $K$.} \label{fig:sqrtLASSO_exp1} \end{figure} From Figure \ref{fig:sqrtLASSO_exp1}, we can observe that, with the choice $\beta_0 := \beta^{*}$ and $\gamma := \gamma^{*}$ as suggested by the theory, both algorithms perform best among the tested values of these parameters. We also see that Algorithm~\ref{alg:A1} outperforms Nesterov's smoothing scheme in both experiments. If $\beta_0$ (respectively, $\gamma$) is large, then both algorithms make good progress in early iterations, but stagnate at a given objective value in later iterations.
Conversely, if $\beta_0$ (respectively, $\gamma$) is small, then both algorithms perform worse in early iterations, but further decrease the objective value as the number of iterations increases. This behavior also confirms the theoretical results stated in Theorem~\ref{th:convergence1} and in \cite{Nesterov2005c}. In fact, if $\beta_0$ (or $\gamma$) is small, then the algorithmic stepsize is small. Hence, the algorithm makes slow progress in early iterations, but it better approximates the nonsmooth function $g$, leading to a more accurate approximation of $F(x^{\star})$ by $F(x^k)$. In contrast, if $\beta_0$ (or $\gamma$) is large, then we have a large stepsize and therefore a faster convergence rate in early iterations. However, the smoothed approximation is less accurate. In order to test the strongly convex case in Theorem~\ref{th:convergence_1a}, we conduct two additional experiments on \eqref{eq:LAD} with $\rho := 0.1$. In this case, problem \eqref{eq:LAD} is strongly convex with $\mu_f := 0.1$. Since \cite{Nesterov2005c} does not directly handle the strongly convex case, we only compare two variants of Algorithm~\ref{alg:A1} stated in Theorem~\ref{th:convergence1} (\texttt{Alg. 1}) and Theorem~\ref{th:convergence_1a} (\texttt{Alg. 1b}), respectively. We set $\beta_0 := \frac{0.382\norms{K}^2}{\mu_f}$ in \texttt{Alg. 1b} as suggested by Theorem~\ref{th:convergence_1a}. We consider two experiments as follows: \begin{itemize} \itemsep=0.1em \item \textit{Experiment 3:} Test two variants of Algorithm \ref{alg:A1} on a collection of $30$ problem instances with uncorrelated columns of $K$. \item \textit{Experiment 4:} Conduct the same test on another set of $30$ problem instances, but using $50\%$ correlated columns in $K$. \end{itemize} The results of both variants of Algorithm~\ref{alg:A1} are reported in Figure \ref{fig:sqrtLASSO_exp2}, where the left plot is for \textit{Experiment 3} and the right plot is for \textit{Experiment 4}.
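Both variants rely on the parameter schedules analysed in Lemma~\ref{le:tech_lemma} of Appendix 1. As a numerical sanity check (the function name and tolerances are ours), the following sketch implements the updates $\tau_{k+1} := \frac{\tau_k}{2}\big[(\tau_k^2+4)^{1/2} - \tau_k\big]$ and $\beta_k := \frac{\beta_{k-1}}{1+\tau_k}$, and verifies the identity $\tau_k^2 = (1-\tau_k)\tau_{k-1}^2$ together with the bounds on $\tau_k$ stated there for $\tau_0 := 1$:

```python
import math

def tau_beta_schedule(beta0, tau0=1.0, k_max=500):
    """Momentum and smoothness schedules from the appendix lemma:
    tau_{k+1} = (tau_k / 2) * (sqrt(tau_k^2 + 4) - tau_k),
    beta_k    = beta_{k-1} / (1 + tau_k)."""
    taus, betas = [tau0], [beta0]
    for _ in range(k_max):
        t = taus[-1]
        t_next = 0.5 * t * (math.sqrt(t * t + 4.0) - t)
        taus.append(t_next)
        betas.append(betas[-1] / (1.0 + t_next))
    return taus, betas
```

Since each $\tau_k > 0$, the sequence $\beta_k$ is strictly decreasing, matching the diminishing smoothness used by both variants.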
\begin{figure} \caption{The convergence behavior of the two variants of Algorithm~\ref{alg:A1} on a collection of $30$ problem instances of \eqref{eq:LAD} (the strongly convex case). Left plot: uncorrelated columns in $K$, and Right plot: $50\%$ correlated columns in $K$.} \label{fig:sqrtLASSO_exp2} \end{figure} Clearly, as shown in Figure \ref{fig:sqrtLASSO_exp2}, \texttt{Alg. 1b} (i.e. corresponding to Theorem~\ref{th:convergence_1a}) significantly outperforms \texttt{Alg. 1} (corresponding to Theorem~\ref{th:convergence1}). \texttt{Alg. 1} matches the $\BigO{1/k}$ convergence rate stated in Theorem~\ref{th:convergence1} well, while \texttt{Alg. 1b} exhibits the $\BigO{1/k^2}$ convergence rate indicated by Theorem~\ref{th:convergence_1a}. Note that since $g^{*}$ in \eqref{eq:LAD} is non-strongly convex, we omit testing the result of Theorem~\ref{th:convergence_1b}. This case is rather well studied in the literature, see, e.g., \cite{Chambolle2011}. \section{Concluding Remarks}\label{sec:concl} We have developed a new variant of ASGRD introduced in \cite[Algorithm 1]{TranDinh2015b}, Algorithm~\ref{alg:A1}, that unifies three different settings: general convexity, strong convexity, and strong convexity and smoothness. We have proved the convergence of Algorithm~\ref{alg:A1} in these three settings with respect to three convergence criteria: the gap function, the primal objective residual, and the dual objective residual. Our convergence rates are optimal up to a constant factor in all cases, and the rates for the primal sequence hold at the last iterate. Our preliminary numerical experiments have shown that the theoretical convergence rates of Algorithm~\ref{alg:A1} match the actual rates observed in practice well. The proposed algorithm can easily be extended to solve composite convex problems with three or more objective terms.
It can also be customized to solve other models, including general linear and nonlinear constrained convex problems as discussed in \cite{sabach2020faster,TranDinh2015b,tran2019non}. \vskip 2mm \noindent{\bf Acknowledgments.} This work is partly supported by the Office of Naval Research under Grant No. ONR-N00014-20-1-2088 (2020 - 2023), and the Nafosted Vietnam, Grant No. 101.01-2020.06 (2020 - 2022). \appendix \section{Appendix 1: Technical Lemmas}\label{apdx:basic} We need the following technical lemmas for our convergence analysis in the main text. \begin{lemma}\label{le:g_properties}$($\cite[Lemma 10]{TranDinh2015b}$)$ Given $\beta > 0$, $\dot{y}\in\mathbb{R}^n$, and a proper, closed, and convex function $g : \mathbb{R}^n \to\mathbb{R}\cup\{+\infty\}$ with its Fenchel conjugate $g^{*}$, we define \begin{equation}\label{eq:smoothed_varphi0} g_{\beta}(u, \dot{y}) := \max_{y\in\mathbb{R}^n}\set{\iprods{u, y} - g^{*}(y) - \tfrac{\beta}{2}\norms{y - \dot{y}}^2}. \end{equation} Let $y^{*}_{\beta}(u, \dot{y})$ be the unique solution of \eqref{eq:smoothed_varphi0}. Then, the following statements hold: \begin{itemize} \item[$(a)$] $g_{\beta}(\cdot,\dot{y})$ is convex w.r.t. $u$ on $\dom{g}$ and $\frac{1}{\beta + \mu_{g^{*}}}$-smooth w.r.t. $u$ on $\dom{g}$, where $\nabla_u{g_{\beta}}(u,\dot{y}) = \textrm{prox}_{g^{*}/\beta}(\dot{y} + \frac{1}{\beta}u)$. Moreover, for any $u, \hat{u} \in\dom{g}$, we have \begin{equation}\label{eq:key_bound_apdx1} g_{\beta}(\hat{u},\dot{y}) + \iprods{\nabla{g}_{\beta}(\hat{u},\dot{y}), u - \hat{u}} \leq g_{\beta}(u,\dot{y}) - \frac{\beta + \mu_{g^{*}}}{2}\Vert \nabla_u{g_{\beta}}(\hat{u},\dot{y}) - \nabla_u{g_{\beta}}(u,\dot{y})\Vert^2. 
\end{equation} \item[$(b)$] For any $\beta > 0$, $\dot{y}\in\mathbb{R}^n$, and $u\in\dom{g}$, we have \begin{equation}\label{eq:key_bound_apdx3} \hspace{-1ex} \begin{array}{lcl} g_{\beta}(u,\dot{y}) \leq g(u) \leq g_{\beta}(u,\dot{y}) + \frac{\beta}{2} [ D_{g}(\dot{y})]^2, \ \text{where} \ D_{g}(\dot{y}) := \sup_{y\in\partial{g(u)}} \norm{y - \dot{y}}. \end{array} \hspace{-1ex} \end{equation} \item[$(c)$] For $u\in\dom{g}$ and $\dot{y}\in\mathbb{R}^n$, $g_{\beta}(u,\dot{y})$ is convex in $\beta$, and for all $\hat{\beta} \geq \beta > 0$, we have \begin{equation}\label{eq:key_bound_apdx4} g_{\beta}(u,\dot{y}) \leq g_{\hat{\beta}}(u,\dot{y}) + \big(\tfrac{\hat{\beta} - \beta}{2}\big)\norms{\nabla_u{g_{\beta}}(u,\dot{y}) - \dot{y} }^2. \hspace{-2.5ex} \end{equation} \item[$(d)$] For any $\beta > 0$, and $u, \hat{u}\in\dom{g}$, we have \begin{equation}\label{eq:key_bound_apdx5} \hspace{5ex} \begin{array}{lcl} g_{\beta}(u,\dot{y}) + \iprods{\nabla_u{g_{\beta}}(u, \dot{y}), \hat{u} - u} & \leq & \ell_{\beta}(\hat{u}, \dot{y}) - \frac{\beta}{2}\norms{\nabla_u{g_{\beta}}(u,\dot{y}) - \dot{y}}^2, \end{array} \end{equation} where $\ell_{\beta}(\hat{u}, \dot{y}) := \iprods{\hat{u}, \nabla_u{g_{\beta}}(u,\dot{y}) } - g^{*}(\nabla_u{g_{\beta}}(u,\dot{y})) \leq g(\hat{u}) - \frac{\mu_{g^{*}}}{2}\norms{\nabla_u{g_{\beta}}(u,\dot{y}) - \nabla{g}(\hat{u})}^2$ for any $\nabla{g}(\hat{u}) \in \partial{g}(\hat{u})$. \end{itemize} \end{lemma} \begin{lemma}\label{le:tech_lemma} The following statements hold. \begin{itemize} \item[$\mathrm{(a)}$] Let $\set{\tau_k}\subset (0, 1]$ be computed by $\tau_{k+1} := \frac{\tau_k}{2}\big[ (\tau_k^2 + 4)^{1/2} - \tau_k\big]$ for some $\tau_0 \in (0, 1]$. Then, we have \begin{equation*} \tau_k^2 = (1-\tau_k)\tau_{k-1}^2, \quad \frac{1}{k + 1/\tau_0} \leq \tau_k < \frac{2}{k + 2/\tau_0}, \quad\text{and}\quad \frac{1}{1 + \tau_{k-2}} \leq 1 - \tau_k \leq \frac{1}{1+\tau_{k-1}}. 
\end{equation*} Moreover, we also have \begin{equation*} \begin{array}{ll} & \Theta_{l,k} := \displaystyle\prod_{i=l}^k(1-\tau_i) = \dfrac{\tau_k^2}{\tau_{l-1}^2} \quad\text{for}\ 0\leq l\leq k, \qquad \Theta_{0,k} = \dfrac{(1-\tau_0)\tau_k^2}{\tau_0^2} \leq \dfrac{4(1-\tau_0)}{(\tau_0k+2)^2}, \\ \text{and}\quad &\dfrac{\tau_{l+1}^2}{\tau_{k+2}^2} \leq \Gamma_{l,k} := \displaystyle\prod_{i=l}^k(1+\tau_i) \leq \dfrac{\tau_l^2}{\tau_{k+1}^2} \quad\text{for} \ 0 \leq l \leq k. \end{array} \end{equation*} If we update $\beta_k := \frac{\beta_{k-1}}{1+\tau_k}$ for a given $\beta_0 > 0$, then \begin{equation*} \frac{4\beta_0\tau_0^2}{\tau_1^2[\tau_0(k+1) + 2]^2} \leq \frac{\beta_0\tau_{k+1}^2}{\tau_1^2} \leq \beta_k = \frac{\beta_0}{\Gamma_{1,k}}\leq \frac{\beta_0\tau_{k+2}^2}{\tau_2^2} \leq \frac{4\beta_0\tau_0^2}{\tau_2^2[\tau_0(k+2) + 2]^2}. \end{equation*} \item[$\mathrm{(b)}$] Let $\set{\tau_k}\subset (0, 1]$ be computed by solving $\tau_k^3 + \tau_k^2 + \tau_{k-1}^2\tau_k - \tau_{k-1}^2 = 0$ for all $k\geq 1$ and $\tau_0 := 1$. Then, we have $\frac{1}{k+1} \leq \tau_k \leq \frac{2}{k+2}$ and $\Theta_{1,k} := \prod_{i=1}^k(1-\tau_i) \leq \frac{1}{k+1}$. Moreover, if we update $\beta_k := \frac{\beta_{k-1}}{1+\tau_k}$, then $\beta_k \leq \frac{2\beta_0}{k+2}$. \end{itemize} \end{lemma} \begin{proof} The first two relations of (a) have been proved, e.g., in \cite{TranDinh2012a}. Let us prove the last inequality of (a). Note that $\frac{1}{1+\tau_{k-2}} \leq 1-\tau_k$ is equivalent to $\tau_{k-2}(1-\tau_k) \geq \tau_k$. Using $1- \tau_k = \frac{\tau_k^2}{\tau_{k-1}^2}$, this becomes $\tau_k\tau_{k-2} \geq \tau_{k-1}^2$. Utilizing $\tau_k = \frac{\tau_{k-1}}{2}\big[(\tau_{k-1}^2 + 4)^{1/2} - \tau_{k-1}\big]$, this condition is equivalent to $\tau_{k-2}^2 \geq \tau_{k-1}^2(1 + \tau_{k-2})$.
Now, since $\tau_{k-1}^2 = (1-\tau_{k-1})\tau_{k-2}^2$, the last condition becomes $1 \geq (1-\tau_{k-1})(1+\tau_{k-2})$, or equivalently, $\tau_{k-1} \leq \tau_{k-2}$, which automatically holds. To prove $1-\tau_k \leq \frac{1}{1 + \tau_{k-1}}$, we write it as $\tau_{k-1}(1-\tau_k) \leq \tau_{k}$. Using again $\tau_k^2 = (1-\tau_k)\tau_{k-1}^2$, the last inequality is equivalent to $\tau_k \leq \tau_{k-1}$, which automatically holds. The last statements of (a) are a consequence of $1-\tau_k = \frac{\tau_k^2}{\tau_{k-1}^2}$ and the previous relations. (b) We consider the function $\varphi(\tau) := \tau^3 + \tau^2 + \tau_{k-1}^2\tau - \tau_{k-1}^2$. Clearly, $\varphi(0) = -\tau_{k-1}^2 < 0$ and $\varphi(1) = 2 > 0$. Moreover, $\varphi'(\tau) = 3\tau^2 + 2\tau + \tau_{k-1}^2 > 0$ for $\tau \in [0, 1]$. Hence, the cubic equation $\varphi(\tau) = 0$ has a unique solution $\tau_k \in (0, 1)$. Therefore, $\sets{\tau_k}_{k\geq 0}$ is well-defined. Next, since $\tau_k^3 + \tau_k^2 + \tau_k\tau_{k-1}^2 - \tau_{k-1}^2 = 0$ is equivalent to $\tau_{k-1}^2(1-\tau_k) = \tau_k^2(1+\tau_k)$, we have $\tau_{k-1}^2(1-\tau_k) = \tau_k^2(1+\tau_k) \leq \frac{\tau_k^2}{1-\tau_k}$. This inequality yields $\tau_k \geq \frac{\tau_{k-1}}{1 + \tau_{k-1}}$. By induction and $\tau_0 = 1$, we can easily show that $\tau_k \geq \frac{1}{k+1}$. On the other hand, $\tau_{k-1}^2(1-\tau_k) = \tau_k^2(1+\tau_k) \geq \tau_k^2$. From this inequality, by a similar argument to the proof of statement (a), we can also easily show that $\tau_k \leq \frac{2}{k+2}$. Hence, we have $\frac{1}{k+1} \leq \tau_k \leq \frac{2}{k+2}$ for all $k\geq 0$. Finally, since $\tau_k \geq \frac{1}{k+1}$, we have $\prod_{i=1}^k(1-\tau_i) \leq \prod_{i=1}^k\left(1 - \frac{1}{i+1}\right) = \frac{1}{k+1}$. Similarly, $\prod_{i=1}^k(1+\tau_i) \geq \prod_{i=1}^k\left(1 + \frac{1}{i+1}\right) = \frac{k+2}{2}$.
Therefore, since $\beta_k = \frac{\beta_{k-1}}{1+\tau_k}$, we have $\beta_k = \beta_0\prod_{i=1}^k\frac{1}{1+\tau_i} \leq \frac{2\beta_0}{k+2}$. $\square$ \end{proof} \begin{lemma}\label{le:tech_lemma2}$($\cite[Lemma 4]{ZhuLiuTran2020} and \cite{TranDinh2015b}$)$ The following statements hold. \begin{itemize} \item[$\mathrm{(a)}$] For any $u, v, w\in\mathbb{R}^p$ and $t_1, t_2 \in \mathbb{R}$ such that $t_1 + t_2 \neq 0$, we have \begin{equation*} t_1\norms{u - w}^2 + t_2\norms{v - w}^2 = (t_1 + t_2)\norms{w - \tfrac{1}{t_1+t_2}(t_1u + t_2v)}^2 + \tfrac{t_1t_2}{t_1+t_2}\norms{u-v}^2. \end{equation*} \item[$\mathrm{(b)}$] For any $\tau \in (0, 1)$, $\hat{\beta}, \beta > 0$, $w, z\in\mathbb{R}^p$, we have \begin{equation*} \begin{array}{lcl} \beta(1-\tau) \norms{w - z}^2 + \beta\tau\norms{w}^2 - (1-\tau)(\hat{\beta} - \beta)\norms{z}^2 & = & \beta\norms{w - (1-\tau)z}^2 \\ && + {~} (1-\tau)\big[\tau\beta - (\hat{\beta} - \beta) \big]\norms{z}^2. \end{array} \end{equation*} \end{itemize} \end{lemma} The following lemma is a key step to address the strongly convex case of $f$ in \eqref{eq:saddle_point_prob}. \begin{lemma}\label{le:element_est12} Given $L_k > 0$, $\mu_f > 0$, and $\tau_k \in (0, 1)$, let $m_k := \frac{L_k + \mu_f}{L_{k-1} + \mu_f}$ and $a_k := \frac{L_k}{L_{k-1} + \mu_f}$. Assume that the following two conditions hold: \begin{equation}\label{eq:element_cond2} \arraycolsep=0.2em \left\{\begin{array}{llcl} &(1-\tau_k) \big[ \tau_{k-1}^2 + m_k\tau_k \big]&\geq & a_k\tau_k \\ &m_k\tau_k\tau_{k-1}^2 + m_k^2\tau_k^2 &\geq & a_k\tau_{k-1}^2. \end{array}\right. \end{equation} Let $\set{x^k}$ be a given sequence in $\mathbb{R}^p$.
We define $\hat{x}^k := x^k + \frac{1}{\omega_k}(x^k - x^{k-1})$, where $\omega_k$ is chosen such that \begin{equation}\label{eq:x_extra_step} \max\set{\frac{\tau_{k-1} + \sqrt{\tau_{k-1}^2 + 4a_k}}{2(1-\tau_{k-1})}, \frac{a_k\tau_k}{(1-\tau_k)(1-\tau_{k-1})\tau_{k-1}}} \leq \omega_k \leq \frac{\tau_{k-1}^2 + m_k\tau_k}{\tau_{k-1}(1-\tau_{k-1})}. \end{equation} Then, $\omega_k$ is well-defined, and for any $x \in \mathbb{R}^p$, we have \begin{equation}\label{eq:element_est12} \hspace{-2ex} \arraycolsep=0.2em \begin{array}{ll} &L_k\tau_k^2\norms{\frac{1}{\tau_k}[\hat{x}^k - (1-\tau_k)x^k] - x}^2 - \mu_f \tau_k(1-\tau_k)\norms{x^k - x}^2 \\ &\qquad \leq {~} (1-\tau_k) \left(L_{k-1} + \mu_f\right)\tau_{k-1}^2\norms{\frac{1}{\tau_{k-1}}[x^k - (1-\tau_{k-1})x^{k-1}] - x}^2. \end{array} \hspace{-2ex} \end{equation} \end{lemma} \begin{proof} Firstly, from the definition $\hat{x}^k := x^k + \frac{1}{\omega_k}(x^k - x^{k-1})$ of $\hat{x}^k$, we have $\omega_k(\hat{x}^k - x^k) = x^k - x^{k-1}$. Hence, we can show that \begin{equation*} \arraycolsep=0.2em \begin{array}{lcl} \tau_{k-1}^2\norms{\frac{1}{\tau_{k-1}}[x^k - (1-\tau_{k-1})x^{k-1}] - x}^2 &= & \norms{(1-\tau_{k-1})(x^k - x^{k-1}) + \tau_{k-1}(x^k - x)}^2 \\ &= & \norms{(1-\tau_{k-1})\omega_k(\hat{x}^k - x^{k}) + \tau_{k-1}(x^k - x)}^2 \\ &= & \omega_k^2(1-\tau_{k-1})^2\norms{\hat{x}^k - x^k}^2 + \tau_{k-1}^2\norms{x^k - x}^2 \\ && + {~} 2\omega_k(1-\tau_{k-1})\tau_{k-1}\iprods{\hat{x}^k - x^k, x^k - x}. \end{array} \end{equation*} Similarly, we also have \begin{equation*} \arraycolsep=0.2em \begin{array}{lcl} \tau_k^2\norms{\frac{1}{\tau_k}[\hat{x}^k - (1-\tau_k)x^k] - x}^2 & = & \norms{\hat{x}^k - x^k}^2 + \tau_k^2\norms{x^k - x}^2 + 2\tau_k\iprods{\hat{x}^k - x^k, x^k - x}.
\end{array} \end{equation*} Utilizing the last two expressions, \eqref{eq:element_est12} can be equivalently rewritten as \begin{equation*} \arraycolsep=0.2em \begin{array}{lcl} \mathcal{T}_{[1]} &:= & 2\left[ \left(L_{k-1} + \mu_f\right)(1-\tau_k)(1-\tau_{k-1})\tau_{k-1}\omega_k - L_k\tau_k \right] \iprods{\hat{x}^k - x^k, x - x^k} \\ &\leq & \left[ \left(L_{k-1} + \mu_f \right)(1-\tau_k)(1-\tau_{k-1})^2\omega_k^2 - L_{k}\right] \norms{\hat{x}^k - x^k}^2 \\ && + {~} \left[ \left(L_{k-1} + \mu_f\right)(1-\tau_k)\tau_{k-1}^2 - L_k\tau_k^2 + \mu_f\tau_k(1-\tau_k)\right] \norms{x^k - x}^2. \end{array} \end{equation*} Now, let us denote \begin{equation*} \arraycolsep=0.2em \left\{\begin{array}{lcl} c_1 &:= & \left(L_{k-1} + \mu_f\right)(1-\tau_k)(1-\tau_{k-1})\tau_{k-1}\omega_k - L_k\tau_k \\ c_2 &:= & \left(L_{k-1} + \mu_f\right)(1-\tau_k)(1-\tau_{k-1})^2\omega_k^2 - L_k \\ c_3 &:= & \left(L_{k-1} + \mu_f\right)(1-\tau_k)\tau_{k-1}^2 - L_k\tau_k^2 + \mu_f(1-\tau_k)\tau_k. \end{array}\right. \end{equation*} Then, \eqref{eq:element_est12} is equivalent to \begin{equation}\label{eq:need_to_prove} 2c_1\iprods{\hat{x}^k - x^k, x - x^k} \leq c_2\norms{\hat{x}^k - x^k}^2 + c_3\norms{x - x^k}^2. \end{equation} Secondly, we need to guarantee that $c_1 \geq 0$. This condition holds if we choose $\omega_k$ such that \begin{equation}\label{eq:upper_bound_omegak2} \omega_k \geq \frac{a_k\tau_k}{(1-\tau_k)(1-\tau_{k-1})\tau_{k-1}}. \end{equation} Thirdly, we also need to guarantee $c_2 \geq c_1$, which is equivalent to \begin{equation*} c_2 - c_1 = \left(L_{k-1} + \mu_f\right)(1-\tau_k)(1-\tau_{k-1})\left[(1-\tau_{k-1})\omega_k^2 - \tau_{k-1}\omega_k \right] - L_k(1-\tau_k) \geq 0. \end{equation*} This condition holds if \begin{equation}\label{eq:upper_bound_omegak1} \omega_k \geq \frac{\tau_{k-1} + \sqrt{\tau_{k-1}^2 + 4a_k}}{2(1-\tau_{k-1})}.
\end{equation} Similarly, we also need to guarantee $c_3 \geq c_1$, which is equivalent to \begin{equation*} c_3 - c_1 = \left(L_{k-1} + \mu_f\right)(1-\tau_k)\left[\tau_{k-1}^2 - (1-\tau_{k-1})\tau_{k-1}\omega_k \right] + (L_k + \mu_f)\tau_k(1-\tau_k) \geq 0. \end{equation*} This condition holds if \begin{equation}\label{eq:lower_bound_omegak} \omega_k \leq \frac{\tau_{k-1}^2 + m_k\tau_k}{\tau_{k-1}(1-\tau_{k-1})}. \end{equation} Combining \eqref{eq:upper_bound_omegak2}, \eqref{eq:upper_bound_omegak1}, and \eqref{eq:lower_bound_omegak}, we obtain \begin{equation*} \max\set{\frac{\tau_{k-1} + \sqrt{\tau_{k-1}^2 + 4a_k}}{2(1-\tau_{k-1})}, \frac{a_k\tau_k}{(1-\tau_k)(1-\tau_{k-1})\tau_{k-1}}} \leq \omega_k \leq \frac{\tau_{k-1}^2 + m_k\tau_k}{\tau_{k-1}(1-\tau_{k-1})}, \end{equation*} which is exactly \eqref{eq:x_extra_step}. Here, under the condition \eqref{eq:element_cond2}, the left-hand side of the last expression is less than or equal to the right-hand side. Therefore, $\omega_k$ is well-defined. Finally, under the choice of $\omega_k$ as in \eqref{eq:x_extra_step}, we have $c_2 \geq c_1 \geq 0$ and $c_3\geq c_1 \geq 0$. Hence, \eqref{eq:need_to_prove} holds, which is also equivalent to \eqref{eq:element_est12}. $\square$ \end{proof} \section{Appendix 2: Technical Proof of Lemmas \ref{le:descent_pro} and \ref{le:maintain_gap_reduction1} in Section~\ref{sec:alg_scheme}}\label{apdx:sec:A2} This section provides the full proof of Lemma~\ref{le:descent_pro} and Lemma~\ref{le:maintain_gap_reduction1} in the main text. \subsection{The proof of Lemma~\ref{le:descent_pro}: Key estimate of the primal-dual step~\eqref{eq:prox_grad_step0}}\label{apdx:le:descent_pro} \begin{proof} From the first line of \eqref{eq:prox_grad_step0} and Lemma~\ref{le:g_properties}(a), we have $\nabla_ug_{\beta_k}(K\hat{x}^k, \dot{y}) = K^{\top}y^{k+1}$.
Now, from the second line of \eqref{eq:prox_grad_step0}, we also have \begin{equation*} 0 \in \partial{f}(x^{k+1}) + L_k(x^{k+1} - \hat{x}^k) + K^{\top}\nabla_u{g_{\beta_k}}(K\hat{x}^k, \dot{y}). \end{equation*} Combining this inclusion and the $\mu_f$-convexity of $f$, for any $x\in\dom{f}$, we get \begin{equation*} \begin{array}{lcl} f(x^{k+1}) & \leq & f(x) + \iprods{\nabla_u{g_{\beta_k}}(K\hat{x}^k, \dot{y}), K(x - x^{k+1})} + L_k\iprods{x^{k+1} - \hat{x}^k, x - x^{k+1}} \\ && - {~} \frac{\mu_f}{2}\norms{x^{k+1} - x}^2. \end{array} \end{equation*} Since $g_{\beta}(\cdot, \dot{y})$ is $\frac{1}{\beta + \mu_{g^{*}}}$-smooth by Lemma~\ref{le:g_properties}(a), for any $x\in\dom{f}$, we have \begin{equation*} \arraycolsep=0.2em \begin{array}{lcl} g_{\beta_k}(Kx^{k+1}, \dot{y}) & \leq & g_{\beta_k}(K\hat{x}^k, \dot{y}) + \iprods{\nabla_u{g}_{\beta_k}(K\hat{x}^k, \dot{y}), K(x^{k+1} - \hat{x}^k)} \\ && + {~} \frac{1}{2(\beta_k + \mu_{g^{*}})}\Vert K(x^{k+1} - \hat{x}^k)\Vert^2 \\ & = & g_{\beta_k}(K\hat{x}^k, \dot{y}) + \iprods{\nabla_ug_{\beta_k}(K\hat{x}^k, \dot{y}), K(x - \hat{x}^k)} \\ && - {~} \iprods{\nabla_u{g}_{\beta_k}(K\hat{x}^k, \dot{y}), K(x - x^{k+1})} \\ && + {~} \frac{1}{2(\mu_{g^{*}} + \beta_k)}\Vert K(x^{k+1} - \hat{x}^k)\Vert^2. \end{array} \end{equation*} Now, combining the last two estimates, we get \begin{equation}\label{eq:proof_est1} \hspace{-1ex} \arraycolsep=0.2em \begin{array}{lcl} f(x^{k+1}) + g_{\beta_k}(Kx^{k+1}, \dot{y}) & \leq & f(x) + g_{\beta_k}(K\hat{x}^k, \dot{y}) + \iprods{\nabla_ug_{\beta_k}(K\hat{x}^k, \dot{y}), K(x - \hat{x}^k)} \\ && + {~} L_k\iprods{x^{k+1} - \hat{x}^k, x - \hat{x}^k} - L_k\norms{x^{k+1} - \hat{x}^k}^2 \\ && + {~} \frac{1}{2(\mu_{g^{*}} + \beta_k)}\Vert K(x^{k+1} - \hat{x}^k)\Vert^2 - \frac{\mu_f}{2}\norms{x - x^{k+1}}^2. 
\end{array} \hspace{-5ex} \end{equation} Using Lemma~\ref{le:g_properties}(a) again, we have \begin{equation}\label{eq:proof_est2} \arraycolsep=0.2em \hspace{-2ex} \begin{array}{lcl} \ell_{\beta_k}(x^k, \dot{y}) &:= & g_{\beta_k}(K\hat{x}^k, \dot{y}) + \iprods{\nabla_ug_{\beta_k}(K\hat{x}^k, \dot{y}), K(x^k - \hat{x}^k)} \\ &\leq & g_{\beta_k}(Kx^k, \dot{y}) - \frac{\beta_k+\mu_{g^{*}}}{2}\norms{\nabla_ug_{\beta_k}(K\hat{x}^k, \dot{y}) - \nabla_ug_{\beta_k}(Kx^k, \dot{y})}^2. \end{array} \hspace{-2ex} \end{equation} Substituting $x := x^k$ into \eqref{eq:proof_est1}, multiplying the result by $1-\tau_k$, adding it to \eqref{eq:proof_est1} multiplied by $\tau_k$, and then using \eqref{eq:proof_est2}, we can derive \begin{equation}\label{eq:proof_est3} \hspace{-1ex} \arraycolsep=0.2em \begin{array}{lcl} F_{\beta_k}(x^{k+1}, \dot{y}) &:= & f(x^{k+1}) + g_{\beta_k}(Kx^{k+1}, \dot{y}) \\ & \leq & (1 - \tau_k)[ f(x^k) + g_{\beta_k}(Kx^k, \dot{y})] + \tau_k\left[f(x) + \ell_{\beta_k}(x, \dot{y}) \right] \\ && - {~} L_k\norms{x^{k+1} - \hat{x}^k}^2 + \frac{1}{2(\mu_{g^{*}}+\beta_k)}\Vert K(x^{k+1} - \hat{x}^k)\Vert^2 \\ && + {~} L_k\iprods{x^{k+1} - \hat{x}^k, \tau_kx - \hat{x}^k + (1-\tau_k)x^k} \\ && - {~} \frac{\mu_f}{2}\left[(1-\tau_k)\norms{x^{k+1} - x^k}^2 + \tau_k\norms{x - x^{k+1}}^2\right] \\ && - {~} \frac{(1-\tau_k)(\beta_k+\mu_{g^{*}})}{2}\norms{\nabla_ug_{\beta_k}(K\hat{x}^k, \dot{y}) - \nabla_ug_{\beta_k}(Kx^k, \dot{y})}^2. \end{array} \hspace{-5ex} \end{equation} From Lemma~\ref{le:tech_lemma2}(a), we can easily show that \begin{equation*} \begin{array}{lcl} (1-\tau_k)\norms{x^{k+1} - x^k}^2 + \tau_k\norms{x^{k+1} - x}^2 & = & \tau_k^2\norms{\tfrac{1}{\tau_k}[ x^{k+1} - (1-\tau_k)x^k] - x}^2 \\ && + {~} \tau_k(1-\tau_k)\norms{x - x^k}^2.
\end{array} \end{equation*} We also have the following elementary relation: \begin{equation*} \arraycolsep=0.1em \begin{array}{lcl} \iprods{x^{k+1} - \hat{x}^k, \tau_k x - [\hat{x}^k - (1-\tau_k)x^k]} &= & \frac{\tau_k^2}{2}\norms{\tfrac{1}{\tau_k}[\hat{x}^k - (1-\tau_k)x^k] - x}^2 + \frac{1}{2}\norms{x^{k+1} - \hat{x}^k}^2 \\ && - {~} \frac{\tau_k^2}{2}\norms{\tfrac{1}{\tau_k}[x^{k+1} - (1-\tau_k)x^k] - x}^2. \end{array} \end{equation*} Substituting the last two expressions into \eqref{eq:proof_est3}, we obtain \begin{equation}\label{eq:proof4} \arraycolsep=0.2em \begin{array}{lcl} F_{\beta_k}(x^{k+1}, \dot{y}) & \leq & (1 - \tau_k) F_{\beta_k}(x^k, \dot{y}) + \tau_k \left[ f(x) + \ell_{\beta_k}(x, \dot{y}) \right] \\ && + {~} \frac{L_k\tau_k^2}{2} \norms{\tfrac{1}{\tau_k}[\hat{x}^k - (1-\tau_k)x^k] - x}^2 \\ && - {~} \frac{\tau_k^2}{2}\left( L_k + \mu_f \right) \norms{\tfrac{1}{\tau_k}[x^{k+1} - (1-\tau_k)x^k] - x}^2 \\ && - {~} \frac{(1-\tau_k)(\mu_{g^{*}} + \beta_k)}{2}\norms{\nabla_ug_{\beta_k}(K\hat{x}^k, \dot{y}) - \nabla_ug_{\beta_k}(Kx^k, \dot{y})}^2 \\ && - {~} \frac{L_k}{2}\norms{x^{k+1} - \hat{x}^k}^2 + \frac{1}{2(\mu_{g^{*}}+\beta_k)}\Vert K(x^{k+1} - \hat{x}^k)\Vert^2 \\ && - {~} \frac{\mu_f(1-\tau_k)\tau_k}{2}\norms{x - x^k}^2. \end{array} \hspace{-2ex} \end{equation} On the one hand, by \eqref{eq:key_bound_apdx4} of Lemma~\ref{le:g_properties}, we have \begin{equation*} F_{\beta_k}(x^k, \dot{y}) \leq F_{\beta_{k-1}}(x^k, \dot{y}) + \frac{(\beta_{k-1} - \beta_k)}{2}\norms{\nabla_ug_{\beta_{k}}(Kx^k, \dot{y}) - \dot{y}}^2. \end{equation*} On the other hand, by \eqref{eq:key_bound_apdx5} of Lemma~\ref{le:g_properties}, we get \begin{equation*} f(x) + \ell_{\beta_k}(x, \dot{y}) \leq \mathcal{L}(x, y^{k+1}) - \frac{\beta_k}{2}\norms{ \nabla_ug_{\beta_{k}}(K\hat{x}^k, \dot{y}) - \dot{y}}^2, \end{equation*} where $\mathcal{L}(x, y^{k+1}) := f(x) + \iprods{Kx, y^{k+1}} - g^{*}(y^{k+1})$ is the Lagrange function in \eqref{eq:saddle_point_prob}.
Now, substituting the last two inequalities into \eqref{eq:proof4}, and using Lemma~\ref{le:tech_lemma2}(b) with $w := \nabla_ug_{\beta_k}(K\hat{x}^k, \dot{y}) - \dot{y}$ and $z := \nabla_ug_{\beta_k}(Kx^k, \dot{y}) - \dot{y}$, we arrive at \begin{equation*} \arraycolsep=0.2em \begin{array}{lcl} F_{\beta_k}(x^{k+1}, \dot{y}) & \leq & (1 - \tau_k) F_{\beta_{k-1}}(x^k, \dot{y}) + \tau_k\mathcal{L}(x, y^{k+1}) + \frac{L_k\tau_k^2}{2} \norms{\tfrac{1}{\tau_k}[\hat{x}^k - (1-\tau_k)x^k] - x}^2 \\ && - {~} \frac{\tau_k^2}{2}\left( L_k + \mu_f \right) \norms{\tfrac{1}{\tau_k}[x^{k+1} - (1-\tau_k)x^k] - x}^2 - \frac{\mu_f(1-\tau_k)\tau_k}{2}\norms{x - x^k}^2 \\ && - {~} \frac{L_k}{2}\norms{x^{k+1} - \hat{x}^k}^2 + \frac{1}{2(\mu_{g^{*}}+\beta_k)}\Vert K(x^{k+1} - \hat{x}^k)\Vert^2 \\ && - {~} \frac{(1-\tau_k)}{2}\left[ \tau_k\beta_k - (\beta_{k-1} - \beta_k)\right]\norms{ \nabla_ug_{\beta_k}(Kx^k, \dot{y}) - \dot{y}}^2 \\ && - {~} \frac{\beta_k}{2}\norms{\nabla_ug_{\beta_k}(K\hat{x}^k, \dot{y}) - \dot{y} - (1-\tau_k)\left[ \nabla_ug_{\beta_k}(Kx^k, \dot{y}) - \dot{y} \right]}^2 \\ && - {~} \frac{(1-\tau_k)\mu_{g^{*}}}{2}\norms{\nabla_ug_{\beta_k}(K\hat{x}^k, \dot{y}) - \nabla_ug_{\beta_k}(Kx^k, \dot{y})}^2. \end{array} \end{equation*} By dropping the last two nonpositive terms in the last inequality, we obtain \eqref{eq:key_est00_a}. $\square$ \end{proof} \subsection{The proof of Lemma~\ref{le:maintain_gap_reduction1}: Recursive estimate of the Lyapunov function}\label{apdx:le:maintain_gap_reduction1} \begin{proof} First, from the last line $\tilde{y}^{k+1} = (1-\tau_k)\tilde{y}^k + \tau_ky^{k+1}$ of \eqref{eq:prox_grad_scheme}, and the $\mu_{g^{*}}$-convexity of $g^{*}$, we have \begin{equation*} \begin{array}{lcl} \mathcal{L}(x, \tilde{y}^{k+1}) & := & f(x) + \iprods{Kx, \tilde{y}^{k+1}} - g^{*}(\tilde{y}^{k+1}) \\ & \geq & (1-\tau_k)\mathcal{L}(x, \tilde{y}^k) + \tau_k\mathcal{L}(x, y^{k+1}) + \frac{\mu_{g^{*}}\tau_k(1-\tau_k)}{2}\norms{y^{k+1} - \tilde{y}^k}^2. 
\end{array} \end{equation*} Hence, $\tau_k\mathcal{L}(x, y^{k+1}) \leq \mathcal{L}(x, \tilde{y}^{k+1}) - (1-\tau_k)\mathcal{L}(x, \tilde{y}^k) - \frac{\mu_{g^{*}}\tau_k(1-\tau_k)}{2}\norms{y^{k+1} - \tilde{y}^k}^2$. Substituting this estimate into \eqref{eq:key_est00_a} and dropping the term $- \frac{\mu_{g^{*}}\tau_k(1-\tau_k)}{2}\norms{y^{k+1} - \tilde{y}^k}^2$, we can derive \begin{equation}\label{eq:key_est00_proof1} \hspace{-2ex} \arraycolsep=0.1em \begin{array}{lcl} F_{\beta_k}(x^{k+1},\dot{y}) & \leq & (1 - \tau_k) F_{\beta_{k-1}}(x^k, \dot{y}) + \mathcal{L}(x, \tilde{y}^{k+1}) - (1-\tau_k)\mathcal{L}(x, \tilde{y}^k) \\ && + {~} \frac{L_k\tau_k^2}{2} \big\Vert \tfrac{1}{\tau_k}[\hat{x}^k - (1-\tau_k)x^k] - x \big\Vert^2 \\ && - {~} \frac{\tau_k^2}{2}\left(L_k + \mu_f\right) \big\Vert \tfrac{1}{\tau_k}[x^{k+1} - (1-\tau_k)x^k] - x \big\Vert^2 \\ && - {~} \frac{L_k}{2}\norms{x^{k+1} - \hat{x}^k}^2 + \frac{1}{2(\mu_{g^{*}} + \beta_k)}\Vert K(x^{k+1} - \hat{x}^k)\Vert^2 \\ && - {~} \frac{\mu_f\tau_k(1-\tau_k)}{2}\norms{x^k - x}^2. \end{array} \hspace{-2ex} \end{equation} Now, it is straightforward to show that the condition \eqref{eq:param_cond} is equivalent to the condition \eqref{eq:element_cond2} of Lemma~\ref{le:element_est12}. In addition, we choose $\eta_k = \frac{1}{\omega_k}$ in our update \eqref{eq:Lk_m_k}, where $\omega_k := \frac{\tau_{k-1}^2 + m_k\tau_k}{\tau_{k-1}(1-\tau_{k-1})}$, which is the upper bound of \eqref{eq:x_extra_step}. Hence, \eqref{eq:x_extra_step} automatically holds. Using \eqref{eq:element_est12}, we have \begin{equation*} \begin{array}{lcl} \mathcal{T}_{[2]} &:= & \frac{L_k\tau_k^2}{2} \big\Vert \tfrac{1}{\tau_k}[\hat{x}^k - (1-\tau_k)x^k] - x \big\Vert^2 - \frac{\mu_f\tau_k(1-\tau_k)}{2}\norms{x^k - x}^2 \\ &\leq & \frac{\tau_{k-1}^2}{2} (1-\tau_k)\left(L_{k-1} + \mu_f\right)\big\Vert \tfrac{1}{\tau_{k-1}}[x^{k} - (1-\tau_{k-1})x^{k-1}] - x \big\Vert^2.
\end{array} \end{equation*} Moreover, $\frac{1}{2(\mu_{g^{*}} + \beta_k)}\Vert K(x^{k+1} - \hat{x}^k)\Vert^2 \leq \frac{\norms{K}^2}{2(\mu_{g^{*}} + \beta_k)}\Vert x^{k+1} - \hat{x}^k\Vert^2 = \frac{L_k}{2}\norms{x^{k+1} - \hat{x}^k}^2$ due to the definition of $L_k$ in \eqref{eq:Lk_m_k}. Substituting these two estimates into \eqref{eq:key_est00_proof1}, and utilizing the definition \eqref{eq:lyapunov_func1} of $\mathcal{V}_k$, we obtain \eqref{eq:key_est1_ncvx}. $\square$ \end{proof} \end{document}
\begin{document} \title{Fixed points of $n$-valued maps, the fixed point property and the case of surfaces~--~a braid approach} \author{DACIBERG~LIMA~GON\c{C}ALVES\\ Departamento de Matem\'atica - IME-USP,\\ Caixa Postal~66281~-~Ag.~Cidade de S\~ao Paulo,\\ CEP:~05314-970 - S\~ao Paulo - SP - Brazil.\\ e-mail:~\url{[email protected]}\vspace*{4mm}\\ JOHN~GUASCHI\\ Normandie Universit\'e, UNICAEN,\\ Laboratoire de Math\'ematiques Nicolas Oresme UMR CNRS~\textup{6139},\\ CS 14032, 14032 Caen Cedex 5, France.\\ e-mail:~\url{[email protected]}} \maketitle \begin{abstract} \noindent We study the fixed point theory of $n$-valued maps of a space $X$ using the fixed point theory of maps between $X$ and its configuration spaces. We give some general results to decide whether an $n$-valued map can be deformed to a fixed point free $n$-valued map. In the case of surfaces, we provide an algebraic criterion in terms of the braid groups of $X$ to study this problem. If $X$ is either the $k$-dimensional ball or an even-dimensional real or complex projective space, we show that the fixed point property holds for $n$-valued maps for all $n\geq 1$, and we prove the same result for even-dimensional spheres for all $n\geq 2$. If $X$ is the $2$-torus, we classify the homotopy classes of $2$-valued maps in terms of the braid groups of $X$. We do not currently have a complete characterisation of the homotopy classes of split $2$-valued maps of the $2$-torus that contain a fixed point free representative, but we give an infinite family of such homotopy classes. \end{abstract} \section{Introduction}\label{sec:intro} Multifunctions and their fixed point theory have been studied for a number of years, see for example the books~\cite{Be,Gor}, where fairly general classes of multifunctions and spaces are considered. 
Continuous $n$-valued functions are of particular interest, and more information about their fixed point theory on finite complexes may be found in~\cite{Brr1,Brr2,Brr3,Brr4,Brr5,Brr6,Bet1,Bet2,Sch0,Sch1,Sch2}. In this paper, we will concentrate our attention on the case of metric spaces, and notably that of surfaces. In all of what follows, $X$ and $Y$ will be topological spaces, and $\phi\colon\thinspace X \multimap Y$ will be a multivalued function, \emph{i.e.}\ a function that to each $x\in X$ associates a non-empty subset $\phi(x)$ of $Y$. Following the notation and the terminology of the above-mentioned papers, a multifunction $\phi\colon\thinspace X \multimap Y$ is \emph{upper semi-continuous} if for all $x\in X$, $\phi(x)$ is closed, and given an open set $V$ in $Y$, the set $\set{x\in X}{\phi(x)\subset V}$ is open in $X$, is \emph{lower semi-continuous} if the set $\set{x \in X}{\phi(x)\cap V \neq \varnothing}$ is open in $X$, and is \emph{continuous} if it is both upper and lower semi-continuous. Let $I$ denote the unit interval $[0,1]$. We recall the definitions of (split) $n$-valued maps. \begin{defns} Let $X$ and $Y$ be topological spaces, and let $n\in \ensuremath{\mathbb N}$. \begin{enumerate} \item An \emph{$n$-valued map} (or multimap) $\phi\colon\thinspace X \multimap Y$ is a continuous multifunction that to each $x\in X$ assigns an unordered subset of $Y$ of cardinality exactly $n$. \item A \emph{homotopy} between two $n$-valued maps $\phi_1,\phi_2\colon\thinspace X \multimap Y$ is an $n$-valued map $H\colon\thinspace X\times I \multimap Y$ such that $\phi_1=H ( \cdot , 0)$ and $\phi_2=H ( \cdot , 1)$. \end{enumerate} \end{defns} \begin{defn}[\cite{Sch0}] An $n$-valued function $\phi\colon\thinspace X \multimap Y$ is said to be a \emph{split $n$-valued map} if there exist single-valued maps $f_1, f_2, \ldots, f_n\colon\thinspace X \ensuremath{\longrightarrow} Y$ such that $\phi(x)=\brak{f_1(x),\ldots,f_n(x)}$ for all $x\in X$.
This being the case, we shall write $\phi=\brak{f_1,\ldots,f_n}$. Let $\splitmap{X}{Y}{n}$ denote the set of split $n$-valued maps between $X$ and $Y$. \end{defn} \emph{A priori}, $\phi\colon\thinspace X \multimap Y$ is just an $n$-valued function, but if it is split then it is continuous by \repr{multicont} in the Appendix, which justifies the use of the word `map' in the above definition. Partly for this reason, split $n$-valued maps play an important r\^ole in the theory. We now recall the notion of coincidence of a pair $(\phi, f)$ where $\phi$ is an $n$-valued map and $f\colon\thinspace X \ensuremath{\longrightarrow} Y$ is a single-valued map (meaning continuous)~\emph{cf.}~\cite{Brr5}. Let $\ensuremath{\operatorname{\text{Id}}}_{X}\colon\thinspace X \ensuremath{\longrightarrow} X$ denote the identity map of $X$. \begin{defns} Let $\phi\colon\thinspace X\multimap Y$ be an $n$-valued map, and let $f\colon\thinspace X \ensuremath{\longrightarrow} Y$ be a single-valued map. The set of coincidences of the pair $(\phi, f)$ is denoted by $\operatorname{\text{Coin}}(\phi, f)=\set{x\in X}{f(x)\in \phi(x)}$. If $X=Y$ and $f=\ensuremath{\operatorname{\text{Id}}}_X$ then $\operatorname{\text{Coin}}(\phi, \ensuremath{\operatorname{\text{Id}}}_X)=\set{x\in X}{x\in \phi(x)}$ is called the \emph{fixed point set} of $\phi$, and will be denoted by $\operatorname{\text{Fix}}(\phi)$. If $f$ is the constant map $c_{y_0}$ at a point $y_0\in Y$ then $\operatorname{\text{Coin}}(\phi, c_{y_0})=\set{x\in X}{y_0\in \phi(x)}$ is called the set of \emph{roots} of $\phi$ at $y_0$. \end{defns} Recall that a space $X$ is said to have the \emph{fixed point property} if any self-map of $X$ has a fixed point. This notion may be generalised to $n$-valued maps as follows. \begin{defn}\label{fppro} If $n\in \ensuremath{\mathbb N}$, a space $X$ is said to have the \emph{fixed point property} for $n$-valued maps if any $n$-valued map $\phi\colon\thinspace X \multimap X$ has a fixed point. 
\end{defn} If $n=1$ then we obtain the classical notion of the fixed point property. It is well known that the fixed point theory of surfaces is more complicated than that of manifolds of higher dimension. This is also the case for $n$-valued maps. A number of results for single-valued maps of manifolds of dimension at least three may be generalised to the setting of $n$-valued maps, see for example the results of Schirmer from the 1980s~\cite{Sch0,Sch1,Sch2}. In dimension one or two, the situation is more complex, and has only been analysed within the last ten years or so; see~\cite{Brr1} for the study of $n$-valued maps of the circle. The papers~\cite{Brr4,Brr6} illustrate some of the difficulties that occur when the manifold is the $2$-torus $\ensuremath{\mathbb{T}^{2}}$. Our expectation is that the case of surfaces of negative Euler characteristic will be much more involved. In this paper, we explore the fixed point property for $n$-valued maps, and we extend to this setting the famous result of L.~E.~J.~Brouwer~\cite{Bru} that every self-map of the disc has a fixed point. We will also develop some tools to decide whether an $n$-valued map can be deformed to a fixed point free $n$-valued map, and we give a partial classification of those split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$ that can be deformed to fixed point free $2$-valued maps. Our approach to the study of fixed point theory of $n$-valued maps makes use of the homotopy theory of configuration spaces. It is probable that these ideas can also be adapted to coincidence theory. This viewpoint is fairly general. It helps us to understand the theory, and provides some means to perform (not necessarily easy) computations in general. Nevertheless, for some specific situations, such as for surfaces of non-negative Euler characteristic, these calculations are often tractable.
To explain our approach, let $F_{n}(Y)$ denote the \emph{$n\ensuremath{\up{th}}$ (ordered) configuration space} of a space $Y$, defined by: \begin{equation*} F_n(Y)=\setr{(y_1,\ldots,y_n)}{\text{$y_i\in Y$, and $y_i\neq y_j$ if $i\neq j$}}. \end{equation*} Configuration spaces play an important r\^ole in several branches of mathematics and have been extensively studied; see~\cite{CG,FH} for example. The symmetric group $S_n$ on $n$ elements acts freely on $F_n(Y)$ by permuting coordinates. The corresponding quotient space, known as the \emph{$n\ensuremath{\up{th}}$ (unordered) configuration space of $Y$}, will be denoted by $D_n(Y)$, and the quotient map will be denoted by $\pi \colon\thinspace F_{n}(Y) \ensuremath{\longrightarrow} D_{n}(Y)$. The \emph{$n\ensuremath{\up{th}}$ pure braid group $P_n(Y)$} (respectively the \emph{$n\ensuremath{\up{th}}$ braid group $B_n(Y)$}) of $Y$ is defined to be the fundamental group of $F_n(Y)$ (resp.\ of $D_n(Y)$), and there is a short exact sequence: \begin{equation}\label{eq:sesbraid} 1\ensuremath{\longrightarrow} P_n(Y) \ensuremath{\longrightarrow} B_n(Y) \stackrel{\tau}{\ensuremath{\longrightarrow}} S_n \ensuremath{\longrightarrow} 1, \end{equation} where $\tau$ is the homomorphism that to a braid associates its induced permutation. For $i=1,\ldots,n$, let $p_{i}\colon\thinspace F_{n}(Y)\ensuremath{\longrightarrow} Y$ denote projection onto the $i\ensuremath{\up{th}}$ factor. The notion of intermediate configuration spaces was defined in~\cite{GG2,GG4}. More precisely, if $n, m\in \ensuremath{\mathbb N}$, the subgroup $S_n\times S_m \subset S_{n+m}$ acts freely on $F_{n+m}(Y)$ by restriction, and the corresponding orbit space $F_{n+m}(Y)/(S_n\times S_m)$ is denoted by $D_{n, m}(Y)$. Let $B_{n,m}(Y)=\pi_{1}(D_{n, m}(Y))$ denote the associated `mixed' braid group. The space $F_{n+m}(Y)$ is equipped with the topology induced by the inclusion $F_{n+m}(Y)\subset Y^{n+m}$, and $D_{n, m}(Y)$ is equipped with the quotient topology.
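When $Y$ is a finite set (so that all topology is discrete), the set-level content of these definitions can be modelled directly: points of $F_n(Y)$ are tuples with pairwise-distinct coordinates, the quotient map $\pi$ forgets the ordering, and $S_n$ acts by permuting coordinates. The following sketch (all names are ours and purely illustrative) checks these facts on a toy example:

```python
from itertools import permutations

def in_F_n(point):
    """A tuple lies in F_n(Y) iff its coordinates are pairwise distinct."""
    return len(set(point)) == len(point)

def pi(point):
    """The quotient map pi: F_n(Y) -> D_n(Y) forgets the ordering."""
    assert in_F_n(point)
    return frozenset(point)

def orbit(point):
    """The orbit of a point of F_n(Y) under the S_n-action permuting coordinates."""
    n = len(point)
    return {tuple(point[i] for i in sigma) for sigma in permutations(range(n))}

p = ("a", "b", "c")
assert in_F_n(p) and not in_F_n(("a", "a", "b"))
# pi is constant on S_n-orbits, so it descends to D_n(Y)
assert {pi(q) for q in orbit(p)} == {frozenset({"a", "b", "c"})}
# the action is free on F_n(Y): the orbit has exactly n! = 6 elements
assert len(orbit(p)) == 6
```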
If $Y$ is a manifold without boundary then the natural projections $\overline{p}_{m,n}\colon\thinspace D_{m,n}(Y) \ensuremath{\longrightarrow} D_m(Y)$ onto the first $m$ coordinates are fibrations. For maps whose target is a configuration space, we have the following notions. \begin{defns} Let $X$ and $Y$ be topological spaces, and let $n\in \ensuremath{\mathbb N}$. A map $\Phi\colon\thinspace X \ensuremath{\longrightarrow} D_n(Y)$ will be called an \emph{$n$-unordered map}, and a map $\Psi\colon\thinspace X \ensuremath{\longrightarrow} F_n(Y)$ will be called an \emph{$n$-ordered map}. For such an $n$-ordered map, for $i=1,\ldots,n$, there exist maps $f_i\colon\thinspace X \ensuremath{\longrightarrow} Y$ such that $\Psi(x)=(f_1(x),\ldots, f_n(x))$ for all $x\in X$, and for which $f_i(x)\neq f_j(x)$ for all $1\leq i,j\leq n$, $i\neq j$, and all $x\in X$. In this case, we will often write $\Psi=(f_{1},\ldots,f_{n})$. \end{defns} The fixed point-theoretic concepts that were defined earlier for $n$-valued maps carry over naturally to $n$-unordered and $n$-ordered maps as follows. \begin{defns} Let $X$ and $Y$ be topological spaces, let $f\colon\thinspace X \ensuremath{\longrightarrow} Y$ be a single-valued map, let $y_0\in Y$, and let $n\in \ensuremath{\mathbb N}$. \begin{enumerate}[(a)] \item Given an $n$-unordered map $\Phi\colon\thinspace X \ensuremath{\longrightarrow} D_n(Y)$, $x\in X$ is said to be a \emph{coincidence} of the pair $(\Phi, f)$ if there exist $(x_1,\ldots,x_n)\in F_n(Y)$ and $j\in\brak{1,\ldots,n}$ such that $\Phi(x)= \pi(x_1,\ldots, x_n)$ and $f(x)=x_j$. The set of coincidences of the pair $(\Phi, f)$ will be denoted by $\operatorname{\text{Coin}}(\Phi, f)$. If $X=Y$ and $f=\ensuremath{\operatorname{\text{Id}}}_X$ then $\operatorname{\text{Coin}}(\Phi, \ensuremath{\operatorname{\text{Id}}}_X)$ is called the \emph{fixed point set} of $\Phi$, and is denoted by $\operatorname{\text{Fix}}(\Phi)$. 
If $f$ is the constant map $c_{y_0}$ at $y_0$ then $\operatorname{\text{Coin}}(\Phi, c_{y_0})$ is called the set of \emph{roots} of $\Phi$ at $y_0$. \item Given an $n$-ordered map $\Psi\colon\thinspace X\ensuremath{\longrightarrow} F_n(Y)$, the set of coincidences of the pair $(\Psi, f)$ is defined by $\operatorname{\text{Coin}}(\Psi, f)= \set{x\in X}{\text{$f(x)= p_j\circ \Psi(x)$ for some $1\leq j\leq n$}}$. If $X=Y$ and $f=\ensuremath{\operatorname{\text{Id}}}_X$ then $\operatorname{\text{Coin}}(\Psi, \ensuremath{\operatorname{\text{Id}}}_X)=\set{x\in X}{\text{$x=p_j\circ \Psi(x)$ for some $1\leq j\leq n$}}$ is called the \emph{fixed point set} of $\Psi$, and is denoted by $\operatorname{\text{Fix}}(\Psi)$. If $f$ is the constant map $c_{y_0}$ then $\operatorname{\text{Coin}}(\Psi, c_{y_0})=\set{x\in X}{\text{$y_0=p_j\circ \Psi(x)$ for some $1\leq j\leq n$}}$ is called the set of \emph{roots} of $\Psi$ at $y_0$. \end{enumerate} \end{defns} In order to study $n$-valued maps via single-valued maps, we use the following natural relation between multifunctions and functions. First observe that there is an obvious bijection between the set of $n$-point subsets of a space $Y$ and the unordered configuration space $D_{n}(Y)$. This bijection induces a one-to-one correspondence between the set of $n$-valued functions from $X$ to $Y$ and the set of functions from $X$ to $D_n(Y)$. In what follows, given an $n$-valued function $\phi\colon\thinspace X \multimap Y$, we will denote the corresponding function whose target is the configuration space $D_{n}(Y)$ by $\Phi\colon\thinspace X \ensuremath{\longrightarrow} D_n(Y)$, and \emph{vice-versa}. Since we are concerned with the study of continuous multivalued functions, we wish to ensure that this correspondence restricts to a bijection between the set of (continuous) $n$-valued maps and the set of continuous single-valued maps whose target is $D_{n}(Y)$. 
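A concrete illustration of the correspondence just described (our example, not one taken from the literature cited above): the $2$-valued function on the unit circle $S^1\subset\mathbb{C}$ that sends $z$ to its two square roots is continuous and corresponds to a map into $D_2(S^1)$, even though it is not split, since $S^1$ admits no continuous single-valued square root. A numerical sketch, with the rounding used only to make the two roots hashable:

```python
import cmath

def Phi(z, ndigits=9):
    """The 2-unordered map z |-> {w : w**2 = z}, viewed as a map into D_2(S^1);
    coordinates are rounded so that the two roots are hashable."""
    w = cmath.sqrt(z)  # principal square root
    return frozenset(complex(round(r.real, ndigits), round(r.imag, ndigits))
                     for r in (w, -w))

z = cmath.exp(0.7j)                  # a point on the unit circle
roots = Phi(z)
assert len(roots) == 2               # an unordered pair: a point of D_2(S^1)
assert all(abs(w * w - z) < 1e-6 for w in roots)   # both are square roots of z
assert all(abs(abs(w) - 1) < 1e-6 for w in roots)  # both lie on S^1
```

Although the principal square root has a branch cut, the unordered pair $\{w,-w\}$ varies continuously with $z$, which is why this defines a genuine $2$-valued map while neither root can be chosen continuously on all of $S^1$.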
It follows from \reth{metriccont} that this is indeed the case if $X$ and $Y$ are metric spaces. This hypothesis will clearly be satisfied throughout this paper. If the map $\Phi\colon\thinspace X \ensuremath{\longrightarrow} D_n(Y)$ associated to $\phi$ admits a lift $\widehat{\Phi}\colon\thinspace X \ensuremath{\longrightarrow} F_n(Y)$ via the covering map $\pi$ then we shall say that $\widehat{\Phi}$ is a \emph{lift} of $\phi$ (see \resec{pres} for a formal statement of this definition). We will make use of this notion to develop a correspondence between split $n$-valued maps and maps from $X$ into $F_n(Y)$. As we shall see, the problems that we are interested in for $n$-valued maps, such as coincidence, fixed point and root problems, may be expressed within the context of $n$-unordered maps, to which we may apply the classical theory of single-valued maps. Our main aims in this paper are to explore the fixed point property of spaces for $n$-valued maps, and to study the problem of whether an $n$-valued map can be deformed to a fixed point free $n$-valued map. We now give the statements of the main results of this paper. The first theorem shows that for simply-connected metric spaces, the usual fixed point property implies the fixed point property for $n$-valued maps. \begin{thm}\label{th:sccfpp} Let $X$ be a simply-connected metric space that has the fixed point property, and let $n\in \ensuremath{\mathbb N}$. Then every $n$-valued map of $X$ has at least $n$ fixed points, so $X$ has the fixed point property for $n$-valued maps. In particular, for all $n,k\geq 1$, the $k$-dimensional disc $\dt[k]$ and the $2k$-dimensional complex projective space $\ensuremath{\mathbb C} P^{2k}$ have the fixed point property for $n$-valued maps. \end{thm} It may happen that a space does not have the (usual) fixed point property but that it has the fixed point property for $n$-valued maps for $n>1$. This is indeed the case for the $2k$-dimensional sphere $\St[2k]$.
\begin{prop}\label{prop:S2fp} If $n\geq 2$ and $k\geq 1$, $\St[2k]$ has the fixed point property for $n$-valued maps. \end{prop} \reth{sccfpp} and \repr{S2fp} will be proved in \resec{pres}. Although the $2k$-dimensional real projective space $\ensuremath{\mathbb R} P^{2k}$ is not simply connected, in \resec{rp2k} we will show that it has the fixed point property for $n$-valued maps for all $n\in \ensuremath{\mathbb N}$. \begin{thm}\label{th:rp2Kfpp} Let $k,n\geq 1$. The real projective space $\ensuremath{\mathbb R} P^{2k}$ has the fixed point property for $n$-valued maps. Further, any $n$-valued map of $\ensuremath{\mathbb R} P^{2k}$ has at least $n$ fixed points. \end{thm} We do not know of an example of a space that has the fixed point property, but that does not have the fixed point property for $n$-valued maps for some $n\geq 2$. In \resec{fixfree}, we turn our attention to the question of deciding whether an $n$-valued map of a surface $X$ of non-negative Euler characteristic $\chi(X)$ can be deformed to a fixed point free $n$-valued map. In the following result, we give algebraic criteria involving the braid groups of $X$. \begin{thm}\label{th:defchineg} Let $X$ be a compact surface without boundary such that $\chi(X)\leq 0$, let $n\geq 1$, and let $\phi\colon\thinspace X \multimap X$ be an $n$-valued map. 
\begin{enumerate}[(a)] \item\label{it:defchinega} The $n$-valued map $\phi$ can be deformed to a fixed point free $n$-valued map if and only if there is a homomorphism $\varphi\colon\thinspace \pi_1(X) \ensuremath{\longrightarrow} B_{1,n}(X)$ that makes the following diagram commute: \begin{equation}\label{eq:commdiag1} \begin{tikzcd}[ampersand replacement=\&] \&\& B_{1,n}(X) \ar{d}{(\iota_{1,n})_{\#}}\\ \pi_{1}(X) \ar[swap]{rr}{(\ensuremath{\operatorname{\text{Id}}}_{X}\times \Phi)_{\#}} \ar[dashrightarrow, end anchor=south west]{rru}{\varphi} \&\&\pi_{1}(X) \times B_{n}(X), \end{tikzcd} \end{equation} where $\iota_{1,n}\colon\thinspace D_{1,n}(X) \ensuremath{\longrightarrow} X \times D_{n}(X)$ is the inclusion map. \item\label{it:defchinegb} If the $n$-valued map $\phi$ is split, it can be deformed to a fixed point free $n$-valued map if and only if there is a homomorphism $\widehat{\varphi}\colon\thinspace \pi_1(X) \ensuremath{\longrightarrow} P_{n+1}(X)$ that makes the following diagram commute: \begin{equation*} \begin{tikzcd}[ampersand replacement=\&] \&\& P_{n+1}(X) \ar{d}{(\widehat{\iota}_{n+1})_{\#}}\\ \pi_{1}(X) \ar[swap]{rr}{(\ensuremath{\operatorname{\text{Id}}}_{X}\times \widehat{\Phi})_{\#}} \ar[dashrightarrow, end anchor=south west]{rru}{\widehat{\varphi}} \&\& \pi_{1}(X) \times P_{n}(X), \end{tikzcd} \end{equation*} where $\widehat{\iota}_{n+1}\colon\thinspace F_{n+1}(X) \ensuremath{\longrightarrow} X \times F_{n}(X)$ is the inclusion map. \end{enumerate}\end{thm} If $\phi\colon\thinspace X \multimap X$ is a split $n$-valued map given by $\phi=\brak{f_1,\ldots,f_n}$ that can be deformed to a fixed point free $n$-valued map, then certainly each of the single-valued maps $f_i$ can be deformed to a fixed point free map. The question of whether the converse of this statement holds for surfaces is open.
We do not know the answer for any compact surface without boundary different from $\St$ or $\ensuremath{{\mathbb R}P^2}$, but it is likely that the converse does not hold. More generally, one would like to know if the homotopy class of $\phi$ contains a representative for which the number of fixed points is exactly the Nielsen number. Very little is known about this question, even for the $2$-torus. Recall that the Nielsen number of an $n$-valued map $\phi\colon\thinspace X\multimap X$, denoted $N(\phi)$, was defined by Schirmer~\cite{Sch1}, and generalises the usual Nielsen number in the single-valued case. She showed that $N(\phi)$ is a lower bound for the number of fixed points among all $n$-valued maps homotopic to $\phi$. Within the framework of \reth{defchineg}, it is natural to study first the case of $2$-valued maps of the $2$-torus $\ensuremath{\mathbb{T}^{2}}$, which is the focus of \resec{toro}. In what follows, $\mu$ and $\lambda$ will denote the meridian and the longitude respectively of $\ensuremath{\mathbb{T}^{2}}$. Let $(e_1, e_2)$ be a basis of $\pi_1(\ensuremath{\mathbb{T}^{2}})$ such that $e_1=[\mu]$ and $e_2=[\lambda]$. For self-maps of $\ensuremath{\mathbb{T}^{2}}$, we will not be overly concerned with the choice of basepoints since the fundamental groups of $\ensuremath{\mathbb{T}^{2}}$ with respect to two different basepoints may be canonically identified. In \resec{toro2}, we will study the groups $P_{2}(\ensuremath{\mathbb{T}^{2}})$, $B_{2}(\ensuremath{\mathbb{T}^{2}})$ and $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$, and in \reco{compactpres}, we will see that $P_{2}(\ensuremath{\mathbb{T}^{2}})$ is isomorphic to the direct product of a free group $\mathbb{F}_2(u,v)$ of rank $2$ and $\ensuremath{\mathbb Z}^2$. 
In what follows, the elements of $P_2(\ensuremath{\mathbb{T}^{2}})$ will be written with respect to the decomposition $\mathbb{F}_2(u,v) \times \ensuremath{\mathbb Z}^2$, and $\operatorname{\text{Ab}}\colon\thinspace \mathbb{F}_2(u,v) \ensuremath{\longrightarrow} \ensuremath{\mathbb Z}^{2}$ will denote Abelianisation. \reth{helgath01}, which is a result of~\cite{Sch1} for the Nielsen number of split $n$-valued maps, will be used in part of the proof of the following proposition. \begin{prop}\label{prop:exisfpf} Let $\phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ be a split $2$-valued map of the torus $\ensuremath{\mathbb{T}^{2}}$, and let $\widehat{\Phi}=(f_1,f_2) \colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ be a lift of $\phi$ such that $\widehat{\Phi}_{\#}(e_{1})=(w^{r},(a,b))$ and $\widehat{\Phi}_{\#}(e_{2})= (w^{s}, (c,d))$, where $(r,s)\in \ensuremath{\mathbb Z}^{2}\setminus \brak{(0,0)}$, $a,b,c,d\in \ensuremath{\mathbb Z}$ and $w\in \mathbb{F}_2(u,v)$. Then the Nielsen number of $\phi$ is given by: \begin{equation*} N(\phi)=\left\lvert\det\begin{pmatrix} a-1 & c \\ b & d-1 \end{pmatrix}\right\rvert + \left\lvert\det\begin{pmatrix} rm+a-1 & sm+c \\ rn+b & sn+d-1 \end{pmatrix}\right\rvert, \end{equation*} where $\operatorname{\text{Ab}}(w)=(m,n)\in \ensuremath{\mathbb Z}^{2}$. If the map $\phi$ can be deformed to a fixed point free $2$-valued map, then both of the maps $f_1$ and $f_2$ can be deformed to fixed point free maps. Furthermore, $f_1$ and $f_2$ can be deformed to fixed point free maps if and only if either: \begin{enumerate}[(a)] \item\label{it:exisfpfa} the pairs of integers $(a-1, b),(c,d-1)$ and $(m,n)$ belong to a cyclic subgroup of $\ensuremath{\mathbb Z}^2$, or \item\label{it:exisfpfb} $s(a-1, b)=r(c,d-1)$.
\end{enumerate} \end{prop} Within the framework of \repr{exisfpf}, given a split $2$-valued map $\phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ for which $N(\phi)=0$, we would like to know whether $\phi$ can be deformed to a fixed point free $2$-valued map. If $N(\phi)=0$, then by this proposition, one of the conditions~(\ref{it:exisfpfa}) or~(\ref{it:exisfpfb}) must be satisfied. The following result shows that condition~(\ref{it:exisfpfb}) is also sufficient. \begin{thm}\label{th:necrootfree3} Let $\widehat{\Phi}\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2({\ensuremath{\mathbb{T}^{2}}})$ be a lift of a split $2$-valued map $\phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ that satisfies $\widehat{\Phi}_{\#}(e_{1})=(w^{r},(a,b))$ and $\widehat{\Phi}_{\#}(e_{2})= (w^{s}, (c,d))$, where $w\in \mathbb{F}_2(u,v)$, $a,b,c,d\in \ensuremath{\mathbb Z}$ and $(r,s)\in \ensuremath{\mathbb Z}^{2}\setminus \brak{(0,0)}$ satisfy $s(a-1,b)=r(c,d-1)$. Then $\phi$ may be deformed to a fixed point free $2$-valued map. \end{thm} With respect to condition~(\ref{it:exisfpfa}), we obtain a partial converse for certain values of $a,b,c,d,m$ and $n$. \begin{thm}\label{th:construct2val} Suppose that $(a-1, b),(c,d-1)$ and $(m,n)$ belong to a cyclic subgroup of $\ensuremath{\mathbb Z}^2$ generated by an element of the form $(0,q), (1,q), (p,0)$ or $(p,1)$, where $p,q\in \ensuremath{\mathbb Z}$, and let $r,s\in \ensuremath{\mathbb Z}$. 
Then there exist $w\in \mathbb{F}_2(u,v)$, a split fixed point free $2$-valued map $\phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ and a lift $\widehat{\Phi} \colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb{T}^{2}})$ of $\phi$ such that $\operatorname{\text{Ab}}(w)=(m,n)$, $\widehat{\Phi}_{\#}(e_1)=(w^{r},(a,b))$ and $\widehat{\Phi}_{\#}(e_2)= (w^{s}, (c,d))$. \end{thm} \repr{exisfpf} and Theorems~\ref{th:necrootfree3} and~\ref{th:construct2val} will be proved in \resec{toro}. Besides the introduction and an appendix, this paper is divided into four sections. In \resec{pres}, we give some basic definitions, we establish the connection between multimaps and maps whose target is a configuration space, and we show that simply-connected spaces have the fixed point property for $n$-valued maps if they have the usual fixed point property. In \resec{rp2k}, we show that even-dimensional real projective spaces have the fixed point property for $n$-valued maps. In \resec{fixfree}, we provide general criteria of a homotopic and algebraic nature to decide whether or not an $n$-valued map can be deformed to a fixed point free $n$-valued map, and we give the corresponding statements for the case of roots. In \resec{toro}, we study the fixed point theory of $2$-valued maps of the $2$-torus. In \resec{toro2}, we give presentations of certain braid groups of $\ensuremath{\mathbb{T}^{2}}$, in \resec{descript}, we describe the set of homotopy classes of split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$, and in \resec{fptsplit2}, we study the fixed point theory of split $2$-valued maps. In the Appendix, written with R.~F.~Brown, in \reth{metriccont}, we show that for the class of metric spaces that includes those considered in this paper, $n$-valued maps can be regarded as single-valued maps whose target is the associated unordered configuration space.
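The Nielsen number formula of \repr{exisfpf} is elementary to evaluate once $\operatorname{\text{Ab}}(w)=(m,n)$ is known. The sketch below (function names are ours) encodes a word $w\in\mathbb{F}_2(u,v)$ as a string over $\mathtt{u},\mathtt{v},\mathtt{U},\mathtt{V}$, with capitals denoting inverses, computes its Abelianisation by taking exponent sums, and evaluates $N(\phi)$; it also tests condition~(b) of \repr{exisfpf}.

```python
def abelianise(word):
    """Ab: F_2(u, v) -> Z^2, the pair of exponent sums; 'U' and 'V'
    denote the inverses of the generators u and v."""
    return word.count("u") - word.count("U"), word.count("v") - word.count("V")

def nielsen_number(a, b, c, d, r, s, word):
    """N(phi) from the proposition: the sum of the absolute values of the
    two 2x2 determinants, with Ab(w) = (m, n)."""
    m, n = abelianise(word)
    det1 = (a - 1) * (d - 1) - b * c
    det2 = (r * m + a - 1) * (s * n + d - 1) - (r * n + b) * (s * m + c)
    return abs(det1) + abs(det2)

def satisfies_condition_b(a, b, c, d, r, s):
    """Condition (b) of the proposition: s(a-1, b) = r(c, d-1)."""
    return (s * (a - 1), s * b) == (r * c, r * (d - 1))

# doubling map on both factors of T^2, with w = uv (so Ab(w) = (1, 1)), r = s = 1:
# the two determinants are 1 and 3, so N(phi) = 4
assert abelianise("uvU") == (0, 1)
assert nielsen_number(a=2, b=0, c=0, d=2, r=1, s=1, word="uv") == 4
```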
\section{Generalities and the $n$-valued fixed point property}\label{sec:pres} In \resec{relnsnvm}, we begin by describing the relations between $n$-valued maps and $n$-unordered maps. We will assume throughout that $X$ and $Y$ are metric spaces, so that we can apply \reth{metriccont}. Making use of unordered configuration spaces, in \relem{split} and \reco{inject} we prove some properties of the fixed points of $n$-valued maps. In \resec{disc}, we give an algebraic condition that enables us to decide whether an $n$-valued map is split. We also study the case where $X$ is simply connected (the $k$-dimensional disc for example, which has the usual fixed point property) and we prove \reth{sccfpp}, and in \resec{sph}, we analyse the case of the $2k$-dimensional sphere (which does not have the usual fixed point property), and we prove \repr{S2fp}. \subsection{Relations between $n$-valued maps, $n$-(un)ordered maps and their fixed point sets}\label{sec:relnsnvm} A proof of the following result may be found in the Appendix. \begin{thm}\label{th:metriccont} Let $X$ and $Y$ be metric spaces, and let $n\in \ensuremath{\mathbb N}$. An $n$-valued function $\phi\colon\thinspace X \multimap Y$ is continuous if and only if the corresponding function $\Phi\colon\thinspace X \ensuremath{\longrightarrow} D_n(Y)$ is continuous. \end{thm} It would be beneficial for the statement of \reth{metriccont} to hold under weaker hypotheses on $X$ and $Y$. See~\cite{Brr7} for some recent results in this direction.
\begin{defn} If $\phi\colon\thinspace X \multimap Y$ is an $n$-valued map and $\Phi\colon\thinspace X\ensuremath{\longrightarrow} D_{n}(Y)$ is the associated $n$-unordered map, an $n$-ordered map $\widehat{\Phi}\colon\thinspace X \ensuremath{\longrightarrow} F_n(Y)$ is said to be a \emph{lift} of $\phi$ if the composition $\pi\circ \widehat{\Phi}\colon\thinspace X \ensuremath{\longrightarrow} D_{n}(Y)$ of $\widehat{\Phi}$ with the covering map $\pi\colon\thinspace F_n(Y) \ensuremath{\longrightarrow} D_n(Y)$ is equal to $\Phi$. \end{defn} If $\phi=\brak{f_1,\ldots,f_n}\colon\thinspace X \multimap Y$ is a split $n$-valued map, then it admits a lift $\widehat{\Phi}=(f_1,\ldots,f_n)\colon\thinspace X \ensuremath{\longrightarrow} F_n(Y)$. For any such lift, $\operatorname{\text{Fix}}(\widehat{\Phi})=\operatorname{\text{Fix}}(\phi)$, and the map $\widehat{\Phi}$ determines an ordered set of $n$ maps $(f_1=p_1\circ \widehat{\Phi},\ldots, f_n=p_n\circ \widehat{\Phi})$ from $X$ to $Y$ for which $f_i(x)\ne f_j(x)$ for all $x\in X$ and all $1\leq i<j\leq n$. Conversely, any ordered set of $n$ maps $(f_1,\ldots,f_n)$ from $X$ to $Y$ for which $f_i(x)\ne f_j(x)$ for all $x\in X$ and all $1\leq i<j\leq n$ determines an $n$-ordered map $\Psi\colon\thinspace X \ensuremath{\longrightarrow} F_n(Y)$ defined by $\Psi(x)=(f_1(x),\ldots,f_n(x))$ and a split $n$-valued map $\phi=\brak{f_1,\ldots,f_n}\colon\thinspace X \multimap Y$ of which $\Psi$ is a lift. So the existence of such a split $n$-valued map $\phi$ is equivalent to that of an $n$-ordered map $\Psi\colon\thinspace X \ensuremath{\longrightarrow} F_n(Y)$, where $\Psi=(f_1,\ldots,f_n)$. This being the case, the composition $\pi\circ \Psi\colon\thinspace X \ensuremath{\longrightarrow} D_n(Y)$ is the map $\Phi\colon\thinspace X \ensuremath{\longrightarrow} D_n(Y)$ that corresponds (in the sense described in \resec{intro}) to the $n$-valued map $\phi$. 
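On a finite set (ignoring all topology), the relationship just described between a split $n$-valued map, its lifts and the projection $\pi$ can be checked mechanically; the following toy sketch (names ours) does so for a split $2$-valued self-map of a four-point set:

```python
from itertools import permutations

# A split 2-valued self-map phi = {f1, f2} of the finite set X = {0, 1, 2, 3},
# with f1(x) != f2(x) for every x, so (f1, f2) lands in F_2(X).
X = range(4)
f1 = lambda x: (x + 1) % 4
f2 = lambda x: (x + 2) % 4

def lift(sigma):
    """The lift of phi determined by the deck transformation sigma in S_2."""
    fs = (f1, f2)
    return lambda x: tuple(fs[i](x) for i in sigma)

Phi = lambda x: frozenset({f1(x), f2(x)})   # the associated 2-unordered map

# every lift projects to the same 2-unordered map Phi under pi = frozenset
for sigma in permutations(range(2)):
    L = lift(sigma)
    assert all(frozenset(L(x)) == Phi(x) for x in X)

# Fix(phi) = {x : x in Phi(x)} is empty: f1 and f2 are both fixed point free
assert all(x not in Phi(x) for x in X)
```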
Consequently, an $n$-valued map $\phi\colon\thinspace X \multimap Y$ admits a lift if and only if it is split. As we shall now see, \reth{metriccont} will be of help in the description of the relations between (split) $n$-valued maps and $n$-(un)ordered maps of metric spaces. As we have seen, to each $n$-valued map (resp.\ split $n$-valued map), we may associate an $n$-unordered map (resp.\ a lift), and \emph{vice-versa}. Note that the symmetric group $S_n$ not only acts (freely) on $F_n(Y)$ by permuting coordinates, but it also acts on the set of ordered $n$-tuples of maps between $X$ and $Y$. Further, the restriction of the latter action to the subset $F_{n}(Y)^{X}$ of $n$-ordered maps, \emph{i.e.}\ maps of the form $\Psi\colon\thinspace X \ensuremath{\longrightarrow} F_{n}(Y)$, where $\Psi(x)=(f_1(x),\ldots,f_n(x))$ for all $x\in X$ for which $f_i(x)\ne f_j(x)$ for all $x\in X$ and $1\leq i< j\leq n$, is also free. In what follows, $[X,Y]$ (resp.\ $[X,Y]_{0}$) will denote the set of homotopy classes (resp.\ based homotopy classes) of maps between $X$ and $Y$. \begin{lem}\label{lem:split}\mbox{} Let $X$ and $Y$ be metric spaces, and let $n\in \ensuremath{\mathbb N}$. \begin{enumerate}[(a)] \item\label{it:splitII} The set $\splitmap{X}{Y}{n}$ of split $n$-valued maps from $X$ to $Y$ is in one-to-one correspondence with the orbits of the set of maps $F_{n}(Y)^{X}$ from $X$ to $F_n(Y)$ modulo the free action defined above of $S_n$ on $F_{n}(Y)^{X}$. \item\label{it:splitIII} If two $n$-valued maps from $X$ to $Y$ are homotopic and one is split, then the other is also split. Further, the set $\splitmap{X}{Y}{n}/\!\sim$ of homotopy classes of split $n$-valued maps from $X$ to $Y$ is in one-to-one correspondence with the orbits of the set $[X,F_{n}(Y)]$ of homotopy classes of maps from $X$ to $F_n(Y)$ under the action of $S_n$ induced by that of $S_n$ on $F_{n}(Y)^{X}$. \item\label{it:splitIV} Suppose that $X=Y$. 
If an $n$-valued map $\phi\colon\thinspace X \multimap X$ is split and deformable to a fixed point free map, then a lift $\widehat{\Phi}\colon\thinspace X \ensuremath{\longrightarrow} F_{n}(X)$ of $\phi$ may be written as $\widehat{\Phi}=(f_1,\ldots,f_n)$, where for all $i=1,\ldots,n$, the map $f_i\colon\thinspace X\ensuremath{\longrightarrow} X$ is deformable to a fixed point free map. \end{enumerate} \end{lem} \begin{proof}\mbox{} \begin{enumerate}[(a)] \item Let $\phi\colon\thinspace X \multimap Y$ be a split $n$-valued map. From the definition, there exists an $n$-ordered map $\widehat{\Phi}\colon\thinspace X\ensuremath{\longrightarrow} F_n(Y)$ such that $\Phi=\pi\circ \widehat{\Phi}$, up to the identification given by \reth{metriccont}. If $\widehat{\Phi}=(f_1,\ldots,f_n)$, the other lifts of $\phi$ are obtained via the action of the group of deck transformations of the covering space, this group being $S_n$ in our case, and so are of the form $(f_{\sigma(1)},\ldots,f_{\sigma(n)})$, where $\sigma\in S_{n}$. This gives rise to the stated one-to-one correspondence between $\splitmap{X}{Y}{n}$ and the orbit space $F_{n}(Y)^{X}/S_{n}$. \item By naturality, the map $\pi\colon\thinspace F_n(Y) \ensuremath{\longrightarrow} D_n(Y)$ induces a map $\widehat{\pi}\colon\thinspace [X, F_n(Y)]\ensuremath{\longrightarrow} [X, D_n(Y)]$ defined by $\widehat{\pi}([\Psi])=[\pi\circ \Psi]$ for any $n$-ordered map $\Psi\colon\thinspace X\ensuremath{\longrightarrow} F_{n}(Y)$. Given two homotopic $n$-valued maps between $X$ and $Y$, which we regard as maps from $X$ to $D_n(Y)$ using \reth{metriccont}, if the first has a lift to $F_n(Y)$, then the lifting property of a covering implies that the second also admits a lift to $F_n(Y)$, so if the first map is split then so is the second map.
To prove the second part of the statement, first note that there is a surjective map $f\colon\thinspace F_{n}(Y)^{X} \ensuremath{\longrightarrow} \splitmap{X}{Y}{n}$ given by $f(g)=\pi\circ g$, where we identify $\splitmap{X}{Y}{n}$ with the set of maps $D_{n}(Y)^{X}$ from $X$ to $D_{n}(Y)$, that induces a surjective map $\overline{f}\colon\thinspace [X,F_{n}(Y)] \ensuremath{\longrightarrow} \splitmap{X}{Y}{n}/\!\sim$ on the corresponding sets of homotopy classes. Further, if ${\Psi}_1, {\Psi}_2\in F_{n}(Y)^{X}$ are two $n$-ordered maps that are homotopic via a homotopy $H$, and if $\alpha\in S_n$, then the maps $\alpha \circ\Psi_1,\alpha \circ \Psi_2 \in F_{n}(Y)^{X}$ are also homotopic via the homotopy $\alpha \circ H$, and so we obtain a quotient map $q\colon\thinspace [X,F_{n}(Y)] \ensuremath{\longrightarrow} [X,F_{n}(Y)]/S_{n}$. We claim that $\overline{f}$ factors through $q$ via the map $\overline{\overline{f}}\colon\thinspace [X,F_{n}(Y)]/S_{n}\ensuremath{\longrightarrow} \splitmap{X}{Y}{n}/\!\sim$ defined by $\overline{\overline{f}}([g])=[f(g)]$. To see this, let $g, h\in F_{n}(Y)^{X}$ be such that $q([g])=q([h])$. Then there exists $\alpha\in S_{n}$ such that $\alpha([g])=[h]$. Then $[f(h)]=\overline{f}([h])=\overline{f}(\alpha[g])=[f(\alpha\circ g)]$. But $f(\alpha\circ g)=\pi\circ \alpha\circ g=\pi\circ g=f(g)$, since $\pi$ is invariant under the action of $S_n$, and so $[f(g)]=[f(h)]$, which proves the claim. By construction, the map $\overline{\overline{f}}$ is surjective. It remains to show that it is injective. Let $g,h\in F_{n}(Y)^{X}$ be such that $\overline{\overline{f}}([g])=\overline{\overline{f}}([h])$. Then $[f(g)]=[f(h)]$, and thus $f(g)$ and $f(h)$ are homotopic via a homotopy $H$ in $\splitmap{X}{Y}{n}$, where $H(0,f(g))=f(g)$ and $H(1,f(g))=f(h)$. Then $H$ lifts to a homotopy $\widetilde{H}$ such that $\widetilde{H}(0,g)=g$, and $\widetilde{H}(1,g)$ is a lift of $f(h)$. But $h$ is also a lift of $f(h)$, so there exists $\alpha\in S_{n}$ such that $\widetilde{H}(1,g)=\alpha\circ h$.
Further, $g$ is homotopic to $\widetilde{H}(\cdot, 1)=\alpha\circ h$, and hence $q([g])=q([\alpha\circ h])= q([h])$ from the definition of $q$, which proves the injectivity of $\overline{\overline{f}}$. \item Since $\phi$ is split, we may choose a lift $\widehat{\Phi}=(f_1,\ldots,f_n)\colon\thinspace X \ensuremath{\longrightarrow} F_{n}(X)$ of $\phi$. By hypothesis, there is a homotopy $H\colon\thinspace X\times I \ensuremath{\longrightarrow} D_n(X)$ such that $H(\cdot ,0)=\Phi$, and $H(\cdot ,1)$ is fixed point free. Since $\widehat{\Phi}$ is a lift of the initial part of the homotopy $H$, there exists a lift $\widehat{H}\colon\thinspace X\times I \ensuremath{\longrightarrow} F_n(X)$ of $H$ such that $\widehat{H}(\cdot, 0)=\widehat{\Phi}$, and $\widehat{H}(\cdot, 1)$ is a lift of the fixed point free map $H(\cdot ,1)$. So $\widehat{H}(\cdot, 1)$ is of the form $(g_1,\ldots,g_n)$, where $g_i$ is fixed point free for $1\leq i\leq n$, and the $i\up{th}$ coordinate of $\widehat{H}$ is a deformation of $f_i$ to $g_i$, so the conclusion follows.\qedhere \end{enumerate} \end{proof} \begin{rems}\mbox{} \begin{enumerate}[(a)] \item The action of $S_n$ on the set of homotopy classes $[X, F_n(Y)]$ is not necessarily free (see \repr{classif}(\ref{it:classifb})). \item The question of whether the converse of \relem{split}(\ref{it:splitIV}) is valid for surfaces is open, see the introduction. \end{enumerate} \end{rems} The following consequence of \relem{split}(\ref{it:splitIV}) will be useful in what follows, and implies that if a split $n$-valued map can be deformed to a fixed point free $n$-valued map (through $n$-valued maps), then the deformation is through split $n$-valued maps. \begin{cor}\label{cor:inject} Let $X$ be a metric space, and let $n\in \ensuremath{\mathbb N}$.
A split $n$-valued map $\phi\colon\thinspace X\multimap X$ may be deformed within $\splitmap{X}{X}{n}$ to a fixed point free $n$-valued map if and only if any lift $\widehat{\Phi}=(f_1,\ldots,f_n)\colon\thinspace X \ensuremath{\longrightarrow} F_n(X)$ of $\phi$ may be deformed within $F_{n}(X)$ to a fixed point free map $\widehat{\Phi}'=(f_1',\ldots,f_n')\colon\thinspace X \ensuremath{\longrightarrow} F_n(X)$. In particular, for all $1\leq i\leq n$, there exists a homotopy $H_i\colon\thinspace X\times I\ensuremath{\longrightarrow} X$ between $f_i$ and $f_i'$, where $f_i'$ is a fixed point free map, and $H_j(x, t)\ne H_k(x, t)$ for all $1\leq j<k\leq n$, $x\in X$ and $t\in [0,1]$. \end{cor} \begin{proof} The `if' part of the statement may be obtained by composing the deformation between $\widehat{\Phi}$ and $\widehat{\Phi}'$ with the projection $\pi$ and applying \reth{metriccont}. The `only if' part follows in a manner similar to that of the proof of the first part of \relem{split}(\ref{it:splitIII}). \end{proof} \subsection{The fixed point property of simply connected spaces and the $k$-disc $\dt[k]$ for $n$-valued maps} \label{sec:disc} In this section, we analyse the case where $X$ is a simply-connected metric space that possesses the fixed point property, such as the closed $k$-dimensional disc $\dt[k]$. In \relem{split1}, we begin by proving a variant of the so-called `Splitting Lemma' that is more general than the versions that appear in the literature, such as that of Schirmer given in~\cite[Section~2, Lemma~1]{Sch0} for example. The hypotheses are expressed in terms of the homomorphism on the level of the fundamental group of the target $Y$, rather than that of the domain $X$, and the criterion is an algebraic condition, in terms of the fundamental group, for an $n$-valued map from $X$ to $Y$ to be split.
This allows us to prove \reth{sccfpp}, which says that a simply-connected metric space that has the fixed point property also possesses the fixed point property for $n$-valued maps for all $n\geq 1$. In particular, $\dt[k]$ satisfies this property for all $k\geq 1$. The $2$-disc will be the only surface with boundary that will be considered in this paper. The cases of other surfaces with boundary, such as the annulus and the M\"obius band, will be studied elsewhere. \begin{lem}\label{lem:split1} Let $n\geq 1$, let $\phi\colon\thinspace X \multimap Y$ be an $n$-valued map between metric spaces, where $X$ is connected and locally arcwise-connected, and let $\Phi\colon\thinspace X \ensuremath{\longrightarrow} D_n(Y)$ be the associated $n$-unordered map. Then $\phi$ is split if and only if the image of the induced homomorphism $\Phi_{\#}\colon\thinspace \pi_1(X) \ensuremath{\longrightarrow} B_n(Y)$ is contained in the image of the homomorphism $\pi_{\#}\colon\thinspace P_n(Y) \ensuremath{\longrightarrow} B_{n}(Y)$ induced by the covering map $\pi\colon\thinspace F_{n}(Y)\ensuremath{\longrightarrow} D_{n}(Y)$. In particular, if $X$ is simply connected then all $n$-valued maps from $X$ to $Y$ are split. \end{lem} \begin{proof} Since $X$ and $Y$ are metric spaces, using \reth{metriccont}, we may consider the $n$-unordered map $\Phi\colon\thinspace X \ensuremath{\longrightarrow} D_n(Y)$ that corresponds to $\phi$. The first part of the statement follows from standard results about the lifting property of a map to a covering space in terms of the fundamental group~\cite[Chapter~5, Section~5, Theorem~5.1]{Mas}. The second part is a consequence of the first part. \end{proof} As a consequence of \relem{split1}, we are able to prove~\reth{sccfpp}. \begin{proof}[Proof of \reth{sccfpp}] Let $X$ be a simply-connected metric space that has the fixed point property. By \relem{split1}, any $n$-valued map $\phi\colon\thinspace X \multimap X$ is split. 
Writing $\phi=\{f_1,\ldots,f_n\}$, each of the maps $f_i\colon\thinspace X \ensuremath{\longrightarrow} X$ is a self-map of $X$ that has at least one fixed point. Since the maps $f_i$ are pairwise coincidence free, these fixed points are pairwise distinct, so $\phi$ has at least $n$ fixed points, and in particular, $X$ has the fixed point property for $n$-valued maps. The last part of the statement then follows. \end{proof} \subsection{$n$-valued maps of the sphere $\mathbb S^{2k}$}\label{sec:sph} Let $k\geq 1$. Although $\mathbb S^{2k}$ does not have the fixed point property for self-maps, we shall show in this section that it has the fixed point property for $n$-valued maps for all $n>1$, which is the statement of \repr{S2fp}. We first prove a lemma. \begin{lem}\label{lem:S2split} Let $n\geq 1$ and $k\geq 2$. Then any $n$-valued map of $\mathbb S^{k}$ is split. \end{lem} \begin{proof} The result follows from \relem{split1} using the fact that $\mathbb S^k$ is simply connected. \end{proof} \begin{proof}[Proof of \repr{S2fp}] Let $n\geq 2$, and let $\phi \colon\thinspace \mathbb S^{2k} \multimap \mathbb S^{2k}$ be an $n$-valued map. By \relem{S2split}, $\phi$ is split, so it admits a lift $\widehat{\Phi}\colon\thinspace \mathbb S^{2k} \ensuremath{\longrightarrow} F_{n}(\St[2k])$, where $\widehat{\Phi}=(f_1,f_2,\ldots,f_n)$. Since $f_1(x)\ne f_2(x)$ for all $x\in \St[2k]$, the point $f_2(x)$ is never the antipode $-(-f_1(x))$ of $-f_1(x)$, and it follows that $f_2$ is homotopic to $-f_1$ via the homotopy that, for all $x\in \mathbb S^{2k}$, takes $-f_1(x)$ to $f_2(x)$ along the unique geodesic that joins them. Hence $\deg(f_2)=\deg(-f_1)=-\deg(f_1)$, since the antipodal map of $\St[2k]$ has degree $(-1)^{2k+1}=-1$. Thus the degree of at least one of the maps $f_1$ and $f_2$ is different from $-1$; that map has non-zero Lefschetz number, and so has a fixed point, which implies that $\phi$ has a fixed point. \end{proof} \begin{rem} If $n>2$ and $k=1$ then the result of \repr{S2fp} is clearly true since by~\cite[pp.~43--44]{GG5}, the set $[\St, F_{n}(\St)]$ of homotopy classes of maps between $\St$ and $F_{n}(\St)$ contains only one class, which is that of the constant map.
So any representative of this class is of the form $\phi=(f_1,\ldots,f_n)$, where all of the maps $f_i\colon\thinspace \St\ensuremath{\longrightarrow} \St$ are homotopic to the constant map. Such a map always has a fixed point, and hence $\phi$ has at least $n$ fixed points. \end{rem} \section{$n$-valued maps of the projective space $\ensuremath{\mathbb R} P^{2k}$}\label{sec:rp2k} In this section, we will show that the projective space $\ensuremath{\mathbb R} P^{2k}$ also has the fixed point property for $n$-valued maps, which is the statement of \reth{rp2Kfpp}. Since $\ensuremath{\mathbb R} P^{2k}$ is not simply connected, we will require more elaborate arguments than those used in Sections~\ref{sec:disc} and~\ref{sec:sph}. We separate the discussion into two cases, $k=1$, and $k>1$. \subsection{$n$-valued maps of \ensuremath{{\mathbb R}P^2}}\label{sec:rp2ka} The following result is the analogue of \relem{S2split} for $\ensuremath{{\mathbb R}P^2}$. \begin{lem}\label{lem:rp2split} Let $n\geq 1$. Then any $n$-valued map of the projective plane is split. \end{lem} \begin{proof} Let $\phi\colon\thinspace \ensuremath{{\mathbb R}P^2} \multimap \ensuremath{{\mathbb R}P^2}$ be an $n$-valued map, let $\Phi\colon\thinspace \ensuremath{{\mathbb R}P^2} \ensuremath{\longrightarrow} D_n(\ensuremath{{\mathbb R}P^2})$ be the associated $n$-unordered map, and let $\Phi_{\#}\colon\thinspace \pi_{1}(\ensuremath{{\mathbb R}P^2}) \ensuremath{\longrightarrow} B_{n}(\ensuremath{{\mathbb R}P^2})$ be the homomorphism induced on the level of fundamental groups. Since $\pi_{1}(\ensuremath{{\mathbb R}P^2})$ is isomorphic to the cyclic group of order $2$, it follows that $\im{\Phi_{\#}}$ is contained in the subgroup $\ang{\ft}$ of $B_{n}(\ensuremath{{\mathbb R}P^2})$ generated by the full twist braid $\ft$ of $B_n(\ensuremath{{\mathbb R}P^2})$ because $\ft$ is the unique element of $B_n(\ensuremath{{\mathbb R}P^2})$ of order $2$ \cite[Proposition~23]{GG3}. 
But $\ft$ is a pure braid, so $\im{\Phi_{\#}}\subset P_{n}(\ensuremath{{\mathbb R}P^2})$. Thus $\Phi$ factors through $\pi$, and hence $\phi$ is split by \relem{split1} as required. \end{proof} \begin{prop}\label{prop:RP2fp} Let $n\geq 1$. Then any $n$-valued map of $\ensuremath{{\mathbb R}P^2}$ has at least $n$ fixed points, in particular $\ensuremath{{\mathbb R}P^2}$ has the fixed point property for $n$-valued maps. \end{prop} \begin{proof} Let $\phi\colon\thinspace \ensuremath{{\mathbb R}P^2} \multimap \ensuremath{{\mathbb R}P^2}$ be an $n$-valued map of $\ensuremath{{\mathbb R}P^2}$. Then $\phi$ is split by \relem{rp2split}, and so $\phi=\brak{f_1,\ldots,f_n}$, where $f_{1},\ldots, f_{n}\colon\thinspace \ensuremath{{\mathbb R}P^2} \ensuremath{\longrightarrow} \ensuremath{{\mathbb R}P^2}$ are pairwise coincidence-free self-maps of $\ensuremath{{\mathbb R}P^2}$. But $\ensuremath{{\mathbb R}P^2}$ has the fixed point property, and so for $i=1,\ldots,n$, $f_{i}$ has a fixed point. Hence $\phi$ has at least $n$ fixed points. \end{proof} \subsection{$n$-valued maps of $\ensuremath{\mathbb R} P^{2k}$, $k>1$}\label{sec:proje} The aim of this section is to prove that $\ensuremath{\mathbb R} P^{2k}$ has the fixed point property for $n$-valued maps for all $n\geq 1$ and $k>1$. Indeed, we will show that every such $n$-valued map has at least $n$ fixed points. Given an $n$-valued map $\phi\colon\thinspace X \multimap X$ of a topological space $X$, we consider the corresponding map $\Phi\colon\thinspace X \ensuremath{\longrightarrow} D_n(X)$, and the induced homomorphism $\Phi_{\#}\colon\thinspace \pi_1(X) \ensuremath{\longrightarrow} \pi_1(D_n(X))$ on the level of fundamental groups, where $\pi_1(D_n(X))=B_n(X)$. By the short exact sequence~\reqref{sesbraid}, $P_n(X)$ is a normal subgroup of $B_n(X)$ of finite index $n!$, so the subgroup $H=\Phi_{\#}^{-1}(P_n(X))$ is a normal subgroup of $\pi_1(X)$ of finite index. 
Further, if $L=\pi_1(X)/H$, the composition $\pi_1(X) \stackrel{\Phi_{\#}}{\ensuremath{\longrightarrow}} B_n(X) \stackrel{\tau}{\ensuremath{\longrightarrow}} S_n$ is a homomorphism that induces a homomorphism from $L$ to $S_n$. \begin{prop}\label{prop:nielsen} Let $n\in \ensuremath{\mathbb N}$. Suppose that $X$ is a connected, locally arcwise-connected metric space. With the above notation, there exists a covering $q\colon\thinspace \widehat{X} \ensuremath{\longrightarrow} X$ of $X$ that corresponds to the subgroup $H$, and the $n$-valued map $\phi_1=\phi \circ q\colon\thinspace \widehat{X}\multimap X$ admits exactly $n!$ lifts, which are $n$-ordered maps from $\widehat{X}$ to $F_n(X)$. If one such lift $\widehat{\Phi}_{1}\colon\thinspace \widehat{X}\ensuremath{\longrightarrow} F_n(X)$ is given by $\widehat{\Phi}_{1}=(f_1,\ldots, f_n)$, where for $i=1,\ldots,n$, $f_i$ is a map from $\widehat{X}$ to $X$, then the other lifts are of the form $(f_{\sigma(1)},\ldots,f_{\sigma(n)})$, where $\sigma\in S_n$. \end{prop} \begin{proof} The first part is a consequence of~\cite[Theorem~5.1, Chapter~V, Section~5]{Mas}, using the observation that $S_{n}$ is the deck transformation group corresponding to the covering $\pi\colon\thinspace F_n(X) \ensuremath{\longrightarrow} D_n(X)$. The second part follows from the fact that $S_n$ acts freely on the covering space $F_n(X)$ by permuting coordinates. \end{proof} The fixed points of the $n$-valued map $\phi\colon\thinspace X\multimap X$ may be described in terms of the coincidences of the covering map $q\colon\thinspace \widehat{X} \ensuremath{\longrightarrow} X$ with the maps $f_{1},\ldots, f_{n}$ given in the statement of \repr{nielsen}.
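In displayed form, and with the notation of \repr{nielsen}, the correspondence established in the following proposition reads:
\begin{equation*}
\operatorname{\text{Fix}}(\phi)=q\Bigl(\bigcup_{i=1}^{n} \operatorname{\text{Coin}}(q, f_i)\Bigr), \quad\text{where}\quad \operatorname{\text{Coin}}(q, f_i)=\bigl\{\widehat{x}\in \widehat{X} \mid q(\widehat{x})=f_i(\widehat{x})\bigr\}.
\end{equation*}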
\begin{prop}\label{prop:coinfix} Let $n\in \ensuremath{\mathbb N}$, let $X$ be a connected, locally arcwise-connected, metric space, let $\phi\colon\thinspace X \multimap X$ be an $n$-valued map, and let ${\widehat \Phi_1}=(f_1,\ldots,f_n)\colon\thinspace \widehat{X} \ensuremath{\longrightarrow} F_n(X)$ be an $n$-ordered map that is a lift of $\phi_1=\phi\circ q\colon\thinspace \widehat{X} \multimap X$ as in \repr{nielsen}. Then the map $q$ restricts to a surjection $q\colon\thinspace \bigcup_{i=1}^{n} \operatorname{\text{Coin}}(q, f_i) \ensuremath{\longrightarrow} \operatorname{\text{Fix}}(\phi)$. Furthermore, the pre-image of a point $x\in \operatorname{\text{Fix}}(\phi)$ by this map is precisely $q^{-1}(x)$, namely the fibre over $x\in X$ of the covering map $q\colon\thinspace \widehat{X} \ensuremath{\longrightarrow} X$. \end{prop} \begin{proof} Let $\widehat{x}\in \operatorname{\text{Coin}}(q, f_i)$ for some $1\leq i\leq n$, and let $x=q(\widehat{x})$. Then $f_i(\widehat{x})=q(\widehat{x})$, and since $\Phi(x)= \pi\circ \widehat{\Phi}_1(\widehat{x})=\brak{f_1(\widehat{x}),\ldots,f_n(\widehat{x})}$, it follows that $x\in \phi(x)$, \emph{i.e.}\ $x\in \operatorname{\text{Fix}}(\phi)$, so the map is well defined. To prove surjectivity and the second part of the statement, it suffices to show that if $x \in \operatorname{\text{Fix}}(\phi)$, then any element $\widehat{x}$ of $q^{-1}(x)$ belongs to $\bigcup_{i=1}^{n} \operatorname{\text{Coin}}(q, f_i) $. So let $x\in \phi(x)$, and let $\widehat{x}\in \widehat{X}$ be such that $q(\widehat{x})=x$. 
By commutativity of the following diagram: \begin{equation*} \begin{tikzcd}[ampersand replacement=\&] \&\& F_{n}(X) \ar{d}{\pi}\\ \widehat{X} \ar[swap]{r}{q} \ar[dashrightarrow, end anchor=south west]{rru}{\widehat{\Phi}_1} \& X \ar[swap]{r}{\Phi} \& D_{n}(X), \end{tikzcd} \end{equation*} and the fact that $x\in \operatorname{\text{Fix}}(\phi)$, it follows that $x$ is one of the coordinates, the $j\up{th}$ coordinate say, of $\widehat{\Phi}_1(\widehat{x})$. This implies that $\widehat{x}\in \operatorname{\text{Coin}}(q, f_j)$, which completes the proof of the proposition. \end{proof} \begin{prop}\label{prop:nullhomo} Let $n,k>1$. If $\phi\colon\thinspace \St[2k] \multimap \ensuremath{\mathbb R} P^{2k}$ is an $n$-valued map, then $\phi$ is split, and for $i=1,\ldots,n$, there exist maps $f_i\colon\thinspace \St[2k] \ensuremath{\longrightarrow} \ensuremath{\mathbb R} P^{2k}$ for which $\phi=\{f_1,\ldots,f_n\}$. Further, $f_i$ is null homotopic for all $i\in \brak{1,\ldots,n}$. \end{prop} \begin{proof} The first part follows from \relem{split1}. It remains to prove the second part, \emph{i.e.}\ that each $f_i$ is null homotopic. Since $\St[2k]$ is simply connected, the set $[\mathbb{S}^{2k}, \mathbb{S}^{2k}]=[\mathbb{S}^{2k}, \mathbb{S}^{2k}]_{0}=\pi_{2k}(\mathbb{S}^{2k})$, where $[ \cdot , \cdot ]_{0}$ denotes basepoint-preserving homotopy classes of maps. Let $x_0\in \mathbb{S}^{2k}$ be a basepoint, let $p\colon\thinspace \St[2k] \ensuremath{\longrightarrow} \ensuremath{\mathbb R} P^{2k}$ be the two-fold covering, and let $\overline{x}_0=p(x_{0})\in \ensuremath{\mathbb R} P^{2k}$ be the basepoint of $\ensuremath{\mathbb R} P^{2k}$. 
Consider the natural map $[(\mathbb{S}^{2k}, x_0), (\mathbb{S}^{2k}, x_0)] \ensuremath{\longrightarrow} [(\mathbb{S}^{2k}, x_0), (\ensuremath{\mathbb R} P^{2k}, \overline{x}_0)]$ from the set of based homotopy classes of self-maps of $\St[2k]$ to the set of based homotopy classes of maps from $\St[2k]$ to $\ensuremath{\mathbb R} P^{2k}$ that associates to the homotopy class of a basepoint-preserving self-map of $\St[2k]$ the homotopy class of the composition of this self-map with $p$. This correspondence is an isomorphism. The covering map $p$ has topological degree $2$~\cite{Ep}, so the degree of a map $f\colon\thinspace \St[2k] \ensuremath{\longrightarrow} \ensuremath{\mathbb R} P^{2k}$ is an even integer (we use the system of local coefficients given by the orientation of $\ensuremath{\mathbb R} P^{2k}$~\cite{Ol}). Since $H^{l}( \ensuremath{\mathbb R} P^{2k} , \widetilde{\mathbb{Q}})=0$ for all $l\ne 2k$, where $\widetilde{\mathbb{Q}}$ denotes the rational coefficients twisted by the orientation, it follows that for $i\neq j$, the Lefschetz coincidence number $L(f_{i},f_{j})$ is equal to $\deg(f_i)$. But $f_i$ and $f_j$ are coincidence free, so their Lefschetz coincidence number must be zero, which implies that $\deg(f_i)=0$~\cite{GJ}. Since $n>1$, we conclude that $\deg(f_i)=0$ for all $1\leq i\leq n$, and the result follows. \end{proof} We are now able to prove the main result of this section, that $\ensuremath{\mathbb R} P^{2k}$ has the fixed point property for $n$-valued maps for all $k,n\geq 1$. \begin{proof}[Proof of \reth{rp2Kfpp}] The case $n=1$ is classical, so assume that $n\geq 2$. We use the notation introduced at the beginning of this section, taking $X=\ensuremath{\mathbb R} P^{2k}$. Since $\pi_1(\ensuremath{\mathbb R} P^{2k})\cong\ensuremath{\mathbb Z}_2$, $H$ is either $\pi_1(\ensuremath{\mathbb R} P^{2k})$ or the trivial group.
In the former case, the $n$-valued map $\phi\colon\thinspace \ensuremath{\mathbb R} P^{2k} \multimap \ensuremath{\mathbb R} P^{2k}$ is split, $\operatorname{\text{Fix}}(\phi)=\bigcup_{i=1}^{n} \operatorname{\text{Fix}}(f_i)$, where $\phi=\brak{f_1,\ldots,f_n}$, and for all $i=1,\ldots,n$, $f_{i}$ is a self-map of $\ensuremath{\mathbb R} P^{2k}$. Since $\ensuremath{\mathbb R} P^{2k}$ has the fixed point property and the maps $f_i$ are pairwise coincidence free, it follows that $\phi$ has at least $n$ fixed points. So suppose that $H$ is the trivial subgroup of $\pi_1(\ensuremath{\mathbb R} P^{2k})$. Then $\widehat{\ensuremath{\mathbb R} P^{2k}}=\St[2k]$, and $q$ is the covering map $p\colon\thinspace \St[2k] \ensuremath{\longrightarrow} \ensuremath{\mathbb R} P^{2k}$. We first consider the case $n=2$. Let $\phi\colon\thinspace \ensuremath{\mathbb R} P^{2k} \multimap \ensuremath{\mathbb R} P^{2k}$ be a $2$-valued map, and let $\widehat{\Phi}_1\colon\thinspace \mathbb{S}^{2k} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb R} P^{2k})$ be a lift of the map $\Phi_1=\Phi\circ p\colon\thinspace \mathbb{S}^{2k} \ensuremath{\longrightarrow} D_2(\ensuremath{\mathbb R} P^{2k})$ that factors through the projection $\pi\colon\thinspace F_2(\ensuremath{\mathbb R} P^{2k}) \ensuremath{\longrightarrow} D_2(\ensuremath{\mathbb R} P^{2k})$. By \repr{nielsen}, $\widehat{\Phi}_1=(f_1, f_2)$, where for $i=1,2$, $f_i\colon\thinspace \mathbb{S}^{2k} \ensuremath{\longrightarrow} \ensuremath{\mathbb R} P^{2k}$ is a single-valued map, and it follows from \repr{nullhomo} that $f_1$ and $f_2$ are null homotopic. If $\operatorname{\text{Coin}}(f_i, p)= \ensuremath{\varnothing}$ for some $i\in \brak{1,2}$, then the Lefschetz coincidence number $L(p,f_i)$ vanishes; on the other hand, arguing as in the second part of the proof of \repr{nullhomo}, $L(p,f_i)=\deg(p)=2$, which yields a contradiction. So $\operatorname{\text{Coin}}(f_i, p)\ne \ensuremath{\varnothing}$ for all $i\in \brak{1,2}$. Using the fact that $f_{1}$ and $f_{2}$ are coincidence free, we conclude that $\phi$ has at least two fixed points, and the result follows in this case.
Finally suppose that $n>2$. Arguing as in the case $n=2$, we obtain a lift of $\phi\circ p$ of the form $(f_1, \ldots, f_n)$, where for $i=1,\ldots,n$, $f_i\colon\thinspace \mathbb{S}^{2k} \ensuremath{\longrightarrow} \ensuremath{\mathbb R} P^{2k}$ is a map. For each $i=1,\ldots,n$ and any $j\in \brak{1,\ldots,n}$ with $j\neq i$, we may apply the above argument to $f_i$ and $f_j$ to obtain $\operatorname{\text{Coin}}(f_i, p)\ne \ensuremath{\varnothing}$. Hence $\phi$ has at least $n$ fixed points, and the result follows. \end{proof} \begin{rem} If $n>1$, we do not know whether there exists a non-split $n$-valued map $\phi\colon\thinspace \ensuremath{\mathbb R} P^{2k} \multimap \ensuremath{\mathbb R} P^{2k}$. \end{rem} \section{Deforming (split) $n$-valued maps to fixed point and root-free maps}\label{sec:fixfree} In this section, we generalise a standard procedure for deciding whether a single-valued map may be deformed to a fixed point free map to the $n$-valued case. We start by giving a necessary and sufficient condition for an $n$-valued map (resp.\ a split $n$-valued map) $\phi\colon\thinspace X \multimap X$ to be deformable to a fixed point free $n$-valued map (resp.\ split fixed point free $n$-valued map), at least in the case where $X$ is a manifold without boundary. This enables us to prove \reth{defchineg}. We then go on to give the analogous statements for roots. Recall from \resec{intro} that $D_{1,n}(X)$ is the quotient of $F_{n+1}(X)$ by the action of the subgroup $\brak{1} \times S_n$ of the symmetric group $S_{n+1}$, and that $B_{1,n}(X)=\pi_1(D_{1,n}(X))$. \begin{prop}\label{prop:defor} Let $n\in \ensuremath{\mathbb N}$, let $X$ be a metric space, and let $\phi\colon\thinspace X \multimap X$ be an $n$-valued map.
If $\phi$ can be deformed to a fixed point free $n$-valued map, then there exists a map $\Theta\colon\thinspace X\ensuremath{\longrightarrow} D_{1,n}(X)$ such that the following diagram is commutative up to homotopy: \begin{equation}\label{eq:commdiag3} \begin{tikzcd}[ampersand replacement=\&] \& \& D_{1,n}(X) \ar{d}{\iota_{1,n}}\\ X \ar[swap]{rr}{\ensuremath{\operatorname{\text{Id}}}_{X}\times \Phi} \ar[dashrightarrow, end anchor=south west]{rru}{\Theta} \& \& X \times D_{n}(X), \end{tikzcd} \end{equation} where $\iota_{1,n}\colon\thinspace D_{1,n}(X) \ensuremath{\longrightarrow} X \times D_n(X)$ is the inclusion map. Conversely, if $X$ is a manifold without boundary and there exists a map $\Theta\colon\thinspace X\ensuremath{\longrightarrow} D_{1,n}(X)$ such that diagram~\reqref{commdiag3} is commutative up to homotopy, then the $n$-valued map $\phi\colon\thinspace X \multimap X$ may be deformed to a fixed point free $n$-valued map. \end{prop} \begin{proof} For the first part, if $\phi'$ is a fixed point free deformation of $\phi$, then we may take the factorisation map $\Theta\colon\thinspace X\ensuremath{\longrightarrow} D_{1,n}(X)$ to be that defined by $\Theta(x)= (x, \Phi'(x))$. For the converse, the argument is similar to the proof of the case of single-valued maps, and is as follows. Let $\Theta\colon\thinspace X \ensuremath{\longrightarrow} D_{1,n}(X)$ be a homotopy factorisation that satisfies the hypotheses. Composing $\Theta$ with the projection $\overline{p}_{1,n}\colon\thinspace D_{1,n}(X)\ensuremath{\longrightarrow} X$ onto the first coordinate, we obtain a self-map of $X$ that is homotopic to the identity. Let $H\colon\thinspace X\times I \ensuremath{\longrightarrow} X$ be a homotopy between $\overline{p}_{1,n}\circ \Theta$ and $\ensuremath{\operatorname{\text{Id}}}_{X}$. 
Since $\overline{p}_{1,n}\colon\thinspace D_{1,n}(X)\ensuremath{\longrightarrow} X$ is a fibration and $\Theta$ is a lift of the restriction of $H$ to $X\times \brak{0}$, $H$ lifts to a homotopy $\widetilde{H}\colon\thinspace X\times I \ensuremath{\longrightarrow} D_{1,n}(X)$ such that $\widetilde{H}(\cdot, 0)=\Theta$. The restriction of $\widetilde{H}$ to $X\times \brak{1}$ is a lift of $\ensuremath{\operatorname{\text{Id}}}_{X}$, and its $D_{n}(X)$-component yields the required fixed point free deformation of $\phi$. \end{proof} For split $n$-valued maps, the correspondence given by \relem{split}(\ref{it:splitII}) gives rise to a statement analogous to that of \repr{defor} in terms of $F_{n}(X)$. \begin{prop}\label{prop:equivsplit} Let $n\in \ensuremath{\mathbb N}$, let $X$ be a metric space, and let $\phi\colon\thinspace X \multimap X$ be a split $n$-valued map. If $\phi$ can be deformed to a fixed point free $n$-valued map, and if $\widehat{\Phi}\colon\thinspace X \ensuremath{\longrightarrow} F_{n}(X)$ is a lift of $\phi$, then there exists a map $\widehat{\Theta}\colon\thinspace X \ensuremath{\longrightarrow} F_{n+1}(X)$ such that the following diagram is commutative up to homotopy: \begin{equation}\label{eq:commdiag4} \begin{tikzcd}[ampersand replacement=\&] \&\& F_{n+1}(X) \ar{d}{\widehat{\iota}_{n+1}}\\ X \ar[swap]{rr}{\ensuremath{\operatorname{\text{Id}}}_{X}\times \widehat{\Phi}} \ar[dashrightarrow, end anchor=south west]{rru}{\widehat{\Theta}} \&\& X \times F_{n}(X), \end{tikzcd} \end{equation} where $\widehat{\iota}_{n+1}\colon\thinspace F_{n+1}(X) \ensuremath{\longrightarrow} X \times F_n(X)$ is the inclusion map. Conversely, if $X$ is a manifold without boundary and there exists a map $\widehat{\Theta}\colon\thinspace X\ensuremath{\longrightarrow} F_{n+1}(X)$ such that diagram~\reqref{commdiag4} is commutative up to homotopy, then the split $n$-valued map $\phi\colon\thinspace X \multimap X$ may be deformed through split maps to a fixed point free split $n$-valued map. \end{prop} \begin{proof} Similar to that of \repr{defor}.
\end{proof} We now apply Propositions~\ref{prop:defor} and~\ref{prop:equivsplit} to prove \reth{defchineg}, which treats the case where $X$ is a compact surface without boundary (orientable or not) of non-positive Euler characteristic. \begin{proof}[Proof of \reth{defchineg}] The space $D_{1,n}(X)$ is a finite covering of $D_{1+n}(X)$, so it is a $K(\pi, 1)$ since $D_{1+n}(X)$ is a $K(\pi, 1)$ by~\cite[Corollary~2.2]{FaN}. To prove the `only if' implication of part~(\ref{it:defchinega}), diagram~\reqref{commdiag1} implies the existence of diagram~\reqref{commdiag3} using the fact that the space $D_{1,n}(X)$ is a $K(\pi, 1)$, where $\varphi=\Theta_{\#}$ is the homomorphism induced by $\Theta$ on the level of the fundamental groups. Conversely, diagram~\reqref{commdiag3} implies that the two maps $\iota_{1,n}\circ \Theta$ and $\ensuremath{\operatorname{\text{Id}}}_X\times \Phi$ are homotopic, but not necessarily by a basepoint-preserving homotopy, so diagram~\reqref{commdiag1} is commutative up to conjugacy. Let $\delta\in\pi_{1}(X) \times B_{n}(X)$ be such that $(\iota_{1,n})_{\#}\circ \varphi(\alpha)=\delta (\ensuremath{\operatorname{\text{Id}}}_X\times \phi)_{\#}(\alpha) \delta^{-1}$ for all $\alpha\in \pi_1(X)$, and let $\widehat{\delta}\in B_{1,n}(X)$ be an element such that $(\iota_{1,n})_{\#}(\widehat{\delta})=\delta.$ Considering the homomorphism $\varphi'\colon\thinspace \pi_1(X) \ensuremath{\longrightarrow} B_{1,n}(X)$ defined by $\varphi'(\alpha)=\widehat{\delta}^{-1}\varphi (\alpha)\widehat{\delta}$ for all $\alpha\in \pi_1(X)$, we obtain the commutative diagram~\reqref{commdiag1}, where we replace $\varphi$ by $\varphi'$. The proof of part~(\ref{it:defchinegb}) is similar, and is left to the reader. \end{proof} For the case of roots, we now give statements analogous to those of Propositions~\ref{prop:defor} and~\ref{prop:equivsplit} and of \reth{defchineg}. 
The proofs are similar to those of the corresponding statements for fixed points, and the details are left to the reader. \begin{prop}\label{prop:rootI} Let $n\in \ensuremath{\mathbb N}$, let $X$ and $Y$ be metric spaces, let $y_0\in Y$ be a basepoint, and let $\phi\colon\thinspace X \multimap Y$ be an $n$-valued map. If $\phi$ can be deformed to a root-free $n$-valued map, then there exists a map $\Theta\colon\thinspace X\ensuremath{\longrightarrow} D_{n}(Y\backslash\{y_0\})$ such that the following diagram is commutative up to homotopy: \begin{equation}\label{eq:commdiag3I} \begin{tikzcd}[ampersand replacement=\&] \& \& D_{n}(Y\backslash\{y_0\}) \ar{d}{\iota_{n}}\\ X \ar[swap]{rr}{ \Phi} \ar[dashrightarrow, end anchor=south west]{rru}{\Theta} \& \& D_{n}(Y), \end{tikzcd} \end{equation} where the map $\iota_{n}\colon\thinspace D_{n}(Y\backslash\{y_0\}) \ensuremath{\longrightarrow} D_n(Y)$ is induced by the inclusion map $Y\backslash \{y_0\} \mathrel{\lhook\joinrel\ensuremath{\longrightarrow}} Y$. Conversely, if $Y$ is a manifold without boundary and there exists a map $\Theta\colon\thinspace X\ensuremath{\longrightarrow} D_{n}(Y\backslash\{y_0\})$ such that diagram~\reqref{commdiag3I} is commutative up to homotopy, then the $n$-valued map $\phi\colon\thinspace X \multimap Y$ may be deformed to a root-free $n$-valued map. \end{prop} For split $n$-valued maps, the correspondence of \relem{split}(\ref{it:splitII}) gives rise to a statement analogous to that of Proposition \ref{prop:rootI} in terms of $F_{n}(Y)$. \begin{prop}\label{prop:equivsplitI} Let $n\in \ensuremath{\mathbb N}$, let $X$ and $Y$ be metric spaces, let $y_0\in Y$ be a basepoint, and let $\phi\colon\thinspace X \multimap Y$ be a split $n$-valued map.
If $\phi$ can be deformed to a root-free $n$-valued map then there exists a map $\widehat{\Theta}\colon\thinspace X \ensuremath{\longrightarrow} F_{n}(Y\backslash \{y_0\})$ and a lift $\widehat{\Phi}$ of $\phi$ such that the following diagram is commutative up to homotopy: \begin{equation}\label{eq:commdiag4I} \begin{tikzcd}[ampersand replacement=\&] \&\& F_{n}(Y\backslash \{y_0\}) \ar{d}{\widehat{\iota}_{n}}\\ X \ar[swap]{rr}{ \widehat{\Phi}} \ar[dashrightarrow, end anchor=south west]{rru}{\widehat{\Theta}} \&\& F_{n}(Y), \end{tikzcd} \end{equation} where the map $\widehat{\iota}_{n}\colon\thinspace F_{n}(Y\backslash \{y_0\}) \ensuremath{\longrightarrow} F_n(Y)$ is induced by the inclusion map $Y\backslash \{y_0\} \mathrel{\lhook\joinrel\ensuremath{\longrightarrow}} Y$. Conversely, if $Y$ is a manifold without boundary, and there exists a map $\widehat{\Theta}\colon\thinspace X\ensuremath{\longrightarrow} F_{n}(Y\backslash \{y_0\})$ such that diagram~\reqref{commdiag4I} is commutative up to homotopy, then the split $n$-valued map $\phi\colon\thinspace X \multimap Y$ may be deformed through split maps to a root-free split $n$-valued map. \end{prop} Propositions~\ref{prop:rootI} and~\ref{prop:equivsplitI} may be applied to the case where $X$ and $Y$ are compact surfaces without boundary of non-positive Euler characteristic to obtain the analogue of \reth{defchineg} for roots. \begin{thm}\label{th:defchinegI} Let $n\in \ensuremath{\mathbb N}$, and let $X$ and $Y$ be compact surfaces without boundary of non-positive Euler characteristic. 
\begin{enumerate}[(a)] \item\label{it:defchinegal} An $n$-valued map $\phi\colon\thinspace X \multimap Y$ can be deformed to a root-free $n$-valued map if and only if there is a homomorphism $\varphi\colon\thinspace \pi_1(X) \ensuremath{\longrightarrow} B_{n}(Y\backslash\{y_0\})$ that makes the following diagram commute: \begin{equation*} \begin{tikzcd}[ampersand replacement=\&] \&\& B_{n}(Y\backslash\{y_0\}) \ar{d}{(\iota_{n})_{\#}}\\ \pi_{1}(X) \ar[swap]{rr}{\Phi_{\#}} \ar[dashrightarrow, end anchor=south west]{rru}{\varphi} \&\& B_{n}(Y), \end{tikzcd} \end{equation*} where $\iota_{n}\colon\thinspace D_{n}(Y\backslash\{y_0\}) \ensuremath{\longrightarrow} D_{n}(Y)$ is induced by the inclusion map $Y\backslash \{y_0\} \mathrel{\lhook\joinrel\ensuremath{\longrightarrow}} Y$, and $\Phi\colon\thinspace X\ensuremath{\longrightarrow} D_{n}(Y)$ is the $n$-unordered map associated to $\phi$. \item\label{it:defchinegbI} A split $n$-valued map $\phi\colon\thinspace X \multimap Y$ can be deformed to a root-free $n$-valued map if and only if there exist a lift $\widehat{\Phi}\colon\thinspace X \ensuremath{\longrightarrow} F_n(Y)$ of $\phi$ and a homomorphism $\widehat{\varphi}\colon\thinspace \pi_1(X) \ensuremath{\longrightarrow} P_{n}(Y\backslash\{y_0\})$ that make the following diagram commute: \begin{equation*} \begin{tikzcd}[ampersand replacement=\&] \&\& P_{n}(Y\backslash\{y_0\}) \ar{d}{(\widehat{\iota}_{n})_{\#}}\\ \pi_{1}(X) \ar[swap]{rr}{\widehat{\Phi}_{\#}} \ar[dashrightarrow, end anchor=south west]{rru}{\widehat{\varphi}} \&\& P_{n}(Y), \end{tikzcd} \end{equation*} where $\widehat{\iota}_{n}\colon\thinspace F_{n}(Y\backslash\{y_0\}) \ensuremath{\longrightarrow} F_{n}(Y)$ is induced by the inclusion map $Y\backslash \{y_0\} \mathrel{\lhook\joinrel\ensuremath{\longrightarrow}} Y$. 
\end{enumerate} \end{thm} \section{An application to split $2$-valued maps of the $2$-torus}\label{sec:toro} In this section, we will use some of the ideas and results of \resec{fixfree} to study the fixed point theory of $2$-valued maps of the $2$-torus $\ensuremath{\mathbb{T}^{2}}$. We restrict our attention to the case where the maps are split, \emph{i.e.}\ we consider $2$-valued maps of the form $\phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ that admit a lift $\widehat{\Phi}\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$, where $\widehat{\Phi}=(f_1, f_2)$, $f_1$ and $f_2$ being coincidence-free self-maps of $\ensuremath{\mathbb{T}^{2}}$. We classify the set of homotopy classes of split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$, and we study the question of the characterisation of those split $2$-valued maps that can be deformed to fixed point free $2$-valued maps. The case of arbitrary $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$ will be treated in a forthcoming paper. In \resec{toro2}, we give presentations of the groups $P_{2}(\ensuremath{\mathbb{T}^{2}})$, $B_{2}(\ensuremath{\mathbb{T}^{2}})$ and $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ that will be used in the following sections, where $1$ denotes a basepoint of $\ensuremath{\mathbb{T}^{2}}$. In \resec{descript}, we describe the set of based and free homotopy classes of split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$. In \resec{fptsplit2}, we give a formula for the Nielsen number, and we derive a necessary condition for such a split $2$-valued map to be deformable to a fixed point free $2$-valued map. We then give an infinite family of homotopy classes of split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$ that satisfy this condition and that may be deformed to fixed point free $2$-valued maps. 
To facilitate the calculations, in \resec{p2Tminus1}, we shall show that the fixed point problem is equivalent to a root problem. \subsection{The groups $P_{2}(\ensuremath{\mathbb{T}^{2}})$, $B_{2}(\ensuremath{\mathbb{T}^{2}})$ and $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$}\label{sec:toro2} In this section, we give presentations of $P_{2}(\ensuremath{\mathbb{T}^{2}})$, $B_{2}(\ensuremath{\mathbb{T}^{2}})$ and $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ that will be used in the following sections. Other presentations of these groups may be found in the literature, see~\cite{Bel,Bi,GM,Sco} for example. We start by considering the group $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$. If $u$ and $v$ are elements of a group $G$, we denote their commutator $uvu^{-1}v^{-1}$ by $[u,v]$, and the commutator subgroup of $G$ by $\Gamma_{2}(G)$. If $A$ is a subset of $G$ then $\ang{\!\ang{A}\!}_{G}$ will denote the normal closure of $A$ in $G$. \begin{prop}\label{prop:presP2T1} The group $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ admits the following presentation: \begin{enumerate} \item[generators:] $\rho_{1,1}$, $\rho_{1,2}$, $\rho_{2,1}$, $\rho_{2,2}$, $B_{1,2}$, $B$ and $B'$. \item[relations:]\mbox{} \begin{enumerate}[(a)] \item\label{it:presP2T1a} $\rho_{2,1}\rho_{1,1}\rho_{2,1}^{-1}=B_{1,2}\rho_{1,1}B_{1,2}^{-1}$. \item $\rho_{2,1}\rho_{1,2}\rho_{2,1}^{-1}=B_{1,2}\rho_{1,2}\rho_{1,1}^{-1}B_{1,2}\rho_{1,1}B_{1,2}^{-1}$. \item $\rho_{2,2}\rho_{1,1}\rho_{2,2}^{-1}=\rho_{1,1}B_{1,2}^{-1}$. \item $\rho_{2,2}\rho_{1,2}\rho_{2,2}^{-1}=B_{1,2}\rho_{1,2}B_{1,2}^{-1}$. \item\label{it:presP2T1e} $\rho_{2,1}B\rho_{2,1}^{-1}=B$ and $\rho_{2,2}B\rho_{2,2}^{-1}=B$. \item\label{it:presP2T1f} $\rho_{2,1}B_{1,2}\rho_{2,1}^{-1}=B_{1,2} \rho_{1,1}^{-1}B_{1,2} \rho_{1,1}B_{1,2}^{-1}$ and $\rho_{2,2}B_{1,2}\rho_{2,2}^{-1}=B_{1,2} \rho_{1,2}^{-1}B_{1,2} \rho_{1,2}B_{1,2}^{-1}$. 
\item\label{it:relf} $B'\rho_{1,1}B'^{-1}=\rho_{1,1}$ and $B'\rho_{1,2}B'^{-1}=\rho_{1,2}$. \item\label{it:relg} $B'B_{1,2}B'^{-1}=B_{1,2}^{-1}B^{-1} B_{1,2}BB_{1,2}$ and $B'BB'^{-1}=B_{1,2}^{-1}BB_{1,2}$. \item\label{it:relh} $[\rho_{1,1},\rho_{1,2}^{-1}]=BB_{1,2}$ and $[\rho_{2,1},\rho_{2,2}^{-1}]=B_{1,2}B'$. \end{enumerate} \end{enumerate} In particular, $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ is a semi-direct product of the free group of rank three generated by $\brak{\rho_{1,1},\rho_{1,2},B_{1,2}}$ by the free group of rank two generated by $\brak{\rho_{2,1},\rho_{2,2}}$. \end{prop} Geometric representatives of the generators of $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ are illustrated in Figure~\ref{fig:gens}. The torus is obtained from this figure by identifying the boundary to a point. \begin{figure} \centering \caption{Geometric representatives of the generators of $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$.}\label{fig:gens} \end{figure} \begin{rem} The inclusion of $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})$ in $P_{2}(\ensuremath{\mathbb{T}^{2}})$ induces a surjective homomorphism $\map{\alpha}{P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})}{P_{2}(\ensuremath{\mathbb{T}^{2}})}$ that sends $B$ and $B'$ to the trivial element, and sends each of the remaining generators (considered as an element of $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})$) to itself (considered as an element of $P_2(\ensuremath{\mathbb{T}^{2}})$). Applying this to the presentation of $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})$ given by \repr{presP2T1}, we obtain the presentation of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ given in~\cite{FH}.
\end{rem} \begin{proof}[Proof of \repr{presP2T1}] Consider the following Fadell-Neuwirth short exact sequence: \begin{equation}\label{eq:fnses} 1\ensuremath{\longrightarrow} P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}},x_{1}) \ensuremath{\longrightarrow} P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, (x_{1},x_{2})) \xrightarrow{(p_{2})_{\#}} P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2}) \ensuremath{\longrightarrow} 1, \end{equation} where $(p_{2})_{\#}$ is the homomorphism given geometrically by forgetting the first string and induced by the projection $p_{2}\colon\thinspace F_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})\ensuremath{\longrightarrow} \ensuremath{\mathbb{T}^{2}}\setminus\brak{1}$ onto the second coordinate. The kernel $K=P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}},x_{1})$ of $(p_{2})_{\#}$ (resp.\ the quotient $Q=P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})$) is a free group of rank three (resp.\ two). It will be convenient to choose presentations for these two groups that have an extra generator. From Figure~\ref{fig:gens}, we take $K$ (resp.\ $Q$) to be generated by $X=\brak{\rho_{1,1},\rho_{1,2},B_{1,2},B}$ (resp.\ $Y=\brak{\rho_{2,1},\rho_{2,2},B'}$) subject to the single relation $[\rho_{1,1},\rho_{1,2}^{-1}]=BB_{1,2}$ (resp.\ $[\rho_{2,1},\rho_{2,2}^{-1}]=B'$). We apply standard methods to obtain a presentation of the group extension $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, (x_{1},x_{2}))$~\cite[Proposition~1, p.~139]{Jo}. This group is generated by the union of $X$ with coset representatives of $Y$, which we take to be the same elements geometrically, but considered as elements of $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, (x_{1},x_{2}))$. This yields the given generating set. There are three types of relation. The first is that of $K$.
The second type of relation is obtained by lifting the relation of $Q$ to $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, (x_{1},x_{2}))$, which gives rise to the relation $[\rho_{2,1},\rho_{2,2}^{-1}]B'^{-1}=B_{1,2}$. The third type of relation is obtained by rewriting the conjugates of the elements of $X$ by the chosen coset representatives of the elements of $Y$ in terms of the elements of $X$ using the geometric representatives of $X$ and $Y$ illustrated in Figure~\ref{fig:gens}. We leave the details to the reader. The last part of the statement is a consequence of the fact that $K$ (resp.\ $Q$) is a free group of rank three (resp.\ two), so the short exact sequence~\reqref{fnses} splits. \end{proof} \begin{rem} For future purposes, it will be convenient to have the following relations at our disposal: \begin{align*} \rho_{2,1}^{-1}\rho_{1,1}\rho_{2,1}&=\rho_{1,1}B_{1,2}^{-1}\rho_{1,1}B_{1,2}\rho_{1,1}^{-1} & \rho_{2,1}^{-1}\rho_{1,2}\rho_{2,1}&=\rho_{1,1}B_{1,2}^{-1}\rho_{1,1}^{-1} \rho_{1,2}B_{1,2}^{-1} \rho_{1,1}B_{1,2}\rho_{1,1}^{-1}\\ \rho_{2,2}^{-1}\rho_{1,1}\rho_{2,2}&=\rho_{1,1}\rho_{1,2}B_{1,2}\rho_{1,2}^{-1} & \rho_{2,2}^{-1}\rho_{1,2}\rho_{2,2}&=\rho_{1,2}B_{1,2}^{-1} \rho_{1,2}B_{1,2}\rho_{1,2}^{-1}\\ \rho_{2,1}^{-1}B_{1,2}\rho_{2,1}&=\rho_{1,1}B_{1,2}\rho_{1,1}^{-1} & \rho_{2,2}^{-1}B_{1,2}\rho_{2,2}&=\rho_{1,2}B_{1,2}\rho_{1,2}^{-1}. \end{align*} As in the proof of \repr{presP2T1}, these equalities may be derived geometrically. \end{rem} The presentation of $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\}, (x_1,x_2))$ given by \repr{presP2T1} may be modified to obtain another presentation that highlights its algebraic structure as a semi-direct product of free groups of finite rank. \begin{prop}\label{prop:presTminus1alta} The group $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\}, (x_1,x_2))$ admits the following presentation: \begin{enumerate} \item[generators:] $B$, $u$, $v$, $x$ and $y$. 
\item[relations:]\mbox{} \begin{enumerate}[(a)] \item\label{it:altpresa} $xux^{-1}=u$. \item\label{it:altpresb} $xvx^{-1}=v[v^{-1}, u]B^{-1}[u, v^{-1}]$. \item\label{it:altpresc} $xBx^{-1}=u[v^{-1}, u]B[u, v^{-1}]u^{-1}$. \item\label{it:altpresd} $yuy^{-1}=v[v^{-1}, u]Buv^{-1}$. \item\label{it:altprese} $yvy^{-1}=v$. \item\label{it:altpresf} $yBy^{-1}=v[v^{-1},u]B[u,v^{-1}]v^{-1}=uvu^{-1}Buv^{-1}u^{-1}$. \end{enumerate} \end{enumerate} In particular, $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\}, (x_1,x_2))$ is a semi-direct product of the free group of rank three generated by $\brak{u,v,B}$ by the free group of rank two generated by $\brak{x,y}$. \end{prop} \begin{proof} Using relation~(\ref{it:relh}) of \repr{presP2T1}, we define $B'$ as $B_{1,2}^{-1} [\rho_{2,1},\rho_{2,2}^{-1}]$ and $B_{1,2}$ as $B^{-1} [\rho_{1,1},\rho_{1,2}^{-1}]$. We then apply the following change of variables: \begin{equation}\label{eq:uvxy} \text{$u=\rho_{1,1}$, $v=\rho_{1,2}$, $x=\rho_{1,1}B_{1,2}^{-1}\rho_{2,1}$ and $y=\rho_{1,2}B_{1,2}^{-1}\rho_{2,2}$.} \end{equation} Relations~(\ref{it:presP2T1a})--(\ref{it:presP2T1e}) of \repr{presP2T1} may be seen to give rise to relations~(\ref{it:altpresa})--(\ref{it:altpresf}) of \repr{presTminus1alta}. Rewritten in terms of the generators of \repr{presTminus1alta}, relations~(\ref{it:presP2T1f})--(\ref{it:relg}) of \repr{presP2T1} are consequences of relations~(\ref{it:altpresa})--(\ref{it:altpresf}) of \repr{presTminus1alta}. To see this, using the relations of \repr{presTminus1alta}, first note that: \begin{align*} xB_{1,2}x^{-1}&= x B^{-1} [u,v^{-1}] x^{-1}=u[v^{-1}, u]B^{-1} [u, v^{-1}]u^{-1} \ldotp \bigl[u, [v^{-1}, u] B[u, v^{-1}] v^{-1}\bigr]\\ &= B^{-1} [u,v^{-1}] = B_{1,2}\;\text{and}\\ yB_{1,2}y^{-1} &= y B^{-1} [u,v^{-1}] y^{-1} = v[v^{-1},u]B^{-1}[u,v^{-1}]v^{-1}\ldotp \bigl[ v[v^{-1}, u]Buv^{-1}, v^{-1}\bigr]\\ &= B^{-1} [u,v^{-1}] = B_{1,2}. 
\end{align*} In light of these relations, it is convenient to carry out the calculations using $B_{1,2}$ instead of $B$. In conjunction with the relations of the preceding remark, we obtain the following relations: \begin{equation}\label{eq:conjxyuv} \left\{ \begin{aligned} yuy^{-1}&=vB_{1,2}^{-1}uv^{-1},\; xvx^{-1}=uvu^{-1} B_{1,2}\\ y^{-1}uy&=B_{1,2} v^{-1}uv,\; x^{-1}vx=u^{-1}v B_{1,2}^{-1}u, \end{aligned} \right. \end{equation} from which it follows that: \begin{align*} \rho_{2,1}B_{1,2}\rho_{2,1}^{-1}&=B_{1,2}u^{-1}x B_{1,2} x^{-1}u B_{1,2}^{-1}= B_{1,2}u^{-1} B_{1,2}u B_{1,2}^{-1}=B_{1,2} \rho_{1,1}^{-1}B_{1,2} \rho_{1,1}B_{1,2}^{-1}\;\text{and}\\ \rho_{2,2}B_{1,2}\rho_{2,2}^{-1}&=B_{1,2}v^{-1}y B_{1,2} y^{-1}v B_{1,2}^{-1}=B_{1,2}v^{-1} B_{1,2} v B_{1,2}^{-1}= B_{1,2} \rho_{1,2}^{-1}B_{1,2} \rho_{1,2}B_{1,2}^{-1}. \end{align*} Thus relations~(\ref{it:altpresa})--(\ref{it:altpresf}) of \repr{presTminus1alta} imply relations~(\ref{it:presP2T1f}) of \repr{presP2T1}. Now $B'= B_{1,2}^{-1}[\rho_{2,1},\rho_{2,2}^{-1}]$, and a straightforward computation shows that: \begin{align*} B'=u^{-1}x y^{-1}vB_{1,2}^{-1} x^{-1}u v^{-1}y&=u^{-1}x y^{-1}vB_{1,2}^{-1} x^{-1}u v^{-1} \ldotp xyx^{-1}\ldotp xy^{-1}x^{-1}y\\ &=[v^{-1},u]B_{1,2}\ldotp xy^{-1}x^{-1}y. \end{align*} One may then check that: \begin{align*} xy^{-1}x^{-1}y u y^{-1}xyx^{-1}&=B_{1,2}^{-1}[u,v^{-1}] u[v^{-1}, u] B_{1,2}\;\text{and}\\ xy^{-1}x^{-1}y v y^{-1}xyx^{-1}&=B_{1,2}^{-1}[u,v^{-1}] v [v^{-1}, u] B_{1,2}, \end{align*} and that: \begin{align*} B' \rho_{1,1}B'^{-1}&=B' uB'^{-1}=[v^{-1},u] B_{1,2}\ldotp xy^{-1}x^{-1}y u y^{-1}xyx^{-1} B_{1,2}^{-1} [u,v^{-1}]=u=\rho_{1,1}\; \text{and}\\ B' \rho_{1,2} B'^{-1}&= B' vB'^{-1}=[v^{-1},u] B_{1,2}\ldotp xy^{-1}x^{-1}y v y^{-1}xyx^{-1} B_{1,2}^{-1} [u,v^{-1}]=v=\rho_{1,2}. \end{align*} Hence relations~(\ref{it:altpresa})--(\ref{it:altpresf}) of \repr{presTminus1alta} imply relations~(\ref{it:relf}) of \repr{presP2T1}. 
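As a mechanical sanity check of the relations~\reqref{conjxyuv} (not part of the proof), note that conjugation by $x$ fixes $u$ and $B_{1,2}$, so applying it to the claimed expression for $x^{-1}vx$ must freely reduce to $v$; similarly, conjugation by $y$ applied to the expression for $y^{-1}uy$ must return $u$. The following Python sketch performs these free reductions; the encoding of words as lists of signed letters and all helper names are ours, with the letter $b$ standing for $B_{1,2}$.

```python
def free_reduce(word):
    """Freely reduce a word, encoded as a list of (letter, ±1) pairs."""
    out = []
    for g, e in word:
        if out and out[-1] == (g, -e):
            out.pop()            # cancel an adjacent g^e g^{-e} pair
        else:
            out.append((g, e))
    return out

def inv(word):
    """Inverse of a word: reverse it and flip every exponent."""
    return [(g, -e) for g, e in reversed(word)]

def apply_conj(phi, word):
    """Apply a substitution phi (letter -> word) to a word, then reduce."""
    out = []
    for g, e in word:
        out += phi[g] if e == 1 else inv(phi[g])
    return free_reduce(out)

# Conjugation by x: x u x^{-1} = u, x v x^{-1} = u v u^{-1} b, x b x^{-1} = b.
conj_x = {'u': [('u', 1)],
          'v': [('u', 1), ('v', 1), ('u', -1), ('b', 1)],
          'b': [('b', 1)]}
# Conjugation by y: y u y^{-1} = v b^{-1} u v^{-1}, y v y^{-1} = v, y b y^{-1} = b.
conj_y = {'u': [('v', 1), ('b', -1), ('u', 1), ('v', -1)],
          'v': [('v', 1)],
          'b': [('b', 1)]}

# x^{-1} v x = u^{-1} v b^{-1} u and y^{-1} u y = b v^{-1} u v:
# conjugating each right-hand side must recover v (resp. u).
assert apply_conj(conj_x, [('u', -1), ('v', 1), ('b', -1), ('u', 1)]) == [('v', 1)]
assert apply_conj(conj_y, [('b', 1), ('v', -1), ('u', 1), ('v', 1)]) == [('u', 1)]
```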
Furthermore, \begin{align*} B' B_{1,2}B'^{-1} &= v^{-1}uvu^{-1} B_{1,2} xy^{-1}x^{-1}y B_{1,2} y^{-1}xyx^{-1} B_{1,2}^{-1} uv^{-1}u^{-1}v\\ &= v^{-1}uvu^{-1} B_{1,2} uv^{-1}u^{-1}v=B_{1,2}^{-1}\ldotp B_{1,2} [v^{-1},u] B_{1,2} [u,v^{-1}] B_{1,2}^{-1} \ldotp B_{1,2}\\ &= B_{1,2}^{-1} B^{-1} B_{1,2} B B_{1,2}, \end{align*} and since \begin{align*} xy^{-1}x^{-1}y [u , v^{-1}] y^{-1}xyx^{-1} &= \left[B_{1,2}^{-1}[u,v^{-1}] u[v^{-1}, u] B_{1,2}, B_{1,2}^{-1}[u,v^{-1}] v^{-1} [v^{-1}, u] B_{1,2}\right]\\ &= B_{1,2}^{-1}[u,v^{-1}] B_{1,2}, \end{align*} we obtain: \begin{align*} B' B B'^{-1} &=[v^{-1},u]B_{1,2} B_{1,2}^{-1}[u,v^{-1}] B_{1,2}^{-1} B_{1,2} B_{1,2}^{-1}[u,v^{-1}]=B_{1,2}^{-1} [u,v^{-1}] B_{1,2}^{-1} \ldotp B_{1,2}\\ &=B_{1,2}^{-1} B B_{1,2}. \end{align*} Hence relations~(\ref{it:altpresa})--(\ref{it:altpresf}) of \repr{presTminus1alta} imply relations~(\ref{it:relg}) of \repr{presP2T1}. This proves the first part of the statement. The last part of the statement is a consequence of the nature of the presentation. \end{proof} The homomorphism $\map{\alpha}{P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})}{P_{2}(\ensuremath{\mathbb{T}^{2}})}$ mentioned in the remark that follows the statement of \repr{presP2T1} may be used to obtain a presentation of $P_{2}(T)$ in terms of the generators of \repr{presTminus1alta}. To do so, we first show that $\ker{\alpha}$ is the normal closure in $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ of $B$ and $B'$. \begin{prop}\label{prop:presTminus2alta} $\ker{\alpha}=\ang{\!\ang{B,B'}\!}_{P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})}$. \end{prop} \begin{proof} Consider the presentation of $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ given in \repr{presP2T1}, as well as the short exact sequence~\reqref{fnses} and the surjective homomorphism $\map{\alpha}{P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})}{P_{2}(\ensuremath{\mathbb{T}^{2}})}$. 
In terms of the generators of \repr{presP2T1}, for $i=1,2$, $\alpha$ sends $\rho_{i,j}$ (resp.\ $B_{1,2}$) (considered as an element of $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, (x_{1},x_{2}))$) to $\rho_{i,j}$ (resp.\ to $B_{1,2}$) (considered as an element of $P_{2}(\ensuremath{\mathbb{T}^{2}})$), and it sends $B$ and $B'$ to the trivial element. Since $B,B'\in \ker{\alpha}$, it is clear that $\ang{\!\ang{B,B'}\!}_{P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})}\subset \ker{\alpha}$. We now proceed to prove the converse inclusion. Using the projection $(p_{2})_{\#}$ and $\alpha$, we obtain the following commutative diagram of short exact sequences: \begin{equation}\label{eq:keralpha} \begin{gathered} \begin{tikzcd}[ampersand replacement=\&] \& 1 \ar{d} \& 1 \ar{d} \& 1 \ar{d} \&\\ 1 \ar{r} \& \ker{\overline{\alpha}} \ar{r} \ar[dashed]{d}{\tau\left\lvert_{\ker{\overline{\alpha}}}\right.} \& P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}},x_{1}) \ar[dashed]{r}{\overline{\alpha}} \ar{d}{\tau} \& P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{x_{2}},x_{1}) \ar{r} \ar{d} \& 1\\ 1 \ar{r} \& \ker{\alpha} \ar{r} \ar[dashed, shift left=1]{d}{(p_{2})_{\#}\left\lvert_{\ker{\alpha}}\right.} \& P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, (x_{1},x_{2})) \ar{r}{\alpha} \ar{d}{(p_{2})_{\#}} \& P_{2}(\ensuremath{\mathbb{T}^{2}}, (x_{1},x_{2})) \ar{r} \ar{d}{(p_{2}')_{\#}} \& 1\\ 1 \ar{r} \& \ker{\alpha'} \ar{r} \ar{d} \ar[dashed, shift left=1]{u}{s} \& P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2}) \ar{r}{\alpha'} \ar{d} \& P_{1}(\ensuremath{\mathbb{T}^{2}},x_{2}) \ar{r} \ar{d} \& 1.\\ \& 1 \& 1 \& 1 \& \end{tikzcd} \end{gathered} \end{equation} Diagram~\reqref{keralpha} is constructed in the following manner: the surjective homomorphism $\map{\alpha'}{P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})}{P_{1}(\ensuremath{\mathbb{T}^{2}},x_{2})}$ is induced by the inclusion 
$\map{\iota'}{\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}}{\ensuremath{\mathbb{T}^{2}}}$, and the right-hand column is a Fadell-Neuwirth short exact sequence, where the fibration $\map{p_{2}'}{F_{2}(\ensuremath{\mathbb{T}^{2}})}{\ensuremath{\mathbb{T}^{2}}}$ is given by projecting onto the second coordinate. The homomorphism $\map{\tau}{P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}},x_{1})}{P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, (x_{1},x_{2}))}$ is interpreted as inclusion. Taking $\brak{\rho_{2,1},\rho_{2,2},B'}$ to be a generating set of $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})$ subject to the relation $[\rho_{2,1},\rho_{2,2}^{-1}]=B'$, as for $\alpha$, we see that $\alpha'(\rho_{2,j})=\rho_{2,j}$ and $\alpha'(B')=1$. Alternatively, we may consider $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})$ to be the free group on $\brak{\rho_{2,1},\rho_{2,2}}$, and $P_{1}(\ensuremath{\mathbb{T}^{2}},x_{2})$ to be the free Abelian group on $\brak{\rho_{2,1},\rho_{2,2}}$, so $\alpha'$ is Abelianisation, and $\ker{\alpha'}=\Gamma_{2}(P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2}))$. Interpreting $\alpha'$ as the canonical projection of $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})$ onto its quotient by the normal closure of the element $[\rho_{2,1},\rho_{2,2}^{-1}]$ in $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})$, we see that $\ker{\alpha'}=\ang{\!\ang{\,[\rho_{2,1},\rho_{2,2}^{-1}]\,}\!}_{P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})}$. But $[\rho_{2,1},\rho_{2,2}^{-1}]=B'$ in $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})$, hence: \begin{equation}\label{eq:keralphaprime} \ker{\alpha'}=\ang{\!\ang{B'}\!}_{P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1},x_{2})}. \end{equation} The fact that $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})$ is a free group implies that $\ker{\alpha'}$ is too (albeit of infinite rank). 
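The fact that $B'$ lies in $\ker{\alpha'}$ can also be seen by a trivial exponent-sum computation: the Abelianisation of the free group on $\brak{\rho_{2,1},\rho_{2,2}}$ records only the exponent sum of each generator, and these vanish on any commutator. A minimal Python illustration (the word encoding and helper names are ours):

```python
from collections import Counter

def abelianise(word):
    """Image in Z^2 of a word of F(r21, r22): the exponent sum of each letter."""
    sums = Counter()
    for g, e in word:
        sums[g] += e
    return {g: s for g, s in sums.items() if s != 0}

# B' = [rho_{2,1}, rho_{2,2}^{-1}] = rho_{2,1} rho_{2,2}^{-1} rho_{2,1}^{-1} rho_{2,2}
b_prime = [('r21', 1), ('r22', -1), ('r21', -1), ('r22', 1)]
# Every commutator has zero exponent sums, so B' lies in ker(alpha').
assert abelianise(b_prime) == {}
```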
The commutativity of the lower right-hand square of~\reqref{keralpha} is a consequence of the following commutative square: \begin{equation*} \begin{tikzcd}[ampersand replacement=\&] F_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}) \ar[r, "\iota"] \ar[d, "p_{2}"] \& F_{2}(\ensuremath{\mathbb{T}^{2}}) \ar[d, "p_{2}'"]\\ F_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}) \ar[r, "\iota'"] \& F_{1}(\ensuremath{\mathbb{T}^{2}}), \end{tikzcd} \end{equation*} where $\map{\iota}{F_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})}{F_{2}(\ensuremath{\mathbb{T}^{2}})}$ is induced by the inclusion $\iota'$. Together with $\alpha$ and $\alpha'$, the second two columns of~\reqref{keralpha} give rise to a homomorphism $\map{\overline{\alpha}}{P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}}, x_{1})}{P_{1}(\ensuremath{\mathbb{T}^{2}} \setminus\brak{x_{2}},x_{1})}$. Taking $\brak{\rho_{1,1},\rho_{1,2},B_{1,2},B}$ to be a generating set of $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}}, x_{1})$, by commutativity of the diagram, we see that $\overline{\alpha}$ sends each of $\rho_{1,1}$, $\rho_{1,2}$ and $B_{1,2}$ (considered as an element of $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}}, x_{1})$) to itself (considered as an element of $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{x_{2}}, x_{1})$), and sends $B$ to the trivial element. In particular, $\overline{\alpha}$ is surjective. As for $\alpha'$, we may consider $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}}, x_{1})$ to be the free group of rank three generated by $\brak{\rho_{1,1},\rho_{1,2},B_{1,2}}$, and $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{x_{2}}, x_{1})$ to be the group generated by $\brak{\rho_{1,1},\rho_{1,2},B_{1,2}}$ subject to the relation $[\rho_{1,1},\rho_{1,2}^{-1}]=B_{1,2}$.
The homomorphism $\overline{\alpha}$ may thus be interpreted as the canonical projection of $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}}, x_{1})$ onto its quotient by $\ang{\!\ang{[\rho_{1,1},\rho_{1,2}^{-1}]B_{1,2}^{-1}}\!}_{P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}}, x_{1})}$. But by relation~(\ref{it:relh}) of \repr{presP2T1}, $B=[\rho_{1,1},\rho_{1,2}^{-1}]B_{1,2}^{-1}$, hence: \begin{equation*} \ker{\overline{\alpha}}=\ang{\!\ang{B}\!}_{P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}},x_{1})}. \end{equation*} By exactness, the first (resp.\ second) two rows of~\reqref{keralpha} give rise to an induced homomorphism $\map{\tau\left\lvert_{\ker{\overline{\alpha}}}\right.}{\ker{\overline{\alpha}}}{\ker{\alpha}}$ (resp.\ $\map{(p_{2})_{\#}\left\lvert_{\ker{\alpha}}\right.}{\ker{\alpha}}{\ker{\alpha'}}$), and $\tau\left\lvert_{\ker{\overline{\alpha}}}\right.$ is injective because $\tau$ is. The homomorphism $(p_{2})_{\#}\left\lvert_{\ker{\alpha}}\right.$ is surjective, because by~\reqref{keralphaprime}, any element $x$ of $\ker{\alpha'}$ may be written as a product of conjugates of $B'$ and its inverse by products of $\rho_{2,1}$, $\rho_{2,2}$ and $B'$. This expression, considered as an element of $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, (x_{1},x_{2}))$, belongs to $\ker{\alpha}$, and its image under $(p_{2})_{\#}$ is equal to $x$. The fact that $\im{\tau\left\lvert_{\ker{\overline{\alpha}}}\right.}\subset \ker{(p_{2})_{\#}\left\lvert_{\ker{\alpha}}\right.}$ follows from exactness of the second column of~\reqref{keralpha}. Conversely, if $z\in \ker{(p_{2})_{\#}\left\lvert_{\ker{\alpha}}\right.}$ then $z\in \im{\tau}$ by exactness of the second column. So there exists $y\in P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}},x_{1})$ such that $\tau(y)=z$. But $\overline{\alpha}(y)=\alpha(\tau(y))=\alpha(z)=1$, and hence $y\in \ker{\overline{\alpha}}$.
This proves that $\ker{(p_{2})_{\#}\left\lvert_{\ker{\alpha}}\right.}\subset \im{\tau\left\lvert_{\ker{\overline{\alpha}}}\right.}$, and we deduce that the first column is exact. Finally, since $\ker{\alpha'}$ is free, we may pick an infinite basis consisting of conjugates of $B'$ by certain elements of $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})$. Further, there exists a section $\map{s}{\ker{\alpha'}}{\ker{\alpha}}$ for $(p_{2})_{\#}\left\lvert_{\ker{\alpha}}\right.$ that sends each of these conjugates (considered as an element of $\ker{\alpha'}$) to itself (considered as an element of $\ker{\alpha}$). In particular, $\ker{\alpha}$ is an internal semi-direct product of $\tau(\ker{\overline{\alpha}})$, which is contained in $\ang{\!\ang{B}\!}_{P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})}$, by $s(\ker{\alpha'})$, which is contained in $\ang{\!\ang{B'}\!}_{P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})}$. Hence $\ker{\alpha}$ is contained in the subgroup $\ang{\!\ang{B,B'}\!}_{P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})}$. This completes the proof of the proposition. \end{proof} We may thus deduce the following useful presentation of $P_{2}(\ensuremath{\mathbb{T}^{2}})$. \begin{cor}\label{cor:compactpres} The group $P_2(\ensuremath{\mathbb{T}^{2}}, (x_1,x_2))$ admits the following presentation: \begin{enumerate} \item[generators:] $u$, $v$, $x$ and $y$. \item[relations:]\mbox{} \begin{enumerate}[(a)] \item\label{it:altpresA} $xux^{-1}=u$ and $yuy^{-1}=u$. \item\label{it:altpresB} $xvx^{-1}=v$ and $yvy^{-1}=v$. \item\label{it:altpresC} $xyx^{-1}=y$. \end{enumerate} \end{enumerate} In particular, $P_2(\ensuremath{\mathbb{T}^{2}}, (x_1,x_2))$ is isomorphic to the direct product of the free group of rank two generated by $\brak{u,v}$ and the free Abelian group of rank two generated by $\brak{x,y}$. \end{cor} \begin{rem} The decomposition of \reco{compactpres} is a special case of~\cite[Lemma~17]{BGG}.
\end{rem} \begin{proof}[Proof of \reco{compactpres}] By \repr{presTminus2alta}, a presentation of $P_2(\ensuremath{\mathbb{T}^{2}})$ may be obtained from the presentation of $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})$ given in \repr{presTminus1alta} by setting $B$ and $B'$ equal to $1$. Under this operation, relations~(\ref{it:altpresa}) and~(\ref{it:altpresd}) (resp.~(\ref{it:altpresb}) and~(\ref{it:altprese})) of \repr{presTminus1alta} are sent to relations~(\ref{it:altpresA}) (resp.~(\ref{it:altpresB})) of \reco{compactpres}, and relations~(\ref{it:altpresc}) and~(\ref{it:altpresf}) of \repr{presTminus1alta} become trivial. We must also take into account the fact that $B'=1$ in $P_2(\ensuremath{\mathbb{T}^{2}})$. In $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})$, we have: \begin{equation*} B' = B_{1,2}^{-1} [\rho_{2,1}, \rho_{2,2}^{-1}]= [v^{-1},u] B \bigl[B^{-1}[u,v^{-1}]\ldotp u^{-1}x, y^{-1}v [v^{-1},u] B\bigr], \end{equation*} and taking the image of this equation by $\alpha$, we obtain: \begin{align*} 1 &= [v^{-1},u] \bigl[[u,v^{-1}] u^{-1}x, y^{-1}v [v^{-1},u] \bigr]\\ & = [v^{-1},u] \ldotp [u,v^{-1}] u^{-1}x \ldotp y^{-1}v [v^{-1},u] \ldotp x^{-1}u [v^{-1},u] \ldotp [u,v^{-1}] v^{-1} y = u^{-1}x y^{-1}uvu^{-1} x^{-1}u v^{-1} y \end{align*} in $P_2(\ensuremath{\mathbb{T}^{2}})$. Using relations~(\ref{it:altpresA}) (resp.~(\ref{it:altpresB})) of \reco{compactpres}, it follows that $[x, y^{-1}]=1$, which yields relation~(\ref{it:altpresC}) of \reco{compactpres}. The last part of the statement is a consequence of the nature of the presentation. \end{proof} \begin{rem} As we saw in \repr{presTminus1alta}, $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\}, (x_1, x_2))$ is a semi-direct product of the form $\mathbb{F}_3(u,v,B)\rtimes \mathbb{F}_2(x,y)$, the action being given by the relations of that proposition. 
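The free reduction carried out at the end of the proof of \reco{compactpres} above is a pure identity in the free group on $u$, $v$, $x$ and $y$, and can therefore be verified mechanically. The following Python sketch (word encoding and helper names ours) checks that the two sides have the same free reduction:

```python
def free_reduce(word):
    """Freely reduce a word, encoded as a list of (letter, ±1) pairs."""
    out = []
    for g, e in word:
        if out and out[-1] == (g, -e):
            out.pop()
        else:
            out.append((g, e))
    return out

def inv(w):
    return [(g, -e) for g, e in reversed(w)]

def comm(a, b):
    """Commutator [a, b] = a b a^{-1} b^{-1}."""
    return a + b + inv(a) + inv(b)

u, v, x, y = [('u', 1)], [('v', 1)], [('x', 1)], [('y', 1)]

# Left-hand side: [v^{-1}, u] . [ [u, v^{-1}] u^{-1} x , y^{-1} v [v^{-1}, u] ]
lhs = comm(inv(v), u) + comm(comm(u, inv(v)) + inv(u) + x,
                             inv(y) + v + comm(inv(v), u))
# Right-hand side: u^{-1} x y^{-1} u v u^{-1} x^{-1} u v^{-1} y
rhs = inv(u) + x + inv(y) + u + v + inv(u) + inv(x) + u + inv(v) + y

# Both sides reduce to the same (already reduced) word.
assert free_reduce(lhs) == free_reduce(rhs) == rhs
```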
Transposing the second two columns of the commutative diagram~\reqref{keralpha}, we obtain, up to isomorphism, the following commutative diagram: \begin{equation}\label{eq:f3f2} \begin{tikzcd}[ampersand replacement=\&] 1 \ar{r}\& \mathbb{F}_3(u,v,B) \ar{d}{\overline{\alpha}} \ar{r}{\tau} \& \mathbb{F}_3(u,v,B)\rtimes \mathbb{F}_2(x,y) \ar{d}{\alpha} \ar{r}{(p_{2})_{\#}} \& \mathbb{F}_2(x,y) \ar{d}{\alpha'} \ar{r} \& 1\\ 1 \ar{r}\& \mathbb{F}_2(u,v) \ar{r} \& \mathbb{F}_2(u,v) \times \ensuremath{\mathbb Z}^{2} \ar{r}{(p_{2}')_{\#}} \& \ensuremath{\mathbb Z}^{2} \ar{r} \& 1, \end{tikzcd} \end{equation} where $\alpha(u)=u$, $\alpha(v)=v$, $\alpha(B)=1$, $\alpha(x)=(1; (1,0))$ and $\alpha(y)=(1; (0,1))$. This is a convenient setting to study the question of whether a $2$-valued map may be deformed to a root-free $2$-valued map (which implies that the corresponding map may be deformed to a fixed point free map), since using~\reqref{f3f2} and \reth{defchinegI}(\ref{it:defchinegbI}), the question is equivalent to a lifting problem, to which we will refer in \resec{fptsplit2}, notably in Propositions~\ref{prop:necrootfree2} and~\ref{prop:construct2valprop}. \end{rem} Using the short exact sequence~\reqref{sesbraid}, we obtain the following presentation of the full braid group $B_2(\ensuremath{\mathbb{T}^{2}}, (x_1,x_2))$ from that of $P_2(\ensuremath{\mathbb{T}^{2}}, (x_1,x_2))$ given by \reco{compactpres}. \begin{prop}\label{prop:presful} The group $B_2(\ensuremath{\mathbb{T}^{2}}, (x_1,x_2))$ admits the following presentation: \begin{enumerate} \item[generators:] $u$, $v$, $x$, $y$ and $\sigma$. \item[relations:]\mbox{} \begin{enumerate}[(a)] \item\label{it:presB2Ta} $xux^{-1}=u$ and $yuy^{-1}=u$. \item $xvx^{-1}=v$ and $yvy^{-1}=v$. \item\label{it:presB2Tc} $xyx^{-1}=y$. \item\label{it:presB2Td} $\sigma^{2}=[u,v^{-1}]$. \item\label{it:presB2Te} $\sigma x \sigma^{-1}=x$ and $\sigma y \sigma^{-1}=y$.
\item\label{it:presB2Tf} $\sigma u \sigma^{-1}=[u,v^{-1}]u^{-1}x$ and $\sigma v \sigma^{-1}=[u,v^{-1}]v^{-1}y$. \end{enumerate} \end{enumerate} \end{prop} \begin{proof} Once more, we apply the methods of~\cite[Proposition~1, p.~139]{Jo}, this time to the short exact sequence~\reqref{sesbraid} for $X=\ensuremath{\mathbb{T}^{2}}$ and $n=2$, where we take $P_{2}(\ensuremath{\mathbb{T}^{2}})$ to have the presentation given by \reco{compactpres}. A coset representative of the generator of $\ensuremath{\mathbb Z}_{2}$ is given by the braid $\sigma=\sigma_{1}$ that swaps the two basepoints. Hence $\brak{u,v,x,y,\sigma}$ generates $B_{2}(\ensuremath{\mathbb{T}^{2}})$. Relations~(\ref{it:presB2Ta})--(\ref{it:presB2Tc}) emanate from the relations of $P_{2}(\ensuremath{\mathbb{T}^{2}})$. Relation~(\ref{it:presB2Td}) is obtained by lifting the relation of $\ensuremath{\mathbb Z}_{2}$ to $B_{2}(\ensuremath{\mathbb{T}^{2}})$ and using the fact that $\sigma^{2}=B_{1,2}=[u,v^{-1}]$. To obtain relations~(\ref{it:presB2Te}) and~(\ref{it:presB2Tf}), by geometric arguments, one may see that for $j\in \brak{1,2}$, $\sigma \rho_{1,j} \sigma^{-1}= \rho_{2,j}$ and $\sigma \rho_{2,j} \sigma^{-1}= B_{1,2}\rho_{1,j}B_{1,2}^{-1}$, and one then uses \req{uvxy} to express these relations in terms of $u,v,x$ and $y$. \end{proof} \subsection{A description of the homotopy classes of $2$-ordered and split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$, and the computation of the Nielsen number}\label{sec:descript} In this section, we describe the homotopy classes of $2$-ordered (resp.\ split $2$-valued) maps of $\ensuremath{\mathbb{T}^{2}}$ using the group structure of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ (resp.\ of $B_{2}(\ensuremath{\mathbb{T}^{2}})$) given in \resec{toro2}.
\begin{prop}\label{prop:baseT2}\mbox{} \begin{enumerate} \item\label{it:baseT2a} The set $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]_0$ of based homotopy classes of $2$-ordered maps of $\ensuremath{\mathbb{T}^{2}}$ is in one-to-one correspondence with the set of commuting, ordered pairs of elements of $P_{2}(\ensuremath{\mathbb{T}^{2}})$. \item\label{it:baseT2b} The set $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]$ of homotopy classes of $2$-ordered maps of $\ensuremath{\mathbb{T}^{2}}$ is in one-to-one correspondence with the set of conjugacy classes of commuting, ordered pairs of elements of $P_{2}(\ensuremath{\mathbb{T}^{2}})$, i.e.\ two commuting pairs $(\alpha_1, \beta_1)$ and $(\alpha_2, \beta_2)$ of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ give rise to the same homotopy class of $2$-ordered maps of $\ensuremath{\mathbb{T}^{2}}$ if and only if there exists $\delta\in P_{2}(\ensuremath{\mathbb{T}^{2}})$ such that $\delta\alpha_1\delta^{-1}=\alpha_2$ and $\delta\beta_1\delta^{-1}=\beta_2$. \item\label{it:baseT2c} Under the projection $\widehat{\pi}\colon\thinspace [\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})] \ensuremath{\longrightarrow} [\ensuremath{\mathbb{T}^{2}},D_{2}(\ensuremath{\mathbb{T}^{2}})]$ induced by the covering map $\pi\colon\thinspace F_2(\ensuremath{\mathbb{T}^{2}}) \ensuremath{\longrightarrow} D_2(\ensuremath{\mathbb{T}^{2}})$, two homotopy classes of $2$-ordered maps of $\ensuremath{\mathbb{T}^{2}}$ are sent to the same homotopy class of $2$-unordered maps if and only if any two pairs of braids that represent the maps are conjugate in $B_{2}(\ensuremath{\mathbb{T}^{2}})$. \end{enumerate} \end{prop} \begin{proof} Let $x_0\in \ensuremath{\mathbb{T}^{2}}$ and $(y_0, z_0)\in F_2(\ensuremath{\mathbb{T}^{2}})$ be basepoints, and let $\Psi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ be a basepoint-preserving $2$-ordered map.
The restriction of $\Psi$ to the meridian $\mu$ and the longitude $\lambda$ of $\ensuremath{\mathbb{T}^{2}}$, which are geometric representatives of the elements of the basis $(e_{1},e_{2})$ of $\pi_1(\ensuremath{\mathbb{T}^{2}})$, gives rise to a pair of geometric braids. The resulting pair $(\Psi_{\#}(e_{1}),\Psi_{\#}(e_{2}))$ of elements of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ obtained via the induced homomorphism $\Psi_{\#}\colon\thinspace \pi_{1}(\ensuremath{\mathbb{T}^{2}}) \ensuremath{\longrightarrow} P_{2}(\ensuremath{\mathbb{T}^{2}})$ is an invariant of the based homotopy class of the map $\Psi$, and the two braids $\Psi_{\#}(e_{1})$ and $\Psi_{\#}(e_{2})$ commute. Conversely, given a pair of braids $(\alpha, \beta)$ of $P_{2}(\ensuremath{\mathbb{T}^{2}})$, let $f_{1}\colon\thinspace \St[1] \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ and $f_{2}\colon\thinspace \St[1] \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ be geometric representatives of $\alpha$ and $\beta$ respectively, \emph{i.e.}\ $\alpha=[f_{1}]$ and $\beta=[f_{2}]$. Then we define a geometric map from the wedge of two circles into $F_{2}(\ensuremath{\mathbb{T}^{2}})$ by sending $x\in \St[1] \vee \St[1]$ to $f_{1}(x)$ (resp.\ to $f_{2}(x)$) if $x$ belongs to the first (resp.\ second) copy of $\St[1]$. By classical obstruction theory in low dimension, this map extends to $\ensuremath{\mathbb{T}^{2}}$ if and only if $\alpha$ and $\beta$ commute as elements of $P_{2}(\ensuremath{\mathbb{T}^{2}})$, and part~(\ref{it:baseT2a}) follows. Parts~(\ref{it:baseT2b}) and~(\ref{it:baseT2c}) are consequences of part~(\ref{it:baseT2a}) and classical general facts about maps between spaces of type $K(\pi, 1)$, see~\cite[Chapter~V, Theorem~4.3]{Wh} for example. \end{proof} Applying \repr{equivsplit} to $\ensuremath{\mathbb{T}^{2}}$, we obtain the following consequence. 
\begin{prop}\label{prop:lifttorus} If $\phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ is a split $2$-valued map and $\widehat{\Phi}\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ is a lift of $\phi$, the map $\phi$ can be deformed to a fixed point free $2$-valued map if and only if there exist commuting elements $\alpha_1, \alpha_2 \in P_{3}(\ensuremath{\mathbb{T}^{2}})$ such that $\alpha_1$ (resp.\ $\alpha_2$) projects to $(e_1, \widehat{\Phi}_{\#}(e_1))$ (resp.\ $(e_2, \widehat{\Phi}_{\#}(e_2))$) under the homomorphism induced by the inclusion map $\widehat{\iota}_{3}\colon\thinspace F_{3}(\ensuremath{\mathbb{T}^{2}}) \ensuremath{\longrightarrow} \ensuremath{\mathbb{T}^{2}}\times F_{2}(\ensuremath{\mathbb{T}^{2}})$. \end{prop} \begin{proof} Since $\ensuremath{\mathbb{T}^{2}}$ is a space of type $K(\pi, 1)$, the existence of diagram~\reqref{commdiag4} is equivalent to that of the corresponding induced diagram on the level of fundamental groups. It then suffices to take $\alpha_{1}=\Theta_{\#}(e_1)$ and $\alpha_{2}=\Theta_{\#}(e_2)$ in the statement of \repr{equivsplit}. \end{proof} Proposition~\ref{prop:lifttorus} gives a criterion to decide whether a split $2$-valued map of $\ensuremath{\mathbb{T}^{2}}$ can be deformed to a fixed point free $2$-valued map. However, from a computational point of view, it seems better to use an alternative condition in terms of roots (see \resec{exrfm}). In the following proposition, we make use of the identification of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ with $\mathbb{F}_{2} \times \ensuremath{\mathbb Z}^{2}$ given in \reco{compactpres}. 
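The identification of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ with $\mathbb{F}_{2}\times \ensuremath{\mathbb Z}^{2}$ lends itself to direct computation. The following Python sketch (purely illustrative; the string encoding of $\mathbb{F}_{2}(u,v)$ and the sample elements are assumptions of the sketch, not part of the argument) models elements of $\mathbb{F}_{2}\times \ensuremath{\mathbb Z}^{2}$ and checks that pairs of the form $(w^{r},(a,b))$ and $(w^{s},(c,d))$ commute, which is the commutation criterion underlying the next proposition.

```python
# Illustrative sketch only: elements of F_2(u, v) are encoded as strings over
# u, v and their inverses U, V; free reduction cancels adjacent inverse pairs.
# P_2(T^2) is modelled as F_2 x Z^2 (multiplication in the free factor,
# addition in the central Z^2 factor), following the identification of
# Corollary "compactpres" in the text.

def reduce_word(w):
    """Freely reduce a word, cancelling adjacent letters g g^{-1}."""
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def mult(g, h):
    """Multiply two elements of F_2 x Z^2 componentwise."""
    (w1, (a1, b1)), (w2, (a2, b2)) = g, h
    return (reduce_word(w1 + w2), (a1 + a2, b1 + b2))

def commute(g, h):
    return mult(g, h) == mult(h, g)

w = "uvU"                      # a primitive element of F_2(u, v)
alpha = (w * 2, (1, 0))        # represents (w^2, (1, 0))
beta = (w * 3, (0, 5))         # represents (w^3, (0, 5))

print(commute(alpha, beta))    # True: powers of a common element commute
print(commute(("u", (0, 0)), ("v", (0, 0))))   # False: u and v do not
```

Since the $\ensuremath{\mathbb Z}^{2}$ factor is central, commutation is decided entirely in the free factor, where two elements commute if and only if they are powers of a common element.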
\begin{prop}\label{prop:exismaps}\mbox{} \begin{enumerate} \item\label{it:exismapsa} The set $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]_0$ of based homotopy classes of $2$-ordered maps of $\ensuremath{\mathbb{T}^{2}}$ is in one-to-one correspondence with the set of pairs $(\alpha, \beta)$ of elements of $\mathbb{F}_2 \times \ensuremath{\mathbb Z}^{2}$ of the form $\alpha=(w^r,(a,b))$, $\beta=(w^s, (c,d))$, where $(a,b), (c,d), (r,s)\in \ensuremath{\mathbb Z}^{2}$ and $w\in \mathbb{F}_2$. Further, up to taking a root of $w$ if necessary, we may assume that $w$ is either trivial or a primitive element of $\mathbb{F}_2$ (i.e.\ $w$ is not a proper power of another element of $\mathbb{F}_2$). \item\label{it:exismapsb} The set $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]$ of homotopy classes of $2$-ordered maps of $\ensuremath{\mathbb{T}^{2}}$ is in one-to-one correspondence with the set of equivalence classes of pairs $(\alpha, \beta)$ of elements of $\mathbb{F}_2 \times \ensuremath{\mathbb Z}^{2}$ of the form given in part~(\ref{it:exismapsa}), where the equivalence relation is defined as follows: the pairs of elements $( (w_1^{r_{1}},(a_1,b_1)), (w_1^{s_{1}}, (c_1,d_1)))$ and $( (w_2^{r_{2}},(a_2,b_2)), (w_2^{s_{2}}, (c_2,d_2)))$ of $\mathbb{F}_2 \times \ensuremath{\mathbb Z}^{2}$ are equivalent if and only if $(a_1,b_1,c_1,d_1) = (a_2,b_2,c_2,d_2)$, and either: \begin{enumerate}[(i)] \item $w_1=w_2=1$, or \item $w_1$ and $w_2$ are primitive, and there exists $\ensuremath{\varepsilon}\in \brak{1,-1}$ such that $w_1$ and $w_2^{\ensuremath{\varepsilon}}$ are conjugate in $\mathbb{F}_2$, and $(r_1,s_{1})=\ensuremath{\varepsilon} (r_2,s_{2})\ne (0,0)$.
\end{enumerate} \end{enumerate} \end{prop} \begin{proof} Part~(\ref{it:exismapsa}) follows using \repr{baseT2}(\ref{it:baseT2a}), the identification of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ with $\mathbb{F}_2\times \ensuremath{\mathbb Z}^2$ given by \reco{compactpres}, and the fact that two elements of $\mathbb{F}_2$ commute if and only if they are powers of some common element of $\mathbb{F}_2$. Part~(\ref{it:exismapsb}) is a consequence of \repr{baseT2}(\ref{it:baseT2b}), \reco{compactpres}, and the straightforward description of the conjugacy classes of the group $\mathbb{F}_2\times \ensuremath{\mathbb Z}^2$. \end{proof} To describe the homotopy classes of split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$, let us consider the set $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]$ of homotopy classes of $2$-ordered maps and the action of $\ensuremath{\mathbb Z}_2$ on this set that is induced by the action of $\ensuremath{\mathbb Z}_2$ on $F_2(\ensuremath{\mathbb{T}^{2}})$. By \relem{split}(\ref{it:splitIII}), the corresponding set of orbits $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]/\ensuremath{\mathbb Z}_{2}$ is in one-to-one correspondence with the set $\splitmap{\ensuremath{\mathbb{T}^{2}}}{\ensuremath{\mathbb{T}^{2}}}{2}/\!\sim$ of homotopy classes of split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$. Given a homotopy class of a $2$-ordered map of $\ensuremath{\mathbb{T}^{2}}$, choose a based representative $f\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb{T}^{2}})$. The based homotopy class of $f$ is determined by the element $f_{\#}$ of $\operatorname{\text{Hom}}(\ensuremath{\mathbb Z}^{2}, P_2(\ensuremath{\mathbb{T}^{2}}))$. 
In turn, by \repr{exismaps}(\ref{it:exismapsa}), $f_{\#}$ is determined by a pair of elements of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ of the form $( (w^r,(a,b)), (w^s, (c,d)))$, where $(a,b)$, $(c,d)$ and $(r,s)$ belong to $\ensuremath{\mathbb Z}^2$, and $w\in \mathbb{F}_2$. To characterise the equivalence class of $f$ in $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]/\ensuremath{\mathbb Z}_{2}$, we first consider the set of conjugates of this pair by the elements of $P_2(\ensuremath{\mathbb{T}^{2}})$, which by \repr{baseT2}(\ref{it:baseT2b}) describes the homotopy class of $f$ in $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]$, and secondly, we take into account the $\ensuremath{\mathbb Z}_2$-action by conjugating by the elements of $B_2(\ensuremath{\mathbb{T}^{2}})$. So the equivalence class of $f$ in $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]/\ensuremath{\mathbb Z}_{2}$ is characterised by the set of conjugates of the pair $( (w^r,(a,b)), (w^s, (c,d)))$ by elements of $B_2(\ensuremath{\mathbb{T}^{2}})$. The presentation of $B_{2}(\ensuremath{\mathbb{T}^{2}})$ given by~\repr{presful} contains the action by conjugation of $\sigma$ on $P_{2}(\ensuremath{\mathbb{T}^{2}})$. Consider the involutive automorphism of $\mathbb{F}_2(u,v)$ defined on the generators of $\mathbb{F}_2(u,v)$ by $u \longmapsto u^{-1}$ and $v \longmapsto v^{-1}$. The image of an element $w\in \mathbb{F}_2(u,v)$ under this automorphism will be denoted by $\widehat{w}$. With respect to the decomposition of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ given by \reco{compactpres}, let $\gamma\colon\thinspace P_2(\ensuremath{\mathbb{T}^{2}}) \ensuremath{\longrightarrow} \mathbb{F}_2(u,v)$ denote the projection onto $\mathbb{F}_2(u,v)$. Let $\operatorname{\text{Ab}}\colon\thinspace \mathbb{F}_2(u,v) \ensuremath{\longrightarrow} \ensuremath{\mathbb Z}^{2}$ denote the Abelianisation homomorphism that sends $u$ to $(1,0)$ and $v$ to $(0,1)$.
We write $\operatorname{\text{Ab}}(w)=(|w|_{u}, |w|_{v})$, where $|w|_{u}$ (resp.\ $|w|_{v}$) denotes the exponent sum of $u$ (resp.\ $v$) in the word $w$, and $\ell(w)$ will denote the word length of $w$ with respect to $u$ and $v$. One may check easily that $(\widehat{w})^{-1}=\widehat{w^{-1}}$ and $\ell(w)=\ell(\widehat{w})$ for all $w\in \mathbb{F}_2(u,v)$. Note also that if $\lambda\in \mathbb{F}_2(u,v)$ and $r\in \ensuremath{\mathbb Z}$ then: \begin{equation}\label{eq:lambdahat} \widehat{\lambda} (\lambda \widehat{\lambda})^{r} \lambda=\widehat{\lambda} (\lambda \widehat{\lambda})^{r} \widehat{\lambda}^{-1} \ldotp \widehat{\lambda} \lambda=(\widehat{\lambda} \lambda \widehat{\lambda}\widehat{\lambda}^{-1})^{r} \widehat{\lambda} \lambda=(\widehat{\lambda}\lambda)^{r+1}. \end{equation} \begin{lem}\label{lem:conjhat} For all $w\in \mathbb{F}_{2}(u,v)$, $\widehat{w}=vu^{-1}\gamma(\sigma w\sigma^{-1})uv^{-1}$ and \begin{equation}\label{eq:conjsigma} \sigma w \sigma^{-1}=(uv^{-1}\widehat{w} vu^{-1}, (|w|_{u}, |w|_{v})). \end{equation} In particular, $\widehat{w}$ is conjugate to $\gamma(\sigma w\sigma^{-1})$ in $\mathbb{F}_2(u,v)$. \end{lem} \begin{proof} By \repr{presful}, we have: \begin{equation*} \text{$\sigma u \sigma^{-1} = [u,v^{-1}]u^{-1} x=uv^{-1}\widehat{u} vu^{-1} x^{|u|_{u}}$ and $\sigma v \sigma^{-1} = [u,v^{-1}]v^{-1} y=uv^{-1}\widehat{v} vu^{-1} y^{|v|_{v}}$.} \end{equation*} If $w\in \mathbb{F}_{2}(u,v)$ then~\reqref{conjsigma} follows because $x$ and $y$ belong to the centre of $P_{2}(\ensuremath{\mathbb{T}^{2}})$. Thus $\gamma(\sigma w \sigma^{-1})=uv^{-1}\widehat{w} vu^{-1}$ as required. \end{proof} \begin{lem}\label{lem:conjhat1}\mbox{} \begin{enumerate} \item\label{it:classifI} Let $a,b\in \mathbb{F}_{2}(u,v)$ be such that $ab$ is written in reduced form. 
Then $ab=\widehat{b}\,\widehat{a}$ if and only if there exist $\lambda\in \mathbb{F}_{2}(u,v)$ and $r,s\in \ensuremath{\mathbb Z}$ such that: \begin{equation}\label{eq:concl1} \text{$a=(\widehat{\lambda} \lambda)^s\widehat{\lambda}$ and $b=(\lambda\widehat{\lambda})^r\lambda$.} \end{equation} \item\label{it:classifII} For all $w\in \mathbb{F}_{2}(u,v)$, $w$ and $\widehat{w}$ are conjugate in $\mathbb{F}_2(u,v)$ if and only if there exist $\lambda\in \mathbb{F}_2(u,v)$ and $l\in \ensuremath{\mathbb Z}$ such that: \begin{equation}\label{eq:concl2} w=(\lambda \widehat{\lambda})^l. \end{equation} \end{enumerate} \end{lem} \begin{rem} By modifying the definition of the $\widehat{\cdot}$ homomorphism appropriately, \relem{conjhat1} and its proof may be generalised to any free group of finite rank on a given set. \end{rem} \begin{proof}[Proof of \relem{conjhat1}] We first prove the `if' implications of~(\ref{it:classifI}) and~(\ref{it:classifII}). For part~(\ref{it:classifI}), let $a,b\in \mathbb{F}_{2}(u,v)$ be such that $ab$ is written in reduced form, and that~\reqref{concl1} holds. Using~\reqref{lambdahat}, we have: \begin{equation*} ab=(\widehat{\lambda} \lambda)^s\widehat{\lambda}(\lambda\widehat{\lambda})^r\lambda=(\widehat{\lambda} \lambda)^{r+s+1}=(\widehat{\lambda} \lambda)^{r}\widehat{\lambda}(\lambda \widehat{\lambda})^{s}\lambda= \widehat b\,\widehat a. \end{equation*} For part~(\ref{it:classifII}), if~\reqref{concl2} holds then $\widehat{w}=(\widehat{\lambda} \lambda)^l=\widehat{\lambda}(\lambda\widehat{\lambda})^l(\widehat{\lambda})^{-1}=\widehat{\lambda} w\widehat{\lambda}^{-1}$ by~\reqref{lambdahat}, so $w$ and $\widehat{w}$ are conjugate in $\mathbb{F}_2(u,v)$. Finally, we prove the `only if' implications of~(\ref{it:classifI}) and~(\ref{it:classifII}) simultaneously by induction on the length $k$ of the words $ab$ and $w$, which we assume to be non trivial and written in reduced form. 
Let~(E1) denote the equation $ab=\widehat{b}\,\widehat{a}$, and let~(E2) denote the equation $\widehat{w}=\theta w \theta^{-1}$, where $\theta\in \mathbb{F}_2(u,v)$. Note that if $a$ and $b$ satisfy~(E1) then both $a$ and $b$ are non trivial, and that $\widehat{b}\,\widehat{a}$ is also in reduced form. Further, if $z\in \mathbb{F}_2(u,v)$, then since $|z|_{y}=-|\widehat{z}|_{y}$ for $y\in\brak{u,v}$, it follows from the form of~(E1) and~(E2) that $k$ must be even in both cases. We carry out the proof of the two implications by induction as follows. \begin{enumerate}[(i)] \item If $k\leq 4$ then (E1) implies~\reqref{concl1}. One may check easily that $k$ cannot be equal to $2$. Suppose that $k=4$ and that~(E1) holds. Now $\ell(a)\neq 1$, for otherwise $b$ would start with $a^{-1}$, but then $ab$ would not be reduced. Similarly, $\ell(b)\neq 1$, so we must have $\ell(a)=\ell(b)=2$, in which case $b=\widehat{a}$, and it suffices to take $r=s=0$ and $\lambda=b$ in~\reqref{concl1}. \item If $k\leq 4$ then (E2) implies~\reqref{concl2}. Once more, it is straightforward to see that $k$ cannot be equal to $2$. Suppose that $k=4$. Since $w$ and $\widehat{w}$ are conjugate, we have $|w|_{y}=|\widehat{w}|_{y}$ for $y\in\brak{u,v}$, and so $|w|_{y}=0$. Since $w$ is in reduced form, one may then check that~\reqref{concl2} holds, where $\ell(\lambda)=2$. \item\label{it:induct} Suppose by induction that for some $k\geq 4$,~(E1) implies~\reqref{concl1} if $\ell(ab)<k$ and~(E2) implies~\reqref{concl2} if $\ell(w)<k$. Suppose that $a$ and $b$ satisfy~(E1) and that $\ell(ab)=k$. If $\ell(a)=\ell(b)$ then $b=\widehat{a}$, and as above, it suffices to take $r=s=0$ and $\lambda=b$ in~\reqref{concl1}. So assume that $\ell(a)\neq \ell(b)$. By applying the automorphism $\widehat{\cdot}$ and exchanging the r\^{o}les of $a$ and $b$ if necessary, we may suppose that $\ell(a)<\ell(b)$. We consider two subcases. 
\begin{enumerate}[(A)] \item $\ell(a)\leq \ell(b)/2$: since both sides of~(E1) are in reduced form, $b$ starts and ends with $\widehat{a}$, and there exists $b_{1}\in \mathbb{F}_2(u,v)$ such that $b=\widehat{a}b_{1}\widehat{a}$, written in reduced form. Substituting this into~(E1), we see that $a\widehat{a}b_{1}\widehat{a}= a\widehat{b}_{1}a\widehat{a}$, and thus $\widehat{a}b_{1}=\widehat{b}_{1}a$, written in reduced form. This equation is of the form~(E1), and since $\ell(\widehat{a}b_{1})<\ell(b)<\ell(ab)$, we may apply the induction hypothesis. Thus there exist $\lambda\in \mathbb{F}_{2}(u,v)$ and $r,s\in \ensuremath{\mathbb Z}$ such that $\widehat{a}=(\widehat{\lambda} \lambda)^s\widehat{\lambda}$, and $b_1=(\lambda\widehat{\lambda})^{r}\lambda$. Therefore $a=(\lambda\widehat{\lambda} )^s \lambda$ and \begin{equation*} b=\widehat{a} b_1 \widehat{a}=(\widehat{\lambda} \lambda)^s\widehat{\lambda} (\lambda\widehat{\lambda})^{r}\lambda (\widehat{\lambda} \lambda)^s\widehat{\lambda}= (\widehat{\lambda} \lambda)^{2s+r+1}\widehat{\lambda}, \end{equation*} using~\reqref{lambdahat}, which proves the result in this case. \item $\ell(b)/2< \ell(a) <\ell(b)$: since both sides of~(E1) are in reduced form, $b$ starts with $\widehat{a}$, and there exists $b_{1}\in \mathbb{F}_2(u,v)$, $b_{1}\neq 1$, such that $b=\widehat{a}b_{1}$. Substituting this into~(E1), we obtain $a\widehat{a}b_{1}=a\widehat{b}_{1}\widehat{a}$, which is equivalent to $\widehat{a}b_{1}\widehat{a}^{-1}=\widehat{b}_{1}$. This equation is of the form~(E2), and since $\ell(\widehat{b}_{1})<\ell(b)<\ell(ab)=k$, we may apply the induction hypothesis. Thus there exist $\lambda\in \mathbb{F}_{2}(u,v)$ and $l\in \ensuremath{\mathbb Z}$ such that $b_1=(\lambda\widehat{\lambda})^{l}$. The fact that $b_{1}\neq 1$ implies that $\lambda\widehat{\lambda}\neq 1$ and $l\neq 0$. We claim that $\lambda\widehat{\lambda}$ may be chosen to be primitive. 
To prove the claim, suppose that $\lambda\widehat{\lambda}$ is not primitive. Since $\mathbb{F}_{2}(u,v)$ is a free group of rank $2$, the centraliser of $\lambda\widehat{\lambda}$ in $\mathbb{F}_{2}(u,v)$ is infinite cyclic, generated by a primitive element $z$, and replacing $z$ by $z^{-1}$ if necessary, there exists $s\geq 2$ such that $z^s= \lambda\widehat{\lambda}$. Therefore $b_{1}=z^{sl}$, and substituting this into the relation $\widehat{b}_{1}=\widehat{a}b_{1}\widehat{a}^{-1}$, we obtain $\widehat{z}^{sl}=\widehat{a} z^{sl}\widehat{a}^{-1}=(\widehat{a} z\widehat{a}^{-1})^{sl}$ in the free group $\mathbb{F}_{2}(u,v)$, from which we conclude that $\widehat{z}=\widehat{a} z\widehat{a}^{-1}$. We may thus apply the induction hypothesis to this relation because $\ell(z)<\ell(b_1)<k$, and since $z$ is primitive, there exists $\gamma\in \mathbb{F}_{2}(u,v)$ for which $z=\gamma\widehat{\gamma}$. Hence $b_{1}=(\gamma\widehat{\gamma})^{sl}$, where $\gamma\widehat{\gamma}$ is primitive, which proves the claim. Substituting $b_1=(\lambda\widehat{\lambda})^{l}$ into the relation $\widehat{a} b_1\widehat{a}^{-1}=\widehat{b}_1$, we obtain: \begin{equation*} (\widehat{a} \lambda\widehat{\lambda} \widehat{a}^{-1})^{l}=\widehat{a} (\lambda\widehat{\lambda})^{l}\widehat{a}^{-1}=\widehat{(\lambda\widehat{\lambda})^{l}}=(\widehat{\lambda} \lambda)^{l}, \end{equation*} where we take $\lambda\widehat{\lambda}$ to be primitive. Once more, since $\mathbb{F}_{2}(u,v)$ is a free group of rank $2$ and $l\neq 0$, it follows that $\widehat{a} \lambda\widehat{\lambda} \widehat{a}^{-1}=\widehat{\lambda} \lambda=\widehat{\lambda} \lambda \widehat{\lambda} \widehat{\lambda}^{-1}$, from which we conclude that $\widehat{\lambda}^{-1}\widehat{a}$ belongs to the centraliser of $\lambda\widehat{\lambda}$.
But $\lambda\widehat{\lambda}$ is primitive, so there exists $t\in \ensuremath{\mathbb Z}$ such that $\widehat{\lambda}^{-1}\widehat{a}=(\lambda\widehat{\lambda})^{t}$, and hence $\widehat{a}=\widehat{\lambda} (\lambda\widehat{\lambda})^{t}$. Hence $a=\lambda (\widehat\lambda \lambda)^{t}=(\lambda \widehat\lambda)^{t}\lambda$ and $b=\widehat a b_1=\widehat{\lambda} (\lambda\widehat{\lambda})^{t+l}= (\widehat{\lambda}\lambda)^{t+l}\widehat{\lambda}$ in a manner similar to that of~\reqref{lambdahat}, so~\reqref{concl1} holds. \end{enumerate} \item By the induction hypothesis and~(\ref{it:induct}), we may suppose that for some $k\geq 4$,~(E1) implies~\reqref{concl1} if $\ell(ab)\leq k$ and~(E2) implies~\reqref{concl2} if $\ell(w)<k$. Suppose that $\ell(w)=k$ and that $w$ and $\widehat{w}$ are conjugate. Let $\widehat{w}=\theta w\theta^{-1}$, where $\theta\in \mathbb{F}_{2}(u,v)$. If $\theta=1$ then $w=\widehat{w}$, which is impossible. So $\theta\neq 1$, and since $\ell(w)=\ell(\widehat{w})$, there must be cancellation in the expression $\theta w\theta^{-1}$. Taking the inverse of the relation $\widehat{w}=\theta w\theta^{-1}$ if necessary, we may suppose that cancellation occurs between $\theta$ and $w$. So there exist $\theta_1,\theta_2\in \mathbb{F}_{2}(u,v)$ such that $\theta=\theta_1\theta_2$ written in reduced form, and such that the cancellation between $\theta$ and $w$ is maximal \emph{i.e.}\ if $w_{1}=\theta_{2}w$ is written in reduced form then $\theta_{1}w_{1}$ is also reduced. Let $\ell(\theta)=n$ and $\ell(\theta_2)=r$. We again consider two subcases. \begin{enumerate}[(A)] \item Suppose first that $r=n$. Then $\theta_{1}=1$, $\theta_{2}=\theta$ and $w=\theta^{-1}w_{1}$, so: \begin{equation}\label{eq:ellwinequ} \ell(w)=\ell(\theta^{-1}w_{1})\leq \ell(\theta^{-1})+\ell(w_{1})=n+\ell(w)-n=\ell(w). \end{equation} Hence $\ell(\theta^{-1}w_{1})= \ell(\theta^{-1})+\ell(w_{1})$, from which it follows that $w=\theta^{-1}w_1$ is written in reduced form. 
Therefore $\widehat{w}=\widehat{\theta}^{-1}\widehat{w}_{1}$ is also written in reduced form. Now $\widehat{w}=\theta w \theta^{-1}=w_{1}\theta^{-1}$, and applying an inequality similar to that of~\reqref{ellwinequ}, we see that $\widehat{w}=w_{1}\theta^{-1}$ is written in reduced form. Hence $\widehat{\theta}^{-1}\widehat{w}_{1}=w_{1}\theta^{-1}$, which is in the form of~(E1), both sides being written in reduced form. Thus $\ell(w_{1}\theta^{-1})= \ell(\theta^{-1}w_1)=\ell(w)=k$, and by the induction hypothesis, there exist $\lambda\in \mathbb{F}_{2}(u,v)$ and $r,s\in \ensuremath{\mathbb Z}$ such that $w_1=(\widehat{\lambda} \lambda)^s\widehat{\lambda} $ and $\theta^{-1}=(\lambda\widehat{\lambda})^r\lambda$. So by~\reqref{lambdahat}, $w=\theta^{-1}w_1= (\lambda\widehat{\lambda})^r\lambda(\widehat{\lambda} \lambda)^s\widehat{\lambda} =( \lambda \widehat{\lambda})^{r+s+1}$, which proves the result in the case $r=n$. \item Now suppose that $r<n$. Then there must be cancellation on both sides of $w$. Taking the inverse of both sides of the equation $\widehat{w}=\theta w\theta^{-1}$ if necessary, we may suppose that the length of the cancellation on the left is less than or equal to that on the right. So there exist $\theta_1,\theta_2,\theta_3, w_2\in \mathbb{F}_{2}(u,v)$ such that $\theta=\theta_1\theta_2\theta_3$, $w=\theta_3^{-1}w_2 \theta_2\theta_3$ and $\widehat{w}=\theta_1\theta_2 w_2 \theta_1^{-1}$, all these expressions being written in reduced form. Since $\ell(w)=\ell(\widehat{w})$, it follows from the second two expressions that $\ell(\theta_1)=\ell(\theta_3)$, and that $w=\theta_3^{-1}w_2 \theta_2\theta_3=\widehat{\theta}_1\widehat{\theta}_2 \widehat{w}_2 \widehat{\theta}_1^{-1}$, written in reduced form, from which we conclude that $\widehat{\theta}_1=\theta_3^{-1}$, and that $w_2 \theta_2=\widehat{\theta}_2 \widehat{w}_2$, written in reduced form, which is in the form of~(E1). 
Now $\ell(w_2 \theta_2)<\ell(w)=k$, and applying the induction hypothesis, there exist $\lambda\in \mathbb{F}_{2}(u,v)$ and $r,s\in \ensuremath{\mathbb Z}$ such that $w_2=(\widehat{\lambda} \lambda)^s\widehat{\lambda} $ and $\theta_2=(\lambda\widehat{\lambda})^r\lambda$. Hence $w=\theta_3^{-1}w_2 \theta_2\theta_3= \theta_3^{-1} (\widehat{\lambda} \lambda)^{r+s+1} \theta_3= (\theta_3^{-1}\widehat{\lambda} \widehat{\theta}_3 \ldotp\widehat{\theta}_3^{-1} \lambda \theta_3)^{r+s+1}=(\gamma \widehat{\gamma})^{r+s+1}$, where $\gamma=\theta_3^{-1}\widehat{\lambda} \widehat{\theta}_3$. This completes the proof of the induction step, and hence that of the lemma.\qedhere \end{enumerate} \end{enumerate} \end{proof} \begin{prop}\mbox{}\label{prop:classif} \begin{enumerate} \item\label{it:classifa} Let $\phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ be a split $2$-valued map, and let $\widehat{\Phi}\colon\thinspace \ensuremath{\mathbb{T}^{2}}\ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ be a lift of $\phi$ that is determined by the pair $((w^r,(a,b)), (w^s, (c,d)))$ as described in \repr{exismaps}. Let $\mathcal{O}_{\phi}$ denote the set of conjugates of this pair by elements of $B_2(\ensuremath{\mathbb{T}^{2}})$. Then $\mathcal{O}_{\phi}$ is the union of the sets $\mathcal{O}_{\phi}^{(1)}$ and $\mathcal{O}_{\phi}^{(2)}$, where $\mathcal{O}_{\phi}^{(1)}$ is the subset of pairs of the form $((w_1^r,(a,b)), (w_1^s, (c,d)))$, where $w_1$ runs over the set of conjugates of $w$ in $\mathbb{F}_2(u,v)$, and $\mathcal{O}_{\phi}^{(2)}$ is the subset of pairs of the form $((w_2^r,(a+r|w|_{u},b+r|w|_{v})), (w_2^s, (c+s|w|_{u},d+s|w|_{v})))$, where $w_{2}$ runs over the set of conjugates of $\widehat{w}$ in $\mathbb{F}_2(u,v)$. 
Further, the correspondence that associates $\mathcal{O}_{\phi}$ to $\phi$ induces a bijection between the set of homotopy classes of split $2$-valued maps and the set of conjugates of the pairs of the form given by \repr{exismaps}(\ref{it:exismapsa}) by elements of $B_2(\ensuremath{\mathbb{T}^{2}})$. \item\label{it:classifb} Let $f=(f_1,f_2)\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb{T}^{2}})$ be a $2$-ordered map of $\ensuremath{\mathbb{T}^{2}}$ determined by the pair $((w^r,(a,b)), (w^s, (c,d)))$, let $g=(f_2,f_1)\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb{T}^{2}})$ be the $2$-ordered map obtained by exchanging the coordinates of $f$, and let $\widehat{\pi}\colon\thinspace [\ensuremath{\mathbb{T}^{2}}, F_2(\ensuremath{\mathbb{T}^{2}})] \ensuremath{\longrightarrow} [\ensuremath{\mathbb{T}^{2}}, D_2(\ensuremath{\mathbb{T}^{2}})]$ be the projection defined in the proof of \relem{split}(\ref{it:splitIII}). Then $\widehat{\pi}^{-1}(\widehat{\pi}([f]))= \brak{[f],[g]}$. Further, $[f]=[g]$ if and only if there exist $\lambda\in \mathbb{F}_2(u,v)$ and $l\in \ensuremath{\mathbb Z}$ such that $w=(\lambda \widehat{\lambda})^l$. \end{enumerate} \end{prop} \begin{proof}\mbox{} \begin{enumerate}[(a)] \item To compute $\mathcal{O}_{\phi}$, we determine the conjugates of the pair $((w^r,(a,b)), (w^s, (c,d)))$ by elements of $B_2(\ensuremath{\mathbb{T}^{2}})$, namely by words of the form $\sigma^{\ensuremath{\varepsilon}}z$, where $\ensuremath{\varepsilon} \in \brak{0,1}$, and $z\in P_2(\ensuremath{\mathbb{T}^{2}})$. With respect to the decomposition of \reco{compactpres}, if $\ensuremath{\varepsilon}=0$, we obtain the elements of $\mathcal{O}_{\phi}^{(1)}$.
If $\ensuremath{\varepsilon}=1$, using the computation for $\ensuremath{\varepsilon}=0$ and the fact that $\sigma z^{m} \sigma^{-1}=(uv^{-1}\widehat{z}^{m} vu^{-1}, (m|z|_{u}, m|z|_{v}))$ for all $z\in \mathbb{F}_{2}(u,v)$ and $m\in \ensuremath{\mathbb Z}$ by \relem{conjhat}, we obtain the elements of $\mathcal{O}_{\phi}^{(2)}$. This proves the first part of the statement. The second part is a consequence of classical general facts about maps between spaces of type $K(\pi, 1)$, see~\cite[Chapter~V, Theorem~4.3]{Wh} for example. \item Let $\alpha\in [\ensuremath{\mathbb{T}^{2}}, F_2(\ensuremath{\mathbb{T}^{2}})]$ and let $f=(f_1,f_2)\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb{T}^{2}})$ be such that $f\in \alpha$. Taking $g=(f_2,f_1)\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb{T}^{2}})$, under the projection $\widehat{\pi}\colon\thinspace [\ensuremath{\mathbb{T}^{2}}, F_2(\ensuremath{\mathbb{T}^{2}})] \ensuremath{\longrightarrow} [\ensuremath{\mathbb{T}^{2}}, D_2(\ensuremath{\mathbb{T}^{2}})]$, $\widehat{\pi}(\beta)=\widehat{\pi}(\alpha)$, where $\beta=[g]$. From \resec{relnsnvm}, $\alpha$ and $\beta$ are the only elements of $[\ensuremath{\mathbb{T}^{2}}, F_2(\ensuremath{\mathbb{T}^{2}})]$ that project under $\widehat{\pi}$ to $\widehat{\pi}(\alpha)$, which proves the first part. It remains to decide whether $\alpha=\beta$. Suppose that $f$ is determined by the pair $P_1=((w^r,(a,b)), (w^s, (c,d)))$. Since $\widehat{\pi}(\beta)=\widehat{\pi}(\alpha)$, $g$ is determined by a pair belonging to $\mathcal{O}_{\phi}$. 
Using the fact that $g$ is obtained from $f$ via the $\ensuremath{\mathbb Z}_2$-action on $F_2(\ensuremath{\mathbb{T}^{2}})^{\ensuremath{\mathbb{T}^{2}}}$ that arises from the covering map $\pi$ and applying covering space arguments, there exists $g'\in \beta$ that is determined by the pair $P_2=((\widehat{w}^r,(a+r|w|_{u},b+r|w|_{v})), (\widehat{w}^s, (c+s|w|_{u},d+s|w|_{v})))$. Then $\alpha=\beta$ if and only if $P_1$ and $P_2$ are conjugate by an element of $P_2(\ensuremath{\mathbb{T}^{2}})$, which is the case if and only if $w$ and $\widehat{w}$ are conjugate in the free group $\mathbb{F}_2(u,v)$ (recall from the proof of \relem{conjhat1} that if $w$ and $\widehat{w}$ are conjugate then $|w|_{u}=|w|_{v}=0$). By part~(\ref{it:classifII}) of that lemma, this is equivalent to the existence of $\lambda\in \mathbb{F}_2(u,v)$ and $l\in \ensuremath{\mathbb Z}$ such that $w=(\lambda \widehat{\lambda})^l$.\qedhere \end{enumerate} \end{proof} \subsection{Fixed point theory of split $2$-valued maps}\label{sec:fptsplit2} In this section, we give a sufficient condition for a split $2$-valued map of $\ensuremath{\mathbb{T}^{2}}$ to be deformable to a fixed point free $2$-valued map. \repr{lifttorus} already provides one such condition. We shall give an alternative condition in terms of roots, which seems to provide a more convenient framework from a computational point of view. To obtain fixed point free $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$, we have two possibilities at our disposal: we may either use \reth{defchineg}(\ref{it:defchinegb}), in which case we should determine the group $P_{3}(\ensuremath{\mathbb{T}^{2}})$, or \reth{defchinegI}(\ref{it:defchinegbI}), in which case we may make use of the results of \resec{toro2}, and notably \repr{presTminus1alta}. We choose the second possibility. We divide the discussion into three parts. In \resec{proofexisfpf}, we prove \repr{exisfpf}.
In \resec{p2Tminus1}, we give the analogue for roots of the second part of \repr{exisfpf}, and in \resec{exrfm}, we will give some examples of split $2$-valued maps that may be deformed to root-free $2$-valued maps. \subsubsection{The proof of \repr{exisfpf}}\label{sec:proofexisfpf} Let $\phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ be a $2$-valued map of $\ensuremath{\mathbb{T}^{2}}$. As in \reco{compactpres}, we identify $P_{2}(\ensuremath{\mathbb{T}^{2}})$ with $\mathbb{F}_{2} \times \ensuremath{\mathbb Z}^{2}$. The Abelianisation homomorphism is denoted by $\operatorname{\text{Ab}}\colon\thinspace \mathbb{F}_2(u,v) \ensuremath{\longrightarrow} \ensuremath{\mathbb Z}^2$. Recall from the beginning of \resec{relnsnvm} that if $\phi$ is split and $\widehat{\Phi}\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ is a lift of $\phi$, then $\operatorname{\text{Fix}}(\widehat{\Phi})=\operatorname{\text{Fix}}(\phi)$. In this section, we compute the Nielsen number $N(\phi)$ of $\phi$, and we give necessary conditions for $\phi$ to be homotopic to a fixed point free $2$-valued map. Although the Nielsen number is not the main subject of this paper, it is an important invariant in fixed point theory. The Nielsen number of $n$-valued maps was defined in~\cite{Sch1}. The following result of that paper will enable us to compute $N(\phi)$ in our setting. \begin{thm}[{\cite[Corollary~7.2]{Sch1}}]\label{th:helgath01} Let $n\in \ensuremath{\mathbb N}$, let $K$ be a compact polyhedron, and let $\phi=\{f_1,\ldots,f_n\}\colon\thinspace K \multimap K$ be a split $n$-valued map. Then $N(\phi)=N(f_1)+\cdots+N(f_n)$. 
\end{thm} \begin{proof}[Proof of \repr{exisfpf}] Let $\phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ be a split $2$-valued map of $\ensuremath{\mathbb{T}^{2}}$, and let $\widehat{\Phi}=(f_1,f_2) \colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ be a lift of $\phi$ such that $\widehat{\Phi}_{\#}(e_{1})=(w^r,(a,b))$ and $\widehat{\Phi}_{\#}(e_{2})= (w^s, (c,d))$, where $(r,s)\in \ensuremath{\mathbb Z}^{2}\setminus \brak{(0,0)}$, $a,b,c,d\in \ensuremath{\mathbb Z}$ and $w\in \mathbb{F}_2(u,v)$. For $i=1,2$, we shall compute the matrix $M_{i}$ of the homomorphism $f_{i\#}\colon\thinspace \ensuremath{\mathbb Z}^{2} \ensuremath{\longrightarrow} \ensuremath{\mathbb Z}^{2}$ induced by $f_i$ on the fundamental group of $\ensuremath{\mathbb{T}^{2}}$ with respect to the basis $(e_{1},e_{2})$ of $\pi_1(\ensuremath{\mathbb{T}^{2}})$ (up to the canonical identification of $\pi_1(\ensuremath{\mathbb{T}^{2}})$ for different basepoints if necessary). In practice, the bases in the target are the images of the elements $(\rho_{1,1},\rho_{1,2})$ and $(\rho_{2,1},\rho_{2,2})$ by the homomorphism $p_{i\#}\colon\thinspace P_2(\ensuremath{\mathbb{T}^{2}}) \ensuremath{\longrightarrow} \pi_{1}(\ensuremath{\mathbb{T}^{2}})$ induced by the projection $p_i\colon\thinspace F_2(\ensuremath{\mathbb{T}^{2}}) \ensuremath{\longrightarrow} \ensuremath{\mathbb{T}^{2}}$ onto the $i\up{th}$ coordinate, where $i=1,2$. Note that $f_{i\#}=p_{i\#}\circ \widehat{\Phi}_{\#}$. Setting $w=w(u,v)$ and writing $(m,n)=\operatorname{\text{Ab}}(w)=(|w|_{u}, |w|_{v})$, and using multiplicative notation and \req{uvxy}, we have: \begin{align*} \widehat{\Phi}_{\#}(e_{1})&=w^r x^{a} y^{b}=(w(\rho_{1,1},\rho_{1,2}))^r (\rho_{1,1}B_{1,2}^{-1}\rho_{2,1})^{a} (\rho_{1,2}B_{1,2}^{-1}\rho_{2,2})^{b}\\ \widehat{\Phi}_{\#}(e_{2})&=w^s x^{c} y^{d}=(w(\rho_{1,1},\rho_{1,2}))^s (\rho_{1,1}B_{1,2}^{-1}\rho_{2,1})^{c} (\rho_{1,2}B_{1,2}^{-1}\rho_{2,2})^{d}.
\end{align*} Projecting onto the first (resp.\ second) coordinate, it follows that $f_{1\#}(e_{1})=\rho_{1,1}^{rm+a}\rho_{1,2}^{rn+b}$ and $f_{1\#}(e_{2})=\rho_{1,1}^{sm+c}\rho_{1,2}^{sn+d}$ (resp.\ $f_{2\#}(e_{1})=\rho_{2,1}^{a}\rho_{2,2}^{b}$ and $f_{2\#}(e_{2})=\rho_{2,1}^{c}\rho_{2,2}^{d}$), so $M_{1} =\left( \begin{smallmatrix} rm+a & sm+c \\ rn+b & sn+d \end{smallmatrix}\right)$ and $M_{2} =\left( \begin{smallmatrix} a & c \\ b & d \end{smallmatrix}\right)$. One then obtains the equation for $N(\phi)$ as a consequence of \reth{helgath01} and the usual formula for the Nielsen number of a self-map of $\ensuremath{\mathbb{T}^{2}}$~\cite{BrooBrPaTa}. The second part of the statement is clear, since if $\phi$ can be deformed to a fixed point free $2$-valued map then $f_1, f_2$ can both be deformed to fixed point free maps. To prove the last part, note that $f_1$ and $f_2$ can both be deformed to fixed point free maps if and only if $\det(M_{i}-I_{2})=0$ for $i=1,2$. Since the determinant is multilinear in the columns and $\det\left( \begin{smallmatrix} rm & sm \\ rn & sn \end{smallmatrix}\right)=0$, we have $\det(M_{1}-I_{2})=\det(M_{2}-I_{2})+\det\left( \begin{smallmatrix} a-1 & sm \\ b & sn \end{smallmatrix}\right)+\det\left( \begin{smallmatrix} rm & c \\ rn & d-1 \end{smallmatrix}\right)$, so the vanishing of both determinants is equivalent to: \begin{gather} \text{$\det(M_{2}-I_{2})=0$, and}\label{eq:onedet}\\ \det\begin{pmatrix} a-1 & sm \\ b & sn \end{pmatrix}+ \det\begin{pmatrix} rm & c \\ rn & d-1 \end{pmatrix}=0.\label{eq:twodet} \end{gather} Equation~\reqref{onedet} is equivalent to the proportionality of $(c,d-1)$ and $(a-1,b)$. Suppose that \req{twodet} holds. If one of the determinants in that equation is zero, then so is the other, and it follows that $(a-1, b),(c,d-1)$ and $(m,n)$ generate a cyclic subgroup of $\ensuremath{\mathbb Z}^{2}$, which yields condition~(\ref{it:exisfpfa}) of the statement. If both of these determinants are non-zero then $(m,n)$ is neither proportional to $(a-1,b)$ nor to $(c,d-1)$, and since $(c,d-1)$ and $(a-1,b)$ are proportional, $(m,n)$ is not proportional to any linear combination of the two.
Further,~\reqref{twodet} may be written as: \begin{equation*} 0=s\det\begin{pmatrix} a-1 & m \\ b & n \end{pmatrix}+ r\det\begin{pmatrix} m & c \\ n & d-1 \end{pmatrix}= \det\begin{pmatrix} s(a-1)-rc & m \\ sb-r(d-1) & n \end{pmatrix}, \end{equation*} from which it follows that the linear combination $s(a-1,b)-r(c,d-1)$ is proportional to $(m,n)$; since $(m,n)$ is not proportional to any linear combination of $(a-1,b)$ and $(c,d-1)$, we conclude that $s(a-1, b)=r(c,d-1)$, which is condition~(\ref{it:exisfpfb}) of the statement. The converse is straightforward. \end{proof} \begin{rem} Within the framework of \repr{exisfpf}, the fact that $f_1$ and $f_2$ can be deformed to fixed point free maps does not necessarily imply that there exists a deformation of the pair $(f_1, f_2)$, regarded as a map from $\ensuremath{\mathbb{T}^{2}}$ to $F_2(\ensuremath{\mathbb{T}^{2}})$, to a pair $(f_1', f_2')$ where the maps $f_1'$ and $f_2'$ are fixed point free. Answering the question of whether the $2$-ordered map $(f_1, f_2)\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb{T}^{2}})$ can be deformed to a fixed point free $2$-ordered map, under the hypothesis that each of the maps $f_1$ and $f_2$ can be deformed to a fixed point free map, would be a major step in understanding the fixed point theory of $n$-valued maps, and would help in deciding whether or not the Wecken property holds for $\ensuremath{\mathbb{T}^{2}}$ for the class of split $n$-valued maps. \end{rem} \subsubsection{Deformations to root-free $2$-valued maps}\label{sec:p2Tminus1} Recall from the introduction that a root of an $n$-valued map $\phi_0\colon\thinspace X \multimap Y$, with respect to a basepoint $y_0\in Y$, is a point $x$ such that $y_0\in \phi_0(x)$. If $\phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ is an $n$-valued map then we may construct another $n$-valued map $\phi_0\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ as follows. If $x\in \ensuremath{\mathbb{T}^{2}}$ and $\phi(x)=\{ x_1,\ldots,x_n\}$, then let $\phi_0(x)=\{x_1\ldotp x^{-1},\ldots,x_n \ldotp x^{-1}\}$.
The correspondence $\phi \mapsto \phi_{0}$ is bijective. Moreover, if $\phi$ is split, so that $\phi=\{f_1, f_2,\ldots, f_n\}$, where the self-maps $f_{i}\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} \ensuremath{\mathbb{T}^{2}}$, $i=1,\ldots,n$, are pairwise coincidence-free, then $\phi_{0}$ is also split, and is given by $\phi_{0}(x)=\{f_1(x)\ldotp x^{-1},\ldots, f_n(x)\ldotp x^{-1}\}$ for all $x\in \ensuremath{\mathbb{T}^{2}}$. The restriction of the above-mentioned correspondence to the case where the $n$-valued maps are split is also a bijection. The following lemma implies that the question of deciding whether an $n$-valued map $\phi$ can be deformed to a fixed point free map is equivalent to deciding whether the associated map $\phi_{0}$ can be deformed to a root-free map. Let $1$ denote the basepoint of $\ensuremath{\mathbb{T}^{2}}$. \begin{lem}\label{lem:equivroot} With the above notation, a point $x_0\in \ensuremath{\mathbb{T}^{2}}$ is a fixed point of an $n$-valued map $\phi$ of $\ensuremath{\mathbb{T}^{2}}$ if and only if it is a root of the $n$-valued map $\phi_0$ (i.e.\ $1\in \phi_0(x_0)$). Further, $\phi$ may be deformed to an $n$-valued map $\phi'$ such that $\phi'$ has $k$ fixed points if and only if $\phi_0$ may be deformed to an $n$-valued map $\phi_0'$ such that $\phi_0'$ has $k$ roots. \end{lem} \begin{proof} Straightforward, and left to the reader. \end{proof} The algebraic condition given by \reth{defchinegI}(\ref{it:defchinegbI}) is equivalent to the existence of a homomorphism $g_{\#}\colon\thinspace \pi_1(\ensuremath{\mathbb{T}^{2}}) \ensuremath{\longrightarrow} P_{2}(\ensuremath{\mathbb{T}^{2}})$ that factors through $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})$. The following result is the analogue for roots of the second part of \repr{exisfpf}.
\begin{prop}\label{prop:exisrf} Let $g\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ be a split $2$-valued map, and let $\widehat{g}=(g_1, g_2)\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ be a lift of $g$ such that $\widehat{g}_{\#}(e_{1})=(w^{r},(a',b))$ and $\widehat{g}_{\#}(e_{2})=(w^{s}, (c,d'))\in P_{2}(\ensuremath{\mathbb{T}^{2}})$, where $(r,s)\in \ensuremath{\mathbb Z}^{2}\setminus \brak{(0,0)}$, $a',b,c,d'\in \ensuremath{\mathbb Z}$, $w\in \mathbb{F}_2(u,v)$ and $\operatorname{\text{Ab}}(w)=(m,n)$. If $g$ can be deformed to a root-free map, then each of the maps $g_1, g_2\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} \ensuremath{\mathbb{T}^{2}}$ can be deformed to a root-free map. Further, $g_1$ and $g_2$ can both be deformed to root-free maps if and only if either: \begin{enumerate} \item\label{it:exisrfa} the pairs $(a',b),(c,d')$ and $(m,n)$ belong to a cyclic subgroup of $\ensuremath{\mathbb Z}^2$, or \item\label{it:exisrfb} $s(a',b)=r(c,d')$. \end{enumerate} \end{prop} \begin{proof} If $g$ can be deformed to a root-free map then clearly the maps $g_1, g_2\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} \ensuremath{\mathbb{T}^{2}}$ can be deformed to root-free maps. For the second part of the statement, for $i=1,2$, consider the map $f_{i}\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} \ensuremath{\mathbb{T}^{2}}$ defined by $f_i(x)=g_i(x)\ldotp x$. By \relem{equivroot}, $g_1$ and $g_2$ can both be deformed to root-free maps if and only if $f_1$ and $f_2$ can both be deformed to fixed point free maps, and the maps $f_1, f_2$ are determined by the elements $(w^r, (a,b))$, $(w^s, (c,d))$ of $P_2(\ensuremath{\mathbb{T}^{2}})$, where $a=a'+1$ and $d=d'+1$.
By \repr{exisfpf}, either the elements $(a-1,b),(c,d-1)$ and $(m,n)$ belong to a cyclic subgroup of $\ensuremath{\mathbb Z}^2$, or $s(a-1,b)=r(c,d-1)$, which is the same as saying that either the elements $(a',b),(c,d')$ and $(m,n)$ belong to a cyclic subgroup of $\ensuremath{\mathbb Z}^2$, or $s(a',b)=r(c,d')$, and the result follows. \end{proof} \subsubsection{Examples of split $2$-valued maps that may be deformed to root-free $2$-valued maps}\label{sec:exrfm} We now give a family of examples of split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$ that satisfy the necessary condition of \repr{exisrf} for such a map to be deformable to a root-free map. To do so, we exhibit a family of $2$-ordered maps that we compose with the projection $\pi\colon\thinspace F_2(\ensuremath{\mathbb{T}^{2}}) \ensuremath{\longrightarrow} D_2(\ensuremath{\mathbb{T}^{2}})$ to obtain a family of split $2$-valued maps. We begin by studying a $2$-ordered map of $\ensuremath{\mathbb{T}^{2}}$ determined by a pair of braids of the form $(w^{r},(a,b))$, $(w^{s}, (c,d))$, where $s(a,b)=r(c,d)$ (we make use of the notation of \repr{exisrf}). \begin{prop}\label{prop:necrootfree2} If $\widehat{\Phi}\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2({\ensuremath{\mathbb{T}^{2}}})$ is a lift of a split $2$-valued map $\phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ that satisfies $\widehat{\Phi}_{\#}(e_{1})=(w^{r},(a,b))$ and $\widehat{\Phi}_{\#}(e_{2})= (w^{s}, (c,d))$, where $w\in \mathbb{F}_2(u,v)$, $a,b,c,d\in \ensuremath{\mathbb Z}$ and $(r,s)\in \ensuremath{\mathbb Z}^{2}\setminus \brak{(0,0)}$ satisfy $s(a,b)=r(c,d)$, then $\phi$ may be deformed to a root-free $2$-valued map. \end{prop} \begin{proof} By hypothesis, the subgroup $\Gamma$ of $\ensuremath{\mathbb Z}^2$ generated by $(a,b)$ and $(c,d)$ is contained in a subgroup isomorphic to $\ensuremath{\mathbb Z}$. Let $\gamma$ be a generator of $\Gamma$.
Suppose first that $r$ and $s$ are both non-zero. Then we may take $\gamma=(a_0, b_0)=(\ell/r)(a,b)= (\ell/s)(c,d)$, where $\ell=\gcd(r,s)$, and the elements $(w^{r},(a,b))$ and $(w^{s}, (c,d))$ belong to the subgroup of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ generated by $(w^{\ell},(a_0,b_0))$. Let $z\in P_2(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ be an element that projects to $(w^{\ell},(a_0,b_0))$ under the homomorphism $\alpha\colon\thinspace P_2(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})\ensuremath{\longrightarrow} P_2(\ensuremath{\mathbb{T}^{2}})$ induced by the inclusion $\ensuremath{\mathbb{T}^{2}}\setminus\brak{1} \ensuremath{\longrightarrow} \ensuremath{\mathbb{T}^{2}}$. The map $\varphi\colon\thinspace \pi_1(\ensuremath{\mathbb{T}^{2}})\ensuremath{\longrightarrow} P_2(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ defined by $\varphi(e_1)=z^{r/\ell}$ and $\varphi(e_2)=z^{s/\ell}$ extends to a homomorphism, and is a lift of $\widehat{\Phi}_{\#}$. The result in this case follows by \reth{defchinegI}(\ref{it:defchinegbI}). Now suppose that $r=0$ (resp.\ $s=0$). Then $(a,b)=(0,0)$ (resp.\ $(c,d)=(0,0)$) and $(w^{r},(a,b))$ (resp.\ $(w^{s}, (c,d))$) is trivial in $P_{2}(\ensuremath{\mathbb{T}^{2}})$. Let $z\in P_2(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ be an element that projects to $(w^{s},(c,d))$ (resp.\ to $(w^{r},(a,b))$). Then we define $\varphi(e_1)=1$ and $\varphi(e_2)=z$ (resp.\ $\varphi(e_1)=z$ and $\varphi(e_2)=1$), and once more the result follows. \end{proof} \begin{proof}[Proof of \reth{necrootfree3}] This follows directly from \repr{necrootfree2} and the relation between the fixed point and root problems described by \relem{equivroot}. \end{proof} \begin{lem}\label{lem:exfreero} Let $k,l\in \ensuremath{\mathbb Z}$ and suppose that either $p\in \{0,1\}$ or $q\in \{0,1\}$. With the notation of \repr{presTminus1alta}, the elements $(x^py^q)^k$ and $(u^pv^q)^l$ of $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\}; (x_1,x_2))$ commute.
\end{lem} \begin{proof} We will make use of \repr{presTminus1alta} and some of the relations obtained in its proof. If $p$ or $q$ is zero then the result follows easily. So it suffices to consider the two cases $p=1$ and $q=1$. By \req{conjxyuv}, for $\ensuremath{\varepsilon}\in \brak{1,-1}$, we have $x^{\ensuremath{\varepsilon}}v x^{-\ensuremath{\varepsilon}}=u^{\ensuremath{\varepsilon}}v (u^{-1}B_{1,2})^{\ensuremath{\varepsilon}}$ and $y^{\ensuremath{\varepsilon}}u y^{-\ensuremath{\varepsilon}}=(vB_{1,2}^{-1})^{\ensuremath{\varepsilon}} uv^{-\ensuremath{\varepsilon}}$, and by induction on $r$, it follows that $x^{r} v x^{-r}=u^{r} v (u^{-1}B_{1,2})^{r}$ and $y^{r}u y^{-r}=(vB_{1,2}^{-1})^{r} uv^{-r}$ for all $r\in \ensuremath{\mathbb Z}$. So if $p=1$ or $q=1$ then we have respectively: \begin{align*} xy^{q} uv^{q} y^{-q} x^{-1} &= x (vB_{1,2}^{-1})^{q} uv^{-q} \ldotp v^{q}x^{-1}= uv^{q}u^{-1} u =uv^{q}, \;\text{and}\\ x^{p}y u^{p}v y^{-1} x^{-p} &= x^{p} (yvy^{-1})^{p} vx^{-p}= x^{p} v(B_{1,2}^{-1}u)^{p} v^{-1} \ldotp v x^{-p}= u^{p} v (u^{-1}B_{1,2})^{p}(B_{1,2}^{-1}u)^{p}=u^{p} v, \end{align*} as required. \end{proof} This enables us to prove the following proposition and \reth{construct2val}. \begin{prop}\label{prop:construct2valprop} Suppose that $(a, b),(c,d)$ and $(m,n)$ belong to a cyclic subgroup of $\ensuremath{\mathbb Z}^2$ generated by an element of the form $(0,q), (1,q), (p,0)$ or $(p,1)$, where $p,q\in \ensuremath{\mathbb Z}$, and let $r,s\in \ensuremath{\mathbb Z}$. 
Then there exist $w\in \mathbb{F}_2(u,v)$, a split $2$-valued map $\phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ and a lift $\widehat{\Phi} \colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb{T}^{2}})$ of $\phi$ for which $\operatorname{\text{Ab}}(w)=(m,n)$, $\widehat{\Phi}_{\#}(e_1)=(w^{r},(a,b))$ and $\widehat{\Phi}_{\#}(e_2)= (w^{s}, (c,d))$, and such that $\phi$ can be deformed to a root-free $2$-valued map. \end{prop} \begin{proof} Once more we apply \reth{defchinegI}(\ref{it:defchinegbI}). Let $(p,q)$ be a generator of the cyclic subgroup given in the statement. So there exist $\lambda_{1}, \lambda_{2}, \lambda_{3}\in \ensuremath{\mathbb Z}$ such that $(a, b)=\lambda_1(p,q)$, $(c,d)=\lambda_2(p,q)$ and $(m,n)=\lambda_3(p,q)$. We define $\varphi\colon\thinspace \pi_1(\ensuremath{\mathbb{T}^{2}})\ensuremath{\longrightarrow} P_2(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ by $\varphi(e_1)=(u^pv^q)^{\lambda_3r}(x^py^q)^{\lambda_1}$ and $\varphi(e_2)=(u^pv^q)^{\lambda_3s}(x^py^q)^{\lambda_2}$. \relem{exfreero} implies that $\varphi$ extends to a well-defined homomorphism, and we may take $w=(u^pv^q)^{\lambda_3}$. \end{proof} \begin{proof}[Proof of \reth{construct2val}] This follows directly from \repr{construct2valprop} and the relation between the fixed point and root problems described in \relem{equivroot}. \end{proof} \begin{rem} \reth{construct2val} implies that there is an infinite family of homotopy classes of $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$ that satisfy the necessary condition of \repr{exisrf}(\ref{it:exisrfa}), and can be deformed to root-free maps. We do not know whether there exist examples of maps that satisfy this condition but that cannot be deformed to root-free maps; however, it is likely that such examples exist.
\end{rem} \section*{Appendix: Equivalence between $n$-valued maps and maps into configuration spaces} This appendix constitutes joint work with R.~F.~Brown. Let $n\in \ensuremath{\mathbb N}$. As observed in \resec{intro}, the set of $n$-valued functions from $X$ to $Y$ is in one-to-one correspondence with the set of functions from $X$ to $D_n(Y)$. As we have seen in the main part of this paper, this correspondence facilitates the study of $n$-valued maps, and more specifically of their fixed point theory. In this appendix, we prove \reth{metriccont}, which clarifies the topological relationship preserved by the correspondence under some mild hypotheses on $X$ and $Y$. For the sake of completeness, we will include the proof of a simple fact (\repr{multicont}) mentioned in~\cite{Sch0}, which relates the splitting of maps and the continuity of multifunctions. Given a metric space $Y$, let $\mathcal K'$ be the family of non-empty compact sets of $Y$. We equip $\mathcal K'$ with the topology induced by the Hausdorff metric on $\mathcal K'$ defined in~\cite[Chapter~VI, Section~6]{Be}. \begin{thm}[{\cite[Chapter~VI, Section~6, Theorem~1]{Be}}]\label{th:berge} Let $X$ and $Y$ be metric spaces, let $\mathcal K'$ denote the family of non-empty compact sets of $Y$, and let $\Gamma\colon\thinspace X \multimap Y$ be a multifunction such that $\Gamma(x)\in \mathcal K'$ for all $x\in X$. Then $\Gamma$ is continuous if and only if it is continuous as a single-valued mapping from $X$ to $\mathcal K'$. \end{thm} As we mentioned in \resec{intro}, $F_n(Y)$ may be equipped with the topology induced by the inclusion of $F_{n}(Y)$ in $Y^n$, and $D_n(Y)$ may be equipped with the quotient topology using the quotient map $\pi\colon\thinspace F_n(Y) \ensuremath{\longrightarrow} D_n(Y)$, a subset $W$ of $D_n(Y)$ being open if and only if $\pi^{-1}(W)$ is open in $F_n(Y)$.
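To make the comparison of the two topologies concrete, the Hausdorff metric $d_H$ on $D_n(Y)$ recalled in the next paragraph may be computed directly for finite configurations. A minimal numerical sketch (taking $Y = \mathbb{R}$ with the usual metric; the configurations are illustrative choices):

```python
def hausdorff(z, w):
    # d_H(z, w) = max( max_i d(z_i, w), max_i d(w_i, z) ),
    # where d(p, S) = min_{q in S} d(p, q); here Y = R and d(p, q) = |p - q|
    d = lambda p, s: min(abs(p - q) for q in s)
    return max(max(d(p, w) for p in z), max(d(q, z) for q in w))

# d_H does not depend on the ordering of the chosen representatives in F_n(Y)
z, w = [0.0, 1.0], [0.1, 0.9]
print(hausdorff(z, w), hausdorff(list(reversed(z)), w))
```

Each point of one configuration lies within $0.1$ of the other configuration, so both calls return $0.1$.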
If $Y$ is a metric space with metric $d$, the set $D_n(Y)$ is a subset of $\mathcal K'$, and the Hausdorff metric on $\mathcal K'$ mentioned above restricts to a Hausdorff metric $d_H$ on $D_n(Y)$ defined as follows. If $z,w \in D_n(Y)$ then there exist $(z_1, \ldots , z_n), (w_1, \ldots , w_n)\in F_n(Y)$ such that $z=\pi(z_1, \ldots , z_n)$ and $w=\pi(w_1, \ldots , w_n)$, and we define $d_H$ by: \begin{equation*} d_H(z,w)=\max\Bigl(\max_{1\leq i\leq n} d(z_i,w), \max_{1\leq i\leq n} d(w_i,z) \Bigr), \end{equation*} where $\displaystyle d(z_i,w)=\min_{1\leq j\leq n} d(z_i,w_j)$ for all $1\leq i\leq n$. Notice that $d_H(z,w)$ does not depend on the choice of representatives in $F_n(Y)$. We now prove \reth{metriccont}. \begin{proof}[Proof of \reth{metriccont}] By \reth{berge}, it suffices to show that the set $D_n(Y)$ equipped with the Hausdorff metric $d_{H}$ is homeomorphic to the unordered configuration space $D_n(Y)$ equipped with the quotient topology, or equivalently, to show that a subset of $D_n(Y)$ is open with respect to the Hausdorff metric topology if and only if it is open with respect to the quotient topology. Let $y\in D_n(Y)$, and let $(y_1,\ldots,y_n) \in F_n(Y)$ be such that $\pi(y_1,\ldots,y_n)=y$. For the `if' part, let $U_1,\ldots, U_n$ be open balls in $Y$ whose centres are $y_1,\ldots,y_n$ respectively. Without loss of generality, we may assume that they have the same radius $\ensuremath{\varepsilon}>0$, and are pairwise disjoint. Consider the Hausdorff ball $U_{H}$ of radius $\ensuremath{\varepsilon}$ in $D_n(Y)$ whose centre is $y$. Let $z$ be an element of $U_{H}$, and let $(z_1,\ldots,z_n) \in F_n(Y)$ be such that $\pi(z_1,\ldots,z_n)=z$. Suppose for a contradiction that $z \notin \pi(U_1 \times \cdots \times U_n)$. Then there exists a ball $U_{i}$ such that $z_j \notin U_i$ for all $j\in \brak{1,\ldots,n}$.
So $d(y_i,z)\geq \ensuremath{\varepsilon}$, and from the definition of $d_H$, it follows that $d_H(y,z)\geq \ensuremath{\varepsilon}$, which contradicts the choice of $z$. Hence $z\in \pi(U_1 \times \cdots \times U_n)$, and the `if' part follows. For the `only if' part, let us consider an open ball $U_{H}$ of radius $\ensuremath{\varepsilon}>0$ in the Hausdorff metric $d_H$ whose centre is $y$. We will show that there are open balls $U_1, \ldots ,U_n$ in $Y$ whose centres are $y_1, \ldots , y_n$ respectively, such that the subset of elements $z$ of $D_{n}(Y)$, where $z=\pi(z_1,\ldots,z_n)$ for some $(z_1,\ldots,z_n) \in F_n(Y)$, and where each $U_i$ contains exactly one point of $\brak{z_1,\ldots,z_n}$, is a subset of $U_{H}$. Define $\delta>0$ to be the minimum of $\ensuremath{\varepsilon}$ and the distances $d(y_i, y_j)/2$ for all $i \ne j$. For $j = 1, \dots , n$, let $U_{j}$ be the open ball in $Y$ of radius $\delta$ with respect to $d$ and whose centre is $y_j$. Clearly, $U_i \cap U_j = \varnothing$ for all $i \ne j$. So for all $z=\brak{z_1, \dots , z_n}$ belonging to the set $\pi(U_1 \times \cdots \times U_n)$, each $U_j$ contains exactly one point of $\brak{z_1,\ldots,z_n}$, which, up to permuting indices, we may suppose to be $z_j$. Further, for all $j = 1, \dots , n$, $d(z_j,y)=d(z_j,y_j)<\delta$ and $d(y_j,z)=d(y_j,z_j)<\delta$. So from the definition of $d_H$, $d_H(y,z)<\delta\leq\ensuremath{\varepsilon}$, hence $z\in U_H$, and the `only if' part follows. \end{proof} Just above Lemma~1 of \cite[Section~2]{Sch0}, Schirmer wrote `Clearly a multifunction which splits into maps is continuous'. For the sake of completeness, we provide a short proof of this fact. \begin{prop}\label{prop:multicont} Let $n\in \ensuremath{\mathbb N}$, let $X$ be a topological space, and let $Y$ be a Hausdorff topological space. For $i=1,\ldots,n$, let $f_i\colon\thinspace X \ensuremath{\longrightarrow} Y$ be continuous.
Then the split $n$-valued map $\phi= \brak{f_1,\ldots,f_n}\colon\thinspace X \multimap Y$ is continuous. \end{prop} \begin{proof} Let $x_0 \in X$, and let $V$ be an open subset of $Y$ such that $\phi(x_0) \cap V\ne \varnothing$. Then there exists $j\in \brak{1,\ldots,n}$ such that $f_j(x_0) \in V$. Since $f_j$ is continuous, there exists an open subset $U_j$ containing $x_0$ such that if $x \in U_j$ then $f_j(x) \in V$, so $\phi(x)\cap V\ne \varnothing$. Therefore $\phi$ is lower semi-continuous. To prove upper semi-continuity, first note that for all $x\in X$, $\phi(x)$ is closed in $Y$ because $\phi(x)$ is a finite set and $Y$ is Hausdorff. Now let $x_0 \in X$, and let $V$ be an open subset of $Y$ such that $\phi(x_0) \subset V$. For each $j\in \brak{1,\ldots,n}$, the continuity of $f_j$ yields an open subset $U_j$ containing $x_0$ such that $f_j(U_j) \subset V$, and we set $U = \bigcap_{j=1}^n U_j$. Since $\phi(U) \subset V$, we have proved that $\phi$ is also upper semi-continuous, and so it is continuous. \end{proof} \end{document}
\begin{document} \begin{abstract} We characterize one-dimensional compact repellers having non-concave Lyapunov spectra. For linear maps with two branches we give an explicit condition that characterizes non-concave Lyapunov spectra. \end{abstract} \title{The Lyapunov spectrum is not always concave} \section{Introduction} The rigorous study of the Lyapunov spectrum finds its roots in the pioneering work of H. Weiss \cite{w1}. For simplicity, we restrict our discussion to a dynamical system $T : \Lambda \to \Lambda$ where $\Lambda$ is a compact subset of an interval. The Lyapunov exponent $\lambda (x)$ of $T$ at $x$ is $$\lambda (x) := \lim_{n \to \infty} \frac{1}{n} \log |(T^n)'(x)|,$$ whenever this limit exists. The Lyapunov spectrum $L$ encodes the decomposition of the phase space $\Lambda$ into level sets $J_\alpha$ of the Lyapunov exponent (i.e. the set where $\lambda(x) = \alpha$). More precisely, $L$ is the function that assigns to each $\alpha$ (in an appropriate interval) the Hausdorff dimension of $J_\alpha$. Weiss proved that the Lyapunov spectrum is real analytic, a rather surprising result in light of the fact that this decomposition is fairly complicated. For instance, each level set turns out to be dense in $\Lambda$. Relying on his previous joint results with Pesin~\cite{pw1}, Weiss studied the Lyapunov spectrum via the dimension spectrum of the measure of maximal entropy. It is worth mentioning that not only is the analyticity of $L$ obtained with this approach, but it also gives, for each $\alpha$, an equilibrium measure such that $J_\alpha$ has full measure and the Hausdorff dimension of the measure coincides with that of $J_\alpha$. Among the wealth of novel results about Lyapunov spectra briefly summarized above, the paper~\cite{w1} unfortunately contains a claim which is not fully correct. Namely, that compact conformal repellers have concave Lyapunov spectra~\cite[Theorem 2.4 (1)]{w1}.
Recently, it has been shown that non-compact conformal repellers may have non-concave Lyapunov spectra (see \cite{ks} for the Gauss map and \cite{io} for the Renyi map). Puzzled by this phenomenon, our aim here is to better understand the concavity properties of Lyapunov spectra. On the one hand, we characterize interval maps with two linear full branches for which the Lyapunov spectrum is not concave. In particular, we not only show that compact conformal repellers (see Example \ref{ex}) may have non-concave spectra, but that this already occurs in the simplest possible context. On the other hand, we establish some general conditions under which the Lyapunov spectrum is non-concave. Roughly speaking, the asymptotic variance (of the $-t \log |T'|$ potential) should be sufficiently large (in a certain sense). The class of maps that we consider is defined as follows. Given a pairwise disjoint finite family of closed intervals $I_1, \dots, I_n$ contained in $[0,1]$ we say that a map: $$T: \bigcup_{i=1}^n I_i \to [0,1]$$ is a {\sf cookie-cutter map with $n$ branches} if the following holds: \begin{enumerate} \item $T(I_i)= [0,1]$ for every $i \in \{1 , \dots, n\}$, \item The map $T$ is of class $C^{1 + \epsilon}$ for some $\epsilon >0$, \item $|T' (x)| > 1$ for every $x \in I_1 \cup \cdots \cup I_n $. \end{enumerate} We say that $T$ is a {\sf linear cookie-cutter map} if $T$ restricted to each one of the intervals $I_i$ is an affine map. The {\sf repeller} $\Lambda \subset [0,1]$ of $T$ is \[ \Lambda:= \bigcap_{n=0}^{\infty} T^{-n} ([0,1]). \] The {\sf Lyapunov exponent} of the map $T$ at the point $x \in [0,1]$ is defined by \[ \lambda(x) = \lim_{n \to \infty} \frac{1}{n} \log |(T^n)'(x)|, \] whenever the limit exists. Let us stress that the set of points for which the Lyapunov exponent does not exist has full Hausdorff dimension~\cite{bs}. We will mainly be concerned with the Hausdorff dimension of the level sets of $\lambda$.
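By the chain rule, $\frac{1}{n} \log |(T^n)'(x)| = \frac{1}{n} \sum_{i=0}^{n-1} \log |T'(T^i x)|$, so the Lyapunov exponent of a periodic point of a linear cookie-cutter map is the average of the logarithms of the slopes along its orbit. A minimal numerical sketch (the slopes $2$, $4$ and the period-$2$ point $3/7$ are illustrative choices; exact rational arithmetic keeps the computed orbit from drifting off the periodic cycle):

```python
from fractions import Fraction
import math

A, B = 2, 4  # illustrative integer slopes of the two affine branches

def T(x):
    # linear cookie-cutter: T(x) = A x on [0, 1/A], T(x) = B x + 1 - B on [1 - 1/B, 1]
    return A * x if x <= Fraction(1, A) else B * x + 1 - B

def dT(x):
    return A if x <= Fraction(1, A) else B

def birkhoff_lyapunov(x, n):
    # (1/n) log|(T^n)'(x)| = (1/n) sum_{i<n} log|T'(T^i x)|, by the chain rule
    s = 0.0
    for _ in range(n):
        s += math.log(dT(x))
        x = T(x)
    return s / n

# x0 = 3/7 is a period-2 point (3/7 -> 6/7 -> 3/7) visiting each branch once,
# so its Lyapunov exponent is (log A + log B) / 2
x0 = Fraction(3, 7)
print(birkhoff_lyapunov(x0, 1000), 0.5 * (math.log(A) + math.log(B)))
```

Both printed values agree: the Birkhoff average along the orbit converges to the arithmetic mean of the two log-slopes.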
More precisely, the range of $\lambda$ is an interval $[\alpha_{\min}, \alpha_{\max}]$ and the {\sf multifractal spectrum of the Lyapunov exponent} is the function given by: $$\begin{array}{rccc} L:& [\alpha_{\min}, \alpha_{\max}] & \rightarrow & \mathbb{R} \\ & \alpha & \mapsto & \dim_H ( J_\alpha = \{ x \in \Lambda \mid \lambda(x) = \alpha \}), \end{array}$$ where $\dim_H(J)$ denotes the Hausdorff dimension of $J$. For short we say that $L$ is the {\sf Lyapunov spectrum of $T$}. In our first result we consider linear cookie-cutters with two branches and obtain conditions on the slopes that ensure that the Lyapunov spectrum is concave. This result can also be used to construct examples of non-concave Lyapunov spectra. \begin{introtheorem} \label{A} Consider the linear cookie-cutter map with two branches $$T: \left[0, \frac{1}{a} \right] \cup \left[1-\frac{1}{b}, 1\right] \to \left[0,1\right] $$ defined by $$T(x) = \begin{cases} a x & \text{ if } x \le \frac{1}{a}, \\ b x + 1 - b & \text{ if } x \ge 1-\frac{1}{b}. \end{cases} $$ Then the Lyapunov spectrum $L:[\log a , \log b] \to \mathbb{R}$ of $T$ is concave if and only if $$\dfrac{\log b}{\log a} \leq \dfrac{\sqrt{2 \log 2} + 1}{\sqrt{2 \log 2} - 1} \approx 12.2733202\ldots$$ \end{introtheorem} The above relation gives the bifurcation point dividing the spectra with inflection points from the concave ones. For maps with two linear branches, the combination of Lemma~\ref{tran} with this theorem implies that the bifurcation between concave spectra and non-concave ones may only occur when the Lyapunov exponent, $\alpha_M$, corresponding to the measure of maximal entropy is an inflection point. In order to describe the Lyapunov spectrum we will make use of the thermodynamic formalism. Let $T$ be a cookie-cutter map, and denote by $\mathcal{M}_T$ the set of $T$-invariant probability measures.
The {\sf topological pressure} of $-t \log |T'|$ with respect to $T$ is defined by \begin{equation*} P(-t \log |T'|) = \sup \left\{ h(\mu) -t \int \log |T'| \ d\mu : \mu \in \mathcal{M}_T \right\}, \end{equation*} where $ h(\mu)$ denotes the measure theoretic entropy of $T$ with respect to the measure $\mu$ (see \cite[Chapter 4]{wa} for a precise definition of entropy). A measure $\mu_{t} \in \mathcal{M}_T$ is called an {\sf equilibrium measure} for $-t \log |T'|$ if it satisfies: \[ P(-t \log |T'|) = h(\mu_{t}) - t \int \log |T'| \ d\mu_{t}. \] If the function $\log |T'|$ is not cohomologous to a constant then the function $t \mapsto P(-t \log |T'|)$ is strictly convex, strictly decreasing, real analytic and for every $t \in \mathbb{R}$ there exists a unique equilibrium measure $\mu_t$ corresponding to $-t \log |T'|$ (see \cite[Chapters 3 and 4]{pp}). Moreover, there are explicit expressions for the derivatives of the pressure. Indeed (see \cite[Chapter $4$]{pu}), the first derivative of the pressure is given by \[\alpha(t_0):= -\frac{d}{dt} P(-t \log |T'|) \Big|_{t=t_0} = \int \log |T'| \ d \mu_{t_0}.\] The second derivative of the pressure is the {\sf asymptotic variance} \[\frac{d^2}{dt^2} P(-t \log |T'|) \Big|_{t=t_0} = \sigma^2(t_0),\] where \[\sigma^2(t_0):= \lim_{n \to \infty} \frac{1}{n} \int \left( \sum_{i=0}^{n-1} \log |T'(T^i x)| - n \int \log |T'| \ d \mu_{t_0} \right) ^2 d\mu_{t_0}(x). \] There exists a close relation between the topological pressure and the Hausdorff dimension of the repeller. In fact, the number $t_d = \dim_H(\Lambda)$ is the unique zero of the Bowen equation (see \cite[Chapter $7$]{pe}) \[P(-t \log |T'|) =0.\] Let $\mu_{t_d}$ be the unique equilibrium measure corresponding to the function $-t_d \log |T'|$ and let \[\alpha_d := \int \log |T'| \ d\mu_{t_d}.\] Our next theorem establishes general conditions for a cookie-cutter map to have (non-)concave Lyapunov spectrum.
\begin{introtheorem} \label{B} Let $L$ be the Lyapunov spectrum of a cookie-cutter map $T$. Then $L$ is always concave in $[\alpha_{\min} , \alpha_d]$. Moreover, $L: [\alpha_{\min}, \alpha_{\max}] \to \mathbb{R}$ is concave if and only if $${\sigma^2(t)} < \dfrac{\alpha(t)^2} {2 P(-t \log |T'|)} \quad \text{ for all } \quad t < t_d.$$ \end{introtheorem} When considering linear cookie-cutter maps we obtain a simpler formula in terms of the slopes of the map. \begin{introcorollary} \label{C} Consider a linear cookie-cutter map $T$ with $n$ branches of slopes $m_1, \dots, m_n$. Then its Lyapunov spectrum $L: [\min\{\log|m_i|\}, \max\{\log|m_i|\}] \to \mathbb{R}$ is concave if and only if, for all $t \in \mathbb{R}$, \begin{equation*} 2 \log \left( \sum_{i=1}^n |m_i|^t \right) \left(\frac{\left( \sum_{i=1}^n |m_i|^t (\log |m_i|)^2 \right) \left( \sum_{i=1}^n |m_i|^t \right)}{\left( \sum_{i=1}^n |m_i|^t \log |m_i| \right)^2} - 1 \right) \leq 1. \end{equation*} \end{introcorollary} In terms of the $L^1(\mu_t)$ and $L^2(\mu_t)$ norms with respect to the corresponding equilibrium measure $\mu_t$ the above formula may be rewritten as: $$ 2 P(-t \log |T'|) \left( \dfrac{ \| \log|T'| \|^2_{2,t}}{ \| \log|T'| \|^2_{1,t} } -1 \right) \le 1.$$ Although our results shed some light on the concavity properties of the Lyapunov spectrum, to the best of our knowledge, the occurrence of inflection points is not well understood. In fact, given a map with Lyapunov spectrum having an inflection point at $\alpha_0$, it is natural to pose the general problem of understanding the geometric and ergodic properties of the equilibrium measure with exponent $\alpha_0$. From our results it follows that the Lyapunov spectrum of a cookie-cutter map has an even (possibly zero) number of inflection points. Although Theorem~\ref{A} establishes the existence of maps with spectra having at least two inflection points, it does not give an exact count.
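The left-hand side of the condition of Corollary~\ref{C} is easy to evaluate numerically. For two branches with $\log b/\log a = \rho$, at $t=0$ it reduces to $2\log 2\,\bigl((\rho-1)/(\rho+1)\bigr)^{2}$, and setting this expression equal to $1$ recovers exactly the threshold of Theorem~\ref{A}. A numerical sketch (the slopes $e$, $e^{\rho}$ and the grid of values of $t$ are illustrative choices):

```python
import math

def lhs(t, slopes):
    # left-hand side of the concavity criterion of Corollary C
    s0 = sum(abs(m) ** t for m in slopes)
    s1 = sum(abs(m) ** t * math.log(abs(m)) for m in slopes)
    s2 = sum(abs(m) ** t * math.log(abs(m)) ** 2 for m in slopes)
    return 2.0 * math.log(s0) * (s2 * s0 / s1 ** 2 - 1.0)

# threshold of Theorem A
threshold = (math.sqrt(2 * math.log(2)) + 1) / (math.sqrt(2 * math.log(2)) - 1)

# slopes e and e^rho, so that log b / log a = rho; scan t on a grid in [-3, 3]
results = {}
for rho in (12.0, 13.0):
    slopes = (math.e, math.e ** rho)
    results[rho] = max(lhs(k / 100.0, slopes) for k in range(-300, 301))

print(threshold, results)  # the criterion fails (value > 1) only for rho > threshold
```

For $\rho = 12$ (below the threshold) the maximum over the grid stays below $1$, while for $\rho = 13$ the value at $t=0$ already exceeds $1$, so the corresponding spectrum is not concave.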
We conjecture that the spectrum of a cookie-cutter map with two branches has at most two inflection points. Also one may ask: Is there an upper bound on the number of inflection points of the spectrum of a cookie-cutter map? of a compact conformal repeller? of a non-compact conformal repeller? Our results are based on the following formula that ties the topological pressure to the Lyapunov spectrum \begin{equation} \label{WeissFormula} L(\alpha) = \frac{1}{\alpha} \inf_{t \in \mathbb{R}} (P(-t \log |T'|) +t \alpha). \end{equation} This formula follows from the work of Weiss \cite{w1} and can be found explicitly, for instance, in the work of Kesseb\"ohmer and Stratmann \cite{ks}. Actually, in this setting, the Lyapunov spectrum can be written as \begin{equation} \label{lyapunov} L(\alpha)= \frac{1}{\alpha} (P(-t_{\alpha} \log |T'|) +t_{\alpha} \alpha) = \frac{h(\mu_{\alpha})}{\alpha}, \end{equation} where $t_{\alpha}$ is the unique real number such that \[ -\dfrac{d}{d t} P(-t \log |T'|) \Big|_{t=t_{\alpha}} = \int \log |T'| \ d\mu_{\alpha} = \alpha, \] and $\mu_{\alpha}$ is the unique equilibrium measure corresponding to the potential $-t_{\alpha} \log |T'|$. That is, $\alpha \mapsto t_\alpha$ is the inverse of $t \mapsto \alpha(t)$. Thus, after the substitution $\alpha = \alpha(t)$, equation~\eqref{WeissFormula} becomes: \begin{equation} \label{lt} L(\alpha(t))= \frac{1}{\alpha(t)} \left( P(-t \log |T'|) +t \alpha(t) \right). \end{equation} The structure of the paper is as follows. In Section~\ref{2} we prove Theorem~\ref{A} and construct explicit examples of maps for which the Lyapunov spectrum is not concave. In Section~\ref{nl} we prove Theorem~\ref{B} and finally in Section~\ref{lc} we prove Corollary~\ref{C}. \section{Maps with two branches} \label{2} Our aim now is to prove Theorem~\ref{A}. Throughout this section we let $T$ be the cookie-cutter with two linear branches of slopes $b>a>1$ and Lyapunov spectrum $L$, as in the statement of Theorem~\ref{A}.
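Before turning to the proof, formula~\eqref{WeissFormula} can be checked numerically for such a map: the pressure has the closed form $P(-t\log|T'|)=\log(a^{-t}+b^{-t})$, at $\alpha_M=(\log a+\log b)/2$ (the exponent of the measure of maximal entropy) the infimum is attained at $t=0$, so $L(\alpha_M)=\log 2/\alpha_M$, and the spectrum attains its maximum value $t_d=\dim_H(\Lambda)$ at $\alpha_d$. A numerical sketch (the slopes $a=2$, $b=8$ are illustrative choices):

```python
import math

a, b = 2.0, 8.0  # illustrative slopes, b > a > 1

def pressure(t):
    # closed form of P(-t log|T'|) for the two-branch linear map
    return math.log(a ** (-t) + b ** (-t))

def spectrum(alpha):
    # L(alpha) = (1/alpha) inf_t ( P(-t log|T'|) + t alpha ),
    # with the infimum approximated on a grid of values of t
    return min(pressure(k / 1000.0) + (k / 1000.0) * alpha
               for k in range(-5000, 5001)) / alpha

# at alpha_M the infimum is attained at t = 0, so L(alpha_M) = log 2 / alpha_M
alpha_M = 0.5 * (math.log(a) + math.log(b))
print(spectrum(alpha_M), math.log(2) / alpha_M)

# Bowen equation: t_d = dim_H(Lambda) is the unique zero of the pressure,
# found here by bisection (the pressure is strictly decreasing in t)
lo, hi = 0.0, 1.0
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if pressure(mid) > 0 else (lo, mid)
t_d = 0.5 * (lo + hi)

# the spectrum attains its maximum value t_d at alpha_d = alpha(t_d)
alpha_d = (a ** (-t_d) * math.log(a) + b ** (-t_d) * math.log(b)) / \
          (a ** (-t_d) + b ** (-t_d))
print(t_d, spectrum(alpha_d))
```

The first pair of printed values coincide, and the second pair agree up to the grid resolution, illustrating that the maximum of the spectrum equals the dimension of the repeller.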
The proof relies on explicit formulas for $L$ together with a characterization of the inflection points that persist under small changes of the slopes $a$ and $b$ (i.e. transversal). We show that unstable zeros of $\frac{d^2 L}{d \alpha^2} (\alpha)$ may only occur at the Lyapunov exponent of the measure of maximal entropy when the logarithmic ratio of the slopes is as in the statement of the theorem. We start by obtaining an explicit formula for $L$, from equation~\eqref{WeissFormula}. \begin{lema} \label{2BranchFormula} \begin{equation*} L(\alpha) = \frac{1}{\alpha} \left[- \left( \frac{\log b - \alpha}{\log \frac{b}{a}} \right) \log \left( \frac{\log b - \alpha}{\log \frac{b}{a}} \right) - \left( \frac{\alpha - \log a }{\log \frac{b}{a}} \right) \log \left( \frac{\alpha - \log a}{\log \frac{b}{a}} \right) \right]. \end{equation*} \end{lema} \begin{proof} The equilibrium measure $\mu_{\alpha}$, in equation~\eqref{lyapunov}, can be explicitly determined. This is due to the fact that it is a Bernoulli measure (see \cite[Theorem 9.16]{wa}). Since the Lyapunov exponent corresponding to $\mu_{\alpha}$ is $\alpha$ we have that \begin{eqnarray*} \alpha &=& \int \log |T'| \ d\mu_{\alpha} = \mu_{\alpha}(I_1) \log a + \mu_{\alpha}(I_2) \log b \\ &=& \mu_{\alpha}(I_1) \log a +(1- \mu_{\alpha}(I_1)) \log b \\ &=& \mu_{\alpha}(I_1) \left( \log a - \log b \right) + \log b. \end{eqnarray*} Hence, \begin{eqnarray*} \mu_{\alpha}(I_1)& =& \frac{\log b - \alpha}{\log b - \log a}, \\ \mu_{\alpha}(I_2) & = & \frac{\alpha - \log a}{\log b - \log a}. \end{eqnarray*} Therefore, $\mu_{\alpha}$ is the unique Bernoulli measure which satisfies the above conditions. Moreover, the entropy of this measure is \begin{equation} \label{entropy} h(\mu_{\alpha})=-\left( \frac{\log b - \alpha}{\log b - \log a} \right) \log \left( \frac{ \log b - \alpha}{\log b - \log a} \right) - \left( \frac{\alpha - \log a}{\log b - \log a} \right) \log \left( \frac{\alpha - \log a}{\log b - \log a} \right). 
\end{equation} Hence, from equation \eqref{lyapunov} we obtain that \begin{equation*} \label{dosramas} L(\alpha) = - \frac{1}{\alpha \log \frac{b}{a}} \left[ \left({\log b - \alpha} \right) \log \left( \frac{\log b - \alpha}{\log b - \log a} \right) + \left({\alpha - \log a } \right) \log \left( \frac{\alpha - \log a}{\log b - \log a} \right) \right]. \end{equation*} \end{proof} As suggested by equation~\eqref{lyapunov}, the behavior of the entropy function is closely related to the (in)existence of inflection points of the Lyapunov spectrum. In fact: \begin{lema}[Remark 8.1 \cite{io}] \label{et} A point $\alpha_0 \in (\log a , \log b)$ satisfies $ \frac{d^2L}{d \alpha^2} (\alpha_0)=0$ if and only if \[ 2 \frac{d L}{d \alpha} (\alpha_0) = \frac{d^2}{d \alpha^2} h(\mu_{\alpha}) \Big{|}_{\alpha=\alpha_0}. \] \end{lema} Thus, our aim now is to study the second derivative of the entropy: \begin{prop} \label{segunda} Let $\alpha_M= (\log a+ \log b)/2$. Then the function $$\frac{d^2}{d \alpha^2} h(\mu_{\alpha}) :(\log a , \log b) \to \mathbb{R}$$ is concave, increasing on the interval $(\log a ,\alpha_M)$ and decreasing on the interval $(\alpha_M , \log b)$. In particular, it has a unique maximum at $\alpha = \alpha_M$. Moreover, \begin{eqnarray*} \frac{d^2}{d \alpha^2} h(\mu_{\alpha}) \Big|_{\alpha = \alpha_M} & = & - \Big(\frac{2}{\log \frac{b}{a}} \Big)^2, \\ \lim_{ \alpha \to \log a}\frac{d^2}{d \alpha^2} h(\mu_{\alpha}) &=& - \infty,\\ \lim_{ \alpha \to \log b} \frac{d^2}{d \alpha^2} h(\mu_{\alpha}) & = &- \infty. \end{eqnarray*} \end{prop} \begin{proof} From equation \eqref{entropy} we have that the first derivative of the entropy with respect to $\alpha$ is given by \begin{equation} \label{1-derivative} \frac{d}{d \alpha} h(\mu_{\alpha})=\frac{1}{\log \frac{b}{a}} \left( \log \left( \frac{\log b - \alpha}{\log {b} - \log {a}} \right) - \log \left( \frac{\alpha - \log a}{\log {b}- \log{a}} \right) \right).
\end{equation} We can also compute its second derivative, \begin{eqnarray} \label{2-derivative} \frac{d^2}{d \alpha^2} h(\mu_{\alpha})=\frac{-1}{\log \frac{b}{a}} \left( \frac{1}{\alpha - \log a} +\frac{1}{\log b - \alpha} \right). \end{eqnarray} Note that this function has two vertical asymptotes, at $\log a$ and at $\log b$. Indeed \[ \lim_{ \alpha \to \log a} \frac{d^2}{d \alpha^2} h(\mu_{\alpha}) = - \infty \textrm{ and } \lim_{ \alpha \to \log b} \frac{d^2}{d \alpha^2} h(\mu_{\alpha})= - \infty. \] In order to obtain the maximum of $\frac{d^2}{d \alpha^2} h(\mu_{\alpha})$ we compute the third derivative of the entropy function, \[ \frac{d^3}{d \alpha^3} h(\mu_{\alpha})= \frac{1}{\log \frac{b}{a}} \left( \frac{1}{(\alpha - \log a)^2} -\frac{1}{(\log b- \alpha)^2} \right), \] which is equal to zero if and only if \[ \alpha=\alpha_M= \frac{\log b + \log a}{2} . \] Moreover, \[ \frac{d^4}{d \alpha^4} h(\mu_{\alpha})= \frac{-2}{\log \frac{b}{a}} \left( \frac{1}{(\alpha - \log a)^3} +\frac{1}{(\log b- \alpha)^3} \right). \] In particular, \[ \frac{d^4}{d \alpha^4} h(\mu_{\alpha}) \leq 0, \] so $\frac{d^2}{d \alpha^2} h(\mu_{\alpha})$ is concave, increasing for $\alpha < \alpha_M$ and decreasing for $\alpha > \alpha_M$. Finally, evaluating \eqref{2-derivative} at $\alpha = \alpha_M$ gives the stated value $-\left(2/\log \frac{b}{a}\right)^2$. \end{proof} Now we combine Proposition \ref{segunda} with Lemma~\ref{et} in order to characterize inflection points of the spectrum which are stable under small changes of the slopes $a$ and $b$. Namely, transversal intersections between the graphs of $2 \frac{d L}{d \alpha} (\alpha)$ and $\frac{d^2}{d \alpha^2} h(\mu_{\alpha})$. \begin{lema} \label{tran} Assume that $\alpha_i \in (\log a, \log b)$ with $\alpha_i \neq \alpha_M$ is such that $$2 \frac{d L}{d \alpha} (\alpha_i) = \frac{d^2}{d \alpha^2} h(\mu_{\alpha}) \Big|_{\alpha = \alpha_i}.$$ Then the intersection of the graphs of $2 \frac{d L}{d \alpha} (\alpha)$ and $ \frac{d^2}{d \alpha^2} h(\mu_{\alpha}) $ at $\alpha = \alpha_i$ is transversal.
\end{lema} \begin{proof} By contradiction, suppose that the intersection is not transversal, that is \begin{equation} \label{trans} 2 \frac{d^2 L}{d \alpha^2} (\alpha_i) = \frac{d^3 h(\mu_\alpha)}{d \alpha^3} (\alpha_i). \end{equation} Since $$2 \frac{d}{d \alpha} L(\alpha_i) = \frac{d^2}{d \alpha^2}h (\alpha_i)$$ we have, by Lemma~\ref{et}, that $ \frac{d^2 L}{d \alpha^2} (\alpha_i)=0$. Therefore equation \eqref{trans} implies that $ \frac{d^3}{d \alpha^3}h (\mu_{\alpha})\Big|_{\alpha =\alpha_i}=0$. Thus, by the computation of the third derivative in the proof of Proposition \ref{segunda}, $\alpha_i= \alpha_M$. \end{proof} Finally, we are ready to prove the Theorem. \begin{proof}[Proof of Theorem~\ref{A}] From equation \eqref{lyapunov} we obtain that \begin{eqnarray*} 2 \frac{d L}{d \alpha}(\alpha)= \frac{2}{\alpha^2 \log \frac{b}{a} } \left[ \log b \log \left( \frac{\log b - \alpha}{\log {b} - \log {a}} \right) - \log a \log \left( \frac{\alpha - \log a}{\log b - \log{a}} \right) \right]. \end{eqnarray*} Therefore, \begin{eqnarray*} 2 \frac{d L}{d \alpha} (\alpha) \Big|_{\alpha=\alpha_M} &=& -\frac{8 \log 2}{ (\log a + \log b)^2}. \end{eqnarray*} Moreover, \begin{eqnarray*} \lim_{\alpha \to \log a} 2\frac{d L}{d \alpha}(\alpha) & = & \infty, \\ \lim_{\alpha \to \log b} 2\frac{d L}{d \alpha} (\alpha) & = & -\infty. \end{eqnarray*} Since $2\frac{d L}{d \alpha}$ diverges only logarithmically at the endpoints, while by Proposition~\ref{segunda} the function $\frac{d^2}{d \alpha^2} h(\mu_{\alpha})$ has poles there, we also have \[ \lim_{\alpha \to \log a} \left( 2\frac{d L}{d \alpha}(\alpha) - \frac{d^2}{d \alpha^2} h(\mu_{\alpha}) \right) = \infty = \lim_{\alpha \to \log b} \left( 2\frac{d L}{d \alpha}(\alpha) - \frac{d^2}{d \alpha^2} h(\mu_{\alpha}) \right). \] Hence, a sufficient condition to have two transversal intersections of the graphs of $2\frac{d L}{d \alpha}$ and $\frac{d^2}{d \alpha^2}h$ is $2\frac{d L}{d \alpha} (\alpha_M) < \frac{d^2}{d \alpha^2}h(\alpha_M)$. Now, $$2 \frac{d L}{d \alpha} (\alpha_M) \le \frac{d^2}{d \alpha^2}h(\alpha_M) \iff \dfrac{\log b}{\log a} \geq \dfrac{\sqrt{2 \log 2} + 1}{\sqrt{2 \log 2} - 1},$$ where equality holds in one inequality if and only if it holds in the other.
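The threshold can be double-checked numerically. Normalizing $\log a = 1$ and writing $\log b = r$, the gap between the two quantities evaluated at $\alpha_M$ is $-8\log 2/(1+r)^2 + 4/(r-1)^2$, which changes sign exactly at $r = (\sqrt{2\log 2}+1)/(\sqrt{2\log 2}-1)$; a short sketch (ours, not part of the paper):

```python
import math

# With log a = 1 and log b = r, the quantities at alpha_M are
#   2 L'(alpha_M) = -8 log 2 / (1 + r)^2   and   h''(alpha_M) = -4 / (r - 1)^2.
def gap(r):
    return -8 * math.log(2) / (1 + r) ** 2 + 4 / (r - 1) ** 2

s = math.sqrt(2 * math.log(2))
r_star = (s + 1) / (s - 1)          # ~12.27, the bifurcation value of the ratio
print(gap(11.0) > 0)                # ratio below threshold: graphs stay apart
print(abs(gap(r_star)) < 1e-9)      # gap vanishes exactly at the threshold
print(gap(45.0) < 0)                # the a = e, b = e^45 example: two crossings
```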
To finish the proof of the theorem we must check that there exist values of $a$ and $b$ such that $$\dfrac{\log b}{\log a} < \dfrac{\sqrt{2 \log 2} + 1}{\sqrt{2 \log 2} - 1},$$ for which the corresponding graphs of $2\frac{d L}{d \alpha}$ and $\frac{d^2}{d \alpha^2}h$ do not intersect. For this purpose let $a >1$ and $b = \exp(\varepsilon) a$. To ease notation we introduce $$x = \dfrac{\alpha-\log a}{\log b - \log a},$$ and note that $0 \leq x \leq 1$. It follows that $$ \varepsilon^2 x (1 -x) 2 \frac{d L}{d \alpha} (\alpha) = \dfrac{2 \varepsilon}{\alpha^2} g(x),$$ where $g:[0,1] \rightarrow \mathbb{R}$ is the continuous function such that for all $x \in (0,1)$, $$g(x) = \varepsilon x (1-x) \log (1-x) + (\log a) x (1-x) \log (1-x) - (\log a) (1-x) x \log x.$$ Hence, uniformly for $\alpha \in [\log a, \log b]$, we have that $ \varepsilon^2 x (1 -x) 2 \frac{d}{d \alpha}L(\alpha) \to 0$, as $\varepsilon \searrow 0$. However, from equation~\eqref{2-derivative} $$ \varepsilon^2 x (1 -x) \frac{d^2}{d \alpha^2} h(\mu_{\alpha}) = -1.$$ Thus, for $\varepsilon >0$ sufficiently small, we have that $2\frac{d L}{d \alpha} (\alpha) > \frac{d^2}{d \alpha^2}h(\alpha)$ for all $\alpha \in [\log a, \log b]$. \end{proof} \begin{eje}[Non-concave Lyapunov spectrum] \label{ex} {\em Let $T$ be a linear cookie-cutter map with two branches of slopes $a=\exp(1)$ and $b= \exp(45)$. The corresponding Lyapunov spectrum is not concave since \[45=\dfrac{\log b}{\log a} > \dfrac{\sqrt{2 \log 2} + 1}{\sqrt{2 \log 2} - 1}. \] See Figure~\ref{2BranchFigure} (right).} \end{eje} \begin{figure} \caption{The Lyapunov spectra for maps with two linear branches of slopes $a$ and $b$. For all graphs $a=\exp(1)$. At the left, a concave spectrum corresponding to $b= \exp(10)$. At the center, also a concave spectrum, but $\log b \approx (\sqrt{2 \log 2} + 1) / (\sqrt{2 \log 2} - 1)$ is the bifurcation value of the slope. At the right, the non-concave graph corresponding to $b=\exp(45)$.
} \label{2BranchFigure} \end{figure} \section{The non-linear case} \label{nl} In this section we prove Theorem~\ref{B}, which gives a general condition that ensures the existence of inflection points for the Lyapunov spectrum. In the next section we will show that the mentioned condition becomes explicit for maps with linear branches. \begin{proof}[Proof of Theorem~\ref{B}] From equation~\eqref{lt} we have \[L(\alpha(t))= \frac{P(-t \log |T'|) +t \alpha(t)}{\alpha(t)}.\] Then, using $f(g(t))'$ to denote the derivative of $f \circ g$ with respect to $t$, we have \[L(\alpha(t))'= \frac{1}{\alpha(t)^2} \left( - \alpha'(t) P(-t \log |T'|) + \alpha(t) P(-t\log |T'|)' +\alpha(t)^2 \right).\] Recall that \begin{equation} \label{rela} \alpha(t) = -P(-t \log |T'|)'. \end{equation} Hence \[L(\alpha(t))' = -\frac{\alpha'(t) P(-t \log |T'|)}{ \alpha(t)^2}. \] Therefore, making use of equation \eqref{rela} we obtain that \begin{eqnarray*} \frac{d^2 L}{d \alpha^2} (\alpha(t)) & =& \frac{1}{\alpha'(t)} \cdot \left( \frac{L(\alpha(t))'}{\alpha'(t)} \right)' \\ & = & \frac{1}{\alpha'(t)} \cdot \left( \frac{-P(-t \log |T'|)}{\alpha(t)^2}\right)' \\ & =& \frac{1}{\alpha'(t)} \cdot \frac{2\alpha'(t) P(-t \log |T'|) - \alpha(t) P(-t \log |T'|)'}{\alpha(t)^3}\\ & =& \frac{1}{\alpha'(t)} \cdot \frac{2 \alpha'(t) P(-t \log |T'|) + \alpha(t)^2}{\alpha(t)^3} \cdot\end{eqnarray*} Since $$\alpha' (t) = -\sigma^2 (t) = -P(-t \log |T'|)'' < 0$$ for all $t \in \mathbb{R}$, we conclude that \begin{equation} \label{sindividir} \frac{d^2 L}{d \alpha^2} (\alpha(t)) \le 0 \iff -2 \sigma^2(t) P(-t \log |T'|) + \alpha(t)^2 \ge 0. \end{equation} The theorem follows since $P(-t \log |T'|)$ is positive if and only if $t < t_d$. \end{proof} Let us stress that even though Theorem \ref{B} is very general, it is not easy to deduce explicit conditions on $T$ in order to guarantee the absence or presence of inflection points (in contrast with Theorem \ref{A}).
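To get a concrete feel for the criterion in \eqref{sindividir}, the following numerical sketch (ours, not from the paper) evaluates $-2\sigma^2(t)P(-t\log|T'|)+\alpha(t)^2$ for linear cookie-cutters, using the pressure $P(-t\log|T'|)=\log\sum_i|m_i|^{-t}$ and central differences for $\alpha = -P'$ and $\sigma^2 = P''$; the slope values are arbitrary illustrative choices.

```python
import math

# Pressure of a linear cookie-cutter: P(-t log|T'|) = log(sum_i m_i^{-t}).
def P(t, slopes):
    return math.log(sum(m ** (-t) for m in slopes))

def criterion(t, slopes, h=1e-4):
    # -2 sigma^2(t) P + alpha(t)^2, with alpha = -P' and sigma^2 = P''
    # approximated by central differences; nonnegative iff L''(alpha(t)) <= 0.
    alpha = -(P(t + h, slopes) - P(t - h, slopes)) / (2 * h)
    sigma2 = (P(t + h, slopes) - 2 * P(t, slopes) + P(t - h, slopes)) / h ** 2
    return -2 * sigma2 * P(t, slopes) + alpha ** 2

ts = [i / 10.0 for i in range(-50, 51)]
concave = [math.e, math.e ** 10]       # slope ratio log b / log a = 10
non_concave = [math.e, math.e ** 45]   # ratio 45: spectrum has inflection points
print(all(criterion(t, concave) >= 0 for t in ts))
print(min(criterion(t, non_concave) for t in ts) < 0)
```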
\section{The linear case} \label{lc} Throughout this section we consider a linear cookie-cutter map with $n$ branches of slopes $m_1, \dots, m_n$. We record a straightforward computation in the next lemma. This trivial calculation allows us to ``effectively'' draw the graphs of the corresponding Lyapunov spectra (e.g. see Figure~\ref{3BranchFigure}). \begin{lema} \label{graph} Consider a linear cookie-cutter map $T$ with $n$ branches of slopes $m_1, \dots, m_n$. Then, \begin{eqnarray*} \alpha(t) &= & \frac{\sum_{i=1}^n |m_i|^{-t} \log |m_i|}{\sum_{i=1}^n |m_i| ^{-t}},\\ L(\alpha(t)) &=& \frac{\left(\sum_{i=1}^n |m_i|^{-t} \right) \log \left(\sum_{i=1}^n |m_i|^{-t} \right)}{\sum_{i=1}^n | m_i|^{-t} \log |m_i|} +t. \end{eqnarray*} \end{lema} Note that the graph of $L: [\min\{\log|m_i|\}, \max\{\log|m_i|\}] \to \mathbb{R}$ coincides with the graph of the curve $\mathbb{R} \ni t \mapsto (\alpha(t), L(\alpha(t)))$. \begin{figure} \caption{The Lyapunov spectra for maps with three linear branches of slopes $a =\exp(1) < b=\exp(2) < c$. At the left, a concave spectrum corresponding to $c= \exp(4)$. At the center, a non-concave spectrum corresponding to $c= \exp(8)$. At the right, another non-concave spectrum corresponding to $c=\exp(16)$. } \label{3BranchFigure} \end{figure} \begin{proof} The pressure function of a linear cookie-cutter map has a simple form (see, for instance, \cite[Example 1]{sa2}), \begin{equation*} P(-t \log |T'|) =\log \sum_{i=1}^n |m_i|^{-t} . \end{equation*} Hence, \begin{equation} \label{al} \alpha(t) = -P(-t \log |T'|)' = \frac{\sum_{i=1}^n |m_i|^{-t} \log |m_i|}{\sum_{i=1}^n |m_i|^{-t}}. \end{equation} The formula for $L(\alpha(t))$ follows from Equation \eqref{lt}. \end{proof} \begin{proof}[Proof of Corollary~\ref{C}] From the formula \eqref{al} for $\alpha(t)$, it follows that: $$\sigma^2(t) = \frac{\sum_{i=1}^n |m_i|^{-t} (\log |m_i|)^2}{\sum_{i=1}^n |m_i| ^{-t}} - \left(\frac{\sum_{i=1}^n |m_i|^{-t} \log |m_i|}{\sum_{i=1}^n |m_i| ^{-t} }\right)^2 . $$ We introduce the following notation: \begin{eqnarray*} \| \log|T'| \|^2_{2,t} & = & \frac{\sum_{i=1}^n |m_i|^{-t} (\log |m_i|)^2}{\sum_{i=1}^n |m_i| ^{-t}},\\ \| \log|T'| \|_{1,t} & = &\frac{\sum_{i=1}^n |m_i|^{-t} \log |m_i|}{\sum_{i=1}^n |m_i| ^{-t} }. \end{eqnarray*} Now from Equation~\eqref{sindividir}, we have that $L$ is concave if and only if, for all $t \in \mathbb{R}$, $$ 2 P(-t \log |T'|) \left( \| \log|T'| \|^2_{2,t} - \| \log|T'| \|^2_{1,t} \right) \le \| \log|T'| \|^2_{1,t}.$$ \end{proof} \end{document}
\begin{document} \title{Measuring Phonon Dephasing with \\ Ultrafast Pulses} \author{F. C. Waldermann} \author{Benjamin J. Sussman} \email{[email protected]} \altaffiliation[Also at ]{National Research Council of Canada, Ottawa, Ontario K1A 0R6, Canada} \author{J. Nunn} \author{V. O. Lorenz} \author{K. C. Lee} \author{K. Surmacz} \author{K. H. Lee} \author{D. Jaksch} \author{I. A. Walmsley} \affiliation{Clarendon Laboratory, University of Oxford, Parks Road, Oxford, OX1 3PU, UK} \author{P. Spizziri} \author{P. Olivero} \author{S. Prawer} \affiliation{Center for Quantum Computer Technology, School of Physics, The University of Melbourne, Parkville, Victoria 3010, Australia} \begin{abstract} A technique to measure the decoherence time of optical phonons in a solid is presented. Phonons are excited with a pair of time-delayed 80~fs, near-infrared pulses via spontaneous, transient Raman scattering. The fringe visibility of the resulting Stokes pulse pair, as a function of time delay, is used to measure the phonon dephasing time. The method avoids the need to use either narrowband or few-femtosecond pulses and is useful for low phonon excitations. The dephasing time of phonons created in bulk diamond is measured to be $\tau=$6.8~ps ($\Delta \nu=1.56$~cm$^{-1}$). \end{abstract} \maketitle \section{Introduction} Phonons are a fundamental excitation of solids that are responsible for numerous electric, thermal, and acoustic properties of matter. The lifetime of optical phonons plays an important role in determining these physical properties and has been the subject of extensive study. A technique to measure phonon dephasing times is presented here that utilizes ultrafast infrared laser pulses: Transient Coherent Ultrafast Phonon Spectroscopy (TCUPS) offers a number of conveniences for measuring phonon dephasing.
TCUPS utilizes commercially available ultrafast pulses (80~fs) and hence does not require narrowband or extremely short-pulse lasers to achieve high spectral or temporal resolution. As well, TCUPS is suitable for measurements in the single phonon excitation regime. The large sampling area and long sampling distance increase the generated Stokes power and avoid sample heating, which is a concern for low temperature studies. Diamonds are well known for their extraordinary physical properties \cite{Field1979} and, as well, offer interesting prospects for use in Quantum Information applications \cite{Wrachtrup2006,Childress2006, Neumann2008,Waldermann2007}. As such, diamond has been selected here as the material for demonstration of TCUPS. Two methods have previously been utilized to measure phonon lifetimes: high-resolution Raman spectroscopy and differential reflectivity measurements. The first is the traditional technique, where the optical phonon lifetime is obtained from high-resolution linewidth measurements of the first-order Raman peak, usually conducted using narrowband excitation lasers and high-resolution spectrometers \cite{lbpb2000}. The alternative technique, working in the time domain, can directly show the temporal evolution of the surface vibrations of solids \cite{ckk1990}. A femtosecond pump pulse is used to excite a phonon population. The reflectivity (or transmissivity) of a subsequent probe pulse displays a time dependence that follows the vibrational frequency and phonon population. This method was used to study the phonon decay in various solids \cite{cbkmddi1990}, their symmetry modes \cite{lyylkl2003}, and their interaction with charge carriers \cite{hkcp2003} and with other phonons \cite{bdk2000}. In these experiments, impulsive stimulated Raman scattering has been established as the coherent phonon generation mechanism \cite{sfigwn1985,lfgwfum1995}.
The time-domain experiments utilize the impulsive regime, \textit{i.e.},~laser pulse lengths much shorter than the phonon oscillation period (inverse phonon frequency). This requirement can be challenging for the application of the differential reflectivity technique to materials with high phonon energies, as laser systems with very short pulse lengths are required (\textit{e.g.}, for diamond, sub-10~fs pulses are needed to resolve a phonon frequency of 40~THz). TCUPS operates in the transient Raman scattering regime, \textit{i.e.}, pulse lengths much shorter than the phonon decoherence time, which is usually orders of magnitude longer than the phonon oscillation period (about 25~fs for diamond) \cite{ihk2007}. Stimulated Raman scattering, which implies large phonon excitations, is often employed in dephasing measurements in order to achieve good signal-to-noise ratios. High phonon population numbers, often referred to as \emph{hot phonons}, can be subject to an increased decay rate, as previously observed \cite{ylokl2002} for {GaN}. By contrast, TCUPS investigates the properties of a coherent phonon excitation by direct analysis of the Stokes spectra generated in the Raman process. The use of single photon detectors extends the sensitivity of the experiment to low phonon populations, including the single phonon level. \section{Experiment} \begin{figure} \caption{Experimental setup. An oscillator pulse is split into two time-delayed pulses and focused through the diamond sample. Not shown, a bandpass filter cleans the oscillator pulse before the diamond and a longpass filter rejects the pump and transmits the Stokes before the spectrometer.} \label{fig:bulk_raman_experiment} \end{figure} \begin{figure} \caption{Diamond energy level schematic.
Ground-state phonons are excited by the incident 788~nm pump, via a Raman transition, into the optical phonon mode, emitting an 880~nm Stokes pulse.} \label{fig:diamondEnergyLevels} \end{figure} The diamond was classified as a type Ib high pressure high temperature (HPHT) sample with a nitrogen impurity concentration of less than 100~ppm. The Stokes shift of diamond \cite{zaitsev2001} is $\unit[1332]{cm}^{-1}$ and the Raman gain coefficient for diamond has been reported \cite{llk1971} as $g = \unit[7.4 \times 10^{-3}]{cm/MW}$ (corrected for $\lambda = \unit[800]{nm}$). With pump pulse energies ranging over $\unit[1.1 \ldots 380]{pJ}$, the collinear Stokes emission is calculated as $0.004 \ldots 1.3$ photons per pulse, in agreement with the count rates achieved experimentally. The Raman scatter is thus in the spontaneous regime, as verified by a linear pump power dependence ranging over three orders of magnitude (see inset of Fig.~\ref{fig:powerdep}). Therefore, the experiment is performed far below the hot phonon regime. The experimental setup is depicted in fig.~\ref{fig:bulk_raman_experiment}. Phonons are excited, via a Raman transition, with a pair of time-delayed 80~fs, 788~nm pulses (fig.~\ref{fig:diamondEnergyLevels}) from a commercial Ti:Sapphire oscillator (Coherent Mira). The pulses are focussed into a $2 \times 2 \times 1$~mm diamond with faces polished along the (100) plane (Sumitomo). Stokes emission is detected collinearly. The pump laser is spectrally filtered using a bandpass filter to avoid extraneous light at the Stokes frequency, which might seed stimulated emission and decrease the signal-to-noise ratio when detecting single Stokes photons. The Stokes scatter is detected and spectrally analysed by means of a 30~cm spectrometer (Andor Shamrock 303i) and an electron multiplying charge-coupled device (iXon DV887DCS-BV), which is capable of statistical single photon counting.
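The numbers quoted here are easy to reproduce with a short calculation (our arithmetic, not the authors' analysis code): the Stokes wavelength follows from the 788~nm pump and the 1332~cm$^{-1}$ shift, and the same shift gives the room-temperature thermal phonon occupation discussed later.

```python
import math

pump_nm = 788.0
shift_cm = 1332.0                     # diamond Stokes shift in cm^-1
stokes_cm = 1e7 / pump_nm - shift_cm  # wavenumbers subtract for Stokes emission
stokes_nm = 1e7 / stokes_cm
print(round(stokes_nm))               # ~880 nm, as in the level scheme

kT_cm = 0.695 * 300                   # k_B T at 300 K in cm^-1 (k_B = 0.695 cm^-1/K)
p_thermal = 1.0 / (math.exp(shift_cm / kT_cm) - 1.0)
print(round(p_thermal, 4))            # ~0.0017, the thermal occupation
```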
The gratings were ruled at 150 lines/mm for data in figures~\ref{fig:two_pulse_spectra_principle}, and 1800 lines/mm for figures~\ref{fig:waterfall_etc} and~\ref{fig:powerdep}. \begin{figure} \caption{Example spectral interference for a delay $\tau = 0.39$~{ps}. Spectra of the broadband excitation laser (left) and the Stokes signal of diamond (right). The single pulse data in (a) and (c) show the pump laser spectrum and the corresponding broadband Raman spectrum. Spectral interference fringes appear for coherent pulse pairs in (b) and (d). } \label{fig:two_pulse_spectra_principle} \end{figure} The spectral interference from the pump pair and Stokes pair is shown in fig.~\ref{fig:two_pulse_spectra_principle}. The fringe spacing $\Delta \lambda$ is as expected for two time-delayed coherent pulses: $\Delta \lambda={\lambda^2}/{c \tau}$ (see also (\ref{eq:IntensityFringes}), below). For the excitation pair, $\lambda$ is the centre wavelength of the pump (fig.~\ref{fig:two_pulse_spectra_principle}b) and for the generated output Raman pair, $\lambda$ is the centre wavelength of the Stokes (fig.~\ref{fig:two_pulse_spectra_principle}d). The fringe spacing of the Raman output corresponds to the Stokes peak wavelength, confirming that the fringes are a measure of the coherence of the Raman process. Figure \ref{fig:waterfall_etc}a shows the fringe visibility reduction as a function of time delay. The fringe visibility, fitted by $V=\exp(-\Gamma|\tau|)$, is plotted in fig.~\ref{fig:waterfall_etc}b. The visibility has been renormalized using the laser visibility for each delay to account for beam walk-off and for the spectrometer resolution, which artificially reduces the visibility due to a sampling effect from the finite pixel size of the spectrometer CCD. \begin{figure} \caption{Decoherence measurement. (a) Stokes spectra for two pump pulses with various delays $\tau$, recorded with a 1800 lines/mm grating.
The decrease of spectral interference visibility of the Stokes signal is due to decoherence of the optical phonons created. The respective visibilities are plotted in (b), obtained by curve-fitting the spectra. Asterisks denote data points, the continuous line an exponential decay fit. Part (c) shows a high-resolution Raman spectrum for the same diamond (dots) with a Lorentzian fit (line). } \label{fig:waterfall_etc} \end{figure} \section{Theory} The observed interference visibility can be considered from two perspectives. In the first, the visibility decay arises due to fluctuations of the phase in the classical fields. Each input laser pulse excites optical phonons, via a Raman transition (fig.~\ref{fig:diamondEnergyLevels}), which in turn causes the emission of two Stokes pulses. That is, the Raman interaction maps the electric field of the two input pulses to two output Stokes-shifted pulses \begin{equation} E_{Stokes}=E_1(t)+E_2(t). \end{equation} The phase of the first Stokes pulse $E_1$ is determined spontaneously, but the phase (and amplitude at stimulated intensities) of the second pulse $E_2$ is influenced by the phonon coherence maintained in the system following the first pulse, so that the output field may also be rewritten as \begin{equation} E_{Stokes}(t)=E_1(t)+e^{i \theta}E_1(t-\tau) \end{equation} where $\tau$ is the time delay between the input pulses and $\theta$ is the spontaneously fluctuating phase difference between the pulses. The spectral intensity of the Stokes pulse pair \begin{equation} |E_{Stokes}(\omega)|^2=2|E_1(\omega)|^2 \left(1+\cos({\omega \tau +\theta}) \right ) \end{equation} contains interference fringes whose position depends on the relative phase $\theta$. Shot-to-shot, decoherence causes spontaneous fluctuations in $\theta$ and the fringe pattern loses visibility. At longer delays $\tau$, the fluctuations increase, eventually reducing the visibility of any integrated fringe pattern to zero.
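The shot-averaging argument above can be illustrated with a toy Monte-Carlo simulation (our sketch; the $\Gamma\tau$ values are arbitrary): drawing the spontaneous phase $\theta$ from a Lorentzian (Cauchy) distribution of scale $\Gamma|\tau|$ reproduces the exponential loss of fringe visibility of the averaged spectrum.

```python
import math, random

random.seed(0)

def visibility(scale, shots=200_000):
    # |<exp(i*theta)>| over many shots, with theta drawn from a Cauchy
    # distribution of the given scale (inverse-CDF sampling).
    c = s = 0.0
    for _ in range(shots):
        theta = scale * math.tan(math.pi * (random.random() - 0.5))
        c += math.cos(theta)
        s += math.sin(theta)
    return math.hypot(c, s) / shots

for gamma_tau in (0.2, 0.5, 1.0):
    # Monte-Carlo estimate vs. the exponential visibility factor exp(-Gamma*tau)
    print(visibility(gamma_tau), math.exp(-gamma_tau))
```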
Assuming a Lorentzian lineshape with width $\Gamma$ for the distribution of the phase shift (\textit{i.e.}, a Cauchy-distributed $\theta$ of scale $\Gamma|\tau|$), the shot-to-shot averaged spectral intensity is broadened to \begin{equation} \label{eq:IntensityFringes} \left<|E_{Stokes}(\omega)|^2 \right>_{shots}=2|E_1(\omega)|^2 \left(1+e^{-\Gamma|\tau|}\cos({\omega \tau}) \right ). \end{equation} The phase fluctuations cause a reduction of the fringe visibility. Alternatively, the fluctuating phase perspective can be connected with the second, quantum field perspective. This formalism can also be made applicable in the stimulated regime. The observed spectral intensity expectation value is proportional to the number of Stokes photons: \begin{equation} \left <|E_{Stokes}(\omega)|^2 \right>\propto \left < A^{\dagger} A\right > \end{equation} where the lowering operator $A(\omega)$ is a sum of the first $A_1$ and second $A_2$ pulse mode lowering operators: \begin{equation} \left<|E_{Stokes}(\omega)|^2 \right>\propto\left < A_1^{\dagger} A_1\right > +\left < A_2^{\dagger} A_2\right > +2 \Re \left < A_1^{\dagger} A_2\right >. \end{equation} The final, correlated term (\textit{cf.}\ the decay term in (\ref{eq:IntensityFringes})) measures the phonon coherence that remains in the system between pulses. During the evolution of the system, the initial phonon mode $B^\dagger(0)$ is `mixed' into the Stokes photon modes $A_i$ due to application of the laser field. The mixed-in term is then subsequently the source for spontaneous emission. The source term is the same for both pulses, but during the period between pulses the coherence is reduced due to crystal anharmonicity and impurities.
For the correlation, the relevant terms to lowest perturbative order are (see appendix for the equations of motion) \begin{equation} A_1\approx A_1(0) -i g \tau_{pump} B^\dagger(0) \end{equation} \begin{equation} A_2\approx A_1(0)e^{i \omega \tau} -i g\tau_{pump} B^\dagger(0)e^{-\Gamma \tau} \end{equation} from which the correlation term can be evaluated as \begin{equation} \left < A_1^{\dagger} A_2\right > \approx g^2\tau_{pump}^2 \left < B(0) B(0) ^\dagger \right > e^{-\Gamma \tau} = g^2 \tau_{pump}^2\left < N_B(0) +1 \right > e^{-\Gamma \tau} \end{equation} where $N_B(0)$ is the initial number of phonons, which in this case is the nearly zero thermal population. This result links the phonon decoherence rate $\Gamma$ with the fluctuating phase perspective linewidth $\Gamma$ from (\ref{eq:IntensityFringes}). Therefore, measuring the reduction of the fringe visibility is a direct measure of the phonon dephasing time. \section{Discussion} The TCUPS measurement indicates a phonon dephasing time of $1/\Gamma=6.8\pm0.9$~ps or a linewidth of $\Delta \nu=1.56$~cm$^{-1}$. The literature has reported a great deal of variation in linewidth measurements for diamond, varying from as low as 1.1~cm$^{-1}$ to as high as $4.75$~cm$^{-1}$ \cite{lbpb2000,llk1971,Chen1995}. Here, the TCUPS (fig.~\ref{fig:waterfall_etc}b, $\Delta \nu=1.56$~cm$^{-1}$) and conventional Raman spectrum (fig.~\ref{fig:waterfall_etc}c, $\Delta \nu=1.95$~cm$^{-1}$) show comparatively good agreement. The lifetime measured here is slightly shorter than the lifetime calculated theoretically by Debernardi \emph{et al.} \cite{dbm1995} for an ideal crystal ($\Delta \nu= \unit[1.01]{cm}^{-1}$ or $1/\Gamma=\unit[10.5]{ps}$), as the decay process is enhanced by lattice imperfections, vacancies, and the high concentration of substitutional nitrogen atoms, as is typical for this sort of diamond.
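The lifetime/linewidth conversions quoted in this paragraph follow from the relation $\Delta\nu = \Gamma/\pi$ derived in the appendix; a short check (our arithmetic, not the authors' code):

```python
import math

c_cm = 2.9979e10                      # speed of light in cm/s

def linewidth_cm(tau_ps):
    # Delta_nu = Gamma/pi with Gamma = 1/tau, expressed in cm^-1
    return 1.0 / (math.pi * c_cm * tau_ps * 1e-12)

print(round(linewidth_cm(6.8), 2))    # measured 6.8 ps   -> ~1.56 cm^-1
print(round(linewidth_cm(10.5), 2))   # theoretical 10.5 ps -> ~1.01 cm^-1
```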
The decay model considering acoustic phonon modes suggests that this deviation from the theoretical optimum is due to inhomogeneous broadening rather than additional pure dephasing. Future work will reveal whether ultra-pure diamond with very low crystal defect density can achieve a longer phonon lifetime. The creation of coherent phonons in diamond is heralded by the emitted Stokes photon, which could be employed for quantum optical experiments operating at room temperature, \textit{e.g.}, schemes that transfer optical entanglement to matter \cite{maku2004,crfpek2005}. \begin{figure} \caption{Power dependence. The power dependence of the Stokes interference visibility (at constant delay $\tau = \unit[0.51]{ps}$; the overall visibility is reduced due to imperfect alignment), showing that the experiment can be carried out at arbitrarily low phonon excitation levels (the green horizontal line is plotted to guide the eye). The inset shows the dependence of the Stokes pulse energy on the average pump power (single pump pulses). The linear power dependence shows that the scattering is in the spontaneous Raman regime. The fraction of pump power converted into collinear Stokes light was measured to be less than $10^{-8}$.} \label{fig:powerdep} \end{figure} The spectral interference pattern persists for low excitation levels, \textit{i.e.,}~a phonon excitation probability per mode of $p < 1$. (The case of $p \gg 1$ would correspond to a strongly stimulated regime, which has previously been studied in molecular hydrogen gas \cite{bswr1993}, although using spatial, not spectral, interference.) A constant visibility with excitation power can be seen at low excitation level (fig.~\ref{fig:powerdep}). TCUPS can therefore be employed to measure the decoherence properties of single optical phonons, overcoming the need for large phonon populations for lifetime measurements of phonon decoherence.
The excitation probability per mode was much smaller than 1, ranging over $p \approx 10^{-7} \ldots 10^{-5}$ due to the large number of phonon modes in the Brillouin zone for which Stokes scatter is detected ($\sim 10^5$). This level is in fact smaller than the thermal population level of the optical phonons at room temperature, given by $p_\mathrm{thermal} = [\exp(E_\mathrm{vib} / k_B T) - 1]^{-1} \approx 0.0017$. However, a small thermal population of the optical phonon modes does not influence this measurement method, as only the phonons deliberately excited by the pump pulses lead to Stokes scatter, and only Stokes light and its interference are detected. As phonons are governed by bosonic statistics, any finite background excitation level does not inhibit a further excitation. The linewidth increase due to phonon-phonon interaction is negligible at ambient temperatures in diamond due to the low population level \cite{dbm1995}. An increase in the Raman linewidth of diamond due to temperature has been reported \cite{lbpb2000} to begin at around $T \approx \unit[300]{K}$. At $T \approx \unit[800]{K}$, it is more than twice the zero-temperature linewidth. At room temperature, the phonon decay is only marginally enhanced by an acoustical phonon population. This insensitivity to a thermal background is in contrast to the differential reflectivity method, where thermal phonons lead to additional noise as both thermal and coherent phonons lead to a change of the material reflectivity. TCUPS is a convenient approach to determining the quantum coherence properties of optical phonons in Raman active solids. The measurement technique relies solely on spontaneous Raman scattering and is therefore useful down to the single-phonon level. In particular, TCUPS enables the measurement of the decoherence time of phonons, which is of paramount importance in many Quantum Information Processing schemes.
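The phonon $Q$-factor quoted in the conclusion follows directly from the measured lifetime and the 1332~cm$^{-1}$ ($\sim$40~THz) phonon frequency (our arithmetic):

```python
nu = 2.9979e10 * 1332.0        # optical phonon frequency in Hz (1332 cm^-1, ~40 THz)
gamma = 1.0 / 6.8e-12          # measured dephasing rate in s^-1 (1/6.8 ps)
print(round(nu / gamma))       # Q = nu/Gamma ~ 270
```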
Spectral interference of the Stokes light from pump pulse pairs is used to measure the Raman linewidth of the material, while the ultrafast pulses ensure a coherent excitation. The phonon lifetime of diamond was measured as $\unit[6.8]{ps}$. This lifetime corresponds to a phonon $Q$-factor of $Q = \nu /\Gamma \sim 270$. Although the short lifetime of the excitation makes it unsuitable for long-distance quantum repeaters, such a high $Q$ and the low thermal population at room temperature make it feasible for proof-of-principle demonstrations of typical quantum optics schemes, such as collective-excitation entanglement in the solid state. Acknowledgements. This work was supported by the {QIPIRC} and {EPSRC} (grant number GR/S\-82176\-/\-01), EU RTN project EMALI, and Toshiba Research Europe. \section*{Appendix: Phonon-photon equations of motion} Consider an incident pump laser that Raman scatters off a phonon field of the diamond to produce an output Stokes field. The equations of motion for the Stokes field $A(t)$ and the phonon field $B(t)$ are linked by the pump coupling $g$ via \cite{Raymer1990}: \begin{equation} \dot{A}(t)=-ig B^\dagger(t) \end{equation} and \begin{equation} \dot{B}(t)=-igA^\dagger(t)-\Gamma B(t) + F^\dagger(t). \end{equation} The dephasing rate $\Gamma$ is due to crystal anharmonicity and impurities. The Langevin operator $F$ has been added to maintain the normalization of $B$ in the presence of decay, allowing the phonon to decohere while preserving the commutation relation $[B,B^\dagger]=1$. The formal solutions are: \begin{equation} B=B(0)e^{-\Gamma t} +\int_0^t e^{-\Gamma (t-t^\prime)}\left [ -i g A^\dagger(t^\prime)+F^\dagger(t^\prime)\right ] dt^\prime \end{equation} and \begin{equation} A=A(0)-i\int_0^t g B^\dagger(t^\prime)dt^\prime. \end{equation} For brevity, the time argument has been dropped from the left-hand sides.
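As a quick consistency check (our addition, not in the original), differentiating the formal solution for $B$ recovers the equation of motion:
\begin{equation*}
\dot{B}(t)=-\Gamma B(0)e^{-\Gamma t}+\left[-igA^\dagger(t)+F^\dagger(t)\right]-\Gamma\int_0^t e^{-\Gamma (t-t^\prime)}\left[-igA^\dagger(t^\prime)+F^\dagger(t^\prime)\right]dt^\prime=-igA^\dagger(t)-\Gamma B(t)+F^\dagger(t),
\end{equation*}
since the boundary term of the Leibniz rule supplies $-igA^\dagger(t)+F^\dagger(t)$, while the remaining two $-\Gamma(\cdots)$ terms recombine into $-\Gamma B(t)$.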
In the weak ($g \tau_{pump} \ll 1$) and transient ($\Gamma \tau_{pump}\ll 1$) pump pulse limit, the incident laser leaves the phonon operator approximately at its initial value $B(0)$, and the phonon operator solution at lowest order is \begin{equation} B\approx B(0)e^{-\Gamma t} +\int_0^t e^{-\Gamma (t-t^\prime)} F^\dagger(t^\prime) dt^\prime. \end{equation} The Stokes field to first order is then \begin{equation} A \approx A(0) -i g \tau_{pump}B^\dagger(0)e^{-\Gamma t} -i \int_0^t \int_0^{t^\prime} g e^{-\Gamma (t^\prime-t^{\prime \prime})} F(t^{\prime \prime}) dt^{\prime \prime} dt^\prime, \end{equation} where the coupling $g$ in the second term has been taken as a constant step for the duration of the pump. The initial Stokes operator $A(0)$ annihilates the vacuum, but the solution for $A$ mixes in a component of the phonon raising operator $B^\dagger(0)$, which acts as a source for the spontaneous Raman scattering. The decoherence rate $\Gamma$ represents the dephasing of the phonon raising operator $B^\dagger$; the phonon number $N_B=B^\dagger B$ therefore decays at a rate $2\Gamma$. The corresponding spectral linewidth is $\Delta \nu={\Gamma}/{\pi}$. \end{document}
\begin{document} \title[Compactifications of ${\mathcal M}_{0, \lowercase{n}}$ associated with Alexander self dual complexes]{Compactifications of ${\mathcal M}_{0, \lowercase{n}}$ associated with Alexander self dual complexes: Chow ring, $\psi$-classes, and intersection numbers} \author{Ilia Nekrasov$^1$} \email{[email protected]} \address{$^1$Chebyshev laboratory, Department of Math. and Mech., St. Petersburg State University} \author{Gaiane Panina$^2$} \email{[email protected]} \address{$^2$PDMI RAS, St. Petersburg State University} \begin{abstract} An \textit{Alexander self-dual complex} gives rise to a compactification of $\mathcal{M}_{0,n}$, called an \textit{ASD compactification}, which is a smooth algebraic variety. ASD compactifications include (but are not exhausted by) the \textit{polygon spaces}, or the moduli spaces of flexible polygons. We present an explicit description of the Chow rings of ASD compactifications. We study the analogs of Kontsevich's tautological bundles, compute their Chern classes, compute top intersections of the Chern classes, and derive a recursion for the intersection numbers. \end{abstract} \maketitle \keywords{Alexander self dual complex, modular compactification, tautological ring, Chern class, Chow ring} \section{Introduction} The moduli space of $n$-punctured rational curves ${\mathcal M}_{0,n}$ and its various compactifications are classical objects, bringing together algebraic geometry, combinatorics, and topological robotics. Recently, D.~I.~Smyth \cite{Smyth13} classified all \textit{modular compactifications} of ${\mathcal M}_{0,n}$.
We make use of an interplay between different compactifications, and: \begin{itemize} \item describe the classification in terms of (what we call) \textit{preASD simplicial complexes}; \item describe the Chow rings of the compactifications arising from Alexander self-dual complexes (ASD compactifications); \item compute for ASD compactifications the associated Kontsevich \textit{$\psi$-classes}, their top monomials, and give a recurrence relation for the top monomials. \end{itemize} Oversimplifying, the main approach is as follows. Some (but not all) compactifications are the well-studied \textit{polygon spaces}, that is, moduli spaces of flexible polygons. A polygon space corresponds to a \textit{threshold} Alexander self-dual complex. Its cohomology ring (which equals the Chow ring) is known due to J.-C. Hausmann and A. Knutson \cite{HKn}, and A. Klyachko \cite{Kl}. The paper \cite{NP} gives a computation-friendly presentation of the ring. Due to Smyth \cite{Smyth13}, all the modular compactifications correspond to \textit{preASD complexes}, that is, to those complexes that are contained in an ASD complex. A removal of a facet of a preASD complex amounts to a blow up of the associated compactification. Each ASD compactification is achievable from a threshold ASD compactification by a sequence of blow ups and blow downs. Since the changes in the Chow ring are controllable, one can start with a polygon space, and then (by elementary steps) reach any of the ASD compactifications and describe its Chow ring (Theorem \ref{ASDChow}). M. Kontsevich's $\psi$-classes \cite{Kon} arise here in a standard way. Their computation is a mere modification of the Chern number count for the tangent bundle over $\mathbb{S}^2$ (a classical exercise in a topology course). The recursion (Theorem \ref{ThmRecursion}) and the top monomial counts (Theorem \ref{main_theorem}) follow.
It is worth mentioning that a disguised compactification via simple games, i.e., ASD complexes, is discussed from a combinatorial viewpoint in \cite{PanGal}. Now let us give a very brief overview of moduli compactifications of ${\mathcal M}_{0,n}$. A compactification by a smooth variety is very desirable since it makes intersection theory applicable. We also expect that (1) a compactification is \textit{modular}, that is, it is itself the moduli space of some curves and marked points lying on them, and (2) the complement of ${\mathcal M}_{0,n}$ (the ``boundary'') is a divisor. The space ${\mathcal M}_{0,n}$ is viewed as the configuration space of $n$ distinct marked points (``particles'') living in the complex projective line. The space ${\mathcal M}_{0,n}$ is non-compact due to forbidden collisions of the marked points. Therefore, each compactification should suggest an answer to the question: what happens when two (or more) marked points tend to each other? There are two possible choices: either one allows some (not too many!) points to coincide, or one applies a blow up. It is important that the blow ups amount to adding points that correspond to $n$-punctured nodal curves of arithmetic genus zero. The compactification obtained by blow ups only is the celebrated \textit{Deligne--Mumford compactification}. If one avoids blow ups and allows (some carefully chosen) collections of points to coincide, one gets an ASD compactification; among them are the \textit{polygon spaces}. Diverse combinations of these two options (in certain cases one allows points to collide, in other cases one applies a blow up) are also possible; the complete classification is due to \cite{Smyth13}. Now let us be more precise and look at the compactifications in more detail. \subsection{Deligne--Mumford compactification}\label{DMcom} \begin{definition}\cite{DM69} Let $B$ be a scheme.
A \textit{family of rational nodal curves with $n$ marked points} over $B$ is \begin{itemize} \item a flat proper morphism $\pi: C \rightarrow B$ whose geometric fibers $E_{\bullet}$ are nodal connected curves of arithmetic genus zero, and \item a set of sections $(s_{1}, \dots, s_{n})$ that do not intersect the nodal points of the geometric fibers. \newline In this language, the sections correspond to marked points. The above condition means that a nodal point of a curve may not be marked. \end{itemize} A family $(\pi:C \rightarrow B; s_{1}, \dots, s_{n})$ is \textit{stable} if the divisor $K_{\pi}+s_{1}+\dots+s_{n}$ is $\pi$-relatively ample. Let us rephrase this condition: a family $(\pi:C \rightarrow B; s_{1}, \dots, s_{n})$ is {stable} if each irreducible component of each geometric fiber has at least three \textit{special points} (nodal points and points of the sections $s_{i}$). \end{definition} \begin{theorem} \cite{DM69} (1) There exists a stack $\overline{\mathcal{M}}_{0,n}$, smooth and proper over $\mathbb{Z}$, representing the moduli functor of stable rational curves. The corresponding moduli scheme is a smooth projective variety over $\mathbb{Z}$. (2) The compactification equals the moduli space of stable curves of arithmetic genus zero with $n$ marked points. A stable curve is a curve of arithmetic genus zero with at worst nodal singularities and a finite automorphism group. This means that (i) every irreducible component has at least three marked or nodal points, and (ii) no marked point is a nodal point. \end{theorem} The Deligne--Mumford compactification has a natural stratification by stable trees with $n$ leaves. A \textit{stable tree with $n$ leaves} is a tree with exactly $n$ leaves enumerated by the elements of $[n]=\{1,...,n\}$ such that each vertex is at least trivalent.
Here and in the sequel, we use the following \textbf{notation}: by \textit{vertices} of a tree we mean all the vertices (in the usual graph-theoretical sense) excluding the leaves. A \textit{bold edge} is an edge connecting two vertices (see Figure \ref{Figtrees}). \begin{figure} \caption{Stable nodal curves (left) and the corresponding trees (right)} \label{Figtrees} \end{figure} The initial space ${\mathcal M}_{0,n}$ is the stratum corresponding to the one-vertex tree. Two-vertex trees (Fig.~\ref{Figtrees}(b)) are in a bijection with bipartitions of the set $[n]$: $T\sqcup T^{c} = [n]$ such that $|T|, |T^{c}| > 1$. We denote the closure of the corresponding stratum by $\D_{T}$. The latter are important since the (Poincaré duals of the) closures of the strata generate the Chow ring $\mathbf{A}^{*}(\overline{\mathcal{M}}_{0,n})$: \begin{theorem}\cite[Theorem 1]{Keel92} \label{KeelPresentation} The Chow ring $\mathbf{A}^{*}(\overline{\mathcal{M}}_{0,n})$ is isomorphic to the polynomial ring $$\mathbb{Z}\left[\D_{T}:\; T \subset [n]; |T|>1, |T^{c}| >1\right]$$ factorized by the relations: \begin{enumerate} \item $\D_{T} = \D_{T^{c}}$; \item $\D_{T}\D_{S} =0$ unless $S \subset T$ or $T \subset S$ or $S \subset T^{c}$ or $T \subset S^{c}$; \item For any distinct elements $i,j,k,l \in [n]$: $$\sum_{i,j \in T; k,l \not\in T} \D_{T} = \sum_{i,k \in T; j,l \not\in T} \D_{T} = \sum_{i,l \in T; j,k \not\in T} \D_{T}.$$ \end{enumerate} \end{theorem} \subsection{Weighted compactifications}\label{Weicom} The next breakthrough step was made by B. Hassett in \cite{Has03}. Define a \textit{weight data} as an element $\mathcal{A} = (a_{1}, \dots, a_{n}) \in \mathbb{R}^{n}$ such that \begin{itemize} \item $0 < a_{i} \leq 1$ for any $i \in [n]$, \item $a_{1}+ \dots +a_{n} > 2$. \end{itemize} \begin{definition} Let $B$ be a scheme.
A family of nodal curves with $n$ marked points $(\pi: C \rightarrow B; s_{1}, \dots, s_{n})$ is \textit{$\mathcal{A}$--stable} if \begin{enumerate} \item $K_{\pi}+ a_{1}s_{1}+\dots + a_{n}s_{n}$ is $\pi$-relatively ample, \item whenever the sections $\{s_{i}\}_{i \in I}$ intersect for some $I \subset [n]$, one has $\sum_{i \in I} a_{i} < 1$. \end{enumerate} The first condition can be rephrased as: each irreducible component of any geometric fiber has at least three {distinct} special points. \end{definition} \begin{theorem}\cite[Theorem 2.1]{Has03} For any weight data $\mathcal{A}$ there exists a connected Deligne--Mumford stack $\overline{\mathcal{M}}_{0, \mathcal{A}}$, smooth and proper over $\mathbb{Z}$, representing the moduli functor of $\mathcal{A}$--stable rational curves. The corresponding moduli scheme is a smooth projective variety over $\mathbb{Z}$. \end{theorem} The Deligne--Mumford compactification arises as the special case of the weight data $(1,\dots, 1)$. It is natural to ask: how much does a weighted compactification $\overline{\mathcal{M}}_{0, \mathcal{A}}$ depend on $\mathcal{A}$? Pursuing this question, let us consider the space of parameters: $${ \mathscr{A}}_{n} = \left\{ \mathcal{A} \in \mathbb{R}^{n}:\; 0< a_{i} \leq 1, \; \sum_{i} a_{i} >2 \right\} \subset \mathbb{R}^{n}.$$ The hyperplanes $\sum_{i \in I} a_{i} = 1$, $I \subset [n], |I| \geq 2$ (called \textit{walls}), cut the polytope ${ \mathscr{A}}_{n}$ into \textit{chambers}. The Hassett compactification depends only on the chamber \cite[Proposition 5.1]{Has03}. The combinatorial stratification of the space $\overline{\mathcal{M}}_{0, \mathcal{A}}$ looks similar to that of the Deligne--Mumford compactification, with the only difference that some of the marked points can now coincide \cite{Cey07}.
More precisely, a \textit{weighted tree} $(\gamma, I)$ is an ordered $k$-partition $I_{1}\sqcup \dots \sqcup I_{k} = [n]$ and a tree $\gamma$ with $k$ ordered leaves marked by the elements of the partition such that (1) $\sum_{j \in I_{m}} a_{j} \leq 1$ for any $m$, and (2) for each vertex, the number of emanating bold edges plus the total weight is greater than $2$. Open strata are enumerated by weighted trees: the stratum of the space $\overline{\mathcal{M}}_{0,\mathcal{A}}$ corresponding to a weighted tree $({\gamma}, I)$ consists of curves whose irreducible components form the tree $\gamma$ and whose collisions of sections form the partition $I$. The closure of this stratum is denoted by $\D_{({\gamma}, I)}$. \subsection{Polygon spaces as compactifications of ${\mathcal M}_{0,n}$}\label{FlePolcom} Assume that an $n$-tuple of positive real numbers $\mathcal{L} = (l_{1},...,l_{n})$ is fixed. We associate with it a \textit{flexible polygon}, that is, $n$ rigid bars of lengths $l_{i}$ connected in a cyclic chain by revolving joints. A \textit{configuration} of $\mathcal{L}$ is an $n$-tuple of points $(q_{1},...,q_{n}), \; q_i \in \mathbb{R}^3,$ with $|q_i q_{i+1}|=l_{i}$ for $i<n$ and $|q_{n} q_{1}|=l_{n}$. The following two definitions of the \textit{polygon space}, or the \textit{moduli space of the flexible polygon}, are equivalent: \begin{definition} \cite{HKn} \begin{itemize} \item[I.] The moduli space $M_{\mathcal{L}}$ is the set of all configurations of $\mathcal{L}$ modulo orientation preserving isometries of $\mathbb{R}^3$. \item[II.] Alternatively, the space $M_{ \mathcal{L}}$ equals the quotient of the space $$\left\{(u_1,...,u_n) \in (\mathbb{S}^2)^n : \sum_{i=1}^n l_iu_i=0\right\}$$ by the diagonal action of the group $\SO_3(\mathbb{R})$. \end{itemize} \end{definition} The second definition shows that the space $M_{\mathcal{L}}$ does not depend on the ordering of $\{l_1,...,l_n\}$; however, it does depend on the values of $l_i$.
Let us consider the \textit{parameter space} $$\left\{ (l_{1}, \dots, l_{n}) \in \mathbb{R}^{n}_{>0}:\ l_{i}< \sum_{j\neq i} l_{j} \text{ for }i=1, \dots, n\right\}.$$ This space is cut into open \textit{chambers} by \textit{walls}. The latter are hyperplanes with defining equations $$\sum_{i\in I} l_i = \sum_{j\notin I} l_j.$$ The diffeomorphism type of $M_{ \mathcal{L}}$ depends only on the chamber containing $\mathcal{L}$. For a point $\mathcal{L}$ lying strictly inside some chamber, the space $M_{\mathcal{L}}$ is a smooth $(2n-6)$-dimensional algebraic variety \cite{Kl}. In this case we say that the length vector is \textit{generic}. \begin{definition} For a generic length vector $\mathcal{L}$, we call a subset $J\subset [n]$ {\it long} if $$\sum_{i \in J} l_i > \sum_{i\notin J} l_i.$$ Otherwise, $J$ is called \textit{short}. We denote the set of all short sets by $SHORT(\mathcal{L})$. \end{definition} Each subset of a short set is also short; therefore, $SHORT(\mathcal{L})$ is a (threshold Alexander self-dual) simplicial complex. Rephrasing the above, the diffeomorphism type of $M_{ \mathcal{L}}$ is determined by the simplicial complex $SHORT(\mathcal{L})$. \section{ASD and preASD compactifications}\label{ASDpreASD} \subsection{ASD and preASD simplicial complexes} Simplicial complexes provide the necessary combinatorial framework for the description of the category of smooth modular compactifications of ${\mathcal M}_{0,n}$. A \textit{simplicial complex} (a \textit{complex}, for short) $K$ is a subset of $2^{[n]}$ with the hereditary property: $A \subset B\in K$ implies $A \in K$. Elements of $K$ are called \textit{faces} of the complex. Elements of $2^{[n]} \setminus K$ are called \textit{non-faces}. The maximal (by inclusion) faces are called \textit{facets}. We assume that the set of $0$-faces (the set of vertices) of a complex is $[n]$. The complex $2^{[n]}$ is denoted by $\Delta_{n-1}$. Its $k$-skeleton is denoted by $\Delta_{n-1}^{k}$.
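As an illustrative sketch (our addition, with an arbitrarily chosen generic length vector), one can enumerate $SHORT(\mathcal{L})$ by brute force and check both the hereditary property and the self-duality property: for every partition $[n]=A\sqcup A^{c}$, exactly one of $A$, $A^{c}$ is short.

```python
# Sketch (assumed example data): build SHORT(L) for a generic length vector
# and check that it is a simplicial complex and Alexander self-dual.
from itertools import combinations

L = (1.0, 1.1, 1.2, 1.3, 2.1)          # a generic length vector, n = 5
n = len(L)
total = sum(L)

def is_short(J):
    # J is short iff sum of its lengths is less than half of the perimeter
    return 2 * sum(L[i] for i in J) < total

short = [frozenset(J) for k in range(n + 1)
         for J in combinations(range(n), k) if is_short(J)]

# Hereditary property: every subset of a short set is short.
assert all(is_short(A - {i}) for A in short for i in A)

# Self-duality: exactly one of (A, complement of A) is short, for every A.
universe = frozenset(range(n))
assert all(is_short(A) != is_short(universe - A)
           for k in range(n + 1)
           for A in map(frozenset, combinations(range(n), k)))

print(len(short))  # -> 16, exactly half of the 32 subsets, as self-duality forces
```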
{In particular, $\Delta_{n-1}^{n-2}$ is the boundary complex of the simplex $\Delta_{n-1}$.} \begin{definition} For a complex $K \subset 2^{[n]}$, its \textit{Alexander dual} is the simplicial complex $$K^{\circ}:= \{A \subset [n]:\; A^{c} \not\in K\} = \{A^{c}:\; A \in 2^{[n]}\backslash K\}.$$ Here and in the sequel, $A^c=[n]\setminus A$ is the complement of $A$. A complex $K$ is \textit{Alexander self-dual (an ASD complex)} if $K^{\circ}=K$. A \textit{pre-Alexander self-dual (a preASD)} complex is a complex contained in some ASD complex. \end{definition} In other words, ASD complexes (preASD complexes, respectively) are characterized by the condition: for any partition $[n]=A\sqcup B$, {exactly one} (at most one, respectively) of $A$, $B$ is a face. Some ASD complexes are \textit{threshold complexes}: they equal $SHORT(\mathcal{L})$ for some generic length vector $\mathcal{L}$ (Section \ref{FlePolcom}). It is known that threshold ASD complexes exhaust all ASD complexes for $n \leq 5$. However, for bigger $n$ this is no longer true. Moreover, as $n \rightarrow \infty$, the fraction of ASD complexes that are threshold tends to zero. To produce new examples of ASD complexes, we use \textit{flips}: \begin{definition}\cite{PanGal} For an ASD complex $K$ and a facet $A \in K$ we build a new ASD complex $$\flip_{A}(K):= (K\backslash A) \cup A^{c}.$$ \end{definition} It is easy to see that \begin{proposition}\label{LemmaNotLonger2} (1) \cite{PanGal} The inverse of a flip is again a flip. (2) \cite{PanGal} Any two ASD complexes are connected by a sequence of flips. (3) For any ASD complex $K$ there exists a threshold ASD complex $K'$ that can be obtained from $K$ by a sequence of flips with some $A_{i}\subset [n]$ such that $|A_{i}| > 2, |A_{i}^c| > 2$. \end{proposition} \begin{proof} We prove (3). It is sufficient to show that for any ASD complex, there exists a threshold ASD complex with the same collection of $2$-element non-faces.
For this, let us observe that any two non-faces of an ASD complex necessarily intersect. Therefore, all possible collections of $2$-element non-faces of an ASD complex (up to renumbering) are: \begin{enumerate} \item the empty set; \item $(12),\ (23),\ (31)$; \item $(12),(13),\dots ,(1k)$. \end{enumerate} It is easy to find appropriate threshold ASD complexes for all these cases. \end{proof} ASD complexes appear in the game theory literature as ``simple games with constant sum'' (see \cite{vNMor}). One imagines $n$ players and all possible ways of partitioning them into two teams. The teams compete, and a team \textit{loses} if it belongs to $K$. In the language of flexible polygons, a short set is a losing team. \textbf{Contraction, or freezing operation.} Given an ASD complex $K$, let us build a new ASD complex $K_{(ij)}$ with $n-1$ vertices $\{1,...,\widehat{{i}},...,\widehat{{j}},...,n,(i,j)\}$ by \textit{contracting the edge} $\{i,j\}\in K$, or \textit{freezing} $i$ and $j$ together. The formal definition is: for $A\subset \{1,...,\widehat{{i}},...,\widehat{{j}},...,n\}$, $A\in K_{(ij)}$ iff $A\in K$, and $A\cup \{(ij)\}\in K_{(ij)}$ iff $A \cup \{i,j\}\in K$. The contraction $K_I$ of any other face $I\in K$ is defined analogously. Informally, in the language of simple games, contraction of an edge means making one player out of two. In the language of flexible polygons, ``freezing'' means producing one new edge out of two old ones (the lengths sum up). \begin{figure} \caption{Contraction of $\{1,2\}$ in a simplicial complex} \label{Figcont} \end{figure} \subsection{Smooth extremal assignment compactifications}\label{Smycom} Now we review the results of \cite{Smyth13} and \cite{Han15}, and indicate a relation with preASD complexes.
For a scheme $B$, consider the space $\mathcal{U}_{\,0, n}(B)$ of all flat, proper, finitely-presented morphisms $\pi: \;\mathcal{C} \rightarrow B$ with $n$ sections $\{s_{i}\}_{i \in [n]}$ and connected, reduced, one-dimensional geometric fibers of genus zero. Denote by $\mathcal{V}_{0, n}$ the irreducible component of $\mathcal{U}_{\,0,n}$ that contains ${\mathcal M}_{0,n}$. \begin{definition} A modular compactification of ${\mathcal M}_{0,n}$ is an open substack $\mathcal{X} \subset \mathcal{V}_{0,n}$ that is proper over $\mathbb{Z}$. A modular compactification is stable if every geometric point $(\pi:\;\mathcal{C} \rightarrow B; s_{1}, \dots, s_{n})$ is stable. We call a modular compactification \textit{smooth} if it is a smooth algebraic variety. \end{definition} \begin{definition} A smooth extremal assignment $\mathcal{Z}$ over $\overline{\mathcal{M}}_{0,n}$ is an assignment, to each stable tree $\gamma$ with $n$ leaves, of a subset of its vertices $$\gamma \mapsto \mathcal{Z}(\gamma) \subset Vert(\gamma)$$ such that: \begin{enumerate} \item for any tree $\gamma$, the assignment is a proper subset of the vertices: $\mathcal{Z}(\gamma) \subsetneqq Vert(\gamma)$; \item for any contraction $\gamma \rightsquigarrow \tau$ with $\{v_{i}\}_{i \in I} \subset Vert(\gamma)$ contracted to $v \in Vert(\tau)$, we have $v_{i}\in \mathcal{Z}(\gamma)$ for all $i \in I$ if and only if $v \in \mathcal{Z}(\tau)$; \item for any tree $\gamma$ and $v \in \mathcal{Z}(\gamma)$ there exist a two-vertex tree $\gamma'$ and $v'\in \mathcal{Z}(\gamma')$ such that $$\gamma' \rightsquigarrow \gamma \text{ and }v' \rightsquigarrow v.$$\end{enumerate}\end{definition} \begin{definition}\label{DefAss} Assume that $\mathcal{Z}$ is a smooth extremal assignment.
A curve $(\pi:\;\mathcal{C} \rightarrow B; s_{1}, \dots, s_{n})$ is $\mathcal{Z}$--stable if it can be obtained from some Deligne--Mumford stable curve $(\pi':\;\mathcal{C}' \rightarrow B'; s_{1}', \dots, s_{n}')$ by (maximally) blowing down the irreducible components of the curve $\mathcal{C}'$ corresponding to the vertices from the set $\mathcal{Z}(\gamma(\mathcal{C}'))$. \end{definition} A smooth assignment is completely defined by its values on two-vertex stable trees with $n$ leaves. The latter bijectively correspond to unordered partitions $A\sqcup A^{c} = [n]$ with $|A|, |A^c| > 1$: the sets $A$ and $A^{c}$ are affixed to the two vertices of the tree. The first condition of Definition \ref{DefAss} implies that no more than one of $A$ and $A^{c}$ is ``assigned''. One concludes that preASD complexes are in a bijection with smooth assignments. All possible modular compactifications of ${\mathcal M}_{0, n}$ are parametrized by smooth extremal assignments: \begin{theorem}\cite[Theorems 1.9 \& 1.21]{Smyth13} and \cite[Theorem 1.3]{Han15} \begin{itemize} \item For any smooth extremal assignment $\mathcal{Z}$ of ${\mathcal M}_{0,n}$, or equivalently, for any preASD complex $K$, there exists a stack $\overline{\mathcal{M}}_{0,\mathcal{Z}} = \overline{\mathcal{M}}_{0,K} \subset \mathcal{V}_{0,n}$ parameterizing all $\mathcal{Z}$--stable rational curves. \item For any smooth modular compactification $\mathcal{X} \subset \mathcal{V}_{0,n}$, there exists a smooth extremal assignment $\mathcal{Z}$ (a preASD complex $K$) such that $\mathcal{X} = \overline{\mathcal{M}}_{0,\mathcal{Z}} = \overline{\mathcal{M}}_{0,K}$. \end{itemize} \end{theorem} There are two different ways to look at a moduli space. In the present paper we look at the moduli space as a smooth algebraic variety equipped with $n$ sections (a \textit{fine moduli space}). The other way is to look at it as a bare smooth algebraic variety (a \textit{coarse moduli space}).
Different preASD complexes give rise to different fine moduli spaces. However, two different complexes can yield isomorphic coarse moduli spaces. Indeed, consider two preASD complexes $K$ and $K\cup\{ij\}$ (we abbreviate the latter as $K+(ij)$), assuming that $\{ij\}\notin K$. The corresponding algebraic varieties $\overline{\mathcal{M}}_{0, K}$ and $\overline{\mathcal{M}}_{0, K + (ij)}$ are isomorphic. A vivid explanation is: letting a pair of marked points collide is the same as adding a nodal curve with these two points sitting alone on an irreducible component. Indeed, this irreducible component would have exactly three special points, and $\PSL_{2}$ acts transitively on triples. \begin{theorem}\cite[Statements 7.6--7.10]{Han15}\label{objects_fixedlevel} The set of smooth modular compactifications of ${\mathcal M}_{0,n}$ is in a bijection with the objects of ${\bf preASD}_{n}/ \thicksim$, where $K\thicksim L$ whenever $K\setminus L$ and $L\setminus K$ consist of two-element sets only. \end{theorem} \begin{example}\label{example} PreASD complexes and the corresponding compactifications: \begin{enumerate} \item the 0-skeleton $\Delta_{n-1}^{0} = [n]$ of the simplex $\Delta_{n-1}$ corresponds to the Deligne--Mumford compactification; \item {the complex $\mathcal{P}_{n} : = {\bf pt} \sqcup \Delta_{n-2}^{n-3}$ (the disjoint union of a point and the boundary of a simplex $\Delta_{n-2}$)} is ASD. It corresponds to the Hassett weights $(1, \varepsilon, \dots, \varepsilon)$; this compactification is isomorphic to $\mathbb{P}^{n-3}$; \item the Losev--Manin compactification $\overline{\mathcal{M}}_{0, n}^{LM}$ \cite{LosMan00} corresponds to the weights $(1,1,\varepsilon, \dots, \varepsilon)$ and to the complex ${\bf pt}_{1} \sqcup {\bf pt}_{2} \sqcup \Delta_{n-3}$; \item the space $(\mathbb{P}^{1})^{n-3}$ corresponds to the weights $(1,1,1,\varepsilon, \dots, \varepsilon)$, and to the complex ${\bf pt}_{1} \sqcup {\bf pt}_{2} \sqcup {\bf pt}_{3} \sqcup \Delta_{n-4}$.
\end{enumerate} \end{example} \subsection{ASD compactifications via stable point configurations} ASD compactifications can be explained in a self-contained way, without referring to \cite{Smyth13}. Fix an ASD complex $K$ and consider configurations of $n$ (not necessarily all distinct) points $p_1,...,p_n$ in the projective line. A configuration is called {\em stable} if the index set of each collection of coinciding points belongs to $K$. That is, whenever $p_{i_1}=...=p_{i_k}$, we have $\{i_1,...,i_k\}\in K$. Denote by $STABLE(K)$ the space of stable configurations in the complex projective line. The group $\mathrm{PSL}_{2}(\mathbb{C})$ acts naturally on this space. Set $$\overline{\mathcal{M}}_{0, K}:=STABLE(K)/\mathrm{PSL}_{2}(\mathbb{C}).$$ If $K$ is a threshold complex, that is, arises from some flexible polygon $\mathcal{L}$, then the space $\overline{\mathcal{M}}_{0, K}$ is isomorphic to the polygon space $M_{\mathcal{L}}$ \cite{Kl}. Although the next theorem fits into the broader context of \cite{Smyth13}, we give an elementary proof here. \begin{theorem}\label{ThmSmoothComp} The space $\overline{\mathcal{M}}_{0, K}$ is a compact smooth variety with a natural complex structure. \end{theorem} \begin{proof}$\;$ \textbf{Smoothness.} For a distinct triple of indices $i,j,k \in [n]$, denote by $U_{i,j,k}$ the subset of $\overline{\mathcal{M}}_{0, K}$ defined by $p_i\neq p_j,$ $p_j \neq p_k$, and $p_i \neq p_k$. On each $U_{i,j,k}$, we get rid of the action of the group $\mathrm{PSL}_{2}(\mathbb{C})$ by setting $$U_{i,j,k}=\big\{(p_1,...,p_n)\in \overline{\mathcal{M}}_{0, K}:\; p_i=0,p_j=1,\text{ and }p_k=\infty \big\}.$$ Clearly, each of the charts $U_{i,j,k}$ is an open smooth manifold. Since the $U_{i,j,k}$ cover $\overline{\mathcal{M}}_{0, K}$, smoothness is proven. \textbf{Compactness.} Let us show that each sequence of $n$-tuples has a converging subsequence. Assume the contrary.
Without loss of generality, we may assume that the sequence $(p_1^i=0,p_2^i=1,p_3^i=\infty,p_4^i,...,p_n^i)_{i=1}^{\infty}$ has no converging subsequence. We may assume that for some set $I \notin K$, all $p^i_j$ with $j\in I$ converge to a common point. We say that we have a \textit{collapsing long set} $I$. This notion depends on the choice of a chart. We may assume that our collapsing long set has the minimal cardinality among all long sets that can collapse without a limit (that is, violate compactness) for this complex $K$. We may assume that $I=\{3,4,5,...,k\}$. This long set can contain at most one of the points $p_1,p_2,p_3$. We consider the case when it contains $p_3$; the other cases are treated similarly. That is, all the points $p^i_4,...,p^i_k$ tend to $\infty$. Denote by $C_i$ the circle of minimal radius embracing the points $p^i_3=\infty ,p^i_4,p^i_5,...,p^i_k$. The circle contains at least two of the points $p^i_4,...,p^i_k, p_3=\infty$. Apply a transform $\phi_i \in \mathrm{PSL}_{2}(\mathbb{C})$ which makes the radius of $C_i$ equal to $1$ and keeps at least two of the points $p^i_4,...,p^i_k, p_3=\infty$ away from each other. In this new chart the cardinality of the collapsing long set gets smaller, a contradiction to the minimality assumption. \end{proof} A natural question is: what if one takes a simplicial complex (not a self-dual one) and constructs the analogous quotient space? Some heuristics are: if the complex contains simultaneously some set $A$ and its complement $[n]\setminus A$, we have a stable tuple with a non-trivial stabilizer in $\mathrm{PSL}_{2}(\mathbb{C})$, so the quotient has a natural nontrivial orbifold structure. If a simplicial complex is strictly smaller than some ASD complex $K'$, then we get a proper open subset of $\overline{\mathcal{M}}_{0, K'}$, that is, we lose compactness. \subsection{Perfect cycles} Assume that we have an ASD complex $K$ and the associated compactification $\overline{\mathcal{M}}_{0, K}$.
Let $K_I$ be the contraction of some face $I\in K$. Since the variety $\overline{\mathcal{M}}_{0, K_I}$ naturally embeds in $ \overline{\mathcal{M}}_{0, K}$, the contraction procedure gives rise to a number of subvarieties of $\overline{\mathcal{M}}_{0, K}$. These varieties (1) ``lie on the boundary'' \footnote{That is, they do not intersect the initial space ${\mathcal M}_{0, n}$.} and (2) generate the Chow ring (Theorem \ref{ASDChow}). Let us look at them in more detail. An \textit{elementary perfect cycle} $(ij)=(ij)_{K}\subset \overline{\mathcal{M}}_{0, K}$ is defined as $$(ij)=(ij)_K=\{(p_1,...,p_n)\in \overline{\mathcal{M}}_{0, K}: p_i=p_j\}.$$ Let $[n]=A_1\sqcup...\sqcup A_k$ be an unordered partition of $[n]$. The \textit{perfect cycle} associated to the partition is \begin{align*} (A_1)\cdot...\cdot(A_k)&=(A_1)_{K}\cdot...\cdot(A_k)_{K} \\ &= \{(p_1,...,p_n)\in \overline{\mathcal{M}}_{0, K}: i,j \in A_m \Rightarrow p_i=p_j\}. \end{align*} Each perfect cycle is isomorphic to $\overline{\mathcal{M}}_{0, K'}$ for some complex $K'$ obtained from $K$ by a series of contractions. Singletons play no role, so we omit all one-element sets $A_i$ from our notation. Consequently, all the perfect cycles are labeled by partitions of some subset of $[n]$ such that all the $A_i$ have at least two elements. Note that for an arbitrary $A \subset [n]$, the complex $K_A$ might be ill-defined. This happens if $A \notin K$. In this case the associated perfect cycle $(A)$ is empty. For each perfect cycle there is an associated Poincaré dual element in the cohomology ring. We denote these dual elements by the same symbols as the perfect cycles. The following rules allow one to compute the cup-product of perfect cycles: \begin{proposition}\label{ComputRules}$\;$ \begin{enumerate} \item Let $A$ and $B$ be disjoint subsets of $[n]$. Then \begin{enumerate} \item $(A)\smile (B)=(A)\cdot (B).$ \item $(Ai)\smile (Bi)=(ABi).$ \end{enumerate} \item For $A\notin K$, we have $(A)=0$.
If one of $A_k$ is a non-face of $K$, then $(A_1)\cdot...\cdot(A_k)=0$. \item The four-term relation: $(ij)+(kl)=(jk)+(il)$ holds for any distinct $i,j,k,l$. \end{enumerate} \end{proposition} \begin{proof} In the cases (1) and (2) we have a transversal intersection of holomorphically embedded complex varieties. The item (3) will be proven in Theorem \ref{ASDChow}. \end{proof} Examples: $$(123)\cdot (345)=(12345);\ \ (12)\cdot (34) \cdot (23)=(1234).$$ A more sophisticated computation: $$(12)\cdot(12) = (12)\cdot \big( (13) + (24) - (34) \big) = (123) + (124) -(12)\cdot(34).$$ \begin{proposition}\label{PropPerf}A cup product of perfect cycles is a perfect cycle. \end{proposition} \textit{Proof.} Clearly, each perfect cycle is a product of elementary ones. Let us prove that the product of two perfect cycles is an integer linear combination of perfect cycles. We may assume that the second factor is an elementary perfect cycle, say, $(12)$. Let the first factor be $(A_1)\cdot(A_2)\cdot(A_3)\cdot\ldots\cdot(A_k)$. We need the following case analysis: \begin{enumerate} \item If at least one of $1,2$ does not belong to $\bigcup A_i$, the product is a perfect cycle by Proposition \ref{ComputRules}, (1). \item If $1$ and $2$ belong to different $A_i$, we use the following: for any perfect cycle $(A_1)\cdot(A_2)$ with $ i\in A_1, j\in A_2$, we have $$(A_1)\cdot(A_2)\smile (ij)=(A_1)\smile (ij)\smile (A_2)= (A_1j)\smile (A_2)= (A_1\cup A_2).$$ \item Finally, assume that $1,2 \in A_1$. Choose $i\notin A_1,\ \ j\notin A_1$ such that $i$ and $j$ do not belong to one and the same $A_k$. By Proposition \ref{ComputRules}, (3), $$(A_1)\cdot(A_2)\cdot(A_3)\cdot\dotso\cdot(A_k)\smile (12)=(A_1)\cdot(A_2)\cdot(A_3)\cdot\ldots\cdot(A_k)\smile \big((1i)+(2j)-(ij)\big).$$ After expanding the brackets, one reduces this to the above cases.\qed \end{enumerate} \begin{lemma}\label{LemmaTriple} For an ASD complex $K$, let $A\sqcup B\sqcup C=[n]$ be a partition of $[n]$ into three faces. 
Then $(A)\cdot(B)\cdot(C)=1$ in the graded component ${\bf A}^{n-3}_K$ of the Chow ring, which is canonically identified with $\mathbb{Z}$. \end{lemma} \begin{proof} Indeed, the cycles $(A)$, $(B)$, and $(C)$ intersect transversally at a unique point. \end{proof} Now we see that the set of perfect cycles is closed under cup-product. In the next section we show that the Chow ring equals the ring of perfect cycles. \subsection{Flips and blow ups} Let $K$ be an ASD complex, and let $A \subset [n]$ be one of its facets. \begin{lemma}\label{LemmaPerfCyclProj} The perfect cycle $(A)$ is isomorphic to $\overline{\mathcal{M}}_{0, \mathcal{P}_{|A^c|+1}} \cong \mathbb{P}^{|A^c|-2}$. \end{lemma} \begin{proof} Contraction of $A$ gives the complex ${\bf pt}\;\sqcup\;\Delta_{|A^c|}^{} = \mathcal{P}_{|A^c|+1}$ from Example \ref{example}, (2). \end{proof} \begin{lemma}\label{blowupVSflip} For an ASD complex $K$ and its facet $A$, there are two blow up morphisms $$\pi_{A}: \overline{\mathcal{M}}_{0,K\backslash A} \rightarrow \overline{\mathcal{M}}_{0, K} \text{ and }\pi_{A^c}: \overline{\mathcal{M}}_{0,K\backslash A} \rightarrow \overline{\mathcal{M}}_{0, \flip_{A} (K)}.$$ The centers of these blow ups are the perfect cycles $(A)$ and $(A^c)$ respectively. The exceptional divisors are equal: $\D_{A} = \D_{A^c}$. Both are isomorphic to $\overline{\mathcal{M}}_{0, \mathcal{P}_{|A|+1}} \times \overline{\mathcal{M}}_{0, \mathcal{P}_{|A^c|+1}} \cong \mathbb{P}^{|A|-2} \times \mathbb{P}^{|A^c|-2}$. The maps $\pi_{A}|_{\D_{A}}$ and $\pi_{A^c}|_{\D_{A^c}}$ are the projections to the second and the first components respectively. \end{lemma} The \textit{proof} literally repeats \cite[Corollary 3.5]{Has03}: $K$-stable but not $K_{A}$-stable (respectively, $K_{A^c}$-stable) curves have two irreducible components. The marked points with indices from the set $A$ lie on one of the irreducible components, and marked points with indices from the set $A^c$ lie on the other.
\qed \begin{corollary} For an ASD complex $K$ and its facet $A$, the algebraic varieties $\overline{\mathcal{M}}_{0, K}$ and $\overline{\mathcal{M}}_{0, K \backslash A}$ are HI--schemes, i.e., the canonical map from the Chow ring to the cohomology ring is an isomorphism. \end{corollary} \textit{Proof.} This follows from Lemma \ref{blowupVSflip} and Theorem~\ref{ChowCoh}.\qed \section{Chow rings of ASD compactifications} As already mentioned, many examples of ASD compactifications are polygon spaces, that is, come from a threshold ASD complex. Their Chow rings were computed in \cite{HKn}. A presentation of the ring more relevant to the present paper is given in \cite{NP}. We recall it below. \begin{definition} Let ${\bf A}^{*}_{univ} = {\bf A}^{*}_{univ, n}$ be the ring $$\mathbb{Z}\big[(I):\; I \subset [n], 2 \leq |I| \leq n-2 \big]$$ factorized by the relations: \begin{enumerate} \item ``\textit{The four-term relations}'': $(ij)+(kl)-(ik)-(jl)=0$ for any distinct $i,j,k,l \in [n]$. \item ``\textit{The multiplication rule}'': $(Ik)\cdot(Jk) = (IJk)$ for any disjoint $I, J \subset [n]$ not containing the element $k$. \end{enumerate} \end{definition} There is a natural graded ring homomorphism from ${\bf A}^*_{univ}$ to the Chow ring of an ASD-compactification that sends each of the generators $(I)$ to the corresponding perfect cycle. \begin{theorem}\cite{NP}\label{ChowForPolygons} The Chow ring (it equals the cohomology ring) of a polygon space equals the ring ${\bf A}^*_{univ}$ factorized by $$(I)=0 \ \ \ \hbox{whenever $I$ is a long set.}$$ \end{theorem} The following generalization of Theorem \ref{ChowForPolygons} is the first main result of the paper: \begin{theorem}\label{ASDChow} For an ASD complex $L$, the Chow ring ${\bf A}^{*}_{L}:= {\bf A}^{*}(\overline{\mathcal{M}}_{0, L})$ of the moduli space $\overline{\mathcal{M}}_{0, L}$ is isomorphic to the quotient of ${\bf A}^{*}_{univ}$ by the ideal $\mathcal{I}_{L}:= \big\langle (I):\;I\not\in L \big\rangle$.
\end{theorem} The idea of the proof is: the claim is true for threshold ASD complexes (i.e., for polygon spaces), and each ASD complex is obtainable from a threshold ASD complex by a sequence of flips. Therefore it suffices to analyze a single flip. Let us consider an ASD complex $K + B$ where $B\notin K$ is a facet in $K + B$. Set $A:=[n]\setminus B$, and consider the ASD complex $K + A= \flip_{B}(K+B)$. \begin{center} \begin{tikzcd}[column sep=small] & K \arrow[dl, hook] \arrow[dr, hook] & \\ K+ B \arrow[rr, dashrightarrow, "\flip_{B}"] & & K+A \end{tikzcd} \end{center} We are going to prove that if the claim of the theorem holds true for $K+B$, then it also holds for $K+A$. By Lemma \ref{blowupVSflip}, the space $\overline{\mathcal{M}}_{0, K}$ is the blow up of $\overline{\mathcal{M}}_{0, K+B}$ along the subvariety $(B)$ and the blow up of $\overline{\mathcal{M}}_{0, K+A}$ along the subvariety $(A)$. The diagram of the blow ups looks as follows: \begin{center} \begin{tikzcd} (B)\arrow[d, hook, "i_{B}"] & \D \arrow[hook]{d}{j_{A} = j_{B}} \arrow{r}{g_{A}}\arrow{l}{g_{B}} & (A) \arrow[hook]{d}{i_{A}} \\ \overline{\mathcal{M}}_{0, K+B} & \overline{\mathcal{M}}_{0, K} \arrow[twoheadrightarrow]{r}{\pi_{A}}\arrow[l, two heads, "\pi_{B}"] & \overline{\mathcal{M}}_{0, K+A } \end{tikzcd} \end{center} The induced diagram of Chow rings is: \begin{center} \begin{tikzcd} {\bf A}^{*}_{(B)} = {\bf A}^{*}_{\mathcal{P}_{|A|+1}} \arrow{r}{g_{B}^{*}} & {\bf A}^{*}_{\mathcal{P}_{|A|+1}}\times {\bf A}^{*}_{\mathcal{P}_{|B|+1}} & {\bf A}^{*}_{(A)} = {\bf A}^{*}_{\mathcal{P}_{|B|+1}} \arrow{l}{g_{A}^{*}} \\ {\bf A}^{*}_{K + B}\arrow[u, "i_{B}^{*}"]\arrow[r, hookrightarrow, "\pi_{B}^{*}" description] & {\bf A}^{*}_{K} \arrow[hook]{u}{j_{A}^{*} = j_{B}^{*}} & {\bf A}^{*}_{K + A} \arrow[l, hookrightarrow , "\pi_{A}^{*}" description]\arrow[u, "i_{A}^{*}" description] \end{tikzcd} \end{center} Let ${\bf A}^{*}_{K+ A, comb}$ be the quotient of ${\bf A}^{*}_{univ}$ by the ideal
$\mathcal{I}_{K+ A}$. We have a natural graded ring homomorphism $$\alpha = \alpha_{K+A}:{\bf A}^{*}_{K+ A, comb} \rightarrow {\bf A}^{*}_{K+ A}=:{\bf A}^{*}_{K+ A, alg},$$ where the map $\alpha$ sends each symbol $(I)$ to the associated perfect cycle. A remark on notation: as a general rule, we mark all objects related to ${\bf A}^{*}_{K+ A, comb}$ with the subscript \textit{``comb''}, and all objects related to ${\bf A}^{*}_{K+ A, alg}$ with \textit{``alg''}. We shall show that $\alpha$ is an isomorphism. The outline of the proof is: \begin{enumerate} \item The ring ${\bf A}^{*}_{K+A,alg}$ is generated by the first graded component. (The ring ${\bf A}^{*}_{K+A,comb}$ is also generated by the first graded component; this is clear by construction.) \item The restriction of $\alpha$ to the first graded components is a group isomorphism. Therefore, $\alpha$ is surjective. \item The map $\alpha$ is injective. \end{enumerate} \begin{lemma} \label{OneGen} The ring ${\bf A}^{*}_{K+A, alg}$ is generated by the group ${\bf A}^{1}_{K+A, alg}$. \end{lemma} \begin{proof} By Theorem \ref{NewChowRing}, $${\bf A}^{*}_{K} \cong \frac{ {\bf A}^{*}_{K+A, alg}[T]}{\big( f_{A}(T), T\cdot \ker (i_{A}^{*})\big)}.$$ Observe that: \begin{itemize} \item The zero graded components of ${\bf A}^{*}_{K+A, alg}, {\bf A}^{*}_{K+A, comb}$ equal $\mathbb{Z}$. \item The map $\pi_{A}^*:\;{\bf A}^{*}_{K+A, alg} \rightarrow {\bf A}^{*}_{K}$ is a homomorphism of graded rings. Moreover, the variable $T$ stands for the additive inverse of the class of the exceptional divisor $\D$; hence $T$ is a homogeneous element of degree one. \item Since $i_{A}^*$ is the multiplication by the cycle $(A)$, the kernel $\ker(i_{A}^{*})$ equals the annihilator $\Ann(A)_{alg}$ in the ring ${\bf A}^{*}_{K+A, alg}$.
Since the space $\overline{\mathcal{M}}_{0, K+A}$ is an HI-scheme, the degree of the ideal $\Ann(A)_{alg}$ is strictly positive. \item The polynomial $f_{A}(T)$ is a homogeneous element whose degree equals the degree $\deg_{T}(f_{A}(T))$. Besides, its coefficients are generated by elements from the first graded component since they all belong to the ring $\alpha ({\bf A}^{*}_{K+A, comb})$. \end{itemize} Denote by $\langle{\bf A}^1_{K+A, alg}\rangle$ the subalgebra of ${\bf A}_{K+A, alg}$ generated by the first graded component. First observe that the restriction of the map ${\bf A}^{*}_{K+A, alg}[T] \rightarrow {\bf A}^{*}_{K}$ to the first graded components is injective. Assume that the lemma is not true, and consider a homogeneous element $r$ of the ring ${\bf A}^{*}_{K+A, alg}$ with minimal degree among all elements not belonging to $\langle{\bf A}^{1}_{K+A, alg}\rangle$. There exist elements $b_{i}\in \langle{\bf A}^{1}_{K+A, alg}\rangle$ such that $b_{p}\cdot T^{p} + \dots + b_{1}\cdot T + b_{0} = r$ holds in the ring ${\bf A}^{*}_{K}$. The elements $b_{i}$ are necessarily homogeneous. Equivalently, $b_{p}\cdot T^{p} + \dots + b_{1}\cdot T + b_{0} - r$ belongs to the ideal $\big(f_A(T), T\cdot \Ann(A)_{alg}\big)$. Therefore $b_{p}\cdot T^{p} + \dots + b_{1}\cdot T + b_{0} - r=x\cdot f_A(T) + y \cdot T \cdot i$ for some $x, y \in {\bf A}^{*}_{K+A, alg}[T]$ and $i \in \Ann(A)_{alg}$. Setting $T=0$, we get $b_{0} - r = x_{0}\cdot f_{0}$. If the element $x_{0}$ belongs to $\langle{\bf A}^{1}_{K+A, alg}\rangle$, then we are done. Assume the contrary. Then from the minimality assumption we get the following inequalities: $\deg (b_0 - r) = \deg (x_{0}\cdot f_{0}) > \deg(x_{0}) \geq \deg(r)$, a contradiction. \end{proof} \begin{lemma} \label{FirstGroup} For any ASD complex $L$ the groups ${\bf A}^1_{L, comb}$ and ${\bf A}^1_{L, alg}$ are isomorphic. The isomorphism is induced by the homomorphism $\alpha_{L}$. \end{lemma} The \textit{proof} analyses how these groups change under flips.
We know that the claim is true for threshold complexes. Due to Lemma \ref{LemmaNotLonger2} we may consider flips only with $n-2>|A|>2$. Again, we suppose that the claim is true for the complex $K+B$ and will prove it for the complex $K+A$ with $A \sqcup B = [n]$. Under such flips ${\bf A}^{1}_{comb}$ does not change. The group ${\bf A}^{1}_{alg}$ does not change either. This becomes clear from the following two short exact sequences (see Theorem \ref{SESBlow},e): \begin{align*} 0 &\rightarrow {\bf A}_{n-4}\big(\overline{\mathcal{M}}_{0, \mathcal{P}_{|A|+1}}\big) \rightarrow {\bf A}_{n-4}\big( \overline{\mathcal{M}}_{0, \mathcal{P}_{|A|+1} } \times \overline{\mathcal{M}}_{0, \mathcal{P}_{|B|+1} } \big) \oplus {\bf A}_{n-4}\big( \overline{\mathcal{M}}_{0, K + B} \big) \rightarrow {\bf A}_{n-4}\big(\overline{\mathcal{M}}_{0, K}\big) \rightarrow 0,\\ 0 &\rightarrow {\bf A}_{n-4}\big(\overline{\mathcal{M}}_{0, \mathcal{P}_{|B|+1}}\big) \rightarrow {\bf A}_{n-4}\big( \overline{\mathcal{M}}_{0, \mathcal{P}_{|A|+1} } \times \overline{\mathcal{M}}_{0, \mathcal{P}_{|B|+1} } \big) \oplus {\bf A}_{n-4}\big( \overline{\mathcal{M}}_{0, K + A} \big) \rightarrow {\bf A}_{n-4}\big(\overline{\mathcal{M}}_{0, K}\big) \rightarrow 0. \end{align*} \qed Now we know that $\alpha:\;{\bf A}^{*}_{K+A, comb} \rightarrow {\bf A}^{*}_{K+A, alg}$ is surjective. \begin{proposition}\label{FirstUniv} Let $\Gamma$ be a graph with vertex set $\mathrm{Vert}(\Gamma)=[n]$ which is a tree with one extra edge. Assume that the unique cycle in $\Gamma$ has odd length. Then the set of perfect cycles $\{(ij)\}$ corresponding to the edges of $\Gamma$ is a basis of the (free abelian) group ${\bf A}^{1}_{univ}$. \end{proposition} \begin{proof} Any element of the group ${\bf A}^{1}_{univ}$ by definition has the form $\sum_{ij} a_{ij} \cdot (ij)$, where the sum ranges over all edges of the complete graph on the set $[n]$. The four-term relation can be viewed as an alternating relation for a four-edge cycle.
One concludes that an analogous alternating relation holds for each cycle of even length. For instance, the six-term relation $(ij)-(jk)+(kl)-(lm)+(mp)-(pi)=0$ follows by adding the two four-term relations $(ij)+(kl)=(jk)+(il)$ and $(il)+(mp)=(lm)+(pi)$; longer even cycles are treated by induction. Such a cycle may have repeating vertices. Therefore, if a graph has an even cycle, the perfect cycles associated to its edges are dependent. It remains to observe that the graph $\Gamma$ is a maximal graph without even cycles. \end{proof} By Theorem \ref{NewChowRing}, the Chow rings of the compactifications corresponding to complexes $K$, $K+A$, and $K+B$ are related in the following way: $${\bf A}^{*}_{K} \cong \frac{ {\bf A}^{*}_{K+A, alg}[T]}{\big( f_{A}(T), T\cdot \ker (i_{A}^{*})\big)} \cong \frac{ {\bf A}^{*}_{K+B}[S]}{\big( f_{B}(S), S\cdot \ker (i_{B}^{*})\big)}.$$ Now we need an explicit description of the polynomials $f_{A}$ and $f_{B}$. Assuming that $A=\{x, x_{2}, \dots, x_{a}\}$ and $B=\{y, y_{2}, \dots, y_{b}\}$, where $|A|= a$ and $|B| = b$, take the generators \begin{align*} &\big\{(xy); (xy_{i}), i \in \{2, \dots, b\}; (x_{j}y), j \in \{2, \dots, a\}; (yy_{2}) \big\} \text{ for } {\bf A}^{*}_{K+ B}\text{, and }\\ &\big\{(xy); (xy_{i}), i \in \{2, \dots, b\}; (x_{j}y), j \in \{2, \dots, a\}; (xx_{2}) \big\} \text{ for } {\bf A}^{*}_{K+ A, comb}. \end{align*} Denote by ${\mathcal A}$ the subring of the Chow rings ${\bf A}^{*}_{K+ A, comb}$ and ${\bf A}^{*}_{K+ B}$ generated by the elements $\{ (xy); (xy_{i}), i \in \{2, \dots, b-1\}; (x_{j}y), j \in \{2, \dots, a-1\} \}$. Then ${\bf A}^{*}_{K+ A, comb}$ is isomorphic to ${\mathcal A}[I]/F_{B}(I)$ where $I:=(xx_{2})$ and $F_{B}(I)$ is an incarnation of the expression $(B) = (yy_{2})\cdot\dots\cdot(yy_{b})=0$ via the generators. Analogously, ${\bf A}^{*}_{K+ B, comb} \cong {\mathcal A}[J]/F_{A}(J)$ with $J:= (yy_{2})$. {The cycles $(A)$ and $(B)$ are the complete intersections of the divisors $(xx_{2}), (xx_{3}), \dots, (xx_{a-1})$ and $(yy_{2}), (yy_{3}), \dots, (yy_{b-1})$ respectively.
So the Chern polynomials are:} $$f_{A}(T) = \big(T + (xx_{2})\big)\cdot\dots\cdot\big(T + (xx_{a-1})\big) \text{ and }f_{B}(S) = \big(S + (yy_{2})\big)\cdot\dots\cdot\big(S + (yy_{b-1})\big).$$ Moreover, the new variables $T$ and $S$ correspond to one and the same exceptional divisor $\D_{A}=\D_{B}$. The relation between the polynomials $f_{\bullet}$ and $F_{\bullet}$ is clarified in the following lemma. \begin{lemma}\label{PullingUp} The Chow class of the image of a divisor $(ab)_{K+A}$, $a,b \in [n]$, under the pullback $\pi_{A}^{*}$ equals \begin{enumerate} \item $(ab)_{K}$ for $a \in A, b\in B$, or vice versa; \item ${\bf bl}_{(ab)(A)}\big((ab)_{K+A}\big)$ for $\{a,b\} \subset B$; \item ${\bf bl}_{(A)}\big((ab)_{K+A}\big) + \D_{A}$ for $\{a,b\} \subset A$. \end{enumerate} \end{lemma} \begin{proof} In case (1), the cycle $(ab)_{K+A}$ does not intersect $(A)_{K+A}$, and its proper transform is by definition $\textbf{bl}_{(ab)\cap(A)}\big((ab)_{K+A}\big)$. Then (1) and (2) follow directly from Theorem \ref{CorCodim},(2) by dimension counts. The claim (3) also follows from the blow-up formula Theorem \ref{CorCodim}: $$\pi_{A}^{*} (ab) = \textbf{bl}_{(ab)\cap (A)}\big((ab)_{K+A}\big) + j_{A, *}\big( g^{*}_{A} [(ab)\cap(A)] \cdot s(N_{\D_{A}}\overline{\mathcal{M}}_{0, K}) \big)_{n-4},$$ where $N_{\D_{A}}\overline{\mathcal{M}}_{0, K}$ is the normal bundle and $s(\;)$ is the total Segre class. This follows, by the functoriality of the total Chern and Segre classes and the identity $s(U)\cdot c(U) = 1$, from the following equalities: \begin{align*} s((ab)\cap(A)) &= [(ab)\cap(A)] \cdot s(N_{(ab)\cap(A)}(ab)_{K+A}) = [(ab)\cap(A)]\cdot s\left(N_{(A)}\overline{\mathcal{M}}_{0, K+A}\right), \\ c\left( \frac{g^{*}_{A}N_{(A)}\overline{\mathcal{M}}_{0,K+A}}{N_{\D_{A}}\overline{\mathcal{M}}_{0,K}} \right) &\cdot g^{*}_{A} [(ab)\cap(A)]\cdot g^{*}_{A}s\left(N_{(A)}\overline{\mathcal{M}}_{0, K+A}\right) = g^{*}_{A} [(ab)\cap(A)] \cdot s(N_{\D_{A}}\overline{\mathcal{M}}_{0, K}).
\end{align*} Finally, we note that $g^{*}_{A} [(ab)\cap(A)] = \D_{A}$. \end{proof} From Lemma \ref{PullingUp}, we have the equality $$f_{A}(T) = \pi_{A}^{*}\big( (xx_{2})\cdot\dots\cdot(xx_{a-1}) \big) = \pi_{A}^{*}(A)\text{ and }f_{B}(S) = \pi_{B}^{*}(B).$$ \begin{lemma}\label{kernels}$\;$ \begin{enumerate} \item The ideal $\Ann(A)_{comb}$ is generated by its first graded component. \item More precisely, the generators of the ideal $\Ann(A)_{comb}$ are \begin{enumerate} \item the elements of type $(ab)$ with $a\in A, b\in B$, and \item the elements of type $$(a_1a_2)-(b_1b_2),$$ where $a_1,a_2 \in A$; $b_1,b_2 \in B=A^c$. \end{enumerate} \item The annihilators $\Ann(A)_{comb}$ and $\Ann(B)$ are canonically isomorphic. \end{enumerate} \end{lemma} \begin{proof} First observe that the kernel $\ker(i_{A}^*)$ equals the annihilator of the cycle $(A)$. Let $\kappa$ be the ideal generated by $\ker (i_{A}^{*})\cap {\bf A}^1$. Without loss of generality, we may assume that $A=\{1,2,...,m\}$. Observe that: \begin{enumerate} \item If $I\subset [n]$ has a nonempty intersection with both $A$ and $A^c$, then $(A)(I)=0$. In this case, $(I)$ can be expressed as $(ab)(I')$, where $a\in A,\ b \in A^c$. \item \begin{enumerate} \item[(i)] If $I\subset A$, then $(A)\smile (I)= (A)(m+1...m+|I|).$ \item[(ii)] In this case, the element $(I)-(m+1,...,m+|I|)$ is in $\kappa$. \end{enumerate} Let us demonstrate this by giving an example with $A=\{1,2,3,4\}$, $(I)=(12)$: \newline $(1234)\smile (12)=(A)\smile ((15)+ (26)-(56))= 0+0-(1234)(56)$. We conclude that $(12)-(56)\in \kappa$. Let us show that $(123)-(567)\in \kappa$.
Indeed, $(123)-(567)=(12) \smile (23)-(567)\in \kappa \Leftrightarrow (56)\smile(23)-(567) \in \kappa \Leftrightarrow (56)\smile ((23)-(67))\in \kappa$. Since $(23)-(67)\in \kappa$, the claim is proven. \item If $I\subset A^c$, then $(A)\smile (I)= (A)(m+1,...,m+|I|).$ The element $(I)-(m+1,...,m+|I|)$ is in $\kappa$. This follows from (1) and (2). \end{enumerate} Now let us prove the lemma. Assume $x\smile (A)=0$. Let $x=\sum_i a_i (I_1^i)...(I^i_{k_i}).$ We may assume that $x$ is a homogeneous element. Modulo $\kappa$, each summand $(I_1)...(I_{k_i})$ can be reduced to some $ (m+1...m+r_1)(m+r_1+1,...,m+r_2)...(m+r_{k_i}+1...m')$. Modulo $\kappa$, $ (m+1...m+r_1)(m+r_1+1...m+r_2)...(m+r_{k_i}+1,...,m')$ can be reduced to a one-bracket element $ (m+1...m+r_1m+r_1+1...m+r_2...m+r_{k_i}+1,...,m'')$. Indeed, for two brackets we have: $$ (m+1...m+r_1)(m+r_1+1,...,m+r_2) \equiv (m+1...m+r_1)(m+r_1,...,m+r_2-1) \equiv (m+1...m+r_2-1)\;\;(\mathrm{mod} \; \kappa).$$ For a bigger number of brackets, the statement follows by induction. We conclude that a homogeneous $x\in \ker(i_{A}^*)$ modulo $\kappa$ reduces to some $a(m+1...m+m')$, where $a \in \mathbb{Z}$. Then $a=0$. Indeed, $(A)(m+1...m+m')\neq 0$ since by Lemma \ref{LemmaTriple} $(A)(m+1...m+m')(m+m'...n)\neq 0$. \end{proof} \begin{remark} Via the four-term relation any element from b) can be expressed as a linear combination of elements from a). So only a)--elements are sufficient to generate the annihilators. 
Actually, $${\mathcal A} \cong \mathbb{Z} \oplus \Ann(A)_{comb}.$$ \end{remark} We arrive at the following commutative diagram of graded rings: \begin{center} \begin{tikzcd}[column sep=small] & {\bf A}^{*}_{K}\cong \widetilde{{\bf A}^{*}_{K}}/\Ann(B) \cong \widetilde{{\bf A}^{*}_{K}}/\Ann(A)_{comb} & \\ & \widetilde{{\bf A}^{*}_{K}}:= {\bf A}^{*}_{K+A, comb} [T]/f_{(A)}(T) \cong {\bf A}^{*}_{K+B} [S]/f_{(B)}(S) \arrow[u] & \\ & &\\ {\bf A}^{*}_{K+B} \cong {\mathcal A}[J]/F_{A}(J)\arrow[uur, "f_{B} = \pi_{B}^{*} F_{B}"] & & {\bf A}^{*}_{K+A, comb}\cong {\mathcal A}[I]/F_{B}(I) \arrow[uul, "f_{A} = \pi_{A}^{*} F_{A}"] \\ & {\mathcal A} \arrow[ul, "F_{A}"] \arrow[ur, "F_{B}"] & \end{tikzcd} \end{center} Therefore, the following diagram commutes: \begin{center} \begin{tikzcd}[column sep=small] 0\arrow[r] & \Ann(A)_{comb} \arrow[r] \arrow[d]& {\bf A}^{*}_{K+A, comb} [f_{(A)}] \arrow[r]\arrow[d, two heads, "\alpha"] & \sfrac{{\bf A}^{*}_{K+A, comb}[f_{(A)}]}{\Ann(A)_{comb}} \arrow[r] \arrow[d, "\cong"] & 0\\ 0 \arrow[r]& \Ann(A)_{alg} \arrow[r] & {\bf A}^{*}_{K+A, alg} [f_{(A)}]\arrow[r] & \sfrac{{\bf A}^{*}_{K+A, alg} [f_{(A)}]}{\Ann(A)_{alg}} \arrow[r] & 0 \end{tikzcd} \end{center} Here $R[g]$ denotes the extension $R[t]/\big(g(t)\big)$ of a ring $R$ by a polynomial $g(t)$. All three vertical maps are induced by the map $\alpha$; the last vertical map is an isomorphism since both rings are isomorphic to ${\bf A}^{*}_{K}$. The ideals $\Ann(A)_{comb}$ and $\Ann(A)_{alg}$ coincide, so the homomorphism $$\alpha:\;{\bf A}^{*}_{K+A, comb} [f_{(A)}] \rightarrow {\bf A}^{*}_{K+A, alg} [f_{(A)}]$$ is injective, and the theorem is proven. \qed \iffalse \begin{lemma} The map $\alpha:\; {\bf A}^{*}_{K+A, comb} \rightarrow {\bf A}^{*}_{K+A, alg}$ is an isomorphism. \end{lemma} The surjective homomorphism $\alpha:\; {\bf A}^{*}_{K+A, comb} \rightarrow {\bf A}^{*}_{K+A, alg}$ induces an isomorphism ${\bf A}^{*}_{K+A, comb} [f_{(A)}] \rightarrow {\bf A}^{*}_{K+A, alg} [f_{(A)}]$.
We get the result from this applying the Snake Lemma . \fi \section{Poincaré polynomials of ASD compactifications} \begin{theorem}\label{PoiASD} The Poincaré polynomial ${\mathscr P}_{q}\big(\overline{\mathcal{M}}_{0, L}\big)$ for an ASD complex $L$ equals $${\mathscr P}_{q}\big(\overline{\mathcal{M}}_{0, L}\big) = \frac{1}{q(q-1)} \left( (1+q)^{n-1} - \sum\limits_{I \in L} q^{|I|} \right).$$ \end{theorem} \begin{proof} This theorem is proven by Klyachko \cite[Theorem 2.2.4]{Kl} for polygon spaces, that is, for compactifications coming from a threshold ASD complex. Assume that the claim holds for the complex $K + A$ (for instance, $K + A$ is a threshold ASD complex). For the blow up of the space $\overline{\mathcal{M}}_{0, K+B}$ along the subvariety $(B)$ we have an exact sequence of Chow groups $$0 \rightarrow {\bf A}_{p}\big(\overline{\mathcal{M}}_{0, \mathcal{P}_{|A|+1}}\big) \rightarrow {\bf A}_{p}\big( \overline{\mathcal{M}}_{0, \mathcal{P}_{|A|+1} } \times \overline{\mathcal{M}}_{0, \mathcal{P}_{|B|+1} } \big) \oplus {\bf A}_{p}\big( \overline{\mathcal{M}}_{0, K + B} \big) \rightarrow {\bf A}_{p}\big(\overline{\mathcal{M}}_{0, K}\big) \rightarrow 0,$$ where, as before, $A \sqcup B = [n]$ and $p$ is a natural number. We get the equality $${\mathscr P}_{q}(K) = {\mathscr P}_{q}(K+B) + {\mathscr P}_{q}(\mathcal{P}_{|A|+1})\cdot{\mathscr P}_{q}(\mathcal{P}_{|B|+1})- {\mathscr P}_{q}(\mathcal{P}_{|A|+1}).$$ Comparing this with the analogous equality for the blow up along $(A)$, we obtain the recurrence for the Poincaré polynomials: \begin{align*} {\mathscr P}_{q}(K + B) &= {\mathscr P}_{q}(K + A) + {\mathscr P}_{q}(\mathcal{P}_{|A|+1}) - {\mathscr P}_{q}(\mathcal{P}_{|B|+1}) =\\ &=\frac{1}{q(q-1)}\left((1+q)^{n-1} - \sum\limits_{I\in K + A} q^{|I|} + q^{|A|} - q - q^{|B|} +q \right) = \\ &=\frac{1}{q(q-1)}\left((1+q)^{n-1} - \sum\limits_{I\in K} q^{|I|} -q^{|A|} + q^{|A|} - q - q^{|B|} +q \right) = \\ &= \frac{1}{q(q-1)} \left( (1+q)^{n-1} - \sum\limits_{I \in K + B} q^{|I|} \right).
\end{align*} We have used the following: if $U$ is a facet of an ASD complex, then there is an isomorphism $\overline{\mathcal{M}}_{0, \mathcal{P}_{|U|+1}} \cong \mathbb{P}^{|U| - 2}$ (see Example \ref{example}(2)); the Poincaré polynomial of the projective space $\mathbb{P}^{|U| - 2}$ equals $\frac{q^{|U|} - q}{q(q-1)}$. \end{proof} \section{The tautological line bundles over $\overline{\mathcal{M}}_{0,K}$ and the $\psi$--classes}\label{psi} The \textit{tautological line bundles} $L_i, \ i=1,...,n$ were introduced by M. Kontsevich \cite{Kon} for the Deligne-Mumford compactification. The first Chern classes of $L_i$ are called the $\psi$-\textit{classes}. We now mimic Kontsevich's original definition for ASD compactifications. Let us fix an ASD complex $K$ and the corresponding compactification $\overline{\mathcal{M}}_{0,K}$. \begin{definition} The line bundle $E_i=E_{i}(K)$ is the complex line bundle over the space $\overline{\mathcal{M}}_{0,K}$ whose fiber over a point $(u_1,...,u_n)\in (\mathbb{P}^1)^n$ is the tangent line\footnote{In Kontsevich's original definition, the fiber over a point is the cotangent line, whereas we have the tangent line. This replacement does not create much difference.} to the projective line $\mathbb{P}^1$ at the point $u_i$. The first Chern class of $E_{i}$ is called the $\psi$-class and is denoted by $\psi_i$. \end{definition} \begin{proposition} \label{Ch2} \begin{enumerate} \item For any distinct $i, j, k\in [n]$ we have $$\psi_i= (ij) +(ik)-(jk).$$ \item The four-term relation holds true: $(ij)+(kl)=(ik)+(jl)$ for any distinct $i,j,k,l \in [n].$ \end{enumerate} \end{proposition} \begin{proof} (1) Take a stable configuration $(x_1,...,x_n)\in \overline{\mathcal{M}}_{0,K}$. Take the circle passing through $x_i,x_j$, and $x_k$. It is oriented by the order $ijk$. Take the vector lying in the tangent complex line at $x_i$ which is tangent to the circle and points in the direction of $x_j$.
It gives rise to a section of $E_{i}$ which is well-defined whenever the points $x_i,x_j$, and $x_k$ are distinct. Therefore, $\psi_i = A(ij) +B(ik)+C(jk)$ for some integers $A,B,C$. A detailed analysis specifies their values. Now (2) follows since the Chern class $\psi_i $ does not depend on the choice of $j$ and $k$. \end{proof} Let us denote by $|d_{1}, \dots, d_{n}|_K$ the intersection number $\langle \psi_1^{ d_1} ... \psi_n^{ d_n}\rangle_K= \psi_1^{\smile d_1} \smile ... \smile \psi_n^{\smile d_n}$ related to the ASD complex $K$. \begin{theorem}\label{ThmRecursion} Let $\overline{\mathcal{M}}_{0,K}$ be an ASD compactification. A recursion for the intersection numbers is \begin{align*} |d_{1}, \dots, d_{n}|_{K} = &|d_{1}, \dots, d_{i}+d_{j}-1, \dots, \hat{d_{j}}, \dots, d_{n}|_{K_{(ij)}} + |d_{1}, \dots, d_{i}+d_{k}-1, \dots, \hat{d_{k}}, \dots, d_{n}|_{K_{(ik)}} \\ &-|d_{1}, \dots, d_{i}-1 ,\dots, d_{j}+d_{k}, \dots, \hat{d_{j}}, \hat{d_{k}},\dots, d_{n}|_{K_{(jk)}}, \end{align*} where $i, j, k \in [n]$ are distinct. Recall that $K_{(ij)}$ denotes the complex $K$ with $i$ and $j$ frozen together. It might happen that $K_{(ij)}$ is ill-defined, that is, $(ij)\notin K$. Then we set the corresponding summand to be zero. \end{theorem} \begin{proof} By Proposition \ref{Ch2}, $$\langle \psi_1^{d_1} \dots \psi_{n}^{d_n} \rangle_K = \langle \psi_1^{d_1-1} \dots \psi_{n}^{d_n} \rangle_K \smile \big((1i)+(1j)- (ij)\big).$$ It remains to observe that $\langle \psi_1^{d_1-1} \dots \psi_{n}^{d_n} \rangle_K\smile (ab)$ equals $\langle \psi_1^{d_1-1} \dots \psi_{n}^{d_n} \rangle_{K_{(ab)}}$.\end{proof} \begin{theorem}\label{main_theorem} Let $\overline{\mathcal{M}}_{0,K}$ be an ASD compactification. Any top monomial in $\psi$-classes modulo renumbering has the form $$\psi_1^{d_1}\smile ...\smile \psi_m^{d_m}$$ with $\sum_{q=1}^m d_q=n-3$ and $d_q \neq 0$ for $q=1,...,m$.
Its value equals the signed number of partitions $$[n-2]=I\sqcup J$$ with $m+1 \in I$ and $I,J \in K$. Each partition is counted with the sign $$(-1)^N \cdot \varepsilon ,$$ where $$N= |J|+\sum_{q \in J , q\leq m} d_q, \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\varepsilon= \left\{ \begin{array}{lll} 1, & \hbox{if \ \ } J\cup \{n\}\in K, \hbox{and\ \ }J\cup \{n-1\}\in K;\\ -1, & \hbox{if \ \ } I\cup \{n\}\in K, \hbox{and\ \ }I\cup \{n-1\}\in K;\\ 0, & \hbox{otherwise.} \end{array} \right.$$ \end{theorem} \textit{Proof} goes by induction. Although \textbf{the base} is trivial, let us look at it. The smallest $n$ which makes sense is $n=4$. There exist two ASD complexes with four vertices; both are threshold. So there exist two types of fine moduli compactifications, both corresponding to configuration spaces of flexible four-gons. The top monomials are the first powers of the $\psi$-classes. \begin{enumerate} \item For $l_1=1;\ l_2=1;\ l_3=1;\ l_4=0.1$ we have $\psi_1=\psi_2=\psi_3=0$, and \ \ $\psi_4=2$. Let us prove that the theorem holds for the monomial $\psi_1$. There are two partitions of $[n-2]~=~[2]$: \begin{enumerate} \item $J=\{1\},\ I=\{2\}$. Here $\varepsilon =0$, so this partition contributes $0$. \item $J=\emptyset,\ I=\{1,2\}$. Here $I\notin K $, so this partition also contributes $0$. \end{enumerate} \item For $l_1=2.9;\ l_2=1;\ l_3=1;\ l_4=1$, we have $\psi_2=\psi_3=\psi_4=1$, and \ \ $\psi_1=-1$. Let us check that the theorem holds for the monomial $\psi_1$. (The other monomials are checked in a similar way.) The partitions of $[2]$ are the same: \begin{enumerate} \item $J=\{1\},\ I=\{2\}$. Here $\varepsilon =-1, \ N=1+1$, so this partition contributes $-1$. \item $J=\emptyset,\ I=\{1,2\}$. Here $I\notin K $, so it contributes $0$. \end{enumerate} \end{enumerate} For the \textbf{induction step}, let us use the recursion.
We shall show that for any partition $[n-2]=I\sqcup J$, its contributions to the left-hand side and the right-hand side of the recursion are equal. This is done through a case analysis. We present here three cases; the rest are analogous. \begin{enumerate} \item Assume that $i,j,k \in I$, and $(I,J)$ contributes $1$ to the left-hand side count. Then \begin{itemize} \item[$\triangleright$] $(d_{1}, \dots, d_{i}+d_{j}-1, \dots, \hat{d_{j}}, \dots, d_{n})_{K_{(ij)}}$ contributes $1$ to the right-hand side. Indeed, neither $N$ nor $\varepsilon$ changes when we pass from $K$ to $K_{(ij)}$. \item[$\triangleright$] $(d_{1}, \dots, d_{i}+d_{k}-1, \dots, \hat{d_{k}}, \dots, d_{n})_{K_{(ik)}} $ contributes $1$, and \item[$\triangleright$] $-(d_{1}, \dots, d_{i}-1 ,\dots, d_{j}+d_{k}, \dots, \hat{d_{j}}, \hat{d_{k}},\dots, d_{n})_{K_{(jk)}}$ contributes $-1$. \end{itemize} \item Assume that $i\in I,\ j,k \in J$, and $(I,J)$ contributes $1$ to the left-hand side count. Then \begin{itemize} \item[$\triangleright$] $(d_{1}, \dots, d_{i}+d_{j}-1, \dots, \hat{d_{j}}, \dots, d_{n})_{K_{(ij)}}$ contributes $0$ to the right-hand side. \item[$\triangleright$] $(d_{1}, \dots, d_{i}+d_{k}-1, \dots, \hat{d_{k}}, \dots, d_{n})_{K_{(ik)}} $ contributes $0$, and \item[$\triangleright$] $-(d_{1}, \dots, d_{i}-1 ,\dots, d_{j}+d_{k}, \dots, \hat{d_{j}}, \hat{d_{k}},\dots, d_{n})_{K_{(jk)}}$ contributes $1$. Indeed, $N$ turns into $N-1$, whereas $\varepsilon$ stays the same. \end{itemize} \item Assume that $i\in J,\ j,k \in I$, and $(I,J)$ contributes $1$ to the left-hand side count. Then \begin{itemize} \item[$\triangleright$] $(d_{1}, \dots, d_{i}+d_{j}-1, \dots, \hat{d_{j}}, \dots, d_{n})_{K_{(ij)}}$ contributes $0$.
\item[$\triangleright$] $(d_{1}, \dots, d_{i}+d_{k}-1, \dots, \hat{d_{k}}, \dots, d_{n})_{K_{(ik)}} $ contributes $0$, and \item[$\triangleright$] $-(d_{1}, \dots, d_{i}-1 ,\dots, d_{j}+d_{k}, \dots, \hat{d_{j}}, \hat{d_{k}},\dots, d_{n})_{K_{(jk)}}$ contributes $1$, since $N$ turns into $N-1$, and $\varepsilon$ stays the same.\qed \end{itemize}\end{enumerate} This theorem was proven for polygon spaces (that is, for threshold ASD complexes) in \cite{Agapito}. \section{Appendix. Chow rings and blow-ups} Assume we have a diagram of a blow-up $\widetilde{Y}:={\bf bl}_{X}(Y)$. Here $X$ and $Y$ are smooth varieties, $\iota: X \hookrightarrow Y$ is a regular embedding, and $\widetilde{X}$ is the exceptional divisor. We assume in addition that $\iota^{*}: A^*(Y) \rightarrow A^*(X)$ is surjective. \begin{center} \begin{tikzcd} \widetilde{X} \arrow{d}{\tau} \arrow[r, hookrightarrow, "\theta"] & \widetilde{Y} \arrow{d}{\pi} \\ X \arrow[r, hookrightarrow, "\iota"] & Y \end{tikzcd} \end{center} Denote by $E$ the relative normal bundle $$E:= \tau^* N_X Y/ N_{\widetilde{X}} \widetilde{Y}.$$ \begin{theorem}\cite[Appendix. Theorem 1]{Keel92}\label{NewChowRing} The Chow ring $A^{*}(\widetilde{Y})$ is isomorphic to $$\frac{A^{*}(Y)[T]}{(P(T), T\cdot \ker \iota^*)},$$ where $P(T) \in A^*(Y)[T]$ is a polynomial whose restriction to $A^*(X)[T]$ is the Chern polynomial of the normal bundle $N_{X}Y$. This isomorphism is induced by $\pi^* : A^*(Y)[T] \rightarrow A^* (\widetilde{Y})$, which sends $-T$ to the class of the exceptional divisor $\widetilde{X}$. \end{theorem} \begin{theorem}\cite[Proposition 6.7]{FulInter}\label{SESBlow} Let $k \in \mathbb{N}$.
\begin{itemize} \item[a)](Key formula) For all $x \in A_{k}(X)$ $$\pi^* \iota_{*} (x) = \theta_* (c_{d-1}(E)\cap \tau^* x)\;.$$ \item[e)] There are split exact sequences $$0 \rightarrow A_{k} X \xrightarrow{\upsilon} A_{k} \widetilde{X} \oplus A_{k} Y \xrightarrow{\eta} A_{k} \widetilde{Y} \rightarrow 0$$ with $\upsilon(x) = \big( c_{d-1}(E)\cap \tau^{*}x, -\iota_{*}x \big)$, and $\eta(\tilde{x}, y) = \theta_{*}\tilde{x}+ \pi^* y$. A left inverse for $\upsilon$ is given by $(\tilde{x}, y) \mapsto \tau_{*}(\tilde{x})$. \end{itemize} \end{theorem} \begin{theorem}\cite[Theorem 6.7, Corollary 6.7.2]{FulInter}\label{CorCodim} $\;$ \begin{enumerate} \item (Blow-up Formula) Let $V$ be a $k$-dimensional subvariety of $Y$, and let $\widetilde{V} \subset \widetilde{Y}$ be the proper transform of $V$, i.e., the blow-up of $V$ along $V\cap X$. Then $$\pi^{*}[V] = [\widetilde{V}] + \theta_{*}\{c(E)\cap \tau^{*} s(V\cap X, V)\}_{k} \text{ in }A_{k}\widetilde{Y}.$$ \item If $\dim V\cap X \leq k-d$, then $\pi^* [V] = [\widetilde{V}]$. \end{enumerate} \end{theorem} An algebraic variety $Z$ is an \textit{HI--scheme} if the canonical map $\mathrm{cl}: A^{*}(Z) \rightarrow H^{*}(Z, \mathbb{Z})$ is an isomorphism. \begin{theorem}\cite[Appendix. Theorem 2]{Keel92}\label{ChowCoh} \begin{itemize} \item If $X, \widetilde{X}$, and $Y$ are HI, then so is $\widetilde{Y}$. \item If $X, \widetilde{X}$, and $\widetilde{Y}$ are HI, then so is $Y$. \end{itemize} \end{theorem} \textbf{Acknowledgement.} This research is supported by the Russian Science Foundation under grant 16-11-10039. \end{document}
\begin{document} \title{Matrix random products with singular harmonic measure} \author{Vadim A. Kaimanovich} \address{Mathematics, Jacobs University Bremen, Campus Ring 1, D-28759, Bremen, Germany} \email{[email protected]} \author{Vincent Le Prince} \address{IRMAR, Campus de Beaulieu, 35042 Rennes, France} \email{[email protected]} \subjclass[2000]{Primary 60J50; Secondary 28A78, 37D25, 53C35} \keywords{Random walk, matrix random product, Lyapunov exponents, harmonic measure, Hausdorff dimension} \begin{abstract} Any Zariski dense countable subgroup of $SL(d,\R)$ is shown to carry a non-degenerate finitely supported symmetric random walk such that its harmonic measure on the flag space is singular. The main ingredients of the proof are: (1) a new upper estimate for the Hausdorff dimension of the projections of the harmonic measure onto Grassmannians in $\R^d$ in terms of the associated differential entropies and differences between the Lyapunov exponents; (2) an explicit construction of random walks with uniformly bounded entropy and Lyapunov exponents going to infinity. \end{abstract} \maketitle \thispagestyle{empty} \section*{Introduction} The notion of \emph{harmonic measure} (historically first defined in the framework of the theory of potential) has an explicit probabilistic description in terms of the dynamical properties of the associated Markov processes as a \emph{hitting distribution}. Moreover, ``hitting'' can be interpreted both as attaining the target set in the usual sense (in finite time) and as converging to it at infinity (when the target is attached as a boundary to the state space). 
In concrete situations the target set is usually endowed with additional structures giving rise to other ``natural'' measures (e.g., smooth, uniform, Haar, Hausdorff, maximal entropy, etc.), which leads to the question about \emph{coincidence} of the harmonic and these ``other'' measures (or, in a somewhat weaker form, about coincidence of the respective measure classes). As a general rule, such a coincidence either implies that the considered system has very special symmetry properties or is not possible at all. However, establishing it in a rigorous way is a notoriously difficult problem. See, for example, the cases of the Brownian motion on cocompact negatively curved Riemannian manifolds \cite{Katok88,Ledrappier95}, of Julia sets of endomorphisms of the Riemann sphere \cite{Przytycki-Urbanski-Zdunik89,Zdunik91} and of polynomial-like maps \cite{Lyubich-Volberg95, Balogh-Popovici-Volberg97, Zdunik97}, or of Cantor repellers in a Euclidean space \cite{Makarov-Volberg86,Volberg93}. In the present paper we consider the singularity problem for \emph{random matrix products} $x_n=h_1h_2\dots h_n$ with Bernoulli increments, or, in other words, for \emph{random walks} on the group $SL(d,\R)$ (or its subgroups). Actually, our results are also valid for general non-compact semi-simple Lie groups, see \cite{LePrince04}, but for the sake of expositional simplicity we restrict ourselves just to the case of matrix groups. Another simplification is that we always assume that the considered subgroups of $SL(d,\R)$ are Zariski dense. It guarantees that the harmonic measure of the random walk is concentrated on the space of full flags in $\R^d$ (rather than on its quotient determined by the degeneracies of the Lyapunov spectrum). Modulo an appropriate technical modification our results remain valid without this assumption as well. 
If the distribution $\mu$ of the increments $\{h_n\}$ has a finite \emph{first moment} $\int\log\|h\|\,d\mu(h)$, then by the famous \emph{Oseledets multiplicative ergodic theorem} \cite{Oseledec68} there exists the \emph{Lyapunov spectrum} $\l$ consisting of \emph{Lyapunov exponents} $\l_1\ge\l_2\ge\dots\ge\l_d$ (they determine the growth of the random products in various directions), and, moreover, a.e.\ sample path $\{x_n\}$ gives rise to the associated \emph{Lyapunov flag} (filtration) of subspaces in $\R^d$. The distribution $\nu=\nu(\mu)$ of these Lyapunov flags is then naturally called the \emph{harmonic measure} of the random product. The geometric interpretation of the Oseledets theorem \cite{Kaimanovich89} is that the sequence $x_n o$ asymptotically follows a geodesic in the associated Riemannian symmetric space $S=SL(d,\R)/SO(d)$ (here $o=SO(d)\in S$); the Lyapunov spectrum determines the Cartan (``radial'') part of this geodesic, whereas the Lyapunov flag determines its direction. If $\supp\mu$ generates a Zariski dense subgroup of $SL(d,\R)$, then the Lyapunov spectrum is \emph{simple} ($\equiv$ the vector $\l$ lies in the interior of the positive Weyl chamber), see \cite{Guivarch-Raugi85,Goldsheid-Margulis89}, so that the associated Lyapunov flags are full ($\equiv$ contain subspaces of all the intermediate dimensions). The space $\B=\B(d)$ of full flags in $\R^d$ is also known under the name of the \emph{Furstenberg boundary} of the associated symmetric space $S$, see \cite{Furstenberg63} for its definition and \cite{Kaimanovich89,Guivarch-Ji-Taylor98} for its relation with the boundaries of various compactifications of Riemannian symmetric spaces. In the case when the Lyapunov spectrum is simple, a.e.\ sequence $x_n o$ is convergent in all reasonable compactifications of the symmetric space $S$, and the corresponding hitting distributions can be identified with the harmonic measure $\nu$ on $\B$.
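For illustration, the exponential growth captured by the Lyapunov spectrum can be observed directly by simulating a concrete random product. The following minimal numerical sketch (the matrices, the sample size and the helper names are our own choices, not taken from the text) estimates the top Lyapunov exponent $\l_1$ for the symmetric random walk whose increments are drawn uniformly from $\{A,A^{-1},B,B^{-1}\}$, where $A,B$ generate a free subgroup of $SL(2,\Z)$:

```python
import math
import random

def mul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def frob(A):
    """Frobenius norm of a 2x2 matrix."""
    return math.sqrt(sum(e * e for row in A for e in row))

# Increments: uniform on {A, A^-1, B, B^-1}, with A, B generating
# a free (Zariski dense) subgroup of SL(2,Z) (Sanov's subgroup).
A_, Ai = [[1.0, 2.0], [0.0, 1.0]], [[1.0, -2.0], [0.0, 1.0]]
B_, Bi = [[1.0, 0.0], [2.0, 1.0]], [[1.0, 0.0], [-2.0, 1.0]]

random.seed(1)
x, log_norm, n = [[1.0, 0.0], [0.0, 1.0]], 0.0, 5000
for _ in range(n):
    x = mul(x, random.choice([A_, Ai, B_, Bi]))
    s = frob(x)
    log_norm += math.log(s)                  # accumulate log ||x_n|| ...
    x = [[e / s for e in row] for row in x]  # ... and renormalize to avoid overflow

lam1 = log_norm / n  # (1/n) log ||x_n||: estimate of the top Lyapunov exponent
```

Since the increments generate a non-compact, strongly irreducible subgroup, the estimate stabilizes at a strictly positive value, in accordance with Furstenberg's theorem on the positivity of the top Lyapunov exponent.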
The flag space $\B$ is endowed with a natural smooth structure, therefore it makes sense to compare the harmonic measure class with the smooth measure class (the latter class contains the unique rotation invariant measure on the flag space). The harmonic measure is ergodic, so that it is either absolutely continuous or singular with respect to the smooth (or any other quasi-invariant) measure class. Accordingly, we shall call the jump distribution $\mu$ either \emph{absolutely continuous at infinity} or \emph{singular at infinity}. If the measure $\mu$ is absolutely continuous with respect to the Haar measure on the group $SL(d,\R)$ (or even weaker: a certain convolution power of $\mu$ contains an absolutely continuous component) then it is absolutely continuous at infinity \cite{Furstenberg63,Azencott70}. As it turns out, there are also measures $\mu$ which are absolutely continuous at infinity in spite of being supported by a \emph{discrete subgroup} of $SL(d,\R)$. Namely, Furstenberg \cite{Furstenberg71,Furstenberg73} showed that any lattice (for instance, $SL(d,\Z)$) carries a probability measure $\mu$ with a finite first moment which is absolutely continuous at infinity. It was used for proving one of the first results on the rigidity of lattices in semi-simple Lie groups. Furstenberg's construction of measures absolutely continuous at infinity (based on discretization of the Brownian motion on the associated symmetric space) was further extended and generalized in \cite{Lyons-Sullivan84,Kaimanovich92a,Ballmann-Ledrappier96}. Another construction of random walks with a given harmonic measure was recently developed by Connell and Muchnik \cite{Connell-Muchnik07,Connell-Muchnik07a}. Note that the measures $\mu$ arising from all these constructions are inherently infinitely supported. Let us now look at the singularity vs. absolute continuity dichotomy for the harmonic measure from the ``singularity end''.
The first result of this kind was obtained by Chatterji \cite{Chatterji66} who established singularity of the distribution of infinite continued fractions with independent digits. This distribution can indeed be viewed as the harmonic measure associated to a certain random walk on $SL(2,\Z)\subset SL(2,\R)$ \cite{Furstenberg63a}. See \cite{Chassaing-Letac-Mora83} for an explicit description of the harmonic measure in a similar situation and \cite{Kifer-Peres-Weiss01} for a recent very general result on singularity of distributions of infinite continued fractions. Extending the continued fractions setup Guivarc'h and Le Jan \cite{Guivarch-LeJan90,Guivarch-LeJan93} proved that the measures $\mu$ on non-compact lattices in $SL(2,\R)$ satisfying a certain moment condition (in particular, all finitely supported measures) are singular at infinity. Together with some other circumstantial evidence (as, for instance, pairwise singularity of various natural boundary measure classes on the visibility boundary of the universal cover of compact negatively curved manifolds \cite{Ledrappier95} or on the hyperbolic boundary of free products \cite{Kaimanovich-LePrince08p}) the aforementioned results lead to the following {\parindent 0pt \textsc{Conjecture.} Any finitely supported probability measure on $SL(d,\R)$ is singular at infinity. } The principal result of the present paper is the following \begin{thrm} \label{th:mn} Any countable Zariski dense subgroup $\G$ of $SL(d,\R)$ carries a non-degenerate symmetric probability measure $\mu$ which is singular at infinity. Moreover, if $\G$ is finitely generated then the measure $\mu$ can be chosen to be finitely supported. \end{thrm} Note here an important difference between the case $d=2$ and the higher rank case $d\ge 3$.
If $\G$ is discrete, then for $d=2$ the boundary circle can be endowed with different $\G$-invariant smooth structures (parameterized by the points from the Teichm\"uller space of the quotient surface), so that Furstenberg's discretization construction readily provides existence of measures $\mu$ (not finitely supported though!) which are singular at infinity. Obviously, this approach does not work in the higher rank case, where our work provides the first examples of measures singular at infinity. Our principal tool for establishing singularity is the notion of the \emph{dimension} of a measure. Namely, we show that in the setup of our Main Theorem the measure $\mu$ can be chosen in such a way that the Hausdorff dimension of the associated harmonic measure (more precisely, of its projection onto one of the Grassmannians in $\R^d$) is arbitrarily small, which obviously implies singularity. To do this we establish an inequality connecting the \emph{Hausdorff dimension} with the \emph{asymptotic entropy} and the \emph{Lyapunov exponents} of the random walk. There are several notions of dimension of a probability measure $m$ on a compact metric space $(Z,\r)$ (see \cite{Pesin97} and the discussion in \secref{sec:Hausd} below). These notions roughly fall into two categories. The \emph{global} ones are obtained by looking at the dimension of sets which ``almost'' (up to a piece of small measure $m$) coincide with $Z$; in particular, the \emph{Hausdorff dimension} $\dim_H m$ of the measure $m$ is $\inf\left\{ \dim_H A : m(A)=1 \right\}$, where $\dim_H A$ denotes the Hausdorff dimension of a subset $A\subset Z$. On the other hand, the \emph{local} ones are related to the asymptotic behavior of the measures of concentric balls $B(z,r)$ in $Z$ as the radius $r$ tends to $0$.
More precisely, the lower $\un\dim_P m(z)$ (resp., the upper $\ov\dim_P m(z)$) \emph{pointwise dimension} of the measure $m$ at a point $z\in Z$ is defined as the $\liminf$ (resp., $\limsup$) of the ratio $\log m B(z,r)/\log r$ as $r\to 0$. In these terms $\dim_H m$ coincides with $\esssup_z \un\dim_P m(z)$. In particular, if $\un\dim_P m(z)=\ov\dim_P m(z)=D$ almost everywhere for a constant $D$, then $\dim_H m=D$. Moreover, in the latter case all the reasonable definitions of dimension of the measure $m$ coincide. Numerous variants of the formula $\dim m = h/\l$ relating the dimension of an invariant measure $m$ of a differentiable map with its entropy $h=h(m)$ and the characteristic exponent(s) $\l=\l(m)$ have been known since the late 70s -- early 80s, see \cite{Ledrappier81,Young82} and the references therein. Ledrappier \cite{Ledrappier83} was the first to carry it over to the context of random walks by establishing the formula $\dim \nu =h/2\l$ for the dimension of the harmonic measure of random walks on discrete subgroups of $SL(2,\C)$. Here $h=h(\mu)$ is the asymptotic entropy of the random walk with the jump distribution $\mu$, and $\l=\l(\mu)$ is the Lyapunov exponent. However, the dimension appearing in this formula is somewhat different from the classical Hausdorff dimension, because convergence of the ratios $\log m B(z,r)/\log r$ to the value of dimension is only established in measure (rather than almost surely). Le Prince has recently extended the approach of Ledrappier (also see \cite{Ledrappier01,Kaimanovich98}) to a general discrete group $G$ of isometries of a Gromov hyperbolic space $X$; in this case the Gromov product induces a natural metric (more rigorously, a gauge) on the hyperbolic boundary $\pt X$. He proved that if a probability measure $\mu$ on $G$ has a finite first moment with respect to the metric on $X$, then the associated harmonic measure $\nu$ on $\pt X$ has the property that $\ov\dim_P\nu(z)\le h/\ell$ for $\nu$-a.e. 
point $z\in\pt X$ (which implies that $\dim_H\nu\le h/\ell$) \cite{LePrince07}, and that the \emph{box dimension} of the measure $\nu$ is precisely $h/\ell$ \cite{LePrince08} (here, as before, $h=h(\mu)$ is the asymptotic entropy of the random walk $(G,\mu)$, and $\ell=\ell(\mu)$ is its linear rate of escape with respect to the hyperbolic metric). Note that the question about the pointwise dimension, i.e., about the asymptotic behaviour of the ratios $\log m B(z,r)/\log r$ for a.e. point $z\in\pt X$, is still open in this generality. It was recently proved that these ratios converge to $h/\ell$ almost surely for any symmetric finitely supported random walk on $G$ \cite{Blachere-Haissinsky-Mathieu08}. This should also be true for finitely supported random walks which are not necessarily symmetric. Namely, the results of Ancona \cite{Ancona88} on almost multiplicativity of the Green function imply that in this situation the harmonic measure has the \emph{doubling property }, which in turn can be used to prove existence of the pointwise dimension. Yet another example is the inequality $\dim_H\nu\le h/\ell$ for the Hausdorff dimension of the harmonic measure of iterated function systems in the Euclidean space \cite{Nicol-Sidorov-Broomhead02} (when is the equality attained?). In the context of random walks on general countable groups the asymptotic entropy $h(\mu)$, the rate of escape $\ell(\mu)$ and the exponential growth rate of the group $v$ satisfy the inequality $h(\mu)\le\ell(\mu)v$ (it was first established by Guivarc'h \cite{Guivarch79}; recently Vershik \cite{Vershik00} revitalized interest in it). As it was explained above, in the ``hyperbolic'' situations the ratio $h/\ell$ can (under various additional conditions) be interpreted as the dimension of the harmonic measure, whereas $v$ is the Hausdorff dimension of the boundary itself. 
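For orientation, a classical example (not taken from this paper) in which all three quantities are explicit, and Guivarc'h's inequality becomes an equality, is the simple random walk on the free group $F_k$, $k\ge 2$, with respect to a free generating set:

```latex
\ell(\mu)=1-\frac1k\,,\qquad v=\log(2k-1)\,,\qquad
h(\mu)=\Bigl(1-\frac1k\Bigr)\log(2k-1)=\ell(\mu)\,v\,.
```

Here $h/\ell=v$, so that the harmonic measure has full dimension with respect to the natural gauge on $\pt F_k$, in contrast with the singular behaviour discussed above.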
Unfortunately, the aforementioned formulas of the type $\dim\nu=h/\ell$ can not be directly carried over to the higher rank case. The reason for this is that in this case the boundary is ``assembled'' of several components which may have different dimension properties. The simplest illustration is provided by the product $\G_1\times\G_2$ of two discrete subgroups of $SL(2,\R)$ with a product jump distribution $\mu=\mu_1\otimes\mu_2$. The Furstenberg boundary of the bidisk associated with the group $SL(2,\R)\times SL(2,\R)$ is the product of the boundary circles of each hyperbolic disc. The harmonic measure of the jump distribution $\mu$ is then the product of the harmonic measures of the jump distributions $\mu_1$ and $\mu_2$ which may have different dimensions (it would be interesting to investigate the dimension properties of the harmonic measure on the boundary of the polydisc for groups and random walks which do not split into a direct product). In the case of $SL(d,\R)$ the role of elementary ``building blocks'' of the flag space $\B$ is played by the Grassmannians $\Gr_i$ in $\R^d$ (which are the minimal equivariant quotients of $\B$). In order to deal with spaces with a $\mu$-stationary measure other than the Poisson boundary of the random walk $(\G,\mu)$ it is convenient to use the notion of the \emph{differential $\mu$-entropy} first introduced by Furstenberg \cite{Furstenberg71}. It measures the ``amount of information'' about the random walk contained in the action. The differential entropy does not exceed the asymptotic entropy of the random walk, coincides with it for the Poisson boundary, and is strictly smaller for any proper quotient of the Poisson boundary \cite{Kaimanovich-Vershik83}. Our point is that in the formulas of the type $\dim\nu=h/\ell$ (see above) for spaces with a $\mu$-stationary measure $\nu$ other than the Poisson boundary, the asymptotic entropy $h$ should be replaced with the differential entropy of the space. 
In the setup of our Main Theorem let $\mu$ be a non-degenerate probability measure on $\G$ with a finite first moment, so that the associated Lyapunov spectrum $\l_1>\l_2>\dots>\l_d$ is simple. Denote by $\nu_i$ the image of the harmonic measure $\nu$ on the flag space $\B$ under the projection onto the rank $i$ Grassmannian $\Gr_i$, and let $E_i$ be the corresponding differential entropy. All the spaces $(\Gr_i,\nu_i)$ are quotients of the Poisson boundary of the random walk $(\G,\mu)$; if $\G$ is discrete then $(\B,\nu)$ is the Poisson boundary of the random walk $(\G,\mu)$ \cite{Kaimanovich85,Ledrappier85}, otherwise the Poisson boundary of the random walk $(\G,\mu)$ may be bigger than the flag space \cite{Kaimanovich-Vershik83,Bader-Shalom06,Brofferio06} (note that it is still unknown whether all the proper quotients of the flag space $\B$ are also always proper measure-theoretically with respect to the corresponding $\mu$-stationary measures). \begin{thrm1} The Hausdorff dimensions of the measures $\nu_i$ satisfy the inequalities $$ \dim \nu_i \le \frac{E_i}{\l_i-\l_{i+1}} \;. $$ \end{thrm1} In the case when $\G$ is a discrete subgroup of $SL(2,\R)$ the right-hand side of the above inequality precisely coincides with the ratio $h(\mu)/2\l(\mu)$ from Ledrappier's formula \cite{Ledrappier83}. Our Main Theorem follows from a combination of this dimension estimate with the following construction: \begin{thrm2} Let $\mu$ be a non-degenerate probability measure on $\G$ whose entropy and the first moment are finite, and let $\g\in\G$ be an $\R$-regular element (which always exists in a Zariski dense group). Then the measures $$ \mu^k = \frac12\mu+\frac14\left( \d_{\g^k}+\d_{\g^{-k}} \right) $$ have the property that the entropies $H(\mu^k)$ (and therefore the corresponding asymptotic entropies as well) are uniformly bounded, whereas the lengths of their Lyapunov vectors $\l(\mu^k)$ (equivalently, the top Lyapunov exponents $\l_1(\mu^k)$) go to infinity. 
\end{thrm2} Let us mention here several issues naturally arising in connection with our results. 1. Our study of dimension in the present paper is subordinate to proving singularity at infinity, so that we only use rather rudimentary facts about the dimension of the harmonic measure. Under what conditions is the inequality from \thmref{th:dim} realized as equality? Is there an exact formula for the dimension of the harmonic measure on the flag space? Of course, in these questions one has to specify precisely which definition of the dimension is used, cf. \secref{sec:Hausd}. 2. For our purposes it was enough to establish in \thmref{th:estimate} that the lengths of the Lyapunov vectors $\l(\mu^k)$ go to infinity. Is this also true for all the spectral gaps $\l_i(\mu^k)-\l_{i+1}(\mu^k)$ (which would imply that the dimensions of the harmonic measures on $\B$ go to 0)? What happens with the harmonic measures $\nu^k$ (or with their projection); do they weakly converge, and if so what is the limit? Note that one can ask the latter questions in the hyperbolic (rank 1) situation as well, cf. \cite{LePrince07}. 3. Another interesting issue is the connection of the harmonic measure with the invariant measures of the geodesic flow (more generally, of Weyl chambers in the case of higher rank locally symmetric spaces). As it was proved by Katok and Spatzier \cite{Katok-Spatzier96}, the only invariant measure of the Weyl chamber flow associated to a lattice $\G$ in $SL(d,\R)$ with positive entropy along all one-dimensional directions is the Haar measure. In view of the correspondence between the Radon invariant measures of the Weyl chamber flow and $\G$-invariant Radon measures on the principal stratum of the product $\B\times\B$ (cf. 
\cite{Kaimanovich90} in the hyperbolic case), it implies that singular harmonic measures on $\B$ either can not be lifted to a Radon invariant measure on $\B\times\B$ (contrary to the hyperbolic case \cite{Kaimanovich90}) or, if they can be lifted to a Radon invariant measure on $\B\times\B$, then the associated invariant measure of the Weyl chamber flow has a vanishing directional entropy (in our setup the latter option most likely can be eliminated). The paper has the following structure. In \secref{sec:RW} we set up the notations and introduce the necessary background information about random products (random walks) on groups. In \secref{sec:exponents}, \secref{sec:flags} and \secref{sec:harm} we specialize these notions to the case of matrix random products, recall the Oseledets multiplicative ergodic theorem, give several equivalent descriptions of the limit flags (Lyapunov filtrations) of random products, and introduce the harmonic measure $\nu$ on the flag space $\B$ and its projections $\nu_i$ to the Grassmannians $\Gr_i$ in $\R^d$. In \secref{sec:entr} we recall the definitions of the asymptotic entropy of a random walk and of the differential entropy of a $\mu$-stationary measure. In \lemref{lem:dentr} we establish an inequality relating the exponential rate of decay of translates of a stationary measure along the sample paths of the reversed random walk with the differential entropy. In \secref{sec:Hausd} we discuss the notion of dimension of a probability measure on a compact metric space. We introduce new \emph{lower} and \emph{upper mean dimensions} and establish in \thmref{th:dims} inequalities (used in the proof of \thmref{th:dim}) relating them to other more traditional definitions of dimension (including the classical Hausdorff dimension). After discussing the notion of the limit set in the higher rank case in \secref{sec:limit}, we finally pass to proving our Main Theorem in \secref{sec:link} and \secref{sec:const}.
Namely, in \secref{sec:link} we establish the inequality of \thmref{th:dim}, and in \secref{sec:const} we prove \thmref{th:estimate} on existence of jump distributions with uniformly bounded entropy and arbitrarily big Lyapunov exponents. \section{Random products} \label{sec:RW} We begin by recalling the basic definitions from the theory of random walks on discrete groups, e.g., see \cite{Kaimanovich-Vershik83,Kaimanovich00a}. The \emph{random walk} determined by a probability measure $\mu$ on a countable group $\G$ is a Markov chain with the transition probabilities $$ p(g,gh)=\mu(h) \;, \qquad g,h\in\G \;. $$ \conv{Without loss of generality (passing, if necessary, to the subgroup generated by the support $\supp\mu$ of the measure $\mu$) we shall always assume that the measure $\mu$ is \emph{non-degenerate} in the sense that the smallest subgroup of $\G$ containing $\supp\mu$ is $\G$ itself.} If the position of the random walk at time $0$ is a point $x_0\in\G$, then its position at time $n>0$ is the product $$ x_n = x_0 h_1 h_2 \dots h_n \;, $$ where $(h_n)_{n\ge 1}$ is the sequence of independent $\mu$-distributed \emph{increments} of the random walk. Therefore, provided that $x_0$ is the group identity $e$, the distribution of $x_n$ is the $n$-fold convolution $\mu^{*n}$ of the measure $\mu$. Below it will be convenient to consider bilateral sequences of Bernoulli $\mu$-distributed increments $\h=(h_n)_{n\in\Z}$ and the associated \emph{bilateral sample paths} $\x=(x_n)_{n\in\Z}$ obtained by extending the formula $$ x_{n+1}=x_n h_{n+1} $$ to all $n\in\Z$ under the condition $x_0=e$, so that \begin{equation} \label{eq:iso} x_n = \begin{cases} h_0^{-1}h_{-1}^{-1} \dots h_{n+1}^{-1}\;, & n<0\;; \\ e\;, & n=0\;; \\ h_1 h_2\dots h_n\;, & n>0\;. 
\\ \end{cases} \end{equation} Thus, the ``negative part'' $\ch x_n=x_{-n},\;n\ge 0,$ of the bilateral random walk is the random walk on $\G$ starting from the group identity at time 0 and governed by the \emph{reflected measure} $\ch\mu(g)=\mu(g^{-1})$. We shall denote by $\P$ the probability measure in the space $\X=\G^\Z$ of bilateral sample paths $\x$ which is the image of the Bernoulli measure in the space of bilateral increments $\h$ under the isomorphism \eqref{eq:iso}. It is preserved by the transformation $T$ induced by the shift in the space of increments: \begin{equation} \label{eq:T} (T\x)_n = h_1^{-1} x_{n+1}\;, \qquad n\in\Z\;. \end{equation} \section{Lyapunov exponents} \label{sec:exponents} We shall fix a standard basis $\E=(e_i)_{1\le i\le d}$ in $\R^d$ and identify elements of the Lie group $G=SL(d,\R)$ with their matrices in this basis. Throughout the paper we shall use the standard Euclidean norms associated with this basis both for vectors and matrices in $\R^d$. \conv{From now on we shall assume that $\G$ is a countable subgroup of the group $G=SL(d,\R)$.} Denote by $A$ the \emph{Cartan subgroup} of $G$ consisting of diagonal matrices $a=\diag(a_i)$ with positive entries $a_i$, and by $$ \af = \left\{ \a=(\a_1,\a_2,\dots,\a_d)\in\R^d : \sum\a_i=0 \right\} $$ the associated \emph{Cartan subalgebra}, so that $A=\exp\af$. Let $$ \af_+ = \{\a\in\af: \a_1>\a_2> \dots> \a_d \} \;, $$ be the standard \emph{positive Weyl chamber} in $\af$, and let $$ A_+ = \exp\af_+ = \{a\in A: a_1> a_2> \dots > a_d\} \;. $$ Denote the closures of $\af_+$ and $A_+$ by $\ov{\af_+}$ and $\ov{A_+}$, respectively. Any element $g\in G$ can be presented as $g=k_1 a k_2$ with $k_{1,2}\in K=SO(d)$ (which is a maximal compact subgroup in $G$) and a uniquely determined $a=a(g)\in\ov{A_+}$ (\emph{Cartan} or \emph{polar} decomposition). 
\conv{We shall always assume that the probability measure $\mu$ on $\G$ has a \emph{finite first moment} in the ambient group $G$, i.e., $\sum \log \|g\| \mu(g) < \infty$. } Then the asymptotic behaviour of the random walk $(\G,\mu)$ is described by the famous \emph{Oseledets multiplicative ergodic theorem} which we shall state in the form due to Kaimanovich \cite{Kaimanovich89} (and in the generality suitable for our purposes): \begin{thm} \label{th:osel} There exists a vector $\l=\l(\mu)\in\ov\af_+$ (the \emph{Lyapunov spectrum} of the random walk) such that $$ \frac1n \log a(x_n) \mathop{\,\longrightarrow\,}_{n\to+\infty} \l $$ for $\P$-a.e.\ sample path $\x\in\X$. Moreover, for $\P$-a.e.\ $\x$ there exists a uniquely determined positive definite symmetric matrix $$ g=g(\x)=k(\exp\l) k^{-1} \;, \qquad k\in K \;, $$ such that $$ \log \|g^{-n}x_n\|=o(n) \quad \textrm{as} \quad n\to+\infty\;. $$ \end{thm} \begin{thm}[\cite{Guivarch-Raugi85,Goldsheid-Margulis89}] \label{th:simple} If, in addition, the group $\G$ is Zariski dense in $G$, then the Lyapunov spectrum $\l(\mu)$ is \emph{simple}, i.e., it belongs to the Weyl chamber $\af_+$. \end{thm} \begin{rem} \label{rem:inverse} The Lyapunov spectra $(\l_1,\dots,\l_d)$ and $(\ch\l_1,\dots,\ch\l_d)$ of the measure $\mu$ and of the reflected measure $\ch\mu$, respectively, are connected by the formula $$ \ch\l_i = -\l_{d+1-i} \;, \qquad 1\le i\le d\;. $$ \end{rem} \section{Limit flags} \label{sec:flags} Let $S=G/K$ be the \emph{Riemannian symmetric space} associated with the group $G$ (e.g., see \cite{Eberlein96} for the basic notions). We shall fix a reference point $o=K\in S$ (its choice is equivalent to choosing a Euclidean structure on $\R^d$ such that its rotation group is $K$). 
Being non-positively curved and simply connected, the space $S$ has a natural \emph{visibility compactification} $\ov S = S \cup \pt S$ whose boundary $\pt S$ consists of asymptotic equivalence classes of geodesic rays in $S$ and can be identified with the unit sphere of the tangent space at the point $o$ (since any equivalence class contains a unique ray issued from $o$). The action of the group $G$ extends from $S$ to $\pt S$, and the orbits of the latter action are naturally parameterized by unit length vectors $\a\in\ov{\af_+}$: the orbit $\pt S_\a$ consists of the equivalence classes of all the rays $\g(t)=k\exp(t\a)o,\; k\in K$. Algebraically the orbits $\pt S_\a$ corresponding to the interior vectors $\a\in\af_+$ are isomorphic to the space $\B$ of \emph{full flags} $$ \V=\{V_i\}\;,\quad V_0=\{0\}\subset V_1\subset\dots\subset V_{d-1}\subset V_d=\R^d $$ in $\R^d$ (also known as the \emph{Furstenberg boundary} of the symmetric space $S$), whereas the orbits corresponding to wall vectors are isomorphic to quotients of $\B$, i.e., to flag varieties for which certain intermediate dimensions are missing, see \cite{Kaimanovich89}. \thmref{th:osel} implies that for $\P$-a.e.\ sample path $\x$ $$ d(x_n o, \g(n\|\l\|))=o(n) \quad \textrm{as} \quad n\to+\infty \;, $$ where $\g$ is the geodesic ray $\g(t)=k\exp\left(t\frac{\l}{\|\l\|}\right)o$, hence the sequence $x_n o$ converges in the visibility compactification to a limit point $\bnd\x\in\pt S_{\l/\|\l\|}$. Moreover, $\pt S_{\l/\|\l\|}\cong\B$ by \thmref{th:simple}, so that below we shall consider the aforementioned \emph{boundary map} $\bnd$ as a map from the path space to the flag space $\B$. \thmref{th:osel} and \thmref{th:simple} easily imply the following descriptions of the limit flag $\bnd\x$.
\begin{prop} \label{pr:lyap} \begin{itemize} \item[(i)] Denote by $\V_0$ the \emph{standard flag} $$ \qquad \{0\} \subset \spn\{e_1\} \subset \spn\{e_1,e_2\} \subset \dots \subset \spn\{e_1,e_2,\dots,e_{d-1}\} \subset \R^d $$ associated with the basis $\E$. Then $$ \bnd\x = k\V_0 $$ for $k=k(\x)\in K$ from \thmref{th:osel}. \item[(ii)] The spaces $V_i$ from the flag $\bnd\x$ are increasing direct sums of the eigenspaces of the matrix $g=g(\x)$ from \thmref{th:osel} taken in the order of decreasing eigenvalues. \item[(iii)] The flag $\bnd\x$ is the \emph{Lyapunov flag} of the sequence $x_n^{-1}$, i.e., $$ \lim_{n\to+\infty} \frac1n \log \|x_n^{-1} v\| = -\l_i \qquad \forall\, v\in V_i\setminus V_{i-1} \;. $$ \item[(iv)] For any smooth probability measure $\th$ on $\B$ and $\P$-a.e.\ sample path $\x$ $$ \lim_{n\to+\infty} x_n\th = \d_{\bnd\x} $$ in the weak$^*$ topology of the space of probability measures on $\B$. \end{itemize} \end{prop} \begin{rem} Equivariance of the boundary map $\bnd$ implies that for the transformation $T$ \eqref{eq:T} $$ \bnd T^n\x = x_n^{-1}\bnd\x \qquad\forall\,n\in\Z \;. $$ In particular, \begin{equation} \label{eq:Tc} \bnd T^{-n}\x = \ch x_n^{-1}\bnd\x \qquad\forall\,n\ge 0 \;. \end{equation} \end{rem} \section{Harmonic measure} \label{sec:harm} \begin{defn} The image $\nu=\bnd(\P)$ of the probability measure $\P$ in the path space $\X$ under the map $\bnd:\X\to\B$ is called the \emph{harmonic measure} of the random walk $(\G,\mu)$. In other words, $\nu$ is the distribution of the limit flag $\bnd\x$ under the measure $\P$. \end{defn} The harmonic measure is \emph{$\mu$-stationary} in the sense that it is invariant with respect to the convolution with $\mu$: $$ \mu*\nu = \sum_g \mu(g) g\nu = \nu \;. 
$$ \begin{thm}[\cite{Guivarch-Raugi85,Goldsheid-Margulis89}] \label{th:sub} Under the conditions of \thmref{th:simple}, $\nu$ is the unique $\mu$-stationary probability measure on $\B$, and any proper algebraic subvariety of $\B$ is $\nu$-negligible. \end{thm} \begin{thm}[\cite{Kaimanovich85,Ledrappier85}] Under the conditions of \thmref{th:simple}, if the subgroup $\G$ is discrete, then the measure space $(\B,\nu)$ is isomorphic to the Poisson boundary of the random walk $(\G,\mu)$. \end{thm} \begin{rem} If $\G$ is not discrete, then the random walk may have limit behaviours other than those described by the limit flags; in other words, the Poisson boundary of the random walk $(\G,\mu)$ may be bigger than the flag space, see \cite{Kaimanovich-Vershik83} for the first example of this kind (the dyadic-rational affine group) and \cite{Bader-Shalom06,Brofferio06} for recent developments. \end{rem} Denote by $\Gr_i$ the dimension $i$ \emph{Grassmannian} (the space of all dimension $i$ subspaces) in $\R^d$. There is a natural projection $\pi_i:\B\to\Gr_i$ which consists in assigning to any flag in $\R^d$ its dimension $i$ subspace. Let $\nu_i=\pi_i(\nu),\;1\le i\le d-1,$ be the associated images of the measure $\nu$. Obviously, the measures $\nu_i$ are $\mu$-stationary (along with the measure $\nu$). We shall also use the notation $$ \bnd_i\x = \pi_i(\bnd\x) \in \Gr_i \;, $$ so that $\nu_i=\bnd_i(\P)$. We shall embed each Grassmannian $\Gr_i$ into the projective space $P\bigwedge^i \R^d$ in the usual way and define a $K$-invariant metric $\r=\r_i$ on the latter as \begin{equation} \label{eq:metric} \r (\xi,\z) = \sin\an(\xi,\z) \;, \end{equation} where the angle (varying between $0$ and $\pi/2$) is measured with respect to the standard Euclidean structure on $\bigwedge^i\R^d$ determined by the basis $\E$ (so that $(e_{j_1}\wedge \cdots \wedge e_{j_i})_{1\le j_1 < \cdots < j_i \le d}$ is an orthonormal basis of $\bigwedge^i\R^d$).
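To make the metric \eqref{eq:metric} concrete, here is a small computational sketch (ours, with invented helper names): it evaluates $\r(\xi,\z)$ for two $i$-dimensional subspaces given by spanning vectors, via their Pl\"ucker coordinates in $\bigwedge^i\R^d$.

```python
# Sketch (ours): the K-invariant metric rho(xi, zeta) = sin angle(xi, zeta) on Gr_i,
# computed through the embedding Gr_i -> P(Lambda^i R^d); a subspace is passed as a
# list of spanning vectors, and its Pluecker coordinates are the i x i minors.
from itertools import combinations
from math import sqrt

def det(M):
    # Laplace expansion along the first row (fine for the small sizes used here)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def pluecker(span):
    # coordinates of v_1 ^ ... ^ v_i in the orthonormal basis e_{j_1} ^ ... ^ e_{j_i}
    i, d = len(span), len(span[0])
    return [det([[v[j] for j in J] for v in span]) for J in combinations(range(d), i)]

def rho(xi, zeta):
    p, q = pluecker(xi), pluecker(zeta)
    cos = abs(sum(a * b for a, b in zip(p, q)))
    cos /= sqrt(sum(a * a for a in p)) * sqrt(sum(b * b for b in q))
    return sqrt(1 - min(1.0, cos) ** 2)        # sin of the angle, between 0 and 1
```

For instance, two orthogonal coordinate planes in $\R^4$ are at distance $1$, while a line at angle $\pi/4$ to a coordinate axis in $\R^2$ is at distance $\sin(\pi/4)$.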
The \emph{Furstenberg formula} (see \cite{Furstenberg63a,Bougerol-Lacroix85}) \begin{equation} \label{eq:furst} \l_1 + \l_2+\dots+\l_i = \sum_g \mu(g) \int_{\Gr_i} \log\frac{\|gv\|}{\|v\|}\ d\nu_i (\ov{v}) \;, \qquad 1\le i\le d-1 \;, \end{equation} where $v\in\bigwedge^i\R^d$ is the vector representing a point $\ov{v}\in\Gr_i\subset P\bigwedge^i\R^d$, relates the Lyapunov exponents to the harmonic measure. \section{Entropy} \label{sec:entr} Recall that if the measure $\mu$ has a finite entropy $H(\mu)$, then the \emph{asymptotic entropy} of the random walk $(\G,\mu)$ is defined as \begin{equation}\label{eq:entr} h(\G,\mu)=\lim_{n\to+\infty} \frac{H(\mu^{*n})}n \le H(\mu)\;, \end{equation} where $H(\cdot)$ denotes the usual entropy of a discrete probability measure. The asymptotic entropy can also be defined ``pointwise'' along sample paths of the random walk as $$ h(\G,\mu) = \lim_{n\to+\infty} -\frac1n\log\mu^{*n}(x_n) \;, $$ where the convergence holds both $\P$-a.e.\ and in the space $L^1(\X,\P)$, see \cite{Kaimanovich-Vershik83,Derriennic86}. The \emph{$\mu$-entropy} (\emph{Furstenberg entropy, differential entropy}) of a $\mu$-stationary measure $\th$ on a $\G$-space $X$ is defined as \begin{equation}\label{eq:def diff} E_\mu(X,\th) = -\sum_{g\in \G}\mu(g)\int\log \frac{dg^{-1}\th}{d\th}(b)d\th(b) \;, \end{equation} and it satisfies the inequality $E_\mu(X,\th)\le h(\G,\mu)$, see \cite{Furstenberg71,Kaimanovich-Vershik83,Nevo-Zimmer02}. \conv{We shall always assume that the probability measure $\mu$ on $\G$ has a \emph{finite entropy} $H(\mu)<\infty$. } In our context, if the subgroup $\G$ is discrete in $SL(d,\R)$, then finiteness of the first moment of the measure $\mu$ easily implies that $H(\mu)<\infty$ (e.g., see \cite{Derriennic86}). Therefore, $E_i=E_\mu(\Gr_i,\nu_i)<\infty$. Below we shall need the following routine estimate (in fact valid for an arbitrary quotient of the Poisson boundary).
\begin{lem} \label{lem:dentr} For any index $i\in\{1,2,\dots,d-1\}$, any subset $A\subset\Gr_i$ with $\nu_i(A)>0$ and $\P$-a.e.\ sample path $\x$ \begin{equation}\label{eq:dentr} \limsup_{n\to+\infty}\left[-\frac{1}{n} \log \ch x_n\nu_i(A)\right] \le E_i \;. \end{equation} \end{lem} \begin{proof} Put $$ F_i(\x)=-\log\frac{d\ch x_1\nu_i}{d\nu_i} (\bnd_i\x)=-\log\frac{d\nu_i(\bnd_iT^{-1}\x)}{d\nu_i(\bnd_i\x)} $$ (see formula \eqref{eq:Tc}), so that $$ E_i = \int F_i(\x)\,d\P(\x) \;. $$ Then $$ \begin{aligned} -\log\frac{d\ch x_n\nu_i}{d\nu_i} (\bnd_i\x) &= -\log\frac{d\nu_i(\bnd_iT^{-n}\x)}{d\nu_i(\bnd_i\x)} \\ &= F_i(\x) + F_i(T^{-1}\x) + \dots + F_i(T^{-n+1}\x) \;, \end{aligned} $$ whence by the ergodic theorem $$ -\frac1n \log\frac{d\ch x_n\nu_i}{d\nu_i} (\bnd_i\x) \mathop{\,\longrightarrow\,}_{n\to+\infty} E_i $$ in $L^1(\X,\P)$. It implies that $$ -\frac1n \int_A \log\frac{d\ch x_n\nu_i}{d\nu_i} (\xi) \frac{d\nu_i(\xi)}{\nu_i(A)} \mathop{\,\longrightarrow\,}_{n\to+\infty} E_i \;, $$ which, by a convexity argument, yields the claim. \end{proof} \section{Dimension of measures} \label{sec:Hausd} Let us recall several notions of the dimension of a probability measure $m$ on a compact metric space $(Z,\r)$ (all the details, unless otherwise specified, can be found in the book \cite{Pesin97}). These notions roughly fall into two categories: the \emph{global} ones are obtained by looking at the dimension of sets which ``almost'' coincide with $Z$ (up to a piece of small measure $m$), whereas the \emph{local} ones are related to the asymptotic behavior of the ratios $\log m B(z,r)/\log r$ as the radius $r$ tends to $0$. \subsection{Global definitions} The \emph{Hausdorff dimension} of the measure $m$ is $$ \dim_H m = \inf\left\{ \dim_H A : m(A)=1 \right\} \;, $$ where $\dim_H A$ denotes the Hausdorff dimension of a subset $A\subset Z$. 
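For a concrete instance of the ratios $\log m B(z,r)/\log r$ mentioned above (a numerical sketch of ours, not used in the sequel): for the uniform measure on the middle-thirds Cantor set these ratios tend to $\log 2/\log 3\approx 0.63$ at every point of the set, so all the dimensions discussed in this section take this common value.

```python
# Our own numerical sketch: the pointwise ratios log m B(z,r) / log r for the
# uniform (Cantor) measure m on the middle-thirds Cantor set; the limit is
# log 2 / log 3 at every point of the set.
from itertools import product
from math import log

def level_intervals(N):
    # the 2^N closed intervals of the level-N middle-thirds construction,
    # each carrying mass 2^{-N}
    scale = 3.0 ** -N
    return [(a, a + scale)
            for digits in product((0, 2), repeat=N)
            for a in [sum(d * 3.0 ** -(i + 1) for i, d in enumerate(digits))]]

def ball_mass(z, r, ivals):
    # Cantor measure of B(z, r), up to the resolution of the level-N intervals
    return sum(1.0 / len(ivals) for (a, b) in ivals if b >= z - r and a <= z + r)

ivals = level_intervals(12)
z = 0.25                        # ternary 0.020202..., a point of the Cantor set
ratios = [log(ball_mass(z, 3.0 ** -n, ivals)) / log(3.0 ** -n) for n in (6, 8)]
# both ratios are close to log 2 / log 3 ~ 0.6309
```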
The \emph{lower} and the \emph{upper box dimensions} of a subset $A\subset Z$ are defined, respectively, as $$ \un\dim_B A = \liminf_{r\to 0}\frac{\log N(A,r)}{\log 1/r} \quad\textrm{and}\quad \ov{\dim}_B A = \limsup_{r\to 0}\frac{\log N(A,r)}{\log 1/r} \;, $$ where $N(A,r)$ is the minimal number of balls of radius $r$ needed to cover $A$. Ledrappier \cite{Ledrappier81} also considered the minimal number $N(r,\e,m)$ of balls of radius $r$ such that the measure $m$ of their union is at least $1-\e$ and defined the ``fractional dimension'' of the measure $m$ as $$ \ov\dim_L m = \sup_{\e\to 0} \limsup_{r\to 0} \frac{\log N(r,\e,m)}{\log 1/r} $$ (we use the notation from \cite{Young82} and below shall call it the \emph{upper Ledrappier dimension}). As was noticed by Young \cite{Young82}, in the same way one can also define what we call the \emph{lower Ledrappier dimension} of the measure $m$ $$ \un\dim_L m = \sup_{\e\to 0} \liminf_{r\to 0} \frac{\log N(r,\e,m)}{\log 1/r} $$ as well as, in modern terminology, its \emph{lower} and \emph{upper box dimensions}, respectively, $$ \un\dim_B m = \liminf_{m(A)\to 1} \left\{ \un\dim_B A \right\} \quad \textrm{and} \quad \ov\dim_B m = \liminf_{m(A)\to 1} \left\{ \ov\dim_B A \right\} \;. $$ Obviously, $$ \un\dim_L m \le \ov\dim_L m \quad\textrm{and}\quad \un\dim_B m \le \ov\dim_B m \;. $$ The difference between the Ledrappier and the box dimensions is that in the definition of the box dimensions one and the same set $A$ has to be covered by balls of every radius $r$, whereas in Ledrappier's definition the covered set may vary with $r$, so that $$ \un\dim_L m \le \un\dim_B m \quad\textrm{and}\quad \ov\dim_L m \le \ov\dim_B m \;. $$ By \cite[Proposition 4.1]{Young82}, \begin{equation}\label{eq:dims} \dim_H m \le \un\dim_L m \;.
\end{equation} \subsection{Local definitions} The \emph{lower} and the \emph{upper pointwise dimensions} of the measure $m$ at a point $z\in Z$ are $$ \un\dim_P m (z) = \liminf_{r\to 0}\frac{\log m B(z,r)}{\log r} \quad \textrm{and} \quad \ov\dim_P m (z) = \limsup_{r\to 0}\frac{\log m B(z,r)}{\log r} \;, $$ respectively. Then \begin{equation} \label{eq:HD} \dim_H m = \esssup_z \un\dim_P m (z) \;. \end{equation} In particular, if $m$-a.e.\ \begin{equation} \label{eq:D} \lim_{r\to 0}\frac{\log m B(z,r)}{\log r}=D \;, \end{equation} then $\dim_H m=D$. Moreover, in this case all the reasonable definitions of dimension of the measure $m$ give the same result \cite[Theorem 4.4]{Young82}. \begin{defn} \label{def:M} In the situation when the convergence in \eqref{eq:D} holds just in probability we shall say that $D$ is the \emph{mean dimension} $\dim_M m$ of the measure $m$. We shall also introduce the \emph{lower} and the \emph{upper mean dimensions} of the measure $m$ as, respectively, $$ \begin{aligned} \un\dim_M m &= \sup\left\{t: \left[\frac{\log m B(z,r)}{\log r} - t\right]_-\mathop{\,\longrightarrow\,}^m 0 \right\} \;, \\ \ov\dim_M m &= \inf\left\{t: \left[\frac{\log m B(z,r)}{\log r} - t\right]_+\mathop{\,\longrightarrow\,}^m 0 \right\} \;, \end{aligned} $$ where $[t]_+=\max\{0,t\}, [t]_-=\min\{0,t\}$, and $\displaystyle{\mathop{\,\longrightarrow\,}^m}$ denotes convergence in probability with respect to the measure $m$. \end{defn} The definition of $\dim_M$ first appeared in Ledrappier's paper \cite{Ledrappier83} (also see \cite{Ledrappier84}), whereas $\un\dim_M$ and $\ov\dim_M$ (although obvious generalizations of $\dim_M$) are, apparently, new. Clearly, $\un\dim_M m \le \ov\dim_M m$. In slightly different terms, $[\un\dim_M m, \ov\dim_M m]$ is the minimal closed subinterval of $\R$ with the property that for any closed subset $I$ of its complement $$ m\left\{z\in Z: \frac{\log m B(z,r)}{\log r}\in I\right\} \mathop{\,\longrightarrow\,}_{r\to 0} 0 \;. 
$$ In particular, if $\dim_M m$ exists, then $\un\dim_M m=\ov\dim_M m=\dim_M m$. \subsection{Mean, box and Ledrappier dimensions} \label{sec:bes} We shall now establish simple inequalities between these dimensions. \begin{prop} For any probability measure $m$ on a compact metric space $(Z,\r)$ $$ \un\dim_M m \le \un\dim_L m \;. $$ \end{prop} \begin{proof} Fix a number $D<\un\dim_M m$. Then for any $\e>0$ there exist $r_0>0$ and a set $A\subset Z$ with $m(A)>1-\e$ and such that $m B(z,r)\le r^D$ for all $z\in A$ and $r\le r_0$. Suppose now that $$ m \left( \bigcup_i B(z_i,r) \right) \ge 1-\e $$ for a certain set of points $\{z_i\}$ of cardinality $N$ and a certain $r\le r_0/2$. If $B(z_i,r)$ intersects $A$, then $B(z_i,r)\subset B(z,2r)$ for some $z\in A$, so that $$ m\left( A \cap B(z_i,r) \right) \le m B(z_i,r) \le m B(z,2r) \le (2r)^D \;. $$ Thus, $$ 1-2\e \le m \left( A \cap \bigcup_i B(z_i,r) \right) \le N (2r)^D \;, $$ whence the claim. \end{proof} For establishing the inverse inequality we shall additionally require that the metric space $(Z,\r)$ has the \emph{Besicovitch covering property}, i.e., that for any precompact subset $A\subset Z$ and any bounded function $r:A\to\R_+$ (important particular case: $r$ is constant) the cover $\{B(z,r(z)), z\in A\}$ of $A$ contains a countable subcover whose multiplicity is bounded from above by a universal constant $M=M(Z,\r)$ (recall that the \emph{multiplicity} of a cover is the maximal number of elements of this cover to which a single point may belong). The Besicovitch property is, for instance, satisfied for the Euclidean space, hence, for all its compact subsets endowed with a metric which is Lipschitz equivalent to the Euclidean one. Therefore, it is satisfied for each of the Grassmannians $\Gr_i$ endowed with the metric \eqref{eq:metric}. \begin{prop} For any probability measure $m$ on a compact space $(Z,\r)$ satisfying the Besicovitch property $$ \ov\dim_B m \le \ov\dim_M m \;. 
$$ \end{prop} \begin{proof} Take a number $D>\ov\dim_M m$, let $$ A_r = \left\{z\in Z : m B(z,r)\ge r^D \right\} \;, $$ and consider a cover of $A_r$ by the balls $B(z_i,r),\;z_i\in A_r$ obtained by applying the Besicovitch property. The cardinality of this cover is at least $N(A_r,r)$, whereas its multiplicity is at most $M$, whence $$ N(A_r,r)r^D \le \sum_i m B(z_i,r) \le M \;, $$ so that $$ \frac{\log N(A_r,r)}{\log 1/r}\le D+\frac{\log M}{\log 1/r} \;. $$ For $r\to0$ the right-hand side of the above inequality tends to $D$, whereas $m(A_r)\to 1$ by the choice of $D$, whence $\ov\dim_B m\le D$. \end{proof} \subsection{Final conclusions} Taking stock of the above discussion we obtain \begin{thm}\label{th:dims} For any probability measure $m$ on a compact metric space $$ \left. \begin{aligned} &\dim_H m\\ &\,\un\dim_M m\\ \end{aligned} \right\} \le \un\dim_L m \le \un\dim_B m \;. $$ If, in addition, the space has the Besicovitch property, then $$ \ov\dim_L m \le \ov\dim_B m \le \ov\dim_M m \;. $$ \end{thm} \begin{rem} There is no general inequality between $\dim_H m$ and $\un\dim_M m$. For instance, take two mutually singular measures $m_1,m_2$ for which the dimensions \eqref{eq:D} exist and are different, say, $D_1<D_2$, and let $m$ be their convex combination. Then $\un\dim_M m=D_1$, whereas $\dim_H m=D_2>\un\dim_M m$ by \eqref{eq:HD}. On the other hand, by exploiting the difference between the convergence in probability and the convergence almost everywhere one can also construct examples with $\dim_H m<\un\dim_M m$. We shall briefly describe one such example. Let $Z'$ be the space of unilateral binary sequences $\a=(\a_1,\a_2,\dots)$ with the uniform measure $m'$ and the usual metric $\r'$ for which $-\log\r'(\a,\b)$ is the length of the initial common segment of the sequences $\a$ and $\b$. Take a sequence of cylinder sets $A_n$ with the property that $m' A_n\to 0$, but any point of $Z'$ belongs to infinitely many sets $A_n$.
Also take a sequence of integers $s_n$ (to be specified later), and let $Z$ be the image of the space $Z'$ under the following map: given a sequence $\a\in Z'$ take the set $I=\{n:\a\in A_n\}$ and replace with 0 all the symbols $\a_k$ with $s_n\le k\le 2s_n$ for every $n\in I$. The space $Z$ is endowed with the quotient measure $m$ and the quotient metric $\r$. If the sequence $s_n$ is very rapidly increasing, then \eqref{eq:HD} can be used to show that $\dim_H m = \log 2/2$, whereas $\un\dim_M m=\dim_M m=\log 2$. \end{rem} \section{Limit set} \label{sec:limit} Denote by $\Pr(\B)$ the compact space of probability measures on the flag space $\B$ endowed with the weak$^*$ topology, and let $m$ be the unique $K$-invariant probability measure on $\B$. Then the map $go\mapsto gm$ determines an embedding of the symmetric space $S=G/K$ into $\Pr(\B)$, and gives rise to the \emph{Satake--Furstenberg compactification} of $S$. Its boundary $\ov S\setminus S$ contains the space $\B$ under the identification of its points with the associated delta-measures (but, unless the rank of $G=SL(d,\R)$ is 1, i.e., $d=2$, it also contains other limit measures, see \cite{Guivarch-Ji-Taylor98}). The \emph{limit set} $L_\G$ of a subgroup $\G\subset G$ in this compactification is then defined (see \cite{Guivarch90}) as $$ L_\G =\ov{\G o}\cap \B \subset \B \;. $$ The limit set is obviously $\G$-invariant and closed. Moreover, \begin{thm}[\cite{Guivarch90}] \label{th:limset} If the group $\G$ is Zariski dense in $G$, then its action on the limit set $L_\G$ is minimal (i.e., $L_\G$ has no proper closed $\G$-invariant subsets). \end{thm} Below we shall need the following elementary property. \begin{prop} \label{pr:min} Under the conditions of \thmref{th:limset} let $U\subset\B$ be an open set with $U\cap L_\G \neq \emp$. Then there exist finitely many elements $\g_1,\dots,\g_r\in\G$ such that $$ L_\G \subset \bigcup_i \g_i U\ .
$$ \end{prop} \begin{proof} Minimality of $L_\G$ means that any closed $\G$-invariant subset of $\B$ either contains $L_\G$ or does not intersect it. Since the set $\B\setminus\bigcup_{\g\in\G}\g U$ does not contain $L_\G$ (as $U\cap L_\G \neq \emp$), it does not intersect $L_\G$, so that $L_\G \subset \bigcup_{\g\in\G}\g U$. Finally, since $L_\G$ is compact, the above cover contains a finite subcover. \end{proof} \begin{rem} \thmref{th:limset} and \propref{pr:min} obviously carry over to the projections $L^i_\G=\pi_i(L_\G)\subset\Gr_i$ of the limit set $L_\G$ to the Grassmannians $\Gr_i$. \end{rem} By \propref{pr:lyap}(iv) (see \cite{Guivarch-Raugi85} for a more general argument) a.e.\ sample path $\x$ converges as $n\to+\infty$ to the limit flag $\bnd\x$ in the Satake--Furstenberg compactification as well. Therefore, $\supp\nu\subset L_\G$. If $\supp\mu$ generates $\G$ as a semigroup, then $\mu$-stationarity of the harmonic measure $\nu$ implies its quasi-invariance, so that in this case $\supp\nu$ is $\G$-invariant, and, if $\G$ is Zariski dense, $\supp\nu=L_\G$ by \thmref{th:limset}. \begin{rem} If $\G$ is a lattice, then $L_\G=\B$. On the other hand, if $\G$ is Zariski dense, then, as it is shown in \cite{Guivarch90}, $L_\G$ has positive Hausdorff dimension, which is deduced from the positivity of the dimension of the harmonic measure (under the assumption that $\mu$ has an exponential moment). Also see \cite{Link04} for recent results on the Hausdorff dimension of the \emph{radial limit set}. It would be interesting to investigate existence of random walks such that their harmonic measure has the maximal Hausdorff dimension (equal to the dimension of the limit set), cf. \cite{Connell-Muchnik07}. 
\end{rem} \section{Dimension of the harmonic measure} \label{sec:link} \subsection{Rate of contraction estimate} Recall (see \secref{sec:RW}) that the negative part $(\ch x_n)_{n\ge 0}=(x_{-n})_{n\ge 0}$ of a bilateral sample path $\x$ performs the random walk on $\G$ governed by the reflected measure $\ch\mu$. Denote by $\bnd^-\x\in\B$ the corresponding limit flag, and for $i\in\{1,2,\dots,d-1\}$ let $$ \xi_\x=\left(\bnd_{d-i}^-\x\right)^\bot\in\Gr_i $$ be the orthogonal complement in $\R^d$ of the $(d-i)$-dimensional subspace of the flag $\bnd^-\x$ (for simplicity we omit the index $i$ in the notation for $\xi_\x$; the Grassmannian $\Gr_i$ to which it belongs should always be clear from the context). \begin{thm} \label{th:contr} For any Grassmannian $\Gr_i$, any $r<1$, and $\P$-a.e.\ $\x\in\X$, $$ \liminf_n \left[ - \frac1n \log\diam \ch x_n^{-1} B(\xi_\x,r) \right] \ge \l_i - \l_{i+1} \;. $$ \end{thm} \begin{proof} Let us consider first the case when $i=1$. Then by the definition of the metric $\r$ \eqref{eq:metric} on $\Gr_1\cong P\R^d$ the ball $B(\xi_\x,r)$ consists of the projective classes of all the vectors $v+w$, where $v\in\xi_\x\setminus\{0\}$ and $w\bot\xi_\x$ with \begin{equation} \label{eq:C} \frac{\|w\|}{\|v\|}\le C \quad \textrm{for a certain constant} \quad C=C(r) \;. \end{equation} By \propref{pr:lyap}(iii) applied to the random walk $(\G,\ch\mu)$ we have that $$ \lim_n \frac1n \log\frac{\|\ch x_n^{-1} w\|}{\|\ch x_n^{-1} v\|} \le\l_2-\l_1 $$ uniformly under condition \eqref{eq:C}, whence the claim. The general case argument follows the same lines with the only difference that now instead of the action on $P\R^d$ one has to consider the action on $P\bigwedge^i\R^d$. 
The sequence of matrices $\bigwedge^i \ch x_n^{-1}$ (which are the images of $\ch x_n^{-1}$ in the $i$-th external power representation) is also Lyapunov regular with the Lyapunov spectrum consisting of all the sums of the form $\l_{j_1}+\l_{j_2}+\dots+\l_{j_i}$ with $1\le j_1 < \cdots < j_i \le d$. In particular, the top of the spectrum (corresponding precisely to the eigenspace $\xi_\x\in P\bigwedge^i\R^d$ as orthogonal to the highest dimension proper subspace of the Lyapunov flag) is $\l_1+\l_2+\dots+\l_i$, whereas the second point in the spectrum is $\l_1+\l_2+\dots+\l_{i-1}+\l_{i+1}$, the spectral gap being $\l_i-\l_{i+1}$. \end{proof} \begin{prop}\label{pr:balls} For any index $i\in\{1,2,\dots,d-1\}$, any $\e>0$ and $\P$-a.e.\ sample path $\x$ the sets $$ A_N = \bigcap_{n\ge N} \ch x_n B \left( \bnd_iT^{-n}\x, e^{-n(\l_i-\l_{i+1}-\e)}\right) \subset \Gr_i $$ have positive measure $\nu_i$ for all sufficiently large $N$. \end{prop} \begin{proof} By definition of the metric $\r$ \eqref{eq:metric} its values do not exceed 1, and for any $\xi\in\Gr_i$ the radius 1 sphere $S(\xi,1)$ centered at $\xi$ consists precisely of those $\z\in\Gr_i$ for which the vectors from $\bigwedge^i\R^d$ associated with $\xi$ and $\z$ are orthogonal. The latter condition on $\z$ is algebraic, so that by \thmref{th:sub} $\nu_i S(\xi,1)=0$ for any $\xi\in\Gr_i$ (the measure $\nu_i$ being the projection of the measure $\nu$). Therefore, for a.e.\ sample path $\x$ there exists $r<1$ such that $\nu_i B(\xi_\x,r)>0$. On the other hand, by \thmref{th:contr} and \eqref{eq:Tc} $B(\xi_\x,r)\subset A_N$ for all sufficiently large $N$. \end{proof} \subsection{Dimension estimate} Recall that $\bnd_i T^{-n}\x = \ch x_n^{-1}\bnd_i\x$ \eqref{eq:Tc}. 
By \lemref{lem:dentr} and \propref{pr:balls}, for $\P$-a.e.\ sample path $\x$ \begin{equation}\label{eq:minentr} \begin{split} &\limsup_{n\to\infty} \left[ -\frac1n \log\nu_i B \left( \bnd_iT^{-n}\x, e^{-n(\l_{i}-\l_{i+1}-\e)} \right) \right] \\ &\le \limsup_{n\to\infty}\left[-\frac{1}{n} \log \ch x_n\nu_i(A_N)\right] \le E_i \;. \end{split} \end{equation} The left-hand side of this inequality looks almost like the expression in the definition of the pointwise dimension, with the only difference that the ball centers vary. We shall take care of this difference by switching to the mean dimension and using $\mu$-stationarity of the measures $\nu_i$. \begin{thm}\label{th:dim} For any $i\in\{1,2,\dots,d-1\}$, $$ \dim_H(\nu_i)\le\ov\dim_B(\nu_i)\le\ov\dim_M\nu_i \le \frac{E_i}{\l_i-\l_{i+1}} \;. $$ \end{thm} \begin{proof} Let $$ A = \bigcap_\eta\bigcup_N\bigcap_{n\ge N}\Big\{\x : -\frac1n\log \nu_i B\big(\bnd_iT^{-n}\x,e^{-n(\l_i-\l_{i+1}-\e)}\big) \le E_i+\eta\Big\} $$ be the set of sample paths satisfying condition \eqref{eq:minentr}. Since $\P(A)=1$, for any $\eta,\chi>0$ $$ \P\Big\{\x : -\frac{1}{n}\log\nu_i B\big(\bnd_iT^{-n}\x,e^{-n(\l_i-\l_{i+1}-\e)}\big) \le E_i+\eta\Big\}\ge 1-\chi $$ for all sufficiently large $n$. The transformation $T$ preserves the measure $\P$, so that its image under the map $\x\mapsto\bnd_iT^{-n}\x$ is $\nu_i$, whence the rightmost inequality. The other inequalities follow from \thmref{th:dims} because the Grassmannian $\Gr_i$ endowed with the metric $\r$ has the Besicovitch property (see \secref{sec:bes}). \end{proof} \section{The construction}\label{sec:const} For the rest of this Section we shall fix a probability measure $\mu$ on a subgroup $\G\subset G$ satisfying our standing assumptions from \secref{sec:RW} and \secref{sec:exponents}.
\conv{In addition we shall assume in this Section that $\supp\mu$ generates $\G$ as a semigroup.} Recall that an element of the group $G=SL(d,\R)$ is called \emph{$\R$-regular} if it is diagonalizable over $\R$ and the absolute values of its eigenvalues are pairwise distinct. By \cite{Benoist-Labourie93} any Zariski dense subgroup of $G$ contains such an element. Let us fix an $\R$-regular element $\g\in\G$ and consider the sequence of probability measures on $\G$ \begin{equation} \label{eq:mu_k} \mu^k = \frac12\mu+\frac14\left( \d_{\g^k}+\d_{\g^{-k}} \right) \;, \end{equation} where $\d_g$ denotes the Dirac measure at $g$. \begin{rem} Actually, we only need $\g$ to be diagonalizable over $\R$ with at least one eigenvalue of absolute value different from $1$. \end{rem} Denote by $\nu^k$ the harmonic measures on the flag space $\B$ of the random walks $\left(\G,\mu^k\right)$, and by $\nu^k_i$ their quotients on the Grassmannians $\Gr_i,\;1\le i \le d-1$. Our Main Theorem follows from \begin{thm} \label{th:main} $$ \min_i\{\dim_H\nu_i^k\} \mathop{\,\longrightarrow\,}_{k\to\infty} 0 \;. $$ \end{thm} In turn, \thmref{th:main} is an immediate consequence of a combination of the inequality from \thmref{th:dim} and \begin{thm} \label{th:estimate} The measures $\mu^k$ \eqref{eq:mu_k} have the property that the entropies $H(\mu^k)$ are uniformly bounded, whereas the lengths of their Lyapunov vectors $\l(\mu^k)$ (equivalently, the top Lyapunov exponents $\l_1(\mu^k)$) go to infinity. \end{thm} The rest of this Section is devoted to a proof of \thmref{th:estimate}, split into a number of separate claims. The first one is obvious, since the entropy of a mixture is at most the average of the entropies of its components plus the entropy of the mixing weights, so that $H(\mu^k)\le\frac12 H(\mu)+\frac32\log 2$ for every $k$: \begin{claim}\label{pr:h is bdd} The measures $\mu^k$ have uniformly bounded entropies $H\left(\mu^k\right)$.
\end{claim} In view of the discussion from \secref{sec:entr} it immediately implies \begin{cor}\label{cor:unif} The asymptotic entropies $h\left(\G,\mu^k\right)$ and therefore all the differential entropies $E^k_i=E_{\mu^k}\left(\Gr_i,\nu_i^k\right)$ are uniformly bounded. \end{cor} For estimating the top Lyapunov exponent $\l_1\left(\mu^k\right)$ we shall use the Furstenberg formula \eqref{eq:furst}, by which \begin{equation} \label{eq:l1} \begin{aligned} \l_1 \left(\mu^k\right) &= \frac12 \sum_g \mu(g)\int_{P\R^d} \log\frac{\|gv\|}{\|v\|}\ d\nu^k_1(\ov{v}) \\ &\qquad + \frac14\int_{P\R^d} \left( \log\frac{\|\g^k v\|}{\|v\|}+ \log\frac{\|\g^{-k} v\|}{\|v\|}\right) d\nu^k_1(\ov{v}) \;. \end{aligned} \end{equation} The absolute value of the first term of this sum is uniformly bounded because $\mu$ has a finite first moment. For dealing with the second term of the sum \eqref{eq:l1} we need the following claims. \begin{claim}\label{claim:g^k} The sum $$ \left( \log\frac{\|\g^k v\|}{\|v\|} + \log\frac {\|\g^{-k} v\|} {\|v\|}\right) $$ is bounded from below uniformly on $v\in\R^d\setminus\{0\}$ and $k\ge 0$. \end{claim} \begin{proof} If $\d$ is a diagonal matrix, then obviously, $$ \|\d^k v\| \|\d^{-k}v\| \ge \la \d^k v,\d^{-k} v\ra = \|v\|^2 \;. $$ Now, $\g=h^{-1}\d h$ is diagonalizable, whence $$ \begin{aligned} \|\g^k v\| \|\g^{-k}v\| &\ge \|h\|^{-2} \|\d^k h v\| \|\d^{-k} h v \| \ge \|h\|^{-2} \|hv\|^2\\ &\ge \|h\|^{-2}\|h^{-1}\|^{-2} \|v\|^2 \;. \end{aligned} $$ \end{proof} \begin{claim} \label{claim:opens} For any open subset $U\subset P\R^d$ which intersects the limit set $L^1_\G$ (see \secref{sec:limit}), the measures $\nu^k_1(U)$ are bounded away from zero uniformly on $k$. \end{claim} \begin{proof} By \propref{pr:min} there exists a finite set $A\subset\G$ such that $$ L^1_\G\subset \bigcup_{g\in A} g^{-1} U \;. $$ Since $\supp\mu$ generates $\G$ as a semigroup, for each $g\in A$ there exists an integer $s=s(g)$ such that $g\in\supp\mu^{*s}$. 
Then by $\mu^k$-stationarity of $\nu_1^k$, for each $g\in A$ $$ \nu^k_1(U) \ge \frac1{2^s} \mu^{*s}(g)g\nu_1^k(U) = \frac1{2^s} \mu^{*s}(g)\nu_1^k\left(g^{-1} U\right) \ge \e \nu_1^k\left(g^{-1} U\right) $$ for a certain $\e=\e(A)>0$. Summing up the above inequalities over all $g\in A$ we obtain $$ |A| \nu^k_1(U) \ge \e \sum_{g\in A} \nu_1^k\left(g^{-1} U\right) \ge \e\nu_1^k(L^1_\G) = \e \;, $$ whence the claim. \end{proof} Now we are ready to prove \begin{claim} \label{claim:l1} $$ \lim_{k\to\infty}\l_1\left(\mu^k\right)=+\infty \;. $$ \end{claim} \begin{proof} We shall fix a diagonalization $\g=h^{-1}\d h$ with $\d=\diag(\d_1,\dots,\d_d)$ in such a way that $|\d_1|>1$ and $|\d_d|<1$, and define the open set $U$ as $$ U = \left\{\ov{v}\in P\R^d : \frac{\la h v,e_1\ra }{\|h v\|}>\b \ \textrm{and}\ \frac{\la h v,e_d\ra }{\|h v\|}>\b \right\} \;, $$ i.e., by requiring that the first and the last coordinates (with respect to the standard basis $\E$) of the normalized vector $hv$ be greater than $\b$. The value of $\b$ is chosen to make sure that $U$ is non-empty (for instance, one can take $\b=1/2$). If $\ov{v}\in U$, then $$ \|\g^k v\|= \|h^{-1}\d^k h v\|\ge \|h\|^{-1} |\d_1|^k \la h v,e_1\ra \ge \|h\|^{-1} |\d_1|^k \b \|h v\| \;, $$ and thus $$ \frac{\|\g^k v\|}{\| v\|}\ge \|h^{-1}\|^{-1}~\|h\|^{-1}\b |\d_1|^k \;. $$ In the same way, $$ \frac{\|\g^{-k} v\|}{\| v\|}\ge \|h^{-1}\|^{-1}~\|h\|^{-1}\b |\d_d|^{-k} \;, $$ so that $$ \log\frac{\|\g^k v\|}{\| v\|} + \log\frac{\|\g^{-k} v\|}{\| v\|} \to \infty $$ as $k\to\infty$ uniformly on $\ov{v}\in U$, which in view of \eqref{eq:l1} in combination with \claimref{claim:g^k} and \claimref{claim:opens} finishes the argument. \end{proof} \begin{rem} The measure $\mu$ in our construction can clearly be chosen symmetric and, if the group $\G$ is finitely generated, finitely supported. 
Obviously, the measures $\mu^k$ then also have these properties, so that \emph{singular harmonic measures can be produced by symmetric finitely supported measures on the group}. \end{rem} \end{document}
\begin{document} \title{${}$ \vskip -1.2cm Half-arc-transitive graphs of arbitrary even valency greater than 2} \author{ Marston D.E. Conder \\[+1pt] {\normalsize Department of Mathematics, University of Auckland,}\\[-4pt] {\normalsize Private Bag 92019, Auckland 1142, New Zealand} \\[-3pt] {\normalsize [email protected]}\\[+4pt] \and Arjana \v{Z}itnik \\[+1pt] {\normalsize Faculty of Mathematics and Physics, University of Ljubljana,} \\[-4pt] {\normalsize Jadranska 19, 1000 Ljubljana, Slovenia} \\[-3pt] {\normalsize [email protected]} } \date{} \maketitle \begin{abstract} A {\em half-arc-transitive\/} graph is a regular graph that is both vertex- and edge-transitive, but is not arc-transitive. If such a graph has finite valency, then its valency is even, and greater than $2$. In 1970, Bouwer proved that there exists a half-arc-transitive graph of every even valency greater than 2, by giving a construction for a family of graphs now known as $B(k,m,n)$, defined for every triple $(k,m,n)$ of integers greater than $1$ with $2^m \equiv 1 \mod n$. In each case, $B(k,m,n)$ is a $2k$-valent vertex- and edge-transitive graph of order $mn^{k-1}$, and Bouwer showed that $B(k,6,9)$ is half-arc-transitive for all $k > 1$. For almost 45 years the question of exactly which of Bouwer's graphs are half-arc-transitive and which are arc-transitive has remained open, despite many attempts to answer it. In this paper, we use a cycle-counting argument to prove that almost all of the graphs constructed by Bouwer are half-arc-transitive. In fact, we prove that $B(k,m,n)$ is arc-transitive if and only if $n = 3$, or $(k,n) = (2,5)$, or $(k,m,n) = (2,3,7)$ or $(2,6,7)$ or $(2,6,21)$. In particular, $B(k,m,n)$ is half-arc-transitive whenever $m > 6$ and $n > 5$. This gives an easy way to prove that there are infinitely many half-arc-transitive graphs of each even valency $2k > 2$.
\\ \noindent Keywords: graph, half-arc-transitive, edge-transitive, vertex-transitive, arc-transitive, automorphisms, cycles \noindent Mathematics Subject Classification (2010): 05E18, 20B25. \end{abstract} \section{Introduction} \label{intro} In the 1960s, W.T.~Tutte \cite{Tutte} proved that if a connected regular graph of odd valency is both vertex-transitive and edge-transitive, then it is also arc-transitive. At the same time, Tutte observed that it was not known whether the same was true for even valency. Shortly afterwards, I.Z.~Bouwer \cite{Bouwer} constructed a family of vertex- and edge-transitive graphs of any given even valency $2k > 2$ that are not arc-transitive. Any graph that is vertex- and edge-transitive but not arc-transitive is now known as a {\em half-arc-transitive\/} graph. Every such graph has even valency, and since connected graphs of valency $2$ are cycles, which are arc-transitive, the valency must be at least $4$. Quite a lot is now known about half-arc-transitive graphs, especially in the $4$-valent case --- see \cite{ConderMarusic,ConderPotocnikSparl,Marusic1998b,DM05} for example. Also a lot of attention has been paid recently to half-arc-transitive group actions on edge-transitive graphs --- see \cite{HujdurovicKutnarMarusic} for example. In contrast, however, relatively little is known about half-arc-transitive graphs of higher valency. Bouwer's construction produced a vertex- and edge-transitive graph $B(k,m,n)$ of order $mn^{k-1}$ and valency $2k$ for every triple $(k,m,n)$ of integers greater than $1$ such that $2^{m} \equiv 1$ mod $n$, and Bouwer proved in \cite{Bouwer} that $B(k,6,9)$ is half-arc-transitive for every $k > 1$. Bouwer also showed that the latter is not true for every triple $(k,m,n)$; for example, $B(2,3,7)$, $B(2,6,7)$ and $B(2,4,5)$ are arc-transitive.
For the last $45$ years, the question of exactly which of Bouwer's graphs are half-arc-transitive and which are arc-transitive has remained open, despite a number of attempts to answer it. Three decades after Bouwer's paper, C.H.~Li and H.-S.~Sim \cite{LiSim} developed a quite different construction for a family of half-arc-transitive graphs, using Cayley graphs for metacyclic $p$-groups, and in doing this, they proved the existence of infinitely many half-arc-transitive graphs of each even valency $2k > 2$. Their approach, however, required a considerable amount of group-theoretic analysis. In this paper, we use a cycle-counting argument to prove that almost all of the graphs constructed by Bouwer in \cite{Bouwer} are half-arc-transitive, and thereby give an easier proof of the fact that there exist infinitely many half-arc-transitive graphs of each even valency $2k > 2$. Specifically, we prove the following: \begin{theorem} \label{thm:main} The graph $B(k,m,n)$ is arc-transitive if and only if $n = 3$, or $(k,n) = (2,5)$, or $(k,m,n) = (2,3,7)$ or $(2,6,7)$ or $(2,6,21)$. In particular, $B(k,m,n)$ is half-arc-transitive whenever $m > 6$ and $n > 5$. \end{theorem} By considering the $6$-cycles containing a given $2$-arc, we prove in Section~\ref{sec:main} that $B(k,m,n)$ is half-arc-transitive whenever $m > 6$ and $n > 7$, and then we adapt this for the other half-arc-transitive cases in Section~\ref{sec:other}. In between, we prove arc-transitivity in the cases given in the above theorem in Section~\ref{sec:AT}. But first we give some additional background about the Bouwer graphs in the following section. \section{Further background} \label{further} First we give the definition of the Bouwer graph $B(k,m,n)$, for every triple $(k,m,n)$ of integers greater than $1$ such that $2^{m} \equiv 1$ mod $n$. The vertices of $B(k,m,n)$ are the $k$-tuples $(a,b_2,b_3,\dots,b_k)$ with $a \in \mathbb Z_m$ and $b_j \in \mathbb Z_n$ for $2 \le j \le k$.
We will sometimes write a given vertex as $(a,{\bf b})$, where ${\bf b} = (b_2,b_3,\dots,b_k)$. Any two such vertices are adjacent if they can be written as $(a,{\bf b})$ and $(a+1,{\bf c})$ where either ${\bf c} = {\bf b}$, or ${\bf c} = (c_2,c_3,\dots,c_k)$ differs from ${\bf b} = (b_2,b_3,\dots,b_k)$ in exactly one position, say the $(j\!-\!1)$st position, where $c_j = b_j +2^a$. Note that the condition $2^{m} \equiv 1$ mod $n$ ensures that $2$ is a unit mod $n$, and hence that $n$ is odd. Also note that the graph is simple. In what follows, we let ${\bf e}_j$ be the element of $(\mathbb Z_n)^{k-1}$ with $j\hskip 1pt$th term equal to $1$ and all other terms equal to $0$, for $1 \le j < k$. With this notation, we see that the neighbours of a vertex $(a,{\bf b})$ are precisely the vertices $(a+1,{\bf b})$, $(a-1,{\bf b})$, $(a+1,{\bf b}+2^{a}{\bf e}_j)$ and $(a-1,{\bf b}-2^{a-1}{\bf e}_j)$ for $1 \le j < k$, and in particular, this shows that the graph $B(k,m,n)$ is regular of valency $2k$. Next, we recall that for all $(k,m,n)$, the graph $B(k,m,n)$ is both vertex- and edge-transitive; see \cite[Proposition~1]{Bouwer}. Also $B(k,m,n)$ is bipartite if and only if $m$ is even. 
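As a computational sanity check (not part of any proof), the adjacency rule just described can be encoded directly. The following Python sketch is illustrative only, with names of our own choosing; it assumes Python 3.8 or later, where a three-argument pow with a negative exponent returns the modular inverse, which handles the exponent $a-1$ when $a = 0$.

```python
from itertools import product

def neighbours(v, k, m, n):
    """Neighbours of v = (a, b_2, ..., b_k) in B(k, m, n): the first
    coordinate is taken mod m, and the remaining k-1 coordinates mod n."""
    a, b = v[0], list(v[1:])
    out = [((a + 1) % m, *b), ((a - 1) % m, *b)]
    for j in range(k - 1):
        up = b[:]; up[j] = (up[j] + pow(2, a, n)) % n       # (a+1, b + 2^a e_j)
        dn = b[:]; dn[j] = (dn[j] - pow(2, a - 1, n)) % n   # (a-1, b - 2^(a-1) e_j)
        out += [((a + 1) % m, *up), ((a - 1) % m, *dn)]
    return out

# B(3, 6, 9): order m * n^(k-1) = 486, regular of valency 2k = 6,
# and the adjacency relation is symmetric.
k, m, n = 3, 6, 9
vertices = [(a, *b) for a in range(m) for b in product(range(n), repeat=k - 1)]
assert len(vertices) == m * n ** (k - 1)
assert all(len(set(neighbours(v, k, m, n))) == 2 * k for v in vertices)
assert all(v in neighbours(u, k, m, n)
           for v in vertices for u in neighbours(v, k, m, n))
```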
Moreover, it is easy to see that $B(k,m,n)$ has the following three automorphisms: \noindent (i) $\theta$, of order $k-1$, taking each vertex $(a,{\bf b}) = (a,b_2,b_3,\dots,b_{k-1},b_k)$ to the vertex $(a,{\bf b}') = (a,b_3,b_4,\dots,b_k,b_2)$, obtained by cyclically shifting its last $k-1$ entries, \noindent (ii) $\tau$, of order $m$, taking each vertex $(a,{\bf b}) = (a,b_2,b_3,\dots,b_{k-1},b_k)$ to the vertex $(a+1,2{\bf b}) = (a+1,2b_2,2b_3,\dots,2b_{k-1},2b_k)$, obtained by increasing its first entry $a$ by $1$ and multiplying the others by $2$, and \noindent (iii) $\psi$, of order $2$, taking each vertex $(a,{\bf b}) = (a,b_2,b_3,\dots,b_{k-1},b_k)$ to the vertex $(a,{\bf b}'') = (a,2^{a}-1-(b_2+b_3+\dots+b_k),b_3,\dots,b_{k-1},b_k)$, obtained by replacing its second entry $b_2$ by $2^{a}-1-(b_2+b_3+\dots+b_k)$. \noindent (Note: in the notation of \cite{Bouwer}, the automorphism $\psi$ is $T_2 \circ S_2$.) The automorphisms $\theta$ and $\psi$ both fix the `zero' vertex $(0,{\bf 0}) = (0,0,\dots,0)$, and $\theta$ induces a permutation of its $2k$ neighbours that fixes each of the two vertices $(1,{\bf 0}) = (1,0,0,\dots,0)$ and $(-1,{\bf 0}) = (-1,0,0,\dots,0)$ and induces two $(k\!-\!1)$-cycles on the others, while $\psi$ swaps $(1,{\bf 0})$ with $(1,{\bf e}_1) = (1,1,0,\dots,0)$, and swaps $(-1,{\bf 0})$ with $(-1,-2^{-1}{\bf e}_1) = (-1,-2^{-1},0,\dots,0)$, and fixes all the others. It follows that the subgroup generated by $\theta$ and $\psi$ fixes the vertex $(0,{\bf 0})$ and has two orbits of length $k$ on its neighbours, with one orbit consisting of the vertices of the form $(1,{\bf b})$ where ${\bf b} = {\bf 0}$ or ${\bf e}_j$ for some $j$, and the other consisting of those of the form $(-1,{\bf c})$ where ${\bf c} = {\bf 0}$ or $-2^{-1}{\bf e}_j$ for some $j$.
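These three maps are easy to confirm by machine on a small example. The sketch below is illustrative only (the helper names are ours): it rebuilds the adjacency rule of $B(k,m,n)$, then checks that $\theta$, $\tau$ and $\psi$ preserve the edge set of $B(3,6,9)$, and that $\psi$ is an involution. Python 3.8+ is assumed for modular inverses via pow.

```python
from itertools import product

def neighbours(v, k, m, n):   # adjacency rule of B(k, m, n)
    a, b = v[0], list(v[1:])
    out = [((a + 1) % m, *b), ((a - 1) % m, *b)]
    for j in range(k - 1):
        up = b[:]; up[j] = (up[j] + pow(2, a, n)) % n
        dn = b[:]; dn[j] = (dn[j] - pow(2, a - 1, n)) % n
        out += [((a + 1) % m, *up), ((a - 1) % m, *dn)]
    return out

def is_automorphism(f, k, m, n):
    """Does the vertex map f preserve the edge set of B(k, m, n)?"""
    verts = [(a, *b) for a in range(m) for b in product(range(n), repeat=k - 1)]
    edges = {frozenset((v, u)) for v in verts for u in neighbours(v, k, m, n)}
    return {frozenset(map(f, e)) for e in edges} == edges

k, m, n = 3, 6, 9
theta = lambda v: (v[0], *v[2:], v[1])                         # shift last k-1 entries
tau   = lambda v: ((v[0] + 1) % m, *((2 * x) % n for x in v[1:]))
psi   = lambda v: (v[0], (pow(2, v[0], n) - 1 - sum(v[1:])) % n, *v[2:])

assert all(is_automorphism(f, k, m, n) for f in (theta, tau, psi))
assert psi(psi((2, 5, 7))) == (2, 5, 7)                        # psi has order 2
```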
By edge-transitivity, the graph $B(k,m,n)$ is arc-transitive if and only if it admits an automorphism that interchanges the `zero' vertex $(0,{\bf 0})$ with one of its neighbours, in which case the above two orbits of $\langle \theta, \psi \rangle$ on the neighbours of $(0,{\bf 0})$ are merged into one under the full automorphism group. We will use the automorphism $\tau$ in the next section. We will also use the following, which is valid in all cases, not just those with $m > 6$ and $n > 7$ considered in the next section. \begin{lemma} \label{threearcs} Every $3$-arc $\,v_0 \!\sim\! v_1 \!\sim\! v_2 \!\sim\! v_3$ in $X = B(k,m,n)$ with first vertex $v_0 = (0,{\bf 0})$ is of one of the following forms, with $r,s,t \in \{1,\dots,k\!-\!1\}$ in each case$\,:$ \\[+10pt] \begin{tabular}{rl} \ {\rm (1)} & $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,{\bf 0}) \!\sim\! (3,{\bf d})$, where $\,{\bf d} = {\bf 0}$ or $\,4{\bf e}_r$, \\[+4 pt] \ {\rm (2)} & $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,{\bf 0}) \!\sim\! (1,-2{\bf e}_r)$, \\[+4 pt] \ {\rm (3)} & $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,2{\bf e}_r) \!\sim\! (3,2{\bf e}_r\!+\!{\bf d})$, where $\,{\bf d} = {\bf 0}$ or $\,4{\bf e}_s$, \\[+4 pt] \ {\rm (4)} & $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,2{\bf e}_r) \!\sim\! (1,2{\bf e}_r\!-\!{\bf d})$, where $\,{\bf d} = {\bf 0}$ or $\,2{\bf e}_s$ with $s \ne r$, \\[+4 pt] \ {\rm (5)} & $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (1,-{\bf e}_r\!+\!{\bf d})$, where $\,{\bf d} = {\bf 0}$ or $\,{\bf e}_s$ with $s \ne r$, \\[+4 pt] \ {\rm (6)} & $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (-1,-{\bf e}_r\!-\!{\bf d})$, where $\,{\bf d} = {\bf 0}$ or $\,2^{-1}{\bf e}_s$, \\[+4 pt] \ {\rm (7)} & $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (2,{\bf e}_r) \!\sim\! (3,{\bf e}_r\!+\!{\bf d})$, where $\,{\bf d} = {\bf 0}$ or $\,4{\bf e}_s$, \\[+4 pt] \ {\rm (8)} & $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (2,{\bf e}_r) \!\sim\!
(1,{\bf e}_r\!-\!2{\bf e}_s)$, \\[+4 pt] \end{tabular} \begin{tabular}{rl} \ {\rm (9)} & $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (2,{\bf e}_r\!+\!2{\bf e}_s) \!\sim\! (3,{\bf e}_r\!+\!2{\bf e}_s\!+\!{\bf d})$, where $\,{\bf d} = {\bf 0}$ or $\,4{\bf e}_t$, \\[+4 pt] {\rm (10)} & $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (2,{\bf e}_r\!+\!2{\bf e}_s) \!\sim\! (1,{\bf e}_r\!+\!2{\bf e}_s\!-\!{\bf d})$, \\ & \quad where $\,{\bf d} = {\bf 0}$ or $\,2{\bf e}_t$ with $t \ne s$, \\[+4 pt] {\rm (11)} & $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r) \!\sim\! (1,{\bf e}_r+{\bf e}_s)$, \\[+4 pt] {\rm (12)} & $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r) \!\sim\! (-1,{\bf e}_r\!-\!{\bf d})$, where $\,{\bf d} = {\bf 0}$ or $\,2^{-1}{\bf e}_s$, \\[+4 pt] {\rm (13)} & $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r-{\bf e}_s) \!\sim\! (1,{\bf e}_r-{\bf e}_s\!+\!{\bf d})$, where $s \ne r$, \\ & \quad and $\,{\bf d} = {\bf 0}$ or $\,{\bf e}_t$ with $t \ne s$, \\[+4 pt] {\rm (14)} & $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r-{\bf e}_s) \!\sim\! (-1,{\bf e}_r-{\bf e}_s\!-\!{\bf d})$, where $s \ne r$, \\ & \quad and $\,{\bf d} = {\bf 0}$ or $\,2^{-1}{\bf e}_t$, \\[+4 pt] {\rm (15)} & $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (0,2^{-1}{\bf e}_r) \!\sim\! (1,2^{-1}{\bf e}_r\!+\!{\bf d})$, where $\,{\bf d} = {\bf 0}$ or $\,{\bf e}_s$, \\[+4 pt] {\rm (16)} & $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (0,2^{-1}{\bf e}_r) \!\sim\! (-1,2^{-1}{\bf e}_r\!-\!{\bf d})$, \\ & \quad where $\,{\bf d} = {\bf 0}$ or $\,2^{-1}{\bf e}_s$ with $s \ne r$, \\[+4 pt] {\rm (17)} & $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (-2,{\bf 0}) \!\sim\! (-1,2^{-2}{\bf e}_r)$, \\[+4 pt] {\rm (18)} & $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (-2,{\bf 0}) \!\sim\! (-3,-{\bf d})$, where $\,{\bf d} = {\bf 0}$ or $\,2^{-3}{\bf e}_r$, \\[+4 pt] {\rm (19)} & $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (-2,-2^{-2}{\bf e}_r) \!\sim\! 
(-1,-2^{-2}{\bf e}_r\!+\!{\bf d})$, \\ & \quad where $\,{\bf d} = {\bf 0}$ or $\,2^{-2}{\bf e}_s$ with $s \ne r$, \\[+4 pt] {\rm (20)} & $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (-2,-2^{-2}{\bf e}_r) \!\sim\! (-3,-2^{-2}{\bf e}_r\!-\!{\bf d})$, \\ & \quad where $\,{\bf d} = {\bf 0}$ or $\,2^{-3}{\bf e}_s$, \\[+4 pt] {\rm (21)} & $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r) \!\sim\! (1,-2^{-1}{\bf e}_r\!+\!{\bf d})$, \\ & \quad where $\,{\bf d} = {\bf 0}$ or $\,{\bf e}_s$, \\[+4 pt] {\rm (22)} & $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r) \!\sim\! (-1,-2^{-1}{\bf e}_r\!-\!2^{-1}{\bf e}_s)$, \\[+4 pt] {\rm (23)} & $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s) \!\sim\! (1,-2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s\!+\!{\bf d})$, \\ & \quad where $s \ne r$, and $\,{\bf d} = {\bf 0}$ or $\,{\bf e}_t$, \\[+4 pt] {\rm (24)} & $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s) \!\sim\! (-1,-2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s\!-\!{\bf d})$, \\ & \quad where $s \ne r$, and $\,{\bf d} = {\bf 0}$ or $\,2^{-1}{\bf e}_t$ with $t \ne s$, \\[+4 pt] {\rm (25)} & $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r) \!\sim\! (-1,-2^{-1}{\bf e}_r\!+\!2^{-2}{\bf e}_s)$, \\[+4 pt] {\rm (26)} & $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r) \!\sim\! (-3,-2^{-1}{\bf e}_r\!-\!{\bf d})$, \\ & \quad where $\,{\bf d} = {\bf 0}$ or $\,2^{-3}{\bf e}_s$, \\[+4 pt] {\rm (27)} & $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r\!-\!2^{-2}{\bf e}_s) \!\sim\! (-1,-2^{-1}{\bf e}_r\!-\!2^{-2}{\bf e}_s\!+\!{\bf d})$, \\ & \quad where $\,{\bf d} = {\bf 0}$ or $\,2^{-2}{\bf e}_t$ with $t \ne s$, \\[+4 pt] {\rm (28)} & $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r\!-\!2^{-2}{\bf e}_s) \!\sim\! 
(-3,-2^{-1}{\bf e}_r\!-\!2^{-2}{\bf e}_s\!-\!{\bf d})$, \\ & \quad where $\,{\bf d} = {\bf 0}$ or $\,2^{-3}{\bf e}_t$. \\ \end{tabular} \end{lemma} \begin{proof} This follows directly from the definition of $X = B(k,m,n)$. \end{proof} Note that the 28 cases given in Lemma~\ref{threearcs} fall naturally into $14$ pairs, with each pair determined by the form of the initial $2$-arc $\,v_0 \!\sim\! v_1 \!\sim\! v_2$. Also it is easy to see that the number of $3$-arcs in each case is \\[-4pt] $$\begin{cases} \ k & \hbox{in cases 1 and 18,} \\[-1 pt] \ k\!-\!1 & \hbox{in cases 2 and 17,} \\[-1 pt] \ k(k\!-\!1) & \hbox{in cases 3, 6, 7, 12, 15, 20, 21 and 26,} \\[-1 pt] \ (k\!-\!1)^2 & \hbox{in cases 4, 5, 8, 11, 16, 19, 22 and 25,} \\[-1 pt] \ k(k\!-\!1)^2 & \hbox{in cases 9 and 28,} \\[-1 pt] \ (k\!-\!1)^3 & \hbox{in cases 10 and 27,} \\[-1 pt] \ (k\!-\!1)^{2}(k\!-\!2) & \hbox{in cases 13 and 24,} \\[-1 pt] \ k(k\!-\!1)(k\!-\!2) & \hbox{in cases 14 and 23,} \end{cases} $$ and the total of all these numbers is $2k(2k\!-\!1)^2$, as expected. \section{The main approach} \label{sec:main} Let $k$ be any integer greater than $1$, and suppose that $m > 6$ and $n > 7$. We will prove that in every such case, the graph $X = B(k,m,n)$ is not arc-transitive, and is therefore half-arc-transitive. We do this simply by considering the ways in which a given $2$-arc or $3$-arc lies in a cycle of length $6$ in $X$. (For any positive integer $s$, an $s$-arc in a simple graph is a walk $\,v_0 \!\sim\! v_1 \!\sim\! v_2 \!\sim\! \dots \!\sim\! v_s$ of length $s$ in which any three consecutive vertices are distinct.) By vertex-transitivity, we can consider what happens locally around the vertex $(0,{\bf 0})$. \begin{lemma} \label{girth} The girth of $X$ is $6$. \end{lemma} \begin{proof} First, $X = B(k,m,n)$ is simple, by definition. Also there are no cycles of length $3$, $4$ or $5$ in $X$, since in the list of cases for a $3$-arc $\,v_0 \!\sim\! v_1 \!\sim\! v_2 \!\sim\!
v_3$ in $X$ with first vertex $v_0 = (0,{\bf 0})$ given by Lemma~\ref{threearcs}, the vertex $v_3$ is never equal to $v_0$, the vertex $v_1$ is uniquely determined by $v_2$, and every possibility for $v_2$ is different from every possibility for $v_3$. On the other hand, there are certainly some cycles of length $6$ in $X$, such as $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,2{\bf e}_1) \!\sim\! (1, 2{\bf e}_1) \!\sim\! (0,{\bf e}_1) \!\sim\! (1,{\bf e}_1) \!\sim\! (0,{\bf 0})$. \end{proof} Next, we can find all $6$-cycles based at the vertex $v_0 = (0,{\bf 0})$ in $X$. \begin{lemma} \label{sixcycles} Up to reversal, every $6$-cycle based at the vertex $v_0 = (0,{\bf 0})$ has exactly one of the forms below, with $r$, $s$, $t$ all different when they appear$\,:$ \\[+5pt] $\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,2{\bf e}_r) \!\sim\! (1,2{\bf e}_r) \!\sim\! (0,{\bf e}_r) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf 0})$, \\[+3 pt] $\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (1,-{\bf e}_r) \!\sim\! (2,{\bf e}_r) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf 0})$, \\[+3 pt] $\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (1,{\bf e}_s\!-\!{\bf e}_r) \!\sim\! (0,{\bf e}_s\!-\!{\bf e}_r) \!\sim\! (1,{\bf e}_s) \!\sim\! (0,{\bf 0})$, \\[+3 pt] $\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (-1,-{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,{\bf 0})$, \\[+3 pt] $\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (2,2{\bf e}_s\!+\!{\bf e}_r) \!\sim\! (1,2{\bf e}_s\!-\!{\bf e}_r) \!\sim\! (0,{\bf e}_s\!-\!{\bf e}_r) \!\sim\! (1,{\bf e}_s) \!\sim\! (0,{\bf 0})$, \\[+3 pt] $\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r) \!\sim\! (1,{\bf e}_s\!+\!{\bf e}_r) \!\sim\! (0,{\bf e}_s) \!\sim\! (1,{\bf e}_s) \!\sim\! (0,{\bf 0})$, \\[+3 pt] $\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\!
(0,{\bf e}_r) \!\sim\! (-1,2^{-1}{\bf e}_r) \!\sim\! (0,2^{-1}{\bf e}_r) \!\sim\! (-1,{\bf 0}) \!\sim\! (0,{\bf 0})$, \\[+3 pt] $\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r\!-\!{\bf e}_s) \!\sim\! (1,{\bf e}_r\!-\!{\bf e}_s\!+\!{\bf e}_t) \!\sim\! (0,{\bf e}_t\!-\!{\bf e}_s) \!\sim\! (1,{\bf e}_t) \!\sim\! (0,{\bf 0})$, \\[+3 pt] $\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r\!-\!{\bf e}_s) \!\sim\! (-1,2^{-1}{\bf e}_r\!-\!{\bf e}_s) \!\sim\! (0,2^{-1}{\bf e}_r\!-\!2^{-1}{\bf e}_s)$ \\ ${}$ \qquad $\!\sim\! (-1,-\!2^{-1}{\bf e}_s) \!\sim\! (0,{\bf 0})$, \\[+3 pt] $\bullet$ \ $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (0,2^{-1}{\bf e}_r) \!\sim\! (1,2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r) \!\sim\! (-1,-2^{-1}{\bf e}_r) $ \\ ${}$ \qquad $\!\sim\! (0,{\bf 0})$, \\[+3 pt] $\bullet$ \ $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (0,2^{-1}{\bf e}_r) \!\sim\! (-1,2^{-1}{\bf e}_r\!-\!2^{-1}{\bf e}_s) \!\sim\! (0,2^{-1}{\bf e}_r\!-\!2^{-1}{\bf e}_s)$ \\ ${}$ \qquad $\!\sim\! (-1,-\!2^{-1}{\bf e}_s) \!\sim\! (0,{\bf 0})$, \\[+3 pt] $\bullet$ \ $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (-2,-2^{-2}{\bf e}_r) \!\sim\! (-1,-2^{-2}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r) \!\sim\! (-1,-2^{-1}{\bf e}_r) $ \\ ${}$ \qquad $\!\sim\! (0,{\bf 0})$, \\[+3 pt] $\bullet$ \ $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r) \!\sim\! (-1,-2^{-1}{\bf e}_r\!-\!2^{-1}{\bf e}_s) \!\sim\! (0,-2^{-1}{\bf e}_s)$ \\ ${}$ \qquad $\!\sim\! (-1,-2^{-1}{\bf e}_s) \!\sim\! (0,{\bf 0})$, \\[+3 pt] $\bullet$ \ $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s) \!\sim\! (1,2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s)$ \\ ${}$ \qquad $\!\sim\! (0,2^{-1}{\bf e}_r\!-\!2^{-1}{\bf e}_s) \!\sim\! (-1,-2^{-1}{\bf e}_s) \!\sim\! (0,{\bf 0})$, \\[+3 pt] $\bullet$ \ $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s) \!\sim\! 
(-1,-2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s\!-\!2^{-1}{\bf e}_t)$ \\ ${}$ \qquad $\!\sim\! (0,2^{-1}{\bf e}_s\!-\!2^{-1}{\bf e}_t) \!\sim\! (-1,-2^{-1}{\bf e}_t) \!\sim\! (0,{\bf 0})$, or \\[+3 pt] $\bullet$ \ $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r\!-\!2^{-2}{\bf e}_s) \!\sim\! (-1,-2^{-2}{\bf e}_r\!-\!2^{-2}{\bf e}_s)$ \\ ${}$ \qquad $\!\sim\! (-2,-2^{-2}{\bf e}_r\!-\!2^{-1}{\bf e}_s) \!\sim\! (-1,-2^{-1}{\bf e}_s) \!\sim\! (0,{\bf 0})$. \end{lemma} \begin{proof} This is left as an exercise for the reader. It may be helpful to note that a $6$-cycle of the first form is obtainable as a $3$-arc of type 4 with final vertex $2{\bf e}_r$ followed by the reverse of a $3$-arc of type 11 with the same final vertex. The $6$-cycles of the other $15$ forms are similarly obtainable as the concatenation of a $3$-arc of type $i$ with the reverse of a $3$-arc of type $j$, for $(i,j) = (5,8)$, $(5,13)$, $(6,22)$, $(10,13)$, $(11,11)$, $(12,16)$, $(13,13)$, $(14,24)$, $(15,21)$, $(16,24)$, $(19,25)$, $(22,22)$, $(23,23)$, $(24,24)$ and $(27,27)$, respectively. Uniqueness follows from the assumptions about $m$ and $n$. \end{proof} \begin{corollary} \label{count6cycles} The number of different $6$-cycles in $X$ that contain a given $2$-arc $\,v_0 \!\sim\! v_1 \!\sim\! v_2$ with first vertex $v_0 = (0,{\bf 0})$ is always $0$, $1$ or $k$. More precisely, this number is \\[-12pt] \begin{center} \begin{tabular}{cl} ${}$\hskip -18pt $0$ & \hskip -8pt for the $2$-arcs $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,{\bf 0})$ and $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (-2,{\bf 0})$, \\ & and those of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (2,3{\bf e}_r)$, \\ & and those of the form $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-(2^{-1}\!+\!2^{-2}){\bf e}_r)$, \\[+6pt] ${}$\hskip -18pt $1$ & \hskip -8pt for the $2$-arcs of the form $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! 
(2,2{\bf e}_r)$, \\ & and those of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (2,{\bf e}_r)$, \\ & and those of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (2,{\bf e}_r\!+\!2{\bf e}_s)$, when $k > 2$, \\ & and those of the form $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (-2,-2^{-2}{\bf e}_r)$, \\ & and those of the form $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r)$, \\ & and those of the form $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r\!-\!2^{-2}{\bf e}_s)$, \\ & \quad when $k > 2$, \ and \end{tabular} \begin{tabular}{cl} ${}$\hskip -18pt $k$ & \hskip -8pt for the $2$-arcs of the form $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (0,-{\bf e}_r)$, \\ & and those of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r)$, \\ & and those of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r\!-\!{\bf e}_s)$, when $k > 2$, \\ & and those of the form $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (0,2^{-1}{\bf e}_r)$, \\ & and those of the form $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r)$, \\ & and those of the form $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s)$, \\ & \quad when $k > 2$. \\[-3pt] \end{tabular} \end{center} In particular, every $2$-arc of the form $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,2{\bf e}_r)$ lies in just one $6$-cycle, namely $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,2{\bf e}_r) \!\sim\! (1,2{\bf e}_r) \!\sim\! (0,{\bf e}_r) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf 0})$, while every $2$-arc of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r)$ lies in $k$ $6$-cycles. \end{corollary} \begin{proof} This follows easily from inspection of the list of $6$-cycles given in Lemma~\ref{sixcycles}, and their reverses. 
\end{proof} At this stage, we could repeat the above calculations for $2$-arcs and $3$-arcs with first vertex $(1,{\bf 0}) = (1,0,0,\dots,0)$, but it is much easier to simply apply the automorphism $\tau$ defined in Section~\ref{further}. Hence in particular, the $2$-arcs $\,v_0 \!\sim\! v_1 \!\sim\! v_2$ with first vertex $v_0 = (1,{\bf 0})$ that lie in a unique $6$-cycle are those of the form $(1,{\bf 0}) \!\sim\! (2,{\bf 0}) \!\sim\! (3,4{\bf e}_r)$, or $(1,{\bf 0}) \!\sim\! (2,2{\bf e}_r) \!\sim\! (3,2{\bf e}_r)$, or $(1,{\bf 0}) \!\sim\! (2,2{\bf e}_r) \!\sim\! (3,2{\bf e}_r\!+\!4{\bf e}_s)$ when $k > 2$, or $(1,{\bf 0}) \!\sim\! (0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r)$, or $(1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (-1,-{\bf e}_r)$, or $(1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (-1,-{\bf e}_r\!-\!2^{-1}{\bf e}_s)$ when $k > 2$. Let $v$ and $w$ be the neighbouring vertices $(0,{\bf 0})$ and $(1,{\bf 0})$. We will show that there is no automorphism of $X$ that reverses the arc $(v,w)$. Let $A = \{(2,2{\bf e}_r) : 1 \le r < k\}$, which is the set of all vertices $x$ in $X$ that extend the arc $(v,w)$ to a $2$-arc $(v,w,x)$ which lies in a unique $6$-cycle, and similarly, let $B = \{(0,-{\bf e}_r) : 1 \le r < k\}$, the set of all vertices that extend $(v,w)$ to a $2$-arc which lies in $k$ different $6$-cycles. Also let $C = \{(-1,-2^{-1}{\bf e}_r) : 1 \le r < k\}$ and $D = \{(1,{\bf e}_r) : 1 \le r < k\}$ be the analogous sets of vertices extending the arc $(w,v)$, as illustrated in Figure~\ref{ABCD}. \begin{figure} \caption{$6$-cycles containing the edge from $v = (0,{\bf 0})$ to $w = (1,{\bf 0})$} \label{ABCD} \end{figure} Now suppose there exists an automorphism $\xi$ of $X$ that interchanges the two vertices of the edge $\{v,w\}$. Then by considering the numbers of $6$-cycles that contain a given $2$-arc, we see that $\xi$ must interchange the sets $A$ and $C$, and interchange the sets $B$ and $D$. 
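The membership of the sets $A$, $B$, $C$ and $D$, and the $6$-cycle counts behind them, can be confirmed by machine in a small generic example such as $B(2,12,13)$ (so that $m > 6$ and $n > 7$, with $k = 2$). The Python sketch below is illustrative only; it assumes Python 3.8+ for modular inverses via pow, and checks the counts given by Corollary~\ref{count6cycles} for extensions of the arc $(v,w)$ and of the reversed arc $(w,v)$.

```python
def neighbours(v, k, m, n):   # adjacency rule of B(k, m, n)
    a, b = v[0], list(v[1:])
    out = [((a + 1) % m, *b), ((a - 1) % m, *b)]
    for j in range(k - 1):
        up = b[:]; up[j] = (up[j] + pow(2, a, n)) % n
        dn = b[:]; dn[j] = (dn[j] - pow(2, a - 1, n)) % n
        out += [((a + 1) % m, *up), ((a - 1) % m, *dn)]
    return out

def count_6cycles(v0, v1, v2, k, m, n):
    """Number of 6-cycles (all six vertices distinct) containing v0 ~ v1 ~ v2."""
    total = 0
    for v3 in neighbours(v2, k, m, n):
        if v3 in (v0, v1):
            continue
        for v4 in neighbours(v3, k, m, n):
            if v4 in (v0, v1, v2):
                continue
            for v5 in neighbours(v4, k, m, n):
                if v5 not in (v0, v1, v2, v3) and v0 in neighbours(v5, k, m, n):
                    total += 1
    return total

k, m, n = 2, 12, 13                   # generic case: m > 6 and n > 7
v, w = (0, 0), (1, 0)
inv2 = pow(2, -1, n)                  # 2^(-1) mod 13

assert count_6cycles(v, w, (2, 0), k, m, n) == 0          # straight 2-arc
assert count_6cycles(v, w, (2, 2), k, m, n) == 1          # x = (2, 2e_1) in A
assert count_6cycles(v, w, (0, n - 1), k, m, n) == k      # x = (0, -e_1) in B
assert count_6cycles(w, v, (m - 1, -inv2 % n), k, m, n) == 1   # x' in C
assert count_6cycles(w, v, (1, 1), k, m, n) == k               # x' in D
```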
Next, observe that if $x = (2,2{\bf e}_r) \in A$, then the unique $6$-cycle containing the $2$-arc $v \!\sim\! w \!\sim\! x$ is $v \!\sim\! w \!\sim\! x \!\sim\! y \!\sim\! z \!\sim\! u \!\sim\! v$, where $(y,z,u) = ((1,2{\bf e}_r), (0,{\bf e}_r),(1,{\bf e}_r))$; in particular, the $6$th vertex $u = (1,{\bf e}_r)$ lies in $D$. Similarly, if $x' = (-1,-2^{-1}{\bf e}_r) \in C$, then the unique $6$-cycle containing the $2$-arc $w \!\sim\! v \!\sim\! x'$ is $w \!\sim\! v \!\sim\! x' \!\sim\! y' \!\sim\! z' \!\sim\! u' \!\sim\! w$, where $(y',z',u') = ((0,-2^{-1}{\bf e}_r), (-1,-{\bf e}_r), (0,-{\bf e}_r))$, and the $6$th vertex $u' = (0,-{\bf e}_r)$ lies in $B$. The arc-reversing automorphism $\xi$ must take every $6$-cycle of the first kind to a $6$-cycle of the second kind, and hence must take each $2$-arc of the form $v \!\sim\! u \!\sim\! z$ ($= (0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r)$) to a $2$-arc of the form $w \!\sim\! u' \!\sim\! z'$ ($= (1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (-1,-{\bf e}_r)$). By Corollary~\ref{count6cycles}, however, each $2$-arc of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r)$ lies in $k$ different $6$-cycles, while each $2$-arc of the form $(1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (-1,-{\bf e}_r)$ is the $\tau$-image of the $2$-arc $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r)$ and hence lies in only one $6$-cycle. This is a contradiction, and shows that no such arc-reversing automorphism exists. Hence the Bouwer graph $B(k,m,n)$ is half-arc-transitive whenever $m > 6$ and $n > 7$. If $m \le 6$, then the order $m_2$ of $2$ as a unit mod $n$ is at most $6$, and so $n$ divides $2^{m_2} - 1 = 3, 7, 15, 31$ or $63$. In particular, if $m_2 = 2$, $3$, $4$ or $5$ then $n \in \{3\}$, $\{7\}$, $\{5,15\}$ or $\{31\}$ respectively, while if $m_2 = 6$, then $n$ divides $63$ but not $3$ or $7$, so $n \in \{9, 21, 63\}$.
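The enumeration of the possible values of $n$ here is mechanical; as a quick check, the following sketch (illustrative only) lists every odd $n > 1$ for which the multiplicative order of $2$ is at most $6$, together with those orders.

```python
def order_of_2(n):
    """Multiplicative order of 2 modulo an odd integer n > 1."""
    order, x = 1, 2 % n
    while x != 1:
        x = 2 * x % n
        order += 1
    return order

# Any such n divides 2^m - 1 for some m <= 6, so n <= 63 and the scan
# below is exhaustive.
small = [n for n in range(3, 64, 2) if order_of_2(n) <= 6]
assert small == [3, 5, 7, 9, 15, 21, 31, 63]
assert [order_of_2(n) for n in small] == [2, 4, 3, 6, 4, 6, 5, 6]
```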
Conversely, if $n = 3, 5, 7, 9,$ $ 15, 21, 31$ or $63$, then $m_2 = 2$, $4$, $3$, $6$, $4$, $6$, $5$ or $6$, respectively, and of course in each case, $m$ is a multiple of $m_2$. We deal with these exceptional cases in the next two sections. \section{Arc-transitive cases} \label{sec:AT} The following observations are very easy to verify. When $n = 3$ (and $m$ is even), the Bouwer graph $B(k,m,n)$ is always arc-transitive, for in that case there is an automorphism that takes each vertex $(a,{\bf b})$ to $(1\!-\!a,-{\bf b})$, and this reverses the arc from $(0,{\bf 0})$ to $(1,{\bf 0})$. Similarly, when $k = 2$ and $n = 5$ (and $m$ is divisible by $4$), the graph $B(k,m,n)$ is always arc-transitive, since it has an automorphism that takes $(a,b_2)$ to $(1\!-\!a,-b_2)$ for all $a \equiv 0$ or $1$ mod $4$, and to $(1\!-\!a,2-b_2)$ for all $a \equiv 2$ or $3$ mod $4$, and again this interchanges $(0,{\bf 0})$ with $(1,{\bf 0})$. Next, $B(2,m,7)$ is arc-transitive when $m = 3$ or $6$, because in each of those two cases there is an automorphism that takes \\[-16pt] \begin{center} \begin{tabular}{lll} $\quad(a,0)$ to $(1\!-\!a,0)$, \quad \ & $(a,1)$ to $(a\!+\!1,2)$, \quad \ & $(a,2)$ to $(a\!-\!1,1)$, \\[+1 pt] $\quad (a,3)$ to $(\!-\!a,6)$, \quad \ & $(a,4)$ to $(a\!+\!3,4)$, \quad \ & $(a,5)$ to $(1\!-\!a,5)$, \\[+1 pt] $\quad (a,6)$ to $(1\!-\!a,3)$, & & \\[-6pt] \end{tabular} \end{center} for every $a \in \mathbb Z_m$. 
Similarly, $B(2,6,21)$ is arc-transitive since it has an automorphism taking \\[-16pt] \begin{center} \begin{tabular}{lll} $\quad (a,0)$ to $(1\!-\!a,0)$, \quad \ & $(a,1)$ to $(a\!+\!1,2)$, \quad \ & $(a,2)$ to $(a\!-\!1,1)$, \\[+1 pt] $\quad (a,3)$ to $(5\!-\!a,6)$, \quad \ & $(a,4)$ to $(a\!+\!3,11)$, \quad \ & $(a,5)$ to $(5\!-\!a,19)$, \\[+1 pt] $\quad (a,6)$ to $(5\!-\!a,3)$, \quad & $(a,7)$ to $(1\!-\!a,14)$, \quad \ & $(a,8)$ to $(a\!+\!1,16)$, \\[+1 pt] $\quad (a,9)$ to $(a\!-\!1,15)$, \quad & $(a,10)$ to $(5\!-\!a,20)$, \quad \ & $(a,11)$ to $(a\!+\!3,4)$, \\[+1 pt] $\quad (a,12)$ to $(5\!-\!a,12)$, \quad & $(a,13)$ to $(5\!-\!a,17)$, \quad \ & $(a,14)$ to $(1\!-\!a,7)$, \\[+1 pt] $\quad (a,15)$ to $(a\!+\!1,9)$, \quad & $(a,16)$ to $(a\!-\!1,8)$, \quad \ & $(a,17)$ to $(5\!-\!a,13)$, \\[+1 pt] $\quad (a,18)$ to $(a\!+\!3,18)$, \quad & $(a,19)$ to $(5\!-\!a,5)$, \quad \ & $(a,20)$ to $(5\!-\!a,10)$, \\[-3pt] \end{tabular} \end{center} for every $a \in \mathbb Z_m$. \section{Other half-arc-transitive cases} \label{sec:other} In this final section, we give sketch proofs of half-arc-transitivity in all the remaining cases. We first consider the three cases where $m$ and the multiplicative order of $2$ mod $n$ are both equal to $6$ (and $n = 9$, $21$ or $63$), and then deal with the cases where the multiplicative order of $2$ mod $n$ is less than $6$ (and $n = 5$, $7$, $15$ or $31$). In the cases $n = 15$ and $n = 31$, we can assume that $m < 6$ as well. Note that one of these remaining cases is the one considered by Bouwer, namely where $(m,n) = (6,9)$. As this is one of the exceptional cases, it is not representative of the generic case --- which may help explain why previous attempts to generalise Bouwer's approach did not get very far. To reduce unnecessary repetition, we will introduce some further notation that will be used in most of these cases. 
As in Section~\ref{sec:main}, we let $v$ and $w$ be the vertex $(0,{\bf 0})$ and its neighbour $(1,{\bf 0})$. Next, for all $i \ge 0$, we define $V^{(i)}$ as the set of all vertices $x$ in $X$ for which $v \!\sim\! w \!\sim\! x $ is a $2$-arc that lies in exactly $i$ different $6$-cycles, and $W^{(i)}$ as the analogous set of vertices $x'$ in $X$ for which $w \!\sim\! v \!\sim\! x'$ is a $2$-arc that lies in exactly $i$ different $6$-cycles. (Hence, for example, the sets $A$, $B$, $C$ and $D$ used in Section~\ref{sec:main} are $V^{(1)}$, $V^{(k)}$, $W^{(1)}$ and $W^{(k)}$, respectively.) Also for $i \ge 0$ and $j \ge 0$, we define $T_{v}^{\,(i,j)}$ as the set of all $2$-arcs $v \!\sim\! u \!\sim\! z$ that come from a $6$-cycle of the form $v \!\sim\! w \!\sim\! x \!\sim\! y \!\sim\! z \!\sim\! u \!\sim\! v$ with $x \in V^{(i)}$ and $u \in W^{(i)}$, and lie in exactly $j$ different $6$-cycles altogether, and define $T_{w}^{\,(i,j)}$ as the analogous set of all $2$-arcs $w \!\sim\! u' \!\sim\! z'$ that come from a $6$-cycle of the form $w \!\sim\! v \!\sim\! x' \!\sim\! y' \!\sim\! z' \!\sim\! u' \!\sim\! w$ with $x' \in W^{(i)}$ and $u' \in V^{(i)}$, and lie in exactly $j$ different $6$-cycles altogether. Note that if the graph under consideration is arc-transitive, then it has an automorphism $\xi$ that reverses the arc $(v,w)$, and then clearly $\xi$ must interchange the two sets $V^{(i)}$ and $W^{(i)}$, for each $i$. Hence also $\xi$ interchanges the two sets $T_{v}^{\,(i,j)}$ and $T_{w}^{\,(i,j)}$ for all $i$ and $j$, and therefore $|T_{v}^{\,(i,j)}| = |T_{w}^{\,(i,j)}|$ for all $i$ and $j$. Equivalently, if $|T_{v}^{\,(i,j)}| \ne |T_{w}^{\,(i,j)}|$ for some pair $(i,j)$, then the graph cannot be arc-transitive, and therefore must be half-arc-transitive. The approach taken in Section~\ref{sec:main} was similar, but compared the $2$-arcs $v \!\sim\! u \!\sim\! z$ that come from a $6$-cycle of the form $v \!\sim\! w \!\sim\! x \!\sim\! y \!\sim\! z \!\sim\! 
u \!\sim\! v$ where $x \in V^{(1)}$ and $u \in W^{(k)}$, with the $2$-arcs $w \!\sim\! u' \!\sim\! z'$ that come from a $6$-cycle of the form $w \!\sim\! v \!\sim\! x' \!\sim\! y' \!\sim\! z' \!\sim\! u' \!\sim\! w$ where $x' \in W^{(1)}$ and $u' \in V^{(k)}$. We proceed by considering the first three cases below, in which the girth of the Bouwer graph $B(k,m,n)$ is $6$, but the numbers of $6$-cycles containing a given arc or $2$-arc are different from those found in Section~\ref{sec:main}. \subsection{The graphs $B(k,6,9)$} Suppose $m = 6$ and $n = 9$. This case was considered by Bouwer in \cite{Bouwer}, but it can also be dealt with in a similar way to the generic case in Section~\ref{sec:main}. Every $2$-arc lies in either $k$ or $k+1$ cycles of length $6$, and each arc lies in exactly $k$ distinct $2$-arcs of the first kind, and $k-1$ of the second kind. Next, the set $V^{(k)}$ consists of $(2,{\bf 0})$ and $(0,-{\bf e}_r)$ for $1 \le r < k$, while $W^{(k)}$ consists of $(-1,{\bf 0})$ and $(1,{\bf e}_s)$ for $1 \le s < k$. Now consider the $2$-arcs $v \!\sim\! u \!\sim\! z$ that come from a $6$-cycle of the form $v \!\sim\! w \!\sim\! x \!\sim\! y \!\sim\! z \!\sim\! u \!\sim\! v$ with $x \in V^{(k)}$ and $u \in W^{(k)}$. There are $k^{2}-2k+2$ such $2$-arcs, and of these, $k\!-\!1$ lie in $k\!+\!1$ different $6$-cycles altogether (namely the $2$-arcs of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_s) \!\sim\! (2,{\bf e}_s)$), while the other $k^{2}-3k+3$ lie in only $k$ different $6$-cycles. In particular, $|T_{v}^{\,(k,k+1)}| = k\!-\!1$. On the other hand, among the $2$-arcs $w \!\sim\! u' \!\sim\! z'$ coming from a $6$-cycle of the form $w \!\sim\! v \!\sim\! x' \!\sim\! y' \!\sim\! z' \!\sim\! u' \!\sim\! w$ with $x' \in W^{(k)}$ and $u' \in V^{(k)}$, none lies in $k+1$ different $6$-cycles. Hence $|T_{w}^{\,(k,k+1)}| = 0$.
Thus $|T_{v}^{\,(k,k+1)}| \ne |T_{w}^{\,(k,k+1)}|$, and so the Bouwer graph $B(k,6,9)$ cannot be arc-transitive, and is therefore half-arc-transitive. \subsection{The graphs $B(k,6,21)$ for $k > 2$} Next, suppose $m = 6$ and $n = 21$, and $k > 2$. (The case $k = 2$ was dealt with in Section~\ref{sec:AT}.) Here every $2$-arc lies in one, two or $k$ cycles of length $6$, and each arc lies in exactly one $2$-arc of the first kind, and $k-1$ distinct $2$-arcs of each of the second and third kinds. The set $V^{(k)}$ consists of the $k\!-\!1$ vertices of the form $(0,-{\bf e}_r)$, while $W^{(k)}$ consists of the $k\!-\!1$ vertices of the form $(1,{\bf e}_s)$. (Note that this does not hold when $k = 2$.) Also $T_{v}^{\,(k,2)}$ consists of the $2$-arcs of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_s) \!\sim\! (2,{\bf e}_s)$, so $|T_{v}^{\,(k,2)}| = k\!-\!1$, but on the other hand, $T_{w}^{\,(k,2)}$ is empty. Hence there can be no automorphism that reverses the arc $(v,w)$, and so the graph is half-arc-transitive. \subsection{The graphs $B(k,6,63)$} Suppose $m = 6$ and $n = 63$. Then every $2$-arc lies in either one or $k$ different cycles of length $6$, and each arc lies in exactly $k$ $2$-arcs of the first kind, and $k\!-\!1$ of the second kind. The sets $V^{(k)}$ and $W^{(k)}$ are precisely as in the previous case, but for all $k \ge 2$, and in this case $T_{v}^{\,(k,1)}$ consists of the $2$-arcs of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_s) \!\sim\! (2,{\bf e}_s)$, so $|T_{v}^{\,(k,1)}| = k\!-\!1$, but on the other hand, $T_{w}^{\,(k,1)}$ is empty. Hence there can be no automorphism that reverses the arc $(v,w)$, and so the graph is half-arc-transitive. Now we turn to the cases where $n = 5$, $7$, $15$ or $31$. In these cases, the order of $2$ as a unit mod $n$ is $4$, $3$, $4$ or $5$, respectively, and indeed when $n = 15$ or $31$ we may suppose that $m = 4$ or $m = 5$, while the cases $n = 5$ and $n = 7$ are much more tricky. 
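Before turning to those cases, we note that the $6$-cycle counts used for $n = 9$, $21$ and $63$ above can be reproduced by machine; by vertex-transitivity it suffices to examine the $2$-arcs starting at $(0,{\bf 0})$. The sketch below (illustrative only; Python 3.8+ for modular inverses via pow) does this for Bouwer's original case $B(2,6,9)$, recovering the fact that every $2$-arc lies in $k$ or $k+1$ six-cycles.

```python
def neighbours(v, k, m, n):   # adjacency rule of B(k, m, n)
    a, b = v[0], list(v[1:])
    out = [((a + 1) % m, *b), ((a - 1) % m, *b)]
    for j in range(k - 1):
        up = b[:]; up[j] = (up[j] + pow(2, a, n)) % n
        dn = b[:]; dn[j] = (dn[j] - pow(2, a - 1, n)) % n
        out += [((a + 1) % m, *up), ((a - 1) % m, *dn)]
    return out

def count_6cycles(v0, v1, v2, k, m, n):
    """Number of 6-cycles (all six vertices distinct) containing v0 ~ v1 ~ v2."""
    total = 0
    for v3 in neighbours(v2, k, m, n):
        if v3 in (v0, v1):
            continue
        for v4 in neighbours(v3, k, m, n):
            if v4 in (v0, v1, v2):
                continue
            for v5 in neighbours(v4, k, m, n):
                if v5 not in (v0, v1, v2, v3) and v0 in neighbours(v5, k, m, n):
                    total += 1
    return total

k, m, n = 2, 6, 9
v0 = (0, 0)
profile = sorted({count_6cycles(v0, v1, v2, k, m, n)
                  for v1 in neighbours(v0, k, m, n)
                  for v2 in neighbours(v1, k, m, n) if v2 != v0})
assert profile == [k, k + 1]   # every 2-arc lies in k or k+1 six-cycles
```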
\subsection{The graphs $B(k,4,15)$} Suppose $n = 15$ and $m = 4$. Then the girth is $4$, with $2k$ different $4$-cycles passing through the vertex $(0,{\bf 0})$. Apart from this difference, the approach taken in Section~\ref{sec:main} for counting $6$-cycles still works in the same way as before. Hence the graph $B(k,4,15)$ is half-arc-transitive for all $k \ge 2$. \subsection{The graphs $B(k,5,31)$} Similarly, when $n = 31$ and $m = 5$, the girth is $5$, and the same approach as taken in Section~\ref{sec:main} using $6$-cycles works, to show that the graph $B(k,5,31)$ is half-arc-transitive for all $k \ge 2$. \subsection{The graphs $B(k,m,5)$ for $k > 2$} Suppose $n = 5$ and $k > 2$. (The case $k = 2$ was dealt with in Section~\ref{sec:AT}.) Here we have $m \equiv 0$ mod $4$, and the number of $6$-cycles is much larger than in the generic case considered in Section~\ref{sec:main} and the cases with $m = 6$ above, but a similar argument works. When $m > 4$, the girth of $B(k,m,n)$ is $6$, and every $2$-arc lies in either $2k$, $2k+3$ or $4k-4$ cycles of length 6. Also $V^{(2k+3)}$ consists of the $k\!-\!1$ vertices of the form $(0,-{\bf e}_r)$, while $W^{(2k+3)}$ consists of the $k\!-\!1$ vertices of the form $(1,{\bf e}_s)$, and then $T_{v}^{\,(2k+3,2k)}$ consists of the $2$-arcs of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_s) \!\sim\! (2,{\bf e}_s)$, so $|T_{v}^{\,(2k+3,2k)}| = k\!-\!1$, but on the other hand, $T_{w}^{\,(2k+3,2k)}$ is empty. When $m = 4$, the girth of $B(k,m,n)$ is $4$, and the situation is similar, but slightly different. In this case, every $2$-arc lies in either $2k+2$, $2k+5$ or $6k-6$ cycles of length 6, and $|T_{v}^{\,(2k+5,2k+2)}| = k\!-\!1$ while $|T_{w}^{\,(2k+5,2k+2)}| = 0$. Once again, it follows that no automorphism can reverse the arc $(v,w)$, and so the graph is half-arc-transitive, for all $k > 2$ and all $m \equiv 0$ mod $4$. 
\subsection{The graphs $B(k,m,7)$ for $(k,m) \ne (2,3)$ or $(2,6)$} Suppose finally that $n = 7$, with $m \equiv 0$ mod $3$, but $(k,m) \ne (2,3)$ or $(2,6)$. We treat four sub-cases separately: (a) $k = 2$ and $m > 6$; (b) $k > 2$ and $m = 3$; (c) $k > 2$ and $m = 6$; and (d) $k > 2$ and $m > 6$. In case (a), where $k = 2$ and $m > 6$, every $2$-arc lies in $1$, $2$ or $3$ cycles of length $6$. Also the set $V^{(3)}$ consists of the single vertex $(0,-1)$, while $W^{(3)}$ consists of the single vertex $(1,1)$, and then $T_{v}^{\,(3,1)}$ consists of the single $2$-arc $(0,0) \!\sim\! (1,1) \!\sim\! (2,1)$, so $|T_{v}^{\,(3,1)}| = 1$, but on the other hand, $T_{w}^{\,(3,1)}$ is empty. In case (b), where $k > 2$ and $m = 3$, every $2$-arc lies in $0$, $2$ or $k$ cycles of length $6$. In this case the set $V^{(k)}$ consists of the $k\!-\!1$ vertices of the form $(0,-{\bf e}_r)$, while $W^{(k)}$ consists of the $k\!-\!1$ vertices of the form $(1,{\bf e}_s)$, and then $T_{v}^{\,(k,2)}$ consists of the $2$-arcs of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_s) \!\sim\! (2,{\bf e}_s)$, so $|T_{v}^{\,(k,2)}| = k-1$, but on the other hand, $T_{w}^{\,(k,2)}$ is empty. In case (c), where $k > 2$ and $m = 6$, every $2$-arc lies in $3$, $k\!+\!1$ or $4k\!-\!3$ cycles of length $6$. Here $V^{(k+1)}$ consists of the $k\!-\!1$ vertices of the form $(0,-{\bf e}_r)$, while $W^{(k+1)}$ consists of the $k\!-\!1$ vertices of the form $(1,{\bf e}_s)$, and then $T_{v}^{\,(k+1,3)}$ consists of the $2$-arcs of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_s) \!\sim\! (2,{\bf e}_s)$, and so $|T_{v}^{\,(k+1,3)}| = k-1$, but on the other hand, $T_{w}^{\,(k+1,3)}$ is empty. In case (d), where $k > 2$ and $m > 6$, every $2$-arc lies in $1$, $k\!+\!1$ or $2k\!-\!2$ cycles of length $6$.
Next, if $k > 3$ then $V^{(k+1)}$ consists of the $k\!-\!1$ vertices of the form $(0,-{\bf e}_r)$, while $W^{(k+1)}$ consists of the $k\!-\!1$ vertices of the form $(1,{\bf e}_s)$, but if $k = 3$ then $k+1 = 2k-2$, and $V^{(k+1)}$ contains also $(2,{\bf 0})$ while $W^{(k+1)}$ contains also $(-1,{\bf 0})$. Whether $k = 3$ or $k > 3$, the set $T_{v}^{\,(k+1,1)}$ consists of the $2$-arcs of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_s) \!\sim\! (2,{\bf e}_s)$, and so $|T_{v}^{\,(k+1,1)}| = k-1$, but on the other hand, $T_{w}^{\,(k+1,1)}$ is empty. Hence in all four of cases (a) to (d), no automorphism can reverse the arc $(v,w)$, and therefore the graph is half-arc-transitive. This completes the proof of our Theorem. \noindent {\Large\bf Acknowledgements} This work was undertaken while the second author was visiting the University of Auckland in 2014, and was partially supported by the ARRS (via grant P1-0294), the European Science Foundation (via EuroGIGA/GReGAS grant N1-0011), and the N.Z. Marsden Fund (via grant UOA1323). The authors would also like to thank Toma{\v z} Pisanski and Primo{\v z} Poto\v{c}nik for discussions which encouraged them to work on this topic, and acknowledge considerable use of the {\sc Magma} system~\cite{Magma} for computational experiments and verification of some of the contents of the paper. \end{document}
\begin{document} \author{Hong Wang} \address[H. Wang]{Department of Mathematics, Massachusetts Institute of Technology} \email{[email protected]} \author{Ben Yang} \address[B. Yang]{Department of Mathematics, Massachusetts Institute of Technology} \email{[email protected]} \author{Ruixiang Zhang} \address[R. Zhang]{Department of Mathematics, Princeton University} \email{[email protected]} \begin{abstract} We prove new bounds on the number of incidences between points and higher degree algebraic curves. The key ingredient is an improved initial bound, which is valid for all fields. Then we apply the polynomial method to obtain global bounds on $\R$ and $\C$. \end{abstract} \title{Bounds of incidences between points and algebraic curves} \section{Introduction} The Szemer\'{e}di--Trotter theorem \cite{szemeredi1983extremal} says that for a finite set, $L$, of lines and a finite set, $P$, of points in $\mathbb{R}^2$, the number of incidences is less than a constant times $|P|^{\frac{2}{3}} |L|^{\frac{2}{3}} + |P| + |L|$. There have been several generalizations of this theorem. For example, Pach and Sharir \cite{pach1998number} allow simple curves that have $k$ degrees of freedom and multiplicity-type $C$: (i) for any $k$ distinct points there are at most $C$ curves in $L$ that pass through them, and (ii) any two distinct curves in $L$ have at most $C$ intersection points. This is summarized in the following theorem: \begin{thm}[Pach--Sharir 98]\label{PSthm} Let $P$ be a finite set of points in $\mathbb{R}^2$ and let $L$ be a finite set of simple curves which have $k$ degrees of freedom and multiplicity-type $C$. Then the number of incidences $|\mathcal{I}(P,L)|:=|\{(p,l)\in P\times L, p\in l\}|$ satisfies \begin{equation} |\mathcal{I}(P,L)|\lesssim_{C, k} |P|^{\tfrac{k}{2k-1}}|L|^{\tfrac{2k-2}{2k-1}}+|P|+|L|. \end{equation} \end{thm} \subsection*{Notation} We use the asymptotic notation $X=\OO(Y)$ or $X\lesssim Y$ to denote the estimate $X\leq CY$ for some constant $C$.
If we need the implicit constant $C$ to depend on additional parameters, then we indicate this by subscripts. For example, $X=\OO_{d}(Y)$ or $X\lesssim_{d} Y$ means that $X\leq C_{d}Y$ for some constant $C_{d}$ that depends on $d$. The main result of this paper is an improvement to Theorem \ref{PSthm} when $L$ is a set of higher degree algebraic curves. Let $L$ be a finite set of algebraic curves of degree $\leq d$ in $\mathbb{R}^{2}$, any two of which do not share a common irreducible component. Let $P$ be a finite set of distinct points in $\mathbb{R}^{2}$. By B\'{e}zout's theorem, there is at most one curve in $L$ that goes through any given subset of $P$ of size $d^{2}+1$. In the notation introduced by Pach and Sharir, a degree $d$ algebraic curve thus has $d^2+1$ degrees of freedom and multiplicity-type $d^2$. However, one may wonder whether $d^2 +1$ is a misleading definition of the degrees of freedom, since generically $A:= {d+2 \choose 2} -1$ points determine a degree $d$ algebraic curve, and $A < d^{2}+1$ when $d\geq 3$. This suggests that Theorem~\ref{PSthm} may still hold for degree $d$ curves with the ``generic degree of freedom'' $A$. Indeed, we prove that this is the case. \begin{thm}\label{main theorem} Let $d$ be a positive integer, $A={d+2\choose 2} -1$, $L$ a finite set of degree $\leq d$ algebraic curves in $\mathbb{R}^{2}$ such that any two distinct algebraic curves do not share a common irreducible component, and $P$ a finite set of points in $\mathbb{R}^{2}$. Then, \begin{equation}\label{estimatemainthm} |\mathcal{I}(P,L)|\lesssim_{d} |P|^{\tfrac{A}{2A-1}}|L|^{\tfrac{2A-2}{2A-1}}+|P|+|L|. \end{equation} \end{thm} This gives a better bound than Theorem~\ref{PSthm} for degree $d$ algebraic curves when $d\geq 3$, and the same bound when $d=1$ or $d=2$. We generalize Theorem~\ref{main theorem} to algebraic curves parametrized by an algebraic variety.
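The quantity $A = {d+2\choose 2} - 1$ is simply the number of degree-$d$ monomials in three homogeneous coordinates, minus one for projectivization. A quick sketch tabulating it (the helper names here are ours, introduced only for illustration):

```python
from itertools import combinations_with_replacement
from math import comb

def veronese(d, x, y, z):
    """Coordinates of [x, y, z] under the degree-d Veronese map nu_d:
    all monomials x^i y^j z^k with i + j + k = d."""
    image = []
    for choice in combinations_with_replacement(range(3), d):
        # a multiset of size d over {x, y, z} <-> one monomial of degree d
        i, j, k = choice.count(0), choice.count(1), choice.count(2)
        image.append(x**i * y**j * z**k)
    return image

def ambient_dim(d):
    """A = C(d+2, 2) - 1, the dimension of the target projective space."""
    return comb(d + 2, 2) - 1
```

For $d = 3$ this gives $A = 9$, matching the ``generic degree of freedom'' used throughout.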
\begin{definition} Given an integer $d\geq 1$ and a field $\F$, consider the Veronese embedding $\nu_{d}: \PP^{2}\rightarrow \PP^{A}$ given by $$\nu_{d}: [x, y, z]\mapsto [x^{d},\dots, x^{i}y^{j}z^{k},\dots, z^{d}]_{i+j+k=d}.$$ We identify a degree $d$ curve in $\PP^{2}$ with the preimage of a hyperplane in $\PP^{A}$, or with a point in the dual space $(\PP^{A})^{*}$. We say a subset $\M$ of the space $S_{d}$ of degree $d$ polynomials is \emph{parametrized by an algebraic variety} $M$ if it is the preimage of $M\subset (\PP^{A})^{*}$, and we define the dimension of $\M$ to be $\dim M$. \end{definition} A consequence of Theorem~\ref{main theorem} is the following: \begin{cor}\label{variety} Given an integer $d\geq 1$ and $A={d+2\choose 2}-1$, let $\mathcal{M}$ be a subset of the space of degree $d$ polynomials parametrized by an algebraic variety $M$ of dimension $\leq k$. Let $P$ be a finite set of points in $\mathbb{RP}^2$, and $L$ a finite subset of $\M$ such that no two curves in $L$ share a common irreducible component. Then, \begin{equation} |\mathcal{I}(P,L)|\lesssim_{\mathcal{M}}|P|^{\tfrac{k}{2k-1}}|L|^{\tfrac{2k-2}{2k-1}}+|P|+|L|. \end{equation} \end{cor} This generalization is helpful when we consider a family of curves with special properties, for example, parabolas, circles and a family of curves passing through a common point. The proof of Theorem~\ref{main theorem} is a standard application of the polynomial method (see for example \cite{dvir2009size} and \cite{guth2010erdos}) with the following new initial bound. For an exposition of the polynomial method we refer the reader to \cite{kaplan2012simple}. \begin{lem}[Initial bound]\label{trivial bound} Let $\mathbb{F}$ be a field. Given an integer $d\geq 1$, $A={d+2\choose 2} -1$, a finite set of algebraic curves, $L$, in $\mathbb{F}^{2}$ of degree $\leq d$ such that any two distinct curves in $L$ do not share a common component, and a finite set of points, $P$, in $\mathbb{F}^{2}$.
Then, \begin{equation} |\mathcal{I}(P,L)|\lesssim_{d} |P|^A+|L|.\label{initial bound} \end{equation} \end{lem} Combining (\ref{initial bound}) with Solymosi and Tao's polynomial method and induction on the number of points \cite{solymosi2012incidence}, we obtain the following incidence theorem on complex space: \begin{thm}\label{complex epsilon} Given an integer $d\geq 1$, $A={d+2\choose 2} -1$, a finite set of points $P$ in $\mathbb{C}^{2}$, and a finite set of algebraic curves, $L$, in $\mathbb{C}^{2}$ of degree $\leq d$ such that any two curves: (i) do not share a common irreducible component; (ii) intersect transversally at smooth points. Then, for any $\epsilon>0$, \begin{equation}\label{estimateepsilonleft} |\mathcal{I}(P,L)|\lesssim_{\epsilon,d} |P|^{\tfrac{A}{2A-1}+\epsilon}|L|^{\tfrac{2A-2}{2A-1}}+|P|+|L| \end{equation} and \begin{equation}\label{estimateepsilonright} |\mathcal{I}(P,L)|\lesssim_{\epsilon,d} |P|^{\tfrac{A}{2A-1}}|L|^{\tfrac{2A-2}{2A-1}+\epsilon}+|P|+|L|. \end{equation} \end{thm} \section{Proof of Theorem~\ref{main theorem}} \subsection{The initial bound} In this subsection we prove the initial bound by double counting. \begin{definition} Given a point $p\in \PP^{2}$, let $H_{p}$ denote the corresponding hyperplane in $(\PP^{A})^{*}$ via the Veronese embedding and duality. Given a finite set of points $\Gamma=\{p_{1},\dots,p_{n}\}$, we define $m_{d}(\Gamma)=\dim(\cap H_{p_{i}})$, which measures the dimension of the family of degree $d$ curves passing through all the points in $\Gamma$. In particular, if $m_{d}(\Gamma)=0$, there is at most one curve of degree $d$ that passes through $\Gamma$. \end{definition} \begin{proof}[Proof of Lemma \ref{trivial bound}] We may remove curves containing fewer than $d^{2}+1$ points by adding $\OO(|L|)$ to the bound, so we may assume that each curve contains at least $d^{2}+1$ points of $P$. Fix a curve $l\in L$.
We call an $A$-tuple $\Gamma'$ \emph{good} if it is a subset of some $\Gamma\subset l\cap P$ with $|\Gamma|=d^{2}+1$ and $m_{d}(\Gamma')=m_{d}(\Gamma)$. For any $(d^{2}+1)$-tuple $\Gamma\subset l\cap P$, there exists a good $A$-tuple $\subset \Gamma$, since degree $d$ curves are parametrized by the $A$-dimensional space $(\PP^{A})^{*}$. Since $\cap_{p\in \Gamma}H_{p}$ and $\cap_{p\in \Gamma'}H_{p}$ have the same dimension and $\cap_{p\in \Gamma}H_{p}\subseteq\cap_{p\in \Gamma'}H_{p}$, the two subspaces are in fact the same. In other words, any curve in $L$ passing through $\Gamma'$ must pass through all of $\Gamma$. Since curves in $L$ do not have a common irreducible component, by B\'{e}zout's theorem, every set of $d^{2}+1$ points lies on at most one curve in $L$. Hence, every good $A$-tuple determines a unique curve in $L$. There are at least ${|l\cap P| \choose d^2+1}/{ |l\cap P| - A\choose d^{2}+1-A}$ distinct good $A$-tuples $\Gamma'$ determining $l$. The number of good $A$-tuples controls the number of points in $l\cap P$, because ${|l\cap P| \choose d^2+1}/{ |l\cap P| - A\choose d^{2}+1-A}={|l\cap P|\choose A}/{d^{2}+1\choose A}= \Theta_{d}(|l\cap P|^{A})\gtrsim|l\cap P|$. On the other hand, the number of $A$-tuples is ${|P|\choose A}= \OO(|P|^{A})$. Then, $$|\mathcal{I}(P,L)| = \sum_{l \in L} |l\cap P|\lesssim |P|^{A}+|L|,$$ where the $|L|$ term comes from the first step, when we deleted the curves containing fewer than $d^{2}+1$ points of $P$. \end{proof} \subsection{Polynomial method}\label{polynomialpartitioningsection} Now we can apply the polynomial method to the initial bound and conclude the proof of Theorem \ref{main theorem}. We shall use the following polynomial partitioning proposition (see, for example, Theorem 4.1 in \cite{guth2010erdos}): \begin{prop}\label{cell decomposition} Let $P$ be a finite set of points in $\mathbb{R}^m$ and let $D$ be a positive integer.
Then, there exists a nonzero polynomial $Q$ of degree at most $D$ and a decomposition \[\mathbb{R}^{m}=\{Q=0\}\cup U_{1}\cup\cdots \cup U_{M}\] into the hypersurface $\{Q=0\}$ and a collection $U_{1},\ldots ,U_{M}$ of open sets (which we call \emph{cells}) bounded by $\{Q=0\}$, such that $M =\OO_{m}( D^m)$ and that each cell $U_{i}$ contains $\OO_{m}(|P|/D^{m})$ points. \end{prop} \begin{proof}[Proof of Theorem \ref{main theorem}] Applying Proposition~\ref{cell decomposition}, we find a polynomial $Q$ of degree at most $D$ (with $D$ to be chosen later) that partitions $\mathbb{R}^{2}$ into $M$ cells: \[\mathbb{R}^{2}=\{Q=0\}\cup U_{1}\cup\cdots \cup U_{M},\] where $M=\OO( D^2)$ and $U_{1},\ldots ,U_{M}$ are open sets bounded by $\{Q=0\}$; write $P_{i}=U_{i}\cap P$. Let $L_{i}$ be the set of curves that have non-empty intersection with $U_{i}$. Then $|P_{i}|=\OO(|P|/D^{2})$. By B\'{e}zout's theorem, every curve meets $\{Q=0\}$ in at most $d\cdot D = \OO_{d}(D)$ points. Fix a curve $l\in L$; by Harnack's curve theorem~\cite{harnack1876ueber}, the curve itself has $\OO_d (1)$ connected components. Moreover, since a component of $l$ is either contained in $U_{i}$ for some $i$ or meets the partition surface $\{Q=0\}$, $l$ can only meet $\OO_{d}(D)$ many $U_{i}$'s. Thus, we have the inequality $\sum_{i} |L_{i}|\lesssim_{d} D|L|$. We may assume that every curve is irreducible; otherwise we can replace each curve with its irreducible components, and the number of curves increases by only a constant factor. Let $P_{cell}$ denote the points of $P$ in cells $U_{i}$ and let $P_{alg}$ denote those on the partition surface $\{Q=0\}$. Similarly, let $L_{alg}$ denote those curves that belong to $\{Q = 0 \}$ and let $L_{cell}$ be the union of the other curves.
We deduce by Lemma \ref{trivial bound} and H\"{o}lder's inequality: \begin{align}\label{polypartitionestimate} |\mathcal{I}(P,L)|&= |\mathcal{I}(P_{cell}, L_{cell}) |+ |\mathcal{I}(P_{alg},L_{cell}) |+|\mathcal{I}(P_{alg}, L_{alg})|\nonumber\\ &\lesssim_{d} \sum_i (|P_{i}|^{A}+|L_{i}|) + D|L_{cell}|+|\mathcal{I}(P_{alg}, L_{alg})|\nonumber\\ &\lesssim_{d} |P|^{A}D^{-2(A-1)}+D|L|+|\mathcal{I}(P_{alg}, L_{alg})|. \end{align} In addition, we may assume that $ |P|^{1/2}\leq |L|\leq |P|^{A}$; otherwise (\ref{estimatemainthm}) already holds, either by Lemma \ref{trivial bound} or by another initial bound $|\mathcal{I} (P, L)| \lesssim |P| + |L|^2$ (every two curves intersect in at most $\OO_{d}(1)$ points). In this case, we may choose $D =\OO_{d}( |P|^{\tfrac{A}{2A-1}} |L|^{-\tfrac{1}{2A-1}})$ with $D\leq|L|/2$; this choice balances the first two terms on the right-hand side of (\ref{polypartitionestimate}), which are then $\OO_{d}( |P|^{\tfrac{A}{2A-1}}|L|^{\tfrac{2A-2}{2A-1}})$. Since $|L_{alg}| \leq D \leq \frac{|L|}{2}$, we can perform a dyadic induction on $|L|$. By repeating the above process on $L_{alg}$, we obtain the following: \begin{align} |\mathcal{I}(P,L)|&\lesssim_{d} \sum_{ 0\leq i \leq \log(|L|/|P|^{1/2})} |P|^{\tfrac{A}{2A-1}}(2^{-i}|L|)^{\tfrac{2A-2}{2A-1}} + |P|+|L|\nonumber\\&\lesssim_{d}|P|^{\tfrac{A}{2A-1}}|L|^{\tfrac{2A-2}{2A-1}}+|P|+|L|.\nonumber \end{align} The induction stops when $i> \log(|L|/|P|^{1/2})$ because when $2^{-i}|L|\lesssim |P|^{1/2}$ the number of incidences in the $i$-th step is bounded by $\OO( |P| )$. This proves (\ref{estimatemainthm}). \end{proof} \section{An estimate for parametrized curves} We prove an initial bound for parametrized curves, which implies Corollary \ref{variety}. We first state two propositions that we will need to prove the initial bound.
\begin{prop}(see \cite{hartshorne1977algebraic}, Ch I, Exercise 1.8)\label{dimension diminute} If $V$ is an $r$-dimensional variety in $\mathbb{F}^{d}$, and $P :\mathbb{F}^{d}\rightarrow \mathbb{F}$ is a polynomial which is not identically zero on $V$, then every component of $V\cap \{P=0\}$ has dimension $r-1$. \end{prop} \begin{prop}(see \cite{fulton1984intersection}, Section 2.3)\label{refine Bezout} Let $V_{1},\ldots, V_{s}$ be subvarieties of $\mathbb{FP}^{N}$, and let $Z_{1},\ldots ,Z_{r}$ be the irreducible components of $V_{1}\cap\cdots \cap V_{s}$. Then, \[\sum_{i=1}^{r} \deg(Z_{i})\leq \prod_{j=1}^{s} \deg(V_{j}).\] \end{prop} \begin{lem}\label{algebraicsettrivialbound} With the same setting and notation as in Corollary \ref{variety}, we have \begin{equation} |\mathcal{I}(P,L)|\lesssim_{\mathcal{M}}|P|^{k}+|L|. \end{equation} \end{lem} \begin{proof} Without loss of generality we assume that $\mathbb{F}$ is an algebraically closed field, every curve in $L$ is irreducible, of degree $d$, and contains more than $d^{2}+1$ points from $P$. A curve $l\in L$ corresponds to a point of the intersection $\cap_{p\in l\cap P}H_{p}\cap M$. Comparing with the proof of Lemma \ref{trivial bound}, it suffices to prove that for every $(d^{2}+1)$-tuple $\Gamma$ in $l\cap P$, there is a $\Gamma' \subseteq \Gamma$ such that $\cap_{p\in \Gamma'} H_{p}\cap M$ contains at most $\OO_{\mathcal{M}} (1)$ curves and $|\Gamma'|=k$. This follows from Proposition~\ref{dimension diminute} and Proposition~\ref{refine Bezout} above. Indeed, by iteratively applying Proposition~\ref{dimension diminute}, we can choose $\Gamma'$ such that $| \Gamma'|=k$ and $\cap_{p\in\Gamma'}H_{p}\cap\mathcal{M}$ has dimension $0$. By Proposition \ref{refine Bezout}, the cardinality of $\cap_{p\in\Gamma'}H_{p}\cap\mathcal{M}$ is bounded by a constant depending on $k$ and $\deg \mathcal{M}$.
\end{proof} \section{A theorem on the complex field with $\epsilon$} In this section, we follow the approach of \cite{solymosi2012incidence} to sketch a proof of Theorem \ref{complex epsilon}. The idea is to partition the point set $P$ with a polynomial of constant degree and then use induction on the size of $P$. In other words, the degree does not depend on the size of $P$ and $L$. With an $\epsilon$ loss in the exponents, one can perform induction on $|P|$, which controls incidences in the complement of the partition surface (in the cells). Here, a constant degree implies constant complexity, and we can use dimension reduction to estimate incidences on the partition surface. \begin{proof}[Proof of Theorem \ref{complex epsilon}] In our proof, $C$ is a constant depending only on $d$ and $\epsilon$ which may vary from place to place. $C_0, C_1$ and $C_2$ are positive constants to be chosen later, where $C_{0}, C_1 >2$ are sufficiently large depending on $d$ and $\epsilon$, and $C_{2}$ is sufficiently large depending on $C_{1}$, $C_{0}$, $d$ and $\epsilon$. We do an induction on $|P|$. Suppose for $|P'|\leq \tfrac{|P|}{2}$ and $|L'| \leq |L|$, we have by the induction hypothesis, \begin{equation}\label{hypothesis} |\mathcal{I}(P',L')|\leq C_{2}|P'|^{\tfrac{A}{2A-1}+\epsilon}|L'|^{\tfrac{2A-2}{2A-1}}+C_{0}(|P'|+|L'|). \end{equation} Our goal is to prove \begin{equation}\label{inductiveclaim} |\mathcal{I}(P,L)|\leq C_{2}|P|^{\tfrac{A}{2A-1}+\epsilon}|L|^{\tfrac{2A-2}{2A-1}}+C_{0}(|P|+|L|). \end{equation} We apply Proposition \ref{cell decomposition} to $D = C_{1}$ on $\mathbb{C}^2 \simeq \mathbb{R}^{4}$ and obtain the following partition: \begin{equation} \mathbb{R}^{4}=\{Q=0\}\cup U_{1}\cup\cdots \cup U_{M}. \end{equation} Here, $Q: \mathbb{R}^{4}\rightarrow\mathbb{R}$ has degree at most $C_{1}$, $M =\OO( C_{1}^{4})$, and $|P_{i}| = |P \cap U_{i}| = O(\tfrac{|P|}{C_{1}^{4}})\leq \tfrac{|P|}{2}$. 
Let $L_{i}$ denote the set of curves in $L$ with nonempty intersection with $U_{i}$. Thus, by the induction hypothesis we have, \begin{align} |\mathcal{I}(P_{i},L_{i})| \leq & C_{2}|P_{i}|^{\tfrac{A}{2A-1}+\epsilon}| L_{i}|^{\tfrac{2A-2}{2A-1}}+C_{0}(|P_{i}| + |L_{i}|)\nonumber\\ \leq & C [C_{2}C_{1}^{-4(\tfrac{A}{2A-1}+\epsilon)}|P|^{\tfrac{A}{2A-1}+\epsilon}|L_{i}|^{\tfrac{2A-2}{2A-1}}+C_{0}(\tfrac{|P|}{C_1^4}+|L_{i}|)]. \end{align} For $l$ belonging to some $L_{i}$, we apply a result in real algebraic geometry which implies that the number of connected components of $l\setminus \{Q=0\}$ is at most $\OO_{d} (C_{1}^{2})$ (see Theorem A.2 of \cite{solymosi2012incidence}, as well as \cite{milnor1964betti}, \cite{petrovskii1949topology}, \cite{thom1965homologie}). We deduce that \begin{equation} \sum_{i=1}^{M}|L_{i}| \leq CC_{1}^{2}|L|. \end{equation} Adding up $|\mathcal{I}(P_i,L_{i})|$ and applying H\"{o}lder's inequality, we obtain \begin{align} |\mathcal{I}(P_{cell},L_{cell})| = & \sum_{i=1}^{M}|\mathcal{I}(P_{i},L_{i})|\nonumber\\ \leq & C(C_{2}C_{1}^{-4(\tfrac{A}{2A-1}+\epsilon)}|P|^{\tfrac{A}{2A-1}+\epsilon}(\sum_{i=1}^{M}|L_{i}|^{\tfrac{2A-2}{2A-1}})+C_0 (|P|+C_{1}^{2}|L|))\nonumber\\ \leq & C(C_{1}^{-4\epsilon}C_{2}|P|^{\tfrac{A}{2A-1}+\epsilon}|L|^{\tfrac{2A-2}{2A-1}}+C_{0}(|P|+C_{1}^{2}|L|)). \end{align} Now we recall two trivial bounds: the first was explained in the real case (any two curves intersect in $\OO_{d}(1)$ points), and the second is Lemma~\ref{trivial bound}: \begin{equation}\label{L2+P} |\mathcal{I}(P,L)|\lesssim_{d} |L|^{2}+|P|,~~~~~ |\mathcal{I}(P,L)|\lesssim_{d} |P|^{A}+|L|. \end{equation} Thus, one may assume that $|P|^{\tfrac{1}{2}}\lesssim_{d}|L|\lesssim_{d}|P|^{A}$; otherwise we immediately have $|\mathcal{I}(P,L)|\lesssim_{d}|P|+|L|$, and it suffices to choose $C_0$ larger than the implicit constant.
With this assumption, we have \begin{equation}\label{PcellLcell} |\mathcal{I}(P_{cell},L_{cell})| \leq C(C_{1}^{-4\epsilon}C_{2}+C_{0}(|P|^{-\epsilon}+C_{1}^{2}|P|^{-\epsilon}))|P|^{\tfrac{A}{2A-1}+\epsilon}|L|^{\tfrac{2A-2}{2A-1}}. \end{equation} If the following inequality is given \begin{equation}\label{PalgL} \mathcal{I}(P_{alg}, L)\lesssim_{C_1}|P|^{\tfrac{A}{2A-1}}|L|^{\tfrac{2A-2}{2A-1}} + |P|+|L|, \end{equation} then $\mathcal{I}(P_{alg}, L) \lesssim_{C_1} |P|^{\tfrac{A}{2A-1}}|L|^{\tfrac{2A-2}{2A-1}}$ by our assumption that $|P|^{\tfrac{1}{2}}\lesssim_{d}|L|\lesssim_{d}|P|^{A}$. Hence, combining this with (\ref{PcellLcell}), a careful choice of $C_0, C_1$ and $C_2$ gives us (\ref{inductiveclaim}). When $|P| = 1$, (\ref{inductiveclaim}) is trivial, and we obtain (\ref{estimateepsilonleft}). For (\ref{estimateepsilonright}) the argument is similar and is omitted. Finally, (\ref{PalgL}) follows from Proposition~\ref{inductionondim} below when $r=3$, $D=C_{1}$ and $\Sigma=\{Q=0\}$. \end{proof} \begin{prop}\label{inductionondim} Let $P$ and $L$ be as in Theorem~\ref{complex epsilon}, $0\leq r\leq 3$, and let $\Sigma$ be a subvariety in $\mathbb{C}^2 \simeq \mathbb{R}^{4}$ of (real) dimension $\leq r$ and of degree $\leq D$. Then, \begin{equation} \mathcal{I}(P\cap \Sigma, L) \lesssim_{D} |P|^{\tfrac{A}{2A-1}}|L|^{\tfrac{2A-2}{2A-1}}+|P|+|L|. \end{equation} \end{prop} \begin{proof} When $r=0$, $\Sigma$ is a single point and the inequality trivially holds. When $r=1$, we decompose $\Sigma=\Sigma_{1}\cup\Sigma_{2}$, where every component of $\Sigma_{1}$ belongs to some curve in $L$ and $\Sigma_{2}$ has no common component with any curve in $L$. Then, $|\mathcal{I}(P\cap \Sigma_{1}, L)|\lesssim_{D}|P|$ and $|\mathcal{I}(P\cap \Sigma_{2}, L)|\lesssim_{D, d}|L|$. Now we deal with the case $r=2$.
By an algebraic geometry result (see, for example, Corollary 4.5 in \cite{solymosi2012incidence}), one can decompose $\Sigma$ into smooth points of subvarieties \[\Sigma = \Sigma^{smooth}\cup\bigcup_{i}\Sigma_{i}^{smooth},\] where the $\Sigma_{i}$'s are subvarieties of $\Sigma$ of dimension $\leq 1$ and of degree $\OO_{D}(1)$. The number of $\Sigma_{i}$'s is at most $\OO_{D}(1)$. It suffices to bound $\mathcal{I}(P\cap\Sigma^{smooth}, L)$. If $l_{1}, l_{2} \subseteq \Sigma$ intersect at $p\in \Sigma^{smooth}$, then by considering the tangent space and the transversality assumption, we know that $p$ is a singular point of $l_{1}$ or $l_{2}$. Since each curve has $\OO_d (1)$ singular points, we obtain \begin{equation}\label{SigmaLalg} \mathcal{I}(P\cap\Sigma^{smooth}, L_{alg}) \leq |P| + \OO_d (1)|L|. \end{equation} It remains to estimate $\mathcal{I}(P\cap\Sigma^{smooth}, L_{cell})$. If $l$ does not belong to $\Sigma$, then by Corollary 4.5 of \cite{solymosi2012incidence} the intersection of $l$ and $\Sigma$ can be decomposed as $l\cap\Sigma=\cup_{j=0}^{J (l)}l_{j}$ for some $J(l) = \OO_{D}(1)$, where $l_{j}$ is an algebraic variety of dimension $\leq 1$ and of degree $\OO_D (1)$ for each $1\leq j\leq J(l)$. Let $\mathcal{I}_{l, j}$ denote the set $\{p \in P: p \in l_{j}\}$. Then we obtain \begin{equation} |\mathcal{I}(P\cap \Sigma^{smooth}, L_{cell})|\leq \sum_{l, j: j \leq J(l)} |\mathcal{I}_{l, j}|. \end{equation} If $l_{j}$ is not the union of $\OO_D (1)$ points, then $l_{j}$ belongs to a unique $l$, because distinct curves in $L$ do not share a common component. By taking a generic projection from $\mathbb{R}^4$ to $\mathbb{R}^2$, we can apply the arguments in the proof of Theorem~\ref{main theorem}: use the initial bound given by $L$ and $P$, then apply the polynomial method. When $r=3$, we can repeat the proof of the case $r=2$, assuming that the bound holds for $r\leq 2$.
\end{proof} We also have the following corollary for complex curves parametrized by an algebraic variety: \begin{cor}\label{complex variety} Given a finite point set $P\subset \mathbb{C}^{2}$, an integer $d\geq 1$, $A={d+2\choose 2} -1$ and a subset $\mathcal{M}\subseteq S_d$ parametrized by an algebraic variety of dimension $\leq k$. Let $L$ be a finite subset of $\mathcal{M}$ such that any two distinct curves of $L$ do not share a common component and intersect transversally at smooth points. Then, for any sufficiently small $\epsilon>0$, \begin{equation}\label{estimatecomplexepsilonleft} |\mathcal{I}(P,L)|\lesssim_{\epsilon, \mathcal{M}} |P|^{\tfrac{k}{2k-1}+\epsilon}|L|^{\tfrac{2k-2}{2k-1}}+|P|+|L| \end{equation} and \begin{equation}\label{estimatecomplexepsilonright} |\mathcal{I}(P,L)|\lesssim_{\epsilon, \mathcal{M}} |P|^{\tfrac{k}{2k-1}}|L|^{\tfrac{2k-2}{2k-1}+\epsilon}+|P|+|L|. \end{equation} \end{cor} \end{document}
\begin{document} \twocolumn[ \icmltitle{{DELFI}\xspace: Deep Mixture Models for Long-term Air Quality Forecasting in the Delhi National Capital Region} \begin{icmlauthorlist} \icmlauthor{Naishadh Parmar}{iitk} \icmlauthor{Raunak Shah}{iitk} \icmlauthor{Tushar Goswamy}{iitk} \icmlauthor{Vatsalya Tandon}{iitk} \icmlauthor{Ravi Sahu}{iitk} \icmlauthor{Ronak Sutaria}{resp} \icmlauthor{Purushottam Kar}{iitk} \icmlauthor{Sachchida Nand Tripathi}{iitk} \end{icmlauthorlist} \icmlaffiliation{iitk}{Indian Institute of Technology, Kanpur, India} \icmlaffiliation{resp}{Respirer Living Sciences, Mumbai, India} \icmlkeywords{Machine Learning, Air Quality Forecasting, ICML} \vskip 0.3in ] \printAffiliationsAndNotice{} \begin{abstract} The identification and control of human factors in climate change is a rapidly growing concern and robust, real-time air-quality monitoring and forecasting plays a critical role in allowing effective policy formulation and implementation. This paper presents {DELFI}\xspace, a novel deep learning-based mixture model to make effective long-term predictions of Particulate Matter (PM) 2.5 concentrations. A key novelty in {DELFI}\xspace is its multi-scale approach to the forecasting problem. The observation that point predictions are more suitable in the short-term and probabilistic predictions in the long-term allows accurate predictions to be made as much as 24 hours in advance. {DELFI}\xspace incorporates meteorological data as well as pollutant-based features to ensure a robust model that is divided into two parts: (i) a stack of three Long Short-Term Memory (LSTM) networks that perform differential modelling of the same window of past data, and (ii) a fully-connected layer enabling attention to each of the components. 
Experimental evaluation based on deployment of 13 stations in the Delhi National Capital Region (Delhi-NCR) in India establishes that {DELFI}\xspace offers far superior predictions, especially in the long term, as compared to even non-parametric baselines. The Delhi-NCR recorded the 3rd highest PM levels amongst 39 mega-cities across the world during 2011-2015, and {DELFI}\xspace's performance establishes it as a potential tool for effective long-term forecasting of PM levels to enable public health management and environment protection. \end{abstract} \section{Introduction} \label{sec:intro} The global challenge of climate change demands a multi-faceted response. Rapid, robust, and real-time identification of human sources of climate change such as combustion, mining, and other activities is a key aspect in enabling agile and adaptive policy and regulatory decisions. Climate change positively correlates with air pollution levels: air pollutants such as black carbon (a constituent of particulate matter (PM)), methane, tropospheric ozone, and aerosols also affect the amount of incoming sunlight, contributing to global temperature rise and glacial degradation~\cite{Manisalidis2020}. For instance, fossil fuel and biomass combustion are the biggest sources of black carbon aerosols, which contribute both to PM levels and to accelerated glacier melting in the Himalayas~\cite{Patella2018}. In particular, PM2.5 refers to particulate matter with a diameter less than 2.5$\mu$m and includes combustion by-products, metals and organic materials. PM2.5 is classified as an atmospheric pollutant of high concern since, owing to its small size and comparatively large surface area, it can remain suspended for extended periods, be easily transported, and infiltrate the pulmonary and circulatory systems if inhaled.
Chronic exposure to high PM2.5 levels has been linked to heart attacks and strokes \cite{PopeIII2002}, and respiratory diseases such as lung cancer \cite{Kampa2008}. Maternal exposure to high PM2.5 levels elevates the risk of congenital heart defects in infants~\cite{Zhang2016}. Effective monitoring and regulatory control of PM2.5 levels presents the need for a robust long-term forecasting model for PM2.5, especially in high-pollution regions such as the Delhi National Capital Region (NCR) in India, where extremely high PM2.5 levels (484 on a scale of 500) led to the local government triggering a state of public health emergency on November 1, 2019 \cite{reuters2017}. \textbf{Technical Contributions and Impact:} The primary technical contribution of this paper is {DELFI}\xspace, a novel deep learning-based mixture model that performs long-term predictions of PM2.5 levels, with the following key technical contributions: \begin{enumerate}[nolistsep] \item Adopting a novel multi-scale forecasting strategy employing probabilistic predictions for forecasts with horizons longer than 6 hours \item A light-weight technique employing pre-computed \text{NEF}\xspace features (see Sec~\ref{sec:method}) that allow spatio-temporal effects to be incorporated without additional expense \item A mixture model architecture with 3 components, each comprising a stack of LSTM networks with attention, and an alternating optimization-based training strategy \item Forecasts for time horizons as large as 24 hours into the future that offer significantly improved accuracy compared to baseline methods, while offering short-term (1-4 hour) predictions within sensor error levels \end{enumerate} {DELFI}\xspace makes point short-term predictions (for up to 6 hours in the future) and probabilistic long-term predictions (for up to 48 hours in the future).
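As a toy illustration of this multi-scale idea (this is not the authors' implementation: the component means, standard deviations and attention scores below are hypothetical stand-ins for the outputs of the three LSTM stacks and the fully-connected aggregator), a mixture can yield both a point forecast and a predictive spread:

```python
import math

def softmax(scores):
    """Attention weights for the mixture components."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def mixture_forecast(component_means, component_stds, scores):
    """Toy mixture forecast: a point prediction (the attention-weighted
    mean, suitable for short horizons) and a predictive spread (the
    mixture standard deviation, suitable for long horizons)."""
    w = softmax(scores)
    mean = sum(wi * mu for wi, mu in zip(w, component_means))
    # Mixture variance: expected component variance plus the spread
    # of the component means around the mixture mean.
    var = sum(wi * (sd**2 + (mu - mean) ** 2)
              for wi, mu, sd in zip(w, component_means, component_stds))
    return mean, math.sqrt(var)
```

The point is that the same set of components supports both output types; only the read-out changes with the forecast horizon.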
Experiments indicate that the model can be used to reliably design long-term air quality forecasting and early-warning systems that enable rapid regulatory action of a preventative nature. Given the close link between air quality and the drivers of climate change, this can not only safeguard the health of citizens against the harmful effects of air pollution, but also mitigate the adverse impact of human activities on climate change. \textbf{Related Works and Contributions of {DELFI}\xspace in Context}: At a high level, most existing works use a monolithic Long Short-Term Memory (LSTM)-based model to make short-term predictions, often next-hour but at most up to 6 hours. In contrast, {DELFI}\xspace offers accurate probabilistic predictions up to 24 hours in advance that make it suitable for use in designing early-warning systems. \citet{delhilstm} uses data on CO, NO2, NO, Ozone, PM2.5 and SO2 levels from high-grade monitors to perform next-hour prediction of the same levels using LSTMs. In contrast, {DELFI}\xspace uses PM1, PM10 and PM2.5 concentrations from low-cost sensors in addition to meteorological features to offer predictions over much longer time scales as well as MAE values that are 35\% lower. \citet{Chaudhary2018TimeSB} use LSTMs to make next-hour predictions but use additional sources of seasonal data such as holidays and traffic data, to which {DELFI}\xspace does not assume access. \citet{Kumar2011PCR} apply principal component regression to make short-term predictions at a single air quality monitoring station. In contrast, {DELFI}\xspace ingests data from, and makes simultaneous predictions for, more than a dozen locations. Ensemble models have also been considered, such as \cite{Liu2019}, which incorporates 5 models but only offers predictions up to 6 hours in advance, and \cite{Bai2019}, which uses an ensemble LSTM network to make next-hour predictions.
In contrast, {DELFI}\xspace makes use of fewer but diverse components in its mixture, and does a careful assignment of the data used to train each component, finally using an aggregator network to assign attention weights to each component. \section{{DELFI}\xspace: \underline{D}eep Mixtur\underline{E}-models for \underline{L}ong term air-quality \underline{F}orecast\underline{I}ng} \label{sec:method} \textbf{Data}: {DELFI}\xspace is trained on air pollution as well as meteorological data from 13 stations in the region of Delhi-NCR, India, spanning latitudes $28^{\circ}27'15''$ to $28^{\circ}38'40''$ and longitudes $77^{\circ}4'26''$ to $77^{\circ}19'40''$. Data collection was done at intervals of 1 hour in the time period of 1 November 2018 00:00:00 to 28 March 2019 23:00:00. PM2.5 concentrations that appeared extremely elevated were not removed as outliers from the data, so as to enable the model to be trained and tested on predicting spikes in PM2.5 levels. \textbf{Training Features}: {DELFI}\xspace uses a total of 9 features, each available as a time-series at each station, to perform forecasting: (i) PM1, (ii) PM10, (iii) PM2.5, (iv) Temperature, (v) Humidity, (vi) Visibility, (vii) Wind speed, (viii) Wind direction and (ix) the Net external flow (NEF). Of these, the first 8 features are standard and available from the monitoring stations or from standard APIs. However, the last feature, NEF (described below), was engineered to allow {DELFI}\xspace to take spatio-temporal effects of long-range air flow into account. A \emph{data-point} $w_{i, t}$ is defined as a six-hour window (ending at timestamp $t$) of these pollutant and meteorological features taken at one-hour intervals at station $i$. The true PM2.5 concentration value at station $i$ at timestamp $t$ will be denoted by $PM_{i, t}$ below. \textbf{The \text{NEF}\xspace Feature}: This feature attempts to capture, in a relatively inexpensive manner, the effect of PM2.5 concentrations at other stations given wind directions and velocities.
For a station $a$ and time $t$, this feature is defined as \[ \text{NEF}\xspace_{a, t} = \frac{1}{1+e^{-x}}, \quad x = \sum_i PM_{i, t} \times V_{i, t}\times \cos(\theta_a^i - \phi_{i,t}), \] where $PM_{i, t}$, $V_{i,t}$ and $\phi_{i, t}$ are respectively the true PM2.5 concentration, wind speed and wind bearing at station $i$ at time $t$, and $\theta_a^i$ is the bearing between station $a$ and station $i$ based on the World Geodetic System (WGS84). \textbf{A Key to Long-term Forecasting}: As discussed in Section~\ref{sec:intro}, most existing work attempts to make short-term forecasts (up to 6 hours). Long-term forecasting rapidly deteriorates in quality, possibly due to the lack of complete knowledge of all factors affecting air quality as well as the chaotic nature of aerodynamic systems. However, {DELFI}\xspace exploits the observation that when making longer-term forecasts, say 24 hours in advance, a \textit{point} prediction becomes less critical. Specifically, if making a 24-hour forecast at 1100hrs today, it is not critical to know the PM2.5 levels at exactly 1100hrs tomorrow. Rather, the \textit{distribution} of PM2.5 levels around 1100hrs (e.g. in the period 0900-1300 hrs, are the levels likely to remain mild or can they spike) becomes more important, both for policy formulation and for modulating personal behavior. To exploit this observation, {DELFI}\xspace uses the air-quality categorization set forth by regulatory authorities in India, namely 0-30 (Good), 30-60 (Satisfactory), 60-90 (Moderately polluted), 90-120 (Poor), 120-250 (Very poor) and 250+ (Severe), to create 6 \textit{bins}. Thus, when making long-term forecasts, say in the above example, {DELFI}\xspace predicts a discrete probability distribution over these 6 bins that indicates the probability of hourly PM2.5 values within the 0900-1300 hrs time window falling in each bin. Experiments find that probabilistic predictions perform better for long-term forecasts.
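As an illustration, the \text{NEF}\xspace computation above can be sketched in a few lines of Python (a hypothetical sketch, not the deployed implementation; the function name \texttt{nef} and the sign convention for the angle difference are assumptions of this sketch):

```python
import math

def nef(pm, wind_speed, wind_bearing, station_bearings):
    """Sketch of the NEF feature for one target station at one timestamp.

    pm[i], wind_speed[i], wind_bearing[i]: readings at station i (bearings in degrees).
    station_bearings[i]: bearing theta_a^i from the target station a to station i.
    Returns the logistic squashing of the wind-weighted inflow sum x.
    """
    x = sum(
        pm[i] * wind_speed[i] * math.cos(math.radians(station_bearings[i] - wind_bearing[i]))
        for i in range(len(pm))
    )
    return 1.0 / (1.0 + math.exp(-x))
```

Under this convention, wind blowing from a polluted station toward the target pushes the feature toward 1, while wind blowing away pushes it toward 0; with no wind the feature sits at 0.5.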
\textbf{Data Setup}: To make point predictions, which are more useful in the short term, {DELFI}\xspace trains on \textit{residuals}, as is common in the literature~\cite{zheng2015forecasting}: rather than learning to predict $PM_{i, (t+1)}$ directly for a next-hour point prediction, {DELFI}\xspace learns to predict $\Delta PM_{i, \left(t+1\right)} = PM_{i, \left(t+1\right)} - PM_{i, t}$ instead. Thus, for next-hour point predictions, the set $\{w_{i, t}, \Delta PM_{i, \left(t+1\right)}\}$ defines the training set. To make longer-term point predictions, say given an $s$-hour \textit{horizon} (i.e. predicting $s$ hours into the future), {DELFI}\xspace simply makes next-hour predictions $s-1$ times, sliding the 6-hour window by a single hour each time using its own predicted values. This allows {DELFI}\xspace to use a single model to make point predictions at various horizons, say 1 hour, 2 hours, etc., and not learn a separate model for every horizon. However, in line with past works, our experiments show that point predictions become severely inaccurate beyond 6-hour horizons. {DELFI}\xspace adopts probabilistic predictions for longer-term horizons, say 8, 12, 24 and 48 hours, as discussed above. For such a long-term horizon $s$ and data-point $w_{i, t}$, a window of length $s$ centred at $t+s$, i.e. $PM_{i, \tau}$ for $\tau \in [t+s/2, t+3s/2)$, is used to create a normalized histogram, denoted by $h_{i,t+s}$, of these values binned into the 6 bins described above. The set $\{w_{i, t}, h_{i,t+s}\}$ then defines the training set. A different training set is thus created for every value of the horizon $s$. For point as well as probabilistic predictions, data points created out of the first 85\% of timestamps of each station's data were used as training data and those created out of the final 15\% of each station's data were used as test data. \begin{figure} \caption{Model architecture used by {DELFI}\xspace for long-term probabilistic predictions. $B$ is batch-size.
The blue and orange portions of the network are trained alternately as described in Algorithm~\ref{algo:train}.} \label{fig:amtold-long} \end{figure} \textbf{Model and architecture}: Initial experiments were conducted with 13 separate LSTM-based models, one trained on data from each of the 13 stations in the deployment. However, this strategy neither took advantage of the much larger amount of overall data, nor did models learnt for one station do well in predicting values for another station. On the other hand, given the diversity of the stations (some were prominent PM2.5 hotspot candidates whereas others reported much milder PM2.5 values), it was also expected that a single model might struggle to address these extremes. {DELFI}\xspace's solution is a mixture model that consults 3 components, each being a stack of LSTM networks, with attention weights being learnt for each component (see Figure~\ref{fig:amtold-long}). More specifically, each component consists of a series of stacked LSTMs which take in the features $w_{i,t}$ as sequential input and offer a sequential output. {DELFI}\xspace develops two distinct mixture models: one for short-term predictions and one for long-term predictions. A fully connected layer is used as the \textit{aggregator} that offers the attention weights. For short-term predictions (see Figure \ref{fig:amtold-short} in the appendix), each component predicts a certain residual value, for example $\hat{\Delta PM}_{i,(t+1)}$ for next-hour predictions, and weights are assigned to the output of each component. For long-term predictions, the output is used to scale the concatenated sequences and passed through another fully connected layer to get the final histogram output. These large-capacity architectures allow each of the 13 diverse stations to adaptively choose the component best suited to make predictions for its data.
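The residual targets and binned histogram labels described in the Data Setup can be sketched as follows (a minimal sketch assuming hourly PM2.5 values in a Python list; the helper names and the half-open binning convention are assumptions of this sketch, with bin edges taken from the regulatory categorization above):

```python
# Hypothetical sketch of the data setup: residual targets for short-term
# point prediction and 6-bin histogram labels for a long-term horizon s.
BIN_EDGES = [0, 30, 60, 90, 120, 250]  # Good/Satisfactory/Moderate/Poor/Very poor/Severe

def bin_index(v):
    """Index of the air-quality bin a PM2.5 value falls into (250+ maps to bin 5)."""
    for i in range(len(BIN_EDGES) - 1, -1, -1):
        if v >= BIN_EDGES[i]:
            return i
    return 0

def residual_targets(pm):
    """Next-hour residuals: Delta PM_{t+1} = PM_{t+1} - PM_t."""
    return [pm[t + 1] - pm[t] for t in range(len(pm) - 1)]

def histogram_label(pm, t, s):
    """Normalized 6-bin histogram over the window [t + s/2, t + 3s/2)."""
    window = pm[t + s // 2 : t + (3 * s) // 2]
    counts = [0] * 6
    for v in window:
        counts[bin_index(v)] += 1
    total = float(len(window))
    return [c / total for c in counts]
```

The pair $(w_{i,t},\, \texttt{residual\_targets})$ then plays the role of the short-term training set and $(w_{i,t},\, \texttt{histogram\_label})$ that of the long-term one.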
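The attention-weighted combination performed by the aggregator can be sketched as follows (a stand-in sketch: the stacked-LSTM components are abstracted to scalar outputs, and softmax normalization of the aggregator's scores into attention weights is an assumption of this sketch):

```python
import math

def softmax(scores):
    """Numerically stable softmax turning raw aggregator scores into weights."""
    m = max(scores)
    exps = [math.exp(v - m) for v in scores]
    total = sum(exps)
    return [e / total for e in exps]

def mixture_predict(component_outputs, aggregator_scores):
    """Attention-weighted combination of the 3 component predictions.

    component_outputs: per-component point predictions (stand-ins for LSTM outputs);
    aggregator_scores: raw scores from the aggregator's fully connected layer.
    """
    weights = softmax(aggregator_scores)
    return sum(w * y for w, y in zip(weights, component_outputs))
```

With equal scores the mixture falls back to a plain average, while a strongly dominant score lets a single component (e.g. the one pre-trained on spike-prone stations) drive the prediction.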
\begin{table*}[t] \centering \begin{tabular}{| c || c | c | c | c | c | c | c | c | c |} \hline & 1 hr & 2 hr & 3 hr & 4 hr & 5 hr & 6 hr & 8 hr & 12 hr & 24 hr \\ \hline \hline KNN & 9.27 & 16.42 & 23.20 & 30.55 & 38.72 & 47.86 & 68.53 & 120.01 & 326.30\\ \hline Linear & 10.01 & 23.19 & 39.08 & 63.99 & 103.19 & 162.71 & 366.81 & 1356.74 & 37569.30\\ \hline {DELFI}\xspace & 10.03 & 17.49 & 25.28 & 35.22 & 47.90 & 62.58 & 95.07 & 175.43 & 450.91\\ \hline \end{tabular} \caption{Mean absolute error (MAE) in $\mu\textrm{g}$ $\textrm{m}^{-3}$ between point predictions and actual PM2.5 values for all methods and various horizon values. For short-term point predictions (e.g. 1-4 hour horizons), the KNN and linear models as well as {DELFI}\xspace are competitive and offer acceptable performance. In particular, {DELFI}\xspace MAE values for $s = 1$ hr are within sensor error levels. Performance deteriorates to unacceptable levels for all methods for long-term horizons, e.g. 8+ hrs, which indicates that point predictions are ill-suited for long-term forecasting.} \label{tab:short-term-predictions} \end{table*} \begin{algorithm}[t] \caption{{DELFI}\xspace training via Alternating Optimization} \label{algo:train} \begin{algorithmic}[1] \STATE Perform pre-training (stations grouped into 3 categories using the variance of residuals, and each group assigned to a component) \FOR{$t$ in $n_{epochs}$} \STATE Freeze the aggregator and subsequent dense layer \STATE Train the mixture model components on the data-points assigned to them for $n_t$ iterations \STATE Freeze the mixture model components \STATE Train the aggregator and subsequent dense layer for $m_t$ iterations \ENDFOR \end{algorithmic} \end{algorithm} \textbf{Pre-training}: The variance in the residual values $\Delta PM_{i, \left(t+1\right)}$ was computed for all stations. Using this statistic, stations were grouped into three categories.
Each group was assigned one component in the mixture, and data from within the group was used to pre-train model parameters for that particular component. This division was used to decide the data points used to pre-train the three individual components before starting the main training step. As a result, stations prone to spiking PM2.5 values were likely to cluster together into one group, with other stations with more gentle variations in PM2.5 levels falling into another. \textbf{Training}: An alternating optimization procedure reminiscent of the EM algorithm, described in Algorithm~\ref{algo:train}, was adopted to train both the fully connected layer (aggregator) and the individual components of the mixture. The mixture model components and aggregator have been colored differently (Figures~\ref{fig:amtold-long} and \ref{fig:amtold-short}) to highlight the alternating nature of the training algorithm. For point predictions, the mean squared loss between the predicted residual $\hat{\Delta PM}_{i, \left(t+1\right)}$ and the actual residual $\Delta PM_{i, \left(t+1\right)}$ was used to train the model. For probabilistic long-term predictions, the Kullback-Leibler divergence loss between the predicted and actual histograms was used to train the model. The Adam optimizer~\cite{Kingma14} was used to learn the parameters of the model with a learning rate of $0.005$. The training loss was found to converge reasonably within $n_{epochs} = 10$ and $n_t = m_t = 10$ iterations during experiments. \begin{table}[t] \begin{tabular}{| c || c | c | c | c | c |} \hline Time (hrs) & 6$\pm$3 & 8$\pm$4 & 12$\pm$6 & 24$\pm$12 & 48$\pm$24\\ \hline \hline KNN & 1.33 & 1.80 & 2.40 & 1.40 & 1.17\\ \hline {DELFI}\xspace & \textbf{0.32} & \textbf{0.89} & \textbf{1.38} & \textbf{0.59} & \textbf{0.53}\\ \hline \end{tabular} \caption{KL divergence between predicted and actual histograms for various horizon lengths.
Linear models were unable to provide meaningful predictions and were excluded from the comparison. {DELFI}\xspace offers KL divergence values that are at least 42\% and up to 76\% smaller than those offered by the KNN algorithm.} \label{tab:long-term-predictions} \end{table} \section{Experimental Results} \label{sec:exps} To validate {DELFI}\xspace's performance, baseline strategies popular in the air-quality calibration and forecasting literature \cite{amt2021}, such as parametric (linear) and non-parametric (k-nearest neighbors or KNN) models, were considered. These baselines often offer acceptable performance for real-time calibration as well as short-term forecasting but rapidly deteriorate on the more challenging task of long-term forecasting. The results offered by the linear and KNN models as well as {DELFI}\xspace on the point prediction task for various horizon lengths are presented in Table \ref{tab:short-term-predictions}. Table \ref{tab:long-term-predictions} presents results on the task of making probabilistic forecasts. The KNN algorithm was suitably modified to output probabilistic predictions by averaging outputs in the identified neighborhood. The results for point predictions are presented in terms of the mean absolute error (MAE) between the actual PM2.5 concentration and the predicted PM2.5 concentration, whereas results for probabilistic predictions are presented in terms of the Kullback-Leibler divergence (KL divergence) between the actual and predicted histograms. In both cases, lower values correspond to better performance. Table \ref{tab:short-term-predictions} demonstrates that for short horizon values (1-4 hours), {DELFI}\xspace offers performance that is comparable to the non-parametric KNN method. However, for long horizon values (8+ hours), point predictions from all models are extremely inaccurate, with those from the linear model being especially poor.
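The two evaluation metrics above can be computed as follows (a minimal sketch; the direction of the divergence, $\mathrm{KL}(\text{actual}\,\|\,\text{predicted})$, and the small-epsilon guard for empty predicted bins are assumptions of this sketch):

```python
import math

def mae(pred, actual):
    """Mean absolute error between point predictions and observed PM2.5 values."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)

def kl_divergence(actual_hist, pred_hist, eps=1e-9):
    """KL(actual || pred) between two normalized histograms.

    Bins with zero actual mass contribute nothing; eps guards against
    predicted bins with zero mass.
    """
    return sum(
        a * math.log(a / max(p, eps))
        for a, p in zip(actual_hist, pred_hist)
        if a > 0
    )
```

Both metrics are averaged over all test data-points per horizon; lower is better in each case.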
Although KNN does offer marginally better performance than {DELFI}\xspace for larger horizons, the outputs of neither algorithm can be deemed acceptable. Rather than a statement on the algorithms, Table \ref{tab:short-term-predictions} shows that point predictions are ill-suited for long-term forecasting. However, the trends are vastly different in Table \ref{tab:long-term-predictions}, where {DELFI}\xspace offers much superior performance as compared to the KNN algorithm, with KL divergence values as much as 76\% lower. \section{Discussion and Impact on Climate Change} This paper presents {DELFI}\xspace, a novel algorithm that introduces several technical innovations, such as the use of probabilistic predictions for long-term forecasting and a mixture model trained using an alternating strategy. Intuitively, a low-bias model is preferred for short-term predictions to properly encode local features, which explains KNN's good performance on small horizons (1-4 hrs). However, for long-term predictions, as the unpredictability of the system grows, a low-variance method is preferable instead, which KNN does not offer. {DELFI}\xspace seems to offer a suitable balance between bias and variance, allowing it to perform well in both regimes. Experimental results suggest that {DELFI}\xspace offers predictions reliable enough to design early-warning systems based on long-term forecasts. In highly polluted regions of the globe such as the Delhi-NCR, such systems offer citizens a chance to modulate their own personal behavior, e.g. avoiding outdoor activities, and can also enable regulatory authorities to take immediate preventative action as well as effect long-term policy shifts. Given the close link between air quality and the drivers of climate change discussed in Section~\ref{sec:intro}, this can not only safeguard the health of citizens but also mitigate the adverse impact of human activities on climate change in the longer term.
\appendix \section{Appendix} \begin{figure} \caption{Model architecture used by {DELFI}\xspace for short-term point predictions. $B$ is batch-size. The blue and orange portions of the network are trained alternately as described in Algorithm~\ref{algo:train}.} \label{fig:amtold-short} \end{figure} \end{document}
\begin{document} \newtheorem{thm}{Theorem} \newtheorem{lem}[thm]{Lemma} \newtheorem{rem}[thm]{Remark} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{ex}[thm]{Example} \newcommand{r.~i.}{r.~i.} \title[]{Khintchine inequality and Banach-Saks type properties in rearrangement-invariant spaces} \date{} \author[]{F.~A.~Sukochev and D. Zanin} \thanks{Research supported by the Australian Research Council.} \keywords{$p$-Banach Saks type properties, rearrangement-invariant spaces, Khintchine inequality, Kruglov property} \subjclass{46E30, 46B20} \date{} \begin{abstract} {\it We study the class of all rearrangement-invariant (=r.i.) function spaces $E$ on $[0,1]$ such that there exists $0<q<1$ for which $ \Vert \sum_{_{k=1}}^n\xi_k\Vert _{E}\leq Cn^{q}$, where $\{\xi_k\}_{k\ge 1}\subset E$ is an arbitrary sequence of independent identically distributed symmetric random variables on $[0,1]$ and $C>0$ does not depend on $n$. We completely characterize all Lorentz spaces having this property and complement classical results of Rodin and Semenov for Orlicz spaces $exp(L_p)$, $p\ge 1$. We further apply our results to the study of Banach-Saks index sets in r.i. spaces. } \end{abstract} \maketitle \section{Introduction} A classical result of Rodin and Semenov (see \cite{RS} or \cite[ Theorem 2.b.4]{LT-II}) says that the sequence of Rademacher functions $\{r_k\}_{k\ge 1}$ on $[0,1]$ in a r.i. space $E$ is equivalent to the unit vector basis of $l_2$ if and only if $E$ contains (the separable part of) the Orlicz space $L_{N_2}(0,1)$ (customarily denoted as $exp(L_2)$) where $N_2(t)=e^{t^2}-1$. Here, $\{r_k\}_{k\ge 1}$ may be thought of as a sequence of independent identically distributed centered Bernoulli variables on $[0,1]$. A quick analysis of the proof (see e.g. 
\cite[p.134]{LT-II}) shows that the embedding $exp(L_2)\subseteq E$ is established there under the weaker assumption that $\{r_k\}_{k\ge 1}$ is a $2$-Banach-Saks sequence in $E$, that is $ \Vert \sum_{_{k=1}}^nr_k\Vert _{_{E}}\leq Cn^{1/2}$, where $C>0$ does not depend on $n\ge 1$. The main object of study in the present article is the class of all r.i. spaces $E$ such that there exists $0<q<1$ for which \begin{equation}\label{mainzero} \Vert \sum_{_{k=1}}^n\xi_k\Vert _{E}\leq Cn^{q}, \end{equation} where $\{\xi_k\}_{k\ge 1}\subset E$ is an arbitrary sequence of independent identically distributed symmetric random variables on $[0,1]$ and $C>0$ does not depend on $n$. We completely characterize all Lorentz spaces from this class in Corollary \ref{lorentz alternative} below. In Theorem \ref{Marc} we obtain sharp estimates of type \eqref{mainzero} for the Orlicz spaces $exp(L_p)=L_{N_p}(0,1)$, $1\leq p<\infty$, where $N_p(t)=e^{t^p}-1$, complementing results of \cite{RS} (see also the exposition in \cite{D}). Our results also have a number of interesting implications for the study of Banach-Saks type properties in r.i. spaces. Recall that a bounded sequence $\{x_n\}\subset E$ is called a p-BS-sequence if for all subsequences $\{y_k\}\subset\{x_n\}$ we have \[ \sup\limits_{m\in N}m^{-\frac{1}{p}}\Big\|\sum\limits_{k=1}^{m}y_k\Big\|_E<\infty. \] We say that $E$ has the p-BS-property, and we write $E\in BS(p)$, if each weakly null sequence contains a p-BS-sequence. The set \[ \Gamma(E)=\{p:\:p\geq 1,\:E\in BS(p)\} \] is said to be the index set of $E$, and is of the form $[1,\gamma]$ or $[1,\gamma)$ for some $\gamma\geq 1$.
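As a simple illustration of these definitions (a standard computation added here for concreteness, not one of the paper's results): in $E=L_2[0,1]$ the Rademacher sequence is a $2$-BS-sequence, since orthonormality gives, for any subsequence $\{r_{k_j}\}$,

```latex
% Worked example: {r_k} is a 2-BS-sequence in L_2[0,1].
% For any subsequence {r_{k_j}}, orthonormality of the Rademachers yields
\Big\|\sum_{j=1}^{m} r_{k_j}\Big\|_{L_2}
  =\Big(\sum_{j=1}^{m}\big\|r_{k_j}\big\|_{L_2}^{2}\Big)^{1/2}=m^{1/2},
% and hence the p-BS condition holds for this sequence with p = 2:
\sup_{m\ge 1}\, m^{-1/2}\Big\|\sum_{j=1}^{m} r_{k_j}\Big\|_{L_2}=1<\infty .
```

This verifies the defining condition for this particular sequence; membership $2\in\Gamma(L_2)$ of course requires checking all weakly null sequences.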
If, in the preceding definition, we replace all weakly null sequences by weakly null sequences of independent random variables (respectively, by weakly null sequences of pairwise disjoint elements; by weakly null sequences of independent identically distributed random variables), we obtain the set $\Gamma_{\rm i}(E)$ (respectively, $\Gamma_{\rm d}(E)$, $\Gamma_{\rm iid}(E)$). The general problem of describing and comparing the sets $\Gamma(E)$, $\Gamma_{\rm i}(E)$, $\Gamma_{\rm iid}(E)$ and $\Gamma_{\rm d}(E)$ in various classes of r.i. spaces was addressed in \cite{SeSu-CR, DoSeSu2004, SeSu, AsSeSu2005, new-16-Sem-Suk, AsSeSu2007}. In particular, it is known \cite{AsSeSu2005} that $1\in\Gamma(E)\subseteq \Gamma_{\rm i}(E)\subseteq \Gamma_{\rm iid}(E)\subseteq [1,2]$ and $\Gamma_{\rm i}(E)\subseteq \Gamma_{\rm d}(E)$ for any r.~i.\ space $E$. Moreover, the sets $\Gamma(E)$ and $\Gamma_{\rm i}(E)$ coincide in many cases, but not always. For example, $\Gamma(L_p)=\Gamma_{\rm i}(L_p)=\Gamma_{\rm iid}(L_p)$, $1<p<\infty$ (see e.g. \cite[Corollary 4.4 and Theorem 4.5]{new-16-Sem-Suk} and also Theorem \ref{firstmain} below), whereas for the Lorentz space $L_{2,1}$ generated by the function $\psi(t)=t^{1/2},$ we have $\Gamma(L_{2,1})=[1,2)$ and $\Gamma_{\rm i}(L_{2,1})=[1,2]$ (\cite[Theorem~5.9]{new-16-Sem-Suk} and \cite[Proposition~4.12]{AsSeSu2005}). It turns out that these two situations are typical \cite[Theorem 9]{SeSu}: under the assumption that $\Gamma(E)\ne \{1\}$, we have either $\Gamma_{\rm i}(E)\setminus\Gamma(E)=\emptyset$ or else $\Gamma_{\rm i}(E)\setminus\Gamma(E)=\{2\}$. The present paper may also be considered as a contribution to the study of the class of all r.i. spaces $E$ such that $\Gamma_{\rm iid}(E)=\Gamma_{\rm i}(E)$. We prove a general theorem (see Theorem \ref{firstmain} below) stating that $\Gamma_{\rm iid}(E)=\Gamma_{\rm i}(E)$ if and only if $\Gamma_{\rm iid}(E)\subseteq \Gamma_{\rm d}(E)$.
It is easy to see that every Lorentz space $\Lambda(\psi)$ satisfies the latter condition and, using the main result described above, we give a complete characterization of all Lorentz spaces $E=\Lambda(\psi)$ such that $\Gamma_{\rm iid}(E)\neq \{1\}$ (see Theorem \ref{mainsecond} and Corollary \ref{mainsecond_add}). It is also pertinent to note here that, if one views the Rademacher system as a special example of sequences of independent mean zero random variables, then a significant generalization of the Khintchine inequality is due to W.B. Johnson and G. Schechtman \cite{JoSch1989}. They introduced the r.i. space $Z_E^2$ on $[0,\infty)$ linked with a given r.i. space $E$ on $[0,1]$ and showed that any sequence $\{f_k\}_{k=1}^\infty$ of independent mean zero random variables in $E$ is equivalent to the sequence of its disjoint translates $\{\bar f_k(\cdot):=f_k(\cdot-k+1)\}_{k=1}^\infty$ in $Z_E^2$, provided that $E$ contains an $L_p$-space for some $p<\infty$. This study was taken further in \cite{Br1994, AsSeSu2005,AsSu2006-1,AsSu2006-2}, where the connection between this (generalized) Khintchine inequality and the so-called Kruglov property was established (we explain the latter property in the next section). We show the connection between the class of all r.i. spaces with the Kruglov property and the estimates \eqref{mainzero} in Theorem \ref{Kruglov}. Recently, examples of r.i. spaces $E$ such that $\Gamma(E)= \{1\}$ but $\Gamma_{\rm i}(E)\ne \{1\}$ have been produced in \cite{AsSeSu2007} under the assumption that $E$ has the Kruglov property. Our approach in this paper complements that of \cite{AsSeSu2007}; in particular, we present examples of Lorentz and Marcinkiewicz spaces $E$ such that $\Gamma_{\rm i}(E)=\Gamma_{\rm iid}(E)\neq \{1\}$ and which do not possess the Kruglov property. Finally, we show that the equality $\Gamma_{\rm iid}(E)=\Gamma_{\rm i}(E)$ fails when $E$ is a classical space $L_{pq}$, $1<q<p<2$.
\section{Definitions and preliminaries} \subsection{Rearrangement-invariant spaces} A Banach space $(E,\Vert \cdot\Vert _{_{E}})$ of real-valued Lebesgue measurable functions (with identification $m$-a.e.) on the interval $[0,1]$ will be called {\it rearrangement-invariant} (briefly, r.i.) if \begin {enumerate} \item[(i).] $E$ is an ideal lattice, that is, if $y\in E$, and if $x$ is any measurable function on $[0,1]$ with $0\leq \vert x\vert \leq \vert y\vert $, then $x\in E$ and $\Vert x\Vert _{_{E}} \leq \Vert y\Vert _{_{E}};$ \item[(ii).] $E$ is rearrangement invariant in the sense that if $y\in E$, and if $x$ is any measurable function on $[0,1]$ with $x^*=y^*$, then $x\in E$ and $\Vert x\Vert _{_{E}} = \Vert y\Vert _{_{E}}$. \end{enumerate} \noindent Here, $m$ denotes Lebesgue measure and $x^*$ denotes the non-increasing, right-continuous rearrangement of $x$ given by $$ x^{*}(t)=\inf \{~s\ge 0:m (\{u\in [0,1]:\,| x(u)| >s\})\le t~\},\quad t>0. $$ For basic properties of r.i. spaces, we refer to the monographs \cite{KPS,LT-II}. We note that for any r.i. space $E$ we have: $L_\infty [0,1]\subseteq E\subseteq L_1[0,1].$ We will also work with the r.i. space $E(\Omega,{\mathcal {P}})$ of measurable functions on a probability space $(\Omega,{\mathcal {P}})$ given by $$ E(\Omega,{\mathcal {P}}):=\{f\in L_1(\Omega,{\mathcal {P}}):f^*\in E\}, \quad \|f\|_{E(\Omega,{\mathcal {P}})}:=\|f^*\|_E. $$ Here, the decreasing rearrangement $f^*$ is calculated with respect to the measure ${\mathcal {P}}$ on $\Omega$. Recall that for $0<\tau<\infty$, the dilation operator $\sigma_\tau$ is defined by setting $$\sigma_\tau x(t)=\begin{cases} x(t/\tau),\;0\leq t\leq\min(1,\tau) \\ 0,\; \min(1,\tau)<t\leq 1. \end{cases} $$ The dilation operators $\sigma_\tau$ are bounded in every r.i. space $E$.
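As a standard illustration of the dilation operators (a well-known computation included for the reader's convenience, not a result of this paper): for $E=L_p[0,1]$, $1\le p<\infty$, a change of variables gives

```latex
% For tau <= 1 the substitution s = t/tau gives equality; for tau >= 1 equality
% is attained by functions supported on [0, 1/tau]:
\|\sigma_\tau x\|_{L_p}^{p}
  =\int_0^{\min(1,\tau)}\Big|x\Big(\frac{t}{\tau}\Big)\Big|^{p}\,dt
  =\tau\int_0^{\min(1,1/\tau)}|x(s)|^{p}\,ds
  \le\tau\,\|x\|_{L_p}^{p},
% whence
\|\sigma_\tau\|_{{\mathcal L}(L_p)}=\tau^{1/p},\qquad 0<\tau<\infty .
```

In particular, both limits defining the Boyd indices introduced below equal $1/p$ for $L_p$.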
Denoting the space of all linear bounded operators on a Banach space $E$ by ${\mathcal L}(E)$, we set \[ \alpha_E:=\lim\limits_{\tau\to 0}\frac{\ln\|\sigma_\tau\|_{{\mathcal L}(E)}}{\ln\tau},\quad \beta_E:=\lim\limits_{\tau\to \infty}\frac{\ln\|\sigma_\tau\|_{{\mathcal L}(E)}}{\ln\tau}. \] The numbers $\alpha_E$ and $\beta_E$ belong to the closed interval $[0,1]$ and are called the Boyd indices of $E$. The K\"othe dual $E^\times $ of an r.i. space $E$ on $[0,1]$ consists of all measurable functions $y$ for which $$ \Vert y\Vert _{_{E^{\times }}}:= \sup \Big\{\int _0^1\vert x(t)y(t)\vert\,dt:\ x\in E,\ \Vert x \Vert _{_{E}}\leq 1\,\Big\} <\infty. $$ If $E^*$ denotes the Banach dual of $E$, then $E^\times \subset E^{*}$ and $E^\times =E^{*}$ if and only if $E$ is separable. An r.i. space $E$ is said to have the {\it Fatou property} if whenever $\{f_n\}_{n=1}^\infty\subseteq E$ and $f$ measurable on $[0,1]$ satisfy $f_n\to f$ a.e. on $[0,1]$ and $\sup _n\Vert f_n\Vert _{_{E }} <\infty $, it follows that $f\in E$ and $\Vert f\Vert _{_{E}}\leq \liminf _{n\to \infty }\Vert f_n\Vert _{_{E}}$. It is well-known that an r.i. space $E$ has the Fatou property if and only if the natural embedding of $E$ into its K\"othe bidual $E^{\times\times}$ is a surjective isometry. Let us recall some classical examples of r.i. spaces on $[0,1]$. Denote by $\Psi$ the set of all increasing continuous concave functions on $[0,1]$ with $\varphi(0) =0$. Each function $\varphi\in\Psi$ generates the Lorentz space $\Lambda(\varphi)$ (see e.g. \cite{KPS}) endowed with the norm \[\|x\|_{\Lambda(\varphi)}=\int\limits_0^1 x^*(t)d\varphi(t)\] and the Marcinkiewicz space $M(\varphi)$ endowed with the norm \[ \|x\|_{M(\varphi)}=\sup\limits_{0<\tau\leq 1}\frac{1}{\varphi(\tau)}\int\limits_0^\tau x^*(t)dt. 
\] The space $M(\varphi)$ is not separable, but the space \[\left \{x\in M(\varphi):\:\lim\limits_{\tau\to 0}\frac{1}{\varphi(\tau)} \int\limits_0^\tau x^*(t)dt=0\right \}\] endowed with the norm $\|\cdot \|_{M(\varphi)}$ is a separable r.i. space (denoted further by $M(\varphi)_0$), which coincides with the closure of $L_\infty$ in $(M(\varphi),\|\cdot \|_{M(\varphi)})$. It is well known (see e.g. \cite[Section II.1]{KPS}) that $$ \beta_{M(\varphi)}=1\Longleftrightarrow \alpha_{\Lambda(\varphi)}=0\Longleftrightarrow\forall t\in (0,1)\exists (s_n)_{n\ge1} \subseteq (0,1)\ :\ \lim_{n\to\infty}\frac{\varphi(ts_n)}{\varphi(s_n)}=1; $$ $$ \alpha_{M(\varphi)}=0\Longleftrightarrow \beta_{\Lambda(\varphi)}=1\Longleftrightarrow\forall \tau\ge1\exists (s_n)_{n\ge1} \subseteq (0,1)\ :\ \lim_{n\to\infty}\frac{\varphi(s_n\tau)}{\varphi(s_n)}=\tau. $$ If $M(t)$ is a convex increasing function on $[0,\infty)$ such that $M(0)=0$, then the Orlicz space $L_M$ on $[0,1]$ (see e.g. \cite{KPS, LT-II}) is the r.i. space of all $x\in L_1[0,1]$ such that \[\|x\|_{L_M}:=\inf\{\lambda :\lambda >0,\;\int\limits_{0}^{1} M(|x(t)|/\lambda)dt\leq 1\}<\infty.\] The function $N_p(u)=e^{u^p}-1$ is convex for $p\geq1$ and is equivalent to a convex function for $0<p<1$ (see e.g. \cite{Br1994, AsSu2005}). The space $L_{N_p}$, $0<p<\infty$, is customarily denoted by $\exp (L_p)$. \subsection{The Kruglov property in r.i.\ spaces} Let $f$ be a random variable on $[0,1]$. By $\pi(f)$ we denote the random variable $\sum_{i=1}^N f_i$, where the $f_i$'s are independent copies of $f$ and $N$ is a Poisson random variable with parameter $1$ independent of the sequence $\{f_i\}$. {\bf Definition.}\quad {\sl An r.i. space $E$ is said to have the Kruglov property if and only if $f\in E\Longleftrightarrow \pi(f)\in E.$} This property has been studied by M. Sh. Braverman \cite{Br1994}, who used some earlier probabilistic constructions of V.M.
Kruglov \cite{K}, and in \cite{AsSu2005,AsSu2006-1, AsSu2006-2} via an operator approach. It was proved in \cite{AsSu2006-2} that an r.i. space $E$ satisfies the Kruglov property if and only if for every sequence of independent mean zero functions $\{f_n\}\subset E$ the following inequality holds \begin{equation}\label{independent to disjoint} ||\sum_{k=1}^nf_k||_E\leq const\cdot ||\sum_{k=1}^n\overline{f}_k||_{Z^2_E}. \end{equation} Here, $Z^2_E$ is the r.i. space on $(0,\infty)$ equipped with the norm $$||x||=||x^*\chi_{[0,1]}||_E+||x^*\chi_{[1,\infty)}||_{L_2}$$ and the sequence $\{\bar f_k\}_{k=1}^n\subseteq Z^2_{E}$ is the sequence of disjoint translates of $\{f_k\}_{k=1}^n\subseteq E,$ that is, $\bar f_k(\cdot)=f_k(\cdot-k+1)$. Note that inequality \eqref{independent to disjoint} has been proved earlier in~\cite{JoSch1989} (see inequality~(3) there) under the more restrictive assumption that $E\supseteq L_p$ for some $p<\infty$. Clearly, the latter assumption holds if $\alpha_E>0$. \section{Operators $A_n$, $n\ge 0$} Let $\Omega$ be the segment $[0,1],$ equipped with the Lebesgue measure. Let $E$ be an arbitrary rearrangement invariant space on $\Omega.$ For every $n\geq 1$, we consider the operator $A_n:E(\Omega)\rightarrow E(\underbrace{\Omega\times\Omega\times\cdots\times\Omega}_{2n\ times})$ given by \begin{multline*} A_nf=(f\otimes r)\otimes(1\otimes1)\otimes\cdots\otimes(1\otimes 1)+(1\otimes1)\otimes(f\otimes r)\otimes\cdots\otimes(1\otimes 1)+\cdots\\ \cdots +(1\otimes1)\otimes \cdots \otimes(1\otimes1)\otimes(f\otimes r), \end{multline*} where $r$ is a centered Bernoulli random variable. For brevity, we will also use the following notation $$A_nf=(f\otimes r)_1+(f\otimes r)_2+\cdots+(f\otimes r)_n.$$ We set $A_0=0.$ The following theorem is the main result of the present section. \begin{thm}\label{alternative} The following alternative is valid in an arbitrary r.i.\ space $E.$ \begin{enumerate} \item[(i).]
$||A_n||_{{\mathcal L}(E)}=n$ for every natural $n;$ \item[(ii).] There exists a constant $\frac12\leq q<1,$ such that $||A_n||_{{\mathcal L}(E)}\leq const\cdot n^q$ for all $n\in\mathbb{N}.$ \end{enumerate} \end{thm} \begin{proof} Since for all $n,m\geq0,$ we have \begin{equation}\label{additive} ||A_{n+m}||_{{\mathcal L}(E)}\leq ||A_n||_{{\mathcal L}(E)}+||A_m||_{{\mathcal L}(E)}, \end{equation} and since $||f\otimes r||_E=||f||_E,$ we infer that $||A_n||_{{\mathcal L}(E)}\leq n.$ Observing that $A_{mn}(f)$ and $A_m(A_n(f))$ are identically distributed, we have $$||A_{mn}(f)||_E=||A_m(A_n(f))||_E,\quad f\in E(\Omega).$$ Here, we identify the element $A_nf\in E(\Omega\times\cdots\times\Omega)$ with an element of $E(\Omega)$ via a measure preserving transformation $\underbrace{\Omega\times\cdots\times\Omega}_{2n\ times}\rightarrow\Omega.$ Hence, \begin{equation}\label{multiplicative}||A_{mn}||_{{\mathcal L}(E)}\leq ||A_m||_{{\mathcal L}(E)} \cdot||A_n||_{{\mathcal L}(E)}.\end{equation} Thus, we have the following alternative: \begin{enumerate} \item[(i).] $||A_n||_{{\mathcal L}(E)}=n$ for every natural $n;$ \item[(ii).] There exists $n_0\geq2,$ such that $||A_{n_0}||_{{\mathcal L}(E)}<n_0.$ \end{enumerate} To finish the proof of Theorem \ref{alternative}, we need only consider the second case.
Then there exists a constant $\frac12\leq q<1$ such that $||A_{n_0}||_{{\mathcal L}(E)}\leq n_0^q.$ By \eqref{multiplicative} we have $$||A_{n_0^m}||_{{\mathcal L}(E)}\leq ||A_{n_0}||_{{\mathcal L}(E)}^m\leq n_0^{qm},\ \forall m\in\mathbb{N}.$$ Every $n\in\mathbb{N}$ can be written as $\sum_{i=0}^ka_in_0^i,$ where $0\leq a_i\leq n_0-1$ and $a_k\neq0.$ So, using \eqref{additive} and \eqref{multiplicative}, we have $$||A_n||_{{\mathcal L}(E)}\leq \sum_{i=0}^k||A_{a_in_0^i}||_{\mathcal{L}(E)}\leq \sum_{i=0}^k||A_{a_i}||_{\mathcal{L}(E)}n_0^{qi}\leq$$ $$\leq (\sum_{i=0}^k n_0^{qi})\max\limits_{1\leq s\leq n_0}\{||A_s||_{{\mathcal L}(E)}\} \leq \frac{n_0^q\cdot n_0^{qk}}{n_0^q-1}\max\limits_{1\leq s\leq n_0}\{||A_s||_{{\mathcal L}(E)}\}.$$ Now, using the fact that $q\geq\frac12$ and $n_0\geq2,$ we have $n_0^q-1\geq\sqrt{2}-1,$ so $$\frac1{n_0^q-1}\leq \sqrt{2}+1.$$ Since $n_0^k\leq n,$ we have $$||A_n||_{{\mathcal L}(E)}\leq (\sqrt{2}+1)\cdot n_0^q \cdot\max\limits_{1\leq s\leq n_0}\{||A_s||_{{\mathcal L}(E)}\}\cdot n_0^{qk}\leq const\cdot n^q.$$ This proves the theorem. \end{proof} \begin{rem}\label{connection} We record here an important connection between the estimates given in Theorem \ref{alternative}(ii) above and the set $\Gamma_{\rm iid}(E)$, where the r.i. space $E$ is separable. For $\frac12\leq q\leq 1$ the following conditions are equivalent: \begin{itemize} \item[(i)] $ ||A_n||_{{\mathcal L}(E)}\leq const\cdot n^q$, $n\ge 1$; \item[(ii)]\quad $ \frac{1}{q} \in \Gamma_{\rm iid}(E)$. \end{itemize} Indeed, the implication $(i)\Rightarrow(ii)$ is obvious. Now, let the probability space $(\Omega, \mathcal{P})$ be the infinite direct product of measure spaces $([0,1],m)$. Fix $f\in E$ and consider the sequence $\{(f\otimes r)_n\}_{n\ge 1}\subset E(\Omega, \mathcal{P})$. It follows from \cite[Lemma 3.4]{SeSu} that this sequence is weakly null in $E(\Omega, \mathcal{P})$.
Since the spaces $E$ and $E(\Omega, \mathcal{P})$ are isometric, we obtain the implication $(ii)\Rightarrow(i)$ via an application of the uniform boundedness principle. \end{rem} We complete this section with an estimate of $\|A_n\|_{{\mathcal L}(E)}$, $n\ge 1$, in general r.i. spaces with the Kruglov property. \begin{thm}\label{Kruglov} Let $E$ be a separable r.i. space. If $\beta_E<1$ and if $E$ satisfies the Kruglov property, then $||A_n||_{{\mathcal L}(E)}\leq const\cdot n^q$ for all sufficiently large $n\ge 1$ and any $\beta_E<q<1$. \end{thm} \begin{proof} It is proved in \cite[Proposition 2.2]{AsSeSu2007} (see also \cite[Theorem~1]{MS2002}) that for every r.i. space $E$ and an arbitrary sequence of independent random variables $\{f_k\}_{k=1}^n$ $(n\ge 1)$ from $E$, the right hand side of \eqref{independent to disjoint} can be estimated as \begin{equation}\label{quad sum} ||\sum_{k=1}^n\overline{f}_k||_{Z^2_E}\leq 6||(\sum_{k=1}^nf_k^2)^{\frac12}||_E. \end{equation} Now, assume in addition that the sequence $\{f_k\}_{k=1}^n$ $(n\ge 1)$ consists of independent identically distributed random variables with $\|f_1\|_E=1$. Since $\beta_E<1,$ there exist $N$ and $\beta_E<q<1$ such that $||\sigma_k||_{\mathcal{L}(E)}\leq k^{q}$ for every $k\geq N$. Fix $\varepsilon>0$ such that $\frac12+\varepsilon<q$. By \cite[Theorem 9]{SeSu}, in every separable r.i. space $E$, the right hand side of \eqref{quad sum} can be estimated as \begin{equation}\label{quad bound} ||(\sum_{k=1}^nf_k^2)^{\frac12}||_E\leq\frac{4}{\varepsilon}\max_{1\leq k\leq n}(\frac{n}k)^{\frac12+\varepsilon}||\sigma_k||_{\mathcal{L}(E)}=:A,\quad n\ge 1.
\end{equation} So, the right hand side of \eqref{quad bound} can be estimated as \begin{multline}\label{another_bound} A\leq\frac{4}{\varepsilon}n^{\frac12+\varepsilon} \max\{\max_{N\leq k\leq n}k^{-\frac12-\varepsilon}k^q, \max_{1\leq k\leq N}k^{-\frac12-\varepsilon}||\sigma_k||_{\mathcal{L}(E)}\}=\\ =\frac{4}{\varepsilon}n^{\frac12+\varepsilon} \max\{n^{q-\frac12-\varepsilon},const\}\leq const\cdot n^q. \end{multline} Recalling the definition of the operator $A_n$ and combining it with \eqref{independent to disjoint}, \eqref{quad sum}, \eqref{quad bound}, \eqref{another_bound} yields the assertion.\end{proof} \begin{rem} \begin{itemize} \item[(i)] The assumption $\beta_E<1$ in Theorem \ref{Kruglov} is necessary (see \cite[Theorem 4.2]{AsSeSu2005}). For example, the space $E=L_1$ satisfies the Kruglov property and $\beta_E=1$. However, $\|A_n\|_{\mathcal{L}(L_1)}= n$. \item[(ii)] On the other hand, the condition that $E$ satisfies the Kruglov property is not optimal. In the following section, we will show that there are Lorentz spaces which do not possess the Kruglov property and which still satisfy the condition of Theorem \ref{alternative}(ii). \end{itemize} \end{rem} \section{Operators $A_n$, $n\ge 1$ in Lorentz spaces.} We need the following technical facts. The first lemma is elementary and its proof is omitted. \begin{lem}\label{linearity} Let $\psi$ be a concave function on $[0,1].$ If there are points $0\leq x_1\leq x_2\leq\cdots\leq x_n\leq1,$ such that $$\frac1n(\psi(x_1)+\cdots +\psi(x_n))=\psi(\frac1n (x_1+\cdots +x_n)),$$ then $\psi$ is linear on $[x_1,x_n].$ \end{lem} \begin{lem}\label{estimate of expectation} Let $x_1,\cdots,x_n$ be independent random variables. The following inequality holds: $$\mathbb{E}(|x_1+\cdots+x_n|)\leq \mathbb{E}(|x_1|)+\cdots+\mathbb{E}(|x_n|).$$ Moreover, the equality holds if and only if all the $x_i$'s are simultaneously non-negative (or non-positive).
\end{lem} \begin{proof} We have $$\mathbb{E}(|x_1|)+\cdots+\mathbb{E}(|x_n|)-\mathbb{E}(|x_1+\cdots+x_n|)=\mathbb{E}(|x_1|+\cdots+|x_n|-|x_1+\cdots+x_n|)\geq0.$$ By the independence of the $x_i$'s, $i=1,2,\cdots,n,$ the random variables $\mathrm{sign}(x_i),$ $i=1,2,\cdots,n,$ are independent. If there exists a function $x_i,$ which is neither non-negative nor non-positive, then, for every other function $x_j,$ we have \begin{multline*} m(x_ix_j<0)=m(\mathrm{sign}(x_i)>0,\mathrm{sign}(x_j)<0) +m(\mathrm{sign}(x_i)<0,\mathrm{sign}(x_j)>0)\\=m(\mathrm{sign}(x_i)>0)m(\mathrm{sign}(x_j)<0) +m(\mathrm{sign}(x_i)<0)m(\mathrm{sign}(x_j)>0)>0. \end{multline*} Hence, there exists a set $A$ of positive measure such that $x_ix_j<0$ almost everywhere on $A.$ This guarantees that $|x_1|+\cdots+|x_n|>|x_1+\cdots+x_n|$ almost everywhere on $A.$ This is sufficient for the strict inequality to hold.\end{proof} We need to consider the following properties of the function $\psi.$ \begin{equation}\label{first property}a_\psi:=\limsup_{u\rightarrow0}\frac{\psi(ku)}{\psi(u)}<k.\end{equation} \begin{equation}\label{second property}c_\psi:=\limsup_{u\rightarrow0}\frac{\psi(u^l)}{\psi(u)}<1.\end{equation} \begin{equation}\label{general limit property}\limsup_{u\rightarrow0}\frac1{\psi(u)}\sum_{s=1}^n\psi(2^{1-s}\binom{n}{s}u^s)<n.\end{equation} \begin{prop}\label{two limits into general} Suppose there exist $k\geq2$ such that \eqref{first property} holds and $l\geq2$ such that \eqref{second property} holds.
Then, \eqref{general limit property} holds for all sufficiently large $n.$ \end{prop} \begin{proof} Consider the sum $\sum_{s=1}^n\psi(\binom{n}{s}2^{1-s}u^s).$ For any sufficiently large $n,$ we write $$\sum_{s=1}^n=\sum_{s=1}^{1+[\frac{n}k]}+\sum_{s=2+[\frac{n}k]}^n.$$ Consequently, the upper limit in \eqref{general limit property} can be estimated as \begin{equation}\label{decomposition}\begin{split} \limsup_{u\rightarrow0}\frac1{\psi(u)}\sum_{s=1}^n\psi(\binom{n}{s}2^{1-s}u^s)\leq\limsup_{u\rightarrow0}\frac1{\psi(u)}\sum_{s=1}^{1+[\frac{n}k]}\psi(\binom{n}{s}2^{1-s}u^s)+\\ +\limsup_{u\rightarrow0}\frac1{\psi(u)}\sum_{s=2+[\frac{n}k]}^n\psi(\binom{n}{s}2^{1-s}u^s) \end{split}\end{equation} Consider the first upper limit in \eqref{decomposition}. Since $\psi$ is concave, we have $$\sum_{s=1}^{1+[\frac{n}k]}\psi(\binom{n}{s}2^{1-s}u^s)\leq(1+[\frac{n}{k}])\psi(\frac1{1+[\frac{n}k]}\sum_{s=1}^{1+[\frac{n}k]}\binom{n}{s}2^{1-s}u^s)=$$ $$=(1+[\frac{n}{k}])\psi(\frac1{1+[\frac{n}k]}(nu+o(u)))\leq(1+[\frac{n}{k}])\psi(ku(1+o(1))).$$ Hence, the first upper limit in \eqref{decomposition} is bounded from above by $$(1+[\frac{n}k])a_{\psi}=n\cdot\frac{a_{\psi}}{k}+o(n).$$ Consider the second upper limit in \eqref{decomposition}. 
It is clear that for all $\frac1k n\leq s\leq n$ $$\binom{n}{s}\cdot2^{1-s}\leq 2^n$$ and $$\binom{n}{s}2^{1-s}u^s\leq 2^nu^{\frac1k n}=(2^ku)^{\frac1k n}.$$ Thus, the second upper limit in \eqref{decomposition} can be estimated as $$\limsup_{u\rightarrow0}\frac1{\psi(u)}\sum_{s=2+[\frac{n}k]}^n\psi(\binom{n}{s}2^{1-s}u^s)\leq n(1-\frac1k)\limsup_{u\rightarrow0}\frac{\psi((2^ku)^{\frac{n}{k}})}{\psi(u)}.$$ Substituting $w=2^ku$ in the right hand side, we obtain $$n(1-\frac1k)\limsup_{w\rightarrow0}\frac{\psi(w^{\frac{n}{k}})}{\psi(2^{-k}w)}.$$ By the concavity of $\psi,$ we have $\psi(2^{-k}w)\geq2^{-k}\psi(w).$ Therefore, the second upper limit in \eqref{decomposition} is bounded from above by $$n(1-\frac1k)2^k\limsup_{w\rightarrow0}\frac{\psi(w^{\frac{n}{k}})}{\psi(w)}.$$ Now, we observe that \begin{equation} \limsup_{w\rightarrow0}\frac{\psi(w^m)}{\psi(w)}\leq c_{\psi}^{\frac{\log(m)}{\log(l)}-1}. \end{equation} Indeed, if $l^r\leq m\leq l^{r+1},$ then $$\frac{\psi(w^m)}{\psi(w)}\leq\frac{\psi(w^{l^r})}{\psi(w)}=\frac{\psi(w^{l^r})}{\psi(w^{l^{r-1}})}\cdots\frac{\psi(w^l)}{\psi(w)}$$ and $$\limsup_{w\rightarrow0}\frac{\psi(w^m)}{\psi(w)}\leq c_{\psi}^r \leq c_{\psi}^{\frac{\log(m)}{\log(l)}-1}.$$ As $n$ tends to infinity, thanks to the assumption $c_{\psi}<1,$ we have $$n(1-\frac1k)2^k\limsup_{w\rightarrow0}\frac{\psi(w^{\frac{n}k})}{\psi(w)}=o(n).$$ Therefore, the upper limit in \eqref{general limit property} (see also \eqref{decomposition}) is bounded from above by $$\frac{a_{\psi}}k n+o(n).$$ Thus, the upper limit in \eqref{general limit property} is strictly less than $n$ for every sufficiently large $n.$ \end{proof} Let the function $g_n$ be defined by \begin{equation}\label{g definition} g_n(u):=\frac{||A_n\chi_{[0,u]}||_{\Lambda(\psi)}}{n||\chi_{[0,u]}||_{\Lambda(\psi)}} =\frac1{n\psi(u)}\sum_{s=1}^n\psi(m(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|\geq s)). \end{equation} It is obvious that $0\leq g_n\leq 1$.
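As a quick numerical sanity check (not part of the argument), the quantity $g_n(u)$ from \eqref{g definition} can be evaluated exactly: the random variable $(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n$ is a sum of $n$ i.i.d. variables taking $\pm1$ with probability $u/2$ each and $0$ with probability $1-u$. The following minimal Python sketch uses the concrete (illustrative) choice $\psi(u)=\sqrt u$, for which $a_\psi=\sqrt k<k$ and $c_\psi=0$, and verifies $0<g_n(u)<1$ on a grid:

```python
from fractions import Fraction

def tail(n, u, s):
    """m(|S_n| >= s), where S_n is a sum of n i.i.d. variables taking
    +1, -1 with probability u/2 each and 0 with probability 1-u
    (the distribution of (chi_[0,u] x r)_1 + ... + (chi_[0,u] x r)_n)."""
    pmf = {0: Fraction(1)}
    for _ in range(n):
        nxt = {}
        for v, p in pmf.items():
            for step, w in ((1, u / 2), (-1, u / 2), (0, 1 - u)):
                nxt[v + step] = nxt.get(v + step, Fraction(0)) + p * w
        pmf = nxt
    return sum(p for v, p in pmf.items() if abs(v) >= s)

def g(n, u, psi):
    # definition of g_n(u): (1 / (n psi(u))) * sum_s psi(m(|S_n| >= s))
    return sum(psi(tail(n, u, s)) for s in range(1, n + 1)) / (n * psi(u))

psi = lambda t: float(t) ** 0.5  # illustrative choice psi(u) = sqrt(u)
for n in range(2, 7):
    for u in (Fraction(1, 100), Fraction(1, 10), Fraction(1, 2), Fraction(1)):
        assert 0 < g(n, u, psi) < 1
```

For $\psi(u)=\sqrt u$ the strict bound $g_n<1$ can also be seen directly from the Cauchy-Schwarz inequality together with \eqref{expectation appears}; the sketch merely confirms it on a grid of $n$ and $u$.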
\begin{rem} The second equality in \eqref{g definition} is a corollary of \cite[II.5.4]{KPS}. \end{rem} \begin{prop}\label{tech fignya} For sufficiently large $n,$ we have $g_n(u)<1$ for all $u\in(0,1]$. \end{prop} \begin{proof} Since $\psi$ is concave, we have \begin{multline}\label{first bound of g} \sum_{s=1}^n\psi(m(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|\geq s))\leq \\ \leq n\cdot\psi(\frac1n\sum_{s=1}^nm(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|\geq s)). \end{multline} Note that if a random variable $\xi_n$ takes values in $\{0,1,\cdots,n\},$ then \begin{equation}\label{expectation appears} \sum_{s=1}^nm(\xi_n\geq s)=\mathbb{E}(\xi_n). \end{equation} By \eqref{expectation appears}, the right-hand side of \eqref{first bound of g} is equal to $n\psi(\frac1n \mathbb{E}(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|)).$ By Lemma \ref{estimate of expectation}, we have \begin{equation}\label{strict estimate of expectation} \frac1n \mathbb{E}(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|)<\mathbb{E}(|\chi_{[0,u]}\otimes r|)=u. \end{equation} Applying the increasing function $\psi$ to \eqref{strict estimate of expectation}, we obtain \begin{equation}\label{second bound of g} n\psi(\frac1n\mathbb{E}(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|))\leq n\psi(\mathbb{E}(|\chi_{[0,u]}\otimes r|)). \end{equation} The right hand side of \eqref{second bound of g} is equal to $n\psi(u).$ Let us assume that $g_n(u)=1$ for some $u>0$ and some $n>1.$ It then follows that both inequalities \eqref{first bound of g} and \eqref{second bound of g} are actually equalities.
The equality \begin{multline*} \sum_{s=1}^n\psi(m(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|\geq s))=\\ =n\cdot\psi(\frac1n\sum_{s=1}^nm(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|\geq s)) \end{multline*} implies, by Lemma \ref{linearity}, that $\psi$ is linear on the interval $[a_1,b_1]$ with $a_1=m(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|\geq n)$ and $b_1=m(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|\geq1).$ Since the inequality in \eqref{second bound of g} is actually an equality, we derive from \eqref{strict estimate of expectation} and \eqref{second bound of g} that $\psi$ must be constant on the interval $[a_2,b_2]$ with $a_2=\frac1n \mathbb{E}(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|)$ and $b_2=\mathbb{E}(|\chi_{[0,u]}\otimes r|).$ Since $\psi$ is an increasing concave function, it must be constant on $[a_2,1].$ Since, by \eqref{expectation appears}, $$\frac1n \mathbb{E}(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|)>m(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|\geq n)$$ and $$\frac1n \mathbb{E}(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|)<m(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|\geq1),$$ we have $a_1<a_2<b_1.$ So, the intersection of the intervals $[a_1,b_1]$ and $[a_2,1]$ contains an interval $[a_3,b_3]$ with $a_3<b_3.$ Since $\psi$ is linear on $[a_1,b_1]$ and constant on $[a_2,1],$ it must be constant on $[a_1,1],$ that is, on the interval $$[m(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|\geq n),1]=[2^{1-n}u^n,1].$$ Thus, $\psi$ is constant on the interval $[2^{1-n},1]\subset [2^{1-n}u^n,1],$ which is not the case for sufficiently large $n.$ So, $g_n(u)<1$ for all sufficiently large $n.$ \end{proof} \begin{lem}\label{limit equivalence} For the function $g_n$ defined in \eqref{g definition}, we have
$$\limsup_{u\rightarrow0}g_n(u)=\limsup_{u\rightarrow0}\frac1{n\psi(u)}\sum_{s=1}^n\psi(2^{1-s}\binom{n}{s}u^s).$$ \begin{proof}For every $s\geq 1,$ conditioning on the number $k$ of nonzero summands, we have \begin{equation*} m(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|\geq s)=\sum_{k=1}^n\binom{n}{k}u^k(1-u)^{n-k}m(|r_1+\cdots+r_k|\geq s). \end{equation*} In fact, the summation above is taken from $k=s$ up to $n,$ since $m(|r_1+\cdots+r_k|\geq s)=0$ for every $k<s.$\\ If now $u\rightarrow0,$ then, for every $s\geq1$ and $k>s,$ we have $\binom{n}{k}u^k(1-u)^{n-k}=o(u^s).$ Therefore, since $m(|r_1+\cdots+r_s|\geq s)=2^{1-s},$ \begin{equation}\label{main term selection} m(|(\chi_{[0,u]}\otimes r)_1+\cdots+(\chi_{[0,u]}\otimes r)_n|\geq s)=2^{1-s}\binom{n}{s}u^s(1+o(1)). \end{equation} Since $\psi$ is concave, we have \begin{equation}\label{sublinearity} \psi(\frac1{m}u)\leq\frac1{m}\psi(u),\ 0<m\leq1. \end{equation} This implies \begin{equation}\label{psi limit property} \lim\limits_{u\rightarrow0}\frac{\psi(u(1+o(1)))}{\psi(u)}=1. \end{equation} Applying \eqref{main term selection} and \eqref{psi limit property} to the definition of $g_n$ in \eqref{g definition}, we obtain the assertion of the lemma. \end{proof} The following theorem is the main result of this section. \begin{thm}\label{characterization} Let $\psi\in\Psi.$ The following conditions are equivalent. $(i)$ $||A_n||_{\mathcal{L}(\Lambda(\psi))}<n$ for all sufficiently large $n;$ $(ii)$ Estimates \eqref{first property} and \eqref{second property} hold for some $k\geq2$ and $l\geq2$. \end{thm} \begin{rem} Note that condition $(i)$ above is equivalent to the assumption that $||A_{n_0}||_{\mathcal{L}(\Lambda(\psi))}<n_0$ for some $n_0>1$ (see Theorem \ref{alternative}). \end{rem} \begin{proof} We ask whether there exist $n\in\mathbb{N}$ and $c<n,$ such that \begin{equation}\label{an bound} ||A_nf||_{\Lambda(\psi)}\leq c||f||_{\Lambda(\psi)},\ \ f\in\Lambda(\psi).
\end{equation} We will use the following known description of the extreme points of the unit ball in $\Lambda(\psi).$ A function $f\in {\rm extr}(B_{\Lambda(\psi)}(0,1))$ if and only if $$|f|=\frac{\chi_{A}}{||\chi_{A}||_{\Lambda(\psi)}}$$ for some measurable set $A\subset [0,1].$ Here $\chi_A$ is the indicator function of the set $A$. This means that $f$ is of the form $$f=\frac{\chi_{A_1}-\chi_{A_2}}{\psi(m(A_1\cup A_2))}$$ with $A_1$ and $A_2$ having empty intersection. It is sufficient to verify \eqref{an bound} only for functions $f$ as above (see \cite[Lemma II.5.2]{KPS}). Clearly, $f\otimes r$ and $|f|\otimes r$ are identically distributed random variables. Therefore, $A_n(f)$ and $A_n(|f|)$ are also identically distributed. Furthermore, $||A_n(f)||_{\Lambda(\psi)}=||A_n(|f|)||_{\Lambda(\psi)}$ and $||f||_{\Lambda(\psi)}=||\,|f|\,||_{\Lambda(\psi)}.$ Thus, we need to check \eqref{an bound} for indicator functions only. By rearrangement invariance, it is sufficient to take $A$ of the form $[0,u],$ $0<u\leq1.$ Using the notation $g_n(\cdot)$ introduced in \eqref{g definition}, we see that \eqref{an bound} is equivalent to \begin{equation}\label{gn bound} \sup_u g_n(u)<1. \end{equation} Now, we are ready to finish the proof of the theorem. [Necessity] Fix $n$ such that $||A_n||_{\mathcal{L}(\Lambda(\psi))}<n.$ It follows from the argument above that \eqref{gn bound} holds. Now, we immediately infer from Lemma \ref{limit equivalence} and the definition of $g_n(\cdot)$ that $$\limsup_{u\rightarrow0}\frac1{n\psi(u)}\sum_{s=1}^n\psi(\binom{n}{s}2^{1-s}u^s)<1,$$ which is equivalent to \eqref{general limit property}. Thus, $$\limsup_{u\rightarrow0}\frac{\psi(nu)}{n\psi(u)} =\limsup_{u\rightarrow0}\frac{\psi(2^{1-1}\binom{n}{1}u^1)}{n\psi(u)} \leq\limsup_{u\rightarrow0}\frac1{n\psi(u)}\sum_{s=1}^n\psi(2^{1-s}\binom{n}{s}u^s)<1.$$ Suppose that \eqref{second property} fails.
We have $$\limsup_{u\rightarrow 0}\frac{\psi(u^l)}{\psi(u)}=1$$ for every $l\geq 1.$ Since $\binom{n}{s}2^{1-s}u^s\geq u^{n+1}$ for every $s=1,2,\cdots,n$ and every sufficiently small $u,$ we have $$\limsup_{u\rightarrow 0}\frac1{n\psi(u)}\sum_{s=1}^n\psi(\binom{n}{s}2^{1-s}u^s) \geq\limsup_{u\rightarrow 0}\frac{n\psi(u^{n+1})}{n\psi(u)}=1.$$ This contradicts \eqref{general limit property} and completes the proof of necessity. [Sufficiency] Fix $k\geq2$ (respectively, $l\geq2$) such that \eqref{first property} (respectively, \eqref{second property}) holds. Then, by Proposition \ref{two limits into general}, \eqref{general limit property} holds for all sufficiently large $n.$ By Lemma \ref{limit equivalence}, we have \begin{equation}\label{gn limit bound} \limsup_{u\rightarrow0}g_n(u)<1 \end{equation} for all sufficiently large $n.$ By Proposition \ref{tech fignya}, we have $g_n(u)<1$ for all sufficiently large $n$ and for all $u\in(0,1].$ Therefore, by \eqref{gn limit bound}, \eqref{gn bound} holds for sufficiently large $n.$ Then (see the argument at the beginning of the proof), $||A_n||_{\mathcal{L}(\Lambda(\psi))}<n$ for sufficiently large $n.$ \end{proof} Combining Theorems \ref{alternative} and \ref{characterization}, we have \begin{cor}\label{lorentz alternative} For every function $\psi\in\Psi,$ one of the following two mutually exclusive alternatives holds. \begin{enumerate} \item There exist $q\in[\frac12,1)$ and $C>0,$ such that the operator $A_n:\Lambda(\psi)\rightarrow\Lambda(\psi)$ satisfies $$||A_n||_{\mathcal{L}(\Lambda(\psi))}\leq C\cdot n^q,\ n\geq1.$$ \item Either for every $k\in\mathbb{N},$ \begin{equation}\label{first bad condition} \limsup_{u\rightarrow0}\frac{\psi(ku)}{\psi(u)}=k \end{equation} or for every $l\in\mathbb{N},$ \begin{equation}\label{second bad condition} \limsup_{u\rightarrow0}\frac{\psi(u^l)}{\psi(u)}=1. \end{equation} \end{enumerate} \end{cor} \begin{rem} \begin{itemize}\item[(i)] The condition \eqref{first bad condition} is equivalent to the assumption $\beta_{\Lambda(\psi)}=1$.
\item[(ii)] The condition \eqref{second bad condition} implies (but is not equivalent to) the condition $\alpha_{\Lambda(\psi)}=0$. In the last section of this paper, we will present an example $\psi\in \Psi$ failing \eqref{second bad condition} such that the Lorentz space $\Lambda(\psi)$ fails the Kruglov property. \end{itemize} \end{rem} \section{Operators $A_n$, $n\ge 1$ in Orlicz spaces $exp(L_p)$.} The space $exp(L_p)$ satisfies the Kruglov property if and only if $p\leq1$ (see \cite{Br1994,AsSu2005}). The space $exp(L_p)$ is 2-convex for all $0<p<\infty$ (see e.g. \cite[1.d]{LT-II}). Now, we immediately infer from \cite{AsSeSu2007} that $\Gamma_{\rm iid}(exp(L_p)_0)=\Gamma_{\rm i}(exp(L_p)_0)=[1,2]$ for all $0<p\leq1$ (here, $exp(L_p)_0$ is the separable part of the space $exp(L_p)$). Using Remark \ref{connection}, we have $||A_n||_{\mathcal{L}(exp(L_p)_0)}\leq const\cdot n^{\frac12}$ for all $n\geq1$ and $0<p\leq1$. It easily follows that, in fact, $||A_n||_{\mathcal{L}(exp(L_p))}\leq const\cdot n^{\frac12}$ for all $n\geq1$ and $0<p\leq1$. In this section, we prove the estimate $||A_n||_{\mathcal{L}(exp(L_p))}\leq const\cdot n^{\frac12}$ (respectively, $||A_n||_{\mathcal{L}(exp(L_p))}\leq const\cdot n^{1-1/p}$) for all $1<p\leq 2$ (respectively, $2\leq p<\infty$). To this end, it is convenient to view $exp(L_p)$ as a Marcinkiewicz space $M(\psi_p)$ with $\psi_p(t)=t\log^{\frac1p}(\frac{e}t)$ (see \cite[Lemma 4.3]{AsSu2005}). The following simple lemma is crucial. \begin{lem}\label{expl2 normal} There exists $\Psi\ni\psi\sim\psi_2,$ such that the random variable $\psi'\otimes r$ is Gaussian. \end{lem} \begin{proof} Setting $F(t):=\frac2{\sqrt{\pi}}\int_t^{\infty}e^{-z^2}dz$, $t\ge 0$, and denoting its inverse by $G$, we clearly have that $G\otimes r$ is Gaussian.
From the obvious inequality $$c_1\cdot e^{-2t^2}\leq F(t)\leq c_2 e^{-t^2},$$ substituting $t=G(z),$ we obtain $$c_1\cdot e^{-2G^2(z)}\leq z\leq c_2 e^{-G^2(z)}$$ or, equivalently, $$\frac1{\sqrt{2}}\log^{\frac12}(\frac{c_1}z)\leq G(z)\leq \log^{\frac12}(\frac{c_2}z).$$ This means that $$\psi(t)=\int_0^tG(z)dz\sim\int_0^t\log^{\frac12}(\frac{e}z)dz\sim t\log^{\frac12}(\frac{e}t)=\psi_2(t).$$ \end{proof} \begin{thm}\label{explp bound} \begin{itemize} \item[(i)] For every $1\leq p\leq 2,$ we have $||A_n||_{\mathcal{L}(exp(L_p))}\leq const\cdot\sqrt{n}.$ \item[(ii)] For every $2\leq p<\infty,$ we have $||A_n||_{\mathcal{L}(exp(L_p))}\leq const\cdot n^{1-1/p}.$ \end{itemize} \end{thm} \begin{proof} (i).\quad By Lemma \ref{expl2 normal}, $exp(L_2)=M(\psi)$ with $\psi\in \Psi$ such that $\psi'\otimes r$ is Gaussian. Recall the following description of the extreme points of the unit ball in Marcinkiewicz spaces (see \cite{Ryff}): a function $f$ is an extreme point of the unit ball in $M({\psi})$ if and only if $f^*=\psi'$. Since $||A_nx||_{M({\psi})}=||A_n\psi'||_{M({\psi})}$ for any $x\in M({\psi})$ with $x^*=\psi'$, we infer that $||A_n\psi'||_{M({\psi})}=||A_n||_{\mathcal{L}(M({\psi}))}$, $n\ge 1$. Since $\psi'\otimes r$ is Gaussian, the function $$\frac{(\psi'\otimes r)_1+\cdots+(\psi'\otimes r)_n}{\sqrt{n}}$$ is also Gaussian; in particular, its rearrangement coincides with $\psi'$. This means that $||A_n||_{\mathcal{L}(M({\psi}))}=\sqrt{n}$. The result now follows by interpolation between $exp(L_{1})$ and $exp(L_{2})$, since for every $0<p_1\leq p_2\leq\infty$ we have $$[exp(L_{p_1}),exp(L_{p_2})]_{\theta,\infty}=exp(L_p)$$ with $\frac1p=\frac{1-\theta}{p_1}+\frac{\theta}{p_2}$ (see, for example, \cite{BrKrug}). (ii).\quad Noting that $||A_n||_{\mathcal{L}(L_\infty)}=n$, $n\ge 1$, the assertion follows from (i) by applying the real method of interpolation to the couple $(exp(L_{2}),L_\infty)$ as above.
\end{proof} \section{Applications to Banach-Saks index sets} The first main result of this section, characterizing a subclass of the class of all r.i. spaces $E$ such that $\Gamma_{\rm iid}(E)= \Gamma_{\rm i}(E)$, is given in Theorem \ref{firstmain} below. We first need a modification of the subsequence splitting result from \cite[Theorem 3.2]{new-16-Sem-Suk}. We present the necessary details of the proof for the convenience of the reader. \begin{thm}\label{firsttechnical} Let $\{x_n\}_{n\ge 1}$ be a weakly null sequence of independent functions in a separable r.i. space $E$ with the Fatou property. Then, there exists a subsequence $\{y_n\}_{n\ge 1}\subset\{x_n\}_{n\ge 1},$ which can be split as $y_n=u_n+v_n+w_n$, $n\ge 1$. Here $\{u_n\}_{n\ge 1}$ is a weakly null sequence of independent identically distributed functions, the sequence $\{v_n\}_{n\ge 1}$ is also weakly null and consists of elements with pairwise disjoint support, and $\|w_n\|_E\to 0$ as $n\to \infty$. \end{thm} \begin{proof} Let the probability space $(\Omega, \mathcal{P})$ be the infinite direct product of measure spaces $([0,1],m)$. Without loss of generality, we assume that $E=E(\Omega)$ and that each function $x_n$ depends only on the $n$-th coordinate. That is, the following holds: $$x_n=\underbrace{1\otimes\cdots\otimes1}_{(n-1)\ times}\otimes h_n\otimes 1\otimes\cdots,\quad h_n\in E(0,1),\ \quad n\ge 1.$$ Consider the sequence $\{g_n\}_{n\ge 1}=\{h^*_n\}_{n\ge 1}\subset E(0,1)$. Since $$||x_n||_E=||g_n||_E\geq ||g_n\chi_{[0,s]}||_E\geq g_n(s)||\chi_{[0,s]}||_E,\quad s\in [0,1],$$ and the sequence $\{x_n\}$ is bounded, it follows from the Helly selection theorem that there exists a subsequence $\{g_n^1\}\subset\{g_n\},$ which converges almost everywhere on $[\frac12,1]$. Repeating the argument, we get a subsequence $\{g_n^2\}\subset\{g_n^1\},$ which converges almost everywhere on $[\frac13,1],$ etc.
Thus, there exists a function $h\in L_1(0,1)$ to which the diagonal sequence $\{g_n^n\}_{n\ge 1}=\{(h_n^n)^*\}_{n\ge 1}$ converges almost everywhere. The Fatou property of $E$ guarantees that $h\in E(0,1)$ and $\|h\|_E\leq 1$. There is an operator $P_n:L_1(0,1)\to L_1(0,1)$ of the form $(P_nx)(t)=\alpha(t)x(\gamma(t))$ (here $|\alpha(t)|=1$ and $\gamma$ is a measure preserving transformation of the interval $(0,1)$ into itself), such that $P_ng_n^n=h_n^n$, $n\ge 1$ (see e.g. \cite{KPS}). Now, put $$y_n:=1\otimes1\otimes\cdots\otimes1\otimes h^n_n\otimes1\cdots,\quad n\ge 1,$$ $$u_n:=1\otimes1\otimes\cdots\otimes1\otimes (P_n h)\otimes1\cdots,\quad n\ge 1.$$ It is clear that the functions $u_n$ are independent. The proof is finished by repeating the remaining argument from \cite[Theorem 3.2]{new-16-Sem-Suk}. \end{proof} \begin{thm}\label{firstmain} For an arbitrary separable r.i. space $E$ with the Fatou property, we have $$ \Gamma_{\rm iid}(E)= \Gamma_{\rm i}(E)\Longleftrightarrow \Gamma_{\rm iid}(E)\subseteq \Gamma_{\rm d}(E). $$ \end{thm} \begin{proof} If $\Gamma_{\rm iid}(E)= \Gamma_{\rm i}(E)$, then the embedding $\Gamma_{\rm iid}(E)\subseteq \Gamma_{\rm d}(E)$ follows immediately from \cite[Lemma 4.1(ii)]{AsSeSu2005}. Suppose now that $\Gamma_{\rm iid}(E)\subseteq \Gamma_{\rm d}(E)$ and let $\{f_k\}_{k\ge 1}\subset E$ be a normalized weakly null sequence of independent random variables on $[0,1]$. Passing to a subsequence and applying the preceding theorem, we may assume that $f_n=u_n+v_n+w_n$, $n\ge 1$, where $\{u_n\}_{n\ge 1}$ is a weakly null sequence of independent identically distributed functions, the sequence $\{v_n\}_{n\ge 1}$ is also weakly null and consists of elements with pairwise disjoint support, and $\|w_n\|_E\to 0$ as $n\to \infty$.
Due to the latter convergence, we may assume without loss of generality that $||w_k||_{E}\leq 2^{-k}$ and so, for every subsequence $\{z_n\}\subset\{w_n\},$ we have $$||\sum_{k=1}^nz_k||_{E}\leq 1.$$ If, in addition, $\frac {1}{q} \in \Gamma_{\rm iid}(E)$, then our assumptions also guarantee that there are constants $C_2, C_3>0$ such that $$||\sum_{k=1}^nu_k||_{E}\leq C_2\cdot n^{q},\quad ||\sum_{k=1}^nv_k||_{E}\leq C_3\cdot n^{q}.$$ Combining the three estimates above, we obtain $||\sum_{k=1}^nf_k||_{E}\leq (1+C_2+C_3)\cdot n^{q}$, $n\ge 1$, whence $\frac {1}{q} \in \Gamma_{\rm i}(E)$. Since the inclusion $\Gamma_{\rm i}(E)\subseteq\Gamma_{\rm iid}(E)$ always holds, the proof is complete. \end{proof} We will illustrate the result above in the settings of: $(\alpha)$ r.i. spaces satisfying an upper $2$-estimate; $(\beta)$ Lorentz spaces $\Lambda(\varphi)$ and Marcinkiewicz spaces $M(\varphi)_0$, $\varphi\in \Psi$; and $(\gamma)$ classical $L_{p,q}$-spaces. $(\alpha)$\quad Recall that a Banach lattice $X$ is said to satisfy an \textit{upper\ }$2$-\textit{estimate} if there exists a constant $C>0$ such that for every finite sequence $(x_{i})_{i=1}^{n}$ of pairwise disjoint elements in $X$ \begin{equation*} \left\Vert \sum_{j=1}^{n}x_{j}\right\Vert _{X}\leq C\left( \sum_{j=1}^{n}\Vert x_{j}\Vert _{X}^{2}\right) ^{1/2}. \end{equation*} \begin{cor}\label{upper2estimate} If $E$ is a separable r.i. space with the Fatou property satisfying an upper $2$-estimate, then $ \Gamma_{\rm iid}(E)= \Gamma_{\rm i}(E)$. \end{cor} \begin{proof} The assumption that the space $E$ satisfies an upper $2$-estimate implies immediately that $2\in \Gamma_{\rm d}(E)$ and hence $[1,2]\subseteq \Gamma_{\rm d}(E)$. Noting that $ \Gamma_{\rm iid}(E) \subseteq [1,2]$ (see \cite[Lemma 4.1(i)]{AsSeSu2005}), the result now follows from Theorem \ref{firstmain}. \end{proof} $(\beta)$\quad Although Lorentz spaces do not satisfy an upper $2$-estimate, we have $$\Gamma_{\rm d}(\Lambda(\psi))=[1,\infty)$$ (see e.g. the proof of \cite[Corollary 4.8]{AsSeSu2005}) and, similarly, $\Gamma_{\rm d}(M(\psi)_0)=[1,\infty)$ (see e.g. \cite[p.~897]{AsSeSu2005}) for any $\psi\in \Psi$.
Although the Marcinkiewicz spaces $M(\psi)_0$ do not possess the Fatou property, applying a modification of Theorem \ref{firsttechnical} similar to \cite[Lemma 3.6]{AsSeSu2005}, we obtain the following corollary from Theorem \ref{firstmain}. \begin{cor}\label{coincidence_in_Lor} For every $\psi\in \Psi$, we have $\Gamma_{\rm i}(\Lambda(\psi))= \Gamma_{\rm iid}(\Lambda(\psi))$ and $\Gamma_{\rm i}(M(\psi)_0)= \Gamma_{\rm iid}(M(\psi)_0)$. \end{cor} $(\gamma)$\quad We will now show that the equality $\Gamma_{\rm i}(E)=\Gamma_{\rm iid}(E)$ fails in an important subclass of r.i. spaces which plays a significant role in interpolation theory \cite{KPS,LT-II}. Recall the definition of the Lorentz spaces $L_{p,q}$, $1<p,q<\infty$: $x\in L_{p,q}$ if and only if the quasi-norm \[ \|x\|_{p,q}= \left(\dfrac{q}{p}\displaystyle\int\limits_0^1 \left(x^*(t)t^{1/p}\right)^q\dfrac{dt}{t}\right)^{1/q} \] is finite. The expression $\|\cdot \|_{p,q}$ is a norm if $1\leqslant q\leqslant p$ and is equivalent to a (Banach) norm if $q>p$. We will now show that $\Gamma_{\rm i}(L_{p,q})\neq\Gamma_{\rm iid}(L_{p,q})$, provided $1<q<p<2$. To this end, we first observe that every normalized sequence $\{v_n\}_{n\ge 1}\subset L_{p,q}$ of functions with disjoint support contains a subsequence spanning the space $l_q$ (see \cite[Lemma 2.1]{CD}). In particular, $\Gamma_{\rm d}(L_{p,q})\subset\Gamma(l_q)=[1,q]$ and so, by \cite[Lemma 4.1(ii)]{AsSeSu2005}, we have $\Gamma_{\rm i}(L_{p,q})\subseteq [1,q]$. Next, it is proved in \cite[Corollary 5.2]{Br1996} (see also \cite{CarDil89}) that if $p<2$, then for every sequence of identically distributed independent random variables we have $$||\sum_{k=1}^nx_k||_{L_{p,q}}=o(n^{\frac1p}),$$ which implies, in particular, that $[1,p]\subseteq \Gamma_{\rm iid}(L_{p,q})$. This shows that $(q,p]\subseteq \Gamma_{\rm iid}(L_{p,q})\setminus \Gamma_{\rm i}(L_{p,q})$ as soon as $1<q<p<2$.
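As a numerical aside (not part of the argument), the $L_{p,q}$ quasi-norm of a step function can be computed exactly from the definition: if the decreasing rearrangement takes values $c_1\geq c_2\geq\cdots$ on consecutive intervals with cumulative measures $T_1<T_2<\cdots$, then $\|x\|_{p,q}^q=\sum_j c_j^q(T_j^{q/p}-T_{j-1}^{q/p})$, so in particular $\|\chi_{[0,u]}\|_{p,q}=u^{1/p}$. The following minimal Python sketch (with illustrative parameters $p=1.5$, $q=1.2$ and support sizes chosen by us, not taken from the source) checks this formula and illustrates the $l_q$-type growth of sums of disjoint normalized indicators with rapidly shrinking supports:

```python
def lpq_norm(pieces, p, q):
    """L_{p,q} quasi-norm of a step function given as a list of
    (value, measure) pairs of its disjoint level sets.  Uses the exact
    integral of the definition: ||x||^q = sum_j c_j^q (T_j^{q/p} - T_{j-1}^{q/p}),
    where c_1 >= c_2 >= ... are the values of the decreasing rearrangement
    and T_j are the cumulative measures."""
    pieces = sorted(pieces, key=lambda vc: -vc[0])  # decreasing rearrangement
    total, T_prev = 0.0, 0.0
    for c, meas in pieces:
        T = T_prev + meas
        total += c ** q * (T ** (q / p) - T_prev ** (q / p))
        T_prev = T
    return total ** (1 / q)

p, q = 1.5, 1.2  # illustrative choice with 1 < q < p < 2

# ||chi_[0,u]||_{p,q} = u^{1/p}
for u in (0.01, 0.25, 1.0):
    assert abs(lpq_norm([(1.0, u)], p, q) - u ** (1 / p)) < 1e-12

# n disjoint normalized indicators with geometrically shrinking supports:
# the norm of the sum is close to n^{1/q} (the l_q behaviour quoted above)
eps = 1e-8
for n in (2, 4, 6):
    pieces = [((eps ** k) ** (-1 / p), eps ** k) for k in range(1, n + 1)]
    assert abs(lpq_norm(pieces, p, q) / n ** (1 / q) - 1.0) < 0.01
```

Note that equal-size disjoint indicators would instead give growth of order $n^{1/p}$; it is the strongly varying support sizes that produce the $l_q$ behaviour responsible for the bound $\Gamma_{\rm d}(L_{p,q})\subseteq[1,q]$.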
Our second main result in this section completely characterizes the subclass of all Lorentz spaces $\Lambda(\psi)$, $\psi\in \Psi$, whose Banach-Saks index set $\Gamma_{\rm i}(\Lambda(\psi))$ is non-trivial. \begin{thm}\label{mainsecond} $\Gamma_{\rm iid}(\Lambda(\psi))\neq\{1\}$ if and only if the function $\psi$ satisfies conditions \eqref{first property} and \eqref{second property} for some $k,l\geq2$. \end{thm} \begin{proof} Let $\{f_k\}_{k\ge1}\subset \Lambda(\psi)$ be a normalized weakly null sequence of independent identically distributed random variables on $[0,1]$. Note that we automatically have $\int_0^1f_k\,dm=0$, $k\ge 1$. Using the standard symmetrization trick, we consider another sequence $\{f_k'\}_{k\ge 1}$ of independent random variables (which is also independent of the sequence $\{f_k\}_{k\ge 1}$) such that $f_k'$ is equidistributed with $f_k$, and define $h_k:=f_k-f_k'$, $k\ge 1$. Clearly, $\{h_k\}_{k\ge 1}$ is a sequence of independent symmetric identically distributed random variables. Note that, by \cite[Proposition~11, p.~6]{Br1994}, we have $$||\sum_{k=1}^nf_k||_{\Lambda(\psi)}\leq const\cdot ||\sum_{k=1}^nh_k||_{\Lambda(\psi)},\quad n\ge 1.$$ Now, if $\psi$ satisfies conditions \eqref{first property} and \eqref{second property}, then it follows from Corollary \ref{lorentz alternative} that $||\sum_{k=1}^nh_k||_{\Lambda(\psi)}\leq const\cdot n^q$ for some $q\in (0,1)$, and hence $\frac{1}{q}\in \Gamma_{\rm iid}(\Lambda(\psi))$. Conversely, let $\frac{1}{q}\in \Gamma_{\rm iid}(\Lambda(\psi))$ for some $q\in (0,1)$. Fix $f\in \Lambda(\psi)$ and consider the sequence $\{(f\otimes r)_n\}_{n\ge 1}\subset \Lambda(\psi)(\Omega, \mathcal{P})$, where the probability space $(\Omega, \mathcal{P})$ is the infinite direct product of copies of the measure space $([0,1],m)$.
Since the Lorentz spaces $\Lambda(\psi)(\Omega, \mathcal{P})$ and $\Lambda(\psi)(0,1)$ are isometric, and since the sequence $\{(f\otimes r)_n\}_{n\ge 1}$ is weakly null in $\Lambda(\psi)(\Omega, \mathcal{P})$ (see e.g. \cite[Lemma 3.4]{SeSu}), we have $$ \sup_{n\ge 1}\frac{1}{n^q}\|(f\otimes r)_1+(f\otimes r)_2+\dots +(f\otimes r)_n\|_{\Lambda(\psi)}\leq C(f). $$ Setting $B_n:=\frac{1}{n^q}A_n$, $n\ge 1$, we have $\|B_nf\|_{\Lambda(\psi)}\leq C(f)$ for every $n\ge 1$. By the uniform boundedness principle, we have $\|B_n\|_{\mathcal{L}(\Lambda(\psi))}\leq C<\infty$ for all $n\ge 1$, or equivalently, that $||A_n||_{\mathcal{L}(\Lambda(\psi))}\leq C\cdot n^q,\ n\geq 1$. Corollary 9 now yields that the function $\psi$ satisfies conditions \eqref{first property} and \eqref{second property}. \end{proof} The following corollary follows immediately from the above combined with Corollary \ref{coincidence_in_Lor}. \begin{cor}\label{mainsecond_add} $\Gamma_{\rm i}(\Lambda(\psi))\neq\{1\}$ if and only if the function $\psi\in \Psi$ satisfies conditions \eqref{first property} and \eqref{second property} for some $k,l\geq2.$ \end{cor} We complete this section with the description of $\Gamma_{\rm i}(exp(L_p)_0)$, $1\leq p<\infty$. \begin{thm}\label{Marc} For every $1\leq p\leq2,$ we have $\Gamma_{\rm iid}(exp(L_p)_0)=\Gamma_{\rm i}(exp(L_p)_0)=[1,2]$. For every $2\leq p<\infty,$ we have $\Gamma_{\rm iid}(exp(L_p)_0)=\Gamma_{\rm i}(exp(L_p)_0)= [1,\frac{p}{p-1}].$ \end{thm} \begin{proof} The first assertion follows from Remark \ref{connection}, Theorem \ref{explp bound} and Corollary \ref{coincidence_in_Lor}. The same argument shows that $\Gamma_{\rm i}(exp(L_p)_0)\supseteq [1,\frac{p}{p-1}]$ for every $2\leq p<\infty$. The equality $\Gamma_{\rm i}(exp(L_p)_0)= [1,\frac{p}{p-1}]$ follows from the fact that the estimate $$\|A_n \chi _{[0,1]}\|_{exp(L_p)_0}\leq const\cdot n^{1-1/p},\quad n\ge 1$$ is the best possible (see \cite[Theorem 8]{RS} or \cite[Theorem 15]{D}).
\end{proof} \section{Concluding Remarks and Examples} The preceding theorem shows that the set $\Gamma_{\rm i}(exp(L_p)_0)$ is non-trivial for all $1\leq p<\infty$, whereas $exp(L_p)$ has the Kruglov property if and only if $0<p\leq 1$. This result extends and complements \cite{AsSeSu2007}, where examples of r.i. spaces $E$ with the Kruglov property such that $\Gamma(E)=\{1\}$ and $\Gamma_{\rm i}(E)\neq \{1\}$ were constructed. We now present an example of a Lorentz space $\Lambda(\psi)$ such that $\Gamma_{\rm i}(\Lambda(\psi))\neq\{1\}$ and which does not possess the Kruglov property. \begin{ex} Let $\psi\in \Psi$ be given by the condition $\psi(t):=\frac{1}{\log^{\frac12}(\frac1t)}$, $t\in [0,e^{-\frac32}]$, and be linear on $ [e^{-\frac32},1]$. The space $\Lambda(\psi)$ does not have the Kruglov property; however, $\Gamma_{\rm i}(\Lambda(\psi))\neq\{1\}$. \end{ex} \begin{proof} Since for every $k,l>1$ we have $$\lim_{u\rightarrow0}\frac{\psi(ku)}{\psi(u)}= \lim_{u\rightarrow0}\left(\frac{\log(u)}{\log(ku)}\right)^{\frac12}= 1<k, \quad\lim_{u\rightarrow0}\frac{\psi(u^l)}{\psi(u)}= \lim_{u\rightarrow0}\left(\frac{\log(u)}{\log(u^l)}\right)^{\frac12}=\frac1{l^{\frac12}}<1,$$ we see that $\Gamma_{\rm i}(\Lambda(\psi))\neq\{1\}$ by Corollary 9. By \cite[Theorem 5.1]{AsSeSu2005}, a Lorentz space $\Lambda(\phi)$, $\phi\in \Psi$, has the Kruglov property if and only if $$\sup_{t>0}\frac1{\phi(t)}\sum_{n=1}^{\infty}\phi(\frac{t^n}{n!})<\infty.$$ In our case, for every fixed $t\leq e^{-\frac32}$, $$\sum_{n=1}^{\infty}\psi(\frac{t^n}{n!})= \sum_{n=1}^{\infty}\frac1{(\log(n!)+n\log(\frac1t))^{\frac12}}= \sum_{n=1}^{\infty}\frac1{(n\log(n)(1+o(1)))^{1/2}}=\infty, $$ so the supremum above is infinite and $\Lambda(\psi)$ fails the Kruglov property. \end{proof} \leftline{F. Sukochev}\leftline{School of Mathematics and Statistics}\leftline{University of New South Wales, Kensington NSW 2052, Australia}\leftline{Email Address:{\it [email protected]}} \leftline {D.
Zanin}\leftline {School of Computer Science, Engineering and Mathematics} \leftline {Flinders University, Bedford Park, SA 5042, Australia}\leftline {Email Address: {\it [email protected]} } \end{document}
\begin{document} \title{Finding All Useless Arcs in Directed Planar Graphs} \begin{abstract} We present a linear-time algorithm for simplifying flow networks on directed planar graphs: Given a directed planar graph on $n$ vertices, a source vertex $s$ and a sink vertex $t$, our algorithm removes, in linear time, all the arcs that do not participate in any simple $s,t$-path. The output graph produced by our algorithm satisfies the prerequisite needed by the $O(n\log n)$-time algorithm of Weihe [FOCS'94 \& JCSS'97] for computing maximum $s,t$-flow in directed planar graphs. Previously, Weihe's algorithm could not run in $O(n\log n)$ time due to the absence of this preprocessing step; all the preceding algorithms run in $\tilde{\Omega}(n^2)$ time [Misiolek-Chen, COCOON'05 \& IPL'06; Biedl, Brejov{\'{a}} and Vinar, MFCS'00]. Consequently, this provides an alternative $O(n\log n)$-time algorithm for computing maximum $s,t$-flow in directed planar graphs in addition to the known $O(n\log n)$-time algorithms [Borradaile-Klein, SODA'06 \& J.ACM'09; Erickson, SODA'10]. Our algorithm can be seen as a (truly) linear-time $s,t$-flow sparsifier for directed planar graphs, which runs faster than any maximum $s,t$-flow algorithm (which can also be seen as a sparsifier). The simplified structures of the resulting graph might be useful in future developments of maximum $s,t$-flow algorithms in both directed and undirected planar graphs. \end{abstract} \section{Introduction} \label{sec:intro} The {\em maximum $s,t$-flow} problem is a fundamental problem in Combinatorial Optimization that has numerous applications in both theory and practice. A basic instance of maximum flow where the underlying graph is planar has been considered an important special case and has been studied since the 1950s, beginning with the early work of Ford and Fulkerson \cite{FordF56}. Since then, there have been steady developments of maximum flow algorithms on undirected planar graphs.
Itai and Shiloach \cite{ItaiS79} proposed an algorithm for the maximum $s,t$-flow problem on undirected planar graphs that runs in $O(n^2\log n)$ time, and in subsequent works \cite{Hassin81,Reif83,HassinJ85,Frederickson87,KleinRRS94-STOC94,HenzingerKRS97,ItalianoNSW11}, the running time has been improved, culminating in the current best $O(n\log\log{n})$-time algorithm by Italiano~et~al.~\cite{ItalianoNSW11}. Another line of research is the study of the maximum $s,t$-flow problem in directed planar graphs. The fastest algorithm, with a running time of $O(n\log n)$, is due to Borradaile and Klein~\cite{BorradaileK09}. Historically, in 1994, Weihe~\cite{Weihe97} presented a novel approach that would solve the maximum $s,t$-flow problem on directed planar graphs in $O(n\log n)$ time. However, Weihe's algorithm requires a preprocessing step that transforms an input graph into a particular form: (1) each vertex (except the source and the sink) has degree three, (2) the planar embedding has no clockwise cycle, and (3) every arc must participate in some {\em simple $s,t$-path}. The first condition can be guaranteed by a simple (and thus linear-time) reduction, and the second condition can be obtained by an algorithm of Khuller, Naor and Klein \cite{KhullerNK93}, which runs in $O(n\log n)$ time (this was later improved to linear time \cite{HenzingerKRS97}). Unfortunately, for the third condition, there was no known algorithm that could remove all such {\em useless} arcs in $O(n\log n)$ time. As this problem seems simple, the issue went unnoticed until it was pointed out much later by Biedl, Brejov{\'{a}} and Vinar \cite{BiedlBV00}. Although an $O(n\log n)$-time algorithm for the maximum $s,t$-flow problem in directed planar graphs has been devised by Borradaile and Klein \cite{BorradaileK09}, the question of removing all the useless arcs in $O(n\log n)$ time remained unsolved.
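To make the third condition concrete, the following sketch (our own illustration with hypothetical names, not an algorithm from the paper) flags useful arcs directly from the definition by enumerating all simple $s,t$-paths; it takes exponential time in general, which is exactly why a fast algorithm is non-trivial.

```python
def useful_arcs(arcs, s, t):
    """Collect every arc that lies on at least one simple s,t-path,
    by brute-force enumeration of all simple s,t-paths (exponential
    time in general -- for intuition only)."""
    out = {}
    for u, v in arcs:
        out.setdefault(u, []).append(v)
    useful = set()

    def dfs(v, visited, path_arcs):
        if v == t:
            useful.update(path_arcs)
            return
        for w in out.get(v, []):
            if w not in visited:                 # keep the path simple
                dfs(w, visited | {w}, path_arcs + [(v, w)])

    dfs(s, {s}, [])
    return useful

# Both endpoints of the arc (1, 3) are reachable from s = 0, and vertex 3
# can reach t = 2, yet (1, 3) is useless: any s,t-path through it must
# revisit vertex 1.  Plain reachability checks are therefore not enough.
arcs = [(0, 1), (1, 2), (1, 3), (3, 1)]
print(useful_arcs(arcs, 0, 2))                   # only (0, 1) and (1, 2)
```

The small example also illustrates why single-arc usefulness relates to two vertex-disjoint paths: the $s,u$-path and the $v,t$-path around an arc $uv$ must avoid each other.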
In this paper, we study the {\em flow network simplification} problem, where we are given a directed planar graph $G=(V,E)$ on $n$ vertices, a source vertex $s$ and a sink vertex $t$, and the goal is to remove all the arcs $e$ that are not contained in any simple $s,t$-path. One may observe that the problem of determining the usefulness of a single arc involves finding two vertex-disjoint paths, a problem that is NP-complete in general directed graphs \cite{FortuneHW80}. Thus, detecting all the useless arcs at once is non-trivial. Here we present a linear-time algorithm that determines all the useless arcs, thus solving the problem left open in the work of Weihe~\cite{Weihe97} and settling the complexity of simplifying a flow network in directed planar graphs. The main ingredient of our algorithm is a decomposition algorithm that slices a plane graph into small strips with simple structures. This allows us to design an appropriate algorithm and data structure to handle each strip separately. Our data structure is simple but requires a rigorous analysis of the properties of a planar embedding. We use information provided by the data structure to determine in $O(1)$ time whether each arc $e$ is contained in a simple $s,t$-path $P$, thus yielding a linear-time algorithm. The main difficulty is that we cannot afford to explicitly compute such an $s,t$-path $P$ (if one exists), as doing so would lead to $O(n^2)$ running time. The existence of any such path $P$ can only be determined implicitly by our structural lemmas. Our main algorithm runs in linear time. However, it requires the planar embedding (which is assumed to be given together with the input graph) to contain no clockwise cycle and every vertex except the source and the sink to have degree exactly three. We provide a sequence of linear-time reductions that produces a plane graph satisfying the above requirements with the guarantee that the value of the maximum $s,t$-flow is preserved.
In particular, we apply a standard degree-reduction to reduce the degree of the input graph and invoke the algorithm of Khuller, Naor, and Klein \cite{KhullerNK93} to modify the planar embedding using an application of shortest path computation in the dual graph, which can be done in linear time by the algorithm of Henzinger, Klein, Rao, and Subramanian~\cite{HenzingerKRS97}. \subsection{Our Contributions} \label{sec:contributions} Our contribution is two-fold. Firstly, our algorithm removes all the useless arcs in a directed planar $s,t$-flow network in linear time, which thus completes the framework of Weihe \cite{Weihe97} and yields an alternative $O(n\log n)$-time algorithm for computing maximum $s,t$-flow on directed planar graphs. Secondly, our algorithm can be seen as a (truly) linear-time $s,t$-flow sparsifier, which runs faster than any known maximum $s,t$-flow algorithm on directed (and also undirected) planar graphs (any such algorithm can be seen as an $s,t$-flow sparsifier as well). The plane graph produced by our algorithm has simple structures that can potentially be exploited in further developments of network flow algorithms; in particular, this may lead to a simpler or faster algorithm for computing maximum $s,t$-flow in directed planar graphs. In addition, our algorithm could be adapted to simplify a flow network in undirected planar graphs as well. \subsection{Related Work} \label{sec:related-work} Planar duality plays a crucial role in the development of planar flow algorithms. Many maximum flow algorithms in planar graphs exploit a shortest path algorithm as a subroutine. However, the $O(n\log n)$-time algorithm for the maximum $s,t$-flow problem in directed planar graphs by Borradaile and Klein \cite{BorradaileK09}, which is essentially a left-most augmenting-path algorithm, is not based on shortest path algorithms. Erickson~\cite{Erickson10} reformulated the maximum flow problem as a parametric shortest path problem.
The two $O(n\log n)$-time algorithms were shown to be essentially the same \cite{Erickson10}, but with different interpretations. Roughly speaking, the first one, by Borradaile and Klein, runs mainly on the primal graph, while the latter one, by Erickson, runs mainly on the dual graph. Borradaile and Harutyunyan~\cite{BorradaileH13} explored this path-flow duality further and showed a correspondence between maximum flows and shortest paths in directed planar graphs with no restriction on the locations of the source and the sink. For undirected planar unit-capacity networks, Weihe~\cite{Weihe97-undirected} presented a linear-time algorithm for the maximum $s,t$-flow problem. Brandes and Wagner~\cite{BrandesW00} and Eisenstat and Klein~\cite{EisenstatK13} presented linear-time algorithms for the case where the planar unit-capacity network is directed. A seminal work of Miller and Naor~\cite{MillerN95} studied flow problems in planar graphs with multiple sources and multiple sinks. The problem is special in planar graphs, as one cannot use the standard reduction to the single-source single-sink problem without destroying planarity. Using a wide range of planar graph techniques, Borradaile, Klein, Mozes, Nussbaum, and Wulff-Nilsen~\cite{BorradaileKMNWN11} presented an $O(n\log^3 n)$-time algorithm for this problem. \section{Preliminaries and Background} \label{sec:prelim} We use the standard terminology in graph theory. By the {\em degree} of a vertex in a directed graph, we mean the sum of its indegree and its outdegree. A path $P\subseteq G$ is a {\em simple path} if each vertex appears in $P$ at most once. A {\em cycle} in a directed graph is a simple directed walk (i.e., no internal vertices appear twice) that starts and ends at the same vertex. A {\em strongly connected component} of $G$ is a maximal subgraph of $G$ that is strongly connected. The {\em boundary} of a strongly connected component is the set of arcs that encloses the component (arcs on the unbounded face of the component).
We call those arcs {\em boundary arcs}. For any strongly connected component $C$ such that $s,t\notin V(C)$, a vertex $v\in V(C)$ is an {\em entrance} of $C$ if there exists an $s,v$-path $P\subseteq G$ such that $P$ contains no vertices of $C$ except $v$, i.e., $V(P)\cap V(C)=\{v\}$. Similarly, a vertex $v\in V(C)$ is an {\em exit} of $C$ if there exists a $v,t$-path $P\subseteq G$ such that $P$ contains no vertices of $C$ except $v$, i.e., $V(P)\cap V(C)=\{v\}$. Consider an $s,t$-flow network consisting of a directed planar graph $G$, a source vertex $s$ and a sink vertex $t$. We say that an arc $uv$ is a {\em useful} arc (w.r.t. $s$ and $t$) if there is a simple $s,t$-path $P$ containing $uv$. Thus, the $s,u$ and the $v,t$ subpaths of $P$ have no common vertices. Otherwise, if there is no simple $s,t$-path containing $uv$, then we say that $uv$ is a {\em useless} arc (w.r.t. $s$ and $t$). Similarly, a path $P$ is {\em useful} (w.r.t. $s$ and $t$) if there is a simple $s,t$-path $Q$ that contains $P$. Note that if a path $P$ is useful, then all the arcs of $P$ are useful. However, the converse is not true, i.e., that all the arcs of $P$ are useful does not imply that $P$ is a useful path. Throughout the paper, we denote the input directed planar graph by $G$ and denote the number of vertices and arcs of $G$ by $n$ and $m=O(n)$, respectively. We also assume that a planar embedding of $G$ is given as input, and that the sink vertex $t$ is on the unbounded face of the embedding. Note that we assume that our flow networks have unit capacities, although the algorithm works on networks with arbitrary capacities as well. \subsection{Planar Embedding and Basic Subroutines} \label{sec:prelim:planar-embedding} We assume that the plane graph is given in a standard format in which the input graph is represented by an adjacency list whose arcs are sorted in counterclockwise order.
To be precise, our input is a table $\calT$ of adjacency lists of arcs incident to each vertex $v\in V(G)$ (i.e., $V(G)$ is the index set of $\calT$), sorted in counterclockwise order. Each adjacency list $\calT(v)$ is a doubly linked list of arcs having $v$ as either head or tail (i.e., arcs of the form $uv$ or $vw$). Given an arc $vw$, one can find the arc $vw'$ next to $vw$ in the counterclockwise (or clockwise) order. This allows us to query in $O(1)$ time the {\em right-most arc} (resp., {\em left-most arc}) of $uv$, which is an arc $vw$ in the reverse direction that is nearest to $uv$ in the counterclockwise (resp., clockwise) order. \subsection{Left and Right Directions} \label{sec:prelim:left-right} Since we are given a planar embedding, we may compare two paths/arcs using the notions of {\em left to} and {\em right to}. Let us consider a reference path $P$ and place it so that the start vertex of $P$ is below its end vertex. We say that a path $Q$ is {\em right to} $P$ if we can cut the plane graph along the path $P$ in such a way that $Q$ is enclosed in the right side of the half-plane. Our definition includes the case that $Q$ is a single arc. We say that an arc $uu'$ {\em leaves $P$ to} (resp., {\em enters $P$ from}) the right if $u\in V(P)$ and $uu'$ lies on the right half-plane cutting along $P$. Similarly, we say that a path $Q$ {\em leaves $P$ to} (resp., {\em enters $P$ from}) the right if the first arc of $Q$ leaves $P$ to (resp., enters $P$ from) the right half-plane cutting along $P$. These terms are also used analogously for the case of the left direction. As mentioned, the representation of a plane graph described in the previous section allows us to query in $O(1)$ time the {\em right-most} arc $vw$ of a given arc $uv$, which is the arc in the reverse direction nearest to $uv$ in the counterclockwise order.
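As a concrete illustration of this query, the following sketch stores each vertex's incident darts in counterclockwise order (a rotation system). The class and dart encoding are our own hypothetical choices, not the paper's exact data structure; under the degree-three guarantee, the short scan below inspects only $O(1)$ darts.

```python
class PlaneGraph:
    """Rotation-system sketch.  For each vertex v, rot[v] lists the
    darts incident to v in counterclockwise order; a dart is ('in', u)
    for an arc uv entering v, or ('out', w) for an arc vw leaving v."""

    def __init__(self, rot):
        self.rot = rot                            # vertex -> ccw dart list
        self.pos = {(v, d): i                     # O(1) slot lookup
                    for v, darts in rot.items()
                    for i, d in enumerate(darts)}

    def rightmost(self, u, v):
        """Right-most arc of uv: the first arc leaving v encountered
        counterclockwise after the incoming dart uv."""
        darts = self.rot[v]
        i = self.pos[(v, ('in', u))]
        for step in range(1, len(darts) + 1):
            kind, w = darts[(i + step) % len(darts)]
            if kind == 'out':
                return (v, w)
        return None                               # no arc leaves v

    def leftmost(self, u, v):
        """Symmetric query: scan clockwise instead."""
        darts = self.rot[v]
        i = self.pos[(v, ('in', u))]
        for step in range(1, len(darts) + 1):
            kind, w = darts[(i - step) % len(darts)]
            if kind == 'out':
                return (v, w)
        return None

# A degree-three vertex 1 whose ccw order is: arc from 0, arc to 2, arc to 3.
g = PlaneGraph({1: [('in', 0), ('out', 2), ('out', 3)]})
print(g.rightmost(0, 1), g.leftmost(0, 1))        # (1, 2) (1, 3)
```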
Consequently, we may define the {\em right-first-search} algorithm as a variant of the depth-first-search algorithm that chooses to traverse the right-most (unvisited) arc in each step of the traversal. We may assume that the source $s$ has a single arc $e$ leaving it. Thus, the right-first-search (resp., left-first-search) started from $s$ gives a unique ordering of the vertices (because every path must start with the arc $e$). \subsection{Forward and Backward Paths} \label{sec:prelim:forward-backward} Forward and backward paths are crucial objects that we use to determine whether an arc is useful or useless. Let $F$ be a reference path, which we call a {\em floor}. We order the vertices on $F$ according to their appearance on $F$ and denote this order by $\pi:V(F)\rightarrow [|V(F)|]$. We may think of $F$ as a line that goes from left to right. Consider any path $P$ that starts and ends at vertices on $F$. We say that $P$ is a {\em forward path w.r.t. $F$} if the order of the vertices in $V(P)\cap V(F)$ agrees with that of $F$, i.e., for all vertices $u,v\in V(P)\cap V(F)$, $\pi(u) < \pi(v)$ implies that $u$ appears before $v$ on $P$; otherwise, we say that $P$ is a {\em backward path w.r.t. $F$}. In general, we will use the term forward path to mean a minimal forward path, i.e., a forward path $P$ that intersects $F$ only at its start and end vertices and shares no inner vertices with $F$. We will use the term {\em extended forward path} to mean a non-minimal forward path. In this sense, the ceiling $U$ (defined in the next subsection) is also an extended forward path. Analogously, we use the same notions for backward paths. A path $P$ is {\em flipped} w.r.t. $F$ if it starts from the left side of $F$ and ends on the right side of $F$. Similarly, $P$ is {\em reverse-flipped} w.r.t. $F$ if it starts from the right side of $F$ and ends on the left side of $F$.
A {\em non-flipped} path is a path that lies on the left side of $F$, and a {\em hanging} path is a path that lies on the right side of $F$. \subsection{Strips and Links} \label{sec:prelim:strips} We are ready to define our main structural components. A {\em strip} is formed by a {\em floor} $F$ and a forward path $U$ w.r.t. $F$, called a {\em ceiling}: it is the region enclosed by the floor $F$ and the ceiling $U$, and we denote it by $C_{U,F}$. Observe that if we draw $F$ in such a way that $F$ is a line that goes from left to right, then $F$ lies beneath $U$ in the planar drawing. The two paths $U$ and $F$ form the {\em boundary} of the strip $C_{U,F}$. A strip is {\em proper} if $U$ and $F$ are internally vertex disjoint, i.e., they have no vertices in common except the start and the end vertices. Generally, we use the term strip to mean a proper strip, except for the case of the {\em primary strip}, which we will define later in Section~\ref{sec:s-inside}. Consider a proper strip $C_{U,F}$. We call arcs in $U$ and $F$ {\em boundary arcs} and call the other arcs (i.e., arcs in $E(C_{U,F})- (E(U)\cup E(F))$) {\em inner arcs}. Similarly, vertices in $U$ and $F$ are called {\em boundary vertices}, and the other vertices (i.e., vertices in $V(C_{U,F})- (V(U)\cup V(F))$) are called {\em inner vertices}. A path $P\subseteq C_{U,F}$ which is not entirely contained in $F$ or $U$ is called a {\em link} in $C_{U,F}$ if its start and end vertices are boundary vertices and all other vertices are inner vertices, i.e., $V(P)\cap (V(U)\cup V(F))=\{u,v\}$, where $u,v$ are the first and the last vertices of $P$, respectively. Observe that a link has no boundary arcs, i.e., $E(P)\cap (E(U)\cup E(F))=\emptyset$. A link in $C_{U,F}$ whose start and end vertices are in the floor $F$ can be classified as {\em forward} or {\em backward} in the same manner as forward and backward paths (w.r.t. the floor $F$).
Specifically, {\em forward} (resp., {\em backward}) links are minimal forward (resp., backward) paths. A link, however, can start at some vertex on the floor $F$ and end at an inner vertex of the ceiling $U$; in this case, we call it an {\em up-cross} path. On the other hand, if it goes from an inner vertex of the ceiling $U$ to some vertex on the floor $F$, we call it a {\em down-cross} path. A {\em hanging path} inside a strip is defined to be a hanging path w.r.t. the ceiling $U$. That is, a hanging path is a link that starts and ends at inner vertices of the ceiling $U$. Note that a hanging path can be either a forward or a backward hanging path. The classification of these links is shown in Figure~\ref{fig:paths-in-strip}. \begin{figure} \caption{The classification of links in a strip.} \label{fig:paths-in-strip} \end{figure} A link inside a proper strip slices the strip into two pieces: one on the left and one on the right of the link. If we slice a strip with a forward, up-cross, down-cross or forward hanging path, then we obtain two strips that are again proper strips. In this sense, we say that a strip is {\em minimal} if we cannot slice it further, i.e., it has no forward, up-cross, down-cross or forward hanging paths (but backward paths are allowed). \subsection{The Usefulness of Forward Paths} The usefulness of forward paths inside a strip $C_{U,F}$ can be determined by the usefulness of its floor, as stated in the following lemma. \begin{lemma} \label{lem:forward-path-is-useful} Consider a strip $C_{U,F}$ such that the source vertex $s$ and the sink vertex $t$ are not in $C_{U,F}$. If $F$ is useful, then so is any forward path w.r.t. $C_{U,F}$. \end{lemma} \begin{proof} Let $u$ and $v$ be the start and the end vertices of $F$, and let $P$ be a forward path connecting some vertices $q,w \in V(F)$. Since $F$ is useful, there exists a simple $s,t$-path $R$ that contains $F$. We may choose $R$ to be the shortest such path.
Since $s,t$ are not in $C_{U,F}$, the minimality of $R$ implies that none of the vertices in $V(R)- V(F)$ are in $C_{U,F}$. Consequently, the path $P$ (which is a link) has no vertices in $V(R)- V(F)$. We will construct a new simple $s,t$-path by replacing the $q,w$-subpath of $R$ by $P$. To be formal, let $R_{s,q}$ and $R_{w,t}$ be the $s,q$ and $w,t$ subpaths of $R$, respectively. We then define a new path $R' = R_{s,q}\cdot P\cdot R_{w,t}$. It can be seen that $R'$ is a simple $s,t$-path. This proves that $P$ is useful. \end{proof} The following corollary follows immediately from Lemma~\ref{lem:forward-path-is-useful}. \begin{corollary} \label{cor:ext-forward-path-is-useful} Consider a strip $C_{U,F}$ such that the source vertex $s$ and the sink vertex $t$ are not in $C_{U,F}$. If $F$ is useful, then so is any extended forward path w.r.t. $C_{U,F}$. \end{corollary} \begin{proof} The proof proceeds in the same way as that of Lemma~\ref{lem:forward-path-is-useful}. First, consider any extended forward path $P$. Observe that $P$ can be partitioned into subpaths so that each subpath is either contained in $F$ or is a forward path w.r.t. $C_{U,F}$. We replace subpaths of $F$ by these forward paths, which results in a new $u,v$-path $P'\supseteq P$, where $u$ and $v$ are the start and the end vertices of the floor $F$, respectively. Since $F$ is useful, there exists a simple $s,t$-path $R$ that contains $F$. We may assume that $R$ is the shortest such path. By the minimality of $R$, we know that $P'$ has no vertices in $V(R)- V(F)$ (because $P'\subseteq C_{U,F}$). So, we can construct a new simple $s,t$-path $R'$ by replacing the path $F\subseteq R$ in $R$ by the path $P'$. The simple $s,t$-path $R'\supseteq P'\supseteq P$ certifies that $P$ is useful. \end{proof} \subsection{Other important facts} Below are several useful facts that form the basis of our algorithm. \begin{lemma} \label{lem:remain-useful} Let $e'$ be any useless arc in $G$.
Then any useful arc $e\in E(G)$ is also useful in $G- \{e'\}$. \end{lemma} \begin{proof} By the definition of a useful arc, there exists a simple $s,t$-path $P$ in $G$ containing the arc $e$, and such a path $P$ cannot contain $e'$; otherwise, it would certify that $e'$ is useful. Therefore, $P\subseteq G-\{e'\}$, implying that $e$ is useful in $G-\{e'\}$. \end{proof} \begin{lemma} \label{lem:boundary-is-ccw} Let $C$ be a strongly connected component, and let $B$ be the set of boundary arcs of $C$. Then $B$ forms a counterclockwise cycle. \end{lemma} \begin{proof} Since we assume that the embedding has no clockwise cycle, it suffices to show that $B$ is a cycle. We prove this by contradiction. Assume that $B$ is not a cycle. Then we would have two consecutive arcs in $B$ that go in opposite directions. (Note that the underlying undirected graph of $B$ is always a cycle.) That is, there must exist a vertex $u$ with two leaving arcs, say $uv$ and $uw$. Since $u,v,w$ are in the same strongly connected component, the component $C$ must have a $v,u$-path $P$ and a $w,u$-path $P'$. We may assume that $P$ and $P'$ are minimal and hence simple paths. Now $P\cup\{uv\}$ and $P'\cup\{uw\}$ each form a cycle, and only one of the two cycles can be counterclockwise (since $uv$ and $uw$ go in opposite directions), a contradiction. \end{proof} \begin{lemma} \label{lem:disjoint-entrance-exit-paths} Let $u$ and $v$ be an entrance and an exit of $C$, respectively, and let $P_u$ be an $s,u$-path and $P_v$ a $v,t$-path that contain no vertices of $C$ except $u$ and $v$, respectively. Then $P_u$ and $P_v$ are vertex disjoint. \end{lemma} \begin{proof} Suppose that $P_u$ and $P_v$ share a vertex $w$. Then $w$ reaches $u$ along $P_u$, $u$ reaches $v$ inside $C$, and $v$ reaches $w$ along $P_v$, so $w$ and $C$ lie in a common strongly connected subgraph strictly containing $C$. This is a contradiction since $C$ is a maximal strongly connected subgraph of $G$.
\end{proof} \section{Overview of Our Algorithm} \label{sec:overview} In this section, we give an overview of the algorithm. First, we preprocess the plane graph so that it meets the following requirements. \begin{enumerate} \item There is {\em no clockwise cycle} in the planar embedding. \item The source $s$ is adjacent to only {\em one outgoing arc}. \item Every vertex except $s$ and $t$ has {\em degree three}. \end{enumerate} We provide in Section~\ref{sec:preprocessing} a sequence of reductions that outputs a graph satisfying the above conditions while guaranteeing that the value of the maximum $s,t$-flow is preserved. It is worth noting that the degree-three condition is not a strict requirement for our main algorithm. We only need every vertex to have either indegree one or outdegree one; a network that meets this prerequisite is called a {\em unit network}. After the preprocessing step, we apply the main algorithm, which first decomposes the plane graph into a collection of strongly connected components. The algorithm then processes each strongly connected component independently. Notice that the usefulness of arcs that lie outside of the strongly connected components can be determined by basic graph search algorithms. To see this, let us contract all the strongly connected components, which results in a {\em directed acyclic graph} (DAG). Since there is no cycle, an arc $uv$ is useful if and only if $u$ is reachable from $s$, and $t$ is reachable from $v$. So, it suffices to run any graph search algorithm to list all the vertices reachable from $s$ and all those that can reach $t$ (i.e., the vertices reachable from $t$ in the reverse graph). Hence, we are left to work on arcs inside each strongly connected component. We classify each strongly connected component as either an {\bf outside case} or an {\bf inside case}. The outside case is the case that the source $s$ lies outside of the component, and the inside case is the case that the source $s$ is enclosed inside the component.
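The reachability test above, for arcs outside the strongly connected components, can be sketched as follows. This is our own illustration (the function names are hypothetical), and it assumes that the component labels `scc_id` have already been computed, e.g. by Tarjan's linear-time algorithm.

```python
from collections import defaultdict

def useful_arcs_outside_sccs(arcs, s, t, scc_id):
    """In the condensation (a DAG), an arc uv joining two different
    components is useful iff u is reachable from s and t is reachable
    from v.  scc_id maps each vertex to its component label."""
    fwd, rev = defaultdict(set), defaultdict(set)
    for u, v in arcs:
        cu, cv = scc_id[u], scc_id[v]
        if cu != cv:                      # keep inter-component arcs only
            fwd[cu].add(cv)
            rev[cv].add(cu)

    def reach(start, adj):                # graph search over the condensation
        seen, stack = {start}, [start]
        while stack:
            for w in adj[stack.pop()] - seen:
                seen.add(w)
                stack.append(w)
        return seen

    from_s, to_t = reach(scc_id[s], fwd), reach(scc_id[t], rev)
    return {(u, v) for u, v in arcs
            if scc_id[u] != scc_id[v]
            and scc_id[u] in from_s and scc_id[v] in to_t}

# A DAG (every component is a single vertex): the dead-end arc (1, 3)
# is useless because vertex 3 cannot reach the sink t = 4.
arcs = [(0, 1), (1, 2), (1, 3), (2, 4)]
print(useful_arcs_outside_sccs(arcs, 0, 4, {v: v for v in range(5)}))
```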
(Note that since $s$ has no incoming arc, it cannot be on the boundary of a strongly connected component.) We deal with the outside case in Section~\ref{sec:st-outside} and with the inside case in Section~\ref{sec:s-inside}. While the inside case is more complicated, we decompose a component of the inside case into subgraphs which resemble those in the outside case. In both cases, the main ingredient is the {\em strip decomposition}, which allows us to handle all the useless arcs in one go. We present the strip decomposition algorithm in Section~\ref{sec:strip-decomposition}. See Figure~\ref{fig:outside-inside} for examples of the outside and inside cases. It is possible to have many inside cases (see Figure~\ref{fig:component-example}). \begin{figure} \caption{The top figure illustrates the outside case, while the bottom figure illustrates the inside case.} \label{fig:outside-inside} \end{figure} \begin{figure} \caption{An example showing 7 components. Note the nested inside cases.} \label{fig:component-example} \end{figure} \section{Preprocessing The Plane Graph} \label{sec:preprocessing} In this section, we present a sequence of reductions that produces the plane graph required by the main algorithm. The input to this algorithm is a directed planar flow network $(G_0,s_0,t_0)$ consisting of a plane graph $G_0$ (i.e., a directed planar graph plus a planar embedding), a source $s_0$ and a sink $t_0$. Our goal is to compute a directed planar flow network $(G,s,t)$ consisting of a plane graph $G$, a source $s$ and a sink $t$ with the following properties. \begin{itemize} \item[(1)] The value of the maximum flow in the network $(G,s,t)$ and that in the original network $(G_0,s_0,t_0)$ are the same. \item[(2)] The plane graph $G$ has no clockwise cycle. \item[(3)] Every vertex in $G$ except the source $s$ and the sink $t$ has degree exactly three. \item[(4)] The source $s$ has no incoming arc and has outdegree one.
\item[(5)] The resulting graph $G$ has $|V(G)| \leq |E(G_0)|+2$ and $|E(G)| \leq 2|E(G_0)|+2$.
\end{itemize}

The main subroutine in the preprocessing step is the algorithm for removing all the clockwise cycles of a given plane graph by Khuller, Naor and Klein \cite{KhullerNK93}, which is guaranteed to preserve the value of the maximum $s,t$-flow. This subroutine can be implemented by computing single-source shortest paths in the dual planar graph, where the source $s^*$ is placed on the outer face of the dual graph; these shortest paths can be computed in linear time using the algorithm of Henzinger, Klein, Rao, and Subramanian~\cite{HenzingerKRS97}. This subroutine introduces no new vertices but may reverse some arcs.

\subsection{Reduction}
\label{sec:preprocessing:reduction}

Our preprocessing first removes all the clockwise cycles. Then it applies a standard reduction to reduce the degree of all the vertices:
\begin{itemize}
\item {\bf Step 1:} We remove all the clockwise cycles from the plane graph $G_0$. We denote the plane graph obtained in this step by $G_1$.
\item {\bf Step 2:} We apply the {\em degree reduction} to obtain a plane graph $G_2$ that satisfies the degree conditions in Properties (3) and (4): We replace the source $s_0$ by an arc $ss_0$, i.e., we introduce the new source $s$ and add an arc $ss_0$ joining $s$ to $s_0$. For notational convenience, we rename $t_0$ as $t$. Then we replace each vertex $v$ of degree $d_v$ by a counterclockwise cycle $\hat{C}_v$ of length $d_v$. We redirect each arc entering or leaving $v$ to exactly one vertex of $\hat{C}_v$ in such a way that no arcs cross.
(So, each vertex of $\hat{C}_v$ corresponds to an arc incident to $v$ in $G_0$.)
\end{itemize}

As we discussed, it follows from the result in \cite{KhullerNK93} that removing clockwise cycles does not change the value of the maximum flow, and it is not hard to see that the degree reduction has no effect on the value of the maximum flow on edge-capacitated networks. Thus, Property~(1) holds for $(G,s,t)$. Property~(2) holds because we remove all the clockwise cycles in Step~1 and the degree reduction introduces only counterclockwise cycles, and Properties~(3)--(5) follow directly from the degree reduction.

\section{Strip-Slicing Decomposition}
\label{sec:strip-decomposition}

The crucial part of our algorithm is the decomposition algorithm that decomposes a strongly connected component $C$ into (proper) {\em strips}, which are regions of $C$ in the planar embedding enclosed by two arc-disjoint paths $U$ and $F$ that share start and end vertices. We will present the {\em strip-slicing decomposition} algorithm that decomposes a given strip $C_{U,F}$ into a collection $\calS$ of {\em minimal} strips in the sense that no strip in $\calS$ can be sliced into smaller strips. Moreover, any two strips obtained by the decomposition are either disjoint or share only their boundary vertices. We claim that the above decomposition can be done in linear time. If certain prerequisites are met (we will discuss this in a later section), then our strip-slicing decomposition gives a collection of strips such that all the inner arcs of each strip are useless while all the boundary arcs are useful. Before proceeding to the presentation of the algorithm, we advise the readers to recall the definitions of links and paths in a strip in Section~\ref{sec:prelim:strips}.

\subsection{The Decomposition Algorithm}
\label{sec:strip-decomposition:algorithm}

Now we describe our strip-slicing decomposition algorithm. We start our discussion by presenting an abstraction of the algorithm. We denote the initial (input) strip by $C_{U^*,F^*}$.
We call $U^*$ the {\em top-most} ceiling and $F^*$ the {\em lowest floor}. The top-most ceiling $U^*$ can be a dummy ceiling that does not exist in the graph. The decomposition algorithm takes as input a strip $C_{U,F}$. Then it finds the ``right-most'' path $P$ w.r.t.\ the floor $F$ that is either a forward path or an up-cross path, which we call a {\em slicing path}. This path $P$ slices the strip into two regions, the {\em up-strip} and the {\em down-strip}. Please see Figure~\ref{fig:slicing-strip} for an illustration. Intuitively, we wish to slice the input strip into pieces so that the final collection consists of strips that have no forward, up-cross, down-cross, or forward hanging paths (each of these paths can slice a strip into smaller ones).

\begin{figure} \caption{The path $P$ slices the strip into the up-strip and the down-strip.} \label{fig:slicing-strip} \end{figure}

The naive implementation yields an $O(n^2)$-time algorithm; however, with a careful implementation, we can speed up the running time to $O(n)$. We remark that our decomposition algorithms (both the quadratic-time and the linear-time algorithms) do not need the properties that the input plane graph has no clockwise cycles and that every vertex (except the source and the sink) has degree three. Thus, we are free to modify the graph as long as the graph remains planar. To make the readers familiar with our notations, we start by presenting the quadratic-time algorithm in Section~\ref{sec:strip-decomposition:quadratic-time} and prove its correctness. Then we present the linear-time algorithm in Section~\ref{sec:strip-decomposition:linear-time} and show that it has the same properties as the quadratic-time algorithm. The basic subroutines that are shared by both algorithms are presented in Section~\ref{sec:strip-decomposition:subroutines}. More precisely, we prove the following lemmas in the next two sections.
\begin{lemma} \label{lem:quadratic-time-decomposition} There is an $O(n^2)$-time algorithm that, given as input a strip $C_{U^*,F^*}$, outputs a collection $\calS$ of strips such that each strip $C_{U,F}\in\calS$ has no forward, up-cross, down-cross, or forward hanging path (w.r.t.\ $U$). \end{lemma}

\begin{lemma} \label{lem:linear-time-decomposition} There is a linear-time algorithm that, given as input a strip $C_{U^*,F^*}$, outputs a collection $\calS$ of strips such that each strip $C_{U,F}\in\calS$ has no forward, up-cross, down-cross, or forward hanging path (w.r.t.\ $U$). \end{lemma}

The important property that we need from the decomposition algorithm is that if the floor of the input strip $C_{U^*,F^*}$ is useful, then all the boundary arcs of every strip $C_{U,F}\in\mathcal{S}$ are also useful.

\begin{lemma} \label{lem:properties-strips} Let $\mathcal{S}$ be a collection of minimal strips obtained by running the strip-slicing decomposition on an input strip $C_{U^*,F^*}$. Suppose that the source $s$ and the sink $t$ are not enclosed in $C_{U^*,F^*}$ and that the floor $F^*$ is useful. Then, for every strip $C_{U,F}\in\mathcal{S}$, all the boundary arcs of $C_{U,F}$ (i.e., arcs in $U\cup F$) are useful. \end{lemma}

\begin{proof} We prove by induction that the decomposition algorithm maintains the invariant that the ceiling $U$ and the floor $F$ are useful in every recursive call. For the base case, since $F^*$ is useful, Lemma~\ref{lem:forward-path-is-useful} implies that $U^*$ is useful because it is a forward path w.r.t.\ $F^*$. Inductively, assume that the claim holds prior to running the decomposition algorithm on a strip $C_{U,F}$. Thus, $U$ and $F$ are useful. If the algorithm finds no slicing path, then we are done. Otherwise, it finds a slicing path $P$, which slices the strip $C_{U,F}$ into $C_{U_{up},F_{up}}$ and $C_{U_{down},F_{down}}$. Let $s'$ and $t'$ be the start and end vertices of the strip $C_{U,F}$, respectively.
Observe that each of the paths $U_{up},F_{up},U_{down}$ and $F_{down}$ is a simple path that can be extended, along the boundary of $C_{U,F}$, to a simple $s',t'$-path, regardless of whether the slicing path $P$ is a forward path or an up-cross path. Since $s$ and $t$ are not enclosed in $C_{U,F}$, we can extend any simple $s',t'$-path $Q$ into a simple $s,t$-path, which certifies that $Q$ is a useful path. Therefore, $U_{up},F_{up},U_{down}$ and $F_{down}$ are all useful, and the lemma follows. \end{proof}

\subsection{Basic Subroutines}
\label{sec:strip-decomposition:subroutines}

Before proceeding, we formally describe the two basic procedures that will be used as subroutines in the main algorithms.

\paragraph*{PROCEDURE 1: Slice the strip $C_{U,F}$ by a path $P$.} This procedure outputs two strips $C_{U_{up},F_{up}}$ and $C_{U_{down},F_{down}}$. We require that $P$ is either a forward path or an up-cross path. Let $s',t'$ be the start and end vertices of the strip $C_{U,F}$ (which are the end vertices of both $U$ and $F$), and let $u,v$ be the start and end vertices of $P$, respectively. We define the two new strips by defining the new floors and ceilings as below.
\begin{itemize}
\item {\bf Case 1}: $P$ is a {\bf forward path}. Then we know that $u,v$ are on the floor $F$ and the other vertices of $P$ are not on the boundary of $C_{U,F}$. To define the up-strip, we let $U_{up} := U$ and construct $F_{up}$ by replacing the $u,v$-subpath of $F$ by $P$. To define the down-strip, we let $U_{down} := P$ and let $F_{down}$ be the $u,v$-subpath of $F$.
\item {\bf Case 2}: $P$ is an {\bf up-cross path}. Then we know that $u$ is on the floor $F$ while $v$ is on the ceiling $U$, and the other vertices of $P$ are not on the boundary of $C_{U,F}$. To define the up-strip, we let $U_{up}$ be the $s',v$-subpath of $U$ and construct $F_{up}$ by concatenating the $s',u$-subpath of $F$ with $P$. To define the down-strip, we construct $U_{down}$ by concatenating the path $P$ with the $v,t'$-subpath of $U$, and we let $F_{down}$ be the $u,t'$-subpath of $F$.
\end{itemize}

\paragraph*{PROCEDURE 2: Find a slicing path starting from $u\in F$.} We require that there are no arcs leaving the strip $C_{U,F}$ (we can temporarily remove them). The algorithm runs recursively in the right-first-search manner (recall that this is a variant of depth-first search in which we always choose to traverse the right-most arc). For simplicity, we assume that the first arc that will be scanned is the arc leaving $u$ that is next to the floor arc. Specifically, letting $uw\in F$ be the floor arc and $uw'$ be the arc next to $uw$ in counterclockwise order, the algorithm will ignore $uw$ and start by traversing $uw'$. We terminate a recursive call when we reach a boundary vertex $v \in U\cup F$. The algorithm then decides whether the path $P$ is a slicing path. More precisely, if $v$ appears after $u$ on $F$ (thus, $P$ is a forward path) or $v$ is an internal vertex of the ceiling (thus, $P$ is an up-cross path), then the algorithm returns that we found a slicing path $P$ and reports whether $P$ is a forward path or an up-cross path. Otherwise, we continue the search until we have visited all the inner vertices reachable from $u$; in this case, the algorithm returns that there is no slicing path. Notice that any slicing path $P$ found by the algorithm is the {\em right-most} slicing path starting from $u$.

\subsection{Quadratic-Time Strip-Slicing Decomposition (Proof of Lemma~\ref{lem:quadratic-time-decomposition})}
\label{sec:strip-decomposition:quadratic-time}

\paragraph*{The Main Algorithm.} Now we describe our $O(n^2)$-time algorithm in more detail. The algorithm reads an input strip $C_{U^*,F^*}$ and then outputs a collection of minimal strips $\calS$. Our algorithm consists of an initialization step and a recursive step (we may think of these as two separate procedures). Let $s',t'$ be the start and end vertices of the input strip $C_{U^*,F^*}$ (i.e., the start and end vertices of the paths $U^*$ and $F^*$).
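The control flow of the slicing-path search in PROCEDURE 2 above can be illustrated by the following sketch. It simplifies in two respects, both our own assumptions: it follows an arbitrary DFS order instead of the planar right-first order, and it assumes that the floor/ceiling arcs and the arcs leaving the strip have already been removed from the adjacency lists, so only inner arcs remain. All identifiers are ours, not from the paper.

```python
def find_slicing_path(adj, floor, ceiling, u):
    """Search for a slicing path starting at the floor vertex u.

    adj     : dict vertex -> list of out-neighbours over inner arcs only
    floor   : floor vertices s' = v_1, ..., v_l = t', in order
    ceiling : ceiling vertices (same endpoints as the floor)
    Returns ('forward', path) or ('up-cross', path), or (None, None)
    if no slicing path starts at u.
    """
    pos = {v: i for i, v in enumerate(floor)}       # position on the floor
    internal_ceiling = set(ceiling[1:-1])
    parent = {u: None}
    stack = [u]
    while stack:
        x = stack.pop()
        for y in adj.get(x, []):
            if y in parent:
                continue
            parent[y] = x
            # A branch terminates when it reaches a boundary vertex.
            if y in pos and pos[y] > pos[u]:        # lands later on the floor
                return 'forward', _rebuild(parent, y)
            if y in internal_ceiling:               # lands inside the ceiling
                return 'up-cross', _rebuild(parent, y)
            if y not in pos:                        # still an inner vertex
                stack.append(y)
    return None, None

def _rebuild(parent, v):
    # Reconstruct the path from the start vertex to v via parent pointers.
    path = []
    while v is not None:
        path.append(v)
        v = parent[v]
    return path[::-1]
```

A boundary vertex reached earlier on the floor (a backward path) simply terminates that branch without being reported, matching the behaviour described above.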
At the initialization step, we remove every arc leaving the input strip and then make sure that the {\em initial strip} has no hanging path by adding an auxiliary ceiling $(s',x',t')$ to enclose the input strip, where $x'$ is a dummy vertex that does not exist in the graph. Clearly, the modified strip has no hanging path.

Next we run the recursive procedure. Let us denote the input strip to this procedure by $C_{U,F}$. We first find a slicing path $P$. If there is no such path, then we add $C_{U,F}$ to the collection of strips $\calS$. Otherwise, we have found a slicing path $P$, and we use it to slice the strip $C_{U,F}$ into the up-strip $C_{U_{up},F_{up}}$ and the down-strip $C_{U_{down},F_{down}}$. We then make recursive calls on $C_{U_{up},F_{up}}$ and $C_{U_{down},F_{down}}$, and terminate the algorithm (or return to the parent recursive call).

\paragraph*{Analysis} The running time of the algorithm can be analyzed by the following recurrence relation:
\[ T(n) = T(n_1) + T(n - n_1) + O(n), \mbox{ where $T(1)=1$}. \]
This implies the quadratic running time of the algorithm. It is clear from the construction that any two strips in $\calS$ can intersect only at their boundaries and that any strip in the collection $\calS$ contains no forward or up-cross path. It remains to show that any strip at the termination has no forward hanging or down-cross path. Observe that the assertion holds trivially at the first call to the recursive procedure because the ceiling $U=(s',x',t')$ is an auxiliary ceiling (we simply have no arc leaving the lone internal vertex of $U$). We will show that this property holds inductively, i.e., we claim that if $C_{U,F}$ has no forward hanging or down-cross path, then both $C_{U_{up},F_{up}}$ and $C_{U_{down},F_{down}}$ also have no forward hanging or down-cross path. We consider two cases of the slicing path $P$.
{\bf Case 1: $P$ is a forward path.} Then the up-strip $C_{U_{up},F_{up}}$ has no forward hanging path because $U_{up}=U$ (but a backward hanging path may exist), and $C_{U_{up},F_{up}}$ has no down-cross path because, otherwise, any such path together with a subpath of $P$ would form a down-cross path in $C_{U,F}$. For the down-strip $C_{U_{down},F_{down}}$, we know that $P=U_{down}$ and that $P$ is the right-most slicing path w.r.t.\ the floor $F$. This means that there is no forward hanging path w.r.t.\ $U_{down}$ simply because such a hanging path $Q$ by definition must lie on the right of $U_{down}=P$, which contradicts the fact that $P$ is the right-most forward path (we could replace a subpath of $P$ by $Q$ to form a forward path to the right of $P$). Similarly, any down-cross path w.r.t.\ $C_{U_{down},F_{down}}$ together with a subpath of $P$ would form a forward path to the right of $P$ in $C_{U,F}$, again a contradiction.

{\bf Case 2: $P$ is an up-cross path.} Then the up-strip $C_{U_{up},F_{up}}$ has no forward hanging path simply because $U_{up}$ is a subpath of $U$. It is also not hard to see that $C_{U_{up},F_{up}}$ has no down-cross path. Suppose not. Then we have a down-cross path $Q$ that starts from some internal vertex $x$ of $U_{up}$ and ends at some vertex $y$ in $F_{up}$. We have two cases: (1) if $y\in F$, then the path $Q$ is also a down-cross path in $C_{U,F}$, and (2) if $y\in P$, then the path $Q$ together with a subpath of $P$ forms a forward hanging path in $C_{U,F}$. Both cases lead to a contradiction.

Next consider the down-strip $C_{U_{down},F_{down}}$. The ceiling $U_{down}$ consists of the slicing path $P$ and a subpath of $U$. Suppose there is a down-cross path $Q$ in $C_{U_{down},F_{down}}$. Then we know that $Q$ must go from $P$ to $F_{down}$ (if $Q$ started from the subpath of $U$ in $U_{down}$, it would be a down-cross path in $C_{U,F}$). But this means that we can form a forward path to the right of $P$ by concatenating a subpath of $P$ with $Q$, which contradicts the fact that $P$ is the right-most slicing path.
Suppose there is a forward hanging path $Q$ in $C_{U_{down},F_{down}}$. Then it must start from some vertex $v$ in $P$ and end on either $P$ or the subpath of $U$ in $U_{down}$. But both cases imply that there exists a slicing path $P'$ that lies to the right of $P$, which contradicts our choice of $P$. \qed

\subsection{Linear Time Implementation}
\label{sec:strip-decomposition:linear-time}

Now we present a linear-time implementation of our strip-slicing decomposition algorithm and thus prove Lemma~\ref{lem:linear-time-decomposition}. The algorithm is similar to the one with quadratic running time except that we keep track of the vertices that have already been scanned. Here the recursive procedure has an additional parameter $v^*$, which is a pointer to a vertex on the floor $F$ that we use as a starting point in finding the slicing path $P$.

To be precise, our algorithm again consists of two steps. Let us denote the input strip by $C_{U^*,F^*}$ and denote its start and end vertices by $s'$ and $t'$, respectively. At the initialization step, we remove all arcs leaving $C_{U^*,F^*}$ and add an auxiliary ceiling $(s',x',t')$ to enclose the input strip ($x'$ is a dummy vertex). Then we set the starting point $v^*:=s'$ and call the recursive procedure with parameters $C_{U,F}$ (the modified strip) and $v^*$.

In the recursive procedure, we first order the vertices on the floor $F$ as $v_1,\ldots,v_{\ell}$ (note that $v_1=s'$ and $v_{\ell}=t'$). Let $i^*$ be the index such that $v_{i^*}=v^*$. For each $i=i^*$ to $\ell$ in this order, we find a slicing path starting from $v_i$ (using the right-first-search algorithm). If we cannot find a slicing path starting from $v_i$, then we remove all the inner vertices and arcs scanned by the search algorithm and iterate to the next vertex (i.e., to the vertex $v_{i+1}$). Otherwise, we have found a slicing path $P$ and exit the loop. Note that the loop terminates under two conditions.
Either the search algorithm finds no slicing path, or it reports that there exists a slicing path $P$ starting from a vertex $v_i$. In the former case, we add $C_{U,F}$ to the collection of strips $\calS$. In the latter case, we remove all the inner vertices and arcs scanned by the search algorithm except those on $P$, and we then slice the strip $C_{U,F}$ into the up-strip $C_{U_{up},F_{up}}$ and the down-strip $C_{U_{down},F_{down}}$. Finally, we make two calls to the recursive procedure with parameters $(C_{U_{up},F_{up}},v_i)$ and $(C_{U_{down},F_{down}},v_i)$, respectively. In particular, we recursively run the procedure on the up and down strips, and we pass the vertex $v_i$ as the starting point in finding slicing paths.

\subsubsection{Implementation Details}

Here we describe some implementation details that affect the running time of our algorithm.

\paragraph*{Checking whether a vertex is in the floor or ceiling.} Notice that checking whether a vertex is in the floor $F$ or the ceiling $U$ can be an issue since the up-strip and the down-strip share boundary vertices. A trivial implementation would lead to storing the same vertex in $O(n)$ data structures, and the algorithm would end up running in $O(n^2)$ time. We resolve this issue by using a coloring scheme. We color the vertices on the floor $F$ red and the vertices on the ceiling $U$ green. (Note that we color only the internal vertices of $U$ since $U$ and $F$ share their start and end vertices.) We color all other vertices white. Now we can easily check if a vertex is in the floor or ceiling by checking its color. When we make a recursive call to the up-strip or the down-strip, we have to maintain the colors on the slicing path $P$ to make sure that we have the right coloring.
More precisely, when we make a recursive call to the up-strip, we color the internal vertices of $P$ red (since they will be vertices on the floor $F_{up}$) and then flip their colors to green later when we make a recursive call to the down-strip (since they will be vertices on the ceiling $U_{down}$). Observe that the internal vertices of the slicing path $P$ are internal vertices of the strip $C_{U,F}$, which will be boundary vertices of the up and down strips in the deeper recursive calls. As such, any vertex changes its color at most twice (from white to red and then to green).

\paragraph*{Checking if a path is a forward path.} Another issue is when we search for a slicing path $P$ and want to decide whether the path we found is a forward path or a backward path. We simply solve this issue by removing all the incoming arcs of the vertices $v_{i'}$ for all $1 \leq i'< i$. This way, any path we find must be either a forward path or an up-cross path.

\paragraph*{Slicing a Strip.} Observe that we may create $O(n)$ strips (and thus have $O(n)$ floors and ceilings) during the run of the decomposition algorithm. However, we cannot afford to store the floors and ceilings of every strip separately, as they are not arc-disjoint; just storing them would require a running time of $O(n^2)$. To avoid this issue, we keep paths (either floors or ceilings) as doubly linked lists. Consequently, we can store $C_{U,F}$ by references to the first and last arcs of the floor (respectively, the ceiling). Moreover, since we store paths as doubly linked lists, each {\em cut-and-join} operation can be done in $O(1)$ time.

\paragraph*{Maintaining the search to be inside $C_{U,F}$.} As we may have arcs leaving $C_{U,F}$, we have to ensure that the right-first-search algorithm does not go outside the strip. (Note that our algorithm does not see the real drawing of $C_{U,F}$ on a plane and thus cannot decide whether an arc is in a strip or not.)
To do so, we temporarily remove the arcs leaving $C_{U,F}$ and put them back after the procedure terminates. Observe that the inner parts of the up-strip $C_{U_{up},F_{up}}$ and the down-strip $C_{U_{down},F_{down}}$ are disjoint. Thus, each arc leaving $C_{U,F}$ is removed and put back at most once.

\subsubsection{Correctness}
\label{sec:strip-decomposition:linear-time:correctness}

For the correctness, it suffices to show that the slicing path $P$ that we find in every step is exactly the same as what we would have found by the quadratic-time algorithm. We prove this by induction. The base case, where we call the procedure on the initial strip $C_{U^*,F^*}$, holds trivially. For the inductive step, we assume that both implementations have found the same slicing paths so far. Now if the recursive procedure finds no slicing path, then we are done. Otherwise, we have found a slicing path $P$. Let us order the vertices on the floor $F$ as $v_1,\ldots,v_{\ell}$, and let $v_i$ be the start vertex of $P$.

First, consider the up-strip $C_{U_{up},F_{up}}$. Clearly, $C_{U_{up},F_{up}}$ has no up-cross path starting from any vertex $v_{i'}$ with $1 \leq i' < i$. We also know that $C_{U_{up},F_{up}}$ has no forward path $Q$ starting from $v_{i'}$; otherwise, the path $Q$ would either be a forward path in $C_{U,F}$ starting at $v_{i'}$ (if $Q$ starts and ends on $F$), or we could form another slicing path $P'$ starting from $v_{i'}$ by concatenating $Q$ with a subpath of $P$. Moreover, all the vertices that are scanned by running the right-first-search algorithm from $v_{i}$ must be enclosed in the down-strip. Thus, we conclude that removing all scanned vertices (those that are not on $P$) does not affect the choice of slicing paths in the up-strip.

Next consider the down-strip. In any case, we know that the slicing path $P$ becomes a subpath of the ceiling of $C_{U_{down},F_{down}}$.
Clearly, any path $Q$ from $P$ to $F_{down}$ would either form a forward path $P'$ lying to the right of $P$ or would imply that $C_{U,F}$ has a down-cross path (this is the case when $Q$ is a subpath of a longer path that goes from $U$ to $F_{down}$), which is a contradiction in both cases. Thus, we can remove every vertex that is scanned by the right-first-search algorithm and is not on the slicing path $P$ without any effect on the choices of slicing paths in deeper recursive calls. To conclude, our linear-time and quadratic-time implementations make the same choices of slicing paths and must produce the same outputs. The correctness of our algorithm is thus implied by Lemma~\ref{lem:quadratic-time-decomposition}.

\subsubsection{Running Time Analysis}
\label{sec:strip-decomposition:linear-time:running-time-analysis}

Clearly, the initialization step of the decomposition algorithm runs in linear time. Thus, it suffices to consider the running time of the recursive procedure. We claim that each arc is scanned by our algorithm at most $O(1)$ times, and each vertex is scanned only when we scan its arcs. This immediately implies the linear running time. We prove this by induction. First, note that we remove all the internal vertices and arcs that are scanned by the search algorithm but are not on the slicing path $P$ (if it exists), which means that these vertices and arcs are scanned only once throughout the algorithm. Thus, if we find no slicing path, then the decomposition runs in time linear in the number of arcs in the strip. Otherwise, we have found a slicing path $P$ starting at some vertex $v_i\in F$. The recursive procedure slices the strip $C_{U,F}$ in such a way that the two resulting strips $C_{U_{up},F_{up}}$ and $C_{U_{down},F_{down}}$ have no inner parts in common. Thus, we can deduce inductively that the running time incurred by scanning inner arcs and vertices of the two strips is linear in the number of arcs in the strips.
Moreover, since we pass $v_i$ as the starting point of the iterations in deeper recursive calls, we can guarantee that any vertex $v_{i'}\in F$ with $i' < i$ (those that appear before $v_i$) will never be scanned again. We note, however, that we may scan the vertex $v_i$ multiple times because some arcs leaving $v_i$ that lie on the left of $P$ have never been scanned. Nevertheless, we can guarantee that the number of times that we scan $v_i$ is at most the number of its outgoing arcs. To see this, first note that any arc that does not lead to a slicing path will be removed. For an arc that leads to a slicing path, we know that it will become an arc on the boundaries of the up and down strips and thus will never be scanned again. In both cases, we can see that the number of times we scan $v_i$ is upper bounded by the number of its outgoing arcs. Therefore, the running time of our algorithm is linear in the number of arcs in the input strip, completing the proof of Lemma~\ref{lem:linear-time-decomposition}. \qed

\section{The Outside Case: Determining useless arcs when $s$ is not in the component}
\label{sec:st-outside}

In this section, we describe an algorithm that determines all the useful arcs of a strongly connected component $C$ when the source $s$ lies in the outer face of $C$ (i.e., $s$ is not enclosed by $C$). We first outline our approach. Consider a strongly connected component $C$. If the boundary of $C$ were not a cycle, then the strong connectivity would imply that we can extend some path on the boundary to form a clockwise cycle, which contradicts our initial assumption that the embedding has no clockwise cycle. If all arcs in the boundary of $C$, denoted by $Q$, are useless, then no arcs of $C$ are useful. Otherwise, we claim that some arcs of $Q$ are useful and some are useless. Moreover, these arcs partition $Q$ into two paths $Q_1$ and $Q_2$ such that $Q_1$ is a useful path and all arcs of $Q_2$ are useless.

\begin{lemma} \label{lem:one-useful} Consider the boundary of a strongly connected component $C$.
Let $Q$ be the cycle that forms the boundary of $C$. Then either all arcs of $C$ are useless or there are non-trivial paths $Q_1$ and $Q_2$ such that
\begin{itemize}
\item $E(Q)=E(Q_1)\cup E(Q_2)$.
\item $Q_1$ is a useful path.
\item All arcs of $Q_2$ are useless.
\end{itemize}
Moreover, there is a linear-time algorithm that computes $Q_1$ and $Q_2$. \end{lemma}

\begin{proof} First, we claim that all arcs of $C$ are useless if and only if $Q$ has no entrance or has no exit. If $Q$ has no entrance or has no exit, then it is clear that all arcs of $C$ are useless because there can be no simple $s,t$-path using an arc of $C$. Suppose $Q$ has at least one entrance $u$ and at least one exit $v$. We will show that $Q$ has both useful and useless arcs, which can be partitioned into two paths as in the statement of the lemma. Observe that no vertex of $Q$ can be both an entrance and an exit because of the degree-three assumption. (Any vertex that is both an entrance and an exit must have degree at least four: two from $Q$ and two from the entrance and exit arcs.) For any pair of an entrance $u$ and an exit $v$, let $Q_{uv}$ be the $u,v$-subpath of $Q$. We claim that $Q_{uv}$ is a useful path. To see this, we apply Lemma~\ref{lem:disjoint-entrance-exit-paths}. Thus, we have an $s,u$-path $P_u$ and a $v,t$-path $P_v$ that are vertex-disjoint. Moreover, $P_u$ and $P_v$ contain no vertices of $Q$ except $u$ and $v$, respectively. Thus, $R=P_u\cdot Q_{uv}\cdot P_v$ forms a simple $s,t$-path, meaning that $Q_{uv}$ is a useful path.

Notice that $Q$ has no useless arc only if $Q$ has two distinct entrances $u_1,u_2$ and two distinct exits $v_1,v_2$ that appear in $Q$ in the interleaving order $(u_1,v_1,u_2,v_2)$. Now consider a shortest $s,u_1$-path $P_{u_1}$, a shortest $s,u_2$-path $P_{u_2}$, and the $u_1,u_2$-subpath $Q_{u_1,u_2}$ of $Q$. These three paths together enclose any path that leaves the component $C$ through the exit $v_1$.
Thus, any $v_1,t$-path must intersect either $P_{u_1}$ or $P_{u_2}$, contradicting Lemma~\ref{lem:disjoint-entrance-exit-paths}. Consequently, the entrances and exits appear in consecutive (non-interleaving) order on $Q$. Let us take the first entrance $u^*$ and the last exit $v^*$. We know from the previous arguments that any pair of an entrance and an exit yields a useful subpath of $Q$. More precisely, the $u^*,v^*$-subpath $Q_1$ of $Q$ must be useful and must contain all the entrances and exits because of the choices of $u^*$ and $v^*$. Consider the other subpath -- the $v^*,u^*$-subpath $Q_2$ of $Q$. The only entrance and exit on $Q_2$ are $u^*$ and $v^*$, respectively. Thus, any $s,t$-path $P$ that contains an arc $e$ of $Q_2$ must intersect the path $Q_1$. But then the path $P$ together with the boundary $Q$ encloses a region of $C$ that has no exit. Thus, $P$ cannot be a simple path. It follows that no arcs of $Q_2$ are useful.

To distinguish the above two cases, it suffices to compute all the entrances and exits of $Q$ using a standard graph searching algorithm. The running time of the algorithm is linear, and we need to apply it only once for all the strongly connected components. This completes the proof. \end{proof}

\subsection{Algorithm for The Outside Case}
\label{sec:algo-outside-case}

Now we present our algorithm for the outside case. We assume that some arcs in the boundary of $C$ are useful and some are useless. Our algorithm decides whether each arc in $C$ is useful or useless by decomposing $C$ into a collection of minimal strips using the algorithm in Section~\ref{sec:strip-decomposition}. Then any arc enclosed inside a minimal strip is useless (only boundary arcs are useful). Thus, we can determine the usefulness of the arcs in each strip. Observe, however, that the component $C$ is not a strip because it is enclosed by a cycle instead of two paths that go in the same direction. We transform $C$ into a strip as follows.
First, we apply the algorithm of Lemma~\ref{lem:one-useful}. Then we know the boundary of $C$, which is a cycle consisting of two paths $Q_1$ and $Q_2$, where $Q_1$ is a useful path and all arcs of $Q_2$ are useless. Let $F^*=Q_1$, and call it the {\em lowest floor}. Note that $F^*$ starts at an entrance $s_C$ and ends at an exit $t_C$. We add to $C$ a dummy path $U^* = (s_C,u^*,t_C)$, where $u^*$ is a dummy vertex, and call $U^*$ the {\em top-most ceiling}, which is a dummy ceiling. This transforms $C$ into a strip, denoted by $C_{U^*,F^*}$. (See Figure~\ref{fig:s-outside-init-strip}.)

\begin{figure} \caption{The initial strip before the decomposition.} \label{fig:s-outside-init-strip} \end{figure}

Now we are able to decompose $C_{{U^*},{F^*}}$ into a collection $\calS$ of minimal strips by calling the strip-slicing decomposition on $C_{{U^*},{F^*}}$. Since $F^*$ is a useful path, we deduce from Corollary~\ref{cor:ext-forward-path-is-useful} that for every strip $C_{U,F}\in \calS$, both $U$ and $F$ are useful paths. Next we show that no inner arcs of a strip $C_{U,F}\in\calS$ are useful, which then implies that only arcs on the boundaries of the strips computed by the strip-slicing decomposition are useful. Therefore, we can determine all the useful arcs in $C$ in linear time.

\begin{lemma} \label{lem:outside-no-useful-backward} Consider a strip $C_{U,F}\in\calS$ computed by the strip-slicing decomposition algorithm. Then no inner arc of $C_{U,F}$ is useful. More precisely, all arcs $e\in E(C_{U,F})\setminus E(U\cup F)$ are useless. \end{lemma}

\begin{proof} Suppose for a contradiction that there is an inner arc $e$ of $C_{U,F}$ that is useful. Then there is an $s,t$-path $P$ containing $e$. We may also assume that $P$ contains a link $Q$. That is, $Q$ starts and ends at some vertices $u$ and $v$ in $V(U\cup F)$, respectively, and $Q$ contains no arcs of $E(U\cup F)$. By Lemma~\ref{lem:linear-time-decomposition}, $Q$ must be a backward path.
Hence, it suffices to show that any backward path inside the strip $C_{U,F}$ is useless. By the definition of the backward path $Q$, $v$ must appear before $u$ on $F$, and by Lemma~\ref{lem:properties-strips}, $F$ is a useful path, meaning that there is an $s,t$-path $R$ containing $F$. Let $s'$ and $t'$ be the start and end vertices of the strip $C_{U,F}$ (i.e., $s'$ and $t'$ are the common start and end vertices of the paths $U$ and $F$, respectively). Then we have four cases. \begin{itemize} \item {\bf Case 1: $v=s'$ and $u=t'$.} In this case, the paths $U$ and $Q$ form a clockwise cycle, a contradiction. \item {\bf Case 2: $v=s'$ and $u\neq t'$.} In this case, $v$ has degree three in $Q\cup U\cup F$. Thus, the path $R\supseteq F$ must start at the same vertex as $F$; otherwise, $v$ would have degree at least four. This means that $s=s'$, but then $P$ could not be a simple $s,t$-path because $s$ would appear in $P$ at least twice, a contradiction. \item {\bf Case 3: $v\neq s'$ and $u=t'$.} This case is similar to the former one. The vertex $u$ has degree three in $Q\cup U\cup F$. Thus, the path $R\supseteq F$ must end at the same vertex as $F$; otherwise, $u$ would have degree at least four. This means that $t=t'$, but then $P$ could not be a simple $s,t$-path because $t$ would appear in $P$ at least twice, a contradiction. \item {\bf Case 4: $u\neq t'$ and $v\neq s'$.} Observe that $P$ must enter and leave $C_{U,F}$ on the right of $F$ (and thus of $R$) at some vertices $a$ and $b$, respectively. We may assume without loss of generality that the $a,u$-subpath and the $v,b$-subpath of $P$ are contained in $R$. Moreover, it can be seen that $u$ and $v$ appear in different orders on the path $R$ (which contains $F$) and the path $P$ (which contains $Q$). Thus, $P$ must intersect either the $s,b$-subpath of $R$, say $R_{s,b}$, or the $a,t$-subpath of $R$, say $R_{a,t}$. (Note that $R$ contains no inner vertices of $Q$ by the definition of a backward path.)
Suppose $P$ intersects $R_{a,t}$ at some vertex $x$, and assume that $x$ is the last vertex of $P$ in the intersection of $P$ and $R_{a,t}$. Since $P$ enters $R$ at the vertex $a$ from the right, the $x,a$-subpath of $P$ must lie on the right of $R$. But, then the union of the $x,a$-subpath of $P$ and the $a,x$-subpath of $R$ forms a clockwise cycle, a contradiction. (See Figure~\ref{fig:outside-no-usefull-backward-4-1} for an illustration.) Now suppose $P$ intersects $R_{s,b}$ at some vertex $y$, and assume that $y$ is the first vertex of $P$ in the intersection of $P$ and $R_{s,b}$. Since $P$ leaves $R$ at the vertex $b$ from the right, the $b,y$-subpath of $P$ must lie on the right of $R$. But, then the union of the $b,y$-subpath of $P$ and the $y,b$-subpath of $R$ forms a clockwise cycle, again a contradiction. \end{itemize} Therefore, all backward paths inside the strip $C_{U,F}$ are useless, and so are the inner arcs of $C_{U,F}$. \end{proof} \begin{figure} \caption{An illustration of Case~4 in the proof of Lemma~\ref{lem:outside-no-useful-backward}} \label{fig:outside-no-usefull-backward-4-1} \end{figure} \section{The Inside Case: Determining useful arcs of a strongly connected component when $s$ is inside of the component} \label{sec:s-inside} In this section, we describe an algorithm for the case that the source vertex $s$ is in the strongly connected component $C$. In this case, the source vertex $s$ is enclosed in some face $f_s$ of the component $C$. It is possible that $s$ is also nested inside other strongly connected components (see Figure~\ref{fig:component-example}). We will perform a simple reduction so that (1) the source $s$ is enclosed inside the component $C$, (2) $s$ has only one outgoing arc, and (3) every vertex except $s$ (and $t$) has degree three: First, we contract the maximal component containing $s$ inside $f_s$ into $s$. Now the source $s$ has degree more than one in the new graph.
We thus add an arc $s's$ and declare $s'$ as the new source vertex. Then we replace $s$ by a binary tree on $d$ leaves, where $d$ is the degree of $s$, to maintain the degree-three requirement. See Figure~\ref{fig:inside-reduction}. \begin{figure} \caption{A reduction for simplifying the source structure for the inside case.} \label{fig:inside-reduction} \end{figure} \subsection{Overview} We decompose the problem of determining the usefulness of the arcs of the component $C$ into many subproblems and will deal with each subproblem (almost) independently. The basis of our decomposition is slicing the component with the {\em lowest-floor path} $F^*$ and the {\em top-most ceiling} $U^*$. These two paths divide the component $C$ into two parts: (1) the primary strip and (2) the open strip. Figure~\ref{fig:floor-of-inside-case} illustrates the paths $F^*$ and $U^*$, and Figure~\ref{fig:inside-strip-types} shows a schematic view of the primary and the open strips. All types of strips and components will be defined later. The lowest-floor is constructed in such a way that it includes all the exits of the component $C$. We thus assume that the sink $t$ is a vertex on the boundary of $C$, and there is no other exit. \begin{figure} \caption{The lowest-floor path $F^*$ and the top-most ceiling $U^*$ in the inside case.} \label{fig:floor-of-inside-case} \end{figure} \begin{figure} \caption{The types of major strips in the inside case.} \label{fig:inside-strip-types} \end{figure} \subsubsection{The lowest-floor path $F^*$.} \label{sec:s-inside:lowest-floor} We mainly apply the algorithm from Section~\ref{sec:st-outside} to the primary strip defined by the two paths $F^*$ and $U^*$, which may not be arc-disjoint. The path $F^*$ is the {\em lowest-floor path}, defined similarly to the outside case: We compute $F^*$ by finding the right-most path $P$ from $s$ to $t$ (hence, $P$ is an $s,t$-path). We recall that the sink $t$ is placed on the unbounded face of the planar embedding.
Thus, $t$ must lie outside of the region enclosed by $C$. This means that the right-most path $P$ has to go through some exit $t'$ on the boundary of $C$. Although we know that the right-most path goes from $t'$ to $t$ directly, we detour the path $P$ along the boundary to include all the exits, which is possible since the source $s$ is enclosed inside the component. See Figure~\ref{fig:finding-lowest-floor-path} for an illustration. \begin{figure} \caption{How to find $F^*$: the path $P$ is shown in solid red. The extension on the boundary is shown as a dashed line.} \label{fig:finding-lowest-floor-path} \end{figure} \subsubsection{Flipped paths.} Unlike the outside case, the path $F^*$ in the inside case does not divide the component $C$ into two pieces because the source vertex $s$ is inside some inner face of $C$. This leaves us more possibilities of paths that we have not encountered in the outside case, called ``flipped paths''. These are paths that go from the left side of the floor $F^*$ to the right side (or vice versa). (Please recall the terminology of left and right directions in Section~\ref{sec:prelim:left-right}, and recall the types of paths in Section~\ref{sec:prelim:forward-backward}.) Fortunately, reverse-flipped and hanging paths do not exist because of our choice of the lowest-floor path $F^*$. \begin{lemma} \label{lem:no-reserse-flip-nor-hanging} Consider the construction of the lowest-floor path $F^*$ as in Section~\ref{sec:s-inside:lowest-floor}. Then there are no reverse-flipped or hanging paths w.r.t. $F^*$. \end{lemma} \begin{proof} Consider a connected component $C$ that encloses the source $s$. We recall that $F^*$ is constructed by finding the right-most $s,t$-path and then detouring along the boundary of $C$. Thus, $F^*$ is a union of two subpaths: (1) the path $F_1$ that is a subpath of the right-most $s,t$-path and (2) the path $F_2$ that is a subpath of the boundary of $C$. First, we rule out the existence of a hanging path.
Assume to the contrary that there is a path $P\subseteq C$ that leaves $F^*$ to the right and enters $F^*$ again from the right. Then clearly $P$ must start and end on the first subpath $F_1$ of $F^*$ (otherwise, $P$ would have left the boundary). But, this would contradict the fact that $F_1$ is a subpath of the right-most $s,t$-path. Next, we rule out the existence of a reverse-flipped path. Assume to the contrary that there is a path $P$ that leaves $F^*$ to the right and enters $F^*$ again from the left. We assume the minimality of such a path (thus, $P$ contains no internal vertices of $F^*$). Then we know that $P$ must start from $F_1$ (otherwise, $P$ would have left the boundary). If $P$ is a backward path, then the path $P$ together with a subpath of $F^*$ forms a clockwise cycle, a contradiction (we would have gotten rid of it in the preprocessing step). If $P$ is a forward path, then it would contradict the fact that $F_1$ is a subpath of the right-most path since we could traverse along the path $P$ (which is to the right of $F_1$) to get an $s,t$-path. \end{proof} \subsubsection{Primary Strip.} The {\em primary strip} is defined by two paths, which may not be arc-disjoint, namely $F^*$ and $U^*$. The path $F^*$ is the lowest-floor path. The path $U^*$ is the {\em top-most ceiling}, which is an $s,t$-path such that the strip $C_{U^*,F^*}$ encloses all (non-flipped) forward paths (w.r.t. $F^*$). The path $U^*$ is essentially the left-most path from $s$ to $t$. Section~\ref{sec:s-inside:init} describes how we find $U^*$ given $F^*$. \subsubsection{Open Strip.} The {\em open strip} $\hat{C}$ is defined to be everything not in the inner parts of the primary strip. See Figure~\ref{fig:inside-path-types} for an illustration. \begin{figure} \caption{The types of paths in the open strip.} \label{fig:inside-path-types} \end{figure} To deal with arcs in $\hat{C}$, we first characterize the first set of useless arcs: ceiling arcs.
An arc $e=(u,v)\in\hat{C}$ is a {\em ceiling arc} if $u$ is reachable only via paths leaving $F^*$ to the left and $v$ can reach $t$ only through paths entering $F^*$ from the left as well. These arcs form ``ceiling components''. (Note that paths that form ceiling components are all backward paths because all the forward paths w.r.t. $F^*$ have been enclosed in $C_{U^*,F^*}$.) We shall prove that these paths are useless in Lemma~\ref{lem:backward-comp-useless}. Therefore, as a preprocessing step, we delete all the ceiling arcs from $\hat{C}$. To deal with the remaining arcs in $\hat{C}$, we first find the strongly connected components $H_1,H_2,\ldots$ in $\hat{C}$. The arcs outside the strongly connected components then form a directed acyclic graph $\hat{D}$ that essentially represents the structures of all flipped paths. We can process each component $H_i$ using the outside-case algorithm only once we know which adjacent arcs in $\hat{D}$ are valid entrances and exits. For arcs in $\hat{D}$ that belong to some flipped path, we can use a dynamic programming algorithm to compute all the reachability information needed to determine their usefulness. Our algorithm can be described shortly as follows. \begin{itemize} \item {\bf Initialization.} We compute the lowest-floor path $F^*$ and the top-most ceiling $U^*$, thus forming the primary strip $C_{U^*,F^*}$ and the open strip $\hat{C}$. \item {\bf The open strip (1): structures of flipped paths.} We find all strongly connected components in $\hat{C}$ and collapse them to produce the directed acyclic graph $\hat{D}$. We then compute the reachability information of arcs in $\hat{D}$ and use it to determine the usefulness of arcs in $\hat{D}$. \item {\bf The open strip (2): process each strongly connected component.} Then we apply the outside-case algorithm to each strongly connected component in $\hat{C}$. \item {\bf The primary strip.} We determine the usefulness of arcs in the primary strip.
\end{itemize} In the following subsections, we describe how to deal with each part of $C$. \subsection{Initialization: Finding the Primary and Open Strips} \label{sec:s-inside:init} In this section, we discuss the initialization step of our algorithm, which computes the primary and the open strips. In this step, we also compute the reachability information of flipped paths, which will be used in the later parts of our algorithm. As discussed in the previous section, we compute the lowest-floor path $F^*$ by finding the right-most $s,t$-path, which is unique and well-defined because $s$ has a single arc leaving it. The path $F^*$ is contained in $C$ because we assume that $t$ is an exit vertex on the boundary of $C$. The top-most ceiling $U^*$ can be computed by simply computing the left-most non-flipped path w.r.t. $F^*$ from $s$ to $t$. This can be done in linear time by first (temporarily) removing every arc on the right side of $F^*$ and then computing the left-most path from $s$ to $t$. Consequently, we have the primary strip $C_{U^*,F^*}$; we claim that it encloses all forward paths (w.r.t. $F^*$). \begin{lemma} \label{lem:main-strip-encloses-all-forward} Every forward path w.r.t. $F^*$ is enclosed in $C_{U^*,F^*}$. \end{lemma} \begin{proof} Suppose to the contrary that there exists a forward path $P$ not enclosed by $C_{U^*,F^*}$. Then $P$ must intersect $U^*$ or $F^*$. In the former case, $P$ must have a subpath $Q$ that leaves and enters $U^*$ on the left (this is because $U^*$ is on the left of $F^*$). We choose a subpath $Q$ in which the start vertex of $Q$, say $u$, appears before its end vertex, say $v$, on $U^*$. Such a path $Q$ must exist because, otherwise, $P$ would have a self-intersection inside $C_{U^*,F^*}$. We may further assume that $Q$ is a minimal such path, which means that all the arcs of $Q$ lie entirely on the left of $U^*$. But, then $U^*$ would not be the left-most non-flipped path (w.r.t.
$F^*$) because the left-first-search algorithm would have followed $Q$ instead of the $u,v$-subpath of $U^*$, a contradiction. The case that $P$ intersects $F^*$ is similar, and such a path would contradict the fact that $F^*$ is the right-most path. Therefore, all the forward paths (w.r.t. $F^*$) must be enclosed in $C_{U^*,F^*}$. \end{proof} Next consider the remaining parts of the component, which form the open strip $\hat{C}=C- C_{U^*,F^*}$. We have as a corollary of Lemma~\ref{lem:main-strip-encloses-all-forward} that there is no simple $s,t$-path in the open strip that leaves $F^*$ from the right. \begin{corollary} \label{cor:no-right-path} There is no simple $s,t$-path that lies on the right of $F^*$. \end{corollary} \begin{proof} Suppose there is an $s,t$-path $P$ in the open strip that leaves $F^*$ from the right. Then we have three cases. First, if $P$ also enters $F^*$ from the right, then $P$ cannot be a forward path and cannot have any forward subpath by Lemma~\ref{lem:main-strip-encloses-all-forward}. Thus, $P$ must have a self-intersection, contradicting the fact that $P$ is a simple path. Second, if $P$ enters $F^*$ from the left, then we can find a $u,v$-subpath $Q$ of $P$ such that $u$ appears before $v$ in $F^*$ and $Q$ is arc-disjoint from $F^*$. But, then the right-first-search algorithm would have followed $Q$ instead of the $u,v$-subpath of $F^*$, a contradiction. Otherwise, if $P$ never enters $F^*$, then it must go directly to $t$. But, then the right-first-search algorithm would have followed the subpath of $P$ that leaves $F^*$, a contradiction. \end{proof} \subsection{Working on the Open Strip (1): Dealing with Arcs in Flipped Paths} \label{sec:arcs-in-flipped-paths} To determine the usefulness of arcs on the DAG $\hat{D}$, which is formed by contracting the strongly connected components $H_1,H_2,\ldots$ in the open strip, it suffices to check whether an arc is contained in some useful flipped path w.r.t. $F^*$.
We first prove the characterization of useful flipped paths. Section~\ref{sec:s-inside:reachability} describes how we can check if an arc belongs to any useful path by using reachability information. Our first observation is that every forward flipped path is useful. \begin{lemma} \label{lem:flip-forward-is-useful} Every forward flipped path w.r.t. $F^*$ is useful. \end{lemma} \begin{proof} Consider any forward flipped path $P$. By the definition of the forward flipped path, $P$ shares no arcs with $F^*$. Moreover, $P$ starts at some vertex $v\in F^*$ and ends at some vertex $w\in F^*$ such that $v$ appears before $w$ in $F^*$. Note that $F^*$ is a path that goes from $s$ to the vertex $t'$ in the boundary of the component $C$. We extend $F^*$ to a simple $s,t$-path $R$ by adding a $t',t$-path that is not in the component $C$. We then replace the $v,w$-subpath of $R$ by $P$, thus getting a new path $R'$, which is a simple path because $R$ shares no arcs with $P$ and contains no vertices of $V(P)-\{v,w\}$. Thus, $R'$ is a simple $s,t$-path, implying that $P$ is useful. \end{proof} If an arc is not contained in any forward flipped path, it must be in some backward flipped path. In most cases, a backward flipped path is useless, except when there exists an exit that provides an alternative route to $t$. Consider the primary strip, which is enclosed by the lowest-floor $F^*$ and the top-most ceiling $U^*$. There are some arcs that $U^*$ and $F^*$ have in common. If we remove all of these arcs, then the remaining graph consists of weakly connected components, which we call humps. Formally, a {\em hump} is a strip obtained from the primary strip by removing $F^*\cap U^*$. We can also order the humps by the ordering of their first vertices on the floor $F^*$. The structure of humps is important in determining the usefulness of a flipped path. See Figure~\ref{fig:useful-flipped-backward-path}.
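To make the hump structure concrete, the following Python sketch computes the humps as weakly connected components of the primary strip after the arcs shared by $F^*$ and $U^*$ are removed. It is an illustration only, not the paper's implementation: the names \texttt{strip\_arcs} (the arcs of the primary strip) and \texttt{shared\_arcs} (the arcs of $F^*\cap U^*$) are hypothetical input representations.

```python
# Sketch: a hump is a weakly connected component of the primary strip
# after the arcs in F* ∩ U* are removed.  We compute the components
# with a simple union-find over the remaining arcs.

def find(parent, x):
    # Find the representative of x with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def humps(strip_arcs, shared_arcs):
    """Return a map vertex -> hump representative."""
    shared = set(shared_arcs)
    parent = {}
    for (u, v) in strip_arcs:
        parent.setdefault(u, u)
        parent.setdefault(v, v)
    for (u, v) in strip_arcs:
        if (u, v) in shared:
            continue  # removing F* ∩ U* splits the strip into humps
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[ru] = rv
    return {x: find(parent, x) for x in parent}
```

Once the hump of every floor vertex is known, ordering the humps amounts to a single scan along $F^*$, which also records for each floor vertex whether it is the first vertex of its hump and whether the hump contains an exit.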
\begin{figure} \caption{An example of flipped backward paths $P_1$ (useless) and $P_2$ (useful). $F^*$ is shown in red and $U^*$ is shown in blue. There are 5 humps (shown in gray).} \label{fig:useful-flipped-backward-path} \end{figure} The lemma below gives the conditions under which arcs in a backward flipped path can be useful. \begin{lemma} \label{lem:useful-backward-flip-paths} Let $P$ be any backward flipped path such that $P$ starts at a vertex $u\in F^*$ and ends at a vertex $v\in F^*$. Then $\hat{P}=P\cap\hat{C}$ (the subpath of $P$ in the open strip) is useful iff $P$ ends at some hump $H$ (thus, $H$ contains $v$) such that either \begin{enumerate} \item both $u$ and $v$ are in the same hump $H$, and $v$ is not the first vertex in $H$; or \item the vertex $u$ is not in the hump $H$, and there is an exit after $v$ in the hump $H$. \end{enumerate} \end{lemma} \begin{proof} First, we show that if the end vertex $v$ of $P$ is not in any hump, then $\hat{P}=P\cap\hat{C}$ is useless. We assume without loss of generality that the start vertex $u$ of $P$ is also not contained in any hump; otherwise, we contract all vertices in the hump (that contains $u$) to the vertex $u$. Thus, $P=\hat{P}$. It suffices to show that there is no $s,u$-path $J$ that is vertex-disjoint from $P$, which will imply that $P$ cannot be a useful path. To see this, assume for a contradiction that there is an $s,u$-path $J$ that is vertex-disjoint from $P$. Then $J$ has to leave $F^*$ at some vertex $x$ and then enter $F^*$ again at some vertex $y$ (it is possible that $y=u$) in such a way that $x,y,u,v$ appear in the order $(x,v,y,u)$ on $F^*$. Let $J'$ be the $x,y$-subpath of $J$. If $J'$ is contained in the primary strip, then $J'$ would induce a hump containing $v$, a contradiction. So, we assume that $J'$ is not contained in the primary strip. If $J'$ leaves $F^*$ from the right, then $J'$ cannot enter $F^*$; otherwise, it would contradict the fact that $F^*$ is the right-most path.
Hence, we are left with the case that $J'$ leaves $F^*$ from the left and then enters again from the right. Since $J$ and $P$ are vertex-disjoint (and so are $J'$ and $P$), $J'$ cannot enter $F^*$ at any vertex that appears after $v$ (including $u$ and $y$) because, otherwise, $J'$ would have to cross the path $P$. Thus, we again have a contradiction. Consequently, since there is no path from $s$ that can reach $u$ or the hump containing $u$ without intersecting $\hat{P}$, the path $\hat{P}$ cannot be a useful path. Next consider the case that $v$ is contained in some hump $H$. Let $w$ be a vertex in $P\cap U^*$, i.e., $w$ is a vertex in the ceiling intersecting $P$. Also, let $\hat{P}$ be $P\cap\hat{C}$. For the backward direction, we construct a useful path $Q$ containing $\hat{P}$ by taking the union of the prefix of the top-most ceiling $U^*$ up to $w$, the path $\hat{P}$, and the subpath $P_F$ of the floor $F^*$ after $v$. In Case (1), $P_F$ is the suffix of $F^*$ after $v$, and in Case (2), $P_F$ starts from $v$ and ends at the exit. It can be seen that $Q$ is a simple path from $s$ to an exit, meaning that $Q$ is a useful path and so is $\hat{P}$. Now consider the forward direction. We proceed by contraposition. There are two subcases we have to consider: (i) $u$ and $v$ are in the same hump $H$, but $v$ is the first vertex in the hump, and (ii) $u$ is not in the hump $H$ and there is no exit after $v$ in $H$. In Subcase (i), any useful path $Q$ containing $\hat{P}$ whose prefix up until $\hat{P}$ is entirely in the primary strip has to go through $v$; therefore, it cannot be simple. Consider the case when there is a useful path $Q$ containing $\hat{P}$ whose prefix lies outside the primary strip. The prefix $Q'$ of this path must reach $w$ without touching $v$ (because $v\in \hat{P}$).
Observe that the union of $H$ and $\hat{P}$ contains a cycle enclosing the source $s$, and by the construction of $U^*$, there is no path from $s$ entering $H\cap U^*$ (otherwise, it would have been included in $U^*$). Consequently, because of planarity, the prefix $Q'$ must cross $\hat{P}$, meaning that $Q$ cannot be simple. Finally, we deal with Subcase (ii). Let $x$ be the last floor vertex of $H$. First note that, by the choice of $F^*$, any path from $s$ to $w$ not intersecting $\hat{P}$ cannot cross $F^*$ to the right; therefore, it has to use $x$. For the same reason, any path from $v\in F^*$ to $t$ cannot leave $F^*$ to the right; thus, it must go through $x$ as well. Consequently, a path from $s$ to $t$ containing $\hat{P}$ in this case cannot be simple. \end{proof} \subsubsection{Reachability information} \label{sec:s-inside:reachability} After we compute the primary strip and the open strip, and collapse the strongly connected components in $\hat{C}$, we run a linear-time preprocessing step to compute reachability information. For each vertex $u$ in $\hat{D}$, we would like to compute $\mathsf{first}(u)\in F^*$, defined to be the first vertex $w$ in $F^*$ such that there is a path from $s$ to $u$ that uses $w$ but no other vertex of $F^*$ after $w$. We also want to compute $\mathsf{last}(u)\in F^*$, defined to be the last vertex $w'$ in $F^*$ such that there is a path from $u$ to $t$ that uses $w'$ as its first vertex in $F^*$. To compute $\mathsf{first}(u)$, we find the left-first-search tree $T$ rooted at $s$. For $u\in\hat{C}$, we set $\mathsf{first}(u)$ to be the closest ancestor of $u$ in $T$ that lies on the floor $F^*$. We compute $\mathsf{last}(u)$ similarly by finding the left-first-search tree in the reverse graph rooted at $t$ and setting $\mathsf{last}(u)$ to be the closest ancestor of $u$ in that tree that lies on the floor $F^*$.
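The nearest-floor-ancestor step above can be sketched as a memoized walk up the search tree; since every vertex is resolved once, the total work stays linear in the tree size. The Python sketch below is an illustration under the assumption that the left-first-search tree is given as a child-to-parent map \texttt{parent} (a hypothetical representation); $\mathsf{last}(\cdot)$ is obtained the same way from the tree of the reverse graph rooted at $t$.

```python
# Sketch: first(u) = the closest ancestor of u in the search tree T
# that lies on the floor F*.  'parent' maps each non-root vertex to
# its tree parent; 'floor' is the set of vertices on F*.

def nearest_floor_ancestor(parent, floor, root):
    memo = {root: root if root in floor else None}

    def first(u):
        if u not in memo:
            # A floor vertex answers for itself; anything else
            # inherits the answer of its parent (memoized, so each
            # vertex is computed only once).
            memo[u] = u if u in floor else first(parent[u])
        return memo[u]

    return first
```

For example, with `parent = {'a': 's', 'b': 'a', 'c': 'b'}` and `floor = {'s', 'a'}`, the query for `'c'` walks up to `'a'`, the first ancestor on the floor.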
\subsubsection{Checking arcs in $\hat{D}$} Now we describe our linear-time algorithm for determining the usefulness of arcs in $\hat{D}$. Consider an arc $e=(u,v)\in\hat{D}$. If $\phi(\mathsf{first}(u))>\phi(\mathsf{last}(v))$, then there exists a flipped forward path containing $e$. Thus, by Lemma~\ref{lem:flip-forward-is-useful}, $e$ is useful. If $\phi(\mathsf{first}(u))\leq\phi(\mathsf{last}(v))$, then we need to check the conditions of Lemma~\ref{lem:useful-backward-flip-paths}. To do so, we maintain additional information on every vertex $v$ in $F^*$, namely the hump $H(v)$ that contains it, a flag indicating whether $v$ is the first vertex in the hump, and a flag indicating whether the hump $H(v)$ contains an exit. Since these data can be preprocessed in linear time, we can perform the check for each arc $e$ in constant time. \subsection{Working on the Open Strip (2): Dealing with Arcs in Strongly Connected Components} \label{sec:arcs-in-scc} Consider a strongly connected component $H_i$ in the open strip. If no arcs going into $H_i$ are useful or no arcs leaving $H_i$ are useful, then clearly every arc in $H_i$ is useless. However, it is not obvious that simply applying the previous outside-case algorithm works, because the entering and leaving arcs determine the usefulness of a path. See Figure~\ref{fig:scc-in-open-strip-dependency}, for example. \begin{figure} \caption{The arc $e$ in the red path is useful when considering $C$ as an outside-case instance. However, using the blue path as an entering path (at the entrance $b_3$) and the green path as a leaving path (at the exit $b'_2$) is not enough to show that.
One can, instead, use the orange path (at the entrance $b_1$) as an entering path together with parts of the boundary to construct a forward flipped path that contains $e$.} \label{fig:scc-in-open-strip-dependency} \end{figure} The following lemma proves that we can take the strongly connected component $H_i$ and build an outside-case instance by attaching, for each useful arc of the DAG entering or leaving $H_i$, a corresponding entrance or exit. \begin{lemma} Every arc in an outside-case instance of $H_i$ is useful iff it is useful in the original graph. \end{lemma} \begin{proof} The backward direction is straightforward. Consider the forward direction: we would like to show that if an arc $e$ is useful in the instance, then there exists a useful path containing it. Suppose that $H_i$ has $n_1$ entrances and $n_2$ exits. From Lemma~\ref{lem:boundary-is-ccw} and the observation in the proof of Lemma~\ref{lem:one-useful} that entrance-exit pairs do not interleave, we know that $H_i$ is enclosed by a counter-clockwise cycle, and we can name all entrances as $b_1,b_2,\ldots,b_{n_1}$ and all exits as $b'_1,b'_2,\ldots,b'_{n_2}$ in such a way that they appear in counter-clockwise order as \[ b_1,b_2,\ldots,b_{n_1},b'_1,b'_2,\ldots,b'_{n_2}. \] Consider an arc $e$ that is useful in this instance. Since $e$ is useful, there is a simple path $P$ from some entrance $b_i$ to some exit $b'_j$ containing $e$. Since the boundary arcs form a counter-clockwise cycle, we can extend $P$ to start from $b_1$ by adding boundary arcs from $b_1$ to $b_i$. We can also extend $P$ to reach $b'_{n_2}$ by adding arcs from $b'_j$ to $b'_{n_2}$. Since an arc entering $H_i$ at $b_1$ is useful and an arc leaving $H_i$ from $b'_{n_2}$ is also useful, we can construct a useful path containing $e$ by joining a path from $s$ to $b_1$, the path $P$, and a path from $b'_{n_2}$ to $t$. Thus, $e$ is useful.
\end{proof} \subsection{Working on the Primary Strip} \label{sec:s-inside:primary} Given the primary strip $C_{U^*,F^*}$, we can use the algorithm from Section~\ref{sec:st-outside} to find all useful arcs inside $C_{U^*,F^*}$. The next lemma proves the correctness of this step. \begin{lemma} An arc $e$ in the primary strip is useful in $C$ iff it is useful in $C_{U^*,F^*}$. \end{lemma} \begin{proof} The backward direction is obvious. We focus on the forward direction. Assume for a contradiction that there exists a useful arc $e$ in $C$ that is not useful in $C_{U^*,F^*}$. Consider a useful path $P$ from $s$ to $t$ containing $e$. Since $e$ is inside $C_{U^*,F^*}$, $P$ must cross the lowest-floor $F^*$ or the top-most ceiling $U^*$. However, $P$ cannot cross $U^*$, as it would create a forward path outside the primary strip (contradicting Lemma~\ref{lem:main-strip-encloses-all-forward}). Now suppose that $P$ crosses $F^*$ at some vertex $w$ before $e$. If $P$ does not leave the primary strip after $w$, then clearly $e$ must be useful inside the primary strip. If $P$ leaves the primary strip at the ceiling after $e$, then the subpath of $P$ containing $e$ is again a forward path. We also note that $P$ cannot leave $C_{U^*,F^*}$ at the floor because it would cross itself. Since we reach a contradiction in every case, the lemma follows. \end{proof} \subsection{The Ceiling Components are Useless} \label{sec:ceiling-components} In this section, we show that every arc in the ceiling components is useless. The ceiling components are formed by the arcs $e=(u,v)$ in $\hat{C}$ such that $u$ is reachable only via paths leaving $F^*$ to the left and $v$ can reach $t$ only through paths entering $F^*$ from the left. \begin{lemma} Every ceiling arc $e$ is useless. \label{lem:backward-comp-useless} \end{lemma} \begin{proof} First, note that $e$ does not lie in a forward path; otherwise $e$ would be in the primary strip. Let $P$ be a useful path from $s$ to $t$ containing $e$.
Let $u'$ be the last vertex of $P\cap U^*$ before reaching $e$ and $v'$ be the first vertex of $P\cap U^*$ after leaving $e$. Because of the degree constraint, $P$ must use the only incoming arc of $u'$, say $e_u$, and the only outgoing arc of $v'$, say $e_v$. Let $P_1$ be the prefix of $P$ from $s$ to the head of $e_u$ and $P_2$ be the suffix of $P$ from the tail of $e_v$ to $t$. The only way $P_2$ can avoid crossing $P_1$ is to cross $U^*$; however, this creates a clockwise cycle, a contradiction. \end{proof} \section{Conclusions and Open Problems} In this paper, we presented an algorithm that simplifies a directed planar network into a plane graph in which every vertex except the source $s$ and the sink $t$ has degree three, the graph has no clockwise cycle, and every arc is contained in some simple $s,t$-path. Our algorithm can be applied as a preprocessing step for Weihe's algorithm \cite{Weihe97} and thus yields an $O(n\log n)$-time algorithm for computing a maximum $s,t$-flow on directed planar graphs. This gives an alternative approach that departs from the $O(n\log n)$-time algorithm of Borradaile and Klein \cite{BorradaileK09} and that of Erickson \cite{Erickson10}, which are essentially the same algorithm with different interpretations. While other works mainly deal with maximum flow, the main concern in this paper is simplifying the flow network. Hence, we believe that our algorithm will serve as a supplementary tool for further developments of network flow algorithms. Next let us briefly discuss open problems. A straightforward question is whether our approach can be generalized to a larger class of graphs on surfaces. Another problem that might be interesting for readers is to remove the prerequisites from our main algorithm. Specifically, prior to feeding a plane graph to the algorithm, we apply a sequence of reductions so that each vertex has degree three and the plane graph contains no clockwise cycles.
These are the prerequisites required by our main algorithm. Although the reduction does not change the value of the maximum flow, the usefulness of arcs in the modified network may differ from that in the original graph. It would be interesting to simplify the flow network without changing the usefulness of arcs in the graph. Lastly, we would like to note that if there exists a reduction that deals with clockwise cycles, then the degree requirement can be removed. Specifically, if there is a procedure that, given a plane graph $G$, constructs a new plane graph $G'$ with no clockwise cycles, together with another efficient procedure for identifying the usefulness of the original arcs in $G$ based on the results in $G'$, then we can apply the following lemma. \begin{lemma} \label{lem:no-cw-cycles} If a plane graph $G$ contains no clockwise cycles, then a new plane graph $G'$ constructed by replacing every vertex of degree greater than three in $G$ with a {\bf clockwise cycle} preserves the usefulness of every arc from $G$ (those arcs that are not contained in any clockwise cycle). \end{lemma} \begin{proof} We first describe the reduction formally. Let $G$ be a plane graph with no clockwise cycle, and let $s$ and $t$ be the source and sink vertices. For each vertex $v\in V(G)$, let $d_v$ denote the degree of $v$ in $G$. We construct $G'$ by first adding copies of $s$ and $t$, namely $s'$ and $t'$, respectively. Then we add to $G'$ a clockwise cycle $C_v$ on $d_v$ vertices, for each vertex $v\in V(G)-\{s,t\}$. Each vertex $v_e$ in the cycle $C_v$ corresponds to an arc $e$ incident to $v$ in $G$, and the vertices in $C_v$ are sorted in the same cyclic order as their corresponding arcs in $G$. Next we add an arc $u_{uv}v_{uv}$ to $G'$ for every arc $uv\in E(G)$. Observe that $G'$ is a planar graph obtained by replacing each vertex of $G$ by a cycle, and we can keep the same planar drawing as that of $G$. In particular, the resulting graph $G'$ is a plane graph.
It can be seen that every simple path in $G$ corresponds to some simple path in $G'$. Thus, every useful arc in $G$ (w.r.t. $s$ and $t$) is also useful in $G'$ (w.r.t. $s'$ and $t'$). Now let us consider a useless arc $uv$ in $G$ (w.r.t. $s$ and $t$). Assume to the contrary that there is a simple $s',t'$-path $P'$ in $G'$ containing the arc $u_{uv}v_{uv}$, which is the arc corresponding to $uv$. Since $G$ has no simple $s,t$-path containing $uv$, we know that $P'$ maps to a walk $P$ in $G$ that visits some vertex $w$ at least twice. Thus, $P'$ must visit the cycle $C_w$ at least twice as well. Let us say $P'$ enters $C_w$ at a vertex $w_{e_1}$, leaves $C_w$ from a vertex $w_{e_2}$, enters $C_{w}$ again at a vertex $w_{e_3}$ and then leaves $C_{w}$ from a vertex $w_{e_4}$. Then we can construct a cycle $Q$ by walking along the $w_{e_1},w_{e_3}$-subpath of $P'$ and then continuing along the $w_{e_3},w_{e_1}$-subpath of $C_w$. Since the original graph $G$ has no clockwise cycle, $Q$ must be counterclockwise. Moreover, since $C_w$ is a clockwise cycle, the vertex $w_{e_4}$ must lie between $w_{e_3}$ and $w_{e_1}$. At this point, it is not hard to see that the cycle $Q$ must enclose the $s',w_{e_4}$-subpath of $P'$. So, the only way that $P'$ can leave the cycle $Q$ and reach $t'$ is to cross the $w_{e_2},w_{e_3}$-subpath of $C_w$. But this is not possible unless $P'$ crosses itself. Hence, we arrive at a contradiction. Therefore, the graph $G'$ preserves the usefulness of every arc from $G$. \end{proof} \noindent{\bf Acknowledgement.} We thank Joseph Cheriyan for introducing us to the flow network simplification problem. We also thank Karthik~C.S. for pointing out some typos. Part of this work was done while Bundit Laekhanukit was visiting the Simons Institute for the Theory of Computing. It was partially supported by the DIMACS/Simons Collaboration on Bridging Continuous and Discrete Optimization through NSF grant \#CCF-1740425. \end{document}
\begin{document} \begin{abstract} The aim of this short note is to give a simple proof of the non-rationality of the double cover of the three-dimensional projective space branched over a sufficiently general quartic. \end{abstract} \title{A simple proof of the non-rationality \ of a general quartic double solid} \section{Introduction} Throughout this work the ground field is supposed to be the complex number field $\mathbb{C}$. A \textit{quartic double solid} is a projective variety represented as the double cover of $\mathbb{P}^3$ branched along a smooth quartic. It is known that quartic double solids are unirational but not rational \cite{Beauville1977}, \cite{Tikhomirov1986}, \cite{Voisin1988}, \cite{Clemens1991}. Moreover, a general quartic double solid is not \textit{stably rational} \cite{Voisin2015a}. There are also a lot of results related to rationality problems for \emph{singular} quartic double solids; see, e.g., \cite{Artin-Mumford-1972}, \cite{Clemens1983}, \cite{Varley1986}, \cite{Debarre1990}, \cite{Przhiyalkovskij-Cheltsov-Shramov-2015}, \cite{CheltsovPrzyjalkowskiShramov2015b}. The main result of this note is the following \begin{mtheorem}{\bf Theorem.}\label{theorem-main} Let $X$ be the quartic double solid branched over the surface \begin{equation*} \label{equation-1} x_{1}^3x_{2}+ x_{2}^3x_{3}+ x_{3}^3x_{4}+ x_{4}^3x_{1}=0. \end{equation*} Then the intermediate Jacobian $J(X)$ is not a sum of Jacobians of curves. As a consequence, $X$ is not rational. \end{mtheorem} \begin{mtheorem}{\bf Corollary.}\label{corollary-main} A general quartic double solid is not rational. \end{mtheorem} Our proof uses methods of A. Beauville \cite{Beauville2012}, \cite{Beauville2013} and Yu. Zarhin \cite{Zarhin2009}. The basic idea is to find a sufficiently symmetric variety in the family.
Then the action of the automorphism group provides a good tool to prove non-decomposability of the intermediate Jacobian into a sum of Jacobians of curves by using purely \textit{group-theoretic} techniques. Since the Jacobians and their sums form a closed subvariety of the moduli space of principally polarized abelian varieties, this shows that a general quartic double solid is not rational\footnote{Recently V. Przyjalkowski and C. Shramov used a similar method to prove non-rationality of some double quadrics \cite{PrzyjalkowskiShramov2016}.}. \section{Preliminaries} \begin{say}{\bf Notation.} We use standard group-theoretic notation: if $G$ is a group, then ${\operatorname{z}}(G)$ denotes its center, $[G,G]$ its derived subgroup, and ${\operatorname{Syl}}_p(G)$ a (fixed) Sylow $p$-subgroup. By $\zeta_m$ we denote a primitive $m$-th root of unity. The group generated by elements $\upalpha_1,\upalpha_2,\dots$ is denoted by $\langle \upalpha_1,\upalpha_2,\dots\rangle$. \end{say} \begin{say} Let $X$ be a three-dimensional smooth projective variety with $H^3(X,\mathscr{O}_X)=0$ and let $J(X)$ be its intermediate Jacobian regarded as a principally polarized abelian variety (see \cite{Clemens-Griffiths}). Then $J(X)$ can be written, uniquely up to permutation of the summands, as a direct sum \begin{equation} \label{equation-decomposition} J(X)=A_1 \oplus \dots \oplus A_n, \end{equation} where $A_1, \dots, A_n$ are indecomposable principally polarized abelian varieties (see \cite[Corollary 3.23]{Clemens-Griffiths}). This decomposition induces a decomposition of tangent spaces \begin{equation} \label{equation-decomposition-tangent} \operatorname{T}_{0,J(X)}= \operatorname{T}_{0,A_1} \oplus \dots \oplus \operatorname{T}_{0,A_n}. \end{equation} Now assume that $X$ is acted on by a finite group $G$. Then $G$ naturally acts on $J(X)$ and $\operatorname{T}_{0,J(X)}$ preserving the decompositions \eqref{equation-decomposition} and \eqref{equation-decomposition-tangent}.
\end{say} \begin{mtheorem}{\bf Lemma.}\label{lemma-action-80} Let $C$ be a curve of genus $g\ge 2$ and let $\Gamma\subset \operatorname{Aut}(C)$ be a subgroup of order $2^k\cdot 5$ whose Sylow $5$-subgroup ${\operatorname{Syl}}_5(\Gamma)$ is normal in $\Gamma$. Then the following assertions hold: \begin{enumerate} \item\label{lemma-action-80-k=2} if $k=2$, then $g\ge 3$, \item\label{lemma-action-80-k=4} if $k=4$, then $g\ge 6$, \item\label{lemma-action-80-k=5}\label{faithful-action-on-curve} if $k=5$, then $g\ge 11$. \end{enumerate} \end{mtheorem} \begin{proof} Let $C':=C/{\operatorname{Syl}}_5(\Gamma)$ and $g':=g(C')$. Let $P_1,\dots, P_n\in C'$ be all the branch points. By Hurwitz's formula \[ g+4=5g'+2n. \] The group $\Gamma':=\Gamma/ {\operatorname{Syl}}_5(\Gamma)$ of order $2^k$ acts faithfully on $C'$ and permutes $P_1,\dots, P_n$. \ref{lemma-action-80-k=2} Assume that $k=g=2$. Then $g'=0$, $C'\simeq \mathbb{P}^1$, and $n=3$. At least one of the points $P_1,P_2,P_3$, say $P_1$, must be fixed by $\Gamma'$. But then $\Gamma'$ must be cyclic (of order $4$) and it cannot leave the set $\{P_1,P_2,P_3\}\subset \mathbb{P}^1$ invariant. This proves \ref{lemma-action-80-k=2}. \ref{lemma-action-80-k=4} Assume that $k=4$ and $g\le 5$. Then $g'\le 1$. If $g'=0$, then $n\in \{3,\, 4\}$ and the group $\Gamma'$ of order $16$ acts on $C'\simeq \mathbb{P}^1$ so that the set $\{P_1,\dots, P_n\}$ is invariant. This is impossible. If $g'=1$, then, as above, $\Gamma'$ acts on an elliptic curve $C'$ leaving a non-empty set of $n\le 2$ points invariant. This is again impossible and the contradiction proves \ref{lemma-action-80-k=4}. \ref{lemma-action-80-k=5} Finally, let $k=5$ and $g\le 10$. Then $g'\le 2$ and $n\le 7$. If $g'\le 1$, then we get a contradiction as above. Let $g'=2$, let $C'\to \mathbb{P}^1$ be the canonical map, and let $\Gamma''\subset \operatorname{Aut}(\mathbb{P}^1)$ be the image of $\Gamma'$.
Since $\Gamma''$ is a $2$-subgroup in $\operatorname{Aut}(\mathbb{P}^1)$, it is either cyclic or dihedral. On the other hand, $\Gamma''$ permutes the branch points $Q_1,\dots,Q_6\in \mathbb{P}^1$ so that the stabilizer of each $Q_i$ is a subgroup in $\Gamma''$ of index $\le 4$. Clearly, this is impossible. \end{proof} \section{Symmetric quartic double solid} \begin{say} Let $X$ be the quartic double solid as in Theorem \ref{theorem-main}. Then $X$ is isomorphic to a hypersurface given by \begin{equation} \label{equation} y^2+x_{1}^3x_{2}+ x_{2}^3x_{3}+ x_{3}^3x_{4}+ x_{4}^3x_{1}=0, \end{equation} in the weighted projective space $\mathbb{P}:=\mathbb{P}(1^4,2)$, where $x_1,\dots,x_4,y$ are homogeneous coordinates with $\deg x_i=1$, $\deg y=2$. Let $\upalpha$ be the automorphism of $X$ induced by the diagonal matrix \[ {\operatorname{diag}} (1,\, \zeta_{40}^{38},\, \zeta_{40}^{4},\, \zeta_{40}^{26};\, \zeta_{40}^{-1}) \] and let $\upbeta$ be the cyclic permutation $(1,2,3,4)$ of coordinates $x_1,x_2,x_3,x_4$. Since \[ \upbeta\upalpha\upbeta^{-1}= {\operatorname{diag}} (\zeta_{40}^{26}, 1, \zeta_{40}^{38},\, \zeta_{40}^{4};\, \zeta_{40}^{-1})= {\operatorname{diag}} (1,\, \zeta_{40}^{14},\, \zeta_{40}^{12},\, \zeta_{40}^{18};\, \zeta_{40}^{27})= \upalpha^{13}, \] these automorphisms generate the group \[ G= \langle \upalpha,\, \upbeta \mid \upalpha^{40}=\upbeta^4=1,\hspace{3pt} \upbeta\upalpha\upbeta^{-1}=\upalpha^{13}\rangle \subset \operatorname{Aut}(X), \quad G\simeq \mathbb{Z}/40 \rtimes\mathbb{Z}/ 4. \] \end{say} \begin{mtheorem}{\bf Lemma.}\label{subgroup-index10} Let $G$ be as above. Then we have \begin{enumerate} \item \label{subgroup-index10-1} ${\operatorname{z}}(G)=\langle \upalpha^{10}\rangle$ and $[G,G]=\langle \upalpha^4\rangle$, \item \label{subgroup-index10-2} the Sylow $5$-subgroup ${\operatorname{Syl}}_5(G)$ is normal, \item \label{subgroup-index10-3} any subgroup in $G$ of index $10$ contains ${\operatorname{z}}(G)$. 
\end{enumerate} \end{mtheorem} \begin{proof} \ref{subgroup-index10-1} can be proved by direct computations and \ref{subgroup-index10-2} is obvious because ${\operatorname{Syl}}_5(G)\subset \langle\upalpha\rangle$. To prove \ref{subgroup-index10-3}, consider a subgroup $G'\subset G$ of index $10$. The intersection $G'\cap \langle\upalpha\rangle$ is of index $\le 4$ in $G'$. Hence $G'\cap \langle\upalpha\rangle$ is a $2$-group of order $\ge 4$ and so $\upalpha^{10}\in G'\cap \langle\upalpha\rangle$. \end{proof} \begin{mtheorem}{\bf Lemma (cf. \cite[0.1(b)]{Voisin1988}).}\label{Lemma-representation0} There exists a natural exact sequence \[ 0 \to H^2(X,\Omega_{X}^1 ) \to H^0(X,-K_X)^\vee {\longrightarrow}\mathbb{C} \to 0. \] \end{mtheorem} \begin{proof} Since $X$ is contained in the smooth locus of $\mathbb{P}$ and $\mathscr{O}_{\mathbb{P}}(X)=\mathscr{O}_{\mathbb{P}}(4)$, we have the following exact sequence \[ 0 \longrightarrow \mathscr{O}_X(-4)\longrightarrow \Omega_{\mathbb{P}}^1|_X \longrightarrow \Omega_{X}^1 \longrightarrow 0, \] and so \[ H^2(X, \Omega_{\mathbb{P}}^1|_X) \to H^2(X,\Omega_{X}^1 ) \to H^0(X,\mathscr{O}_X(2))^\vee \to H^3(X, \Omega_{\mathbb{P}}^1|_X)\to 0. \] The Euler exact sequence for $\mathbb{P}=\mathbb{P}(1^4,2)$ has the form \[ 0 \longrightarrow \Omega^1_{\mathbb{P}} \longrightarrow\mathscr{O}_{\mathbb{P}}(-2)\oplus \mathscr{O}_{\mathbb{P}}(-1)^{\oplus 4} \longrightarrow \mathscr{O}_{\mathbb{P}} \longrightarrow 0. \] Restricting it to $X$ we obtain $H^2(X, \Omega_{\mathbb{P}}^1|_X)=0$ and $H^3(X, \Omega_{\mathbb{P}}^1|_X)=\mathbb{C}$. \end{proof} \begin{mtheorem}{\bf Lemma.}\label{Lemma-representation} We have the following decomposition of $G$-modules: \[ \operatorname{T}_{0,J(X)}= V_4\oplus V_4'\oplus V_2, \] where $V_4$, $V_4'$ are irreducible faithful $4$-dimensional representations and $V_2$ is an irreducible $2$-dimensional representation with kernel $\langle \upalpha^{8},\,\upbeta^2\rangle$.
Moreover, ${\operatorname{z}}(G)$ acts on $V_4$ and $V_4'$ via different characters. \end{mtheorem} \begin{proof} Clearly, $\operatorname{T}_{0,J(X)}\simeq H^0(J(X),\Omega_{J(X)})^\vee \simeq H^2(X,\Omega_{X}^1)$ and by Lemma \ref{Lemma-representation0} we have an injection $\operatorname{T}_{0,J(X)}\hookrightarrow H^0(X,-K_X)^\vee$. By the adjunction formula $K_X=(K_{\mathbb{P}}+X)|_X$ and so \[ H^0(X,-K_X)\simeq H^0(\mathbb{P},\mathscr{O}_{\mathbb{P}}(-K_{\mathbb{P}}-X)). \] Consider the affine open subset $U:=\{x_1x_2x_3x_4\neq 0\}$. Then $v=y/x_1^2$ and $z_i=x_i/x_1$, $i=2,3,4$ are affine coordinates in $U\subset \{x_1\neq 0\}\simeq \mathbb A^4$. Let $\upomega$ be the $3$-form \[ \upomega:= \frac {d z_2\wedge d z_3\wedge d z_4 }{\partial \phi/\partial v} = \frac {d z_2\wedge d z_3\wedge d z_4 }{2 v}, \] where $\phi=v^2+z_{2}+ z_{2}^3z_{3}+ z_{3}^3z_{4}+ z_{4}^3$ is the equation of $X$ in $U$. It is easy to check that for any polynomial $\psi(z_2,z_3,z_4)$ of degree $\le 2$ the element $\psi\cdot \upomega ^{-1}$ extends to a section of $H^0(X, -K_X)$. Thus we have \[ H^0(X, -K_X)\simeq \{ \psi(z_2,z_3,z_4)\cdot \upomega ^{-1} \mid \deg \psi\le 2\}. \] It is easy to check that the forms \begin{equation} \label{equation-basis} \upomega ^{-1}, z_2^2 \upomega ^{-1}, z_3^2 \upomega ^{-1}, z_4^2\upomega ^{-1}, z_2 \upomega ^{-1}, z_2z_3 \upomega ^{-1}, z_3z_4 \upomega ^{-1}, z_4 \upomega ^{-1}, z_3 \upomega ^{-1} , z_2z_4\upomega ^{-1} \end{equation} are eigenvectors for $\upalpha$ and $\upbeta$ permutes them. Moreover, the following subspaces \begin{itemize} \item[] $W_4= \langle \upomega ^{-1},\hspace{5pt}z_2^2 \upomega ^{-1},\hspace{5pt}z_3^2 \upomega ^{-1},\hspace{5pt}z_4^2\upomega ^{-1}\rangle$, \item[] $W_4'= \langle z_2 \upomega ^{-1},\hspace{5pt}z_2z_3 \upomega ^{-1},\hspace{5pt}z_3z_4 \upomega ^{-1},\hspace{5pt}z_4 \upomega ^{-1}\rangle$, \item[] $W_2= \langle z_3 \upomega ^{-1} ,\hspace{5pt}z_2z_4\upomega ^{-1}\rangle$. 
\end{itemize} are $G$-invariant subspaces of $H^0(X, -K_X)$. Moreover, in the basis \eqref{equation-basis} the element $\upalpha$ acts diagonally: \begin{equation} \label{equation-W} \begin{array}{l} \upalpha|_{W_4} ={\operatorname{diag}} (\zeta_{40}^{11},\, \zeta_{40}^{7},\, \zeta_{40}^{19},\, \zeta_{40}^{23}), \\[4pt] \upalpha|_{W_4'} ={\operatorname{diag}} (\zeta_{40}^{9},\, \zeta_{40}^{13},\, \zeta_{40},\, \zeta_{40}^{37}), \\[4pt] \upalpha|_{W_2} ={\operatorname{diag}} (\zeta_{8}^{3},\, \zeta_{8}^{7}), \end{array} \end{equation} and $\upbeta$ acts on each of these subspaces permuting the eigenspaces of $\upalpha$ cyclically. Thus $\upalpha^{10}$ acts on $W_4$ (resp., $W_4'$) via scalar multiplication by $\zeta_4^{3}$ (resp., $\zeta_4$). Put $V_4:=W_4^\vee$, $V_4':=W_4'^\vee$, $V_2:=W_2^\vee$. \end{proof} \section{Proof of Theorem \ref{theorem-main}} \begin{say} Assume, contrary to Theorem \ref{theorem-main}, that $J(X)$ is a direct sum of Jacobians of curves, i.e. in the unique decomposition \eqref{equation-decomposition} we have $A_i\simeq J(C_i)$, where $C_i$ is a curve of genus $\ge 1$ and $J(C_i)$ is its Jacobian regarded as a principally polarized abelian variety. Let $G_i$ be the stabilizer of $A_i$. There is a natural homomorphism $\varsigma_i: G_i \to\operatorname{Aut} (C_i)$. By the Torelli theorem $\varsigma_i$ is injective and we have \begin{equation} \label{equation-Aut-C} \operatorname{Aut} (J(C_i))\simeq \begin{cases} \operatorname{Aut} (C_i)&\text{if $C_i$ is hyperelliptic,} \\ \operatorname{Aut} (C_i)\times \{\pm 1\}&\text{otherwise.} \end{cases} \end{equation} Let us analyze the action of $G$ on the set $\{A_1,\dots, A_n\}$. Up to renumbering we may assume that the subvarieties $A_1,\dots, A_m$ form one $G$-orbit (however, the choice of this orbit is not unique in general). Clearly, $m\in \{1,2,4,5,8,10\}$. Consider the possibilities for $m$ case by case.
\end{say} \begin{say} \label{lemma-invariant-A} {\bf Case: $m=1$,} that is, $A_1\subset J(X)$ is a $G$-invariant subvariety. Since ${\operatorname{z}}(G)=\langle \upalpha^{10}\rangle$, the only normal subgroup of order 2 in $G$ is $\langle \upalpha^{20}\rangle$. Hence $G$ cannot be decomposed as a direct product of groups of orders $2$ and $80$ (otherwise the order of $\upalpha$ would be $20$). If the action of $G$ on $A_1=J(C_1)$ is faithful, then by \eqref{equation-Aut-C} so is the corresponding action on $C_1$. So, the curve $C_1$ of genus $\le 10$ admits a faithful action of the group $G$ of order $2^{5}\cdot 5$. This contradicts Lemma \ref{lemma-action-80}\ref{lemma-action-80-k=5}. Therefore the induced representation on $\operatorname{T}_{0,A_1}$ is not faithful. By Lemma \ref{Lemma-representation}, $\operatorname{T}_{0,J(C_1)}= V_2$. In this case $g(C_1)=2$ and the action of $G$ on $J(C_1)$ induces a faithful action of the group $\bar G:= G/\langle \upalpha^{8},\,\upbeta^2\rangle$ of order $16$. Since $C_1$ is hyperelliptic, $\bar G$ is contained in $\operatorname{Aut}(C_1)$. If $\bar G$ contains the hyperelliptic involution $\tau$, then $\tau$ generates a normal subgroup of order 2. In this case $\langle \tau\rangle=[\bar G,\bar G]$ and $\bar G/\langle \tau\rangle$ is an abelian non-cyclic group of order $8$. But such a group cannot act faithfully on $C_1/\langle \tau\rangle\simeq \mathbb{P}^1$. Thus $\bar G$ does not contain the hyperelliptic involution. In this case the image of the induced action of $\bar G$ on the canonical sections $H^0(C_1,\mathscr{O}_{C_1}(K_{C_1}))$ does not contain scalar matrices. Hence this representation is reducible and so it is trivial on $[\bar G,\bar G]$. On the other hand, the action of $\operatorname{Aut}(C_1)$ on $H^0(C_1,\mathscr{O}_{C_1}(K_{C_1}))$ must be faithful, a contradiction. \end{say} From now on we may assume that the decomposition \eqref{equation-decomposition} contains no $G$-invariant summands.
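The group-theoretic facts of Lemma \ref{subgroup-index10} used throughout the case analysis can also be checked by a brute-force enumeration. The following Python sketch (purely illustrative, not part of the proof) models $G\simeq \mathbb{Z}/40\rtimes\mathbb{Z}/4$ as pairs $(a,b)$ standing for $\upalpha^a\upbeta^b$, with multiplication induced by the relation $\upbeta\upalpha\upbeta^{-1}=\upalpha^{13}$ (note $13^4\equiv 1 \pmod{40}$):

```python
# G = <alpha, beta | alpha^40 = beta^4 = 1, beta*alpha*beta^{-1} = alpha^13>,
# encoded as pairs (a, b) = alpha^a * beta^b with
# (a1, b1)*(a2, b2) = (a1 + 13^{b1} * a2 mod 40, b1 + b2 mod 4).
def mul(x, y):
    (a1, b1), (a2, b2) = x, y
    return ((a1 + pow(13, b1, 40) * a2) % 40, (b1 + b2) % 4)

def inv(x):
    a, b = x
    bi = (-b) % 4                        # (alpha^a beta^b)^{-1} = alpha^{-13^{bi} a} beta^{bi}
    return ((-pow(13, bi, 40) * a) % 40, bi)

G = [(a, b) for a in range(40) for b in range(4)]

# center z(G): elements commuting with everything -- expected <alpha^10>
center = sorted(g for g in G if all(mul(g, h) == mul(h, g) for h in G))

# derived subgroup [G,G]: subgroup generated by all commutators -- expected <alpha^4>
derived = {mul(mul(g, h), inv(mul(h, g))) for g in G for h in G}
grew = True
while grew:
    new = {mul(x, y) for x in derived for y in derived} - derived
    grew = bool(new)
    derived |= new

# the Sylow 5-subgroup <alpha^8> should be normal in G
syl5 = {((8 * k) % 40, 0) for k in range(5)}
normal5 = all(mul(mul(g, s), inv(g)) in syl5 for g in G for s in syl5)
```

Running the sketch confirms that the center is $\{\upalpha^{10k}\}$ of order $4$, the derived subgroup is $\langle\upalpha^4\rangle$ of order $10$, and $\langle\upalpha^8\rangle$ is normal, in agreement with Lemma \ref{subgroup-index10}.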
\begin{say}{\bf Case: $m=5$.} The subspace $\operatorname{T}_{0,A_1}\oplus\cdots\oplus \operatorname{T}_{0,A_5}\subset \operatorname{T}_{0,J(X)}$ is a $G$-invariant subspace of dimension $5$ or $10$. On the other hand, $\operatorname{T}_{0,J(X)}$ contains no invariant subspaces of dimension $5$ by Lemma \ref{Lemma-representation}. Hence, $\operatorname{T}_{0,A_1}\oplus\cdots\oplus \operatorname{T}_{0,A_5}= \operatorname{T}_{0,J(X)}$, $\dim A_i=2$, and $J(X)= \oplus_{i=1}^{5} A_i$. The stabilizer $G_i\subset G$ is a Sylow $2$-subgroup that acts faithfully on $C_i$ (because $C_i$ is hyperelliptic, see \eqref{equation-Aut-C}). Further, $G_i$ permutes the Weierstrass points $P_1,\dots,P_6\in C_i$. Hence a subgroup $G_i'\subset G_i$ of index 2 fixes one of them. In this situation, $G_i'$ must be cyclic. On the other hand, it is easy to see that $G$ does not contain any elements of order $16$, a contradiction. \end{say} \begin{say}{\bf Case: $m=10$.} Then $A_1,\dots,A_{10}$ are elliptic curves and $G_i\subset G$ is a subgroup of index $10$. By Lemma \ref{subgroup-index10} each $G_i$ contains ${\operatorname{z}}(G)$. Clearly, ${\operatorname{z}}(G)$ acts on $\operatorname{T}_{0,A_i}$ via the same character. Since the subspaces $\operatorname{T}_{0,A_i}$ generate $\operatorname{T}_{0,J(X)}$, the group ${\operatorname{z}}(G)$ acts on $\operatorname{T}_{0,J(X)}$ via scalar multiplication. This contradicts Lemma \ref{Lemma-representation}. \end{say} \begin{say} \label{orbit=8} {\bf Case: $m=8$.} Then $A_1,\dots,A_8$ are elliptic curves and the stabilizer $G_1\subset G$ is of order $20$. In particular, the Sylow $5$-subgroup ${\operatorname{Syl}}_5(G)$ is contained in $G_1$. Since ${\operatorname{Syl}}_5(G)$ is normal in $G$, we have ${\operatorname{Syl}}_5(G)\subset G_i$ for $i=1,\dots,8$. Since the automorphism group of an elliptic curve contains no elements of order $5$, ${\operatorname{Syl}}_5(G)$ acts trivially on $A_i$.
Therefore, ${\operatorname{Syl}}_5(G)$ acts trivially on the $8$-dimensional $G$-invariant subspace $\operatorname{T}_{0,A_1}\oplus \cdots\oplus \operatorname{T}_{0,A_8}$. This contradicts Lemma \ref{Lemma-representation}. \end{say} \begin{say}\label{orbit=4}{\bf Case: $m=4$.} The intersection $G_1\cap \langle\upalpha\rangle$ is a subgroup of index $\le 4$ in both $G_1$ and $\langle\upalpha\rangle$. Hence, $G_1\ni \upalpha^{4}$ and so $G_1\supset [G,G]$. In particular, $G_1$ is normal and $G_1=\cdots=G_4$. If $\dim A_1=1$, then the element $\upalpha^8$ of order $5$ must act trivially on the elliptic curves $A_i$, $i=1,\dots,4$. Therefore, $\upalpha^8$ acts trivially on the $4$-dimensional space $\operatorname{T}_{0,A_1}\oplus\cdots\oplus \operatorname{T}_{0,A_4}$. This contradicts Lemma \ref{Lemma-representation}. Thus $\dim A_1=2$. Then $\operatorname{T}_{0,A_1}\oplus\cdots\oplus \operatorname{T}_{0,A_4}=V_4\oplus V_4'$. An eigenvalue of $\upalpha$ on $\operatorname{T}_{0,A_1}\oplus\cdots\oplus \operatorname{T}_{0,A_4}$ must be a primitive $40$-th root of unity (see \eqref{equation-W}). Hence the group $G_1\cap \langle\upalpha\rangle$ acts faithfully on $\operatorname{T}_{0,A_1}$ and $C_1$ (see \eqref{equation-Aut-C}). By Lemma \ref{lemma-action-80}\ref{lemma-action-80-k=2}, $G_1\cap \langle\upalpha\rangle$ is of order $10$, i.e. $G_1\cap \langle\upalpha\rangle=\langle \upalpha^4\rangle$ and the kernel $N:=\ker (G_1\to \operatorname{Aut}(C_1))$ is of order $4$. Thus $G_1=\langle \upalpha^4\rangle\times N$. In particular, $G_1$ is abelian. But then the centralizer $\operatorname{C}(\upalpha^8)$ of $\upalpha^8$ contains $N$ and $\langle\upalpha\rangle$. Therefore, $\operatorname{C}(\upalpha^8)=G$ and $\upalpha^8\in {\operatorname{z}}(G)$. This contradicts Lemma \ref{subgroup-index10}\ref{subgroup-index10-1}. \end{say} Thus we have excluded the cases $m=1,4,5,8,10$. The only remaining possibility is that all the orbits of $G$ on $\{ A_i\}$ are of cardinality $2$.
\begin{say}{\bf Case: $m=2$.} Then $\dim A_1\le 5$ and $G_1$ is a group of order $80$. By replacing the orbit $\{A_1,A_2\}$ with another one we may assume that $\operatorname{T}_{0,A_1}\oplus \operatorname{T}_{0,A_2}\not \subset V_2$ and so $\operatorname{T}_{0,A_1}\oplus \operatorname{T}_{0,A_2}$ coincides with either $V_4$, $V_4'$, or $V_4\oplus V_4'$. In particular, $g(C_1)\ge 2$. Clearly, $G_1\cap \langle\upalpha\rangle$ is of order $40$ or $20$. Hence, $\upalpha^{2}\in G_1$ and so the group $G_1$ cannot be decomposed as a direct product $G_1=\langle \upalpha^{20}\rangle\times H$. By the Torelli theorem $G_1$ acts faithfully on $C_1$. This contradicts Lemma \ref{lemma-action-80}\ref{lemma-action-80-k=4}. \end{say} The proof of Theorem \ref{theorem-main} is now complete. \begin{proof}[Proof of Corollary \textup{\ref{corollary-main}}] The Jacobians and their sums form a closed subvariety of the moduli space of principally polarized abelian varieties. By Theorem \ref{theorem-main}, in our case, this subvariety does not contain the subvariety formed by the intermediate Jacobians of quartic double solids. Therefore a general quartic double solid is not rational. \end{proof} \subsection*{Acknowledgements.} The author would like to thank C. Shramov and the referee for useful comments and corrections. \input jacobian.bbl \end{document}
\begin{document} \title{Random Matrices from Linear Codes and the Convergence to Wigner's Semicircle Law} \begin{abstract} Recently we considered a class of random matrices obtained by choosing distinct codewords at random from linear codes over finite fields and proved that under some natural algebraic conditions their empirical spectral distribution converges to Wigner's semicircle law as the length of the codes goes to infinity. One of the conditions is that the dual distance of the codes is at least 5. In this paper, employing more advanced techniques related to the Stieltjes transform, we show that the condition that the dual distance is at least 5 alone is sufficient to ensure the convergence, and that the convergence rate is of the form $n^{-\beta}$ for some $0 < \beta < 1$, where $n$ is the length of the code. \end{abstract} \begin{keywords} Group randomness, linear code, dual distance, empirical spectral measure, random matrix theory, Wigner's semicircle law. \end{keywords} \section{Introduction}\label{intro} Random matrix theory is the study of matrices whose entries are random variables. Of particular interest is the study of eigenvalue statistics of random matrices such as the empirical spectral measure. It has been broadly investigated in a wide variety of areas, including statistics \cite{Wis}, number theory \cite{MET}, economics \cite{econ}, theoretical physics \cite{Wig} and communication theory \cite{TUL}. Most of the matrix models considered in the literature are matrices whose entries possess some independence structure. In a series of works (\cite{Tarokh2,Tarokh3,OQBT}), initiated in \cite{Tarokh1}, the authors studied a class of matrices formed by choosing codewords at random from linear codes over finite fields and ultimately proved the convergence of the empirical spectral distribution of their Gram matrices to the Marchenko-Pastur law under the condition that the minimum Hamming distance of the dual codes is at least 5.
This is the first result relating the randomness of matrices from linear codes to the algebraic properties of the underlying dual codes, and it can be interpreted as a joint randomness test for sequences from linear codes. It implies in particular that sequences from linear codes with the desired properties behave like random sequences from the viewpoint of random matrix theory. This is called a ``group randomness'' property in \cite{Tarokh1} and may have many applications (see \cite{WigCode,WigM} from a different perspective). Recently we considered a distinct normalization of matrices obtained in a similar fashion from linear codes and proved the convergence of the empirical spectral distribution to Wigner's semicircle law under some natural algebraic conditions on the underlying codes (see \cite{CSC}). This is also a group randomness property of linear codes. In this paper we explore this new phenomenon much further. \subsection{Statement of Main Results} To describe our results more precisely, we need some notation. Let $\mathscr{C}=\{\mathcal{C}_i : i \ge 1\}$ be a family of linear codes of length $n_i$ and dimension $k_i$ over the finite field $\mathbb{F}_q$ of $q$ elements ($\mathcal{C}_i$ is called an $[n_i,k_i]_q$ code for short), where $q$ is a prime power. The most interesting case is that of binary linear codes, corresponding to $q=2$. Denote by $\mathcal{C}_i^\bot$ the dual code of $\mathcal{C}_i$ and by $d_i^\bot$ the minimum Hamming distance of $\mathcal{C}_i^\bot$. The quantity $d_i^\bot$ is also called the \emph{dual distance} of $\mathcal{C}_i$. The standard additive character of $\mathbb{F}_q$ extends component-wise to a natural mapping $\psi: \mathbb{F}_q^{n_i} \to (\mathbb{C}^*)^{n_i}$. For each $i$, we choose $p_i$ distinct codewords from $\mathcal{C}_i$ and apply the mapping $\psi$. Endowing the choice of the $p_i$ codewords with the uniform probability, we obtain a probability space.
Put the $p_i$ distinct sequences as the rows of a $p_i \times n_i$ random matrix $\Phi_{\mathcal{C}_i}$. Denote \begin{equation}\label{Gram} \mathcal{G}_{\mathcal{C}_i}=\frac{1}{n_i}\Phi_{\mathcal{C}_i}\Phi_{\mathcal{C}_i}^*, \end{equation} where $\Phi_{\mathcal{C}_i}^*$ is the conjugate transpose of the matrix $\Phi_{\mathcal{C}_i}$, and define \begin{equation}\label{Mn} M_{\mathcal{C}_i}=\sqrt\frac{n_i}{p_i}(\mathcal{G}_{\mathcal{C}_i}-I_{p_i}). \end{equation} Here $I_{p_i}$ is the $p_i \times p_i$ identity matrix. For any $n \times n$ matrix $\mathbf{A}$ with eigenvalues $\lambda_1,\ldots,\lambda_n$, the \emph{spectral measure} of $\mathbf{A}$ is defined by $$\mu_\mathbf{A}=\frac{1}{n}\sum_{j=1}^n \delta_{\lambda_j},$$ where $\delta_{\lambda}$ is the Dirac measure at the point $\lambda$. The \emph{empirical spectral distribution} of $\mathbf{A}$ is defined by $$F_\mathbf{A}(x):=\int_{-\infty}^x \mu_\mathbf{A}(\mathrm{d}t).$$ Our first main result is as follows: \begin{theorem} \label{thm} Suppose $p_i, \frac{n_i}{p_i} \to \infty$ simultaneously as $i \to \infty$. If $d^\bot_i \geq 5$ for all $i$, then as $i \to \infty$, we have \begin{eqnarray} \label{1:eq1} \mu_{n_i}(\mathcal{I}) \to \varrho_\SC(\mathcal{I}) \quad \text{in probability},\end{eqnarray} and the convergence is uniform over all intervals $\mathcal{I} \subset \mathbb{R}$. Here $\mu_{n_i}$ is the spectral measure of the matrix $M_{\mathcal{C}_i}$ and $\varrho_\SC$ is the probability measure of the semicircle law, whose density function is given by \begin{equation}\label{scpdf} \mathrm{d}\varrho_\SC(x)=\frac{1}{2\pi}\sqrt{4-x^2}\mathbbm{1}_{[-2,2]}\,\mathrm{d} x, \end{equation} where $\mathbbm{1}_{[-2,2]}$ is the indicator function of the interval $[-2,2]$.
\end{theorem} We remark that originally in \cite{CSC} the same convergence (\ref{1:eq1}) was proved under the extra condition that there is a fixed constant $c > 0$ independent of $i$ such that \begin{equation} \label{1:srip} |\langle v,v' \rangle| \leq c\sqrt{n_i}, \quad \mbox{ for any } v \ne v' \in \psi(\mathcal{C}_i). \end{equation} The condition (\ref{1:srip}) is natural as explained in \cite{CSC}, and when $q=2$, it is equivalent to \[\left|\mathrm{wt}(\mathbf{c})-\frac{n_i}{2}\right| \le \frac{c}{2} \sqrt{n_i}, \quad \forall \mathbf{c} \in \mathcal{C}_i \setminus \{\mathbf{0}\},\] where $\mathrm{wt}(\mathbf{c})$ is the Hamming weight of the codeword $\mathbf{c}$. It is interesting that this extra condition can be dropped. Now the result of Theorem \ref{thm} has the same strength as that of \cite{OQBT}, where the condition $d_i^\bot \ge 5$ alone is sufficient to ensure the convergence. It should be noted that, as in \cite{OQBT}, the condition $d_i^\bot \ge 5$ in Theorem \ref{thm} is optimal: Conclusion (\ref{1:eq1}) fails for first-order binary Reed--Muller codes, which have dual distance $4$. Our second main result shows that the rate of the convergence (\ref{1:eq1}) is fast with respect to the length of the codes. \begin{theorem} \label{thm1-2} Let $\mathcal{C}$ be an $[n,k]_q$ code with dual distance $d^\bot \ge 5$. For fixed constants $\gamma_1,\gamma_2 \in (0,1)$ and $c \ge 1$, suppose $p$ and $n$ satisfy \[c^{-1} n^{\gamma_1} \leq p \leq c \, n^{\gamma_2}.\] Then \begin{equation}\label{SD} \left|\mu_{n}(\mathcal{I})-\varrho_\SC(\mathcal{I})\right| \prec n^{-\beta} \end{equation} uniformly for all intervals $\mathcal{I} \subset \mathbb{R}$, where $\beta>0$ is given by \begin{equation}\label{beta} \beta:=\min\left\{\frac{\gamma_1}{4},\frac{1-\gamma_2}{8} \right\}.
\end{equation} \end{theorem} We remark that the symbol ``$\prec$'' in (\ref{SD}) is a standard ``stochastic domination'' notation in probability theory (see \cite{Local law} for details), which means that for any $\varepsilon >0$ and any $D>0$, there is a quantity $N(\varepsilon,D,c,\gamma_1,\gamma_2)$ such that whenever $n \geq N(\varepsilon,D,c,\gamma_1,\gamma_2)$, we have \begin{equation}\label{SD2} \sup_\mathcal{I} \mathbb{P}\left[|\mu_{n}(\mathcal{I})-\varrho_\SC(\mathcal{I})| > n^{-\beta+\varepsilon} \right] \leq n^{-D}. \end{equation} Here $\mathbb{P}$ is the probability within the space of picking $p$ distinct codewords from $\mathcal{C}$ and the supremum is taken over all intervals $\mathcal{I} \subset \mathbb{R}$. Since $\varepsilon, D$ and $N(\varepsilon,D,c,\gamma_1,\gamma_2)$ do not depend on $\mathcal{C}$, the supremum can also be taken over all linear codes $\mathcal{C}$ of length $n$ over $\mathbb{F}_q$ with $d^\bot \geq 5$. We also remark that $d^\bot \geq 5$ is a very mild restriction on linear codes $\mathcal{C}$, and there is an abundance of binary codes that satisfy this condition, for example, the Gold codes (\cite{Gold}), some families of BCH codes (see \cite{DIN1,DIN2}) and many families of cyclic and linear codes studied in the literature (see for example \cite{CHE,TAN}). Such binary linear codes can also be generated by almost perfect nonlinear (APN) functions \cite{Blon,POTT}, a special class of functions with important applications in cryptography. \subsection{Simulations} We illustrate Theorems \ref{thm} and \ref{thm1-2} by numerical experiments. We focus on binary Gold codes augmented by the all-1 vector. It is known that binary Gold codes have length $n=2^m-1$, dimension $2m$ and dual distance 5. The augmented binary Gold code has length $n$, dimension $2m+1$ and dual distance at least 5. Because of the presence of the all-1 vector, the condition (\ref{1:srip}) is not satisfied.
For each triple $(m,n,p)$ in the set $\{(5,31,8), (7,127,20),(9,511,35),(11,2047,50)\}$, we randomly pick $p$ codewords from the augmented binary Gold code of length $n=2^m-1$ and form the corresponding matrix, from which we use {\bf Sage} to compute the eigenvalues and plot the empirical spectral distribution along with Wigner's distribution (see Figures \ref{fig1} to \ref{fig4} below). We repeat the above 10 times for each such triple $(m,n,p)$, and each time the plots look essentially the same: they are all very close to Wigner's semicircle law, and as the length $n$ increases, they become less and less distinguishable. In order to illustrate more clearly the shape of the eigenvalue distribution, we also plot a density graph, which is shown in Figure \ref{fig5}. This is based on picking $p=100$ codewords from a binary Gold code of length $n=32767=2^{15}-1$. From (\ref{beta}) it is easy to see that $\beta \leq 1/12$ and the upper bound is achieved when $\gamma_1=\gamma_2=1/3$. It might be possible to improve this value of $\beta$ and hence obtain a better convergence rate. From the simulation results, however, it is not clear to us what optimal value of $\beta$ one may expect.
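A minimal version of this experiment can be sketched in a few lines of Python (our own illustration, not the {\bf Sage} code used above; as a stand-in for sampling codewords from an augmented Gold code, whose construction we omit, it uses i.i.d.\ $\pm1$ rows, which obey the same semicircle limit in the regime $p, n/p \to \infty$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the rows psi(c): i.i.d. +/-1 entries (an assumption of this
# sketch; the paper samples rows from psi of an augmented Gold code).
n, p = 2047, 50
Phi = rng.choice([-1.0, 1.0], size=(p, n))

# M_n = sqrt(n/p) ((1/n) Phi Phi^* - I_p), whose ESD we compare with the
# semicircle law.
M = np.sqrt(n / p) * (Phi @ Phi.T / n - np.eye(p))
eigs = np.linalg.eigvalsh(M)

# Empirical mass of the interval [a, b] versus the semicircle mass
# (1/(2 pi)) * integral_a^b sqrt(4 - x^2) dx.
a, b = -1.0, 1.0
emp = np.mean((eigs >= a) & (eigs <= b))
xs = np.linspace(a, b, 200001)
sc = np.mean(np.sqrt(4.0 - xs ** 2)) * (b - a) / (2.0 * np.pi)
print(round(float(emp), 3), round(float(sc), 3))
```

Increasing $n$ (with, say, $p \asymp n^{1/3}$, the rate-optimal choice above) makes the two masses visibly closer, matching the behaviour seen in Figures \ref{fig1} to \ref{fig4}.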
\begin{figure} \caption{Empirical spectral distribution (ESD) of $[31,11,12]$ augmented binary Gold code versus Wigner semicircle law (SC), with $p=8$} \label{fig1} \end{figure} \begin{figure} \caption{Empirical spectral distribution (ESD) of $[127,15,56]$ augmented binary Gold code versus Wigner semicircle law (SC), with $p=20$} \label{fig2} \end{figure} \begin{figure} \caption{Empirical spectral distribution (ESD) of $[511,19,240]$ augmented binary Gold code versus Wigner semicircle law (SC), with $p=35$} \label{fig3} \end{figure} \begin{figure} \caption{Empirical spectral distribution (ESD) of $[2047,23,992]$ augmented binary Gold code versus Wigner semicircle law (SC), with $p=50$} \label{fig4} \end{figure} \begin{figure} \caption{Empirical spectral density of $[32767,30,16256]$ binary Gold code versus Wigner semicircle density, with $p=100$} \label{fig5} \end{figure} \subsection{Techniques and relation to previous work} This paper strengthens \cite[Theorem 2]{CSC} on two fronts: in Theorem \ref{thm} we obtain the same convergence by removing the extra condition (\ref{1:srip}), and in Theorem \ref{thm1-2} we obtain a strong and explicit convergence rate with respect to the length of the code; both results are supported by computer simulations. The main technique we use in this paper is the Stieltjes transform, a well-developed and standard tool in random matrix theory, and the method is essentially complex-analytic. From the viewpoint of random matrix theory, the authors of \cite{BaiY,Bao,Xie} have used the Stieltjes transform to study similar matrix models with success; however, our matrices, arising from general linear codes over finite fields with dual distance at least 5, possess characteristics significantly different from those in \cite{BaiY,Bao,Xie}. With applications in mind, say, to generate pseudo-random matrices efficiently via linear codes, our matrices are more natural and interesting. None of the methods in previous works seem to apply directly to our setting.
Instead we adopt methods from \cite{ConI,SA,Local law} and use a combination of ideas to obtain our final results. Related to this paper, the authors in \cite{CTX} have used the Stieltjes transform to obtain a strong convergence rate which is similar in nature to Theorem \ref{thm1-2} of this paper, hence extending the work \cite{OQBT}, and some of the arguments are similar. The paper is organized as follows. In Section \ref{pre} we introduce the Stieltjes transform and related formulas and lemmas which will play important roles later. The main ideas of the proofs of Theorems \ref{thm} and \ref{thm1-2} share some similarity, but technically they are quite involved, the latter even more so. To streamline the idea of the proofs, we assume a major technical statement (Theorem \ref{thm2}) from which we prove Theorems \ref{thm} and \ref{thm1-2} in Sections \ref{proof} and \ref{proof1-2} respectively. Finally we prove the required Theorem \ref{thm2} in Section \ref{proof2}. \section{Preliminaries}\label{pre} \subsection{Linear codes over $\mathbb{F}_q$ of dual distance at least 5} The standard additive character $\psi: \mathbb{F}_q \to \mathbb{C}^*$ is given by \begin{equation}\label{char} \psi(a) = \zeta^{\mathrm{Tr}(a)}, \quad \forall a \in \mathbb{F}_q, \end{equation} where $\mathrm{Tr}$ is the absolute trace mapping from $\mathbb{F}_q$ to its prime subfield $\mathbb{F}_r$ of order $r$ and $\zeta=\exp(2\pi\sqrt{-1}/r)$ is a (complex) primitive $r$-th root of unity. In particular when $q=r=2$, then $\zeta=-1$ and $\psi(a)=(-1)^a$ for $a \in \mathbb{F}_2$. It is known that $\psi$ satisfies the following orthogonality relation: \begin{equation} \label{2:orth} \frac{1}{q}\sum_{x \in \mathbb{F}_q} \psi(ax)= \left\{ \begin{array}{lll} 1 &:& \mbox{ if } a = 0;\\ 0 &:& \mbox{ if } a \in \mathbb{F}_q \setminus \{0\}. \end{array}\right. \end{equation} Let $\mathcal{C}$ be an $[n,k]_q$ linear code with dual distance $d^\bot \ge 5$.
By the sphere-packing bound \cite[Theorem 1.12.1]{FECC}, we have $$\#\C^\bot =q^{n-k} \leq \frac{q^n}{1+n(q-1)+\binom{n}{2}(q-1)^2} =O\left(\frac{q^n}{n^2}\right),$$ where the implied constant in the big-$O$ notation depends only on $q$. From this we can obtain \begin{eqnarray} \label{le:db} \frac{n^2}{q^k}=O(1). \end{eqnarray} Since $\mathcal{C}$ is linear, the orthogonality relation (\ref{2:orth}) further implies that for any $\mathbf{a} \in \mathbb{F}_q^n$, we have \begin{equation}\label{lem} \frac{1}{\#\mathcal{C}}\sum_{\mathbf{c} \in \mathcal{C}}\psi(\mathbf{a}\cdot \mathbf{c})=\left\{\begin{array}{lll} 1 &:& \mbox{ if } \mathbf{a} \in \C^\bot,\\ 0 &:& \mbox{ if } \mathbf{a} \notin \C^\bot. \end{array}\right. \end{equation} Here $\mathbf{a}\cdot\mathbf{c}$ is the usual inner product between the vectors $\mathbf{a}$ and $\mathbf{c}$ in $\mathbb{F}_q^n$. \subsection{Stieltjes Transform} In this section we recall some basic facts about the Stieltjes transform. Interested readers may refer to \cite[Chapter B.2]{SA} for more details. The Stieltjes transform can be defined for any real function of bounded variation. For the case of interest to us, however, we confine ourselves to functions arising from probability theory. Let $\mu$ be a probability measure and let $F$ be the corresponding cumulative distribution function. The Stieltjes transform of $F$ or $\mu$ is defined by $$s(z):=\int_{-\infty}^\infty \frac{\mathrm{d} F(x)}{x-z}=\int_{-\infty}^\infty \frac{\mu(\mathrm{d} x)}{x-z}, $$ where $z$ is a complex variable taking values in $\mathbb{C}^+:=\{z \in \mathbb{C}: \Im z > 0\}$, the upper half complex plane. Here $\Im z$ is the imaginary part of $z$. It is known that $s(z)$ is well-defined for all $z \in \mathbb{C}^+$ and is well-behaved, satisfying the following properties: \begin{itemize} \item[(i).] $s(z) \in \mathbb{C}^+$ for any $z \in \mathbb{C}^+$; \item[(ii).]
$s(z)$ is analytic in $\mathbb{C}^+$ and \begin{eqnarray} \label{2:lips} \left|\frac{\mathrm{d} s(z)}{\mathrm{d} z}\right| \leq \int_{-\infty}^\infty \frac{\mu(\mathrm{d} x)}{|x-z|^2} \le \frac{1}{\eta^2},\end{eqnarray} where $\eta=\Im z>0$; \item[(iii).] the probability measure $\mu$ can be recovered from the Stieltjes transform $s(z)$ via the inverse formula (see \cite{SA}): \begin{equation}\label{2:inverse} \mu((x_1,x_2])=F(x_2)-F(x_1)=\lim_{\eta \to 0^+}\frac{1}{\pi}\int_{x_1}^{x_2} \Im(s(E+\mathrm{i}\eta)) \, \mathrm{d} E; \end{equation} \item[(iv).] the convergence of Stieltjes transforms is equivalent to the convergence of the underlying probability measures (see for example \cite[Theorem B.9]{SA}). \end{itemize} \subsection{Resolvent Identities and Formulas for Green function entries} Let $M$ be a Hermitian $p \times p$ matrix whose $(j,k)$-th entry is $M_{jk}$. Denote by $G$ the Green function of $M$, that is, \[G:=G(z)=(M-zI_p)^{-1},\] where $z \in \mathbb{C}^{+}$. The $(j,k)$-th entry of $G$ is $G_{jk}$. Given any subset $T \subset [1\mathrel{{.}\,{.}}\nobreak p]:=\{1,2,\cdots,p\}$, let $M^{(T)}$ be the $p \times p$ matrix whose $(j,k)$-th entry is given by $(M^{(T)})_{jk}:=\mathbbm{1}_{j,k \notin T}M_{jk}$. In addition, let $G^{(T)}$ be the Green function of $M^{(T)}$, that is, \[ G^{(T)}:=G^{(T)}(z)=(M^{(T)}-zI_p)^{-1}.\] When $T$ is a singleton, say $\{\ell\}$, it is common to further abbreviate the notation $G^{(\{\ell\})}$ as $G^{(\ell)}$, and similarly for other matrices. Let $\mathbf{m}_{\ell}$ denote the $\ell$-th column of $M$. For $z \in \mathbb{C}^{+}$ and any $\ell \in [1\mathrel{{.}\,{.}}\nobreak p] \setminus T$, we have the \emph{Schur complement formula} (see \cite{SA, Local law}) \begin{equation}\label{diagonal} \frac{1}{G_{\ell\ell}^{(T)}}=M_{\ell\ell}-z-\mathbf{m}_{\ell}^*G^{(T\ell)}\mathbf{m}_{\ell}, \end{equation} where $G^{(T\ell)}:=G^{(T \cup \{\ell\})}$ and $\mathbf{m}_{\ell}^*$ is the conjugate transpose of $\mathbf{m}_{\ell}$.
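The Schur complement formula (\ref{diagonal}) is easy to check numerically. The sketch below (our own toy check, with $T=\emptyset$) uses a random Hermitian matrix with vanishing diagonal, the case relevant later since the diagonal of $M_n$ is zero:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 6

# Random Hermitian M with zero diagonal.
A = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
M = (A + A.conj().T) / 2
np.fill_diagonal(M, 0.0)

z = 0.3 + 0.5j
G = np.linalg.inv(M - z * np.eye(p))          # Green function of M

ell = 2
Ml = M.copy()
Ml[ell, :] = 0.0                              # M^{(ell)}: zero out row ell
Ml[:, ell] = 0.0                              # and column ell
Gl = np.linalg.inv(Ml - z * np.eye(p))        # G^{(ell)}
m = M[:, ell]                                 # ell-th column of M

lhs = 1.0 / G[ell, ell]
rhs = M[ell, ell] - z - m.conj() @ Gl @ m     # Schur complement formula
print(abs(lhs - rhs))
```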
We also have the following eigenvalue interlacing property (see \cite{SA, Local law}) \begin{equation}\label{Interlacing} |{\bf Tr} G^{(T)}(z)-{\bf Tr} G(z)| \leq C\eta^{-1}, \end{equation} where $z=E+\mathrm{i}\eta \in \mathbb{C}^{+}$, ${\bf Tr}$ is the trace function, and $C$ is a constant depending only on the set $T$. \subsection{Stieltjes Transform of the Semicircle Law} The Stieltjes transform $s_\mathrm{SC}$ of the semicircle distribution given in (\ref{scpdf}) can be computed as (see \cite{SA}) \begin{equation}\label{SSC} s_\mathrm{SC}(z)=\frac{-z+\sqrt{z^2-4}}{2}. \end{equation} Here and throughout this paper, we always pick the complex square root $\sqrt{\cdot}$ to be the one with positive imaginary part. It is well-known that $s_\mathrm{SC}(z)$ is the unique function that satisfies the equation \begin{equation}\label{SSC2} u(z)=\frac{1}{-z-u(z)} \end{equation} such that $\Im u(z) > 0$ whenever $\eta:=\Im z > 0$. \subsection{Convergence of Stieltjes Transform in Probability} In order to bound the convergence rate of a random Stieltjes transform in probability, we need the following well-known lemma of McDiarmid from probability theory (see \cite[Lemma F.3]{Local law}). \begin{lemma}[McDiarmid]\label{leMic} Let $X_1,\cdots,X_p$ be independent random variables taking values in the spaces $E_1,\cdots,E_p$ respectively. Let $$f: E_1 \times \cdots \times E_p \to \mathbb{R}$$ be a measurable function and define the random variable $Y=f(X_1,\cdots,X_p)$. Define, for each $k \in [1\mathrel{{.}\,{.}}\nobreak p]$, \begin{equation}\label{ck} c_k:=\sup |f(x_1,\cdots,x_{k-1},y,x_{k+1},\cdots,x_p)-f(x_1,\cdots,x_{k-1},z,x_{k+1},\cdots,x_p)|, \end{equation} where the supremum is taken over all $x_j \in E_j$ for $j \neq k$ and $y,z \in E_k$. Then for any $\varepsilon > 0$, we have \begin{equation}\label{Mic} \mathbb{P}\left(|Y-\mathbb{E} Y| \geq \varepsilon \right) \leq 2\exp\left(-\frac{2\varepsilon^2}{c_1^2+\cdots+c_p^2}\right).
\end{equation} \end{lemma} We will need the following concentration inequality. We remark that a very similar concentration inequality was proved in \cite[Lemma F.4]{Local law}. Here for the sake of completeness, we provide a detailed proof. \begin{lemma}\label{deviation} Let $\mathcal{M}$ be a $p \times n$ random matrix with independent rows, and define $S=(n/p)^{1/2}(\mathcal{M}\mathcal{M}^*-I_p)$. Let $s(z)$ be the Stieltjes transform of the empirical spectral distribution of $S$. Then for any $\varepsilon > 0$ and $z =E+\mathrm{i} \eta \in \mathbb{C}^+$, $$\mathbb{P}\left(|s(z)-\mathbb{E} s(z)| \geq \varepsilon \right) \leq 2\exp\left(-\frac{p\eta^2\varepsilon^2}{8}\right).$$ \end{lemma} \begin{proof}[Proof of Lemma \ref{deviation}] Applying Lemma \ref{leMic}, we take $X_j$ to be the $j$-th row of $\mathcal{M}$ and the function $f$ to be the Stieltjes transform $s$. Note that the $(j,k)$-th entry of $S$ is a linear function of the inner product of the $j$-th and $k$-th rows of $\mathcal{M}$. Hence changing one row of $\mathcal{M}$ only gives an additive perturbation of $S$ of rank at most two. Applying the resolvent identity \cite[(2.3)]{Local law}, we see that the Green function is also only affected by an additive perturbation by a matrix of rank at most two and operator norm at most $2\eta^{-1}$. Therefore the quantities $c_k$ in (\ref{ck}) can be bounded by $$c_k \leq \frac{4}{p\eta}.$$ Then the required result follows directly from inserting the above bound into (\ref{Mic}). \end{proof} \section{Proof of Theorem \ref{thm}}\label{proof} Throughout the paper, let $\mathcal{C}$ be an $[n,k]_q$ linear code over $\mathbb{F}_q$. We always assume that its dual distance satisfies $d^\bot \ge 5$. Denote $N=q^k$. The standard additive character on $\mathbb{F}_q$ extends component-wise to a natural mapping $\psi: \mathbb{F}_q^{n} \to \mathbb{C}^{n}$. Define $\mathcal{D}=\psi(\mathcal{C})$.
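The two code-theoretic facts used below, namely the membership test (\ref{lem}) and the vanishing of character sums attached to vectors of weight at most $4$ when $d^\bot \ge 5$, can be verified on a concrete example. The sketch below (our own choice of test code) takes $\mathcal{C}$ to be the dual of the standard binary $[15,7,5]$ BCH code with generator polynomial $g(x)=x^8+x^7+x^6+x^4+1$, so that $d^\bot=5$:

```python
from itertools import product

# Generator matrix of the [15,7,5] BCH code: shifts of g(x)'s coefficients.
n = 15
g = [1, 0, 0, 0, 1, 0, 1, 1, 1]   # coefficients of x^0, x^1, ..., x^8
rows = []
for i in range(7):
    row = [0] * n
    for j, bit in enumerate(g):
        row[i + j] = bit
    rows.append(row)

# C = (BCH)^perp by brute force, so C is a [15,8] code with d^perp = 5.
C = [v for v in product(range(2), repeat=n)
     if all(sum(a * b for a, b in zip(v, r)) % 2 == 0 for r in rows)]

def char_avg(support):
    """(1/#C) sum_{c in C} (-1)^{a.c} for the 0/1 vector a with the given
    support (a repeated index cancels itself, giving a = 0)."""
    return sum((-1) ** (sum(c[i] for i in support) % 2) for c in C) / len(C)

# Weight-2 and weight-4 vectors a lie outside C^perp = BCH (minimum weight
# 5), so by (lem) the averages vanish; paired indices give a = 0, average 1.
print(len(C), char_avg([0, 5]), char_avg([1, 4, 7, 11]), char_avg([2, 2, 9, 9]))
```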
\subsection{Problem set-up} Theorems \ref{thm} and \ref{thm1-2} concern random matrices in the probability space $\Omega_{p,I}$ of choosing $p$ distinct elements uniformly from $\mathcal{D}$. Denote by $\mathcal{D}^p$ the probability space of choosing $p$ elements from $\mathcal{D}$ independently and uniformly. Because $d^\bot \ge 5$, from (\ref{le:db}) we have $$\frac{\#\mathcal{D}^p}{\#\Omega_{p,I}}=\frac{N^p}{N(N-1)(N-2)\cdots(N-p+1)}=1+O\left(\frac{p^2}{N}\right) \to 1,$$ as $n,p \to \infty$. Thus, to prove Theorems \ref{thm} and \ref{thm1-2}, it suffices to consider the larger probability space $\mathcal{D}^p$. This will simplify the proofs. Now let $\Phi_n$ be a $p \times n$ random matrix whose rows are picked from $\mathcal{D}$ uniformly and independently. Denote by $\mathbb{E}$ the expectation with respect to the probability space $\mathcal{D}^p$. We may assume that $p:=p(n)$ is a function of $n$ such that $p,n/p \to \infty$ as $n \to \infty$. Let \begin{eqnarray} \label{3:gn} \mathcal{G}_n=\frac{1}{n}\Phi_n\Phi_n^*, \quad M_n=\sqrt{\frac{n}{p}}(\mathcal{G}_n-I_p).\end{eqnarray} Let $\mu_n$ be the empirical spectral measure of $M_n$ and let $s_{M_n}(z)$ be its Stieltjes transform, that is, $$s_{M_n}(z)=\frac{1}{p}\sum_{j=1}^p \frac{1}{\lambda_j-z}=\frac{1}{p}{\bf Tr} G. $$ Here $\lambda_1,\cdots,\lambda_p$ are the eigenvalues of the matrix $M_n$, and $G:=G(z)$ is the Green function of $M_n$ given by \[ G(z)=(M_n-zI_p)^{-1}.\] Note that the Stieltjes transform $s_{M_n}(z)$ is itself a random variable in the space $\mathcal{D}^p$. We define \begin{equation}\label{EStietjes} s_n(z):=\mathbb{E} s_{M_n}(z)=\frac{1}{p}\mathbb{E} {\bf Tr} G. \end{equation} Throughout the paper, the complex value $z \in \mathbb{C}^+$ is always written as \[z=E+ \mathrm{i} \eta, \quad \mbox{ where } E, \eta \in \mathbb{R} \mbox{ and } \eta >0.
\] For a fixed constant $\tau \in (0,1)$, we define \begin{equation}\label{st} \Gamma_{\tau}:=\bigg\{z=E+\mathrm{i}\eta: |E| \leq \tau^{-1}, 0< \eta \leq \tau^{-1} \bigg\}. \end{equation} We now state a result about the expected Stieltjes transform $s_n(z)$. \begin{theorem}\label{thm2} For any $z \in \Gamma_{\tau}$, we write \begin{eqnarray} \label{3:snz} s_n(z)=\frac{1}{-z-s_n(z)+\Delta(z)}. \end{eqnarray} Then we have \[ \Delta(z)=O_{\tau}\left(\eta^{-3} \left(p^{-1}+\sqrt{p/n}\right)\right).\] \end{theorem} We emphasize here that this is one of the major technical results in this paper, and its proof is somewhat involved. This is the only result in the paper that is directly related to the properties of linear codes. It requires $d^\bot \geq 5$ but not the extra condition (\ref{1:srip}) used in \cite{CSC}. To streamline the presentation, we assume Theorem \ref{thm2} here; granting it, Theorem \ref{thm} can be proved easily. The proof of Theorem \ref{thm2} is postponed to Section \ref{proof2}. \subsection{Proof of Theorem \ref{thm}}\label{sec} By properties of the Stieltjes transform (see \cite[Theorem B.9]{SA}), to prove Theorem \ref{thm}, it is equivalent to prove the following statement: \emph{For any $\varepsilon>0$, we have} \begin{eqnarray} \label{3:conP} \mathbb{P}\left(\exists z \in \mathbb{C}^+ \mbox{ such that } \left|s_{M_n}(z) - s_\mathrm{SC}(z) \right| \ge \varepsilon \right) \to 0 \quad \mbox{ as } n \to \infty. \end{eqnarray} We prove Statement (\ref{3:conP}) in several steps. First, we fix an arbitrary value $z \in \mathbb{C}^+$. The quadratic equation (\ref{3:snz}) has two solutions $$s_n^{\pm}(z)=\frac{-(z-\Delta)\pm \sqrt{(z-\Delta)^2-4}}{2}.$$ As $n \to \infty$, from Theorem \ref{thm2} we have $\Delta(z) \to 0$, so $z-\Delta \in \mathbb{C}^+$ for large enough $n$. Since $s_n(z),s_\mathrm{SC}(z) \in \mathbb{C}^+$, we see that \begin{equation}\label{3:snz2} s_n(z)=s_n^+(z)=s_\mathrm{SC}(z-\Delta).
\end{equation} Then by the continuity of $s_\mathrm{SC}$ and by taking $n \to \infty$, we obtain \begin{equation}\label{3:limit} s_n(z) \to s_\mathrm{SC}(z). \end{equation} Moreover, by Lemma \ref{deviation}, for any fixed $\varepsilon >0$, as $n \to \infty$, we have \[\mathbb{P}\left(\left|s_{M_n}(z)-s_n(z)\right| \geq \varepsilon\right) \to 0.\] This and (\ref{3:limit}) immediately imply \begin{equation}\label{3:prob} \mathbb{P}\left(\left|s_{M_n}(z)-s_\mathrm{SC}(z)\right| \geq \varepsilon\right) \to 0. \end{equation} Since (\ref{3:prob}) holds for any fixed $z \in \mathbb{C}^+$ and any $\varepsilon>0$, to prove (\ref{3:conP}) it remains to show in the next step that the convergence is ``uniform'' in $z \in \mathbb{C}^+$. To do this, we adopt a simple lattice argument. For any $\tau,\varepsilon \in (0,1)$, define the sets $$\Gamma_{\tau}':=\Gamma_{\tau} \cap \{z=E+\mathrm{i} \eta: \eta \geq \tau\}$$ and $$\mathbf{L}_{\tau,\ep}:=\Gamma_{\tau}' \cap \left\{z=\frac{\tau^2\varepsilon}{4}(a+\mathrm{i} b): (a,b) \in \mathbb{Z}^2\right\}.$$ It is easy to see that $\mathbf{L}_{\tau,\ep} \neq \emptyset$ and $$\#\mathbf{L}_{\tau,\ep} = O_\tau\left(\tau^{-4}\varepsilon^{-2}\right) < \infty.$$ For any fixed $z \in \mathbb{C}^+$, define $\Xi_{n,\varepsilon}(z)$ to be the event $$\left\{|s_{M_n}(z)-s_\mathrm{SC}(z)| < \varepsilon \right\}.$$ By (\ref{3:prob}), for any $\delta > 0$, there is an $N(z,\tau,\varepsilon,\delta)$ such that $$n > N(z,\tau,\varepsilon,\delta) \implies \mathbb{P}\left(\Xi_{n,\frac{\varepsilon}{2}}(z)^{\bf c}\right) < \frac{\delta}{\#\mathbf{L}_{\tau,\ep}}.$$ Here the set $\Xi_{n,\frac{\varepsilon}{2}}(z)^{\bf c}$ denotes the complement of the event $\Xi_{n,\frac{\varepsilon}{2}}(z)$.
Then for any $n$ such that \[ n > N(\tau,\varepsilon,\delta):=\max_{z \in \mathbf{L}_{\tau,\ep}} N(z,\tau,\varepsilon,\delta),\] we have $$\mathbb{P}\left(\left(\bigcap_{z \in \mathbf{L}_{\tau,\ep}} \Xi_{n,\frac{\varepsilon}{2}}(z)\right)^{\bf c}\right)=\mathbb{P}\left(\bigcup_{z \in \mathbf{L}_{\tau,\ep}} \Xi_{n,\frac{\varepsilon}{2}}(z)^{\bf c}\right) < \delta. $$ Finally we consider the event $\bigcap_{z \in \mathbf{L}_{\tau,\ep}} \Xi_{n,\frac{\varepsilon}{2}}(z)$, that is, \[ \left|s_{M_n}(z')-s_\mathrm{SC}(z') \right|< \frac{\varepsilon}{2} \quad \forall z' \in \mathbf{L}_{\tau,\ep}.\] Recall from (\ref{2:lips}) that the Stieltjes transforms $s_{M_n}(z)$ and $s_\mathrm{SC}(z)$ are both $\tau^{-2}$-Lipschitz on the set $\Gamma_{\tau}'$, and for any $z \in \Gamma_{\tau}'$, we can find one $z' \in \mathbf{L}_{\tau,\ep}$ such that $$|z-z'| \leq \frac{\tau^2\varepsilon}{4}. $$ So for this $z \in \Gamma_{\tau}'$ we have \begin{align*} |s_{M_n}(z)-s_\mathrm{SC}(z)| &\leq |s_{M_n}(z)-s_{M_n}(z')|+|s_{M_n}(z')-s_\mathrm{SC}(z')|+|s_\mathrm{SC}(z')-s_\mathrm{SC}(z)|\\ &< \tau^{-2}|z-z'|+\frac{\varepsilon}{2}+\tau^{-2}|z-z'|\\ & \le \varepsilon. \end{align*} This means that $$\bigcap_{z \in \mathbf{L}_{\tau,\ep}} \Xi_{n,\frac{\varepsilon}{2}}(z) \subset \bigcap_{z \in \Gamma_{\tau}'} \Xi_{n,\varepsilon}(z).$$ Therefore $$\mathbb{P}\left(\left(\bigcap_{z \in \Gamma_{\tau}'} \Xi_{n,\varepsilon}(z)\right)^{\bf c}\right) \leq \mathbb{P}\left(\left(\bigcap_{z \in \mathbf{L}_{\tau,\ep}} \Xi_{n,\frac{\varepsilon}{2}}(z)\right)^{\bf c}\right) < \delta$$ for any $n > N(\tau,\varepsilon,\delta)$. Hence for any $\tau, \varepsilon \in (0,1)$, we have \begin{eqnarray*} \mathbb{P}\left(\exists z \in \Gamma_{\tau}' \mbox{ such that } \left|s_{M_n}(z) - s_\mathrm{SC}(z) \right| \ge \varepsilon \right) \to 0 \quad \mbox{ as } n \to \infty. \end{eqnarray*} Taking the limit $\tau \to 0^+$, we obtain the desired Statement (\ref{3:conP}). This completes the proof of Theorem \ref{thm}. 
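The branch convention for $\sqrt{\cdot}$ in (\ref{SSC}), which the argument above used when matching $s_n^+(z)$ with $s_\mathrm{SC}(z-\Delta)$, can be checked numerically; the following sketch (our own quick check) verifies that $s_\mathrm{SC}$ maps $\mathbb{C}^+$ into $\mathbb{C}^+$ and solves the self-consistent equation (\ref{SSC2}):

```python
import cmath

def sqrt_upper(w):
    """Complex square root chosen with positive imaginary part."""
    s = cmath.sqrt(w)
    return s if s.imag > 0 else -s

def s_sc(z):
    # (SSC): s_SC(z) = (-z + sqrt(z^2 - 4)) / 2.
    return (-z + sqrt_upper(z * z - 4)) / 2

for z in (0.5 + 1.0j, -1.7 + 0.1j, 2.3 + 0.01j):
    u = s_sc(z)
    # u lies in C^+ and satisfies u = 1/(-z - u), i.e. (SSC2).
    print(u.imag > 0, abs(u - 1 / (-z - u)) < 1e-12)
```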
\section{Proof of Theorem \ref{thm1-2}}\label{proof1-2} Now for fixed constants $c \ge 1$ and $\gamma_1,\gamma_2 \in (0,1)$, let us assume \[ c^{-1}n^{\gamma_1} \leq p \leq cn^{\gamma_2}.\] As in the proof of Theorem \ref{thm} in the previous section, we assume Theorem \ref{thm2} here. The main idea of the proof of Theorem \ref{thm1-2} is to provide a refined and quantitative version of Statement (\ref{3:conP}), so in each step of the proofs, we need to keep track of all the varying parameters as $n \to \infty$. First, the upper bound for $\Delta(z)$ in Theorem \ref{thm2} can be simplified as \[ \Delta(z)=O_{c,\tau}\left(n^{-4\beta}\eta^{-3}\right), \] where the constant $\beta>0$ is explicitly given in (\ref{beta}). Let us define \begin{eqnarray*} \mathbf{S}_{\tau}:=\Gamma_{\tau} \bigcap \left\{z=E+\mathrm{i} \eta: \eta \ge n^{-\beta+\tau}\right\}. \end{eqnarray*} From now on, $C_{c,\tau}$ denotes some positive constant depending only on $c$ and $\tau$ whose value may vary at each occurrence. We can estimate the difference $|s_n(z)-s_\mathrm{SC}(z)|$ as follows. \begin{lemma}\label{EStieltjes} For any $z \in \mathbf{S}_{\tau}$, we have $$|s_n(z)-s_\mathrm{SC}(z)|=O_{c,\tau}\left(n^{-4\beta}\eta^{-4}\right).$$ \end{lemma} \begin{proof}[Proof of Lemma \ref{EStieltjes}] First, for large enough $n$, noting that $$\Im(z-\Delta)\geq \eta-C_{c,\tau}n^{-4\beta}\eta^{-3} > 0,$$ we see that Equation (\ref{3:snz2}) holds for all $z \in \mathbf{S}_{\tau}$. More precisely, we have $$\Im(z-\Delta) > C_{c,\tau}\eta.$$ By using the fact $\left|\frac{\mathrm{d} s_\mathrm{SC}(z)}{\mathrm{d} z}\right| \leq \eta^{-1}$ which can be easily checked from (\ref{SSC}), we conclude that $$|s_n(z)-s_\mathrm{SC}(z)|=\left|s_\mathrm{SC}(z-\Delta)-s_\mathrm{SC}(z) \right| \leq C_{c,\tau}\eta^{-1}|\Delta|\leq C_{c,\tau}n^{-4\beta}\eta^{-4}.$$ This proves Lemma \ref{EStieltjes}. \end{proof} Next we estimate the term $|s_{M_n}(z)-s_\mathrm{SC}(z)|$.
An $n$-dependent event $\Xi$ is said to hold \emph{with high probability} if for any $D>0$, there is a quantity $N=N(D)>0$ such that $\mathbb{P}(\Xi) \geq 1-n^{-D}$ for any $n > N$. \begin{theorem}\label{Stieltjes} We have, with high probability, $$|s_{M_n}(z)-s_\mathrm{SC}(z)| \leq n^\tau(n^{-\beta}+n^{-4\beta}\eta^{-4}) \quad \forall z \in \mathbf{S}_{\tau}.$$ \end{theorem} \begin{proof}[Proof of Theorem \ref{Stieltjes}] By the concentration inequality given in Lemma \ref{deviation}, we have \begin{equation}\label{deviation1} \mathbb{P}\left(\left|s_{M_n}(z)-s_n(z)\right| \geq n^{\frac{\tau}{2}-\beta}\right)\leq 2\exp\left(-\frac{n^{\gamma_1-4\beta+3\tau}}{8c}\right)\leq 2\exp\left(-\frac{n^{3\tau}}{8c}\right). \end{equation} Note that the inequality (\ref{deviation1}) holds for any fixed $z \in \mathbf{S}_{\tau}$; in order to prove Theorem \ref{Stieltjes}, we need an upper bound which is uniform for all $z \in \mathbf{S}_{\tau}$. We apply a lattice argument again. Let $$\mathbf{L}_\tau:=\mathbf{S}_{\tau} \cap \left\{z=n^{-3 \beta} (a+\mathrm{i} b): (a,b) \in \mathbb{Z}^2 \right\}.$$ Note that the set $\mathbf{L}_\tau \ne \emptyset$ and $$\#\mathbf{L}_\tau \leq C_\tau n^{6\beta}.$$ Also, for any $z \in \mathbf{S}_{\tau}$ and $\varepsilon > 0$, define $\mathcal{E}_{n,\varepsilon}(z)$ to be the event $$ \left\{|s_{M_n}(z)-s_n(z)| \leq n^{\varepsilon-\beta} \right\},$$ and $\mathcal{E}_{n,\varepsilon}(z)^{\bf c}$ the complement.
Then (\ref{deviation1}) can be rewritten as $$\mathbb{P}\left(\mathcal{E}_{n,\frac{\tau}{2}}(z)^{\bf c} \right) \leq 2\exp\left(-\frac{n^{3\tau}}{8c}\right).$$ So we have \begin{eqnarray} \mathbb{P}\left(\left(\bigcap_{z \in \mathbf{L}_\tau} \mathcal{E}_{n,\frac{\tau}{2}}(z)\right)^{\bf c}\right)&=&\mathbb{P}\left(\bigcup_{z \in \mathbf{L}_\tau} \mathcal{E}_{n,\frac{\tau}{2}}(z)^{\bf c}\right) \nonumber \\ &\leq & C_\tau n^{6\beta}\exp\left(-\frac{n^{3\tau}}{8c}\right) \leq n^{-D} \label{deviation2} \end{eqnarray} for any $D > 0$ and $n > N(c,\gamma_1,\gamma_2,\tau,D)$. Finally we consider the event $\bigcap_{z \in \mathbf{L}_\tau} \mathcal{E}_{n,\frac{\tau}{2}}(z)$, that is, \[|s_{M_n}(z')-s_n(z')| \le n^{\frac{\tau}{2}-\beta} \quad \forall z' \in \mathbf{L}_\tau.\] Noting that for any $z \in \mathbf{S}_{\tau}$, there is $z' \in \mathbf{L}_\tau$ such that $$|z-z'| \leq n^{-3\beta}$$ and that $s_{M_n}(z)$ and $s_n(z)$ are both $n^{2\beta}$-Lipschitz on $\mathbf{S}_{\tau}$, we obtain, for any $z \in \mathbf{S}_{\tau}$, \begin{align*} \left|s_{M_n}(z)-s_n(z) \right| &\leq |s_{M_n}(z)-s_{M_n}(z')|+|s_{M_n}(z')-s_n(z')|+|s_n(z')-s_n(z)|\\ &< 2n^{2\beta}|z-z'|+n^{\frac{\tau}{2}-\beta} \leq n^{\tau-\beta}. \end{align*} This means that $$\bigcap_{z \in \mathbf{L}_\tau} \mathcal{E}_{n,\frac{\tau}{2}}(z) \subset \bigcap_{z \in \mathbf{S}_{\tau}} \mathcal{E}_{n,\tau}(z).$$ Hence by (\ref{deviation2}) we have $$\mathbb{P}\left(\bigcap_{z \in \mathbf{S}_{\tau}} \mathcal{E}_{n,\tau}(z)\right) \geq \mathbb{P}\left(\bigcap_{z \in \mathbf{L}_\tau} \mathcal{E}_{n,\frac{\tau}{2}}(z)\right) \geq 1-n^{-D}$$ for all $n > N(c,\gamma_1,\gamma_2,\tau,D)$. Combining the above inequality with Lemma \ref{EStieltjes} completes the proof of Theorem \ref{Stieltjes}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm1-2}] As a standard application of the Helffer-Sj\"{o}strand formula via complex analysis, Theorem \ref{thm1-2} can be derived directly from Theorem \ref{Stieltjes}. 
This is quite well-known, and the computation is routine. Interested readers may refer to \cite[Section 8]{Local law} for a very similar analysis. We omit the details. \end{proof} \section{Proof of Theorem \ref{thm2}}\label{proof2} In this section we give a detailed proof of Theorem \ref{thm2}, where the condition that $d^\bot \geq 5$ plays an important role. Recall from the beginning of Section \ref{proof} that $\mathcal{C}$ is a linear code of length $n$ over $\mathbb{F}_q$ with $d^\bot \geq 5$, $\psi$ is the standard additive character on $\mathbb{F}_q$, extended component-wise to $\mathbb{F}_q^n$, $\mathcal{D}=\psi(\mathcal{C})$, and $\Phi_n$ is a $p \times n$ random matrix whose rows are selected uniformly and independently from $\mathcal{D}$. This makes $\mathcal{D}^p$ a probability space, on which we use $\mathbb{E}$ to denote the expectation. Let $\mathcal{G}_n$ and $M_n$ be defined as in (\ref{3:gn}). Since all the entries of $\Phi_n$ are roots of unity, the diagonal entries of $M_n$ are all zero. Let $x_{jk}$ be the $(j,k)$-th entry of $\Phi_n$. The following properties of $x_{jk}$, while very simple, depend crucially on the condition that $d^\bot \ge 5$. \begin{lemma}\label{cor} For any $\ell \in [1\mathrel{{.}\,{.}}\nobreak p]$, we have (a) $\mathbb{E}(x_{\ell j}\overline{x}_{\ell k})=0$ if $j \neq k$; (b) $\mathbb{E}(x_{\ell j}x_{\ell t}\overline{x}_{\ell k}\overline{x}_{\ell s})=0$ if the indices $j,t,k,s$ do not come in pairs; if the indices come in pairs, then $|\mathbb{E}(x_{\ell j}x_{\ell t}\overline{x}_{\ell k}\overline{x}_{\ell s})| \leq 1$. \end{lemma} \begin{proof}[Proof of Lemma \ref{cor}] (a) It is easy to see that \[\mathbb{E}(x_{\ell j}\overline{x}_{\ell k})=\frac{1}{N}\sum_{\mathbf{c} \in \mathcal{C}}\psi(c_j-c_k)=\frac{1}{N}\sum_{\mathbf{c} \in \mathcal{C}}\psi(\mathbf{a}_1 \cdot \mathbf{c}),\] where $\mathbf{c}=(c_1,c_2,\cdots,c_n) \in \mathcal{C}$ and $\mathbf{a}_1=(0, \cdots, 0, 1, 0 \cdots, 0, -1, 0 \cdots, 0) \in \mathbb{F}_q^n$.
Here in $\mathbf{a}_1$ the 1 and $-1$ appear at the $j$-th and $k$-th entries respectively. Since $d^\bot \geq 5$, we have $\mathbf{a}_1 \notin \C^\bot$, and the desired result follows directly from (\ref{lem}). (b) It is easy to see that \[\mathbb{E}(x_{\ell j}x_{\ell t}\overline{x}_{\ell k}\overline{x}_{\ell s})=\frac{1}{N}\sum_{\mathbf{c} \in \mathcal{C}}\psi(c_j+c_t-c_k-c_s)=\frac{1}{N}\sum_{\mathbf{c} \in \mathcal{C}}\psi(\mathbf{a}_2 \cdot \mathbf{c}),\] where the vector $\mathbf{a}_2 \in \mathbb{F}_q^n$ is formed from the all-zero vector by adding $1$s to the $j$-th and $t$-th entries and then adding $-1$s to the $k$-th and $s$-th entries. If the indices $j,t,k,s$ do not come in pairs, then $0 \ne \mathrm{wt}(\mathbf{a}_2) \le 4$. Since $d^\bot \geq 5$, we have $\mathbb{E}(x_{\ell j}x_{\ell t}\overline{x}_{\ell k}\overline{x}_{\ell s})=0$ by (\ref{lem}). The second statement of \emph{(b)} is trivial since $|x_{ij}|=1$ for any $i,j$. \end{proof} For any $\ell \in [1 \mathrel{{.}\,{.}}\nobreak p]$, let $\Phi_n^{(\ell)}$ be the $p \times n$ matrix obtained from $\Phi_n$ by changing the whole $\ell$-th row to {\bf 0}. Define \[ \G^{(\ell)}:=\frac{1}{n}\Phi_n^{(\ell)}{\Phi_n^{(\ell)}}^*, \quad M_n^{(\ell)}:=\sqrt{\frac{n}{p}}\left(\G^{(\ell)}-I_p\right).\] Denote by $\omega(\ell)$ the $\ell$-th row of $\Phi_n$, and $\mathbf{m}_{\ell}$ the $\ell$-th column of $M_n$. It is easy to see that \begin{eqnarray} \label{4:ml} \mathbf{m}_{\ell}=\frac{1}{\sqrt{pn}}\Phi_n^{(\ell)}\omega(\ell)^*.\end{eqnarray} Let \[ G:=G(z)=\left(M_n-z I_p\right)^{-1}, \quad G^{(\ell)}:=G^{(\ell)}(z)=\left(M_n^{(\ell)}-z I_p\right)^{-1}\] be the Green functions of $M_n$ and $M_n^{(\ell)}$ respectively for the complex variable $z \in \mathbb{C}^+$. For the Green function $G$, we start with the resolvent identity (\ref{diagonal}) for $T=\emptyset$.
Using (\ref{4:ml}), we can express the third term on the right side of (\ref{diagonal}) as \begin{align*} \mathbf{m}_{\ell}^*G^{(\ell)}\mathbf{m}_{\ell}&=\frac{1}{pn}\omega(\ell)\Phi_n^{(\ell)*}G^{(\ell)}\Phi_n^{(\ell)}\omega(\ell)^* \nonumber\\ &=\frac{1}{pn}{\bf Tr} \left({\Phi_n^{(\ell)}}^*G^{(\ell)}\Phi_n^{(\ell)}\omega(\ell)^*\omega(\ell) \right). \end{align*} By the identity $$(\omega(\ell)^*\omega(\ell))_{jk}=\delta_{jk}+(1-\delta_{jk})x_{\ell j}\overline{x}_{\ell k},$$ the right hand side can be further expressed as \begin{align*} \frac{1}{pn}{\bf Tr} \left({\Phi_n^{(\ell)}}^*G^{(\ell)}\Phi_n^{(\ell)}\right)+Z_\ell =\frac{1}{p}{\bf Tr} \left(G^{(\ell)}\G^{(\ell)}\right)+Z_\ell, \end{align*} where \begin{equation}\label{Zl} Z_\ell=\sum_{j \neq k} a_{jk} x_{\ell j}\overline{x}_{\ell k}. \end{equation} Here the indices $j,k$ vary in $[1\mathrel{{.}\,{.}}\nobreak n]$ and $a_{jk}$ is the $(j,k)$-th entry of the $n \times n$ matrix $(a_{jk})$ given by \begin{equation}\label{ajk} (a_{jk})=\frac{1}{pn}{\Phi_n^{(\ell)}}^*G^{(\ell)}\Phi_n^{(\ell)}. \end{equation} Hence the resolvent identity (\ref{diagonal}) yields \begin{align*} \frac{1}{G_{\ell\ell}}&=M_{\ell\ell}-z-\frac{1}{p}{\bf Tr} \left(G^{(\ell)}\G^{(\ell)}\right)-Z_\ell \\ &=-z-\frac{1}{p}{\bf Tr} G^{(\ell)}\left(\sqrt{\frac{p}{n}}M_n^{(\ell)}+I_p\right)-Z_\ell. \end{align*} Expanding the second term on the right, we obtain \begin{align} \frac{1}{G_{\ell\ell}}=-z-s_n(z)+Y_\ell,\label{diagonal2} \end{align} where \begin{equation}\label{Yl} Y_\ell=s_n(z)-\sqrt\frac{p}{n}-\left(\frac{1}{p}+\frac{z}{\sqrt{pn}}\right){\bf Tr} G^{(\ell)}-Z_\ell. \end{equation} \subsection{Estimates of $Z_\ell$ and $Y_\ell$} The random variables $Z_{\ell}$ and $Y_{\ell}$ depend on the complex value $z =E+\mathrm{i} \eta \in \mathbb{C}^+$. For any fixed constant $\tau>0$, recall $\Gamma_{\tau}$ defined in (\ref{st}). Throughout this section we always assume $z \in \Gamma_{\tau}$. \begin{lemma}\label{Zl2} Let $z \in \Gamma_{\tau}$.
Then for any $\ell \in [1\mathrel{{.}\,{.}}\nobreak p]$, we have (a) $\mathbb{E}^{(\ell)}Z_\ell=\mathbb{E}Z_\ell=0$. Here $\mathbb{E}^{(\ell)}$ is the conditional expectation given $\{x_{jk}: j \ne \ell\}$; (b) $\mathbb{E}|Z_\ell|^2=O_{\tau}(p^{-1}\eta^{-2})$. \end{lemma} \begin{proof}[Proof of Lemma \ref{Zl2}] (a) Since the rows of $\Phi_n$ are independent, the entries $a_{jk}$ as defined in (\ref{ajk}) are independent of $x_{\ell j}$ and $x_{\ell k}$. Hence from the definition of $Z_\ell$ in (\ref{Zl}) and statement (a) of Lemma \ref{cor}, we have $$\mathbb{E}^{(\ell)}Z_\ell=\sum_{j \neq k}a_{jk}\mathbb{E}(x_{\ell j}\overline{x}_{\ell k})=0.$$ The proof of the result on $\mathbb{E}Z_\ell$ is similar, with $a_{jk}$ replaced by $\mathbb{E} a_{jk}$. (b) Expanding $|Z_\ell|^2$ and taking expectation $\mathbb{E}$ inside, noting that the rows of $\Phi_n$ are independent, we have \begin{align*} \mathbb{E}|Z_\ell|^2&=\mathbb{E}\left|\sum_{j \neq k}a_{jk}x_{\ell j}\overline{x}_{\ell k}\right|^2\\ &=\sum_{\substack{j\neq k\\s\neq t}}\mathbb{E}(a_{jk}\overline{a}_{st})\mathbb{E}(x_{\ell j}x_{\ell t}\overline{x}_{\ell k}\overline{x}_{\ell s}). \end{align*} Since $d^\bot \geq 5$, by using statement (b) of Lemma \ref{cor}, we find $$\mathbb{E}|Z_\ell|^2 \leq C\sum_{j,k} \mathbb{E}|a_{jk}|^2=C\mathbb{E} {\bf Tr} ((a_{jk})(a_{jk})^*),$$ where $C$ is an absolute constant which may be different in each appearance. Using the definition of $(a_{jk})$ in (\ref{ajk}) we have \begin{align*} \mathbb{E}|Z_\ell|^2 &\leq\frac{C}{p^2}\mathbb{E} {\bf Tr} \left(\frac{1}{n^2}{\Phi_n^{(\ell)}}^*G^{(\ell)}\Phi_n^{(\ell)}{\Phi_n^{(\ell)}}^*G^{(\ell)}(\bar{z})\Phi_n^{(\ell)}\right)\\ &=\frac{C}{p^2}\mathbb{E} {\bf Tr} [(M_n^{(\ell)}-z)^{-1}\G^{(\ell)}(M_n^{(\ell)}-\bar{z})^{-1}\G^{(\ell)}]\\ &=\frac{C}{p^2}\mathbb{E} \sum_{j=1}^{p} \frac{\left(\sqrt\frac{p}{n}\lambda_j^{(\ell)}+1\right)^2}{(\lambda_j^{(\ell)}-z)(\lambda_j^{(\ell)}-\bar{z})}.
\end{align*} Expanding the terms on the right, we obtain \begin{align*} \mathbb{E}|Z_\ell|^2 &\leq \frac{C}{p^2}\mathbb{E} \sum_{j=1}^{p}\left(\frac{p}{n}\left(1+\frac{|z|^2}{|\lambda_j^{(\ell)}-z|^2}\right)+\frac{1}{|\lambda_j^{(\ell)}-z|^2}\right)\\ &\leq C\left(\frac{1}{n}+\frac{|z|^2}{n\eta^2}+\frac{1}{p\eta^2}\right)\leq \frac{C_{\tau}}{p\eta^2}. \end{align*} Here $\lambda_j^{(\ell)} \in \mathbb{R}$ $(1 \leq j \leq p)$ are the eigenvalues of $M_n^{(\ell)}$, and $C_{\tau}$ is a positive constant depending only on $\tau$ whose value may vary in each occurrence. \end{proof} The above estimates lead to the following estimates for $Y_\ell$. \begin{lemma}\label{Yl2} Let $z \in \Gamma_{\tau}$. Then for any $\ell \in [1\mathrel{{.}\,{.}}\nobreak p]$, we have (a) $\mathbb{E} Y_\ell=O_{\tau}\left(\eta^{-1}(p^{-1}+(p/n)^{\frac{1}{2}})\right)$; (b) $\mathbb{E}|Y_\ell|^2=O_{\tau}\left(\eta^{-2}(p^{-1}+p/n)\right)$. \end{lemma} \begin{proof}[Proof of Lemma \ref{Yl2}] (a) Taking the expectation of $Y_\ell$ in (\ref{Yl}) and noting that $\mathbb{E} Z_\ell=0$, we get \begin{align*} \mathbb{E} Y_\ell &=\frac{1}{p}\mathbb{E}({\bf Tr} G-{\bf Tr} G^{(\ell)})-\sqrt\frac{p}{n}-\frac{z}{\sqrt{pn}}\mathbb{E}{\bf Tr} G^{(\ell)}. \end{align*} By the eigenvalue interlacing property in (\ref{Interlacing}) and the trivial bound $|G_{jj}^{(\ell)}| \leq \eta^{-1}$, we get $$|\mathbb{E} Y_\ell| \leq \frac{C}{p\eta}+\sqrt\frac{p}{n}+\sqrt\frac{p}{n}\frac{|z|}{\eta} \leq \frac{C}{p\eta}+\frac{C_{\tau}}{\eta}\sqrt\frac{p}{n}.$$ (b) We split $\mathbb{E}|Y_\ell|^2$ as \begin{equation}\label{Yl3} \mathbb{E}|Y_\ell|^2=\mathbb{E}|Y_\ell-\mathbb{E}Y_\ell|^2+|\mathbb{E}Y_\ell|^2=V_1+V_2+|\mathbb{E}Y_\ell|^2, \end{equation} where $$V_1=\mathbb{E}|Y_\ell-\mathbb{E}^{(\ell)}Y_\ell|^2, \quad V_2=\mathbb{E}|\mathbb{E}^{(\ell)}Y_\ell-\mathbb{E}Y_\ell|^2. $$ We first estimate $V_1$.
Using (a) of Lemma \ref{Zl2}, we see that $$Y_\ell-\mathbb{E}^{(\ell)}Y_\ell=-Z_\ell+\mathbb{E}^{(\ell)}Z_\ell=-Z_\ell.$$ Hence by (b) of Lemma \ref{Zl2} we obtain \begin{equation}\label{V1} V_1=\mathbb{E}|Z_\ell|^2=O_{\tau}(p^{-1}\eta^{-2}). \end{equation} Next we estimate $V_2$. Again by Lemma \ref{Zl2} we have \begin{align*} \mathbb{E}^{(\ell)}Y_\ell-\mathbb{E}Y_\ell &=-\left(\frac{1}{p}+\frac{z}{\sqrt{pn}}\right)({\bf Tr} G^{(\ell)}-\mathbb{E}{\bf Tr} G^{(\ell)}). \end{align*} So we have \begin{align}\label{V2} V_2&=\left|\frac{1}{p}+\frac{z}{\sqrt{pn}}\right|^2\mathbb{E}|{\bf Tr} G^{(\ell)}-\mathbb{E}{\bf Tr} G^{(\ell)}|^2\nonumber\\ &=\left|\frac{1}{p}+\frac{z}{\sqrt{pn}}\right|^2\sum_{m \neq \ell}\mathbb{E}\left|\mathbb{E}^{(T_{m-1})}{\bf Tr} G^{(\ell)}-\mathbb{E}^{(T_m)}{\bf Tr} G^{(\ell)}\right|^2. \end{align} Here we denote $T_0:=\emptyset$ and $T_m:=[1\mathrel{{.}\,{.}}\nobreak m]$ for any $m \in [1\mathrel{{.}\,{.}}\nobreak p]$, and for any subset $T \subset [1\mathrel{{.}\,{.}}\nobreak p]$, we write $\mathbb{E}^{(T)}$ for the conditional expectation given $\{x_{jk}: j \notin T\}$. The second equality follows from applying the law of total variance successively along the rows of $\Phi_n$. For $m \neq \ell$, writing $\gamma_m:=\mathbb{E}^{(T_{m-1})}{\bf Tr} G^{(\ell)}-\mathbb{E}^{(T_m)}{\bf Tr} G^{(\ell)}$, one can easily check that $$\gamma_m=\mathbb{E}^{(T_{m-1})}\sigma_m-\mathbb{E}^{(T_m)}\sigma_m, $$ where $\sigma_m:={\bf Tr} G^{(\ell)}-{\bf Tr} G^{(\ell,m)}$. By (\ref{Interlacing}) we have $|\gamma_m| \leq C\eta^{-1}$. Hence we obtain $$V_2 \leq C \left(\frac{1}{p^2}+\frac{|z|^2}{pn}\right)\left(\frac{p}{\eta^2}\right) \leq \frac{C_{\tau}}{p\eta^2}.$$ Plugging the estimates of $\mathbb{E}Y_\ell$ in statement (a), $V_1$ in (\ref{V1}) and $V_2$ above into equation (\ref{Yl3}), we obtain the desired estimate of $\mathbb{E}|Y_\ell|^2$. \end{proof} \subsection{Proof of Theorem \ref{thm2}} We can now complete the proof of Theorem \ref{thm2}.
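Before doing so, we note that the successive law-of-total-variance decomposition used above for $V_2$ can be verified exactly on a toy statistic of independent variables. The following Python sketch (all names hypothetical, unrelated to the matrix model) checks that the variance equals the sum of the expected squared martingale differences obtained by integrating out one coordinate at a time:

```python
import itertools

pts = list(itertools.product([0, 1], repeat=3))  # three independent fair bits

def f(x):
    # an arbitrary statistic of the three bits
    return (x[0] + 2 * x[1]) * (1 + x[2])

def cond_exp(x, m):
    # analogue of E^{(T_m)}: average f over the first m coordinates
    vals = [f(y + x[m:]) for y in itertools.product([0, 1], repeat=m)]
    return sum(vals) / len(vals)

mean = cond_exp((0, 0, 0), 3)
var = sum((f(x) - mean) ** 2 for x in pts) / len(pts)

# telescoping martingale differences: f - E f = sum_m (E^{(T_{m-1})} f - E^{(T_m)} f),
# and their orthogonality gives Var f = sum_m E |E^{(T_{m-1})} f - E^{(T_m)} f|^2
mart = sum(
    sum((cond_exp(x, m - 1) - cond_exp(x, m)) ** 2 for x in pts) / len(pts)
    for m in range(1, 4)
)
assert abs(var - mart) < 1e-12
```

The orthogonality of the successive differences is exactly why no cross terms appear in (\ref{V2}).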
\begin{proof}[Proof of Theorem \ref{thm2}] We write (\ref{diagonal2}) as \begin{eqnarray} \label{4:gll} G_{\ell\ell}=\frac{1}{\alpha_n+Y_\ell},\end{eqnarray} where $$\alpha_n=-z-s_n(z).$$ Taking expectations on both sides of (\ref{4:gll}), we obtain \begin{equation}\label{diagonal3} \mathbb{E} G_{\ell\ell}=\frac{1}{\alpha_n}+A_\ell=\frac{1}{\alpha_n+\Delta_\ell}, \end{equation} where \begin{equation}\label{Al} A_\ell=\mathbb{E}\left(\frac{1}{\alpha_n+Y_\ell}\right)-\frac{1}{\alpha_n}=-\frac{1}{\alpha_n^2}\mathbb{E} Y_\ell+\frac{1}{\alpha_n^2}\mathbb{E} \left(\frac{Y_\ell^2}{\alpha_n+Y_\ell}\right), \end{equation} and \begin{equation}\label{dl} \Delta_\ell=\left(\frac{1}{\alpha_n}+A_\ell\right)^{-1}-\alpha_n=-\frac{\alpha_n^2 A_\ell}{1+\alpha_n A_\ell}. \end{equation} For $A_\ell$, since \begin{eqnarray*} \left|G_{\ell\ell}\right|=\left|\frac{1}{\alpha_n+Y_\ell}\right| \le \eta^{-1},\end{eqnarray*} we obtain \begin{equation}\label{Al2} \left|\alpha_n^2 A_\ell \right|=\left|-\mathbb{E} Y_\ell+\mathbb{E}\frac{Y_\ell^2}{\alpha_n+Y_\ell}\right|\leq |\mathbb{E} Y_\ell|+\frac{1}{\eta}\mathbb{E}|Y_\ell|^2. \end{equation} For $\Delta_\ell$, using the fact that $|\alpha_n| \ge \eta$ and Lemma \ref{Yl2} we obtain \begin{eqnarray*} \label{4:dl} \left|\Delta_\ell \right| \leq \frac{C_{\tau}}{\eta^3}\left(\frac{1}{p}+\sqrt\frac{p}{n}\right) \end{eqnarray*} for any $z \in \Gamma_{\tau}$. Summing over all $\ell \in [1\mathrel{{.}\,{.}}\nobreak p]$ and then dividing both sides of (\ref{diagonal3}) by $p$, it is easy to see that, writing \[s_n(z)= \frac{1}{\alpha_n+\Delta(z)},\] the quantity $\Delta(z)$ satisfies the same bound as $\Delta_\ell$ above. This completes the proof of Theorem \ref{thm2}. \end{proof} \section*{Acknowledgments} The research of M. Xiong was supported by RGC grant number 16303615 from Hong Kong. \end{document}
\begin{document} \begin{abstract} Let $K$ be a finite extension of $\mathbf{Q}_p$ with residue field $\mathbf{F}_q$ and let $P(T) = T^d + a_{d-1}T^{d-1} + \cdots +a_1 T$ where $d$ is a power of $q$ and $a_i \in \mathfrak{m}_K$ for all $i$. Let $u_0$ be a uniformizer of $\mathcal{O}_K$ and let $\{u_n\}_{n \geqslant 0}$ be a sequence of elements of $\overline{\mathbf{Q}}_p$ such that $P(u_{n+1}) = u_n$ for all $n \geqslant 0$. Let $K_\infty$ be the field generated over $K$ by all the $u_n$. If $K_\infty / K$ is a Galois extension, then it is abelian, and our main result is that it is generated by the torsion points of a relative Lubin-Tate group (a generalization of the usual Lubin-Tate groups). The proof of this involves generalizing the construction of Coleman power series, constructing some $p$-adic periods in Fontaine's rings, and using local class field theory. \end{abstract} \begin{altabstract} Soit $K$ une extension finie de $\mathbf{Q}_p$ de corps r\'esiduel $\mathbf{F}_q$ et $P(T) = T^d + a_{d-1}T^{d-1} + \cdots +a_1 T$ o\`u $d$ est une puissance de $q$ et $a_i \in \mathfrak{m}_K$ pour tout $i$. Soit $u_0$ une uniformisante de $\mathcal{O}_K$ et $\{u_n\}_{n \geqslant 0}$ une suite d'\'el\'ements de $\overline{\mathbf{Q}}_p$ telle que $P(u_{n+1}) = u_n$ pour tout $n \geqslant 0$. Soit $K_\infty$ l'extension de $K$ engendr\'ee par les $u_n$. Si $K_\infty / K$ est Galoisienne, alors elle est ab\'elienne, et notre r\'esultat principal est qu'elle est engendr\'ee par les points de torsion d'un groupe de Lubin-Tate relatif (une g\'en\'eralisation des groupes de Lubin-Tate usuels). Pour prouver cela, nous g\'en\'eralisons la construction des s\'eries de Coleman, construisons des p\'eriodes $p$-adiques dans les anneaux de Fontaine et utilisons la th\'eorie du corps de classes local. 
\end{altabstract} \subjclass{11S05; 11S15; 11S20; 11S31; 11S82; 13F25; 14F30} \keywords{Iterated extension; Coleman power series; Field of norms; $p$-adic dynamical system; $p$-adic Hodge theory; Local class field theory; Lubin-Tate group; Chebyshev polynomial} \thanks{This material is partially based upon work supported by the NSF under Grant No.\ 0932078 000 while the author was in residence at the MSRI in Berkeley, California, during the Fall 2014 semester.} \dedicatory{To Glenn Stevens, on the occasion of his 60th birthday} \maketitle \tableofcontents \setlength{\baselineskip}{18pt} \section*{Introduction}\label{intro} Let $K$ be a field, let $P(T) \in K[T]$ be a polynomial of degree $d \geqslant 1$, choose $u_0 \in K$ and for $n \geqslant 0$, let $u_{n+1} \in \overline{K}$ be such that $P(u_{n+1}) = u_n$. The field $K_\infty$ generated over $K$ by all the $u_n$ is called an \emph{iterated extension} of $K$. These iterated extensions and the resulting Galois groups have been studied in various contexts; see for instance \cite{O85}, \cite{S92}, \cite{AHM} and \cite{BJ}. In this article, we focus on a special situation: $p \neq 2$ is a prime number, $K$ is a finite extension of $\mathbf{Q}_p$, with ring of integers $\mathcal{O}_K$, whose maximal ideal is $\mathfrak{m}_K$ and whose residue field is $k$. Let $d$ be a power of $\mathrm{Card}(k)$, and let $P(T) = T^d + a_{d-1} T^{d-1} + \cdots + a_1 T$ be a monic polynomial of degree $d$ with $a_i \in \mathfrak{m}_K$ for $1 \leqslant i \leqslant d-1$. Let $u_0$ be a uniformizer of $\mathcal{O}_K$ and define a sequence $\{ u_n \}_{n \geqslant 0}$ by letting $u_{n+1}$ be a root of $P(T) = u_n$. Let $K_n = K(u_n)$ and $K_\infty=\cup_{n \geqslant 1} K_n$. This iterated extension is called a \emph{Frobenius-iterate extension}, after \cite{CD14} (whose definition is a bit more general than ours). The question that we consider in this article is: which \emph{Galois} extensions $K_\infty/K$ are Frobenius-iterate?
This question is inspired by the observation, made in remark 7.16 of \cite{CD14}, that it follows from the main results of ibid.\ and \cite{LTFN} that if $K_\infty/K$ is Frobenius-iterate and Galois, then it is necessarily abelian. Here, we prove a much more precise result. First, let us recall that in \cite{DSLT}, de Shalit gives a generalization of the construction of Lubin-Tate formal groups (for which see \cite{LT}). A \emph{relative Lubin-Tate group} is a formal group $\mathrm{S}$ that is attached to an unramified extension $E/F$ and to an element $\alpha$ of $F$ of valuation $[E:F]$. The extension $E_\infty^{\mathrm{S}}/F$ generated over $F$ by the torsion points of this formal group is the subextension of $F^{\mathrm{ab}}$ cut out via local class field theory by the subgroup of $F^\times$ generated by $\alpha$. If $E=F$, we recover the classical Lubin-Tate groups. \begin{enonce*}{Theorem A} Let $K$ be a finite Galois extension of $\mathbf{Q}_p$, and let $K_\infty/K$ be a Frobenius-iterate extension. If $K_\infty/K$ is Galois, then there exists a subfield $F$ of $K$, and a relative Lubin-Tate group $\mathrm{S}$, relative to the extension $F^{\mathrm{unr}} \cap K$ of $F$, such that if $K_\infty^{\mathrm{S}}$ denotes the extension of $K$ generated by the torsion points of $\mathrm{S}$, then $K_\infty \subset K_\infty^{\mathrm{S}}$ and $K_\infty^{\mathrm{S}} / K_\infty$ is a finite extension. \end{enonce*} This is theorem \ref{dslt}. Conversely, it is easy to see that the extension coming from a relative Lubin-Tate group is Frobenius-iterate after the first layer (see example \ref{expolit}). The proof of theorem A is quite indirect. We start with the observation that if $K_\infty / K$ is a Frobenius-iterate extension, not necessarily Galois, then we can generalize the construction of Coleman's power series (see \cite{C79}).
Let $\varprojlim \mathcal{O}_{K_n}$ denote the set of sequences $\{x_n\}_{n \geqslant 0}$ with $x_n \in \mathcal{O}_{K_n}$ and such that $\mathrm{N}_{K_{n+1}/K_n}(x_{n+1}) = x_n$ for all $n \geqslant 0$. \begin{enonce*}{Theorem B} We have $\{u_n\}_{n \geqslant 0} \in \varprojlim \mathcal{O}_{K_n}$ and if $\{x_n\}_{n \geqslant 0} \in \varprojlim \mathcal{O}_{K_n}$, then there exists a unique power series $\mathrm{Col}_x(T) \in \mathcal{O}_K \dcroc{T}$ such that $x_n = \mathrm{Col}_x(u_n)$ for all $n \geqslant 0$. \end{enonce*} Suppose now that $K_\infty/K$ is Galois, and let $\Gamma=\mathrm{Gal}(K_\infty/K)$. The results of \cite{CD14} and \cite{LTFN} imply that $K_\infty/K$ is abelian, so that $K_n/K$ is Galois for all $n \geqslant 1$. If $g \in \Gamma$, then $\{g(u_n)\}_{n \geqslant 0} \in \varprojlim \mathcal{O}_{K_n}$, so that by theorem B, we get a power series $\mathrm{Col}_g(T) \in \mathcal{O}_K \dcroc{T}$ such that $g(u_n) = \mathrm{Col}_g(u_n)$ for all $n \geqslant 0$. Let $\widetilde{\mathbf{E}}^+ = \varprojlim_{x \mapsto x^d} \mathcal{O}_{\mathbf{C}_p} / p$, let $K_0 = \mathbf{Q}_p^{\mathrm{unr}} \cap K$ and let $\tilde{\mathbf{A}}^+ = \mathcal{O}_K \otimes_{\mathcal{O}_{K_0}} W(\widetilde{\mathbf{E}}^+)$ be Fontaine's rings of periods (see \cite{FP}). The element $\{u_n\}_{n \geqslant 0}$ gives rise to an element $\overline{u} \in \widetilde{\mathbf{E}}^+$. \begin{enonce*}{Theorem C} There exists $u \in \tilde{\mathbf{A}}^+$ whose image in $\widetilde{\mathbf{E}}^+$ is $\overline{u}$, and such that $\varphi_d(u) = P(u)$. We have $g(u)= \mathrm{Col}_g(u)$ if $g \in \Gamma$. \end{enonce*} The power series $\mathrm{Col}_g(T)$ satisfies the functional equation $\mathrm{Col}_g \circ P(T) = P \circ \mathrm{Col}_g(T)$. The study of $p$-adic power series that commute under composition was taken up by Lubin in \cite{L94}. 
In \S 6 of ibid., Lubin writes that ``experimental evidence seems to suggest that for an invertible series to commute with a noninvertible series, there must be a formal group somehow in the background''. There are a number of results in this direction, see for instance \cite{LMS}, \cite{SS13} and \cite{JS14}. In our setting, the series $\{ \mathrm{Col}_g(T) \}_{g \in \Gamma}$ commute with $P(T)$ and theorem A says that indeed, there is a formal group that accounts for this. Let us now return to the proof of theorem A. We first show that $P'(T) \neq 0$. It is then proved in \S 1 of \cite{L94} that given such a $P(T)$, a power series $\mathrm{Col}_g(T)$ that commutes with $P(T)$ is determined by $\mathrm{Col}_g'(0)$. If we let $\eta(g) = \mathrm{Col}_g'(0)$, we get the following: the map $\eta : \Gamma \to \mathcal{O}_K^\times$ is an injective character. In order to finish the proof of theorem A, we use some $p$-adic Hodge theory. Let $\mathrm{L}_P(T) \in K\dcroc{T}$ be the \emph{logarithm} attached to $P(T)$ and constructed in \cite{L94}; it converges on the open unit disk, and satisfies $\mathrm{L}_P \circ P(T) = P'(0) \cdot \mathrm{L}_P(T)$ as well as $\mathrm{L}_P \circ \mathrm{Col}_g(T) = \eta(g) \cdot \mathrm{L}_P(T)$ for $g \in \Gamma$. In particular, we can consider $\mathrm{L}_P(u)$ as an element of the ring $\mathbf{B}_{\mathrm{cris}}^+$ (see \cite{FP} for the rings of periods $\mathbf{B}_{\mathrm{cris}}^+$ and $\mathbf{B}_{\mathrm{dR}}$), which satisfies $g(\mathrm{L}_P(u)) = \eta(g) \cdot \mathrm{L}_P(u)$. More generally, if $\tau \in \mathrm{Gal}(K/\mathbf{Q}_p)$, then we can twist $u$ by $\tau$ to get some elements $u_\tau \in \tilde{\mathbf{A}}^+$ and $\mathrm{L}_P^\tau(u_\tau) \in \mathbf{B}_{\mathrm{cris}}^+$, satisfying $g(\mathrm{L}_P^\tau(u_\tau)) = \tau(\eta(g)) \cdot \mathrm{L}_P^\tau(u_\tau)$. The elements $\{\mathrm{L}_P^\tau(u_\tau)\}_\tau$ are crystalline periods for the representation arising from $\eta$. 
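Lubin constructs $\mathrm{L}_P$ as the limit $\mathrm{L}_P(T) = \lim_{n \to +\infty} P^{\circ n}(T)/P'(0)^n$, which converges coefficientwise for the $p$-adic topology. A small Python sketch (with the illustrative choice $p=3$ and $P(T)=3T+T^3$, which is not an example taken from this article) tracks the coefficient of $T^3$ in $P^{\circ n}(T)/3^n$ and checks that the $3$-adic valuations of the successive differences grow:

```python
from fractions import Fraction

def compose(f, g, trunc):
    """Coefficients of f(g(T)) modulo T^trunc (lists, ascending powers)."""
    out = [Fraction(0)] * trunc
    for c in reversed(f):                     # Horner: out <- out * g + c
        new = [Fraction(0)] * trunc
        for i, a in enumerate(out):
            if a:
                for j, b in enumerate(g):
                    if b and i + j < trunc:
                        new[i + j] += a * b
        new[0] += c
        out = new
    return out

def val3(x):
    """3-adic valuation of a nonzero rational."""
    n, d, v = x.numerator, x.denominator, 0
    while n % 3 == 0:
        n, v = n // 3, v + 1
    while d % 3 == 0:
        d, v = d // 3, v - 1
    return v

P = [Fraction(0), Fraction(3), Fraction(0), Fraction(1)]  # P(T) = 3T + T^3, P'(0) = 3

iterate, coeffs = P[:4], []
for n in range(1, 5):
    coeffs.append(iterate[3] / Fraction(3) ** n)          # coeff of T^3 in P^{on}/3^n
    iterate = compose(P, iterate, 4)                      # P^{o(n+1)} mod T^4

# successive differences tend to 0 in the 3-adic metric: valuations 1, 3, 5
assert [val3(b - a) for a, b in zip(coeffs, coeffs[1:])] == [1, 3, 5]
```

It is this coefficientwise $p$-adic convergence that allows $\mathrm{L}_P(u)$, and more generally the periods $\mathrm{L}_P^\tau(u_\tau)$, to be evaluated in Fontaine's rings.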
Our main technical result concerning these periods is that the set of $\tau \in \mathrm{Gal}(K/\mathbf{Q}_p)$ such that $\mathrm{L}_P^\tau(u_\tau) \in \mathrm{Fil}^1 \mathbf{B}_{\mathrm{dR}}$ is a subgroup of $\mathrm{Gal}(K/\mathbf{Q}_p)$, and therefore cuts out a subfield $F$ of $K$. This allows us to prove the following. \begin{enonce*}{Theorem D} There exists a subfield $F$ of $K$, a Lubin-Tate character $\chi_K$ attached to a uniformizer of $K$, and an integer $r \geqslant 1$, such that $\eta = \mathrm{N}_{K/F}(\chi_K)^r$. \end{enonce*} Theorem A follows from theorem D by local class field theory: the extensions of $K$ corresponding to $\mathrm{N}_{K/F}(\chi_K)$ are precisely those that come from relative Lubin-Tate groups. At the end of \S \ref{lcft}, we give an example for which $r=2$. In this example, the Coleman power series $p$-adically interpolate Chebyshev polynomials. \section{Relative Lubin-Tate groups} \label{dsltsec} We recall de Shalit's construction (see \cite{DSLT}) of a family of formal groups that generalize Lubin-Tate groups. Let $F$ be a finite extension of $\mathbf{Q}_p$, with ring of integers $\mathcal{O}_F$ and residue field $k_F$ of cardinality $q$. Take $h \geqslant 1$ and let $E$ be the unramified extension of $F$ of degree $h$. Let $\varphi_q : E \to E$ denote the Frobenius map that lifts $[x \mapsto x^q]$. If $f(T) = \sum_{i \geqslant 0} f_i T^i \in E\dcroc{T}$, let $f^{\varphi_q}(T) = \sum_{i \geqslant 0} \varphi_q(f_i) T^i$. If $\alpha \in \mathcal{O}_F$ is such that $\mathrm{val}_F(\alpha) = h$, let $\mathcal{F}_\alpha$ be the set of power series $f(T) \in \mathcal{O}_E \dcroc{T}$ such that $f(T) = \pi T + \mathrm{O}(T^2)$ with $\mathrm{N}_{E/F}(\pi) = \alpha$ and such that $f(T) \equiv T^q \bmod{\mathfrak{m}_E \dcroc{T}}$. The set $\mathcal{F}_\alpha$ is nonempty, since $\mathrm{N}_{E/F}(E^\times)$ is the set of elements of $F^\times$ whose valuation is in $h \cdot \mathbf{Z}$. 
If $\mathrm{N}_{E/F}(\pi) = \alpha$, one can take $f(T)=\pi T+T^q$. The following theorem summarizes some of the results of \cite{DSLT} (see also \S IV of \cite{IK}). \begin{theo}\label{dsltmain} If $f(T) \in \mathcal{F}_\alpha$, then \begin{enumerate} \item there is a unique formal group law $\mathrm{S}(X,Y) \in \mathcal{O}_E \dcroc{X,Y}$ such that $\mathrm{S}^{\varphi_q} \circ f = f \circ \mathrm{S}$, and the isomorphism class of $\mathrm{S}$ depends only on $\alpha$; \item for all $a \in \mathcal{O}_F$, there exists a unique power series $[a](T) \in \mathcal{O}_E \dcroc{T}$ such that $[a](T) = aT + \mathrm{O}(T^2)$ and $[a](T) \in \mathrm{End}(\mathrm{S})$. \end{enumerate} Let $x_0 = 0$ and for $m \geqslant 0$, let $x_m \in \overline{\mathbf{Q}}_p$ be such that $f^{\varphi_q^m}(x_{m+1}) = x_m$ (with $x_1 \neq 0$). Let $E_m = E(x_m)$ and let $E_\infty^{\mathrm{S}} = \cup_{m \geqslant 1} E_m$. \begin{enumerate} \item The fields $E_m$ depend only on $\alpha$, and not on the choice of $f(T) \in \mathcal{F}_\alpha$; \item The extension $E_m/E$ is Galois, and its Galois group is isomorphic to $(\mathcal{O}_F/\mathfrak{m}_F^m)^\times$; \item $E_\infty^{\mathrm{S}} \subset F^{\mathrm{ab}}$ and $E_\infty^{\mathrm{S}}$ is the subfield of $F^{\mathrm{ab}}$ cut out by $\langle \alpha \rangle \subset F^\times$ via local class field theory. \end{enumerate} \end{theo} \begin{rema}\label{dshone} If $h=1$, then we recover the usual Lubin-Tate formal groups of \cite{LT}. \end{rema} \section{Frobenius-iterate extensions} \label{polit} Let $p \neq 2$ be a prime number, let $K$ be a finite extension of $\mathbf{Q}_p$, with ring of integers $\mathcal{O}_K$, whose maximal ideal is $\mathfrak{m}_K$ and whose residue field is $k$. Let $q=\mathrm{Card}(k)$, and let $\pi$ denote a uniformizer of $\mathcal{O}_K$. 
Let $d$ be a power of $q$, and let $P(T) = T^d + a_{d-1} T^{d-1} + \cdots + a_1 T$ be a monic polynomial of degree $d$ with $a_i \in \mathfrak{m}_K$ for $1 \leqslant i \leqslant d-1$. Let $u_0$ be a uniformizer of $\mathcal{O}_K$ and define a sequence $\{ u_n \}_{n \geqslant 0}$ by letting $u_{n+1}$ be a root of $P(T) = u_n$. Let $K_n = K(u_n)$. \begin{lemm}\label{pinunif} The extension $K_n/K$ is totally ramified of degree $d^n$, $u_n$ is a uniformizer of $\mathcal{O}_{K_n}$ and $\mathrm{N}_{K_{n+1}/K_n} ( u_{n+1} ) = u_n$. \end{lemm} \begin{proof} The first two assertions follow immediately from the theory of Newton polygons, and the last one from the fact that $P(T) - u_n$ is the minimal polynomial of $u_{n+1}$ over $K_n$, as well as the fact that $d$ is odd since $p \neq 2$. \end{proof} Let $K_\infty = \cup_{n \geqslant 1} K_n$. This is a totally ramified infinite and pro-$p$ extension of $K$. \begin{defi}\label{defpolit} We say that an extension $K_\infty/K$ is \emph{$\varphi$-iterate} if it is of the form above. \end{defi} This definition is inspired by the similar one that is given in definition 1.1 of \cite{CD14}. We require $P(T)$ to be a monic polynomial, instead of a more general power series as in ibid., in order to control the norm of $u_n$ and to ensure the good behavior of $K_{n+1}/K_n$. \begin{exem}\label{expolit} (i) If $P(T)=T^q$, then $K_\infty/K$ is a $\varphi$-iterate extension, which is the Kummer extension of $K$ corresponding to $\pi$. (ii) Let $\mathrm{LT}$ be a Lubin-Tate formal $\mathcal{O}_K$-module attached to $\pi$, and $K_n=K(\mathrm{LT}[\pi^n])$. The extension $K_\infty/K_1$ is $\varphi$-iterate with $P(T) = [\pi](T)$. (iii) More generally, let $\mathrm{S}$ be a relative Lubin-Tate group, relative to an extension $E/F$ and $\alpha \in F$ as in \S \ref{dsltsec}. The extension $E_\infty^{\mathrm{S}}/E_1$ is $\varphi$-iterate with $P(T) = [\alpha](T)$. 
\end{exem} \begin{proof} Item (ii) follows from applying (iii) with $K=E=F$, and we now prove (iii). We use the notation of theorem \ref{dsltmain}. Since the isomorphism class of $\mathrm{S}$ and the extension $E_\infty^{\mathrm{S}}/E$ only depend on $\alpha$, we can take $f(T) = \pi T + T^q$ where $\mathrm{N}_{E/F}(\pi) = \alpha$. Let $P(T) = f^{\varphi_q^{h-1}} \circ \cdots \circ f^{\varphi_q} \circ f(T) \in \mathcal{O}_E[T]$, so that $P(T) = [\alpha](T)$. The extension $E_{hm+1}$ is generated by $x_{hm+1}$ over $E_1$, and we have $P(x_{hm+1}) = x_{h(m-1)+1}$. The claim therefore follows from taking $u_m = x_{hm+1}$ for $m \geqslant 0$, and observing that since $\pi + u_0^{q-1} = 0$, $u_0$ is a uniformizer of $\mathcal{O}_{E_1}$. \end{proof} \section{Coleman power series} \label{colpow} Let us write $\varprojlim \mathcal{O}_{K_n}$ for the set of sequences $\{x_n\}_{n \geqslant 0}$ such that $x_n \in \mathcal{O}_{K_n}$ and such that $\mathrm{N}_{K_{n+1}/K_n}(x_{n+1})=x_n$ for $n \geqslant 0$. By lemma \ref{pinunif}, the sequence $\{ u_n \}_{n \geqslant 0}$ belongs to $\varprojlim \mathcal{O}_{K_n}$. The goal of this {\S} is to show the following theorem (theorem B). \begin{theo}\label{colexist} If $\{ x_n \}_{n \geqslant 0} \in \varprojlim \mathcal{O}_{K_n}$, then there exists a uniquely determined power series $\mathrm{Col}_x(T) \in \mathcal{O}_K \dcroc{T}$ such that $x_n = \mathrm{Col}_x(u_n)$ for all $n \geqslant 0$. \end{theo} Our proof follows the one that is given in \S 13 of \cite{W97}. The uniqueness is a consequence of the following well-known general principle. \begin{prop} \label{colunik} If $f(T) \in \mathcal{O}_K \dcroc{T}$ is nonzero, then $f(T)$ has only finitely many zeroes in the open unit disk. \end{prop} In order to prove the existence part of theorem \ref{colexist}, we start by generalizing Coleman's norm map (see \cite{C79} for the original construction, and \S 2.3 of \cite{F90} for the generalization that we use).
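The mechanism behind such norm maps can be previewed in the simplest case: modulo $\pi$ we have $P(T) \equiv T^d$, and the norm of $f(T)$ down to the subring of series in $T^d$ is the product of the twists of $f$ by the $d$-th roots of unity. The following numerical sketch over $\mathbf{C}$ (purely illustrative; the construction in the text is $p$-adic) checks this for $d=3$ and an arbitrary test polynomial:

```python
import numpy as np

d = 3
f = np.array([1.0, 1.0, 2.0])                  # f(T) = 1 + T + 2T^2, an arbitrary test polynomial
zetas = np.exp(2j * np.pi * np.arange(d) / d)  # the d-th roots of unity

# norm of f: the product of f(zeta * T) over all d-th roots of unity zeta
norm = np.array([1.0 + 0j])
for z in zetas:
    f_twist = f * z ** np.arange(len(f))       # coefficients of f(z T)
    norm = np.convolve(norm, f_twist)

# the product only involves powers of T divisible by d ...
assert np.allclose(norm[np.arange(len(norm)) % d != 0], 0.0)
g = norm[::d].real                             # ... so it is a polynomial g in S = T^d

# check the defining property: g evaluated at t^d equals the product of f over
# the d preimages of t^d under T -> T^d
t = 0.7
lhs = np.polyval(g[::-1], t ** d)              # np.polyval wants highest-degree-first coefficients
rhs = np.prod([np.polyval(f[::-1], z * t) for z in zetas]).real
assert np.isclose(lhs, rhs)
```

The vanishing of the coefficients in degrees not divisible by $d$ is what makes the norm a well-defined series in $T^d$.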
The ring $\mathcal{O}_K\dcroc{T}$ is a free $\mathcal{O}_K\dcroc{P(T)}$-module of rank $d$. If $f(T) \in \mathcal{O}_K\dcroc{T}$, let $\mathcal{N}_P(f)(T) \in \mathcal{O}_K\dcroc{T}$ be defined by the requirement that $\mathcal{N}_P(f)(P(T)) = \mathrm{N}_{\mathcal{O}_K\dcroc{T} / \mathcal{O}_K\dcroc{P(T)}} (f(T))$. For example, $\mathcal{N}_P(T)=T$ since $d$ is odd. \begin{prop}\label{pronmp} The map $\mathcal{N}_P$ has the following properties. \begin{enumerate} \item If $f(T) \in \mathcal{O}_K\dcroc{T}$, then $\mathcal{N}_P(f)(u_n) = \mathrm{N}_{K_{n+1}/K_n} (f(u_{n+1}))$; \item If $k \geqslant 1$ and $f(T) \in 1+ \pi^k \mathcal{O}_K\dcroc{T}$, then $\mathcal{N}_P(f)(T) \in 1+ \pi^{k+1} \mathcal{O}_K\dcroc{T}$; \item If $f(T) \in \mathcal{O}_K \dcroc{T}$, then $\mathcal{N}_P(f)(T) \equiv f(T) \bmod{\pi}$; \item If $f(T) \in \mathcal{O}_K \dcroc{T}^\times$, and $k,m \geqslant 0$, then $\mathcal{N}_P^{m+k}(f) \equiv \mathcal{N}_P^k(f) \bmod{\pi^{k+1}}$. \end{enumerate} \end{prop} \begin{proof} The determinant of the multiplication-by-$f(T)$ map on the $\mathcal{O}_K\dcroc{P(T)}$-module $\mathcal{O}_K\dcroc{T}$ is $\mathcal{N}_P(f)(P(T))$. By evaluating at $T=u_{n+1}$, we find that the determinant of the multiplication-by-$f(u_{n+1})$ map on the $\mathcal{O}_{K_n}$-module $\mathcal{O}_{K_{n+1}}$ is $\mathcal{N}_P(f)(u_n)$, so that $\mathcal{N}_P(f)(u_n) = \mathrm{N}_{K_{n+1}/K_n} (f(u_{n+1}))$. We now prove (2). If $f(T) \in \mathcal{O}_K\dcroc{T}$, let $\mathcal{T}_P(f)(T) \in \mathcal{O}_K\dcroc{T}$ be the trace map defined by $\mathcal{T}_P(f)(P(T)) = \mathrm{Tr}_{\mathcal{O}_K\dcroc{T} / \mathcal{O}_K\dcroc{P(T)}} (f(T))$. A straightforward calculation shows that if $h(T) \in \mathcal{O}_K \dcroc{T}$, then $\mathcal{T}_P(h)(T) \in \pi \cdot \mathcal{O}_K \dcroc{T}$. 
If $f(T) = 1 +\pi^k h(T)$, then $\mathcal{N}_P(f)(T) \equiv 1 + \pi^k \mathcal{T}_P(h)(T) \bmod{\pi^{k+1}}$, so that $\mathcal{N}_P(f)(T) \in 1+ \pi^{k+1} \mathcal{O}_K\dcroc{T}.$ Item (3) follows from a straightforward calculation in $k\dcroc{T}$ using the fact that $P(T)=T^d$ in $k\dcroc{T}$. Finally, let us prove (4). If $f(T) \in \mathcal{O}_K \dcroc{T}^\times$, then $\mathcal{N}_P(f)/f \equiv 1 \bmod{\pi}$ by (3), so that $\mathcal{N}_P^m(f)/f \equiv 1 \bmod{\pi}$ as well. Item (2) now implies that $\mathcal{N}_P^{m+k}(f) \equiv \mathcal{N}_P^k(f) \bmod{\pi^{k+1}}$. \end{proof} \begin{proof}[of theorem \ref{colexist}] The power series $\mathrm{Col}_x(T)$ is unique by proposition \ref{colunik}, and we now show its existence. If $x_n$ is not a unit of $\mathcal{O}_{K_n}$, then there exists $e \geqslant 1$ such that $x_n=u_n^e x_n^*$ where $x_n^* \in \mathcal{O}_{K_n}^\times$ for all $n$, and then $\mathrm{Col}_x(T) = T^e \cdot \mathrm{Col}_{x^*}(T)$. We can therefore assume that $x_n$ is a unit of $\mathcal{O}_{K_n}$. For all $j \geqslant 1$, we have $\mathcal{O}_{K_j} = \mathcal{O}_K[u_j]$, so that there exists $g_j(T) \in \mathcal{O}_K[T]$ such that $x_j = g_j(u_j)$. Let $f_j(T) = \mathcal{N}_P^j(g_{2j})$. By proposition \ref{pronmp}, we have $x_n \equiv f_j(u_n) \bmod{\pi^{j+1}}$ for all $n \leqslant j$. The space $\mathcal{O}_K \dcroc{T}$ is compact; let $f(T)$ be a limit point of $\{ f_j \}_{j \geqslant 1}$. We have $x_n = f(u_n)$ for all $n$ by continuity, so that we can take $\mathrm{Col}_x(T) = f(T)$. \end{proof} \begin{rema}\label{colnpinv} We have $\mathcal{N}_P(\mathrm{Col}_x)(T) = \mathrm{Col}_x(T)$. \end{rema} \begin{proof} The power series $\mathcal{N}_P(\mathrm{Col}_x)(T) - \mathrm{Col}_x(T)$ is zero at $T=u_n$ for all $n \geqslant 0$ by proposition \ref{pronmp}, so that $\mathcal{N}_P(\mathrm{Col}_x)(T) = \mathrm{Col}_x(T)$ by proposition \ref{colunik}.
\end{proof} \section{Lifting the field of norms} \label{cdn} In this {\S}, we assume that $K_\infty/K$ is a Galois extension, and let $\Gamma = \mathrm{Gal}(K_\infty / K)$. We recall some results of \cite{CD14} and \cite{LTFN}, and give a more precise formulation of some of them in our specific situation. \begin{prop}\label{kngal} If $K_\infty/K$ is Galois, then $K_n/K$ is Galois for all $n \geqslant 1$. \end{prop} \begin{proof} It follows from the main results of \cite{CD14} and of \cite{LTFN} (see remark 7.16 of \cite{CD14}) that if $K_\infty/K$ is a $\varphi$-iterate extension that is Galois, then it is abelian. This implies the proposition (it would be more satisfying to find a direct proof). \end{proof} If $g \in \Gamma$, proposition \ref{kngal} and theorem \ref{colexist} imply that there is a unique power series $\mathrm{Col}_g(T) \in \mathcal{O}_K \dcroc{T}$ such that $g(u_n) = \mathrm{Col}_g(u_n)$ for all $n \geqslant 0$. In the sequel, we need some ramification-theoretic properties of $K_\infty/K$. They are summarized in the theorem below. \begin{theo}\label{normpow} There exists a constant $c=c(K_\infty/K) > 0$ such that for any $E \subset F$, finite extensions of $K$ contained in $K_\infty$, and $x \in \mathcal{O}_F$, we have \[ \mathrm{val}_K \left( \frac{\mathrm{N}_{F/E}(x)}{x^{[F:E]}}-1 \right) \geqslant c. \] \end{theo} \begin{proof} By the main result of \cite{CDAPF}, the extension $K_\infty/K$ is \emph{strictly APF}, so that if we denote by $c(K_\infty/K)$ the constant defined in 1.2.1 of \cite{W83}, then $c(K_\infty/K) > 0$. By 4.2.2.1 of ibid., we have \[ \mathrm{val}_E \left( \frac{\mathrm{N}_{F/E}(x)}{x^{[F:E]}}-1 \right) \geqslant c(F/E). \] By 1.2.3 of ibid., $c(F/E) \geqslant c(K_\infty/E)$ and (see for instance the proof of 4.5 of \cite{CD14} or page 83 of \cite{W83}) $c(K_\infty/E) \geqslant c(K_\infty/K) \cdot [E:K]$. This proves the theorem. \end{proof} Let $c$ be the constant afforded by theorem \ref{normpow}.
We can always assume that $c \leqslant \mathrm{val}_K(p)/(p-1)$. If $E$ is some subfield of $\mathbf{C}_p$, let $\mathfrak{a}_E^c$ denote the set of elements $x$ of $E$ such that $\mathrm{val}_K(x) \geqslant c$. Let $\widetilde{\mathbf{E}}^+= \varprojlim_{x \mapsto x^d} \mathcal{O}_{\mathbf{C}_p} / \mathfrak{a}^c_{\mathbf{C}_p}$. The sequence $\{ u_n \}_{n \geqslant 0}$ gives rise to an element $\overline{u} \in \widetilde{\mathbf{E}}^+$. Recall that by \S 2.1 and \S 4.2 of \cite{W83}, there is an embedding $\iota : \varprojlim \mathcal{O}_{K_n} \to \widetilde{\mathbf{E}}^+$, which is an isomorphism onto $\varprojlim_{x \mapsto x^d} \mathcal{O}_{K_n} / \mathfrak{a}^c_{K_n}$, which is also isomorphic to $k \dcroc{\overline{u}}$. Let $K_0= \mathbf{Q}_p^{\mathrm{unr}} \cap K$ and $\tilde{\mathbf{A}}^+=\mathcal{O}_K \otimes_{\mathcal{O}_{K_0}} W(\widetilde{\mathbf{E}}^+)$. Recall (see \cite{FP}) that we have a map $\theta : \tilde{\mathbf{A}}^+ \to \mathcal{O}_{\mathbf{C}_p}$. If $x \in \tilde{\mathbf{A}}^+$ and $\overline{x} = (x_n)_{n \geqslant 0}$ in $\widetilde{\mathbf{E}}^+$, then $\theta \circ \varphi_d^{-n}(x) = x_n$ in $\mathcal{O}_{\mathbf{C}_p} / \mathfrak{a}^c_{\mathbf{C}_p}$. \begin{theo}\label{liftubar} There exists a unique $u \in \tilde{\mathbf{A}}^+$ whose image in $\widetilde{\mathbf{E}}^+$ is $\overline{u}$, and such that $\varphi_d(u) = P(u)$. Moreover: \begin{enumerate} \item[(i)] If $n \geqslant 0$, then $\theta \circ \varphi_d^{-n}(u) = u_n$; \item[(ii)] $\mathcal{O}_K \dcroc{u} = \{ x \in \tilde{\mathbf{A}}^+$, $\theta \circ \varphi_d^{-n}(x) \in \mathcal{O}_{K_n}$ for all $n \geqslant 1\}$; \item[(iii)] $g(u) = \mathrm{Col}_g(u)$ if $g \in \Gamma$. \end{enumerate} \end{theo} \begin{proof} The existence of $u$ and item (i) are proved in lemma 9.3 of \cite{CEV}, where it is shown that $u = \lim_{n \to +\infty} P^{\circ n}(\varphi_d^{-n}([\overline{u}]))$. 
Let $R = \{ x \in \tilde{\mathbf{A}}^+$ such that $\theta \circ \varphi_d^{-n}(x) \in \mathcal{O}_{K_n}$ for all $n \geqslant 1\}$. If $x \in R$, then its image in $\widetilde{\mathbf{E}}^+$ lies in $\varprojlim_{x \mapsto x^d} \mathcal{O}_{K_n} / \mathfrak{a}^c_{K_n} = k \dcroc{\overline{u}}$. We have $u \in R$ by (i), so that the map $R / \pi R \to k \dcroc{\overline{u}}$ is surjective. We then have $R = \mathcal{O}_K \dcroc{u}$, since $R$ is separated and complete for the $\pi$-adic topology, which proves (ii). The ring $\mathcal{O}_K \dcroc{u}$ is stable under the action of $G_K$ by (ii). If $g \in \Gamma$, there exists $F_g(T) \in \mathcal{O}_K \dcroc{T}$ such that $g(u) = F_g(u)$. We have $g(u_n) = g ( \theta \circ \varphi_d^{-n}(u)) = \theta \circ \varphi_d^{-n} (F_g(u)) = F_g(u_n)$ by (i), so that $g(u_n) = F_g(u_n)$ for all $n$. This implies that $F_g(T) = \mathrm{Col}_g(T)$. \end{proof} \begin{rema}\label{liftfn} In the terminology of \cite{W83}, $\varprojlim \mathcal{O}_{K_n}$ is the ring of integers of the field of norms $X(K_\infty)$ of the extension $K_\infty/K$, and theorem \ref{liftubar} shows that we can lift $X(K_\infty)$ to characteristic zero, along with the Frobenius map $\varphi_d$ and the action of $\Gamma$. \end{rema} If $g \in \Gamma$, then $\mathrm{Col}_g \circ P(T) = P \circ \mathrm{Col}_g(T)$ since the two series have the same value at $u_n$ for all $n \geqslant 1$. Let $\eta(g) = \mathrm{Col}_g'(0)$, so that $g \mapsto \eta(g)$ is a character $\eta : \Gamma \to \mathcal{O}_K^\times$. \begin{prop}\label{comnoz} If $F(T) \in T \cdot \mathcal{O}_K \dcroc{T}$ is such that $F'(0) \in 1+p \mathcal{O}_K$, and if $A(T) \in T \cdot \mathcal{O}_K \dcroc{T}$ vanishes at order $k \geqslant 2$ at $0$, and satisfies $A \circ F(T) = F \circ A(T)$, then $F(T) = T$. \end{prop} \begin{proof} Write $F(T) = f_1 T + \mathrm{O}(T^2)$, and $A(T) = a_k T^k + \mathrm{O}(T^{k+1})$ with $a_k \neq 0$.
The equation $F \circ A(T) = A \circ F(T)$ implies that $f_1 a_k = a_k f_1^k$, so that $f_1^{k-1}=1$ since $a_k \neq 0$. Since $f_1 \in 1+p\mathcal{O}_K$ and $p \neq 2$, this implies that $f_1=1$. If $F(T) \neq T$, we can write $F(T) = T + T^i h(T)$ for some $i \geqslant 2$ with $h(0) \neq 0$. The equation $F \circ A(T) = A \circ F(T)$ and the equality $A(T + T^i h(T))=\sum_{j \geqslant 0} (T^i h(T))^j A^{(j)}(T)/j!$ imply that $A(T) + A(T)^i h(A(T)) = A(T) + T^i h(T) A'(T) + \mathrm{O} (T^{2i+k-2})$, so that $A(T)^i h(A(T)) = T^i h(T) A'(T) + \mathrm{O} (T^{2i+k-2})$. The term of lowest degree of the LHS is of degree $ki$, while on the RHS it is of degree $i+k-1$. We therefore have $ki=i+k-1$, so that $(k-1)(i-1)=0$ and hence $k=1$, which contradicts the assumption that $k \geqslant 2$. Therefore $F(T) = T$. \end{proof} \begin{coro}\label{dernonul} We have $P'(0) \neq 0$. \end{coro} \begin{proof} This follows from proposition \ref{comnoz}, since $\mathrm{Col}_g'(0) \in 1+p\mathcal{O}_K$ if $g$ is close enough to $1$, and $\mathrm{Col}_g(T) = T$ if and only if $g=1$ (compare with lemma 4.5 of \cite{LTFN}). \end{proof} \begin{coro}\label{galisab} The character $\eta : \Gamma \to \mathcal{O}_K^\times$ is injective. \end{coro} \begin{proof} This follows from proposition 1.1 of \cite{L94}, which says that if $P'(0) \in \mathfrak{m}_K \setminus \{ 0 \}$, then a power series $F(T) \in T \cdot \mathcal{O}_K\dcroc{T}$ that commutes with $P(T)$ is determined by $F'(0)$. This implies that $\mathrm{Col}_g(T)$ is determined by $\eta(g)$, and then $g$ itself is determined by $\mathrm{Col}_g(T)$, since $g(u_n) = \mathrm{Col}_g(u_n)$ for all $n$. \end{proof} We therefore have a character $\eta : \mathrm{Gal}(\overline{\mathbf{Q}}_p/K) \to \mathcal{O}_K^\times$, such that $K_\infty = \overline{\mathbf{Q}}_p^{\ker \eta}$. \section{$p$-adic Hodge theory} \label{chareta} We now assume that $K/\mathbf{Q}_p$ is Galois (for simplicity), and we keep assuming that $K_\infty/K$ is Galois.
We use the element $u$ above, and Lubin's logarithm (proposition \ref{lublog} below), to construct crystalline periods for $\eta$. \begin{prop}\label{lublog} There exists a power series $\mathrm{L}_P(T) \in K\dcroc{T}$ that is holomorphic on the open unit disk, and satisfies \begin{enumerate} \item $\mathrm{L}_P(T) = T + \mathrm{O}(T^2)$; \item $\mathrm{L}_P \circ P(T) = P'(0) \cdot \mathrm{L}_P(T)$; \item $\mathrm{L}_P \circ \mathrm{Col}_g(T) = \eta(g) \cdot \mathrm{L}_P(T)$ if $g \in \Gamma$. \end{enumerate} If we write $P(T)=T \cdot Q(T)$, then \[ \mathrm{L}_P(T) = \lim_{n \to +\infty} \frac{P^{\circ n}(T)} {P'(0)^n} = T \cdot \prod_{n \geqslant 0} \frac{Q(P^{\circ n}(T))}{Q(0)}. \] \end{prop} \begin{proof} See propositions 1.2, 2.2 and 1.3 of \cite{L94}. \end{proof} Let $\widetilde{\mathbf{B}}^+_{\mathrm{rig}}$ denote the Fr\'echet completion of $\tilde{\mathbf{A}}^+ [1/\pi]$, so that our $\widetilde{\mathbf{B}}^+_{\mathrm{rig}}$ is $K \otimes_{K_0}$ the ``usual'' $\widetilde{\mathbf{B}}^+_{\mathrm{rig}}$ (for which see \cite{LB2}). If $u \in \tilde{\mathbf{A}}^+$ is the element afforded by theorem \ref{liftubar}, then $\mathrm{L}_P(u)$ converges in $\widetilde{\mathbf{B}}^+_{\mathrm{rig}}$. We have $g(\mathrm{L}_P(u)) = \eta(g) \cdot \mathrm{L}_P(u)$ by proposition \ref{lublog}. If $\tau \in \mathrm{Gal}(K/\mathbf{Q}_p)$, then let $n(\tau)$ be some $n \in \mathbf{Z}$ such that $\tau = \varphi^n$ on $k_K$, and let $u_\tau = (\tau \otimes \varphi^{n(\tau)}) (u) \in \tilde{\mathbf{A}}^+$. If $F(T) = \sum_{i \geqslant 0} f_i T^i \in K \dcroc{T}$, let $F^\tau(T) = \sum_{i \geqslant 0} \tau(f_i) T^i$. We have $g(\mathrm{L}_P^\tau(u_\tau)) = \tau(\eta(g)) \cdot \mathrm{L}_P^\tau(u_\tau)$ in $\widetilde{\mathbf{B}}^+_{\mathrm{rig}}$. This implies the following result, which is a slight improvement of theorem 4.1 of \cite{LTFN}.
\begin{prop}\label{cisdrp} The character $\eta : \Gamma \to \mathcal{O}_K^\times$ is crystalline, with weights in $\mathbf{Z}_{\geqslant 0}$. \end{prop} \begin{proof} The fact that $g(\mathrm{L}_P^\tau(u_\tau)) = \tau(\eta(g)) \cdot \mathrm{L}_P^\tau(u_\tau)$ for all $\tau \in \mathrm{Gal}(K/\mathbf{Q}_p)$ implies that $\eta$ gives rise to a $K \otimes_{K_0} \mathbf{B}_{\mathrm{cris}}$-admissible representation. If $V$ is any $p$-adic representation of $G_K$, then \[ \left((K \otimes_{K_0} \mathbf{B}_{\mathrm{cris}}) \otimes_{\mathbf{Q}_p} V \right)^{G_K} = K \otimes_{K_0} (\mathbf{B}_{\mathrm{cris}} \otimes_{\mathbf{Q}_p} V)^{G_K}. \] This implies that a $K \otimes_{K_0} \mathbf{B}_{\mathrm{cris}}$-admissible representation is crystalline. The weights of $\eta$ are $\geqslant 0$ because $\mathrm{L}_P^\tau(u_\tau) \in \mathbf{B}_{\mathrm{dR}}^+$ for all $\tau$. \end{proof} \begin{lemm}\label{thetutau} We have $\theta \circ \varphi_d^{-n}(u_\tau) = \lim_{k \to +\infty} (P^\tau)^{\circ k}(u_{n+k}^{p^{n(\tau)}})$. \end{lemm} \begin{proof} The element $u_\tau \in \tilde{\mathbf{A}}^+$ has the property that its image in $\widetilde{\mathbf{E}}^+$ is $\varphi^{n(\tau)}(\overline{u}) = \overline{u}^{p^{n(\tau)}}$, and that $\varphi_d(u_\tau) = P^\tau(u_\tau)$. The lemma then follows from lemma 9.3 of \cite{CEV}. \end{proof} For simplicity, write $u_\tau^n = \theta \circ \varphi_d^{-n}(u_\tau)$ and $u_\tau^{n,k} = (P^\tau)^{\circ k}(u_{n+k}^{p^{n(\tau)}})$. \begin{lemm}\label{unifconv} If $M > 0$, there exists $j \geqslant 0$ such that $\mathrm{val}_K(u_\tau^n - u_\tau^{n,j}) \geqslant M$ for $n \geqslant 1$. \end{lemm} \begin{proof} If $c$ is the constant coming from theorem \ref{normpow}, then $\mathrm{val}_K(u_\tau^n - u_\tau^{n,0}) \geqslant c$ for all $n \geqslant 1$. 
We prove the lemma by inductively constructing a sequence $\{c_j\}_{j \geqslant 0}$ such that $\mathrm{val}_K(u_\tau^n - u_\tau^{n,j}) \geqslant c_j$ for all $n \geqslant 1$, and such that $c_j \geqslant M$ for $j \gg 0$. Let $c_0=c$ and suppose that for some $j$, we have $\mathrm{val}_K(u_\tau^n - u_\tau^{n,j}) \geqslant c_j$ for all $n \geqslant 1$. We then have \[ \mathrm{val}_K(u_\tau^n - u_\tau^{n,j+1}) = \mathrm{val}_K \left( P^\tau(u_\tau^{n+1}) - P^\tau(u_\tau^{n+1,j})\right). \] If $R(T) \in \mathcal{O}_K[T]$ and $x,y \in \mathcal{O}_{\overline{\mathbf{Q}}_p}$, then $R(x)-R(y) = (x-y) R'(y) + (x-y)^2 S(x,y)$ with $S(T,U) \in \mathcal{O}_K [T,U]$. This, and the fact that $P'(T) \in \mathfrak{m}_K[T]$, implies that we can take $c_{j+1} = \min(c_j+1, 2c_j)$. Since $c > 0$, this recursion gives $c_{j+1} \geqslant c_j + \min(1,c)$, so that $c_j \to \infty$ and the lemma follows. \end{proof} We now recall a result from \cite{L94}. If $f(T) \in T \cdot \mathcal{O}_K \dcroc{T}$ is such that $f'(0) \in \mathfrak{m}_K \setminus \{ 0 \}$, let $\Lambda(f)$ be the set of the roots of all iterates of $f$. If $u(T) \in T \cdot \mathcal{O}_K \dcroc{T}$ is such that $u'(0) \in \mathcal{O}_K^\times$ and $u'(0)$ is not a root of $1$, let $\Lambda(u)$ be the set of the fixed points of all iterates of $u$. \begin{lemm}\label{lublamb} If $f$ and $u$ are as above, and if $u \circ f = f \circ u$, then $\Lambda(f)=\Lambda(u)$. \end{lemm} \begin{proof} This is proposition 3.2 of \cite{L94}. \end{proof} For each $\tau \in \mathrm{Gal}(K/\mathbf{Q}_p)$, let $r_\tau$ be the weight of $\eta$ at $\tau$. \begin{prop}\label{algfil} If $\tau \in \mathrm{Gal}(K/\mathbf{Q}_p)$, then the following are equivalent. \begin{enumerate} \item[(i)] $r_\tau \geqslant 1$; \item[(ii)] $\mathrm{L}_P^\tau(u_\tau) \in \mathrm{Fil}^1 \mathbf{B}_{\mathrm{dR}}$; \item[(iii)] $\theta(u_\tau) \in \overline{\mathbf{Q}}_p$; \item[(iv)] $\theta(u_\tau) \in \Lambda(P^\tau)$; \item[(v)] $u_\tau \in \cup_{j \geqslant 0} \ \varphi_d^{-j}(\mathcal{O}_K \dcroc{u})$.
\end{enumerate} \end{prop} \begin{proof} The equivalence between (i) and (ii) is immediate. We now prove that (ii) implies (iii). If $\mathrm{L}_P^\tau(u_\tau) \in \mathrm{Fil}^1 \mathbf{B}_{\mathrm{dR}}$, then $\mathrm{L}_P^\tau(\theta(u_\tau)) = 0$ so that $\theta(u_\tau) \in \overline{\mathbf{Q}}_p$ since it is a root of a convergent power series with coefficients in $K$. We next prove that (iii) implies (iv) (it is clear that (iv) implies (iii)). If $x= \theta(u_\tau)$ then $g(x) = \mathrm{Col}_g^\tau(x)$. If $x \in \overline{\mathbf{Q}}_p$ and if $g$ is close enough to $1$, then $g(x)=x$ so that $x \in \Lambda(\mathrm{Col}_g^\tau)$, and then $x \in \Lambda(P^\tau)$ by lemma \ref{lublamb}. Let us prove that (iv) implies (ii). If there exists $n \geqslant 0$ such that $(P^{\tau})^{\circ n}(\theta(u_\tau)) = 0$, then $(P^{\tau})^{\circ n}(u_\tau) \in \mathrm{Fil}^1 \mathbf{B}_{\mathrm{dR}}$ so that $\mathrm{L}_P^\tau(u_\tau) \in \mathrm{Fil}^1 \mathbf{B}_{\mathrm{dR}}$ as well by proposition \ref{lublog}. Conditions (i), (ii), (iii) and (iv) are therefore equivalent. Condition (v) implies (iii) by using theorem \ref{liftubar} as well as the fact that $\varphi_d(u_\tau) = P^\tau(u_\tau)$. It remains to prove that (iii) implies (v). Recall that $u_\tau^n = \theta \circ \varphi_d^{-n}(u_\tau)$. It is enough to show that there exists $j \geqslant 0$ such that $u_\tau^n \in \mathcal{O}_{K_{n+j}}$ for all $n$, since by theorem \ref{liftubar}, this implies that $u_\tau \in \varphi_d^{-j}(\mathcal{O}_K \dcroc{u})$. Recall that $u_\tau^{n,k} = (P^\tau)^{\circ k}(u_{n+k}^{p^{n(\tau)}})$. Take $M \geqslant 1 + \mathrm{val}_K((P^\tau)'(u_\tau^n))$ for all $n \gg 0$. By lemma \ref{unifconv}, there exists $j \geqslant 0$ such that $\mathrm{val}_K(u_\tau^n - u_\tau^{n,j}) \geqslant M$ for all $n \geqslant 1$. The element $u_\tau^n$ is a root of $P^\tau(T) = u_\tau^{n-1}$, and therefore $u_\tau^n- u_\tau^{n,j}$ is a root of $P^\tau(u_\tau^{n,j}+T) - u_\tau^{n-1}$. 
If $u_\tau^{n-1} \in \mathcal{O}_{K_{n+j-1}}$, then the polynomial $R_n(T) = P^\tau(u_\tau^{n,j}+T) - u_\tau^{n-1}$ belongs to $\mathcal{O}_{K_{n+j}}[T]$, and satisfies $\mathrm{val}_K(R_n(0)) \geqslant M + \mathrm{val}_K(R_n'(0))$. By the theory of Newton polygons, $R_n(T)$ has a unique root of slope $\mathrm{val}_K(R_n(0)) - \mathrm{val}_K(R_n'(0)) \geqslant M$, and this root, which is $u_\tau^n - u_\tau^{n,j}$, therefore belongs to $K_{n+j}$. This implies that $u_\tau^n \in \mathcal{O}_{K_{n+j}}$, which finishes the proof by induction on $n$. \end{proof} If $\tau$ satisfies the equivalent conditions of proposition \ref{algfil}, then we can write $u_\tau = f_\tau ( \varphi_d^{-j_\tau}(u))$ for some $j_\tau \geqslant 0$ and $f_\tau(T) \in \mathcal{O}_K \dcroc{T}$. \begin{lemm}\label{ftnul} We have $f_\tau(0)=0$, $f_\tau'(0) \neq 0$, $P^\tau \circ f_\tau(T) = f_\tau \circ P(T)$ and $\mathrm{Col}_g^\tau \circ f_\tau(T) = f_\tau \circ \mathrm{Col}_g(T)$. \end{lemm} \begin{proof} If $u_\tau = f_\tau(\varphi_d^{-j}(u))$, then $P^\tau(u_\tau) = P^\tau \circ f_\tau (\varphi_d^{-j}(u))$ and then $\varphi_d(u_\tau) = f_\tau \circ P (\varphi_d^{-j}(u))$ so that $P^\tau \circ f_\tau(T) = f_\tau \circ P(T)$. Likewise, computing $g(u_\tau)$ in two ways shows that $\mathrm{Col}_g^\tau \circ f_\tau(T) = f_\tau \circ \mathrm{Col}_g(T)$. Evaluating $P^\tau \circ f_\tau(T) = f_\tau \circ P(T)$ at $T=0$ gives $P^\tau(f_\tau(0)) = f_\tau(0)$ so that $f_\tau(0)$ is a root of $P^\tau(T) = T$. The theory of Newton polygons shows that those roots are $0$ and elements of valuation $0$. The latter case is excluded because $\theta \circ \varphi_d^{-n}(u_\tau) = f_\tau ( u_{n+j} ) \in \mathfrak{m}_{K_\infty}$, so that $f_\tau(0) \in \mathfrak{m}_K$. We now prove that $f_\tau'(0) \neq 0$. Write $f_\tau(T) = f_k T^k + \mathrm{O}(T^{k+1})$ with $f_k \neq 0$. The fact that $P^\tau \circ f_\tau(T) = f_\tau \circ P(T)$ implies that $\tau(P'(0)) f_k = f_k P'(0)^k$ so that $\tau(P'(0)) = P'(0)^k$.
Since $\mathrm{val}_K(P'(0)) > 0$, this implies that $k=1$. \end{proof} \begin{coro}\label{embssgr} The set of those $\tau \in \mathrm{Gal}(K/\mathbf{Q}_p)$ such that $r_\tau \geqslant 1$ forms a subgroup of $\mathrm{Gal}(K/\mathbf{Q}_p)$, and if $F$ is the subfield of $K$ cut out by this subgroup, then $\eta(g) \in \mathcal{O}_F^\times$. The weight $r_\tau$ is independent of $\tau \in \mathrm{Gal}(K/F)$. \end{coro} \begin{proof} By theorem \ref{liftubar}, $\tau=\mathrm{Id}$ satisfies condition (iii) of proposition \ref{algfil} above, and therefore condition (i) as well, so that $r_{\mathrm{Id}} \geqslant 1$. If $\sigma, \tau$ satisfy condition (v) of ibid, then we can write $u_\sigma = f_\sigma(\varphi_d^{-j_\sigma}(u))$ and $u_\tau = f_\tau(\varphi_d^{-j_\tau}(u))$ so that $u_{\sigma \tau} = f_\tau^\sigma \circ f_\sigma (\varphi_d^{-(j_\tau+j_\sigma)}(u))$ and therefore $\sigma \tau$ also satisfies condition (v). Since $\mathrm{Gal}(K/\mathbf{Q}_p)$ is a finite group, these two facts imply that the set of $\tau \in \mathrm{Gal}(K/\mathbf{Q}_p)$ such that $r_\tau \geqslant 1$ is a group. By lemma \ref{ftnul}, we have $P^\tau \circ f_\tau(T) = f_\tau \circ P(T)$. This implies that $P'(0) \in \mathfrak{m}_F$ and also that $(P^\tau)^{\circ n} \circ f_\tau(T) = f_\tau \circ P^{\circ n}(T)$, so that \[ \frac{1}{P'(0)^n}(P^\tau)^{\circ n} \circ f_\tau(T) = \frac{1}{P'(0)^n} f_\tau \circ P^{\circ n}(T), \] which implies by passing to the limit that $\mathrm{L}_P^\tau \circ f_\tau(T) = f_\tau'(0) \cdot \mathrm{L}_P(T)$. Since $\mathrm{Col}_g^\tau \circ f_\tau(T) = f_\tau \circ \mathrm{Col}_g(T)$, we have $g(\mathrm{L}_P^\tau \circ f_\tau(u) ) = \tau(\eta(g)) \cdot (\mathrm{L}_P^\tau \circ f_\tau(u))$. Moreover, $\mathrm{L}_P^\tau \circ f_\tau(u) = f_\tau'(0) \cdot \mathrm{L}_P(u)$, and therefore $\tau(\eta(g)) = \eta(g)$. This is true for every $\tau \in \mathrm{Gal}(K/F)$, so that $\eta(g) \in \mathcal{O}_F^\times$.
The fact that $\eta(g) \in \mathcal{O}_F^\times$ implies that $r_\tau$ depends only on $\tau {\mid}_F$ and is therefore independent of $\tau \in \mathrm{Gal}(K/F)$. \end{proof} \section{Local class field theory} \label{lcft} We now prove theorem D, and show how local class field theory allows us to derive theorem A from theorem D. We still assume that $K/\mathbf{Q}_p$ is Galois for simplicity. Let $\lambda$ be a uniformizer of $\mathcal{O}_K$ and let $K_\lambda$ denote the extension of $K$ attached to $\lambda$ by local class field theory. This extension is generated over $K$ by the torsion points of a Lubin-Tate formal group defined over $K$ and attached to $\lambda$ (see for instance \cite{LT} and \cite{LCFT}). Let $\chi^K_\lambda : \mathrm{Gal}(K_\lambda / K) \to \mathcal{O}_K^\times$ denote the corresponding Lubin-Tate character. We still assume that the extension $K_\infty/K$ is Galois, so that it is an abelian totally ramified extension. This implies that there is a uniformizer $\lambda$ of $\mathcal{O}_K$ such that $K_\infty \subset K_\lambda$. Let $\eta : \Gamma \to \mathcal{O}_K^\times$ be the character constructed in \S\ref{chareta}. \begin{prop}\label{etaexpl} We have $\eta = \prod_{\tau \in \mathrm{Gal}(K/\mathbf{Q}_p)} \tau(\chi^K_\lambda)^{r_\tau}$. \end{prop} \begin{proof} The character $\eta : \Gamma \to \mathcal{O}_K^\times$ is crystalline, and its weight at $\tau$ is $r_\tau$ by definition. The character $\eta_0 = \eta \cdot (\prod_{\tau \in \mathrm{Gal}(K/\mathbf{Q}_p)} \tau(\chi^K_\lambda)^{r_\tau} )^{-1}$ of $\mathrm{Gal}(K_\lambda/K)$ is therefore crystalline with weights $0$ at all embeddings, so that it is an unramified character of $\mathrm{Gal}(K_\lambda/K)$. Since $K_\lambda/K$ is totally ramified, we have $\eta_0=1$. \end{proof} Proposition \ref{etaexpl} and corollary \ref{embssgr} imply the following, which is theorem D. 
\begin{theo}\label{maineta} There exists $F \subset K$ and $r \in \mathbf{Z}_{\geqslant 1}$ such that $\eta = \mathrm{N}_{K/F}(\chi^K_\lambda)^r$. \end{theo} We now show how this implies theorem A. If $u \in \mathcal{O}_K^\times$, let $\mu^K_u$ denote the unramified character of $G_K$ that sends the Frobenius map of $k_K$ to $u$. If $F$ is a subfield of $K$, and $\mathrm{N}_{K/F}(\lambda) = \varpi^h u$ with $\varpi$ a uniformizer of $\mathcal{O}_F$ and some $u \in \mathcal{O}_F^\times$, then $\mathrm{N}_{K/F}(\chi^K_\lambda) = \chi^F_\varpi \cdot \mu^K_u$. \begin{prop}\label{chards} Let $\mathrm{S}$ be a relative Lubin-Tate group, attached to an extension $E/F$ and an element $\alpha = \varpi^h u$, where $\varpi$ is a uniformizer of $\mathcal{O}_F$ and $u \in \mathcal{O}_F^\times$. The action of $\mathrm{Gal}(\overline{\mathbf{Q}}_p/E)$ on the torsion points of $\mathrm{S}$ is given by $g(x) = [(\chi^F_\varpi \cdot \mu^E_u)(g)](x)$. \end{prop} \begin{proof} See \S 4 of \cite{Y08}. \end{proof} Let $F$ be the subfield of $K$ afforded by theorem \ref{maineta}, and let $E$ be the maximal unramified extension of $F$ contained in $K$. \begin{theo}\label{dslt} There exists a relative Lubin-Tate group $\mathrm{S}$, relative to the extension $E/F$, such that if $K_\infty^{\mathrm{S}}$ denotes the extension of $K$ generated by the torsion points of $\mathrm{S}$, then $K_\infty \subset K_\infty^{\mathrm{S}}$ and $K_\infty^{\mathrm{S}} / K_\infty$ is a finite extension. \end{theo} \begin{proof} Let $\lambda$ be a uniformizer of $K$ such that $K_\infty \subset K_\lambda$ and let $\pi = \mathrm{N}_{K/E}(\lambda)$ and $\alpha = \mathrm{N}_{K/F}(\lambda)$, so that $\pi$ is a uniformizer of $E$ and $\alpha = \mathrm{N}_{E/F}(\pi)$. Let $\mathrm{S}$ be a relative Lubin-Tate group attached to $\alpha$, and let $K_\infty^{\mathrm{S}}$ be the extension of $K$ generated by the torsion points of $\mathrm{S}$.
If $g \in \mathrm{Gal}(\overline{\mathbf{Q}}_p/K_\infty^{\mathrm{S}})$, then $\mathrm{N}_{K/F}(\chi^K_\lambda)(g)=1$ by proposition \ref{chards} and the observation preceding it, so that $\eta(g)=1$ by theorem \ref{maineta}. This implies that $K_\infty \subset K_\infty^{\mathrm{S}}$. By Galois theory and theorem \ref{maineta}, \begin{enumerate} \item $K_\infty^{\mathrm{S}}$ is the field cut out by $\{ g \in G_K \mid \mathrm{N}_{K/F}(\chi_\lambda^K(g)) = 1\}$; \item $K_\infty$ is the field cut out by $\{ g \in G_K \mid \mathrm{N}_{K/F}(\chi_\lambda^K(g))^r = 1\}$. \end{enumerate} This implies that $K_\infty^{\mathrm{S}} / K_\infty$ is a finite Galois extension, whose Galois group injects into $\{ x \in \mathcal{O}_F^\times \mid x^r=1\}$. \end{proof} This proves theorem A. We conclude this {\S} with an example of a $\varphi$-iterate extension that is Galois, corresponding to a polynomial $P(T) \in \mathbf{Q}_p[T]$ such that $r=2$ and such that the extension $K_\infty^{\mathrm{S}} / K_\infty$ is of degree $2$ in the notation of theorems \ref{maineta} and \ref{dslt}. \begin{theo}\label{chebtheo} Let $K=\mathbf{Q}_3$, $P(T)=T^3+6T^2+9T$ and $u_0=-3$. The corresponding iterated extension $K_\infty$ is $\mathbf{Q}_3(\mu_{3^\infty})^{\{\pm 1\} \subset \mathbf{Z}_3^\times}$, and $\eta=\chi_{\mathrm{cyc}}^2$. \end{theo} \begin{proof} For $k \geqslant 1$, let $C_k(T)$ denote the $k$-th Chebyshev polynomial, which is characterized by the fact that $C_k(\cos(\theta)) = \cos(k \theta)$. Let $P_k(T) = 2 ( C_k(T/2+1) - 1)$, so that $P_k(T)$ is a monic polynomial of degree $k$, and $P_k(2(\cos(\theta)-1)) = 2(\cos(k \theta)-1)$. Note that $P(T)=P_3(T)$ and that $u_0 = -3 = 2(\cos(2\pi/3)-1)$. The element $u_n$ is therefore a conjugate of $2(\cos(2\pi/3^{n+1})-1)$. This proves the fact that $K_\infty = \mathbf{Q}_3(\mu_{3^\infty})^{\{\pm 1\} \subset \mathbf{Z}_3^\times}$. If $g \in G_{\mathbf{Q}_3}$, then $g( 2(\cos(2\pi/3^n)-1)) = 2(\cos(2\pi\chi_{\mathrm{cyc}}(g) /3^n)-1)$.
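For concreteness (this verification is our addition, not part of the original argument), the two facts used in this proof can be checked directly from $C_3(x) = 4x^3 - 3x$:
\begin{align*}
P_3(T) &= 2\left( C_3\left(\tfrac{T}{2}+1\right) - 1 \right) = 2\left( 4\left(\tfrac{T}{2}+1\right)^3 - 3\left(\tfrac{T}{2}+1\right) - 1 \right)\\
&= 2\left( \tfrac{T^3}{2} + 3T^2 + 6T + 4 - \tfrac{3T}{2} - 3 - 1 \right) = T^3 + 6T^2 + 9T = P(T),
\end{align*}
and $C_3'(x) = 12x^2 - 3$, so that $C_3'(1) = 12 - 3 = 9 = 3^2$, in accordance with $C_k'(1) = k^2$. We now return to the action of $g$ computed just above.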
The action of $g$ on the elements $2(\cos(2\pi/3^n)-1)$ implies that $\mathrm{Col}_g(T) = P_k(T)$ if $\chi_{\mathrm{cyc}}(g) = k \in \mathbf{Z}_{\geqslant 1}$. The formula for $\eta$ now follows from this, and the well-known fact that $C_k'(1)=k^2$ if $k \geqslant 1$. \end{proof} We leave to the reader the generalization of this construction to other $p$ and other Lubin-Tate groups. The results of \S 2 of \cite{LMS} should be useful for this. \providecommand{\bysame}{\leavevmode ---\ } \providecommand{\og}{``} \providecommand{\fg}{''} \providecommand{\smfandname}{\&} \providecommand{\smfedsname}{\'eds.} \providecommand{\smfedname}{\'ed.} \providecommand{\smfmastersthesisname}{M\'emoire} \providecommand{\smfphdthesisname}{Th\`ese} \end{document}
\begin{document} \title{Reidemeister transformations of the potential function and the solution } \author{\sc Jinseok Cho and Jun Murakami} \maketitle \begin{abstract} The potential function of the optimistic limit of the colored Jones polynomial and the construction of the solution of the hyperbolicity equations were defined in the authors' previous articles. In this article, we define the Reidemeister transformations of the potential function and of the solution as the changes they undergo under the Reidemeister moves of the link diagram, and we give explicit formulas for them. These two formulas enable us to see how the complex volume formula changes under the Reidemeister moves. As an application, we can specify the discrete faithful representation of the link group simply by exhibiting a link diagram and one geometric solution. \end{abstract} \section{Introduction}\label{sec1} \subsection{Overview} One of the fundamental theorems of knot theory is the Reidemeister theorem, which states that two links are equivalent if and only if their diagrams are related by a finite sequence of Reidemeister moves. Therefore, one of the most natural methods for obtaining a knot invariant is to define a value from a knot diagram and show that the value is invariant under the Reidemeister moves. However, some invariants cannot be defined in this way, especially the ones defined from the hyperbolic structure of the link. This is because the hyperbolicity equations\footnote{{\it Hyperbolicity equations} are the gluing equations of the edges of a given triangulation together with the equation of the completeness condition. Each solution of the equations determines a boundary-parabolic representation, and some of them determine the hyperbolic structure of the link.} and their solutions do not change locally under the Reidemeister moves. In particular, when a solution of certain hyperbolicity equations is given, we cannot see how the equations and the solution change under the Reidemeister moves.
This is one of the major obstructions to developing a combinatorial approach to the hyperbolic invariants of links. On the other hand, {the optimistic limit method was first introduced in \cite{Murakami00b}. Although this method was not defined rigorously, the resulting value was {\it optimistically} expected to be the actual limit of certain quantum invariants. A rigorous definition of the optimistic limit of the Kashaev invariant was proposed in \cite{Yokota10}, and the resulting value was proved to be the complex volume of the knot. Although this definition is rigorous and general enough, it requires some unnatural assumptions on the diagram and involves several technical difficulties. Therefore it was modified to a more combinatorial version in \cite{Cho13a}. The optimistic limit used in this article is the one defined in \cite{Cho13b}, and the main results are based on \cite{Cho14c}. In our definition, the triangulation is naturally defined from the link diagram, and its hyperbolicity equations, whose solutions determine the boundary-parabolic representations\footnote{ A representation $\rho:\pi_1(L)\rightarrow {\rm PSL}(2,\mathbb{C})$ of the link group $\pi_1(L):=\pi_1(\mathbb{S}^3\backslash L)$ is {\it boundary-parabolic} when any meridian loop of the boundary-tori of the link complement $\mathbb{S}^3\backslash L$ maps to a parabolic element in ${\rm PSL}(2,\mathbb{C})$ under $\rho$.} of the link group, are the equations $\exp(w_k\frac{\partial W}{\partial w_k})=1$ $(k=1,\ldots,n)$ obtained from the partial derivatives of a certain potential function $W(w_1,\ldots,w_n)$. Note that this potential function is combinatorially defined from the link diagram, so it changes naturally under the Reidemeister moves. Then the optimistic limit is defined as the evaluation of the potential function (with a slight modification) at a certain solution of the hyperbolicity equations. (The explicit definition is equation (\ref{optimistic}).)
Let $\mathcal{P}$ be the conjugation quandle consisting of the parabolic elements of ${\rm PSL}(2,\mathbb{C})$ proposed in \cite{Kabaya14}. A shadow-coloring of $\mathcal{P}$ is a way of assigning elements of $\mathcal{P}$ to the arcs and regions of the link diagram. The elements on the arcs are naturally determined by $\rho$, and the ones on the regions are determined by certain rules. According to \cite{Kabaya14} and \cite{Cho14c}, we can construct the developing map of a given boundary-parabolic representation $\rho:\pi_1(L)\rightarrow {\rm PSL}(2,\mathbb{C})$ directly from the shadow-coloring. (The explicit construction is in Figure \ref{fig06}. Note that this construction is based on \cite{Neumann99} and \cite{Zickert09}.)} This construction of the solution has two major advantages. First, if a boundary-parabolic representation $\rho$ is given, then we can always construct the solution corresponding to $\rho$ for any link diagram. (This was the main theorem of \cite{Cho14c}.) In other words, for the hyperbolicity equations of our triangulation, we can always guarantee the existence of a geometric solution,\footnote{ A {\it geometric solution} is a solution of the hyperbolicity equations which determines the discrete faithful representation. (Unlike the standard definition, we allow some tetrahedra to have negative volume. If we consider the triangulation of $\mathbb{S}^3\backslash (L\cup\{\text{two points}\})$, tetrahedra of negative volume are unavoidable.) Note that a geometric solution in our context is not unique.} which is an assumption in many other texts. Furthermore, the constructed solution changes locally under the Reidemeister moves on the link diagram $D$. Note that the variables $w_1,\ldots,w_n$ of the hyperbolicity equations are assigned to the regions of the diagram. (See Section \ref{sec12} below.) Assume the solution $(w_1^{(0)},\ldots,w_n^{(0)})$ is constructed from the diagram $D$ together with the representation $\rho$.
{ \begin{definition} A solution $(w_1^{(0)},\ldots,w_n^{(0)})$ is called {\it essential} when $w_k^{(0)}\neq 0$ for all $k=1,\ldots,n$ and $w_k^{(0)}\neq w_m^{(0)}$ for the pairs $w_k^{(0)}$ and $w_m^{(0)}$ assigned to adjacent regions of the diagram $D$. \end{definition} According to Lemma \ref{lem}, essentialness of a solution is a generic property, so we can always construct uncountably many essential solutions from any $D$ and $\rho$. From now on, the solutions in this article are always assumed to be essential, and the Reidemeister transformations are defined between two essential solutions. (This assumption is guaranteed by Corollary \ref{cor32}.) Note that essentialness of the solution guarantees that the shape parameters defined in Section \ref{sec33} are not in $\{0,1,\infty\}$. Let $D'$ be the link diagram obtained by applying one Reidemeister move to $D$.} In this article, we will show that if a new variable $w_{n+1}$ appears in $D'$, then the values $w_1^{(0)},\ldots,w_n^{(0)}$ of the solution newly constructed from $D'$ and $\rho$ are preserved, and the value $w_{n+1}^{(0)}$ is uniquely determined by the other values $w_1^{(0)},\ldots,w_n^{(0)}$. (The explicit relations are in Section \ref{sec4}.) Also, if the region carrying $w_k$ is removed in $D'$, then we obtain the new solution simply by removing the value $w_k^{(0)}$ of the variable $w_k$. These changes of the solution will be called {\it the Reidemeister transformations of the solution} in Section \ref{sec12}. Using the Reidemeister transformations of the potential function together with those of the solution, we can see how the complex volume formula changes under the Reidemeister moves. (See Theorem \ref{thm1}.) As an application, we can easily specify the discrete faithful representation by exhibiting one link diagram $D$ and one geometric solution corresponding to the diagram.
In particular, if we have another diagram $D'$ of $L$, then we can easily find the geometric solution corresponding to $D'$, {without solving the hyperbolicity equations again,} by applying the Reidemeister transformations of the solution. Many results on the optimistic limit and other concepts used in this article are scattered across the authors' previous articles. Referring to all of them might be quite confusing for readers, so we have collected many known results here, especially in Sections \ref{sec2}-\ref{sec3}, and we sometimes reprove known results to clarify the discussion. \subsection{Reidemeister transformations}\label{sec12} To state the exact definition of the Reidemeister transformations, we first have to define the potential function. Consider a link diagram\footnote{ We always assume that the diagram $D$ does not contain a trivial knot component which has only over-crossings, only under-crossings, or no crossing. If this happens, we change the diagram of the trivial component slightly by adding a kink. } $D$ of a link $L$ and assign complex variables $w_1,\ldots,w_n$ to the regions of $D$. Then we define the potential function of a crossing $j$ as in Figure \ref{pic01}. \begin{figure} \caption{Potential function of the crossing $j$} \label{pic01} \end{figure} {In the definition above, ${\rm Li}_2(z):=-\int_0^z \frac{\log(1-t)}{t}dt$ is the dilogarithm function. Although it is a multi-valued function depending on the choice of the branches of $\log t$ and $\log(1-t)$, the final formula in (\ref{optimistic}) does not depend on the choice of branches. (See Lemma 2.1 of \cite{Cho13a} and Lemma 2.1 of \cite{Cho13c}.)} {\it The potential function} of $D$ is defined by \begin{equation}\label{defW} W(w_1,\ldots,w_n):=\sum_{j\text{ : crossings of }D}W^j, \end{equation} and we modify it to \begin{equation}\label{defW_0} W_0(w_1,\ldots,w_n):=W(w_1,\ldots,w_n)-\sum_{k=1}^n \left(w_k\frac{\partial W}{\partial w_k}\right)\log w_k.
\end{equation} Also, we define the set of equations \begin{equation}\label{defH} \mathcal{I}:=\left\{\left.\exp\left(w_k\frac{\partial W}{\partial w_k}\right)=1\right|k=1,\ldots,n\right\}. \end{equation} Then $\mathcal{I}$ becomes the set of the hyperbolicity equations of the five-term triangulation defined in Section \ref{sec2}. (See Proposition \ref{pro21}.) Consider a boundary-parabolic representation $\rho:\pi_1(L)\rightarrow{\rm PSL}(2,\mathbb{C})$. Then, using the shadow-coloring of $\mathcal{P}$ induced by $\rho$, we can construct the solution $\bold w^{(0)}=(w_1^{(0)},\ldots,w_n^{(0)})$ of $\mathcal{I}$ satisfying $\rho_{\bold w^{(0)}}=\rho$ up to conjugation, where $\rho_{\bold w^{(0)}}$ is the representation induced by the five-term triangulation together with the solution $\bold w^{(0)}$. (The details are in Section \ref{sec3}. See Proposition \ref{pro2}. {We will also show that any solution of $\mathcal{I}$ can be constructed by this method in Appendix \ref{app}.}) Furthermore, the solution satisfies \begin{equation}\label{optimistic} W_0(\bold w^{(0)})\equiv i({\rm vol}(\rho)+i\,{\rm cs}(\rho))~~({\rm mod}~\pi^2), \end{equation} where ${\rm vol}(\rho)$ and ${\rm cs}(\rho)$ are the hyperbolic volume and the Chern-Simons invariant of $\rho$, respectively, {which were defined in \cite{Neumann04} and \cite{Zickert09}.} We call ${\rm vol}(\rho)+i\,{\rm cs}(\rho)$ {\it the (hyperbolic) complex volume of $\rho$} and define {\it the optimistic limit of the colored Jones polynomial} to be $W_0(\bold w^{(0)})$. The oriented Reidemeister moves defined in \cite{Polyak10} are shown in Figure \ref{pic02}.
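Since the potential functions below are built entirely from dilogarithm and $\log\cdot\log$ terms, a minimal numerical sketch of ${\rm Li}_2$ may be helpful (the helper `li2` is our illustration and not part of the paper's formalism). The Taylor series $\mathrm{Li}_2(z)=\sum_{n\geqslant 1} z^n/n^2$, valid for $|z|\leqslant 1$, recovers the special value $\mathrm{Li}_2(1)=\pi^2/6$ appearing in $V_{R1}$ below, as well as Euler's reflection formula, which mixes $\mathrm{Li}_2$ and $\log\cdot\log$ terms in the same way as the potentials:

```python
import math

def li2(x, terms=200000):
    # Dilogarithm Li2(x) = -int_0^x log(1-t)/t dt, computed here via its
    # Taylor series sum_{n>=1} x^n / n^2, which converges for |x| <= 1.
    return sum(x**n / n**2 for n in range(1, terms + 1))

# Special value Li2(1) = pi^2/6 (the series converges slowly at x = 1,
# so we only ask for modest accuracy).
print(abs(li2(1.0) - math.pi**2 / 6) < 1e-4)  # True

# Euler's reflection formula: Li2(z) + Li2(1-z) = pi^2/6 - log(z)log(1-z),
# an identity mixing Li2 and log*log terms as in the V functions below.
z = 0.3
lhs = li2(z) + li2(1 - z)
rhs = math.pi**2 / 6 - math.log(z) * math.log(1 - z)
print(abs(lhs - rhs) < 1e-9)  # True
```

This is purely a numerical illustration of the analytic ingredients; the branch-cut issues discussed above of course do not arise on the real interval $(0,1)$.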
We define the potential functions $V_{R1}$, $V_{R1'}$, $V_{R2}$, $V_{R2'}$, $V_{R3}$ and $V_{(R3)^{-1}}$ of Figure \ref{pic02} as follows: \begin{align*} V_{R1}(w_a,w_b,w_c)=&{\rm Li}_2(\frac{w_b}{w_a})+{\rm Li}_2(\frac{w_b}{w_c})-{\rm Li}_2(\frac{w_b^2}{w_a w_c})\\ &-{\rm Li}_2(\frac{w_a}{w_b})-{\rm Li}_2(\frac{w_c}{w_b})+\frac{\pi^2}{6}-\log\frac{w_a}{w_b}\log\frac{w_c}{w_b}, \end{align*} \begin{align*} V_{R1'}(w_a,w_b,w_c)=&-{\rm Li}_2(\frac{w_b}{w_a})-{\rm Li}_2(\frac{w_b}{w_c})+{\rm Li}_2(\frac{w_b^2}{w_a w_c})\\ &+{\rm Li}_2(\frac{w_a}{w_b})+{\rm Li}_2(\frac{w_c}{w_b})-\frac{\pi^2}{6}+\log\frac{w_a}{w_b}\log\frac{w_c}{w_b}, \end{align*} \begin{align*} V_{R2}&(w_a,w_b,w_c,w_d,w_e)=\\ &{\rm Li}_2(\frac{w_a}{w_d})+{\rm Li}_2(\frac{w_a}{w_e})-{\rm Li}_2(\frac{w_a w_c}{w_d w_e})-{\rm Li}_2(\frac{w_d}{w_c})-{\rm Li}_2(\frac{w_e}{w_c})-\log\frac{w_d}{w_c}\log\frac{w_e}{w_c}\\ &-{\rm Li}_2(\frac{w_c}{w_b})-{\rm Li}_2(\frac{w_c}{w_e})+{\rm Li}_2(\frac{w_a w_c}{w_b w_e})+{\rm Li}_2(\frac{w_b}{w_a})+{\rm Li}_2(\frac{w_e}{w_a})+\log\frac{w_b}{w_a}\log\frac{w_e}{w_a}, \end{align*} \begin{align*} V_{R2'}&(w_a,w_b,w_c,w_d,w_e)=\\ &{\rm Li}_2(\frac{w_c}{w_d})+{\rm Li}_2(\frac{w_c}{w_e})-{\rm Li}_2(\frac{w_a w_c}{w_d w_e})-{\rm Li}_2(\frac{w_d}{w_a})-{\rm Li}_2(\frac{w_e}{w_a})-\log\frac{w_d}{w_a}\log\frac{w_e}{w_a}\\ &-{\rm Li}_2(\frac{w_a}{w_b})-{\rm Li}_2(\frac{w_a}{w_e})+{\rm Li}_2(\frac{w_a w_c}{w_b w_e})+{\rm Li}_2(\frac{w_b}{w_c})+{\rm Li}_2(\frac{w_e}{w_c})+\log\frac{w_b}{w_c}\log\frac{w_e}{w_c}, \end{align*} \begin{align*} V_{R3}&(w_a,w_b,w_c,w_d,w_e,w_f,w_h)=-\frac{\pi^2}{2}\\ &-{\rm Li}_2(\frac{w_e}{w_d})-{\rm Li}_2(\frac{w_e}{w_f})+{\rm Li}_2(\frac{w_e w_h}{w_d w_f})+{\rm Li}_2(\frac{w_d}{w_h})+{\rm Li}_2(\frac{w_f}{w_h})+\log\frac{w_d}{w_h}\log\frac{w_f}{w_h}\\ &-{\rm Li}_2(\frac{w_f}{w_a})-{\rm Li}_2(\frac{w_f}{w_h})+{\rm Li}_2(\frac{w_b w_f}{w_a w_h})+{\rm Li}_2(\frac{w_a}{w_b})+{\rm Li}_2(\frac{w_h}{w_b})+\log\frac{w_a}{w_b}\log\frac{w_h}{w_b}\\ &-{\rm Li}_2(\frac{w_h}{w_b})-{\rm Li}_2(\frac{w_h}{w_d})+{\rm Li}_2(\frac{w_c w_h}{w_b w_d})+{\rm Li}_2(\frac{w_b}{w_c})+{\rm Li}_2(\frac{w_d}{w_c})+\log\frac{w_b}{w_c}\log\frac{w_d}{w_c}, \end{align*} \begin{align*} V_{(R3)^{-1}}&(w_a,w_b,w_c,w_d,w_e,w_f,w_g)=-\frac{\pi^2}{2}\\ &-{\rm Li}_2(\frac{w_f}{w_a})-{\rm Li}_2(\frac{w_f}{w_e})+{\rm Li}_2(\frac{w_g w_f}{w_a w_e})+{\rm Li}_2(\frac{w_a}{w_g})+{\rm Li}_2(\frac{w_e}{w_g})+\log\frac{w_a}{w_g}\log\frac{w_e}{w_g}\\ &-{\rm Li}_2(\frac{w_e}{w_d})-{\rm Li}_2(\frac{w_e}{w_g})+{\rm Li}_2(\frac{w_c w_e}{w_d w_g})+{\rm Li}_2(\frac{w_d}{w_c})+{\rm Li}_2(\frac{w_g}{w_c})+\log\frac{w_d}{w_c}\log\frac{w_g}{w_c}\\ &-{\rm Li}_2(\frac{w_g}{w_a})-{\rm Li}_2(\frac{w_g}{w_c})+{\rm Li}_2(\frac{w_b w_g}{w_a w_c})+{\rm Li}_2(\frac{w_a}{w_b})+{\rm Li}_2(\frac{w_c}{w_b})+\log\frac{w_a}{w_b}\log\frac{w_c}{w_b}. \end{align*} Note that $V_{R1}$ is the potential function of the diagram obtained after applying the $R1$ move in Figure \ref{pic02}(a). All others are defined in the same way; for example, $V_{R3}$ and $V_{(R3)^{-1}}$ are the potential functions of the right-hand and the left-hand sides of Figure \ref{pic02}(c), respectively.
\begin{figure} \caption{Oriented Reidemeister moves} \label{pic02} \end{figure} \begin{definition}\label{def1} {\it The Reidemeister transformations} $T_{R1}, T_{(R1)^{-1}},\ldots,T_{R3}, T_{(R3)^{-1}}$ of the potential function $W(\ldots,w_a,w_b,\ldots)$ are defined as follows: \begin{align} T_{R1}(W)(\ldots,w_a,w_b,w_c,\ldots)&=W+V_{R1},\nonumber\\ T_{(R1)^{-1}}(W)(\ldots,w_a,w_b,\ldots)&=W-V_{R1},\nonumber\\ T_{R1'}(W)(\ldots,w_a,w_b,w_c,\ldots)&=W+V_{R1'},\nonumber\\ T_{(R1')^{-1}}(W)(\ldots,w_a,w_b,\ldots)&=W-V_{R1'},\nonumber\\ T_{R2}(W)(\ldots,w_a,w_b,w_c,w_d,w_e,\ldots)&=W+V_{R2},\label{R2a}\\ T_{(R2)^{-1}}(W)(\ldots,w_a,w_b,w_c,\ldots)&=W-V_{R2},\label{R2b}\\ T_{R2'}(W)(\ldots,w_a,w_b,w_c,w_d,w_e,\ldots)&=W+V_{R2'},\label{R2c}\\ T_{(R2')^{-1}}(W)(\ldots,w_a,w_b,w_c,\ldots)&=W-V_{R2'},\label{R2d}\\ T_{R3}(W)(\ldots,w_a,w_b,w_c,w_d,w_e,w_f,w_h,\ldots)&=W+V_{R3}-V_{(R3)^{-1}},\nonumber\\ T_{(R3)^{-1}}(W)(\ldots,w_a,w_b,w_c,w_d,w_e,w_f,w_g,\ldots)&=W-V_{R3}+V_{(R3)^{-1}}.\nonumber \end{align} Note that, when applying the $R2$ (or $R2'$) move in (\ref{R2a}) (or (\ref{R2c})), we replace $w_b$ of $W$ with $w_d$ for the potential functions of the crossings adjacent to the region associated with $w_d$. Also, when applying the $(R2)^{-1}$ (or $(R2')^{-1}$) move in (\ref{R2b}) (or (\ref{R2d})), we replace $w_d$ of $W$ with $w_b$. \end{definition} Remark that the Reidemeister transformations of the potential function are nothing but the changes of the potential function defined in (\ref{defW}) under the corresponding Reidemeister moves. 
\begin{definition} {\it The Reidemeister transformations}\label{def1} $T_{R1}, T_{(R1)^{-1}},\ldots,T_{R3}, T_{(R3)^{-1}}$ of the {essential} solution $(\ldots,w_a^{(0)},w_b^{(0)},\ldots)$ of $\mathcal{I}$ in (\ref{defH}) are defined as follows: for the first Reidemeister moves, \begin{align*} T_{R1}(\ldots,w_a^{(0)},w_b^{(0)},\ldots)=T_{R1'}(\ldots,w_a^{(0)},w_b^{(0)},\ldots) =(\ldots,w_a^{(0)},w_b^{(0)},w_c^{(0)},\ldots), \end{align*} where $w_c^{(0)}=2w_b^{(0)}-w_a^{(0)}$, and \begin{align*} T_{(R1)^{-1}}(\ldots,w_a^{(0)},w_b^{(0)},w_c^{(0)},\ldots)=T_{(R1')^{-1}}(\ldots,w_a^{(0)},w_b^{(0)},w_c^{(0)},\ldots) =(\ldots,w_a^{(0)},w_b^{(0)},\ldots). \end{align*} For the second Reidemeister moves, let $T_{R2}(W)$ (or $T_{R2'}(W)$) be the potential function in (\ref{R2a}) (or (\ref{R2c})). Then{ \begin{align*} T_{R2}(\ldots,w_a^{(0)},w_b^{(0)},w_c^{(0)},\ldots)=(\ldots,w_a^{(0)},w_b^{(0)},w_c^{(0)},w_d^{(0)},w_e^{(0)},\ldots)\\ \left(\text{or }T_{R2'}(\ldots,w_a^{(0)},w_b^{(0)},w_c^{(0)},\ldots)=(\ldots,w_a^{(0)},w_b^{(0)},w_c^{(0)},w_d^{(0)},w_e^{(0)},\ldots)\right), \end{align*} where $w_d^{(0)}=w_b^{(0)}$ and $w_e^{(0)}$ is uniquely determined by the equation \begin{equation}\label{TR2} \exp(w_b\frac{\partial T_{R2}(W)}{\partial w_b})=1 \left(\text{or } \exp(w_b\frac{\partial T_{R2'}(W)}{\partial w_b})=1\right), \end{equation} and \begin{align*} T_{(R2)^{-1}}(\ldots,w_a^{(0)},w_b^{(0)},w_c^{(0)},w_d^{(0)},w_e^{(0)},\ldots)=(\ldots,w_a^{(0)},w_b^{(0)},w_c^{(0)},\ldots)\\ \left(\text{or } T_{(R2')^{-1}}(\ldots,w_a^{(0)},w_b^{(0)},w_c^{(0)},w_d^{(0)},w_e^{(0)},\ldots)=(\ldots,w_a^{(0)},w_b^{(0)},w_c^{(0)},\ldots)\right). \end{align*} Note that the equation (\ref{TR2}) can be expressed explicitly by using the parameters around the region of $w_b$. 
(An explicit expression of (\ref{TR2}) is given in Lemma \ref{lem53}.)} For the third Reidemeister moves, \begin{align*} &T_{R3}(\ldots,w_a^{(0)},w_b^{(0)},w_c^{(0)},w_d^{(0)},w_e^{(0)},w_f^{(0)},w_g^{(0)},\ldots)\\ &~~~=(\ldots,w_a^{(0)},w_b^{(0)},w_c^{(0)},w_d^{(0)},w_e^{(0)},w_f^{(0)},w_h^{(0)},\ldots),\\ &T_{(R3)^{-1}}(\ldots,w_a^{(0)},w_b^{(0)},w_c^{(0)},w_d^{(0)},w_e^{(0)},w_f^{(0)},w_h^{(0)},\ldots)\\ &~~~=(\ldots,w_a^{(0)},w_b^{(0)},w_c^{(0)},w_d^{(0)},w_e^{(0)},w_f^{(0)},w_g^{(0)},\ldots), \end{align*} where $w_h^{(0)}$ or $w_g^{(0)}$ is uniquely determined by the equation \begin{equation}\label{R3rel} w_d^{(0)} w_g^{(0)}-w_c^{(0)} w_e^{(0)}=w_a^{(0)} w_h^{(0)} -w_b^{(0)} w_f^{(0)}. \end{equation} \end{definition} \begin{theorem}\label{thm1} For a link diagram $D$ of $L$, let $\bold w=(\ldots,w_a,w_b,\ldots)$ and let $W(\bold w)$ be the potential function of $D$. Consider a boundary-parabolic representation $\rho:\pi_1(L)\rightarrow{\rm PSL}(2,\mathbb{C})$ and let $\bold w^{(0)}$ be the solution of $\mathcal{I}$ constructed from the shadow-coloring of $\mathcal{P}$ satisfying $\rho_{\bold w^{(0)}}=\rho$, up to conjugation. (See Proposition \ref{pro2} for the actual construction of $\bold w^{(0)}$.) Then, for any Reidemeister transformation $T$, $T(\bold w^{(0)})$ is also a solution\footnote{ The resulting solution of a Reidemeister transformation applied to an essential solution can be nonessential. However, essential solutions are generic, so we can deform both solutions to essential ones by changing the region-colorings slightly. (See Corollary \ref{cor32}.) Therefore, we assume the solutions are always essential.} of $\mathcal{I}$ and the induced representation $\rho_{T(\bold w^{(0)})}$ satisfies $\rho_{T(\bold w^{(0)})}=\rho$, up to conjugation. 
Furthermore, \begin{equation}\label{volume2} T(W)_0(T(\bold w^{(0)}))\equiv W_0(\bold w^{(0)})\equiv i({\rm vol}(\rho)+i\,{\rm cs}(\rho))~~({\rm mod}~\pi^2), \end{equation} where $T(W)_0$ is the modification of the potential function $T(W)$ by (\ref{defW_0}). \end{theorem} \begin{proof} This follows from Lemmas \ref{LemR1}, \ref{LemR2} and \ref{LemR3}, together with Proposition \ref{pro2}. \end{proof} Note that when some orientations of the strings in Figure \ref{pic02} are reversed, the Reidemeister transformations of the solutions can be defined by exactly the same formulas. (This will be proved in Section \ref{sec5}.) If we change the potential function according to the changes of the orientation, then Theorem \ref{thm1} still works. Therefore, it defines the {\it un-oriented} Reidemeister transformations\footnote{ The Reidemeister transformations of the potential function still depend on the orientation. As a matter of fact, it is possible to define the potential function of an un-oriented diagram using Section 3.2 of \cite{Cho13b}. However, the resulting formula would be far more complicated than the one defined in this article, so we do not introduce it.} of the solutions. We will discuss and prove the un-oriented ones in Section \ref{sec5}. Also, the mirror images of the Reidemeister moves will be discussed in Section \ref{sec5}. As an example of the Reidemeister transformations, we will show in Section \ref{sec6} how the geometric solution changes when a diagram $D$ of the figure-eight knot is transformed into its mirror image $\overline{D}$. \section{Five-term triangulation of $\mathbb{S}^3\backslash (L\cup\{\text{two points}\})$}\label{sec2} In this section, we describe the five-term triangulation of $\mathbb{S}^3\backslash (L\cup\{\text{two points}\})$. Much of the explanation comes from \cite{Cho13c}. 
We place an octahedron ${\rm A}_j{\rm B}_j{\rm C}_j{\rm D}_j{\rm E}_j{\rm F}_j$ on each crossing $j$ of the link diagram as in Figure \ref{twistocta} so that the vertices ${\rm A}_j$ and ${\rm C}_j$ lie on the over-bridge and the vertices ${\rm B}_j$ and ${\rm D}_j$ on the under-bridge of the diagram, respectively. Then we twist the octahedron by gluing the edges ${\rm B}_j{\rm F}_j$ to ${\rm D}_j{\rm F}_j$ and ${\rm A}_j{\rm E}_j$ to ${\rm C}_j{\rm E}_j$, respectively. The edges ${\rm A}_j{\rm B}_j$, ${\rm B}_j{\rm C}_j$, ${\rm C}_j{\rm D}_j$ and ${\rm D}_j{\rm A}_j$ are called {\it horizontal edges} and we sometimes express these edges in the diagram as arcs around the crossing, as in the left-hand side of Figure \ref{twistocta}. \begin{figure} \caption{Octahedron on the crossing $j$} \label{twistocta} \end{figure} Then we glue the faces of the octahedra following the edges of the link diagram. Specifically, there are three gluing patterns as in Figure \ref{glue pattern}. In each of the cases (a), (b) and (c), we identify the faces $\triangle{\rm A}_{j}{\rm B}_{j}{\rm E}_{j}\cup\triangle{\rm C}_{j}{\rm B}_{j}{\rm E}_{j}$ to $\triangle{\rm C}_{j+1}{\rm D}_{j+1}{\rm F}_{j+1}\cup\triangle{\rm C}_{j+1}{\rm B}_{j+1}{\rm F}_{j+1}$, $\triangle{\rm B}_{j}{\rm C}_{j}{\rm F}_{j}\cup\triangle{\rm D}_{j}{\rm C}_{j}{\rm F}_{j}$ to $\triangle{\rm D}_{j+1}{\rm C}_{j+1}{\rm F}_{j+1}\cup\triangle{\rm B}_{j+1}{\rm C}_{j+1}{\rm F}_{j+1}$ and $\triangle{\rm A}_{j}{\rm B}_{j}{\rm E}_{j}\cup\triangle{\rm C}_{j}{\rm B}_{j}{\rm E}_{j}$ to $\triangle{\rm C}_{j+1}{\rm B}_{j+1}{\rm E}_{j+1}\cup\triangle{\rm A}_{j+1}{\rm B}_{j+1}{\rm E}_{j+1}$, respectively. 
\begin{figure} \caption{Three gluing patterns} \label{glue pattern} \end{figure} Note that this gluing process identifies the vertices $\{{\rm A}_j, {\rm C}_j\}$ to one point, denoted by $-\infty$, the vertices $\{{\rm B}_j, {\rm D}_j\}$ to another point, denoted by $\infty$, and finally $\{{\rm E}_j, {\rm F}_j\}$ to other points, denoted by ${\rm P}_k$, where $k=1,\ldots,s$ and $s$ is the number of components of the link $L$. The regular neighborhoods of $-\infty$ and $\infty$ are 3-balls and that of $\cup_{k=1}^s {\rm P}_k$ is {the union of the cones over the tori of the link $L$.} Therefore, if we remove the vertices ${\rm P}_1,\ldots,{\rm P}_s$ from the octahedra, then we obtain a decomposition of $\mathbb{S}^3\backslash L$, denoted by $T$. On the other hand, if we remove all the vertices of the octahedra, the result becomes an ideal decomposition of $\mathbb{S}^3\backslash (L\cup\{\pm\infty\})$. We call the latter {\it the octahedral decomposition} and denote it by $T'$. To obtain an ideal triangulation from $T'$, we divide each octahedron ${\rm A}_j{\rm B}_j{\rm C}_j{\rm D}_j{\rm E}_j{\rm F}_j$ in Figure \ref{twistocta} into five ideal tetrahedra ${\rm A}_j{\rm B}_j{\rm D}_j{\rm F}_j$, ${\rm B}_j{\rm C}_j{\rm D}_j{\rm F}_j$, ${\rm A}_j{\rm B}_j{\rm C}_j{\rm D}_j$, ${\rm A}_j{\rm B}_j{\rm C}_j{\rm E}_j$ and ${\rm A}_j{\rm C}_j{\rm D}_j{\rm E}_j$. We call the result {\it the five-term triangulation} of $\mathbb{S}^3\backslash (L\cup\{\pm\infty\})$. Note that if we assign the shape parameter $u\in\mathbb{C}\backslash\{0,1\}$ to an edge of an ideal hyperbolic tetrahedron, then the other edges are also parametrized by $u$, $u':=\frac{1}{1-u}$ and $u'':=1-\frac{1}{u}$ as in Figure \ref{fig6}. \begin{figure} \caption{Parametrization of an ideal tetrahedron with a shape parameter $u$} \label{fig6} \end{figure} To determine the shape of the octahedron in Figure \ref{twistocta}, we assign shape parameters to the edges of the tetrahedra as in Figure \ref{fig7}. 
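As a quick numerical sanity check of this parametrization (ours, not part of the construction), the map $z\mapsto\frac{1}{1-z}$ cycles through the three parameters $u$, $u'$, $u''$ of Figure \ref{fig6}, and their product is $-1$:

```python
# Relations among the shape parameters u, u' = 1/(1-u), u'' = 1 - 1/u
# of an ideal hyperbolic tetrahedron.
u = 0.3 + 1.7j                          # an arbitrary non-real shape parameter
f = lambda z: 1 / (1 - z)
up, upp = f(u), f(f(u))
assert abs(upp - (1 - 1/u)) < 1e-12     # f(f(u)) = u''
assert abs(f(upp) - u) < 1e-12          # the cycle u -> u' -> u'' has order 3
assert abs(u * up * upp + 1) < 1e-12    # u u' u'' = -1
```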
Note that $\frac{w_a w_c}{w_b w_d}$ in Figure \ref{fig7}(a) and $\frac{w_b w_d}{w_a w_c}$ in Figure \ref{fig7}(b) are the shape parameters of the tetrahedron ${\rm A}_j{\rm B}_j{\rm C}_j{\rm D}_j$ assigned to the edges ${\rm B}_j{\rm D}_j$ and ${\rm A}_j{\rm C}_j$. Also note that the assignment of shape parameters here does not depend on the orientations of the link diagram. \begin{figure} \caption{Assignment of shape parameters} \label{fig7} \end{figure} To obtain the boundary-parabolic representation $\rho:\pi_1(\mathbb{S}^3\backslash (L\cup\{\pm\infty\}))\longrightarrow{\rm PSL}(2,\mathbb{C})$, we require two conditions on the ideal triangulation of $\mathbb{S}^3\backslash (L\cup\{\pm\infty\})$: the product of the shape parameters around any edge of the triangulation is one, and the holonomies induced by the meridian and longitude of each boundary torus act as {non-trivial} translations on the torus cusps. Note that these conditions are all expressed as equations in the shape parameters. The former equations are called {\it (Thurston's) gluing equations}, the latter is called the {\it completeness condition}, and the whole set of these equations is called {\it the hyperbolicity equations}. Using Yoshida's construction in Section 4.5 of \cite{Tillmann13}, an {essential} solution ${\bold w}^{(0)}$ of the hyperbolicity equations determines a representation $$\rho_{\bold{w}^{(0)}}:\pi_1(\mathbb{S}^3\backslash (L\cup\{\pm\infty\})) =\pi_1(\mathbb{S}^3\backslash L)\longrightarrow{\rm PSL}(2,\mathbb{C}).$$ \begin{proposition}[Proposition 1.1 of \cite{Cho13c}]\label{pro21} The set $\mathcal{I}$ defined in (\ref{defH}) is the set of the hyperbolicity equations of the five-term triangulation, where the shape parameters are assigned as in Figure \ref{fig7}. \end{proposition} {The proof of this proposition is quite complicated and technical, so we refer the reader to \cite{Cho13c} and \cite{Cho13b}. The following lemma was stated and used in \cite{Cho13c} without proof because it is almost trivial. 
To avoid confusion, we add the proof here. \begin{lemma}\label{ntri} The holonomy of a meridian induced by an essential solution $\bold{w}^{(0)}=(w_1^{(0)},\ldots,w_n^{(0)})$ of $\mathcal{I}$ is always non-trivial. \end{lemma} \begin{proof} First, note that we assumed that every component of the link diagram contains the gluing pattern in Figure \ref{glue pattern}(a). (See Footnote 4.) For the local diagram with the meridian loop $m$ and the variables $w_a, w_b$ in Figure \ref{nontricusp}(a), the corresponding cusp diagram of the five-term triangulation becomes Figure \ref{nontricusp}(b). \begin{figure} \caption{Holonomy of the meridian $m$} \label{nontricusp} \end{figure} Note that the same shape parameter $\frac{w_b^{(0)}}{w_a^{(0)}}$ is placed in two different positions in Figure \ref{nontricusp}(b). The essentialness of the solution $\bold{w}^{(0)}$ guarantees $\frac{w_b^{(0)}}{w_a^{(0)}}\neq 0,1,\infty$, so the holonomy cannot be trivial. \end{proof}} From Proposition \ref{pro21} {and Lemma \ref{ntri}}, a solution $\bold{w}^{(0)}=(w_1^{(0)},\ldots,w_n^{(0)})$ of $\mathcal{I}$ induces a boundary-parabolic representation $\rho_{\bold{w^{(0)}}}$. In the next section, we will construct from the given representation $\rho$ a special solution $\bold{w}^{(0)}$ satisfying $\rho_{\bold{w^{(0)}}}=\rho$ up to conjugation. \section{Construction of the solution}\label{sec3} \subsection{Review of shadow-coloring} This section is a summary of the definitions and properties we need. For complete descriptions, see Section 2 of \cite{Cho14a}. (All definitions and Lemma \ref{lem} originally came from \cite{Kabaya14}.) Let $\mathcal{P}$ be the set of parabolic elements of ${\rm PSL}(2,\mathbb{C})={\rm Isom^+}(\mathbb{H}^3)$. 
We identify $\mathbb{C}^2\backslash\{0\}/\pm$ with $\mathcal{P}$ by \begin{equation}\label{matrixcc} \left(\begin{array}{cc}\alpha &\beta\end{array}\right) \longleftrightarrow\left(\begin{array}{cc}1+\alpha\beta & \beta^2 \\ -\alpha^2& 1-\alpha\beta\end{array}\right), \end{equation} and define the operation $*$ by \begin{eqnarray*} \left(\begin{array}{cc}\alpha & \beta\end{array}\right)* \left(\begin{array}{cc}\gamma & \delta\end{array}\right) :=\left(\begin{array}{cc}\alpha & \beta\end{array}\right) \left(\begin{array}{cc}1+\gamma\delta &\delta^2 \\ -\gamma^2& 1-\gamma\delta\end{array}\right) \in \mathcal{P}, \end{eqnarray*} where this operation is actually induced by conjugation as follows: $$ \left(\begin{array}{cc}\alpha & \beta\end{array}\right)* \left(\begin{array}{cc}\gamma & \delta\end{array}\right)\in\mathcal{P} \longleftrightarrow \left(\begin{array}{cc}\gamma&\delta\end{array}\right)^{-1} \left(\begin{array}{cc}\alpha & \beta\end{array}\right) \left(\begin{array}{cc}\gamma &\delta\end{array}\right)\in{\rm PSL}(2,\mathbb{C}).$$ The inverse operation $*^{-1}$ is expressed by $$\left(\begin{array}{cc}\alpha & \beta\end{array}\right)*^{-1} \left(\begin{array}{cc}\gamma & \delta\end{array}\right) = \left(\begin{array}{cc}\alpha & \beta\end{array}\right) \left(\begin{array}{cc}1-\gamma\delta & -\delta^2 \\ \gamma^2 & 1+\gamma\delta\end{array}\right)\in\mathcal{P},$$ and $(\mathcal{P},*)$ becomes a {\it conjugation quandle}. Here, {\it quandle} means that, for any $a,b,c\in\mathcal{P}$, the map $*b:a\mapsto a*b$ is bijective and $$a*a=a, ~(a*b)*c=(a*c)*(b*c)$$ hold. {\it Conjugation quandle} means that the operation $*$ is defined by conjugation. 
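As a numerical sanity check (a sketch of ours; the helper names are hypothetical), the identification (\ref{matrixcc}) produces parabolic matrices of determinant one, and the operation $*$ satisfies the quandle axioms:

```python
import random
random.seed(7)

def mat(v):
    # matrix of v = (alpha, beta) under the identification (matrixcc)
    a, b = v
    return [[1 + a*b, b*b], [-a*a, 1 - a*b]]

def act(v, M):
    # row vector times 2x2 matrix
    return (v[0]*M[0][0] + v[1]*M[1][0], v[0]*M[0][1] + v[1]*M[1][1])

def star(v, w):                  # v * w
    return act(v, mat(w))

def star_inv(v, w):              # v *^{-1} w; mat(w)^{-1} = 2I - mat(w),
    M = mat(w)                   # valid because (mat(w) - I)^2 = 0
    return act(v, [[2 - M[0][0], -M[0][1]], [-M[1][0], 2 - M[1][1]]])

def close(v, w):
    return abs(v[0] - w[0]) + abs(v[1] - w[1]) < 1e-9

rnd = lambda: complex(random.uniform(-1, 1), random.uniform(-1, 1))
a, b, c = (rnd(), rnd()), (rnd(), rnd()), (rnd(), rnd())

M = mat(a)
assert abs(M[0][0]*M[1][1] - M[0][1]*M[1][0] - 1) < 1e-9   # det = 1
assert abs(M[0][0] + M[1][1] - 2) < 1e-9                   # trace = 2: parabolic
assert close(star(a, a), a)                                # a * a = a
assert close(star(star(a, b), c),
             star(star(a, c), star(b, c)))                 # right self-distributivity
assert close(star_inv(star(a, b), b), a)                   # * b is invertible
```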
We define {\it the Hopf map} $h:\mathcal{P}\rightarrow\mathbb{CP}^1=\mathbb{C}\cup\{\infty\}$ by $$\left(\begin{array}{cc}\alpha &\beta\end{array}\right)\mapsto \frac{\alpha}{\beta}.$$ Note that the image is the fixed point of the M\"{o}bius transformation $f(z)=\frac{(1+\alpha\beta)z-\alpha^2}{\beta^2 z+(1-\alpha\beta)}$. For an oriented link diagram $D$ of $L$ and a given boundary-parabolic representation $\rho$, we assign {\it arc-colors} $a_1,\ldots,a_r\in\mathcal{P}$ to the arcs of $D$ so that each $a_k$ is the image of the meridian around the arc under the representation $\rho$. Note that, in Figure \ref{fig02}, we have \begin{equation}\label{ope} a_m=a_l*a_k. \end{equation} \begin{figure} \caption{Arc-coloring} \label{fig02} \end{figure} We also assign {\it region-colors} $s_1,\ldots,s_n\in\mathcal{P}$ to the regions of $D$ satisfying the rule in Figure \ref{fig03}. Note that, if an arc-coloring is given, then a choice of one region-color determines all the other region-colors. \begin{figure} \caption{Region-coloring} \label{fig03} \end{figure} \begin{lemma}[Lemma 2.4 of \cite{Cho14a}]\label{lem} Consider the arc-coloring induced by the boundary-parabolic representation $\rho:\pi_1(L)\rightarrow {\rm PSL}(2,\mathbb{C})$. Then, for any triple $(a_k,s,s*a_k)$ of an arc-color $a_k$ and its surrounding region-colors $s, s*a_k$ as in Figure \ref{fig03}, there exists a region-coloring satisfying \begin{equation*} h(a_k)\neq h(s)\neq h(s*a_k)\neq h(a_k). \end{equation*} \end{lemma} \begin{proof} For the given arc-colors $a_1,\ldots,a_r$, we choose region-colors $s_1,\ldots,s_n$ so that \begin{equation}\label{exi} \{h(a_1),\ldots,h(a_r)\}\cap\{h(s_1),\ldots,h(s_n)\}=\emptyset. \end{equation} This is always possible because{ each $h(s_k)$ can be written as $h(s_k)=M_k(h(s_1))$ for some M\"{o}bius transformation $M_k$ that depends only on the arc-colors $a_1,\dots,a_r$. 
If we choose $h(s_1)\in\mathbb{CP}^1$ away from the finite set $$\bigcup_{1\leq k\leq n}\left\{M_k^{-1}(h(a_1)),\ldots,M_k^{-1}(h(a_r))\right\},$$ we have $h(s_k)\notin \{h(a_1),\ldots,h(a_r)\}$ for all $k$.} Now consider Figure \ref{fig03} and assume $h(s*a_k)=h(s)$. Then we obtain \begin{equation}\label{eqnh} h(s*a_k)=\widehat{a_k}(h(s))=h(s), \end{equation} where $\widehat{a_k}:\mathbb{CP}^1\rightarrow\mathbb{CP}^1$ is the M\"{o}bius transformation \begin{equation}\label{mob} \widehat{a_k}(z)=\frac{(1+\alpha_k\beta_k)z-\alpha_k^2}{\beta_k^2 z+(1-\alpha_k\beta_k)} \end{equation} of $a_k=\left(\begin{array}{cc}\alpha_k &\beta_k\end{array}\right)$. Then (\ref{eqnh}) implies that $h(s)$ is the fixed point of $\widehat{a_k}$, which means $h(a_k)=h(s)$, and this contradicts (\ref{exi}). \end{proof} We remark that Lemma \ref{lem} holds for any choice of $h(s_1)\in\mathbb{CP}^1$ with only finitely many exceptions. Therefore, if we want to find a region-coloring explicitly, we first choose $h(s_1)\notin\{h(a_1),\ldots,h(a_r)\}$ and then decide $h(s_2), \ldots, h(s_n)$ using \begin{equation}\label{app1} h(s_1*a)=\widehat{a}(h(s_1)),~h(s_1*^{-1} a)=\widehat{a}^{-1}(h(s_1)). \end{equation} If this choice does not satisfy Lemma \ref{lem}, then we change $h(s_1)$ and repeat the process. This process is very easy and ends in finitely many steps. Once a proper $h(s_1)$ is chosen, we can easily extend the value $h(s_1)$ to a region-color $s_1$ and find a proper region-coloring $\{s_1,\ldots,s_n\}$. This observation implies the following corollary. \begin{corollary}\label{cor32} Consider a sequence of diagrams $D_1,\ldots,D_m$, where each $D_{k+1}$ ($k=1,\ldots,m-1$) is obtained from $D_k$ by applying one of the Reidemeister moves in Figure \ref{pic02} once. Also assume that the arc-colorings of $D_1,\ldots,D_m$ are given by a certain boundary-parabolic representation $\rho:\pi_1(L)\rightarrow {\rm PSL}(2,\mathbb{C})$. 
(Note that a region-coloring of $D_1$ determines the region-colorings of $D_2,\ldots,D_m$ uniquely.) Then there exists a region-coloring of $D_1$ satisfying Lemma \ref{lem} for all the region-colorings of $D_1,\ldots,D_m$. \end{corollary} \begin{proof} Let $s_1$ be the region-color of the unbounded region of $D_1,\ldots,D_m$. For each $D_k$, the number of values of $h(s_1)\in\mathbb{CP}^1$ that do not satisfy Lemma \ref{lem} is finite. Therefore, we can choose $h(s_1)$ so that Lemma \ref{lem} holds for all $D_1,\ldots,D_m$. By extending $h(s_1)$ to a region-color $s_1$, we can uniquely determine the region-colorings of $D_1,\ldots,D_m$ satisfying Lemma \ref{lem}. \end{proof} The arc-coloring induced by $\rho$ together with the region-coloring satisfying Lemma \ref{lem} is called {\it the shadow-coloring induced by} $\rho$. We choose an element $p\in\mathcal{P}$ so that \begin{equation}\label{p} h(p)\notin\{h(a_1),\ldots,h(a_r), h(s_1),\ldots,h(s_n)\}. \end{equation} The geometric shape of the five-term triangulation in Section \ref{sec2} will be determined in the next section by the shadow-coloring induced by $\rho$ and $p$. From now on, we fix the representatives of the shadow-colors in $\mathbb{C}^2\backslash\{0\}$. As mentioned in \cite{Cho14a}, the representatives of some arc-colors may satisfy (\ref{ope}) only up to sign, in other words, $a_m=\pm (a_l*a_k)$. However, the representatives of the region-colors are uniquely determined due to the fact that $s*(\pm a)=s*a$ for any region-color $s$ and any arc-color $a$. For $a=\left(\begin{array}{cc}\alpha_1 &\alpha_2\end{array}\right)$ and $ b=\left(\begin{array}{cc}\beta_1 &\beta_2\end{array}\right)$ in $\mathbb{C}^2\backslash\{0\}$, we define the {\it determinant} $\det(a,b)$ by \begin{equation*} \det(a,b):=\det\left(\begin{array}{cc}\alpha_1 & \alpha_2 \\\beta_1 & \beta_2\end{array}\right)=\alpha_1 \beta_2-\alpha_2 \beta_1. 
\end{equation*} Then the determinant satisfies $\det(a*c,b*c)=\det(a,b)$ for any $a,b,c\in\mathbb{C}^2\backslash\{0\}$. Furthermore, for $v_0,\ldots,v_3\in\mathbb{C}^2\backslash \{0\}$, the cross-ratio can be expressed using the determinant by $$[h(v_0),h(v_1),h(v_2),h(v_3)]=\frac{\det(v_0,v_3)\det(v_1,v_2)}{\det(v_0,v_2)\det(v_1,v_3)}.$$ (For the proofs, see Section 2 of \cite{Cho14a}.) \subsection{Geometric shape of the five-term triangulation}\label{sec22} Note that the five-term triangulation was already defined in Section \ref{sec2}. Consider the crossings in Figure \ref{fig04} with the shadow-colorings induced by $\rho$, and let $w_a,\ldots,w_d$ be the variables assigned to the regions of $D$. \begin{figure} \caption{Crossings with shadow-colorings and the region variables} \label{fig04} \end{figure} We place tetrahedra at each crossing of $D$ and assign coordinates to them as in Figure \ref{fig06} so as to make them hyperbolic ideal tetrahedra in the upper half-space model of the hyperbolic 3-space $\mathbb{H}^3$. \begin{figure} \caption{Five-term triangulation at the crossing in Figure \ref{fig04}} \label{fig06} \end{figure} \begin{proposition}[Proposition 2.3 of \cite{Cho14c}]\label{pro} All the tetrahedra in Figure \ref{fig06} are non-degenerate. \end{proposition} According to Section 2 of \cite{Kabaya12} and the proof of Theorem 5 of \cite{Kabaya14}, the five-term triangulation defined by Figure \ref{fig06} induces the given representation\footnote{The triangulation of \cite{Kabaya12} is different from ours. However, the fundamental domain obtained by the five-term triangulation coincides with that of \cite{Kabaya12}, so it induces the same representation. (Our triangulation is obtained by choosing a different subdivision of the same fundamental domain. See Section 2.2 of \cite{Cho14c} for details.)} $\rho:\pi_1(L)\rightarrow{\rm PSL}(2,\mathbb{C})$ and the shape parameters of this triangulation satisfy the gluing equations of all edges. 
(The face-pairing maps are the isomorphisms induced by the M\"{o}bius transformations of ${a_1},\ldots,a_r\in{\rm PSL}(2,\mathbb{C})$. {Note that this construction is based on the construction method developed in \cite{Neumann99} and \cite{Zickert09}.}) Furthermore, the representation $\rho$ is boundary-parabolic, which implies that the shape parameters satisfy the hyperbolicity equations of the five-term triangulation. \subsection{Formula of the solution ${\bold w}^{(0)}$}\label{sec33} Consider the crossings in Figure \ref{fig04} and the tetrahedra in Figure \ref{fig06}. For the positive crossing, we assign shape parameters to the edges as follows: \begin{itemize} \item $\displaystyle\frac{w_d}{w_a}$ to $(h(a_k), h(s*a_k))$ of $(h(p*a_k),h(p),h(a_k), h(s*a_k))$, \item $\displaystyle\frac{w_b}{w_c}$ to $(h(a_k), h((s*a_l)*a_k))$ of $-(h(p*a_k),h(p),h(a_k), h((s*a_l)*a_k))$, \item $\displaystyle\frac{w_b}{w_a}$ to $(h(p), h(a_l*a_k))$ of $(h(p), h(a_l*a_k),h(s*a_k), h((s*a_l)*a_k))$ and \item $\displaystyle\frac{w_d}{w_c}$ to $(h(p), h(a_l))$ of $-(h(p), h(a_l),h(s), h(s*a_l))$, respectively. \end{itemize} On the other hand, for the negative crossing, we assign shape parameters to the edges as follows: \begin{itemize} \item $\displaystyle\frac{w_a}{w_b}$ to $(h(a_k), h((s*a_l)*a_k))$ of $-(h(p),h(p*a_k),h(a_k), h((s*a_l)*a_k))$, \item $\displaystyle\frac{w_c}{w_d}$ to $(h(a_k), h(s*a_k))$ of $(h(p),h(p*a_k),h(a_k), h(s*a_k))$, \item $\displaystyle\frac{w_c}{w_b}$ to $(h(p), h(a_l*a_k))$ of $(h(p), h(a_l*a_k), h((s*a_l)*a_k),h(s*a_k))$ and \item $\displaystyle\frac{w_a}{w_d}$ to $(h(p), h(a_l))$ of $-(h(p), h(a_l),h(s*a_l), h(s))$, respectively. \end{itemize} Note that these assignments coincide with the ones defined in Figure \ref{fig7}. \begin{proposition}[Theorem 1.1 of \cite{Cho14c}]\label{pro2} For a region of $D$ with region-color $s_k$ and region-variable $w_k$, we define \begin{equation}\label{main} w_k^{(0)}:=\det(p,s_k). 
\end{equation} Then $w_k^{(0)}\not= 0$ and ${\bold w^{(0)}}=(w_1^{(0)},\ldots,w_n^{(0)})$ is an {essential} solution of the hyperbolicity equations in $\mathcal{I}$ of (\ref{defH}). Furthermore, the solution satisfies $\rho_{\bold w^{(0)}}=\rho$ up to conjugation and \begin{equation}\label{volume} W_0(\bold w^{(0)})\equiv i({\rm vol}(\rho)+i\,{\rm cs}(\rho))~~({\rm mod}~\pi^2). \end{equation} \end{proposition} \begin{proof} The first property $w_k^{(0)}\not= 0$ is trivial from the definition of $p$ in (\ref{p}). From the discussion below Proposition \ref{pro}, the shape parameters of the five-term triangulation defined by Figure \ref{fig06} satisfy the hyperbolicity equations and the fundamental domain induces the boundary-parabolic representation $\rho$. On the other hand, direct calculation shows that the values $w_k^{(0)}$ defined in (\ref{main}) determine the same shape parameters of the five-term triangulation defined by Figure \ref{fig06}. Specifically, for the first two cases of the positive crossing, the shape parameters assigned to the edges $(h(a_k), h(s*a_k))$ and $(h(a_k), h((s*a_l)*a_k))$ are the cross-ratios \begin{eqnarray*} [h(p*a_k),h(p),h(a_k), h(s*a_k)]&=&\frac{\det(p*a_k,s*a_k)\det(p,a_k)}{\det(p*a_k,a_k)\det(p,s*a_k)}\\ &=&\frac{\det(p,s)\det(p,a_k)}{\det(p,a_k)\det(p,s*a_k)}=\frac{w_d^{(0)}}{w_a^{(0)}},\\ {[h}(p*a_k),h(p),h(a_k), h((s*a_l)*a_k)]^{-1} &=&\frac{\det(p*a_k,a_k)\det(p,(s*a_l)*a_k)}{\det(p*a_k,(s*a_l)*a_k)\det(p,a_k)}\\ &=&\frac{\det(p,a_k)\det(p,(s*a_l)*a_k)}{\det(p,s*a_l)\det(p,a_k)}=\frac{w_b^{(0)}}{w_c^{(0)}}, \end{eqnarray*} respectively, and all the other cases can be verified in the same way. From Proposition \ref{pro21}, we conclude that ${\bold w^{(0)}}=(w_1^{(0)},\ldots,w_n^{(0)})$ is an {essential} solution of $\mathcal{I}$. Finally, the identity (\ref{volume}) was already proved in Theorem 1.2 of \cite{Cho13c}. 
\end{proof} {In Appendix \ref{app}, we will show that any {essential} solution of $\mathcal{I}$ can be constructed from a certain shadow-coloring.} \section{Reidemeister transformations on the solution}\label{sec4} In this section, we show how the solution ${\bold w^{(0)}}$ of $\mathcal{I}$ defined in Proposition \ref{pro2} changes under the Reidemeister moves. We assume all the region-colorings in this and later sections satisfy Corollary \ref{cor32} {so that the original and the transformed solutions are both essential}. First, we introduce a very simple but useful lemma. {Recall that, according to Proposition \ref{pro21}, the set $\mathcal{I}$ defined in (\ref{defH}) is the set of the hyperbolicity equations.} The following lemma shows that the hyperbolicity equations do not change under a change of orientation. \begin{figure} \caption{Change of orientation} \label{fig10} \end{figure} \begin{lemma}\label{lem40} For the potential functions $W^j(w_a,w_b,w_c,w_d)$ and $W^{j'}(w_a,w_b,w_c,w_d)$ of Figure \ref{fig10}(a) and (b), respectively, we have $$\exp(w_k\frac{\partial W^j}{\partial w_k})=\exp(w_k\frac{\partial W^{j'}}{\partial w_k}),$$ for $k=a,b,c,d$. \end{lemma} \begin{proof} It is easily verified by direct calculation. For example, $$\exp(w_a\frac{\partial W^j}{\partial w_a})=\frac{(w_b-w_a)(w_d-w_a)}{w_bw_d-w_cw_a} =\exp(w_a\frac{\partial W^{j'}}{\partial w_a}).$$ \end{proof} \subsection{Reidemeister 1st move} Consider the Reidemeister 1st moves in Figure \ref{LocalMove1}. Let $\alpha\in\mathcal{P}$ be the arc-color, $s,s*\alpha,(s*\alpha)*\alpha\in\mathcal{P}$ be the region-colors and $w_a,w_b,w_c$ be the variables of the potential function. Then, by (\ref{main}), \begin{equation}\label{eq41} w_a^{(0)}=\det(p,s),~w_b^{(0)}=\det(p,s*\alpha), ~w_c^{(0)}=\det(p,(s*\alpha)*\alpha). 
\end{equation} \begin{figure} \caption{First Reidemeister moves} \label{LocalMove1} \end{figure} \begin{lemma}\label{LemR1} The values $w_a^{(0)},w_b^{(0)},w_c^{(0)}$ defined in (\ref{eq41}) satisfy $$w_c^{(0)}=2w_b^{(0)}-w_a^{(0)}.$$ \end{lemma} \begin{proof} Using the identification (\ref{matrixcc}), let \begin{equation*} \alpha=\left(\begin{array}{cc}\alpha_1 & \alpha_2\end{array}\right) \longleftrightarrow A=\left(\begin{array}{cc}1+\alpha_1\alpha_2 &\alpha_2^2 \\-\alpha_1^2 & 1-\alpha_1\alpha_2\end{array}\right). \end{equation*} Then $s*\alpha=sA\in\mathcal{P}$ and $(s*\alpha)*\alpha=sA^2\in\mathcal{P}$ hold by the definition of the operation $*$. Furthermore, since ${\rm tr}\,A=2$ and $\det A=1$, the Cayley-Hamilton theorem shows that the matrix $A$ satisfies $$A^2-2A+I=0,$$ where $I$ is the $2\times 2$ identity matrix. Using these, we obtain $$w_c^{(0)}=\det(p,(s*\alpha)*\alpha)=\det(p,sA^2)=2\det(p,sA)-\det(p,sI)=2w_b^{(0)}-w_a^{(0)}.$$ \end{proof} \subsection{Reidemeister 2nd moves} Consider the Reidemeister 2nd moves in Figure \ref{LocalMove2}. Let $\alpha$, $\beta$, $\alpha*\beta\in\mathcal{P}$ be the arc-colors, $s$, $s*\alpha$, $s*\beta$, $(s*\alpha)*\beta\in\mathcal{P}$ be the region-colors and $w_a,\ldots,w_e$ be the variables of the potential function. Then, by (\ref{main}), \begin{equation*} w_a^{(0)}=\det(p,s*\beta),~w_b^{(0)}=w_d^{(0)}=\det(p,(s*\alpha)*\beta), ~w_c^{(0)}=\det(p,s*\alpha),~w_e^{(0)}=\det(p,s) \end{equation*} for the case of the $R2$ move in Figure \ref{LocalMove2}(a), and \begin{equation*} w_a^{(0)}=\det(p,s*\alpha),~w_b^{(0)}=w_d^{(0)}=\det(p,s), ~w_c^{(0)}=\det(p,s*\beta),~w_e^{(0)}=\det(p,(s*\alpha)*\beta) \end{equation*} for the case of the $R2'$ move in Figure \ref{LocalMove2}(b). \begin{figure} \caption{Second Reidemeister moves} \label{LocalMove2} \end{figure} \begin{lemma}\label{LemR2} Let $T(W)(...,w_a,w_b,w_c,w_d,w_e,...)$ be the potential function (\ref{R2a}) (or (\ref{R2c})) of the diagram in Figure \ref{LocalMove2}(a) (or (b)) after applying the $R2$ (or $R2'$) move. 
Then $w_d^{(0)}=w_b^{(0)}$ and $w_e^{(0)}$ is uniquely determined from the values of the parameters around the region of $w_b$ by the equation \begin{equation*} \exp(w_b\frac{\partial T(W)}{\partial w_b})=1. \end{equation*} \end{lemma} \begin{proof} First, $$\exp(w_e\frac{\partial T(W)}{\partial w_e})=\frac{w_aw_c-w_bw_e}{w_aw_c-w_dw_e}=1$$ yields $w_d^{(0)}=w_b^{(0)}$. Consider the case of Figure \ref{LocalMove2}(a). The variable $w_e$ of the potential function $T(W)$ appears only inside the function $V_{R2}(w_a, w_b, w_c, w_d, w_e)$ defined in Section \ref{sec12}, and direct calculation shows $$\exp(w_b\frac{\partial V_{R2}}{\partial w_b}) =\frac{w_a w_c-w_b w_e}{(w_c-w_b)(w_a-w_b)}.$$ Therefore, the equation $\exp(w_b\frac{\partial T(W)}{\partial w_b})=1$ is linear in $w_e$ and determines $w_e^{(0)}$ uniquely. The case of Figure \ref{LocalMove2}(b) follows immediately from Lemma \ref{lem40} and the above. \end{proof} Note that the equation $\exp(w_d\frac{\partial T(W)}{\partial w_d})=1$ also determines the same value $w_e^{(0)}$. \subsection{Reidemeister 3rd move} Consider the Reidemeister 3rd move in Figure \ref{LocalMove3}. Let $\alpha$, $\beta$, $\gamma$, $\beta*\gamma$, $(\alpha*\beta)*\gamma=(\alpha*\gamma)*(\beta*\gamma)\in\mathcal{P}$ be the arc-colors, $s$, $s*\alpha$, $s*\beta$, $s*\gamma$, $(s*\alpha)*\beta$, $(s*\alpha)*\gamma$, $(s*\beta)*\gamma$, $((s*\alpha)*\beta)*\gamma\in\mathcal{P}$ be the region-colors and $w_a,\ldots,w_g,w_h$ be the variables of the potential function.
Then, by (\ref{main}), \begin{align} w_a^{(0)}=\det(p,s),~w_b^{(0)}=\det(p,s*\gamma),~w_c^{(0)}=\det(p,(s*\beta)*\gamma),\label{eq43}\\ w_d^{(0)}=\det(p,((s*\alpha)*\beta)*\gamma),~w_e^{(0)}=\det(p,(s*\alpha)*\beta),\nonumber\\ w_f^{(0)}=\det(p,s*\alpha),~w_g^{(0)}=\det(p,s*\beta),~w_h^{(0)}=\det(p,(s*\alpha)*\gamma).\nonumber \end{align} \begin{figure} \caption{Third Reidemeister move} \label{LocalMove3} \end{figure} \begin{lemma}\label{LemR3} The values $w_a^{(0)},\ldots,w_h^{(0)}$ defined in (\ref{eq43}) satisfy $$w_d^{(0)}w_g^{(0)}-w_c^{(0)}w_e^{(0)}=w_a^{(0)}w_h^{(0)}-w_b^{(0)}w_f^{(0)}.$$ \end{lemma} \begin{proof} Using the identification (\ref{matrixcc}), let $$\alpha\leftrightarrow A,~ \beta\leftrightarrow B,~\gamma\leftrightarrow C,$$ with $$A=\left(\begin{array}{cc}1+a_1 a_2 &a_2^2 \\-a_1^2 & 1-a_1a_2\end{array}\right), B=\left(\begin{array}{cc}1+b_1 b_2 &b_2^2 \\-b_1^2& 1-b_1b_2\end{array}\right), C=\left(\begin{array}{cc}1+c_1 c_2 & c_2^2\\ -c_1^2& 1-c_1c_2\end{array}\right).$$ Also, put $$p=\left(\begin{array}{cc}p_1 &p_2\end{array}\right)\text{ and } s=\left(\begin{array}{cc}s_1 &s_2\end{array}\right).$$ Then direct calculation shows the following identity: \begin{align*} &\det(p,sABC)\det(p,sB)-\det(p,sBC)\det(p,sAB)\\ &=-(c_2p_1-c_1p_2)^2(a_2s_1-a_1s_2)^2\\ &=\det(p,s)\det(p,sAC)-\det(p,sC)\det(p,sA). \end{align*} (Although this identity looks elementary, the authors could not find any proof other than direct calculation.) Applying (\ref{eq43}) to this identity proves the lemma. \end{proof} \section{Orientation change and the mirror image}\label{sec5} The proofs of the relations of solutions in Lemmas \ref{LemR1}-\ref{LemR3} used the orientation of the link diagram. However, we can show that the same relations still hold for any choice of orientations and for the mirror images. (Exact statements will appear below.) These results are very useful when we consider actual examples because they reduce the number of Reidemeister moves to be checked.
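The determinant identity in the proof of Lemma \ref{LemR3} can also be checked by computer algebra. The following sketch is not part of the original argument; it assumes the convention $\det(p,s)=p_1s_2-p_2s_1$ for two row vectors and the row-vector action $s\mapsto sA$ of the matrices assigned to arc-colors, and verifies the identity symbolically with sympy.

```python
# Symbolic verification of the determinant identity used in the proof of
# Lemma LemR3, assuming det(p, s) = p1*s2 - p2*s1 and the row-vector
# action s -> s A of the matrices assigned to arc-colors.
import sympy as sp

a1, a2, b1, b2, c1, c2, p1, p2, s1, s2 = sp.symbols('a1 a2 b1 b2 c1 c2 p1 p2 s1 s2')

def arc_matrix(x1, x2):
    # Matrix corresponding to the arc-color (x1, x2) under the identification (matrixcc)
    return sp.Matrix([[1 + x1*x2, x2**2], [-x1**2, 1 - x1*x2]])

A, B, C = arc_matrix(a1, a2), arc_matrix(b1, b2), arc_matrix(c1, c2)
p, s = sp.Matrix([[p1, p2]]), sp.Matrix([[s1, s2]])

def det(u, v):
    # Determinant of the 2x2 matrix with rows u and v
    return u[0]*v[1] - u[1]*v[0]

lhs = det(p, s*A*B*C)*det(p, s*B) - det(p, s*B*C)*det(p, s*A*B)
mid = -(c2*p1 - c1*p2)**2 * (a2*s1 - a1*s2)**2
rhs = det(p, s)*det(p, s*A*C) - det(p, s*C)*det(p, s*A)

# Both equalities hold as polynomial identities in all ten variables.
assert sp.expand(lhs - mid) == 0 and sp.expand(rhs - mid) == 0
```

Since every quantity is polynomial in the ten variables, `sp.expand` of the differences suffices; no numerical evaluation is needed.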
\begin{lemma}[Uniqueness of the solution]\label{uniquelemma} Let $\bold{w}=(w_1^{(0)},\ldots,w_n^{(0)})$ be the solution of the hyperbolicity equations obtained by the shadow-coloring induced by $\rho$. (See Proposition \ref{pro2} for the construction.) After applying one of the Reidemeister moves R1, R1$'$, R3 and (R3)$^{-1}$ to the link diagram once, suppose the new variable $w_{n+1}$ appears. Then the value $w_{n+1}^{(0)}$ making $(w_1^{(0)},\ldots,w_{n+1}^{(0)})$ a solution of the hyperbolicity equations is uniquely determined by the values $w_1^{(0)},\ldots,w_n^{(0)}$. Likewise, if new variables $w_{n+1}$ and $w_{n+2}$ appear after applying the R2 or R2$\,'$ move once, then the values $w_{n+1}^{(0)}$ and $w_{n+2}^{(0)}$ satisfying the hyperbolicity equations are uniquely determined by the values $w_1^{(0)},\ldots,w_n^{(0)}$. \end{lemma} \begin{proof} Note that the main idea already appeared in the proof of Lemma \ref{LemR2}. Let $T(W)$ be the potential function of the link diagram obtained after applying the Reidemeister move once. Now consider Figure \ref{pic02}. In Figure \ref{pic02}(a), the value $w_{c}^{(0)}$ is uniquely determined by the equation $\exp(w_a\frac{\partial T(W)}{\partial w_a})=1$. In Figure \ref{pic02}(b), the equation $\exp(w_e\frac{\partial T(W)}{\partial w_e})=1$ uniquely determines the value $w_{d}^{(0)}=w_{b}^{(0)}$ and $w_{e}^{(0)}$ is uniquely determined by the equation $\exp(w_b\frac{\partial T(W)}{\partial w_b})=1$. In Figure \ref{pic02}(c), the values $w_{g}^{(0)}$ and $w_{h}^{(0)}$ are uniquely determined by the equations $\exp(w_d\frac{\partial T(W)}{\partial w_d})=1$ and $\exp(w_a\frac{\partial T(W)}{\partial w_a})=1$, respectively. \end{proof} { Now we introduce {\it the local orientation change} and show how the arc-color changes.
From (\ref{matrixcc}), we obtain \begin{equation*} \pm i \left(\begin{array}{cc}\alpha &\beta\end{array}\right) \longleftrightarrow\left(\begin{array}{cc}1-\alpha\beta & -\beta^2 \\ \alpha^2& 1+\alpha\beta\end{array}\right) =\left(\begin{array}{cc}1+\alpha\beta & \beta^2 \\ -\alpha^2& 1-\alpha\beta\end{array}\right)^{-1} \end{equation*} and $s*^{-1}(\pm i a_k)=s*a_k.$ Therefore we define the local orientation change as in Figure \ref{orichang}. Note that the arc-color of the arc with reversed orientation changes to $\pm i a_k$, but the region-colors are still well-defined. Therefore, the invariance of the region-colors shows the invariance of the solution under the local orientation change. \begin{figure} \caption{Local orientation change} \label{orichang} \end{figure} } \begin{figure} \caption{Un-oriented Reidemeister moves} \label{pic15} \end{figure} \begin{proposition}\label{unR} Consider the un-oriented Reidemeister moves in Figure \ref{pic15} and a link diagram $D$ containing one of the configurations in Figure \ref{pic15}. Let $(\ldots,w_a^{(0)},w_b^{(0)},\ldots)$ be the solution of the hyperbolicity equations obtained by the shadow-coloring induced by $\rho$. Then, for Figure \ref{pic15}(a), the values of the variables satisfy \begin{equation}\label{eq20} w_c^{(0)}=2w_b^{(0)}-w_a^{(0)}, \end{equation} and, for Figure \ref{pic15}(c), the values satisfy \begin{equation}\label{eq21} w_d^{(0)}w_g^{(0)}-w_c^{(0)}w_e^{(0)}=w_a^{(0)}w_h^{(0)}-w_b^{(0)}w_f^{(0)}. \end{equation} \end{proposition} \begin{proof} { By applying the local orientation change whenever necessary, we can assign the local orientations of Figure \ref{pic02} to the un-oriented diagrams in Figure \ref{pic15}. Then, from the oriented Reidemeister transformations, we obtain the relations (\ref{eq20}) and (\ref{eq21}). The solution of the hyperbolicity equations is invariant under the local orientation change, so the relations are independent of the choice of orientations.
} \end{proof} As a result of Proposition \ref{unR}, we define the {\it un-oriented Reidemeister 1st and 3rd transformations of solutions} by the same formulas as the oriented versions in Definition \ref{def1}. On the other hand, the formula for the oriented Reidemeister 2nd move in Definition \ref{def1} needed an explicit potential function, which depends on the orientation. However, we can formulate the Reidemeister 2nd move without orientations using the following lemma. First, we define the weight of a corner as in Figure \ref{pic16}. \begin{figure} \caption{Weight $x_a^j$ assigned to the corner of the crossing $j$} \label{pic16} \end{figure} \begin{lemma}\label{lem53} For an oriented link diagram $D$, let $W$ be the potential function of $D$ and choose a region $R$ assigned the variable $w_k$. Then, $$\exp(w_k\frac{\partial W}{\partial w_k})=\prod_{j}x_k^j,$$ where $j$ runs over all the crossings adjacent to the region $R$. \end{lemma} \begin{proof} The proof is easily obtained by direct calculation. In the case of Figure \ref{pic01}(a), \begin{align*} &\exp(w_a\frac{\partial W^j}{\partial w_a})=\frac{(w_b-w_a)(w_d-w_a)}{w_bw_d-w_cw_a}=x_a^j, &\exp(w_b\frac{\partial W^j}{\partial w_b})=\frac{w_aw_c-w_dw_b}{(w_a-w_b)(w_c-w_b)}=x_b^j,\\ &\exp(w_c\frac{\partial W^j}{\partial w_c})=\frac{(w_b-w_c)(w_d-w_c)}{w_bw_d-w_aw_c}=x_c^j, &\exp(w_d\frac{\partial W^j}{\partial w_d})=\frac{w_aw_c-w_bw_d}{(w_a-w_d)(w_c-w_d)}=x_d^j, \end{align*} and the case of Figure \ref{pic01}(b) can be obtained from Lemma \ref{lem40}. The main equation is obtained by $$\exp(w_k\frac{\partial W}{\partial w_k})=\exp(w_k\frac{\partial (\sum_j W^j)}{\partial w_k}) =\prod_{j}\exp(w_k\frac{\partial W^j}{\partial w_k})=\prod_{j}x_k^j,$$ where $j$ runs over all the crossings adjacent to the region $R$. (Note that if $j$ is not adjacent to the region $R$, then $w_k\frac{\partial W^j}{\partial w_k}=0$.)
\end{proof} The weight does not depend on the orientation, so we can describe the equation $\exp(w_b\frac{\partial W}{\partial w_b})=1$ in the right-hand side of Figure \ref{pic15}(b) without orientation. This equation determines the value $w_e^{(0)}$ uniquely and it defines {\it the un-oriented Reidemeister 2nd move of the solution}. Now consider a link diagram $D$ and its mirror image\footnote{The mirror image is defined as follows: assume the link is in the $(x,y,z)$-space and the diagram $D$ is obtained by the projection along the $z$-axis to the $(x,y)$-plane. Then $\overline{D}$ is the mirror image of $D$ by the reflection in the $(y,z)$-plane. For an example, see Figure \ref{pic15}(c) and Figure \ref{pic17}.} $\overline{D}$. When the variables $w_a,w_b,\ldots$ are assigned to $D$, we always assume the same variables are assigned to the same mirrored regions. Let $W(w_a,w_b,\ldots)$ be the potential function of $D$. Then, from the definition of the potential functions in Figure \ref{pic01}, the potential function of $\overline{D}$ becomes $-W(w_a,w_b,\ldots)$. (Note that we are using here the invariance of the potential functions in Figure \ref{pic01} under $w_b\leftrightarrow w_d$.) This shows that the hyperbolicity equations in $\mathcal{I}$ are invariant under the mirroring. \begin{lemma}\label{R3mirror} Consider the diagrams in Figure \ref{pic17} and let $(\ldots,w_a^{(0)},\ldots,w_f^{(0)},w_g^{(0)},\ldots)$ and $(\ldots,w_a^{(0)},\ldots,w_f^{(0)},w_h^{(0)},\ldots)$ be the solutions of the hyperbolicity equations obtained by the shadow-coloring induced by $\rho$. Then the values satisfy equation (\ref{eq21}). \end{lemma} \begin{proof} For Figure \ref{pic15}(c), the relation (\ref{eq21}) holds. The hyperbolicity equations are invariant under the mirroring, {so a solution on the diagram $D$ is also a solution on $\overline{D}$.
Therefore, a pair of solutions related by the relation (\ref{eq21}) on Figure \ref{pic15}(c) is also a pair of solutions on Figure \ref{pic17} related by the same relation. From the uniqueness of the solution in Lemma \ref{uniquelemma}, this is the only relation induced by a shadow-coloring.} \end{proof} \begin{figure} \caption{Mirror image of Figure \ref{pic15}(c)} \label{pic17} \end{figure} \begin{proposition} Suppose a diagram $D'$ is obtained from $D$ by applying one un-oriented Reidemeister move, and assume variables $w_1,\ldots,w_n$ are assigned to the regions of $D$ and $w_{n+1}$ is assigned to a newly appeared region of $D'$. Then Lemma \ref{uniquelemma} showed that the value $w_{n+1}^{(0)}$ is uniquely determined from $w_1^{(0)},\ldots, w_n^{(0)}$ by a certain equation. If we consider the mirror image $\overline{D}'$ obtained from $\overline{D}$, then the value $w_{n+1}^{(0)}$ of the mirror image is uniquely determined by the same equation. \end{proposition} \begin{proof} The mirror images of the un-oriented Reidemeister 1st and 2nd moves are just $\pi$-rotations of the original moves, so the same equation holds in each case. The mirror image of the un-oriented Reidemeister 3rd move was already treated in Lemma \ref{R3mirror}. \end{proof} \begin{exa}\label{exa1} Consider Figure \ref{pic17} and let $(\ldots,w_a^{(0)},\ldots,w_f^{(0)},w_g^{(0)},\ldots)$ and $(\ldots,w_a^{(0)},\ldots,w_f^{(0)},w_h^{(0)},\ldots)$ be the solutions of the hyperbolicity equations of the respective diagrams obtained by the shadow-colorings induced by $\rho$. Then the variables satisfy \begin{equation*} w_d^{(0)}w_g^{(0)}-w_c^{(0)}w_e^{(0)}=w_a^{(0)}w_h^{(0)}-w_b^{(0)}w_f^{(0)}, \end{equation*} which is the same equation as (\ref{eq21}).
\end{exa} \section{Examples}\label{sec6} \begin{exa}\label{exa2} Consider the twisting move in Figure \ref{pic18} and let $(\ldots,w_a^{(0)},w_b^{(0)},w_c^{(0)},w_d^{(0)},w_e^{(0)},\ldots)$ and $(\ldots,w_a^{(0)},w_c^{(0)},w_d^{(0)},w_e^{(0)},w_f^{(0)},w_g^{(0)},\ldots)$ be the solutions of the hyperbolicity equations of the respective diagrams obtained by the shadow-colorings induced by $\rho$. \begin{figure} \caption{Twisting move} \label{pic18} \end{figure} If we apply the twisting move from the left to the right, then $w_f^{(0)}$ and $w_g^{(0)}$ are uniquely determined by $$w_f^{(0)}=2w_a^{(0)}-w_b^{(0)}~\text{ and }~w_d^{(0)} w_g^{(0)}-w_c^{(0)} w_e^{(0)}=w_f^{(0)} w_b^{(0)}-(w_a^{(0)})^2,$$ and if we apply it from the right to the left, then $w_b^{(0)}$ is uniquely determined by $$w_b^{(0)}=2w_a^{(0)}-w_f^{(0)}.$$ \end{exa} \begin{proof} By adding a kink to the left-hand side of Figure \ref{pic18}, we obtain Figure \ref{pic19} and the equation $$w_f^{(0)}=2w_a^{(0)}-w_b^{(0)}.$$ \begin{figure} \caption{Adding a kink} \label{pic19} \end{figure} Applying Example \ref{exa1} to Figure \ref{pic19}, we obtain another equation $$w_d^{(0)} w_g^{(0)}-w_c^{(0)} w_e^{(0)}=w_f^{(0)} w_b^{(0)}-(w_a^{(0)})^2.$$ \end{proof} Now we will show the changes of the solution from one figure-eight knot diagram to its mirror image. First, consider Figure \ref{example1}. \begin{figure} \caption{Figure-eight knot $4_1$ with parameters} \label{example1} \end{figure} Let $\rho:\pi_1(4_1)\rightarrow{\rm PSL}(2,\mathbb{C})$ be the boundary-parabolic representation defined by the arc-colors $$a_1=\left(\begin{array}{cc}0 &t\end{array}\right),~a_2=\left(\begin{array}{cc}1 &0\end{array}\right),~ a_3=\left(\begin{array}{cc}-t &1+t\end{array}\right),~a_4=\left(\begin{array}{cc}-t &t\end{array}\right),$$ where $t$ is a solution of $t^2+t+1=0$, and let one region-color be $s_1=\left(\begin{array}{cc}1 &1\end{array}\right)$.
Then the other region-colors become \begin{align*} s_2=\left(\begin{array}{cc}0 &1\end{array}\right),~ s_3=\left(\begin{array}{cc}-t-1 &t+2\end{array}\right),~s_4=\left(\begin{array}{cc}-2t-1 &2t+3\end{array}\right),\\ s_5=\left(\begin{array}{cc}-2t-1 &t+4\end{array}\right),~s_6=\left(\begin{array}{cc}1 &t+2\end{array}\right). \end{align*} The potential function $W(w_1,\ldots,w_6)$ of Figure \ref{example1} is \begin{align*} W=\left\{{\rm Li}_2(\frac{w_1}{w_2})+{\rm Li}_2(\frac{w_1}{w_4})-{\rm Li}_2(\frac{w_1 w_3}{w_2 w_4}) -{\rm Li}_2(\frac{w_2}{w_3})-{\rm Li}_2(\frac{w_4}{w_3})+\frac{\pi^2}{6}-\log\frac{w_2}{w_3}\log\frac{w_4}{w_3}\right\}\\ +\left\{{\rm Li}_2(\frac{w_3}{w_2})+{\rm Li}_2(\frac{w_3}{w_6})-{\rm Li}_2(\frac{w_1 w_3}{w_2 w_6}) -{\rm Li}_2(\frac{w_2}{w_1})-{\rm Li}_2(\frac{w_6}{w_1})+\frac{\pi^2}{6}-\log\frac{w_2}{w_1}\log\frac{w_6}{w_1}\right\}\\ +\left\{-{\rm Li}_2(\frac{w_4}{w_3})-{\rm Li}_2(\frac{w_4}{w_5})+{\rm Li}_2(\frac{w_4 w_6}{w_3 w_5}) +{\rm Li}_2(\frac{w_3}{w_6})+{\rm Li}_2(\frac{w_5}{w_6})-\frac{\pi^2}{6}+\log\frac{w_3}{w_6}\log\frac{w_5}{w_6}\right\}\\ +\left\{-{\rm Li}_2(\frac{w_6}{w_1})-{\rm Li}_2(\frac{w_6}{w_5})+{\rm Li}_2(\frac{w_4 w_6}{w_1 w_5}) +{\rm Li}_2(\frac{w_1}{w_4})+{\rm Li}_2(\frac{w_5}{w_4})-\frac{\pi^2}{6}+\log\frac{w_1}{w_4}\log\frac{w_5}{w_4}\right\}. \end{align*} By putting $p=\left(\begin{array}{cc}2 &1\end{array}\right)$, we obtain \begin{align} w_1^{(0)}=\det(p,s_1)=1,~w_2^{(0)}=\det(p,s_2)=2,~w_3^{(0)}=\det(p,s_3)=3t+5,\label{sol1}\\ w_4^{(0)}=\det(p,s_4)=6t+7,~w_5^{(0)}=\det(p,s_5)=4t+9,~w_6^{(0)}=\det(p,s_6)=2t+3,\nonumber \end{align} and $(w_1^{(0)},\ldots,w_6^{(0)})$ becomes a solution of $\mathcal{I}=\{\exp(w_k\frac{\partial W}{\partial w_k})=1~|~k=1,\ldots,6\}$. 
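The values in (\ref{sol1}) can be checked mechanically. The following snippet is not part of the paper; it assumes the convention $\det(p,s)=p_1s_2-p_2s_1$ and recomputes $w_k^{(0)}=\det(p,s_k)$ with sympy for one root $t$ of $t^2+t+1=0$.

```python
# Check of the solution (sol1): w_k^(0) = det(p, s_k) with p = (2, 1),
# where t is a root of t^2 + t + 1 = 0 and det(p, s) = p1*s2 - p2*s1.
import sympy as sp

t = (-1 - sp.sqrt(3)*sp.I)/2           # one root of t^2 + t + 1 = 0
assert sp.expand(t**2 + t + 1) == 0

p = (2, 1)
s = [(1, 1), (0, 1), (-t - 1, t + 2), (-2*t - 1, 2*t + 3),
     (-2*t - 1, t + 4), (1, t + 2)]     # region-colors s_1, ..., s_6
det = lambda u, v: u[0]*v[1] - u[1]*v[0]

w = [det(p, sk) for sk in s]
expected = [1, 2, 3*t + 5, 6*t + 7, 4*t + 9, 2*t + 3]
assert all(sp.expand(wk - ek) == 0 for wk, ek in zip(w, expected))
```

The same check goes through verbatim for the other root $t=(-1+\sqrt{3}\,i)/2$, since each difference $w_k^{(0)}-\det(p,s_k)$ vanishes identically as a polynomial in $t$.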
Furthermore, we obtain \begin{equation*} W_0(w_1^{(0)},\ldots,w_6^{(0)})\equiv i({\rm vol}(\rho)+i\,{\rm cs}(\rho))~~({\rm mod}~\pi^2), \end{equation*} and numerical calculation verifies it by \begin{equation*} W_0(w_1^{(0)},\ldots,w_6^{(0)})= \left\{\begin{array}{ll}i(2.0299...+0\,i)=i({\rm vol}(4_1)+i\,{\rm cs}(4_1))&\text{ if }t=\frac{-1-\sqrt{3} \,i}{2}, \\ i(-2.0299...+0\,i)=i(-{\rm vol}(4_1)+i\,{\rm cs}(4_1))&\text{ if }t=\frac{-1+\sqrt{3}\,i}{2}. \end{array}\right. \end{equation*} Note that the above example already appeared in Section 3.1 of \cite{Cho14c}. From Theorem \ref{thm1}, we can easily specify the discrete faithful representation by Figure \ref{example1} together with the solution (\ref{sol1}). (The explicit construction of the representation can be done by applying Yoshida's construction of \cite{Tillmann13} to the five-term triangulation defined in Figure \ref{fig7}.) Now we will apply (un-oriented) Reidemeister moves to the solution in (\ref{sol1}). Consider the changes of the figure-eight knot diagrams in Figure \ref{mirror4_1}. \begin{figure} \caption{Changing the figure-eight knot diagram to its mirror image} \label{mirror4_1} \end{figure} Note that Figure \ref{mirror4_1}(c) is obtained by the Reidemeister 3rd move, Figures \ref{mirror4_1}(d)-(e) are obtained by the twisting move defined in Figure \ref{pic18} and Figure \ref{mirror4_1}(g) is obtained by the rotation.
Then the values of the variables are determined by \begin{align*} w_7^{(0)}&=\frac{w_1^{(0)}w_5^{(0)}(w_3^{(0)}-w_4^{(0)})^2-(w_1^{(0)}w_3^{(0)}-w_2^{(0)}w_4^{(0)})(w_3^{(0)}w_5^{(0)}-w_4^{(0)}w_6^{(0)})}{w_4^{(0)}(w_3^{(0)}-w_4^{(0)})^2}\\ &=\frac{6t^2+37t+36}{(3t+2)^2}=-5t-3,\\ w_8^{(0)}&=w_4^{(0)}=6t+7,\\ w_9^{(0)}&=\frac{w_4^{(0)}w_7^{(0)}-w_1^{(0)}w_5^{(0)}+w_2^{(0)}w_6^{(0)}}{w_3^{(0)}}=\frac{-30t^2-53t-24}{3t+5}=-7t-3,\\ w_{10}^{(0)}&=2w_2^{(0)}-w_3^{(0)}=-3t-1,\\ w_{11}^{(0)}&=\frac{w_3^{(0)}w_{10}^{(0)}-(w_2^{(0)})^2+w_1^{(0)}w_9^{(0)}}{w_6^{(0)}}=\frac{-9t^2-25t-12}{2t+3}=-6t-5,\\ w_{12}^{(0)}&=2w_1^{(0)}-w_8^{(0)}=-6t-5=w_{11}^{(0)}. \end{align*} (Here, $w_7^{(0)}$ is calculated by the partial derivative of the potential function with respect to $w_4$.) The potential function of Figure \ref{mirror4_1}(g) becomes $-W(w_1,w_7,w_9,w_{2},w_{10},w_{11})$ and the following numerical calculation \begin{align*} -W_0(w_1^{(0)},w_7^{(0)},w_9^{(0)},w_{2}^{(0)},w_{10}^{(0)},w_{11}^{(0)})&= \left\{\begin{array}{ll}i(2.0299...+0\,i)&\text{ if }t=\frac{-1-\sqrt{3} \,i}{2} \\ i(-2.0299...+0\,i)&\text{ if }t=\frac{-1+\sqrt{3}\,i}{2} \end{array}\right.\\ &=W_0(w_1^{(0)},w_2^{(0)},w_3^{(0)},w_4^{(0)},w_5^{(0)},w_6^{(0)}) \end{align*} confirms Theorem \ref{thm1}. \appendix \section{Shadow-coloring induced by a solution}\label{app} In \cite{Cho14c} and this article, we always start from a given boundary-parabolic representation $\rho:\pi_1(L)\rightarrow{\rm PSL}(2,\mathbb{C})$ and construct a solution $(w_1^{(0)},\ldots,w_n^{(0)})$ of the hyperbolicity equations $\mathcal{I}$ using (\ref{main}). In other words, for any representation $\rho$, we can always construct a solution that induces $\rho$. Therefore a natural question arises: can any essential solution of $\mathcal{I}$ be constructed by the formula (\ref{main}) from a certain shadow-coloring?
This question is important because, if the answer is affirmative, then any essential solution of $\mathcal{I}$ is governed by the Reidemeister transformations. Furthermore, we can characterize the solutions of $\mathcal{I}$ by the choices of certain shadow-colorings. \begin{theorem} For any essential solution $\bold{w}^{(0)}=(w_1^{(0)},\ldots,w_n^{(0)})$ of $\mathcal{I}$, there exist an arc-coloring $\{a_1,\ldots,a_r\}$, a region-coloring $\{s_1,\ldots,s_n\}$ and an element $p\in\mathbb{C}^2\backslash\{0\}$ satisfying \begin{equation*} w_k^{(0)}=\det(p,s_k), \end{equation*} for $k=1,\ldots,n$. \end{theorem} \begin{proof} From the discussion preceding Proposition \ref{pro21}, the solution $\bold{w}^{(0)}$ induces the boundary-parabolic representation $\rho$. After fixing an oriented diagram $D$ of the link $L$, the representation $\rho$ induces a unique arc-coloring $\{a_1,\ldots,a_r\}$ up to conjugation. By using a proper conjugation, we may assume $$\infty\notin\{h(a_1),\ldots,h(a_r)\}.$$ Then we define $p=\left(\begin{array}{cc}1 &0\end{array}\right)$. The main idea of this proof is to show that the following region-coloring $$s_k=w_k^{(0)}\left(\begin{array}{cc}h(s_k) &1\end{array}\right)$$ for $k=1,\ldots,n$, is what we want. To prove this rigorously, assume the regions with $s_1$ and $s_2$ are adjacent, as in Figure \ref{fig22}. \begin{figure} \caption{Defining the region-color $s_1$} \label{fig22} \end{figure} Define $s_1$ by $$s_1=w_1^{(0)}\left(\begin{array}{cc}\left(\alpha_l \beta_l-1+\frac{w_2^{(0)}}{w_1^{(0)}}\right)/\beta_l^2 & 1 \end{array}\right).$$ Then $h(s_1)=\left(\alpha_l \beta_l-1+\frac{w_2^{(0)}}{w_1^{(0)}}\right)/\beta_l^2$. We can determine $h(s_2),\ldots,h(s_n)$ from $h(s_1)$ using the arc-coloring $\{a_1,\ldots,a_r\}$ together with (\ref{app1}), and we denote the region-colors by \begin{equation}\label{app2} s_k=x_k\left(\begin{array}{cc}h(s_k) &1\end{array}\right) \end{equation} for $k=1,\ldots,n$.
(Therefore, $x_1=w_1^{(0)}$ and $x_2,\ldots,x_n$ are uniquely determined.) Then, comparing the second entries of the following matrices $$s_2=x_2\left(\begin{array}{cc}h(s_2) &1 \end{array}\right)=s_1*\alpha_l =x_1\left(\begin{array}{cc}h(s_1) &1 \end{array}\right) \left(\begin{array}{cc}1+\alpha_l\beta_l &\beta_l^2 \\-\alpha_l^2& 1-\alpha_l\beta_l\end{array}\right),$$ we obtain $$x_2=x_1(\beta_l^2 h(s_1)+1-\alpha_l\beta_l)=x_1\frac{w_2^{(0)}}{w_1^{(0)}}=w_2^{(0)}.$$ Now let the developing map induced by the solution $\bold{w}^{(0)}$ be $D_1$, and the one induced by the shadow-coloring $\{a_1,\ldots,a_r,s_1,\ldots,s_n,p\}$ together with (\ref{app2}) be $D_2$. (The definition of the developing map we are using here is {\it Definition 4.10} from Section 4 of \cite{Zickert09}.) Note that $D_1$ can be constructed by gluing tetrahedra with the shape parameters determined by $\bold{w}^{(0)}$, and $D_2$ is constructed explicitly in Figure \ref{fig06}. (The developing map $D_2$ satisfies the condition (4.2) in {\it Proof} of THEOREM 4.11 of \cite{Zickert09}.) To make $D_1=D_2$, consider the five-term triangulation defined in Section \ref{sec2} and see Figure \ref{fig23}. \begin{figure} \caption{Figure \ref{fig22} together with the crossing $j$} \label{fig23} \end{figure} We also consider the octahedron ${\rm A}_j{\rm B}_j{\rm C}_j{\rm D}_j{\rm E}_j{\rm F}_j$ at the crossing $j$ as in Figure \ref{fig7}. (Note that Figure \ref{fig23} is the left-hand side of Figure \ref{fig7}.) The properties $x_1=w_1^{(0)}$ and $x_2=w_2^{(0)}$ imply that, in the case of Figures \ref{fig7}(a) and \ref{fig23}(a), the shape parameter of $\Delta{\rm D}_j{\rm B}_j{\rm F}_j{\rm A}_j$ assigned to ${\rm F}_j{\rm A}_j$ is $\frac{w_1^{(0)}}{w_2^{(0)}}$ and the shape parameter of $\Delta{\rm D}_j{\rm E}_j{\rm A}_j{\rm C}_j$ assigned to ${\rm D}_j{\rm E}_j$ is $\frac{w_2^{(0)}}{w_1^{(0)}}$. These shape parameters coincide with the shape parameters determined by the solution $\bold{w}^{(0)}$.
Hence, for the lifts $\widetilde{{\rm A}}_j$, $\widetilde{{\rm B}}_j$, ..., $\widetilde{{\rm F}}_j$ of the vertices, we can put \begin{equation}\label{app3} D_1(\widetilde{{\rm D}}_j)=D_2(\widetilde{{\rm D}}_j),~D_1(\widetilde{{\rm B}}_j)=D_2(\widetilde{{\rm B}}_j),~ D_1(\widetilde{{\rm F}}_j)=D_2(\widetilde{{\rm F}}_j),~ D_1(\widetilde{{\rm A}}_j)=D_2(\widetilde{{\rm A}}_j) \end{equation} in the case of Figures \ref{fig7}(a) and \ref{fig23}(a), and put \begin{equation}\label{app4} D_1(\widetilde{{\rm D}}_j)=D_2(\widetilde{{\rm D}}_j),~ D_1(\widetilde{{\rm E}}_j)=D_2(\widetilde{{\rm E}}_j),~ D_1(\widetilde{{\rm A}}_j)=D_2(\widetilde{{\rm A}}_j),~ D_1(\widetilde{{\rm C}}_j)=D_2(\widetilde{{\rm C}}_j) \end{equation} in the case of Figures \ref{fig7}(b) and \ref{fig23}(b). Note that the developing maps $D_1$ and $D_2$ are defined from the same representation $\rho$. Therefore, from the uniqueness theorem of the developing map in THEOREM 4.11 of \cite{Zickert09}, $D_1$ and $D_2$ agree on the ideal points corresponding to the {\it nontrivial} ends. (See {\it Definition 4.3} of \cite{Zickert09} for the definitions of the nontrivial and trivial ends.) Furthermore, the five-term triangulation we are using has two trivial ends and we denoted the corresponding points by $\pm\infty$ in Section \ref{sec2}. At the octahedron in Figure \ref{fig7}, the ideal points $\widetilde{{\rm A}}_j$ and $\widetilde{{\rm C}}_j$ correspond to $\infty$ and $\widetilde{{\rm B}}_j$ and $\widetilde{{\rm D}}_j$ correspond to $-\infty$. From (\ref{app3}) and (\ref{app4}), the two developing maps $D_1$ and $D_2$ coincide not only at the points corresponding to the nontrivial ends, but also at the points corresponding to the trivial ends. Therefore, we obtain $D_1=D_2$. The coincidence of the two developing maps implies the shape parameters of the tetrahedra in the five-term triangulation coincide everywhere.
Therefore, from (\ref{main}), we obtain $$\frac{x_k}{x_m}=\frac{w_k^{(0)}}{w_m^{(0)}}$$ for any adjacent regions with $w_k$ and $w_m$. (Note that $\left(\frac{x_k}{x_m}\right)^{\pm 1}$ is the shape parameter determined by the formula (\ref{main}) and the developing map $D_2$.) We already know $x_1=w_1^{(0)}$ and $x_2=w_2^{(0)}$, so we obtain $x_k=w_k^{(0)}$ for $k=1,\ldots,n$, and the shadow-coloring $\{a_1,\ldots,a_r,s_1,\ldots,s_n,p\}$ is what we want. \end{proof} \end{ack} { \begin{flushleft} Busan National University of Education\\ Busan 47503, Republic of Korea\\ E-mail: [email protected]\\ Department of Mathematics, Faculty of Science and Engineering, Waseda University\\ 3-4-1 Ohkubo, Shinjuku-ku, Tokyo 169-8555, Japan\\ E-mail: [email protected]\\ \end{flushleft}} \end{document}
\begin{document} \maketitle \begin{abstract} We show a rigidity result for a K\"ahler potential of the Poincar\'e metric whose differential has constant length. \end{abstract} \section{Introduction} Since the fundamental result of Donnelly-Fefferman~\cite{DF}, the vanishing of the space of $L^2$ harmonic $(p,q)$-forms has been an important research theme in the theory of complex domains. Since M.~Gromov (\cite{Gromov}, see also \cite{Donnelly1994}) introduced the concept of K\"ahler hyperbolicity and connected it to the vanishing theorem, there have been many studies on the K\"ahler hyperbolicity of the Bergman metric, which is a fundamental K\"ahler structure of bounded pseudoconvex domains. A K\"ahler structure $\omega$ is \emph{K\"ahler hyperbolic} if there is a global $1$-form $\eta$ with $d\eta=\omega$ and $\sup\norm{\eta}_\omega<\infty$. In \cite{Donnelly1997}, H. Donnelly showed the K\"ahler hyperbolicity of the Bergman metric on some class of weakly pseudoconvex domains. In particular, for a bounded homogeneous domain $D$ in $\mathbb{C}^n$ with its Bergman metric $\omega_D$, he used a classical result of Gindikin~\cite{Gindikin} to show that $\sup\norm{d\log K_D}_{\omega_D}<\infty$. Here $K_D$ is the Bergman kernel function of $D$, so $\log K_D$ is a canonical potential of $\omega_D$. In their paper \cite{Kai-Ohsawa}, S.~Kai and T.~Ohsawa gave another approach. They proved that every bounded homogeneous domain has a K\"ahler potential of the Bergman metric whose differential has a constant length. \begin{theorem}[Kai-Ohsawa \cite{Kai-Ohsawa}]\label{thm:KO1} For a bounded homogeneous domain $D$ in $\mathbb{C}^n$, there exists a positive real-valued function $\varphi$ on $D$ such that $\log\varphi$ is a K\"ahler potential of the Bergman metric $\omega_D$ and $\norm{d\log\varphi}_{\omega_D}$ is constant.
\end{theorem} This can be obtained from the facts that each homogeneous domain is biholomorphic to a Siegel domain (see \cite{VGP}) and that a homogeneous Siegel domain is affine homogeneous (see \cite{KMO}). More precisely, let us consider a bounded homogeneous domain $D$ in $\mathbb{C}^n$ and a biholomorphism $F:D\to S$ for a Siegel domain $S$. For the Bergman kernel function $K_S$ of $S$, whose logarithm $\log K_S$ is a canonical potential of the Bergman metric $\omega_S$, it is easy to show that $d\log K_S$ has a constant length with respect to $\omega_S$ from the affine homogeneity of $S$ (the group of affine holomorphic automorphisms acts transitively on $S$). Since $\log K_S$ is a K\"ahler potential of $\omega_S$, the transformation formula of the Bergman kernel implies that the pullback $F^*\log K_S=\log K_S\circ F$ is also a K\"ahler potential of $\omega_D$. Using the fact that $F:(D,\omega_D)\to(S,\omega_S)$ is an isometry, we have $\norm{d(F^*\log K_S)}_{\omega_D}=\norm{d\log K_S}_{\omega_S}\circ F$. As the function $\varphi$ in Theorem~\ref{thm:KO1}, we can choose the pullback $K_S\circ F$ of the Bergman kernel function of the Siegel domain. At this juncture, it is natural to ask: \begin{quote} \textit{If there is a K\"ahler potential $\log\varphi$ with a constant $\norm{d\log\varphi}_{\omega_D}$, is it always obtained by the pullback of the Bergman kernel function of the Siegel domain?} \end{quote} The aim of this paper is to discuss this question in the $1$-dimensional case. The only bounded homogeneous domain in $\mathbb{C}$ is the unit disc $\Delta=\{z\in\mathbb{C}:\abs{z}<1\}$ up to biholomorphic equivalence, and the $1$-dimensional counterpart of the Bergman metric, namely a holomorphically invariant hermitian structure, is the Poincar\'e metric. Hence the following main theorem gives a positive answer to the question. \begin{theorem}\label{thm:main thm rough} Let $\omega_\Delta$ be the Poincar\'e metric of the unit disc $\Delta$.
Suppose that there exists a positive real-valued function $\varphi:\Delta\to\mathbb{R}$ such that $\log\varphi$ is a K\"ahler potential of the Poincar\'e metric and $\norm{d\log\varphi}_{\omega_\Delta}$ is constant on $\Delta$. Then $\varphi$ is the pullback of the canonical potential on the half-plane $\mathbf{H}=\{z\in\mathbb{C}: \mathrm{Re}\, z<0\}$. \end{theorem} Note that the $1$-dimensional Siegel domain is just the half-plane. We will introduce the Poincar\'e metric and related notions in Section~\ref{sec:2}. As an application of the main theorem, we can characterize the half-plane by the canonical potential. \begin{corollary}\label{cor:main cor rough} Let $D$ be a simply connected, proper domain in $\mathbb{C}$ with a Poincar\'e metric $\omega_D=i\lambda dz \wedge d\bar z$. If $\norm{d\log \lambda}_{\omega_D}$ is constant on $D$, then $D$ is affinely equivalent to the half-plane $\mathbf{H}=\{z\in\mathbb{C}: \mathrm{Re}\, z<0\}$. \end{corollary} In Section~\ref{sec:2}, we will introduce notions and a concrete version of the main theorem. Then we will study the existence of a nowhere vanishing complete holomorphic vector field which is tangent to a potential whose differential is of constant length (Section~\ref{sec:3}). Using relations between complete holomorphic vector fields and model potentials in Section~\ref{sec:4}, we will prove the theorems. \section{Background materials}\label{sec:2} Let $X$ be a Riemann surface. The Poincar\'e metric of $X$ is a complete hermitian metric with constant Gaussian curvature $-4$. The Poincar\'e metric exists on $X$ if and only if $X$ is a quotient of the unit disc. If $X$ is covered by $\Delta$, the Poincar\'e metric can be induced by the covering map $\pi:\Delta\to X$ and it is uniquely determined. Throughout this paper, the K\"ahler form of the Poincar\'e metric of $X$, denoted by $\omega_X$, stands for the metric also.
When $\omega_X=i\lambda dz\wedge d\bar z$ in a local holomorphic coordinate function $z$, the curvature can be written as \begin{equation*} \kappa=-\frac{2}{\lambda}\spd{}{z}{\bar z} \log \lambda \;. \end{equation*} So the curvature condition $\kappa\equiv -4$ implies that \begin{equation*} \spd{}{z}{\bar z} \log \lambda = 2\lambda \;, \end{equation*} equivalently \begin{equation*} dd^c\log\lambda = 2\omega_X \;, \end{equation*} where $d^c=\frac{i}{2}(\overline\partial-\partial)$. That means the function $\frac{1}{2}\log\lambda$ is a local K\"ahler potential of $\omega_X$. Any other local potential of $\omega_X$ is always of the form $\frac{1}{2}\log\lambda+\log\abs{f}^2$ where $f$ is a local holomorphic function on the domain of $z$. We call $\frac{1}{2}\log\lambda$ the \emph{canonical potential} with respect to the coordinate function $z$. For a domain $D$ in $\mathbb{C}$, the canonical potential of $D$ means the canonical potential with respect to the standard coordinate function of $\mathbb{C}$. Let us consider the Poincar\'e metric $\omega_\Delta$ of the unit disc $\Delta$: \begin{equation*} \omega_\Delta=i\frac{1}{\paren{1-\abs{z}^2}^2}dz\wedge d\bar z = i\lambda_\Delta dz\wedge d\bar z \;. \end{equation*} For the canonical potential of $\Delta$, we compute \begin{equation*} \norm{d\log\lambda_\Delta}_{\omega_\Delta}^2=\norm{\pd{\log\lambda_\Delta}{z}dz+\pd{\log\lambda_\Delta}{\bar z}d\bar z}_{\omega_\Delta}^2 =\pd{\log\lambda_\Delta}{z}\pd{\log\lambda_\Delta}{\bar z}\frac{1}{\lambda_\Delta}=4\abs{z}^2 \;, \end{equation*} so $d\log\lambda_\Delta$ does not have constant length.
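As a sanity check (ours, not part of the original argument), both identities above can be verified symbolically, for instance with Python's \texttt{sympy}, working in real coordinates $z=x+iy$ and Wirtinger derivatives:

```python
# Symbolic verification (illustrative only) of two identities for the
# Poincare metric on the unit disc, lambda = 1/(1-|z|^2)^2:
#   (log lambda)_{z zbar} = 2*lambda   and   ||d log lambda||^2 = 4|z|^2.
import sympy as sp

x, y = sp.symbols('x y', real=True)
r2 = x**2 + y**2                       # |z|^2
lam = 1 / (1 - r2)**2                  # lambda_Delta
loglam = sp.log(lam)

# Wirtinger derivatives: d/dz = (d/dx - i d/dy)/2, d/dzbar = (d/dx + i d/dy)/2
dz = (sp.diff(loglam, x) - sp.I * sp.diff(loglam, y)) / 2
dzbar = (sp.diff(loglam, x) + sp.I * sp.diff(loglam, y)) / 2

# ||d log lambda||^2 = (log lambda)_z (log lambda)_zbar / lambda = 4|z|^2
norm2 = sp.simplify(dz * dzbar / lam)
assert sp.simplify(norm2 - 4 * r2) == 0

# (log lambda)_{z zbar} = (Laplacian)/4 equals 2*lambda
lap = (sp.diff(loglam, x, 2) + sp.diff(loglam, y, 2)) / 4
assert sp.simplify(lap - 2 * lam) == 0
```

The second assertion is exactly the curvature normalization $\kappa\equiv-4$ written in coordinates.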
In the same way as Kai-Ohsawa~\cite{Kai-Ohsawa}, we can obtain a model for $\varphi$ in Theorem~\ref{thm:KO1} for the unit disc, \begin{equation}\label{eqn:model} \varphi_\theta(z)=\frac{\abs{1+e^{i\theta}z}^4}{\paren{1-\abs{z}^2}^2} \quad\text{for $\theta\in\mathbb{R}$,} \end{equation} as a pullback of the canonical potential $\lambda_\mathbf{H}=1/\abs{\mathrm{Re}\, w}^2$ on the left-half plane $\mathbf{H}=\{w:\mathrm{Re}\, w<0\}$ by a Cayley transform (see \eqref{eqn:CT} for instance). The parameter $\theta$ depends on the choice of the Cayley transform. Since $\log\varphi_\theta=\log\lambda_\Delta+\log\abs{1+e^{i\theta}z}^4$, the function $\frac{1}{2}\log\varphi_\theta$ is a K\"ahler potential. Moreover \begin{equation*} \norm{d\log\varphi_\theta}_{\omega_\Delta}^2 \equiv 4 \;. \end{equation*} At this point, we introduce a significant result of Kai-Ohsawa. \begin{theorem}[Kai-Ohsawa \cite{Kai-Ohsawa}]\label{thm:KO2} For a bounded homogeneous domain $D$ in $\mathbb{C}^n$, if there is a K\"ahler potential $\log\psi$ of the Bergman metric $\omega_D$ with constant $\norm{d\log\psi}_{\omega_D}$, then $\norm{d\log\psi}_{\omega_D}=\norm{d\log\varphi}_{\omega_D}$ where $\varphi$ is as in Theorem~\ref{thm:KO1}. \end{theorem} Suppose that a positive real-valued function $\varphi$ on $\Delta$ satisfies $dd^c\log\varphi=2\omega_\Delta$ and $\norm{d\log\varphi}_{\omega_\Delta}^2\equiv c$ for some constant $c$. Theorem~\ref{thm:KO2} implies that $c$ must be $4$. Therefore, we can rewrite Theorem~\ref{thm:main thm rough} as follows. \begin{theorem}\label{thm:main thm} Suppose that there exists a function $\varphi:\Delta\to\mathbb{R}$ satisfying \begin{equation}\label{eqn:basic condition} dd^c\log\varphi=2\omega_\Delta \quad\text{and}\quad \norm{d\log\varphi}_{\omega_\Delta}^2\equiv4 \;. \end{equation} Then $\varphi=r\varphi_\theta$ as in \eqref{eqn:model} for some $r>0$ and $\theta\in\mathbb{R}$.
\end{theorem} Corollary~\ref{cor:main cor rough} can also be written as follows. \begin{corollary}\label{cor:main cor} Let $D$ be a simply connected, proper domain in $\mathbb{C}$ with the Poincar\'e metric $\omega_D=i\lambda dz \wedge d\bar z$. If $\norm{d\log \lambda}_{\omega_D}^2\equiv 4$, then $D$ is affine equivalent to the half-plane $\mathbf{H}=\{z\in\mathbb{C}: \mathrm{Re}\, z<0\}$. \end{corollary} \section{Existence of nowhere vanishing complete holomorphic vector field}\label{sec:3} In this section, we will study the existence of a complete holomorphic tangent vector field on a Riemann surface $X$ which admits a K\"ahler potential of the Poincar\'e metric with a constant length differential. By a holomorphic tangent vector field of a Riemann surface $X$, we mean a holomorphic section $\mathcal{W}$ of the holomorphic tangent bundle $T^{1,0}X$. If the corresponding real tangent vector field $\mathrm{Re}\, \mathcal{W}=\mathcal{W}+\overline{\mathcal{W}}$ is complete, we also say that $\mathcal{W}$ is complete. Thus a complete holomorphic tangent vector field generates a $1$-parameter family of holomorphic transformations. In this section, we will prove the following. \begin{theorem}\label{thm:existence} Let $X$ be a Riemann surface with the Poincar\'e metric $\omega_X$. If there is a function $\varphi:X\to\mathbb{R}$ with \begin{equation}\label{eqn:condition on surface} dd^c\log\varphi = 2\omega_X \quad\text{and}\quad \norm{d\log\varphi}_{\omega_X}^2\equiv 4 \;, \end{equation} then there is a nowhere vanishing complete holomorphic vector field $\mathcal{W}$ such that $(\mathrm{Re}\, \mathcal{W})\varphi\equiv 0$. \end{theorem} \begin{proof} Take a local holomorphic coordinate function $z$ and let $\omega_X=i\lambda dz\wedge d\bar z$.
The equation~\eqref{eqn:condition on surface} can be written as \begin{equation*} \paren{\log\varphi}_{z\bar z} = 2\lambda \quad\text{and}\quad \paren{\log\varphi}_z \paren{\log\varphi}_{\bar z} = 4\lambda \;. \end{equation*} Here, $\paren{\log\varphi}_z=\pd{}{z}\log\varphi$, $\paren{\log\varphi}_{\bar z}=\pd{}{\bar z}\log\varphi$ and $\paren{\log\varphi}_{z\bar z}=\spd{}{z}{\bar z}\log\varphi$. This implies that \begin{align*} \paren{\varphi^{-1/2}}_{z} &= \pd{}{z}\varphi^{-1/2} = -\frac{1}{2}\varphi^{-1/2} \paren{\log\varphi}_{z} \; ; \\ \paren{\varphi^{-1/2}}_{z\bar z} &= \spd{}{z}{\bar z}\varphi^{-1/2} = -\frac{1}{2}\varphi^{-1/2} \paren{\log\varphi}_{z\bar z} +\frac{1}{4}\varphi^{-1/2}\paren{\log\varphi}_z \paren{\log\varphi}_{\bar z} \\ &= -\frac{1}{2}\varphi^{-1/2} \paren{ \paren{\log\varphi}_{z\bar z} -\frac{1}{2}\paren{\log\varphi}_z \paren{\log\varphi}_{\bar z} } \\ &= 0 \; . \end{align*} Thus the function $\varphi^{-1/2}$ is harmonic, so $\paren{\varphi^{-1/2}}_{z}$ is holomorphic. Let us consider the local holomorphic vector field \begin{equation*} \mathcal{W}=\frac{i}{\paren{\varphi^{-1/2}}_z}\pd{}{z} =\frac{-2i\varphi^{3/2}}{\varphi_z}\pd{}{z} =\frac{-2i\varphi^{1/2}}{\paren{\log\varphi}_z}\pd{}{z} \;. \end{equation*} In any other local holomorphic coordinate function $w$, we have \begin{equation*} \mathcal{W}=\frac{i}{\paren{\varphi^{-1/2}}_z}\pd{}{z} =\frac{i}{\paren{\varphi^{-1/2}}_w\pd{w}{z}}\pd{w}{z}\pd{}{w} =\frac{i}{\paren{\varphi^{-1/2}}_w}\pd{}{w} \;, \end{equation*} so $\mathcal{W}$ is globally defined on $X$. Now we will show that $\mathcal{W}$ satisfies the conditions in the theorem.
Since \begin{equation*} \norm{\varphi^{-1/2}\mathcal{W}}_{\omega_X}^2 =\norm{\frac{-2i}{\paren{\log\varphi}_{z}}\pd{}{z} }_{\omega_X}^2 =\frac{4\lambda}{\paren{\log\varphi}_{z}\paren{\log\varphi}_{\bar z}} =1 \;, \end{equation*} the vector field $\varphi^{-1/2}\mathcal{W}$ has unit length with respect to the complete metric $\omega_X$, so the corresponding real vector field $\mathrm{Re}\, \varphi^{-1/2}\mathcal{W}=\varphi^{-1/2}(\mathcal{W}+\overline{\mathcal{W}})$ is complete. Moreover \begin{equation*} (\mathrm{Re}\, \mathcal{W})\varphi = \frac{-2i\varphi^{3/2}}{\varphi_{z}}\varphi_z +\frac{2i\varphi^{3/2}}{\varphi_{\bar z}}\varphi_{\bar z} = 0 \;. \end{equation*} Hence it remains to show the completeness of $\mathcal{W}$. Take any integral curve $\gamma:\mathbb{R}\to X$ of $\varphi^{-1/2}\mathrm{Re}\,\mathcal{W}$. It satisfies \begin{equation*} \paren{\varphi^{-1/2}(\mathrm{Re}\, \mathcal{W})}\circ \gamma = \dot\gamma \;, \end{equation*} equivalently \begin{equation*} (\mathrm{Re}\, \mathcal{W})\circ \gamma = \paren{\varphi^{1/2}\circ\gamma} \dot\gamma \;. \end{equation*} The condition $(\mathrm{Re}\, \mathcal{W})\varphi\equiv 0$, equivalently $\varphi^{-1/2}(\mathrm{Re}\, \mathcal{W})\varphi\equiv 0$, implies that the curve $\gamma$ lies on a level set of $\varphi$, so $\varphi^{1/2}\circ\gamma\equiv C$ for some constant $C$. The curve $\sigma:\mathbb{R}\to X$ defined by $\sigma(t)=\gamma(Ct)$ satisfies \begin{equation*} (\mathrm{Re}\, \mathcal{W})\circ \sigma (t) = (\mathrm{Re}\, \mathcal{W})(\gamma(Ct)) = C\dot\gamma(Ct) =\dot\sigma(t) \;. \end{equation*} This means that $\sigma:\mathbb{R}\to X$ is an integral curve of $\mathrm{Re}\, \mathcal{W}$; therefore $\mathrm{Re}\, \mathcal{W}$ is complete. This completes the proof.
\end{proof} \section{Complete holomorphic vector fields on the unit disc}\label{sec:4} In this section, we introduce parabolic and hyperbolic vector fields on the unit disc and discuss their relation to the model potential \begin{equation}\label{eqn:model0} \varphi_0=\frac{\abs{1+z}^4}{\paren{1-\abs{z}^2}^2} \;, \end{equation} which is $\varphi_\theta$ in \eqref{eqn:model} with $\theta=0$. \subsection{Nowhere vanishing complete holomorphic vector fields from the left-half plane} On the left-half plane $\mathbf{H}=\{w\in\mathbb{C}:\mathrm{Re}\, w<0\}$, there are two kinds of affine transformations: \begin{equation*} \mathcal{D}_s(w)=e^{2s} w \quad\text{and}\quad \mathcal{T}_s(w)=w+2is \end{equation*} for $s\in\mathbb{R}$. Their infinitesimal generators are \begin{equation*} \mathcal{D}= 2w\pd{}{w} \quad\text{and}\quad \mathcal{T}=2i\pd{}{w} \;, \end{equation*} which are nowhere vanishing complete holomorphic vector fields of $\mathbf{H}$. Note that \begin{equation}\label{eqn:relation} (\mathcal{T}_s)_*\mathcal{D} = 2(w-2is)\pd{}{w} = \mathcal{D}-2s\mathcal{T} \quad\text{and}\quad (\mathcal{T}_s)_*\mathcal{T} =2i\pd{}{w}= \mathcal{T} \end{equation} for any $s$. For the Cayley transform $F:\mathbf{H}\to\Delta$ defined by \begin{equation}\label{eqn:CT} \begin{aligned} F:\mathbf{H}&\longrightarrow\Delta \\ w&\longmapsto z=\frac{1+w}{1-w} \;, \end{aligned} \end{equation} we can take two nowhere vanishing complete holomorphic vector fields of $\Delta$: \begin{equation*} \mathcal{H}=F_*(\mathcal{D})=(z^2-1)\pd{}{z} \end{equation*} and \begin{equation*} \mathcal{P}=F_*(\mathcal{T})=i(z+1)^2\pd{}{z}\;. \end{equation*} When we define $\mathcal{H}_s=F\circ\mathcal{D}_s\circ F^{-1}$ and $\mathcal{P}_s=F\circ\mathcal{T}_s\circ F^{-1}$, the vector fields $\mathcal{H}$ and $\mathcal{P}$ are the infinitesimal generators of $\mathcal{H}_s$ and $\mathcal{P}_s$, respectively.
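The two pushforward formulas above can also be checked mechanically. The following sketch (our own verification, assuming Python with \texttt{sympy}; not part of the paper) confirms $F_*(\mathcal{D})=(z^2-1)\pd{}{z}$ and $F_*(\mathcal{T})=i(z+1)^2\pd{}{z}$:

```python
# Pushforward of a(w) d/dw under F is a(w(z)) * F'(w(z)) d/dz, where
# F(w) = (1+w)/(1-w) is the Cayley transform and w(z) its inverse.
import sympy as sp

w, z = sp.symbols('w z')
F = (1 + w) / (1 - w)          # Cayley transform H -> Delta
w_of_z = (z - 1) / (z + 1)     # inverse of F
Fprime = sp.diff(F, w)         # F'(w) = 2/(1-w)^2

def pushforward(coeff):
    # coefficient of F_*(coeff(w) d/dw) in the z coordinate
    return sp.simplify((coeff * Fprime).subs(w, w_of_z))

H_coeff = pushforward(2 * w)        # D = 2w d/dw
P_coeff = pushforward(2 * sp.I)     # T = 2i d/dw

assert sp.simplify(H_coeff - (z**2 - 1)) == 0        # F_* D = (z^2-1) d/dz
assert sp.simplify(P_coeff - sp.I * (z + 1)**2) == 0  # F_* T = i(z+1)^2 d/dz
```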
Moreover Equation~\eqref{eqn:relation} can be written as \begin{equation}\label{eqn:relation'} (\mathcal{P}_s)_*\mathcal{H} = \mathcal{H}-2s\mathcal{P} \quad\text{and}\quad (\mathcal{P}_s)_*\mathcal{P}= \mathcal{P} \;. \end{equation} There is another complete holomorphic vector field $\mathcal{R}=iz\pdl{}{z}$ generating the rotational symmetry \begin{equation}\label{eqn:rotation} \mathcal{R}_s(z)=e^{is}z \;. \end{equation} Since the holomorphic automorphism group of $\Delta$ is a real $3$-dimensional connected Lie group (see \cite{Cartan,Narasimhan}), we can conclude that any complete holomorphic vector field is a real linear combination of $\mathcal{H}$, $\mathcal{P}$ and $\mathcal{R}$. Since $\mathcal{H}(-1)=\mathcal{P}(-1)=0$ and $\mathcal{R}(-1)=-i\pdl{}{z}$, we have \begin{lemma}\label{lem:lc} If $\mathcal{W}$ is a complete holomorphic vector field of $\Delta$ satisfying $\mathcal{W}(-1)=0$, then there exist $a,b\in\mathbb{R}$ with $\mathcal{W}=a\mathcal{H}+b\mathcal{P}$. \end{lemma} \subsection{Hyperbolic vector fields} In this subsection, we will show that the hyperbolic vector field $\mathcal{H}$ cannot be tangent to a K\"ahler potential with a constant length differential. By a simple computation, \begin{equation*} \mathcal{H}(\log\varphi_0) = (z^2-1)\frac{2(1+\bar z)}{(1+z)(1-\abs{z}^2)} =2\frac{\abs{z}^2+z-\bar z-1}{(1-\abs{z}^2)} \;, \end{equation*} we get \begin{equation*} (\mathrm{Re}\,\mathcal{H})\log\varphi_0 \equiv-4 \;. \end{equation*} That means $\mathrm{Re}\,\mathcal{H}$ is nowhere tangent to the level sets of $\varphi_0$. Moreover we have \begin{lemma}\label{lem:hyperbolic} Let $\varphi:\Delta\to\mathbb{R}$ with $dd^c \log\varphi=2\omega_\Delta$ and $\norm{d\log\varphi}_{\omega_\Delta}^2\equiv 4$. If $(\mathrm{Re}\,\mathcal{H}) \log\varphi\equiv c$ for some $c$, then $c=\pm4$.
\end{lemma} \begin{proof} Since $dd^c\log\varphi_0=2\omega_\Delta$ as well, the function $\log\varphi-\log\varphi_0$ is harmonic; hence we may let $\log\varphi=\log\varphi_0+f+\bar f$ for some holomorphic function $f:\Delta\to\mathbb{C}$. Then the condition $(\mathrm{Re}\,\mathcal{H}) \log\varphi\equiv c$ can be written as \begin{equation}\label{eqn:basic identity} (\mathrm{Re}\,\mathcal{H})\log\varphi = -4+(z^2-1)f'+(\bar{z}^2-1)\bar f' \equiv c \;. \end{equation} This implies that $(z^2-1)f'$ is constant. Thus we can let \begin{equation}\label{eqn:hmm} f'=\frac{C}{z^2-1} \end{equation} for some $C\in\mathbb{C}$. Since \begin{equation*} \pd{}{z}\log\varphi=f'+\pd{}{z}\log\varphi_0 = f'+ \frac{2(1+\bar z)}{(1+z)(1-\abs{z}^2)}\;, \end{equation*} we have \begin{multline*} \norm{d\log\varphi}_{\omega_\Delta}^2=\paren{\pd{}{z}\log\varphi}\paren{\pd{}{\bar z}\log\varphi}\frac{1}{\lambda_\Delta} \\ =\abs{f'}^2(1-|z|^2)^2 +\frac{2(1+\bar z)(1-\abs{z}^2)}{(1+z)}\bar f' +\frac{2(1+z)(1-\abs{z}^2)}{(1+\bar z)}f' +\norm{d\log\varphi_0}_{\omega_\Delta}^2 \;. \end{multline*} From the condition $\norm{d\log\varphi}_{\omega_\Delta}^2\equiv 4\equiv\norm{d\log\varphi_0}_{\omega_\Delta}^2$, it follows that \begin{equation*} \abs{f'}^2(1-|z|^2)^2 = -\frac{2(1+\bar z)(1-\abs{z}^2)}{(1+z)}\bar f' -\frac{2(1+z)(1-\abs{z}^2)}{(1+\bar z)}f' \;, \end{equation*} equivalently \begin{equation}\label{eqn:identity} \frac{1}{2}\abs{f'}^2(1-|z|^2) = -\frac{(1+\bar z)}{(1+z)}\bar f' -\frac{(1+z)}{(1+\bar z)}f' \;. \end{equation} Applying \eqref{eqn:hmm} to the right-hand side above, we get \begin{multline*} -\frac{(1+\bar z)}{(1+z)}\bar f' -\frac{(1+z)}{(1+\bar z)}f' =\frac{(1+\bar z)}{(1+z)}\frac{\bar C}{1-\bar z^2} +\frac{(1+z)}{(1+\bar z)}\frac{C}{1-z^2} \\ =\frac{(1+\bar z-z-\abs{z}^2)\bar C + (1-\bar z+z-\abs{z}^2)C}{\abs{1-z^2}^2} \;. \end{multline*} Let $C=a+bi$ for $a,b\in\mathbb{R}$; then \begin{equation*} (1+\bar z-z-\abs{z}^2)\bar C + (1-\bar z+z-\abs{z}^2)C = 2a(1-\abs{z}^2) +2bi(z-\bar z) \;.
\end{equation*} Now Equation~\eqref{eqn:identity} can be written as \begin{equation*} \frac{1}{2}\frac{\abs{C}^2}{\abs{z^2-1}^2}(1-|z|^2) =\frac{2a(1-\abs{z}^2) +2bi(z-\bar z)}{\abs{1-z^2}^2} \;, \end{equation*} so we have \begin{equation*} (\abs{C}^2-4a)(1-\abs{z}^2) =4bi(z-\bar z) \end{equation*} on $\Delta$. Applying $\partial\bar\partial$ to the above, we obtain \begin{equation*} \abs{C}^2-4a=0 \;. \end{equation*} Substituting back, we also get $4bi(z-\bar z)\equiv 0$, hence $b=0$ and $C=a$. Now we have $a^2=4a$, so $a$ is either $0$ or $4$. If $f'=4/(z^2-1)$, then $c=4$ by \eqref{eqn:basic identity}. If $f'=0$, then $c=-4$. \end{proof} \subsection{Parabolic vector fields} Since \begin{equation*} \mathcal{P}(\log\varphi_0) = i(z+1)^2\frac{2(1+\bar z)}{(1+z)(1-\abs{z}^2)} =2i\frac{\abs{1+z}^2}{1-\abs{z}^2} \;, \end{equation*} we have \begin{equation*} (\mathrm{Re}\,\mathcal{P})\log\varphi_0 \equiv 0 \;. \end{equation*} That means that the parabolic vector field $\mathcal{P}$ is tangent to the level sets of $\varphi_0$. The vector field $\mathcal{P}$ is indeed the nowhere vanishing complete holomorphic vector field constructed in Theorem~\ref{thm:existence} corresponding to $\varphi_0$. The main result of this section is the following. \begin{lemma}\label{lem:parabolic} Let $\varphi:\Delta\to\mathbb{R}$ with $dd^c \log\varphi=2\omega_\Delta$ and $\norm{d\log\varphi}_{\omega_\Delta}^2\equiv 4$. If $(\mathrm{Re}\,\mathcal{P}) \log\varphi\equiv c$ for some $c$, then $c=0$ and $\varphi=r\varphi_0$ for some $r>0$. \end{lemma} \begin{proof} In the same way as in the proof of Lemma~\ref{lem:hyperbolic}, we let $\log\varphi=\log\varphi_0+f+\bar f$ for some holomorphic $f:\Delta\to\mathbb{C}$. Since \begin{equation}\label{eqn:basic identity1} (\mathrm{Re}\,\mathcal{P})\log\varphi = i(z+1)^2f'-i(\bar{z}+1)^2\bar f' \equiv c \;, \end{equation} it follows that $(z+1)^2f'$ is constant. Thus we have \begin{equation}\label{eqn:hmm1} f'=\frac{C}{(z+1)^2} \end{equation} for some $C\in\mathbb{C}$.
Since \eqref{eqn:identity} also holds here, we can apply \eqref{eqn:hmm1} to the right-hand side of \eqref{eqn:identity} to get \begin{multline*} -\frac{(1+\bar z)}{(1+z)}\bar f' -\frac{(1+z)}{(1+\bar z)}f' = -\frac{(1+\bar z)}{(1+z)}\frac{\bar C}{(\bar z+1)^2} -\frac{(1+z)}{(1+\bar z)}\frac{C}{(z+1)^2} \\ =\frac{-\bar C}{\abs{1+z}^2} +\frac{-C}{\abs{1+z}^2} =\frac{-\bar C -C}{\abs{1+z}^2} \;. \end{multline*} Now Equation \eqref{eqn:identity} can be written as \begin{equation*} \frac{\abs{C}^2}{\abs{z+1}^4}(1-|z|^2)=2\frac{-\bar C -C}{\abs{1+z}^2} \;, \end{equation*} equivalently \begin{equation*} \abs{C}^2(1-|z|^2) =-\paren{2\bar C +2C}\abs{1+z}^2\;. \end{equation*} Evaluating at $z=0$, we have $\abs{C}^2=-2\bar C-2C$. On the other hand, applying $\partial\bar\partial$ to the above, we have $-\abs{C}^2=-2\bar C-2C$. It follows that $C=0$, so $f$ is constant. Moreover Equation \eqref{eqn:basic identity1} implies that $c=0$. \end{proof} \section{Proof of the main theorem}\label{sec:5} Now we prove Theorem~\ref{thm:main thm} and Corollary~\ref{cor:main cor}. \noindent\textit{Proof of Theorem~\ref{thm:main thm}.} Let $\varphi:\Delta\to\mathbb{R}$ be a function with \begin{equation*} dd^c\log\varphi=2\omega_\Delta \quad\text{and}\quad \norm{d\log\varphi}_{\omega_\Delta}^2\equiv 4 \;. \end{equation*} By Theorem~\ref{thm:existence}, we can take a nowhere vanishing complete holomorphic vector field $\mathcal{W}$ with $(\mathrm{Re}\, \mathcal{W})\varphi\equiv0$. Since every automorphism of $\Delta$ has at least one fixed point in $\overline\Delta$ and $\mathcal{W}$ is nowhere vanishing on $\Delta$, any nontrivial automorphism generated by $\mathrm{Re}\, \mathcal{W}$ has no fixed point in $\Delta$, and these automorphisms share a common fixed point $p$ on the boundary $\partial\Delta$. This means that $p$ is a vanishing point of $\mathcal{W}$. Consider the rotational symmetry $\mathcal{R}_\theta$ in \eqref{eqn:rotation} satisfying $\mathcal{R}_\theta(-1)=p$.
We will show that $\varphi\circ\mathcal{R}_\theta=r\varphi_0$ where $\varphi_0$ is as in \eqref{eqn:model0} and $r>0$; this implies that $\varphi=r\varphi_{-\theta}$. For simplicity, we write $\varphi$ for $\varphi\circ\mathcal{R}_\theta$ and $\mathcal{W}$ for $(\mathcal{R}_\theta^{-1})_*\mathcal{W}$. Since $-1$ is a vanishing point of $\mathcal{W}$, Lemma~\ref{lem:lc} implies \begin{equation*} \mathcal{W}=a\mathcal{H}+b\mathcal{P} \end{equation*} for some real numbers $a$, $b$. Suppose that $a\neq0$. Equation~\eqref{eqn:relation'} implies that \begin{equation*} (\mathcal{P}_s)_* \mathcal{W} = (\mathcal{P}_s)_*(a\mathcal{H}+b\mathcal{P}) = a\mathcal{H}-2as\mathcal{P}+b\mathcal{P} = a\mathcal{H}+(b-2as)\mathcal{P}\;. \end{equation*} Take $s=b/2a$; then $\widetilde{\mathcal{W}}=(\mathcal{P}_s)_* \mathcal{W} =a\mathcal{H}$. Let $\tilde\varphi=\varphi\circ\mathcal{P}_{-s}$ for this $s$. Then $\tilde\varphi$ satisfies the conditions in Theorem~\ref{thm:main thm} and $(\mathrm{Re}\, \widetilde{\mathcal{W}})\tilde\varphi\equiv 0$. But Lemma~\ref{lem:hyperbolic} shows that $(\mathrm{Re}\, \widetilde{\mathcal{W}})\tilde\varphi=a(\mathrm{Re}\, \mathcal{H})\tilde\varphi\equiv \pm4a\tilde\varphi$, which is nowhere zero. This contradicts $(\mathrm{Re}\, \mathcal{W})\varphi\equiv 0$, equivalently $(\mathrm{Re}\, \widetilde{\mathcal{W}})\tilde\varphi\equiv 0$. Thus $a=0$. Now $\mathcal{W}=b\mathcal{P}$. Since $\mathcal{W}$ is nowhere vanishing, $b\neq 0$. The condition $(\mathrm{Re}\,\mathcal{W})\varphi\equiv0$ implies $(\mathrm{Re}\,\mathcal{P})\varphi\equiv 0$. Lemma~\ref{lem:parabolic} says that $\varphi=r\varphi_0$ for some positive $r$. This completes the proof. \qed \noindent\textit{Proof of Corollary~\ref{cor:main cor}.} Let $D$ be a simply connected proper domain in $\mathbb{C}$ and let $\omega_D=i\lambda_D dz \wedge d\bar z$ be its Poincar\'e metric with $\norm{d\log \lambda_D}_{\omega_D}^2\equiv 4$.
By Theorem~\ref{thm:existence}, there is a nowhere vanishing complete holomorphic vector field $\mathcal{W}$ with $(\mathrm{Re}\,\mathcal{W})\lambda_D\equiv0$. Take a biholomorphism $G:\Delta\to D$ and let \begin{equation*} \varphi=\lambda_D\circ G \quad\text{and}\quad \mathcal{Z}=(G^{-1})_*\mathcal{W} \,. \end{equation*} Note that $(\mathrm{Re}\,\mathcal{Z})\varphi\equiv0$, since $(\mathrm{Re}\,\mathcal{W})\lambda_D\equiv0$. Using the rotational symmetry $\mathcal{R}_\theta$ of $\Delta$, which is also affine, we may assume that $\mathcal{Z}(-1)=0$. Since $G:(\Delta,\omega_\Delta)\to(D,\omega_D)$ is an isometry, we have $G^*\omega_D=\omega_\Delta$, equivalently \begin{equation*} \varphi=\frac{\lambda_\Delta}{\abs{G'}^2} \;. \end{equation*} Moreover $d\log\varphi=d(G^*\log\lambda_D)$ implies that $\norm{d\log\varphi}_{\omega_\Delta}^2=\paren{\norm{d\log\lambda_D}_{\omega_D}^2}\circ G\equiv4$, and $dd^c\log\varphi=G^*\paren{dd^c\log\lambda_D}=2\omega_\Delta$. By Theorem~\ref{thm:main thm}, we have \begin{equation*} \frac{\lambda_\Delta}{\abs{G'}^2}=\varphi=r\varphi_0=r\lambda_\Delta \abs{1+z}^4 \end{equation*} for some positive $r$. This means that $G'=\frac{e^{i\theta'}}{\sqrt{r}(1+z)^2}$ for some $\theta'\in\mathbb{R}$, so that \begin{equation*} G=\frac{e^{i\theta'}}{2\sqrt{r}}\frac{z-1}{z+1}+C \end{equation*} for some constant $C\in\mathbb{C}$. Since the function $z\mapsto (z-1)/(z+1)$ is the inverse mapping of the Cayley transform $F:\mathbf{H}\to\Delta$ in \eqref{eqn:CT}, we have \begin{align*} G\circ F:\mathbf{H}&\to D \\ z&\mapsto \frac{e^{i\theta'}}{2\sqrt{r}}z+C \;. \end{align*} This implies that $D=G(F(\mathbf{H}))$ is affine equivalent to $\mathbf{H}$. \qed \end{document}
\begin{document} \begin{frontmatter} \thanks[SIR]{ GR is partially, and NP is totally, supported by the project MIUR-SIR CMACBioSeq (``Combinatorial methods for analysis and compression of biological sequences'') grant n.~RBSI146R5L.} \title{Space-Efficient Construction of Compressed Suffix Trees} \author{ Nicola Prezza\thanksref{SIR}} \address{ Department of Computer Science, University of Pisa, Italy {\tt [email protected]}} \author{Giovanna Rosone\thanksref{SIR}} \address{ Department of Computer Science, University of Pisa, Italy {\tt [email protected]}} \begin{abstract} We show how to build several data structures of central importance to string processing, taking as input the Burrows-Wheeler transform (BWT) and using small extra working space. Let $n$ be the text length and $\sigma$ be the alphabet size. We first provide two algorithms that enumerate all LCP values and suffix tree intervals in $O(n\log\sigma)$ time using just $o(n\log\sigma)$ bits of working space on top of the input BWT. Using these algorithms as building blocks, for any parameter $0 < \epsilon \leq 1$ we show how to build the PLCP bitvector and the balanced parentheses representation of the suffix tree topology in $O\left(n(\log\sigma + \epsilon^{-1}\cdot \log\log n)\right)$ time using at most $n\log\sigma \cdot(\epsilon + o(1))$ bits of working space on top of the input BWT and the output. In particular, this implies that we can build a compressed suffix tree from the BWT using just succinct working space (i.e. $o(n\log\sigma)$ bits) and any time in $\Theta(n\log\sigma) + \omega(n\log\log n)$. This improves the previous most space-efficient algorithms, which worked in $O(n)$ bits and $O(n\log n)$ time. We also consider the problem of merging BWTs of string collections, and provide a solution running in $O(n\log\sigma)$ time and using just $o(n\log\sigma)$ bits of working space. 
An efficient implementation of our LCP construction and BWT merge algorithms uses (in RAM) as few as $n$ bits on top of a packed representation of the input/output and processes data as fast as $2.92$ megabases per second. \begin{keyword} Burrows-Wheeler transform, compressed suffix tree, LCP, PLCP. \end{keyword} \end{abstract} \end{frontmatter} \section{Introduction and Related Work} The increasingly-growing production of large string collections---especially in domains such as biology, where new generation sequencing technologies can nowadays generate gigabytes of data in a few hours---is lately generating much interest in fast and space-efficient algorithms able to index this data. The Burrows-Wheeler Transform~\cite{burrows1994block} and its extension to sets of strings~\cite{MantaciRRS07,BauerCoxRosoneTCS2013} is becoming the gold-standard in the field: even when not compressed, its size is one order of magnitude smaller than that of classic suffix arrays (while preserving many of their indexing capabilities). This generated considerable interest in fast and space-efficient BWT construction algorithms~\cite{BauerCoxRosoneTCS2013,Karkkainen:2007:FBS:1314704.1314852,10.1007/978-3-319-15579-1_46,10.1007/978-3-319-02432-5_5,8712716,Kempa:2019:SSS:3313276.3316368,navarro2014optimal}. As a result, the problem of building the BWT is well understood to date. The fastest algorithm solving this problem operates in sublinear $O(n/\sqrt{\log n})$ time and $O(n)$ bits of space on a binary text of length $n$ by exploiting word parallelism~\cite{Kempa:2019:SSS:3313276.3316368}. The authors also provide a conditional lower bound suggesting that this running time might be optimal. The most space-efficient algorithm terminates in $O(n\log n/\log\log n)$ time and uses just $o(n\log\sigma)$ bits of space (succinct) on top of the input and output~\cite{navarro2014optimal}, where $\sigma$ is the alphabet's size.
In the average case, this running time can be improved to $O(n)$ on constant-sized alphabets while still operating within succinct space~\cite{10.1007/978-3-319-15579-1_46}. In some cases, a BWT alone is not sufficient to efficiently complete particular string-processing tasks. For this reason, the functionalities of the BWT are often extended by flanking to it additional structures such as the Longest Common Prefix (LCP) array~\cite{CGRS_JDA_2016} (see e.g.~\cite{prezza2018detecting,prezza2019,GuerriniRosoneAlcob2019,TUSTUMI2016} for bioinformatic applications requiring this additional component). A disadvantage of the LCP array is that it requires $O(n\log n)$ bits to be stored in plain form. To alleviate this problem, usually the PLCP array~\cite{Sadakane:2002:SRL:545381.545410}---an easier-to-compress permutation of the LCP array---is preferred. The PLCP relies on the idea of storing LCP values in text order instead of suffix array order. As shown by Kasai et al.~\cite{10.1007/3-540-48194-X_17}, this permutation is almost increasing ($PLCP[i+1] \geq PLCP[i]-1$) and can thus be represented in just $2n$ bits in a bitvector known as the \emph{PLCP bitvector}. More advanced applications might even require full suffix tree functionality. In such cases, compressed suffix trees~\cite{Grossi:2005:CSA:1093654.1096192,Sadakane:2007:CST:1326296.1326297} (CSTs) are the preferred choice when space is at a premium. A typical compressed suffix tree is formed by a compressed suffix array (CSA), the PLCP bitvector, and a succinct representation of the suffix tree topology~\cite{Sadakane:2007:CST:1326296.1326297} (there exist other designs; see Ohlebusch et al.~\cite{10.1007/978-3-642-16321-0_34} for an exhaustive survey).
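To make the almost-increasing property and the $2n$-bit encoding concrete, the following toy sketch (our own illustration, not one of the algorithms of this paper) computes the PLCP array in text order with Kasai et al.'s method and encodes the differences $PLCP[i]-PLCP[i-1]+1$ in unary, which fits in at most $2n$ bits:

```python
def suffix_array(t):
    # Naive O(n^2 log n) construction, for illustration only.
    return sorted(range(len(t)), key=lambda i: t[i:])

def plcp_kasai(t, sa):
    # PLCP[i] = lcp between suffix T[i..] and its lexicographic predecessor.
    # Kasai et al.: scanning i in text order, the matched length l drops by
    # at most 1 per step, hence O(n) total work and PLCP[i+1] >= PLCP[i]-1.
    n = len(t)
    isa = [0] * n
    for r, i in enumerate(sa):
        isa[i] = r
    plcp, l = [0] * n, 0
    for i in range(n):
        if isa[i] > 0:
            j = sa[isa[i] - 1]
            while i + l < n and j + l < n and t[i + l] == t[j + l]:
                l += 1
            plcp[i] = l
            l = max(l - 1, 0)
        else:
            l = 0
    return plcp

def plcp_bitvector(plcp):
    # Unary encoding of PLCP[i]-PLCP[i-1]+1 (nonnegative by the property
    # above); the total length telescopes to at most 2n bits.
    bits, prev = [], 0
    for v in plcp:
        bits += [0] * (v - prev + 1) + [1]
        prev = v
    return bits

t = 'banana#'
sa = suffix_array(t)
plcp = plcp_kasai(t, sa)
assert plcp == [0, 3, 2, 1, 0, 0, 0]
assert all(plcp[i + 1] >= plcp[i] - 1 for i in range(len(t) - 1))
assert len(plcp_bitvector(plcp)) <= 2 * len(t)
```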
To date, several practical algorithms have been developed to solve the task of building \emph{de novo} such additional components~\cite{CGRS_JDA_2016,holt2014constructing,holt2014merging,Bonizzoni2018,EgidiAMB2019,Belazzougui:2014:LTC:2591796.2591885,10.1007/978-3-642-02441-2_17,Valimaki:2010:ECS:1498698.1594228}, but little work has been devoted to the task of computing them from the BWT in little working space (internal and external). Considering the advanced point reached by state-of-the-art BWT construction algorithms, it is worth exploring whether such structures can be built more efficiently starting from the BWT, rather than from the raw input text. \textbf{CSA} As far as the CSA is concerned, this component can be easily built from the BWT using small space, as it is formed (in its simplest design) by just a BWT with rank/select functionality enhanced with a suffix array sampling; see also~\cite{Belazzougui:2014:LTC:2591796.2591885}. \textbf{LCP} We are aware of only one work building the LCP array in small space from the BWT: Beller et al.~\cite{beller2013computing} show how to build the LCP array in $O(n\log\sigma)$ time and $O(n)$ bits of working space on top of the input BWT and the output. Other works~\cite{munro2017space,Belazzougui:2014:LTC:2591796.2591885} show how to build the LCP array directly from the text in $O(n)$ time and $O(n\log\sigma)$ bits of space (compact). \textbf{PLCP} K{\"a}rkk{\"a}inen et al.~\cite{10.1007/978-3-642-02441-2_17} show that the PLCP bitvector can be built in $O(n\log n)$ time using $n$ bits of working space on top of the text, the suffix array, and the output PLCP. Kasai et al.'s lemma also stands at the basis of a more space-efficient algorithm from V\"{a}lim\"{a}ki et al.~\cite{Valimaki:2010:ECS:1498698.1594228}, which computes the PLCP from a CSA in $O(n\log n)$ time using constant working space on top of the CSA and the output.
Belazzougui~\cite{Belazzougui:2014:LTC:2591796.2591885} recently presented an algorithm for building the PLCP bitvector from the text in optimal $O(n)$ time and compact space ($O(n\log\sigma)$ bits). \textbf{Suffix tree topology} The remaining component required to build a compressed suffix tree (in the version described by Sadakane~\cite{Sadakane:2007:CST:1326296.1326297}) is the suffix tree topology, represented either in BPS~\cite{Munro:1997:SRB:795663.796328} (balanced parentheses) or DFUDS~\cite{Benoit2005} (depth first unary degree sequence), using $4n$ bits. As far as the BPS representation is concerned, Hon et al.~\cite{Hon:2009:BTB:1654348.1654351} show how to build it from a CSA in $O(n(\log\sigma + \log^\epsilon n))$ time and compact space for any constant $\epsilon>0$. Belazzougui~\cite{Belazzougui:2014:LTC:2591796.2591885} improves this running time to the optimal $O(n)$, still working within compact space. V\"{a}lim\"{a}ki et al.~\cite{Valimaki:2010:ECS:1498698.1594228} describe a linear-time algorithm that improves the space to $O(n)$ bits on top of the LCP array (which however needs to be represented in plain form), while Ohlebusch et al.~\cite{10.1007/978-3-642-16321-0_34} show how to build the DFUDS representation of the suffix tree topology in $O(t_{lcp}\cdot n)$ time using $n+o(n)$ bits of working space on top of a structure supporting access to LCP array values in $O(t_{lcp})$ time. Summing up, the situation for building compressed suffix trees from the BWT is the following: algorithms working in optimal linear time require $O(n\log\sigma)$ bits of working space. Algorithms reducing this space to $O(n)$ (on top of a CSA) are only able to build the suffix tree topology within $O(n\cdot t_{lcp})$ time, which is $\Omega(n\log^{\epsilon}n)$ with the current best techniques, and the PLCP bitvector in $O(n\log n)$ time. 
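For intuition on the BPS representation mentioned above: the suffix tree topology is just the parenthesization obtained by a depth-first traversal, writing `(' when entering a node and `)' when leaving it, for $4n$ bits overall. The following naive sketch (our illustration only; the algorithms discussed in this paper never build the tree explicitly) derives it from a trie of all suffixes:

```python
def suffix_tree_bps(t):
    # Insert all suffixes into a trie, then emit the balanced-parentheses
    # sequence of the *compacted* trie (unary paths skipped), i.e. the
    # suffix tree topology. Children are visited in lexicographic order.
    root = {}
    for i in range(len(t)):
        node = root
        for c in t[i:]:
            node = node.setdefault(c, {})

    def dfs(node):
        while len(node) == 1:          # path compression: skip unary nodes
            node = next(iter(node.values()))
        return '(' + ''.join(dfs(ch) for _, ch in sorted(node.items())) + ')'

    return dfs(root)

bps = suffix_tree_bps('banana#')
# 'banana#' has 7 leaves and 4 branching nodes (root, 'a', 'ana', 'na'):
assert bps.count('(') == 11 and len(bps) == 22
```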
No algorithm can build all three CST components within $o(n\log\sigma)$ bits of working space on top of the input BWT and the output. Combining the most space-efficient existing algorithms, the following two trade-offs can therefore be achieved for building all compressed suffix tree components from the BWT: \begin{itemize} \item $O(n\log\sigma)$ bits of working space and $O(n)$ time, or \item $O(n)$ bits of working space and $O(n\log n)$ time. \end{itemize} \paragraph{Our contributions} In this paper, we give new space-time trade-offs that allow building the CST's components in smaller working space (and in some cases even faster) with respect to the existing solutions. We start by combining Beller et al.'s algorithm~\cite{beller2013computing} with the suffix-tree enumeration procedure of Belazzougui~\cite{Belazzougui:2014:LTC:2591796.2591885} to obtain an algorithm that enumerates (i) all pairs $(i,LCP[i])$, and (ii) all suffix tree intervals in $O(n\log\sigma)$ time using just $o(n\log\sigma)$ bits of working space on top of the input BWT. We use this procedure to obtain algorithms that build (working space is on top of the input BWT and the output): \begin{enumerate} \item\label{goalLCP} The LCP array of a string collection in $O(n\log\sigma)$ time and $o(n\log\sigma)$ bits of working space (see Section \ref{sec:LCP}). \item\label{goalPLCP} The PLCP bitvector and the BPS representation of the suffix tree topology in $O\left(n(\log\sigma + \epsilon^{-1}\cdot \log\log n)\right)$ time and $n\log\sigma \cdot (\epsilon + o(1))$ bits of working space, for any user-defined parameter $0 < \epsilon \leq 1$ (see Sections \ref{sec:PLCP} and \ref{sec:ST topology}). \item\label{goalMerge} The BWT of the union of two string collections of total size $n$ in $O(n\log\sigma)$ time and $o(n\log\sigma)$ bits of working space, given the BWTs of the two collections as input (see Section \ref{sec:algo2}).
\end{enumerate}
Contribution (\ref{goalLCP}) is the first result showing that the LCP array can be induced from the BWT using succinct working space \emph{for any alphabet size}.
Contribution (\ref{goalPLCP}) can be used to build a compressed suffix tree from the BWT using just $o(n\log\sigma)$ bits of working space and any time in $O(n\log\sigma) + \omega(n\log\log n)$---for example, $O(n(\log\sigma + (\log\log n)^{1+\delta}))$, for any $\delta>0$. On small alphabets, this improves both working space and running time of existing $O(n)$-bits solutions.
Contribution (\ref{goalMerge}) also improves the state of the art, due to Belazzougui et al.~\cite{Belazzougui:2014:LTC:2591796.2591885,belazzougui2016linear}. In those papers, the authors show how to merge the BWTs of two texts $T_1, T_2$ and obtain the BWT of the collection $\{T_1, T_2\}$ in $O(nk)$ time and $n\log\sigma(1+1/k) + 11n + o(n)$ bits of working space for any $k \geq 1$~\cite[Thm. 7]{belazzougui2016linear}. When $k=\log\sigma$, this running time is the same as our result (\ref{goalMerge}), but the working space is much higher on small alphabets.
We implemented and tested our algorithms (\ref{goalLCP}, \ref{goalMerge}) on the DNA alphabet. Our tools use (in RAM) as few as $n$ bits on top of a packed representation of the input/output, and process data as fast as $2.92$ megabases per second.
Contributions (\ref{goalLCP}, \ref{goalMerge}) are part of a preliminary version~\cite{prezza_et_al:LIPIcs:2019:10478} of this paper. This paper also extends those results with the suffix tree interval enumeration procedure and with the algorithms of contribution (\ref{goalPLCP}) for building the PLCP bitvector and the BPS representation of the suffix tree topology.
\section{Basic Concepts}\label{sec:notation}
Let $\Sigma =\{c_1, c_2, \ldots, c_\sigma\}$ be a finite ordered alphabet of size $\sigma$ with $\# = c_1< c_2< \ldots < c_\sigma$, where $<$ denotes the standard lexicographic order.
Given a text $T=t_1 t_2 \cdots t_n \in \Sigma^*$ we denote by $|T|$ its length $n$.
We assume that the input text is terminated by the special symbol (terminator) $\#$, which does not appear elsewhere in $T$.
We use $\epsilon$ to denote the empty string.
A \emph{factor} (or \emph{substring}) of $T$ is written as $T[i,j] = t_i \cdots t_j$ with $1\leq i \leq j \leq n$.
When declaring an array $A$, we use the same notation $A[1,n]$ to indicate that the array has $n$ entries indexed from $1$ to $n$.
A \emph{right-maximal} substring $W$ of $T$ is a string for which there exist at least two distinct characters $a,b$ such that $Wa$ and $Wb$ occur in $T$.
The \emph{suffix array} SA of a string $T$ (see \cite{PuglisiTurpin2008} for a survey) is an array containing the permutation of the integers $1,2, \ldots, n$ that arranges the starting positions of the suffixes of $T$ into lexicographical order, i.e., for all $1 \leq i < j \leq n$, $T[SA[i],n] < T[SA[j],n]$.
The \emph{inverse suffix array} $ISA[1, n]$ is the inverse permutation of $SA$, i.e., $ISA[i] = j$ if and only if $SA[j] = i$.
The Burrows-Wheeler Transform of a string $T$ is a reversible transformation that permutes its symbols, i.e. $BWT[i]=T[SA[i]-1]$ if $SA[i] > 1$, and $BWT[i]=\#$ otherwise.
In some of our results we deal with \emph{string collections}. There exist some natural extensions of the suffix array and the Burrows-Wheeler Transform to a collection of strings.
Let $\mathcal{S}\xspace = \{T_1, \dots, T_m\}$ be a string collection of total length $n$, where each $T_i$ is terminated by a character $\#$ (the terminator) lexicographically smaller than all other alphabet's characters.
In particular, a collection is an ordered multiset, and we denote $\mathcal{S}\xspace[i] = T_i$.
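As a concrete illustration of these definitions (and only of the definitions: the constructions discussed in this paper are far more space-efficient), the following naive Python sketch builds SA, ISA and BWT by explicit suffix sorting. The example string \texttt{banana\#} is our own choice; we rely on the fact that \texttt{\#} is smaller than all letters in ASCII, matching the assumption $\# = c_1$.

```python
# Naive illustration of the suffix array, its inverse, and the BWT
# (1-based positions, matching the text's conventions). Assumes the
# terminator '#' is lexicographically smallest and occurs only at the end.

def suffix_array(T):
    """SA[i] = starting position (1-based) of the i-th smallest suffix."""
    return sorted(range(1, len(T) + 1), key=lambda i: T[i - 1:])

def inverse_suffix_array(SA):
    """ISA[i] = j if and only if SA[j] = i."""
    ISA = [0] * len(SA)
    for j, i in enumerate(SA, start=1):
        ISA[i - 1] = j
    return ISA

def bwt(T, SA):
    """BWT[i] = T[SA[i]-1] if SA[i] > 1, and '#' otherwise."""
    return ''.join(T[i - 2] if i > 1 else '#' for i in SA)

T = 'banana#'
SA = suffix_array(T)
print(SA)            # [7, 6, 4, 2, 1, 5, 3]
print(bwt(T, SA))    # 'annb#aa'
```

This quadratic-time sketch is for checking small examples by hand; linear-time suffix sorting is a separate, well-studied topic.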
We define lexicographic order among the strings' suffixes in the usual way, except that, \emph{only while sorting}, each terminator $\#$ of the $i$-th string $\mathcal{S}\xspace[i]$ is considered (implicitly) a different symbol $\#_i$, with $\#_i < \#_j$ if and only if $i<j$. Equivalently, in case of equal suffixes, ties are broken by input order: if $T_i[k,|T_i|-1]=T_j[k',|T_j|-1]$, then we define $T_i[k,|T_i|] < T_j[k',|T_j|]$ if and only if $i < j$.
The \emph{generalized suffix array} $GSA[1,n]$ (see~\cite{Shi:1996,CGRS_JDA_2016,Louza2017}) of $\mathcal{S}\xspace$ is an array of pairs $GSA[i] = \langle j,k \rangle$ such that $\mathcal{S}\xspace[j][k,|\mathcal{S}\xspace[j]|]$ is the $i$-th lexicographically smallest suffix of strings in $\mathcal{S}\xspace$, where we break ties by input position (i.e. $j$ in the notation above).
Note that, if the collection is formed by a single string $T$, then the first component in $GSA$'s pairs is always equal to 1, and the second components form the suffix array of $T$.
We denote by $\mathtt{range(W)} = \langle \mathtt{left(W)}, \mathtt{right(W)} \rangle$, also referred to as \emph{suffix array (SA) interval of $W$, or simply $W$-interval}, the maximal pair $\langle L,R \rangle$ such that all suffixes in $GSA[L,R]$ are prefixed by $W$. We use the same notation with the suffix array of a single string $T$.
Note that the number of suffixes lexicographically smaller than $W$ in the collection is $L-1$. We extend this definition also to cases where $W$ is not present in the collection: in this case, the (empty) range is $\langle L, L-1\rangle$ and we still require that $L-1$ is the number of suffixes lexicographically smaller than $W$ in the collection (or in the string).
The \emph{extended Burrows-Wheeler Transform} $BWT[1,n]$ \cite{MantaciRRS07,BauerCoxRosoneTCS2013} of $\mathcal{S}\xspace$ is the character array defined as $BWT[i] = \mathcal{S}\xspace[j][k-1\ \mathtt{mod}\ |\mathcal{S}\xspace[j]|]$, where $\langle j,k \rangle = GSA[i]$.
To simplify notation, we indicate with ``$BWT$'' both the Burrows-Wheeler Transform of a string and of a string collection. The used transform will be clear from the context.
The \emph{longest common prefix} (LCP) array of a string $s$ \cite{ManberMyers1993} (resp. a collection $\mathcal{S}\xspace$ of strings, see \cite{CGRS_JDA_2016,Louza2017,EgidiAMB2019}) is an array storing the lengths of the longest common prefixes between consecutive suffixes of $s$ (resp. $\mathcal{S}\xspace$) in lexicographic order (with $LCP[1]=0$).
When applied to a string collection, we take the longest common prefix of two equal suffixes of length $\ell$ to be equal to $\ell-1$ (i.e. as if their terminators were different).
Given two collections $\mathcal{S}\xspace_1, \mathcal{S}\xspace_2$ of total length $n$, the Document Array of their union is the binary array $DA[1,n]$ such that $DA[i] = 0$ if and only if the $i$-th smallest suffix comes from $\mathcal{S}\xspace_1$. When merging suffixes of the two collections, ties are broken by collection number (i.e. suffixes of $\mathcal{S}\xspace_1$ are smaller than suffixes of $\mathcal{S}\xspace_2$ in case of ties).
The $C$-array of a string (or collection) $S$ is an array $C[1,\sigma]$ such that $C[i]$ contains the number of characters lexicographically smaller than $i$ in $S$, plus one ($S$ will be clear from the context). Equivalently, $C[c]$ is the starting position of suffixes starting with $c$ in the suffix array of the string.
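The $C$-array and the LCP array just defined can be checked on a small single-string example with the following naive sketch (quadratic time, our own illustration; the example string \texttt{banana\#} is an assumption):

```python
# Naive sketches of the C-array and the LCP array of a single string,
# using the text's 1-based conventions (LCP[1] = 0).

def c_array(S, alphabet):
    """C[c] = number of characters of S smaller than c, plus one
    (equivalently: first suffix-array position of suffixes starting with c)."""
    return {c: sum(1 for x in S if x < c) + 1 for c in alphabet}

def lcp_array(T, SA):
    """LCP[i] = length of the longest common prefix of the (i-1)-th and
    i-th lexicographically smallest suffixes."""
    suf = [T[i - 1:] for i in SA]
    def lcp(a, b):
        l = 0
        while l < min(len(a), len(b)) and a[l] == b[l]:
            l += 1
        return l
    return [0] + [lcp(suf[i - 1], suf[i]) for i in range(1, len(SA))]

T = 'banana#'
SA = sorted(range(1, len(T) + 1), key=lambda i: T[i - 1:])
print(c_array(T, '#abn'))   # {'#': 1, 'a': 2, 'b': 5, 'n': 6}
print(lcp_array(T, SA))     # [0, 0, 1, 3, 0, 0, 2]
```

The printed $C$-array values match the starts of the suffix-array blocks of \texttt{\#}, \texttt{a}, \texttt{b} and \texttt{n} in \texttt{banana\#}.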
When $S$ (or any of its permutations) is represented with a balanced wavelet tree, we do not need to store $C$ explicitly, and $C[c]$ can be computed in $O(\log\sigma)$ time with no space overhead on top of the wavelet tree (see~\cite{navarro2012wavelet}).
Function $\mathtt{S.rank_c(i)}$ returns the number of characters equal to $c$ in $S[1,i-1]$. When $S$ is represented by a wavelet tree, \emph{rank} can be computed in $O(\log \sigma)$ time.
Function $\mathtt{getIntervals(L,R,BWT)}$, where $BWT$ is the extended Burrows-Wheeler transform of a string collection $\mathcal{S}\xspace$ and $\langle L,R\rangle$ is the suffix array interval of some string $W$ appearing as a substring of some element of $\mathcal{S}\xspace$, returns all suffix array intervals of strings $cW$, with $c\neq \#$, that occur in $\mathcal{S}\xspace$.
When $BWT$ is represented with a balanced wavelet tree, we can implement this function so that it terminates in $O(\log\sigma)$ time per returned interval~\cite{beller2013computing}.
The function can be made to return the output intervals on-the-fly, one by one (in an arbitrary order), without the need to store them all in an auxiliary vector, with just $O(\log n)$ bits of additional overhead in space~\cite{beller2013computing} (this requires a DFS-visit of the sub-tree of the wavelet tree induced by $BWT[L,R]$; the visit requires only $\log\sigma$ bits to store the current path in the tree).
An extension of the above function that navigates two BWTs in parallel is immediate.
Function $\mathtt{getIntervals(L_1,R_1,L_2, R_2, BWT_1, BWT_2)}$ takes as input two ranges of a string $W$ on the BWTs of two collections, and returns the pairs of ranges on the two BWTs corresponding to all left-extensions $cW$ of $W$ ($c\neq \#$) such that $cW$ appears in at least one of the two collections.
To implement this function, it is sufficient to navigate the two wavelet trees in parallel as long as at least one of the two intervals is not empty.
Let $S$ be a string. The function $S.\mathtt{rangeDistinct(i,j)}$ returns the set of distinct alphabet characters \emph{different from the terminator} $\#$ in $S[i,j]$.
This function can also be implemented in $O(\log\sigma)$ time per returned element when $S$ is represented with a wavelet tree (again, this requires a DFS-visit of the sub-tree of the wavelet tree induced by $S[i,j]$).
$BWT.\mathtt{bwsearch(\langle L,R \rangle, c)}$ is the function that, given the suffix array interval $\langle L,R \rangle$ of a string $W$ occurring in the collection, returns the suffix array interval of $cW$ by using the BWT of the collection~\cite{ferragina2000opportunistic}. This function requires access to array $C$ and \emph{rank} support on $BWT$, and runs in $O(\log\sigma)$ time when $BWT$ is represented with a balanced wavelet tree.
To conclude, our algorithms will take as input a wavelet tree representing the BWT. As shown in the next lemma by Claude et al., this is not a restriction:
\begin{lemma}[\cite{claude2015wavelet}]\label{thm:BWT->WT} Given a word-packed string of length $n$ on alphabet $[1,\sigma]$, we can replace it with its wavelet matrix~\cite{claude2015wavelet} in $O(n\log\sigma)$ time using $n$ bits of additional working space. \end{lemma}
Wavelet matrices~\cite{claude2015wavelet} are a particular space-efficient representation of wavelet trees taking $n\log\sigma \cdot (1+o(1))$ bits of space and supporting all their operations within the same running times.
Since the output of all our algorithms will take at least $n$ bits, it will always be possible to re-use a portion of the output's space (before computing it) to fit the extra $n$ bits required by Lemma \ref{thm:BWT->WT}.
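The primitives $\mathtt{bwsearch}$ and $\mathtt{getIntervals}$ introduced above can be sketched naively as follows (a plain string stands in for the wavelet tree, so \emph{rank} costs $O(n)$ here instead of $O(\log\sigma)$; the input BWT and $C$-array are those of the single example string \texttt{banana\#}, our own choice):

```python
# Naive sketch of bwsearch and getIntervals on a plain string.

def rank(S, c, i):
    """Number of occurrences of c in S[1, i-1] (1-based, position i excluded)."""
    return S[:i - 1].count(c)

def bwsearch(BWT, C, L, R, c):
    """Given range(W) = <L,R>, return range(cW); an empty range has L > R."""
    return (C[c] + rank(BWT, c, L),
            C[c] + rank(BWT, c, R + 1) - 1)

def get_intervals(BWT, C, L, R):
    """All <c, range(cW)> for the distinct characters c != '#' in BWT[L,R]."""
    return [(c,) + bwsearch(BWT, C, L, R, c)
            for c in sorted(set(BWT[L - 1:R])) if c != '#']

BWT = 'annb#aa'                        # BWT of 'banana#'
C = {'#': 1, 'a': 2, 'b': 5, 'n': 6}
print(bwsearch(BWT, C, 1, 7, 'a'))     # (2, 4): range('a')
print(get_intervals(BWT, C, 2, 4))     # [('b', 5, 5), ('n', 6, 7)]
```

The second call left-extends $W=\texttt{a}$ (range $\langle 2,4\rangle$) to \texttt{ba} and \texttt{na}, mirroring the behavior of $\mathtt{getIntervals}$ on a wavelet tree.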
\section{Belazzougui's Enumeration Algorithm}\label{sec:belazzougui} In~\cite{Belazzougui:2014:LTC:2591796.2591885}, Belazzougui showed that a BWT with \emph{rank} and \emph{range distinct} functionality (see Section \ref{sec:notation}) is sufficient to enumerate in small space a rich representation of the internal nodes of the suffix tree of a text $T$. For the purposes of this article, we assume that the BWT is represented using a wavelet tree (whereas Belazzougui's original result is more general), and thus that all queries take $O(\log \sigma)$ time. \begin{theorem}[Belazzougui \cite{Belazzougui:2014:LTC:2591796.2591885}]\label{th:Belazzougui} Given the Burrows-Wheeler Transform of a text $T\in[1,\sigma]^n$ represented with a wavelet tree, we can enumerate the following information for each distinct right-maximal substring $W$ of $T$: (i) $|W|$, and (ii) $range(Wc_i)$ for all $c_1 < \dots < c_k$ such that $Wc_i$ occurs in $T$. The process runs in $O(n\log\sigma)$ time and uses $O(\sigma^2\log^2n)$ bits of working space on top of the BWT. \end{theorem} To keep the article self-contained, in this section we describe the algorithm at the core of the above result. Remember that explicit suffix tree nodes correspond to right-maximal substrings. The first idea is to represent any substring $W$ (not necessarily right-maximal) as follows. Let $\mathtt{chars_W[1,k_W]}$ be the alphabetically-sorted character array such that $W\cdot \mathtt{chars_W[i]}$ is a substring of $T$ for all $i=1,\dots, k_W$, where $k_W$ is the number of right-extensions of $W$. We require $\mathtt{chars_W}$ to be also complete: if $Wc$ is a substring of $T$, then $c\in \mathtt{chars_W}$. Let moreover $\mathtt{first_W[1,k_W+1]}$ be the array such that $\mathtt{first_W[i]}$ is the starting position of (the range of) $W\cdot \mathtt{chars_W[i]}$ in the suffix array of $T$ for $i=1,\dots, k_W$, and $\mathtt{first_W[k_W+1]}$ is the end position of $W$ in the suffix array of $T$. 
The representation for $W$ is (differently from~\cite{Belazzougui:2014:LTC:2591796.2591885}, we omit $\mathtt{chars_W}$ from the representation and we add $|W|$; these modifications will be useful later): $$\mathtt{repr(W) = \langle \mathtt{first_W},\ |W| \rangle}$$
Note that, if $W$ is neither right-maximal nor a text suffix, then $W$ is followed by $k_W=1$ distinct characters in $T$ and the above representation is still well-defined.
When $W$ is right-maximal, we will also say that $\mathtt{repr(W)}$ is the representation of a suffix tree explicit node (i.e. the node reached by following the path labeled $W$ from the root).
\paragraph{Weiner Link Tree Visit}\label{app:belazzougui}
The enumeration algorithm works by visiting the Weiner Link tree of $T$ starting from the root's representation, that is, $\mathtt{repr(\epsilon) = \langle \mathtt{first_\epsilon},\ 0 \rangle}$, where $\mathtt{first_\epsilon} = \langle C[c_1], \dots, C[c_\sigma], n \rangle$ (see Section \ref{sec:notation} for a definition of the $C$-array) and $c_1, \dots, c_\sigma$ are the sorted alphabet's characters.
Since the suffix tree and the Weiner link tree share the same set of nodes, this is sufficient to enumerate all suffix tree nodes.
The visit uses a stack storing representations of suffix tree nodes, initialized with $\mathtt{repr(\epsilon)}$. At each iteration, we pop the head $\mathtt{repr(W)}$ from the stack and push $\mathtt{repr(cW)}$ for every character $c$ such that $cW$ is right-maximal in $T$.
To keep the stack's size under control, once we have computed $\mathtt{repr(cW)}$ for the right-maximal left-extensions $cW$ of $W$, we push them on the stack in decreasing order of the length of $\mathtt{range(cW)}$ (i.e. the node with the smallest range is pushed last).
This guarantees that the stack will always contain at most $O(\sigma\log n)$ elements~\cite{Belazzougui:2014:LTC:2591796.2591885}.
Since each element takes $O(\sigma\log n)$ bits to be represented, the stack's size never exceeds $O(\sigma^2\log^2 n)$ bits.
\paragraph{Computing Weiner Links}
We now show how to efficiently compute the node representation $\mathtt{repr(cW)}$ from $\mathtt{repr(W)}$ for the characters $c$ such that $cW$ is right-maximal in $T$.
In~\cite{Belazzougui:2014:LTC:2591796.2591885,belazzougui2016linear} this operation is supported efficiently by first enumerating all \emph{distinct} characters in each range $BWT[\mathtt{first_W[i], first_W[i+1]}]$ for $i=1, \dots, k_W$, using function $\mathtt{BWT.rangeDistinct(first_W[i], first_W[i+1])}$ (see Section \ref{sec:notation}).
Equivalently, for each $a\in \mathtt{chars_W}$ we want to list all distinct left-extensions $cWa$ of $Wa$.
Note that, in this way, we may also visit implicit suffix tree nodes (i.e. some of these left-extensions may not be right-maximal). Stated otherwise, we are traversing all explicit \emph{and} implicit Weiner links.
Since the number of such links is linear~\cite{Belazzougui:2014:LTC:2591796.2591885,belazzougui2014alphabet} (even including implicit Weiner links\footnote{To see this, first note that the number of right-extensions $Wa$ of $W$ that have only one left-extension $cWa$ is at most equal to the number of right-extensions of $W$; globally, this is at most the number of suffix tree's nodes (linear). Any other right-extension $Wa$ that has at least two distinct left-extensions $cWa$ and $bWa$ is, by definition, left-maximal and therefore corresponds to a node in the suffix tree of the reverse of $T$. It follows that all left-extensions of $Wa$ can be charged to an edge of the suffix tree of the reverse of $T$ (again, the number of such edges is linear).}), globally the number of distinct characters returned by $\mathtt{rangeDistinct}$ operations is $O(n)$.
An implementation of $\mathtt{rangeDistinct}$ on wavelet trees is discussed in \cite{beller2013computing} with the procedure \texttt{getIntervals} (this procedure actually returns more information: the suffix array range of each $cWa$). This implementation runs in $O(\log\sigma)$ time per returned character. Globally, we therefore spend $O(n\log\sigma)$ time using a wavelet tree.
We now need to compute $\mathtt{repr(cW)}$ for all left-extensions of $W$ and keep only the right-maximal ones.
Let $x=\mathtt{repr(W)}$ and $\mathtt{BWT.Weiner(x)}$ be the function that returns the representations of such strings (used in Line \ref{range distinct2} of Algorithm \ref{alg:fill nodes}). This function can be implemented by observing that
$$ \begin{array}{lcl} \mathtt{range(cWa)} & = \mathtt{\langle}& \mathtt{C[c] + BWT.rank_c(left(Wa))}, \\ && \mathtt{ C[c] + BWT.rank_c(right(Wa)+1)-1 \ \rangle} \\ \end{array} $$
where $a=\mathtt{chars_W[i]}$ for $1\leq i < |\mathtt{first_W}|$, and noting that $\mathtt{left(Wa)}$ and $\mathtt{right(Wa)}$ are available in $\mathtt{repr(W)}$.
Note also that we do not actually need to know the value of characters $\mathtt{chars_W[i]}$ to compute the ranges of each $cW\cdot \mathtt{chars_W[i]}$; this is the reason why we can omit $\mathtt{chars_W}$ from $\mathtt{repr(W)}$.
Using a wavelet tree, the above operation takes $O(\log\sigma)$ time.
By the above observations, the number of strings $cWa$ such that $W$ is right-maximal is bounded by $O(n)$. Overall, computing $\mathtt{repr(cW)} = \langle \mathtt{first_{cW}}, |W|+1 \rangle$ for all left-extensions $cW$ of all right-maximal strings $W$ therefore takes $O(n\log\sigma)$ time.
Within the same running time, we can check which of those extensions are right-maximal (i.e. those such that $|\mathtt{first_{cW}}|\geq 2$), sort them in-place by interval length (we always sort at most $\sigma$ node representations, so sorting also takes $O(n\log\sigma)$ time globally), and push them on the stack.
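For intuition, the node representation $\mathtt{repr(W)} = \langle \mathtt{first_W}, |W|\rangle$ can be computed naively and directly from the suffix array (our own illustration; the algorithm above instead derives these representations via Weiner links, without ever materializing the suffix array):

```python
# Naive computation of repr(W) = <first_W, |W|> from the suffix array of T.
# 1-based positions throughout; the example string 'banana#' is an assumption.

def repr_W(T, SA, W):
    def rng(P):  # suffix array range <L,R> of P (empty range: <L, L-1>)
        L = 1 + sum(1 for i in SA if T[i - 1:] < P)
        R = L + sum(1 for i in SA if T[i - 1:].startswith(P)) - 1
        return L, R
    # chars_W: alphabetically-sorted right-extensions of W occurring in T
    chars_W = sorted({T[i - 1 + len(W)] for i in SA
                      if T[i - 1:].startswith(W) and len(T) - i + 1 > len(W)})
    # first_W[i] = start of range(W.chars_W[i]); last entry = end of range(W)
    first_W = [rng(W + c)[0] for c in chars_W] + [rng(W)[1]]
    return first_W, len(W)

T = 'banana#'
SA = sorted(range(1, len(T) + 1), key=lambda i: T[i - 1:])
print(repr_W(T, SA, 'a'))    # ([2, 3, 4], 1): 'a' is right-maximal (k_W = 2)
print(repr_W(T, SA, 'an'))   # ([3, 4], 2):    'an' is not (k_W = 1)
```

Note that `repr_W(T, SA, '')` reproduces $\mathtt{first_\epsilon} = \langle C[c_1], \dots, C[c_\sigma], n\rangle$, the root representation used to start the visit.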
\section{Beller et al.'s Algorithm}\label{sec:beller}
The second ingredient used in our solutions is the following result, due to Beller et al.\ (we slightly re-formulate their result to fit our purposes; read below for a description of the differences):
\begin{theorem}[Beller et al.~\cite{beller2013computing}]\label{th:Beller} Given the Burrows-Wheeler Transform of a text $T$ represented with a wavelet tree, we can enumerate all pairs $(i,LCP[i])$ in $O(n\log\sigma)$ time using $5n$ bits of working space on top of the BWT. \end{theorem}
Theorem \ref{th:Beller} represents the state of the art for computing the LCP array from the BWT.
Beller et al.'s algorithm also works by enumerating a (linear) subset of the BWT intervals. LCP values are induced from a particular visit of those intervals.
Belazzougui's and Beller et al.'s algorithms have, however, two key differences which make the former more space-efficient on small alphabets and the latter more space-efficient on large alphabets: (i) Beller et al. use a queue (FIFO) instead of a stack (LIFO), and (ii) they represent $W$-intervals with just the pair of coordinates $\mathtt{range(W)}$ and the value $|W|$.
In short, while Beller et al.'s queue might grow up to size $\Theta(n)$, the use of intervals (instead of the more complex representation used by Belazzougui) makes it possible to represent it using $O(1)$ bitvectors of length $n$. On the other hand, the size of Belazzougui's stack can be upper-bounded by $O(\sigma\log n)$, but its elements take more space to be represented.
We now describe in detail Beller et al.'s result. We keep a bitvector $U[1,n]$ such that $U[i]=0$ if and only if the pair $(i,LCP[i])$ has not been output yet.
In their original algorithm, Beller et al. use the LCP array itself to mark undefined LCP entries. In our case, we do not want to store the whole LCP array (for reasons that will be clear in the next sections) and thus we only record which LCP values have been output.
Bitvector $U$ accounts for the additional $n$ bits used by Theorem \ref{th:Beller} with respect to the original result described in~\cite{beller2013computing}. At the beginning, $U[i]=0$ for all $i=1, \dots, n$. Beller et al.'s algorithm starts by inserting in the queue the triple $\langle1,n,0 \rangle$, where the first two components are the BWT interval of $\epsilon$ (the empty string) and the third component is its length. From this point, the algorithm keeps performing the following operations until the queue is empty. We remove the first (i.e. the oldest) element $\langle L,R,\ell\rangle$ from the queue, which (by induction) is the interval and length of some string $W$: $\mathtt{range(W)}= \langle L,R \rangle$ and $|W|=\ell$. Using operation $\mathtt{getIntervals(L,R,BWT)}$~\cite{beller2013computing} (see Section \ref{sec:notation}) we left-extend the BWT interval $\langle L,R\rangle$ with the characters $c_1, \dots, c_k$ in $\mathtt{BWT.rangeDistinct(L,R)}$, obtaining the triples $\langle L_1, R_1, \ell+1 \rangle, \dots, \langle L_k, R_k, \ell+1 \rangle$ corresponding to the strings $c_1W, \dots, c_kW$. For each such triple $\langle L_i, R_i, \ell+1\rangle$, if $R_i\neq n$ and $U[R_i+1] = 0$ then we set $U[R_i+1] \leftarrow 1$, we output the LCP pair $(R_i+1, \ell)$ and push $\langle L_i, R_i, \ell+1\rangle$ on the queue. Importantly, note that we can push the intervals returned by $\mathtt{getIntervals(L,R,BWT)}$ in the queue in any order; as discussed in Section \ref{sec:notation}, this step can be implemented with just $O(\log n)$ bits of space overhead with a DFS-visit of the wavelet tree's sub-tree induced by $BWT[L,R]$ (i.e. the intervals are not stored temporarily anywhere: they are pushed as soon as they are generated). \paragraph{Queue implementation} To limit space usage, Beller et al. use the following queue representations. 
First note that, at each time point, the queue's triples are partitioned into a (possibly empty) sequence with associated length (i.e. the third element in the triples) $\ell+1$, followed by a sequence with associated length $\ell$, for some $\ell$. To simplify the description, let us assume that these two sequences are kept as two distinct queues, indicated in the following as $Q_\ell$ and $Q_{\ell+1}$. At any stage of the algorithm, we pop from $Q_\ell$ and push into $Q_{\ell+1}$. It follows that there is no need to store strings' lengths in the triples themselves (i.e. the queue's elements become just ranges), since the length of each element in $Q_\ell$ is $\ell$. When $Q_\ell$ is empty, we create a new empty queue $Q_{\ell+2}$, pop from $Q_{\ell+1}$, and push into $Q_{\ell+2}$ (and so on). Beller et al. represent $Q_\ell$ as follows. While pushing elements in $Q_\ell$, as long as its size does not exceed $n/\log n$ we represent it as a vector of pairs (of total size at most $O(n)$ bits). This representation supports push/pop operations in (amortized) constant time and takes at most $O(\log n \cdot n/\log n) = O(n)$ bits of space. As soon as $Q_\ell$'s size exceeds $n/\log n$, we switch to a representation that uses two packed bitvectors of length $n$ storing, respectively, the left- and right-most boundaries of the ranges in the queue. Note that this representation can be safely used since the pairs in $Q_\ell$ are suffix array ranges of strings of some fixed length $\ell$, therefore there cannot be overlapping intervals. Pushing an interval into such a queue takes constant time (it just requires setting two bits). Popping all the $t = |Q_\ell|$ intervals, on the other hand, can easily be implemented in $O(t+ n/\log n)$ time by scanning the bitvectors and exploiting word-parallelism (see \cite{beller2013computing} for all details). 
Since Beller et al.'s procedure visits $O(n)$ SA intervals, $Q_\ell$ will exceed size $n/\log n$ for at most $O(\log n)$ values of $\ell$. It follows that, also with this queue representation, pop operations take amortized constant time.
\paragraph{Time complexity} It is easy to see that the algorithm inserts a linear number of intervals in the queue in total, since an interval $\langle L_i, R_i, \ell+1 \rangle$ is inserted only if $U[R_i+1] = 0$, and subsequently $U[R_i+1]$ is set to $1$. Clearly, this can happen at most $n$ times.
In~\cite{beller2013computing} the authors moreover show that, even when counting the left-extensions of those intervals (computed after popping each interval from the queue), the total number of generated intervals stays linear.
Overall, the algorithm therefore runs in $O(n\log\sigma)$ time (as discussed in Section \ref{sec:notation}, $\mathtt{getIntervals}$ runs in $O(\log\sigma)$ time per returned element).
\section{Enumerating LCP values}\label{sec:LCP}
In this section we prove our first main result: how to enumerate LCP pairs $(i,LCP[i])$ using succinct working space on top of a wavelet tree representing the BWT. Later we will use this procedure to build the LCP and PLCP arrays in small space on top of a plain representation of the BWT.
We give our lemma in the general form of string collections, which will require adapting the algorithms seen in the previous sections to this more general setting.
Our first observation is that Theorem \ref{th:Belazzougui}, extended to string collections as described below, can be directly used to enumerate LCP pairs $(i,LCP[i])$ using just $O(\sigma^2\log^2n)$ bits of working space on top of the input and output. We combine this procedure with an extended version of Beller et al.'s algorithm working on string collections in order to get small working space for all alphabets.
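To make the procedure of Section \ref{sec:beller} concrete, the following is a minimal Python sketch of its main loop on a single text (our own illustration, with simplifications: a plain deque instead of the packed queues, naive \emph{rank} instead of a wavelet tree, and left-extensions with $\#$ allowed since the input is a single text; the pair $(1, LCP[1])$ is omitted). The input BWT and $C$-array of \texttt{banana\#} are assumptions chosen for the example.

```python
from collections import deque

# Sketch of Beller et al.'s LCP enumeration on a single text.
# Outputs pairs (i, LCP[i]) for i >= 2, as a dictionary i -> LCP[i].

def bgos_lcp_pairs(BWT, C, n):
    def rank(c, i):                       # occurrences of c in BWT[1, i-1]
        return BWT[:i - 1].count(c)
    def get_intervals(L, R):              # left-extend range <L,R>
        for c in sorted(set(BWT[L - 1:R])):
            yield C[c] + rank(c, L), C[c] + rank(c, R + 1) - 1
    U = [False] * (n + 2)                 # U[i]: pair (i, LCP[i]) output already
    out = {}
    Q = deque([(1, n, 0)])                # <L, R, |W|> for W = empty string
    while Q:
        L, R, ell = Q.popleft()           # range and length of some string W
        for L2, R2 in get_intervals(L, R):
            if R2 != n and not U[R2 + 1]:
                U[R2 + 1] = True
                out[R2 + 1] = ell         # output LCP pair (R2+1, |W|)
                Q.append((L2, R2, ell + 1))
    return out

pairs = bgos_lcp_pairs('annb#aa', {'#': 1, 'a': 2, 'b': 5, 'n': 6}, 7)
print(sorted(pairs.items()))  # [(2, 0), (3, 1), (4, 3), (5, 0), (6, 0), (7, 2)]
```

The output agrees with the LCP array $[0,0,1,3,0,0,2]$ of \texttt{banana\#}.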
Algorithms \ref{alg:fill nodes} and \ref{alg:fill leaves} report our complete procedure; read below for an exhaustive description. We obtain our first main result:
\begin{lemma}\label{thm:LCP collection} Given a wavelet tree for the Burrows-Wheeler Transform of a collection $\mathcal{S}\xspace = \{T_1, \dots, T_m\}$ of total length $n$ on alphabet $[1,\sigma]$, we can enumerate all pairs $(i, LCP[i])$ in $O(n\log\sigma)$ time using $o(n\log\sigma)$ bits of working space on top of the BWT. \end{lemma}
\begin{proof}
If $\sigma < \sqrt{n}/\log^2n$, then $\sigma^2\log^2n = o(n)$ and our extension of Theorem \ref{th:Belazzougui} gives us $o(n\log\sigma)$ additional working space. If $\sigma \geq \sqrt{n}/\log^2n$, then $\log\sigma = \Theta(\log n)$ and we can use our extension to string collections of Theorem \ref{th:Beller}, which yields extra working space $O(n) = o(n\log n) = o(n\log\sigma)$.
Note that, while we used the threshold $\sigma < \sqrt{n}/\log^2n$, any threshold of the form $\sigma < \sqrt{n}/\log^{1+\epsilon}n$, with $\epsilon>0$, would work. The only constraint is that $\epsilon>0$, since otherwise for $\epsilon=0$ the working space would become $O(n\log\sigma)$ for constant $\sigma$ (not good since we aim at $o(n\log\sigma)$).
\end{proof}
We now describe all the details of our extensions of Theorems \ref{th:Belazzougui} and \ref{th:Beller} used in the proof of Lemma \ref{thm:LCP collection}.
Procedure \texttt{BGOS(BWT)} in Line \ref{beller et al.} of Algorithm \ref{alg:fill nodes} is a call to Beller et al.'s algorithm, modified as follows. First, we enumerate the LCP pairs $(C[c], 0)$ for all $c\in\Sigma$. Then, we push into the queue $\langle \mathtt{range(c), 1} \rangle$ for all $c\in\Sigma$ and start the main algorithm. Note moreover that (see Section \ref{sec:notation}) from now on we never left-extend ranges with $\#$.
Recall that each string of a text collection $\mathcal{S}\xspace$ ends with a terminator $\#$ (common to all strings).
Consider now the LCP and GSA arrays of $\mathcal{S}\xspace$. We divide LCP values into two types.
Let $GSA[i] = \langle j,k \rangle$, with $i>1$, indicate that the $i$-th suffix in the lexicographic ordering of all suffixes of strings in $\mathcal{S}\xspace$ is $\mathcal{S}\xspace[j][k,|\mathcal{S}\xspace[j]|]$.
An LCP value $\mathtt{LCP[i]}$ is of \emph{node type} when the $i$-th and $(i-1)$-th suffixes are distinct: $\mathcal{S}\xspace[j][k,|\mathcal{S}\xspace[j]|] \neq \mathcal{S}\xspace[j'][k',|\mathcal{S}\xspace[j']|]$, where $GSA[i] = \langle j,k \rangle$ and $GSA[i-1] = \langle j',k' \rangle$.
Those two suffixes differ before the terminator is reached in both suffixes (it might be reached in one of the two suffixes, however); we use the name \emph{node-type} because $i-1$ and $i$ are the last and first suffix array positions of the ranges of two adjacent children of some suffix tree node, respectively (i.e. the node corresponding to string $\mathcal{S}\xspace[j][k,k+LCP[i]-1]$).
Note that it might be that one of the two suffixes, $\mathcal{S}\xspace[j][k,|\mathcal{S}\xspace[j]|]$ or $\mathcal{S}\xspace[j'][k',|\mathcal{S}\xspace[j']|]$, is the string ``$\#$''.
Similarly, a \emph{leaf-type} LCP value $\mathtt{LCP[i]}$ is such that the $i$-th and $(i-1)$-th suffixes are equal: $\mathcal{S}\xspace[j][k,|\mathcal{S}\xspace[j]|] = \mathcal{S}\xspace[j'][k',|\mathcal{S}\xspace[j']|]$.
We use the name \emph{leaf-type} because, in this case, it must be the case that $i \in [L+1,R]$, where $\langle L,R \rangle$ is the suffix array range of some suffix tree leaf (it might be that $R>L$ since there might be repeated suffixes in the collection).
Note that, in this case, $\mathcal{S}\xspace[j][k,|\mathcal{S}\xspace[j]|] = \mathcal{S}\xspace[j'][k',|\mathcal{S}\xspace[j']|]$ could coincide with $\#$.
Entry $LCP[1]$ escapes the above classification, so we output it separately. Our idea is to compute first node-type and then leaf-type LCP values.
We argue that Beller et al.'s algorithm already computes the former kind of LCP values. When this algorithm uses too much space (i.e. on small alphabets), we show that Belazzougui's enumeration strategy can be adapted to reach the same goal: by the very definition of node-type LCP values, they lie between children of some suffix tree node $x$, and their value corresponds to the string depth of $x$. This strategy is described in Algorithm \ref{alg:fill nodes}.
Function $\mathtt{BWT.Weiner(x)}$ in Line \ref{range distinct2} takes as input the representation of a suffix tree node $x$ and returns all explicit nodes reached by following Weiner links from $x$ (an implementation of this function is described in Section~\ref{sec:belazzougui}).
Leaf-type LCP values, on the other hand, can easily be computed by enumerating intervals corresponding to suffix tree leaves. To reach this goal, it is sufficient to enumerate ranges of suffix tree leaves starting from $\mathtt{range(\#)}$ and recursively left-extending with backward search with characters different from $\#$ whenever possible. For each range $\langle L,R \rangle$ obtained in this way, we set each entry $LCP[L+1,R]$ to the string depth (terminator excluded) of the corresponding leaf. This strategy is described in Algorithm \ref{alg:fill leaves}.
In order to limit space usage, we again use a stack or a queue to store leaves and their string depth (note that each leaf takes $O(\log n)$ bits to be represented): we use a queue when $\sigma > n/\log^3n$, and a stack otherwise. The queue is the same as the one used by Beller et al.~\cite{beller2013computing} and described in Section \ref{sec:beller}.
This guarantees that the bit-size of the queue/stack always remains within $o(n\log\sigma)$ bits: since leaves take just $O(\log n)$ bits to be represented and the stack never contains more than $O(\sigma\cdot \log n)$ leaves, the stack's bit-size never exceeds $O(n/\log n) = o(n)$ when $\sigma \leq n/\log^3n$.
Similarly, Beller et al.'s queue always takes at most $O(n)$ bits of space, which is $o(n\log\sigma)$ for $\sigma > n/\log^3n$. Note that in Lines \ref{getIntervals1}-\ref{push2} we can afford temporarily storing the $k$ resulting intervals since, in this case, the alphabet's size is small enough. To sum up, our full procedure works as follows: (1) we output node-type LCP values using procedure \texttt{Node-Type}$(\mathtt{BWT})$ described in Algorithm \ref{alg:fill nodes}, and (2) we output leaf-type LCP values using procedure \texttt{Leaf-Type}$(\mathtt{BWT})$ described in Algorithm \ref{alg:fill leaves}. \begin{algorithm} \begin{algorithmic}[1] \If{$\sigma > \sqrt n/\log^2n$} \State $\mathtt{BGOS(BWT)}$ \Comment Run Beller et al.'s algorithm\label{beller et al.} \Else \State $\mathtt P \leftarrow \texttt{new\_stack()}$\Comment Initialize new stack\label{new stack2} \State $\mathtt P\mathtt{.push( repr(\epsilon))}$\Comment Push representation of $\epsilon$ \label{push3} \While{$\mathtt{\mathbf{not}\ P.empty()}$}\label{while2} \State $\langle \mathtt{first_W},\ \ell \rangle \leftarrow \mathtt{P.pop()}$\Comment Pop highest-priority element\label{pop2} \State $t \leftarrow |\mathtt{first_W}|-1$\Comment Number of children of ST node\label{nchild} \For{$i = 2, \dots, t$} \State \textbf{output} $(\mathtt{first_W}[i], \ell)$\Comment Output LCP value\label{LCP in Node} \EndFor \State $x_1,\dots, x_k \leftarrow \mathtt{BWT.Weiner(\langle first_W,\ \ell \rangle)}$\Comment Follow Weiner Links\label{range distinct2} \State $x'_1, \dots, x'_k \leftarrow \mathtt{sort}(x_1, \dots, x_k)$\Comment Sort by interval length\label{sort2} \For{$i=k\dots 1$} \State $\mathtt{P.push}(x'_i)$\Comment Push representations\label{push4} \EndFor \EndWhile \EndIf \caption{\texttt{Node-Type(BWT)}}\label{alg:fill nodes} \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic}[1] \For{$i=left(\#),\dots, right(\#)$} \State \textbf{output} $(i,0)$ \EndFor \If{$\sigma > n/\log^3n$}
\State $\mathtt P \leftarrow \mathtt{new\_queue()}$\Comment{Initialize new queue}\label{new queue1} \Else \State $\mathtt P \leftarrow \mathtt{new\_stack()}$\Comment{Initialize new stack}\label{new stack1} \EndIf \State $\mathtt P\mathtt{.push( BWT.range(\#),0)}$\Comment{Push range of terminator and LCP value 0}\label{push1} \While{$\mathtt{\mathbf{not}\ P.empty()}$}\label{while1} \State $\langle \langle L,R \rangle, \ell \rangle \leftarrow \mathtt{P.pop()}$\Comment{Pop highest-priority element}\label{pop1} \For{$i=L+1\dots R$} \State \textbf{output} $(i, \ell)$\Comment{Output LCP inside range of ST leaf}\label{LCP in Leaves} \EndFor \If{$\sigma > n/\log^3n$} \State $\mathtt{P.push(getIntervals(L, R, BWT), \ell+1)}$\Comment{Pairs $\langle$interval,$\ell+1\rangle$}\label{push7} \Else \State $\langle L_i, R_i\rangle_{i=1, \dots, k} \leftarrow \mathtt{getIntervals(L, R,BWT)}$\label{getIntervals1} \State $\langle L'_i, R'_i\rangle_{i=1, \dots, k} \leftarrow \mathtt{sort}(\langle L_i, R_i\rangle_{i=1, \dots, k})$\Comment{Sort by interval length}\label{sort1} \For{$i=k\dots 1$} \State $\mathtt{P.push}(\langle L'_i, R'_i\rangle, \ell+1)$\Comment{Push in order of decreasing length}\label{push2} \EndFor \EndIf \EndWhile \caption{\texttt{Leaf-Type(BWT)}}\label{alg:fill leaves} \end{algorithmic} \end{algorithm} The correctness, completeness, and complexity of our procedure are proved in the following Lemma: \begin{lemma}\label{lemma:proof of thm1} Algorithms \ref{alg:fill nodes} and \ref{alg:fill leaves} correctly output all LCP pairs $(i,LCP[i])$ of the collection in $O(n\log\sigma)$ time using $o(n\log\sigma)$ bits of working space on top of the input BWT. \end{lemma} \begin{proof} \emph{Correctness - Algorithm \ref{alg:fill nodes}}. We start by proving that Beller et al.'s procedure in Line \ref{beller et al.} of Algorithm \ref{alg:fill nodes} (procedure \texttt{BGOS(BWT)}) outputs all the node-type LCP entries correctly. 
The proof proceeds by induction on the LCP value $\ell$ and follows the original proof of~\cite{beller2013computing}. At the beginning, we insert in the queue all $c$-intervals, for $c\in\Sigma$. For each such interval $\langle L,R \rangle$ we output $LCP[R+1]=\ell = 0$. It is easy to see that after this step all and only the node-type LCP values equal to 0 have been correctly computed. Assume, by induction, that all node-type LCP values less than or equal to $\ell$ have been correctly output, and that we are about to extract from the queue the first triple $\langle L,R,\ell+1 \rangle$ having length $\ell+1$. For each extracted triple with length $\ell+1$ associated to a string $W$, consider the triple $\langle L',R',\ell+2 \rangle$ associated to one of its left-extensions $cW$. If $LCP[R'+1]$ has been computed, i.e. if $U[R'+1]=1$, then we have nothing to do. However, if $U[R'+1]=0$, then it must be the case that (i) the corresponding LCP value satisfies $LCP[R'+1] \geq \ell+1$, since by induction we have already computed all node-type LCP values smaller than or equal to $\ell$, and (ii) $LCP[R'+1]$ is of node-type, since otherwise the BWT interval of $cW$ would also include position $R'+1$. On the other hand, it cannot be the case that $LCP[R'+1] > \ell+1$ since otherwise the $cW$-interval would include position $R'+1$. We therefore conclude that $LCP[R'+1] = \ell+1$ must hold. \emph{Completeness - Algorithm \ref{alg:fill nodes}}. The above argument settles correctness; to prove completeness, assume that, at some point, $U[i] = 0$ and the value of $LCP[i]$ to be computed and output is $\ell+1$. We want to show that we will pull a triple $\langle L,R,\ell+1 \rangle$ from the queue corresponding to a string $W$ (note that $\ell+1=|W|$ and, moreover, $W$ could end with $\#$) such that one of the left-extensions $aW$ of $W$ satisfies $\mathtt{range(aW)} = \langle L',i-1 \rangle$, for some $L'$. 
This will show that, at some point, we will output the LCP pair $(i, \ell+1)$. We proceed by induction on $|W|$. Note that we separately output all LCP values equal to 0. The base case $|W|=1$ is easy: by the way we initialized the queue, $\langle \mathtt{range(c)}, 1\rangle$, for all $c\in\Sigma$, are the first triples we pop. Since we left-extend these ranges with all alphabet characters except $\#$, it is easy to see that all LCP values equal to 1 have been output. From now on we can therefore assume that we are working on LCP values equal to $\ell+1>1$, i.e. $W=b\cdot V$, for $b\in\Sigma-\{\#\}$ and $V\in \Sigma^+$. Let $abV$ be the length-$(\ell+2)$ left-extension of $W=bV$ such that $\mathtt{right(abV)+1} = i$. Since, by our initial hypothesis, $\mathtt{LCP[i]} = \ell+1$, the collection also contains a suffix $aU$ lexicographically larger than $abV$ and such that $\mathtt{LCP(aU,abV)} = \ell+1$. But then, it must be the case that $\mathtt{LCP[right(bV)+1]} = \ell$ (it cannot be smaller by the existence of $U$ and it cannot be larger since $|bV|=\ell+1$). By the inductive hypothesis, this value was set after popping a triple $\langle L'', R'', \ell\rangle$ corresponding to string $V$, left-extending $V$ with $b$, and pushing $\langle \mathtt{range(bV)}, \ell+1 \rangle$ in the queue. This ends the completeness proof since we showed that $\langle \mathtt{range(bV)}, \ell+1 \rangle$ is in the queue, so at some point we will pop it, extend it with $a$, and output $(right(abV)+1,\ell+1) = (i,\ell+1)$. If the queue uses too much space, then Algorithm \ref{alg:fill nodes} switches to a stack and Lines \ref{new stack2}-\ref{push4} are executed instead of Line \ref{beller et al.}. Note that this pseudocode fragment corresponds to Belazzougui's enumeration algorithm, except that now we also set LCP values in Line \ref{LCP in Node}.
By the enumeration procedure's correctness, we have that, in Line \ref{LCP in Node}, $\langle \mathtt{first_W[1]}, \mathtt{first_W[t+1]} \rangle$ is the SA-range of a right-maximal string $W$ with $\ell = |W|$, and $\mathtt{first_W[i]}$ is the first position of the SA-range of $Wc_i$, with $i=1,\dots,t$, where $c_1, \dots, c_t$ are all the (sorted) right-extensions of $W$. Then, clearly each LCP value in Line \ref{LCP in Node} is of node-type and has value $\ell$, since it is the LCP between two strings prefixed by $W\cdot \mathtt{chars_W[i-1]}$ and $W\cdot \mathtt{chars_W[i]}$. Similarly, completeness of the procedure follows from the completeness of the enumeration algorithm. Let $LCP[i]$ be of node-type. Consider the prefix $Wb$ of length $LCP[i]+1$ of the $i$-th suffix in the lexicographic ordering of all strings' suffixes. Since $LCP[i] = |W|$, the $(i-1)$-th suffix is of the form $Wa$, with $b\neq a$, and $W$ is right-maximal. But then, at some point our enumeration algorithm will visit the representation of $W$, with $|W|=\ell$. Since $i$ is the first position of the range of $Wb$, we have that $i= \mathtt{first_W[j]}$ for some $j \geq 2$, and Line \ref{LCP in Node} correctly outputs the LCP pair $(\mathtt{first_W}[j], |W|) = (i,|W|)$. \emph{Correctness and completeness - Algorithm \ref{alg:fill leaves}}. Proving correctness and completeness of this procedure is much easier. It is sufficient to note that the \texttt{while} loop iterates over all ranges $\langle L,R \rangle$ of strings ending with $\#$ and not containing $\#$ anywhere else (note that we start from the range of $\#$ and we proceed by recursively left-extending this range with symbols different from $\#$). Then, for each such range we conclude that $LCP[L+1,R]$ is equal to $\ell$, i.e. the string depth of the corresponding string (excluding the final character $\#$). By their definition, all leaf-type LCP values are correctly computed in this way. \emph{Complexity - Algorithm \ref{alg:fill nodes}}.
If $\sigma > \sqrt n/\log^2 n$, then we run Beller et al.'s algorithm, which terminates in $O(n\log\sigma)$ time and uses $O(n) = o(n\log\sigma)$ bits of additional working space. Otherwise, we perform a linear number of operations on the stack since, as observed in Section \ref{sec:belazzougui}, the number of Weiner links is linear. By the same analysis of Section \ref{sec:belazzougui}, the operation in Line \ref{range distinct2} takes $O(k\log\sigma)$ amortized time on wavelet trees, and sorting in Line \ref{sort2} (using any comparison-sorting algorithm sorting $m$ integers in $O(m\log m)$ time) takes $O(k\log\sigma)$ time. Note that in this sorting step we can afford storing the nodes $x_1, \dots, x_k$ in temporary space since this takes additional space $O(k\sigma\log n) = O(\sigma^2\log n) = O(n/\log^3n) = o(n)$ bits. All these operations sum up to $O(n\log\sigma)$ time. Since the stack always takes at most $O(\sigma^2\log^2n)$ bits and $\sigma \leq \sqrt n/\log^2 n$, the stack's bit-size never exceeds $O(n/\log^2n) = o(n)$ bits. \emph{Complexity - Algorithm \ref{alg:fill leaves}}. Note that, in the \texttt{while} loop, we start from the interval of $\#$ and recursively left-extend with characters different from $\#$ for as long as possible. It follows that we visit the intervals of all strings of the form $W\#$ such that $\#$ does not appear inside $W$. Since these intervals form a cover of $[1,n]$, their number (and therefore the number of iterations of the \texttt{while} loop) is also bounded by $n$. This is also the maximum number of operations performed on the queue/stack. Using Beller et al.'s implementation for the queue and a simple vector for the stack, each operation takes constant amortized time. Operating on the stack/queue therefore takes $O(n)$ time overall. For each interval $\langle L,R \rangle$ popped from the queue/stack, in Line \ref{LCP in Leaves} we output $R-L$ LCP values.
As observed above, these intervals form a cover of $[1,n]$ and therefore Line \ref{LCP in Leaves} is executed no more than $n$ times. Line \ref{getIntervals1} takes time $O(k\log\sigma)$. Finally, in Line \ref{sort1} we sort at most $\sigma$ intervals. Using any fast comparison-based sorting algorithm, this costs overall at most $O(n\log\sigma)$ time. As far as the space usage of Algorithm \ref{alg:fill leaves} is concerned, note that we always push just pairs interval/length ($O(\log n)$ bits) in the queue/stack. If $\sigma > n/\log^3n$, we use Beller et al.'s queue, taking at most $O(n) = o(n\log\sigma)$ bits of space. Otherwise, the stack's size never exceeds $O(\sigma\cdot \log n)$ elements, with each element taking $O(\log n)$ bits. This amounts to $O(\sigma\cdot \log^2 n) = O(n/\log n) = o(n)$ bits of space usage. Moreover, in Lines \ref{getIntervals1}-\ref{sort1} it holds $\sigma\leq n/\log^3n$, so we can afford temporarily storing all intervals returned by $\mathtt{getIntervals}$ in $O(k\log n) = O(\sigma\log n) = O(n/\log^2n) = o(n)$ bits. \end{proof} Combining Lemma \ref{thm:LCP collection} and Lemma \ref{thm:BWT->WT}, we obtain: \begin{theorem}\label{thm:LCP collection succinct} Given the word-packed Burrows-Wheeler Transform of a collection $\mathcal{S}\xspace = \{T_1, \dots, T_m\}$ of total length $n$ on alphabet $[1,\sigma]$, we can build the LCP array of the collection in $O(n\log\sigma)$ time using $o(n\log\sigma)$ bits of working space on top of the BWT. \end{theorem} \section{Enumerating Suffix Tree Intervals}\label{sec:intervals} In this section we show that the procedures described in Section \ref{sec:LCP} can be used to enumerate all suffix tree intervals---that is, the suffix array intervals of all right-maximal text substrings---taking as input the BWT of a text. Note that in this section we consider just single texts rather than string collections, as later we will use this procedure to build the compressed suffix tree of a text.
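As a reference for the object being enumerated, the following quadratic-time sketch (ours, for illustration only) lists the suffix array intervals of all right-maximal substrings of a text, including the empty string, which labels the root:

```python
def suffix_tree_intervals(T):
    # Naive reference: SA intervals of all right-maximal substrings of T#.
    t = T + '#'
    sa = sorted(range(len(t)), key=lambda k: t[k:])

    def interval(w):
        rows = [i for i in range(len(sa)) if t[sa[i]:].startswith(w)]
        return min(rows), max(rows)

    out = set()
    for w in {t[i:j] for i in range(len(t)) for j in range(i, len(t))}:
        # distinct right-extensions of w occurring in t
        exts = {t[i + len(w)] for i in range(len(t) - len(w))
                if t[i:i + len(w)] == w}
        if len(exts) >= 2:                 # w is right-maximal
            out.add(interval(w))
    return out
```

For $T = \mathtt{abab}$ this returns the root interval $(0,4)$ plus the intervals $(1,2)$ and $(3,4)$ of the right-maximal substrings $\mathtt{ab}$ and $\mathtt{b}$.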
When $\sigma \leq \sqrt n/\log^2n$, we can directly use Belazzougui's procedure (Theorem \ref{th:Belazzougui}), which already solves the problem. When $\sigma > \sqrt n/\log^2n$, we modify Beller et al.'s procedure (Theorem \ref{th:Beller}) to enumerate suffix tree intervals using $O(n) = o(n\log\sigma)$ bits of working space, as follows. We recall (see Section \ref{sec:beller}) that Beller et al.'s procedure can be conveniently described using two separate queues: $Q_{\ell}$ and $Q_{\ell+1}$. At each step, we pop from $Q_\ell$ an element $\langle\langle L,R \rangle, |W| \rangle$ with $\langle L,R \rangle = \mathtt{range(W)}$ and $|W|=\ell$ for some string $W$, left-extend the range with all $a\in \mathtt{BWT.rangeDistinct(L,R)}$, obtaining the ranges $\mathtt{range(aW)} = \langle L_a, R_a\rangle$ and, only if $U[R_a+1]=0$, set $U[R_a+1]\leftarrow 1$, output the LCP pair $(R_a+1, |W|)$, and push $\langle \langle L_a, R_a\rangle, |W|+1 \rangle$ into $Q_{\ell+1}$. Note that, since $LCP[R_a+1] = |W|$, the $R_a$-th and $(R_a+1)$-th smallest suffixes start, respectively, with $aXc$ and $aXd$ for some $c<d\in\Sigma$, where $W=Xc$. This implies that $aX$ is right-maximal. It is also clear, by the completeness of Beller et al.'s procedure, that all right-maximal text substrings are visited by the procedure, since otherwise the LCP values equal to $\ell = |aX|$ inside $\mathtt{range(aX)}$ would not be generated. It follows that, in order to generate all suffix tree intervals \emph{once}, we need two extra ingredients: (i) whenever we pop from $Q_\ell$ an element $\langle\langle L,R \rangle, |W| \rangle$ corresponding to a string $W = Xc$, we also need the range of $X$, and (ii) we need to quickly check whether a given range $\mathtt{range(aX)}$ of a right-maximal substring $aX$ has already been output.
Point (ii) is necessary since, using only the above procedure (augmented with point (i)), $\mathtt{range(aX)}$ will be output for each of its right-extensions (except the lexicographically largest, which does not cause the generation of an LCP pair). Remember that, in order to keep space usage under control (i.e. $O(n)$ bits), we represent $Q_\ell$ as a standard queue of pairs $\langle \mathtt{range(W)}, |W| \rangle$ if and only if $|Q_\ell| < n/\log n$. For now, let us assume that the queue size does not exceed this quantity (the other case will be considered later). In this case, to implement point (i) we simply augment queue pairs as $\langle \mathtt{range(W)}, \mathtt{range(X)}, |W| \rangle$, where $W=Xc$ for some $c\in\Sigma$. When left-extending $W$ with a character $a$, we also left-extend $X$ with $a$, obtaining $\mathtt{range(aX)}$. Let $\mathtt{range(aW)} = \langle L_a, R_a\rangle$. At this point, if $U[R_a+1]=0$ we do the following: \begin{enumerate} \item we set $U[R_a+1]\leftarrow 1$, \item we push $\langle \mathtt{range(aW)}, \mathtt{range(aX)}, |W|+1 \rangle$ in $Q_{\ell+1}$, and \item if $\mathtt{range(aX)}$ has not already been generated, we output $\mathtt{range(aX)}$. \end{enumerate} Note that steps (1) and (2) correspond to Beller et al.'s procedure. The test in step (3) (that is, point (ii) above) can be implemented as follows. Note that a suffix array range $\mathtt{range(aX)} = \langle L,R\rangle$ can be identified unambiguously by the two integers $L$ and $|aX| = \ell$. Note also that we generate suffix tree intervals in increasing order of string depth (i.e. when popping elements from $Q_\ell$, we output suffix array intervals of string depth $\ell$). It follows that we can keep a bitvector $GEN_\ell$ of length $n$ recording in $GEN_\ell[i]$ whether or not the suffix array interval of the string of length $\ell$ whose first coordinate is $i$ has already been output. 
Each time we change the value of a bit $GEN_\ell[i]$ from 0 to 1, we also push $i$ into a stack $SET_\ell$. Let us assume for now that $SET_\ell$'s size also does not exceed $n/\log n$ (later we will consider a different representation for the other case). Then, the bit-size of $SET_\ell$ also never exceeds $O(n)$ bits. After $Q_\ell$ has been emptied, for each $i\in SET_\ell$ we set $GEN_\ell[i] \leftarrow 0$. This makes all $GEN_\ell$'s entries equal to 0, and we can thus re-use its space for $GEN_{\ell+1}$ at the next stage (i.e. when popping elements from $Q_{\ell+1}$). Now, let us consider the case $|Q_\ell| \geq n/\log n$. The key observation is that $Q_\ell$ exceeds this size for at most $O(\log n)$ values of $\ell$, therefore we can afford spending extra $O(n/\log n)$ time to process each of these queues. As seen in Section \ref{sec:beller}, whenever $Q_\ell$'s size exceeds $n/\log n$ (while pushing elements in it) we switch to a different queue representation using packed bitvectors. Point (i) can be solved by storing two additional bitvectors as follows. Suppose we are about to push the triple $\langle \mathtt{range(W)}, \mathtt{range(X)}, |W| \rangle$ in $Q_\ell$, where $W=Xc$ for some $c\in\Sigma$. The solution seen in Section \ref{sec:beller} consisted in marking, in two packed bitvectors $\mathtt{open[1,n]}$ and $\mathtt{close[1,n]}$, the start and end points of $\mathtt{range(W)}$. Now, we just use two additional packed bitvectors $\mathtt{\overline{open}[1,n]}$ and $\mathtt{\overline{close}[1,n]}$ to also mark the start and end points of $\mathtt{range(X)}$. As seen in Section \ref{sec:beller}, intervals are extracted from $Q_\ell$ by scanning $\mathtt{open[1,n]}$ and $\mathtt{close[1,n]}$ in $O(n/\log n + |Q_\ell|)$ time (exploiting word-parallelism). Note that $W$ is a right-extension of $X$, therefore $\mathtt{range(W)}$ is contained in $\mathtt{range(X)}$.
It follows that we can scan in parallel the bitvectors $\mathtt{open[1,n]}$, $\mathtt{close[1,n]}$, $\mathtt{\overline{open}[1,n]}$, and $\mathtt{\overline{close}[1,n]}$ and retrieve, for each $\mathtt{range(W)}$ extracted from the former two bitvectors, the (unique in the queue) interval $\mathtt{range(X)}$ enclosing $\mathtt{range(W)}$ (using the latter two bitvectors). More formally, whenever we find a set bit $\mathtt{\overline{open}[i]}$, we search $\mathtt{\overline{close}[i,n]}$ to find the next set bit. Let us call $j$ the position containing such a bit. Then, we similarly scan $\mathtt{open[i,j]}$ and $\mathtt{close[i,j]}$ to generate all intervals $\langle l,r \rangle$ enclosed by $\langle i,j \rangle$, and for each of them generate the triple $\langle \langle l,r \rangle, \langle i,j \rangle, \ell \rangle$. Again exploiting word-parallelism, the process takes $O(n/\log n + |Q_\ell|)$ time to extract all triples $\langle \mathtt{range(W)}, \mathtt{range(X)}, |W| \rangle$ from $Q_\ell$. A similar solution can be used to solve point (ii) for large $SET_\ell$. Whenever $SET_\ell$ exceeds size $n/\log n$, we simply empty it and just use bitvector $GEN_\ell$. This time, however, this bitvector is packed in $O(n/\log n)$ words. It can therefore be erased (i.e. setting all its entries to 0) in $O(n/\log n)$ time, and we do not need to use the stack $SET_\ell$ at all. Since (a) we insert an element in some $SET_\ell$ only when outputting a suffix tree range and (b) in total we output $O(n)$ such ranges, $SET_\ell$ can exceed size $n/\log n$ for at most $O(\log n)$ values of $\ell$. We conclude that the cost of creating and processing all $GEN_\ell$ and $SET_\ell$ also amortizes to $O(n)$. To sum up, the overall procedure runs in $O(n\log\sigma)$ time and uses $O(n)$ bits of space. By combining it with Belazzougui's procedure as seen above (i.e.
choosing the right procedure according to the alphabet's size), we obtain: \begin{lemma}\label{lem:ST intervals} Given a wavelet tree representing the Burrows-Wheeler transform of a text $T$ of length $n$ on alphabet $[1,\sigma]$, in $O(n\log\sigma)$ time and $o(n\log\sigma)$ bits of working space we can enumerate the suffix array intervals corresponding to all right-maximal text substrings. \end{lemma} \section{Building the PLCP Bitvector}\label{sec:PLCP} The PLCP array is defined as $PLCP[i] = LCP[ISA[i]]$, and can thus be used to retrieve LCP values as $LCP[i] = PLCP[SA[i]]$ (note that this requires accessing the suffix array). Kasai et al. showed in~\cite{10.1007/3-540-48194-X_17} that PLCP is almost increasing: $PLCP[i+1] \geq PLCP[i]-1$. This allows representing it in small space as follows. Let $\mathtt{plcp[1,2n]}$ denote the bitvector having a bit set at each position $PLCP[i]+2i$, for $i=1, \dots, n$ (and 0 in all other positions). Since $PLCP[i+1] \geq PLCP[i]-1$, the quantity $PLCP[i]+2i$ is different for each $i$. By definition, $PLCP[i]$ can be written as $j-2i$, where $j$ is the position of the $i$-th set bit in $\mathtt{plcp}$; this shows that each PLCP entry can be retrieved in constant time using the bitvector $\mathtt{plcp}$, augmented to support constant-time \emph{select} queries. We now show how to build the $\mathtt{plcp}$ bitvector in small space using the LCP enumeration procedure of Section \ref{sec:LCP}. Our procedure relies on the concept of \emph{irreducible LCP values}: \begin{definition}\label{def:irreducible} $LCP[i]$ is said to be \emph{irreducible} if and only if either $i=0$ or $BWT[i] \neq BWT[i-1]$ holds. \end{definition} We call \emph{reducible} a non-irreducible LCP value. We extend the above definition to PLCP values, saying that $PLCP[i]$ is irreducible if and only if $LCP[ISA[i]]$ is irreducible. The following Lemma, shown in~\cite{10.1007/978-3-540-27810-8_32}, is easy to prove (see also~\cite[Lem.
4]{10.1007/978-3-642-02441-2_17}): \begin{lemma}[\cite{10.1007/978-3-540-27810-8_32}, Lem. 1]\label{lem:reducible} If $PLCP[i]$ is reducible, then $PLCP[i] = PLCP[i - 1] - 1$. \end{lemma} We also make use of the following Theorem from K{\"a}rkk{\"a}inen et al.~\cite{10.1007/978-3-642-02441-2_17}: \begin{theorem}[\cite{10.1007/978-3-642-02441-2_17}, Thm. 1]\label{lem:sum irreducible} The sum of all irreducible LCP values is at most $2n\log n$. \end{theorem} Our strategy is as follows. We divide $BWT[1,n]$ into $\lceil n/B \rceil$ blocks $BWT[(i-1)\cdot B+1,i\cdot B]$, $i=1, \dots, \lceil n/B \rceil$ of size $B$ (assume for simplicity that $B$ divides $n$). For each block $i=1, \dots, \lceil n/B \rceil$, we use Lemma \ref{thm:LCP collection} to enumerate all pairs $(j,LCP[j])$. Whenever we generate a pair $(j,LCP[j])$ such that (i) $j$ falls in the current block's range $[(i-1)\cdot B+1,i\cdot B]$, (ii) $LCP[j]> \log^3 n$, and (iii) $LCP[j]$ is irreducible (this can be checked easily using Definition \ref{def:irreducible}), we store $(j,LCP[j])$ in a temporary array $\mathtt{LARGE\_LCP}$ (note: each such pair requires $O(\log n)$ bits to be stored). By Theorem \ref{lem:sum irreducible}, there cannot be more than $2n/\log^2 n$ irreducible LCP values larger than $\log^3 n$; that is, $\mathtt{LARGE\_LCP}$ will never contain more than $2n/\log^2 n$ values and its bit-size will never exceed $O(n/\log n) = o(n)$ bits. We also mark all such relative positions $j-(i-1)\cdot B$ in a bitvector of length $B$ with rank support and radix-sort $\mathtt{LARGE\_LCP}$ in $O(B)$ time to guarantee constant-time access to $LCP[j]$ whenever conditions (i-iii) hold true for index $j$.
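The $\mathtt{plcp}$ encoding itself is easy to sanity-check brute-force (a sketch with our own helper names; \emph{select} is a plain scan here, not the constant-time structure):

```python
def plcp_bitvector(T):
    # Brute-force PLCP (text positions i = 1..n) and its encoding:
    # one set bit at position PLCP[i] + 2i of a length-2n bitvector.
    t = T + '#'
    n = len(t)
    sa = sorted(range(n), key=lambda k: t[k:])
    isa = [0] * n
    for r, k in enumerate(sa):
        isa[k] = r

    def lcp_len(a, b):
        l = 0
        while l < min(len(a), len(b)) and a[l] == b[l]:
            l += 1
        return l

    lcp = [0] + [lcp_len(t[sa[r - 1]:], t[sa[r]:]) for r in range(1, n)]
    plcp = [lcp[isa[k]] for k in range(n)]   # PLCP[i] = LCP[ISA[i]]
    bits = [0] * (2 * n + 1)
    for i in range(1, n + 1):
        bits[plcp[i - 1] + 2 * i] = 1        # positions distinct: PLCP is almost increasing
    return plcp, bits

def plcp_access(bits, i):
    # PLCP[i] = select_1(bits, i) - 2i  (position of the i-th set bit)
    seen = 0
    for j, b in enumerate(bits):
        seen += b
        if seen == i:
            return j - 2 * i
```

For $T = \mathtt{abab}$, PLCP $= [2,1,0,0,0]$, and every entry is recovered correctly by $\mathtt{plcp\_access}$.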
On the other hand, if (i) $j$ falls in the current block's range $[(i-1)\cdot B+1,i\cdot B]$, (ii) $LCP[j] \leq \log^3 n$, and (iii) $LCP[j]$ is irreducible, then we can store $LCP[j]$ in another temporary vector $\mathtt{SMALL\_LCP[1,B]}$ as follows: $\mathtt{SMALL\_LCP}[j-(i-1)\cdot B] \leftarrow LCP[j]$ (at the beginning, the vector is initialized with undefined values). By condition (ii), $\mathtt{SMALL\_LCP}$ can be stored in $O(B\log\log n)$ bits. Using $\mathtt{LARGE\_LCP}$ and $\mathtt{SMALL\_LCP}$, we can access in constant time all irreducible values $LCP[j]$ whenever $j$ falls in the current block $[(i-1)\cdot B+1,i\cdot B]$. At this point, we enumerate all pairs $(p,ISA[p])$ in text order (i.e. for $p=1, \dots, n$) using the FL function on the BWT. Whenever one of those pairs $(p,ISA[p]) = (p,j)$ is such that (a) $j$ falls in the current block's range $[(i-1)\cdot B+1,i\cdot B]$ and (b) $LCP[j]$ is irreducible, we retrieve $LCP[j]$ in constant time as seen above and we set $\mathtt{plcp[2p+LCP[j]]} \leftarrow 1$; the correctness of this assignment follows from the fact that $j=ISA[p]$, thus $LCP[j] = PLCP[p]$. Using Lemma \ref{lem:reducible}, we can moreover compute the reducible PLCP values that follow $PLCP[p]$ in text order (up to the next irreducible value), and set the corresponding bits in $\mathtt{plcp}$. After repeating the above procedure for all blocks $BWT[(i-1)\cdot B+1,i\cdot B]$, $i=1, \dots, \lceil n/B \rceil$, we terminate the computation of bitvector $\mathtt{plcp}$. For each block, we spend $O(n\log\sigma)$ time (one application of Lemma \ref{thm:LCP collection} and one BWT navigation to generate all pairs $(p,ISA[p])$). We also spend $O(n/\log^2 n)$ time to allocate the instances of $\mathtt{LARGE\_LCP}$ across all blocks. Overall, we spend $O((n^2/B)\log\sigma + n\log\sigma)$ time across all blocks. The space used is $o(n) + O(B\cdot \log\log n)$ bits on top of the BWT.
By setting $B = (\epsilon\cdot n\log\sigma)/\log\log n$ we obtain our result: \begin{lemma}\label{lem:PLCP} Given a wavelet tree for the Burrows-Wheeler transform of a text $T$ of length $n$ on alphabet $[1,\sigma]$, for any parameter $0<\epsilon \leq 1$ we can build the PLCP bitvector in $O(n(\log\sigma + \epsilon^{-1}\log\log n))$ time and $\epsilon \cdot n\log\sigma + o(n)$ bits of working space on top of the input BWT and the output. \end{lemma} \section{Building the Suffix Tree Topology}\label{sec:ST topology} In order to build the suffix tree topology we use a strategy analogous to the one proposed by Belazzougui~\cite{Belazzougui:2014:LTC:2591796.2591885}. The main observation is that, given a procedure that enumerates suffix tree intervals, for each interval $[l,r]$ we can increment a counter $\mathtt{Open[l]}$ and a counter $\mathtt{Close[r]}$, where $\mathtt{Open}$ and $\mathtt{Close}$ are integer vectors of length $n$. Then, the BPS representation of the suffix tree topology can be built by scanning the two arrays left to right and, for each $i=1, \dots, n$, appending $\mathtt{Open[i]}$ open parentheses followed by $\mathtt{Close[i]}$ close parentheses to the BPS representation. The main drawback of this solution is that it takes too much space: $2n\log n$ bits to store the two arrays. Belazzougui solves this problem by noticing that the sum of all the values in the two arrays is the length of the final BPS representation, that is, at most $4n$. This makes it possible to represent the arrays in just $O(n)$ bits of space by storing (the few) large counters in plain form and (the many) small counters using delta encoding (while still supporting updates in constant time). Our goal in this section is to reduce the working space from $O(n)$ to a (small) fraction of $n\log \sigma$. A first idea could be to iterate Belazzougui's strategy on chunks of the interval $[1,n]$.
Unfortunately, this does not immediately give the correct solution as a chunk could still account for up to $\Theta(n)$ parentheses, no matter what the length of the chunk is; as a result, Belazzougui's representation could still take $O(n)$ bits of space for a chunk (when using large enough chunks to keep the running time under control, as seen in the previous section). We use a solution analogous to the one discussed in the previous section. This solution corresponds to the first part of Belazzougui's strategy (in particular, we will store small counters in plain form instead of using delta encoding). We divide $BWT[1,n]$ into $\lceil n/B \rceil$ blocks $BWT[(i-1)\cdot B+1,i\cdot B]$, $i=1, \dots, \lceil n/B \rceil$ of size $B$ (assume for simplicity that $B$ divides $n$). For each block $i=1, \dots, \lceil n/B \rceil$, we use Lemma \ref{lem:ST intervals} to enumerate all suffix tree intervals $[l,r]$. We keep two arrays $\mathtt{Open[1,B]}$ and $\mathtt{Close[1,B]}$ storing integers of $2\log\log n$ bits each. Whenever the beginning $l$ of a suffix tree interval $[l,r]$ falls inside the current block $[(i-1)\cdot B+1,i\cdot B]$, we increment $\mathtt{Open[l- (i-1)\cdot B]}$ (the description is analogous for index $r$ and array $\mathtt{Close}$). If $\mathtt{Open[l- (i-1)\cdot B]}$ reaches the maximum value $2^{2\log\log n}-1$, we no longer increment it. Adopting Belazzougui's terminology, we call such a counter ``saturated''. After having generated all suffix tree intervals, let $k$ be the number of saturated counters. We allocate a vector $\mathtt{LARGE\_COUNTERS}$ storing $k$ integers of $\log n + 2$ bits each (enough to store the value $4n$, i.e. an upper bound to the value that a counter can reach). We also allocate a bitvector of length $B$ marking saturated counters, and process it to support constant-time rank queries. This allows us to obtain in constant time the location in $\mathtt{LARGE\_COUNTERS}$ corresponding to any saturated counter in the block.
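The counting scheme, including the second exact-counting pass over the intervals that is described next, can be sketched as follows (a toy version of ours; the real counters use $2\log\log n$ bits, here an arbitrary small cap):

```python
def bps_via_saturating_counters(intervals, counter_bits=4):
    # Two-pass BPS construction from suffix tree intervals [l, r]:
    # small saturating counters first, exact big counters for saturated slots.
    n = 1 + max(r for _, r in intervals)
    cap = 2 ** counter_bits - 1
    open_small = [0] * n
    close_small = [0] * n
    for l, r in intervals:                  # pass 1: saturating counts
        open_small[l] = min(open_small[l] + 1, cap)
        close_small[r] = min(close_small[r] + 1, cap)
    big_open = {i: 0 for i, v in enumerate(open_small) if v == cap}
    big_close = {i: 0 for i, v in enumerate(close_small) if v == cap}
    for l, r in intervals:                  # pass 2: exact counts where saturated
        if l in big_open:
            big_open[l] += 1
        if r in big_close:
            big_close[r] += 1
    bps = []
    for i in range(n):                      # emit opens, then closes, per position
        bps.append('(' * big_open.get(i, open_small[i]))
        bps.append(')' * big_close.get(i, close_small[i]))
    return ''.join(bps)
```

On the three intervals $(0,4)$, $(1,2)$, $(3,4)$ of a toy suffix tree this yields the balanced sequence $\mathtt{(()())}$.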
We generate all suffix tree intervals for a second time using again Lemma \ref{lem:ST intervals}, this time incrementing (in $\mathtt{LARGE\_COUNTERS}$) only locations corresponding to saturated counters. Since the BPS sequence has length at most $4n$ and a counter saturates when it reaches value $\Theta(\log^2 n)$, we have that $k = O(n/\log^2n)$ and thus $\mathtt{LARGE\_COUNTERS}$ takes at most $O(n/\log n) = o(n)$ bits to be stored. The rest of the analysis is identical to that of the algorithm described in the previous section. For each block, we spend $O(n\log\sigma)$ time (two applications of Lemma \ref{lem:ST intervals}). We also spend $O(n/\log^2 n)$ time to allocate the instances of $\mathtt{LARGE\_COUNTERS}$ across all blocks. Overall, we spend $O((n^2/B)\log\sigma + n\log\sigma)$ time across all blocks. The space used is $o(n) + O(B\cdot \log\log n)$ bits on top of the BWT. By setting $B = (\epsilon\cdot n\log\sigma)/\log\log n$ we obtain: \begin{lemma}\label{lem:BPS} Given a wavelet tree for the Burrows-Wheeler transform of a text $T$ of length $n$ on alphabet $[1,\sigma]$, for any parameter $0<\epsilon \leq 1$ we can build the BPS representation of the suffix tree topology in $O(n(\log\sigma + \epsilon^{-1}\log\log n))$ time and $\epsilon \cdot n\log\sigma + o(n)$ bits of working space on top of the input BWT and the output. \end{lemma} To conclude, we note that our procedures can be immediately used to build space-efficiently the compressed suffix tree described by Sadakane~\cite{Sadakane:2007:CST:1326296.1326297} starting from the BWT. The only missing ingredients are (i) to augment the BWT with a suffix array sample in order to turn it into a CSA, and (ii) to pre-process the PLCP and BPS sequences to support fast queries (\emph{select} on the PLCP and navigational queries on the BPS).
Step (i) can be easily performed in $O(n\log\sigma)$ time and $n+o(n)$ bits of working space with a folklore solution that iteratively applies function LF to navigate all BWT's positions and collect one suffix array sample every $O(\log^{1+\delta} n/\log\sigma)$ text positions, for any fixed $\delta>0$ (using a succinct bitvector to mark sampled positions). The resulting CSA takes $n\log\sigma + o(n\log\sigma)$ bits of space and allows computing any $SA[i]$ in $O(\log^{1+\delta} n)$ time. Step (ii) can be performed in $O(n)$ time and $o(n)$ bits of working space using textbook solutions (see \cite{Navarro:2016:CDS:3092586}). Combining this with Lemmas \ref{thm:BWT->WT}, \ref{lem:PLCP}, and \ref{lem:BPS}, we obtain: \begin{theorem}\label{th:CST} Given the word-packed BWT of a text $T$ of length $n$ on alphabet $[1,\sigma]$, for any parameter $0<\epsilon \leq 1$ we can replace it in $O(n(\log\sigma + \epsilon^{-1}\log\log n))$ time and $\epsilon \cdot n\log\sigma + o(n)$ bits of working space with a compressed suffix tree taking $n\log\sigma + 6n + o(n\log\sigma)$ bits of space and supporting all operations in $O(\mathrm{polylog}\ n)$ time. \end{theorem} \section{Merging BWTs in Small Space}\label{sec:algo2} In this section we use our space-efficient BWT-navigation strategies to tackle an additional problem: to merge the BWTs of two string collections. In~\cite{Belazzougui:2014:LTC:2591796.2591885,belazzougui2016linear}, Belazzougui et al. show that Theorem \ref{th:Belazzougui} can be adapted to merge the BWTs of two texts $T_1, T_2$ and obtain the BWT of the collection $\{T_1, T_2\}$ in $O(nk)$ time and $n\log\sigma(1+1/k) + 11n + o(n)$ bits of working space for any $k \geq 1$~\cite[Thm. 7]{belazzougui2016linear}. We show that our strategy enables a more space-efficient algorithm for the task of merging BWTs of collections. The following theorem, whose proof is reported later in this section, merges two BWTs by computing the binary DA of their union.
After that, the merged BWT can be streamed to external memory (the DA tells how to interleave characters from the input BWTs) and does not take additional space in internal memory. Similarly to what we did in the proof of Theorem \ref{thm:LCP collection succinct}, this time we re-use the space of the Document Array to accommodate the extra $n$ bits needed to replace the BWTs of the two collections with their wavelet matrices. This is the main result of this section: \begin{theorem}\label{th:merge} Given the Burrows-Wheeler Transforms of two collections $\mathcal{S}\xspace_1$ and $\mathcal{S}\xspace_2$ of total length $n$ on alphabet $[1,\sigma]$, we can compute the Document Array of $\mathcal{S}\xspace_1 \cup \mathcal{S}\xspace_2$ in $O(n\log\sigma)$ time using $o(n\log\sigma)$ bits of working space on top of the input BWTs and the output DA. \end{theorem} We also briefly discuss how to extend Theorem \ref{th:merge} to build the LCP array of the merged collection. In Section \ref{sec:experiments} we present an implementation of our algorithms and an experimental comparison with \texttt{eGap}\xspace~\cite{egidi2017lightweight}, the state-of-the-art tool designed for the same task of merging BWTs while inducing the LCP of their union. The procedure of Algorithm \ref{alg:fill leaves} can be extended to merge BWTs of two collections $\mathcal{S}\xspace_1$, $\mathcal{S}\xspace_2$ using $o(n\log\sigma)$ bits of working space on top of the input BWTs and output Document Array (here, $n$ is the cumulative length of the two BWTs). The idea is to simulate a navigation of the \emph{leaves} of the generalized suffix tree of $\mathcal{S}\xspace_1 \cup \mathcal{S}\xspace_2$ (note: for us, a collection is an ordered multi-set of strings). Our procedure differs from that described in~\cite[Thm. 7]{belazzougui2016linear} in two ways. First, they navigate a subset of the suffix tree \emph{nodes} (so-called \emph{impure} nodes, i.e. 
the roots of subtrees containing suffixes from distinct strings), whereas we navigate leaves. Second, their visit is implemented by following Weiner links. This forces them to represent the nodes with the ``heavy'' representation $\mathtt{repr}$ of Section \ref{sec:belazzougui}, which is not efficient on large alphabets. On the contrary, leaves can be represented simply as ranges and allow for a more space-efficient queue/stack representation. We represent each leaf by a pair of intervals, respectively on $BWT(\mathcal{S}\xspace_1)$ and $BWT(\mathcal{S}\xspace_2)$, of strings of the form $W\#$. Note that: (i) the suffix array of $\mathcal{S}\xspace_1 \cup \mathcal{S}\xspace_2$ is covered by the non-overlapping intervals of strings of the form $W\#$, and (ii) for each such string $W\#$, the interval $\mathtt{range(W\#)} = \langle L,R \rangle$ in $GSA(\mathcal{S}\xspace_1 \cup \mathcal{S}\xspace_2)$ can be partitioned as $\langle L, M \rangle \cdot \langle M+1, R\rangle$, where $\langle L,M\rangle$ contains only suffixes from $\mathcal{S}\xspace_1$ and $\langle M+1,R \rangle$ contains only suffixes from $\mathcal{S}\xspace_2$ (one of these two intervals could be empty). It follows that we can navigate in parallel the leaves of the suffix trees of $\mathcal{S}\xspace_1$ and $\mathcal{S}\xspace_2$ (using again a stack or a queue containing pairs of intervals on the two BWTs), and fill the Document Array $DA[1,n]$, an array that will tell us whether the $i$-th entry of $BWT(\mathcal{S}\xspace_1 \cup \mathcal{S}\xspace_2)$ comes from $BWT(\mathcal{S}\xspace_1)$ ($DA[i] = 0$) or $BWT(\mathcal{S}\xspace_2)$ ($DA[i] = 1$). To do this, let $\langle L_1, R_1\rangle$ and $\langle L_2, R_2\rangle$ be the ranges on the suffix arrays of $\mathcal{S}\xspace_1$ and $\mathcal{S}\xspace_2$, respectively, of a suffix $W\#$ of some string in the collections. Note that one of the two intervals could be empty: $R_j<L_j$. 
In this case, we still require that $L_j-1$ is the number of suffixes in $\mathcal{S}\xspace_j$ that are smaller than $W\#$. Then, in the collection $\mathcal{S}\xspace_1 \cup \mathcal{S}\xspace_2$ there are $L_1 + L_2 - 2$ suffixes smaller than $W\#$, and $R_1 + R_2$ suffixes smaller than or equal to $W\#$. It follows that the range of $W\#$ in the suffix array of $\mathcal{S}\xspace_1 \cup \mathcal{S}\xspace_2$ is $\langle L_1+L_2-1, R_1+R_2\rangle$, where the first $R_1-L_1+1$ entries correspond to suffixes of strings from $\mathcal{S}\xspace_1$. Then, we set $DA[L_1+L_2-1, L_2 + R_1-1] \leftarrow 0$ and $DA[L_2 + R_1,R_1+R_2] \leftarrow 1$. The procedure starts from the pair of intervals corresponding to the ranges of the string ``$\#$'' in the two BWTs, and proceeds recursively by left-extending the current pair of ranges $\langle L_1, R_1\rangle$, $\langle L_2, R_2\rangle$ with the symbols in $\mathtt{BWT_1.rangeDistinct(L_1,R_1)} \cup \mathtt{BWT_2.rangeDistinct(L_2,R_2)}$. The detailed procedure is reported in Algorithm \ref{alg:merge}. The leaf visit is implemented, again, using a stack or a queue; this time however, these containers are filled with pairs of intervals $\langle L_1, R_1\rangle$, $\langle L_2, R_2\rangle$. We implement the stack simply as a vector of quadruples $\langle L_1, R_1, L_2, R_2\rangle$. As far as the queue is concerned, some care needs to be taken when representing the pairs of ranges using bitvectors as seen in Section \ref{sec:beller} with Beller et al.'s representation. Recall that, at any time, the queue can be partitioned in two sub-sequences associated with LCP values $\ell$ and $\ell+1$ (we pop from the former, and push in the latter). This time, we represent each of these two subsequences as a vector of quadruples (pairs of ranges on the two BWTs) as long as the number of quadruples in the sequence does not exceed $n/\log n$. 
When there are more quadruples than this threshold, we switch to a bitvector representation defined as follows. Let $|BWT(\mathcal{S}\xspace_1)|=n_1$, $|BWT(\mathcal{S}\xspace_2)|=n_2$, and $|BWT(\mathcal{S}\xspace_1\cup \mathcal{S}\xspace_2)| = n = n_1+n_2$. We keep two bitvectors $\mathtt{Open[1,n]}$ and $\mathtt{Close[1,n]}$ storing opening and closing parentheses of intervals in $BWT(\mathcal{S}\xspace_1\cup \mathcal{S}\xspace_2)$. We moreover keep two bitvectors $\mathtt{NonEmpty_1[1,n]}$ and $\mathtt{NonEmpty_2[1,n]}$ keeping track, for each $i$ such that $\mathtt{Open[i]=1}$, of whether the interval starting in $BWT(\mathcal{S}\xspace_1\cup \mathcal{S}\xspace_2)[i]$ contains suffixes of reads coming from $\mathcal{S}\xspace_1$ and $\mathcal{S}\xspace_2$, respectively. Finally, we keep four bitvectors $\mathtt{Open_j[1,n_j]}$ and $\mathtt{Close_j[1,n_j]}$, for $j=1,2$, storing non-empty intervals on $BWT(\mathcal{S}\xspace_1)$ and $BWT(\mathcal{S}\xspace_2)$, respectively. To insert a pair of intervals $\langle L_1, R_1\rangle,\ \langle L_2, R_2\rangle$ in the queue, let $\langle L,R \rangle = \langle L_1+L_2-1, R_1+R_2\rangle$. We set $\mathtt{Open[L]} \leftarrow 1$ and $\mathtt{Close[R]} \leftarrow 1$. Then, for $j=1,2$, we set $\mathtt{NonEmpty_j[L]} \leftarrow 1$, $\mathtt{Open_j[L_j]} \leftarrow 1$ and $\mathtt{Close_j[R_j]} \leftarrow 1$ if and only if $R_j\geq L_j$. This queue representation takes $O(n)$ bits. By construction, for each bit set in $\mathtt{Open}$ at position $i$, there is a corresponding bit set in $\mathtt{Open_j}$ if and only if $\mathtt{NonEmpty_j[i]} = 1$ (moreover, corresponding bits set appear in the same order in $\mathtt{Open}$ and $\mathtt{Open_j}$). It follows that a left-to-right scan of these bitvectors is sufficient to identify corresponding intervals on $BWT(\mathcal{S}\xspace_1\cup \mathcal{S}\xspace_2)$, $BWT(\mathcal{S}\xspace_1)$, and $BWT(\mathcal{S}\xspace_2)$. 
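A possible sketch of the push operation on this bitvector representation is the following (a simplification of our own: \texttt{std::vector<bool>} in place of packed bitvectors, 1-based positions, and no rank/select support):

```cpp
#include <vector>

// Sketch of the queue's bitvector representation: pushing the pair of
// ranges <L1,R1>, <L2,R2> marks the merged range on Open/Close and, only
// for non-empty input ranges, the per-BWT bitvectors and NonEmpty flags.
struct PairQueueBits {
    std::vector<bool> Open, Close, NonEmpty1, NonEmpty2; // length n = n1+n2
    std::vector<bool> Open1, Close1;                     // length n1
    std::vector<bool> Open2, Close2;                     // length n2
    PairQueueBits(int n1, int n2)
        : Open(n1 + n2 + 1), Close(n1 + n2 + 1),
          NonEmpty1(n1 + n2 + 1), NonEmpty2(n1 + n2 + 1),
          Open1(n1 + 1), Close1(n1 + 1), Open2(n2 + 1), Close2(n2 + 1) {}
    void push(int L1, int R1, int L2, int R2) {
        int L = L1 + L2 - 1, R = R1 + R2; // range on the BWT of the union
        Open[L] = true; Close[R] = true;
        if (R1 >= L1) { NonEmpty1[L] = true; Open1[L1] = true; Close1[R1] = true; }
        if (R2 >= L2) { NonEmpty2[L] = true; Open2[L2] = true; Close2[R2] = true; }
    }
};
```

Note that an empty range $\langle L_j, L_j-1\rangle$ leaves $\mathtt{NonEmpty_j}$ unset, which is exactly the information exploited during the left-to-right scan to recover the missing coordinate.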
By packing the bits of the bitvectors in words of $\Theta(\log n)$ bits, the $t$ pairs of intervals contained in the queue can be extracted in $O(t+ n/\log n)$ time (as described in~\cite{beller2013computing}) by scanning in parallel the bitvectors forming the queue. Particular care needs to be taken only when we find the beginning of an interval $\mathtt{Open[L]=1}$ with $\mathtt{NonEmpty_1[L]} = 0$ (the case $\mathtt{NonEmpty_2[L]} = 0$ is symmetric). Let $L_2$ be the beginning of the corresponding non-empty interval on $BWT(\mathcal{S}\xspace_2)$. Even though we are not storing $L_1$ (because we only store nonempty intervals), we can retrieve this value as $L_1=L-L_2+1$. Then, the empty interval on $BWT(\mathcal{S}\xspace_1)$ is $\langle L_1, L_1-1\rangle$. The same arguments used in the previous section show that the algorithm runs in $O(n\log\sigma)$ time and uses $o(n\log\sigma)$ bits of space on top of the input BWTs and output Document Array. This proves Theorem \ref{th:merge}. To conclude, we note that the algorithm can be easily extended to compute the LCP array of the merged collection while merging the BWTs. This requires adapting Algorithm \ref{alg:fill nodes} to work on pairs of suffix tree nodes (as we did in Algorithm \ref{alg:merge} with pairs of leaves). Results on an implementation of the extended algorithm are discussed in the next section. From the practical point of view, note that it is more advantageous to induce the LCP of the merged collection while merging the BWTs (rather than first merging and then inducing the LCP using the algorithm of the previous section), since leaf-type LCP values can be induced directly while computing the document array. 
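As a sanity check of the interval arithmetic described above, the step that fills the Document Array for one pair of ranges can be sketched as follows (1-based positions; an empty range is $\langle L_j, L_j-1\rangle$; the function name is ours):

```cpp
#include <vector>

// Given the ranges <L1,R1> and <L2,R2> of a string W# in the two suffix
// arrays, mark the entries of the Document Array of the union: the merged
// range is <L1+L2-1, R1+R2>, and its first R1-L1+1 entries come from S1.
void fill_DA(std::vector<int>& DA, int L1, int R1, int L2, int R2) {
    for (int i = L1 + L2 - 1; i <= L2 + R1 - 1; ++i) DA[i] = 0; // suffixes from S1
    for (int i = L2 + R1; i <= R1 + R2; ++i) DA[i] = 1;         // suffixes from S2
}
```

Both loops degenerate correctly when one of the input ranges is empty: with $R_1 = L_1 - 1$, for instance, the first loop body never executes and the second covers the whole merged range.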
\begin{algorithm} \begin{algorithmic}[1] \If{$\sigma > n/\log^3n$} \State $\mathtt P \leftarrow \mathtt{new\_queue()}$\Comment{Initialize new queue of interval pairs}\label{new queue3} \Else \State $\mathtt P \leftarrow \mathtt{new\_stack()}$\Comment{Initialize new stack of interval pairs}\label{new stack3} \EndIf \State $\mathtt P\mathtt{.push(BWT_1.range(\#),BWT_2.range(\#))}$\Comment{Push SA-ranges of terminator}\label{push5} \While{$\mathtt{\mathbf{not}\ P.empty()}$}\label{while3} \State $\langle L_1,R_1, L_2, R_2 \rangle \leftarrow \mathtt{P.pop()}$\Comment{Pop highest-priority element}\label{pop3} \For{$i=L_1+L_2-1\dots L_2+R_1-1$} \State $\mathtt{DA}[i] \leftarrow 0$\Comment{Suffixes from $\mathcal{S}\xspace_1$}\label{DA1} \EndFor \For{$i=L_2+R_1\dots R_1+R_2$} \State $\mathtt{DA}[i] \leftarrow 1$\Comment{Suffixes from $\mathcal{S}\xspace_2$}\label{DA2} \EndFor \If{$\sigma > n/\log^3n$} \State$\mathtt{P.push(getIntervals(L_1, R_1, L_2, R_2, BWT_1, BWT_2))}$\Comment{New intervals}\label{push8} \Else \State $c_1^1, \dots, c_{k_1}^1 \leftarrow \mathtt{BWT_1.rangeDistinct(L_1,R_1)}$\label{range distinct3} \State $c_1^2, \dots, c_{k_2}^2 \leftarrow \mathtt{BWT_2.rangeDistinct(L_2,R_2)}$\label{range distinct4} \State $\{c_1\dots c_k\} \leftarrow \{c_1^1, \dots, c_{k_1}^1\} \cup \{c_1^2, \dots, c_{k_2}^2\}$\label{range distinct5} \For{$i=1\dots k$} \State $\langle L_1^i, R_1^i\rangle \leftarrow \mathtt{BWT_1.bwsearch(\langle L_1, R_1\rangle, c_i)}$\Comment{Backward search step}\label{BWS2} \EndFor \For{$i=1\dots k$} \State $\langle L_2^i, R_2^i\rangle \leftarrow \mathtt{BWT_2.bwsearch(\langle L_2, R_2\rangle, c_i)}$\Comment{Backward search step}\label{BWS3} \EndFor \State $\langle \hat L_1^i, \hat R_1^i, \hat L_2^i, \hat R_2^i \rangle_{i=1, \dots, k} \leftarrow \mathtt{sort}(\langle L_1^i, R_1^i, L_2^i, R_2^i \rangle_{i=1, \dots, k})$\label{sort3} \For{$i=k\dots 1$} \State $\mathtt{P.push}(\hat L_1^i, \hat R_1^i, \hat L_2^i, \hat R_2^i)$\Comment{Push in order of
decreasing length}\label{push6} \EndFor \EndIf \EndWhile \caption{$\mathtt{Merge(BWT_1,BWT_2, DA)}$}\label{alg:merge} \end{algorithmic} \end{algorithm} Note that Algorithm \ref{alg:merge} is similar to Algorithm \ref{alg:fill leaves}, except that now we manipulate pairs of intervals. In Line \ref{sort3}, we sort quadruples according to the length $R_1^i + R_2^i - (L_1^i + L_2^i) +2$ of the combined interval on $BWT(\mathcal{S}\xspace_1\cup \mathcal{S}\xspace_2)$. Finally, note that backward search can be performed correctly also when the input interval is empty: $\mathtt{BWT_j.bwsearch(\langle L_j, L_j-1 \rangle, c)}$, where $L_j-1$ is the number of suffixes in $\mathcal{S}\xspace_j$ smaller than some string $W$, correctly returns the pair $\langle L', R'\rangle$ such that $L'-1$ is the number of suffixes in $\mathcal{S}\xspace_j$ smaller than $cW$: this is true when implementing backward search with a $rank_c$ operation on position $L_j$; then, if the original interval is empty we just set $R'=L'-1$ to keep the invariant that $R'-L'+1$ is the interval's length. \section{Implementation and Experimental Evaluation}\label{sec:experiments} We implemented our LCP construction and BWT merge algorithms on the DNA alphabet in C++; the code is available at \url{https://github.com/nicolaprezza/bwt2lcp}. Due to the small alphabet size, it was actually sufficient to implement our extension of Belazzougui's enumeration algorithm (and not the strategy of Beller et al., which becomes competitive only on large alphabets). The repository features a new packed string on the DNA alphabet $\Sigma_{DNA}=\{A,C,G,T,\#\}$ using 4 bits per character and able to compute the quintuple $\langle BWT.rank_c(i) \rangle_{c\in \Sigma_{DNA}}$ with just one cache miss. This is crucial for our algorithms, since at each step we need to left-extend ranges by all characters. This structure divides the text into blocks of 128 characters.
Each block is stored using 512 cache-aligned bits (the typical size of a cache line), divided as follows. The first 128 bits store four 32-bit counters with the partial ranks of A, C, G, and T before the block (if the string is longer than $2^{32}$ characters, we further break it into superblocks of $2^{32}$ characters; on reasonably-large inputs, the extra rank table fits in cache and does not cause additional cache misses). The following three blocks of 128 bits store the first, second, and third bits, respectively, of the characters' binary encodings (each character is packed in 3 bits). Using this layout, the rank of each character in the block can be computed with at most three masks, a bitwise AND (actually fewer, since we always compute the rank of all five characters and we re-use partial results whenever possible), and a \texttt{popcount} operation. We also implemented a packed string on the augmented alphabet $\Sigma_{DNA}^+=\{A,C,G,N,T,\#\}$ using $4.38$ bits per character and offering the same cache-efficiency guarantees. In this case, a 512-bit block stores 117 characters, packed as follows. As seen above, the first 128 bits store four 32-bit counters with the partial ranks of A, C, G, and T before the block. Each of the following three blocks of 128 bits is divided into a first part of 117 bits and a second part of 11 bits. The first parts store the first, second, and third bits, respectively, of the characters' binary encodings. The three parts of 11 bits, concatenated together, store the rank of N's before the block. This layout minimizes the number of bitwise operations (in particular, shifts and masks) needed to compute a parallel rank. Several heuristics have been implemented to reduce the number of cache misses in practice.
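To illustrate the bit-parallel rank on this kind of layout, here is a minimal model of our own (one 64-character block, three 64-bit planes, no rank counters; the repository's actual blocks hold 128/117 characters plus cache-aligned counters, so this is only a sketch of the principle):

```cpp
#include <cstdint>

// Characters are 3-bit codes spread over three bit-planes (plane k holds
// the k-th bit of every code). The rank of a code c among the first k
// characters takes three mask selections, bitwise ANDs and one popcount.
// Caveat: unset positions decode as code 0, so only query non-zero codes
// on partially filled blocks.
struct Block64 {
    uint64_t plane0 = 0, plane1 = 0, plane2 = 0;
    void set(int i, uint8_t c) { // store code c at position i (0-based)
        plane0 |= uint64_t((c >> 0) & 1) << i;
        plane1 |= uint64_t((c >> 1) & 1) << i;
        plane2 |= uint64_t((c >> 2) & 1) << i;
    }
    int rank(uint8_t c, int k) const { // occurrences of c in positions [0,k)
        uint64_t m0 = (c & 1) ? plane0 : ~plane0;
        uint64_t m1 = (c & 2) ? plane1 : ~plane1;
        uint64_t m2 = (c & 4) ? plane2 : ~plane2;
        uint64_t prefix = (k >= 64) ? ~0ULL : ((1ULL << k) - 1);
        return __builtin_popcountll(m0 & m1 & m2 & prefix); // GCC/Clang builtin
    }
};
```

Computing all ranks at once, as done in the actual structure, lets the masks $m_0, m_1, m_2$ be shared across the five characters.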
In particular, we note that in Algorithm \ref{alg:fill leaves} we can avoid backtracking when the range size becomes equal to one; the same optimization can be implemented in Algorithm \ref{alg:merge} when also computing the LCP array, since leaves of size one can be identified during navigation of internal suffix tree nodes. Overall, we observed (using a memory profiler) that in practice the combination of Algorithms \ref{alg:fill nodes}-\ref{alg:fill leaves} generates at most $1.5n$ cache misses, $n$ being the total collection's size. The extension of Algorithm \ref{alg:merge} that computes also LCP values generates twice this number of cache misses (this is expected, since the algorithm navigates two BWTs). We now report some preliminary experiments on our algorithms: \texttt{bwt2lcp}\xspace (Algorithms \ref{alg:fill nodes}-\ref{alg:fill leaves}) and \texttt{merge}\xspace (Algorithm \ref{alg:merge}, extended to compute also the LCP array). All tests were done on a DELL PowerEdge R630 machine, used in non-exclusive mode. Our platform is a $24$-core machine with Intel(R) Xeon(R) CPU E5-2620 v3 at $2.40$ GHz, with $128$ GiB of shared memory and 1TB of SSD. The system is Ubuntu 14.04.2 LTS. The code was compiled using gcc 8.1.0 with flags \texttt{-Ofast} \texttt{-fstrict-aliasing}. \begin{table}[t] \centering {\scriptsize \begin{tabular}{|@{\ }l@{\ }|@{\ }c@{\ }|@{\ }c@{\ }|@{\ }c@{\ }|@{\ }c@{\ }|c@{\ }|} \hline Name & Size & $\sigma$ & N. of & Max read & Bytes for\\ & GiB & & reads & length & LCP values\\ \hline NA12891.8 & 8.16 & 5 & 85,899,345 & 100 & 1 \\ \hline shortreads & 8.0 & 6 & 85,899,345 & 100 & 1 \\ \hline pacbio & 8.0 & 6 & 942,248 & 71,561 & 4 \\ \hline pacbio.1000 & 8.0 & 6 & 8,589,934 & 1000 & 2\\ \hline NA12891.24 & 23.75 & 6 & 250,000,000 & 100 & 1 \\ \hline NA12878.24 & 23.75 & 6 & 250,000,000 & 100 & 1 \\ \hline \end{tabular} } \caption{Datasets used in our experiments. Size accounts only for the alphabet's characters.
The alphabet's size $\sigma$ includes the terminator.} \label{tableDataset} \end{table} \begin{table}[t] \centering {\scriptsize \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{Preprocessing} & \multicolumn{2}{c|}{\texttt{eGap}\xspace} & \multicolumn{2}{c|}{\texttt{merge}\xspace} \\ \hline Name & Wall Clock & RAM & Wall Clock & RAM & Wall Clock & RAM \\ & (h:mm:ss) & (GiB) & (h:mm:ss) & (GiB) & (h:mm:ss) & (GiB) \\ \hline NA12891.8 & 1:15:57 & 2.84 & \multirow{2}{*}{10:15:07} & \multirow{2}{*}{18.09 (-m 32000)} & \multirow{2}{*}{3:16:40} & \multirow{2}{*}{26.52} \\ \cline{1-3} NA12891.8.RC & 1:17:55 & 2.84 & & & & \\ \hline shortreads & 1:14:51 & 2.84 & \multirow{2}{*}{11:03:10} & \multirow{2}{*}{16.24 (-m 29000)} & \multirow{2}{*}{3:36:21} & \multirow{2}{*}{26.75} \\ \cline{1-3} shortreads.RC & 1:19:30 & 2.84 & & & & \\ \hline pacbio.1000 & 2:08:56 & 31.28 & \multirow{2}{*}{5:03:01} & \multirow{2}{*}{21.23 (-m 45000)} & \multirow{2}{*}{4:03:07} & \multirow{2}{*}{42.75} \\ \cline{1-3} pacbio.1000.RC & 2:15:08 & 31.28 & & & & \\ \hline pacbio & 2:27:08 & 31.25 & \multirow{2}{*}{2:56:31} & \multirow{2}{*}{33.40 (-m 80000)} & \multirow{2}{*}{4:38:27} & \multirow{2}{*}{74.76} \\ \cline{1-3} pacbio.RC & 2:19:27 & 31.25 & & & & \\ \hline NA12878.24 & 4:24:27 & 7.69 & \multirow{2}{*}{31:12:28} & \multirow{2}{*}{47.50 (-m 84000)} & \multirow{2}{*}{6:41:35} & \multirow{2}{*}{73.48} \\ \cline{1-3} NA12891.24 & 4:02:42 & 7.69 & & & & \\ \hline \end{tabular} \caption{In this experiment, we merge pairs of BWTs and induce the LCP of their union using \texttt{eGap}\xspace and \texttt{merge}\xspace. We also show the resources used by the pre-processing step (building the BWTs) for comparison. Wall clock is the elapsed time from start to completion of the instance, while RAM (in GiB) is the peak Resident Set Size (RSS). All values were taken using the \texttt{/usr/bin/time} command.
During the preprocessing step on the collections pacBio.1000 and pacBio, the available memory in MB (parameter m) of \texttt{eGap}\xspace was set to 32000 MB. In the merge step this parameter was set to approximately the memory used by \texttt{merge}\xspace. \texttt{eGap}\xspace and \texttt{merge}\xspace take as input the same BWT file.} \label{tab:merge} } \end{table} Table \ref{tableDataset} summarizes the datasets used in our experiments. ``NA12891.8''\footnote{{\scriptsize \url{ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA12891/sequence_read/SRR622458_1.filt.fastq.gz}}} contains Human DNA reads on the alphabet $\Sigma_{DNA}$ where we have removed reads containing the nucleotide $N$. ``shortreads'' contains Human DNA short reads on the extended alphabet $\Sigma_{DNA}^+$. ``pacbio'' contains PacBio RS II reads from the species \emph{Triticum aestivum} (wheat). ``pacbio.1000'' are the strings from ``pacbio'' trimmed to length 1,000. All the above datasets except the first have been downloaded from \url{https://github.com/felipelouza/egap/tree/master/dataset}. To conclude, we added two collections, ``NA12891.24'' and ``NA12878.24'', obtained by taking the first $250,000,000$ reads from individuals NA12878\footnote{{\scriptsize \url{ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA12878/sequence_read/SRR622457_1.filt.fastq.gz}}} and NA12891. All datasets except ``NA12891.8'' are on the alphabet $\Sigma_{DNA}^+$. In Tables \ref{tab:merge} and \ref{tab:induce}, the suffix ``.RC'' added to a dataset's name indicates the reverse-complemented dataset. We compare our algorithms with \texttt{eGap}\xspace\footnote{{\scriptsize\url{https://github.com/felipelouza/egap}}} and BCR\footnote{{\scriptsize\url{https://github.com/giovannarosone/BCR_LCP_GSA}}}, two tools designed to build the BWT and LCP of a set of DNA reads.
Since no tools for inducing the LCP from the BWT of a set of strings are available in the literature, in Table \ref{tab:induce} we simply compare the resources used by \texttt{bwt2lcp}\xspace with the time and space requirements of \texttt{eGap}\xspace and BCR when building the BWT. In \cite{EgidiAMB2019}, experimental results show that BCR works better on short reads and collections with a large average LCP, while \texttt{eGap}\xspace works better when the datasets contain long reads and relatively small average LCP. For this reason, in the preprocessing step we have used BCR for the collections containing short reads and \texttt{eGap}\xspace for the other collections. \texttt{eGap}\xspace, in addition, is capable of merging two or more BWTs while inducing the LCP of their union. In this case, we can therefore directly compare the performance of \texttt{eGap}\xspace with our tool \texttt{merge}\xspace; results are reported in Table \ref{tab:merge}. Since the available RAM is greater than the size of the input, we have used the semi-external strategy of \texttt{eGap}\xspace. Notice that an entirely like-for-like comparison between our tools and \texttt{eGap}\xspace is not completely feasible, since \texttt{eGap}\xspace is a semi-external memory tool (our tools, instead, use internal memory only). While in our tables we report RAM usage only, it is worth noting that \texttt{eGap}\xspace uses a considerable amount of disk working space. For example, the tool uses $56$GiB of disk working space when run on an $8$GiB input (in general, the disk usage is $7n$ bytes).
\begin{table}[t] {\scriptsize \centering \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{Preprocessing} & \multicolumn{2}{c|}{\texttt{bwt2lcp}\xspace} \\ \hline Name & Wall Clock & RAM & Wall Clock & RAM \\ & (h:mm:ss) & (GiB) & (h:mm:ss) & (GiB) \\ \hline NA12891.8 $\cup$ NA12891.8.RC (BCR) & 2:43:02 & 5.67 & 1:40:01 & 24.48 \\ \hline shortread $\cup$ shortread.RC (BCR) & 2:47:07 & 5.67 & 2:14:41 & 24.75 \\ \hline pacbio.1000 $\cup$ pacbio.1000.RC (\texttt{eGap}\xspace -m 32000) & 7:07:46 & 31.28 & 1:54:56 & 40.75 \\ \hline pacbio $\cup$ pacbio.RC (\texttt{eGap}\xspace -m 80000) & 6:02:37 & 78.125 & 2:14:37 & 72.76 \\ \hline NA12878.24 $\cup$ NA12891.24 (BCR) & 8:26:34 & 16.63 & 6:41:35 & 73.48 \\ \hline \end{tabular} \caption{In this experiment, we induced the LCP array from the BWT of a collection (each collection is the union of two collections from Table \ref{tab:merge}). We also show pre-processing requirements (i.e. building the BWT) of the better-performing tool between BCR and \texttt{eGap}\xspace.} \label{tab:induce} } \end{table} Our tools exhibit a dataset-independent linear time complexity, whereas \texttt{eGap}\xspace's running time depends on the average LCP. Table \ref{tab:induce} shows that our tool \texttt{bwt2lcp}\xspace induces the LCP from the BWT faster than building the BWT itself. When 'N's are not present in the dataset, \texttt{bwt2lcp}\xspace processes data at a rate of $2.92$ megabases per second and uses $0.5$ Bytes per base in RAM in addition to the LCP. When 'N's are present, the throughput decreases to $2.12$ megabases per second and the tool uses $0.55$ Bytes per base in addition to the LCP. As shown in Table \ref{tab:merge}, our tool \texttt{merge}\xspace is from $1.25$ to $4.5$ times faster than \texttt{eGap}\xspace on inputs with large average LCP, but $1.6$ times slower when the average LCP is small (dataset ``pacbio'').
When 'N's are not present in the dataset, \texttt{merge}\xspace processes data at a rate of $1.48$ megabases per second and uses $0.625$ Bytes per base in addition to the LCP. When 'N's are present, the throughput ranges from $1.03$ to $1.32$ megabases per second and the tool uses $0.673$ Bytes per base in addition to the LCP. When only computing the merged BWT (results not shown here for space reasons), \texttt{merge}\xspace uses in total $0.625$/$0.673$ Bytes per base in RAM (without/with 'N's) and is about $1.2$ times faster than the version computing also the LCP. \end{document}
\begin{document} \title[GRADED ANNIHILATORS AND TIGHT CLOSURE]{GRADED ANNIHILATORS OF MODULES OVER THE FROBENIUS SKEW POLYNOMIAL RING, AND TIGHT CLOSURE} \author{RODNEY Y. SHARP} \address{Department of Pure Mathematics, University of Sheffield, Hicks Building, Sheffield S3 7RH, United Kingdom\\ {\it Fax number}: 0044-114-222-3769} \email{[email protected]} \thanks{The author was partially supported by the Engineering and Physical Sciences Research Council of the United Kingdom (Overseas Travel Grant Number EP/C538803/1).} \subjclass[2000]{Primary 13A35, 16S36, 13D45, 13E05, 13E10; Secondary 13H10} \date{\today} \keywords{Commutative Noetherian ring, prime characteristic, Frobenius homomorphism, tight closure, (weak) test element, (weak) parameter test element, skew polynomial ring; local cohomology; Cohen--Macaulay local ring.} \begin{abstract} This paper is concerned with the tight closure of an ideal ${\mathfrak{a}}$ in a commutative Noetherian local ring $R$ of prime characteristic $p$. Several authors, including R. Fedder, K.-i. Watanabe, K. E. Smith, N. Hara and F. Enescu, have used the natural Frobenius action on the top local cohomology module of such an $R$ to good effect in the study of tight closure, and this paper uses that device. The main part of the paper develops a theory of what are here called `special annihilator submodules' of a left module over the Frobenius skew polynomial ring associated to $R$; this theory is then applied in the later sections of the paper to the top local cohomology module of $R$ and used to show that, if $R$ is Cohen--Macaulay, then it must have a weak parameter test element, even if it is not excellent. \end{abstract} \maketitle \setcounter{section}{-1} \section{\sc Introduction} \label{in} Throughout the paper, $R$ will denote a commutative Noetherian ring of prime characteristic $p$. We shall always denote by $f:R\longrightarrow R$ the Frobenius homomorphism, for which $f(r) = r^p$ for all $r \in R$. 
Let ${\mathfrak{a}}$ be an ideal of $R$. The {\em $n$-th Frobenius power\/} ${\mathfrak{a}}^{[p^n]}$ of ${\mathfrak{a}}$ is the ideal of $R$ generated by all $p^n$-th powers of elements of ${\mathfrak{a}}$. We use $R^{\circ}$ to denote the complement in $R$ of the union of the minimal prime ideals of $R$. An element $r \in R$ belongs to the {\em tight closure ${\mathfrak{a}}^*$ of ${\mathfrak{a}}$\/} if and only if there exists $c \in R^{\circ}$ such that $cr^{p^n} \in {\mathfrak{a}}^{[p^n]}$ for all $n \gg 0$. We say that ${\mathfrak{a}}$ is {\em tightly closed\/} precisely when ${\mathfrak{a}}^* = {\mathfrak{a}}$. The theory of tight closure was invented by M. Hochster and C. Huneke \cite{HocHun90}, and many applications have been found for the theory: see \cite{Hunek96} and \cite{Hunek98}, for example. In the case when $R$ is local, several authors have used, as an aid to the study of tight closure, the natural Frobenius action on the top local cohomology module of $R$: see, for example, R. Fedder \cite{F87}, Fedder and K.-i. Watanabe \cite{FW87}, K. E. Smith \cite{S94}, N. Hara and Watanabe \cite{HW96} and F. Enescu \cite{Enesc03}. This device is employed in this paper. The natural Frobenius action provides the top local cohomology module of $R$ with a natural structure as a left module over the skew polynomial ring $R[x,f]$ associated to $R$ and $f$. Sections \ref{nt} and \ref{ga} develop a theory of what are here called `special annihilator submodules' of a left $R[x,f]$-module $H$. To explain this concept, we need the definition of the {\em graded annihilator\/} $\grann_{R[x,f]}H$ of $H$. Now $R[x,f]$ has a natural structure as a graded ring, and $\grann_{R[x,f]}H$ is defined to be the largest graded two-sided ideal of $R[x,f]$ that annihilates $H$. 
On the other hand, for a graded two-sided ideal ${\mathfrak{B}}$ of $R[x,f]$, the {\em annihilator of ${\mathfrak{B}}$ in $H$\/} is defined as $$ \ann_H{\mathfrak{B}} := \{ h \in H : \theta h = 0 \mbox{~for all~}\theta \in {\mathfrak{B}}\}. $$ I say that an $R[x,f]$-submodule of $H$ is a {\em special annihilator submodule\/} of $H$ if it has the form $\ann_H{\mathfrak{B}}$ for some graded two-sided ideal ${\mathfrak{B}}$ of $R[x,f]$. There is a natural bijective inclusion-reversing correspondence between the set of all special annihilator submodules of $H$ and the set of all graded annihilators of submodules of $H$. A large part of this paper is concerned with exploration and exploitation of this correspondence. It is particularly satisfactory in the case where the left $R[x,f]$-module $H$ is $x$-torsion-free, for then it turns out that the set of all graded annihilators of submodules of $H$ is in bijective correspondence with a certain set of radical ideals of $R$, and one of the main results of \S \ref{ga} is that this set is finite in the case where $H$ is Artinian as an $R$-module. The theory that emerges has some uncanny similarities to tight closure theory. Use is made of the Hartshorne--Speiser--Lyubeznik Theorem (see R. Hartshorne and R. Speiser \cite[Proposition 1.11]{HarSpe77}, G. Lyubeznik \cite[Proposition 4.4]{Lyube97}, and M. Katzman and R. Y. Sharp \cite[1.4 and 1.5]{KS}) to pass between a general left $R[x,f]$-module that is Artinian over $R$ and one that is $x$-torsion-free. In \S \ref{tc}, this theory of special annihilator submodules is applied to prove an existence theorem for weak parameter test elements in a Cohen--Macaulay local ring of characteristic $p$. To explain this, I now review some definitions concerning weak test elements. 
A {\em $p^{w_0}$-weak test element\/} for $R$ (where $w_0$ is a non-negative integer) is an element $c' \in R^{\circ}$ such that, for every ideal ${\mathfrak{b}}$ of $R$ and for $r \in R$, it is the case that $r \in {\mathfrak{b}}^*$ if and only if $c'r^{p^n} \in {\mathfrak{b}}^{[p^n]}$ for all $n \geq w_0$. A $p^0$-weak test element is called a {\em test element\/}. A proper ideal ${\mathfrak{a}}$ in $R$ is said to be a {\em parameter ideal\/} precisely when it can be generated by $\height {\mathfrak{a}}$ elements. Parameter ideals play an important r\^ole in tight closure theory, and Hochster and Huneke introduced the concept of parameter test element for $R$. A {\em $p^{w_0}$-weak parameter test element\/} for $R$ is an element $c' \in R^{\circ}$ such that, for every parameter ideal ${\mathfrak{b}}$ of $R$ and for $r \in R$, it is the case that $r \in {\mathfrak{b}}^*$ if and only if $c'r^{p^n} \in {\mathfrak{b}}^{[p^n]}$ for all $n \geq w_0$. A $p^0$-weak parameter test element is called a {\em parameter test element\/}. It is a result of Hochster and Huneke \cite[Theorem (6.1)(b)]{HocHun94} that an algebra of finite type over an excellent local ring of characteristic $p$ has a $p^{w_0}$-weak test element for some non-negative integer $w_0$; furthermore, such an algebra which is reduced actually has a test element. Of course, a (weak) test element is a (weak) parameter test element. One of the main results of this paper is Theorem \ref{tc.2}, which shows that every Cohen--Macaulay local ring of characteristic $p$, even if it is not excellent, has a $p^{w_0}$-weak parameter test element for some non-negative integer $w_0$. Lastly, the final \S \ref{en} establishes some connections between the theory developed in this paper and the $F$-stable primes of F. Enescu \cite{Enesc03}. \section{\sc Graded annihilators and related concepts} \label{nt} \begin{ntn} \label{nt.1} Throughout, $R$ will denote a commutative Noetherian ring of prime characteristic $p$. 
We shall work with the skew polynomial ring $R[x,f]$ associated to $R$ and $f$ in the indeterminate $x$ over $R$. Recall that $R[x,f]$ is, as a left $R$-module, freely generated by $(x^i)_{i \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}$ (I use $\mathbb N$ and $\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ to denote the set of positive integers and the set of non-negative integers, respectively), and so consists of all polynomials $\sum_{i = 0}^n r_i x^i$, where $n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ and $r_0,\ldots,r_n \in R$; however, its multiplication is subject to the rule $$ xr = f(r)x = r^px \quad \mbox{~for all~} r \in R\/. $$ Note that $R[x,f]$ can be considered as a positively-graded ring $R[x,f] = \bigoplus_{n=0}^{\infty} R[x,f]_n$, with $R[x,f]_n = Rx^n$ for all $n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$. The ring $R[x,f]$ will be referred to as the {\em Frobenius skew polynomial ring over $R$.} Throughout, we shall let $G$ and $H$ denote left $R[x,f]$-modules. The {\em annihilator of $H$\/} will be denoted by $\ann_{R[x,f]}H$ or $\ann_{R[x,f]}(H)$. Thus $$ \ann_{R[x,f]}(H) = \{ \theta \in R[x,f] : \theta h = 0 \mbox{~for all~} h \in H\}, $$ and this is a (two-sided) ideal of $R[x,f]$. For a two-sided ideal ${\mathfrak{B}}$ of $R[x,f]$, we shall use $\ann_H{\mathfrak{B}}$ or $\ann_H({\mathfrak{B}})$ to denote the {\em annihilator of ${\mathfrak{B}}$ in $H$}. Thus $$ \ann_H{\mathfrak{B}} = \ann_H({\mathfrak{B}}) = \{ h \in H : \theta h = 0 \mbox{~for all~}\theta \in {\mathfrak{B}}\}, $$ and this is an $R[x,f]$-submodule of $H$. \end{ntn} \begin{defrmks} \label{nt.2} We say that the left $R[x,f]$-module $H$ is {\em $x$-torsion-free\/} if $xh = 0$, for $h \in H$, only when $h = 0$. The set $\Gamma_x(H) := \left\{ h \in H : x^jh = 0 \mbox{~for some~} j \in \mathbb N \right\}$ is an $R[x,f]$-submodule of $H$, called the\/ {\em $x$-torsion submodule} of $H$. 
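For example, $R$ itself may be regarded as a left $R[x,f]$-module on which $x$ acts as the Frobenius endomorphism $f$, so that $\left(\sum_{i=0}^n r_ix^i\right)h = \sum_{i=0}^n r_ih^{p^i}$ for $h \in R$; one checks easily that $x(rh) = (rh)^p = r^p(xh)$ for all $r, h \in R$, as the rule $xr = r^px$ requires. For this module, $x^jh = h^{p^j}$, and so $\Gamma_x(R) = \sqrt{0}$, the nilradical of $R$; thus $R$ is $x$-torsion-free as a left $R[x,f]$-module if and only if $R$ is reduced.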
The $R[x,f]$-module $H/\Gamma_x(H)$ is $x$-torsion-free. \end{defrmks} \begin{rmk} \label{nt.2b} Let ${\mathfrak{B}}$ be a subset of $R[x,f]$. It is easy to see that ${\mathfrak{B}}$ is a graded two-sided ideal of $R[x,f]$ if and only if there is an ascending chain $({\mathfrak{b}}_n)_{n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}$ of ideals of $R$ (which must, of course, be eventually stationary) such that ${\mathfrak{B}} = \bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}{\mathfrak{b}}_n x^n$. We shall sometimes denote the ultimate constant value of the ascending sequence $({\mathfrak{b}}_n)_{n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}$ by $\lim_{n \rightarrow \infty}{\mathfrak{b}}_n$. Note that, in particular, if ${\mathfrak{b}}$ is an ideal of $R$, then ${\mathfrak{b}} R[x,f] = \bigoplus_{n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi} {\mathfrak{b}} x^n$ is a graded two-sided ideal of $R[x,f]$. It was noted in \ref{nt.1} that the annihilator of a left $R[x,f]$-module is a two-sided ideal. \end{rmk} \begin{lem} [Y. Yoshino {\cite[Corollary (2.7)]{Yoshi94}}] \label{nt.2c} The ring $R[x,f]$ satisfies the ascending chain condition on graded two-sided ideals. \end{lem} \begin{proof} This can be proved by the argument in Yoshino's proof of \cite[Corollary (2.7)]{Yoshi94}. \end{proof} \begin{defs} \label{nt.2d} We define the {\em graded annihilator\/} $\grann_{R[x,f]}H$ of the left $R[x,f]$-module $H$ by $$ \grann_{R[x,f]}H = \left\{ \sum_{i=0}^n r_ix^i \in R[x,f] : n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi, \mbox{~and~} r_i \in R,\, r_ix^i \in \ann_{R[x,f]}H \mbox{~for all~} i = 0, \ldots, n\right\}. 
$$ Thus $\grann_{R[x,f]}H$ is the largest graded two-sided ideal of $R[x,f]$ contained in $\ann_{R[x,f]}H$; also, if we write $\grann_{R[x,f]}H = \bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}{\mathfrak{b}}_n x^n$ for a suitable ascending chain $({\mathfrak{b}}_n)_{n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}$ of ideals of $R$, then ${\mathfrak{b}}_0 = (0:_RH)$, the annihilator of $H$ as an $R$-module. We say that an $R[x,f]$-submodule of $H$ is a {\em special annihilator submodule of $H$\/} if it has the form $\ann_H({\mathfrak{B}})$ for some {\em graded\/} two-sided ideal ${\mathfrak{B}}$ of $R[x,f]$. We shall use $\mathcal{A}(H)$ to denote the set of special annihilator submodules of $H$. \end{defs} \begin{defrmks} \label{nt.05d} There are some circumstances in which $\grann_{R[x,f]}H = \ann_{R[x,f]}H$: for example, this would be the case if $H$ was a $\mathbb Z$-graded left $R[x,f]$-module. Work of Y. Yoshino in \cite[\S 2]{Yoshi94} provides us with further examples. Following Yoshino \cite[Definition (2.1)]{Yoshi94}, we say that $R$ {\em has sufficiently many units\/} precisely when, for each $n \in \mathbb N$, there exists $r_n \in R$ such that all $n$ elements $(r_n)^{p^i} - r_n~(1 \leq i \leq n)$ are units of $R$. Yoshino proved in \cite[Lemma (2.2)]{Yoshi94} that if either $R$ contains an infinite field, or $R$ is local and has infinite residue field, then $R$ has sufficiently many units. He went on to show in \cite[Theorem (2.6)]{Yoshi94} that, if $R$ has sufficiently many units, then each two-sided ideal of $R[x,f]$ is graded. Thus if $R$ has sufficiently many units, then $\grann_{R[x,f]}H = \ann_{R[x,f]}H$, even if $H$ is not graded. \end{defrmks} \begin{lem} \label{nt.3} Let ${\mathfrak{B}}$ and ${\mathfrak{B}} '$ be graded two-sided ideals of $R[x,f]$ and let $N$ and $N'$ be $R[x,f]$-submodules of the left $R[x,f]$-module $H$. 
\begin{enumerate} \item If ${\mathfrak{B}} \subseteq {\mathfrak{B}} '$, then $\ann_H({\mathfrak{B}}) \supseteq \ann_H({\mathfrak{B}} ')$. \item If $N \subseteq N'$, then $\grann_{R[x,f]}N \supseteq \grann_{R[x,f]}N '$. \item We have ${\mathfrak{B}} \subseteq \grann_{R[x,f]}\left(\ann_H({\mathfrak{B}})\right)$. \item We have $N \subseteq \ann_H\!\left(\grann_{R[x,f]}N\right).$ \item There is an order-reversing bijection, $\Gamma\/,$ from the set $\mathcal{A}(H)$ of special annihilator submo\-d\-ules of $H$ to the set of graded annihilators of submodules of $H$ given by $$ \Gamma : N \longmapsto \grann_{R[x,f]}N. $$ The inverse bijection, $\Gamma^{-1},$ also order-reversing, is given by $$ \Gamma^{-1} : {\mathfrak{B}} \longmapsto \ann_H({\mathfrak{B}}). $$ \end{enumerate} \end{lem} \begin{proof} Parts (i), (ii), (iii) and (iv) are obvious. (v) Application of part (i) to the inclusion in part (iii) yields that $$ \ann_H({\mathfrak{B}}) \supseteq \ann_H\!\left( \grann_{R[x,f]}\left(\ann_H({\mathfrak{B}})\right) \right)\mbox{;} $$ however, part (iv) applied to the $R[x,f]$-submodule $\ann_H({\mathfrak{B}})$ of $H$ yields that $$ \ann_H({\mathfrak{B}}) \subseteq \ann_H\!\left( \grann_{R[x,f]}\left(\ann_H({\mathfrak{B}})\right) \right)\mbox{;} $$ hence $ \ann_H({\mathfrak{B}}) = \ann_H\!\left( \grann_{R[x,f]}\left(\ann_H({\mathfrak{B}})\right) \right). $ Similar considerations show that $$ \grann_{R[x,f]}N = \grann_{R[x,f]}\left(\ann_H\!\left(\grann_{R[x,f]}N\right)\right). $$ \end{proof} \begin{rmk} \label{nt.4} It follows from Lemma \ref{nt.3} that, if $N$ is a special annihilator submodule of $H$, then it is the annihilator (in $H$) of its own graded annihilator. Likewise, a graded two-sided ideal ${\mathfrak{B}}$ of $R[x,f]$ which is the graded annihilator of some $R[x,f]$-submodule of $H$ must be the graded annihilator of $\ann_H({\mathfrak{B}})$. \end{rmk} Much use will be made of the following lemma. 
\begin{lem} \label{nt.5} Assume that the left $R[x,f]$-module $G$ is $x$-torsion-free. Then there is a radical ideal ${\mathfrak{b}}$ of $R$ such that $\grann_{R[x,f]}G = {\mathfrak{b}} R[x,f] = \bigoplus _{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi} {\mathfrak{b}} x^n$. \end{lem} \begin{proof} There is a family $({\mathfrak{b}}_n)_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}$ of ideals of $R$ such that ${\mathfrak{b}}_n \subseteq {\mathfrak{b}}_{n+1}$ for all $n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ and $\grann_{R[x,f]}G = \bigoplus _{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi} {\mathfrak{b}}_n x^n$. There exists $n_0 \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ such that ${\mathfrak{b}}_n = {\mathfrak{b}}_{n_0}$ for all $n \geq n_0$. Set ${\mathfrak{b}} := {\mathfrak{b}}_{n_0}$. It is enough for us to show that, if $r \in R$ and $e \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ are such that $r^{p^e} \in {\mathfrak{b}}$, then $r \in {\mathfrak{b}}_0$. To this end, let $h \in \mathbb N$ be such that $h \geq \max \{e,n_0\}$. Then, for all $g \in G$, we have $x^hrg = r^{p^h}x^hg = 0$, since $r^{p^h} \in {\mathfrak{b}} = {\mathfrak{b}}_h$. Since $G$ is $x$-torsion-free, it follows that $rG = 0$, so that $r \in {\mathfrak{b}}_0$. \end{proof} \begin{defi} \label{nt.6} Assume that the left $R[x,f]$-module $G$ is $x$-torsion-free. An ideal ${\mathfrak{b}}$ of $R$ is called a {\em $G$-special $R$-ideal} if there is an $R[x,f]$-submodule $N$ of $G$ such that $\grann_{R[x,f]}N = {\mathfrak{b}} R[x,f] = \bigoplus _{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi} {\mathfrak{b}} x^n$. It is worth noting that, then, the ideal ${\mathfrak{b}}$ is just $(0:_RN)$. We shall denote the set of $G$-special $R$-ideals by $\mathcal{I}(G)$. Note that, by Lemma \ref{nt.5}, all the ideals in $\mathcal{I}(G)$ are radical. 
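For example, suppose that $R$ is an integral domain and regard $G := R$ as a left $R[x,f]$-module with $x$ acting as the Frobenius endomorphism $f$, so that $x^jh = h^{p^j}$ for all $h \in R$ and $j \geq 0$; this module is $x$-torsion-free because $R$ is reduced. The $R[x,f]$-submodules of $G$ are just the ideals of $R$. For a non-zero ideal ${\mathfrak{n}}$ of $R$, the fact that $R$ is a domain ensures that $rh^{p^j} = 0$ (for $r \in R$, $0 \neq h \in {\mathfrak{n}}$ and $j \geq 0$) forces $r = 0$; hence $\grann_{R[x,f]}{\mathfrak{n}} = 0 = (0)R[x,f]$, whereas $\grann_{R[x,f]}0 = R[x,f] = RR[x,f]$. Therefore $\mathcal{I}(G) = \{(0), R\}$ in this case, and both members are radical, as Lemma \ref{nt.5} predicts.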
\end{defi} We can now combine the results of Lemmas \ref{nt.3}(v) and \ref{nt.5} to obtain the following result, which is fundamental for the work in this paper. \begin{prop} \label{nt.7} Assume that the left $R[x,f]$-module $G$ is $x$-torsion-free. There is an order-reversing bijection, $\Delta : \mathcal{A}(G) \longrightarrow \mathcal{I}(G),$ from the set $\mathcal{A}(G)$ of special annihilator submodules of $G$ to the set $\mathcal{I}(G)$ of $G$-special $R$-ideals given by $$ \Delta : N \longmapsto \left(\grann_{R[x,f]}N\right)\cap R = (0:_RN). $$ The inverse bijection, $\Delta^{-1} : \mathcal{I}(G) \longrightarrow \mathcal{A}(G),$ also order-reversing, is given by $$ \Delta^{-1} : {\mathfrak{b}} \longmapsto \ann_G\left({\mathfrak{b}} R[x,f]\right). $$ When $N \in \mathcal{A}(G)$ and ${\mathfrak{b}} \in \mathcal{I}(G)$ are such that $\Delta(N) = {\mathfrak{b}}$, we shall say simply that `{\em $N$ and ${\mathfrak{b}}$ correspond}'. \end{prop} \begin{cor} \label{nt.9} Assume that the left $R[x,f]$-module $G$ is $x$-torsion-free. Then both the sets $\mathcal{A}(G)$ and $\mathcal{I}(G)$ are closed under taking arbitrary intersections. \end{cor} \begin{proof} Let $\left(N_{\lambda}\right)_{\lambda \in \Lambda}$ be an arbitrary family of special annihilator submodules of $G$. For each $\lambda \in \Lambda$, let ${\mathfrak{b}}_{\lambda}$ be the $G$-special $R$-ideal corresponding to $N_{\lambda}$. In view of Proposition \ref{nt.7}, it is sufficient for us to show that $\bigcap_{\lambda \in \Lambda}N_{\lambda} \in \mathcal{A}(G)$ and ${\mathfrak{b}} := \bigcap_{\lambda \in \Lambda}{\mathfrak{b}}_{\lambda} \in \mathcal{I}(G)$. To prove these, simply note that $$ {\textstyle \bigcap_{\lambda \in \Lambda}N_{\lambda} = \bigcap_{\lambda \in \Lambda}\ann_G\left({\mathfrak{b}}_{\lambda} R[x,f]\right) = \ann_G\left(\left(\sum_{\lambda \in \Lambda}{\mathfrak{b}}_{\lambda}\right)\!
R[x,f]\right)} $$ and that $\sum_{\lambda \in \Lambda}N_{\lambda}$ is an $R[x,f]$-submodule of $G$ such that $$ {\textstyle \grann_{R[x,f]}\left(\sum_{\lambda \in \Lambda}N_{\lambda}\right) = \bigcap_{\lambda \in \Lambda}\grann_{R[x,f]}N_{\lambda} = \bigcap_{\lambda \in \Lambda}\left({\mathfrak{b}}_{\lambda}R[x,f] \right) = {\mathfrak{b}} R[x,f].} $$ \end{proof} \begin{rmk} \label{nt.8} Suppose that the left $R[x,f]$-module $G$ is $x$-torsion-free. It is worth pointing out now that, since $R$ is Noetherian, so that the set $\mathcal{I}(G)$ of $G$-special $R$-ideals satisfies the ascending chain condition, it is a consequence of Proposition \ref{nt.7} that the set $\mathcal{A}(G)$ of special annihilator submodules of $G$, partially ordered by inclusion, satisfies the descending chain condition. This is the case even if $G$ is not finitely generated. Note that (by \cite[Theorem (1.3)]{Yoshi94}), the (noncommutative) ring $R[x,f]$ is neither left nor right Noetherian if $\dim R > 0$. \end{rmk} \section{\sc Examples relevant to the theory of tight closure} \label{ex} The purpose of this section is to present some motivating examples, from the theory of tight closure, of some of the concepts introduced in \S \ref{nt}. Throughout this section, we shall again employ the notation of \ref{nt.1}, and ${\mathfrak{a}}$ will always denote an ideal of $R$. Recall that the {\em Frobenius closure ${\mathfrak{a}}^F$ of ${\mathfrak{a}}$} is the ideal of $R$ defined by $$ {\mathfrak{a}} ^F := \big\{ r \in R : \mbox{there exists~} n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi \mbox{~such that~} r^{p^n} \in {\mathfrak{a}}^{[p^n]}\big\}. $$ \begin{rmk} \label{ex.0} Let $({\mathfrak{b}}_n)_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}$ be a family of ideals of $R$ such that ${\mathfrak{b}}_n \subseteq f^{-1}({\mathfrak{b}}_{n+1})$ for all $n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$. 
Then $\bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}{\mathfrak{b}}_nx^n$ is a graded left ideal of $R[x,f]$, and so we may form the graded left $R[x,f]$-module $ R[x,f]/\bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}{\mathfrak{b}}_nx^n$. This may be viewed as $\bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}R/{\mathfrak{b}}_n$, where, for $r \in R$ and $n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$, the result of multiplying the element $r + {\mathfrak{b}}_n$ of the $n$-th component by $x$ is the element $r^{p} + {\mathfrak{b}}_{n+1}$ of the $(n+1)$-th component. Note that the left $R[x,f]$-module $R[x,f]/\bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}{\mathfrak{b}}_nx^n$ is $x$-torsion-free if and only if ${\mathfrak{b}}_n = f^{-1}({\mathfrak{b}}_{n+1})$ for all $n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$, that is, if and only if $({\mathfrak{b}}_n)_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}$ is an $f$-sequence in the sense of \cite[Definition 4.1(ii)]{SN}. \end{rmk} \begin{ntn} \label{ex.0a} Since $R[x,f]{\mathfrak{a}} = \bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}{\mathfrak{a}}^{[p^n]}x^n$, we can view the graded left $R[x,f]$-module $$R[x,f]/R[x,f]{\mathfrak{a}}$$ as $\bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}R/{\mathfrak{a}}^{[p^n]}$ in the manner described in \ref{ex.0}. We shall denote the graded left $R[x,f]$-module $\bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}R/{\mathfrak{a}}^{[p^n]}$ by $H({\mathfrak{a}})$. Recall from \cite[4.1(iii)]{SN} that $\left(({\mathfrak{a}}^{[p^n]})^F\right)_{n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}$ is the {\em canonical $f$-sequence associated to ${\mathfrak{a}}$}. 
We shall denote $\bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}R/({\mathfrak{a}}^{[p^n]})^F$, considered as a graded left $R[x,f]$-module in the manner described in \ref{ex.0}, by $G({\mathfrak{a}})$. Note that $G({\mathfrak{a}})$ is $x$-torsion-free. \end{ntn} \begin{lem} \label{ex.0b} With the notation of\/ {\rm \ref{ex.0a}}, we have $\Gamma_x(H({\mathfrak{a}})) = \bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}({\mathfrak{a}}^{[p^n]})^F/{\mathfrak{a}}^{[p^n]}$, so that there is an isomorphism of graded left $R[x,f]$-modules $$ H({\mathfrak{a}})/ \Gamma_x(H({\mathfrak{a}})) \cong G({\mathfrak{a}}). $$ \end{lem} \begin{proof} Let $n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ and $r \in R$. Then the element $r + {\mathfrak{a}}^{[p^n]}$ of the $n$-th component of $H({\mathfrak{a}})$ belongs to $\Gamma_x(H({\mathfrak{a}}))$ if and only if there exists $m \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ such that $x^m(r + {\mathfrak{a}}^{[p^n]})= r^{p^m} + ({\mathfrak{a}}^{[p^{n}]})^{[p^{m}]} = 0$, that is, if and only if $r \in ({\mathfrak{a}}^{[p^n]})^F$. \end{proof} \begin{prop} \label{ex.1} We use the notation of\/ {\rm \ref{ex.0a}}. Suppose that there exists a $p^{w_0}$-weak test element $c$ for $R$, for some $w_0 \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$. Then \begin{enumerate} \item $ \ann_{H({\mathfrak{a}})}\left( \bigoplus_{n\geq w_0} Rcx^n\right) = \bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}({\mathfrak{a}}^{[p^n]})^*/{\mathfrak{a}}^{[p^n]}\mbox{;} $ \item $ \ann_{G({\mathfrak{a}})}\left(\bigoplus_{n\in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi} Rcx^n\right) = \bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}({\mathfrak{a}}^{[p^n]})^*/({\mathfrak{a}}^{[p^n]})^F. $ \end{enumerate} \end{prop} \begin{proof} (i) Let $j \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ and $r \in R$. 
Then the element $r + {\mathfrak{a}}^{[p^j]}$ of the $j$-th component of $H({\mathfrak{a}})$ belongs to $\ann_{H({\mathfrak{a}})}\left(\bigoplus_{n\geq w_0} Rcx^n\right)$ if and only if $cr^{p^n} \in ({\mathfrak{a}}^{[p^j]})^{[p^n]}$ for all $n \geq w_0$, that is, if and only if $r \in ({\mathfrak{a}}^{[p^j]})^*$. (ii) By part (i), $$ {\textstyle \bigoplus_{n\in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}}({\mathfrak{a}}^{[p^n]})^*/({\mathfrak{a}}^{[p^n]})^F \subseteq \ann_{G({\mathfrak{a}})}\left({\textstyle \bigoplus_{n\geq w_0}} Rcx^n\right). $$ Note that $\ann_{G({\mathfrak{a}})}\left(\bigoplus_{n\geq w_0} Rcx^n\right)$ is a graded $R[x,f]$-submodule of $G({\mathfrak{a}})$. Let $j \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ and $r \in R$ be such that $r + ({\mathfrak{a}}^{[p^j]})^F$ belongs to the $j$-th component of $\ann_{G({\mathfrak{a}})}\left(\bigoplus_{n\geq w_0} Rcx^n\right)$. Then, for all $n \geq w_0$, we have $cr^{p^n} \in ({\mathfrak{a}}^{[p^{j+n}]})^F = (({\mathfrak{a}}^{[p^{j}]})^{[p^{n}]})^F$. Therefore, by \cite[Lemma 0.1]{KS}, we have $r \in ({\mathfrak{a}}^{[p^{j}]})^*$. It follows from this that $$ {\textstyle \bigoplus_{n\in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}}({\mathfrak{a}}^{[p^n]})^*/({\mathfrak{a}}^{[p^n]})^F = \ann_{G({\mathfrak{a}})}\left({\textstyle \bigoplus_{n\geq w_0}} Rcx^n\right), $$ and so $\bigoplus_{n\in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}({\mathfrak{a}}^{[p^n]})^*/({\mathfrak{a}}^{[p^n]})^F$ is a special annihilator submodule of the $x$-torsion-free graded left $R[x,f]$-module $G({\mathfrak{a}})$. Let ${\mathfrak{b}}$ be the $G({\mathfrak{a}})$-special $R$-ideal corresponding to this member of $\mathcal{A}(G({\mathfrak{a}}))$. The above-displayed equation shows that $Rc \subseteq {\mathfrak{b}}$. 
Hence, by Proposition \ref{nt.7}, \begin{align*} {\textstyle \bigoplus_{n\in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}}({\mathfrak{a}}^{[p^n]})^*/({\mathfrak{a}}^{[p^n]})^F & = \ann_{G({\mathfrak{a}})}\left({\textstyle \bigoplus_{n\geq w_0}} Rcx^n\right) \supseteq \ann_{G({\mathfrak{a}})}\left({\textstyle \bigoplus_{n\in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}} Rcx^n\right) \\ & \supseteq \ann_{G({\mathfrak{a}})}\left({\textstyle \bigoplus_{n\in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}} {\mathfrak{b}} x^n\right) = {\textstyle \bigoplus_{n\in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}}({\mathfrak{a}}^{[p^n]})^*/({\mathfrak{a}}^{[p^n]})^F. \end{align*} \end{proof} \begin{defi} \label{ex.1t} The {\em weak test ideal $\tau'(R)$ of $R$\/} is defined to be the ideal generated by $0$ and all weak test elements for $R$. (By a `weak test element' for $R$ we mean a $p^{w_0}$-weak test element for $R$ for some $w_0 \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$.) It is easy to see that each element of $\tau'(R) \cap R^{\circ}$ is a weak test element for $R$. \end{defi} \begin{thm} \label{ex.2} We use the notation of\/ {\rm \ref{ex.0a}}. Suppose that there exists a $p^{w_0}$-weak test element $c$ for $R$, for some $w_0 \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$. Let $H$ be the positively-graded left $R[x,f]$-module given by $$ H := \bigoplus_{{\mathfrak{a}} \textit{~is an ideal of~} R}H({\mathfrak{a}}) = \bigoplus_{{\mathfrak{a}} \textit{~is an ideal of~} R}\Big(\textstyle{\bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}} R/{\mathfrak{a}}^{[p^n]}\Big). $$ Set $T := {\displaystyle \bigoplus_{{\mathfrak{a}} \textit{~is an ideal of~} R}} \Big(\bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}({\mathfrak{a}}^{[p^n]})^*/{\mathfrak{a}}^{[p^n]}\Big)$. 
\begin{enumerate} \item Then $ T = \ann_{H}\left( \bigoplus_{n\geq w_0} Rcx^n\right), $ and so is a special annihilator submodule of $H$. \item Write $\grann_{R[x,f]}T = \bigoplus_{n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi} {\mathfrak{c}}_nx^n$ for a suitable ascending chain $({\mathfrak{c}}_n)_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}$ of ideals of $R$. Then $\lim_{n \rightarrow \infty}{\mathfrak{c}}_n = \tau'(R)$, the weak test ideal for $R$. \item Furthermore, $T$ contains every special annihilator submodule $T'$ of $H$ for which the graded annihilator $\grann_{R[x,f]}T' = \bigoplus_{n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi} {\mathfrak{b}}_nx^n$ has\/ $\height (\lim_{n \rightarrow \infty}{\mathfrak{b}}_n) \geq 1$. (The height of the improper ideal $R$ is considered to be $\infty$.) \end{enumerate} \end{thm} \begin{proof} (i) This is immediate from Proposition \ref{ex.1}(i). (ii) Write ${\mathfrak{c}} := \lim_{n \rightarrow \infty}{\mathfrak{c}}_n$. Since there exists a weak test element for $R$, the ideal $\tau'(R)$ can be generated by finitely many weak test elements for $R$, say by $c_i~(i = 1, \ldots, h)$, where $c_i$ is a $p^{w_i}$-weak test element for $R$ (for $i= 1, \ldots, h$). Set $\widetilde{w} = \max\{w_1, \ldots, w_h\}$. It is immediate from part (i) that $\bigoplus_{n \geq \widetilde{w}}\tau'(R)x^n \subseteq \grann_{R[x,f]}T$, and so $\tau'(R) \subseteq {\mathfrak{c}}$. Therefore $\height {\mathfrak{c}} \geq 1$, so that ${\mathfrak{c}} \cap R^{\circ} \neq \emptyset$ by prime avoidance, and ${\mathfrak{c}}$ can be generated by its elements in $R^{\circ}$. There exists $m_0 \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ such that ${\mathfrak{c}}_n = {\mathfrak{c}}$ for all $n \geq m_0$. Let $c' \in {\mathfrak{c}} \cap R^{\circ}$. 
Thus $T$ is annihilated by $c'x^n$ for all $n \geq m_0$; therefore, for each ideal ${\mathfrak{a}}$ of $R$, and for all $r \in {\mathfrak{a}}^*$, we have $c'r^{p^n} \in {\mathfrak{a}}^{[p^n]}$ for all $n \geq m_0$, so that $c'$ is a $p^{m_0}$-weak test element for $R$. Therefore $c' \in \tau'(R)$. Since ${\mathfrak{c}}$ can be generated by elements in ${\mathfrak{c}} \cap R^{\circ}$, it follows that ${\mathfrak{c}} \subseteq \tau'(R)$. (iii) Since $T' = \ann_{H}\Big(\bigoplus_{n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi} {\mathfrak{b}}_nx^n\Big)$, it follows that $$ T' = \bigoplus_{{\mathfrak{a}} \text{~is an ideal of~} R} \Big({\textstyle \bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}}{\mathfrak{a}}_n/{\mathfrak{a}}^{[p^n]}\Big), $$ where, for each ideal ${\mathfrak{a}}$ of $R$ and each $n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$, the ideal ${\mathfrak{a}}_n$ of $R$ contains ${\mathfrak{a}}^{[p^n]}$. Suppose that $\lim_{n \rightarrow \infty}{\mathfrak{b}}_n = {\mathfrak{b}}$ and that $v_0 \in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ is such that ${\mathfrak{b}}_n = {\mathfrak{b}}$ for all $n \geq v_0$. Since $\height {\mathfrak{b}} \geq 1$, there exists $\overline{c} \in {\mathfrak{b}}\cap R^{\circ}$, by prime avoidance. Let ${\mathfrak{a}}$ be an ideal of $R$ and let $n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$. Then, for each $r \in {\mathfrak{a}}_n$, the element $r + {\mathfrak{a}}^{[p^n]}$ of the $n$-th component of $H({\mathfrak{a}})$ is annihilated by $\overline{c}x^j$ for all $j \geq v_0$. This means that $\overline{c}r^{p^j} \in ({\mathfrak{a}}^{[p^n]})^{[p^j]}$ for all $j \geq v_0$, so that $r \in ({\mathfrak{a}}^{[p^n]})^*$. Therefore $T' \subseteq T$. \end{proof} \begin{thm} \label{ex.3} We use the notation of\/ {\rm \ref{ex.0a}}. Suppose that there exists a $p^{w_0}$-weak test element $c$ for $R$, for some $w_0 \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$. 
Let $G$ be the positively-graded $x$-torsion-free left $R[x,f]$-module given by $$ G := \bigoplus_{{\mathfrak{a}} \textit{~is an ideal of~} R}G({\mathfrak{a}}) = \bigoplus_{{\mathfrak{a}} \textit{~is an ideal of~} R}\Big({\textstyle \bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}} R/({\mathfrak{a}}^{[p^n]})^F\Big). $$ Set $U := {\displaystyle \bigoplus_{{\mathfrak{a}} \textit{~is an ideal of~} R}} \Big(\bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}({\mathfrak{a}}^{[p^n]})^*/({\mathfrak{a}}^{[p^n]})^F\Big)$. \begin{enumerate} \item Then $ U = \ann_{G}\left( \bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi} Rcx^n\right), $ and so is a special annihilator submodule of $G$. \item Let ${\mathfrak{b}}$ be the $G$-special $R$-ideal corresponding to $U$. Then ${\mathfrak{b}}$ is the smallest member of $\mathcal{I}(G)$ of positive height. \end{enumerate} \end{thm} \begin{proof} (i) This is immediate from Proposition \ref{ex.1}(ii). (ii) Note that $Rc \subseteq {\mathfrak{b}}$, by part (i); therefore $\height {\mathfrak{b}} \geq 1$. To complete the proof, we show that, if ${\mathfrak{b}}' \in \mathcal{I}(G)$ has $\height {\mathfrak{b}}' \geq 1$, then ${\mathfrak{b}} \subseteq {\mathfrak{b}}'$. By prime avoidance, there exists $\widetilde{c} \in {\mathfrak{b}}' \cap R^{\circ}$. Let $U' \in \mathcal{A}(G)$ correspond to ${\mathfrak{b}}'$ (in the correspondence of Proposition \ref{nt.7}). Since $U' = \ann_G{\mathfrak{b}}' R[x,f]$, it follows that $$ U' = \bigoplus_{{\mathfrak{a}} \text{~is an ideal of~} R} \Big({\textstyle \bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}}{\mathfrak{a}}_n/({\mathfrak{a}}^{[p^n]})^F\Big), $$ where, for each ideal ${\mathfrak{a}}$ of $R$ and each $n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$, the ideal ${\mathfrak{a}}_n$ of $R$ contains $({\mathfrak{a}}^{[p^n]})^F$. 
Let ${\mathfrak{a}}$ be an ideal of $R$ and let $n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$. Then, for each $r \in {\mathfrak{a}}_n$, the element $r + ({\mathfrak{a}}^{[p^n]})^F$ of the $n$-th component of $G({\mathfrak{a}})$ is annihilated by $\widetilde{c}x^j$ for all $j \geq 0$. This means that $\widetilde{c}r^{p^j} \in (({\mathfrak{a}}^{[p^n]})^{[p^j]})^F$ for all $j \geq 0$, so that $r \in ({\mathfrak{a}}^{[p^n]})^*$ by \cite[Lemma 0.1(i)]{KS}. Therefore $U' \subseteq U$, so that ${\mathfrak{b}}' \supseteq {\mathfrak{b}}$. \end{proof} \section{\sc Properties of special annihilator submodules in the $x$-torsion-free case} \label{ga} Throughout this section, we shall employ the notation of \ref{nt.1}. The aim is to develop the theory of special annihilator submodules of an $x$-torsion-free left $R[x,f]$-module. \begin{lem} \label{ga.1} Suppose that $G$ is $x$-torsion-free. Let $N$ be a special annihilator submodule of $G$. Then the left $R[x,f]$-module $G/N$ is also $x$-torsion-free. \end{lem} \begin{proof} By Lemma \ref{nt.5} and Proposition \ref{nt.7}, there is a radical ideal ${\mathfrak{b}}$ of $R$ such that $N = \ann_G\left({\mathfrak{b}} R[x,f]\right).$ Let $g \in G$ be such that $xg \in N$. Therefore, for all $r \in {\mathfrak{b}}$ and all $j \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$, we have $rx^j(xg) = 0$, that is $rx^{j+1}g = 0$. Also, for $r \in {\mathfrak{b}}$, since $r(xg) = 0$, we have $x(rg) = r^pxg = 0$, and so $rg = 0$ because $G$ is $x$-torsion-free. Thus $g \in \ann_G\left(\bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}{\mathfrak{b}} x^n\right) = N$. It follows that $G/N$ is $x$-torsion-free. \end{proof} \begin{lem} \label{ga.1a} Suppose that $G$ is $x$-torsion-free. Let ${\mathfrak{a}}$ be an ideal of $R$, and set $L := \ann_G\left({\mathfrak{a}} R[x,f]\right) \in \mathcal{A}(G)$. Then $L = \ann_G\left(\sqrt{{\mathfrak{a}}} R[x,f]\right)$. 
\end{lem} \begin{proof} Let ${\mathfrak{d}} \in \mathcal{I}(G)$ correspond to $L$. Note that ${\mathfrak{d}}$ is radical, by Lemma \ref{nt.5}; also, ${\mathfrak{a}} \subseteq {\mathfrak{d}}$. Hence $ {\mathfrak{a}} \subseteq \sqrt{{\mathfrak{a}}} \subseteq \sqrt{{\mathfrak{d}}} = {\mathfrak{d}}$. Since $\ann_G\left({\mathfrak{a}} R[x,f]\right) = \ann_G\left({\mathfrak{d}} R[x,f]\right)$, we must have $\ann_G\left({\mathfrak{a}} R[x,f]\right) = \ann_G\left(\sqrt{{\mathfrak{a}}} R[x,f]\right).$ \end{proof} \begin{prop} \label{ga.3} Suppose that $G$ is $x$-torsion-free. Let ${\mathfrak{a}}$ be an ideal of $R$, and set $$L := \ann_G\left({\mathfrak{a}} R[x,f]\right) \in \mathcal{A}(G).$$ Note that $G/L$ is $x$-torsion-free, by Lemma\/ {\rm \ref{ga.1}}. Let $N$ be an $R$-submodule of $G$ such that $L \subseteq N \subseteq G$. \begin{enumerate} \item If $N = \ann_G\left({\mathfrak{b}} R[x,f]\right) \in \mathcal{A}(G)$, where ${\mathfrak{b}}$ is an ideal of $R$ contained in ${\mathfrak{a}}$, then $$N/L = \ann_{G/L}\left(({\mathfrak{b}}:{\mathfrak{a}})R[x,f]\right) \in \mathcal{A}(G/L).$$ Furthermore, if the ideal in $\mathcal{I}(G)$ corresponding to $N$ is ${\mathfrak{b}}$, then $({\mathfrak{b}}:{\mathfrak{a}})$ is the ideal in $\mathcal{I}(G/L)$ corresponding to $N/L$. \item If $N/L = \ann_{G/L}\left({\mathfrak{c}} R[x,f]\right) \in \mathcal{A}(G/L)$, where ${\mathfrak{c}}$ is an ideal of $R$, then $$N = \ann_{G}\left({\mathfrak{a}}{\mathfrak{c}} R[x,f]\right) = \ann_{G}\left(({\mathfrak{a}}\cap{\mathfrak{c}}) R[x,f]\right) \in \mathcal{A}(G).$$ Furthermore, if ${\mathfrak{a}}$ is the ideal in $\mathcal{I}(G)$ corresponding to $L$ and ${\mathfrak{c}}$ is the ideal in $\mathcal{I}(G/L)$ corresponding to $N/L$, then ${\mathfrak{a}}\cap{\mathfrak{c}}$ is the ideal in $\mathcal{I}(G)$ corresponding to $N$. \item There is an order-preserving bijection from $\{ N \in \mathcal{A}(G) : N \supseteq L\}$ to $\mathcal{A}(G/L)$ given by $N \mapsto N/L$. 
\end{enumerate} \end{prop} \begin{proof} By Lemma \ref{ga.1}, the left $R[x,f]$-module $G/L$ is $x$-torsion-free. (i) Let $g \in N$. Let $i,j \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ and $r \in ({\mathfrak{b}}:{\mathfrak{a}})$, $u \in {\mathfrak{a}}$. Then $ux^i(rx^jg) = ur^{p^i}x^{i+j}g = 0$ because $ ur^{p^i} \in {\mathfrak{b}}$ and ${\mathfrak{b}} x^{i+j}$ annihilates $N$. This is true for all $i \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ and $u \in {\mathfrak{a}}$. Therefore $ rx^jg \in \ann_G\left({\mathfrak{a}} R[x,f]\right) = L. $ Since this is true for all $j \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ and $r \in ({\mathfrak{b}}:{\mathfrak{a}})$, we see that $N/L \subseteq \ann_{G/L}\left(({\mathfrak{b}}:{\mathfrak{a}}) R[x,f]\right)$. Now suppose that $g \in G$ is such that $g + L \in \ann_{G/L}\left(({\mathfrak{b}}:{\mathfrak{a}}) R[x,f]\right)$. Let $r \in {\mathfrak{b}}$ and $i \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$. Then $r \in ({\mathfrak{b}}:{\mathfrak{a}})$ and so $rx^{i+1}$ annihilates $g + L \in G/L$. Hence $rx^{i+1}g \in L$. Since ${\mathfrak{b}} \subseteq {\mathfrak{a}}$, we see that $r^{p-1}rx^{i+1}g = 0$, so that $xrx^ig = 0$. As $G$ is $x$-torsion-free, it follows that $rx^ig = 0$. As this is true for all $r \in {\mathfrak{b}}$ and $i \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$, we see that $ g \in \ann_G\left({\mathfrak{b}} R[x,f]\right) = N. $ Hence $N/L = \ann_{G/L}\left(({\mathfrak{b}}:{\mathfrak{a}}) R[x,f]\right)$. To prove the final claim, we have to show that $\grann_{R[x,f]}(N/L) = \bigoplus _{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi} ({\mathfrak{b}}:{\mathfrak{a}}) x^n$, given that $\grann_{R[x,f]}N = \bigoplus _{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi} {\mathfrak{b}} x^n$. 
In view of the preceding paragraph, it remains only to show that $$\grann_{R[x,f]}(N/L) \subseteq ({\mathfrak{b}}:{\mathfrak{a}})R[x,f].$$ Let $r \in R$ be such that $rx^i \in \grann_{R[x,f]}(N/L)$ for all $i \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$. Let $g \in N$. Then $rx^ig \in L$ for all $i \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$, and so ${\mathfrak{a}} rx^ig = 0$ for all $i \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$. As this is true for all $g \in N$ and for all $i \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$, it follows that ${\mathfrak{a}} r \subseteq \left(\grann_{R[x,f]}N\right) \cap R = {\mathfrak{b}}$. Hence $r \in ({\mathfrak{b}}:{\mathfrak{a}})$. (ii) Let $g \in N$. Then $ux^ig \in L$ for all $u \in {\mathfrak{c}}$ and $i \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$, and so $rux^ig = 0$ for all $r \in {\mathfrak{a}}$, $u \in {\mathfrak{c}}$ and $i \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$. Hence $N \subseteq \ann_G\left(\bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}{\mathfrak{a}}{\mathfrak{c}} x^n\right) = \ann_G\left({\mathfrak{a}}{\mathfrak{c}} R[x,f]\right)$. Now let $g \in \ann_G\left({\mathfrak{a}}{\mathfrak{c}} R[x,f]\right)$. Then, for all $r \in {\mathfrak{a}}$, $u \in {\mathfrak{c}}$ and $i,j \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$, we have $rx^i(ux^jg) = ru^{p^i}x^{i+j}g = 0$, and so $ux^jg \in L$ for all $u \in {\mathfrak{c}}$ and $j \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$. Hence $g + L \in \ann_{G/L}\left(\bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}{\mathfrak{c}} x^n\right) = N/L$, and $g \in N$. It follows that $ N = \ann_G\left({\mathfrak{a}}{\mathfrak{c}} R[x,f]\right). 
$ Also, by Lemma \ref{ga.1a}, we have $$\ann_G\left({\mathfrak{a}}{\mathfrak{c}} R[x,f]\right) = \ann_G\left(({\mathfrak{a}}\cap{\mathfrak{c}}) R[x,f]\right),$$ because ${\mathfrak{a}} {\mathfrak{c}}$ and ${\mathfrak{a}} \cap {\mathfrak{c}}$ have the same radical. To prove the final claim, we have to show that $\grann_{R[x,f]}N = ({\mathfrak{a}}\cap{\mathfrak{c}})R[x,f]$, given that $$\grann_{R[x,f]}(N/L) = {\mathfrak{c}} R[x,f] \quad \mbox{~and~} \quad \grann_{R[x,f]}(L) = {\mathfrak{a}} R[x,f].$$ In view of the preceding paragraph, it remains only to show that $\grann_{R[x,f]}N \subseteq ({\mathfrak{a}}\cap{\mathfrak{c}})R[x,f].$ However, this is clear, because $ \grann_{R[x,f]}N \subseteq \grann_{R[x,f]}L \cap \grann_{R[x,f]}(N/L). $ (iii) This is now immediate from parts (i) and (ii). \end{proof} \begin{rmk} \label{ga.3r} It follows from Proposition \ref{ga.3}(ii) (and with the hypotheses and notation thereof) that, if ${\mathfrak{a}}$ is an ideal of $R$ and $L := \ann_G\left({\mathfrak{a}} R[x,f]\right)$, then $ \ann_{G/L}\left({\mathfrak{a}} R[x,f]\right) = 0$. \end{rmk} Because the special $R$-ideals introduced in Definition \ref{nt.6} are radical, the following lemma will be very useful. 
\begin{lem} \label{ga.2} Let ${\mathfrak{a}}$ and ${\mathfrak{b}}$ be proper radical ideals of $R$, and let their (unique) minimal primary decompositions be $$ {\mathfrak{a}} = {\mathfrak{r}}_1 \cap \ldots \cap {\mathfrak{r}}_k \cap {\mathfrak{p}}_1 \cap \ldots \cap {\mathfrak{p}}_t \cap {\mathfrak{p}}_1' \cap \ldots \cap {\mathfrak{p}}_u' $$ and $$ {\mathfrak{b}} = {\mathfrak{r}}_1 \cap \ldots \cap {\mathfrak{r}}_k \cap {\mathfrak{q}}_1 \cap \ldots \cap {\mathfrak{q}}_v \cap {\mathfrak{q}}_1' \cap \ldots \cap {\mathfrak{q}}_w', $$ where the notation is such that $$ \left\{{\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_t, {\mathfrak{p}}_1', \ldots, {\mathfrak{p}}_u'\right\} \cap \left\{{\mathfrak{q}}_1, \ldots, {\mathfrak{q}}_v, {\mathfrak{q}}_1', \ldots, {\mathfrak{q}}_w'\right\} = \emptyset, $$ and such that none of ${\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_t$ contains an associated prime of ${\mathfrak{b}}$, each of ${\mathfrak{p}}_1', \ldots, {\mathfrak{p}}_u'$ contains an associated prime of ${\mathfrak{b}}$, none of ${\mathfrak{q}}_1, \ldots, {\mathfrak{q}}_v$ contains an associated prime of ${\mathfrak{a}}$, and each of ${\mathfrak{q}}_1', \ldots, {\mathfrak{q}}_w'$ contains an associated prime of ${\mathfrak{a}}$. (Note that some, but not all, of the integers $k$, $t$ and $u$ might be zero; a similar comment applies to the primary decomposition of ${\mathfrak{b}}$.) Then \begin{enumerate} \item ${\mathfrak{a}} \cap {\mathfrak{b}} = {\mathfrak{r}}_1 \cap \ldots \cap {\mathfrak{r}}_k \cap {\mathfrak{p}}_1 \cap \ldots \cap {\mathfrak{p}}_t \cap {\mathfrak{q}}_1 \cap \ldots \cap {\mathfrak{q}}_v$ is the minimal primary decomposition; \item if ${\mathfrak{a}} \not\subseteq {\mathfrak{b}}$, the equation $ ({\mathfrak{b}} : {\mathfrak{a}}) = {\mathfrak{q}}_1 \cap \ldots \cap {\mathfrak{q}}_v$ gives the minimal primary decomposition. 
\end{enumerate} \end{lem} \begin{proof} (i) Each of ${\mathfrak{p}}_1', \ldots, {\mathfrak{p}}_u'$ must contain one of ${\mathfrak{q}}_1, \ldots, {\mathfrak{q}}_v$; likewise, each of ${\mathfrak{q}}_1', \ldots, {\mathfrak{q}}_w'$ must contain one of ${\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_t$ . The claim then follows easily. (ii) Since $ ({\mathfrak{b}} : {\mathfrak{a}}) = ({\mathfrak{b}} \cap {\mathfrak{a}} : {\mathfrak{a}})$, it is clear from part (i) that $$ {\mathfrak{q}}_1 \cap \ldots \cap {\mathfrak{q}}_v \subseteq ({\mathfrak{b}} : {\mathfrak{a}}). $$ Now let $r \in ({\mathfrak{b}} : {\mathfrak{a}}) = ({\mathfrak{b}} \cap {\mathfrak{a}} : {\mathfrak{a}})$. Then, for each $i = 1, \ldots, v$, we have $r{\mathfrak{a}} \subseteq {\mathfrak{q}}_i$, whereas ${\mathfrak{a}} \not\subseteq {\mathfrak{q}}_i$; hence $r \in {\mathfrak{q}}_i$ because ${\mathfrak{q}}_i$ is prime. \end{proof} \begin{thm} \label{ga.4} Suppose that $G$ is $x$-torsion-free. Let $N := \ann_G\left({\mathfrak{b}} R[x,f]\right) \in \mathcal{A}(G)$, where the ideal ${\mathfrak{b}} \in \mathcal{I}(G)$ corresponds to $N$. Assume that $N \neq 0$, and let ${\mathfrak{b}} = {\mathfrak{p}}_1 \cap \ldots \cap {\mathfrak{p}}_t$ be the minimal primary decomposition of the (radical) ideal ${\mathfrak{b}}$. Suppose that $t > 1$, and consider any partition $\{1,\ldots,t\} = U \cup V$, where $U$ and $V$ are two non-empty disjoint sets. Set $ {\mathfrak{a}} = \bigcap_{i\in U} {\mathfrak{p}}_i$ and ${\mathfrak{c}} = \bigcap_{i\in V} {\mathfrak{p}}_i$. Let $L := \ann_G\left({\mathfrak{a}} R[x,f]\right) \in \mathcal{A}(G)$. Then \begin{enumerate} \item $0 \subset L \subset N$ (the symbol `$\subset$' is reserved to denote strict inclusion); \item $N/L = \ann_{G/L}\left({\mathfrak{c}} R[x,f]\right) \in \mathcal{A}(G/L)$ with corresponding ideal ${\mathfrak{c}} \in \mathcal{I}(G/L)$; and \item $\grann_{R[x,f]}L = {\mathfrak{a}} R[x,f]$, so that ${\mathfrak{a}} \in \mathcal{I}(G)$ corresponds to $L$. 
\end{enumerate} \end{thm} \begin{proof} (i) It is clear that $ L \subseteq N$. Suppose that $L = 0$ and seek a contradiction. Let $g \in N$. Let $i,j \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ and $r \in {\mathfrak{c}}$, $u \in {\mathfrak{a}}$. Then $ux^i(rx^jg) = ur^{p^i}x^{i+j}g = 0$ because $ ur^{p^i} \in {\mathfrak{b}}$ and ${\mathfrak{b}} x^{i+j}$ annihilates $N$. This is true for all $i \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ and $u \in {\mathfrak{a}}$. Therefore $ rx^jg \in \ann_G\left(\bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}{\mathfrak{a}} x^n\right) = L = 0. $ It follows that $\bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}{\mathfrak{c}} x^n \subseteq \grann_{R[x,f]}N = \bigoplus _{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi} {\mathfrak{b}} x^n$, so that ${\mathfrak{c}} \subseteq {\mathfrak{b}}$. But ${\mathfrak{b}} \subseteq {\mathfrak{c}}$, and so ${\mathfrak{c}} = {\mathfrak{b}}$. However, this contradicts the fact that ${\mathfrak{b}} = {\mathfrak{p}}_1 \cap \ldots \cap {\mathfrak{p}}_t$ is the unique minimal primary decomposition of ${\mathfrak{b}}$. Therefore $L \neq 0$. Now suppose that $L = N$ and again seek a contradiction. Then $\bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}{\mathfrak{a}} x^n \subseteq \grann_{R[x,f]}N = \bigoplus _{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi} {\mathfrak{b}} x^n$, so that ${\mathfrak{a}} \subseteq {\mathfrak{b}}$. But ${\mathfrak{b}} \subseteq {\mathfrak{a}}$, and so ${\mathfrak{a}} = {\mathfrak{b}}$, and this again leads to a contradiction. Therefore $L \neq N$. (ii) Since ${\mathfrak{b}} \subseteq {\mathfrak{a}}$, it is immediate from Proposition \ref{ga.3}(i) that $N/L = \ann_{G/L}\left(({\mathfrak{b}}:{\mathfrak{a}})R[x,f]\right) \in \mathcal{A}(G/L)$ and that the ideal $({\mathfrak{b}}:{\mathfrak{a}}) \in \mathcal{I}(G/L)$ corresponds to $N/L$. 
However, it follows from Lemma \ref{ga.2}(ii) that $({\mathfrak{b}}:{\mathfrak{a}}) = \bigcap_{i\in V} {\mathfrak{p}}_i = {\mathfrak{c}}$. (iii) Let ${\mathfrak{d}} \in \mathcal{I}(G)$ correspond to $L$. Note that ${\mathfrak{a}} = \bigcap_{i\in U} {\mathfrak{p}}_i \subseteq {\mathfrak{d}}$. By Proposition \ref{ga.3}(i), the ideal in $\mathcal{I}(G/L)$ corresponding to $N/L$ is $({\mathfrak{b}}:{\mathfrak{d}})$. Therefore, by part (ii), we have $({\mathfrak{b}}:{\mathfrak{d}}) = {\mathfrak{c}}$. But, by Proposition \ref{ga.3}(ii), the ideal in $\mathcal{I}(G)$ corresponding to $N$ is ${\mathfrak{d}}\cap{\mathfrak{c}}$. Therefore ${\mathfrak{b}} = {\mathfrak{d}} \cap {\mathfrak{c}}$, and so ${\mathfrak{d}} \neq R$. Now ${\mathfrak{d}}$ is a radical ideal of $R$. By Lemma \ref{ga.2}(i), each ${\mathfrak{p}}_j$, for $j \in U$, is an associated prime of ${\mathfrak{d}}$. Hence ${\mathfrak{d}} \subseteq \bigcap_{j\in U} {\mathfrak{p}}_j = {\mathfrak{a}}$. But we already know that ${\mathfrak{a}} \subseteq {\mathfrak{d}}$, and so ${\mathfrak{d}} = {\mathfrak{a}}$. \end{proof} \begin{cor} \label{ga.10} Suppose that $G$ is $x$-torsion-free. Then the set of $G$-special $R$-ideals is precisely the set of all finite intersections of prime $G$-special $R$-ideals (provided one includes the empty intersection, $R$, which corresponds to the zero special annihilator submodule of $G$). In symbols, $$ \mathcal{I}(G) = \left\{ {\mathfrak{p}}_1 \cap \ldots \cap {\mathfrak{p}}_t : t \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi \mbox{~and~} {\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_t \in \mathcal{I}(G)\cap\Spec(R)\right\}. $$ \end{cor} \begin{proof} By Corollary \ref{nt.9}, the set $\mathcal{I}(G)$ is closed under taking intersections. A proper ideal ${\mathfrak{a}} \in \mathcal{I}(G)$ is radical and it follows from Theorem \ref{ga.4} that each (necessarily prime) primary component of ${\mathfrak{a}}$ also belongs to $\mathcal{I}(G)$. This is enough to complete the proof. 
\end{proof} \begin{lem} \label{ga.11} Suppose that $G$ is $x$-torsion-free. Let ${\mathfrak{p}}$ be a maximal member of $\mathcal{I}(G) \setminus \{R\}$ with respect to inclusion, and let $L \in \mathcal{A}(G)$ be the corresponding special annihilator submodule of $G$. Thus $L$ is a minimal member of the set of non-zero special annihilator submodules of $G$. Then ${\mathfrak{p}}$ is prime, and any non-zero $g\in L$ satisfies $\grann_{R[x,f]}R[x,f]g = {\mathfrak{p}} R[x,f]$. \end{lem} \begin{proof} It follows from Corollary \ref{ga.10} that ${\mathfrak{p}}$ is prime. Since $R[x,f]g$ is a non-zero $R[x,f]$-submodule of $L$, there is a proper radical ideal ${\mathfrak{a}} \in \mathcal{I}(G)$ such that $$ {\mathfrak{a}} R[x,f] = \grann_{R[x,f]}R[x,f]g \supseteq \grann_{R[x,f]}L = {\mathfrak{p}} R[x,f]. $$ Since ${\mathfrak{p}}$ is a maximal member of $\mathcal{I}(G) \setminus \{R\}$, we must have ${\mathfrak{a}} = {\mathfrak{p}}$. \end{proof} Our next major aim is to show that, in the situation of Corollary \ref{ga.10}, the set $\mathcal{I}(G)$ is finite if $G$ has the property that, for each special annihilator submodule $L$ of $G$ (including $0 = \ann_GR[x,f]$), the $x$-torsion-free residue class module $G/L$ (see Lemma \ref{ga.1}) does not contain, as an $R[x,f]$-submodule, an infinite direct sum of non-zero special annihilator submodules of $G/L$. This may seem rather a complicated hypothesis, and so we point out now that it is satisfied if $G$ is a Noetherian or Artinian left $R[x,f]$-module, and therefore if $G$ is a Noetherian or Artinian $R$-module. These ideas will be applied, later in the paper, to an example in which $G$ is Artinian as an $R$-module. The following lemma will be helpful in an inductive argument in the proof of Theorem \ref{ga.12}.
\begin{lem} \label{ga.12p} Suppose that $G$ is $x$-torsion-free, and that the set $\mathcal{I}(G) \setminus \{R\}$ is non-empty and has finitely many maximal members: suppose that there are $n$ of these and denote them by ${\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_n$. (The ideals ${\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_n$ are prime, by\/ {\rm \ref{ga.11}}.) Let $L := \ann_G\left(\left({\mathfrak{p}}_1 \cap \cdots \cap {\mathfrak{p}}_n\right)R[x,f]\right)$. Then the left $R[x,f]$-module $G/L$ is $x$-torsion-free, and $$ \mathcal{I}(G/L) \cap \Spec (R) = \mathcal{I}(G) \cap \Spec (R) \setminus \{{\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_n\}. $$ \end{lem} \begin{proof} Note that $\bigcap_{i=1}^n {\mathfrak{p}}_i \in \mathcal{I}(G)$, by Corollary \ref{nt.9}. Therefore $\grann_{R[x,f]}L = \left(\bigcap_{i=1}^n {\mathfrak{p}}_i\right)R[x,f]$ and $L$ corresponds to $\bigcap_{i=1}^n {\mathfrak{p}}_i$. That $G/L$ is $x$-torsion-free follows from Lemma \ref{ga.1}. By Proposition \ref{ga.3}(iii), $$ \mathcal{A}(G/L) = \left\{ N/L : N \in \mathcal{A}(G) \mbox{~and~} L \subseteq N \right\}. $$ Let $N \in \mathcal{A}(G)$ with $L \subset N$, and let ${\mathfrak{b}} \in \mathcal{I}(G)$ correspond to $N$. Note that ${\mathfrak{b}} \subset \bigcap_{i=1}^n {\mathfrak{p}}_i$, and that no associated prime of ${\mathfrak{b}}$ can contain properly any of ${\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_n$. Therefore the minimal primary decomposition of the radical ideal ${\mathfrak{b}}$ will have the form $$ {\mathfrak{b}} = \left( {\textstyle \bigcap_{i\in I}} {\mathfrak{p}}_i \right) \cap {\mathfrak{q}}_1 \cap \ldots \cap {\mathfrak{q}}_v, $$ where $I$ is some (possibly empty) subset of $\{1, \ldots, n\}$ and none of ${\mathfrak{q}}_1, \ldots, {\mathfrak{q}}_v$ contains any of ${\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_n$. Note that ${\mathfrak{q}}_1, \ldots, {\mathfrak{q}}_v$ must all belong to $\mathcal{I}(G) \cap \Spec (R) \setminus \{{\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_n\}$.
Proposition \ref{ga.3}(i), this time used in conjunction with Lemma \ref{ga.2}(ii), now shows that $N/L \in \mathcal{A}(G/L)$ and the ideal of $\mathcal{I}(G/L)$ corresponding to $N/L$ is $$ \left({\mathfrak{b}} : {\mathfrak{p}}_1 \cap \cdots \cap {\mathfrak{p}}_n\right) = {\mathfrak{q}}_1 \cap \ldots \cap {\mathfrak{q}}_v. $$ Note also that, if ${\mathfrak{q}} \in \mathcal{I}(G) \cap \Spec (R) \setminus \{{\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_n\}$ and $$ J := \left\{ j \in \{1, \ldots, n\} : {\mathfrak{p}}_j \not\supset {\mathfrak{q}} \right\}, $$ then ${\mathfrak{c}} := \left(\bigcap_{j\in J} {\mathfrak{p}}_j \right) \cap {\mathfrak{q}} \in \mathcal{I}(G)$ and ${\mathfrak{c}} \subset \bigcap_{i=1}^n{\mathfrak{p}}_i$. It now follows from Corollary \ref{ga.10} that $$ \mathcal{I}(G/L) \cap \Spec (R) = \mathcal{I}(G) \cap \Spec (R) \setminus \{{\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_n\}, $$ as required. \end{proof} \begin{thm} \label{ga.12} Suppose that $G$ is $x$-torsion-free. Assume that $G$ has the property that, for each special annihilator submodule $L$ of $G$ (including $0 = \ann_GR[x,f]$), the $x$-torsion-free residue class module $G/L$ does not contain, as an $R[x,f]$-submodule, an infinite direct sum of non-zero special annihilator submodules of $G/L$. Then the set $\mathcal{I}(G)$ of $G$-special $R$-ideals is finite. \end{thm} \begin{proof} By Corollary \ref{ga.10}, it is enough for us to show that the set $\mathcal{I}(G)\cap \Spec (R)$ is finite; we may suppose that the latter set is not empty, so that it has maximal members with respect to inclusion. In the first part of the proof, we show that $\mathcal{I}(G)\cap \Spec (R)$ has only finitely many such maximal members. Let $\left({\mathfrak{p}}_{\lambda}\right)_{\lambda \in \Lambda}$ be a labelling of the set of maximal members of $\mathcal{I}(G)\cap \Spec (R)$, arranged so that ${\mathfrak{p}}_{\lambda} \neq {\mathfrak{p}}_{\mu}$ whenever $\lambda$ and $\mu$ are different elements of $\Lambda$. 
For each $\lambda \in \Lambda$, let $S_{\lambda}$ be the member of $\mathcal{A}(G)$ corresponding to ${\mathfrak{p}}_{\lambda}$. Consider $\lambda, \mu \in \Lambda$ with $\lambda \neq \mu$. By Lemma \ref{ga.11}, a non-zero $g \in S_{\lambda} \cap S_{\mu}$ would have to satisfy $\grann_{R[x,f]}R[x,f]g = {\mathfrak{p}}_{\lambda} R[x,f] = {\mathfrak{p}}_{\mu} R[x,f]$. Since ${\mathfrak{p}}_{\lambda} \neq {\mathfrak{p}}_{\mu}$, this is impossible. Therefore $S_{\lambda} \cap S_{\mu} = 0$ and the sum $S_{\lambda} + S_{\mu}$ is direct. Suppose, inductively, that $n \in \mathbb N$ and we have shown that, whenever $\lambda_1, \ldots, \lambda_n$ are $n$ distinct members of $\Lambda$, then the sum $\sum_{i=1}^nS_{\lambda_i}$ is direct. We can now use Lemma \ref{ga.11} to see that, if $g_i \in S_{\lambda_i}$ for $i = 1, \ldots, n$, then $$ \grann_{R[x,f]}R[x,f](g_1 + \cdots + g_n) = \bigcap_{\stackrel{\scriptstyle i=1}{g_i \neq 0}}^n{\mathfrak{p}}_{\lambda_i}R[x,f], $$ and then to deduce that, for $\lambda_{n+1} \in \Lambda \setminus\{\lambda_1, \ldots, \lambda_n\}$, we must have $ \left( \bigoplus_{i=1}^n S_{\lambda_i} \right) \bigcap S_{\lambda_{n+1}} = 0, $ so that the sum $S_{\lambda_1} + \cdots + S_{\lambda_n} + S_{\lambda_{n+1}}$ is direct. It follows that the sum $\sum_{\lambda \in \Lambda}S_{\lambda}$ is direct; since each $S_{\lambda}$ is non-zero, the hypothesis about $G/0$ (that is, about $G$) ensures that $\Lambda$ is finite. We have thus shown that $\mathcal{I}(G)\cap \Spec (R)$ has only finitely many maximal members. Note that $\max\{ \height {\mathfrak{p}} : {\mathfrak{p}} \mbox{~is a maximal member of~} \mathcal{I}(G)\cap \Spec (R) \}$ is an upper bound for the lengths of chains $$ {\mathfrak{p}}_0 \subset {\mathfrak{p}}_1 \subset \cdots \subset {\mathfrak{p}}_w $$ of prime ideals in $\mathcal{I}(G)\cap \Spec (R)$. We argue by induction on the maximum $t$ of these lengths. 
When $t = 0$, all members of $\mathcal{I}(G)\cap \Spec (R)$ are maximal members of that set, and so, by the first part of this proof, $\mathcal{I}(G)\cap \Spec (R)$ is finite. Now suppose that $t > 0$, and that it has been proved that $\mathcal{I}(G)\cap \Spec (R)$ is finite for smaller values of $t$. We know that there are only finitely many maximal members of $\mathcal{I}(G)\cap \Spec (R)$; suppose that there are $n$ of these and denote them by ${\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_n$. Let $L := \ann_G\left(\left({\mathfrak{p}}_1 \cap \cdots \cap {\mathfrak{p}}_n\right)R[x,f]\right)$. We can now use Lemma \ref{ga.12p} to deduce that the left $R[x,f]$-module $G/L$ is $x$-torsion-free and $$ \mathcal{I}(G/L) \cap \Spec (R) = \mathcal{I}(G) \cap \Spec (R) \setminus \{{\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_n\}. $$ It follows from this and Proposition \ref{ga.3}(ii) that the inductive hypothesis can be applied to $G/L$, and so we can deduce that the set $$\mathcal{I}(G) \cap \Spec (R) \setminus \{{\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_n\}$$ is finite. Hence $\mathcal{I}(G) \cap \Spec (R)$ is a finite set and the inductive step is complete. \end{proof} \begin{cor} \label{ga.12c} Suppose that the left $R[x,f]$-module $G$ is $x$-torsion-free and either Artinian or Noetherian as an $R$-module. Then the set $\mathcal{I}(G)$ of $G$-special $R$-ideals is finite. \end{cor} \begin{thm} \label{ga.13} Suppose that $G$ is $x$-torsion-free and that the set $\mathcal{I}(G)$ of $G$-special $R$-ideals is finite. Then there exists a (uniquely determined) ideal ${\mathfrak{b}} \in \mathcal{I}(G)$ with the properties that $\height {\mathfrak{b}} \geq 1$ (the improper ideal $R$ is considered to have infinite height) and ${\mathfrak{b}} \subset {\mathfrak{c}}$ for every other ideal ${\mathfrak{c}} \in \mathcal{I}(G)$ with $\height {\mathfrak{c}} \geq 1$.
Furthermore, for $g \in G$, the following statements are equivalent: \begin{enumerate} \item $g$ is annihilated by ${\mathfrak{b}} R[x,f] = \bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}{\mathfrak{b}} x^n$; \item there exists $c \in R^{\circ}\cap {\mathfrak{b}}$ such that $cx^ng = 0$ for all $n \gg 0$; \item there exists $c \in R^{\circ}$ such that $cx^ng = 0$ for all $n \gg 0$. \end{enumerate} \end{thm} \begin{proof} By Corollary \ref{ga.10}, we have $$ \mathcal{I}(G) = \left\{ {\mathfrak{p}}_1 \cap \ldots \cap {\mathfrak{p}}_t : t \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi \mbox{~and~} {\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_t \in \mathcal{I}(G)\cap\Spec(R)\right\}. $$ Since $\mathcal{I}(G)$ is finite, it is immediate that $$ {\mathfrak{b}} := \bigcap_{\stackrel{\scriptstyle {\mathfrak{p}} \in \mathcal{I}(G)\cap\Spec(R)}{\height {\mathfrak{p}} \geq 1}}{\mathfrak{p}} $$ is the smallest ideal in $\mathcal{I}(G)$ of height greater than $0$. Since $\height {\mathfrak{b}} \geq 1$, so that there exists $c \in {\mathfrak{b}} \cap R^{\circ}$ by prime avoidance, it is clear that (i) $\Rightarrow$ (ii) and (ii) $\Rightarrow$ (iii). (iii) $\Rightarrow$ (i) Let $n_0 \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ and $c \in R^{\circ}$ be such that $cx^ng = 0$ for all $n \geq n_0$. Then, for all $j \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$, we have $x^{n_0}cx^jg = c^{p^{n_0}}x^{n_0 + j}g = 0$, so that $cx^jg = 0$ because $G$ is $x$-torsion-free. Therefore $g \in \ann_G(RcR[x,f])$. Now $\ann_G(RcR[x,f])\in \mathcal{A}(G)$: let ${\mathfrak{a}} \in \mathcal{I}(G)$ be the corresponding $G$-special $R$-ideal. Since $c \in {\mathfrak{a}}$, we must have $\height {\mathfrak{a}} \geq 1$. Therefore ${\mathfrak{b}} \subseteq {\mathfrak{a}}$, by definition of ${\mathfrak{b}}$, and so $$ g \in \ann_G(RcR[x,f]) = \ann_G({\mathfrak{a}} R[x,f]) \subseteq \ann_G({\mathfrak{b}} R[x,f]). 
$$ \end{proof} Corollary \ref{ga.12c} and Theorem \ref{ga.13} give hints about how this work will be exploited, in Section \ref{tc} below, to obtain results in the theory of tight closure. The aim is to apply Corollary \ref{ga.12c} and Theorem \ref{ga.13} to $H^d_{{\mathfrak{m}}}(R)/\Gamma_x(H^d_{{\mathfrak{m}}}(R))$, where $(R,{\mathfrak{m}})$ is a local ring of dimension $d > 0$; the local cohomology module $H^d_{{\mathfrak{m}}}(R)$, which is well known to be Artinian as an $R$-module, carries a natural structure as a left $R[x,f]$-module. The passage between $H^d_{{\mathfrak{m}}}(R)$ and its $x$-torsion-free residue class $R[x,f]$-module $H^d_{{\mathfrak{m}}}(R)/\Gamma_x(H^d_{{\mathfrak{m}}}(R))$ is facilitated by the following extension, due to G. Lyubeznik, of a result of R. Hartshorne and R. Speiser. It shows that, when $R$ is local, an $x$-torsion left $R[x,f]$-module which is Artinian (that is, `cofinite' in the terminology of Hartshorne and Speiser) as an $R$-module exhibits a certain uniformity of behaviour. \begin{thm} [G. Lyubeznik {\cite[Proposition 4.4]{Lyube97}}] \label{hs.4} {\rm (Compare Hartshorne--Speiser \cite[Proposition 1.11]{HarSpe77}.)} Suppose that $(R,{\mathfrak{m}})$ is local, and let $H$ be a left $R[x,f]$-module which is Artinian as an $R$-module. Then there exists $e \in \mathbb N_0$ such that $x^e\Gamma_x(H) = 0$. \end{thm} Hartshorne and Speiser first proved this result in the case where $R$ is local and contains its residue field, which is assumed to be perfect. Lyubeznik applied his theory of $F$-modules to obtain the result without restriction on the local ring $R$ of characteristic $p$. \begin{defi} \label{hslno} Suppose that $(R,{\mathfrak{m}})$ is local, and let $H$ be a left $R[x,f]$-module which is Artinian as an $R$-module.
By the Hartshorne--Speiser--Lyubeznik Theorem \ref{hs.4}, there exists $e \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ such that $x^e\Gamma_x(H) = 0$: we call the smallest such $e$ the {\em Hartshorne--Speiser--Lyubeznik number\/}, or {\em HSL-number\/} for short, of $H$. \end{defi} It will be helpful to have available an extension of this idea. \begin{defi} \label{ga.14} We say that the left $R[x,f]$-module $H$ {\em admits an HSL-number\/} if there exists $e \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ such that $x^e\Gamma_x(H) = 0$; then we call the smallest such $e$ the {\em HSL-number\/} of $H$. \end{defi} We have seen above in \ref{hs.4} and \ref{hslno} that if $H$ is Artinian as an $R$-module, then it admits an HSL-number. Note also that if $H$ is Noetherian as an $R$-module, then it admits an HSL-number, because $\Gamma_x(H)$ is an $R[x,f]$-submodule of $H$, and so is an $R$-submodule and therefore finitely generated. \begin{cor} \label{ga.15} Suppose that the left $R[x,f]$-module $H$ admits an HSL-number $m_0$, and that the $x$-torsion-free left $R[x,f]$-module $G := H/\Gamma_x(H)$ has only finitely many $G$-special $R$-ideals. Let ${\mathfrak{b}}$ be the smallest ideal in $\mathcal{I}(G)$ of positive height (see\/ {\rm \ref{ga.13}}). For $h \in H$, the following statements are equivalent: \begin{enumerate} \item $h$ is annihilated by $\bigoplus_{n\geq m_0}{\mathfrak{b}}^{[p^{m_0}]} x^n$; \item there exists $c \in R^{\circ}\cap {\mathfrak{b}}$ such that $cx^nh = 0$ for all $n \geq m_0$; \item there exists $c \in R^{\circ}\cap {\mathfrak{b}}$ such that $cx^nh = 0$ for all $n \gg 0$; \item there exists $c \in R^{\circ}$ such that $cx^nh = 0$ for all $n \gg 0$. \end{enumerate} \end{cor} \begin{proof} Since ${\mathfrak{b}} \cap R^{\circ} \neq \emptyset$ by prime avoidance, it is clear that (i) $\Rightarrow$ (ii), (ii) $\Rightarrow$ (iii) and (iii) $\Rightarrow$ (iv).
(iv) $\Rightarrow$ (i) Since $cx^n(h + \Gamma_x(H)) = 0$ in $G$ for all $n \gg 0$, it follows from Theorem \ref{ga.13} that $h + \Gamma_x(H)$ is annihilated by ${\mathfrak{b}} R[x,f]$. Therefore, for all $r \in {\mathfrak{b}}$ and $j \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$, we have $rx^j(h + \Gamma_x(H)) = 0$, so that $rx^jh \in \Gamma_x(H)$ and $r^{p^{m_0}}x^{m_0+j}h = x^{m_0}rx^jh = 0$. Therefore $h \in \ann_H \left(\bigoplus_{n\geq m_0}{\mathfrak{b}}^{[p^{m_0}]} x^n\right)$. \end{proof} \section{\sc Applications to tight closure} \label{tc} The aim of this section is to apply results from Section \ref{ga} to the theory of tight closure in the local ring $(R,{\mathfrak{m}})$ of dimension $d > 0$. As was mentioned in Section \ref{ga}, we shall be concerned with the top local cohomology module $H^d_{{\mathfrak{m}}}(R)$, which has a natural structure as a left $R[x,f]$-module, and its $x$-torsion-free residue class module $H^d_{{\mathfrak{m}}}(R)/\Gamma_x(H^d_{{\mathfrak{m}}}(R))$. The (well-known) left $R[x,f]$-module structure carried by $H^d_{{\mathfrak{m}}}(R)$ is described in detail in \cite[2.1 and 2.3]{KS}. \begin{rmd} \label{tc.1} Suppose that $(R,{\mathfrak{m}})$ is a local ring of dimension $d > 0$. The above-mentioned natural left $R[x,f]$-module structure carried by $H^d_{{\mathfrak{m}}}(R)$ is independent of any choice of a system of parameters for $R$. However, if one does choose a system of parameters $a_1, \ldots, a_d$ for $R$, then one can obtain a quite concrete representation of the local cohomology module $H^d_{{\mathfrak{m}}}(R)$ and, through this, an explicit formula for the effect of multiplication by the indeterminate $x \in R[x,f]$ on an element of $H^d_{{\mathfrak{m}}}(R)$. Denote by $a_1, \ldots, a_d$ a system of parameters for $R$. 
\begin{enumerate} \item Represent $H^d_{{\mathfrak{m}}}(R)$ as the $d$-th cohomology module of the \u{C}ech complex of $R$ with respect to $a_1, \ldots, a_d$, that is, as the residue class module of $R_{a_1 \ldots a_d}$ modulo the image, under the \u{C}ech complex `differentiation' map, of $\bigoplus_{i=1}^dR_{a_1 \ldots a_{i-1}a_{i+1}\ldots a_d}$. See \cite[\S 5.1]{LC}. We use `$\left[\phantom{=} \right]$' to denote natural images of elements of $R_{a_1\ldots a_d}$ in this residue class module. Note that, for $i \in \{1, \ldots, d\}$, we have $$ \left[\frac{a_i^k}{(a_1 \ldots a_d)^k}\right] = 0 \quad \mbox{~for all~} k \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi. $$ Denote the product $a_1 \ldots a_d$ by $a$. A typical element of $H^d_{{\mathfrak{m}}}(R)$ can be represented as $ \left[r/a^j\right]$ for some $r \in R$ and $j \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$; moreover, for $r, r_1 \in R$ and $j, j_1 \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$, we have $ \left[r/a^j\right] = \left[r_1/a^{j_1}\right] $ if and only if there exists $k \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ such that $k \geq \max\{j,j_1\}$ and $ a^{k-j}r - a^{k-j_1}r_1 \in (a_1^k, \ldots, a_d^k)R. $ In particular, if $a_1, \ldots, a_d$ form an $R$-sequence (that is, if $R$ is Cohen--Macaulay), then $ \left[r/a^j\right] = 0$ if and only if $r \in (a_1^j, \ldots, a_d^j)R$, by \cite[Theorem 3.2]{O'Car83}, for example. \item The left $R[x,f]$-module structure on $H^d_{{\mathfrak{m}}}(R)$ is such that $$ x\left[\frac{r}{(a_1\ldots a_d)^j}\right] = \left[\frac{r^p}{(a_1\ldots a_d)^{jp}}\right] \quad \mbox{~for all~} r \in R \mbox{~and~} j \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi. $$ The reader might like to consult \cite[2.3]{KS} for more details, and should in any case note that this left $R[x,f]$-module structure does not depend on the choice of system of parameters $a_1, \ldots, a_d$. 
\end{enumerate} \end{rmd} \begin{rmk} \label{tc.1r} Let the situation and notation be as in \ref{tc.1}. Here we relate the left $R[x,f]$-module structure on $H := H^d_{{\mathfrak{m}}}(R)$ described in \ref{tc.1} to the tight closure in $H$ of its zero submodule. See \cite[Definition (8.2)]{HocHun90} for the definition of the tight closure in an $R$-module of one of its submodules. Let $n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$. \begin{enumerate} \item The $n$-th component $Rx^n$ of $R[x,f]$ is isomorphic, as an $(R,R)$-bimodule, to $R$ considered as a left $R$-module in the natural way and as a right $R$-module via $f^n$, the $n$-th power of the Frobenius ring homomorphism. Let $L$ be a submodule of the $R$-module $M$. It follows that an element $m \in M$ belongs to $L^*_M$, the {\em tight closure of $L$ in $M$\/}, if and only if there exists $c \in R^\circ$ such that $cx^n \otimes m$ belongs, for all $n \gg 0$, to the image of $R[x,f]\otimes_R L$ in $R[x,f]\otimes_R M$ under the map induced by inclusion. \item Let $S$ be a multiplicatively closed subset of $R$. It is straightforward to check that there is an isomorphism of $R$-modules $$\gamma_n: Rx^n \otimes_R S^{-1}R \stackrel{\cong}{\longrightarrow} S^{-1}R$$ for which $\gamma_n ( bx^n \otimes (r/s) ) = br^{p^n}/s^{p^n}$ for all $b,r \in R$ and $s \in S$; the inverse of $\gamma_n$ satisfies $(\gamma_n)^{-1} (r/s) = rs^{p^n-1}x^n \otimes (1/s)$ for all $r \in R$ and $s \in S$. \item Now represent $H := H^d_{{\mathfrak{m}}}(R)$ as the $d$-th cohomology module of the \u{C}ech complex of $R$ with respect to the system of parameters $a_1, \ldots, a_d$, as in \ref{tc.1}(i). 
We can use isomorphisms like that described in part (ii), together with the right exactness of tensor product, to see that (when we think of $H$ simply as an $R$-module) there is an isomorphism of $R$-modules $\delta_n: Rx^n \otimes_R H \stackrel{\cong}{\longrightarrow} H$ for which $$ \delta_n \left( bx^n \otimes \left[\frac{r}{(a_1\ldots a_d)^j}\right] \right) = \left[\frac{br^{p^n}}{(a_1\ldots a_d)^{jp^n}}\right] \quad \mbox{~for all~} b,r \in R \mbox{~and~} j \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi. $$ Thus, in terms of the natural left $R[x,f]$-module structure on $H$, we have $\delta_n \left( bx^n \otimes h\right) = bx^nh$ for all $b \in R$ and $h \in H$. \item It thus follows that, for $h \in H$, we have $h \in 0^*_{H}$ if and only if there exists $c \in R^{\circ}$ such that $cx^nh = 0$ for all $n \gg 0$. \item Observe that $\Gamma_x(H) \subseteq 0^*_H$. \item Suppose that $(R,{\mathfrak{m}})$ is Cohen--Macaulay, and use the notation of part (iii) again; write $a := a_1\ldots a_d$. Let $r \in R$, $j \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$, and let $h := \left[r/a^j\right]$ in $H$. It follows from \ref{tc.1}(i) that $r \in ((a_1^j, \ldots, a_d^j)R)^*$ if and only if there exists $c \in R^{\circ}$ such that $cx^nh = 0$ for all $n \gg 0$. Thus, by part (iv) above, $r \in ((a_1^j, \ldots, a_d^j)R)^*$ if and only if $h \in 0^*_{H}$. \item Let the situation and notation be as in part (vi) above. Then the $R$-homomorphism $\nu_j : R/(a_1^j, \ldots, a_d^j)R \longrightarrow H$ for which $\nu_j(r' + (a_1^j, \ldots, a_d^j)R) = \left[r'/a^j\right]$ for all $r'\in R$ is a monomorphism (by \ref{tc.1}(i)). 
Furthermore, the induced homogeneous $R[x,f]$-homomorphism $$ R[x,f]\otimes_R\nu_j : R[x,f]\otimes_R\left(R/(a_1^j, \ldots, a_d^j)R\right) \longrightarrow R[x,f]\otimes_RH $$ of graded left $R[x,f]$-modules is also a monomorphism: this is because a homogeneous element of $\Ker \left(R[x,f]\otimes_R\nu_j\right)$ must have the form $r'x^k\otimes(1 + (a_1^j, \ldots, a_d^j)R)$ for some $r' \in R$ and $k \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$; since $r'x^k\otimes\left[1/a^j\right] = 0$, it follows from \ref{tc.1r}(iii) and \ref{tc.1}(i) that $r' \in (a_1^{jp^k}, \ldots, a_d^{jp^k})R$, so that $r'x^k\otimes(1 + (a_1^j, \ldots, a_d^j)R) = 0$. \end{enumerate} \end{rmk} \begin{lem} \label{tc.1s} Suppose that $(R,{\mathfrak{m}})$ is a local ring of dimension $d > 0$; set $H := H^d_{{\mathfrak{m}}}(R)$ and $G := H/\Gamma_x(H)$. Let $h \in H$. Then the following statements are equivalent: \begin{enumerate} \item $h \in 0^*_{H}$; \item $h + \Gamma_x(H) \in 0^*_{G}$; \item there exists $c \in R^{\circ}$ such that $cx^n(h+\Gamma_x(H)) = 0$ in $G$ for all $n \gg 0$; \item there exists $c \in R^{\circ}$ such that $cx^n(h+\Gamma_x(H)) = 0$ in $G$ for all $n \geq 0$. \end{enumerate} \end{lem} \begin{proof} Let $m_0$ denote the HSL-number of $H$ (see\/ {\rm \ref{hslno}}). (i) $\Rightarrow$ (ii) This is immediate from the fact that $0^*_{H} \subseteq (\Gamma_x(H))^*_H$ once it is recalled from \cite[Remark (8.4)]{HocHun90} that $h + \Gamma_x(H) \in 0^*_G$ if and only if $h \in (\Gamma_x(H))^*_H$. (ii) $\Rightarrow$ (iii) Suppose that $h + \Gamma_x(H) \in 0^*_{G}$, so that $h \in (\Gamma_x(H))^*_H$. Under the isomorphism $\delta_n: Rx^n \otimes_R H \stackrel{\cong}{\longrightarrow}H$ of \ref{tc.1r}(iii) (where $n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$), the image of $Rx^n \otimes_R \Gamma_x(H)$ is mapped into $\Gamma_x(H)$. 
Therefore there exists $c \in R^{\circ}$ such that $cx^nh \in\Gamma_x(H)$ for all $n \gg 0$, that is, such that $cx^n(h+\Gamma_x(H)) = 0$ in $G$ for all $n \gg 0$. (iii) $\Rightarrow$ (iv) Suppose that there exist $c \in R^{\circ}$ and $n_0 \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ such that $cx^n(h+\Gamma_x(H)) = 0$ in $G$ for all $n \geq n_0$. Then, for all $j \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$, we have $$ x^{n_0}cx^j(h+\Gamma_x(H)) = c^{p^{n_0}}x^{n_0 + j}(h+\Gamma_x(H)) = 0 \quad \mbox{~in~} G. $$ Since $G$ is $x$-torsion-free, we see that $cx^j(h+\Gamma_x(H)) = 0$ for all $j \geq 0$. (iv) $\Rightarrow$ (i) Suppose that there exists $c \in R^{\circ}$ such that $cx^n(h+\Gamma_x(H)) = 0$ in $G$ for all $n \geq 0$. Then $cx^nh \in \Gamma_x(H)$ for all $n \geq 0$, so that $x^{m_0}cx^nh = 0$ for all $n \geq 0$. This implies that $c^{p^{m_0}}x^{m_0 + n}h = 0$ for all $n \geq 0$, so that $h \in 0^*_{H}$ by \ref{tc.1r}(iv). \end{proof} \begin{defi} \label{tc.1t} The {\em weak parameter test ideal $\sigma'(R)$ of $R$\/} is defined to be the ideal generated by $0$ and all weak parameter test elements for $R$. (By a `weak parameter test element' for $R$ we mean a $p^{w_0}$-weak parameter test element for $R$ for some $w_0 \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$.) It is easy to see that each element of $\sigma'(R) \cap R^{\circ}$ is a weak parameter test element for $R$. \end{defi} The next theorem is one of the main results of this paper. \begin{thm} \label{tc.2} Let $(R,{\mathfrak{m}})$ (as in\/ {\rm \ref{nt.1}}) be a Cohen--Macaulay local ring of dimension $d > 0$. Set $H := H^d_{{\mathfrak{m}}}(R)$, a left $R[x,f]$-module which is Artinian as an $R$-module; let $m_0$ be its HSL-number (see\/ {\rm \ref{hslno}}), and let $q_0 := p^{m_0}$. Set $G := H/\Gamma_x(H)$, an $x$-torsion-free left $R[x,f]$-module. 
By\/ {\rm \ref{ga.12c}} and\/ {\rm \ref{ga.13}}, there exists a (uniquely determined) smallest ideal ${\mathfrak{b}}$ of height at least $1$ in the set $\mathcal{I}(G)$ of $G$-special $R$-ideals. Let $c$ be any element of ${\mathfrak{b}} \cap R^{\circ}$. Then $c^{q_0}$ is a $q_0$-weak parameter test element for $R$. In particular, $R$ has a $q_0$-weak parameter test element. In fact, the weak parameter test ideal $\sigma'(R)$ of $R$ satisfies ${\mathfrak{b}}^{[q_0]} \subseteq \sigma'(R) \subseteq {\mathfrak{b}}$. \end{thm} \begin{note} It should be noted that, in Theorem \ref{tc.2}, it is not assumed that $R$ is excellent. There are examples of Gorenstein local rings of characteristic $p$ which are not excellent: see \cite[p.\ 260]{HMold}. \end{note} \begin{proof} We have to show that, for an arbitrary parameter ideal ${\mathfrak{a}}$ of $R$ and $r \in {\mathfrak{a}}^*$, we have $c^{q_0}r^{p^n} \in {\mathfrak{a}}^{[p^n]}$ for all $n \geq m_0$. In the first part of the proof, we establish this in the case where ${\mathfrak{a}}$ is an ideal ${\mathfrak{q}}$ generated by a full system of parameters $a_1, \ldots, a_d$ for $R$. Let $r \in {\mathfrak{q}}^*$, so that there exists $\widetilde{c} \in R^{\circ}$ such that $\widetilde{c}r^{p^n} \in {\mathfrak{q}}^{[p^n]}$ for all $n \gg 0$. Use $a_1, \ldots, a_d$ in the notation of \ref{tc.1}(i) for $H^d_{{\mathfrak{m}}}(R) = H$, and write $a := a_1\ldots a_d$. We have $ \widetilde{c}x^n\left[r/a\right] = \left[\widetilde{c}r^{p^n}\!/a^{p^n}\right] = 0 $ in $H$ for all $n \gg 0$. Set $h := \left[r/a\right] \in H$. Thus $\widetilde{c}x^nh = 0$ for all $n \gg 0$. It therefore follows from Corollary \ref{ga.15} that $h$ is annihilated by $ \bigoplus_{n\geq m_0}{\mathfrak{b}}^{[p^{m_0}]} x^n$, so that, in particular, $c^{p^{m_0}}x^nh = 0$ for all $n \geq m_0$. 
Hence, in $H$, $$ \left[\frac{c^{q_0}r^{p^{n}}}{(a_1 \ldots a_d)^{p^{ n}}}\right] = c^{p^{m_0}}x^{n}\left[\frac{r}{a_1 \ldots a_d}\right] = c^{p^{m_0}}x^n h = 0 \quad \mbox{~for all~} n \geq m_0. $$ Since $R$ is Cohen--Macaulay, we can now deduce from \ref{tc.1}(i) that $c^{q_0}r^{p^n} \in {\mathfrak{q}}^{[p^n]}$ for all $n \geq m_0$, as required (for ${\mathfrak{q}}$). Now let ${\mathfrak{a}}$ be an arbitrary parameter ideal of $R$. A proper ideal in a Cohen--Macaulay local ring is a parameter ideal if and only if it can be generated by part of a system of parameters. In view of the first part of this proof, we can, and do, assume that $\height {\mathfrak{a}} < d$. There exist a system of parameters $a_1, \ldots, a_d$ for $R$ and an integer $i \in \{0, \ldots, d-1\}$ such that ${\mathfrak{a}} = (a_1,\ldots, a_i)R$. Let $r \in {\mathfrak{a}}^*$. Then, for each $v \in \mathbb N$, we have $r \in ((a_1,\ldots, a_i,a_{i+1}^v, \ldots, a_d^v)R)^*$, and, since $a_1,\ldots, a_i,a_{i+1}^v, \ldots, a_d^v$ is a system of parameters for $R$, it follows from the first part of this proof that $$ c^{q_0}r^{p^n} \in ((a_1,\ldots, a_i,a_{i+1}^v, \ldots, a_d^v)R)^{[p^n]} = \left(a_1^{p^n},\ldots, a_i^{p^n},a_{i+1}^{vp^n}, \ldots, a_d^{vp^n}\right)\!R \quad \mbox{~for all~} n \geq m_0. $$ Therefore, for all $n \geq m_0$, $$ c^{q_0}r^{p^n} \in \bigcap_{v \in \mathbb N} \left(a_1^{p^n},\ldots, a_i^{p^n},a_{i+1}^{vp^n}, \ldots, a_d^{vp^n}\right)\!R \subseteq \bigcap_{v \in \mathbb N} \left({\mathfrak{a}}^{[p^{n}]} + {\mathfrak{m}}^{vp^{n}} \right) = {\mathfrak{a}}^{[p^{n}]} $$ by Krull's Intersection Theorem. This shows that $c^{q_0}$ is a $q_0$-weak parameter test element for $R$, so that, since ${\mathfrak{b}}$ can be generated by elements in ${\mathfrak{b}} \cap R^{\circ}$, it follows that ${\mathfrak{b}}^{[q_0]} \subseteq \sigma'(R)$. Now let $c \in \sigma'(R)\cap R^{\circ}$; we suppose that $c \not\in {\mathfrak{b}}$ and seek a contradiction. 
Thus ${\mathfrak{b}} \subset {\mathfrak{b}} + Rc$. Let $L : = \ann_G({\mathfrak{b}} R[x,f])$ and $L' := \ann_G(({\mathfrak{b}} + Rc)R[x,f])$, two special annihilator submodules of the $x$-torsion-free left $R[x,f]$-module $G$. Since $L$ corresponds to the $G$-special $R$-ideal ${\mathfrak{b}}$, we must have $L' \subset L$, since otherwise we would have $$ ({\mathfrak{b}} + Rc)R[x,f] \subseteq \grann_{R[x,f]}L' = \grann_{R[x,f]}L = {\mathfrak{b}} R[x,f]. $$ Therefore there exists $h \in H$ such that $h + \Gamma_x(H)$ is annihilated by ${\mathfrak{b}} R[x,f]$ but not by $({\mathfrak{b}} + Rc) R[x,f]$. Since $\height {\mathfrak{b}} \geq 1$, it follows from Lemma \ref{tc.1s} that $h \in 0^*_H$. Choose a system of parameters $a_1, \ldots, a_d$ for $R$; use $a_1, \ldots, a_d$ in the notation of \ref{tc.1}(i) for $H^d_{{\mathfrak{m}}}(R) = H$, and write $a := a_1\ldots a_d$. There exist $r \in R$ and $j \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ such that $h = [r/a^j]$. By \ref{tc.1r}(vi), we have $r \in ((a_1^j, \ldots, a_d^j)R)^*$. Since $c$ is a weak parameter test element for $R$, we see that $cr^{p^n} \in (a_1^{jp^n}, \ldots, a_d^{jp^n})R$ for all $n \gg 0$, so that $cx^nh = 0$ for all $n \gg 0$. Thus there is some $n_0 \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$ such that $cx^n(h + \Gamma_x(H)) = 0$ in $G$ for all $n \geq n_0$. Therefore, for all $j \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi$, we have $$ x^{n_0}cx^j(h + \Gamma_x(H)) = c^{p^{n_0}}x^{n_0 + j}(h + \Gamma_x(H)) = 0, $$ so that $cx^j(h + \Gamma_x(H)) = 0$ because $G$ is $x$-torsion-free. Thus $h + \Gamma_x(H)$ is annihilated by $RcR[x,f]$ as well as by ${\mathfrak{b}} R[x,f]$. This is a contradiction. Therefore ${\mathfrak{b}}^{[q_0]} \subseteq \sigma'(R) \subseteq {\mathfrak{b}}$, since $\sigma'(R)$ can be generated by its elements that lie in $R^{\circ}$. \end{proof} Use $R'$ to denote $R$ (as in\/ {\rm \ref{nt.1}}) regarded as an $R$-module by means of $f$. 
With this notation, $f : R \longrightarrow R'$ becomes a homomorphism of $R$-modules. Recall that, when $(R,{\mathfrak{m}})$ is a local ring of dimension $d > 0$, we say that $R$ is {\em $F$-injective\/} precisely when the induced homomorphisms $ H^i_{{\mathfrak{m}}}(f) : H^i_{{\mathfrak{m}}}(R) \longrightarrow H^i_{{\mathfrak{m}}}(R')$ are injective for all $i = 0, \ldots, d$. See R. Fedder and K-i. Watanabe \cite[Definition 1.7]{FW87} and the ensuing discussion. \begin{cor} \label{tc.3} Let $(R,{\mathfrak{m}})$ be an $F$-injective Cohen--Macaulay local ring of dimension $d > 0$. The left $R[x,f]$-module $H := H^d_{{\mathfrak{m}}}(R)$ is $x$-torsion-free. By Theorem\/ {\rm \ref{ga.13}}, there exists a (uniquely determined) smallest ideal ${\mathfrak{b}}$ of height at least $1$ in the set $\mathcal{I}(H)$ of $H$-special $R$-ideals. Let $c$ be any element of ${\mathfrak{b}} \cap R^{\circ}$. Then $c$ is a parameter test element for $R$. In fact, ${\mathfrak{b}}$ is the parameter test ideal of $R$ (see\/ {\rm \cite[Definition 4.3]{Smith95}}). \end{cor} \begin{note} It should be noted that, in Corollary \ref{tc.3}, it is not assumed that $R$ is excellent. \end{note} \begin{proof} With the notation of Theorem \ref{tc.2}, the HSL-number $m_0$ of $H$ is $0$ when $R$ is $F$-injective, and so $q_0 = 1$ and $G \cong H$ in this case. By Theorem \ref{tc.2}, each element $c \in {\mathfrak{b}} \cap R^{\circ}$ is a $q_0$-weak parameter test element for $R$, that is, a parameter test element for $R$. Since $R$ has a parameter test element, its parameter test ideal $\sigma(R)$ is equal to the ideal of $R$ generated by all parameter test elements. By Theorem \ref{tc.2}, we therefore have $ {\mathfrak{b}} = {\mathfrak{b}}^{[q_0]} \subseteq \sigma(R) \subseteq \sigma'(R) \subseteq {\mathfrak{b}}. $ \end{proof} \begin{cor} \label{tc.4} Let $(R,{\mathfrak{m}})$ be an $F$-injective Gorenstein local ring of dimension $d > 0$. 
The left $R[x,f]$-module $H := H^d_{{\mathfrak{m}}}(R)$ is $x$-torsion-free. By Theorem\/ {\rm \ref{ga.13}}, there exists a (uniquely determined) smallest ideal ${\mathfrak{b}}$ of height at least $1$ in the set $\mathcal{I}(H)$ of $H$-special $R$-ideals. Let $c$ be any element of ${\mathfrak{b}} \cap R^{\circ}$. Then $c$ is a test element for $R$. In fact, ${\mathfrak{b}}$ is the test ideal of $R$. \end{cor} \begin{note} It should be noted that, in Corollary \ref{tc.4}, it is not assumed that $R$ is excellent. \end{note} \begin{proof} This follows immediately from Corollary \ref{tc.3} once it is recalled that an $F$-injective Cohen--Macaulay local ring is reduced and that a parameter test element for a reduced Gorenstein local ring $R$ of characteristic $p$ is automatically a test element for $R$: see the proof of \cite[Proposition 4.1]{Hunek98}. \end{proof} \section{\sc Special $R$-ideals and Enescu's $F$-stable primes} \label{en} The purpose of this section is to establish connections between the work in \S \ref{ga} and \S \ref{tc} above and F. Enescu's $F$-stable primes of an $F$-injective Cohen--Macaulay local ring $(R,{\mathfrak{m}})$, defined in \cite[\S 2]{Enesc03}. \begin{ntn} \label{en.1} Throughout this section, $(R,{\mathfrak{m}})$ will be assumed to be a Cohen--Macaulay local ring of dimension $d > 0$, and we shall let $a_1, \ldots, a_d$ denote a fixed system of parameters for $R$, and set ${\mathfrak{q}} := (a_1, \ldots, a_d)R$. We shall use $a_1, \ldots, a_d$ in the notation of \ref{tc.1}(i) for $H := H^d_{{\mathfrak{m}}}(R)$, and write $a := a_1\ldots a_d$. For each $b \in R$, we define (following Enescu \cite[Definition 1.1]{Enesc03}) the ideal ${\mathfrak{q}}(b)$ by $$ {\mathfrak{q}}(b) := \left\{ c \in R : cb^{p^n} \in {\mathfrak{q}}^{[p^n]} \mbox{~for all~} n \gg 0 \right\}. 
$$ (Actually, Enescu only made this definition when $b \not\in {\mathfrak{q}}$; however, the right-hand side of the above display is equal to $R$ when $b \in {\mathfrak{q}}$, and there is no harm in our defining ${\mathfrak{q}}(b)$ to be $R$ in this case.) In view of \ref{tc.1}(i), the ideal ${\mathfrak{q}}(b)$ is equal to the ultimate constant value of the ascending chain $({\mathfrak{b}}_n)_{n \in \relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}$ of ideals of $R$ for which $\bigoplus_{n\in\relax\ifmmode{\mathbb N_{0}}\else$\mathbb N_{0}$\fi}{\mathfrak{b}}_n x^n = \grann_{R[x,f]}R[x,f][b/a]$, the graded annihilator of the $R[x,f]$-submodule of $H$ generated by $[b/a]$. Now consider the special case in which $R$ is (also) $F$-injective. Then the left $R[x,f]$-module $H$ is $x$-torsion-free, and so it follows from Lemma \ref{nt.5} that, for each $b \in R$, the ideal ${\mathfrak{q}}(b)$ is radical and $\grann_{R[x,f]}R[x,f][b/a] = {\mathfrak{q}}(b)R[x,f]$; thus ${\mathfrak{q}}(b)$ is an $H$-special $R$-ideal. We again follow Enescu and set $$ Z_{{\mathfrak{q}},R} := \{ {\mathfrak{q}}(b) : b \in R \setminus {\mathfrak{q}} \}. $$ \end{ntn} Enescu proved, in \cite[Theorem 2.1]{Enesc03}, that (when $(R,{\mathfrak{m}})$ is Cohen--Macaulay and $F$-injective) the set of maximal members of $Z_{{\mathfrak{q}},R}$ is independent of the choice of ${\mathfrak{q}}$, is finite, and consists of prime ideals. The next theorem shows that the set of maximal members of $Z_{{\mathfrak{q}},R}$ is actually equal to the set of maximal members of $\mathcal{I}(H) \setminus \{R\}$: we saw in Lemma \ref{ga.11} that this set consists of prime ideals, and in Corollary \ref{ga.12c} that it is finite. \begin{thm} \label{en.2} Let the situation and notation be as in\/ {\rm \ref{en.1}}, and suppose that the Cohen--Macaulay local ring $(R,{\mathfrak{m}})$ is $F$-injective. 
Then the set of maximal members of $Z_{{\mathfrak{q}},R}$ is equal to the set of maximal members of $\mathcal{I}(H) \setminus \{R\}$. \end{thm} \begin{proof} The comments in \ref{en.1} show that $Z_{{\mathfrak{q}},R} \subseteq \mathcal{I}(H)$; clearly, no member of $Z_{{\mathfrak{q}},R}$ can be equal to $R$. It is therefore sufficient for us to show that a maximal member ${\mathfrak{p}}$ of $\mathcal{I}(H) \setminus \{R\}$ must belong to $Z_{{\mathfrak{q}},R}$. Let $L \in \mathcal{A}(H)$ be the special annihilator submodule of $H$ corresponding to ${\mathfrak{p}}$. Now $H$ is an Artinian $R$-module: let $h$ be a non-zero element of the socle of $L$. By Lemma \ref{ga.11}, we have $\grann_{R[x,f]}R[x,f]h = {\mathfrak{p}} R[x,f]$. However, for each $j \in \mathbb N$, we have $R[1/a^j] \cong R/(a_1^j, \ldots, a_d^j)R$, by \ref{tc.1r}(vii), so that $$ \Hom_R(R/{\mathfrak{m}},R[1/a^j]) \cong \Hom_R(R/{\mathfrak{m}},R/(a_1^j, \ldots, a_d^j)R) \cong \Ext^d_R(R/{\mathfrak{m}},R) $$ by \cite[Lemma 1.2.4]{BH}. It follows that it is possible to write $h$ in the form $h = [r/a]$ for some $r \in R$, and therefore ${\mathfrak{p}} = {\mathfrak{q}}(r) \in Z_{{\mathfrak{q}},R}$. \end{proof} \end{document}
\begin{document} \maketitle \begin{abstract} Almost all viruses, regardless of their genomic material, produce defective viral genomes (DVGs) as an unavoidable byproduct of their error-prone replication. Defective interfering (DI) elements are a subgroup of DVGs that have been shown to interfere with the replication of the wild-type (WT) virus. Along with DIs, other genetic elements known as satellite RNAs (satRNAs), which show no genetic relatedness to the WT virus, can co-infect cells with WT helper viruses and take advantage of viral proteins for their own benefit. These satRNAs have effects that range from reduced symptom severity to enhanced virulence. The interference dynamics of DIs on WT viruses have been thoroughly modelled at the within-cell, within-host, and population levels. However, nothing is known about the dynamics resulting from the nonlinear interactions between WT viruses and DIs in the presence of satellites, a situation that is frequently seen in plant RNA viruses and in biomedically relevant pathosystems such as hepatitis B virus and its $\delta$ satellite. Here, we investigate a phenomenological mathematical model that describes how a WT virus replicates and produces DIs in the presence of a satRNA at the intra-host level. The WT virus is subject to mechanisms of complementation, competition, and various levels of interference from the DIs and the satRNA. Examining the dynamics analytically and numerically reveals three possible stable states: (i) full extinction, (ii) satellite extinction and virus-DIs coexistence and (iii) full coexistence. Assuming that DIs replicate faster than the satRNA owing to their smaller size drives the system to scenario (ii), which implies that DIs could wipe out the satRNA. In addition, a small region of the parameter space exists wherein the system is bistable (scenarios (ii) and (iii) are concurrently stable).
We have identified transcritical bifurcations in the transitions between scenarios (i) to (iii) and saddle-node bifurcations behind the change from bistability to monostability. Despite the model's simplicity, our findings may have applications in biomedicine and agronomy. They cast light on the dynamics of this three-species system and aid in the identification of scenarios in which the clearance of the satRNA may be possible, thus allowing, \emph{e.g.}, for less severe disease symptoms. \vskip1cm \noindent\textit{Keywords:} Bifurcations; Complex systems; Defective interfering genomes; Dynamical systems; RNA satellites; Subviral particles. \end{abstract} \section{Introduction} Viruses are found infecting organisms from all realms of the Tree of Life. Viruses are obligate intracellular parasites that lack the translation machinery needed to complete their infection cycles. Hence, they need to infect cells and exploit the cell's machinery to replicate their genomes and produce the structural proteins that will be used for packaging their genomes. Perhaps the most remarkable characteristic of viruses, in particular those having RNA genomes, is their high mutation rate, a consequence of the lack of proof-reading mechanisms in their replicases \cite{Sanjuan2010a}. On the one hand, this high mutation rate, along with their very short generation time and large population size, endows viral populations with great evolvability \cite{Andino2015}. On the other hand, the extremely compact genome organization of RNA viruses makes mutations potentially harmful. In fact, most randomly introduced mutations either impose a significant fitness cost or are lethal \cite{Sanjuan2010b}. These highly deleterious or lethal mutations can vary from point mutations to genomic deletions of variable length; such mutants are collectively referred to as defective viral genomes (DVGs) \cite{Vignuzzi2019}.
A fraction of deletion DVGs has long been shown to interfere with genome replication and accumulation; these are known as defective interfering (DI) RNAs. DI RNAs were first reported by Preben von Magnus~\cite{VonMagnus1954}, who studied their accumulation in influenza A virus populations passaged in embryonated chicken eggs. Based on these serial passage experiments, he proposed the existence of incomplete virus variants that increase rapidly in frequency and cause drops in overall virus titers. The existence of virus variants with large genomic deletions has since been confirmed in many virus families \cite{Vignuzzi2019}, both with RNA and DNA genomes. DI RNAs are thought to replicate much faster than full-length wild-type (WT) viruses, owing to their smaller genome sizes. Moreover, DI RNAs can evolve other strategies to better compete with WT viruses. DI RNAs cannot replicate autonomously because they lack most, if not all, of the WT coding sequences. They must, therefore, co-infect a cell with a WT virus in order to replicate, becoming obligate parasites of WT viruses. As the frequency of the DIs increases, the overall virus production is reduced because essential WT-encoded gene products are no longer available (\emph{i.e.}, interference) \cite{Chao2017}. DI RNAs can have implications for virus amplification in cultured cells, protein expression using viral vectors, and vaccine development \cite{PalmaHuang1974}. Nearly all animal and many plant RNA virus infections are associated with DI RNAs. The viral genes necessary for movement, replication, and encapsidation are typically absent from these truncated and frequently rearranged versions of WT viruses, but they still have all of the \emph{cis}-acting components needed for replication by the WT virus's RNA-dependent RNA polymerase (RdRp). In the past 20 years, the \emph{de novo} generation of DI RNAs has received a great deal of attention.
For plant virus DI RNAs, the RdRp-mediated copy-choice model, which was first outlined for the generation of DI RNAs from animal viruses, still holds true \cite{White1999}. After \emph{de novo} generation, DI RNAs are probably subject to intense selective pressure for biological success. While the majority of DI RNAs attenuate the WT virus's symptoms, the DI RNAs of broad bean mottle virus and of turnip crinkle virus (TCV) possess the unusual attribute of exacerbating symptom severity (reviewed in \cite{Simon2004} and in \cite{Badar2021}). Interestingly, DI RNAs can also be produced by DNA viruses such as hepatitis B virus (HBV). Defective forms of HBV, named spliced HBV, have been characterized and investigated \emph{in vivo} \cite{TerrePetit1991,RosmorducPetit1995}. The HBV DNA genome is transcribed into a pre-genomic RNA (pgRNA) by the viral P protein in the cell nucleus. The pgRNAs are then exported to the cytoplasm to be further processed to produce mature viruses. During the synthesis of pgRNA molecules, P also produces defective RNAs \cite{TerrePetit1991} which, after reverse transcription in the cytoplasm, result in defective DNA genomes that can be packaged into mature viral particles, thus behaving as DI agents. Other relevant members of the subviral RNA brotherhood are the so-called satellite viruses and the satellite RNAs (satRNAs), which can be either linear or circular (the latter also known as virusoids) \cite{Simon2004,Palukaitis2016}. While satellite viruses generally encode the components to build their own capsid but depend on the helper WT virus for replication and movement, satRNAs often do not encode any protein. Typically, virus satellites have been suggested to establish symbiotic relations with the WT helper virus, thus obtaining a benefit. However, other side-effect processes such as competition may arise during co-infection.
Moreover, some satellite viruses can also act as parasites of the WT virus, thus benefiting from the presence of the WT virus without providing any advantage to it. SatRNA and satellite virus genomes are mostly or completely unrelated to their WT helper virus genome, a major difference from DI RNAs. The diversity of satRNA and satellite virus structures, and of their interactions with their helper WT viruses, is remarkable. For example, satC associated with TCV is a hybrid molecule composed of sequence from a second satRNA and two portions from the $3'$ end of the TCV genomic RNA~\cite{Simon2004}. The satRNA associated with the groundnut rosette virus (GRV) further confounds earlier classifications. While not necessary for viral movement within a host, this noncoding satRNA is necessary for GRV to encapsidate in the coat protein of its luteovirus partner, groundnut rosette assistor virus, as a requirement for aphid transmission~\cite{Robinson1999}. Although more often found in plant viruses, some satellites are known to infect vertebrates~\cite{Krupovic2016}, insects~\cite{Ribiere2010}, and unicellular eukaryotic cells~\cite{Schmitt2002}. Some virus satellites have a strong clinical impact. For example, HBV has its own satellite RNA virus, the hepatitis $\delta$ virus (HDV). Infections with HBV are more virulent, quickly evolving towards fatal cirrhosis, when there is coinfection with HDV~\cite{Taylor2020}. HDV is replicated by cellular RNA polymerases I, II and III but uses HBV envelope proteins for packaging in order to accomplish viral particle assembly and release~\cite{Taylor2020}. Recent strides in understanding the host's reaction to viral invasion have helped to clarify how DI RNAs, satRNAs and satellite viruses cause, enhance, or minimize disease symptoms. For instance, symptom attenuation was once primarily ascribed to direct competition for limited replication factors between the helper WT virus and the subviral RNA \cite{Chao2017}.
Recent data from a number of viral species, however, point to the possibility that the enhancement of host resistance by subviral RNAs may be just as important, if not more so \cite{Palukaitis2016,Gnanasekaran2019,Badar2021}. Concepts defining the genetic connection between WT viruses and subviral RNAs are also evolving. A recent study suggests that some pairs of subviral RNAs and helper WT viruses have more complex relationships, including mutualistic ones benefiting both participants \cite{Simon2004,Badar2021}. Furthermore, in natural infections, WT viruses can support the replication of more than one subviral element. For example, such three-way interactions are relevant to understanding the dynamics of HBV, HBV-derived DI RNAs and HDV. Even more complex systems exist, as is the case for panicum mosaic virus (PMV), which is found coinfecting with a satellite virus (sPMV), at least two satRNAs (S and C), and DI RNAs produced both from the WT virus and from sPMV \cite{Qiu2001,Pyle2018}, or the case of the bipartite tomato black ring virus, which coexists with DIs derived from both of its genomic RNAs as well as with a satRNA that affects its vertical transmission efficiency; all these interactions are strongly dependent on the host species \cite{Hasiow2018,Pospieszny2020,Minicka2022}. \begin{figure}\label{fig:model} \end{figure} Theoretical investigations of the dynamical impact of DI RNAs on the replication of WT viruses have been carried out by several authors~\cite{Szathmary1993,BanghamKirkwood1990,Kirkwood1994,Sardanyes2010,Chao2017}. Typically, these mathematical models have taken mean-field approximations, considering either discrete-~\cite{Szathmary1993,Zwart2013} or continuous-time \cite{BanghamKirkwood1990,Kirkwood1994,Chao2017} dynamical systems.
However, to the extent of our knowledge, the only previous theoretical study incorporating both helper and satellite viruses used an epidemiological approach in which host individuals could be infected by different combinations of viral and subviral RNAs~\cite{LuciaSanz2022}. Here we take a population dynamics approach to explore the within-host dynamics of a system of molecular replicators composed of a WT helper virus, one satRNA and the DI RNAs generated during WT virus replication (Fig.~\ref{fig:model}). With this approach, we want to determine the dynamics arising from the most basic principles of replication and interaction between replicators, without entering into mechanistic details involving proteins. At this level of description, satRNAs and satellite viruses can be considered homologous. For simplicity, hereafter we will refer to the WT helper virus as HV. The manuscript is organised as follows. In Section~\ref{se:model} we introduce the mathematical model. Section~\ref{se:anal:results} contains analytical results concerning the domain of the dynamics, the equilibrium points and their local stability. Section~\ref{se:numerical:analysis} illustrates different scenarios with numerical results, also including information on transients for those systems with DI RNAs shorter than the satRNA and for which no full coexistence is possible. Finally, we show that the system can display bistability and that achieving either satRNA clearance with HV-DIs persistence or full coexistence depends on the initial populations of replicators. \section{Mathematical model} \label{se:model} We develop a dynamical model based on three coupled autonomous ordinary differential equations (ODEs) to investigate the dynamics of a WT helper virus (HV) population supporting the replication of a satRNA together with the synthesis of DIs as a by-product of the replication of the HV genome.
Let us denote by $x=(V,S,D)$ the state variables, namely the (normalised) concentrations of the HV ($V$), the satRNA ($S$) and, for simplicity, all possible DI RNAs grouped into a single category ($D$), respectively. The corresponding system of ODEs is given by: \begin{eqnarray} \dot{V} &=& \alpha \, (1-\mu) \,V \, \Omega(x) - \varepsilon V, \label{eq1} \\ \dot{S} &=& \beta \, V \, S \, \theta(x) - \varepsilon \, S, \label{eq2} \\ \dot{D} &=& \left( \alpha \, \mu + \gamma \, D \right)\, V \, \theta(x) - \varepsilon \, D, \label{eq3} \end{eqnarray} with $\Omega(x) = 1-V-\eta S - \eta_D D,$ and $\theta(x) = 1-V-S-D$. We will refer in short to the model as $\dot{x}=F(x)$. The model considers well-mixed populations and takes into account the processes of virus replication, complementation, competition with asymmetric interference strengths, and spontaneous degradation of the different RNAs (see Fig.~\ref{fig:model} for a schematic diagram of the modelled processes). To keep the model as simple as possible, the production of viral proteins is ignored and the replication/encapsidation processes for the satRNA and the DIs are made proportional to the amount of HV (simulating complementation). The replication rates of the viral genomes are proportional to the parameters $\alpha$ (HV), $\beta$ (satRNA), and $\gamma$ (DIs). We will generically assume that $\beta, \gamma > \alpha$. This assumption is based on the fact that both DI and satRNA genomes are always shorter than the genome of the HV (see Tables 1--5 in \cite{Badar2021}), and thus their replication is expected to be faster. For example, tobacco mosaic virus has a genome size of ca. 6.4 kb and its satellite virus sTMV is about 1.1 kb~\cite{Arenal1999}; the TCV genome is 4.1 kb long while its satC has only 0.4 kb~\cite{Altenbach1981}. Lucerne transient streak virus is about 4.2 kb long while its satellite scLTSV is 0.3 kb long~\cite{Badar2021}; the HBV pgRNA is about 3.5 kb long, while HDV is 1.7 kb.
Interestingly, in the case of HBV, the length of the DI RNA (deletion-containing pgRNA) is about 2.2 kb~\cite{Rosmorduc1995}, a bit longer than the satellite HDV. As we will show below, the case where DIs replicate faster than the satRNAs ($\gamma > \beta$) does not allow for the coexistence of the three populations. For those cases with satRNAs replicating faster than DIs ($\beta>\gamma$) coexistence is possible. This latter case may correspond to viruses supporting very short satRNAs such as linear or circular ones. The replication of the HV unavoidably results in the production of DIs at a rate $\mu$ (we assume $0< \mu <1$). Both $\Omega(x)$ and $\theta(x)$ are logistic functions introducing competition between the three viral populations due to finite host resources. Notice that the logistic function for the HV, given by $\Omega(x)$, involves the competition parameters $\eta_D, \eta >1$ to investigate higher interference strengths by the satRNAs and the DIs on the HV. Such interference may be due to competition for host resources, viral components shared by the three RNAs (\emph{e.g.}, envelope proteins) or the triggering of host antiviral defenses by an excessive accumulation of viral particles or post-transcriptional gene silencing in response to the accumulation of different RNA species \cite{Palukaitis2016,Gnanasekaran2019,Badar2021}. Finally, the parameter $\varepsilon$ denotes the degradation rate of all RNA molecules which, for simplicity, is considered to be the same for the three populations, since the expected growth asymmetries have already been introduced in the replication rates. \section{Analytical results} \label{se:anal:results} In this section we first study the domain where the dynamics take place and compute the nullclines of the system. Then, we provide an analysis of the equilibrium points of system~\eqref{eq1}--\eqref{eq3} and their local stability.
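Throughout the analysis it is convenient to have the vector field in executable form. A minimal Python sketch of $F$ follows; the parameter values are illustrative (they match those fixed later in the numerical section, with $\mu$ and $\eta_D$ chosen here only for the example):

```python
import numpy as np

# Illustrative parameter values (those of the numerical section; mu and eta_D
# are an arbitrary choice for this sketch).
alpha, beta, gamma = 1.0, 2.0, 1.5   # replication rates of HV, satRNA and DIs
eta, eta_D = 1.3, 1.1                # interference strengths on the HV
mu, eps = 0.1, 3e-2                  # DI production rate and degradation rate

def F(x):
    """Vector field of the model; x = (V, S, D)."""
    V, S, D = x
    Omega = 1.0 - V - eta * S - eta_D * D   # logistic term felt by the HV
    theta = 1.0 - V - S - D                 # logistic term felt by satRNA and DIs
    return np.array([
        alpha * (1.0 - mu) * V * Omega - eps * V,
        beta * V * S * theta - eps * S,
        (alpha * mu + gamma * D) * V * theta - eps * D,
    ])
```

By construction $F$ vanishes at the origin, the trivial equilibrium analysed below.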
\subsection{Domain of confined dynamics and nullclines} \label{se:domain:nullclines} As is common in many biological models, the competition term for limited resources, $\theta(x)$, limits the populations' growth and confines the dynamics to a finite domain. In our case, this is given by the tetrahedron \begin{equation} \mathcal{U} = \left\{ x=(V,S,D) \ \bigg| \ x\geq 0 \quad \textrm{and} \quad V+S+D \leq 1 \right\}, \label{def:domain:U} \end{equation} which is determined by the coordinate planes and the plane $\theta(x)=0$, \emph{i.e.}, the plane $V+S+D=1$. The fact that the planes $V=0$ and $S=0$ are invariant under the dynamics of~\eqref{eq1}--\eqref{eq3} and that the vector field $F$ of~\eqref{eq1}--\eqref{eq3} points inwards on the rest of its faces makes the domain $\mathcal{U}$ positively invariant. That is, orbits with initial conditions on $\mathcal{U}$ remain inside this domain for all $t\geq 0$. Let us now compute the nullclines of Eqs.~\eqref{eq1}--\eqref{eq3}, which determine the regions of increase or decrease of the variables. In our case, the nullcline $\dot{V}=0$ is easily computable and exhibits two connected components: the planes $V=0$ and \begin{equation} \Omega(x) = \frac{\varepsilon}{\alpha (1-\mu)} \Leftrightarrow V+ \eta S + \eta_D D = \sigma, \label{V:nullcline} \end{equation} where $\sigma$ is defined as \begin{equation} \sigma:= 1 - \frac{\varepsilon}{\alpha (1-\mu)}. \label{def:sigma} \end{equation} The $V$-nullcline component $V=0$ is biologically trivial: the absence of HV leads to no satRNA and no DIs (since both need the former to be present) and, therefore, to total extinction. \begin{figure}\label{fig:Udomain} \end{figure} The second component~\eqref{V:nullcline} determines the evolution of the WT helper virus in $\mathcal{U}$. A view of this domain and the latter planes is depicted in Fig.~\ref{fig:Udomain}. The other two nullclines, $\dot{S}=0$ and $\dot{D}=0$, do not admit such a simple representation.
The first one is formed by the (invariant) plane $S=0$ and the (piece of) hyperbolic cylinder \[ \mathcal{U} \cap \left\{ V \, (1-V-S-D) = \frac{\varepsilon}{\beta} \right\}. \] The second one, $\dot{D}=0$, is given by the algebraic surface $\left( \alpha \mu + \gamma D \right) V \, \theta(x) = \varepsilon D$. Notice also that $\Omega(x) \leq \theta(x)$ for any $x=(V,S,D)\in \ensuremath{\mathbb{R}}_+^3$, with equality precisely when $S=D=0$ (in particular, $\Omega(1,0,0) = \theta(1,0,0)$). \subsection{Equilibrium points and local stability} \label{se:equilibrium-points} The equilibrium points are the solutions $x^*$ of $F(x)=0$. As usual, their (local) stability is approached through the linearized system $\dot{x}=DF(x^*)(x-x^*)$, whose Jacobian matrix is given by \begin{equation} DF(x) = \left( \begin{array}{ccc} \alpha(1-\mu) (\Omega(x) - V)-\varepsilon & -\alpha (1-\mu)\eta V & -\alpha(1-\mu)\eta_D V \\ \beta S (\theta(x) - V) & \beta V(\theta(x)-S) - \varepsilon & - \beta VS \\ (\alpha \mu+\gamma D) (\theta(x)-V) & -(\alpha \mu + \gamma D)V & \gamma V\theta(x) - (\alpha \mu + \gamma D) V - \varepsilon \end{array} \right). \label{DF} \end{equation} Regarding the equilibrium points of system~\eqref{eq1}--\eqref{eq3}, the following statements hold: \begin{itemize} \item The origin, $\mathcal{O}=(0, 0, 0)$, is always an equilibrium point for any value of the parameters. It represents the full extinction of $V$, $S$ and $D$. Its Jacobian matrix \[ DF(0,0,0)=\left( \begin{array}{crr} \alpha (1-\mu) - \varepsilon & 0 & 0 \\ 0 & -\varepsilon & 0 \\ \alpha \mu & 0 & - \varepsilon \end{array} \right) \] has eigenvalues $\lambda_1=\alpha(1-\mu)-\varepsilon$, $\lambda_2=\lambda_3=-\varepsilon<0$ (semisimple). Notice that its stability depends on the sign of $\lambda_1$.
Precisely, $\lambda_1<0$ (\emph{i.e.}, $\mathcal{O}$ is a local attractor) is equivalent to the condition \begin{equation} \mu > \mu_c = 1-\frac{\varepsilon}{\alpha} \Leftrightarrow \frac{\varepsilon}{\alpha(1-\mu)}> 1 \Leftrightarrow \sigma<0. \label{cond:origin:gas} \end{equation} If this condition holds, that is, if the DI generation rate $\mu$ exceeds the critical value $\mu_c$, then all the points in $\mathcal{U}$ satisfy $\dot{V}<0$. This, in turn, leads to $\dot{S}<0$ and $\dot{D}<0$ and, afterwards, to total extinction. Consequently, the origin is the unique equilibrium point of system~\eqref{eq1}--\eqref{eq3}, and it is globally asymptotically stable. Henceforth, let us assume that condition \begin{equation} \mu < \mu_c\Leftrightarrow 0 < \frac{\varepsilon}{\alpha(1-\mu)}<1 \Leftrightarrow \sigma>0, \label{origin:saddle:cond} \end{equation} is satisfied. Hence, the origin is a saddle equilibrium point, with a $2$-dimensional stable manifold and a $1$-dimensional unstable manifold (a curve). The latter is tangent to the vector $(1-\mu,0,\mu)$ at the origin. In this case we also have: \item No equilibrium points of the form $(0,S,D)$. As previously mentioned, $V=0$ leads necessarily to total extinction. \item No equilibrium points on the line $\{ S=0, D=0 \}$ except the origin. Indeed, if this were the case they would be solutions of \begin{eqnarray*} \dot{V} &=& \alpha (1-\mu) V (1-V) - \varepsilon V = 0, \\ \dot{D} &=& \alpha \mu V (1-V) = 0. \end{eqnarray*} From the latter equation we get either $V=0$ or $V=1$. The first case corresponds to extinction. The second one, $V=1$, leads to $\varepsilon=0$, which does not hold since $\varepsilon>0$ by assumption. \item In a similar way it can be proved that there are no equilibrium points in the plane $\{ D= 0 \}$ other than the origin. Indeed, in this plane the system becomes \begin{eqnarray*} \alpha(1-\mu)V (1-V-\eta S)-\varepsilon V &=& 0, \\ \beta VS (1-V-S) - \varepsilon S &=& 0, \\ \alpha \mu V (1-V-S) &=& 0.
\end{eqnarray*} Since $V\neq 0$, from the last equation it turns out that $V+S=1$. Substituting into the second equation we get $S=0$ (since $\varepsilon\neq 0$) and, therefore, $V=1$. Clearly, $(1,0,0)$ does not satisfy the first equation. \end{itemize} The next two propositions summarise the type of non-trivial equilibrium points that the system can have. \begin{prop}[\textsf{No-satRNA equilibria, $P$-point}] \label{prop:P:points} Let us assume that condition~\eqref{origin:saddle:cond} holds. Then, there exists a unique biologically meaningful equilibrium point $P=(V_1,0,D_1)$ of the system~\eqref{eq1}--\eqref{eq3}. This $P$-point satisfies that \[ V_1=\sigma - \eta_D D_1, \] where $D_1$ is the unique real root in the interval $(0,\frac{\sigma}{\eta_D})$ of the following polynomial of degree $3$: \[ q(D)=-\gamma \eta_{D} \left( \eta_{D}-1 \right) {D}^{3}+A_2 D^2+ A_1 D+\alpha\mu\sigma \left( 1-\sigma \right), \] with \begin{eqnarray*} A_2 &=& \left( -\alpha \mu \eta_D+\gamma\,\sigma \right) \left( \eta_D-1 \right) -\gamma\eta_D \left( 1-\sigma \right) \\ A_1 &=& \alpha\mu\sigma \left( \eta_D-1 \right) + \left( - \alpha\mu\eta_D+\gamma\sigma \right) \left( 1-\sigma \right) -\varepsilon. \end{eqnarray*} In particular, this point $P=(V_1,0,D_1)$ does not depend on the parameter $\eta$. \end{prop} \begin{proof} The plane $\{ S=0\}$ (absence of satRNA) is invariant under the dynamics of system~\eqref{eq1}--\eqref{eq3}. These dynamics are governed by equations \begin{eqnarray} \dot{V} &=& \alpha (1-\mu) V (1-V-\eta_D D) - \varepsilon V, \label{eq:s0:1}\\ \dot{D} &=& (\alpha \mu + \gamma D) V (1-V-D) - \varepsilon D. \label{eq:s0:2} \end{eqnarray} Thus, $P$-points correspond to the solutions making these equations vanish. Since $V_1>0$, the first one becomes $V_1+\eta_D D_1= \sigma$ which, in particular, implies that $0< D_1 < \frac{\sigma}{\eta_D}$. 
Substituting $V_1+\eta_D D_1= \sigma$ into equation $(\alpha \mu + \gamma D) V (1-V-D) - \varepsilon D=0$, and using that $1-V-D = 1-\sigma+(\eta_D-1)D$ on the nullcline $V=\sigma-\eta_D D$, it turns out that $D_1$ must be a root of the polynomial \[ q(D)=(\alpha \mu + \gamma D)(\sigma - \eta_D D) \big(1-\sigma + (\eta_D-1)D \big) - \varepsilon D \] (in the interval $(0,\frac{\sigma}{\eta_D})$). On the one hand, since $q(0)=\alpha \mu \sigma (1-\sigma)>0$ (recall that $0<\sigma<1$) and $q(\sigma/\eta_D)=-\varepsilon \sigma/\eta_D<0$, we get from Bolzano's theorem the existence of at least one zero of $q(D)$ in this interval. On the other hand, expanding and collecting in powers of $D$, we reach the following equivalent expression for $q(D)$: \[ -\gamma \eta_{D} \left( \eta_{D}-1 \right) {D}^{3}+A_2 D^2+ A_1 D+\alpha\mu\sigma \left( 1-\sigma \right) \] where \begin{eqnarray*} A_2 &=& \left( -\alpha \mu \eta_D+\gamma\,\sigma \right) \left( \eta_D-1 \right) -\gamma\eta_D \left( 1-\sigma \right), \\ A_1 &=& \alpha\mu\sigma \left( \eta_D-1 \right) + \left( - \alpha\mu\eta_D+\gamma\sigma \right) \left( 1-\sigma \right) -\varepsilon. \end{eqnarray*} From the fact that $\eta_D>1$, it follows that $\lim_{D\to +\infty} q(D)=-\infty$ and that $\lim_{D\to -\infty} q(D)=+\infty$. So Bolzano's theorem ensures that the three roots of $q(D)$ are real: one is negative, a second one lies in the interval $(0,\frac{\sigma}{\eta_D})$ and the third one is greater than $\frac{\sigma}{\eta_D}$. Consequently, since $V_1=\sigma-\eta_D D_1 \in (0,1)$ if $D_1 \in (0, \frac{\sigma}{\eta_D})$, there is exactly one biologically meaningful $P$-point $(V_1,0,D_1)$. \end{proof} \begin{prop}[\textsf{Coexistence equilibrium points, $Q$-points}] \label{prop:Q:points} Let us assume condition~\eqref{origin:saddle:cond} is satisfied.
Then, $Q=(V_2,S_2,D_2)$ is a coexistence equilibrium point of system~\eqref{eq1}--\eqref{eq3} ($Q$-point in short) if and only if $Q\in \mathcal{U}$ and the following conditions hold: \begin{itemize} \item[(i)] its $D$-component is given by \[ D_2=\frac{\alpha \mu}{\beta - \gamma} \] which, necessarily, implies that $\beta>\gamma$. For $Q$ to be biologically meaningful, it must necessarily satisfy $D_2< \frac{\sigma}{\eta_D}$. \item[(ii)] The component $0<V_2<\sigma$ is a root of the degree-$2$ polynomial $V_2^2 + M V_2 + m=0$ where \begin{equation} M:=\frac{\sigma - (\eta_D - \eta) D_2 - \eta}{\eta-1}, \qquad m:= \frac{\eta\varepsilon}{\beta(\eta-1)}>0. \label{def:M:m} \end{equation} More precisely, $V_2$ is given by \begin{equation} V_2^{\pm} = \frac{-M \pm \sqrt{M^2 - 4m}}{2}, \label{Qpoints:eq:V2} \end{equation} provided that $M^2-4m\geq 0$. \item[(iii)] The component $0<S_2<1$ is given by the expression \begin{equation} S_2 = 1-V_2-D_2 - \frac{\varepsilon}{\beta V_2}, \label{def:s2} \end{equation} where $0<V_2<1$ is a solution of $V_2^2 + M V_2 + m=0$. \end{itemize} \end{prop} \begin{remark} (a) The restriction $D_2<\frac{\sigma}{\eta_D}$ follows from the same argument used for the $P$-points: since the equilibrium point must fall onto the $V$-nullcline $V + \eta S + \eta_D D =\sigma$, $D$ cannot exceed this value. (b) From statement (i) it turns out that if the DIs' replication rate $\gamma$ is larger than the satRNA's, $\beta$, coexistence $Q$-equilibria no longer exist (indeed, $D_2<0$). (c) The maximal number of biologically meaningful $Q$-points for fixed values of the parameters is $2$. As will be shown in the numerics, there are examples with zero, one and two $Q$-points. \end{remark} \begin{proof} \begin{itemize} \item[(i)] We seek points of type $Q=(V_2,S_2,D_2)\in \mathcal{U}$, with $V_2, S_2, D_2>0$, that are steady states of system~\eqref{eq1}--\eqref{eq3}.
In particular, this implies that $Q$ must belong to the intersection of the nullclines which are not coordinate planes. That is, $Q$ must satisfy the following three conditions: \begin{equation} \Omega(Q)=\frac{\varepsilon}{\alpha(1-\mu)}, \qquad V_2\, \theta(Q)=\frac{\varepsilon}{\beta}, \qquad \textrm{and} \qquad (\alpha \mu + \gamma D_2) V_2 \,\theta(Q)=\varepsilon D_2. \label{eq:conditions:coexistence:eq} \end{equation} Substituting the second equation into the third one leads to \[ (\alpha \mu + \gamma D_2) \frac{\varepsilon}{\beta}=\varepsilon D_2 \Rightarrow (\alpha \mu + \gamma D_2) = \beta D_2 \] and therefore \begin{equation} D_2 = \frac{\alpha \mu}{\beta - \gamma}, \label{eq:D2} \end{equation} where $\beta>\gamma$ is required to have $D_2>0$. Since $V,S,\eta,\eta_D$ are all positive, and $V_2+\eta S_2 + \eta_D D_2 =\sigma$, it turns out that $\eta_D D_2 < \sigma \Rightarrow D_2 < \frac{\sigma}{\eta_D}$. \item[(ii)] Consider now the first two conditions in~\eqref{eq:conditions:coexistence:eq} and the value $D=D_2$ in~\eqref{eq:D2}: \begin{eqnarray} \Omega(Q)=1-\sigma &\Rightarrow& V_2+\eta S_2 = \sigma - \eta_D D_2 \label{eq:cond:coex1}\\ V_2 \, \theta(Q)=\frac{\varepsilon}{\beta} &\Rightarrow& V_2 + S_2 = 1- D_2 - \frac{\varepsilon}{\beta V_2}. \label{eq:cond:coex2} \end{eqnarray} Subtracting~\eqref{eq:cond:coex1} from \eqref{eq:cond:coex2} multiplied by $\eta$, and performing some trivial algebraic manipulations, it turns out that $V_2$ must be a root of the following degree $2$ polynomial $V_2^2 + M V_2 + m =0$, where \begin{equation*} M=\frac{\sigma - (\eta_D - \eta) D_2 - \eta}{\eta-1}, \qquad m= \frac{\eta\varepsilon}{\beta(\eta-1)}>0. \end{equation*} That is, $V_2$ is given by \begin{equation*} V_2^{\pm} = \frac{-M \pm \sqrt{M^2 - 4m}}{2}, \end{equation*} provided that $M^2-4m\geq 0$. \item[(iii)] Once $V_2$ is determined, we seek an expression for $S_2$.
Indeed, \[ V_2 \theta(Q)=\frac{\varepsilon}{\beta} \Longleftrightarrow 1-V_2-S_2-D_2 = \frac{\varepsilon}{\beta V_2}, \] and so, \[ S_2 = 1 - V_2 - D_2 - \frac{\varepsilon}{\beta V_2}. \] The points $Q=(V_2,S_2,D_2)$ obtained in this way will be biologically meaningful provided that $Q\in \mathcal{U}$. \end{itemize} \end{proof} The complex dependence (in the sense of the number of parameters involved) of the expressions of the $P$- and $Q$-points makes it cumbersome to determine analytically their regions of existence and their local stability. In the next section we perform a numerical study of these equilibrium points for particular choices of the parameters. We have focused on what, in our view, are the most virologically relevant parameters (the DI RNA production rate $\mu$ and the interference coefficients $\eta,\eta_D$). We believe that this choice of parameters illustrates the most remarkable features in terms of asymptotic and transient dynamics, and bifurcation phenomena. \begin{figure}\label{fig:Ppoints:existence:stability} \end{figure} \section{Numerical results} \label{se:numerical:analysis} Numerical integration has been done with the 7th-8th order Runge-Kutta-Fehlberg-Simó method with automatic step size control and local relative tolerance $10^{-15}$. In most of the numerical results we will use initial conditions $(V(0), S(0), D(0))=(0.1, 0.05, 0)$. These initial conditions seem feasible in terms of real virus populations: an initial small quantity of HV, a quantity of its satRNA one order of magnitude lower, and no DIs at all, which will be produced during HV replication. Despite this choice, we must note that in most of the identified scenarios initial conditions are not really important since the system is monostable. In the small region of bistability we have identified (see below) the basin of attraction of the $P$-point is extremely small.
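The equilibria characterised in Propositions~\ref{prop:P:points} and~\ref{prop:Q:points} are straightforward to evaluate numerically. The following Python sketch (using the fixed parameter values of the next subsection, together with the illustrative choice $\mu=0.1$, $\eta_D=1.1$) computes the $P$-point from the expanded cubic $q(D)$ and the candidate $Q$-points from the closed-form expressions:

```python
import numpy as np

# Parameters of the numerical section; mu and eta_D are an illustrative choice.
alpha, beta, gamma = 1.0, 2.0, 1.5
eta, eta_D, mu, eps = 1.3, 1.1, 0.1, 3e-2
sigma = 1.0 - eps / (alpha * (1.0 - mu))

# --- P-point: root of q(D) in (0, sigma/eta_D), as in Proposition 1 ---
A3 = -gamma * eta_D * (eta_D - 1.0)
A2 = (-alpha * mu * eta_D + gamma * sigma) * (eta_D - 1.0) - gamma * eta_D * (1.0 - sigma)
A1 = (alpha * mu * sigma * (eta_D - 1.0)
      + (-alpha * mu * eta_D + gamma * sigma) * (1.0 - sigma) - eps)
A0 = alpha * mu * sigma * (1.0 - sigma)
roots = np.roots([A3, A2, A1, A0])
D1 = next(r.real for r in roots
          if abs(r.imag) < 1e-8 and 0.0 < r.real < sigma / eta_D)
P = (sigma - eta_D * D1, 0.0, D1)      # P lies on the V-nullcline with S = 0

# --- Q-points: closed-form candidates of Proposition 2 (requires beta > gamma) ---
D2 = alpha * mu / (beta - gamma)
M = (sigma - (eta_D - eta) * D2 - eta) / (eta - 1.0)
m = eta * eps / (beta * (eta - 1.0))
Q_points = []
if M * M >= 4.0 * m:
    for V2 in ((-M + s * np.sqrt(M * M - 4.0 * m)) / 2.0 for s in (1.0, -1.0)):
        if V2 > 0.0:
            S2 = 1.0 - V2 - D2 - eps / (beta * V2)
            if S2 > 0.0 and V2 + S2 + D2 <= 1.0:   # Q must lie in U
                Q_points.append((V2, S2, D2))
```

For this parameter choice, one of the two candidates $V_2^{\pm}$ yields a negative $S_2$ and is discarded, leaving a single biologically meaningful $Q$-point.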
\subsection{Analysis of \boldmath{$P$-} and $Q$-points in terms of $\mu$ and $\eta_D$} \label{se:numerics:PQpoints} This section is devoted to the study of the equilibrium points of system~\eqref{eq1}--\eqref{eq3}, assuming all the parameters fixed except $\mu$ (DIs generation rate during imperfect replication of the HV) and $\eta_D$ (interference competition strength exerted by DI RNAs on the HV). The other parameters are set, if not otherwise specified, as follows: \begin{equation} \alpha=1, \quad \beta=2, \quad \eta=1.3, \quad \varepsilon=3\cdot 10^{-2}, \quad \gamma=1.5. \label{numerics:fixed:parameters} \end{equation} We will let $\mu \in [0,1]$ and $\eta_D \in (1,1.5]$. Notice that, in this particular case, \[ \mu_c = 1-\frac{\varepsilon}{\alpha} =0.97. \] \begin{figure}\label{fig:location:PQpoints} \end{figure} \begin{figure}\label{fig:vaps:PQ:etaD:1p1} \end{figure} As already mentioned in Section~\ref{se:equilibrium-points}, if $\mu>0.97$ then the origin, the total extinction of $V$, $S$, and $D$, is a global attractor. So let us assume, henceforth, that $\mu < 0.97.$ The study we provide here is divided into several parts: existence, location in phase space and stability of equilibrium points and their bifurcations. Results on times ``to equilibrium'' (understood in the sense ``up to a given distance from it'') have been deferred to Section~\ref{se:time:to:equilibria}, especially for those cases with outcompetition of satRNAs by the HV and DI RNAs. As mentioned, this is an interesting scenario from a biomedical point of view, especially for those systems in which the clearance of the satRNA may avoid the most severe disease outcomes. \begin{figure}\label{fig:bistability} \end{figure} From Proposition~\ref{prop:P:points} we have the existence of a unique $P$-point for all $(\eta_D,\mu) \in (1,1.5] \times [0, 0.97)$. Similarly, from Proposition~\ref{prop:Q:points} we get the existence of $Q$-points in some regions inside this parameter domain.
Precisely, in the regions depicted in \textcolor{orange}{orange} and \textcolor{ForestGreen}{green} in Figure~\ref{fig:Ppoints:existence:stability} we have a unique $Q$-point. Moreover, in a tiny region on the left hand side of the same figure, depicted in light-green, we have two coexisting $Q$-points, named $Q_1$ and $Q_2$ (see also the inset). For a fixed value of $\eta_D$, as we increase $\mu$ the (unique) $Q$-point existing in the orange and green regions approaches the plane $S=0$ and leaves the (biologically meaningful) domain $\mathcal{U}$, undergoing a collision with the corresponding $P$-point. That is, as the production rate of DI RNAs increases, the system evolves to a behaviour with a unique equilibrium point $P$ with no satRNA. As we will see below, this no-satellite equilibrium is an attractor. Regarding the local stability of the $P$- and $Q$-points, we refer the reader to Fig.~\ref{fig:Ppoints:existence:stability} for the colors' meaning. For the sake of illustration, Fig.~\ref{fig:vaps:PQ:etaD:1p1} shows equilibria for the HV, satRNAs and DI RNAs, and the eigenvalues at the equilibrium points $P$ and $Q$ for the particular case $\eta_D = 1.1$ (see the vertical dashed yellow line in Fig.~\ref{fig:Ppoints:existence:stability}). More generically, we have: \begin{itemize} \item In the \textcolor{orange}{orange} region, $P$ is a saddle point (so unstable) with a $1$-dimensional stable manifold, \emph{i.e.}, $\dim W^s(P) = 1$. There is also a unique coexistence equilibrium $Q$, which is an attractor. In particular, it is of type stable focus $+$ sink (its Jacobian matrix has a pair of complex eigenvalues with negative real part and a third real negative eigenvalue). \item In the \textcolor{ForestGreen}{green} region the $P$-point is a saddle (with $\dim W^s(P) = 1$) and the $Q$-point is a sink (all three eigenvalues of its Jacobian matrix are real and negative), hence an attractor.
\item The separation between the \textcolor{orange}{orange} and the \textcolor{ForestGreen}{green} regions is given by a so-called \emph{Belyakov bifurcation curve}. This kind of bifurcation corresponds to $Q$ passing from stable focus $+$ sink to a sink. That is, the two complex eigenvalues with negative real part become real (and negative). It does not imply any change in its local stability. \item Between the \textcolor{ForestGreen}{green} and the \textcolor{RoyalBlue}{blue} regions, $P$ (saddle) and $Q$ (sink) undergo a transcritical bifurcation: they collide and exchange their stability. Figure~\ref{fig:location:PQpoints} displays their spatial location in the plane $S=0$ (for the $P$-points) and in $\mathcal{U}$ (for the $Q$-points). \item Inside the \textcolor{RoyalBlue}{blue} area the $Q$-point has some negative component, and so it is out of the biologically meaningful domain $\mathcal{U}$. The remaining unique equilibrium is of $P$-type, so with no satellite, and it is an attracting sink. \item As mentioned above, in the \textcolor{Red}{red} area the origin (total extinction) is a global attractor. In the rest of the diagram, it always exists as an equilibrium but is of saddle type, with $\dim W^s(\mathcal{O})=2$. \item Finally, in the narrow \textcolor{green}{light-green} area (see the inset in the main panel of Fig.~\ref{fig:Ppoints:existence:stability}), the system exhibits two attracting equilibrium points: two attracting sinks, of types $P$ and $Q$, together with a second, unstable $Q$-point (of saddle type). In spite of its small measure, we provide some notes on this bistability scenario below. \end{itemize} \begin{figure}\label{fig:timeseries:P:Q} \end{figure} A very important feature of biological dynamical systems is whether they simultaneously have different stable states. This means that, for given parameter values, different initial conditions can drive the system to different equilibria (each with its own basin of attraction).
Typically, biological systems can be monostable or bistable. Whether a system is bistable or monostable can have deep implications for the nature of the bifurcations and, in virology, it can involve the clearance of a given population, since often one of the two possible stable states has some component equal to zero. As we already mentioned, we have identified a very narrow region in the parameter space where bistability is found (Fig.~\ref{fig:Ppoints:existence:stability}). This region has been explored in more detail and is displayed in Fig.~\ref{fig:bistability}(a) by plotting those parameter values in the space $(\mu,\eta_D,\eta)$ giving rise to bistability. The inset in panel (a) shows two time series using two different initial conditions. The upper one reaches the $Q$-point while the lower one reaches the $P$-point. Although this result indicates that systems with HV, DI RNAs, and RNA satellites could be bistable, we note that the basin of attraction of the $P$-point is extremely small (results not shown), and thus the clearance of the satRNA could take place when the amount of co-infecting satRNA is very low. The transition from this bistable scenario to the region where the $P$-point is a sink node (blue area in Fig.~\ref{fig:Ppoints:existence:stability}) is governed by a saddle-node bifurcation between the $Q_1$ and $Q_2$ points (Fig.~\ref{fig:bistability}(b)). However, in the results displayed in Fig.~\ref{fig:Ppoints:existence:stability} the bifurcation curve separating coexistence from satRNA extinction is mainly due to transcritical bifurcations. \subsection{Time to equilibria.} \label{se:time:to:equilibria} In order to complement the numerical analyses displayed so far, we show some characteristic time series for parameter values falling inside the phase diagram of Fig.~\ref{fig:Ppoints:existence:stability}.
Notice that in all of the cases displayed in Fig.~\ref{fig:timeseries:P:Q} the population of HV starts increasing rapidly, followed by an increase in the populations of both DI RNAs and satRNAs. Once these two latter populations achieve large numbers, the HV population starts decreasing due to their interference. The DI RNAs asymptotically achieve large population numbers, and so do the satRNAs as their interference effect increases, as shown in panels (a), (c), and (d), where $\eta_D$ has been increased. Notice that the satRNA population in panel (e) becomes dominant due to the large value of $\eta_D$ and the low production of DI RNAs ($\mu = 0.1$). Generically, the increase in the satRNA population seems to have a higher impact on the population of DI RNAs than on the HV. Increasing the production of DI RNAs typically involves the clearance of the satRNAs, as shown in panels (b), (d), and (f) in Fig.~\ref{fig:timeseries:P:Q}. This increase involves larger DI RNA amounts and thus the outcompetition of the satRNAs. Concerning transients, as expected, they become longer close to the bifurcations separating full coexistence from the scenario where only HV and DI RNAs persist (Fig.~\ref{fig:timetoQpoints}). This means that the time that satRNAs are able to persist in the scenario where they are outcompeted by the HV and DI RNAs depends on parameter values. For instance, Fig.~\ref{fig:timetoQpoints}(a) shows that for large values of DIs production the transients are short (about $10^3$ time units). However, when $\mu$ is close to the transcritical bifurcation curve such transients can be about two orders of magnitude longer. The same occurs for the scenario with full coexistence found as $\mu$ is further decreased.
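The transient-time computations of this section can be reproduced qualitatively with any standard integrator. The sketch below uses a plain fixed-step RK4 scheme (a simple stand-in for the adaptive RKF7(8) method used here) and, as a proxy for the distance to equilibrium, stops when the norm of the vector field falls below a tolerance; $\mu=0.1$, $\eta_D=1.1$ is an illustrative coexistence point of the diagram:

```python
import numpy as np

# Parameters as in the numerical section; mu, eta_D pick one point of the diagram.
alpha, beta, gamma = 1.0, 2.0, 1.5
eta, eta_D, mu, eps = 1.3, 1.1, 0.1, 3e-2

def F(x):
    V, S, D = x
    theta = 1.0 - V - S - D
    return np.array([
        alpha * (1.0 - mu) * V * (1.0 - V - eta * S - eta_D * D) - eps * V,
        beta * V * S * theta - eps * S,
        (alpha * mu + gamma * D) * V * theta - eps * D,
    ])

def time_to_equilibrium(x, h=0.1, t_max=5000.0, tol=1e-8):
    """Classical RK4; stop once ||F(x)|| < tol (proxy for distance to equilibrium)."""
    t = 0.0
    while t < t_max:
        if np.linalg.norm(F(x)) < tol:
            return t, x
        k1 = F(x); k2 = F(x + 0.5 * h * k1)
        k3 = F(x + 0.5 * h * k2); k4 = F(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += h
    return t, x

t_eq, x_eq = time_to_equilibrium(np.array([0.1, 0.05, 0.0]))
```

At this parameter point the orbit settles on the coexistence equilibrium, whose $D$-component equals $\alpha\mu/(\beta-\gamma)$ by Proposition~\ref{prop:Q:points}.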
\begin{figure}\label{fig:timetoQpoints} \end{figure} \begin{figure} \caption{Transient times (in $\log_{10}$-scale) to reach a distance $10^{-10}$ from the (attracting and unique) $P$-point by the orbit with initial conditions $(V(0),S(0),D(0))=(0.1, 0.05, 0)$, computed in the $(\eta_D,\eta)$ plane. (a) Times obtained with the replication rates from Section~\ref{values:HBV-HDV}, given by $\alpha=1$, $\beta=1.84$ and $\gamma=2.40$. Other values of $\gamma$ are shown in panels (b) $\gamma=3.5$; (c) $\gamma=4$; and (d) $\gamma=10$. In all of the panels we have used $\mu=0.47$ and $\varepsilon=3 \cdot 10^{-2}$.} \label{fig:times:HBV} \end{figure} \subsubsection{Time to equilibria: case $\gamma>\beta$.} \label{values:HBV-HDV} As discussed in the Introduction, the most common situation is that DIs take advantage of their shorter genome to replicate faster than their parental HV. Furthermore, their genomes are shorter than the ones characteristic of the large linear satellites (median length $1.1 \pm 0.4$ kb ($\pm$ IQR)), but not necessarily so for the small linear (median length $0.4 \pm 0.1$ kb) nor the virusoid (median length $0.3 \pm 0.1$ kb) satellites \cite{Badar2021} accompanying the HV. Mathematically, this translates into the fact that $\gamma \gtrsim \beta$. This implies that full coexistence, \emph{i.e.}, $Q$-points, would rarely exist, and equilibrium scenarios with no satRNA would be the most common outcome. That is, DIs are efficient enough to outcompete the longer satRNA from this steady state solution. Despite this equilibrium situation, satRNAs could persist in a transitory way in the system for very long times, as mentioned above. To illustrate these transients, we have chosen the clinically relevant case of hepatitis B virus (HBV), its defective D-RNAs, and the satellite hepatitis delta virus (HDV). In order to numerically simulate this case, it is reasonable to assume that their replication rates are inversely proportional to their genome lengths.
Given genome lengths of \[ \textrm{HBV}: [3017,3248], \qquad \textrm{HDV}: [1679,1682], \qquad \textrm{D-RNA}: \sim 1290 \] nucleotides, we obtain the following estimates for the three replication rates: \[ \alpha=1, \qquad \beta \simeq \frac{3100}{1680} \simeq 1.84, \qquad \gamma \simeq \frac{3100}{1290} \simeq 2.4. \] Figure~\ref{fig:times:HBV} displays transient times for this particular case where $\gamma > \beta$. Specifically, panel (a) shows the result for the replication rates listed above for the HBV-HDV case, while in the other panels $\gamma$ has been further increased. In all cases the orbit starts with initial conditions $(V(0),S(0),D(0))=(0.1, 0.05, 0)$ and the distance from the (attracting) $P$-point at which numerical integration stops is $10^{-10}$. Notice that, although the coordinates of the $P$-point do not depend on $\eta$, the vector field does. This produces slight variations in these transient times. Moreover, the results show that $\eta_D$ has a stronger impact than $\eta$ on these transient times towards the outcompetition of the satellite. Observe also that these transients vary in a nontrivial way as $\gamma$ increases. For instance, for the values of $\gamma$ and $\beta$ chosen as representative of the HBV-HDV virus system, the longest times are found at low interference values of DI RNAs. When DIs' replication is further increased, this region with longer transients moves to larger $\eta_D$ values. This result is somewhat counterintuitive, since faster DI replication should involve faster satellite extinction, but this outcome is probably counteracted by the larger interference of DIs on the HV. Regardless of these results, we note that the difference in the length of the transients between the yellow-orange and the black-blue scales is not very large (as compared to those of Fig.~\ref{fig:timetoQpoints}).
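These replication-rate estimates amount to simple length ratios; as a quick check (genome lengths in nucleotides, as listed above):

```python
# Replication rates assumed inversely proportional to genome length
# (HBV ~ 3100 nt, HDV ~ 1680 nt, D-RNA ~ 1290 nt, as in the text).
L_HBV, L_HDV, L_DRNA = 3100.0, 1680.0, 1290.0
alpha = 1.0                  # reference rate of the helper virus
beta = L_HBV / L_HDV         # satellite HDV, ~1.84
gamma = L_HBV / L_DRNA       # defective D-RNA, ~2.40
```

Note that this yields $\gamma > \beta > \alpha$, placing the HBV-HDV system in the no-coexistence regime of Proposition~\ref{prop:Q:points}.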
Although the replication cycle of HBV and HDV is rather more complex and our model may be very limited due to its level of abstraction, it indicates that the faster replicator (in this case the DIs) will typically outperform the other sub-viral element, so the dynamics follow Gause's competitive exclusion principle. \section{Conclusions} DI RNAs are an unavoidable consequence of the error-prone replication of RNA viruses and retroviruses. The impact of these defectors on the population dynamics of their parental virus has been studied in depth~\cite{Szathmary1993,BanghamKirkwood1990,Kirkwood1994,Sardanyes2010,Zwart2013,Chao2017}. However, viral infections, especially in plants, are more complex and contain additional genetic elements that are unrelated to the virus: the satellite RNAs (satRNAs) and the satellite viruses. Both DIs and satellites share a common feature: they need the wild type virus for their own replication, since they lack essential genes such as those coding for the viral polymerases or for the coat proteins. Hence, they need to co-infect with the wild type virus (helper virus, HV) to complete their replication/infection cycle. The presence of these supernumerary elements has been shown to deeply affect the virulence of infection \cite{Simon2004,Gnanasekaran2019,Taylor2020}, in some cases exacerbating symptoms while in others resulting in their attenuation.
Therefore, the interaction between satellites and their HV ranges from commensalism to parasitism. Here, we present a simple, yet dynamically rich, model of infection with a HV, a generic satellite and the DI RNAs generated from the HV. All three RNA species compete for limited host resources; thus, we implicitly assume the satRNA acts as a hyperparasite. Analytical and numerical explorations of the model show three possible stable states: (i) full extinction; (ii) outcompetition of the satRNA by the duo HV-DI RNAs; and (iii) coexistence of the three replicators. A rather small region of bistability involving the coexistence of states (ii) and (iii) has been found, with the fixed point responsible for scenario (ii) having a very small basin of attraction. We have analytically found the condition under which the three replicators go to extinction, showing that there is a critical rate of DI production that depends only on the balance between the degradation and replication of the HV through the equation $\mu_c = 1 - \varepsilon / \alpha$. This means that when the rate of production of DIs overcomes the critical condition $\mu_c$, full virus clearance occurs through a transcritical bifurcation. Note that this critical value does not depend at all on the satRNA parameters. We have also identified that the majority of transitions among scenarios (i)-(iii) are given by transcritical bifurcations, except for the tiny bistability region, where saddle-node bifurcations are found. Most remarkably from an applied perspective, we found conditions in which the WT virus takes advantage of the unavoidable production of DIs to outcompete the satellite.
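Since $\mu_c = 1 - \varepsilon/\alpha$ depends only on the HV parameters, the clearance threshold is immediate to evaluate; a minimal sketch, using as illustrative inputs $\alpha=1$ (the HBV-HDV case) and $\varepsilon=0.03$, the EPSILON value from the commented-out parameter block above (assuming, as the notation suggests, that it is the HV degradation rate):

```python
def mu_critical(epsilon, alpha):
    # Critical DI-production rate: full virus clearance (transcritical
    # bifurcation) occurs when mu exceeds mu_c = 1 - epsilon/alpha.
    return 1.0 - epsilon / alpha

mu_c = mu_critical(0.03, 1.0)
print(round(mu_c, 2))   # 0.97
# If the MU = 0.47 in the parameter block denotes the DI-production rate mu,
# that value is well below mu_c, i.e. subcritical: no full clearance.
print(0.47 < mu_c)      # True
```

Note that the satRNA parameters never enter this computation, in line with the analytical result.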
Indeed, the outcompetition effect becomes stronger as the difference in length between the DI RNAs and the satRNAs increases: for large linear satellites, and in the special case of the HDV virusoid, outcompetition takes place rapidly ($\beta < \gamma$), while for small linear satellites it might take longer ($\beta \approx \gamma$). The model presented here is a minimal one that lacks mechanistic detail and focuses only on replication interactions. An obvious extension of the study of these interesting multi-species viral models would consist of including proteins in the picture: the HV would encode the replication and encapsidation machinery, whereas DI RNAs would simply kidnap these proteins for their own replication and encapsidation. In such a mechanistic model, satellite viruses and satRNAs would not be collapsed into a single category, as we have done here, but represented by two different molecular species, one encoding some protein (satellite viruses) and the other not encoding any factor (satRNAs). Another possible extension of our model would consist of imposing a second layer of complexity involving eco-evolutionary dynamics, \emph{e.g.} a multi-strain SIR-like model. Hyperparasitism could potentially play a key role in the biological control of viral infections \cite{Sandhu2021} by reducing the deleterious impact of the WT virus on its host, also hampering its transmission. Indeed, this principle is the ground for the recent development of antiviral therapies based on the generation of engineered artificial DI RNAs that strongly interfere with the target virus \cite{Notton2014,Tanner2016}. These novel approaches, however, need additional careful theoretical consideration, as recent eco-evolutionary models have shown that introducing a hyperparasite into the original host-parasite system results in a shift of the evolutionarily optimal virulence of the pathogen toward higher values \cite{Sandhu2021}.
Our results suggest that, in the case of highly mutable RNA viruses, the constitutive production of DI RNAs may contribute to avoiding the establishment of a hyperparasite competing for helper virus resources. \end{document}
\begin{document} \title[A class of nonlocal elliptic problems in the half space with a hole] {Existence of positive solution for a class of nonlocal elliptic problems in the half space with a hole} \author{Xing Yi} \thanks{*College of Mathematics and Computer Science, Key Laboratory of High Performance Computing and Stochastic Information Processing (Ministry of Education of China), Hunan Normal University, Changsha, Hunan 410081, P. R. China ([email protected])} \maketitle \vskip 0.3in {\bf Abstract}\quad This work concerns the existence of solutions for the following class of nonlocal elliptic problems \begin{eqnarray}\label{eq:0.1} &&\left\{\begin{array}{l} (-\Delta)^{s} u+u=|u|^{p-2} u \text { in } \Omega_{r} \\ u \geq 0 \quad \text { in }\Omega_{r} \text { and } u \neq 0 \\ u=0 \quad \text { in } \mathbb{R}^{N} \backslash \Omega_{r} \end{array}\right. \end{eqnarray} involving the fractional Laplacian operator $(-\Delta)^{s},$ where $s \in(0,1)$, $N>2 s$, $\Omega_{r}$ is the half space with a hole in $\mathbb{R}^N$ and $p \in\left(2,2_{s}^{*}\right)$. The main technical approach is based on variational and topological methods. \vskip 0.1in \noindent{\it Keywords:}\quad Nonlocal elliptic problems, Positive high energy solution, Half space with a hole \noindent {\bf AMS} classification: 58J05, 35J60. \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0}\Section*{1. Introduction} \setcounter{section}{1}\setcounter{equation}{0} Let $\mathbb{R}_{+}^{N}=\left\{\left(x^{\prime}, x_{N}\right) \in \mathbb{R}^{N-1} \times \mathbb{R} \mid 0<x_{N}<\infty\right\}$ be the upper half space. $\Omega_{r}$ is an unbounded smooth domain such that \[\overline{\Omega_{r}}\subset \mathbb{R}_{+}^{N},\] and \[ \mathbb{R}_{+}^{N}\setminus\overline{\Omega_{r}}\subset B_{\rho}(a_{r})\subset \mathbb{R}_{+}^{N}\ \] with $a_{r}=(a,r)\in \mathbb{R}_{+}^{N}$. Indeed, $\Omega_r$ is the upper half space with a hole.
We consider the following fractional elliptic problem: \begin{eqnarray}\label{eq:1.1} &&\left\{\begin{array}{l} (-\Delta)^{s} u+u=|u|^{p-2} u \text { in } \Omega \\ u \geq 0 \quad \text { in }\Omega_{r} \text { and } u \neq 0 \\ u=0 \quad \text { in } \mathbb{R}^{N} \backslash \Omega \end{array}\right. \end{eqnarray} where $ \Omega=\Omega_r$, $s \in(0,1)$, $N>2 s$, $p \in\left(2,2_{s}^{*}\right)$, where $2_{s}^{*}=\frac{2 N}{N-2 s}$ is the fractional critical Sobolev exponent and $(-\Delta)^{s}$ is the classical fractional Laplace operator. When $s \nearrow 1^{-}$, problem (\ref{eq:1.1}) is related to the following elliptic problem \begin{eqnarray}\label{eq:1.2} -\triangle u+u=|u|^{p-1}u,\quad x\in \Omega, \quad u\in H_{0}^{1}(\Omega). \end{eqnarray} When $\Omega$ is a bounded domain, by applying the compactness of the embedding $H_{0}^{1}(\Omega)\hookrightarrow L^{p}(\Omega)$, $1< p<\frac{2N}{N-2}$, one obtains a positive solution of (\ref{eq:1.2}). If $\Omega$ is an unbounded domain, we cannot obtain a solution of problem (\ref{eq:1.2}) by using the Mountain Pass Theorem directly, because the embedding $H_{0}^{1}(\Omega)\hookrightarrow L^{p}(\Omega)$, $1< p<\frac{2N}{N-2}$, is not compact. However, if $\Omega=\mathbb{R}^{N}$, Berestycki and Lions \cite{21} proved that there is a radial positive solution of equation (\ref{eq:1.2}) by applying the compactness of the embedding $H_{r}^{1}(\mathbb{R}^{N})\hookrightarrow L^{p}(\mathbb{R}^{N})$, $2<p<\frac{2N}{N-2}$, where $H_r^{1}(\mathbb{R}^{N})$ consists of the radially symmetric functions in $H^{1}(\mathbb{R}^{N})$. By P. L. Lions's concentration-compactness principle \cite{7}, there exists a unique positive solution of problem (\ref{eq:1.2}) in $\mathbb{R}^{N}$.
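As a quick numerical illustration of the exponent range, the fractional critical Sobolev exponent $2_{s}^{*}=\frac{2N}{N-2s}$ and the resulting subcritical interval $p\in(2,2_{s}^{*})$ can be tabulated as follows (the sample values of $N$ and $s$ are arbitrary):

```python
def critical_exponent(N, s):
    # fractional critical Sobolev exponent 2_s^* = 2N / (N - 2s),
    # defined for s in (0, 1) and N > 2s
    assert 0 < s < 1 and N > 2 * s
    return 2.0 * N / (N - 2.0 * s)

# e.g. N = 3, s = 1/2 gives 2_s^* = 3, so the subcritical range is p in (2, 3)
for N, s in [(3, 0.5), (2, 0.25), (4, 0.75)]:
    print(N, s, critical_exponent(N, s))
```

As $s \nearrow 1^{-}$ with $N > 2$, the exponent tends to the classical value $\frac{2N}{N-2}$ appearing in the local problem above.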
By the moving plane method, Gidas, Ni and Nirenberg \cite{3} also proved that every positive solution of equation \begin{eqnarray}\label{eq:1.3} -\triangle u+u=|u|^{p-1}u,\quad x\in \mathbb{R}^{N}, \quad u\in H^{1}(\mathbb{R}^{N}) \end{eqnarray} is radially symmetric with respect to some point in $\mathbb{R}^{N}$ and satisfies \begin{eqnarray} u(r)\,r^{\frac{N-1}{2}}e^{r}=\gamma+o(1) \ \text{ as } \ r \rightarrow \infty. \end{eqnarray} Kwong \cite{188} proved that the positive solution of (\ref{eq:1.3}) is unique up to translations. In fact, Esteban and Lions \cite{1} proved that there is no nontrivial solution of equation (\ref{eq:1.2}) when $\Omega$ is an Esteban-Lions domain (for example, $\mathbb{R}_+^3$). Thus, we want to change the topological properties of the domain $\Omega$ in order to look for a solution of problem (\ref{eq:1.2}). Wang \cite{4} proved that if $\rho$ is sufficiently small and $z_{0N}\rightarrow\infty$, then equation (\ref{eq:1.2}) admits a positive higher energy solution in $\mathbb{R}_{+}^{N} \backslash \overline{B_{\rho}\left(z_{0}^{\prime}, z_{0 N}\right)}$. Such problems have been extensively studied in recent years; see, for instance, \cite{9,55} and the references therein. From the above results, we believe that the existence of solutions to equation (\ref{eq:1.2}) is affected by the topological properties of the domain $\Omega$. Recently, the case $s \in(0,1)$ has received special attention, because it involves the fractional Laplacian operator $(-\Delta)^{s}$, which arises in a quite natural way in many different contexts, such as, among others, the thin obstacle problem, optimization, finance, phase transitions, stratified materials, anomalous diffusion, crystal dislocation, soft thin films, semipermeable membranes, flame propagation, conservation laws, ultra-relativistic limits of quantum mechanics, quasigeostrophic flows, multiple scattering, minimal surfaces, materials science and water waves; for more details see \cite{x1,e16,x2,D21,x4}.
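For the reader's convenience, we recall that, up to a normalizing constant $C_{N,s}>0$, the fractional Laplacian of a sufficiently smooth function can be written as the singular integral (see, e.g., \cite{e16})

```latex
(-\Delta)^{s} u(x)=C_{N, s}\, \mathrm{P.V.} \int_{\mathbb{R}^{N}} \frac{u(x)-u(y)}{|x-y|^{N+2 s}}\, d y, \qquad x \in \mathbb{R}^{N},
```

where $\mathrm{P.V.}$ denotes the Cauchy principal value; the same kernel $|x-y|^{-N-2s}$ reappears in the Gagliardo seminorm used throughout Section 2.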
When $ \Omega \subset \mathbb{R}^{N}$ is an exterior domain, i.e.\ an unbounded domain with smooth boundary $\partial \Omega \neq \emptyset$ such that $\mathbb{R}^{N} \backslash \Omega$ is bounded, $s \in(0,1)$, $N>2 s$, $p \in\left(2,2_{s}^{*}\right)$, the above problem has been studied by Alves, Molica Bisci and Torres Ledesma in \cite{21}, where it is proved that (\ref{eq:1.1}) does not have a ground state solution. This fact represents a serious difficulty when dealing with this kind of nonlinear fractional elliptic phenomena. More precisely, the authors analyzed the behavior of Palais-Smale sequences and obtained a precise estimate of the energy levels where the Palais-Smale condition fails, which made it possible to show that problem (\ref{eq:1.1}) has at least one positive solution for $\mathbb{R}^{N} \backslash \Omega$ small enough. A key point in the approach explored in \cite{16,9} is the existence and uniqueness, up to a translation, of a positive solution $Q$ of the limit problem associated with (\ref{eq:1.1}), given by \begin{eqnarray}\label{eq:1.6} (-\Delta)^{s} u+u=|u|^{p-2} u \text { in } \mathbb{R}^{N}, \end{eqnarray} for every $p \in\left(2,2_{s}^{*}\right)$. Moreover, $Q$ is radially symmetric about the origin and monotonically decreasing in $|x|.$ In contrast with the classical elliptic case, the exponential decay at infinity is not used in order to prove the existence of a nonnegative solution for (\ref{eq:1.1}). When $\Omega=\mathbb{R}_+^N$, by the moving plane method, Chen, Li and Ma \cite{11d6} (Theorem 6.8.3, p.~123) proved that there is no nontrivial solution of problem (\ref{eq:1.1}). It is therefore interesting to consider the existence of a high energy solution of problem (\ref{eq:1.1}) in the half space with a hole in $\mathbb{R}^{N}$. \begin{theorem} There are $\rho_{0}>0$ and $r_{0}>0$ such that if $0<\rho\leq\rho_{0}$ and $r \geq r_{0}$, then there is a positive solution of equation (\ref{eq:0.1}).
\end{theorem} The paper is organized as follows. In Section 2, we give some preliminary results. The compactness lemma will be given in Section 3. Finally, we give the proof of Theorem 1.1. \setcounter{equation}{0}\Section{Some preliminary results} For $s \in(0,1)$ and $N>2 s,$ the fractional Sobolev space of order $s$ on $\mathbb{R}^{N}$ is defined by $$ H^{s}\left(\mathbb{R}^{N}\right):=\left\{u \in L^{2}\left(\mathbb{R}^{N}\right): \int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}} \frac{|u(x)-u(z)|^{2}}{|x-z|^{N+2 s}} d z d x<\infty\right\}, $$ endowed with the norm $$ \|u\|_{s}:=\left(\int_{\mathbb{R}^{N}}|u(x)|^{2} d x+\int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}} \frac{|u(x)-u(z)|^{2}}{|x-z|^{N+2 s}} d z d x\right)^{1 / 2} .$$ We recall the fractional version of the Sobolev embeddings (see \cite{q16}). \begin{theorem} Let $s \in(0,1)$. Then there exists a positive constant $C=C(N, s)>0$ such that $$ \|u\|_{L^{2_{s}^{*}}\left(\mathbb{R}^{N}\right)}^{2} \leq C \int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}} \frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2 s}} d y d x, $$ and hence $H^{s}\left(\mathbb{R}^{N}\right) \hookrightarrow L^{q}\left(\mathbb{R}^{N}\right)$ is continuous for all $q \in\left[2,2_{s}^{*}\right].$ Moreover, if $\Theta \subset \mathbb{R}^{N}$ is a bounded domain, we have that the embedding $H^{s}\left(\mathbb{R}^{N}\right) \hookrightarrow L^{q}(\Theta)$ is compact for any $q \in\left[2,2_{s}^{*}\right).$ \end{theorem} Hereafter, we denote by $X_{0}^{s} \subset H^{s}\left(\mathbb{R}^{N}\right)$ the subspace defined by $$ X_{0}^{s}:=\left\{u \in H^{s}\left(\mathbb{R}^{N}\right): u=0 \text { a.e. in } \mathbb{R}^{N} \backslash \Omega\right\}. $$ We endow $X_{0}^{s}$ with the norm $\|\cdot\|_{s}$. Moreover, we introduce the norm $$ \|u\|:=\left(\int_{\Omega_{r}}|u(x)|^{2} d x+\iint_{\mathcal{Q}} \frac{|u(x)-u(z)|^{2}}{|x-z|^{N+2 s}} d z d x\right)^{\frac{1}{2}}, $$ where $\mathcal{Q}:=\mathbb{R}^{2 N} \backslash\left(\Omega_{r}^{c} \times \Omega_{r}^{c}\right)$.
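Note that nothing is lost by restricting the double integral to $\mathcal{Q}$ when $u \in X_{0}^{s}$: splitting $\mathbb{R}^{2N}=\mathcal{Q} \cup\left(\Omega_{r}^{c} \times \Omega_{r}^{c}\right)$ gives

```latex
\int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}} \frac{|u(x)-u(z)|^{2}}{|x-z|^{N+2 s}}\, d z\, d x
=\iint_{\mathcal{Q}} \frac{|u(x)-u(z)|^{2}}{|x-z|^{N+2 s}}\, d z\, d x
+\iint_{\Omega_{r}^{c} \times \Omega_{r}^{c}} \frac{|u(x)-u(z)|^{2}}{|x-z|^{N+2 s}}\, d z\, d x,
```

and the last integral vanishes because $u(x)=u(z)=0$ for a.e. $(x,z) \in \Omega_{r}^{c} \times \Omega_{r}^{c}$.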
We point out that $\|u\|_{s}=\|u\|$ for any $u \in X_{0}^{s}$. Since $\partial \Omega$ is smooth, by [\cite{D21}, Theorem 2.6], we have the following result. \begin{theorem} The space $C_{0}^{\infty}(\Omega)$ is dense in $\left(X_{0}^{s},\|\cdot\|\right).$ \end{theorem} In what follows, we denote by $H^{s}(\Omega)$ the usual fractional Sobolev space endowed with the norm $$ \|u\|_{H^{s}}:=\left(\int_{\Omega}|u(x)|^{2} d x+\int_{\Omega} \int_{\Omega} \frac{|u(x)-u(z)|^{2}}{|x-z|^{N+2 s}} d z d x\right)^{\frac{1}{2}}. $$ Related to these fractional spaces, we have the following properties. {\bf Proposition 2.3.} The following assertions hold true: (i) If $v \in X_{0}^{s}$, we have that $v \in H^{s}(\Omega)$ and $$ \|v\|_{H^{s}} \leq\|v\|_{s}=\|v\| . $$ (ii) Let $\Theta\subset\mathbb{R}^{N}$ be an open set with continuous boundary. Then, there exists a positive constant $\mathfrak{C}=\mathfrak{C}(N, s)$ such that $$ \|v\|_{L^{2_{s}^{*}}(\Theta)}^{2}=\|v\|_{L^{2_{s}^{*}}\left(\mathbb{R}^{N}\right)}^{2} \leq \mathfrak{C} \iint_{\mathbb{R}^{2 N}} \frac{|v(x)-v(z)|^{2}}{|x-z|^{N+2 s}} d z d x $$ for every $v \in X_{0}^{s}$; see [\cite{e16}, Theorem 6.5]. From now on, $M_{\infty}$ denotes the constant \begin{eqnarray}\label{eq:1.s6} M_{\infty}:=\inf \left\{\|u\|_{s}^{2}: u \in H^{s}\left(\mathbb{R}^{N}\right) \text { and } \int_{\mathbb{R}^{N}}|u(x)|^{p} d x=1\right\}, \end{eqnarray} which is positive by Theorem 2.1. Furthermore, for any $v \in H^{s}\left(\mathbb{R}^{N}\right)$ and $z \in \mathbb{R}^{N},$ we set $$ v^{z}(x):=v(x+z). $$ Then, by performing the change of variables $\tilde{x}=x+z$ and $\tilde{y}=y+z,$ it is easily seen that $$\left\|v^{z}\right\|_{s}^{2}=\|v\|_{s}^{2} \text { as well as }\left\|v^{z}\right\|_{L^{p}\left(\mathbb{R}^{N}\right)}=\|v\|_{L^{p}\left(\mathbb{R}^{N}\right)} .$$ Arguing as in \cite{b16}, the following result holds true.
\begin{theorem} Let $\left\{u_{n}\right\} \subset H^{s}\left(\mathbb{R}^{N}\right)$ be a minimizing sequence such that $$ \left\|u_{n}\right\|_{L^{p}\left(\mathbb{R}^{N}\right)}=1 \text { and }\left\|u_{n}\right\|_{s}^{2} \rightarrow M_{\infty} \text { as } n \rightarrow+\infty. $$ Then, there is a sequence $\left\{y_{n}\right\} \subset \mathbb{R}^{N}$ such that $\left\{u_{n}^{y_{n}}\right\}$ has a convergent subsequence, and so, $M_{\infty}$ is attained. \end{theorem} As a byproduct of the above result the next corollary is obtained. {\bf Corollary 1.} There is $v \in H^{s}\left(\mathbb{R}^{N}\right)$ such that $\|v\|_{s}^{2}=M_{\infty}$ and $\|v\|_{L^{p}\left(\mathbb{R}^{N}\right)}=1.$ Let $\varphi$ be a minimizer of $(\ref{eq:1.s6}),$ that is, $$ \varphi \in H^{s}\left(\mathbb{R}^{N}\right), \quad \int_{\mathbb{R}^{N}}|\varphi|^{p} d x=1 \text { and } M_{\infty}=\|\varphi\|_{s}^{2}. $$ Take \begin{eqnarray} &&\xi \in C^{\infty}(\mathbb{R}^{+},\mathbb{R}),\quad \eta\in C^{\infty}(\mathbb{R},\mathbb{R}), \end{eqnarray} such that \[\xi(t)=\left\{ \begin{array}{ll} 0, & 0\leq t\leq \rho,\\[2mm] 1, & t\geq 2\rho, \end{array} \right.\] \[ \eta(t)=\left\{ \begin{array}{ll} 0, & t\leq 0,\\[2mm] 1, & t\geq 1, \end{array} \right.\] and \[0\leq \xi\leq 1,\qquad 0\leq \eta\leq 1. \] Now, we define \[f_{y}(x)=\xi(|x-a_{r}|)\eta(x_{N})\varphi(x-y),\] and \[\Psi_{y}(x)=\frac{f_{y}(x)}{\left\|f_{y}\right\|_{L^{p}\left(\mathbb{R}^{N}\right)}}=c_{y} f_{y}(x), \text { where } c_{y}=\frac{1}{\left\|f_{y}\right\|_{L^{p}\left(\mathbb{R}^{N}\right)}} .\] Throughout this section we endow $X_{0}^{s}$ with the norm $$ \|u\|:=\left(\iint_{\mathcal{Q}} \frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2 s}} d y d x+\int_{\Omega_{r}}|u|^{2} d x\right)^{1 / 2} $$ and denote by $M>0$ the number \begin{eqnarray}\label{eq:1w.sq6} M:=\inf \left\{\|u\|^{2}: u \in X_{0}^{s}, \int_{\Omega_{r}}|u(x)|^{p} d x=1\right\} .
\end{eqnarray} \begin{lemma}\label{lm:2.4} Let $y=(y^{\prime},y_{N})$. Then:\\ (1) $\|f_{y}-\varphi(\cdot-y)\|_{L^{p}(\mathbb{R}^{N})}=o(1)$ as $|y-a_{r}|\rightarrow \infty$ and $y_{N}\rightarrow +\infty$, or as $y_{N} \rightarrow +\infty$ and $\rho \rightarrow 0$; (2) $\|f_{y}-\varphi(\cdot-y)\|=o(1)$ as $|y-a_{r}|\rightarrow \infty$ and $y_{N}\rightarrow +\infty$, or as $y_{N} \rightarrow +\infty$ and $\rho \rightarrow 0$.\label{11} \end{lemma} \begin{proof} Arguing as in \cite{9,4}, we have the following. (i) After the change of variables $z=x-y,$ one has \[ \begin{array}{ll} \ \|f_{y}-\varphi(x-y)\|^{p}_{L^{p}(\mathbb{R}^{N})} &=\int_{\mathbb{R}^{N}}|\xi(|x-a_{r}|)\eta(x_{N})-1|^{p}|\varphi(x-y)|^{p}\ dx\\[2mm] &=\int_{\mathbb{R}^{N}}|\xi(|z+y-a_{r}|)\eta(z_{N}+y_{N})-1|^{p}|\varphi(z)|^{p}\ dz. \end{array} \] Let $g_{y}(z)=|\xi(|z+y-a_{r}|)\eta(z_{N}+y_{N})-1|^{p}|\varphi(z)|^{p}.$ Since $|y-a_{r}|\rightarrow \infty$ and $y_{N}\rightarrow +\infty$, it follows that $$ g_{y}(z) \rightarrow 0 \text { a.e. in } \mathbb{R}^{N}. $$ Now, taking into account that $$ g_{y}(z)=|\xi(|z+y-a_{r}|)\eta(z_{N}+y_{N})-1|^{p}|\varphi(z)|^{p} \leq 2^{p}|\varphi(z)|^{p} \in L^{1}\left(\mathbb{R}^{N}\right), $$ Lebesgue's dominated convergence theorem yields $$ \int_{\mathbb{R}^{N}} g_{y}(z) d z \rightarrow 0 \text { as } |y-a_{r}|\rightarrow \infty, \ y_{N}\rightarrow +\infty.
$$ Therefore $$ \|f_{y}-\varphi(x-y)\|_{L^{p}(\mathbb{R}^{N})}=o(1) \text { as } |y-a_{r}|\rightarrow \infty, \ y_{N}\rightarrow +\infty. $$ Moreover, \[ \begin{array}{ll} \ \|f_{y}-\varphi(x-y)\|^{p}_{L^{p}(\mathbb{R}^{N})} &=\int_{B_{2\rho}(a_{r})\cup\{x_{N}\leq 1\}}|\xi(|x-a_{r}|)\eta(x_{N})-1|^{p}|\varphi(x-y)|^{p}\ dx\\[2mm] &\leq\int_{B_{2\rho}(a_{r})}|\xi(|x-a_{r}|)\eta(x_{N})-1|^{p}|\varphi(x-y)|^{p}\ dx\\[2mm] &+\int_{\{x_{N}\leq 1\}}|\xi(|x-a_{r}|)\eta(x_{N})-1|^{p}|\varphi(x-y)|^{p}\ dx, \end{array} \] where \[ \begin{array}{ll} \ \int_{B_{2\rho}(a_{r})}|\xi(|x-a_{r}|)\eta(x_{N})-1|^{p}|\varphi(x-y)|^{p}\ dx &\leq C\, \mathrm{mes}\, B_{2\rho}(a_{r})\max_{x\in\mathbb{R}^{N}}\varphi^{p}(x)\rightarrow 0 \ \text{ as }\ \rho \rightarrow 0, \end{array} \] and \[ \begin{array}{ll} \ \int_{\{x_{N}\leq 1\}}|\xi(|x-a_{r}|)\eta(x_{N})-1|^{p}|\varphi(x-y)|^{p}\ dx &=\int_{\{x_{N}\leq 1\}}|\eta(x_{N})-1|^{p}|\varphi(x-y)|^{p}\ dx\\[2mm] &=\int_{\{z_{N}\leq 1-y_{N}\}}|\eta(z_{N}+y_{N})-1|^{p}|\varphi(z)|^{p}\ dz\\[2mm] & \rightarrow 0 \ \text{ as }\ y_{N}\rightarrow +\infty,\ \rho \rightarrow 0. \end{array} \] Therefore $$ \|f_{y}-\varphi(x-y)\|_{L^{p}(\mathbb{R}^{N})}=o(1) \text { as } y_{N} \rightarrow +\infty \text { and } \rho \rightarrow 0. $$ (ii) Now, we claim that $$ \int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}} \frac{\left|(\xi(|x-a_{r}|)\eta(x_{N})-1) \varphi\left(x-y\right)-(\xi(|z-a_{r}|)\eta(z_{N})-1) \varphi\left(z-y\right)\right|^{2}}{|x-z|^{N+2 s}} d z d x=o(1). $$ Indeed, for a function $u$, let $$ \Upsilon_{u}(x, z):=\frac{u(x)-u(z)}{|x-z|^{\frac{N}{2}+s}}. $$ Then, after the change of variables $\tilde{x}=x-y$ and $\tilde{y}=z-y,$ one has \[\begin{array}{ll} \int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}} \frac{\left|(\xi(|x-a_{r}|)\eta(x_{N})-1) \varphi\left(x-y\right)-(\xi(|z-a_{r}|)\eta(z_{N})-1) \varphi\left(z-y\right)\right|^{2}}{|x-z|^{N+2 s}} d z d x & =\int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}}\left|\Phi_{n}(x, z)\right|^{2} d z d x.
\end{array} \] where $$ \Phi_{n}(x, z):=\frac{\left(\xi(|x+y-a_{r}|)\eta(x_{N}+y_{N})-1\right) \varphi(x)-\left(\xi(|z+y-a_{r}|)\eta(z_{N}+y_{N})-1\right) \varphi(z)}{|x-z|^{\frac{N}{2}+s}}. $$ Recalling that $|y-a_{r}|\rightarrow \infty$ and $y_{N}\rightarrow +\infty$, we also have $$ \Phi_{n}(x, z) \rightarrow 0 \text { a.e. in } \mathbb{R}^{N} \times \mathbb{R}^{N}. $$ On the other hand, a direct application of the mean value theorem yields \begin{eqnarray}\label{ezq:0b.1} \begin{aligned} \left|\Phi_{n}(x, z)\right| & \leq\left|\xi(|x+y-a_{r}|)\eta(x_{N}+y_{N})-1\right| \left|\Upsilon_{\varphi}(x, z)\right|+|\varphi(z)|\left|\Upsilon_{1-\xi\eta}\left(x+y, z+y\right)\right| \\ & \leq\left|\Upsilon_{\varphi}(x, z)\right|+\frac{C|\varphi(z)|}{|x-z|^{\frac{N}{2}+s-1}} \chi_{B(z, 1)}(x)+\frac{2|\varphi(z)|}{|x-z|^{\frac{N}{2}+s}} \chi_{B^{c}(z, 1)}(x), \end{aligned} \end{eqnarray} for almost every $(x, z) \in \mathbb{R}^{N} \times \mathbb{R}^{N}.$ Now, it is easily seen that the right hand side in (\ref{ezq:0b.1}) is $L^{2}$-integrable. Thus, by Lebesgue's dominated convergence theorem and (i), it follows that $$\|f_{y}-\varphi(x-y)\|=o(1) \text { as } |y-a_{r}|\rightarrow \infty \text { and } y_{N}\rightarrow +\infty.$$ By (i), we also have $\|f_{y}-\varphi(x-y)\|_{L^{2}(\mathbb{R}^{N})}=o(1)$ as $|y-a_{r}|\rightarrow \infty$ and $y_{N}\rightarrow +\infty$, or as $y_{N} \rightarrow +\infty$ and $\rho \rightarrow 0$. Moreover, \[\begin{array}{ll} \ \|f_{y}-\varphi(x-y)\|^{2} &= \int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}} \frac{\left|\left(f_{y}-\varphi(\cdot-y)\right)(x)-\left(f_{y}-\varphi(\cdot-y)\right)(z)\right|^{2}}{|x-z|^{N+2 s}} d z d x \\[2mm] &+\int_{\mathbb{R}^{N}}\left|f_{y}(x)-\varphi(x-y)\right|^{2} d x .
\end{array} \] Setting $$ I_{1}:=\iint_{\mathbb{R}^{2 N}} \frac{\left|\xi(|x-a_{r}|)\eta(x_{N})-\xi(|z-a_{r}|)\eta(z_{N})\right|^{2}|\varphi(x-y)|^{2}}{|x-z|^{N+2 s}} d z d x $$ and $$ I_{2}:=\iint_{\mathbb{R}^{2 N}} \frac{\left|\xi(|z-a_{r}|)\eta(z_{N})-1\right|^{2}\left|\varphi(x-y)-\varphi(z-y)\right|^{2}}{|x-z|^{N+2 s}} d z d x, $$ the following inequality holds: $$ \int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}} \frac{\left|\left(f_{y}-\varphi(\cdot-y)\right)(x)-\left(f_{y}-\varphi(\cdot-y)\right)(z)\right|^{2}}{|x-z|^{N+2 s}} d z d x \leq I_{1}+I_{2}. $$ Moreover, by the definition of $\xi,$ we also have $$ \left|\xi(|z+y-a_{r}|)\eta(z_{N}+y_{N})-1\right|^{2} \frac{|\varphi(x)-\varphi(z)|^{2}}{|x-z|^{N+2 s}} \leq 4 \frac{|\varphi(x)-\varphi(z)|^{2}}{|x-z|^{N+2 s}} \in L^{1}\left(\mathbb{R}^{N} \times \mathbb{R}^{N}\right) $$ and $$\left|\xi(|z+y-a_{r}|)\eta(z_{N}+y_{N})-1\right|^{2} \frac{|\varphi(x)-\varphi(z)|^{2}}{|x-z|^{N+2 s}} \rightarrow 0 \text { a.e. in } \mathbb{R}^{N} \times \mathbb{R}^{N}$$ as $y_{N} \rightarrow +\infty$ and $\rho \rightarrow 0$. Hence, Lebesgue's theorem ensures that $$ I_{2}\rightarrow 0 \text { as } y_{N} \rightarrow +\infty \text { and } \rho \rightarrow 0. $$ Now, by [\cite{m21}, Lemma 2.3], for every $y \in \mathbb{R}^{N}$, one has $$ I_{1}=\iint_{\mathbb{R}^{2 N}} \frac{\left|\xi(|x-a_{r}|)\eta(x_{N})-\xi(|z-a_{r}|)\eta(z_{N})\right|^{2}|\varphi(x-y)|^{2}}{|x-z|^{N+2 s}} d z d x \rightarrow 0 \text { as } \rho \rightarrow 0. $$ Therefore $$ \|f_{y}-\varphi(x-y)\|=o(1) \text { as } y_{N} \rightarrow +\infty \text { and } \rho \rightarrow 0. $$ \end{proof} \begin{lemma} The equality $M_{\infty}=M$ holds true. Hence, there is no $u \in X_{0}^{s}$ such that $\|u\|^{2}=M$ and $\|u\|_{L^{p}\left(\mathbb{R}^{N}\right)}=1$, and so, the minimization problem (\ref{eq:1w.sq6}) does not have a solution. \end{lemma} \begin{proof} The proof is similar to that in \cite{4}, and we only give a sketch here.
By Proposition 2.3, part (i), it follows that $$ M_{\infty} \leq M. $$ Take a sequence $\{y^{n}\}$ in $\Omega_{r}$ such that \[|y^{n}-a_{r}|\rightarrow \infty \text { and } y_{N}^{n} \rightarrow +\infty \text { as } n\rightarrow \infty.\] Then, by Lemma \ref{lm:2.4}, we have \[\|f_{y^{n}}-\varphi(x-y^{n})\|_{L^{p}(\mathbb{R}^{N})}=o(1) \text { as } |y^{n}-a_{r}|\rightarrow \infty \text { and } y^{n}_{N}\rightarrow +\infty,\] \[\|f_{y^{n}}-\varphi(x-y^{n})\|_{s}=o(1) \text { as } |y^{n}-a_{r}|\rightarrow \infty \text { and } y^{n}_{N}\rightarrow +\infty,\] \[c_{y^{n}}=\frac{1}{\left\|f_{y^{n}}\right\|_{L^{p}\left(\mathbb{R}^{N}\right)}} \rightarrow 1 \text { as } |y^{n}-a_{r}|\rightarrow \infty \text { and } y^{n}_{N}\rightarrow +\infty.\] Now, since $\varphi$ is a minimizer of $(\ref{eq:1.s6}),$ one has $$ \left\|f_{y^{n}}\right\|_{s}^{2}=\left\|\varphi\left(\cdot-y^{n}\right)\right\|_{s}^{2}+o_{n}(1)=\|\varphi\|_{s}^{2}+o_{n}(1)=M_{\infty}+o_{n}(1). $$ Similar arguments ensure that $$ \left\|\Psi_{y^{n}}\right\|_{s}^{2}=\left\|\Psi_{y^{n}}\right\|^{2}=M_{\infty}+o_{n}(1) $$ and $$\left\|\Psi_{y^{n}}\right\|_{L^{p}\left(\mathbb{R}^{N}\right)}=1.$$ Hence $M \leq M_{\infty}$, and we conclude that $M= M_{\infty}$. Now, suppose by contradiction that there is $v_{0} \in X_{0}^{s}$ satisfying $$ \left\|v_{0}\right\|^{2}=M \text { and }\left\|v_{0}\right\|_{L^{p}(\Omega)}=1. $$ Without loss of generality, we can assume that $v_{0} \geq 0$ in $\Omega$. Note that, by $M= M_{\infty}$, since $v_{0} \in H^{s}\left(\mathbb{R}^{N}\right)$ and $\left\|v_{0}\right\|=\left\|v_{0}\right\|_{s},$ it follows that $v_{0}$ is a minimizer for $(\ref{eq:1.s6}),$ and so, a solution of the problem \begin{eqnarray}\label{eq:0.v1} \left\{\begin{aligned} (-\Delta)^{s} u+u &=M_{\infty} u^{p-1} \text { in } \mathbb{R}^{N} \\ u & \in H^{s}\left(\mathbb{R}^{N}\right) . \end{aligned}\right. \end{eqnarray} Therefore, by the maximum principle we get that $v_{0}>0$ in $\mathbb{R}^{N}$, which is impossible, because $v_{0}=0$ in $\mathbb{R}^{N} \backslash \Omega_{r}$. This completes the proof.
\end{proof} \setcounter{equation}{0}\Section{A compactness lemma} In this section we prove a compactness result involving the energy functional $I: X_{0}^{s} \rightarrow \mathbb{R}$ associated to the main problem (\ref{eq:0.1}) and given by $$ I(u):=\frac{1}{2}\left(\iint_{\mathcal{Q}} \frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2 s}} d y d x+\int_{\Omega_{r}}|u|^{2} d x\right)-\frac{1}{p} \int_{\Omega_{r}}|u|^{p} d x. $$ In order to do this, we consider the problem \begin{eqnarray}\label{eq:0.x1} \left\{\begin{aligned} (-\Delta)^{s} u+u &=|u|^{p-2} u \text { in } \mathbb{R}^{N} \\ u & \in H^{s}\left(\mathbb{R}^{N}\right), \end{aligned}\right. \end{eqnarray} whose energy functional $I_{\infty}: H^{s}\left(\mathbb{R}^{N}\right) \rightarrow \mathbb{R}$ has the form $$ I_{\infty}(u):=\frac{1}{2}\left(\int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}} \frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2 s}} d y d x+\int_{\mathbb{R}^{N}}|u|^{2} d x\right)-\frac{1}{p} \int_{\mathbb{R}^{N}}|u|^{p} d x. $$ With the above notations we are able to prove the following compactness result. \begin{lemma}\label{eq:0cb.1} Let $\left\{u_{n}\right\} \subset X_{0}^{s}$ be a sequence such that \begin{eqnarray}\label{eq:0bx.1} I\left(u_{n}\right) \rightarrow c \text { and } I^{\prime}\left(u_{n}\right) \rightarrow 0 \text { as } n \rightarrow \infty. \end{eqnarray} Then there exist a nonnegative integer $k$, $k$ sequences $\left\{y_{n}^{i}\right\}$ of points of the form $\left(x_{n}^{\prime}, m_{n}+1 / 2\right)$ for integers $m_{n}$, $i=1,2, \cdots, k$, a function $u^{0} \in X_{0}^{s}$ solving equation (\ref{eq:0.1}), and nontrivial functions $u^{1}, \cdots, u^{k}$ in $H^{s}\left(\mathbb{R}^{N}\right)$ solving equation (\ref{eq:0.x1}).
Moreover, there is a subsequence $\left\{u_{n}\right\}$ satisfying $$ \begin{array}{l} \text { (1) } u_{n}(x)=u^{0}(x)+u^{1}\left(x-x_{n}^{1}\right)+\cdots+u^{k}\left(x-x_{n}^{k}\right)+o(1) \text { strongly, where } x_{n}^{i}=y_{n}^{1}+\cdots+y_{n}^{i} \rightarrow \infty,\\ \quad\ i=1,2, \cdots, k; \\ \text { (2) }\left\|u_{n}\right\|^{2}=\left\|u^{0}\right\|_{\Omega_{r}}^{2}+\left\|u^{1}\right\|^{2}+\cdots+\left\|u^{k}\right\|^{2}+o(1); \\ \text { (3) } I\left(u_{n}\right)=I\left(u^{0}\right)+I_{\infty}\left(u^{1}\right)+\cdots+I_{\infty}\left(u^{k}\right)+o(1). \end{array} $$ If $u_{n} \geqslant 0$ for $n=1,2, \cdots,$ then $u^{1}, \cdots, u^{k}$ can be chosen as positive solutions, and $u^{0} \geqslant 0$. \end{lemma} \begin{proof} See \cite{21,4}. \end{proof} {\bf Corollary 2.} Let $\left\{u_{n}\right\} \subset M_{\Omega_{r}}$ satisfy $\|u_{n}\|_{\Omega_{r}}^{2}=c+o(1)$ and $M<c<2^{(p-2) / p} M.$ Then $\left\{u_{n}\right\}$ contains a strongly convergent subsequence. \begin{proof} See \cite{21,4}. \end{proof} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0}\Section{Proof of Theorem 1.1} Set $$ \chi(t)=\left\{\begin{array}{ll} 1 & \text { if } 0 \leqslant t \leqslant 1, \\ \frac{1}{t} & \text { if } 1 \leqslant t<\infty, \end{array}\right. $$ and define $\beta: H^{s}\left(\mathbb{R}^{N}\right) \rightarrow \mathbb{R}^{N}$ \cite{4} by $$ \beta(u)=\int_{\mathbb{R}^{N}} u^{2}(x) \chi(|x|) x d x. $$ For $r \geqslant r_{1}$, let $$ \begin{array}{l}V_{r}=\left\{u \in X_{0}^{s}: \int_{\Omega_{r}}|u|^{p} d x=1,\ \beta(u)=a_{r}\right\}, \\ c_{r}=\inf _{u \in V_{r}}\|u\|_{\Omega_{r}}^{2}.\end{array} $$ \begin{lemma} $c_{r}>M$. \end{lemma} {\bf Proof.} It is easy to see that $c_{r} \geqslant M.$ Suppose, by contradiction, that $c_{r}=M.$ Take a sequence $\left\{v_{m}\right\} \subset X_{0}^{s}$ such that
$$ \begin{array}{l} \left\|v_{m}\right\|_{L^{p} \left(\Omega_{r}\right)}=1, \ \beta\left(v_{m}\right)=a_{r} \quad \text { for } \quad m=1,2, \cdots, \\ \left\|v_{m}\right\|^{2}=M+o(1). \end{array} $$ Let $u_{m}=M^{1 /(p-2)} v_{m}$ for $m=1,2, \cdots$. Then $$I^{\prime}\left(u_{m}\right)=o_{m}(1) \text { in }\left(X_{0}^{s}\right)^{*}$$ and $$I\left(u_{m}\right)=\left(\frac{1}{2}-\frac{1}{p}\right) M^{\frac{p}{p-2}}+o_{m}(1).$$ Since the minimization problem (\ref{eq:1w.sq6}) has no solution, $\left\{u_{m}\right\}$ does not contain any convergent subsequence. By Lemma \ref{eq:0cb.1} there is a sequence $\left\{x_{m}\right\}$ of the form $\left(x_{m}^{\prime}, m+\frac{1}{2}\right)$ for integers $m$ such that $$ \begin{array}{c} \left|x_{m}\right| \longrightarrow \infty, \\ u_{m}(x)=\varphi\left(x-x_{m}\right)+o(1) \text { strongly. } \end{array} $$ Since $\varphi$ is radially symmetric, we may take $m$ to be positive. Next, we consider the following sets: $$ \mathbb{R}_{+}^{N}:=\left\{x \in \mathbb{R}^{N}:\left\langle x, x_{m}\right\rangle>0\right\} \text { and }\mathbb{R}_{-}^{N}:=\mathbb{R}^{N} \backslash\mathbb{R}_{+}^{N}. $$ We may assume that $\left|x_{m}\right| \geqslant 4$ for $m=1,2, \cdots$. Now $$ \begin{aligned} \left\langle\beta\left(\varphi\left(x-x_{m}\right)\right), x_{m}\right\rangle=& \int_{\mathbb{R}^{N}} \varphi^{2}\left(x-x_{m}\right) \chi(|x|)\left\langle x, x_{m}\right\rangle d x \\ =& \int_{\mathbb{R}_{+}^{N}} \varphi^{2}\left(x-x_{m}\right) \chi(|x|)\left\langle x, x_{m}\right\rangle d x \\ &+\int_{\mathbb{R}_{-}^{N}} \varphi^{2}\left(x-x_{m}\right) \chi(|x|)\left\langle x, x_{m}\right\rangle d x \\ \geqslant & \int_{B_{1}\left(x_{m}\right)} \varphi^{2}\left(x-x_{m}\right) \chi(|x|)\left\langle x, x_{m}\right\rangle d x \\ &+\int_{\mathbb{R}_{-}^{N}} \varphi^{2}\left(x-x_{m}\right) \chi(|x|)\left\langle x, x_{m}\right\rangle d x.
\end{aligned} $$ Note that there are $c_{1}>0, c_{2}>0$ such that for $x \in B_{1}\left(x_{m}\right)$ we have $$ \begin{array}{l} \varphi^{2}\left(x-x_{m}\right) \geqslant c_{1}, \\ \quad\left\langle x, x_{m}\right\rangle \geqslant c_{2}|x|\left|x_{m}\right| \text { for } m=1,2, \cdots . \end{array} $$ Thus $$ \begin{aligned} \int_{B_{1}\left(x_{m}\right)} \varphi^{2}\left(x-x_{m}\right) \chi(|x|)\left\langle x, x_{m}\right\rangle d x & \geqslant c_{1} c_{2} \int_{B_{1}\left(x_{m}\right)} \chi(|x|)|x|\left|x_{m}\right| d x \\ & \geqslant c_{3}\left|x_{m}\right|^{N+1}, \quad c_{3}>0 \quad \text { a constant. } \end{aligned} $$ Recalling that for each $x\in\mathbb{R}_{-}^{N}$, $$ \left|x-x_{m}\right| \geq|x|, $$ it follows that $$ \left|\varphi\left(x-x_{m}\right)\right|^{2} \chi(|x|)|x| \leq R|\varphi(|x|)|^{2} \in L^{1}\left(\mathbb{R}^{N}\right)\quad(R>0) $$ (see \cite{21}, Lemma 4.3). This fact, combined with the limit $$ \varphi\left(x-x_{m}\right) \rightarrow 0 \text { as }\left|x_{m}\right| \rightarrow+\infty, $$ implies that $$ \int_{\mathbb{R}_{-}^{N}}\left|\varphi\left(x-x_{m}\right)\right|^{2} \chi(|x|)|x| d x=o_{m}(1). $$ We conclude that $$ \begin{aligned} M^{1 /(p-2)}\left|a_{r}\right| & \geqslant\left\langle\beta\left(u_{m}\right), \frac{x_{m}}{\left|x_{m}\right|}\right\rangle \\ &=\left\langle\beta\left(\varphi\left(x-x_{m}\right)\right), \frac{x_{m}}{\left|x_{m}\right|}\right\rangle+o(1) \\ & \geqslant c_{3}\left|x_{m}\right|^{N}+o(1), \end{aligned} $$ a contradiction. Thus $c_{r}>M$. {\bf Remark 1.} By Lemma \ref{lm:2.4}(1), there is $r_{1}>0$ such that $$ \frac{1}{2} \leqslant\left\|f_{y}\right\|_{L^{p}\left(\Omega_{r}\right)} \leqslant \frac{3}{2} $$ for $r \geqslant r_{1}$, $\left|y-a_{r}\right| \geqslant r / 2$ and $y_{N} \geqslant r / 2$. {\bf Remark 2.} By Lemma \ref{lm:2.4}(2), there is $r_{2} \geqslant r_{1}$ such that $M<\left\|\Psi_{y}\right\|^{2}<\frac{c_{r}+M}{2}$ for $r \geqslant r_{2}$, $\left|y-a_{r}\right| \geqslant r / 2$ and $y_{N} \geqslant r / 2$.
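The weight $\chi$ and the barycenter-type map $\beta$ introduced in this section are easy to experiment with numerically; a minimal one-dimensional sketch (the grid, step size and Gaussian test bump are illustrative choices, not taken from the paper):

```python
import math

def chi(t):
    # chi(t) = 1 on [0, 1] and 1/t on [1, infinity)
    return 1.0 if t <= 1.0 else 1.0 / t

def beta(u, grid, h):
    # 1-D analogue of beta(u) = int u(x)^2 chi(|x|) x dx, as a Riemann sum
    return sum(u(x) ** 2 * chi(abs(x)) * x * h for x in grid)

h = 0.01
grid = [-20.0 + i * h for i in range(4001)]   # uniform grid on [-20, 20]
bump = lambda x: math.exp(-(x - 5.0) ** 2)    # mass concentrated near x = 5

b = beta(bump, grid, h)
print(b > 0)   # True: beta points toward where the mass sits
```

The truncation $\chi$ keeps the weight $\chi(|x|)\,x$ bounded, so $\beta$ is well defined on all of $H^{s}(\mathbb{R}^{N})$ while still recording the direction in which the mass of $u^{2}$ concentrates.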
\begin{lemma}\label{eq:1.f3} There is $r_{3} \geqslant r_{2}$ such that if $r \geqslant r_{3},$ then $$ \left\langle\beta\left(\Psi_{y}\right), y\right\rangle>0 \quad \text { for } \quad y \in \partial\left(B_{r / 2}\left(a_{r}\right)\right). $$ \end{lemma} \begin{proof} By Lemma \ref{lm:2.4}, we have $2 / 3 \leqslant c_{y} \leqslant 2$. For $r \geqslant r_{2},$ let $$ \begin{aligned} A\left(\tfrac{3}{8} r, \tfrac{5}{8} r\right) &=\left\{x \in \mathbb{R}^{N} \;\middle|\; \frac{3}{8} r \leqslant\left|x-a_{r}\right| \leqslant \frac{5}{8} r\right\}, \\ \mathbb{R}_{+}^{N}(y) &=\left\{x \in \mathbb{R}^{N} \mid\langle x, y\rangle>0\right\}, \\ \mathbb{R}_{-}^{N}(y) &=\left\{x \in \mathbb{R}^{N} \mid\langle x, y\rangle<0\right\}. \end{aligned} $$ Then $$\begin{aligned} \left\langle\beta\left(\Psi_{y}\right), y\right\rangle=c_{y} &\left[\int_{\mathbb{R}_{+}^{N}(y)} \xi^{2}\left(\left|x-a_{r}\right|\right) \eta^{2}\left(x_{N}\right) \varphi^{2}(x-y) \chi(|x|)\langle x, y\rangle d x\right.\\ &\left.+\int_{\mathbb{R}_{-}^{N}(y)} \xi^{2}\left(\left|x-a_{r}\right|\right) \eta^{2}\left(x_{N}\right) \varphi^{2}(x-y) \chi(|x|)\langle x, y\rangle d x\right] \\ \geqslant \frac{2}{3} &\left[\int_{A\left(\tfrac{3}{8} r, \tfrac{5}{8} r\right)} \varphi^{2}(x-y) \chi(|x|)\langle x, y\rangle d x\right.\\ &\left.+\int_{\mathbb{R}_{-}^{N}(y)} \varphi^{2}(x-y) \chi(|x|)\langle x, y\rangle d x\right]. \end{aligned}$$ Moreover, $$\begin{aligned} \int_{A\left(\tfrac{3}{8} r, \tfrac{5}{8} r\right)} \varphi^{2}(x-y) \chi(|x|)\langle x, y\rangle d x & \geqslant c_{6} \int_{A\left(\tfrac{3}{8} r, \tfrac{5}{8} r\right)} \chi(|x|)|x||y| d x \quad \text { for some } c_{6}>0 \\ & \geqslant c_{6}|y|\left[\left(\frac{5}{8} r\right)^{N}-\left(\frac{3}{8} r\right)^{N}\right] \\ & \geqslant c_{7} r^{N+1} \quad \text { for some } c_{7}>0, \end{aligned}$$ while $$ \int_{\mathbb{R}_{-}^{N}(y)} \varphi^{2}(x-y) \chi(|x|)\langle x, y\rangle d x =o(1) \quad \text { as } r \rightarrow \infty. $$ Therefore, there is $r_{3} \geqslant r_{2}$ such that if $r \geqslant r_{3}$ and $\left|y-a_{r}\right|=r / 2$, then $$ \left\langle\beta\left(\Psi_{y}\right), y\right\rangle \geqslant c_{7} r^{N+1}-o(1)>0. 
$$ This completes the proof. \end{proof} By Lemma \ref{lm:2.4} and Lemma \ref{eq:1.f3}, we may fix $\rho_{0}>0$ and $r_{0} \geqslant r_{3}$ such that if $0<\rho \leqslant \rho_{0}$ and $r \geqslant r_{0}$, then $\left\|\Psi_{y}\right\|_{\Omega_{r}}^{2}<2^{(p-2) / p} M$ for $y \in \overline{B_{r / 2}\left(a_{r}\right)} .$ From now on, we fix such $\rho_{0}$ and $r_{0}$, and assume $r \geqslant r_{0} .$ Let $$ \begin{array}{l} B=\left\{\Psi_{y} \mid \left|y-a_{r}\right| \leqslant \frac{r}{2}\right\}, \\ \Gamma=\left\{h \in C\left(V_{r}, V_{r}\right) \mid h(u)=u \quad \text { if } \quad\|u\|^{2}<\frac{c_{r}+M}{2}\right\}. \end{array} $$ \begin{lemma} $$h(B) \cap V_{r} \neq \emptyset \text { for each } h \in \Gamma.$$ \end{lemma} \begin{proof} Let $h \in \Gamma$ and let $H(x)=\beta\left(h\left(\Psi_{x}\right)\right)$, so that $H \colon \mathbb{R}^{N} \rightarrow \mathbb{R}^{N}$. Consider the homotopy, for $0 \leqslant t \leqslant 1$, $$ F(t, x)=(1-t) H(x)+t I(x) \quad \text { for } \quad x \in \mathbb{R}^{N}, $$ where $I$ denotes the identity. If $x \in \partial\left(B_{r / 2}\left(a_{r}\right)\right),$ then, by Remark 2 and Lemma \ref{eq:1.f3}, $$ \begin{array}{c} \left\langle\beta\left(\Psi_{x}\right), x\right\rangle>0, \\ M<\left\|\Psi_{x}\right\|^{2}<\frac{c_{r}+M}{2}. \end{array} $$ In particular $h\left(\Psi_{x}\right)=\Psi_{x}$, and hence $$ \begin{aligned} \langle F(t, x), x\rangle &=\langle(1-t) H(x), x\rangle+\langle t x, x\rangle \\ &=(1-t)\left\langle\beta\left(\Psi_{x}\right), x\right\rangle+t\langle x, x\rangle \\ &>0. \end{aligned} $$ Thus $F(t, x) \neq 0$ for $x \in \partial\left(B_{r / 2}\left(a_{r}\right)\right) .$ By the homotopy invariance of the degree, $$ d\left(H, B_{r / 2}\left(a_{r}\right), a_{r}\right)=d\left(I, B_{r / 2}\left(a_{r}\right), a_{r}\right)=1. $$ Hence there is $x \in B_{r / 2}\left(a_{r}\right)$ such that $$ a_{r}=H(x)=\beta\left(h\left(\Psi_{x}\right)\right). $$ Thus $h(B) \cap V_{r} \neq \emptyset$ for each $h \in \Gamma$. 
Now we are in a position to prove Theorem A. Consider the class of mappings $$ F=\left\{h \in C\left(\overline{B_{r / 2}\left(a_{r}\right)}, H^{1}\left(\mathbb{R}^{N}\right)\right) :\left.h\right|_{\partial B_{r / 2}\left(a_{r}\right)}(y)=\Psi_{y}\right\} $$ and set $$ c=\inf _{h \in F} \sup _{y \in B_{r / 2}\left(a_{r}\right)}\|h(y)\|_{\Omega_{r}}^{2}. $$ It follows from the above lemmas, with the appropriate choice of $r$, that $$ M<c_{r}=\inf _{u \in V_{r}}\|u\|_{\Omega_{r}}^{2} \leqslant c<2^{(p-2) / p} M $$ and $$ \max _{y \in \partial B_{r / 2}\left(a_{r}\right)}\|h(y)\|_{\Omega_{r}}^{2}<\max _{y \in B_{r / 2}\left(a_{r}\right)}\|h(y)\|_{\Omega_{r}}^{2}. $$ Theorem A then follows by applying the version of the mountain pass theorem from Brezis-Nirenberg \cite{d6}. \end{proof} \vskip 0.3in \end{document}
\begin{document} \title{On finite factors of centralizers of parabolic subgroups in Coxeter groups\footnotetext{MSC2000: 20F55 (primary), 20E34 (secondary)}} \begin{abstract} It has been known that the centralizer $Z_W(W_I)$ of a parabolic subgroup $W_I$ of a Coxeter group $W$ is a split extension of a naturally defined reflection subgroup by a subgroup defined by a $2$-cell complex $\mathcal{Y}$. In this paper, we study the structure of $Z_W(W_I)$ further and show that, if $I$ has no irreducible components of type $A_n$ with $2 \leq n < \infty$, then every element of finite irreducible components of the inner factor is fixed by a natural action of the fundamental group of $\mathcal{Y}$. This property has an application to the isomorphism problem in Coxeter groups. \end{abstract} \section{Introduction} \label{sec:intro} A pair $(W,S)$ of a group $W$ and its (possibly infinite) generating set $S$ is called a \emph{Coxeter system} if $W$ admits the following presentation \begin{displaymath} W=\langle S \mid (st)^{m(s,t)}=1 \mbox{ for all } s,t \in S \mbox{ with } m(s,t)<\infty \rangle \enspace, \end{displaymath} where $m \colon (s,t) \mapsto m(s,t) \in \{1,2,\dots\} \cup \{\infty\}$ is a symmetric mapping in $s,t \in S$ with the property that we have $m(s,t)=1$ if and only if $s=t$. A group $W$ is called a \emph{Coxeter group} if $(W,S)$ is a Coxeter system for some $S \subseteq W$. Since Coxeter systems and some associated objects, such as root systems, appear frequently in various topics of mathematics, algebraic or combinatorial properties of Coxeter systems and those associated objects have been investigated very well, forming a long history and establishing many beautiful theories (see e.g., \cite{Hum} and references therein). For example, it has been well known that, given an arbitrary Coxeter system $(W,S)$, the mapping $m$ by which the above group presentation defines the same group $W$ is uniquely determined. 
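A standard concrete instance of this presentation (a classical fact, added here only for illustration and not part of the original text) is the symmetric group: taking $S$ to be the set of adjacent transpositions realizes $S_{n+1}$ as the Coxeter group of type $A_n$.

```latex
% Classical example: the symmetric group S_{n+1}, generated by the adjacent
% transpositions s_i = (i, i+1), is a Coxeter group with
%   m(s_i, s_i) = 1,  m(s_i, s_{i+1}) = 3,  m(s_i, s_j) = 2 for |i - j| >= 2:
\begin{displaymath}
S_{n+1} \cong \bigl\langle s_1, \dots, s_n \bigm|
  s_i^2 = (s_i s_{i+1})^3 = (s_i s_j)^2 = 1 \ (|i-j| \geq 2) \bigr\rangle \enspace.
\end{displaymath}
```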
In recent decades, not only the properties of a Coxeter group $W$ associated to a specific generating set $S$, but also the group-theoretic properties of an arbitrary Coxeter group $W$ itself have been studied well. One of the recent main topics in the study of group-theoretic properties of Coxeter groups is the \emph{isomorphism problem}, that is, the problem of determining which of the Coxeter groups are isomorphic to each other as abstract groups. In other words, the problem is to investigate the possible \lq\lq types'' of generating sets $S$ for a given Coxeter group $W$. For example, it has been known that for a Coxeter group $W$ in certain classes, the set of reflections $S^W := \{wsw^{-1} \mid w \in W \mbox{ and } s \in S\}$ associated to any possible generating set $S$ of $W$ (as a Coxeter group) is the same, independent of the choice of $S$ (see e.g., \cite{Bah}). A Coxeter group $W$ having this property is called \emph{reflection independent}. One of the simplest nontrivial examples of a Coxeter group which is not reflection independent is the Weyl group of type $G_2$ (or the finite Coxeter group of type $I_2(6)$) with two simple reflections $s,t$, which admits another generating set $\{s,ststs,(st)^3\}$ of type $A_1 \times A_2$ involving an element $(st)^3$ that is not a reflection with respect to the original generating set. One of the main branches of the isomorphism problem in Coxeter groups is to determine the possibilities of a group isomorphism between two Coxeter groups which preserves the sets of reflections (with respect to some specified generating sets). Such an isomorphism is called \emph{reflection-preserving}. In a recent study by the author of this paper, it has been revealed that some properties of the centralizers $Z_W(r)$ of reflections $r$ in a Coxeter group $W$ (with respect to a generating set $S$) can be applied to the study of reflection independent Coxeter groups and reflection-preserving isomorphisms. 
An outline of the idea is as follows. First, by a general result on the structures of the centralizers of parabolic subgroups \cite{Nui11} or the normalizers of parabolic subgroups \cite{Bri-How} in Coxeter groups applied to the case of a single reflection, we have a decomposition $Z_W(r) = \langle r \rangle \times (W^{\perp r} \rtimes Y_r)$, where $W^{\perp r}$ denotes the subgroup generated by all the reflections except $r$ itself that commute with $r$, and $Y_r$ is a subgroup isomorphic to the fundamental group of a certain graph associated to $(W,S)$. The above-mentioned general results also give a canonical presentation of $W^{\perp r}$ as a Coxeter group. Then the unique maximal reflection subgroup (i.e., subgroup generated by reflections) of $Z_W(r)$ is $\langle r \rangle \times W^{\perp r}$. Now suppose that $W^{\perp r}$ has no finite irreducible components. In this case, the maximal reflection subgroup of $Z_W(r)$ has only one finite irreducible component, that is $\langle r \rangle$. Now it can be shown that, if the image $f(r)$ of $r$ by a group isomorphism $f$ from $W$ to another Coxeter group $W'$ is not a reflection with respect to a generating set of $W'$, then the finite irreducible components of the unique maximal reflection subgroup of the centralizer of $f(r)$ in $W'$ have more elements than $\langle r \rangle$, which is a contradiction. Hence, for such an $r$, the image of $r$ by any group isomorphism from $W$ to another Coxeter group is always a reflection. See the author's preprint \cite{Nui_ref} for more detailed arguments. As we have seen in the previous paragraph, it is worthwhile to look for a class of Coxeter groups $W$ for which the above subgroup $W^{\perp r}$ of the centralizer $Z_W(r)$ of each reflection $r$ has no finite irreducible components. The aim of this paper is to establish a tool for finding Coxeter groups having the desired property. 
The main theorem (in a special case) of this paper can be stated as follows: \begin{quote} \textbf{Main Theorem (in a special case).} Let $r \in W$ be a reflection, and let $s_{\gamma}$ be a generator of $W^{\perp r}$ (as a Coxeter group) which belongs to a finite irreducible component of $W^{\perp r}$. Then $s_{\gamma}$ commutes with every element of $Y_r$. (See the previous paragraph for the notations.) \end{quote} By virtue of this result, to show that $W^{\perp r}$ has no finite irreducible components, it suffices to find (by using the general structural results in \cite{Nui11} or \cite{Bri-How}) for each generator $s_{\gamma}$ of $W^{\perp r}$ an element of $Y_r$ that does not commute with $s_{\gamma}$. A detailed argument along this strategy is given in the preprint \cite{Nui_ref}. In fact, the main theorem (Theorem \ref{thm:YfixesWperpIfin}) of this paper is not only proven for the above-mentioned case of a single reflection $r$, but also generalized to the case of centralizers $Z_W(W_I)$ of parabolic subgroups $W_I$ generated by some subsets $I \subseteq S$, with the property that $I$ has no irreducible components of type $A_n$ with $2 \leq n < \infty$. (We notice that there exists a counterexample when the assumption on $I$ is removed; see Section \ref{sec:counterexample} for details.) In the generalized statement, the group $W^{\perp r}$ is replaced naturally with the subgroup of $W$ generated by all the reflections except those in $I$ that commute with every element of $I$, while the group $Y_r$ is replaced with a subgroup of $W$ isomorphic to the fundamental group of a certain $2$-cell complex defined in \cite{Nui11}. 
We emphasize that, although the general structures of these subgroups of $Z_W(W_I)$ have been described in \cite{Nui11} (or \cite{Bri-How}), the main theorem of this paper is still far from being trivial; moreover, to the author's best knowledge, no other results on the structures of the centralizers $Z_W(W_I)$ that are in a significantly general form and involve more detailed information than those given in the general structural results \cite{Bri-How,Nui11} are known in the literature. The paper is organized as follows. In Section \ref{sec:Coxetergroups}, we summarize some fundamental properties and definitions for Coxeter groups. In Section \ref{sec:properties_centralizer}, we summarize some properties of the centralizers of parabolic subgroups relevant to our argument in the following sections, which have been shown in some preceding works (mainly in \cite{Nui11}). In Section \ref{sec:main_result}, we give the statement of the main theorem of this paper (Theorem \ref{thm:YfixesWperpIfin}), and give a remark on its application to the isomorphism problem in Coxeter groups (also mentioned in a paragraph above). The proof of the main theorem is divided into two main steps: First, Section \ref{sec:proof_general} presents some auxiliary results which do not require the assumption, made in the main theorem, that the subset $I$ of $S$ has no irreducible components of type $A_n$ with $2 \leq n < \infty$. Then, based on the results in Section \ref{sec:proof_general}, Section \ref{sec:proof_special} deals with the special case of the main theorem in which $I$ has no such irreducible components, and completes the proof of the main theorem. The proof of the main theorem makes use of the list of positive roots given in Section \ref{sec:Coxetergroups} several times. 
Finally, in Section \ref{sec:counterexample}, we describe in detail a counterexample to our main theorem when the assumption that $I$ has no irreducible components of type $A_n$ with $2 \leq n < \infty$ is removed. \paragraph*{Acknowledgments.} The author would like to express his deep gratitude to everyone who helped him, especially to Professor Itaru Terada who was the supervisor of the author during the graduate course in which a part of this work was done, and to Professor Kazuhiko Koike, for their invaluable advice and encouragement. The author would also like to thank the anonymous referee for the valuable comments, especially for the suggestion to reduce the size of the counterexample shown in Section \ref{sec:counterexample}, which was originally of larger size. A part of this work was supported by JSPS Research Fellowship (No.~16-10825). \section{Coxeter groups} \label{sec:Coxetergroups} The basics of Coxeter groups summarized here are found in \cite{Hum} unless otherwise noticed. For some omitted definitions, see also \cite{Hum} or the author's preceding paper \cite{Nui11}. \subsection{Basic notions} \label{sec:defofCox} A pair $(W,S)$ of a group $W$ and its (possibly infinite) generating set $S$ is called a \emph{Coxeter system}, and $W$ is called a \emph{Coxeter group}, if $W$ admits the following presentation \begin{displaymath} W=\langle S \mid (st)^{m(s,t)}=1 \mbox{ for all } s,t \in S \mbox{ with } m(s,t)<\infty \rangle \enspace, \end{displaymath} where $m \colon (s,t) \mapsto m(s,t) \in \{1,2,\dots\} \cup \{\infty\}$ is a symmetric mapping in $s,t \in S$ with the property that we have $m(s,t)=1$ if and only if $s=t$. 
Let $\Gamma$ denote the \emph{Coxeter graph} of $(W,S)$, which is a simple undirected graph with vertex set $S$ in which two vertices $s,t \in S$ are joined by an edge with label $m(s,t)$ if and only if $m(s,t) \geq 3$ (by usual convention, the label is omitted when $m(s,t)=3$; see Figure \ref{fig:finite_irreducible_Coxeter_groups} below for example). If $\Gamma$ is connected, then $(W,S)$ is called \emph{irreducible}. Let $\ell$ denote the length function of $(W,S)$. For $w,u \in W$, we say that $u$ is a \emph{right divisor} of $w$ if $\ell(w) = \ell(wu^{-1}) + \ell(u)$. For each subset $I \subseteq S$, the subgroup $W_I := \langle I \rangle$ of $W$ generated by $I$ is called a \emph{parabolic subgroup} of $W$. Let $\Gamma_I$ denote the Coxeter graph of the Coxeter system $(W_I,I)$. For two subsets $I,J \subseteq S$, we say that $I$ is \emph{adjacent to} $J$ if an element of $I$ is joined by an edge with an element of $J$ in the Coxeter graph $\Gamma$. We say that $I$ is \emph{apart from} $J$ if $I \cap J = \emptyset$ and $I$ is not adjacent to $J$. For the terminologies, we often abbreviate a set $\{s\}$ with a single element of $S$ to $s$ for simplicity. \subsection{Root systems and reflection subgroups} \label{sec:rootsystem} Let $V$ denote the \emph{geometric representation space} of $(W,S)$, which is an $\mathbb{R}$-linear space equipped with a basis $\Pi = \{\alpha_s \mid s \in S\}$ and a $W$-invariant symmetric bilinear form $\langle \,,\, \rangle$ determined by \begin{displaymath} \langle \alpha_s, \alpha_t \rangle = \begin{cases} -\cos(\pi / m(s,t)) & \mbox{if } m(s,t) < \infty \enspace; \\ -1 & \mbox{if } m(s,t) = \infty \enspace, \end{cases} \end{displaymath} where $W$ acts faithfully on $V$ by $s \cdot v=v-2\langle \alpha_s, v\rangle \alpha_s$ for $s \in S$ and $v \in V$. 
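As a small worked instance of these formulas (a standard computation, added here for illustration): if $m(s,t)=3$, the defining value of the bilinear form determines the action of $s$ on both simple roots explicitly.

```latex
% Worked computation for m(s,t) = 3, so <alpha_s, alpha_t> = -cos(pi/3) = -1/2:
\begin{displaymath}
s \cdot \alpha_s = -\alpha_s \enspace, \qquad
s \cdot \alpha_t
  = \alpha_t - 2 \langle \alpha_s, \alpha_t \rangle \alpha_s
  = \alpha_t + \alpha_s \enspace.
\end{displaymath}
```

In particular, for such a pair $s,t$ the orbit of $\{\alpha_s,\alpha_t\}$ under $W_{\{s,t\}}$ is $\{\pm\alpha_s,\ \pm\alpha_t,\ \pm(\alpha_s+\alpha_t)\}$.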
Then the \emph{root system} $\Phi=W \cdot \Pi$ consists of unit vectors with respect to the bilinear form $\langle \,,\, \rangle$, and $\Phi$ is the disjoint union of $\Phi^+ := \Phi \cap \mathbb{R}_{\geq 0}\Pi$ and $\Phi^- := -\Phi^+$ where $\mathbb{R}_{\geq 0}\Pi$ signifies the set of nonnegative linear combinations of elements of $\Pi$. Elements of $\Phi$, $\Phi^+$, and $\Phi^-$ are called \emph{roots}, \emph{positive roots}, and \emph{negative roots}, respectively. For a subset $\Psi \subseteq \Phi$ and an element $w \in W$, define \begin{displaymath} \Psi^+ := \Psi \cap \Phi^+ \,,\, \Psi^- := \Psi \cap \Phi^- \,,\, \Psi[w] :=\{\gamma \in \Psi^+ \mid w \cdot \gamma \in \Phi^-\} \enspace. \end{displaymath} It is well known that the length $\ell(w)$ of $w$ is equal to $|\Phi[w]|$. For an element $v=\sum_{s \in S}c_s\alpha_s$ of $V$, define the \emph{support} $\mathrm{Supp}\,v$ of $v$ to be the set of all $s \in S$ with $c_s \neq 0$. For a subset $\Psi$ of $\Phi$, define the support $\mathrm{Supp}\,\Psi$ of $\Psi$ to be the union of $\mathrm{Supp}\,\gamma$ over all $\gamma \in \Psi$. For each $I \subseteq S$, define \begin{displaymath} \Pi_I := \{\alpha_s \mid s \in I\} \subseteq \Pi \,,\, V_I := \mathrm{span}\,\Pi_I \subseteq V \,,\, \Phi_I := \Phi \cap V_I \enspace. \end{displaymath} It is well known that $\Phi_I$ coincides with the root system $W_I \cdot \Pi_I$ of $(W_I,I)$. We notice the following well-known fact: \begin{lem} \label{lem:support_is_irreducible} The support of any root $\gamma \in \Phi$ is irreducible. \end{lem} \begin{proof} Note that $\gamma \in \Phi_I = W_I \cdot \Pi_I$, where $I= \mathrm{Supp}\,\gamma$. On the other hand, it follows by induction on the length of $w$ that, for any $w \in W_I$ and $s \in I$, the support of $w \cdot \alpha_s$ is contained in the irreducible component of $I$ containing $s$. Hence the claim follows. 
\end{proof} For a root $\gamma=w \cdot \alpha_s \in \Phi$, let $s_\gamma := wsw^{-1}$ be the \emph{reflection} along $\gamma$, which acts on $V$ by $s_\gamma \cdot v=v-2 \langle \gamma, v \rangle \gamma$ for $v \in V$. For any subset $\Psi \subseteq \Phi$, let $W(\Psi)$ denote the \emph{reflection subgroup} of $W$ generated by $\{s_{\gamma} \mid \gamma \in \Psi\}$. It was shown by Deodhar \cite{Deo_refsub} and by Dyer \cite{Dye} that $W(\Psi)$ is a Coxeter group. To determine the generating set $S(\Psi)$ of $W(\Psi)$, let $\Pi(\Psi)$ denote the set of all \lq\lq simple roots'' $\gamma \in (W(\Psi) \cdot \Psi)^+$ in the \lq\lq root system'' $W(\Psi) \cdot \Psi$ of $W(\Psi)$, that is, all the $\gamma$ for which any expression $\gamma=\sum_{i=1}^{r}c_i\beta_i$ with $c_i>0$ and $\beta_i \in (W(\Psi) \cdot \Psi)^+$ satisfies that $\beta_i=\gamma$ for every index $i$. Then the set $S(\Psi)$ is given by \begin{displaymath} S(\Psi) := \{s_\gamma \mid \gamma \in \Pi(\Psi)\} \enspace. \end{displaymath} We call $\Pi(\Psi)$ the \emph{simple system} of $(W(\Psi),S(\Psi))$. Note that the \lq\lq root system'' $W(\Psi) \cdot \Psi$ and the simple system $\Pi(\Psi)$ for $(W(\Psi),S(\Psi))$ have several properties that are similar to the usual root systems $\Phi$ and simple systems $\Pi$ for $(W,S)$; see e.g., Theorem 2.3 of \cite{Nui11} for the details. In particular, we have the following result: \begin{thm} [{e.g., \cite[Theorem 2.3]{Nui11}}] \label{thm:reflectionsubgroup_Deodhar} Let $\Psi \subseteq \Phi$, and let $\ell_\Psi$ be the length function of $(W(\Psi),S(\Psi))$. Then for $w \in W(\Psi)$ and $\gamma \in (W(\Psi) \cdot \Psi)^+$, we have $\ell_\Psi(ws_\gamma)<\ell_\Psi(w)$ if and only if $w \cdot \gamma \in \Phi^-$. 
\end{thm} We say that a subset $\Psi \subseteq \Phi^+$ is a \emph{root basis} if for each pair $\beta,\gamma \in \Psi$, we have \begin{displaymath} \begin{cases} \langle \beta,\gamma \rangle=-\cos(\pi/m) & \mbox{if } s_\beta s_\gamma \mbox{ has order } m<\infty \enspace;\\ \langle \beta,\gamma \rangle \leq -1 & \mbox{if } s_\beta s_\gamma \mbox{ has infinite order}. \end{cases} \end{displaymath} For example, it follows from Theorem \ref{thm:conditionforrootbasis} below that the simple system $\Pi(\Psi)$ of $(W(\Psi),S(\Psi))$ is a root basis for any $\Psi \subseteq \Phi$. For two root bases $\Psi_1,\Psi_2 \subseteq \Phi^+$, we say that a mapping from $\Psi_1 = \Pi(\Psi_1)$ to $\Psi_2 = \Pi(\Psi_2)$ is an isomorphism if it induces an isomorphism from $S(\Psi_1)$ to $S(\Psi_2)$. We show some properties of root bases: \begin{thm} [{\cite[Theorem 4.4]{Dye}}] \label{thm:conditionforrootbasis} Let $\Psi \subseteq \Phi^+$. Then we have $\Pi(\Psi)=\Psi$ if and only if $\Psi$ is a root basis. \end{thm} \begin{prop} [{\cite[Corollary 2.6]{Nui11}}] \label{prop:fintyperootbasis} Let $\Psi \subseteq \Phi^+$ be a root basis with $|W(\Psi)|<\infty$. Then $\Psi$ is a basis of a positive definite subspace of $V$ with respect to the bilinear form $\langle \,,\, \rangle$. \end{prop} \begin{prop} [{\cite[Proposition 2.7]{Nui11}}] \label{prop:finitesubsystem} Let $\Psi \subseteq \Phi^+$ be a root basis with $|W(\Psi)|<\infty$, and $U= \mathrm{span}\,\Psi$. Then there exist an element $w \in W$ and a subset $I \subseteq S$ satisfying that $|W_I|<\infty$ and $w \cdot (U \cap \Phi^+)=\Phi_I^+$. Moreover, the action of this $w$ maps $U \cap \Pi$ into $\Pi_I$. \end{prop} \subsection{Finite parabolic subgroups} \label{sec:longestelement} We say that a subset $I \subseteq S$ is of \emph{finite type} if $|W_I|<\infty$. The finite irreducible Coxeter groups have been classified as summarized in \cite[Chapter 2]{Hum}. 
Here we determine a labelling $r_1,r_2,\dots,r_n$ (where $n = |I|$) of elements of an irreducible subset $I \subseteq S$ of each finite type in the following manner, where the values $m(r_i,r_j)$ not listed here are equal to $2$ (see Figure \ref{fig:finite_irreducible_Coxeter_groups}): \begin{description} \item[Type $A_n$ ($1 \leq n < \infty$):] $m(r_i,r_{i+1})=3$ ($1 \leq i \leq n-1$); \item[Type $B_n$ ($2 \leq n < \infty$):] $m(r_i,r_{i+1})=3$ ($1 \leq i \leq n-2$) and $m(r_{n-1},r_n)=4$; \item[Type $D_n$ ($4 \leq n < \infty$):] $m(r_i,r_{i+1})=m(r_{n-2},r_n)=3$ ($1 \leq i \leq n-2$); \item[Type $E_n$ ($n=6,7,8$):] $m(r_1,r_3)=m(r_2,r_4)=m(r_i,r_{i+1})=3$ ($3 \leq i \leq n-1$); \item[Type $F_4$:] $m(r_1,r_2)=m(r_3,r_4)=3$ and $m(r_2,r_3)=4$; \item[Type $H_n$ ($n=3,4$):] $m(r_1,r_2)=5$ and $m(r_i,r_{i+1})=3$ ($2 \leq i \leq n-1$); \item[Type $I_2(m)$ ($5 \leq m < \infty$):] $m(r_1,r_2)=m$. \end{description} We call the above labelling $r_1,\dots,r_n$ the \emph{standard labelling} of $I$. \begin{figure} \caption{Coxeter graphs of the finite irreducible Coxeter groups (here we write $i$ instead of $r_i$ for each vertex)} \label{fig:finite_irreducible_Coxeter_groups} \end{figure} Let $w_0(I)$ denote the (unique) longest element of a finite parabolic subgroup $W_I$. It is well known that $w_0(I)^2 = 1$ and $w_0(I) \cdot \Pi_I = -\Pi_I$. Now let $I$ be irreducible of finite type. If $I$ is of type $A_n$ ($n \geq 2$), $D_k$ ($k$ odd), $E_6$ or $I_2(m)$ ($m$ odd), then the automorphism of the Coxeter graph $\Gamma_I$ of $W_I$ induced by (the conjugation action of) $w_0(I)$ is the unique nontrivial automorphism of $\Gamma_I$. Otherwise, $w_0(I)$ lies in the center $Z(W_I)$ of $W_I$ and the induced automorphism of $\Gamma_I$ is trivial, in which case we say that $I$ is of \emph{$(-1)$-type}. Moreover, if $W_I$ is finite but not irreducible, then $w_0(I)=w_0(I_1) \dotsm w_0(I_k)$ where the $I_i$ are the irreducible components of $I$. 
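The dichotomy just described can be illustrated in the two smallest cases (standard facts, added here for illustration): in type $A_2$ the longest element induces the nontrivial graph automorphism, while type $B_2$ is of $(-1)$-type.

```latex
% Type A_2: w_0 = r_1 r_2 r_1, and conjugation by w_0 swaps r_1 and r_2,
% i.e. it induces the nontrivial automorphism of the Coxeter graph:
\begin{displaymath}
w_0(A_2) \, r_1 \, w_0(A_2)^{-1} = r_2 \enspace.
\end{displaymath}
% Type B_2: w_0 = (r_1 r_2)^2 is central in W_I, so the induced graph
% automorphism is trivial and B_2 is of (-1)-type:
\begin{displaymath}
w_0(B_2) = (r_1 r_2)^2 \in Z(W_I) \enspace.
\end{displaymath}
```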
\section{Known properties of the centralizers} \label{sec:properties_centralizer} This section summarizes some known properties (mainly proven in \cite{Nui11}) of the centralizers $Z_W(W_I)$ of parabolic subgroups $W_I$ in Coxeter groups $W$, especially those relevant to the argument in this paper. First, we fix an abstract index set $\Lambda$ with $|\Lambda| = |I|$, and define $S^{(\Lambda)}$ to be the set of all injective mappings $x \colon \Lambda \to S$. For $x \in S^{(\Lambda)}$ and $\lambda \in \Lambda$, we put $x_\lambda = x(\lambda)$; thus $x$ may be regarded as a duplicate-free \lq\lq $\Lambda$-tuple'' $(x_\lambda)=(x_\lambda)_{\lambda \in \Lambda}$ of elements of $S$. For each $x \in S^{(\Lambda)}$, let $[x]$ denote the image of the mapping $x$; $[x] = \{x_{\lambda} \mid \lambda \in \Lambda\}$. In the following argument, we fix an element $x_I \in S^{(\Lambda)}$ with $[x_I] = I$. We define \begin{displaymath} C_{x,y} := \{w \in W \mid \alpha_{x_\lambda}=w \cdot \alpha_{y_\lambda} \mbox{ for every } \lambda \in \Lambda\} \mbox{ for } x,y \in S^{(\Lambda)} \enspace. \end{displaymath} Note that $C_{x,y} \cdot C_{y,z} \subseteq C_{x,z}$ and $C_{x,y}{}^{-1} = C_{y,x}$ for $x,y,z \in S^{(\Lambda)}$. Now we define \begin{displaymath} w \ast y_{\lambda} := x_{\lambda} \mbox{ for } x,y \in S^{(\Lambda)}, w \in C_{x,y} \mbox{ and } \lambda \in \Lambda \enspace, \end{displaymath} therefore we have $w \cdot \alpha_s = \alpha_{w \ast s}$ for any $w \in C_{x,y}$ and $s \in [y]$. (This $\ast$ can be interpreted as the conjugation action of elements of $C_{x,y}$ to the elements of $[y]$.) Moreover, we define \begin{displaymath} w \ast y := x \mbox{ for } x,y \in S^{(\Lambda)} \mbox{ and } w \in C_{x,y} \end{displaymath} (this $\ast$ can be interpreted as the diagonal action on the $\Lambda$-tuples). 
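A minimal worked instance of the sets $C_{x,y}$ and the action $\ast$ (added here for illustration, using the $\Lambda$-tuple notation above with $|\Lambda|=1$): if $r,s \in S$ and $m(r,s)=3$, then the element $w=rs$ maps $\alpha_r$ to $\alpha_s$, so it lies in $C_{y,x}$ for the tuples $x,y$ with $[x]=\{r\}$ and $[y]=\{s\}$.

```latex
% For m(r,s) = 3 one computes, using s . alpha_r = alpha_r + alpha_s:
\begin{displaymath}
(rs) \cdot \alpha_r = r \cdot (\alpha_r + \alpha_s)
  = -\alpha_r + (\alpha_s + \alpha_r) = \alpha_s \enspace,
\end{displaymath}
% hence rs belongs to C_{y,x}, and in the notation above (rs) * r = s and
% (rs) * x = y.
```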
We define $C_I = C_{x_I,x_I}$, therefore we have \begin{displaymath} C_I = \{w \in W \mid w \cdot \alpha_s=\alpha_s \mbox{ for every } s \in I\} \enspace, \end{displaymath} which is a normal subgroup of $Z_W(W_I)$. To describe generators of $C_I$, we introduce some notations. For subsets $J,K \subseteq S$, let $J_{\sim K}$ denote the set of elements of $J \cup K$ that belong to the same connected component of $\Gamma_{J \cup K}$ as an element of $K$. Now for $x \in S^{(\Lambda)}$ and $s \in S \smallsetminus [x]$ for which $[x]_{\sim s}$ is of finite type, there exists a unique $y \in S^{(\Lambda)}$ for which the element \begin{displaymath} w_x^s := w_0([x]_{\sim s})w_0([x]_{\sim s} \smallsetminus \{s\}) \end{displaymath} belongs to $C_{y,x}$. In this case, we define \begin{displaymath} \varphi(x,s) := y \enspace, \end{displaymath} therefore $\varphi(x,s) = w_x^s \ast x$ in the above notations. We have the following result: \begin{prop} [{see \cite[Theorem 3.5(iii)]{Nui11}}] \label{prop:generator_C} Let $x,y \in S^{(\Lambda)}$ and $w \in C_{x,y}$. Then there are a finite sequence $z_0 = y,z_1,\dots,z_{n-1},z_n = x$ of elements of $S^{(\Lambda)}$ and a finite sequence $s_0,s_1,\dots,s_{n-1}$ of elements of $S$ satisfying that $s_i \not\in [z_i]$, $[z_i]_{\sim s_i}$ is of finite type and $\varphi(z_i,s_i) = z_{i+1}$ for each index $0 \leq i \leq n-1$, and we have $w = w_{z_{n-1}}^{s_{n-1}} \cdots w_{z_1}^{s_1} w_{z_0}^{s_0}$. \end{prop} For subsets $J,K \subseteq S$, define \begin{displaymath} \Phi_J^{\perp K} := \{\gamma \in \Phi_J \mid \langle \gamma,\alpha_s \rangle = 0 \mbox{ for every } s \in K\} \,,\, W_J^{\perp K} := W(\Phi_J^{\perp K}) \end{displaymath} (see Section \ref{sec:rootsystem} for notations). 
Then $(W_J^{\perp K},R^{J,K})$ is a Coxeter system with root system $\Phi_J^{\perp K}$ and simple system $\Pi^{J,K}$, where \begin{displaymath} R^{J,K} := S(\Phi_J^{\perp K}) \,,\, \Pi^{J,K} := \Pi(\Phi_J^{\perp K}) \end{displaymath} (see \cite[Section 3.1]{Nui11}). In the notations, the symbol $J$ will be omitted when $J=S$; hence we have \begin{displaymath} W^{\perp I}=W_S^{\perp I}=\langle \{s_\gamma \mid \gamma \in \Phi^{\perp I}\} \rangle \enspace. \end{displaymath} On the other hand, we define \begin{displaymath} Y_{x,y} := \{w \in C_{x,y} \mid w \cdot (\Phi^{\perp [y]})^+ \subseteq \Phi^+\} \mbox{ for } x,y \in S^{(\Lambda)} \enspace. \end{displaymath} Note that $Y_{x,y} = \{w \in C_{x,y} \mid (\Phi^{\perp [x]})^+=w \cdot (\Phi^{\perp [y]})^+\}$ (see \cite[Section 3.1]{Nui11}). Note also that $Y_{x,y} \cdot Y_{y,z} \subseteq Y_{x,z}$ and $Y_{x,y}{}^{-1} = Y_{y,x}$ for $x,y,z \in S^{(\Lambda)}$. Now we define $Y_I = Y_{x_I,x_I}$, therefore we have \begin{displaymath} Y_I = \{w \in C_I \mid (\Phi^{\perp I})^+ = w \cdot (\Phi^{\perp I})^+\} \enspace. \end{displaymath} We have the following results: \begin{prop} [{see \cite[Lemma 4.1]{Nui11}}] \label{prop:charofBphi} For $x \in S^{(\Lambda)}$ and $s \in S \smallsetminus [x]$, the three conditions are equivalent: \begin{enumerate} \item $[x]_{\sim s}$ is of finite type, and $\varphi(x,s) = x$; \item $[x]_{\sim s}$ is of finite type, and $\Phi^{\perp [x]}[w_x^s] \neq \emptyset$; \item $\Phi_{[x] \cup \{s\}}^{\perp [x]} \neq \emptyset$. \end{enumerate} If these three conditions are satisfied, then we have $\Phi^{\perp [x]}[w_x^s]=(\Phi_{[x] \cup \{s\}}^{\perp [x]})^+=\{\gamma(x,s)\}$ for a unique positive root $\gamma(x,s)$ satisfying $s_{\gamma(x,s)}=w_x^s$. \end{prop} \begin{prop} \label{prop:factorization_C} Let $x,y \in S^{(\Lambda)}$. \begin{enumerate} \item \label{item:prop_factorization_C_decomp} (See \cite[Theorem 4.6(i)(iv)]{Nui11}.) 
The group $C_{x,x}$ admits a semidirect product decomposition $C_{x,x} = W^{\perp [x]} \rtimes Y_{x,x}$. Moreover, if $w \in Y_{x,y}$, then the conjugation action by $w$ defines an isomorphism $u \mapsto wuw^{-1}$ of Coxeter systems from $(W^{\perp [y]},R^{[y]})$ to $(W^{\perp [x]},R^{[x]})$. \item \label{item:prop_factorization_C_generator_Y} (See \cite[Theorem 4.6(ii)]{Nui11}.) Let $w \in Y_{x,y}$. Then there are a finite sequence $z_0 = y,z_1,\dots,z_{n-1},z_n = x$ of elements of $S^{(\Lambda)}$ and a finite sequence $s_0,s_1,\dots,s_{n-1}$ of elements of $S$ satisfying that $z_{i+1} \neq z_i$, $s_i \not\in [z_i]$, $[z_i]_{\sim s_i}$ is of finite type and $w_{z_i}^{s_i} \in Y_{z_{i+1},z_i}$ for each index $0 \leq i \leq n-1$, and we have $w = w_{z_{n-1}}^{s_{n-1}} \cdots w_{z_1}^{s_1} w_{z_0}^{s_0}$. \item \label{item:prop_factorization_C_generator_perp} (See \cite[Theorem 4.13]{Nui11}.) The generating set $R^{[x]}$ of $W^{\perp [x]}$ consists of elements of the form $w s_{\gamma(y,s)} w^{-1}$ satisfying that $y \in S^{(\Lambda)}$, $w \in Y_{x,y}$ and $\gamma(y,s)$ is a positive root as in the statement of Proposition \ref{prop:charofBphi} (hence $[y]_{\sim s}$ is of finite type and $\varphi(y,s) = y$). \end{enumerate} \end{prop} \begin{prop} [{see \cite[Proposition 4.8]{Nui11}}] \label{prop:Yistorsionfree} For any $x \in S^{(\Lambda)}$, the group $Y_{x,x}$ is torsion-free. 
\end{prop} For the structure of the entire centralizer $Z_W(W_I)$, a general result (Theorem 5.2 of \cite{Nui11}) implies the following proposition in a special case (a proof of the proposition from Theorem 5.2 of \cite{Nui11} is straightforward by noticing the fact that, under the hypothesis of the following proposition, the group $\mathcal{A}$ defined in the last paragraph before Theorem 5.2 of \cite{Nui11} is trivial and hence the group $B_I$ used in Theorem 5.2 of \cite{Nui11} coincides with $Y_I$): \begin{prop} [{see \cite[Theorem 5.2]{Nui11}}] \label{prop:Z_for_special_case} If every irreducible component of $I$ of finite type is of $(-1)$-type (see Section \ref{sec:longestelement} for the terminology), then we have $Z_W(W_I) = Z(W_I) \times (W^{\perp I} \rtimes Y_I)$. \end{prop} We also present an auxiliary result, which will be used later: \begin{lem} [{see \cite[Lemma 3.2]{Nui11}}] \label{lem:rightdivisor} Let $w \in W$ and $J,K \subseteq S$, and suppose that $w \cdot \Pi_J \subseteq \Pi$ and $w \cdot \Pi_K \subseteq \Phi^-$. Then $J \cap K=\emptyset$, the set $J_{\sim K}$ is of finite type, and $w_0(J_{\sim K})w_0(J_{\sim K} \smallsetminus K)$ is a right divisor of $w$ (see Section \ref{sec:defofCox} for the terminology). \end{lem} \section{Main results} \label{sec:main_result} In this section, we state the main results of this paper, and give some relevant remarks. The proof will be given in the following sections. The main results deal with the relations between the \lq\lq finite part'' of the reflection subgroup $W^{\perp I}$ and the subgroup $Y_I$ of the centralizer $Z_W(W_I)$. In general, for any Coxeter group $W$, the product of the finite irreducible components of $W$ is called the \emph{finite part} of $W$; here we write it as $W_{\mathrm{fin}}$. 
Then, since $W^{\perp I}$ is a Coxeter group (with generating set $R^I$ and simple system $\Pi^I$) as mentioned in Section \ref{sec:properties_centralizer}, $W^{\perp I}$ has its own finite part $W^{\perp I}{}_{\mathrm{fin}}$. To state the main theorem, we introduce some terminology: We say that a subset $I$ of $S$ is \emph{$A_{>1}$-free} if $I$ has no irreducible components of type $A_n$ with $2 \leq n < \infty$. Then the main theorem of this paper is stated as follows: \begin{thm} \label{thm:YfixesWperpIfin} Let $I$ be an $A_{>1}$-free subset of $S$ (see above for the terminology). Then for each $\gamma \in \Pi^I$ with $s_\gamma \in W^{\perp I}{}_{\mathrm{fin}}$, we have $w \cdot \gamma = \gamma$ for every $w \in Y_I$. Hence each element of the subgroup $Y_I$ of $Z_W(W_I)$ commutes with every element of $W^{\perp I}{}_{\mathrm{fin}}$. \end{thm} Among the several cases for the subset $I$ of $S$ covered by Theorem \ref{thm:YfixesWperpIfin}, we emphasize the following important special case: \begin{cor} \label{cor:YfixesWperpIfin} Let $I \subseteq S$. If every irreducible component of $I$ of finite type is of $(-1)$-type (see Section \ref{sec:longestelement} for the terminology), then we have \begin{displaymath} Z_W(W_I) = Z(W_I) \times W^{\perp I}{}_{\mathrm{fin}} \times (W^{\perp I}{}_{\mathrm{inf}} \rtimes Y_I) \enspace, \end{displaymath} where $W^{\perp I}{}_{\mathrm{inf}}$ denotes the product of the infinite irreducible components of $W^{\perp I}$ (hence $W^{\perp I} = W^{\perp I}{}_{\mathrm{fin}} \times W^{\perp I}{}_{\mathrm{inf}}$). \end{cor} \begin{proof} Note that the assumption on $I$ in Theorem \ref{thm:YfixesWperpIfin} is now satisfied. In this situation, Proposition \ref{prop:Z_for_special_case} implies that $Z_W(W_I) = Z(W_I) \times (W^{\perp I} \rtimes Y_I)$.
Now by Theorem \ref{thm:YfixesWperpIfin}, both $Y_I$ and $W^{\perp I}{}_{\mathrm{inf}}$ centralize $W^{\perp I}{}_{\mathrm{fin}}$, therefore the latter factor of $Z_W(W_I)$ decomposes further as $W^{\perp I}{}_{\mathrm{fin}} \times (W^{\perp I}{}_{\mathrm{inf}} \rtimes Y_I)$. \end{proof} We notice that the conclusion of Theorem \ref{thm:YfixesWperpIfin} does not hold in general if the $A_{>1}$-freeness assumption on $I$ is removed. A counterexample will be given in Section \ref{sec:counterexample}. Here we give a remark on an application of the main results to a study of the isomorphism problem in Coxeter groups. An important branch in the research on the isomorphism problem in Coxeter groups is to investigate, for two Coxeter systems $(W,S)$, $(W',S')$ and a group isomorphism $f \colon W \to W'$, the possibilities of \lq\lq shapes'' of the images $f(r) \in W'$ by $f$ of reflections $r \in W$ (with respect to the generating set $S$); for example, whether $f(r)$ is always a reflection in $W'$ (with respect to $S'$) or not. Now if $r \in S$, then Corollary \ref{cor:YfixesWperpIfin} and Proposition \ref{prop:Yistorsionfree} imply that the unique maximal reflection subgroup of the centralizer of $r$ in $W$ is $\langle r \rangle \times W^{\perp \{r\}}$, which has finite part $\langle r \rangle \times W^{\perp \{r\}}{}_{\mathrm{fin}}$. Moreover, the property of $W^{\perp \{r\}}{}_{\mathrm{fin}}$ shown in Theorem \ref{thm:YfixesWperpIfin} can imply that the factor $W^{\perp \{r\}}{}_{\mathrm{fin}}$ is \lq\lq frequently'' almost trivial. In such a case, the finite part of the unique maximal reflection subgroup of the centralizer of $f(r)$ in $W'$ should be very small, which can be shown to be impossible if $f(r)$ is too far from being a reflection. Thus the possibilities of the shape of $f(r)$ in $W'$ can be restricted by using Theorem \ref{thm:YfixesWperpIfin}. See \cite{Nui_ref} for a detailed study along this direction.
The author hopes that such an argument can be generalized to the case that $r$ is not a reflection but an involution whose \lq\lq type'' is $A_{>1}$-free (in a certain appropriate sense). \section{Proof of Theorem \ref{thm:YfixesWperpIfin}: General properties} \label{sec:proof_general} In this section and the next, we give a proof of Theorem \ref{thm:YfixesWperpIfin}. First, this section gives some preliminary results that hold for an arbitrary $I \subseteq S$ (not necessarily $A_{>1}$-free; see Section \ref{sec:main_result} for the terminology). Then the next section will focus on the case that $I$ is $A_{>1}$-free as in Theorem \ref{thm:YfixesWperpIfin} and complete the proof of Theorem \ref{thm:YfixesWperpIfin}. \subsection{Decompositions of elements of $Y_{z,y}$} \label{sec:finitepart_decomposition_Y} It is mentioned in Proposition \ref{prop:factorization_C}(\ref{item:prop_factorization_C_generator_Y}) that each element $u \in Y_{z,y}$ with $y,z \in S^{(\Lambda)}$ admits a kind of decomposition into elements of certain subsets $Y_{z',y'}$. Here we introduce a generalization of such decompositions, which will play an important role below. We give a definition: \begin{defn} \label{defn:standard_decomposition} Let $u \in Y_{z,y}$ with $y,z \in S^{(\Lambda)}$. We say that an expression $\mathcal{D} := \omega_{n-1} \cdots \omega_1\omega_0$ of $u$ is a \emph{semi-standard decomposition} of $u$ with respect to a subset $J$ of $S$ if there exist $y^{(i)} = y^{(i)}(\mathcal{D}) \in S^{(\Lambda)}$ for $0 \leq i \leq n$, $t^{(i)} = t^{(i)}(\mathcal{D}) \in S$ for $0 \leq i \leq n-1$ and $J^{(i)} = J^{(i)}(\mathcal{D}) \subseteq S$ for $0 \leq i \leq n$, with $y^{(0)} = y$, $y^{(n)} = z$ and $J^{(0)} = J$, satisfying the following conditions for each index $0 \leq i \leq n-1$: \begin{itemize} \item We have $t^{(i)} \not\in [y^{(i)}] \cup J^{(i)}$ and $t^{(i)}$ is adjacent to $[y^{(i)}]$.
\item The subset $K^{(i)} = K^{(i)}(\mathcal{D}) := ([y^{(i)}] \cup J^{(i)})_{\sim t^{(i)}}$ of $S$ is of finite type (see Section \ref{sec:properties_centralizer} for the notation). \item We have $\omega_i = \omega_{y^{(i)},J^{(i)}}^{t^{(i)}} := w_0(K^{(i)})w_0(K^{(i)} \smallsetminus \{t^{(i)}\})$. \item We have $\omega_i \in Y_{y^{(i+1)},y^{(i)}}$ and $\omega_i \cdot \Pi_{J^{(i)}} = \Pi_{J^{(i+1)}}$. \end{itemize} We call the above subset $K^{(i)}$ of $S$ the \emph{support} of $\omega_i$. We call a component $\omega_i$ of $\mathcal{D}$ a \emph{wide transformation} if its support $K^{(i)}$ intersects with $J^{(i)} \smallsetminus [y^{(i)}]$; otherwise, we call $\omega_i$ a \emph{narrow transformation}, in which case we have $\omega_i = \omega_{y^{(i)},J^{(i)}}^{t^{(i)}} = w_{y^{(i)}}^{t^{(i)}}$. Moreover, we say that $\mathcal{D} = \omega_{n-1} \cdots \omega_1\omega_0$ is a \emph{standard decomposition} of $u$ if $\mathcal{D}$ is a semi-standard decomposition of $u$ and $\ell(u) = \sum_{j=0}^{n-1} \ell(\omega_j)$. The integer $n$ is called the \emph{length} of $\mathcal{D}$ and is denoted by $\ell(\mathcal{D})$. \end{defn} \begin{exmp} \label{exmp:semi-standard_decomposition} We give an example of a semi-standard decomposition. Let $(W,S)$ be a Coxeter system of type $D_7$, with standard labelling $r_1,\dots,r_7$ of elements of $S$ given in Section \ref{sec:longestelement}. We put $n := 4$, and define the objects $y^{(i)}$, $t^{(i)}$ and $J^{(i)}$ as in Table \ref{tab:example_semi-standard_decomposition}, where we abbreviate each $r_i$ to $i$ for simplicity. In this case, the subsets $K^{(i)}$ of $S$ introduced in Definition \ref{defn:standard_decomposition} are determined as in the last row of Table \ref{tab:example_semi-standard_decomposition}. 
We have \begin{displaymath} \begin{split} \omega_0 &= w_0(\{r_1,r_2,r_3,r_4,r_5\})w_0(\{r_1,r_2,r_3,r_5\}) = r_2r_3r_4r_5r_1r_2r_3r_4 \enspace, \\ \omega_1 &= w_0(\{r_3,r_4,r_5,r_6\})w_0(\{r_3,r_4,r_5\}) = r_3r_4r_5r_6 \enspace, \\ \omega_2 &= w_0(\{r_4,r_5,r_6,r_7\})w_0(\{r_4,r_5,r_6\}) = r_7r_5r_4r_6r_5r_7 \enspace, \\ \omega_3 &= w_0(\{r_3,r_4,r_5,r_6\})w_0(\{r_4,r_5,r_6\}) = r_6r_5r_4r_3 \enspace. \end{split} \end{displaymath} Let $u$ denote the element $\omega_3\omega_2\omega_1\omega_0$ of $W$. Then it can be shown that $u \in Y_{z,y}$ where $y := y^{(0)} = (r_1,r_2,r_3)$ and $z := y^{(n)} = (r_5,r_4,r_3)$, and the expression $\mathcal{D} = \omega_3\omega_2\omega_1\omega_0$ is a semi-standard decomposition of $u$ of length $4$ with respect to $J := J^{(0)} = \{r_5\}$. Moreover, $\mathcal{D}$ is in fact a standard decomposition of $u$ (which is the same as the one obtained by using Proposition \ref{prop:standard_decomposition_existence} below). Among the four components $\omega_i$, the first one, $\omega_0$, is a wide transformation and the other three, $\omega_1,\omega_2,\omega_3$, are narrow transformations. \end{exmp} \begin{table}[hbt] \centering \caption{The data for the example of semi-standard decompositions} \label{tab:example_semi-standard_decomposition} \begin{tabular}{|c||c|c|c|c|c|} \hline $i$ & $4$ & $3$ & $2$ & $1$ & $0$ \\ \hline\hline $y^{(i)}$ & $(5,4,3)$ & $(6,5,4)$ & $(4,5,6)$ & $(3,4,5)$ & $(1,2,3)$ \\ \hline $t^{(i)}$ & --- & $3$ & $7$ & $6$ & $4$ \\ \hline $J^{(i)}$ & $\{1\}$ & $\{1\}$ & $\{1\}$ & $\{1\}$ & $\{5\}$ \\ \hline $K^{(i)}$ & --- & $\{3,4,5,6\}$ & $\{4,5,6,7\}$ & $\{3,4,5,6\}$ & $\{1,2,3,4,5\}$ \\ \hline \end{tabular} \end{table} The next proposition shows the existence of standard decompositions: \begin{prop} \label{prop:standard_decomposition_existence} Let $u \in Y_{z,y}$ with $y,z \in S^{(\Lambda)}$, and let $J \subseteq S$ be a subset satisfying that $u \cdot \Pi_J \subseteq \Pi$.
Then there exists a standard decomposition of $u$ with respect to $J$. \end{prop} \begin{proof} We proceed by induction on $\ell(u)$. For the case $\ell(u) = 0$, i.e., $u = 1$, the empty expression satisfies the conditions for a standard decomposition of $u$. From now on, we consider the case $\ell(u) > 0$. Then there is an element $t = t^{(0)} \in S$ satisfying that $u \cdot \alpha_t \in \Phi^-$. Since $u \in Y_{z,y}$ and $u \cdot \Pi_J \subseteq \Pi \subseteq \Phi^+$, we have $t \not\in [y] \cup J$ and $\alpha_t \not\in \Phi^{\perp [y]}$, therefore $t$ is adjacent to $[y]$. Now by Lemma \ref{lem:rightdivisor}, $K = K^{(0)} := ([y] \cup J)_{\sim t}$ is of finite type and $\omega_0 := \omega_{y,J}^{t}$ is a right divisor of $u$ (see Section \ref{sec:defofCox} for the terminology). By the definition of $\omega_{y,J}^{t}$ in Definition \ref{defn:standard_decomposition}, there exist unique $y^{(1)} \in S^{(\Lambda)}$ and $J^{(1)} \subseteq S$ satisfying that $y^{(1)} = \omega_0 \ast y$ (see Section \ref{sec:properties_centralizer} for the notation) and $\omega_0 \cdot \Pi_J = \Pi_{J^{(1)}}$. Moreover, since $\omega_0$ is a right divisor of $u$, it follows that $\Phi[\omega_0] \subseteq \Phi[u]$ (see e.g., Lemma 2.2 of \cite{Nui11}), therefore $\Phi^{\perp [y]}[\omega_0] \subseteq \Phi^{\perp [y]}[u] = \emptyset$ and $\omega_0 \in Y_{y^{(1)},y}$. Put $u' = u\omega_0{}^{-1}$. Then we have $u' \in Y_{z,y^{(1)}}$, $u' \cdot \Pi_{J^{(1)}} \subseteq \Pi$ and $\ell(u') = \ell(u) - \ell(\omega_0) < \ell(u)$ (note that $\omega_0 \neq 1$). Hence the concatenation of $\omega_0$ to a standard decomposition of $u' \in Y_{z,y^{(1)}}$ with respect to $J^{(1)}$ obtained by the induction hypothesis gives the desired standard decomposition of $u$. \end{proof} We present some properties of (semi-)standard decompositions.
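Before doing so, we note that the explicit products $\omega_i$ in Example \ref{exmp:semi-standard_decomposition} can be verified by direct computation. The following Python sketch does so under an assumed concrete realization of $W(D_7)$ (our choice of coordinates, not taken from Section \ref{sec:longestelement}): the chain $r_1,\dots,r_5$ with $r_6$ and $r_7$ both attached to $r_5$, realized by the simple roots $\alpha_i = e_i - e_{i+1}$ for $1 \leq i \leq 6$ and $\alpha_7 = e_6 + e_7$ in $\mathbb{R}^7$; each longest element $w_0(J)$ is computed by the standard greedy ascent.

```python
import numpy as np

# Assumed realization of W(D7): simple roots alpha_i = e_i - e_{i+1} (1 <= i <= 6)
# and alpha_7 = e_6 + e_7, so every root has squared norm 2.
n = 7
alpha = {i: np.zeros(n) for i in range(1, 8)}
for i in range(1, 7):
    alpha[i][i - 1], alpha[i][i] = 1.0, -1.0
alpha[7][5], alpha[7][6] = 1.0, 1.0

def refl(i):
    # Matrix of the simple reflection r_i: v -> v - <v, alpha_i> alpha_i.
    a = alpha[i]
    return np.eye(n) - np.outer(a, a)

def positive(root):
    # A root of D7 is positive iff its first nonzero coordinate is positive.
    nz = root[np.abs(root) > 1e-9]
    return nz[0] > 0

def w0(J):
    # Longest element of the standard parabolic W_J, by greedy ascent:
    # if w . alpha_i is still positive for some i in J, then w r_i is longer
    # than w; the ascent terminates exactly at w_0(J).
    w = np.eye(n)
    while True:
        for i in J:
            if positive(w @ alpha[i]):
                w = w @ refl(i)
                break
        else:
            return w

def word(*letters):
    # The product r_{letters[0]} r_{letters[1]} ... as a matrix.
    m = np.eye(n)
    for i in letters:
        m = m @ refl(i)
    return m

# The four elements omega_i of the example, as w_0(K^{(i)}) w_0(K^{(i)} \ {t^{(i)}}):
omega0 = w0([1, 2, 3, 4, 5]) @ w0([1, 2, 3, 5])
omega1 = w0([3, 4, 5, 6]) @ w0([3, 4, 5])
omega2 = w0([4, 5, 6, 7]) @ w0([4, 5, 6])
omega3 = w0([3, 4, 5, 6]) @ w0([4, 5, 6])

# The reduced words claimed in the example:
assert np.allclose(omega0, word(2, 3, 4, 5, 1, 2, 3, 4))
assert np.allclose(omega1, word(3, 4, 5, 6))
assert np.allclose(omega2, word(7, 5, 4, 6, 5, 7))
assert np.allclose(omega3, word(6, 5, 4, 3))
```

In particular, appending the check `np.allclose(omega3 @ omega1, np.eye(7))` confirms that $\omega_3$ is the inverse of $\omega_1$.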
First, we have the following: \begin{lem} \label{lem:another_decomposition_Y_no_loop} For any semi-standard decomposition $\omega_{n-1} \cdots \omega_1\omega_0$ of an element of $W$, for each $0 \leq i \leq n-1$, there exists an element of $\Pi_{K^{(i)} \smallsetminus \{t^{(i)}\}}$ which is not fixed by $\omega_i$. \end{lem} \begin{proof} Assume to the contrary that $\omega_i$ fixes $\Pi_{K^{(i)} \smallsetminus \{t^{(i)}\}}$ pointwise. Then by applying Proposition \ref{prop:charofBphi} to the pair of $[y^{(i)}] \cup J^{(i)}$ and $t^{(i)}$ instead of the pair of $[x]$ and $s$, it follows that there exists a root $\gamma \in (\Phi_{K^{(i)}}^{\perp K^{(i)} \smallsetminus \{t^{(i)}\}})^+$ with $\omega_i \cdot \gamma \in \Phi^-$ (note that, in this case, the element $w_x^s$ in Proposition \ref{prop:charofBphi} coincides with $\omega_i$). By the definition of the support $K^{(i)}$ of $\omega_i$, $K^{(i)}$ is apart from $[y^{(i)}] \smallsetminus K^{(i)}$, therefore this root $\gamma$ also belongs to $(\Phi^{\perp [y^{(i)}]})^+$. Hence we have $\Phi^{\perp [y^{(i)}]}[\omega_i] \neq \emptyset$, contradicting the property $\omega_i \in Y_{y^{(i+1)},y^{(i)}}$ in Definition \ref{defn:standard_decomposition}. Hence Lemma \ref{lem:another_decomposition_Y_no_loop} holds. \end{proof} For a semi-standard decomposition $\mathcal{D} = \omega_n \cdots \omega_1\omega_0$ of $u \in Y_{z,y}$, let $0 \leq i_1 < i_2 < \cdots < i_k \leq n$ be the indices $i$ with the property that $[y^{(i+1)}(\mathcal{D})] = [y^{(i)}(\mathcal{D})]$ and $J^{(i+1)}(\mathcal{D}) = J^{(i)}(\mathcal{D})$. Then we define the \emph{simplification} $\widehat{\mathcal{D}}$ of $\mathcal{D}$ to be the expression $\omega_n \cdots \hat{\omega_{i_k}} \cdots \hat{\omega_{i_1}} \cdots \omega_0$ obtained from $\mathcal{D} = \omega_n \cdots \omega_1\omega_0$ by removing all terms $\omega_{i_j}$ with $1 \leq j \leq k$.
Let $\widehat{u}$ denote the element of $W$ expressed by the product $\widehat{\mathcal{D}}$. The following lemma is straightforward to prove: \begin{lem} \label{lem:another_decomposition_Y_reduce_redundancy} In the above setting, let $\sigma$ denote the mapping from $\{0,1,\dots,n-k\}$ to $\{0,1,\dots,n\}$ satisfying that $\widehat{\mathcal{D}} = \omega_{\sigma(n-k)} \cdots \omega_{\sigma(1)}\omega_{\sigma(0)}$. Then we have $\widehat{u} \in Y_{\widehat{z},y}$ for some $\widehat{z} \in S^{(\Lambda)}$ with $[\widehat{z}] = [z]$; $\widehat{\mathcal{D}}$ is a semi-standard decomposition of $\widehat{u}$ with respect to $J^{(0)}(\widehat{\mathcal{D}}) = J^{(0)}(\mathcal{D})$; we have $J^{(n-k+1)}(\widehat{\mathcal{D}}) = J^{(n+1)}(\mathcal{D})$; and for each $0 \leq j \leq n-k$, we have $[y^{(j)}(\widehat{\mathcal{D}})] = [y^{(\sigma(j))}(\mathcal{D})]$, $[y^{(j+1)}(\widehat{\mathcal{D}})] = [y^{(\sigma(j)+1)}(\mathcal{D})]$, $J^{(j)}(\widehat{\mathcal{D}}) = J^{(\sigma(j))}(\mathcal{D})$ and $J^{(j+1)}(\widehat{\mathcal{D}}) = J^{(\sigma(j)+1)}(\mathcal{D})$. \end{lem} \begin{exmp} \label{exmp:simplification} For the case of Example \ref{exmp:semi-standard_decomposition}, the simplification $\widehat{\mathcal{D}}$ of the standard decomposition $\mathcal{D} = \omega_3\omega_2\omega_1\omega_0$ of $u$ is obtained by removing the third component $\omega_2$, therefore $\widehat{\mathcal{D}} = \omega_3\omega_1\omega_0$. We have \begin{displaymath} \begin{split} &y^{(0)}(\widehat{\mathcal{D}}) = y^{(0)}(\mathcal{D}) = (r_1,r_2,r_3) \,,\, y^{(1)}(\widehat{\mathcal{D}}) = y^{(1)}(\mathcal{D}) = (r_3,r_4,r_5) \enspace, \\ &y^{(2)}(\widehat{\mathcal{D}}) = y^{(2)}(\mathcal{D}) = (r_4,r_5,r_6) \,,\, y^{(3)}(\widehat{\mathcal{D}}) = (r_3,r_4,r_5) = \widehat{z} \enspace. \end{split} \end{displaymath} Now since $\omega_3$ is the inverse of $\omega_1$, the semi-standard decomposition $\widehat{\mathcal{D}}$ of $\widehat{u}$ is not a standard decomposition of $\widehat{u}$. 
\end{exmp} Moreover, we have the following result: \begin{lem} \label{lem:another_decomposition_Y_shift_x} Let $\mathcal{D} = \omega_n \cdots \omega_1\omega_0$ be a semi-standard decomposition of an element $u \in W$. Let $r \in [y^{(0)}]$, and suppose that the support of each $\omega_i$ is apart from $r$. Moreover, let $s \in J^{(0)}$, $s' \in J^{(n+1)}$ and suppose that $u \ast s = s'$. Then we have $r \in [y^{(n+1)}]$, $u \ast r = r$ and $u \in Y_{z',z}$, where $z$ and $z'$ are elements of $S^{(\Lambda)}$ obtained from $y^{(0)}$ and $y^{(n+1)}$ by replacing $r$ with $s$ and with $s'$, respectively. \end{lem} \begin{proof} We use induction on $n \geq 0$. Put $\mathcal{D}' = \omega_{n-1} \cdots \omega_1\omega_0$, and let $u' \in Y_{y^{(n)},y^{(0)}}$ be the element expressed by the product $\mathcal{D}'$. Let $s'' := u' \ast s \in J^{(n)}$. By the induction hypothesis, we have $r \in [y^{(n)}]$, $u' \ast r = r$ and $u' \in Y_{z'',z}$, where $z''$ is the element of $S^{(\Lambda)}$ obtained from $y^{(n)}$ by replacing $r$ with $s''$. Now, since the support $K^{(n)}$ of $\omega_n$ is apart from $r \in [y^{(n)}]$, it follows that $r \in [y^{(n+1)}]$ and $\omega_n \ast r = r$, therefore $u \ast r = \omega_n u' \ast r = r$. On the other hand, we have $z' = \omega_n \ast z''$ by the construction of $z'$ and $z''$. Moreover, by the definition of $\omega_n$, the set $K^{(n)}$ is apart from $([y^{(n)}] \cup J^{(n)}) \smallsetminus K^{(n)}$, therefore $K^{(n)}$ is also apart from the subset $([z''] \cup J^{(n)}) \smallsetminus K^{(n)}$ of $([y^{(n)}] \cup J^{(n)}) \smallsetminus K^{(n)}$. Since $[y^{(n)}] \cap K^{(n)} \subseteq [z''] \cap K^{(n)}$, it follows that $\Phi^{\perp [z'']}[\omega_n] = \Phi_{K^{(n)}}^{\perp [z''] \cap K^{(n)}}[\omega_n] \subseteq \Phi_{K^{(n)}}^{\perp [y^{(n)}] \cap K^{(n)}}[\omega_n] = \Phi^{\perp [y^{(n)}]}[\omega_n] = \emptyset$ (note that $\omega_n \in Y_{y^{(n+1)},y^{(n)}}$), therefore we have $\omega_n \in Y_{z',z''}$. 
Hence we have $u = \omega_n u' \in Y_{z',z}$, concluding the proof. \end{proof} \subsection{Reduction to a special case} \label{sec:proof_reduction} Here we give a reduction of our proof of Theorem \ref{thm:YfixesWperpIfin} to a special case in which the possibilities for the subset $I \subseteq S$ are restricted in a certain manner. First, for $J \subseteq S$, let $\iota(J)$ temporarily denote the union of the irreducible components of $J$ that are not of finite type, and let $\overline{\iota}(J)$ temporarily denote the set of elements of $S$ that are not apart from $\iota(J)$ (hence $J \cap \overline{\iota}(J) = \iota(J)$). For example, when $(W,S)$ is given by the Coxeter graph in Figure \ref{fig:example_notation_iota} (where we abbreviate each $r_i \in S$ to $i$) and $J = \{r_1,r_3,r_4,r_5,r_6\}$ (indicated in Figure \ref{fig:example_notation_iota} by the black vertices), we have $\iota(J) = \{r_1,r_5,r_6\}$ and $\overline{\iota}(J) = \{r_1,r_2,r_5,r_6,r_7\}$, therefore $J \cap \overline{\iota}(J) = \{r_1,r_5,r_6\} = \iota(J)$ as mentioned above. Now we have the following: \begin{figure} \caption{An example for the notations $\iota(J)$ and $\overline{\iota}(J)$; here $J = \{1,3,4,5,6\}$} \label{fig:example_notation_iota} \end{figure} \begin{lem} \label{lem:proof_reduction_root_perp} Let $I$ be an arbitrary subset of $S$. Then we have $w \in W_{S \smallsetminus \overline{\iota}(I)}$ for any $w \in Y_{y,x_I}$ with $y \in S^{(\Lambda)}$, and we have $\Phi^{\perp I} = \Phi_{S \smallsetminus \overline{\iota}(I)}^{\perp I \smallsetminus \overline{\iota}(I)}$. \end{lem} \begin{proof} First, let $w \in Y_{y,x_I}$ with $y \in S^{(\Lambda)}$.
Then by Proposition \ref{prop:factorization_C}(\ref{item:prop_factorization_C_generator_Y}), there are a finite sequence $z_0 = x_I,z_1,\dots,z_{n-1},z_n = y$ of elements of $S^{(\Lambda)}$ and a finite sequence $s_0,s_1,\dots,s_{n-1}$ of elements of $S$ satisfying that $z_{i+1} \neq z_i$, $s_i \not\in [z_i]$, $[z_i]_{\sim s_i}$ is of finite type and $w_{z_i}^{s_i} \in Y_{z_{i+1},z_i}$ for each index $0 \leq i \leq n-1$, and we have $w = w_{z_{n-1}}^{s_{n-1}} \cdots w_{z_1}^{s_1} w_{z_0}^{s_0}$. We show, by induction on $0 \leq i \leq n-1$, that $\iota([z_{i+1}]) = \iota(I)$, $\overline{\iota}([z_{i+1}]) = \overline{\iota}(I)$, and $w_{z_i}^{s_i} \in W_{S \smallsetminus \overline{\iota}(I)}$. It follows from the induction hypothesis when $i > 0$, and is trivial when $i = 0$, that $\iota([z_i]) = \iota(I)$ and $\overline{\iota}([z_i]) = \overline{\iota}(I)$. Since $s_i \not\in [z_i]$ and $[z_i]_{\sim s_i}$ is of finite type, it follows from the definition of $\overline{\iota}$ that $[z_i]_{\sim s_i} \subseteq S \smallsetminus \overline{\iota}([z_i])$, therefore we have $w_{z_i}^{s_i} \in W_{S \smallsetminus \overline{\iota}([z_i])} = W_{S \smallsetminus \overline{\iota}(I)}$, $\iota([z_{i+1}]) = \iota([z_i]) = \iota(I)$, and $\overline{\iota}([z_{i+1}]) = \overline{\iota}([z_i]) = \overline{\iota}(I)$, as desired. This implies that $w = w_{z_{n-1}}^{s_{n-1}} \cdots w_{z_1}^{s_1} w_{z_0}^{s_0} \in W_{S \smallsetminus \overline{\iota}(I)}$, therefore the first part of the claim holds. For the second part of the claim, the inclusion $\supseteq$ is obvious by the definitions of $\iota(I)$ and $\overline{\iota}(I)$. For the other inclusion, it suffices to show that $\Phi^{\perp I} \subseteq \Phi_{S \smallsetminus \overline{\iota}(I)}$, or equivalently $\Pi^I \subseteq \Phi_{S \smallsetminus \overline{\iota}(I)}$. Let $\gamma \in \Pi^I$. 
By Proposition \ref{prop:factorization_C}(\ref{item:prop_factorization_C_generator_perp}), we have $\gamma = w \cdot \gamma(y,s)$ for some $y \in S^{(\Lambda)}$, $w \in Y_{x_I,y}$ and a root $\gamma(y,s)$ introduced in the statement of Proposition \ref{prop:charofBphi}. Now by applying the result of the previous paragraph to $w^{-1} \in Y_{y,x_I}$, it follows that $\iota([y]) = \iota(I)$, $\overline{\iota}([y]) = \overline{\iota}(I)$, and $w \in W_{S \smallsetminus \overline{\iota}(I)}$. Moreover, since $[y]_{\sim s}$ is of finite type (see Proposition \ref{prop:charofBphi}), a similar argument implies that $[y]_{\sim s} \subseteq S \smallsetminus \overline{\iota}([y]) = S \smallsetminus \overline{\iota}(I)$ and $w_y^s \in W_{S \smallsetminus \overline{\iota}(I)}$, therefore $\gamma(y,s) \in \Phi_{S \smallsetminus \overline{\iota}(I)}$. Hence we have $\gamma = w \cdot \gamma(y,s) \in \Phi_{S \smallsetminus \overline{\iota}(I)}$, concluding the proof of Lemma \ref{lem:proof_reduction_root_perp}. \end{proof} For an arbitrary subset $I$ of $S$, suppose that $\gamma \in \Pi^I$, $s_\gamma \in W^{\perp I}{}_{\mathrm{fin}}$, and $w \in Y_I$. Then by the second part of Lemma \ref{lem:proof_reduction_root_perp}, we have $\gamma \in \Pi^I = \Pi^{S \smallsetminus \overline{\iota}(I),I \smallsetminus \overline{\iota}(I)}$ and $s_\gamma$ also belongs to the finite part of $W_{S \smallsetminus \overline{\iota}(I)}^{\perp I \smallsetminus \overline{\iota}(I)}$. Moreover, we have $w \in W_{S \smallsetminus \overline{\iota}(I)}$ by the first part of Lemma \ref{lem:proof_reduction_root_perp}, therefore $w$ also belongs to the group $Y_{I \smallsetminus \overline{\iota}(I)}$ constructed from the pair $S \smallsetminus \overline{\iota}(I)$, $I \smallsetminus \overline{\iota}(I)$ instead of the pair $S$, $I$. 
Hence we have the following result: If the conclusion of Theorem \ref{thm:YfixesWperpIfin} holds for the pair $S \smallsetminus \overline{\iota}(I)$, $I \smallsetminus \overline{\iota}(I)$ instead of the pair $S$, $I$, then the conclusion of Theorem \ref{thm:YfixesWperpIfin} also holds for the pair $S$, $I$. Note that $I \smallsetminus \overline{\iota}(I) = I \smallsetminus \iota(I)$ is the union of the irreducible components of $I$ of finite type. As a consequence, we may assume without loss of generality that every irreducible component of $I$ is of finite type (note that the $A_{>1}$-freeness in the hypothesis of Theorem \ref{thm:YfixesWperpIfin} is preserved by considering $I \smallsetminus \iota(I)$ instead of $I$). From now on, we assume that every irreducible component of $I$ is of finite type, as mentioned in the last paragraph. For any $J \subseteq S$, we say that a subset $\Psi$ of the simple system $\Pi^J$ of $W^{\perp J}$ is an \emph{irreducible component} of $\Pi^J$ if $S(\Psi) = \{s_\beta \mid \beta \in \Psi\}$ is an irreducible component of the generating set $R^J$ of $W^{\perp J}$. Now, as in the statement of Theorem \ref{thm:YfixesWperpIfin}, let $w \in Y_I$ and $\gamma \in \Pi^I$, and suppose that $s_{\gamma} \in W^{\perp I}{}_{\mathrm{fin}}$. Let $\Psi$ denote the union of the irreducible components of $\Pi^I$ containing some $w^k \cdot \gamma$ with $k \in \mathbb{Z}$. Then we have the following: \begin{lem} \label{lem:proof_reduction_Psi_finite} In this setting, $\Psi$ is of finite type; in particular, $|\Psi| < \infty$. Moreover, the two subsets $I \smallsetminus \mathrm{Supp}\,\Psi$ and $\mathrm{Supp}\,\Psi$ of $S$ are not adjacent. \end{lem} \begin{proof} First, there exists a finite subset $K$ of $S$ for which $w \in W_K$ and $\gamma \in \Phi_K$. Then, the number of mutually orthogonal roots of the form $w^k \cdot \gamma$ is at most $|K| < \infty$, since those roots are linearly independent and contained in the $|K|$-dimensional space $V_K$. 
This implies that the number of irreducible components of $\Pi^I$ containing some $w^k \cdot \gamma$, which are of finite type by the property $s_\gamma \in W^{\perp I}{}_{\mathrm{fin}}$ and Proposition \ref{prop:factorization_C}(\ref{item:prop_factorization_C_decomp}), is finite. Therefore, the union $\Psi$ of those irreducible components is also of finite type. Hence the first part of the claim holds. For the second part of the claim, assume to the contrary that some $s \in I \smallsetminus \mathrm{Supp}\,\Psi$ and $t \in \mathrm{Supp}\,\Psi$ are adjacent. By the definition of $\mathrm{Supp}\,\Psi$, we have $t \in \mathrm{Supp}\,\beta \subseteq \mathrm{Supp}\,\Psi$ for some $\beta \in \Psi$. Now we have $s \not\in \mathrm{Supp}\,\beta$. Let $c > 0$ be the coefficient of $\alpha_t$ in $\beta$. Then the property $s \not\in \mathrm{Supp}\,\beta$ implies that $\langle \alpha_s,\beta \rangle \leq c \langle \alpha_s,\alpha_t \rangle < 0$, contradicting the property $\beta \in \Phi^{\perp I}$. Hence the claim holds, concluding the proof of Lemma \ref{lem:proof_reduction_Psi_finite}. \end{proof} We temporarily write $L = I \cap \mathrm{Supp}\,\Psi$, and put $\Psi' = \Psi \cup \Pi_L$. Then we have $\mathrm{Supp}\,\Psi' = \mathrm{Supp}\,\Psi$, therefore by Lemma \ref{lem:proof_reduction_Psi_finite}, $I \smallsetminus \mathrm{Supp}\,\Psi'$ and $\mathrm{Supp}\,\Psi'$ are not adjacent. On the other hand, we have $|\Psi| < \infty$ by Lemma \ref{lem:proof_reduction_Psi_finite}, therefore $\mathrm{Supp}\,\Psi' = \mathrm{Supp}\,\Psi$ is a finite set. By these properties and the above-mentioned assumption that every irreducible component of $I$ is of finite type, it follows that $\Pi_L$ is of finite type as well as $\Psi$. Note that $\Psi \subseteq \Pi^I \subseteq \Phi^{\perp L}$. Hence the two root bases $\Psi$ and $\Pi_L$ are orthogonal, therefore their union $\Psi'$ is also a root basis by Theorem \ref{thm:conditionforrootbasis}, and we have $|W(\Psi')|<\infty$.
By Proposition \ref{prop:fintyperootbasis}, $\Psi'$ is a basis of a subspace $U := \mathrm{span}\,\Psi'$ of $V_{\mathrm{Supp}\,\Psi'}$. By applying Proposition \ref{prop:finitesubsystem} to $W_{\mathrm{Supp}\,\Psi'}$ instead of $W$, it follows that there exist $u \in W_{\mathrm{Supp}\,\Psi'}$ and $J \subseteq \mathrm{Supp}\,\Psi'$ satisfying that $W_J$ is finite, $u \cdot (U \cap \Phi^+) = \Phi_J^+$ and $u \cdot (U \cap \Pi) \subseteq \Pi_J$. Now we have the following: \begin{lem} \label{lem:proof_reduction_conjugate_by_Y} In this setting, if we choose such an element $u$ of minimal length, then there exists an element $y \in S^{(\Lambda)}$ satisfying that $u \in Y_{y,x_I}$, the sets $[y] \smallsetminus J$ and $J$ are not adjacent, and $(u \cdot \Psi) \cup \Pi_{[y] \cap J}$ is a basis of $V_J$. \end{lem} \begin{proof} Since $\Psi'$ is a basis of $U$, the property $u \cdot (U \cap \Phi^+) = \Phi_J^+$ implies that $u \cdot \Psi'$ is a basis of $V_J$. Now we have $u \cdot \Pi_L \subseteq \Pi_J$ since $\Pi_L \subseteq U \cap \Pi$, while $u$ fixes $\Pi_{I \smallsetminus L}$ pointwise since the sets $I \smallsetminus \mathrm{Supp}\,\Psi' = I \smallsetminus L$ and $\mathrm{Supp}\,\Psi'$ are not adjacent. By these properties, there exists an element $y \in S^{(\Lambda)}$ satisfying that $y = u \ast x_I$, $[y] \cap \mathrm{Supp}\,\Psi' \subseteq J$ and $[y] \smallsetminus \mathrm{Supp}\,\Psi' = I \smallsetminus \mathrm{Supp}\,\Psi'$. Since $J \subseteq \mathrm{Supp}\,\Psi'$, it follows that $[y] \smallsetminus J$ and $J$ are not adjacent. On the other hand, since $u \cdot \Pi_{I \smallsetminus L} = \Pi_{I \smallsetminus L}$, $u \cdot (U \cap \Phi^+) = \Phi_J^+$ and $\Pi_{I \smallsetminus L} \cap U = \emptyset$, it follows that $\Pi_{I \smallsetminus L} \cap \Phi_J^+ = \emptyset$, therefore we have $u \cdot \Pi_L = \Pi_{[y] \cap J}$. Hence $u \cdot \Psi' = (u \cdot \Psi) \cup \Pi_{[y] \cap J}$ is a basis of $V_J$. 
Finally, we show that such an element $u$ of minimal length satisfies that $u \cdot \Pi^I \subseteq \Phi^+$, hence $u \cdot (\Phi^{\perp I})^+ \subseteq \Phi^+$ and $u \in Y_{y,x_I}$. First, we have $u \cdot \Psi \subseteq u \cdot (U \cap \Phi^+) = \Phi_J^+$. Secondly, for any $\beta \in \Pi^I \smallsetminus \Psi$, assume to the contrary that $u \cdot \beta \in \Phi^-$. Then we have $\beta \in \Phi_{\mathrm{Supp}\,\Psi'}$ since $u \in W_{\mathrm{Supp}\,\Psi'}$, therefore $s_{\beta} \in W_{\mathrm{Supp}\,\Psi'}$. On the other hand, since $\Psi$ is the union of some irreducible components of $\Pi^I$, it follows that $\beta$ is orthogonal to $\Psi$, hence orthogonal to $\Psi'$. By these properties, the element $u s_{\beta}$ also satisfies the above defining properties of the element $u$. However, now the property $u \cdot \beta \in \Phi^-$ implies that $\ell(u s_{\beta}) < \ell(u)$ (see Theorem \ref{thm:reflectionsubgroup_Deodhar}), contradicting the choice of $u$. Hence we have $u \cdot \beta \in \Phi^+$ for every $\beta \in \Pi^I \smallsetminus \Psi$, therefore $u \cdot \Pi^I \subseteq \Phi^+$, concluding the proof of Lemma \ref{lem:proof_reduction_conjugate_by_Y}. \end{proof} For an element $u \in Y_{y,x_I}$ as in Lemma \ref{lem:proof_reduction_conjugate_by_Y}, Proposition \ref{prop:factorization_C}(\ref{item:prop_factorization_C_decomp}) implies that $u \cdot \gamma \in \Pi^{[y]}$ and $s_{u \cdot \gamma} = u s_\gamma u^{-1} \in W^{\perp [y]}{}_{\mathrm{fin}}$. Now $w$ fixes the root $\gamma$ if and only if the element $uwu^{-1} \in Y_{y,y}$ fixes the root $u \cdot \gamma$. Moreover, the conjugation by $u$ defines an isomorphism of Coxeter systems $(W_I,I) \to (W_{[y]},[y])$.
Hence, by considering $[y] \subseteq S$, $uwu^{-1} \in Y_{[y]}$, $u \cdot \gamma \in \Pi^{[y]}$ and $u \cdot \Psi \subseteq \Pi^{[y]}$ instead of $I$, $w$, $\gamma$ and $\Psi$ if necessary, we may assume without loss of generality the following conditions: \begin{description} \item[(A1)] Every irreducible component of $I$ is of finite type. \item[(A2)] There exists a subset $J \subseteq S$ of finite type satisfying that $I \smallsetminus J$ and $J$ are not adjacent and $\Psi \cup \Pi_{I \cap J}$ is a basis of $V_J$. \end{description} Moreover, if an irreducible component $J'$ of $J$ is contained in $I$, then a smaller subset $J \smallsetminus J'$ instead of $J$ also satisfies the assumption (A2); indeed, now $\Pi_{J'} \subseteq \Pi_{I \cap J}$ spans $V_{J'}$, and since $\Psi \cup \Pi_{I \cap J}$ is a basis of $V_J$ and the support of any root is irreducible (see Lemma \ref{lem:support_is_irreducible}), it follows that the support of any element of $\Psi \cup \Pi_{I \cap (J \smallsetminus J')}$ does not intersect with $J'$. Hence, by choosing a subset $J \subseteq S$ in (A2) as small as possible, we may also assume without loss of generality the following condition: \begin{description} \item[(A3)] No irreducible component of $J$ is contained in $I$. \end{description} We also notice the following properties: \begin{lem} \label{lem:Psi_is_full} In this setting, we have $\Psi = \Pi^{J,I \cap J}$, hence $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ is a basis of $V_J$. \end{lem} \begin{proof} The inclusion $\Psi \subseteq \Pi^{J,I \cap J}$ follows from the definition of $\Psi$ and the condition (A2). Now assume to the contrary that $\beta \in \Pi^{J,I \cap J} \smallsetminus \Psi$. Then we have $\beta \in \Pi^I$ by (A2). Since $\Psi$ is the union of some irreducible components of $\Pi^I$, it follows that $\beta$ is orthogonal to $\Psi$ as well as to $\Pi_{I \cap J}$.
This implies that $\beta$ belongs to the radical of $V_J$, which is trivial by Proposition \ref{prop:fintyperootbasis}. This is a contradiction. Hence the claim holds. \end{proof} \begin{lem} \label{lem:property_w} In this setting, the element $w \in Y_I$ satisfies that $w \cdot \Phi_J = \Phi_J$, and the subgroup $\langle w \rangle$ generated by $w$ acts transitively on the set of the irreducible components of $\Pi^{J,I \cap J}$. \end{lem} \begin{proof} The second part of the claim follows immediately from the definition of $\Psi$ and Lemma \ref{lem:Psi_is_full}. It also implies that $w \cdot \Pi^{J,I \cap J} = \Pi^{J,I \cap J}$, while $w \cdot \Pi_{I \cap J} = \Pi_{I \cap J}$ since $w \in Y_I$. Moreover, $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ is a basis of $V_J$ by Lemma \ref{lem:Psi_is_full}. This implies that $w \cdot V_J = V_J$, therefore we have $w \cdot \Phi_J = \Phi_J$. Hence the claim holds. \end{proof} \subsection{A key lemma} \label{sec:proof_key_lemma} Let $I^{\perp}$ denote the set of all elements of $S$ that are apart from $I$. Then there are two possibilities: $\Pi^{J,I \cap J} \not\subseteq \Phi_{I^{\perp}}$, or $\Pi^{J,I \cap J} \subseteq \Phi_{I^{\perp}}$. Here we present a key lemma regarding the former possibility (recall the three conditions (A1)--(A3) specified above): \begin{lem} \label{lem:finitepart_first_case_irreducible} If $\Pi^{J,I \cap J} \not\subseteq \Phi_{I^{\perp}}$, then we have $I \cap J \neq \emptyset$ and $J$ is irreducible. \end{lem} \begin{proof} First, take an element $\beta \in \Pi^{J,I \cap J} \smallsetminus \Phi_{I^{\perp}}$. Then we have $\beta \not\in \Phi_I$ since $\Pi^{J,I \cap J} \subseteq \Phi^{\perp I}$. Moreover, since the support $\mathrm{Supp}\,\beta$ of $\beta$ is irreducible (see Lemma \ref{lem:support_is_irreducible}), there exists an element $s \in \mathrm{Supp}\,\beta \smallsetminus I$ which is adjacent to an element of $I$, say $s' \in I$.
Now the property $\beta \in \Phi^{\perp I}$ implies that $s' \in \mathrm{Supp}\,\beta$, since otherwise we have $\langle \beta,\alpha_{s'} \rangle \leq c \langle \alpha_s,\alpha_{s'} \rangle < 0$ where $c > 0$ is the coefficient of $\alpha_s$ in $\beta$. Hence we have $s' \in \mathrm{Supp}\,\Pi^{J,I \cap J} \subseteq J$. Let $K$ denote the irreducible component of $J$ containing $s'$. Put $\Psi' = \Pi^{J,I \cap J} \cap \Phi_K$. Then, since $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ is a basis of $V_J$ by Lemma \ref{lem:Psi_is_full} and the support of any root is irreducible (see Lemma \ref{lem:support_is_irreducible}), it follows that $\beta \in \Psi'$, $\Psi'$ is orthogonal to $\Pi^{J,I \cap J} \smallsetminus \Psi'$ and $\Psi' \cup \Pi_{I \cap K}$ is a basis of $V_K$. Now $\Psi'$ is the union of some irreducible components of $\Pi^{J,I \cap J}$. We first show that $J$ is irreducible, provided that $w \cdot \Phi_K = \Phi_K$. Indeed, in this case we have $w \cdot \Psi' = \Psi'$, therefore $\Pi^{J,I \cap J} = \Psi' \subseteq \Phi_K$ by the second part of Lemma \ref{lem:property_w}. Now by the condition (A3), $J$ has no irreducible components other than $K$ (indeed, if such an irreducible component $J'$ of $J$ exists, then the property $\Pi^{J,I \cap J} \subseteq \Phi_K$ implies that the space $V_{J'}$ should be spanned by a subset of $\Pi_{I \cap J}$, therefore $J' \subseteq I$). Hence $J = K$ is irreducible. Thus it suffices to show that $w \cdot \Phi_K = \Phi_K$. For this purpose, it also suffices to show that $w \cdot \Phi_K \subseteq \Phi_K$ (since $K$ is of finite type as well as $J$), or equivalently $w \cdot \Pi_K \subseteq \Phi_K$. Moreover, by the three properties that $K$ is irreducible, $K \cap I \neq \emptyset$ and $w \cdot \Pi_{K \cap I} = \Pi_{K \cap I}$, it suffices to show that $w \cdot \alpha_{t'} \in \Phi_K$ provided that $t' \in K$ is adjacent to some $t \in K$ with $w \cdot \alpha_t \in \Phi_K$. Now note that $w \cdot \Phi_J = \Phi_J$ by Lemma \ref{lem:property_w}.
Assume to the contrary that $w \cdot \alpha_{t'} \not\in \Phi_K$. Then we have $w \cdot \alpha_{t'} \in \Phi_J \smallsetminus \Phi_K = \Phi_{J \smallsetminus K}$ since $K$ is an irreducible component of $J$, therefore $w \cdot \alpha_{t'}$ is orthogonal to $w \cdot \alpha_t \in \Phi_K$. This contradicts the property that $t'$ is adjacent to $t$, since $w$ leaves the bilinear form $\langle\,,\,\rangle$ invariant. Hence we have $w \cdot \alpha_{t'} \in \Phi_K$, as desired. \end{proof} \section{Proof of Theorem \ref{thm:YfixesWperpIfin}: On the special case} \label{sec:proof_special} In this section, we introduce the assumption in Theorem \ref{thm:YfixesWperpIfin} that $I$ is $A_{>1}$-free, and continue the argument in Section \ref{sec:proof_general}. Recall the properties (A1), (A2) and (A3) of $I$, $J$ and $\Psi = \Pi^{J,I \cap J}$ (see Lemma \ref{lem:Psi_is_full}) given in Section \ref{sec:proof_reduction}. Our aim here is to prove that $w$ fixes $\Pi^{J,I \cap J}$ pointwise, which implies our goal $w \cdot \gamma = \gamma$ since $\gamma \in \Psi = \Pi^{J,I \cap J}$ by the definition of $\Psi$. We divide the following argument into two cases: $\Pi^{J,I \cap J} \not\subseteq \Phi_{I^{\perp}}$, or $\Pi^{J,I \cap J} \subseteq \Phi_{I^{\perp}}$ (see Section \ref{sec:proof_key_lemma} for the definition of $I^{\perp}$). \subsection{The first case $\Pi^{J,I \cap J} \not\subseteq \Phi_{I^{\perp}}$} \label{sec:proof_special_first_case} Here we consider the case that $\Pi^{J,I \cap J} \not\subseteq \Phi_{I^{\perp}}$. In this case, the subset $J \subseteq S$ of finite type is irreducible by Lemma \ref{lem:finitepart_first_case_irreducible}, therefore we can apply the classification of finite irreducible Coxeter groups. Let $J = \{r_1,r_2,\dots,r_N\}$, where $N = |J|$, be the standard labelling of $J$ (see Section \ref{sec:longestelement}). We write $\alpha_i = \alpha_{r_i}$ for simplicity. We introduce some temporary terminology.
We say that an element $y \in S^{(\Lambda)}$ satisfies \emph{Property P} if $[y] \smallsetminus J = I \smallsetminus J$ (hence $[y] \smallsetminus J$ is apart from $J$ by the condition (A2)) and $\Pi^{J,[y] \cap J} \cup \Pi_{[y] \cap J}$ is a basis of $V_J$. For example, $x_I$ itself satisfies Property P. For any $y \in S^{(\Lambda)}$ satisfying Property P and any element $s \in J \smallsetminus [y]$ with $\varphi(y,s) \neq y$, we say that the isomorphism $t \mapsto w_y^s \ast t$ from $[y] \cap J$ to $[\varphi(y,s)] \cap J$ is a \emph{local transformation} (note that now $[y]_{\sim s} \subseteq J$ and $w_y^s \in W_J$ by the above-mentioned property that $[y] \smallsetminus J$ is apart from $J$). By abusing the terminology, in such a case we also call the correspondence $y \mapsto \varphi(y,s)$ a local transformation. Note that, in this case, $\varphi(y,s)$ also satisfies Property P; we have $w_y^s \in Y_{\varphi(y,s),y}$ and $w_y^s \ast t = t$ for any $t \in [y] \smallsetminus J$, and the action of $w_y^s$ induces an isomorphism from $\Pi^{J,[y] \cap J}$ to $\Pi^{J,[\varphi(y,s)] \cap J}$. Since $w \cdot \Pi^{J,I \cap J} = \Pi^{J,I \cap J}$, the claim is trivial if $|\Pi^{J,I \cap J}| = 1$. From now on, we consider the case that $|\Pi^{J,I \cap J}| \geq 2$, therefore we have $N = |J| \geq |I \cap J| + 2 \geq 3$ (note that $I \cap J \neq \emptyset$ by Lemma \ref{lem:finitepart_first_case_irreducible}). In particular, $J$ is not of type $I_2(m)$. On the other hand, we have the following results: \begin{lem} \label{lem:J_not_A_N} In this setting, $J$ is not of type $A_N$. \end{lem} \begin{proof} We show that $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ cannot span $V_J$ if $J$ is of type $A_N$, which yields a contradiction and hence concludes the proof. By the $A_{>1}$-freeness of $I$, each irreducible component of $I \cap J$ (which is also an irreducible component of $I$) is of type $A_1$.
Now by applying successive local transformations, we may assume without loss of generality that $r_1 \in I$ (indeed, if the minimal index $i$ with $r_i \in I$ satisfies $i \geq 2$, then we have $\varphi(x_I,r_{i-1}) \ast r_i = r_{i-1}$). In this case, we have $r_2 \not\in I$, while we have $\Phi_J^{\perp I} \subseteq \Phi_{J \smallsetminus \{r_1,r_2\}}$ by the fact that any positive root in the root system $\Phi_J$ of type $A_N$ is of the form $\alpha_i + \alpha_{i+1} + \cdots + \alpha_{i'}$ with $1 \leq i \leq i' \leq N$. This implies that the subset $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ of $\Phi_J^{\perp I} \cup \Pi_{I \cap J}$ cannot span $V_J$, as desired. \end{proof} To prove the next lemma (and some other results below), we give a list of all positive roots of the Coxeter group of type $E_8$. The list is divided into six parts (Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6}). In the lists, we use the standard labelling $r_1,\dots,r_8$ of generators. The coefficients of each root are placed at the same relative positions as the corresponding vertices of the Coxeter graph of type $E_8$ in Figure \ref{fig:finite_irreducible_Coxeter_groups}; for example, the last root $\gamma_{120}$ in Table \ref{tab:positive_roots_E_8_6} is $2\alpha_1 + 3\alpha_2 + 4\alpha_3 + 6\alpha_4 + 5\alpha_5 + 4\alpha_6 + 3\alpha_7 + 2\alpha_8$ (which is the highest root of type $E_8$). For the columns for actions of generators (4th to 11th columns), a blank cell means that the generator $r_j$ fixes the root $\gamma_i$ (or equivalently, $\langle \alpha_j,\gamma_i \rangle = 0$); while a cell filled by \lq\lq ---'' means that $\gamma_i = \alpha_j$. Moreover, the positive roots of the parabolic subgroup of type $E_6$ (respectively, $E_7$) generated by $\{r_1,\dots,r_6\}$ (respectively, $\{r_1,\dots,r_7\}$) correspond to the rows indicated by \lq\lq $E_6$'' (respectively, \lq\lq $E_7$''). 
By the data for actions of generators, it can be verified that the list indeed exhausts all the positive roots. \begin{table}[p] \centering \caption{List of positive roots for Coxeter group of type $E_8$ (part $1$)} \label{tab:positive_roots_E_8_1} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|cc} \cline{1-11} height & $i$ & root $\gamma_i$ & \multicolumn{8}{|c|}{index $k$ with $r_j \cdot \gamma_i = \gamma_k$} \\ \cline{4-11} & & & $r_1$ & $r_2$ & $r_3$ & $r_4$ & $r_5$ & $r_6$ & $r_7$ & $r_8$ \\ \cline{1-11}\cline{1-11} $1$ & $1$ & \Eroot{1}{0}{0}{0}{0}{0}{0}{0} & --- & & $9$ & & & & & & $E_6$ & $E_7$ \\ \cline{2-11} & $2$ & \Eroot{0}{1}{0}{0}{0}{0}{0}{0} & & --- & & $10$ & & & & & $E_6$ & $E_7$ \\ \cline{2-11} & $3$ & \Eroot{0}{0}{1}{0}{0}{0}{0}{0} & $9$ & & --- & $11$ & & & & & $E_6$ & $E_7$ \\ \cline{2-11} & $4$ & \Eroot{0}{0}{0}{1}{0}{0}{0}{0} & & $10$ & $11$ & --- & $12$ & & & & $E_6$ & $E_7$ \\ \cline{2-11} & $5$ & \Eroot{0}{0}{0}{0}{1}{0}{0}{0} & & & & $12$ & --- & $13$ & & & $E_6$ & $E_7$ \\ \cline{2-11} & $6$ & \Eroot{0}{0}{0}{0}{0}{1}{0}{0} & & & & & $13$ & --- & $14$ & & $E_6$ & $E_7$ \\ \cline{2-11} & $7$ & \Eroot{0}{0}{0}{0}{0}{0}{1}{0} & & & & & & $14$ & --- & $15$ & & $E_7$ \\ \cline{2-11} & $8$ & \Eroot{0}{0}{0}{0}{0}{0}{0}{1} & & & & & & & $15$ & --- \\ \cline{1-11} $2$ & $9$ & \Eroot{1}{0}{1}{0}{0}{0}{0}{0} & $3$ & & $1$ & $16$ & & & & & $E_6$ & $E_7$ \\ \cline{2-11} & $10$ & \Eroot{0}{1}{0}{1}{0}{0}{0}{0} & & $4$ & $17$ & $2$ & $18$ & & & & $E_6$ & $E_7$ \\ \cline{2-11} & $11$ & \Eroot{0}{0}{1}{1}{0}{0}{0}{0} & $16$ & $17$ & $4$ & $3$ & $19$ & & & & $E_6$ & $E_7$ \\ \cline{2-11} & $12$ & \Eroot{0}{0}{0}{1}{1}{0}{0}{0} & & $18$ & $19$ & $5$ & $4$ & $20$ & & & $E_6$ & $E_7$ \\ \cline{2-11} & $13$ & \Eroot{0}{0}{0}{0}{1}{1}{0}{0} & & & & $20$ & $6$ & $5$ & $21$ & & $E_6$ & $E_7$ \\ \cline{2-11} & $14$ & \Eroot{0}{0}{0}{0}{0}{1}{1}{0} & & & & & $21$ & $7$ & $6$ & $22$ & & $E_7$ \\ \cline{2-11} & $15$ & \Eroot{0}{0}{0}{0}{0}{0}{1}{1} & & & & & & $22$ & $8$ 
& $7$ \\ \cline{1-11} $3$ & $16$ & \Eroot{1}{0}{1}{1}{0}{0}{0}{0} & $11$ & $23$ & & $9$ & $24$ & & & & $E_6$ & $E_7$ \\ \cline{2-11} & $17$ & \Eroot{0}{1}{1}{1}{0}{0}{0}{0} & $23$ & $11$ & $10$ & & $25$ & & & & $E_6$ & $E_7$ \\ \cline{2-11} & $18$ & \Eroot{0}{1}{0}{1}{1}{0}{0}{0} & & $12$ & $25$ & & $10$ & $26$ & & & $E_6$ & $E_7$ \\ \cline{2-11} & $19$ & \Eroot{0}{0}{1}{1}{1}{0}{0}{0} & $24$ & $25$ & $12$ & & $11$ & $27$ & & & $E_6$ & $E_7$ \\ \cline{2-11} & $20$ & \Eroot{0}{0}{0}{1}{1}{1}{0}{0} & & $26$ & $27$ & $13$ & & $12$ & $28$ & & $E_6$ & $E_7$ \\ \cline{2-11} & $21$ & \Eroot{0}{0}{0}{0}{1}{1}{1}{0} & & & & $28$ & $14$ & & $13$ & $29$ & & $E_7$ \\ \cline{2-11} & $22$ & \Eroot{0}{0}{0}{0}{0}{1}{1}{1} & & & & & $29$ & $15$ & & $14$ \\ \cline{1-11} \end{tabular} \end{table} \begin{table}[p] \centering \caption{List of positive roots for Coxeter group of type $E_8$ (part $2$)} \label{tab:positive_roots_E_8_2} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|cc} \cline{1-11} height & $i$ & root $\gamma_i$ & \multicolumn{8}{|c|}{index $k$ with $r_j \cdot \gamma_i = \gamma_k$} \\ \cline{4-11} & & & $r_1$ & $r_2$ & $r_3$ & $r_4$ & $r_5$ & $r_6$ & $r_7$ & $r_8$ \\ \cline{1-11} $4$ & $23$ & \Eroot{1}{1}{1}{1}{0}{0}{0}{0} & $17$ & $16$ & & & $30$ & & & & $E_6$ & $E_7$ \\ \cline{2-11} & $24$ & \Eroot{1}{0}{1}{1}{1}{0}{0}{0} & $19$ & $30$ & & & $16$ & $31$ & & & $E_6$ & $E_7$ \\ \cline{2-11} & $25$ & \Eroot{0}{1}{1}{1}{1}{0}{0}{0} & $30$ & $19$ & $18$ & $32$ & $17$ & $33$ & & & $E_6$ & $E_7$ \\ \cline{2-11} & $26$ & \Eroot{0}{1}{0}{1}{1}{1}{0}{0} & & $20$ & $33$ & & & $18$ & $34$ & & $E_6$ & $E_7$ \\ \cline{2-11} & $27$ & \Eroot{0}{0}{1}{1}{1}{1}{0}{0} & $31$ & $33$ & $20$ & & & $19$ & $35$ & & $E_6$ & $E_7$ \\ \cline{2-11} & $28$ & \Eroot{0}{0}{0}{1}{1}{1}{1}{0} & & $34$ & $35$ & $21$ & & & $20$ & $36$ & & $E_7$ \\ \cline{2-11} & $29$ & \Eroot{0}{0}{0}{0}{1}{1}{1}{1} & & & & $36$ & $22$ & & & $21$ \\ \cline{1-11} $5$ & $30$ & \Eroot{1}{1}{1}{1}{1}{0}{0}{0} & $25$ & $24$ & & 
$37$ & $23$ & $38$ & & & $E_6$ & $E_7$ \\ \cline{2-11} & $31$ & \Eroot{1}{0}{1}{1}{1}{1}{0}{0} & $27$ & $38$ & & & & $24$ & $39$ & & $E_6$ & $E_7$ \\ \cline{2-11} & $32$ & \Eroot{0}{1}{1}{2}{1}{0}{0}{0} & $37$ & & & $25$ & & $40$ & & & $E_6$ & $E_7$ \\ \cline{2-11} & $33$ & \Eroot{0}{1}{1}{1}{1}{1}{0}{0} & $38$ & $27$ & $26$ & $40$ & & $25$ & $41$ & & $E_6$ & $E_7$ \\ \cline{2-11} & $34$ & \Eroot{0}{1}{0}{1}{1}{1}{1}{0} & & $28$ & $41$ & & & & $26$ & $42$ & & $E_7$ \\ \cline{2-11} & $35$ & \Eroot{0}{0}{1}{1}{1}{1}{1}{0} & $39$ & $41$ & $28$ & & & & $27$ & $43$ & & $E_7$ \\ \cline{2-11} & $36$ & \Eroot{0}{0}{0}{1}{1}{1}{1}{1} & & $42$ & $43$ & $29$ & & & & $28$ \\ \cline{1-11} $6$ & $37$ & \Eroot{1}{1}{1}{2}{1}{0}{0}{0} & $32$ & & $44$ & $30$ & & $45$ & & & $E_6$ & $E_7$ \\ \cline{2-11} & $38$ & \Eroot{1}{1}{1}{1}{1}{1}{0}{0} & $33$ & $31$ & & $45$ & & $30$ & $46$ & & $E_6$ & $E_7$ \\ \cline{2-11} & $39$ & \Eroot{1}{0}{1}{1}{1}{1}{1}{0} & $35$ & $46$ & & & & & $31$ & $47$ & & $E_7$ \\ \cline{2-11} & $40$ & \Eroot{0}{1}{1}{2}{1}{1}{0}{0} & $45$ & & & $33$ & $48$ & $32$ & $49$ & & $E_6$ & $E_7$ \\ \cline{2-11} & $41$ & \Eroot{0}{1}{1}{1}{1}{1}{1}{0} & $46$ & $35$ & $34$ & $49$ & & & $33$ & $50$ & & $E_7$ \\ \cline{2-11} & $42$ & \Eroot{0}{1}{0}{1}{1}{1}{1}{1} & & $36$ & $50$ & & & & & $34$ \\ \cline{2-11} & $43$ & \Eroot{0}{0}{1}{1}{1}{1}{1}{1} & $47$ & $50$ & $36$ & & & & & $35$ \\ \cline{1-11} \end{tabular} \end{table} \begin{table}[p] \centering \caption{List of positive roots for Coxeter group of type $E_8$ (part $3$)} \label{tab:positive_roots_E_8_3} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|cc} \cline{1-11} height & $i$ & root $\gamma_i$ & \multicolumn{8}{|c|}{index $k$ with $r_j \cdot \gamma_i = \gamma_k$} \\ \cline{4-11} & & & $r_1$ & $r_2$ & $r_3$ & $r_4$ & $r_5$ & $r_6$ & $r_7$ & $r_8$ \\ \cline{1-11} $7$ & $44$ & \Eroot{1}{1}{2}{2}{1}{0}{0}{0} & & & $37$ & & & $51$ & & & $E_6$ & $E_7$ \\ \cline{2-11} & $45$ & \Eroot{1}{1}{1}{2}{1}{1}{0}{0} & $40$ & & $51$ & 
$38$ & $52$ & $37$ & $53$ & & $E_6$ & $E_7$ \\ \cline{2-11} & $46$ & \Eroot{1}{1}{1}{1}{1}{1}{1}{0} & $41$ & $39$ & & $53$ & & & $38$ & $54$ & & $E_7$ \\ \cline{2-11} & $47$ & \Eroot{1}{0}{1}{1}{1}{1}{1}{1} & $43$ & $54$ & & & & & & $39$ \\ \cline{2-11} & $48$ & \Eroot{0}{1}{1}{2}{2}{1}{0}{0} & $52$ & & & & $40$ & & $55$ & & $E_6$ & $E_7$ \\ \cline{2-11} & $49$ & \Eroot{0}{1}{1}{2}{1}{1}{1}{0} & $53$ & & & $41$ & $55$ & & $40$ & $56$ & & $E_7$ \\ \cline{2-11} & $50$ & \Eroot{0}{1}{1}{1}{1}{1}{1}{1} & $54$ & $43$ & $42$ & $56$ & & & & $41$ \\ \cline{1-11} $8$ & $51$ & \Eroot{1}{1}{2}{2}{1}{1}{0}{0} & & & $45$ & & $57$ & $44$ & $58$ & & $E_6$ & $E_7$ \\ \cline{2-11} & $52$ & \Eroot{1}{1}{1}{2}{2}{1}{0}{0} & $48$ & & $57$ & & $45$ & & $59$ & & $E_6$ & $E_7$ \\ \cline{2-11} & $53$ & \Eroot{1}{1}{1}{2}{1}{1}{1}{0} & $49$ & & $58$ & $46$ & $59$ & & $45$ & $60$ & & $E_7$ \\ \cline{2-11} & $54$ & \Eroot{1}{1}{1}{1}{1}{1}{1}{1} & $50$ & $47$ & & $60$ & & & & $46$ \\ \cline{2-11} & $55$ & \Eroot{0}{1}{1}{2}{2}{1}{1}{0} & $59$ & & & & $49$ & $61$ & $48$ & $62$ & & $E_7$ \\ \cline{2-11} & $56$ & \Eroot{0}{1}{1}{2}{1}{1}{1}{1} & $60$ & & & $50$ & $62$ & & & $49$ \\ \cline{1-11} $9$ & $57$ & \Eroot{1}{1}{2}{2}{2}{1}{0}{0} & & & $52$ & $63$ & $51$ & & $64$ & & $E_6$ & $E_7$ \\ \cline{2-11} & $58$ & \Eroot{1}{1}{2}{2}{1}{1}{1}{0} & & & $53$ & & $64$ & & $51$ & $65$ & & $E_7$ \\ \cline{2-11} & $59$ & \Eroot{1}{1}{1}{2}{2}{1}{1}{0} & $55$ & & $64$ & & $53$ & $66$ & $52$ & $67$ & & $E_7$ \\ \cline{2-11} & $60$ & \Eroot{1}{1}{1}{2}{1}{1}{1}{1} & $56$ & & $65$ & $54$ & $67$ & & & $53$ \\ \cline{2-11} & $61$ & \Eroot{0}{1}{1}{2}{2}{2}{1}{0} & $66$ & & & & & $55$ & & $68$ & & $E_7$ \\ \cline{2-11} & $62$ & \Eroot{0}{1}{1}{2}{2}{1}{1}{1} & $67$ & & & & $56$ & $68$ & & $55$ \\ \cline{1-11} \end{tabular} \end{table} \begin{table}[p] \centering \caption{List of positive roots for Coxeter group of type $E_8$ (part $4$)} \label{tab:positive_roots_E_8_4} 
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|cc} \cline{1-11} height & $i$ & root $\gamma_i$ & \multicolumn{8}{|c|}{index $k$ with $r_j \cdot \gamma_i = \gamma_k$} \\ \cline{4-11} & & & $r_1$ & $r_2$ & $r_3$ & $r_4$ & $r_5$ & $r_6$ & $r_7$ & $r_8$ \\ \cline{1-11} $10$ & $63$ & \Eroot{1}{1}{2}{3}{2}{1}{0}{0} & & $69$ & & $57$ & & & $70$ & & $E_6$ & $E_7$ \\ \cline{2-11} & $64$ & \Eroot{1}{1}{2}{2}{2}{1}{1}{0} & & & $59$ & $70$ & $58$ & $71$ & $57$ & $72$ & & $E_7$ \\ \cline{2-11} & $65$ & \Eroot{1}{1}{2}{2}{1}{1}{1}{1} & & & $60$ & & $72$ & & & $58$ \\ \cline{2-11} & $66$ & \Eroot{1}{1}{1}{2}{2}{2}{1}{0} & $61$ & & $71$ & & & $59$ & & $73$ & & $E_7$ \\ \cline{2-11} & $67$ & \Eroot{1}{1}{1}{2}{2}{1}{1}{1} & $62$ & & $72$ & & $60$ & $73$ & & $59$ \\ \cline{2-11} & $68$ & \Eroot{0}{1}{1}{2}{2}{2}{1}{1} & $73$ & & & & & $62$ & $74$ & $61$ \\ \cline{1-11} $11$ & $69$ & \Eroot{1}{2}{2}{3}{2}{1}{0}{0} & & $63$ & & & & & $75$ & & $E_6$ & $E_7$ \\ \cline{2-11} & $70$ & \Eroot{1}{1}{2}{3}{2}{1}{1}{0} & & $75$ & & $64$ & & $76$ & $63$ & $77$ & & $E_7$ \\ \cline{2-11} & $71$ & \Eroot{1}{1}{2}{2}{2}{2}{1}{0} & & & $66$ & $76$ & & $64$ & & $78$ & & $E_7$ \\ \cline{2-11} & $72$ & \Eroot{1}{1}{2}{2}{2}{1}{1}{1} & & & $67$ & $77$ & $65$ & $78$ & & $64$ \\ \cline{2-11} & $73$ & \Eroot{1}{1}{1}{2}{2}{2}{1}{1} & $68$ & & $78$ & & & $67$ & $79$ & $66$ \\ \cline{2-11} & $74$ & \Eroot{0}{1}{1}{2}{2}{2}{2}{1} & $79$ & & & & & & $68$ & \\ \cline{1-11} $12$ & $75$ & \Eroot{1}{2}{2}{3}{2}{1}{1}{0} & & $70$ & & & & $80$ & $69$ & $81$ & & $E_7$ \\ \cline{2-11} & $76$ & \Eroot{1}{1}{2}{3}{2}{2}{1}{0} & & $80$ & & $71$ & $82$ & $70$ & & $83$ & & $E_7$ \\ \cline{2-11} & $77$ & \Eroot{1}{1}{2}{3}{2}{1}{1}{1} & & $81$ & & $72$ & & $83$ & & $70$ \\ \cline{2-11} & $78$ & \Eroot{1}{1}{2}{2}{2}{2}{1}{1} & & & $73$ & $83$ & & $72$ & $84$ & $71$ \\ \cline{2-11} & $79$ & \Eroot{1}{1}{1}{2}{2}{2}{2}{1} & $74$ & & $84$ & & & & $73$ & \\ \cline{1-11} \end{tabular} \end{table} \begin{table}[p] \centering 
\caption{List of positive roots for Coxeter group of type $E_8$ (part $5$)} \label{tab:positive_roots_E_8_5} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|cc} \cline{1-11} height & $i$ & root $\gamma_i$ & \multicolumn{8}{|c|}{index $k$ with $r_j \cdot \gamma_i = \gamma_k$} \\ \cline{4-11} & & & $r_1$ & $r_2$ & $r_3$ & $r_4$ & $r_5$ & $r_6$ & $r_7$ & $r_8$ \\ \cline{1-11} $13$ & $80$ & \Eroot{1}{2}{2}{3}{2}{2}{1}{0} & & $76$ & & & $85$ & $75$ & & $86$ & & $E_7$ \\ \cline{2-11} & $81$ & \Eroot{1}{2}{2}{3}{2}{1}{1}{1} & & $77$ & & & & $86$ & & $75$ \\ \cline{2-11} & $82$ & \Eroot{1}{1}{2}{3}{3}{2}{1}{0} & & $85$ & & & $76$ & & & $87$ & & $E_7$ \\ \cline{2-11} & $83$ & \Eroot{1}{1}{2}{3}{2}{2}{1}{1} & & $86$ & & $78$ & $87$ & $77$ & $88$ & $76$ \\ \cline{2-11} & $84$ & \Eroot{1}{1}{2}{2}{2}{2}{2}{1} & & & $79$ & $88$ & & & $78$ & \\ \cline{1-11} $14$ & $85$ & \Eroot{1}{2}{2}{3}{3}{2}{1}{0} & & $82$ & & $89$ & $80$ & & & $90$ & & $E_7$ \\ \cline{2-11} & $86$ & \Eroot{1}{2}{2}{3}{2}{2}{1}{1} & & $83$ & & & $90$ & $81$ & $91$ & $80$ \\ \cline{2-11} & $87$ & \Eroot{1}{1}{2}{3}{3}{2}{1}{1} & & $90$ & & & $83$ & & $92$ & $82$ \\ \cline{2-11} & $88$ & \Eroot{1}{1}{2}{3}{2}{2}{2}{1} & & $91$ & & $84$ & $92$ & & $83$ & \\ \cline{1-11} $15$ & $89$ & \Eroot{1}{2}{2}{4}{3}{2}{1}{0} & & & $93$ & $85$ & & & & $94$ & & $E_7$ \\ \cline{2-11} & $90$ & \Eroot{1}{2}{2}{3}{3}{2}{1}{1} & & $87$ & & $94$ & $86$ & & $95$ & $85$ \\ \cline{2-11} & $91$ & \Eroot{1}{2}{2}{3}{2}{2}{2}{1} & & $88$ & & & $95$ & & $86$ & \\ \cline{2-11} & $92$ & \Eroot{1}{1}{2}{3}{3}{2}{2}{1} & & $95$ & & & $88$ & $96$ & $87$ & \\ \cline{1-11} $16$ & $93$ & \Eroot{1}{2}{3}{4}{3}{2}{1}{0} & $97$ & & $89$ & & & & & $98$ & & $E_7$ \\ \cline{2-11} & $94$ & \Eroot{1}{2}{2}{4}{3}{2}{1}{1} & & & $98$ & $90$ & & & $99$ & $89$ \\ \cline{2-11} & $95$ & \Eroot{1}{2}{2}{3}{3}{2}{2}{1} & & $92$ & & $99$ & $91$ & $100$ & $90$ & \\ \cline{2-11} & $96$ & \Eroot{1}{1}{2}{3}{3}{3}{2}{1} & & $100$ & & & & $92$ & & \\ \cline{1-11} $17$ & 
$97$ & \Eroot{2}{2}{3}{4}{3}{2}{1}{0} & $93$ & & & & & & & $101$ & & $E_7$ \\ \cline{2-11} & $98$ & \Eroot{1}{2}{3}{4}{3}{2}{1}{1} & $101$ & & $94$ & & & & $102$ & $93$ \\ \cline{2-11} & $99$ & \Eroot{1}{2}{2}{4}{3}{2}{2}{1} & & & $102$ & $95$ & & $103$ & $94$ & \\ \cline{2-11} & $100$ & \Eroot{1}{2}{2}{3}{3}{3}{2}{1} & & $96$ & & $103$ & & $95$ & & \\ \cline{1-11} \end{tabular} \end{table} \begin{table}[p] \centering \caption{List of positive roots for Coxeter group of type $E_8$ (part $6$)} \label{tab:positive_roots_E_8_6} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|cc} \cline{1-11} height & $i$ & root $\gamma_i$ & \multicolumn{8}{|c|}{index $k$ with $r_j \cdot \gamma_i = \gamma_k$} \\ \cline{4-11} & & & $r_1$ & $r_2$ & $r_3$ & $r_4$ & $r_5$ & $r_6$ & $r_7$ & $r_8$ \\ \cline{1-11} $18$ & $101$ & \Eroot{2}{2}{3}{4}{3}{2}{1}{1} & $98$ & & & & & & $104$ & $97$ \\ \cline{2-11} & $102$ & \Eroot{1}{2}{3}{4}{3}{2}{2}{1} & $104$ & & $99$ & & & $105$ & $98$ & \\ \cline{2-11} & $103$ & \Eroot{1}{2}{2}{4}{3}{3}{2}{1} & & & $105$ & $100$ & $106$ & $99$ & & \\ \cline{1-11} $19$ & $104$ & \Eroot{2}{2}{3}{4}{3}{2}{2}{1} & $102$ & & & & & $107$ & $101$ & \\ \cline{2-11} & $105$ & \Eroot{1}{2}{3}{4}{3}{3}{2}{1} & $107$ & & $103$ & & $108$ & $102$ & & \\ \cline{2-11} & $106$ & \Eroot{1}{2}{2}{4}{4}{3}{2}{1} & & & $108$ & & $103$ & & & \\ \cline{1-11} $20$ & $107$ & \Eroot{2}{2}{3}{4}{3}{3}{2}{1} & $105$ & & & & $109$ & $104$ & & \\ \cline{2-11} & $108$ & \Eroot{1}{2}{3}{4}{4}{3}{2}{1} & $109$ & & $106$ & $110$ & $105$ & & & \\ \cline{1-11} $21$ & $109$ & \Eroot{2}{2}{3}{4}{4}{3}{2}{1} & $108$ & & & $111$ & $107$ & & & \\ \cline{2-11} & $110$ & \Eroot{1}{2}{3}{5}{4}{3}{2}{1} & $111$ & $112$ & & $108$ & & & & \\ \cline{1-11} $22$ & $111$ & \Eroot{2}{2}{3}{5}{4}{3}{2}{1} & $110$ & $113$ & $114$ & $109$ & & & & \\ \cline{2-11} & $112$ & \Eroot{1}{3}{3}{5}{4}{3}{2}{1} & $113$ & $110$ & & & & & & \\ \cline{1-11} $23$ & $113$ & \Eroot{2}{3}{3}{5}{4}{3}{2}{1} & $112$ & $111$ & $115$ & & & & & 
\\ \cline{2-11} & $114$ & \Eroot{2}{2}{4}{5}{4}{3}{2}{1} & & $115$ & $111$ & & & & & \\ \cline{1-11} $24$ & $115$ & \Eroot{2}{3}{4}{5}{4}{3}{2}{1} & & $114$ & $113$ & $116$ & & & & \\ \cline{1-11} $25$ & $116$ & \Eroot{2}{3}{4}{6}{4}{3}{2}{1} & & & & $115$ & $117$ & & & \\ \cline{1-11} $26$ & $117$ & \Eroot{2}{3}{4}{6}{5}{3}{2}{1} & & & & & $116$ & $118$ & & \\ \cline{1-11} $27$ & $118$ & \Eroot{2}{3}{4}{6}{5}{4}{2}{1} & & & & & & $117$ & $119$ & \\ \cline{1-11} $28$ & $119$ & \Eroot{2}{3}{4}{6}{5}{4}{3}{1} & & & & & & & $118$ & $120$ \\ \cline{1-11} $29$ & $120$ & \Eroot{2}{3}{4}{6}{5}{4}{3}{2} & & & & & & & & $119$ \\ \cline{1-11} \end{tabular} \end{table} Then we have the following: \begin{lem} \label{lem:possibility_J_is_E_6} In this setting, if $J$ is of type $E_6$, then $|I \cap J| = 1$. \end{lem} \begin{proof} By the property $N \geq |I \cap J| + 2$ and the $A_{>1}$-freeness of $I$, it follows that $I \cap J$ is either $\{r_2,r_3,r_4,r_5\}$ (of type $D_4$) or the union of irreducible components of type $A_1$. In the former case, we have $\Phi_J^{\perp I} = \emptyset$ (see Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6}), a contradiction. Therefore, $I \cap J$ consists of irreducible components of type $A_1$. Now assume to the contrary that $I \cap J$ is not irreducible. Then, by applying successive local transformations and by using symmetry, we may assume without loss of generality that $r_1 \in I$ (cf.\ the proof of Lemma \ref{lem:J_not_A_N}). Now we have $\Pi^{J,\{r_1\}} = \{\alpha_2,\alpha_4,\alpha_5,\alpha_6,\alpha'\}$, which is the standard labelling of type $A_5$, where $\alpha'$ is the root $\gamma_{44}$ in Table \ref{tab:positive_roots_E_8_3}. Note that $\Pi_{(I \cap J) \smallsetminus \{r_1\}} \subseteq \Pi^{J,\{r_1\}}$.
Now the same argument as in the proof of Lemma \ref{lem:J_not_A_N} implies that the subspace $V'$ spanned by $\Pi^{J,I \cap J} \cup \Pi_{(I \cap J) \smallsetminus \{r_1\}}$ is a proper subspace of the space spanned by $\Pi^{J,\{r_1\}}$, therefore $\dim V' < 5$. This implies that the subspace spanned by $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$, which is the sum of $V'$ and $\mathbb{R}\alpha_1$, has dimension less than $6 = \dim V_J$, contradicting the fact that $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ spans $V_J$ (see Lemma \ref{lem:Psi_is_full}). Hence $I \cap J$ is irreducible, therefore the claim holds. \end{proof} We also give a list of all positive roots of the Coxeter group of type $D_n$ (Table \ref{tab:positive_roots_D_n}) in order to prove the next lemma (and some other results below). The notation is similar to that used above for type $E_8$. For the data for actions of generators on the roots, if the action $r_k \cdot \gamma$ does not appear in the list, then this means that either $r_k$ fixes $\gamma$ (or equivalently, $\gamma$ is orthogonal to $\alpha_k$), or $\gamma = \alpha_k$. Again, these data imply that the list indeed exhausts all the positive roots.
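Although exhaustiveness can be read off from the action data in the tables, it can also be confirmed mechanically. The following Python sketch (illustrative only; the function names are ours) closes the simple roots of a simply-laced diagram under the simple reflections, using the labellings fixed above (for $E_8$, the chain $r_1$--$r_3$--$r_4$--$r_5$--$r_6$--$r_7$--$r_8$ with $r_2$ attached to $r_4$). It recovers the $120$ positive roots of type $E_8$, the count $n(n-1)$ for type $D_n$, and the fact, used in the proof of Lemma \ref{lem:possibility_J_is_E_6}, that no root of the parabolic subsystem of type $E_6$ is orthogonal to all of $\alpha_2,\dots,\alpha_5$.

```python
def simply_laced(n, edges):
    """Adjacency structure of a simply-laced Coxeter graph on vertices 1..n."""
    adj = {i: set() for i in range(1, n + 1)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def pair(v, i, adj):
    """Pairing <v, alpha_i> (up to normalization) of a vector v, written in
    coordinates with respect to the simple roots, with the simple root alpha_i."""
    return 2 * v[i - 1] - sum(v[j - 1] for j in adj[i])

def positive_roots(adj):
    """Close the simple roots under the simple reflections and return the
    positive roots, as coefficient tuples in the simple-root basis."""
    n = len(adj)
    def reflect(v, i):
        w = list(v)
        w[i - 1] -= pair(v, i, adj)
        return tuple(w)
    simple = {tuple(int(j == i) for j in range(n)) for i in range(n)}
    roots, frontier = set(simple), set(simple)
    while frontier:  # breadth-first closure; terminates since the system is finite
        frontier = {reflect(v, i) for v in frontier for i in adj} - roots
        roots |= frontier
    return {v for v in roots if all(c >= 0 for c in v)}

# E_8 with the labelling of the tables: chain r_1-r_3-r_4-r_5-r_6-r_7-r_8,
# and r_2 attached to r_4.
E8 = positive_roots(simply_laced(8, [(1, 3), (3, 4), (4, 5), (5, 6),
                                     (6, 7), (7, 8), (2, 4)]))
assert len(E8) == 120                   # gamma_1, ..., gamma_120
assert (2, 3, 4, 6, 5, 4, 3, 2) in E8   # the highest root gamma_120

# E_6 = parabolic subgroup on {r_1,...,r_6}; no root is orthogonal to the
# whole D_4 = {r_2,...,r_5}, as used in the proof of the E_6 lemma above.
adj6 = simply_laced(6, [(1, 3), (3, 4), (4, 5), (5, 6), (2, 4)])
E6 = positive_roots(adj6)
assert len(E6) == 36
assert all(any(pair(v, i, adj6) != 0 for i in (2, 3, 4, 5)) for v in E6)

# D_n: chain r_1-...-r_{n-2} with both r_{n-1} and r_n attached to r_{n-2};
# the four families of the D_n table give n(n-1) positive roots in total.
def D(n):
    return simply_laced(n, [(i, i + 1) for i in range(1, n - 2)]
                           + [(n - 2, n - 1), (n - 2, n)])
for n in range(4, 9):
    assert len(positive_roots(D(n))) == n * (n - 1)
```

Here the final filter by nonnegative coefficients suffices to isolate the positive roots, since every root of a finite root system is a nonnegative or nonpositive integer combination of the simple roots.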
\begin{table}[hbt] \centering \caption{List of positive roots for Coxeter group of type $D_n$} \label{tab:positive_roots_D_n} \begin{tabular}{|c|c|} \hline roots & actions of generators \\ \hline $\gamma^{(1)}_{i,j} := \sum_{h=i}^{j} \alpha_h$ & $r_{i-1} \cdot \gamma^{(1)}_{i,j} = \gamma^{(1)}_{i-1,j}$ ($i \geq 2$) \\ ($1 \leq i \leq j \leq n-2$) & $r_i \cdot \gamma^{(1)}_{i,j} = \gamma^{(1)}_{i+1,j}$ ($i \leq j-1$) \\ ($\gamma^{(1)}_{i,i} = \alpha_i$) & $r_j \cdot \gamma^{(1)}_{i,j} = \gamma^{(1)}_{i,j-1}$ ($i \leq j-1$) \\ & $r_{j+1} \cdot \gamma^{(1)}_{i,j} = \gamma^{(1)}_{i,j+1}$ ($j \leq n-3$) \\ & $r_{n-1} \cdot \gamma^{(1)}_{i,n-2} = \gamma^{(2)}_i$ \\ & $r_n \cdot \gamma^{(1)}_{i,n-2} = \gamma^{(3)}_i$ \\ \hline $\gamma^{(2)}_i := \sum_{h=i}^{n-1} \alpha_h$ & $r_{i-1} \cdot \gamma^{(2)}_i = \gamma^{(2)}_{i-1}$ ($i \geq 2$) \\ ($1 \leq i \leq n-1$) & $r_i \cdot \gamma^{(2)}_i = \gamma^{(2)}_{i+1}$ ($i \leq n-2$) \\ ($\gamma^{(2)}_{n-1} = \alpha_{n-1}$) & $r_{n-1} \cdot \gamma^{(2)}_i = \gamma^{(1)}_{i,n-2}$ ($i \leq n-2$) \\ & $r_n \cdot \gamma^{(2)}_i = \gamma^{(4)}_{i,n-1}$ ($i \leq n-2$) \\ \hline $\gamma^{(3)}_i := \sum_{h=i}^{n-2} \alpha_h + \alpha_n$ & $r_{i-1} \cdot \gamma^{(3)}_i = \gamma^{(3)}_{i-1}$ ($i \geq 2$) \\ ($1 \leq i \leq n-1$) & $r_i \cdot \gamma^{(3)}_i = \gamma^{(3)}_{i+1}$ ($i \leq n-2$) \\ ($\gamma^{(3)}_{n-1} = \alpha_n$) & $r_n \cdot \gamma^{(3)}_i = \gamma^{(1)}_{i,n-2}$ ($i \leq n-2$) \\ & $r_{n-1} \cdot \gamma^{(3)}_i = \gamma^{(4)}_{i,n-1}$ ($i \leq n-2$) \\ \hline $\gamma^{(4)}_{i,j} := \sum_{h=i}^{j-1} \alpha_h + \sum_{h=j}^{n-2} 2\alpha_h + \alpha_{n-1} + \alpha_n$ & $r_{i-1} \cdot \gamma^{(4)}_{i,j} = \gamma^{(4)}_{i-1,j}$ ($i \geq 2$) \\ ($1 \leq i < j \leq n-1$) & $r_i \cdot \gamma^{(4)}_{i,j} = \gamma^{(4)}_{i+1,j}$ ($i \leq j-2$) \\ & $r_{j-1} \cdot \gamma^{(4)}_{i,j} = \gamma^{(4)}_{i,j-1}$ ($i \leq j-2$) \\ & $r_j \cdot \gamma^{(4)}_{i,j} = \gamma^{(4)}_{i,j+1}$ ($j \leq n-2$) \\ & $r_{n-1} \cdot \gamma^{(4)}_{i,n-1} = 
\gamma^{(3)}_i$ \\ & $r_n \cdot \gamma^{(4)}_{i,n-1} = \gamma^{(2)}_i$ \\ \hline \end{tabular} \end{table} Then we have the following: \begin{lem} \label{lem:possibility_J_is_D_N} In this setting, suppose that $J$ is of type $D_N$. \begin{enumerate} \item \label{item:lem_possibility_J_is_D_N_case_1} If $I \cap J$ has an irreducible component of type $D_k$ with $k \geq 4$ and $N - k$ is odd, then we have $|I \cap J| \leq k + (N-k-3)/2$. \item \label{item:lem_possibility_J_is_D_N_case_2} If $N$ is odd, $I \cap J$ has no irreducible component of type $D_k$ with $k \geq 4$, and $\{r_{N-1},r_N\} \not\subseteq I$, then we have $|I \cap J| \leq (N-3)/2$. \item \label{item:lem_possibility_J_is_D_N_case_3} If $N$ is odd, $I \cap J$ has no irreducible component of type $D_k$ with $k \geq 4$, and $\{r_{N-1},r_N\} \subseteq I$, then we have $|I \cap J| \leq (N-1)/2$. \end{enumerate} \end{lem} \begin{proof} Assume to the contrary that the hypothesis of one of the three cases in the statement is satisfied but the inequality in the conclusion does not hold. We show that $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ cannot span $V_J$, which yields a contradiction and therefore concludes the proof. First, recall the property $N \geq |I \cap J| + 2$ and the $A_{>1}$-freeness of $I$. Then, in the case \ref{item:lem_possibility_J_is_D_N_case_1}, by applying successive local transformations, we may assume without loss of generality that $I \cap J$ consists of elements $r_{2j}$ with $1 \leq j \leq (N-k-1)/2$ and $r_j$ with $N-k+1 \leq j \leq N$. Similarly, in the case \ref{item:lem_possibility_J_is_D_N_case_2} (respectively, the case \ref{item:lem_possibility_J_is_D_N_case_3}), by applying successive local transformations and using symmetry, we may assume without loss of generality that $I \cap J$ consists of elements $r_{2j}$ with $1 \leq j \leq (N-1)/2$ (respectively, $r_{2j}$ with $1 \leq j \leq (N-1)/2$ and $r_N$).
In any case, we have $\Phi_J^{\perp I} \subseteq \Phi_{J \smallsetminus \{r_1\}}$ (see Table \ref{tab:positive_roots_D_n}), therefore the subspace spanned by $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ is contained in $V_{J \smallsetminus \{r_1\}}$. Hence $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ cannot span $V_J$, concluding the proof. \end{proof} We divide the following argument into two cases. \subsubsection{Case $w \cdot \Pi_J \not\subseteq \Phi^+$} \label{sec:proof_special_first_case_subcase_1} In order to prove that $w \cdot \Pi_J \subseteq \Phi^+$, we assume to the contrary that $w \cdot \Pi_J \not\subseteq \Phi^+$ and deduce a contradiction. In this setting, we construct a decomposition of $w$ in the following manner. Take an element $s \in J$ with $w \cdot \alpha_s \in \Phi^-$. By Lemma \ref{lem:rightdivisor}, the element $w_{x_I}^s$ is a right divisor of $w$. This implies that $\Phi^{\perp I}[w_{x_I}^s] \subseteq \Phi^{\perp I}[w] = \emptyset$ (see Lemma 2.2 of \cite{Nui11} for the first inclusion), therefore we have $w_{x_I}^s \in Y_{y,x_I}$ where we put $y := \varphi(x_I,s) \in S^{(\Lambda)}$. By Proposition \ref{prop:charofBphi}, we have $y \neq x_I$. This element $w_{x_I}^s$ induces a local transformation $x_I \mapsto y$. Now if $w(w_{x_I}^s)^{-1} \cdot \Pi_J \not\subseteq \Phi^+$, then we can similarly factor out from $w(w_{x_I}^s)^{-1}$ a right divisor of the form $w_y^t \in Y_{\varphi(y,t),y}$ with $t \in J$. Iterating this process, we finally obtain a decomposition of $w$ of the form $w = u w_{y_{n-1}}^{s_{n-1}} \cdots w_{y_1}^{s_1} w_{y_0}^{s_0}$ satisfying that $n \geq 1$, $u \in Y_{x_I,z}$ with $z \in S^{(\Lambda)}$, $w_{y_i}^{s_i} \in Y_{y_{i+1},y_i} \cap W_J$ for every $0 \leq i \leq n-1$ where we put $y_0 = x_I$ and $y_n = z$, and $u \cdot \Pi_J \subseteq \Phi^+$. Put $u' := w_{y_{n-1}}^{s_{n-1}} \cdots w_{y_1}^{s_1} w_{y_0}^{s_0} \neq 1$.
By the construction, the action of $u' \in Y_{z,x_I} \cap W_J$ induces (as the composition of successive local transformations) an isomorphism $\sigma \colon I \cap J \to [z] \cap J$, $t \mapsto u' \ast t$, while $u'$ fixes every element of $\Pi_{I \smallsetminus J}$. Now $\sigma$ is not the identity mapping; otherwise, we have $z = x_I$ and $1 \neq u' \in Y_{x_I,x_I}$, while $u'$ has finite order since $|W_J| < \infty$, contradicting Proposition \ref{prop:Yistorsionfree}. On the other hand, we have $u \cdot \Phi_J = wu'{}^{-1} \cdot \Phi_J = w \cdot \Phi_J = \Phi_J$, therefore $u \cdot \Phi_J^+ = \Phi_J^+$ since $u \cdot \Pi_J \subseteq \Phi^+$. This implies that $u \cdot \Pi_J = \Pi_J$, therefore the action of $u$ defines an automorphism $\tau$ of $J$. Since $w = u u' \in Y_I$, the composite mapping $\tau \circ \sigma$ is the identity mapping on $I \cap J$, while $\sigma$ is not the identity, as shown above. As a consequence, we have $\tau^{-1}|_{I \cap J} = \sigma$ and hence $\tau^{-1}$ is a nontrivial automorphism of $J$, therefore the possibilities of the type of $J$ are $D_N$, $E_6$ and $F_4$ (recall that $J$ is neither of type $A_N$ nor of type $I_2(m)$). \begin{lem} \label{lem:proof_special_first_case_subcase_1_not_F_4} In this setting, $J$ is not of type $F_4$. \end{lem} \begin{proof} Assume to the contrary that $J = \{r_1,r_2,r_3,r_4\}$ is of type $F_4$. In this case, neither $r_1$ nor $r_2$ is conjugate in $W_J$ to $r_3$ or $r_4$, by the well-known fact that the conjugacy classes for the simple reflections $r_i$ are determined by the connected components of the graph obtained from the Coxeter graph by removing all edges whose labels are not odd. Therefore, the mapping $\tau^{-1}|_{I \cap J} = \sigma$ induced by the action of $u' \in W_J$ cannot map an element $r_i$ ($1 \leq i \leq 4$) to $r_{5-i}$. This contradicts the fact that $\tau^{-1}$ is a nontrivial automorphism of $J$. Hence the claim holds.
\end{proof} From now, we consider the remaining case that $J$ is either of type $D_N$ with $4 \leq N < \infty$ or of type $E_6$. Take a standard decomposition $\mathcal{D} = \omega_{\ell(\mathcal{D})-1} \cdots \omega_1\omega_0$ of $u \in Y_{x_I,z}$ with respect to $J$ (see Proposition \ref{prop:standard_decomposition_existence}). Note that $J$ is irreducible and $J \not\subseteq [z]$. This implies that, if $0 \leq i \leq \ell(\mathcal{D})-1$ and $\omega_j$ is a narrow transformation for every $0 \leq j \leq i$, then it follows by induction on $0 \leq j \leq i$ that the support of $\omega_j$ is apart from $J$, the product $\omega_j \cdots \omega_1\omega_0$ fixes $\Pi_J$ pointwise, $[y^{(j+1)}] \cap J = [z] \cap J$, and $[y^{(j+1)}] \smallsetminus J$ is not adjacent to $J$ (note that $[z] \smallsetminus J = I \smallsetminus J$ is not adjacent to $J$). By these properties, since $u$ does not fix $\Pi_J$ pointwise, $\mathcal{D}$ contains at least one wide transformation. Let $\omega := \omega_i$ be the first (from right) wide transformation in $\mathcal{D}$, and write $y = y^{(i)}(\mathcal{D})$, $t = t^{(i)}(\mathcal{D})$ and $K = K^{(i)}(\mathcal{D})$ for simplicity. Note that $J^{(i)}(\mathcal{D}) = J$ by the above argument. Note also that $\Pi^{K,[y] \cap K} \subseteq \Pi^{[y]}$, since $[y] \smallsetminus K$ is not adjacent to $K$ by the definition of $K$. Now the action of $\omega_{i-1} \cdots \omega_1\omega_0 u' \in Y_{y,x_I}$ induces an isomorphism $\Pi^I \to \Pi^{[y]}$ which maps $\Pi^{J,I \cap J}$ onto $\Pi^{J,[y] \cap J} = \Pi^{J,[z] \cap J}$. Hence we have the following (recall that $\Pi^{J,I \cap J}$ is the union of some irreducible components of $\Pi^I$): \begin{lem} \label{lem:proof_special_first_case_subcase_1_local_irreducible_components} In this setting, $\Pi^{J,[y] \cap J}$ is isomorphic to $\Pi^{J,I \cap J}$ and is the union of some irreducible components of $\Pi^{[y]}$. 
In particular, each element of $\Pi^{J,[y] \cap J}$ is orthogonal to any element of $\Pi^{K,[y] \cap K} \smallsetminus \Phi_J$. \end{lem} Now note that $K = ([y] \cup J)_{\sim t}$ is irreducible and of finite type, and $t$ is adjacent to $[y]$. Moreover, by Lemma \ref{lem:another_decomposition_Y_no_loop}, the element $\omega = \omega_{y,J}^{t}$ does not fix $\Pi_{K \smallsetminus \{t\}}$ pointwise. By these properties and symmetry, we may assume without loss of generality that the possibilities of $K$ are as follows: \begin{enumerate} \item \label{item:proof_special_first_case_subcase_1_J_E_6} $J$ is of type $E_6$, and; \begin{enumerate} \item \label{item:proof_special_first_case_subcase_1_J_E_6_K_E_8} $K = J \cup \{t,t'\}$ is of type $E_8$ where $t$ is adjacent to $r_6$ and $t'$, and $t' \in [y]$, \item \label{item:proof_special_first_case_subcase_1_J_E_6_K_E_7} $K = J \cup \{t\}$ is of type $E_7$ where $t$ is adjacent to $r_6$, and $r_6 \in [y]$, \end{enumerate} \item \label{item:proof_special_first_case_subcase_1_J_D_7} $J$ is of type $D_7$, $K = J \cup \{t\}$ is of type $E_8$ where $t$ is adjacent to $r_7$, and $r_7 \in [y]$, \item \label{item:proof_special_first_case_subcase_1_J_D_5} $J$ is of type $D_5$, and; \begin{enumerate} \item \label{item:proof_special_first_case_subcase_1_J_D_5_K_E_7} $K = J \cup \{t,t'\}$ is of type $E_7$ where $t$ is adjacent to $r_5$ and $t'$, and $t' \in [y]$, \item \label{item:proof_special_first_case_subcase_1_J_D_5_K_E_6} $K = J \cup \{t\}$ is of type $E_6$ where $t$ is adjacent to $r_5$, and $r_5 \in [y]$, \end{enumerate} \item \label{item:proof_special_first_case_subcase_1_J_D_N} $J$ is of type $D_N$, $K = J \cup \{t\}$ is of type $D_{N+1}$ where $t$ is adjacent to $r_1$, and $r_1 \in [y]$. \end{enumerate} We consider Case \ref{item:proof_special_first_case_subcase_1_J_E_6_K_E_8}. We have $|[y] \cap J| = |I \cap J| = 1$ by Lemma \ref{lem:possibility_J_is_E_6}. 
Now by Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6} (where $r_7 = t$ and $r_8 = t'$), we have $\langle \beta,\beta' \rangle \neq 0$ for some $\beta \in \Pi^{J,[y] \cap J}$ and $\beta' \in \Pi^{K,[y] \cap K} \smallsetminus \Phi_J$ (namely, $(\beta,\beta') = (\alpha_4,\gamma_{84})$ when $[y] \cap J = \{r_1\}$; $(\beta,\beta') = (\gamma_{16},\gamma_{74})$ when $[y] \cap J = \{r_3\}$; and $(\beta,\beta') = (\alpha_1,\gamma_{74})$ when $[y] \cap J = \{r_j\}$ with $j \in \{2,4,5,6\}$, where the roots $\gamma_k$ are as in Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6}). This contradicts Lemma \ref{lem:proof_special_first_case_subcase_1_local_irreducible_components}. We consider Case \ref{item:proof_special_first_case_subcase_1_J_E_6_K_E_7}. We have $|[y] \cap J| = |I \cap J| = 1$ by Lemma \ref{lem:possibility_J_is_E_6}, hence $[y] \cap J = \{r_6\}$. Now we have $\alpha_5 + \alpha_6 + \alpha_t \in \Pi^{K,[y] \cap K} \smallsetminus \Phi_J$, $\alpha_4 \in \Pi^{J,[y] \cap J}$, and these two roots are not orthogonal, contradicting Lemma \ref{lem:proof_special_first_case_subcase_1_local_irreducible_components}. We consider Case \ref{item:proof_special_first_case_subcase_1_J_D_7}. Note that $N = 7 \geq |I \cap J| + 2 = |[y] \cap J| + 2$, therefore $|[y] \cap J| \leq 5$. By Lemma \ref{lem:possibility_J_is_D_N} and $A_{>1}$-freeness of $I$, it follows that the possibilities of $[y] \cap J$ are as listed in Table \ref{tab:proof_special_first_case_subcase_1_J_D_7}, where we put $(r'_1,r'_2,r'_3,r'_4,r'_5,r'_6,r'_7,r'_8) = (t,r_6,r_7,r_5,r_4,r_3,r_2,r_1)$ (hence $K = \{r'_1,\dots,r'_8\}$ is the standard labelling of type $E_8$). 
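The non-orthogonality invoked in Case \ref{item:proof_special_first_case_subcase_1_J_E_6_K_E_7} above admits a quick mechanical check. The following sketch is an illustration only, not part of the proof; it assumes the standard labelling of $K$ of type $E_7$ in which the chain is $r_1$, $r_3$, $r_4$, $r_5$, $r_6$, $t$ with $r_2$ attached to $r_4$, and the unit-norm convention $\langle \alpha_r, \alpha_{r'} \rangle = -\cos(\pi/m(r,r'))$. It verifies that $\alpha_5 + \alpha_6 + \alpha_t$ is a unit vector orthogonal to $\alpha_6$ but not orthogonal to $\alpha_4$:

```python
import math

# A sketch, not part of the proof.  Assumption: standard labelling of K of
# type E_7, with chain r_1 - r_3 - r_4 - r_5 - r_6 - t and r_2 attached to
# r_4.  Indices 0..5 stand for r_1,...,r_6 and index 6 stands for t.
edges = {(0, 2), (2, 3), (3, 4), (4, 5), (1, 3), (5, 6)}

def gram(i, j):
    """<alpha_i, alpha_j> in the unit-norm convention -cos(pi/m)."""
    if i == j:
        return 1.0
    return -math.cos(math.pi / 3) if (min(i, j), max(i, j)) in edges else 0.0

def inner(u, v):
    """Bilinear form applied to coefficient vectors over the simple roots."""
    return sum(u[i] * v[j] * gram(i, j) for i in range(7) for j in range(7))

def simple(i):
    """Coefficient vector of the i-th simple root (index 6 is alpha_t)."""
    return [1.0 if k == i else 0.0 for k in range(7)]

beta = simple(3)                    # alpha_4
beta_prime = [0, 0, 0, 0, 1, 1, 1]  # alpha_5 + alpha_6 + alpha_t

assert abs(inner(beta_prime, simple(5))) < 1e-12         # beta' orthogonal to alpha_6
assert abs(inner(beta, simple(5))) < 1e-12               # so is alpha_4
assert abs(inner(beta_prime, beta_prime) - 1.0) < 1e-12  # beta' has unit norm
assert abs(inner(beta, beta_prime)) > 1e-12              # but <beta, beta'> != 0
```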
Now by Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6}, we have $\langle \beta,\beta' \rangle \neq 0$ for some $\beta \in \Pi^{J,[y] \cap J}$ and $\beta' \in \Pi^{K,[y] \cap K} \smallsetminus \Phi_J$ as listed in Table \ref{tab:proof_special_first_case_subcase_1_J_D_7}, where we write $\alpha'_j = \alpha_{r'_j}$ and the roots $\gamma_k$ are as in Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6}. This contradicts Lemma \ref{lem:proof_special_first_case_subcase_1_local_irreducible_components}. \begin{table}[hbt] \centering \caption{List of roots for Case \ref{item:proof_special_first_case_subcase_1_J_D_7}} \label{tab:proof_special_first_case_subcase_1_J_D_7} \begin{tabular}{|c|c|c|} \hline $[y] \cap J$ & $\beta$ & $\beta'$ \\ \hline $r'_3 \in [y] \cap J \subseteq \{r'_3,r'_6,r'_7,r'_8\}$ & $\alpha'_2$ & $\gamma_{16}$ \\ \hline $\{r'_3,r'_5\}$ & $\alpha'_2$ & $\gamma_{31}$ \\ \hline $\{r'_2,r'_3\} \subseteq [y] \cap J \subseteq \{r'_2,r'_3,r'_4,r'_5,r'_6\}$ & $\alpha'_8$ & $\gamma_{97}$ \\ \cline{1-2} $\{r'_2,r'_3,r'_7\}$ & $\gamma_{22}$ & \\ \hline $\{r'_2,r'_3,r'_8\}$ & $\alpha'_6$ & $\gamma_{104}$ \\ \hline \end{tabular} \end{table} We consider Case \ref{item:proof_special_first_case_subcase_1_J_D_5_K_E_7}. Note that $N = 5 \geq |I \cap J| + 2 = |[y] \cap J| + 2$, therefore $|[y] \cap J| \leq 3$. By $A_{>1}$-freeness of $I$, every irreducible component of $[y] \cap J$ is of type $A_1$. Now by Lemma \ref{lem:possibility_J_is_D_N}, the possibilities of $[y] \cap J$ are as listed in Table \ref{tab:proof_special_first_case_subcase_1_J_D_5_K_E_7}, where we put $(r'_1,r'_2,r'_3,r'_4,r'_5,r'_6,r'_7) = (r_1,r_4,r_2,r_3,r_5,t,t')$ (hence $K = \{r'_1,\dots,r'_7\}$ is the standard labelling of type $E_7$). 
Now by Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6}, we have $\langle \beta,\beta' \rangle \neq 0$ for some $\beta \in \Pi^{J,[y] \cap J}$ and $\beta' \in \Pi^{K,[y] \cap K} \smallsetminus \Phi_J$ as listed in Table \ref{tab:proof_special_first_case_subcase_1_J_D_5_K_E_7}, where we write $\alpha'_j = \alpha_{r'_j}$ and the roots $\gamma_k$ are as in Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6}. This contradicts Lemma \ref{lem:proof_special_first_case_subcase_1_local_irreducible_components}. \begin{table}[hbt] \centering \caption{List of roots for Case \ref{item:proof_special_first_case_subcase_1_J_D_5_K_E_7}} \label{tab:proof_special_first_case_subcase_1_J_D_5_K_E_7} \begin{tabular}{|c|c|c|} \hline $[y] \cap J$ & $\beta$ & $\beta'$ \\ \hline $[y] \cap J \subseteq \{r'_2,r'_4,r'_5\}$ & $\alpha'_1$ & $\gamma_{61}$ \\ \cline{1-2} $\{r'_3\}$ & $\gamma_{16}$ & \\ \hline $\{r'_1\}$ & $\alpha'_4$ & $\gamma_{71}$ \\ \hline \end{tabular} \end{table} We consider Case \ref{item:proof_special_first_case_subcase_1_J_D_5_K_E_6}. For the same reason as in Case \ref{item:proof_special_first_case_subcase_1_J_D_5_K_E_7}, every irreducible component of $[y] \cap J$ is of type $A_1$. Now by Lemma \ref{lem:possibility_J_is_D_N}, there are only two possibilities for $[y] \cap J$: $\{r_5\}$ and $\{r_4,r_5\}$. In the first case $[y] \cap J = \{r_5\}$, we have $\alpha_2 \in \Pi^{J,[y] \cap J}$, $\alpha_3 + \alpha_5 + \alpha_t \in \Pi^{K,[y] \cap K} \smallsetminus \Phi_J$, and these two roots are not orthogonal, contradicting Lemma \ref{lem:proof_special_first_case_subcase_1_local_irreducible_components}. Hence we consider the second case $[y] \cap J = \{r_4,r_5\}$. In this case, the action of the first wide transformation $\omega$ in $\mathcal{D}$ maps the elements $r_1$, $r_2$, $r_3$, $r_4$ and $r_5$ to $t$, $r_5$, $r_3$, $r_2$ and $r_4$, respectively (note that $\{t,r_5,r_3,r_2,r_4\}$ is the standard labelling of type $D_5$).
Now, by an argument similar to the above, the possibility of the second wide transformation $\omega_{i'}$ in $\mathcal{D}$ (if it exists) is as in Case \ref{item:proof_special_first_case_subcase_1_J_D_5_K_E_6}, where $t'' := t^{(i')}(\mathcal{D})$ is adjacent to either $r_2$ or $r_4$ (note that Case \ref{item:proof_special_first_case_subcase_1_J_D_5_K_E_7} cannot occur as discussed above, while Case \ref{item:proof_special_first_case_subcase_1_J_D_N} cannot occur by the shape of $J$ and the property $r_1 \not\in [y] \cap J$). This implies that the action of $\omega_{i'}$ either maps the elements $t$, $r_5$, $r_3$, $r_4$ and $r_2$ to $t''$, $r_2$, $r_3$, $r_5$ and $r_4$, respectively (forming a subset of type $D_5$ with the ordering being the standard labelling), or maps the elements $t$, $r_5$, $r_3$, $r_2$ and $r_4$ to $t''$, $r_4$, $r_3$, $r_5$ and $r_2$, respectively (forming a subset of type $D_5$ with the ordering being the standard labelling). By iterating the same argument, it follows that the sequence of elements $(r_2,r_3,r_4,r_5)$ is mapped by successive wide transformations in $\mathcal{D}$ to one of the following three sequences: $(r_2,r_3,r_4,r_5)$, $(r_5,r_3,r_2,r_4)$ and $(r_4,r_3,r_5,r_2)$. Hence $u$ itself must map $(r_2,r_3,r_4,r_5)$ to one of the above three sequences; while the action of $u$ induces the nontrivial automorphism $\tau$ of $J$, which maps $(r_1,r_2,r_3,r_4,r_5)$ to $(r_1,r_2,r_3,r_5,r_4)$. This is a contradiction. Finally, we consider Case \ref{item:proof_special_first_case_subcase_1_J_D_N}. First we have the following lemma: \begin{lem} \label{lem:proof_special_first_case_subcase_1_J_D_N} In this setting, suppose further that there exists an integer $k \geq 1$ satisfying that $2k \leq N - 3$, $r_{2j - 1} \in [y]$ and $r_{2j} \not\in [y]$ for every $1 \leq j \leq k$, and $r_{2k + 1} \not\in [y]$.
Then there exist a root $\beta \in \Pi^{J,[y] \cap J}$ and a root $\beta' \in \Pi^{K,[y] \cap K} \smallsetminus \Phi_J$ with $\langle \beta,\beta' \rangle \neq 0$. \end{lem} \begin{proof} Put $J' := \{r_j \mid 2k+1 \leq j \leq N\}$. First, we have $\beta' := \alpha_t + \sum_{j=1}^{2k} \alpha_j \in \Pi^{K,[y] \cap K} \smallsetminus \Phi_J$ in this case. On the other hand, $\Pi^{J,[y] \cap J} \smallsetminus \Phi_{J'}$ consists of $k$ roots $\gamma^{(4)}_{2j-1,2j}$ with $1 \leq j \leq k$ (see Table \ref{tab:positive_roots_D_n} for the notation), while $\Pi_{[y] \cap J} \smallsetminus \Phi_{J'}$ consists of $k$ roots $\alpha_{2j-1}$ with $1 \leq j \leq k$. Hence $|(\Pi^{J,[y] \cap J} \cup \Pi_{[y] \cap J}) \smallsetminus \Phi_{J'}| = 2k$. Since $\Pi^{J,[y] \cap J} \cup \Pi_{[y] \cap J}$ is a basis of the space $V_J$ of dimension $N$, it follows that the subset $(\Pi^{J,[y] \cap J} \cup \Pi_{[y] \cap J}) \cap \Phi_{J'}$ spans a subspace of dimension $N - 2k = |J'|$. This implies that $(\Pi^{J,[y] \cap J} \cup \Pi_{[y] \cap J}) \cap \Phi_{J'} \not\subseteq \Phi_{J' \smallsetminus \{r_{2k+1}\}}$, therefore (since $\alpha_{2k+1} \not\in \Pi_{[y] \cap J}$) we have $\Pi^{J,[y] \cap J} \cap \Phi_{J'} \not\subseteq \Phi_{J' \smallsetminus \{r_{2k+1}\}}$, namely there exists a root $\beta \in \Pi^{J,[y] \cap J} \cap \Phi_{J'}$ which has non-zero coefficient of $\alpha_{2k+1}$. These $\beta$ and $\beta'$ satisfy $\langle \beta,\beta' \rangle \neq 0$ by the construction, concluding the proof. \end{proof} By Lemma \ref{lem:proof_special_first_case_subcase_1_J_D_N} and Lemma \ref{lem:proof_special_first_case_subcase_1_local_irreducible_components}, the hypothesis of Lemma \ref{lem:proof_special_first_case_subcase_1_J_D_N} should not hold. 
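To illustrate Lemma \ref{lem:proof_special_first_case_subcase_1_J_D_N}, its smallest instance $N = 5$, $k = 1$ can be verified numerically. The sketch below is an illustration only, not part of the proof; it assumes the standard realization of $J$ of type $D_5$ in $\mathbb{R}^6$ (chain $r_1$--$r_2$--$r_3$ with the fork $r_4$, $r_5$ at $r_3$), with an extra coordinate $e_0$ realizing $\alpha_t$, so that $K = J \cup \{t\}$ is of type $D_6$, and it takes $\beta = \alpha_3$, which is a canonical generator orthogonal to $\alpha_1$ in this instance:

```python
# A numerical illustration, not part of the proof.  Assumptions: N = 5, k = 1,
# standard realization of D_5 in R^6, alpha_t = e_0 - e_1 (t adjacent to r_1),
# so that K = J + {t} is of type D_6.
e = lambda i: [1.0 if k == i else 0.0 for k in range(6)]

def add(*vs):
    return [sum(cs) for cs in zip(*vs)]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

alpha_t = sub(e(0), e(1))
alpha = {1: sub(e(1), e(2)), 2: sub(e(2), e(3)), 3: sub(e(3), e(4)),
         4: sub(e(4), e(5)), 5: add(e(4), e(5))}

# Hypothesis of the lemma with k = 1: r_1 lies in [y] while r_2, r_3 do not.
beta_prime = add(alpha_t, alpha[1], alpha[2])
assert beta_prime == sub(e(0), e(3))    # beta' = e_0 - e_3, a root of D_6

# beta = alpha_3 lies in Phi_{J'} for J' = {r_3, r_4, r_5} and has non-zero
# coefficient of alpha_{2k+1} = alpha_3 (taking [y] cap J = {r_1}).
beta = alpha[3]

assert dot(beta_prime, alpha[1]) == 0   # beta' is orthogonal to [y] cap K = {r_1}
assert dot(beta, alpha[1]) == 0         # so is beta
assert dot(beta, beta_prime) != 0       # the non-orthogonality of the lemma
```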
By this fact, $A_{>1}$-freeness of $I$ and the property $N \geq |I \cap J| + 2 = |[y] \cap J| + 2$, it follows that the possibilities of $[y] \cap J$ are as follows (up to the symmetry $r_{N-1} \leftrightarrow r_N$): (I) $[y] \cap J = J \smallsetminus \{r_{2j} \mid 1 \leq j \leq k\}$ for an integer $k$ with $2 \leq k \leq (N-2)/2$ and $2k \neq N-3$; (II) $N$ is odd and $[y] \cap J = \{r_{2j-1} \mid 1 \leq j \leq (N-1)/2\}$; (III) $N$ is even and $[y] \cap J = \{r_{2j-1} \mid 1 \leq j \leq (N-2)/2\}$; (IV) $N$ is even and $[y] \cap J = \{r_{2j-1} \mid 1 \leq j \leq N/2\}$. For Case (I), by the shape of $J$ and $[y] \cap J$, it follows that $I \cap J = [y] \cap J$, and each local transformation can permute the irreducible components of $I \cap J$ containing neither $r_{N-1}$ nor $r_N$ but it fixes pointwise the irreducible component(s) of $I \cap J$ containing $r_{N-1}$ or $r_N$. This contradicts the fact that $\sigma = \tau^{-1}|_{I \cap J}$ for a nontrivial automorphism $\tau^{-1}$ of $J$ (note that $\tau^{-1}$ exchanges $r_{N-1}$ and $r_N$). Case (II) contradicts Lemma \ref{lem:possibility_J_is_D_N}(\ref{item:lem_possibility_J_is_D_N_case_2}). For Case (III), the roots $\alpha_{N-1} \in \Pi^{J,[y] \cap J}$ and $\alpha_t + \sum_{j=1}^{N-2} \alpha_j \in \Pi^{K,[y] \cap K} \smallsetminus \Phi_J$ are not orthogonal, contradicting Lemma \ref{lem:proof_special_first_case_subcase_1_local_irreducible_components}. Finally, for the remaining case, i.e., Case (IV), by the shape of $J$ and $[y] \cap J$, it follows that $I \cap J = [y] \cap J$ and each local transformation leaves the set $I \cap J$ invariant. By this result and the property that $\sigma = \tau^{-1}|_{I \cap J}$ for a nontrivial automorphism $\tau^{-1}$ of $J$, the only possibility is that $N = 4$ and $[y] \cap J = I \cap J = \{r_1,r_3\}$, and $\sigma$ exchanges $r_1$ and $r_3$.
Now we arrange the standard decomposition $\mathcal{D}$ of $u$ as $u = \omega''_{\ell} \omega'_{\ell-1} \omega''_{\ell-1} \cdots \omega'_2 \omega''_2 \omega'_1 \omega''_1$, where each $\omega'_j$ is a wide transformation and each $\omega''_j$ is a (possibly empty) product of narrow transformations. Let each wide transformation $\omega'_j$ belong to $Y_{z'_j,z_j}$ with $z_j,z'_j \in S^{(\Lambda)}$. In particular, we have $\omega'_1 = \omega$ and $z_1 = y$. Now we give the following lemma: \begin{lem} \label{lem:proof_special_first_case_subcase_1_J_D_N_case_IV} In this setting, the following properties hold for every $1 \leq j \leq \ell - 1$: The action of the element $u_j := \omega''_j \omega'_{j-1} \omega''_{j-1} \cdots \omega'_1 \omega''_1$ maps $(r_1,r_2,r_3,r_4)$ to $(r_1,r_2,r_3,r_4)$ when $j$ is odd and to $(r_1,r_2,r_4,r_3)$ when $j$ is even; the subsets $J$ and $[z_j] \smallsetminus J$ are not adjacent; the support of $\omega'_j$ is as in Case \ref{item:proof_special_first_case_subcase_1_J_D_N} above, with $t$ replaced by some element $t_j \in S$; and $\omega'_j$ maps $(r_1,r_2,r_3,r_4)$ to $(r_1,r_2,r_4,r_3)$. \end{lem} \begin{proof} We use induction on $j$. By the definition of narrow transformations, the first and the second parts of the claim hold obviously when $j = 1$ and follow from the induction hypothesis when $j > 1$. In particular, we have $u_j \cdot \Pi_J = \Pi_J$. Put $(h,h') := (3,4)$ when $j$ is odd and $(h,h') := (4,3)$ when $j$ is even. Then we have $[z_j] \cap J = \{r_1,r_h\}$. Now, by using the above argument, it follows that the support of $\omega'_j$ is of the form $\{r_1,r_2,r_3,r_4,t_j\}$ which is the standard labelling of type $D_5$, where $t_j$ is adjacent to one of the two elements of $[z_j] \cap J$. We show that $t_j$ is adjacent to $r_1$, which already holds when $j = 1$ (note that $t_j = t$ when $j = 1$). Suppose $j > 1$ and assume, to the contrary, that $t_j$ is adjacent to $r_h$.
In this case, $t_j$ is apart from $[z_j] \smallsetminus \{r_h\}$. On the other hand, we have $[z'_{j-1}] \cap J = \{r_1,r_h\}$, the subsets $[z'_{j-1}] \smallsetminus J$ and $J$ are not adjacent, and the support of each narrow transformation in $\omega''_j$ is apart from $J$. Moreover, by the induction hypothesis, we have $[z_{j-1}] \cap J = \{r_1,r_{h'}\}$ and the action of $\omega'_{j-1}$ maps $(r_1,r_2,r_h,r_{h'})$ to $(r_1,r_2,r_{h'},r_h)$ while it fixes every element of $[z_{j-1}] \smallsetminus J$. This implies that $\omega''_j \in Y_{z'',z_{j-1}}$ for the element $z'' \in S^{(\Lambda)}$ obtained from $z_j$ by replacing $r_h$ with $r_{h'}$. Now we have $\alpha_{t_j} \in \Pi^{[z'']}$ since $t_j$ is not adjacent to $[z''] = ([z_j] \smallsetminus \{r_h\}) \cup \{r_{h'}\}$, therefore $\beta' := (\omega''_j)^{-1} \cdot \alpha_{t_j} \in \Pi^{[z_{j-1}]}$. This root belongs to $\Phi_{S \smallsetminus J}$ and has non-zero coefficient of $\alpha_{t_j}$, since the support of each narrow transformation in $\omega''_j$ is not adjacent to $J$ and hence does not contain $t_j$. Therefore, the roots $\beta' \in \Pi^{[z_{j-1}]} \smallsetminus \Pi^{J,[z_{j-1}] \cap J}$ and $\alpha_1 + 2\alpha_2 + \alpha_3 + \alpha_4 \in \Pi^{J,[z_{j-1}] \cap J}$ are not orthogonal. This contradicts the fact that $\Pi^{J,[y] \cap J}$ is the union of some irreducible components of $\Pi^{[y]}$ (see Lemma \ref{lem:proof_special_first_case_subcase_1_local_irreducible_components}) and the isomorphism $\Pi^{[y]} \to \Pi^{[z_{j-1}]}$ induced by the action of $\omega''_{j-1}\omega'_{j-2}\omega''_{j-2} \cdots \omega''_2\omega'_1$ maps $\Pi^{J,[y] \cap J}$ to $\Pi^{J,[z_{j-1}] \cap J}$ (since the action of this element leaves the set $\Pi_J$ invariant). This contradiction proves that $t_j$ is adjacent to $r_1$, therefore the third part of the claim holds. Finally, the fourth part of the claim follows immediately from the third part.
Hence the proof of Lemma \ref{lem:proof_special_first_case_subcase_1_J_D_N_case_IV} is concluded. \end{proof} By Lemma \ref{lem:proof_special_first_case_subcase_1_J_D_N_case_IV}, the action of the element $\omega'_{\ell-1} \omega''_{\ell-1} \cdots \omega'_2 \omega''_2 \omega'_1 \omega''_1$, hence of $u = \omega''_{\ell}\omega'_{\ell-1}u_{\ell-1}$, maps the elements $(r_1,r_2,r_3,r_4)$ to either $(r_1,r_2,r_3,r_4)$ or $(r_1,r_2,r_4,r_3)$. This contradicts the above-mentioned fact that $\sigma$ exchanges $r_1$ and $r_3$. Summarizing, we have derived a contradiction in each of the six possible cases, Cases \ref{item:proof_special_first_case_subcase_1_J_E_6_K_E_8}--\ref{item:proof_special_first_case_subcase_1_J_D_N}. Hence we have proven that the assumption $w \cdot \Pi_J \not\subseteq \Phi^+$ leads to a contradiction, as desired. \subsubsection{Case $w \cdot \Pi_J \subseteq \Phi^+$} \label{sec:proof_special_first_case_subcase_2} By the result of Section \ref{sec:proof_special_first_case_subcase_1}, we have $w \cdot \Pi_J \subseteq \Phi^+$. Since $w \cdot \Phi_J = \Phi_J$ by Lemma \ref{lem:property_w}, it follows that $w \cdot \Phi_J^+ \subseteq \Phi_J^+$, therefore $w \cdot \Phi_J^+ = \Phi_J^+$ (note that $|\Phi_J| < \infty$). Hence the action of $w$ defines an automorphism $\tau$ of $J$ (in particular, $w \cdot \Pi_J = \Pi_J$). To show that $\tau$ is the identity mapping (which implies the claim that $w$ fixes $\Pi^{J,I \cap J}$ pointwise), suppose, to the contrary, that $\tau$ is a nontrivial automorphism of $J$. Then the possibilities of the type of $J$ are as follows: $D_N$, $E_6$ and $F_4$ (recall that $J$ is neither of type $A_N$ nor of type $I_2(m)$). Moreover, since the action of $w \in Y_I$ fixes every element of $I \cap J$, the subset $I \cap J$ of $J$ is contained in the fixed point set of $\tau$. This implies that $J$ is not of type $F_4$, since the nontrivial automorphism of a Coxeter graph of type $F_4$ has no fixed points. Suppose that $J$ is of type $E_6$.
Then, by the above argument on the fixed points of $\tau$ and Lemma \ref{lem:possibility_J_is_E_6}, we have $I \cap J = \{r_2\}$ or $I \cap J = \{r_4\}$. Now take a standard decomposition of $w$ with respect to $J$ (see Proposition \ref{prop:standard_decomposition_existence}). Then no wide transformation can appear due to the shape of $J$ and the position of $I \cap J$ in $J$ (indeed, we cannot obtain a subset of finite type by adding to $J$ an element of $S$ adjacent to $I \cap J$). This implies that the decomposition of $w$ consists of narrow transformations only, therefore $w$ fixes $\Pi_J$ pointwise, contradicting the fact that $\tau$ is a nontrivial automorphism. Secondly, suppose that $J$ is of type $D_N$ with $N \geq 5$. Then, by the above argument on the fixed points of $\tau$, we have $I \cap J \subseteq J \smallsetminus \{r_{N-1},r_N\}$, therefore every irreducible component of $I \cap J$ is of type $A_1$ (by $A_{>1}$-freeness of $I$). Now take a standard decomposition $\mathcal{D}$ of $w$ with respect to $J$ (see Proposition \ref{prop:standard_decomposition_existence}). Note that $\mathcal{D}$ involves at least one wide transformation, since $\tau$ is not the identity mapping. By the shape of $J$ and the position of $I \cap J$ in $J$, the only possibility for the first (from right) wide transformation $\omega = \omega_i$ in $\mathcal{D}$ is the following: $K = J \cup \{t\}$ is of type $D_{N+1}$, $t$ is adjacent to $r_1$, and $r_1 \in [y]$, where we put $y = y^{(i)}(\mathcal{D})$, $t = t^{(i)}(\mathcal{D})$, and $K = K^{(i)}(\mathcal{D})$. Now the claim of Lemma \ref{lem:proof_special_first_case_subcase_1_J_D_N} in Section \ref{sec:proof_special_first_case_subcase_1} also holds in this case, while $\Pi^{J,[y] \cap J}$ is the union of some irreducible components of $\Pi^{[y]}$ for the same reason as in Section \ref{sec:proof_special_first_case_subcase_1}. Hence the hypothesis of Lemma \ref{lem:proof_special_first_case_subcase_1_J_D_N} should not hold.
This argument and the properties that $N \geq |I \cap J| + 2 = |[y] \cap J| + 2$ and $I \cap J \subseteq J \smallsetminus \{r_{N-1},r_N\}$ imply that the possibilities of $[y] \cap J$ are the following: $N$ is odd and $[y] \cap J$ consists of elements $r_{2j-1}$ with $1 \leq j \leq (N-1)/2$; or, $N$ is even and $[y] \cap J$ consists of elements $r_{2j-1}$ with $1 \leq j \leq (N-2)/2$. The former possibility contradicts Lemma \ref{lem:possibility_J_is_D_N}(\ref{item:lem_possibility_J_is_D_N_case_2}). On the other hand, for the latter possibility, the roots $\alpha_{N-1} \in \Pi^{J,[y] \cap J}$ and $\alpha_t + \sum_{j=1}^{N-2} \alpha_j \in \Pi^{[y]} \smallsetminus \Pi^{J,[y] \cap J}$ are not orthogonal, contradicting the above-mentioned fact that $\Pi^{J,[y] \cap J}$ is the union of some irreducible components of $\Pi^{[y]}$. Hence we have a contradiction in either of the two possibilities. Finally, we consider the remaining case that $J$ is of type $D_4$. By the property $N = 4 \geq |I \cap J| + 2$ and $A_{>1}$-freeness of $I$, it follows that $I \cap J$ consists of at most two irreducible components of type $A_1$. On the other hand, by the shape of $J$, the fixed point set of the nontrivial automorphism $\tau$ of $J$ is of type $A_1$ or $A_2$. Since $I \cap J$ is contained in the fixed point set of $\tau$ as mentioned above, it follows that $|I \cap J| = 1$. If $I \cap J = \{r_1\}$, then we have $\Pi^{J,I \cap J} = \{\alpha_3,\alpha_4,\beta\}$ where $\beta = \alpha_1 + 2\alpha_2 + \alpha_3 + \alpha_4$ (see Table \ref{tab:positive_roots_D_n}), and every element of $\Pi^{J,I \cap J}$ forms an irreducible component of $\Pi^{J,I \cap J}$.
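For $I \cap J = \{r_1\}$, this description of $\Pi^{J,I \cap J}$ can be checked numerically. The following sketch (an illustration only, not part of the proof) uses the standard realization of $D_4$ in $\mathbb{R}^4$, with $r_2$ realized as the central node of the Coxeter graph:

```python
# A sketch, not part of the proof: standard realization of J of type D_4
# in R^4, with r_2 the central node of the Coxeter graph.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

a1 = [1, -1, 0, 0]   # alpha_1 = e_1 - e_2
a2 = [0, 1, -1, 0]   # alpha_2 = e_2 - e_3 (central node)
a3 = [0, 0, 1, -1]   # alpha_3 = e_3 - e_4
a4 = [0, 0, 1, 1]    # alpha_4 = e_3 + e_4
beta = [w + 2 * x + y + z for w, x, y, z in zip(a1, a2, a3, a4)]

assert beta == [1, 1, 0, 0]   # beta = e_1 + e_2, the highest root of D_4
# Each of alpha_3, alpha_4, beta is orthogonal to alpha_1 ...
assert all(dot(v, a1) == 0 for v in (a3, a4, beta))
# ... and they are pairwise orthogonal, so each one forms its own
# irreducible component of Pi^{J, I cap J}.
assert dot(a3, a4) == dot(a3, beta) == dot(a4, beta) == 0
# The coefficients of beta over (alpha_1, alpha_3, alpha_4) are all 1, so any
# automorphism permuting alpha_1, alpha_3, alpha_4 fixes beta.
```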
However, now the property $w \cdot \Pi_J = \Pi_J$ implies that $w$ fixes $\alpha_2$ and permutes the three simple roots $\alpha_1$, $\alpha_3$ and $\alpha_4$, therefore $w \cdot \beta = \beta$, contradicting the fact that $\langle w \rangle$ acts transitively on the set of the irreducible components of $\Pi^{J,I \cap J}$ (see Lemma \ref{lem:property_w}). By symmetry, the same result holds when $I \cap J = \{r_3\}$ or $\{r_4\}$. Hence we have $I \cap J = \{r_2\}$. Take a standard decomposition of $w$ with respect to $J$ (see Proposition \ref{prop:standard_decomposition_existence}). Then no wide transformation can appear due to the shape of $J$ and the position of $I \cap J$ in $J$ (indeed, we cannot obtain a subset of finite type by adding to $J$ an element of $S$ adjacent to $I \cap J$). This implies that the decomposition of $w$ consists of narrow transformations only, therefore $w$ fixes $\Pi_J$ pointwise, contradicting the fact that $\tau$ is a nontrivial automorphism. Summarizing, we have derived a contradiction in every case from the assumption that $\tau$ is a nontrivial automorphism. Hence it follows that $\tau$ is the identity mapping, therefore our claim has been proven in the case $\Pi^{J,I \cap J} \not\subseteq \Phi_{I^{\perp}}$. \subsection{The second case $\Pi^{J,I \cap J} \subseteq \Phi_{I^{\perp}}$} \label{sec:proof_special_second_case} In this subsection, we consider the remaining case that $\Pi^{J,I \cap J} \subseteq \Phi_{I^{\perp}}$. In this case, we have $\Pi_{I^{\perp}} \subseteq \Pi^I$, therefore $\Pi^{J,I \cap J} = \Pi_{J \smallsetminus I}$. Let $L$ be an irreducible component of $J \smallsetminus I$. Then $L$ is of finite type.
The aim of the following argument is to show that $w$ fixes $\Pi_L$ pointwise; indeed, if this is satisfied, then we have $\Pi^{J,I \cap J} = \Pi_{J \smallsetminus I} = \Pi_L$ since $\langle w \rangle$ acts transitively on the set of irreducible components of $\Pi^{J,I \cap J}$ (see Lemma \ref{lem:property_w}), therefore $w$ fixes $\Pi^{J,I \cap J}$ pointwise, as desired. Note that $w \cdot \Pi_L \subseteq \Pi_{J \smallsetminus I}$, since now $w$ leaves the set $\Pi^{J,I \cap J} = \Pi_{J \smallsetminus I}$ invariant. \subsubsection{Possibilities of semi-standard decompositions} \label{sec:proof_special_second_case_transformations} Here we investigate the possibilities of narrow and wide transformations in a semi-standard decomposition of the element $w$, in a somewhat wider context. Let $\mathcal{D} = \omega_{\ell(\mathcal{D})-1} \cdots \omega_1\omega_0$ be a semi-standard decomposition of an element $u$ of $W$, with the property that $[y^{(0)}]$ is isomorphic to $I$, $J^{(0)}$ is irreducible and of finite type, and $J^{(0)}$ is apart from $[y^{(0)}]$. Note that any semi-standard decomposition of the element $w \in Y_I$ with respect to the set $L$ defined above satisfies the condition. Note also that $\mathcal{D}^{-1} := (\omega_0)^{-1}(\omega_1)^{-1} \cdots (\omega_{\ell(\mathcal{D})-1})^{-1}$ is also a semi-standard decomposition of $u^{-1}$, and $(\omega_i)^{-1}$ is a narrow (respectively, wide) transformation if and only if $\omega_i$ is a narrow (respectively, wide) transformation. The proof of the next lemma uses a concrete description of root systems of all finite irreducible Coxeter groups except types $A$ and $I_2(m)$. Table \ref{tab:positive_roots_B_n} shows the list for type $B_n$, where the notational conventions are similar to the case of type $D_n$ (Table \ref{tab:positive_roots_D_n}). 
The list for type $F_4$ (Table \ref{tab:positive_roots_F_4}) includes only one of the two conjugacy classes of positive roots (denoted by $\gamma_i^{(1)}$), and the other positive roots (denoted by $\gamma_i^{(2)}$) are obtained by using the symmetry $r_1 \leftrightarrow r_4$, $r_2 \leftrightarrow r_3$. In the list, $[c_1,c_2,c_3,c_4]$ signifies a positive root $c_1 \alpha_1 + c_2 \alpha_2 + c_3\alpha_3 + c_4\alpha_4$, and the description in the columns for actions of generators is similar to the case of type $E_8$ (Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6}). The list for type $H_4$ is divided into two parts (Tables \ref{tab:positive_roots_H_4_1} and \ref{tab:positive_roots_H_4_2}). In the list, $[c_1,c_2,c_3,c_4]$ signifies a positive root $c_1 \alpha_1 + c_2 \alpha_2 + c_3\alpha_3 + c_4\alpha_4$, where we put $c = 2\cos(\pi/5)$ for simplicity and therefore $c^2 = c+1$. The remaining conventions are the same as for type $E_8$, and the marks \lq\lq $H_3$'' indicate the positive roots of the parabolic subgroup of type $H_3$ generated by $\{r_1,r_2,r_3\}$.
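The conventions of Table \ref{tab:positive_roots_B_n} can be checked mechanically. The following sketch (an illustration only, for $n = 4$) represents roots by their coefficient vectors over the simple roots and computes reflections from the Gram matrix $\langle \alpha_i, \alpha_j \rangle = -\cos(\pi/m(r_i,r_j))$ of the Coxeter graph; it verifies two of the tabulated actions and that the resulting roots are unit vectors:

```python
import math

# A sketch, not part of the text: sanity check of the table conventions for
# type B_n with n = 4, in the unit-norm convention for roots.
n = 4

def gram(i, j):  # 0-based indices; the bond r_{n-1} -- r_n has label 4
    if i == j:
        return 1.0
    if abs(i - j) != 1:
        return 0.0
    return -math.cos(math.pi / (4 if max(i, j) == n - 1 else 3))

G = [[gram(i, j) for j in range(n)] for i in range(n)]

def reflect(j, v):
    """Apply the simple reflection r_{j+1} to the coefficient vector v."""
    c = sum(G[j][k] * v[k] for k in range(n))
    return [v[k] - (2 * c if k == j else 0.0) for k in range(n)]

sqrt2 = math.sqrt(2)
g1_13 = [1, 1, 1, 0]        # gamma^{(1)}_{1,3}
g2_14 = [1, 1, 1, sqrt2]    # gamma^{(2)}_{1,4}
g3_3 = [0, 0, sqrt2, 1]     # gamma^{(3)}_3
g3_4 = [0, 0, 0, 1]         # gamma^{(3)}_4 = alpha_4

# r_n maps gamma^{(1)}_{i,n-1} to gamma^{(2)}_{i,n}, and r_3 maps
# gamma^{(3)}_3 to gamma^{(3)}_4, as stated in the table:
assert all(abs(a - b) < 1e-12 for a, b in zip(reflect(3, g1_13), g2_14))
assert all(abs(a - b) < 1e-12 for a, b in zip(reflect(2, g3_3), g3_4))

# Every root is a unit vector in this convention:
norm2 = sum(g2_14[i] * G[i][j] * g2_14[j] for i in range(n) for j in range(n))
assert abs(norm2 - 1.0) < 1e-12
```

The same scheme applies to the $F_4$ and $H_4$ tables, with the bond labels $4$ and $5$ inserted at the appropriate edges of the graph.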
\begin{table}[hbt] \centering \caption{List of positive roots for Coxeter group of type $B_n$} \label{tab:positive_roots_B_n} \begin{tabular}{|c|c|} \hline roots & actions of generators \\ \hline $\gamma^{(1)}_{i,j} := \sum_{h=i}^{j} \alpha_h$ & $r_{i-1} \cdot \gamma^{(1)}_{i,j} = \gamma^{(1)}_{i-1,j}$ ($i \geq 2$) \\ ($1 \leq i \leq j \leq n-1$) & $r_i \cdot \gamma^{(1)}_{i,j} = \gamma^{(1)}_{i+1,j}$ ($i \leq j-1$) \\ ($\gamma^{(1)}_{i,i} = \alpha_i$) & $r_j \cdot \gamma^{(1)}_{i,j} = \gamma^{(1)}_{i,j-1}$ ($i \leq j-1$) \\ & $r_{j+1} \cdot \gamma^{(1)}_{i,j} = \gamma^{(1)}_{i,j+1}$ ($j \leq n-2$) \\ & $r_n \cdot \gamma^{(1)}_{i,n-1} = \gamma^{(2)}_{i,n}$ \\ \hline $\gamma^{(2)}_{i,j} := \sum_{h=i}^{j-1} \alpha_h + \sum_{h=j}^{n-1} 2\alpha_h + \sqrt{2}\alpha_n$ & $r_{i-1} \cdot \gamma^{(2)}_{i,j} = \gamma^{(2)}_{i-1,j}$ ($i \geq 2$) \\ ($1 \leq i < j \leq n$) & $r_i \cdot \gamma^{(2)}_{i,j} = \gamma^{(2)}_{i+1,j}$ ($i \leq j-2$) \\ & $r_{j-1} \cdot \gamma^{(2)}_{i,j} = \gamma^{(2)}_{i,j-1}$ ($i \leq j-2$) \\ & $r_j \cdot \gamma^{(2)}_{i,j} = \gamma^{(2)}_{i,j+1}$ ($j \leq n-1$) \\ & $r_n \cdot \gamma^{(2)}_{i,n} = \gamma^{(1)}_{i,n-1}$ \\ \hline $\gamma^{(3)}_i := \sum_{h=i}^{n-1} \sqrt{2}\alpha_h + \alpha_n$ & $r_{i-1} \cdot \gamma^{(3)}_i = \gamma^{(3)}_{i-1}$ ($i \geq 2$) \\ ($1 \leq i \leq n$) & $r_i \cdot \gamma^{(3)}_i = \gamma^{(3)}_{i+1}$ ($i \leq n-1$) \\ ($\gamma^{(3)}_n = \alpha_n$) & \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \caption{List of positive roots for Coxeter group of type $F_4$} \label{tab:positive_roots_F_4} The data of the remaining positive roots $\gamma^{(2)}_i$ are obtained by replacing $[c_1,c_2,c_3,c_4]$ with $[c_4,c_3,c_2,c_1]$ and replacing each $r_j$ with $r_{5-j}$.\\ \begin{tabular}{|c||c|c|c|c|c|c|} \cline{1-7} height & $i$ & root $\gamma^{(1)}_i$ & \multicolumn{4}{|c|}{$k$; $r_j \cdot \gamma^{(1)}_i = \gamma^{(1)}_k$} \\ \cline{4-7} & & & $r_1$ & $r_2$ & $r_3$ & $r_4$ \\ \cline{1-7} $1$ & $1$ & $[1,0,0,0]$ 
& --- & $3$ & & \\ \cline{2-7} & $2$ & $[0,1,0,0]$ & $3$ & --- & $4$ & \\ \cline{1-7} $2$ & $3$ & $[1,1,0,0]$ & $2$ & $1$ & $5$ & \\ \cline{2-7} & $4$ & $[0,1,\sqrt{2},0]$ & $5$ & & $2$ & $6$ \\ \cline{1-7} $3$ & $5$ & $[1,1,\sqrt{2},0]$ & $4$ & $7$ & $3$ & $8$ \\ \cline{2-7} & $6$ & $[0,1,\sqrt{2},\sqrt{2}]$ & $8$ & & & $4$ \\ \cline{1-7} $4$ & $7$ & $[1,2,\sqrt{2},0]$ & & $5$ & & $9$ \\ \cline{2-7} & $8$ & $[1,1,\sqrt{2},\sqrt{2}]$ & $6$ & $9$ & & $5$ \\ \cline{1-7} $5$ & $9$ & $[1,2,\sqrt{2},\sqrt{2}]$ & & $8$ & $10$ & $7$ \\ \cline{1-7} $6$ & $10$ & $[1,2,2\sqrt{2},\sqrt{2}]$ & & $11$ & $9$ & \\ \cline{1-7} $7$ & $11$ & $[1,3,2\sqrt{2},\sqrt{2}]$ & $12$ & $10$ & & \\ \cline{1-7} $8$ & $12$ & $[2,3,2\sqrt{2},\sqrt{2}]$ & $11$ & & & \\ \cline{1-7} \end{tabular} \end{table} \begin{table}[hbt] \centering \caption{List of positive roots for Coxeter group of type $H_4$ (part $1$), where $c = 2\cos(\pi/5)$, $c^2 = c + 1$} \label{tab:positive_roots_H_4_1} \begin{tabular}{|c||c|c|c|c|c|c|c} \cline{1-7} height & $i$ & root $\gamma_i$ & \multicolumn{4}{|c|}{$k$; $r_j \cdot \gamma_i = \gamma_k$} \\ \cline{4-7} & & & $r_1$ & $r_2$ & $r_3$ & $r_4$ \\ \cline{1-7} $1$ & $1$ & $[1,0,0,0]$ & --- & $5$ & & & $H_3$ \\ \cline{2-7} & $2$ & $[0,1,0,0]$ & $6$ & --- & $7$ & & $H_3$ \\ \cline{2-7} & $3$ & $[0,0,1,0]$ & & $7$ & --- & $8$ & $H_3$ \\ \cline{2-7} & $4$ & $[0,0,0,1]$ & & & $8$ & --- \\ \cline{1-7} $2$ & $5$ & $[1,c,0,0]$ & $9$ & $1$ & $10$ & & $H_3$ \\ \cline{2-7} & $6$ & $[c,1,0,0]$ & $2$ & $9$ & $11$ & & $H_3$ \\ \cline{2-7} & $7$ & $[0,1,1,0]$ & $11$ & $3$ & $2$ & $12$ & $H_3$ \\ \cline{2-7} & $8$ & $[0,0,1,1]$ & & $12$ & $4$ & $3$ \\ \cline{1-7} $3$ & $9$ & $[c,c,0,0]$ & $5$ & $6$ & $13$ & & $H_3$ \\ \cline{2-7} & $10$ & $[1,c,c,0]$ & $13$ & & $5$ & $14$ & $H_3$ \\ \cline{2-7} & $11$ & $[c,1,1,0]$ & $7$ & $15$ & $6$ & $16$ & $H_3$ \\ \cline{2-7} & $12$ & $[0,1,1,1]$ & $16$ & $12$ & & $7$ \\ \cline{1-7} $4$ & $13$ & $[c,c,c,0]$ & $10$ & $17$ & $9$ & $18$ & $H_3$ \\ 
\cline{2-7} & $14$ & $[1,c,c,c]$ & $18$ & & & $10$ \\ \cline{2-7} & $15$ & $[c,c+1,1,0]$ & $19$ & $11$ & $17$ & $20$ & $H_3$ \\ \cline{2-7} & $16$ & $[c,1,1,1]$ & $12$ & $20$ & & $11$ \\ \cline{1-7} $5$ & $17$ & $[c,c+1,c,0]$ & $21$ & $13$ & $15$ & $22$ & $H_3$ \\ \cline{2-7} & $18$ & $[c,c,c,c]$ & $14$ & $22$ & & $13$ \\ \cline{2-7} & $19$ & $[c+1,c+1,1,0]$ & $15$ & & $21$ & $23$ & $H_3$ \\ \cline{2-7} & $20$ & $[c,c+1,1,1]$ & $23$ & $16$ & $24$ & $15$ \\ \cline{1-7} $6$ & $21$ & $[c+1,c+1,c,0]$ & $17$ & $25$ & $19$ & $26$ & $H_3$ \\ \cline{2-7} & $22$ & $[c,c+1,c,c]$ & $26$ & $18$ & $27$ & $17$ \\ \cline{2-7} & $23$ & $[c+1,c+1,1,1]$ & $20$ & & $28$ & $19$ \\ \cline{2-7} & $24$ & $[c,c+1,c+1,1]$ & $28$ & & $20$ & $27$ \\ \cline{1-7} \end{tabular} \end{table} \begin{table}[p] \centering \caption{List of positive roots for Coxeter group of type $H_4$ (part $2$), where $c = 2\cos(\pi/5)$, $c^2 = c + 1$} \label{tab:positive_roots_H_4_2} \begin{tabular}{|c||c|c|c|c|c|c|c} \cline{1-7} height & $i$ & root $\gamma_i$ & \multicolumn{4}{|c|}{$k$; $r_j \cdot \gamma_i = \gamma_k$} \\ \cline{4-7} & & & $r_1$ & $r_2$ & $r_3$ & $r_4$ \\ \cline{1-7} $7$ & $25$ & $[c+1,2c,c,0]$ & & $21$ & & $29$ & $H_3$ \\ \cline{2-7} & $26$ & $[c+1,c+1,c,c]$ & $22$ & $29$ & $30$ & $21$ \\ \cline{2-7} & $27$ & $[c,c+1,c+1,c]$ & $30$ & & $22$ & $24$ \\ \cline{2-7} & $28$ & $[c+1,c+1,c+1,1]$ & $24$ & $31$ & $23$ & $30$ \\ \cline{1-7} $8$ & $29$ & $[c+1,2c,c,c]$ & & $26$ & $32$ & $25$ \\ \cline{2-7} & $30$ & $[c+1,c+1,c+1,c]$ & $27$ & $33$ & $26$ & $28$ \\ \cline{2-7} & $31$ & $[c+1,2c+1,c+1,1]$ & $34$ & $28$ & & $33$ \\ \cline{1-7} $9$ & $32$ & $[c+1,2c,2c,c]$ & & $35$ & $29$ & \\ \cline{2-7} & $33$ & $[c+1,2c+1,c+1,c]$ & $36$ & $30$ & $35$ & $31$ \\ \cline{2-7} & $34$ & $[2c+1,2c+1,c+1,1]$ & $31$ & $37$ & & $36$ \\ \cline{1-7} $10$ & $35$ & $[c+1,2c+1,2c,c]$ & $38$ & $32$ & $33$ & \\ \cline{2-7} & $36$ & $[2c+1,2c+1,c+1,c]$ & $33$ & $39$ & $38$ & $34$ \\ \cline{2-7} & $37$ & $[2c+1,2c+2,c+1,1]$ & 
& $34$ & $40$ & $39$ \\ \cline{1-7} $11$ & $38$ & $[2c+1,2c+1,2c,c]$ & $35$ & $41$ & $36$ & \\ \cline{2-7} & $39$ & $[2c+1,2c+2,c+1,c]$ & & $36$ & $42$ & $37$ \\ \cline{2-7} & $40$ & $[2c+1,2c+2,c+2,1]$ & & & $37$ & $43$ \\ \cline{1-7} $12$ & $41$ & $[2c+1,3c+1,2c,c]$ & $44$ & $38$ & $45$ & \\ \cline{2-7} & $42$ & $[2c+1,2c+2,2c+1,c]$ & & $45$ & $39$ & $46$ \\ \cline{2-7} & $43$ & $[2c+1,2c+2,c+2,c+1]$ & & & $46$ & $40$ \\ \cline{1-7} $13$ & $44$ & $[2c+2,3c+1,2c,c]$ & $41$ & & $47$ & \\ \cline{2-7} & $45$ & $[2c+1,3c+1,2c+1,c]$ & $47$ & $42$ & $41$ & $48$ \\ \cline{2-7} & $46$ & $[2c+1,2c+2,2c+1,c+1]$ & & $48$ & $43$ & $42$ \\ \cline{1-7} $14$ & $47$ & $[2c+2,3c+1,2c+1,c]$ & $45$ & $49$ & $44$ & $50$ \\ \cline{2-7} & $48$ & $[2c+1,3c+1,2c+1,c+1]$ & $50$ & $46$ & & $45$ \\ \cline{1-7} $15$ & $49$ & $[2c+2,3c+2,2c+1,c]$ & $51$ & $47$ & & $52$ \\ \cline{2-7} & $50$ & $[2c+2,3c+1,2c+1,c+1]$ & $48$ & $52$ & & $47$ \\ \cline{1-7} $16$ & $51$ & $[3c+1,3c+2,2c+1,c]$ & $49$ & & & $53$ \\ \cline{2-7} & $52$ & $[2c+2,3c+2,2c+1,c+1]$ & $53$ & $50$ & $54$ & $49$ \\ \cline{1-7} $17$ & $53$ & $[3c+1,3c+2,2c+1,c+1]$ & $52$ & & $55$ & $51$ \\ \cline{2-7} & $54$ & $[2c+2,3c+2,2c+2,c+1]$ & $55$ & & $52$ & \\ \cline{1-7} $18$ & $55$ & $[3c+1,3c+2,2c+2,c+1]$ & $54$ & $56$ & $53$ & \\ \cline{1-7} $19$ & $56$ & $[3c+1,3c+3,2c+2,c+1]$ & $57$ & $55$ & & \\ \cline{1-7} $20$ & $57$ & $[3c+2,3c+3,2c+2,c+1]$ & $56$ & $58$ & & \\ \cline{1-7} $21$ & $58$ & $[3c+2,4c+2,2c+2,c+1]$ & & $57$ & $59$ & \\ \cline{1-7} $22$ & $59$ & $[3c+2,4c+2,3c+1,c+1]$ & & & $58$ & $60$ \\ \cline{1-7} $23$ & $60$ & $[3c+2,4c+2,3c+1,2c]$ & & & & $59$ \\ \cline{1-7} \end{tabular} \end{table} Then, for the wide transformations in $\mathcal{D}$, we have the following: \begin{lem} \label{lem:proof_special_second_case_transformations_wide} In this setting, if $\omega_i$ is a wide transformation, then there exist only the following two possibilities, where $K^{(i)} = \{r_1,r_2,\dots,r_N\}$ is the standard labelling of 
$K^{(i)}$ given in Section \ref{sec:longestelement}: \begin{enumerate} \item $K^{(i)}$ is of type $A_N$ with $N \geq 3$, $t^{(i)} = r_2$, $[y^{(i)}] \cap K^{(i)} = \{r_1\}$ and $J^{(i)} = \{r_3,\dots,r_N\}$; now the action of $\omega_i$ maps $r_1$ to $r_N$ and $(r_3,r_4,\dots,r_N)$ to $(r_1,r_2,\dots,r_{N-2})$; \item $K^{(i)}$ is of type $E_7$, $t^{(i)} = r_6$, $[y^{(i)}] \cap K^{(i)} = \{r_1,r_2,r_3,r_4,r_5\}$ and $J^{(i)} = \{r_7\}$; now the action of $\omega_i$ maps $(r_1,r_2,r_3,r_4,r_5)$ to $(r_1,r_5,r_3,r_4,r_2)$ and $r_7$ to $r_7$. \end{enumerate} Hence, if $\mathcal{D}$ involves a wide transformation, then $J^{(0)}$ is of type $A_{N'}$ with $1 \leq N' < \infty$. \end{lem} \begin{proof} The latter part of the claim follows from the former part and the fact that the sets $J^{(i)}$ for $0 \leq i \leq \ell(\mathcal{D})$ are all isomorphic to each other. For the former part, note that $J^{(i)}$ is an irreducible subset of $K^{(i)}$ which is not adjacent to $[y^{(i)}]$ (by the above condition that $J^{(0)}$ is apart from $[y^{(0)}]$), $t^{(i)}$ is adjacent to $[y^{(i)}] \cap K^{(i)}$, and $\omega_i$ cannot fix the set $\Pi_{K^{(i)} \smallsetminus \{t\}}$ pointwise (see Lemma \ref{lem:another_decomposition_Y_no_loop}). Moreover, since $I$ is $A_{>1}$-free, $[y^{(i)}]$ is also $A_{>1}$-free. By these properties, a case-by-case argument shows that the possibilities of $K^{(i)}$, $[y^{(i)}]$ and $t^{(i)}$ are as enumerated in Table \ref{tab:lem_proof_special_second_case_transformations_wide} up to symmetry (note that $J^{(i)} = K^{(i)} \smallsetminus ([y^{(i)}] \cup \{t^{(i)}\})$). 
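As an illustration of the data in Table \ref{tab:lem_proof_special_second_case_transformations_wide} (a worked instance of the argument in the next paragraph), consider the row of type $F_4$, where $[y^{(i)}] \cap K^{(i)} = \{r_1\}$ and $t^{(i)} = r_2$: by Table \ref{tab:positive_roots_F_4}, the root $\gamma^{(1)}_7$ listed in the last column, namely
\begin{displaymath}
\gamma^{(1)}_7 = \alpha_1 + 2\alpha_2 + \sqrt{2}\alpha_3 \enspace,
\end{displaymath}
is fixed by $r_1$ and hence orthogonal to $\alpha_1$, so $\gamma^{(1)}_7 \in (\Phi_{K^{(i)}}^{\perp \{r_1\}})^+$, while its coefficient of $\alpha_{t^{(i)}} = \alpha_2$ is $2 \neq 0$.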
Now, for each case in Table \ref{tab:lem_proof_special_second_case_transformations_wide} except the two cases specified in the statement, it follows by using the tables for the root systems of finite irreducible Coxeter groups that there exists a root $\beta \in (\Phi_{K^{(i)}}^{\perp [y^{(i)}] \cap K^{(i)}})^+$ that has non-zero coefficient of $\alpha_{t^{(i)}}$, as listed in Table \ref{tab:lem_proof_special_second_case_transformations_wide} (where the notations for the roots $\beta$ are as in the tables). This implies that $\omega_i \cdot \beta \in \Phi^-$. Moreover, the definition of $K^{(i)}$ implies that the set $[y^{(i)}] \smallsetminus K^{(i)}$ is apart from $K^{(i)}$, therefore $\beta \in \Phi^{\perp [y^{(i)}]}$ and $\Phi^{\perp [y^{(i)}]}[\omega_i] \neq \emptyset$. However, this contradicts the property $\omega_i \in Y_{y^{(i+1)},y^{(i)}}$. Hence one of the two conditions specified in the statement should be satisfied, concluding the proof of Lemma \ref{lem:proof_special_second_case_transformations_wide}. 
\end{proof} \begin{table}[hbt] \centering \caption{List for the proof of Lemma \ref{lem:proof_special_second_case_transformations_wide}} \label{tab:lem_proof_special_second_case_transformations_wide} \begin{tabular}{|c|c|c|c|} \hline type of $K^{(i)}$ & $[y^{(i)}] \cap K^{(i)}$ & $t^{(i)}$ & $\beta$ \\ \hline $A_N$ ($N \geq 3$) & $\{r_1\}$ & $r_2$ & --- \\ \hline $B_N$ ($N \geq 4$) & $\{r_{k+1},\dots,r_N\}$ ($3 \leq k \leq N-1$) & $r_k$ & $\gamma^{(3)}_{k}$ \\ \hline $D_N$ & $\{r_{N-1},r_N\}$ & $r_{N-2}$ & $\gamma^{(4)}_{1,2}$ \\ \cline{2-3} & $\{r_{k+1},\dots,r_{N-1},r_N\}$ ($2 \leq k \leq N-4$) & $r_k$ & \\ \hline $E_N$ ($6 \leq N \leq 8$) & $\{r_1\}$ & $r_3$ & $\gamma_{44}$ \\ \hline $E_7$ & $\{r_1,r_2,r_3,r_4,r_5\}$ & $r_6$ & --- \\ \cline{2-4} & $\{r_7\}$ & $r_6$ & $\gamma_{61}$ \\ \hline $E_8$ & $\{r_1,r_2,r_3,r_4,r_5\}$ & $r_6$ & $\gamma_{119}$ \\ \cline{2-3} & $\{r_1,r_2,r_3,r_4,r_5,r_6\}$ & $r_7$ & \\ \cline{2-4} & $\{r_8\}$ & $r_7$ & $\gamma_{74}$ \\ \hline $F_4$ & $\{r_1\}$ & $r_2$ & $\gamma^{(1)}_7$ \\ \hline $H_4$ & $\{r_1\}$ & $r_2$ & $\gamma_{40}$ \\ \cline{2-3} & $\{r_1,r_2\}$ & $r_3$ & \\ \cline{2-4} & $\{r_4\}$ & $r_3$ & $\gamma_{32}$ \\ \hline \end{tabular} \end{table} On the other hand, for the narrow transformations in $\mathcal{D}$, we have the following: \begin{lem} \label{lem:proof_special_second_case_transformations_narrow} In this setting, suppose that $\omega_i$ is a narrow transformation, $[y^{(i+1)}] \neq [y^{(i)}]$, and $K^{(i)} \cap [y^{(i)}] = K^{(i)} \smallsetminus \{t^{(i)}\}$ has an irreducible component of type $A_1$. Then $K^{(i)}$ is of type $A_2$ or of type $I_2(m)$ with $m$ an odd number. \end{lem} \begin{proof} First, by the condition $[y^{(i+1)}] \neq [y^{(i)}]$ and the definition of $\omega_i$, the action of the longest element of $W_{K^{(i)}}$ induces a nontrivial automorphism of $K^{(i)}$ which does not fix the element $t^{(i)}$. 
This property restricts the possibilities of $K^{(i)}$ to one of the following (where we use the standard labelling of $K^{(i)}$): $K^{(i)} = \{r_1,\dots,r_N\}$ is of type $A_N$ and $t^{(i)} \neq r_{(N+1)/2}$; $K^{(i)} = \{r_1,\dots,r_N\}$ is of type $D_N$ with $N$ odd and $t^{(i)} \in \{r_{N-1},r_N\}$; $K^{(i)} = \{r_1,\dots,r_6\}$ is of type $E_6$ and $t^{(i)} \not\in \{r_2,r_4\}$; or $K^{(i)}$ is of type $I_2(m)$ with $m$ odd. Secondly, by considering the $A_{>1}$-freeness of $I$ (hence of $[y^{(i)}]$), the possibilities are further restricted to the following: $K^{(i)}$ is of type $A_2$; $K^{(i)}$ is of type $E_6$ and $t^{(i)} \in \{r_1,r_6\}$; or $K^{(i)}$ is of type $I_2(m)$ with $m$ odd. Moreover, by the hypothesis that $K^{(i)} \cap [y^{(i)}]$ has an irreducible component of type $A_1$, the above possibility of type $E_6$ is ruled out. Hence the claim holds. \end{proof} \subsubsection{Proof of the claim} \label{sec:finitepart_secondcase_N_2} From now on, we prove our claim that $w$ fixes the set $\Pi_L$ pointwise. First, we have $w \cdot \Pi_L \subseteq \Pi_{J \smallsetminus I}$ as mentioned above, therefore Proposition \ref{prop:standard_decomposition_existence} implies that there exists a standard decomposition of $w$ with respect to $L$. Moreover, $L$ is apart from $I = [x_I]$, since $\Pi_L$ is an irreducible component of $\Pi^I$. Now if $L$ is not of type $A_N$ with $1 \leq N < \infty$, then Lemma \ref{lem:proof_special_second_case_transformations_wide} implies that the standard decomposition of $w$ involves no wide transformations, therefore $w$ fixes $\Pi_L$ pointwise, as desired (note that any narrow transformation $\omega_i$ fixes $\Pi_{J^{(i)}}$ pointwise by definition). Hence, from now on, we consider the case that $L$ is of type $A_N$ with $1 \leq N < \infty$. First, we present some definitions: \begin{defn} \label{defn:admissible_for_N_large} Suppose that $2 \leq N < \infty$.
Let $\mathcal{D} = \omega_{\ell(\mathcal{D})-1} \cdots \omega_1\omega_0$ be a semi-standard decomposition of an element of $W$. We say that a sequence $s_1,s_2,\dots,s_{\mu}$ of distinct elements of $S$ is \emph{admissible of type $A_N$} with respect to $\mathcal{D}$, if $J^{(0)}$ is of type $A_N$, $\mu \equiv N \pmod{2}$, and the following conditions are satisfied, where we put $M := \{s_1,s_2,\dots,s_{\mu}\}$ (see Figure \ref{fig:finitepart_secondcase_N_2_definition}): \begin{enumerate} \item \label{item:admissible_N_large_J_irreducible_component} $\Pi_{J^{(0)}}$ is an irreducible component of $\Pi^{[y^{(0)}]}$. \item \label{item:admissible_N_large_M_line} $m(s_j,s_{j+1}) = 3$ for every $1 \leq j \leq \mu-1$. \item \label{item:admissible_N_large_J_in_M} For each $0 \leq h \leq \ell(\mathcal{D})$, there exists an odd number $\lambda(h)$ with $1 \leq \lambda(h) \leq \mu - N + 1$ satisfying the following conditions, where we put $\rho(h) := \lambda(h) + N - 1$: \begin{displaymath} J^{(h)} = \{s_j \mid \lambda(h) \leq j \leq \rho(h)\} \enspace, \end{displaymath} \begin{displaymath} \begin{split} [y^{(h)}] \cap M &= \{s_j \mid 1 \leq j \leq \lambda(h) - 2 \mbox{ and } j \equiv 1 \pmod{2}\} \\ &\cup \{s_j \mid \rho(h) + 2 \leq j \leq \mu \mbox{ and } j \equiv \mu \pmod{2}\} \enspace. \end{split} \end{displaymath} \item \label{item:admissible_N_large_y_isolated} For each $0 \leq h \leq \ell(\mathcal{D})$, every element of $[y^{(h)}] \cap M$ forms an irreducible component of $[y^{(h)}]$ of type $A_1$. \item \label{item:admissible_N_large_narrow_transformation} For each $0 \leq h \leq \ell(\mathcal{D})-1$, if $\omega_h$ is a narrow transformation, then one of the following two conditions is satisfied: \begin{itemize} \item $K^{(h)}$ intersects with $[y^{(h)}] \cap M$, and $[y^{(h+1)}] = [y^{(h)}]$; \item $K^{(h)}$ is apart from $[y^{(h)}] \cap M$ (hence $[y^{(h+1)}] \cap M = [y^{(h)}] \cap M$).
\end{itemize} \item \label{item:admissible_N_large_wide_transformation} For each $0 \leq h \leq \ell(\mathcal{D})-1$, if $\omega_h$ is a wide transformation, then one of the following two conditions is satisfied: \begin{itemize} \item $\lambda(h+1) = \lambda(h) - 2$, $K^{(h)} = J^{(h)} \cup \{s_{\lambda(h)-2},s_{\lambda(h)-1}\}$ is of type $A_{N+2}$, $t^{(h)} = s_{\lambda(h)-1}$, and the action of $\omega_h$ maps $s_{\lambda(h)+j} \in J^{(h)}$ ($0 \leq j \leq N-1$) to $s_{\lambda(h+1)+j}$ and maps $s_{\lambda(h)-2} \in [y^{(h)}]$ to $s_{\rho(h)}$ ($= s_{\rho(h+1)+2}$); \item $\lambda(h+1) = \lambda(h) + 2$, $K^{(h)} = J^{(h)} \cup \{s_{\rho(h)+1},s_{\rho(h)+2}\}$ is of type $A_{N+2}$, $t^{(h)} = s_{\rho(h)+1}$, and the action of $\omega_h$ maps $s_{\lambda(h)+j} \in J^{(h)}$ ($0 \leq j \leq N-1$) to $s_{\lambda(h+1)+j}$ and maps $s_{\rho(h)+2} \in [y^{(h)}]$ to $s_{\lambda(h)}$ ($= s_{\lambda(h+1)-2}$). \end{itemize} \end{enumerate} Moreover, we say that such a sequence $s_1,s_2,\dots,s_{\mu}$ is \emph{tight} if $M = \bigcup_{h=0}^{\ell(\mathcal{D})} J^{(h)}$. \end{defn} \begin{figure} \caption{Admissible sequence when $N \geq 2$; here $N = 7$, black circles in the top and the bottom rows indicate elements of $[y^{(h)}] \cap M$ and $[y^{(h+1)}] \cap M$, respectively, and $\omega_h$ is a wide transformation with $t^{(h)} = s_{\lambda(h)-1}$} \label{fig:finitepart_secondcase_N_2_definition} \end{figure} \begin{defn} \label{defn:admissible_for_N_1} Suppose that $N = 1$. Let $\mathcal{D} = \omega_{\ell(\mathcal{D})-1} \cdots \omega_1\omega_0$ be a semi-standard decomposition of an element of $W$.
We say that a sequence $s_1,s_2,\dots,s_{\mu}$ of distinct elements of $S$ is \emph{admissible of type $A_1$} with respect to $\mathcal{D}$, if $J^{(0)}$ is of type $A_1$ and the following conditions are satisfied, where we put $M := \{s_1,s_2,\dots,s_{\mu}\}$ (see Figure \ref{fig:finitepart_secondcase_N_1_definition}): \begin{enumerate} \item \label{item:admissible_N_1_J_irreducible_component} $\Pi_{J^{(0)}}$ is an irreducible component of $\Pi^{[y^{(0)}]}$. \item \label{item:admissible_N_1_J_in_M} For each $0 \leq h \leq \ell(\mathcal{D})$, we have $J^{(h)} \subseteq M$ and $M \smallsetminus J^{(h)} \subseteq [y^{(h)}]$. \item \label{item:admissible_N_1_y_isolated} For each $0 \leq h \leq \ell(\mathcal{D})$, every element of $[y^{(h)}] \cap M$ forms an irreducible component of $[y^{(h)}]$ of type $A_1$. \item \label{item:admissible_N_1_narrow_transformation} For each $0 \leq h \leq \ell(\mathcal{D})-1$, if $\omega_h$ is a narrow transformation, then one of the following two conditions is satisfied: \begin{itemize} \item $K^{(h)}$ intersects with $[y^{(h)}] \cap M$, and $[y^{(h+1)}] = [y^{(h)}]$; \item $K^{(h)}$ is apart from $[y^{(h)}] \cap M$, hence $[y^{(h+1)}] \cap M = [y^{(h)}] \cap M$. \end{itemize} \item \label{item:admissible_N_1_wide_transformation} For each $0 \leq h \leq \ell(\mathcal{D})-1$, if $\omega_h$ is a wide transformation, then one of the following two conditions is satisfied: \begin{itemize} \item $J^{(h+1)} \neq J^{(h)}$, $K^{(h)}$ is of type $A_3$, $K^{(h)} \smallsetminus \{t^{(h)}\} = J^{(h)} \cup J^{(h+1)}$, $J^{(h+1)} \subseteq [y^{(h)}] \cap M$, and the action of $\omega_h$ exchanges the unique element of $J^{(h)}$ and the unique element of $J^{(h+1)}$; \item $J^{(h+1)} = J^{(h)}$ and $[y^{(h+1)}] = [y^{(h)}]$. \end{itemize} \end{enumerate} Moreover, we say that such a sequence $s_1,s_2,\dots,s_{\mu}$ is \emph{tight} if $M = \bigcup_{h=0}^{\ell(\mathcal{D})} J^{(h)}$.
\end{defn} \begin{figure} \caption{Admissible sequence when $N = 1$; here $\omega_h$ is a wide transformation of the first type in Definition \ref{defn:admissible_for_N_1}(\ref{item:admissible_N_1_wide_transformation}), the circles in each row signify elements of $M$, and the diamond signifies the element $t^{(h)}$} \label{fig:finitepart_secondcase_N_1_definition} \end{figure} Note that, if a sequence $s_1,s_2,\dots,s_{\mu}$ is admissible of type $A_N$ with respect to a semi-standard decomposition $\mathcal{D} = \omega_{\ell(\mathcal{D})-1} \cdots \omega_1\omega_0$, then the subsequence of $s_1,s_2,\dots,s_{\mu}$ consisting of the elements of $\bigcup_{j=0}^{\ell(\mathcal{D})} J^{(j)}$ is admissible of type $A_N$ with respect to $\mathcal{D}$ and is tight (for the case $N \geq 2$, the property of wide transformations in Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_wide_transformation}) implies that $\bigcup_{j=0}^{\ell(\mathcal{D})} J^{(j)} = \{s_i \mid \lambda(k) \leq i \leq \rho(k')\}$ for some $k,k' \in \{0,1,\dots,\ell(\mathcal{D})\}$). Moreover, the sequence $s_1$, $s_2,\dots,s_{\mu}$ is also admissible of type $A_N$ with respect to $\mathcal{D}^{-1}$. The above definitions are relevant to our purpose in the following manner: \begin{lem} \label{lem:claim_holds_when_admissible} Let $\mathcal{D} = \omega_{\ell(\mathcal{D})-1} \cdots \omega_1\omega_0$ be a semi-standard decomposition of $w$ with respect to $L$. If there exists a sequence which is admissible of type $A_N$ with respect to $\mathcal{D}$, then $w$ fixes $\Pi_L$ pointwise. \end{lem} \begin{proof} First, note that $y^{(\ell(\mathcal{D}))} = x_I = y^{(0)}$ since $w \in Y_I$, therefore $[y^{(\ell(\mathcal{D}))}] \cap M = [y^{(0)}] \cap M$ where $M$ is as defined in Definition \ref{defn:admissible_for_N_large} (when $N \geq 2$) or Definition \ref{defn:admissible_for_N_1} (when $N = 1$). 
Now it follows from the properties in Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_J_in_M}) when $N \geq 2$, or Definition \ref{defn:admissible_for_N_1}(\ref{item:admissible_N_1_J_in_M}) when $N = 1$, that $J^{(\ell(\mathcal{D}))} = J^{(0)} = L$. Hence $w$ fixes $\Pi_L$ pointwise when $N = 1$. Moreover, when $N \geq 2$, the property in Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_wide_transformation}) implies that $\omega_h \ast s_{\lambda(h)+j} = s_{\lambda(h+1)+j}$ for every $0 \leq h \leq \ell(\mathcal{D})-1$ and $0 \leq j \leq N-1$. Now by this property and the above-mentioned property $J^{(\ell(\mathcal{D}))} = J^{(0)}$, it follows that $w$ fixes the set $\Pi_{J^{(0)}} = \Pi_{L}$ pointwise. Hence the proof is concluded. \end{proof} As mentioned above, a standard decomposition of $w$ with respect to $L$ exists. Therefore, by virtue of Lemma \ref{lem:claim_holds_when_admissible}, it suffices to show that there exists a sequence which is admissible with respect to this standard decomposition. More generally, we prove the following proposition (note that the above-mentioned standard decomposition of $w$ satisfies the assumption in this proposition): \begin{prop} \label{prop:admissible_sequence_exists} Let $\mathcal{D} = \omega_{\ell(\mathcal{D})-1} \cdots \omega_1\omega_0$ be a semi-standard decomposition of an element. Suppose that $J^{(0)}$ is of type $A_N$ with $1 \leq N < \infty$, and $\Pi_{J^{(0)}}$ is an irreducible component of $\Pi^{[y^{(0)}]}$. Then there exists a sequence which is admissible of type $A_N$ with respect to $\mathcal{D}$. \end{prop} To prove Proposition \ref{prop:admissible_sequence_exists}, we give the following key lemma, which will be proven below: \begin{lem} \label{lem:admissible_sequence_extends} Let $n \geq 0$. 
Let $\mathcal{D} = \omega_n\omega_{n-1} \cdots \omega_1\omega_0$ be a semi-standard decomposition of an element, and put $\mathcal{D}' := \omega_{n-1} \cdots \omega_1\omega_0$, which is also a semi-standard decomposition of an element, satisfying $y^{(0)}(\mathcal{D}') = y^{(0)}(\mathcal{D})$ and $J^{(0)}(\mathcal{D}') = J^{(0)}(\mathcal{D})$. Suppose that $s_1,\dots,s_{\mu}$ is a sequence which is admissible of type $A_N$ with respect to $\mathcal{D}'$. For simplicity, put $y^{(j)} = y^{(j)}(\mathcal{D})$, $J^{(j)} = J^{(j)}(\mathcal{D})$, $t^{(j)} = t^{(j)}(\mathcal{D})$, and $K^{(j)} = K^{(j)}(\mathcal{D})$ for each index $j$. \begin{enumerate} \item \label{item:lem_admissible_sequence_extends_narrow} If $\omega_n$ is a narrow transformation, then we have either $[y^{(n+1)}] = [y^{(n)}]$, or $K^{(n)}$ is apart from $[y^{(n)}] \cap \bigcup_{j=0}^{n} J^{(j)}$. \item \label{item:lem_admissible_sequence_extends_wide_N_1_fix} If $N = 1$, $\omega_n$ is a wide transformation and $J^{(n+1)} = J^{(n)}$, then we have $[y^{(n+1)}] = [y^{(n)}]$. \item \label{item:lem_admissible_sequence_extends_wide_N_1} If $N = 1$, $\omega_n$ is a wide transformation and $J^{(n+1)} \neq J^{(n)}$, then $K^{(n)}$ is of type $A_3$, $K^{(n)} \smallsetminus (J^{(n)} \cup \{t^{(n)}\}) \subseteq [y^{(n)}]$, and the action of $\omega_n$ exchanges the unique element of $J^{(n)}$ and the unique element of $K^{(n)} \smallsetminus (J^{(n)} \cup \{t^{(n)}\})$ (the latter belonging to $[y^{(n)}] \cap J^{(n+1)}$).
\item \label{item:lem_admissible_sequence_extends_wide_N_large} If $N \geq 2$ and $\omega_n$ is a wide transformation, then $K^{(n)}$ is of type $A_{N+2}$, the unique element $s'$ of $K^{(n)} \smallsetminus (J^{(n)} \cup \{t^{(n)}\})$ belongs to $[y^{(n)}]$, and one of the following two conditions is satisfied: \begin{enumerate} \item \label{item:lem_admissible_sequence_extends_wide_N_large_left} $t^{(n)}$ is adjacent to $s'$ and $s_{\lambda(n)}$, and the action of $\omega_n$ maps the elements $s_{\lambda(n)}$, $s_{\lambda(n)+1}$, $s_{\lambda(n)+2},\dots,s_{\rho(n)}$ and $s'$ to $s'$, $t^{(n)}$, $s_{\lambda(n)},\dots,s_{\rho(n)-2}$ and $s_{\rho(n)}$, respectively. Moreover: \begin{enumerate} \item \label{item:lem_admissible_sequence_extends_wide_N_large_left_branch} if $\lambda(n) \geq 3$ and $s_{\lambda(n)-2} \in \bigcup_{j=0}^{n} J^{(j)}$, then we have $s' = s_{\lambda(n)-2}$ and $t^{(n)} = s_{\lambda(n)-1}$; \item \label{item:lem_admissible_sequence_extends_wide_N_large_left_terminal} otherwise, we have $s' \not\in \bigcup_{j=0}^{n} J^{(j)}$. \end{enumerate} \item \label{item:lem_admissible_sequence_extends_wide_N_large_right} $t^{(n)}$ is adjacent to $s'$ and $s_{\rho(n)}$, and the action of $\omega_n$ maps the elements $s_{\rho(n)}$, $s_{\rho(n)-1}$, $s_{\rho(n)-2},\dots,s_{\lambda(n)}$ and $s'$ to $s'$, $t^{(n)}$, $s_{\rho(n)},\dots,s_{\lambda(n)+2}$ and $s_{\lambda(n)}$, respectively. Moreover: \begin{enumerate} \item \label{item:lem_admissible_sequence_extends_wide_N_large_right_branch} if $\rho(n) \leq \mu - 2$ and $s_{\rho(n)+2} \in \bigcup_{j=0}^{n} J^{(j)}$, then we have $s' = s_{\rho(n)+2}$ and $t^{(n)} = s_{\rho(n)+1}$; \item \label{item:lem_admissible_sequence_extends_wide_N_large_right_terminal} otherwise, we have $s' \not\in \bigcup_{j=0}^{n} J^{(j)}$.
\end{enumerate} \end{enumerate} \end{enumerate} \end{lem} Then Proposition \ref{prop:admissible_sequence_exists} is deduced by applying Lemma \ref{lem:admissible_sequence_extends} and the next lemma to the semi-standard decompositions $\mathcal{D}_{\nu} := \omega_{\nu-1} \cdots \omega_1\omega_0$ ($0 \leq \nu \leq \ell(\mathcal{D})$) successively (note that, when $\nu = 0$, i.e., $\mathcal{D}_{\nu}$ is an empty expression, the sequence $s_1,\dots,s_N$, where $J^{(0)} = \{s_1,\dots,s_N\}$ is the standard labelling of type $A_N$, is admissible of type $A_N$ with respect to $\mathcal{D}_{\nu}$): \begin{lem} \label{lem:admissible_sequence_existence_from_lemma} In the situation of Lemma \ref{lem:admissible_sequence_extends}, we define a sequence $\sigma$ of elements of $S$ in the following manner: For Cases \ref{item:lem_admissible_sequence_extends_narrow}, \ref{item:lem_admissible_sequence_extends_wide_N_1_fix}, \ref{item:lem_admissible_sequence_extends_wide_N_large_left_branch} and \ref{item:lem_admissible_sequence_extends_wide_N_large_right_branch}, let $\sigma$ be the sequence $s_1,\dots,s_{\mu}$; for Case \ref{item:lem_admissible_sequence_extends_wide_N_1}, let $s'$ be the unique element of $K^{(n)} \smallsetminus (J^{(n)} \cup \{t^{(n)}\}) = J^{(n+1)}$, and let $\sigma$ be the sequence $s_1,\dots,s_{\mu},s'$ when $s' \not\in \{s_1,\dots,s_{\mu}\}$ and the sequence $s_1,\dots,s_{\mu}$ when $s' \in \{s_1,\dots,s_{\mu}\}$; for Case \ref{item:lem_admissible_sequence_extends_wide_N_large_left_terminal}, let $\sigma$ be the sequence $s'$, $t^{(n)}$, $s_{\lambda(n)}$, $s_{\lambda(n)+1},\dots,s_{\rho'}$, where $\rho'$ denotes the largest index $1 \leq \rho' \leq \mu$ with $s_{\rho'} \in \bigcup_{j=0}^{n} J^{(j)}$; for Case \ref{item:lem_admissible_sequence_extends_wide_N_large_right_terminal}, let $\sigma$ be the sequence $s'$, $t^{(n)}$, $s_{\rho(n)}$, $s_{\rho(n)-1},\dots,s_{\lambda'}$, where $\lambda'$ denotes the smallest index $1 \leq \lambda' \leq \mu$ with
$s_{\lambda'} \in \bigcup_{j=0}^{n} J^{(j)}$. Then $\sigma$ is admissible of type $A_N$ with respect to $\mathcal{D} = \omega_n \cdots \omega_1\omega_0$. \end{lem} Now our remaining task is to prove Lemma \ref{lem:admissible_sequence_extends} and Lemma \ref{lem:admissible_sequence_existence_from_lemma}. For this purpose, we present an auxiliary result: \begin{lem} \label{lem:admissible_sequence_non_adjacent_pairs} Let $s_1,\dots,s_{\mu}$ be a sequence which is admissible of type $A_N$, where $N \geq 2$, with respect to a semi-standard decomposition $\mathcal{D}$ of an element of $W$. Suppose that the sequence $s_1,\dots,s_{\mu}$ is tight. If $1 \leq j_1 < j_2 \leq \mu$, $j_2 - j_1 \geq 2$, and either $j_1 \equiv 1 \pmod{2}$ or $j_2 \equiv \mu \pmod{2}$, then $s_{j_1}$ is not adjacent to $s_{j_2}$. \end{lem} \begin{proof} By symmetry, we may assume without loss of generality that $j_1 \equiv 1 \pmod{2}$. Put $\mathcal{D} = \omega_{n-1} \cdots \omega_1\omega_0$. Since the sequence $s_1,\dots,s_\mu$ is tight, there exists an index $0 \leq h \leq n$ with $s_{j_2} \in J^{(h)}$. Now the properties \ref{item:admissible_N_large_M_line} and \ref{item:admissible_N_large_J_in_M} in Definition \ref{defn:admissible_for_N_large} imply that $J^{(h)} = \{s_{\lambda(h)},s_{\lambda(h)+1},\dots,s_{\rho(h)}\}$ is the standard labelling of type $A_N$, therefore the claim holds if $s_{j_1} \in J^{(h)}$ (note that $j_2 - j_1 \geq 2$). On the other hand, if $s_{j_1} \not\in J^{(h)}$, then the property \ref{item:admissible_N_large_J_in_M} in Definition \ref{defn:admissible_for_N_large} and the fact $j_1 < j_2$ imply that $j_1 < \lambda(h)$, therefore $s_{j_1} \in [y^{(h)}]$ since $j_1 \equiv 1 \pmod{2}$. Hence the claim follows from the fact that $J^{(h)}$ is apart from $[y^{(h)}]$ (see the property \ref{item:admissible_N_large_J_irreducible_component} in Definition \ref{defn:admissible_for_N_large}).
\end{proof} From now on, we prove the pair of Lemma \ref{lem:admissible_sequence_extends} and Lemma \ref{lem:admissible_sequence_existence_from_lemma} by induction on $n \geq 0$. First, we give a proof of Lemma \ref{lem:admissible_sequence_existence_from_lemma} for $n = n_0$ by assuming Lemma \ref{lem:admissible_sequence_extends} for $0 \leq n \leq n_0$. Secondly, we will give a proof of Lemma \ref{lem:admissible_sequence_extends} for $n = n_0$ by assuming Lemma \ref{lem:admissible_sequence_extends} for $0 \leq n < n_0$ and Lemma \ref{lem:admissible_sequence_existence_from_lemma} for $0 \leq n < n_0$. \begin{proof} [Proof of Lemma \ref{lem:admissible_sequence_existence_from_lemma} (for $n = n_0$) from Lemma \ref{lem:admissible_sequence_extends} (for $n \leq n_0$).] When $n_0 = 0$, the claim is obvious from the property of $\omega_{n_0}$ specified in Lemma \ref{lem:admissible_sequence_extends}. From now on, we suppose that $n_0 > 0$. We may assume without loss of generality that the sequence $s_1,\dots,s_{\mu}$ (denoted here by $\sigma'$) which is admissible with respect to $\mathcal{D}'$ is tight, therefore we have $M' := \{s_1,\dots,s_{\mu}\} = \bigcup_{j=0}^{n_0} J^{(j)}$. We divide the proof according to the possibilities of $\omega_{n_0}$ listed in Lemma \ref{lem:admissible_sequence_extends}. By symmetry, we may omit the argument for Case \ref{item:lem_admissible_sequence_extends_wide_N_large_right} without loss of generality. In Case \ref{item:lem_admissible_sequence_extends_narrow}, since $M' = \bigcup_{j=0}^{n_0} J^{(j)}$ as above, $\omega_{n_0}$ satisfies the condition for $\sigma'$ in Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_narrow_transformation}) (when $N \geq 2$) or Definition \ref{defn:admissible_for_N_1}(\ref{item:admissible_N_1_narrow_transformation}) (when $N = 1$), hence $\sigma = \sigma'$ is admissible of type $A_N$ with respect to $\mathcal{D}$.
Similarly, in Case \ref{item:lem_admissible_sequence_extends_wide_N_1_fix}, Case \ref{item:lem_admissible_sequence_extends_wide_N_large_left_branch}, and Case \ref{item:lem_admissible_sequence_extends_wide_N_1} with $s' \in M'$, the wide transformation $\omega_{n_0}$ satisfies the condition for $\sigma'$ in Definition \ref{defn:admissible_for_N_1}(\ref{item:admissible_N_1_wide_transformation}), Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_wide_transformation}), and Definition \ref{defn:admissible_for_N_1}(\ref{item:admissible_N_1_wide_transformation}), respectively. Hence $\sigma = \sigma'$ is admissible of type $A_N$ with respect to $\mathcal{D}$ in these three cases. From now on, we consider the remaining two cases: Case \ref{item:lem_admissible_sequence_extends_wide_N_1} with $s' \not\in M'$, and Case \ref{item:lem_admissible_sequence_extends_wide_N_large_left_terminal}. Note that, in Case \ref{item:lem_admissible_sequence_extends_wide_N_large_left_terminal}, the tightness of $\sigma'$ implies that $\lambda(n_0) = 1$ and $\rho' = \mu$, therefore $\sigma$ is the sequence $s'$, $t^{(n_0)}$, $s_1,\dots,s_{\mu}$. Moreover, in this case the unique element $s'$ of $K^{(n_0)} \cap [y^{(n_0)}]$ does not belong to $\bigcup_{j=0}^{n_0} J^{(j)} = M'$, therefore $t^{(n_0)}$ cannot be adjacent to $[y^{(n_0)}] \cap M'$; hence $t^{(n_0)} \not\in M'$ by the property of $\sigma'$ in Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_J_in_M}). Note also that, in both cases, we have $s' \in J^{(n_0+1)}$ and $\{s'\}$ is an irreducible component of $[y^{(n_0)}]$.
We prove by induction on $0 \leq \nu \leq n_0$ that the sequence $\sigma$ is admissible of type $A_N$ with respect to $\mathcal{D}_{\nu}$ and $s' \in [y^{(\nu+1)}(\mathcal{D}_{\nu})]$, where \begin{displaymath} \mathcal{D}_{\nu} = \omega'_{\nu}\omega'_{\nu-1} \cdots \omega'_1\omega'_0 := (\omega_{n_0-\nu})^{-1}(\omega_{n_0-\nu+1})^{-1} \cdots (\omega_{n_0-1})^{-1}(\omega_{n_0})^{-1} \end{displaymath} is a semi-standard decomposition of an element with respect to $J^{(n_0+1)}$. Note that $y^{(j)}(\mathcal{D}_{\nu}) = y^{(n_0-j+1)}$, $J^{(j)}(\mathcal{D}_{\nu}) = J^{(n_0-j+1)}$, $t^{(j)}(\mathcal{D}_{\nu}) = t^{(n_0-j+1)}$ and $K^{(j)}(\mathcal{D}_{\nu}) = K^{(n_0-j+1)}$ for each index $j$. When $\nu = 0$, this claim follows immediately from the property of $\omega_{n_0}$ specified in Lemma \ref{lem:admissible_sequence_extends}, the properties of $\sigma'$, and the definition of $\sigma$. Suppose that $\nu > 0$. Note that $s' \in [y^{(\nu)}(\mathcal{D}_{\nu-1})]$ (which is equal to $[y^{(\nu)}(\mathcal{D}_{\nu})] = [y^{(n_0-\nu+1)}]$) by the induction hypothesis. First, we consider the case that $\omega'_{\nu}$ (or equivalently, $\omega_{n_0-\nu}$) is a wide transformation. In this case, the possibility of $\omega_{n_0-\nu}$ is as specified in the condition of $\sigma'$ in Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_wide_transformation}) (when $N \geq 2$) or Definition \ref{defn:admissible_for_N_1}(\ref{item:admissible_N_1_wide_transformation}) (when $N = 1$), where $h = n_0-\nu$; in particular, we have $K^{(n_0-\nu)} \smallsetminus \{t^{(n_0-\nu)}\} \subseteq M'$, therefore $[y^{(n_0-\nu+1)}] \smallsetminus M' = [y^{(n_0-\nu)}] \smallsetminus M'$.
Hence the element $s'$ of $[y^{(n_0-\nu+1)}] \smallsetminus M'$ belongs to $[y^{(n_0-\nu)}] = [y^{(\nu+1)}(\mathcal{D}_{\nu})]$, and the property of $\omega_{n_0-\nu} = (\omega'_{\nu})^{-1}$ implies that $\sigma$ is admissible of type $A_N$ with respect to $\mathcal{D}_{\nu}$ as well as $\mathcal{D}_{\nu-1}$. Secondly, we consider the case that $\omega'_{\nu}$ (or equivalently, $\omega_{n_0-\nu}$) is a narrow transformation. By applying Lemma \ref{lem:admissible_sequence_extends} (for $n = \nu$) to the pair $\mathcal{D}_{\nu}$, $\mathcal{D}_{\nu-1}$ and the sequence $\sigma$, it follows that either $[y^{(\nu+1)}(\mathcal{D}_{\nu})] = [y^{(\nu)}(\mathcal{D}_{\nu})]$, or the support of $\omega'_{\nu}$ is apart from $[y^{(\nu)}(\mathcal{D}_{\nu})] \cap \bigcup_{j=0}^{\nu} J^{(j)}(\mathcal{D}_{\nu})$. Now in the former case, we have $s' \in [y^{(\nu)}(\mathcal{D}_{\nu})] = [y^{(\nu+1)}(\mathcal{D}_{\nu})]$. On the other hand, in the latter case, we have $s' \in [y^{(\nu)}(\mathcal{D}_{\nu})] \cap \bigcup_{j=0}^{\nu} J^{(j)}(\mathcal{D}_{\nu})$ since $s' \in [y^{(\nu)}(\mathcal{D}_{\nu})]$ as above and $s' \in J^{(0)}(\mathcal{D}_{\nu}) = J^{(n_0+1)}$ by the choice of $s'$, therefore $s'$ is apart from the support of $\omega'_{\nu}$. Hence, it follows in any case that $s' \in [y^{(\nu+1)}(\mathcal{D}_{\nu})]$; and the property of $\omega_{n_0-\nu} = (\omega'_{\nu})^{-1}$ specified by the condition of $\sigma'$ in Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_narrow_transformation}) (when $N \geq 2$) or Definition \ref{defn:admissible_for_N_1}(\ref{item:admissible_N_1_narrow_transformation}) (when $N = 1$), where $h = n_0-\nu$, implies that $\sigma$ is admissible of type $A_N$ with respect to $\mathcal{D}_{\nu}$ as well as $\mathcal{D}_{\nu-1}$. Hence the claim of this paragraph follows. 
By using the result of the previous paragraph with $\nu = n_0$, the sequence $\sigma$ is admissible of type $A_N$ with respect to $\mathcal{D}_{n_0} = \mathcal{D}^{-1}$, hence with respect to $\mathcal{D}$ as well. This completes the proof. \end{proof} By virtue of the above result, our remaining task is finally to prove Lemma \ref{lem:admissible_sequence_extends} for $n = n_0$ by assuming Lemma \ref{lem:admissible_sequence_extends} for $0 \leq n < n_0$ and Lemma \ref{lem:admissible_sequence_existence_from_lemma} for $0 \leq n < n_0$ (in particular, with no assumptions when $n_0 = 0$). Put $M' := \{s_1,\dots,s_{\mu}\}$. In the proof, we may assume without loss of generality that the sequence $s_1,\dots,s_{\mu}$ (denoted here by $\sigma'$) which is admissible with respect to $\mathcal{D}'$ is tight (hence we have $J^{(0)} = M'$ when $n_0 = 0$). Now by Lemma \ref{lem:proof_special_second_case_transformations_wide}, the claim of Lemma \ref{lem:admissible_sequence_extends} holds for the case that $N = 1$ and $\omega_{n_0}$ is a wide transformation. From now, we consider the other case that either $N \geq 2$ or $\omega_{n_0}$ is a narrow transformation. Assume, to the contrary, that the claim of Lemma \ref{lem:admissible_sequence_extends} does not hold. 
Then, by Lemma \ref{lem:proof_special_second_case_transformations_wide}, Lemma \ref{lem:proof_special_second_case_transformations_narrow} and the properties of the tight sequence $\sigma'$ in Definition \ref{defn:admissible_for_N_large} (when $N \geq 2$) or Definition \ref{defn:admissible_for_N_1} (when $N = 1$), it follows that the possibilities for $\omega_{n_0}$ are as follows (up to symmetry): \begin{description} \item[Case (I):] $\omega_{n_0}$ is a narrow transformation, $K^{(n_0)}$ is of type $A_2$ or type $I_2(m)$ with $m$ odd, and we have $s_{\eta} \in K^{(n_0)} \cap [y^{(n_0)}]$ for some index $1 \leq \eta \leq \mu$; hence $t^{(n_0)} \not\in M'$, $K^{(n_0)} = \{s_{\eta},t^{(n_0)}\}$ and the action of $\omega_{n_0}$ exchanges the two elements of $K^{(n_0)}$. \item[Case (II):] $N \geq 2$, $\omega_{n_0}$ is a wide transformation, $K^{(n_0)}$ is of type $A_{N+2}$, and $t^{(n_0)}$ is adjacent to $s_{\lambda(n_0)}$ and the unique element $s'$ of $[y^{(n_0)}] \cap K^{(n_0)}$; hence the action of $\omega_{n_0}$ maps the elements $s_{\lambda(n_0)}$, $s_{\lambda(n_0)+1}$, $s_{\lambda(n_0)+2},\dots,s_{\rho(n_0)}$ and $s'$ to $s'$, $t^{(n_0)}$, $s_{\lambda(n_0)},\dots,s_{\rho(n_0)-2}$ and $s_{\rho(n_0)}$, respectively. Moreover, $t^{(n_0)} \not\in M'$, and \begin{description} \item[Case (II-1):] $s' = s_{j_0}$ for an index $\rho(n_0)+2 \leq j_0 \leq \mu$ with $j_0 \equiv \mu \pmod{2}$; \item[Case (II-2):] $\lambda(n_0) \geq 3$ and $s' \not\in \{s_{\lambda(n_0)-2},s_{\lambda(n_0)-1},\dots,s_{\mu}\}$; \item[Case (II-3):] $\lambda(n_0) \geq 3$ and $s' = s_{\lambda(n_0)-2}$. \end{description} \end{description} In particular, by the tightness of $\sigma'$, the conditions in the above four cases cannot be satisfied when $n_0 = 0$. Hence the claim holds when $n_0 = 0$. From now, we suppose that $n_0 > 0$. 
For each of the four cases, we determine an element $\overline{s} \in [y^{(n_0)}] \cap M'$ and an element $\overline{t} \in S \smallsetminus [y^{(n_0)}]$ in the following manner: $\overline{s} = s_{\eta}$ and $\overline{t} = t^{(n_0)}$ in Case (I); $\overline{s} = s_{j_0}$ and $\overline{t} = t^{(n_0)}$ in Case (II-1); $\overline{s} = s_{\lambda(n_0)-2}$ and $\overline{t} = s_{\lambda(n_0)-1}$ in Case (II-2); and $\overline{s} = s_{\lambda(n_0)-2}$ and $\overline{t} = t^{(n_0)}$ in Case (II-3). Note that $\overline{s}$ and $\overline{t}$ are adjacent by the definition. Since $\sigma'$ is tight, there exists an index $0 \leq h_0 \leq n_0-1$ with $\overline{s} \in J^{(h_0)}$; let $h_0$ be the largest index with this property. By the definition of $h_0$, $\omega_{h_0}$ is a wide transformation and $J^{(h_0+1)} \neq J^{(h_0)}$. Let $\overline{r}$ denote the element of $J^{(h_0+1)}$ with $\omega_{h_0} \ast \overline{s} = \overline{r}$. Then we have $\overline{r} \in [y^{(h_0)}]$ by the property of $\omega_{h_0}$ and the choice of $\overline{s}$. Let $\overline{\mathcal{D}} := \omega'_{n'-1} \cdots \omega'_1\omega'_0$ denote the simplification of \begin{displaymath} (\omega_{n_0-1} \cdots \omega_{h_0+2}\omega_{h_0+1})^{-1} = (\omega_{h_0+1})^{-1}(\omega_{h_0+2})^{-1} \cdots (\omega_{n_0-1})^{-1} \end{displaymath} (see Section \ref{sec:finitepart_decomposition_Y} for the terminology), and let $\overline{u}$ be the element of $W$ expressed by the product $\overline{\mathcal{D}}$. Here we present the following lemma: \begin{lem} \label{lem:proof_lem_admissible_sequence_extends_simplification} In this setting, the support of each transformation in $\overline{\mathcal{D}}$ does not contain $\overline{t}$ and is apart from $\overline{s}$. \end{lem} \begin{proof} We prove by induction on $0 \leq \nu' \leq n'-1$ that the support $K'$ of $\omega'_{\nu'}$ does not contain $\overline{t}$ and is apart from $\overline{s}$. 
Let $(\omega_{\nu})^{-1}$ be the term in $(\omega_{h_0+1})^{-1}(\omega_{h_0+2})^{-1} \cdots (\omega_{n_0-1})^{-1}$ corresponding to the term $\omega'_{\nu'}$ in the simplification $\overline{\mathcal{D}}$. First, by the definition of simplification and the property of narrow transformations specified in Definition \ref{defn:admissible_for_N_large} (when $N \geq 2$) or Definition \ref{defn:admissible_for_N_1} (when $N = 1$), $K'$ is apart from $[y^{(\nu')}(\overline{\mathcal{D}})] \cap M' = [y^{(\nu+1)}] \cap M'$ (see Lemma \ref{lem:another_decomposition_Y_reduce_redundancy} for the equality) if $\omega'_{\nu'}$ (or equivalently, $\omega_{\nu}$) is a narrow transformation. Now we have $\overline{s} \in [y^{(n_0)}] = [y^{(0)}(\overline{\mathcal{D}})]$ and $\overline{s} \in M'$ by the definition, therefore the induction hypothesis implies that $\overline{s} \in [y^{(\nu')}(\overline{\mathcal{D}})] \cap M'$. Hence $K'$ is apart from $\overline{s}$ if $\omega'_{\nu'}$ is a narrow transformation. This also implies that $\overline{t} \not\in K'$ if $\omega'_{\nu'}$ is a narrow transformation, since $\overline{t}$ is adjacent to $\overline{s}$. From now, we consider the other case that $\omega'_{\nu'}$ (or equivalently, $\omega_{\nu}$) is a wide transformation. Recall that $\overline{s} \in [y^{(\nu')}(\overline{\mathcal{D}})]$ as mentioned above. Then, by the property of wide transformation $\omega_{\nu}$ specified in Definition \ref{defn:admissible_for_N_large} (when $N \geq 2$) or Definition \ref{defn:admissible_for_N_1} (when $N = 1$) and the definition of simplification, it follows that $\overline{s} \in J^{(\nu'+1)}(\overline{\mathcal{D}})$ provided $K'$ is not apart from $\overline{s}$. On the other hand, by the definition of $h_0$, we have $\overline{s} \not\in J^{(j)}$ for any $h_0+1 \leq j \leq n_0$. This implies that $K'$ should be apart from $\overline{s}$; therefore we have $\overline{t} \not\in K'$, since $\overline{t}$ is adjacent to $\overline{s}$. 
Hence the proof of Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification} is concluded. \end{proof} Now, in all the cases except Case (II-2), the following property holds: \begin{lem} \label{lem:proof_lem_admissible_sequence_extends_root_other_cases} In Cases (I), (II-1) and (II-3), there exists a root $\beta \in \Pi^{[y^{(h_0)}]}$ in which the coefficient of $\alpha_{\overline{s}}$ is zero and the coefficient of $\alpha_{\overline{t}} = \alpha_{t^{(n_0)}}$ is non-zero. \end{lem} \begin{proof} First, Lemma \ref{lem:another_decomposition_Y_reduce_redundancy} implies that $\overline{u} \cdot \Pi_{J^{(n_0)}} = \Pi_{J^{(h_0+1)}}$ and $[y'] = [y^{(h_0+1)}]$ where $y' := y^{(n')}(\overline{\mathcal{D}})$. Put $r' := \overline{u}^{-1} \ast \overline{r} \in J^{(n_0)}$. Then by Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification} and Lemma \ref{lem:another_decomposition_Y_shift_x}, we have $\overline{u} \in Y_{z',z}$, where $z$ and $z'$ are elements of $S^{(\Lambda)}$ obtained from $y^{(0)}(\overline{\mathcal{D}}) = y^{(n_0)}$ and $y'$ by replacing the element $\overline{s}$ with $r'$ and $\overline{r}$, respectively. Now by the property of the wide transformation $\omega_{h_0}$, it follows that $y^{(h_0)}$ is obtained from $y^{(h_0+1)}$ by replacing $\overline{s}$ with $\overline{r}$; hence we have $[z'] = [y^{(h_0)}]$. We show that there exists a root $\beta' \in \Pi^{[z]}$ in which the coefficient of $\alpha_{\overline{s}}$ is zero and the coefficient of $\alpha_{t^{(n_0)}}$ is non-zero. In Case (I), $t^{(n_0)}$ is apart from both $[y^{(n_0)}] \smallsetminus \{\overline{s}\}$ and $J^{(n_0)}$, while we have $[z] \subseteq ([y^{(n_0)}] \smallsetminus \{\overline{s}\}) \cup J^{(n_0)}$ by the definition; hence $\beta' := \alpha_{t^{(n_0)}}$ satisfies the required condition. 
In Case (II-1), we have $\overline{r} = s_{\lambda(h_0+1)}$ by the property of $\omega_{h_0}$, therefore $r' = s_{\lambda(n_0)}$ by the property of wide transformations in $\overline{\mathcal{D}}$ (see Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_wide_transformation})). Put $\beta' := \alpha_{t^{(n_0)}} + \alpha_{s_{\lambda(n_0)}} + \alpha_{s_{\lambda(n_0)+1}} \in \Pi^{K^{(n_0)},\{r'\}}$ (note that $N \geq 2$ and $K^{(n_0)}$ is of type $A_{N+2}$). Now $K^{(n_0)}$ is apart from $[y^{(n_0)}] \smallsetminus \{\overline{s}\} = [z] \smallsetminus \{r'\}$, therefore we have $\beta' \in \Pi^{[z]}$ and $\beta'$ satisfies the required condition. Moreover, in Case (II-3), we have $\overline{r} = s_{\rho(h_0+1)}$ by the property of $\omega_{h_0}$, therefore $r' = s_{\rho(n_0)}$ by the property of wide transformations in $\overline{\mathcal{D}}$ (see Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_wide_transformation})). Now, since $N \geq 2$ and $K^{(n_0)}$ is of type $A_{N+2}$, $t^{(n_0)}$ is not adjacent to $r'$, while $K^{(n_0)}$ is apart from $[y^{(n_0)}] \smallsetminus \{\overline{s}\} = [z] \smallsetminus \{r'\}$. Hence $\beta' := \alpha_{t^{(n_0)}}$ satisfies the required condition. By Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification}, the action of $\overline{u}$ does not change the coefficients of $\alpha_{\overline{s}}$ and $\alpha_{\overline{t}}$. Hence by the result of the previous paragraph, the root $\beta := \overline{u} \cdot \beta' \in \Pi^{[z']} = \Pi^{[y^{(h_0)}]}$ satisfies the required condition, concluding the proof of Lemma \ref{lem:proof_lem_admissible_sequence_extends_root_other_cases}. 
\end{proof} Since $\overline{t} \not\in J^{(h_0)}$ and $\overline{t}$ is adjacent to $\overline{s}$, the root $\beta \in \Pi^{[y^{(h_0)}]}$ given by Lemma \ref{lem:proof_lem_admissible_sequence_extends_root_other_cases} does not belong to $\Pi_{J^{(h_0)}}$ and is not orthogonal to $\alpha_{\overline{s}}$. However, since $\overline{s} \in J^{(h_0)}$, this contradicts the fact that $\Pi_{J^{(h_0)}}$ is an irreducible component of $\Pi^{[y^{(h_0)}]}$ (see Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_J_irreducible_component}) when $N \geq 2$, or Definition \ref{defn:admissible_for_N_1}(\ref{item:admissible_N_1_J_irreducible_component}) when $N = 1$). Hence we have derived a contradiction in the three cases in Lemma \ref{lem:proof_lem_admissible_sequence_extends_root_other_cases}. From now, we consider the remaining case, i.e., Case (II-2). In this case, the following property holds: \begin{lem} \label{lem:proof_lem_admissible_sequence_extends_simplification_2} In this setting, the support of each transformation in $\overline{\mathcal{D}}$ does not contain $t^{(n_0)}$ and is apart from $s'$. \end{lem} \begin{proof} For each $0 \leq i \leq n_0 - h_0 - 1$, let $\mathcal{D}_i$ denote the semi-standard decomposition of an element defined by \begin{displaymath} \mathcal{D}_i = \omega''_i \cdots \omega''_1\omega''_0 := (\omega_{n_0-i})^{-1} \cdots (\omega_{n_0-1})^{-1}(\omega_{n_0})^{-1} \enspace. \end{displaymath} For each $0 \leq i \leq n_0 - h_0 - 1$, let $\sigma_i$ denote the sequence $s'$, $t^{(n_0)}$, $s_{\lambda(n_0)}$, $s_{\lambda(n_0)+1},\dots,s_{\overline{\rho}(i)}$, where $\overline{\rho}(i)$ denotes the largest index $\lambda(n_0) \leq \overline{\rho}(i) \leq \mu$ with $s_{\overline{\rho}(i)} \in \bigcup_{j=0}^{i+1} J^{(j)}(\mathcal{D}_i)$ ($= \bigcup_{j=n_0-i}^{n_0+1} J^{(j)}$). 
We prove the following properties by induction on $1 \leq i \leq n_0 - h_0 - 1$: The sequence $\sigma_i$ is admissible with respect to $\mathcal{D}_i$; we have $s' \in [y^{(i+1)}(\mathcal{D}_i)]$; and we have either $[y^{(i+1)}(\mathcal{D}_i)] = [y^{(i)}(\mathcal{D}_i)]$ and $J^{(i+1)}(\mathcal{D}_i) = J^{(i)}(\mathcal{D}_i)$, or the support $K'' = K^{(i)}(\mathcal{D}_i)$ of $\omega''_i$ is apart from $s'$. Note that, by the properties of $\omega_{n_0}$ and $\sigma'$, we have $s' \in [y^{(n_0)}] = [y^{(1)}(\mathcal{D}_0)]$, and the sequence $\sigma_0$ (which is $s'$, $t^{(n_0)}$, $s_{\lambda(n_0)},\dots,s_{\rho(n_0)}$) is admissible with respect to $\mathcal{D}_0$. By the induction hypothesis and Lemma \ref{lem:admissible_sequence_extends} for $n = i$ applied to the sequence $\sigma_{i-1}$ and the pair $\mathcal{D}_i$ and $\mathcal{D}_{i-1}$ (note that $i \leq n_0 - h_0 - 1 \leq n_0 - 1$), it follows that the possibilities of $\omega''_i = (\omega_{n_0-i})^{-1}$ are as listed in Lemma \ref{lem:admissible_sequence_extends}. Now if $\omega''_i$ is a narrow transformation, then as in Case \ref{item:lem_admissible_sequence_extends_narrow} of Lemma \ref{lem:admissible_sequence_extends}, we have either $[y^{(i+1)}(\mathcal{D}_i)] = [y^{(i)}(\mathcal{D}_i)]$, or $K''$ is apart from $s'$ (note that $s' \in [y^{(i)}(\mathcal{D}_i)]$ by the induction hypothesis, while $s' \in J^{(0)}(\mathcal{D}_i) = J^{(n_0+1)}$). On the other hand, suppose that $\omega''_i = (\omega_{n_0-i})^{-1}$ is a wide transformation. Then, by the property of $\sigma'$, the support $K''$ of the wide transformation $\omega_{n_0-i}$ is contained in $M'$, therefore $s' \not\in K''$. This implies that $K''$ is apart from $s'$, since we have $s' \in [y^{(i)}(\mathcal{D}_i)]$ by the induction hypothesis. Moreover, in any case of $\omega''_i$, we have $s' \in [y^{(i+1)}(\mathcal{D}_i)]$ by the above-mentioned fact $s' \in [y^{(i)}(\mathcal{D}_i)]$ and the above argument. 
On the other hand, the sequence $\sigma$ in Lemma \ref{lem:admissible_sequence_existence_from_lemma} corresponding to the current case is equal to $\sigma_i$, therefore $\sigma_i$ is admissible with respect to $\mathcal{D}_i$ by Lemma \ref{lem:admissible_sequence_existence_from_lemma} for $n = i$ (note again that $i \leq n_0 - 1$). Hence the claim of the previous paragraph holds. By the above result, the simplification $\overline{\mathcal{D}} = \omega'_{n'-1} \cdots \omega'_0$ of $\omega''_{n_0-h_0-1} \cdots \omega''_2\omega''_1$ satisfies the following conditions: For each $0 \leq \nu' \leq n'-1$, we have $s' \in [y^{(\nu')}(\overline{\mathcal{D}})]$, and the support of $\omega'_{\nu'}$ is apart from $s'$. Since $t^{(n_0)}$ is adjacent to $s'$, this implies that the support of each $\omega'_{\nu'}$ does not contain $t^{(n_0)}$. Hence the proof of Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification_2} is concluded. \end{proof} By Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification_2}, we have $s' \in [y^{(n')}(\overline{\mathcal{D}})] = [y^{(h_0+1)}]$, therefore the set $K^{(h_0)}$ of type $A_{N+2}$ consisting of $s_{\lambda(n_0)-2}$, $s_{\lambda(n_0)-1},\dots,s_{\rho(n_0)}$ is apart from $s'$. On the other hand, since $s_{\lambda(n_0)-2} \in [y^{(n_0)}]$, the set $K^{(n_0)}$ of type $A_{N+2}$ is apart from $s'$. From now, by using these properties, we construct a root $\beta' \in \Pi^{[y^{(n_0)}]} \smallsetminus \Pi_{J^{(n_0)}}$ which is not orthogonal to $\alpha_{s_{\lambda(n_0)+1}} \in \Pi_{J^{(n_0)}}$ (note that $N \geq 2$), in the following five steps. \textbf{Step 1.} Note that the set $K^{(n_0)}$ is apart from $[y^{(n_0)}] \smallsetminus K^{(n_0)}$. Put $z^{(0)} := y^{(n_0)}$. Then we have $u_1 := w_{z^{(0)}}^{t^{(n_0)}} = s' t^{(n_0)} \in Y_{z^{(1)},z^{(0)}}$, where $z^{(1)} \in S^{(\Lambda)}$ is obtained from $z^{(0)}$ by replacing $s'$ with $t^{(n_0)}$. 
Similarly, we have $u_2 := w_{z^{(1)}}^{s_{\lambda(n_0)}} = t^{(n_0)} s_{\lambda(n_0)} \in Y_{z^{(2)},z^{(1)}}$, where $z^{(2)} \in S^{(\Lambda)}$ is obtained from $z^{(1)}$ by replacing $t^{(n_0)}$ with $s_{\lambda(n_0)}$. Now, since $\beta_0 := \alpha_{s_{\lambda(n_0)}}$ and $\beta'_0 := \alpha_{s_{\lambda(n_0)+1}}$ are non-orthogonal elements of $\Pi_{J^{(n_0)}} \subseteq \Pi^{[z^{(0)}]}$, the roots $\beta_2 := u_2u_1 \cdot \beta_0 = \alpha_{s'}$ and $\beta'_2 := u_2u_1 \cdot \beta'_0 = \alpha_{t^{(n_0)}} + \alpha_{s_{\lambda(n_0)}} + \alpha_{s_{\lambda(n_0)+1}}$ are non-orthogonal elements of $\Pi^{[z^{(2)}]}$. \textbf{Step 2.} By the construction, $z^{(2)}$ is obtained from $y^{(n_0)}$ by replacing $s'$ with $s_{\lambda(n_0)}$. On the other hand, we have $J^{(n_0)} = J^{(h_0+1)}$ and $\overline{u} \ast s_{\lambda(n_0)} = s_{\lambda(n_0)}$ by the property of wide transformations in $\overline{\mathcal{D}}$. Now by Lemma \ref{lem:another_decomposition_Y_shift_x}, we have $u_3 := \overline{u} \in Y_{z^{(3)},z^{(2)}}$, where $z^{(3)} \in S^{(\Lambda)}$ is obtained from $y^{(n')}(\overline{\mathcal{D}})$ by replacing $s'$ with $s_{\lambda(n_0)}$. Note that $[z^{(3)}] = ([y^{(n')}(\overline{\mathcal{D}})] \smallsetminus \{s'\}) \cup \{s_{\lambda(n_0)}\} = ([y^{(h_0+1)}] \smallsetminus \{s'\}) \cup \{s_{\lambda(n_0)}\}$. Put $\beta_3 := u_3 \cdot \beta_2$ and $\beta'_3 := u_3 \cdot \beta'_2$. Then we have $\beta_3,\beta'_3 \in \Pi^{[z^{(3)}]}$ and $\langle \beta_3,\beta'_3 \rangle \neq 0$. Moreover, by Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification} and Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification_2}, $u_3$ fixes $\alpha_{s'}$, hence $\beta_3 = \alpha_{s'}$; and the action of $u_3$ does not change the coefficients of $\alpha_{s'}$, $\alpha_{t^{(n_0)}}$, $\alpha_{\overline{s}}$ and $\alpha_{\overline{t}}$, hence the coefficients of these four simple roots in $\beta'_3$ are $0$, $1$, $0$ and $0$, respectively. 
This also implies that the coefficient of $\alpha_{s_{\lambda(n_0)}}$ in $\beta'_3$ is non-zero, since $t^{(n_0)}$ is adjacent to $s_{\lambda(n_0)} \in [z^{(3)}]$. \textbf{Step 3.} Note that the set $K^{(h_0)}$ is apart from $[y^{(h_0+1)}] \smallsetminus K^{(h_0)}$, hence from $[z^{(3)}] \smallsetminus K^{(h_0)}$. Then we have $u_4 := w_{z^{(3)}}^{\overline{t}} = \overline{t} s_{\lambda(n_0)} \overline{s} \overline{t} \in Y_{z^{(4)},z^{(3)}}$, where $z^{(4)} \in S^{(\Lambda)}$ is obtained from $z^{(3)}$ by exchanging $s_{\lambda(n_0)}$ and $\overline{s}$. Now we have $\beta_4 := u_4 \cdot \beta_3 = \alpha_{s'} \in \Pi^{[z^{(4)}]}$, $\beta'_4 := u_4 \cdot \beta'_3 \in \Pi^{[z^{(4)}]}$ and $\langle \beta_4,\beta'_4 \rangle \neq 0$. Moreover, by the property of coefficients in $\beta'_3$ mentioned in Step 2 and the fact that $\overline{t}$ is adjacent to $s_{\lambda(n_0)}$ and $\overline{s}$, it follows that the coefficient of $\alpha_{\overline{s}}$ in $\beta'_4$ is non-zero. \textbf{Step 4.} Since $[z^{(4)}] = [z^{(3)}]$, there exists an element $z^{(5)} \in S^{(\Lambda)}$ satisfying that $[z^{(5)}] = [z^{(2)}]$ and $u_5 := \overline{u}{}^{-1} \in Y_{z^{(5)},z^{(4)}}$. We have $\beta_5 := u_5 \cdot \beta_4 \in \Pi^{[z^{(5)}]}$, $\beta'_5 := u_5 \cdot \beta'_4 \in \Pi^{[z^{(5)}]}$ and $\langle \beta_5,\beta'_5 \rangle \neq 0$. Now by Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification} and Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification_2}, $u_5$ fixes $\alpha_{s'}$, hence $\beta_5 = \alpha_{s'}$; and the action of $u_5$ does not change the coefficient of $\alpha_{\overline{s}}$, hence the coefficient of $\alpha_{\overline{s}}$ in $\beta'_5$ is non-zero. \textbf{Step 5.} Put $u_6 := u_2{}^{-1}$ and $u_7 := u_1{}^{-1}$. Since $[z^{(5)}] = [z^{(2)}]$ as above, there exists an element $z^{(7)} \in S^{(\Lambda)}$ satisfying that $[z^{(7)}] = [z^{(0)}] = [y^{(n_0)}]$ and $u_7u_6 \in Y_{z^{(7)},z^{(5)}}$. 
Now we have $\beta_7 := u_7u_6 \cdot \beta_5 = \alpha_{s'}$, since $\beta_5 = \beta_2$. On the other hand, put $\beta'_7 := u_7u_6 \cdot \beta'_5$. Then we have $\beta'_7 \in \Pi^{[z^{(7)}]} = \Pi^{[y^{(n_0)}]}$ and $\langle \beta_7,\beta'_7 \rangle \neq 0$. Moreover, since $u_7u_6 \in W_{S \smallsetminus \{\overline{s}\}}$, the coefficient of $\alpha_{\overline{s}}$ in $\beta'_7$ is the same as the coefficient of $\alpha_{\overline{s}}$ in $\beta'_5$, which is non-zero as mentioned in Step 4. Hence we have constructed a root $\beta' = \beta'_7$ satisfying the above condition. However, this contradicts the fact that $\Pi_{J^{(n_0)}}$ is an irreducible component of $\Pi^{[y^{(n_0)}]}$ (see Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_J_irreducible_component})). Summarizing, we have derived a contradiction in any of the four cases, Case (I)--Case (II-3), therefore Lemma \ref{lem:admissible_sequence_extends} for $n = n_0$ holds. Hence our claim has been proven in the case $\Pi^{J,I \cap J} \subseteq \Phi_{I^{\perp}}$. This completes the proof of Theorem \ref{thm:YfixesWperpIfin}. \section{A counterexample for the general case} \label{sec:counterexample} In this section, we present an example which shows that our main theorem, Theorem \ref{thm:YfixesWperpIfin}, will not generally hold when the assumption on the $A_{>1}$-freeness of $I \subseteq S$ is removed. We consider a Coxeter system $(W,S)$ of rank $7$ with Coxeter graph $\Gamma$ in Figure \ref{fig:counterexample}, where the vertex labelled by an integer $i$ corresponds to a generator $s_i \in S$. Put $I = \{s_4,s_5\}$ which is of type $A_2$ (hence is not $A_{>1}$-free). 
\begin{figure} \caption{Coxeter graph $\Gamma$ and subset $I \subseteq S$ for the counterexample; here the two duplicated circles correspond to $I = \{s_4,s_5\}$} \label{fig:counterexample} \end{figure} To determine the simple system $\Pi^I$ of $W^{\perp I}$, Proposition \ref{prop:factorization_C}(\ref{item:prop_factorization_C_generator_perp}) implies that each element of $\Pi^I$ is written as $u \cdot \gamma(y,s)$, where $y \in S^{(\Lambda)}$, $u \in Y_{x_I,y}$, $s \in S \smallsetminus [y]$, $[y]_{\sim s}$ is of finite type, $\varphi(y,s) = y$, and $\gamma(y,s)$ is the unique element of $(\Phi_{[y] \cup \{s\}}^{\perp [y]})^+$ as in Proposition \ref{prop:charofBphi}. In this case, the element $u^{-1} \in Y_{y,x_I}$ admits a decomposition as in Proposition \ref{prop:factorization_C}(\ref{item:prop_factorization_C_generator_Y}). In particular, such an element $y$ can be obtained from $x_I$ by applying a finite number of operations of the form $z \mapsto \varphi(z,t)$ with an appropriate element $t \in S$. Table \ref{tab:list_counterexample} gives a list of all the elements $y \in S^{(\Lambda)}$ obtained in this way. In the second and the fourth columns of the table, we abbreviate each $s_i$ ($1 \leq i \leq 7$) to $i$ for simplicity. This table shows, for each $y$, all the elements $t \in S \smallsetminus [y]$ satisfying that $[y]_{\sim t}$ is of finite type and $\varphi(y,t) \neq y$, as well as the corresponding element $\varphi(y,t) \in S^{(\Lambda)}$ (more precisely, the subset $[\varphi(y,t)]$ of $S$). Now the list of the $y$ in the table is closed under the operations $y \mapsto \varphi(y,t)$ and contains the starting point $x_I$ (No.~I in Table \ref{tab:list_counterexample}); therefore the list is indeed a complete list of the possible $y$. \begin{table}[hbt] \centering \caption{List for the counterexample} \label{tab:list_counterexample} \begin{tabular}{|c||c|c||c|c|} \hline No. 
& $[y]$ & $\gamma \in \Phi^{\perp [y]}$ & $t$ & $\varphi(y,t)$ \\ \hline I & $\{4,5\}$ & $[10|0\underline{00}|00]$, $[01|0\underline{00}|00]$ & $3$ & II \\ \cline{4-5} & & & $6$ & III \\ \cline{4-5} & & & $7$ & IV \\ \hline II & $\{3,4\}$ & $[10|\underline{11}1|00]$, $[01|\underline{11}1|00]$ & $1$ & V \\ \cline{4-5} & & & $2$ & VI \\ \cline{4-5} & & & $5$ & I \\ \hline III & $\{5,6\}$ & $[10|00\underline{0}|\underline{0}0]$, $[01|00\underline{0}|\underline{0}0]$ & $4$ & I \\ \cline{4-5} & & & $7$ & IV \\ \hline IV & $\{5,7\}$ & $[10|00\underline{0}|0\underline{0}]$, $[01|00\underline{0}|0\underline{0}]$ & $4$ & I \\ \cline{4-5} & & & $6$ & III \\ \hline V & $\{1,3\}$ & $[\underline{0}0|\underline{0}01|00]$, $[\underline{1}1|\underline{2}21|00]$ & $2$ & VI \\ \cline{4-5} & & & $4$ & II \\ \hline VI & $\{2,3\}$ & $[\underline{0}0|\underline{0}01|00]$, $[\underline{1}1|\underline{2}21|00]$ & $1$ & V \\ \cline{4-5} & & & $4$ & II \\ \hline \end{tabular} \end{table} On the other hand, Table \ref{tab:list_counterexample} also includes some elements of $(\Phi^{\perp [y]})^+$ for each possible $y \in S^{(\Lambda)}$. In the third column of the table, we abbreviate a root $\sum_{i=1}^{7} c_i \alpha_{s_i}$ to $[c_1c_2|c_3c_4c_5|c_6c_7]$. Moreover, a line is drawn under the coefficient $c_i$ of $\alpha_{s_i}$ if $s_i$ belongs to $[y]$. Now for each $y$, each root $\gamma \in (\Phi^{\perp [y]})^+$ and each $t$ appearing in the table, the root $w_y^t \cdot \gamma \in (\Phi^{\perp [\varphi(y,t)]})^+$ also appears in the row corresponding to the element $\varphi(y,t) \in S^{(\Lambda)}$. Moreover, for each $y$ in the table, if an element $s \in S \smallsetminus [y]$ satisfies that $[y]_{\sim s}$ is of finite type and $\varphi(y,s) = y$, then the corresponding root $\gamma(y,s)$ always appears in the row corresponding to the $y$. 
By these properties, the above-mentioned characterization of the elements of $\Pi^I$ and the decompositions of elements of $Y_{x_I,y}$ given by Proposition \ref{prop:factorization_C}(\ref{item:prop_factorization_C_generator_Y}), it follows that all the elements of $\Pi^I$ indeed appear in the list. Hence we have $\Pi^I = \{\alpha_{s_1},\alpha_{s_2}\}$ (see the row I in Table \ref{tab:list_counterexample}), therefore both elements of $\Pi^I$ satisfy that the corresponding reflection belongs to $W^{\perp I}{}_{\mathrm{fin}}$. Moreover, we consider the following sequence of operations: \begin{displaymath} \begin{split} x_I {}:={} & (s_4,s_5) \overset{3}{\to} (s_3,s_4) \overset{1}{\to} (s_1,s_3) \overset{2}{\to} (s_3,s_2) \overset{4}{\to} (s_4,s_3) \\ &\overset{5}{\to} (s_5,s_4) \overset{6}{\to} (s_6,s_5) \overset{7}{\to} (s_5,s_7) \overset{4}{\to} (s_4,s_5) = x_I \enspace, \end{split} \end{displaymath} where we write $z \overset{i}{\to} z'$ to signify the operation $z \mapsto z' = \varphi(z,s_i)$. Then a direct calculation shows that the element $w$ of $Y_I$ defined by the product of the elements $w_z^t$ corresponding to the above operations satisfies that $w \cdot \alpha_{s_1} = \alpha_{s_2}$. Hence the conclusion of Theorem \ref{thm:YfixesWperpIfin} does not hold in this case where the assumption on the $A_{>1}$-freeness of $I$ is not satisfied. \noindent \textbf{Koji Nuida}\\ Present address: Research Institute for Secure Systems, National Institute of Advanced Industrial Science and Technology (AIST), AIST Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568, Japan\\ E-mail: k.nuida[at]aist.go.jp \end{document}
\begin{document} \title{A Budget-Adaptive Allocation Rule for Optimal Computing Budget Allocation} \author{Zirui Cao, Haowei Wang, Ek Peng Chew, Haobin Li, and Kok Choon Tan \thanks{ This research paper has been made possible by the funding support from the Singapore Maritime Institute \& the Center of Excellence in Modelling and Simulation for Next Generation Ports (Singapore Maritime Institute grant: SMI-2022-SP-02). } \thanks{Z. Cao, E.P. Chew, and H. Li are with the Department of Industrial Systems Engineering and Management, National University of Singapore, Singapore, CO 117576 Singapore (e-mail: [email protected]; [email protected]; li\[email protected]). } \thanks{H. Wang is with the Rice-Rick Digitalization PTE. Ltd., Singapore, CO 308900 Singapore (e-mail: haowei\[email protected]). } \thanks{K.C. Tan is with the Department of Analytics and Operations, National University of Singapore, Singapore, CO 119245 Singapore (e-mail: [email protected]). }} \maketitle \begin{abstract} Simulation-based ranking and selection (R\&S) is a popular technique for optimizing discrete-event systems (DESs). It evaluates the mean performance of system designs by simulation outputs and aims to identify the best system design from a finite set of alternatives by intelligently allocating a limited simulation budget. In R\&S, the optimal computing budget allocation (OCBA) is an efficient budget allocation rule that asymptotically maximizes the probability of correct selection (PCS). However, the OCBA allocation rule ignores the impact of budget size, which plays an important role in finite budget allocation. To address this, we develop a budget allocation rule that is adaptive to the simulation budget. Theoretical results show that the proposed allocation rule can dynamically determine the ratio of budget allocated to designs based on different simulation budgets and achieve asymptotic optimality. 
Furthermore, the finite-budget properties possessed by our proposed allocation rule highlight the significant differences between finite budget and sufficiently large budget allocation strategies. Based on the proposed budget-adaptive allocation rule, two heuristic algorithms are developed. In the numerical experiments, we use both synthetic examples and a case study to show the superior efficiency of our proposed allocation rule. \end{abstract} \begin{IEEEkeywords} Budget-adaptive allocation rule, discrete-event systems, optimal computing budget allocation (OCBA), ranking and selection, simulation. \end{IEEEkeywords} \section{Introduction} \label{sec:introduction} \IEEEPARstart{D}{iscrete}-event systems (DESs) are a widely-used technical abstraction for complex systems (see \cite{zeigler2000theory}), such as traffic control systems, manufacturing systems, and communication systems. When the complexity of DESs is high and analytical models are unavailable, a powerful tool for evaluating the performance of DESs is discrete-event system simulation (see \cite{banks2005discrete}). In this paper, we consider the problem of identifying the best system design from a finite set of competing alternatives, where “best” is defined with respect to the smallest or largest mean performance. The performance of each design is unknown and can be learnt from samples, i.e., from the sample mean of simulation outputs. Such a problem is often called a statistical ranking and selection (R\&S) problem (see \cite{kim2006selecting,chen2011stochastic,fu2015handbook}) or an Ordinal Optimization (OO) problem (see \cite{ho1992ordinal} and \cite{ho2008ordinal}). In R\&S, sampling efficiency is of significant concern, as the simulation budget is often limited due to the high expense of simulation. 
For example, the running time for a single simulation replication of the 24-hour dynamics of the scheduling process in a transportation network with 20 intersections can be about 2 hours, and it can take 30 minutes to obtain an accurate estimate of a maintenance strategy's average cost by running 1,000 independent simulation replications for a re-manufacturing system (see \cite{ho2008ordinal}). With finite simulation replications, it is impossible to guarantee that a correct selection of the best design occurs with probability 1. This nature of the problem motivates the need to implement R\&S techniques that intelligently allocate simulation replications to designs for efficiently identifying the best design. In our problem, we consider a fixed-budget setting, and the probability of correct selection (PCS), a primary criterion in the R\&S literature, is used to measure the quality of budget allocation rules. The goal is to derive a budget allocation rule that can maximize the PCS subject to a simulation budget constraint. Although the simulation budget is limited and of vital importance, many R\&S algorithms allocate the simulation budget by either asymptotically optimal or one-step-ahead optimal allocation rules, neither of which can adapt to the simulation budget. Intuitively, we argue that a desirable budget allocation rule should be adaptive to the simulation budget. We use an example to illustrate the significant impact of the simulation budget on the optimal budget allocation rule. Suppose that there are three designs (designs 1, 2, and 3) with normal sampling distributions $N(1,6^2)$, $N(2,6^2)$, and $N(3,6^2)$, respectively. The optimal budget allocation ratios, which maximize the PCS, are derived by an exhaustive search, and the PCS is estimated through Monte Carlo simulation. As shown in Table \ref{table: optimal small budget allocation}, the optimal budget allocation ratios can be drastically different when the simulation budget changes.
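The kind of Monte Carlo estimation described above can be sketched as follows (a minimal Python illustration using the three-design example; the function name and parameters are ours, not from the paper):

```python
import numpy as np

def mc_pcs(w, T, mu, sigma, reps=2000, seed=0):
    """Estimate the PCS by Monte Carlo: draw the sample mean of each design
    directly from N(mu_i, sigma_i^2 / N_i), with N_i = floor(w_i * T)."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, float)
    sigma = np.asarray(sigma, float)
    n = np.maximum(np.floor(np.asarray(w) * T), 1.0)  # at least one replication
    means = rng.normal(mu, sigma / np.sqrt(n), size=(reps, len(mu)))
    # Correct selection: the observed best (smallest sample mean) is the real best.
    return float(np.mean(np.argmin(means, axis=1) == np.argmin(mu)))

mu, sigma = [1.0, 2.0, 3.0], [6.0, 6.0, 6.0]
equal = [1/3, 1/3, 1/3]
# Under a fixed allocation, the PCS grows with the simulation budget T.
pcs_small = mc_pcs(equal, 20, mu, sigma)
pcs_large = mc_pcs(equal, 500, mu, sigma)
```

Sweeping `w` over a grid of simplex points at each budget and keeping the maximizer reproduces the kind of exhaustive search summarized in Table \ref{table: optimal small budget allocation}.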
This observation is also consistent with theoretical analyses of the optimal budget allocation rules under different simulation budgets. The optimal computing budget allocation (OCBA) asymptotically maximizes the PCS, and it tends to allocate large budget allocation ratios to competitive designs, where competitive designs include the best design and non-best designs that are hard to distinguish from the best (see \cite{chen2000simulation}). However, when the simulation budget is relatively small, assigning large budget allocation ratios to competitive designs, according to the OCBA allocation rule, may decrease the PCS (see \cite{peng2015non}). Such a scenario is referred to as the low-confidence scenario (see \cite{peng2017gradient}) and also takes place in the expected value of information (EVI) in \cite{chick2010sequential} and knowledge gradient (KG) policies in \cite{frazier2008knowledge}. To avoid the decrease of PCS, the budget allocation ratios of competitive designs should be discounted and the budget allocation ratios of non-competitive designs should be increased (see \cite{peng2017gradient} and \cite{shi2022dynamic}). This counter-intuitive result emphasizes the significant impact of the simulation budget on the budget allocation rule. It motivates the need to derive a desirable budget allocation rule that considers and adapts to the simulation budget. \begin{table}[thp] \caption{Optimal static budget allocation ratios for three alternative designs under different simulation budgets} \label{table: optimal small budget allocation} \centering \begin{tabular}{@{}cccc@{}} \hline Simulation budget & Design 1 & Design 2 & Design 3 \\ \hline 20 & 0.100 & 0.350 & 0.550 \\ 50 & 0.320 & 0.360 & 0.320 \\ 200 & 0.400 & 0.365 & 0.235 \\ 500 & 0.436 & 0.396 & 0.168 \\ \hline \end{tabular} \end{table} In this work, we consider a simulation-based R\&S problem under a fixed-budget setting. We formulate the budget allocation problem as an OCBA problem.
Instead of assuming a sufficiently large simulation budget like OCBA does, we consider a finite simulation budget and derive an allocation rule that is adaptive to the simulation budget. Theoretical results show that, compared with the OCBA allocation rule in \cite{chen2000simulation}, our proposed allocation rule can discount the budget allocation ratios of non-best designs that are hard to distinguish from the best design while increasing the budget allocation ratios of those that are easy to distinguish. These adjustments are based on the simulation budget. When the simulation budget goes to infinity, the proposed allocation rule reduces to the OCBA allocation rule, which is asymptotically optimal. The finite-budget properties possessed by our proposed allocation rule highlight the significant differences between finite budget and sufficiently large budget allocation strategies (see \cite{peng2015non} and \cite{shi2022dynamic}). To summarize, the main contributions of this paper are as follows: \begin{itemize} \item[1.] We incorporate the simulation budget into the budget allocation rule and explicitly present the distinct behavior of the proposed budget-adaptive allocation rule under both finite and sufficiently large budgets. The finite-budget properties possessed by our proposed allocation rule are important as the simulation budget is often limited in practice. \item[2.] Based on two approaches implementing the proposed budget-adaptive allocation rule, we design two heuristic algorithms, called final-budget anchorage allocation (FAA) and dynamic anchorage allocation (DAA). Specifically, FAA fixes the simulation budget to maximize the final PCS while DAA dynamically anchors the next simulation replication to maximize the PCS after the additional allocation.
\end{itemize} \section{Related Literature} \label{related literature} In simulation optimization, the objective is to select the best design among a finite set of alternatives with respect to a performance metric, e.g., PCS. Due to the noisy simulation outputs, it is impossible to surely identify the best design within finite observations. Therefore, a strategy that intelligently allocates simulation replications among designs needs to be developed. This problem falls into the actively studied ranking and selection (R\&S) problem. There are two branches of problem settings in the R\&S literature: the fixed-confidence setting and the fixed-budget setting. Fixed-confidence R\&S primarily focuses on the indifference zone (IZ) formulation and tries to guarantee a pre-specified level of PCS while using as little simulation budget as possible (see \cite{kim2001fully}). In later work, the IZ formulation is implemented to develop frequentist procedures that can adapt to the fully sequential setting (see \cite{hong2005tradeoff,batur2006fully,hong2007selecting}). An indifference-zone-free procedure that does not require an IZ parameter is proposed in \cite{fan2016indifference}. More recently, IZ procedures for large-scale R\&S problems in parallel computing environments have been developed (see \cite{luo2015fully,zhong2022knockout,hong2022solving}). The fixed-budget R\&S procedures are designed to optimize a certain kind of performance metric by efficiently allocating a fixed simulation budget.
In the fixed-budget setting, there are procedures that allocate a simulation budget according to an asymptotically optimal allocation rule, such as OCBA (see \cite{chen2000simulation}), the large deviation allocation (see \cite{glynn2004large}), and the optimal expected opportunity cost allocation (OEA) (see \cite{gao2016new}); and procedures that myopically maximize the expected one-step-ahead improvement, such as the expected value of information (EVI) (see \cite{chick2010sequential}), the knowledge gradient (see \cite{frazier2008knowledge}), and the approximately optimal myopic allocation policy (AOMAP) (see \cite{peng2016myopic}). In particular, the approximately optimal allocation policy (AOAP) achieves both one-step-ahead optimality and asymptotic optimality (see \cite{peng2018ranking}). In most cases, fixed-budget R\&S procedures require less simulation budget than fixed-confidence R\&S procedures to achieve the same level of PCS due to their better adaptiveness to the observed simulation outputs; however, they cannot provide a statistical guarantee as fixed-confidence R\&S procedures do. There is a unique stream of literature in R\&S focusing on the asymptotic behavior of allocation rules. The premise of these allocations is that if such allocations perform optimally when the simulation budget is sufficiently large, then they should also have satisfactory performances when the simulation budget is finite. OCBA is such a typical method that allocates a simulation budget according to an asymptotically optimal allocation rule when sampling distributions are normal (see \cite{chen2000simulation}). In later work, Glynn and Juneja \cite{glynn2004large} apply large deviation theory, extending the analysis to a more general setting where sampling distributions are non-Gaussian.
Gao and Chen \cite{gao2016new} present a budget allocation rule that uses the expected opportunity cost (EOC) as the quality measure of their procedure and is shown to be asymptotically optimal. Peng \textit{et al.} \cite{peng2016dynamic} formulate the problem in a stochastic dynamic program framework and derive an approximately optimal design selection policy as well as an asymptotically optimal budget allocation policy. Furthermore, this stream of methods, which explores the asymptotic behavior of allocations, has been extended to solve many variants of the R\&S problem, such as the subset selection problem (e.g., \cite{gao2016new}, and \cite{chen2008efficient,zhang2015simulation,zhang2022efficient}), ranking and selection with input uncertainty (e.g., \cite{gao2017robust} and \cite{xiao2018simulation}), ranking and selection with multiple objectives (e.g., \cite{lee2010finding}), stochastically constrained ranking and selection (e.g., \cite{hunter2013optimal} and \cite{pasupathy2014stochastically}), and contextual ranking and selection (e.g., \cite{li2022efficient}). The most common simplification made by such methods is to consider a sufficiently large budget, which leads to solving a simplified budget allocation problem. However, this simplification results in allocation rules that ignore the impact of budget size on the budget allocation strategy. While a huge number of works contribute to developing asymptotically optimal budget allocation rules, few works investigate the impact of the simulation budget on the budget allocation strategy. Typical myopic allocation rules (e.g., \cite{frazier2008knowledge}, \cite{chick2010sequential}, \cite{peng2016myopic}, and \cite{peng2018ranking}) optimize one-step-ahead improvement.
In particular, Peng \textit{et al.} \cite{peng2017gradient} consider a low-confidence scenario and propose a gradient-based myopic allocation rule, which takes the induced correlations into account and performs well in such scenarios. In later work, a myopic allocation rule that possesses both one-step-ahead optimality and asymptotic optimality is developed in \cite{peng2018ranking}. However, existing myopic allocation rules cannot adapt to the simulation budget and some of them are not asymptotically optimal, even though they have excellent performances especially when the simulation budget is relatively small. More recently, Qin, Hong, and Fan \cite{qin2022non} formulate the budget allocation problem as a dynamic program (DP) problem and develop a non-myopic knowledge gradient (KG) policy, which can look multiple steps ahead. Shi \textit{et al.} \cite{shi2022dynamic} propose a dynamic budget-adaptive allocation rule for the feasibility determination (FD) problem, a variant of the R\&S problem, and show that their allocation rule possesses both finite-budget properties and asymptotic optimality. None of the existing works develops a budget allocation rule that can not only adapt to the simulation budget but also achieve asymptotic optimality for solving R\&S problems under a fixed-budget setting. The rest of the paper is organized as follows. The problem statement and formulation are presented in Section \ref{problem formulation}. In Section \ref{optimal budget allocation strategy}, we generalize some asymptotic results in the OCBA paradigm to the case of a general budget size; derive a budget allocation rule that is adaptive to the simulation budget; and explicitly present its desirable finite-budget properties and asymptotic optimality. Then, two heuristic algorithms implementing the proposed budget-adaptive allocation rule are developed. Numerical experiments on synthetic examples and a case study are conducted in Section \ref{numerical experiments}.
In the end, Section \ref{conclusion} concludes the paper. \section{Problem Formulation} \label{problem formulation} We introduce the following notation in our paper. \begin{table}[H] \normalsize \begin{tabular}{ll} $k$ & The total number of designs; \\ $\mathcal{K}$ & The set of designs, i.e., $\mathcal{K} = \{1,2,\dots,k\}$; \\ $T$ & Simulation budget; \\ $X_{i,j}$ & The $j$-th simulation output sample of design $i$; \\ $\mu_i$ & Mean of the performance of design $i$, i.e.,\\ & $ \mu_i = \mathbb{E}[X_{i,j}]$; \\ $\sigma^2_i$ & Variance of the performance of design $i$, i.e., \\ & $\sigma^2_i = \text{Var}[X_{i,j}]$; \\ $b$ & Real best design, i.e., $b = \arg \min_{i \in \mathcal{K}} \mu_i$; \\ $\mathcal{K}^{\prime}$ & The set of non-best designs, i.e., $\mathcal{K}^{\prime} = \mathcal{K} \backslash b$; \\ $w_i$ & The proportion of simulation budget allocated to \\ & design $i$; \\ $N_i$ & The number of simulation replications allocated to \\ & design $i$, i.e., $N_i = w_i T$; \\ $\hat{\mu}_i$ & Sample mean of the performance of design $i$, i.e., \\ & $\hat{\mu}_i = (1/N_i) \sum_{j=1}^{N_i} X_{i,j}$; \\ $\hat{b}$ & Observed best design, i.e., $\hat{b} = \arg \min_{i \in \mathcal{K}} \hat{\mu}_i$. \end{tabular} \end{table} Suppose that there are $k$ system designs in contention. For each design $i \in \mathcal{K}$, its mean performance $\mu_i$ is unknown and can only be estimated by sampling replications via a stochastic simulation model. The goal of R\&S is to identify the real best design $b$, where “best” is defined with respect to the smallest mean. Assume that the best design $b$ is unique, i.e., $\mu_b < \mu_i$, for $ i \in \mathcal{K}^{\prime}$. This assumption basically requires that the best design can be distinguished from the others.
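A minimal sketch of this setup, using the notation above (the parameters and function name are ours, for illustration only):

```python
import numpy as np

def select_best(mu, sigma, n_reps, seed=0):
    """Simulate n_reps[i] outputs X_{i,j} ~ N(mu_i, sigma_i^2) for each design i
    and return the observed best design b_hat (smallest sample mean)."""
    rng = np.random.default_rng(seed)
    sample_means = [rng.normal(mu[i], sigma[i], n_reps[i]).mean()
                    for i in range(len(mu))]
    return int(np.argmin(sample_means))

# With near-noiseless outputs, the observed best coincides with the real best
# (design 0 here, since "best" means smallest mean).
b_hat = select_best([1.0, 2.0, 3.0], [0.01, 0.01, 0.01], [10, 10, 10])
```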
Since common random numbers and correlated sampling are not considered in the paper, we assume the simulation output samples are independent across different designs and replications, i.e., $X_{i,j}$ is independent for all $i$ and $j$. The most common assumption on the sampling distribution is that the simulation observations of each design $i$ are i.i.d. normally distributed with mean $\mu_i$ and variance $\sigma_i^2$, i.e., $X_{i,j} \sim N(\mu_i,\sigma_i^2)$, for $i \in \mathcal{K}$ and $j \in \mathbb{Z}^+$. For non-Gaussian distributions, the normal assumption can be justified by a central limit theorem, e.g., by the use of batching (see \cite{kim2006selecting}). For simplicity, we ignore the integer constraints on $N_i$, and in practice $ \lfloor N_i \rfloor $ simulation replications are allocated to design $i$, where $i \in \mathcal{K}$ and $ \lfloor \cdot \rfloor $ is the flooring function. After the simulation budget $T$ is depleted, the observed best design $\hat{b}$ (with the smallest sample mean) is selected. The event of correct selection occurs when the selected design, design $\hat{b}$, is the real best design, design $b$. Thus, we define the probability of correct selection (PCS) as \begin{align*} \text{PCS} &= \Pr ( \hat{b} = b )\\ &= \Pr \left( \bigcap_{i \in \mathcal{K}^{\prime}} \left\{ \hat{\mu}_b < \hat{\mu}_{i} \right\} \right). \end{align*} The problem of interest is to determine $w = (w_1,w_2,\dots,w_k)$, such that by the time the simulation budget is exhausted and we select the observed best design, the PCS is maximized. Following the OCBA paradigm, we model the budget allocation problem as follows: \begin{equation*} \begin{split} Problem \ \mathcal{P}: \ &\max_{w} \ \text{PCS} \\ &\text{s.t.} \ \sum\limits_{i \in \mathcal{K}} w_i = 1,\\ & \quad \quad w_i \geq 0, \quad i \in \mathcal{K}. 
\end{split} \end{equation*} Under general settings, the major difficulty in solving Problem $\mathcal{P}$ is that there is no closed-form expression for the PCS. Although Monte Carlo simulation can be used to approximate the PCS, its computational cost is usually unaffordable, especially when the simulated systems are of high complexity. To manage the difficulty, we bound the $\text{PCS}$ from below in an efficient way \begin{align} \label{Bonferroni inequality} \text{PCS} &= \Pr \left( \bigcap_{i \in \mathcal{K}^{\prime}} \left\{ \hat{\mu}_b < \hat{\mu}_{i} \right\} \right) \notag \\ & = 1 - \Pr \left( \bigcup_{i \in \mathcal{K}^{\prime}} \left\{ \hat{\mu}_i \leq \hat{\mu}_{b} \right\} \right) \notag \\ & \geq 1 - \sum_{i \in \mathcal{K}^{\prime}} \Pr \left( \hat{\mu}_i \leq \hat{\mu}_{b} \right) \notag \\ &= 1 - \sum_{i \in \mathcal{K}^{\prime}} \Pr \left( Z \leq - \frac{\mu_i - \mu_b}{ \sqrt{ \frac{\sigma_i^2}{w_i T} + \frac{\sigma_b^2}{w_b T} } } \right) \notag \\ &= 1 - \sum_{i \in \mathcal{K}^{\prime}} \Phi \left( - \frac{ \delta_{i,b} }{ \sigma_{i,b} } \right) \notag \\ & = \text{APCS}, \notag \end{align} where the inequality holds by the Bonferroni inequality, $Z$ is a standard normal random variable, $\Phi$ denotes the cumulative distribution function (c.d.f.) of the standard normal random variable, $\delta_{i,b} = \mu_i - \mu_b $, and $\sigma_{i,b} = \sqrt{\sigma_i^2/(w_i T) + \sigma_b^2/(w_b T)} $. Of note, the $\text{APCS}$, which serves as a cheap lower bound for the $\text{PCS}$, is frequently used in the OCBA paradigm and converges to 1 as the simulation budget $T$ goes to infinity (see \cite{chen2000simulation} and \cite{zhang2015simulation}).
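The APCS bound is cheap to evaluate compared with a Monte Carlo estimate of the PCS; a direct transcription of the formula above (the naming is ours):

```python
import math

def apcs(w, T, mu, sigma):
    """APCS = 1 - sum_{i != b} Phi(-delta_{i,b} / sigma_{i,b}), with
    sigma_{i,b} = sqrt(sigma_i^2/(w_i T) + sigma_b^2/(w_b T))."""
    b = min(range(len(mu)), key=lambda i: mu[i])          # real best design
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal c.d.f.
    total = 1.0
    for i in range(len(mu)):
        if i == b:
            continue
        s_ib = math.sqrt(sigma[i]**2 / (w[i] * T) + sigma[b]**2 / (w[b] * T))
        total -= phi(-(mu[i] - mu[b]) / s_ib)
    return total

mu, sigma = [1.0, 2.0, 3.0], [6.0, 6.0, 6.0]
val = apcs([1/3, 1/3, 1/3], 300.0, mu, sigma)  # roughly 0.87 for this instance
```

Consistent with the convergence remark above, `apcs` increases toward 1 as `T` grows for a fixed allocation.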
Instead of solving Problem $\mathcal{P}$, with the new objective $\text{APCS}$, we consider the following optimization problem: \begin{equation*} \begin{split} Problem \ \mathcal{P}1: \ &\max_{w} \ \text{APCS} \\ &\text{s.t.} \ \sum\limits_{i \in \mathcal{K}} w_i = 1,\\ & \quad \quad w_i \geq 0, \quad i \in \mathcal{K}. \end{split} \end{equation*} \section{Budget Allocation Strategy} \label{optimal budget allocation strategy} In this section, we first generalize the convexity of Problem $\mathcal{P}$1 to the case of a general budget size and recover the OCBA allocation rule. Then, we derive a new budget allocation rule that is adaptive to the simulation budget and analyze its finite-budget properties and asymptotic optimality. Based on the proposed budget allocation rule, two heuristic algorithms are developed. Of note, we highlight the most important implication of this section: simulation budget significantly impacts the budget allocation strategy. To enhance readability, all the proofs are relegated to the Appendix. \subsection{Optimal Computing Budget Allocation} In the OCBA paradigm, the derivation of optimality conditions essentially requires Problem $\mathcal{P}$1 to be a convex optimization problem. Zhang \textit{et al.} \cite{zhang2015simulation} consider a top-$m$ designs selection problem and show their APCS bound is concave when the simulation budget $T$ is sufficiently large. By letting $m = 1$, this result applies to Problem $\mathcal{P}$1. We generalize this result and rigorously show in Lemma 1 that the concavity of the APCS bound indeed holds for any simulation budget, thereby establishing Problem $\mathcal{P}$1 as a convex optimization problem. \textit{Lemma 1}: $\text{APCS}$ is concave and therefore Problem $\mathcal{P}1$ is a convex optimization problem. \textit{Proof:} See Appendix A.1. With the convexity, Problem $\mathcal{P}1$ can be much more easily solved by optimization solvers than the original Problem $\mathcal{P}$. 
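Lemma 1 can be spot-checked numerically: for any two feasible allocations, the APCS at their midpoint should be at least the average of the endpoint values. A small sketch with illustrative values (the `apcs` helper simply transcribes the APCS formula; all names are ours):

```python
import math

def apcs(w, T, mu, sigma):
    """APCS = 1 - sum_{i != b} Phi(-delta_{i,b}/sigma_{i,b})."""
    b = min(range(len(mu)), key=lambda i: mu[i])
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    total = 1.0
    for i in range(len(mu)):
        if i == b:
            continue
        s_ib = math.sqrt(sigma[i]**2 / (w[i] * T) + sigma[b]**2 / (w[b] * T))
        total -= phi(-(mu[i] - mu[b]) / s_ib)
    return total

mu, sigma, T = [1.0, 2.0, 3.0], [6.0, 6.0, 6.0], 100.0
w1 = [0.6, 0.2, 0.2]
w2 = [0.2, 0.4, 0.4]
mid = [(a + b) / 2 for a, b in zip(w1, w2)]
# Concavity (Lemma 1) implies this gap is non-negative for any feasible pair.
gap = apcs(mid, T, mu, sigma) - 0.5 * (apcs(w1, T, mu, sigma) + apcs(w2, T, mu, sigma))
```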
Since the sampling efficiency is of significant concern in R\&S, we try to derive a solution with an analytical form for Problem $\mathcal{P}1$. The solution satisfying the Karush-Kuhn-Tucker (KKT) conditions is the optimal solution to Problem $\mathcal{P}1$ (see \cite{boyd2004convex}). Theorem 1 gives the optimality conditions of Problem $\mathcal{P}1$, and some insights can be drawn from it in the following analyses. \textit{Theorem 1}: If the solution $w = (w_1,w_2,\dots,w_k)$ maximizes the $\text{APCS}$ in Problem $\mathcal{P}1$, it satisfies the following optimality conditions $\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3$, and $\mathcal{C}_4$ \begin{itemize} \item[$\mathcal{C}_1$:] $w_b = \sigma_b \sqrt{\sum_{i \in \mathcal{K}^{\prime}} w_i^{2}/\sigma_i^2}$, \item[$\mathcal{C}_2$:] $ - \frac{\delta_{i,b}^2}{ 2(\sigma_i^2/w_i + \sigma_b^2/w_b) } T + \log \frac{\delta_{i,b}\sigma_i^2}{(\sigma_i^2/w_i + \sigma_b^2/w_b)^{\frac{3}{2}}} - 2 \log w_i = \lambda, \quad i \in \mathcal{K}^{\prime}$, \item[$\mathcal{C}_3$:] $\sum_{i \in \mathcal{K}} w_{i}=1$, \item[$\mathcal{C}_4$:] $w_i \geq 0, \quad i \in \mathcal{K}$, \end{itemize} where $\lambda$ is a constant. \textit{Proof:} See Appendix B.1. When the simulation budget is sufficiently large, i.e., $T \rightarrow \infty$, the effect of the two terms $\log (\delta_{i,b}\sigma_i^2/(\sigma_i^2/w_i + \sigma_b^2/w_b)^{\frac{3}{2}})$ and $ - 2 \log w_i$ in condition $\mathcal{C}_2$ can be negligibly small compared with $ - [\delta_{i,b}^2 / (2(\sigma_i^2/w_i + \sigma_b^2/w_b))] \times T$. This observation motivates us to simplify condition $\mathcal{C}_2$ as \begin{equation} \label{simplified C2} - \frac{\delta_{i,b}^2}{ 2(\sigma_i^2/w_i + \sigma_b^2/w_b) } T = \lambda, \quad i \in \mathcal{K}^{\prime}.
\end{equation} By combining conditions $\mathcal{C}_1,\mathcal{C}_3,\mathcal{C}_4$, and \eqref{simplified C2}, we can verify that $w$ satisfies \begin{equation} \label{large deviation ratio} \begin{split} w_b &= \sigma_b \sqrt{\sum\limits_{i \in \mathcal{K}^{\prime}} \frac{w_i^{2}}{\sigma_i^2}}, \\ \frac{\delta_{i,b}^2}{(\sigma_i^2/w_i + \sigma_b^2/w_b) } &= \frac{\delta_{j,b}^2}{(\sigma_j^2/w_j + \sigma_b^2/w_b) }, i,j \in \mathcal{K}^{\prime} \ \text{and} \ i \neq j , \end{split} \end{equation} the form of which corresponds to the asymptotic optimality conditions in \cite{glynn2004large} derived by maximizing the asymptotic convergence rate of the probability of false selection (PFS). Compared with Theorem 1, Equation \eqref{large deviation ratio} is a great simplification but still requires a numerical solver to determine $w$. To distinguish the best design from the others, one would expect to spend most of the budget on the best design, i.e., $w_b \gg w_i$, for $i \in \mathcal{K}^{\prime}$. For a top-$m$ design selection problem, Zhang \textit{et al.} \cite{zhang2015simulation} investigate the ratio of the asymptotically optimal allocation ratio of non-critical designs to that of critical designs, and they show that the upper bound of the ratio's growth rate is in the order of $1/\sqrt{k}$. By letting $m = 1$, one can show this result applies to $w$, that is, when $T \rightarrow \infty$, $w_i/w_b = \mathcal{O}(1/\sqrt{k})$, for $i \in \mathcal{K}^{\prime}$. The notation $f(x) = \mathcal{O}(g(x))$ means that $g(x)$ can be viewed as the upper bound of the growth rate of $f(x)$, i.e., there exist positive constants $M$ and $x_0$ such that $|f(x)| \leq M \cdot g(x)$, $\forall x \geq x_0$. When the simulation budget is finite, i.e., $T < \infty$, applying the result $w_i/w_b = \mathcal{O}(1/\sqrt{k})$, for $i \in \mathcal{K}^{\prime}$, is less straightforward. Proposition 1 formally generalizes this result to the case of a general budget size.
\textit{Proposition 1}: Suppose that the variances of all designs are lower bounded by a positive constant $\underline{\sigma}^2 > 0$ and upper bounded by another positive constant $\bar{\sigma}^2 > 0$, that is, $0 < \underline{\sigma}^2 \leq \sigma^2_i \leq \bar{\sigma}^2$, for $i \in \mathcal{K}$. If the solution $w = (w_1,w_2,\dots,w_k)$ satisfies Theorem 1, there exists a positive constant $c > 0$, such that \begin{equation*} \frac{w_i}{w_b} \leq \frac{\bar{\sigma}}{\underline{\sigma}} \frac{c}{\sqrt{k-1}}, \quad i \in \mathcal{K}^{\prime}, \end{equation*} and therefore, $w_i/w_b = \mathcal{O}(1/\sqrt{k})$, for $i \in \mathcal{K}^{\prime}$. \textit{Proof:} See Appendix C.1. Based on Proposition 1, we further simplify \eqref{large deviation ratio} by considering a sufficiently large number of designs $k$, such that $w_b \gg w_i$, for $i \in \mathcal{K}^{\prime}$. Then, we obtain \begin{equation} \label{OCBA allocation rule} w^*_i = \frac{I_i}{\sum_{i \in \mathcal{K}} I_i}, \quad i \in \mathcal{K}, \end{equation} where \begin{equation*} I_i =\left\{\begin{array}{ll} \frac{\sigma_i^2}{\delta_{i,b}^2} & \quad \text{if\ } i \in \mathcal{K}^{\prime} \\ \sigma_b \sqrt{\sum_{i \in \mathcal{K}^{\prime}}\frac{I_i^{2} }{\sigma_i^2}} & \quad \text{if\ }i = b \end{array}\right. \end{equation*} This corresponds to the OCBA allocation rule in \cite{chen2000simulation}. Note that Equation \eqref{OCBA allocation rule} only differs slightly from \eqref{large deviation ratio}. For presentation simplicity, we refer to both \eqref{large deviation ratio} and \eqref{OCBA allocation rule} as asymptotic optimality conditions. \textit{Remark 1}: The asymptotically optimal solution $w^*$ tends to assign high budget allocation ratios to non-best designs with large $I_i$, where $I_i$ is the ratio of non-best design $i$'s variance to its squared mean difference from the best design.
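To make the allocation rule \eqref{OCBA allocation rule} concrete, here is a minimal numerical sketch for the three-design example (the function name is ours):

```python
import math

def ocba_ratios(mu, sigma):
    """OCBA rule: I_i = sigma_i^2 / delta_{i,b}^2 for non-best designs,
    I_b = sigma_b * sqrt(sum_{i != b} I_i^2 / sigma_i^2); normalize to get w*."""
    b = min(range(len(mu)), key=lambda i: mu[i])   # real best design
    I = [0.0] * len(mu)
    for i in range(len(mu)):
        if i != b:
            I[i] = sigma[i]**2 / (mu[i] - mu[b])**2
    I[b] = sigma[b] * math.sqrt(sum(I[i]**2 / sigma[i]**2
                                    for i in range(len(mu)) if i != b))
    total = sum(I)
    return [x / total for x in I]

# Design 0 is the best; design 1 is hard to distinguish (small mean gap),
# design 2 is easy, so w*_0 > w*_1 > w*_2 here.
w_star = ocba_ratios([1.0, 2.0, 3.0], [6.0, 6.0, 6.0])
```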
This result implies that, when the simulation budget approaches infinity, more simulation budget should be allocated to non-best designs that are hard to distinguish from the best, while less simulation budget should be allocated to those that are easy to distinguish. The asymptotically optimal solution $w^*$ is independent of the simulation budget and thus ignores the impact of budget size. This observation motivates the need to derive a desirable budget allocation rule that is adaptive to the simulation budget. \subsection{Budget-Adaptive Allocation Rule} In this subsection, we develop a budget-adaptive allocation rule that incorporates the simulation budget and approximately maximizes the APCS in Problem $\mathcal{P}$1. Instead of letting the simulation budget go to infinity, we consider a finite simulation budget, i.e., $T < \infty$. While the APCS bound in Problem $\mathcal{P}$1 is typically loose when the simulation budget is finite, solving Problem $\mathcal{P}$1 can provide a solution in analytical form. This not only aids in understanding the impact of budget size on the budget allocation strategy but also offers a great improvement in computational efficiency compared with exactly calculating the PCS in Problem $\mathcal{P}$. To derive a solution in analytical form, similarly, we consider a sufficiently large number of designs $k$, such that $w_b \gg w_i$, for $i \in \mathcal{K}^{\prime}$. Then, condition $\mathcal{C}_2$ in Theorem 1 can be simplified as \begin{equation} \label{simplified condition C2-1} \log I_i + \log w_i + \frac{T}{I_i} w_i = \lambda, \quad i \in \mathcal{K}^{\prime}. \end{equation} \textit{Remark 2}: If the simulation budget is sufficiently small, i.e., $T \rightarrow 0$, the effect of $Tw_i/I_i$ in \eqref{simplified condition C2-1} can be negligible.
We then combine conditions $\mathcal{C}_1$, $\mathcal{C}_3$, $\mathcal{C}_4$, and \eqref{simplified condition C2-1}, and obtain a solution $\frac{H_i}{\sum_{i \in \mathcal{K}}H_i}$, where $H_i = I_i^{-1}$, for $i \in \mathcal{K}^{\prime}$, and $H_b = \sigma_b \sqrt{\sum_{i \in \mathcal{K}^{\prime}}H_i^2/\sigma_i^2}$. This solution tends to assign low budget allocation ratios to non-best designs with large $I_i$. More specifically, when the simulation budget is small enough, less simulation budget should be allocated to non-best designs that are hard to distinguish from the best, while more simulation budget should be allocated to those that are easy to distinguish. This result is contrary to the asymptotically optimal solution $w^*$. Although the simulation budget cannot approach 0 in practice, this result highlights the significant differences between finite budget and sufficiently large budget allocation strategies. Furthermore, we approximate the term $\log w_i$ in \eqref{simplified condition C2-1} by its first-order Taylor series expansion at the point $w^*_i$, for $i \in \mathcal{K}^{\prime}$ \begin{equation} \label{first order taylor expansion} \log w_i \approx \log w^*_i + \left(w_i - w^*_i \right)/w^*_i. \end{equation} In \eqref{first order taylor expansion}, the asymptotically optimal solution $w^* = (w^*_1,w^*_2,\allowbreak\dots,w^*_k)$ is regarded as a “good” approximation to the real optimal solution $w = (w_1,w_2,\dots,w_k)$. As the simulation budget $T$ goes to infinity, this approximation tends to be accurate because $w$ becomes identical to $w^*$.
Therefore, we substitute the term $\log w_i$ with its approximation provided by \eqref{first order taylor expansion} and get the approximated optimality conditions for Problem $\mathcal{P}1$: \begin{itemize} \item[$\mathcal{C}_1$:] $w_b = \sigma_b \sqrt{\sum_{i \in \mathcal{K}^{\prime}} w_i^{2}/\sigma_i^2}$, \item[$\widehat{\mathcal{C}}_2$:] $ 2 \log I_i + \left(\frac{T}{I_i} + \frac{1}{w^*_i} \right) w_i = \lambda, \quad i \in \mathcal{K}^{\prime}$, \item[$\mathcal{C}_3$:] $\sum_{i \in \mathcal{K}} w_{i}=1$, \item[$\mathcal{C}_4$:] $w_i \geq 0, \quad i \in \mathcal{K}$, \end{itemize} where $\lambda$ is a constant. The condition $\mathcal{C}_2$ in Theorem 1 is approximated by condition $\widehat{\mathcal{C}_2}$, which uses the first-order Taylor series expansions of terms $\log w_i$ at points $w^*_i$, for $i \in \mathcal{K}^{\prime}$. To further simplify the problem, we temporarily omit the non-negativity constraints (condition $\mathcal{C}_4$) and consider conditions $\mathcal{C}_1$, $\widehat{\mathcal{C}_2}$, and $\mathcal{C}_3$ in Lemma 2. In Lemma 3, the non-negativity constraints are discussed to guarantee the feasibility of the solution obtained in Lemma 2. \textit{Lemma 2}: If the solution $W(T) = (W_1(T),W_2(T),\dots,\allowbreak W_k(T))$ solves conditions $\mathcal{C}_1$, $\widehat{\mathcal{C}}_2$, and $\mathcal{C}_3$, it satisfies \begin{equation} W_i(T) =\left\{\begin{array}{ll} w^*_i \alpha_i(T) & \quad \text{if\ } i \in \mathcal{K}^{\prime} \\ \sigma_b \sqrt{\sum_{i \in \mathcal{K}^{\prime}}\frac{(W_i(T))^{2} }{\sigma_i^2}} & \quad \text{if\ } i = b \end{array}\right. 
\end{equation} where \begin{align} \alpha_i(T) &= \frac{ (\lambda - 2 \log I_i)}{1 + T/S}, \notag \\ S &= \sum_{i \in \mathcal{K}} I_i, \notag \\ \lambda &=\left\{\begin{array}{ll} \frac{- q + \sqrt{q^2 - 4 p r}}{2 p} & \quad \text{if} \ w^*_b \neq \frac{1}{2} \\ \frac{4\sum_{i \in \mathcal{K}^{\prime}} I_i \log I_i + T + S}{2\sum_{i \in \mathcal{K}^{\prime}} I_i} & \quad \text{if} \ w^*_b = \frac{1}{2} \\ \end{array}\right. \notag \\ p &= S(2 I_b - S), \notag \\ q &= - 4 \sigma_b^2 \sum\limits_{i \in \mathcal{K}^{\prime}} \frac{I_i^2 \log I_i }{\sigma_i^2 } \notag \\ & \qquad + 2 (S- I_b) \left(2 \sum\limits_{i \in \mathcal{K}^{\prime}} I_i \log I_i + T+S \right), \notag\\ r &= 4 \sigma_b^2 \sum\limits_{i \in \mathcal{K}^{\prime}} \frac{I_i^2 \log^2 I_i }{\sigma_i^2 } - \left( 2 \sum\limits_{i \in \mathcal{K}^{\prime}} I_i \log I_i + T+S \right)^2. \notag \end{align} \textit{Proof:} See Appendix A.2. The solution $W(T)$ is an analytical function of the simulation budget $T$ and is asymptotically optimal. For a certain simulation budget $T$, a non-best design $i$, for $i \in \mathcal{K}^{\prime}$, tends to be allocated more simulation budget by $W_i(T)$ than by $w^*_i$ if $\alpha_i(T) > 1$ and be allocated less simulation budget by $W_i(T)$ than by $w^*_i$ if $\alpha_i(T) < 1$. The relationship between the best design and non-best designs remains unchanged. In particular, when the simulation budget goes to infinity, i.e., $T \rightarrow \infty$, we have $\alpha_i(T) \rightarrow 1 $, then $W_i(T) \rightarrow w_i^*$, for $i \in \mathcal{K}^{\prime}$, and consequently, $W_b(T) \rightarrow w_b^*$, implying that the solution $W(T)$ achieves asymptotic optimality. When the simulation budget $T$ is sufficiently large, the solution $W(T)$ in Lemma 2 is feasible due to the fact that $\lim_{T \rightarrow \infty } W_i(T) = w^*_i \geq 0$, for $i \in \mathcal{K}$.
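Lemma 2 is directly computable. The sketch below transcribes the case $w^*_b \neq \frac{1}{2}$ for the running three-design example (the naming is ours); it can be used to check both the asymptotic behavior $W_i(T) \rightarrow w^*_i$ and the finite-budget discounting of the hard-to-distinguish non-best design:

```python
import math

def budget_adaptive_ratios(mu, sigma, T):
    """Lemma 2 (case w*_b != 1/2): W_i(T) = w*_i * alpha_i(T) for non-best i,
    with alpha_i(T) = (lambda - 2 log I_i) / (1 + T/S), and W_b(T) from C_1."""
    k = len(mu)
    b = min(range(k), key=lambda i: mu[i])
    nb = [i for i in range(k) if i != b]
    I = [0.0] * k
    for i in nb:
        I[i] = sigma[i]**2 / (mu[i] - mu[b])**2
    I[b] = sigma[b] * math.sqrt(sum(I[i]**2 / sigma[i]**2 for i in nb))
    S = sum(I)
    w_star = [x / S for x in I]
    A = 2 * sum(I[i] * math.log(I[i]) for i in nb) + T + S
    p = S * (2 * I[b] - S)
    q = (-4 * sigma[b]**2 * sum(I[i]**2 * math.log(I[i]) / sigma[i]**2 for i in nb)
         + 2 * (S - I[b]) * A)
    r = 4 * sigma[b]**2 * sum(I[i]**2 * math.log(I[i])**2 / sigma[i]**2 for i in nb) - A**2
    lam = (-q + math.sqrt(q * q - 4 * p * r)) / (2 * p)
    W = [0.0] * k
    for i in nb:
        W[i] = w_star[i] * (lam - 2 * math.log(I[i])) / (1 + T / S)
    W[b] = sigma[b] * math.sqrt(sum(W[i]**2 / sigma[i]**2 for i in nb))
    return W, w_star

mu, sigma = [1.0, 2.0, 3.0], [6.0, 6.0, 6.0]
W100, w_star = budget_adaptive_ratios(mu, sigma, 100.0)   # finite budget
W_inf, _ = budget_adaptive_ratios(mu, sigma, 1e8)         # near-infinite budget
```

At $T = 100$ the hard-to-distinguish design 1 is discounted ($\alpha < 1$) and the easy design 2 is boosted ($\alpha > 1$) relative to $w^*$, while at very large $T$ the rule recovers the OCBA ratios.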
However, when the simulation budget $T$ is finite, some of the budget allocation ratios in Lemma 2 may violate the non-negativity constraints and become infeasible, because $W(T)$ is derived by temporarily omitting the non-negativity constraints (condition $\mathcal{C}_4$). For non-best designs, let $\langle j \rangle$, $j = 1,2,\dots,k - 1$, be the ascending order statistics of $I_i$, for $i \in \mathcal{K}^{\prime}$, i.e., $I_{\langle 1 \rangle} \leq I_{\langle 2 \rangle} \leq \dots \leq I_{\langle k-1 \rangle}$. Lemma 3 gives the feasibility condition of $W(T)$. \textit{Lemma 3}: Suppose that the solution $W(T) = (W_1(T),\allowbreak W_2(T),\dots, W_k(T))$ solves conditions $\mathcal{C}_1$, $\widehat{\mathcal{C}}_2$, and $\mathcal{C}_3$. If $W(T)$ is feasible, the simulation budget $T$ satisfies \begin{equation*} T \geq T_0 = \left\{\begin{array}{ll} \max\{T_1,T_2\} & \text{if} \ w^*_b \neq \frac{1}{2} \\ 4 \sum_{i \in \mathcal{K}^{\prime}} I_i \log\frac{I_{\langle k-1 \rangle}}{I_i} - S & \text{if} \ w^*_b = \frac{1}{2} \\ \end{array}\right. \end{equation*} where \begin{align*} T_1 &= 2 \sum\limits_{i \in \mathcal{K}^{\prime}} \left[\frac{\sigma^2_b I_i^2}{\sigma_i^2 (S - I_b)} - I_i \right] \log \frac{I_{\langle k - 1 \rangle}}{I_i} - S,\\ T_2 &= 2 \sum\limits_{i \in \mathcal{K}^{\prime}} I_i \log \frac{I_{\langle k - 1 \rangle}}{I_i} \\ & \qquad + 2 \sigma_b \sqrt{ \sum\limits_{i \in \mathcal{K}^{\prime}} \frac{ I_i^2 }{\sigma_i^2 } \left( \log \frac{I_{\langle k - 1 \rangle}}{I_i} \right)^2 } - S. \end{align*} \textit{Proof:} See Appendix A.3. Lemma 3 shows that $W(T)$ is always feasible when the simulation budget is relatively large, i.e., $T \geq T_0$. However, when the simulation budget $T$ is small, i.e., $T < T_0$, some factor $\alpha_i(T)$ may become negative, resulting in $W_i(T) = w_i^* \alpha_i(T) < 0$. This implies that $W_i(T)$ could be discounted too heavily to be feasible due to the effect of $\alpha_i(T)$.
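The threshold $T_0$ of Lemma 3 can be computed directly from the problem quantities. A minimal Python sketch (function and argument names are ours) is:

```python
import math

def feasibility_threshold(I, sigma, I_b, sigma_b, w_star_b):
    """Sketch of the budget threshold T_0 in Lemma 3.

    I, sigma: quantities for the non-best designs in K';
    I_b, sigma_b, w_star_b: quantities for the best design b.
    """
    S = I_b + sum(I)
    I_max = max(I)  # I_{<k-1>}, the largest I_i among non-best designs
    if abs(w_star_b - 0.5) < 1e-12:
        return 4 * sum(Ii * math.log(I_max / Ii) for Ii in I) - S
    T1 = 2 * sum((sigma_b**2 * Ii**2 / (si**2 * (S - I_b)) - Ii)
                 * math.log(I_max / Ii)
                 for Ii, si in zip(I, sigma)) - S
    T2 = (2 * sum(Ii * math.log(I_max / Ii) for Ii in I)
          + 2 * sigma_b * math.sqrt(sum(Ii**2 / si**2
                                        * math.log(I_max / Ii)**2
                                        for Ii, si in zip(I, sigma)))
          - S)
    return max(T1, T2)
```

One sanity check falls out of the formulas themselves: when all $I_i$, $i \in \mathcal{K}^{\prime}$, are equal, every logarithm vanishes and $T_0 = -S < 0$, so $W(T)$ is feasible for every positive budget.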
To address this issue, when $T < T_0$, we allocate the $T$ units of simulation budget according to $W(T_0)$, which is always a feasible solution to Problem $\mathcal{P}1$. Let $\lceil T_0 \rceil$ denote the smallest integer that is larger than or equal to $T_0$. For non-best designs $i \in \mathcal{K}^{\prime}$, define \begin{equation} \label{new allocation ratio for design i} \widetilde{W}_i(T) =\left\{\begin{array}{ll} W_i(T) & \quad \text{if\ } T \geq T_0 \\ W_i(\lceil T_0 \rceil) & \quad \text{if\ } T < T_0 \\ \end{array}\right. \end{equation} and, for the best design $b$, \begin{equation} \label{new allocation ratio for design b} \widetilde{W}_b(T) = \sigma_b \sqrt{\sum_{i \in \mathcal{K}^{\prime}}\frac{( \widetilde{W}_i(T))^{2} }{\sigma_i^2}}. \end{equation} \textit{Theorem 2}: When the number of designs $k$ is sufficiently large, the solution $\widetilde{W}(T) = (\widetilde{W}_1(T),\widetilde{W}_2(T),\dots,\widetilde{W}_k(T))$ defined in \eqref{new allocation ratio for design i} and \eqref{new allocation ratio for design b} solves Problem $\mathcal{P}1$ and approximately maximizes the $\text{APCS}$. The approximation used in \eqref{first order taylor expansion} and the asymptotic analyses yield an approximate solution $\widetilde{W}(T)$ in analytical form. This not only greatly facilitates the implementation of our results, but also generates insights into the budget allocation strategy for identifying the best design. The factors $\alpha_i(T)$, for $i \in \mathcal{K}^{\prime}$, play key roles in shaping the behavior of the derived budget-adaptive allocation ratios $\widetilde{W}_i(T)$. The following proposition gives intuition on the behavior of $\alpha_i(T)$, for $i \in \mathcal{K}^{\prime}$.
\textit{Proposition 2}: When $0 < T < \infty$, we have $\alpha_{\langle 1 \rangle}(T) \geq 1 $, $\alpha_{\langle k - 1 \rangle}(T) \leq 1$, and $\alpha_{\langle 1 \rangle}(T) \geq \alpha_{\langle 2 \rangle}(T) \geq \dots \geq \alpha_{\langle k - 1 \rangle}(T)$, where all the equalities hold if and only if $I_i$, for $i \in \mathcal{K}^{\prime}$, are all equal. \textit{Proof:} See Appendix C.2. \textit{Remark 3}: Notice that a non-best design $i$ with a large $I_i$ has a small $\alpha_i(T)$ and is hard to distinguish from the best design. Recall from the analyses in Remark 1 that the asymptotically optimal solution $w^*$ tends to assign high budget allocation ratios to non-best designs with large $I_i$. Due to the effects of $\alpha_i(T)$, for $i \in \mathcal{K}^{\prime}$, $\widetilde{W}(T)$ discounts the budget allocation ratios of non-best designs with large $I_i$ (e.g., design $\langle k - 1 \rangle$) and increases the budget allocation ratios of non-best designs with small $I_i$ (e.g., design $\langle 1 \rangle$). These adjustments are based on a finite simulation budget $T$. More specifically, under a finite simulation budget $T$, compared with $w^*$, $\widetilde{W}(T)$ discounts the simulation budget allocated to non-best designs that are hard to distinguish from the best, while it increases the simulation budget allocated to those that are easy to distinguish. In particular, if all non-best designs are equally hard or easy to distinguish from the best, i.e., $I_i$, for $i \in \mathcal{K}^{\prime}$, are all equal, $\widetilde{W}(T)$ is identical to $w^*$. The asymptotic optimality of $\widetilde{W}(T)$ provides theoretical evidence supporting the approximation used in \eqref{first order taylor expansion}. Furthermore, the finite-budget properties possessed by $\widetilde{W}(T)$ are even more important.
This is because $\widetilde{W}(T)$ accounts for the impact of the budget size $T$ on the budget allocation ratios, and in practice, the simulation budget $T$ is often limited by its high expense. As we will see in the numerical experiments, the asymptotically optimal budget-adaptive allocation rule $\widetilde{W}(T)$ greatly improves the efficiency of selecting the best design. \subsection{Budget Allocation Algorithm} \label{budget allocation procedure} Based on the preceding analyses, we develop two heuristic algorithms that implement the proposed budget-adaptive allocation rule in two different ways. Without loss of generality, we consider a fully sequential setting, where only one replication is allocated at each step. To facilitate the presentation, we introduce some additional notation. \begin{table}[H] \normalsize \begin{tabular}{ll} $t$ & The number of simulation replications allocated \\ & so far; \\ $A_{t}$ & Design that is allocated the $t$-th replication; \\ $\hat{b}^{(t)}$ & Design with the smallest sample mean at step $t$; \\ $\hat{\mu}^{(t)}_i$ & Sample mean of design $i$ at step $t$; \\ $(\hat{\sigma}^{(t)}_i)^2$ & Sample variance of design $i$ at step $t$; \\ $N^{(t)}_i$ & The number of simulation replications allocated \\ & to design $i$ after $t$ steps; \\ $w^{*(t)}_i$ & OCBA allocation ratio of design $i$ at step $t$; \\ $\widetilde{W}^{(t)}_i(T)$ & The proportion of simulation budget allocated \\ & to design $i$ at step $t$ with total budget size $T$. \end{tabular} \end{table} To calculate the budget allocation ratios at each step $t$, we use each design's sample mean $\hat{\mu}^{(t)}_i$ and sample variance $(\hat{\sigma}^{(t)}_i)^2$ as plug-in estimates for its true mean $\mu_i$ and variance $\sigma^2_i$, respectively, for $i \in \mathcal{K}$.
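The plug-in estimates can be maintained in a single pass as replications arrive. A minimal sketch using Welford's one-pass update (our illustration, not part of the procedures' specification) is:

```python
class RunningStats:
    """One-pass tracker of a design's sample mean and sample variance."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self._m2 = 0.0  # running sum of squared deviations from the mean

    def add(self, x):
        """Incorporate one new simulation output x (Welford's update)."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (x - self.mean)

    @property
    def var(self):
        """Unbiased sample variance; requires n >= 2 observations."""
        return self._m2 / (self.n - 1)
```

Keeping one such tracker per design gives $\hat{\mu}^{(t)}_i$ and $(\hat{\sigma}^{(t)}_i)^2$ in $O(1)$ time per replication, which matters in the fully sequential setting where the estimates are refreshed at every step.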
As Chen \cite{chen1996lower} and Chick and Inoue \cite{chick2001new} point out, the main advantage of fully sequential procedures is that they can improve each stage's sampling efficiency by incorporating information from all earlier stages. \subsubsection{Final-Budget Anchorage Allocation} We develop an efficient fully sequential allocation algorithm called final-budget anchorage allocation (FAA). First, each design is initially sampled with $n_0$ replications. In the second stage, we run one more replication according to $\widetilde{W}^{(t)}(T) = (\widetilde{W}^{(t)}_1(T),\widetilde{W}^{(t)}_2(T),\dots,\widetilde{W}^{(t)}_k(T))$, observe the output sample, update the sample estimates and allocation ratios, and repeat until the simulation budget is exhausted, so as to further distinguish the performance of each design. Notice that in each iteration, the final simulation budget $T$ is anchored, and the goal is to maximize the PCS after the $T$ units of simulation budget are depleted. The “most starving” technique introduced in \cite{chen2011stochastic} can be applied to define an allocation policy \begin{equation} \label{allocation policy: sequential} A_{t+1}^{FAA} = \arg \max_{i \in \mathcal{K}} \left\{ (t+1) \times \widetilde{W}^{(t)}_i(T) - N^{(t)}_i \right\}, \end{equation} which allocates the $(t+1)$-th replication to the design that is most starving for it at step $t$. After the simulation budget is exhausted, the design with the smallest overall mean performance is selected as the best. The fully sequential FAA procedure is described in Algorithm \ref{alg:FAA}. \begin{algorithm} \caption{FAA}\label{alg:FAA} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \begin{algorithmic} \Require Set of designs $\mathcal{K}$, initial sample size $n_0$, and simulation budget $T$. \State \textbf{Initialization}: Set $t = n_0 \times k$ and $N^{(t)}_1 = N^{(t)}_2 = \dots = N^{(t)}_k = n_0$. Perform $n_0$ replications for each design.
\While{$t < T$} \State Update $\hat{\mu}^{(t)}_i$, $(\hat{\sigma}^{(t)}_i)^2$, for $i \in \mathcal{K}$, $\hat{b}^{(t)} = \arg \min_{i \in \mathcal{K}} \hat{\mu}_i^{(t)}$. \State Calculate $w^{*(t)}_i$, for $i \in \mathcal{K}$, according to (\ref{OCBA allocation rule}). \State Calculate $\widetilde{W}^{(t)}_i(T)$, for $i \in \mathcal{K}$, according to (\ref{new allocation ratio for design i}) and (\ref{new allocation ratio for design b}). \State Find $A_{t+1}^{FAA}$ according to (\ref{allocation policy: sequential}). \State Perform one additional replication for design $A_{t+1}^{FAA}$. \State Set $N^{(t+1)}_{A^{FAA}_{t+1}} = N^{(t)}_{A^{FAA}_{t+1}} + 1$, $N^{(t+1)}_i = N^{(t)}_i $, for $ i \in \mathcal{K}$ and $i \neq A^{FAA}_{t+1}$, and $t = t + 1$. \EndWhile \Ensure $\hat{b}^{(T)} = \arg \min_{i \in \mathcal{K}} \hat{\mu}_i^{(T)}$ \end{algorithmic} \end{algorithm} \subsubsection{Dynamic Anchorage Allocation} We extend the FAA procedure to a more flexible variant, named dynamic anchorage allocation (DAA), by dynamically changing the anchored final budget instead of fixing it during the procedure. At each step $t$, we anchor $t + 1$ as the final budget, run one additional replication according to $\widetilde{W}^{(t)}(t + 1)$, observe the output sample, update the sample estimates and allocation ratios, and repeat until the simulation budget is exhausted. Notice that in each iteration, the next simulation replication is anchored, and the goal becomes maximizing the PCS after the additional allocation. Again, the “most starving” technique in \cite{chen2011stochastic} can be used to obtain a budget allocation policy, which is defined as \begin{equation} \label{allocation policy: dynamic} A_{t+1}^{DAA} = \arg \max_{i \in \mathcal{K}} \left\{ (t+1) \times \widetilde{W}^{(t)}_i(t+1) - N^{(t)}_i \right\}. \end{equation} The fully sequential DAA procedure is described in Algorithm \ref{alg:DAA}.
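The most-starving selections in \eqref{allocation policy: sequential} and \eqref{allocation policy: dynamic} share the same form: given target ratios $W$ and current counts $N$, pick the design whose allocated count lags its target the most. A minimal Python sketch (function name ours) is:

```python
def most_starving(W, N, t):
    """Index of the design most starving for replication t+1,
    i.e. arg max_i {(t+1) * W[i] - N[i]}; ties go to the lowest index."""
    deficits = [(t + 1) * w - n for w, n in zip(W, N)]
    return max(range(len(W)), key=lambda i: deficits[i])
```

FAA calls this with the ratios $\widetilde{W}^{(t)}(T)$ anchored at the final budget, while DAA calls it with $\widetilde{W}^{(t)}(t+1)$ anchored at the next replication; the selection logic itself is identical.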
\begin{breakablealgorithm} \caption{DAA} \label{alg:DAA} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \begin{algorithmic} \Require Set of designs $\mathcal{K}$, initial sample size $n_0$, and simulation budget $T$. \State \textbf{Initialization}: Set $t = n_0 \times k$ and $N^{(t)}_1 = N^{(t)}_2 = \dots = N^{(t)}_k = n_0$. Perform $n_0$ replications for each design. \While{$t < T$} \State Update $\hat{\mu}^{(t)}_i$, $(\hat{\sigma}^{(t)}_i)^2$, for $i \in \mathcal{K}$, $\hat{b}^{(t)} = \arg \min_{i \in \mathcal{K}} \hat{\mu}_i^{(t)}$. \State Calculate $w^{*(t)}_i$, for $i \in \mathcal{K}$, according to (\ref{OCBA allocation rule}). \State Calculate $\widetilde{W}^{(t)}_i(t+1)$, for $i \in \mathcal{K}$, according to (\ref{new allocation ratio for design i}) and (\ref{new allocation ratio for design b}). \State Find $A_{t+1}^{DAA}$ according to (\ref{allocation policy: dynamic}). \State Perform one additional replication for design $A_{t+1}^{DAA}$. \State Set $N^{(t+1)}_{A^{DAA}_{t+1}} = N^{(t)}_{A^{DAA}_{t+1}} + 1$, $N^{(t+1)}_i = N^{(t)}_i $, for $ i \in \mathcal{K}$ and $i \neq A^{DAA}_{t+1}$, and $t = t + 1$. \EndWhile \Ensure $\hat{b}^{(T)} = \arg \min_{i \in \mathcal{K}} \hat{\mu}_i^{(T)}$. \end{algorithmic} \end{breakablealgorithm} \section{Numerical Experiments} \label{numerical experiments} In this section, we conduct numerical experiments on synthetic examples and a facility location problem to show the superior performance and applicability of our proposed FAA and DAA procedures. The experiments are conducted in MATLAB R2022b on a computer with an Intel Core i5-10400 CPU at 2.90 GHz, 16 GB memory, a 64-bit operating system, and 6 cores with 12 logical processors. We use three simulation budget allocation procedures for comparison. \begin{itemize} \item Equal allocation (EA). This is the simplest method to conduct experiments.
The simulation budget is equally allocated to all designs, i.e., $N_i = T/k$ and $w_i = 1/k$, for $i \in \mathcal{K}$. The equal allocation is a good benchmark for performance comparison. \item OCBA allocation (see \cite{chen2000simulation}). OCBA is guided by the asymptotically optimal allocation rule defined in \eqref{OCBA allocation rule}. We implement a fully sequential OCBA procedure, which allocates a single replication at each step according to the “most starving” technique in \cite{chen2011stochastic}. Similarly, at each step, sample means and variances are used as plug-in estimates for the true means and variances to calculate the OCBA allocation ratios. \item AOAP allocation (see \cite{peng2018ranking}). AOAP is an efficient budget allocation procedure that achieves both one-step-ahead optimality and asymptotic optimality. AOAP requires the variances of the designs to be known, and again, we use sample variances as plug-in estimates for the true variances. When the simulation budget goes to infinity, AOAP reaches the asymptotically optimal budget allocation ratios defined in \eqref{large deviation ratio}. \end{itemize} \subsection{Test Problems} \subsubsection{Synthetic Examples} To demonstrate the efficiency of the proposed FAA and DAA, we consider four synthetic problem settings, which are described as follows: \textit{Example 1}: There are 10 alternative designs with sampling distributions $N(i,6^2)$, for $i = 1,2,\dots,10$. The goal is to identify the best design via simulation samples, where the best is $b=1$ in this example. \textit{Example 2}: This is a variant of Example 1. All settings are the same except that the variance is decreasing with respect to the indices. In this example, better designs have larger variances. The designs' sampling distributions are $N(i,(11-i)^2)$, for $i = 1,2,\dots,10$. Again, the best design is $b = 1$. \textit{Example 3}: This is another variant of Example 1 with a larger number of designs and larger variances.
The designs' sampling distributions are $N(i,10^2)$, for $i = 1,2,\dots,50$. Again, the best design is $b = 1$. \textit{Example 4}: There are 500 normal alternative designs. The sampling distribution of the best design, design $1$, is $N(0,6^2)$. As for the non-best designs $i$, for $i = 2,3,\dots,500$, their sampling distributions are $N(\mu_i, \sigma_i^2)$, where $\mu_i$ and $\sigma_i$ are generated from two uniform distributions $U(1,16)$ and $U(3,9)$, respectively. In all experiments with synthetic examples, we set the initial number of simulation replications per design to 3, i.e., $n_0 = 3$. Because we want to distinguish the performances of the different allocation procedures especially when the simulation budget is relatively small, the simulation budget $T$ is set to 1,000, 3,000, 5,000, and 80,000 for Examples 1, 2, 3, and 4, respectively. The empirical PCS of each procedure is estimated from 100,000 independent macro replications for Examples 1-3 and from 10,000 independent macro replications for Example 4. The performance comparison of the five procedures under different simulation budgets is reported in Figures \ref{fig:sfig1}-\ref{fig:sfig4} and Table \ref{table: performance comparison}. \subsubsection{Facility Location Problem} The facility location problem is a practical test problem provided by the Simulation Optimization Library (\url{https://github.com/simopt-admin/simopt}) and has also been studied in \cite{gao2016new}. There is a company selling one product that will never be out of stock in a city. Without loss of generality, the city is assumed to be a unit square, i.e., $[0,1]^2$, and distances are measured in units of 30 km. Two warehouses are located in the city, and each of them has 10 trucks delivering orders individually.
Orders are generated from 8 AM to 5 PM by a stationary Poisson process with rate 1/3 per minute and are located in the city according to the density function \begin{equation*} f(x,y) = 1.6 - ( |x-0.8| + |y - 0.8| ), \quad x, y \in [0,1]. \end{equation*} When an order arrives, it is dispatched to the nearest warehouse with available trucks; otherwise, it is placed into a queue and served in a first-in-first-out manner when trucks become idle. The trucks then pick the order up, travel to the delivery point, deliver the products, and return to their assigned warehouses to wait for the next order, where the pick-up and delivery times are exponentially distributed with means 5 and 10 minutes, respectively. All trucks travel in Manhattan fashion at a constant speed of 30 km/h, and orders must be delivered on the day they are received. The objective is to find the locations of the two warehouses that maximize the proportion of orders delivered within 60 minutes. Let $(z_{i,1},z_{i,2})$ and $(z_{i,3},z_{i,4})$ be the two locations, respectively. We consider 10 alternatives $(z_{i,1},z_{i,2},z_{i,3},z_{i,4}) = (0.49+0.01i,0.59+0.01i,0.59+0.01i,0.79+0.01i)$, for $i = 1,2,\dots,10$. In this experiment, we run 30 days of simulation in each replication, and the proportion of orders satisfied within 60 minutes is the average proportion of satisfied orders over the 30 days. Thus, the proportion of orders satisfied within 60 minutes is approximately normally distributed. By comparing $100,000$ simulation samples of each design, the best design is determined, i.e., $(z_{1,1},z_{1,2},z_{1,3},z_{1,4}) = (0.5,0.6,0.6,0.8)$. In the facility location problem, we set the initial number of simulation replications per design $n_0$, the simulation budget $T$, and the number of independent macro replications for estimating the empirical PCS to 3, 800, and 10,000, respectively.
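Two ingredients of the simulation above can be illustrated compactly: sampling order locations by rejection from $f$ (whose maximum on the unit square is $1.6$), and converting Manhattan distances into travel minutes, since one unit is 30 km and trucks travel at 30 km/h. The following is a simplified sketch, not the SimOpt implementation; function names are ours:

```python
import random

def sample_order_location(rng):
    """Rejection-sample an order location from
    f(x, y) = 1.6 - (|x - 0.8| + |y - 0.8|) on the unit square
    (f attains its maximum 1.6 at (0.8, 0.8))."""
    while True:
        x, y = rng.random(), rng.random()
        if rng.random() * 1.6 <= 1.6 - (abs(x - 0.8) + abs(y - 0.8)):
            return x, y

def travel_minutes(warehouse, order):
    """Manhattan travel time in minutes:
    1 unit = 30 km at 30 km/h, i.e. 60 minutes per unit of distance."""
    return 60.0 * (abs(warehouse[0] - order[0])
                   + abs(warehouse[1] - order[1]))
```

For instance, an order at $(0.8, 0.8)$ served from the warehouse at $(0.5, 0.6)$ lies at Manhattan distance $0.5$ units (15 km), i.e., a 30-minute trip, which is within the 60-minute target.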
The performance of each tested budget allocation procedure is presented in Figure \ref{fig:FACLOC} and Table \ref{table: performance comparison}. \begin{figure} \caption{(Color online) Performance comparison of the five procedures on Example 1} \label{fig:sfig1} \end{figure} \begin{figure} \caption{(Color online) Performance comparison of the five procedures on Example 2} \label{fig:sfig2} \end{figure} \begin{figure} \caption{(Color online) Performance comparison of the five procedures on Example 3} \label{fig:sfig3} \end{figure} \begin{figure} \caption{(Color online) Performance comparison of the five procedures on Example 4} \label{fig:sfig4} \end{figure} \begin{figure} \caption{(Color online) Performance comparison of the five allocation procedures on the facility location problem} \label{fig:FACLOC} \end{figure} \begin{table}[thp] \caption{Performance comparison of the five allocation procedures on different simulation budgets for synthetic examples} \label{table: performance comparison} \centering \begin{tabular}{@{}lccccccc@{}} \midrule \multirow{2}{*}{Example 1} & \multicolumn{7}{c}{Simulation budget} \\ \cmidrule(l){2-8} & 50 & 100 & 200 & 400 & 600 & 800 & 1000 \\ \midrule EA & 0.425 & 0.523 & 0.631 & 0.744 & 0.805 & 0.846 & 0.876 \\ OCBA & 0.466 & 0.623 & 0.749 & 0.856 & 0.906 & 0.934 & 0.950 \\ AOAP & 0.492 & 0.643 & 0.760 & 0.857 & 0.902 & 0.928 & 0.943 \\ FAA (ours) & 0.474 & 0.631 & 0.771 & 0.981 & 0.930 & 0.954 & 0.967 \\ DAA (ours) & 0.473 & 0.631 & 0.771 & 0.986 & 0.934 & 0.957 & 0.969 \\ \midrule \multirow{2}{*}{Example 2} & \multicolumn{7}{c}{Simulation budget} \\ \cmidrule(l){2-8} & 50 & 150 & 500 & 1000 & 1500 & 2000 & 3000 \\ \midrule EA & 0.389 & 0.505 & 0.654 & 0.753 & 0.811 & 0.850 & 0.900 \\ OCBA & 0.388 & 0.571 & 0.760 & 0.858 & 0.906 & 0.933 & 0.959 \\ AOAP & 0.404 & 0.583 & 0.751 & 0.844 & 0.892 & 0.919 & 0.949 \\ FAA (ours) & 0.398 & 0.589 & 0.789 & 0.890 & 0.935 & 0.955 & 0.974 \\ DAA (ours) & 0.396 & 0.586 & 0.792 & 0.895 & 0.938 & 
0.958 & 0.976 \\ \midrule \multirow{2}{*}{Example 3} & \multicolumn{7}{c}{Simulation budget} \\ \cmidrule(l){2-8} & 200 & 500 & 800 & 1000 & 2000 & 3000 & 5000 \\ \midrule EA & 0.281 & 0.382 & 0.443 & 0.375 & 0.581 & 0.643 & 0.725 \\ OCBA & 0.356 & 0.635 & 0.724 & 0.762 & 0.864 & 0.907 & 0.947 \\ AOAP & 0.429 & 0.672 & 0.755 & 0.791 & 0.886 & 0.924 & 0.955 \\ FAA (ours) & 0.383 & 0.677 & 0.775 & 0.814 & 0.912 & 0.945 & 0.970 \\ DAA (ours) & 0.382 & 0.679 & 0.782 & 0.822 & 0.920 & 0.953 & 0.974 \\ \midrule \multirow{2}{*}{Example 4} & \multicolumn{7}{c}{Simulation budget ($ \times 10^3$)} \\ \cmidrule(l){2-8} & 6 & 8 & 10 & 20 & 30 & 60 & 80 \\ \midrule EA & 0.089 & 0.115 & 0.145 & 0.290 & 0.414 & 0.675 & 0.780 \\ OCBA & 0.591 & 0.631 & 0.658 & 0.736 & 0.768 & 0.818 & 0.834 \\ AOAP & 0.734 & 0.772 & 0.791 & 0.841 & 0.862 & 0.892 & 0.902 \\ FAA (ours) & 0.733 & 0.789 & 0.813 & 0.877 & 0.906 & 0.931 & 0.940 \\ DAA (ours) & 0.729 & 0.788 & 0.817 & 0.879 & 0.910 & 0.947 & 0.954 \\ \midrule \multirow{2}{*}{\makecell[l]{Facility \\location \\problem}} & \multicolumn{7}{c}{Simulation budget} \\ \cmidrule(l){2-8} & 40 & 120 & 200 & 300 & 500 & 700 & 800 \\ \midrule EA & 0.458 & 0.600 & 0.673 & 0.731 & 0.805 & 0.853 & 0.872 \\ OCBA & 0.501 & 0.715 & 0.805 & 0.863 & 0.922 & 0.952 & 0.959 \\ AOAP & 0.509 & 0.740 & 0.813 & 0.869 & 0.922 & 0.947 & 0.957 \\ FAA (ours) & 0.504 & 0.731 & 0.827 & 0.880 & 0.942 & 0.962 & 0.971 \\ DAA (ours) & 0.502 & 0.743 & 0.830 & 0.889 & 0.943 & 0.968 & 0.976 \\ \bottomrule \end{tabular} \end{table} \subsection{Discussion on Experiment Results} From Figures \ref{fig:sfig1}-\ref{fig:FACLOC} and Table \ref{table: performance comparison}, we can see that both FAA and DAA have the best overall performances across all tested examples. This observation verifies the benefits of the proposed budget-adaptive allocation rule. AOAP performs the best at the beginning, but it is surpassed by both FAA and DAA when the simulation budget is relatively large.
This observation corresponds to the fact that AOAP is a myopic procedure that maximizes the one-step-ahead improvement but cannot adapt to the simulation budget. When the problem scale is small (e.g., Examples 1-3 and the facility location problem), AOAP and OCBA are comparable. However, when the problem scale is relatively large (e.g., Example 4), AOAP performs better than OCBA. It is interesting to see in Example 2 that competitive designs (that are hard to distinguish from the best design) are assigned large budget allocation ratios by OCBA due to their large variances. However, based on the finite budget size, the budget allocation ratios assigned to competitive designs and non-competitive designs should be discounted and increased, respectively. Therefore, both FAA and DAA, which consider the impact of the simulation budget, perform better than OCBA on Example 2 (see Figure \ref{fig:sfig2}). Since EA incorporates no sample information and is a pure random sampling procedure, it is dominated by the other four procedures. The reason why both FAA and DAA have superior performances is that they consider the impact of the budget size on the budget allocation strategy and possess not only desirable finite-budget properties but also asymptotic optimality. In Table \ref{table: performance comparison}, we can see that FAA performs slightly better than DAA when the simulation budget is relatively small. This verifies the benefit of anchoring the final budget. However, FAA is outperformed by DAA after the simulation budget grows beyond 400, 500, 500, 10,000, and 120 in Examples 1, 2, 3, 4, and the facility location problem, respectively. These observations illustrate that, when the simulation budget $T$ or the problem scale is relatively small, FAA is preferred; otherwise, DAA is recommended. As shown in Table \ref{table: Average runtime of the five allocation procedures (in seconds)}, EA is the fastest procedure as it does not utilize any sample information.
The average runtimes of FAA and DAA are almost the same; they are longer than those of EA and OCBA but of the same order of magnitude as OCBA. The reason why both FAA and DAA are more time-consuming than OCBA is that, at each iteration, they require additional computational time on $\alpha_i(T)$ to account for the impact of the budget size on the budget allocation ratios. Furthermore, the average runtimes of AOAP are comparable with those of FAA and DAA when the problem scale is relatively small, e.g., in Examples 1-2. However, as the problem scale becomes large, e.g., in Examples 3-4, the average runtime of AOAP increases drastically and is much larger than those of FAA and DAA. At each iteration, AOAP requires $(k-1)^2$ pairwise comparisons, while both FAA and DAA require $k-1$ pairwise comparisons, which is the same as OCBA. Since pairwise comparisons are very time-consuming, both FAA and DAA are more computationally efficient and more suitable for solving large-scale problems than AOAP, even though AOAP performs better when the simulation budget is relatively small. The average runtimes of the five competing procedures are very close in the experiments on the facility location problem. This result implies that, in cases where the complexity of the real system is high and the simulation time of such a system is relatively long, the additional runtime required for calculating $\alpha_i(T)$ in FAA and DAA, as well as the computational burden caused by the many pairwise comparisons in AOAP, is negligible. Note that industrial systems in the real world can be much more complex than the logistic system considered in this paper. Therefore, both FAA and DAA, as well as AOAP, are competitive in real industrial applications. These results verify the effectiveness and applicability of our proposed FAA and DAA procedures.
\begin{table}[thp] \caption{Average runtime of the five allocation procedures (in seconds)} \label{table: Average runtime of the five allocation procedures (in seconds)} \centering \begin{tabular}{@{}lccccc@{}} \toprule & EA & OCBA & AOAP & FAA (ours) & DAA (ours) \\ \midrule Example 1 & 0.001 & 0.006 & 0.015 & 0.012 & 0.012 \\ Example 2 & 0.001 & 0.003 & 0.007 & 0.006 & 0.006 \\ Example 3 & 0.005 & 0.030 & 0.335 & 0.057 & 0.058 \\ Example 4 & 0.009 & 2.780 & 107.061 & 7.802 & 7.842 \\ \makecell[l]{Facility \\location \\problem} & 83.344 & 85.332 & 85.377 & 85.440 & 85.347\\ \bottomrule \end{tabular} \end{table} \section{Conclusion} \label{conclusion} In this paper, we consider a simulation-based R\&S problem of selecting the best system design from a finite set of alternatives under a fixed-budget setting. We propose a budget allocation rule that is adaptive to the simulation budget and possesses both desirable finite-budget properties and asymptotic optimality. Based on the proposed budget-adaptive allocation rule, two heuristic algorithms, FAA and DAA, are developed. In the numerical experiments, both FAA and DAA clearly outperform the other competing procedures. We highlight the most important implication of our contributions: the simulation budget significantly impacts the budget allocation strategy, and a desirable budget allocation rule should be adaptive to the simulation budget. The proposed budget-adaptive allocation rule indicates that, compared with the asymptotically optimal OCBA allocation rule, the budget allocation ratios of non-best designs that are hard to distinguish from the best design should be discounted, while the budget allocation ratios of those that are easy to distinguish from the best should be increased. These adjustments are based on the simulation budget, which is often limited and finite.
Therefore, the budget allocation strategy should be sensitive to the simulation budget, which highlights the significant difference between finite-budget and sufficiently-large-budget allocation strategies. We believe these findings can help and motivate researchers to develop more efficient finite-budget allocation rules in future studies. The budget allocation problem can essentially be formulated as a stochastic dynamic programming (DP) problem. Both FAA and DAA are efficient procedures that can adapt to the simulation budget, but they ignore the dynamic feedback of the final step while sampling at the current step. Recently, in a preliminary version of their work, Qin, Hong, and Fan \cite{qin2022non} formulate the problem as a DP and investigate a non-myopic knowledge gradient (KG) procedure, which can dynamically look multiple steps ahead and take the dynamic feedback mechanism into consideration. However, exactly solving the DP is intractable due to the extremely high computational cost caused by the “curse of dimensionality”. As a result, how to derive a computationally tractable allocation rule that incorporates the dynamic feedback mechanism remains a critical future direction. \section*{Appendix} \begingroup \allowdisplaybreaks \subsection{Proof of Lemmas} \subsubsection{Proof of Lemma 1} \label{Proof of Lemma 1} The constraints of Problem $\mathcal{P}1$ are all affine functions of $w_i$, for $i \in \mathcal{K}$. Furthermore, showing that $\text{APCS}$ is concave is equivalent to showing that \begin{equation*} g(w) = \sum_{i \in \mathcal{K}^{\prime}} \Phi \left( - \frac{\delta_{i,b}}{\sigma_{i,b}} \right), \end{equation*} is a convex function of $w$. To verify the convexity of $g(w)$, we need to show that its Hessian matrix is positive semi-definite.
The Hessian matrix for $g(w)$ is \begin{equation*} \nabla^2 g(w) = \begin{pmatrix} \frac{\partial^{2} g(w)}{\partial^2 w_1} & \frac{\partial^{2} g(w)}{\partial w_1 \partial w_2} & \cdots & \frac{\partial^{2} g(w)}{\partial w_1 \partial w_k} \\ \frac{\partial^{2} g(w)}{\partial w_2 \partial w_1} & \frac{\partial^{2} g(w)}{\partial^2 w_2} & \cdots & \frac{\partial^{2} g(w)}{\partial w_2 \partial w_k} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^{2} g(w)}{\partial w_k \partial w_1} & \frac{\partial^{2} g(w)}{\partial w_k \partial w_2} & \cdots & \frac{\partial^{2} g(w)}{\partial^2 w_k} \end{pmatrix}, \end{equation*} where, for $i, j \in \mathcal{K}^{\prime}$ with $i \neq j$, and for the best design $b$, \begin{align*} \frac{\partial^{2} g(w)}{\partial w_i^{2}}&= \frac{1}{2 \sqrt{2 \pi}} \exp \left(-\frac{\delta_{i,b}^{2}}{2 \sigma_{i,b}^{2}}\right)\\ & \quad \times\left[\frac{\delta_{i,b}}{\sigma_{i,b}^{5} T^{2}} \frac{\sigma_{i}^{4}}{w_i^{4}}\left(\frac{1}{2} \frac{\delta_{i,b}^{2}}{\sigma_{i,b}^{2}}-\frac{3}{2}\right)+2 \frac{\delta_{i,b}}{\sigma_{i,b}^{3} T} \frac{\sigma_{i}^{2}}{w_i^{3}}\right], \\ \frac{\partial^{2} g(w)}{\partial w_b^{2}}&= \frac{1}{2 \sqrt{2 \pi}} \sum\limits_{i \in \mathcal{K}^{\prime}} \exp \left(-\frac{\delta_{i,b}^{2}}{2 \sigma_{i,b}^{2}}\right) \\ &\quad\times\left[\frac{\delta_{i,b}}{\sigma_{i,b}^{5} T^{2}} \frac{\sigma_{b}^{4}}{w_b^{4}}\left(\frac{1}{2} \frac{\delta_{i,b}^{2}}{\sigma_{i,b}^{2}}-\frac{3}{2}\right)+2 \frac{\delta_{i,b}}{\sigma_{i,b}^{3} T} \frac{\sigma_{b}^{2}}{w_b^{3}}\right], \\ \frac{\partial^{2} g(w)}{\partial w_i \partial w_b}&= \frac{1}{2 \sqrt{2 \pi}} \exp \left(-\frac{\delta_{i,b}^{2}}{2 \sigma_{i,b}^{2}}\right) \\ &\quad\times\left[\frac{\delta_{i,b}}{\sigma_{i,b}^{5} T^{2}} \frac{\sigma_{i}^{2}}{w_i^{2}} \frac{\sigma_{b}^{2}}{w_b^{2}}\left(\frac{1}{2} \frac{\delta_{i,b}^{2}}{\sigma_{i,b}^{2}}-\frac{3}{2}\right)\right],\\ \frac{\partial^{2} g(w)}{\partial w_i \partial w_{j}} &= 0.
\end{align*} Furthermore, for any non-zero vector $a\in \mathbb{R}^k$, we have \begin{align*} &a^{T} \nabla^2 g(w) a \\ &= \sum\limits_{i \in \mathcal{K}^{\prime}} \frac{\partial^{2} g(w)}{\partial w_i^2} a_i^2 + \frac{\partial^{2} g(w)}{\partial w_b^{2}} a_b^2 \\ & \quad + 2\sum\limits_{i \in \mathcal{K}^{\prime}} \frac{\partial^{2} g(w)}{\partial w_i \partial w_b} a_i a_b\\ &= \frac{1}{2 \sqrt{2 \pi}} \sum\limits_{i \in \mathcal{K}^{\prime}} \exp \left(-\frac{\delta_{i,b}^{2}}{2 \sigma_{i,b}^{2}}\right)\\ & \quad \times a_i^2 \left[\frac{\delta_{i,b}}{\sigma_{i,b}^{5} T^{2}} \frac{\sigma_{i}^{4}}{w_i^{4}}\left(\frac{1}{2} \frac{\delta_{i,b}^{2}}{\sigma_{i,b}^{2}}-\frac{3}{2}\right)+2 \frac{\delta_{i,b}}{\sigma_{i,b}^{3} T} \frac{\sigma_{i}^{2}}{w_i^{3}}\right]\\ &\quad + \frac{1}{2 \sqrt{2 \pi}} \sum\limits_{i \in \mathcal{K}^{\prime}} \exp \left(-\frac{\delta_{i,b}^{2}}{2 \sigma_{i,b}^{2}}\right) \\ &\quad\times a_b^2 \left[\frac{\delta_{i,b}}{\sigma_{i,b}^{5} T^{2}} \frac{\sigma_{b}^{4}}{w_b^{4}}\left(\frac{1}{2} \frac{\delta_{i,b}^{2}}{\sigma_{i,b}^{2}}-\frac{3}{2}\right)+2 \frac{\delta_{i,b}}{\sigma_{i,b}^{3} T} \frac{\sigma_{b}^{2}}{w_b^{3}}\right]\\ &\quad + \frac{1}{2 \sqrt{2 \pi}} \sum\limits_{i \in \mathcal{K}^{\prime}} \exp \left(-\frac{\delta_{i,b}^{2}}{2 \sigma_{i,b}^{2}}\right) \\ &\quad\times 2 a_i a_b \left[\frac{\delta_{i,b}}{\sigma_{i,b}^{5} T^{2}} \frac{\sigma_{i}^{2}}{w_i^{2}} \frac{\sigma_{b}^{2}}{w_b^{2}}\left(\frac{1}{2} \frac{\delta_{i,b}^{2}}{\sigma_{i,b}^{2}}-\frac{3}{2}\right)\right] \\ &= \frac{1}{2 \sqrt{2 \pi}} \sum\limits_{i \in \mathcal{K}^{\prime}} \exp \left(-\frac{\delta_{i,b}^{2}}{2 \sigma_{i,b}^{2}}\right) \\ & \quad \times \frac{\delta_{i,b}}{\sigma_{i,b}^{3} T} \left[\frac{1}{\sigma_{i,b}^{2} T} \left(\frac{1}{2} \frac{\delta_{i,b}^{2}}{\sigma_{i,b}^{2}}-\frac{3}{2}\right) \left(\frac{\sigma_{i}^{2}}{w_i^{2}} a_i + \frac{\sigma_{b}^{2}}{w_b^{2}} a_b \right)^2 \right.\\ &\left.\quad \quad + 2 \left( 
\frac{\sigma_{i}^{2}}{w_i^{3}} a_i^2 + \frac{\sigma_{b}^{2}}{w_b^{3}} a_b^2 \right) \right]. \\ \end{align*} Since \begin{align*} 2 \left( \frac{\sigma_{i}^{2}}{w_i^{3}} a_i^2 + \frac{\sigma_{b}^{2}}{w_b^{3}} a_b^2 \right) & = 2 \frac{\sigma_{i,b}^2}{\sigma_{i,b}^2} \left( \frac{\sigma_{i}^{2}}{w_i^{3}} a_i^2 + \frac{\sigma_{b}^{2}}{w_b^{3}} a_b^2 \right)\\ & = \frac{2}{\sigma_{i,b}^2 T} \left(\frac{\sigma_i^2}{w_i} + \frac{\sigma_b^2}{w_b} \right) \left( \frac{\sigma_{i}^{2}}{w_i^{3}} a_i^2 + \frac{\sigma_{b}^{2}}{w_b^{3}} a_b^2 \right)\\ & = \frac{2}{\sigma_{i,b}^2 T} \left[\left(\frac{\sigma_{i}^{2}}{w_i^{2}} a_i + \frac{\sigma_{b}^{2}}{w_b^{2}} a_b \right)^2 \right.\\ &\left. \quad \quad+ \frac{\sigma_i^2 \sigma_b^2}{w_i w_b} \left( \frac{a_i}{w_i} - \frac{a_b}{w_b} \right)^2\right], \\ \end{align*} we have \begin{align*} &\frac{1}{\sigma_{i,b}^{2} T} \left(\frac{1}{2} \frac{\delta_{i,b}^{2}}{\sigma_{i,b}^{2}}-\frac{3}{2}\right) \left(\frac{\sigma_{i}^{2}}{w_i^{2}} a_i+ \frac{\sigma_{b}^{2}}{w_b^{2}} a_b \right)^2\\ & \quad + 2 \left( \frac{\sigma_{i}^{2}}{w_i^{3}} a_i^2 + \frac{\sigma_{b}^{2}}{w_b^{3}} a_b^2 \right) \\ = & \frac{1}{\sigma_{i,b}^{2} T} \left[ \left(\frac{1}{2} \frac{\delta_{i,b}^{2}}{\sigma_{i,b}^{2}}-\frac{3}{2}\right) \left(\frac{\sigma_{i}^{2}}{w_i^{2}} a_i + \frac{\sigma_{b}^{2}}{w_b^{2}} a_b \right)^2 \right.\\ &\left. \quad+ 2 \left(\frac{\sigma_{i}^{2}}{w_i^{2}} a_i + \frac{\sigma_{b}^{2}}{w_b^{2}} a_b \right)^2 + 2 \frac{\sigma_i^2 \sigma_b^2}{w_i w_b} \left( \frac{a_i}{w_i} - \frac{a_b}{w_b} \right)^2\right] \\ = & \frac{1}{\sigma_{i,b}^{2} T} \left[ \left(\frac{1}{2} \frac{\delta_{i,b}^{2}}{\sigma_{i,b}^{2}}+\frac{1}{2}\right) \left(\frac{\sigma_{i}^{2}}{w_i^{2}} a_i + \frac{\sigma_{b}^{2}}{w_b^{2}} a_b \right)^2 \right.\\ & \left. 
\quad + 2 \frac{\sigma_i^2 \sigma_b^2}{w_i w_b} \left( \frac{a_i}{w_i} - \frac{a_b}{w_b} \right)^2\right]\\ \geq & 0, \end{align*} and therefore $\nabla^2 g(w) \succeq 0$; that is, $g(w)$ is a convex function of $w$. Since the two constraints $\sum_{i \in \mathcal{K}} w_i = 1$ and $w_i \geq 0$, for $i \in \mathcal{K}$, define a convex feasible set, Problem $\mathcal{P}1$ is a convex optimization problem. This concludes the proof. \subsubsection{Proof of Lemma 2} By condition $\widehat{\mathcal{C}}_2$, for $i \in \mathcal{K}^{\prime}$, we have \begin{equation} \label{proof p1 w_i} \begin{split} w_i &= \frac{\lambda - 2 \log I_i}{\frac{T}{I_i} + \frac{1}{w^*_i}} \\ &= \frac{I_i (\lambda - 2 \log I_i)}{T+S} , \end{split} \end{equation} where $S = \sum_{i \in \mathcal{K}} I_i$ and $w^*_i = I_i / S$. We substitute $w_i$ provided by \eqref{proof p1 w_i} into condition $\mathcal{C}_1$ and obtain \begin{equation} \label{proof p1 w_b} w_b = \sigma_b \sqrt{ \sum\limits_{i \in \mathcal{K}^{\prime}} \frac{I_i^2(\lambda - 2 \log I_i )^2}{\sigma_i^2 (T+S)^2} }. \end{equation} For $i \in \mathcal{K}^{\prime}$, by condition $\mathcal{C}_3$, \eqref{proof p1 w_i}, and \eqref{proof p1 w_b}, we obtain \begin{equation} \label{verify lambda feasibility} \sum\limits_{i \in \mathcal{K}^{\prime}} \frac{I_i (\lambda - 2 \log I_i)}{T+S} + \sigma_b \sqrt{ \sum\limits_{i \in \mathcal{K}^{\prime}} \frac{I_i^2(\lambda - 2 \log I_i )^2}{\sigma_i^2 \left(T+S\right)^2} } = 1. \end{equation} Then, we have \begin{equation*} \lambda =\left\{\begin{array}{ll} \frac{- q + \sqrt{q^2 - 4 p r}}{2 p} & \quad \text{if} \ w^*_b \neq \frac{1}{2} \\ \frac{4\sum_{i \in \mathcal{K}^{\prime}} I_i \log I_i + T + S}{2\sum_{i \in \mathcal{K}^{\prime}} I_i} & \quad \text{if} \ w^*_b = \frac{1}{2} \\ \end{array}\right.
\end{equation*} where \begin{equation*} \begin{split} p &= S(2 I_b - S), \\ q &= - 4 \sigma_b^2 \sum\limits_{i \in \mathcal{K}^{\prime}} \frac{I_i^2 \log I_i }{\sigma_i^2 } \\ &\quad\quad+ 2 (S- I_b) \left(2 \sum\limits_{i \in \mathcal{K}^{\prime}} I_i \log I_i + T+S \right), \\ r &= 4 \sigma_b^2 \sum\limits_{i \in \mathcal{K}^{\prime}} \frac{I_i^2 \log^2 I_i }{\sigma_i^2 } - \left( 2 \sum\limits_{i \in \mathcal{K}^{\prime}} I_i \log I_i + T+S \right)^2. \end{split} \end{equation*} Let $\alpha_i(T) = \frac{\lambda - 2 \log I_i}{1 + T/S}$ for $i \in \mathcal{K}^{\prime}$, and substitute $\alpha_i(T)$ into \eqref{proof p1 w_i}. Lemma 2 is then proved. \subsubsection{Proof of Lemma 3} We now consider condition $\mathcal{C}_4$. For the best design $b$, its budget allocation ratio $W_b(T)$ is always non-negative. For the non-best designs $i \in \mathcal{K}^{\prime}$, requiring $W_i(T) \geq 0$ gives $\lambda \geq 2 \log I_i$ for all $i \in \mathcal{K}^{\prime}$, which is equivalent to $\lambda \geq 2 \log I_{\langle k - 1 \rangle}$. If $w^*_b = 1/2$, then \begin{equation*} \lambda = \frac{4\sum_{i \in \mathcal{K}^{\prime}} I_i \log I_i + T + S}{2\sum_{i \in \mathcal{K}^{\prime}} I_i} \geq 2 \log I_{\langle k - 1 \rangle}, \end{equation*} and it can be checked that $T \geq 4 \sum_{i \in \mathcal{K}^{\prime}} I_i \log(I_{\langle k - 1 \rangle}/I_i) - S $. If $w^*_b \neq 1/2$, then \begin{equation*} \lambda = \frac{- q + \sqrt{q^2 - 4 p r}}{2 p} \geq 2 \log I_{\langle k - 1 \rangle}, \end{equation*} where $p$, $q$, and $r$ are given in Lemma 2. Case 1: If $ p = S^2(2w_b^* - 1) > 0$, i.e., $w_b^* > 1/2$, we need to solve the following inequality \begin{equation} \label{proof p2 case1} \sqrt{q^2 - 4 p r} \geq 4 p \log I_{\langle k - 1 \rangle} + q.
\end{equation} Additionally, \begin{equation*} \begin{split} &4 p \log I_{\langle k - 1 \rangle} + q \\ =& 4 \sum\limits_{i \in \mathcal{K}^{\prime}} \left(\frac{\sigma^2_b (w_i^*)^2}{\sigma_i^2} - (1-w_b^*)w_i^* \right) \log \frac{ I_{\langle k - 1 \rangle}}{I_i} \\ &\quad + 2(1-w_b^*)\frac{T+S}{S}.\\ \end{split} \end{equation*} Furthermore, we define \begin{equation*} T_1 = 2 \sum\limits_{i \in \mathcal{K}^{\prime}} \left[\frac{\sigma^2_b I_i^2}{\sigma_i^2 (S - I_b)} - I_i \right] \log \frac{I_{\langle k - 1 \rangle}}{I_i} - S. \end{equation*} When $T < T_1$, we have $4 p \log I_{\langle k - 1 \rangle} + q < 0$, so Inequality \eqref{proof p2 case1} always holds. Otherwise, when $T \geq T_1$, we square both sides of \eqref{proof p2 case1} and obtain \begin{equation*} T \in (- \infty, T_3] \cup [T_2,+\infty), \end{equation*} where \begin{equation*} \begin{split} T_2 &= 2 \left[\sum\limits_{i \in \mathcal{K}^{\prime}} I_i \log \frac{I_{\langle k - 1 \rangle}}{I_i} \right.\\ &\left.\quad\quad+ \sigma_b \sqrt{ \sum\limits_{i \in \mathcal{K}^{\prime}} \frac{ I_i^2 }{\sigma_i^2 } \left( \log \frac{I_{\langle k - 1 \rangle}}{I_i} \right)^2 } \right]- S,\\ T_3 &= 2 \left[\sum\limits_{i \in \mathcal{K}^{\prime}} I_i \log \frac{I_{\langle k - 1 \rangle}}{I_i} \right.\\ &\left. \quad\quad - \sigma_b \sqrt{ \sum\limits_{i \in \mathcal{K}^{\prime}} \frac{ I_i^2 }{\sigma_i^2 } \left( \log \frac{I_{\langle k - 1 \rangle}}{I_i} \right)^2 } \right]- S.\\ \end{split} \end{equation*} It can be checked that $T_2$ is strictly positive. Since $\lim_{T \rightarrow \infty} W_i(T) = w_i^* \geq 0$ for $i \in \mathcal{K}^{\prime}$, we must allow $T$ to be sufficiently large. Therefore, a sufficient condition for $W(T)$ being feasible, when $w_b^* > 1/2$, is $T\geq \max\{T_1,T_2\}$. Case 2: If $ p = S^2(2w_b^* - 1) < 0$, i.e.
$w_b^* < 1/2$, we similarly need to solve the following inequality \begin{equation} \label{proof p2 case2} \sqrt{q^2 - 4 p r} \leq 4 p \log I_{\langle k - 1 \rangle} + q. \end{equation} When $T < T_1 $, Inequality \eqref{proof p2 case2} never holds. Otherwise, we square both sides of \eqref{proof p2 case2} and, similarly, obtain $T \in (- \infty, T_3] \cup [T_2,+\infty)$. Therefore, a sufficient condition for $W(T)$ being feasible, when $w_b^* < 1/2$, is $T\geq \max\{T_1,T_2\}$. Hence, the solution $W(T)$ is always feasible if $T\geq T_0$, where \begin{equation*} T_0 =\left\{\begin{array}{ll} \max\{T_1,T_2\} & \quad \text{if} \ w^*_b \neq \frac{1}{2} \\ 4 \sum_{i \in \mathcal{K}^{\prime}} I_i \log\frac{I_{\langle k - 1 \rangle}}{I_i} - S & \quad \text{if} \ w^*_b = \frac{1}{2} \\ \end{array}\right. \end{equation*} These results conclude the proof. \subsection{Proof of Theorems} \subsubsection{Proof of Theorem 1} Let $\lambda$ and $\nu_i$, for $i \in \mathcal{K}$, be the Lagrange multipliers. The KKT conditions for Problem $\mathcal{P}1$ are as follows \begin{equation} \label{KKT-1} -\frac{1}{2 \sqrt{2 \pi}} \exp \left(-\frac{\delta_{i,b}^2}{2 \sigma_{i,b}^2}\right) \frac{\delta_{i,b}}{\sigma_{i,b}^3} \frac{\sigma_i^2}{T w_i^2}-\lambda-\nu_i=0, \ i \in \mathcal{K}^{\prime}, \end{equation} \begin{equation} \label{KKT-2} -\frac{1}{2 \sqrt{2 \pi}} \frac{\sigma_b^2}{T w_b^2}\left[\sum_{i \in \mathcal{K}^{\prime}} \exp \left(-\frac{\delta_{i,b}^2}{2 \sigma_{i,b}^2}\right) \frac{\delta_{i,b}}{\sigma_{i,b}^3}\right] -\lambda-\nu_b = 0, \end{equation} \begin{equation} \label{KKT-3} \nu_i w_i = 0, \quad i \in \mathcal{K}, \end{equation} \begin{equation} \label{KKT-4} \sum\limits_{i \in \mathcal{K}} w_i = 1, \end{equation} \begin{equation} \label{KKT-5} w_i \geq 0, \quad i \in \mathcal{K}. \end{equation} Assuming each $w_i$ is strictly positive, we have $\nu_i = 0$ for $i \in \mathcal{K}$.
Condition $\mathcal{C}_1$ can be derived by substituting \eqref{KKT-1} into \eqref{KKT-2}. Taking logarithms on both sides of \eqref{KKT-1} yields condition $\mathcal{C}_2$. Equations \eqref{KKT-4} and \eqref{KKT-5} are conditions $\mathcal{C}_3$ and $\mathcal{C}_4$, respectively, both of which guarantee the feasibility of Problem $\mathcal{P}1$. \subsection{Proof of Propositions} \subsubsection{Proof of Proposition 1} We first show that for any pair of non-best designs $i,j \in \mathcal{K}^{\prime}$ with $i \neq j$, there exists a positive constant $c_{i,j} > 0$ such that $w_i/w_j \leq c_{i,j}$. We prove this by contradiction. Assume that there exists a pair of designs $i,j \in \mathcal{K}^{\prime}$ with $i \neq j$ such that $w_i/w_j$ cannot be bounded above, i.e., $w_i/w_j \rightarrow \infty$. Since $w_i, w_j \in [0,1]$, it can be checked that $w_j \rightarrow 0$. By condition $\mathcal{C}_2$ in Theorem 1 \begin{equation} \label{proof of proposition verifying: eqn 1} \begin{split} &\left[\frac{\delta_{j,b}^2}{ (\sigma_j^2/w_j + \sigma_b^2/w_b) } - \frac{\delta_{i,b}^2}{ (\sigma_i^2/w_i + \sigma_b^2/w_b) } \right] \frac{T}{2} \\ & \quad \quad = \log \frac{\delta_{j,b} (\sigma_j^2/w^2_j) (\sigma_i^2/w_i + \sigma_b^2/w_b)^{\frac{3}{2}}}{\delta_{i,b} (\sigma_i^2/w^2_i) (\sigma_j^2/w_j + \sigma_b^2/w_b)^{\frac{3}{2}} }. \end{split} \end{equation} As $w_j \rightarrow 0$, the term $\frac{\delta_{j,b}^2}{ (\sigma_j^2/w_j + \sigma_b^2/w_b) }$ in \eqref{proof of proposition verifying: eqn 1} vanishes, while the right-hand side of \eqref{proof of proposition verifying: eqn 1} approaches infinity. Then, as $w_j \rightarrow 0$, we would need \begin{equation*} - \frac{\delta_{i,b}^2}{ 2(\sigma_i^2/w_i + \sigma_b^2/w_b) } T = + \infty, \end{equation*} which can hold only if $\sigma_i^2/w_i + \sigma_b^2/w_b \rightarrow 0^-$. However, this contradicts the fact that $\sigma_i^2/w_i + \sigma_b^2/w_b > 0$.
Thus, the assumption is false, and for any pair of designs $i,j \in \mathcal{K}^{\prime}$ with $i \neq j$, there must exist a positive constant $c_{i,j} > 0$ such that $w_i/w_j \leq c_{i,j}$. Let $w_{\text{min}}$ be the minimum budget allocation ratio of non-best designs, i.e., $w_{\text{min}} = \min_{i \in \mathcal{K}^{\prime}} w_i$. Then there exists a positive constant $c$ such that $w_i \leq c w_{\text{min}}$, for $i \in \mathcal{K}^{\prime}$. By condition $\mathcal{C}_1$ in Theorem 1 \begin{equation*} w_b = \sigma_b \sqrt{\sum\limits_{i \in \mathcal{K}^{\prime}} \frac{w_i^{2}}{\sigma_i^2}} \geq \frac{ \sigma_b\sqrt{k-1} }{\bar{\sigma}} w_{\text{min}}. \end{equation*} Then, for any non-best design $i \in \mathcal{K}^{\prime}$, we have \begin{equation*} \frac{w_i}{w_b} \leq \frac{w_i}{w_{\text{min}}} \frac{\bar{\sigma}}{\sigma_b \sqrt{k-1} } \leq \frac{\bar{\sigma}}{\underline{\sigma}} \frac{c}{\sqrt{k-1}}. \end{equation*} Therefore, $w_i/w_b = \mathcal{O}(1/\sqrt{k})$ for $i \in \mathcal{K}^{\prime}$, which concludes the proof. \subsubsection{Proof of Proposition 2} According to Lemma 2, we have \begin{equation*} \alpha_i(T) = \frac{\lambda - 2 \log I_i}{1 + T/S}, \quad i \in \mathcal{K}^{\prime}, \end{equation*} which is monotonically decreasing in $I_i$. Since $I_{\langle 1 \rangle} \leq I_{\langle 2 \rangle} \leq \dots \leq I_{\langle k - 1 \rangle}$, we have $\alpha_{\langle 1 \rangle}(T) \geq \alpha_{\langle 2 \rangle}(T) \geq \dots \geq \alpha_{\langle k - 1 \rangle}(T)$. Furthermore, if $I_{i}$, for $i \in \mathcal{K}^{\prime}$, are all equal, then $\lambda = 1 + T/S + 2 \log I_i$ and $\alpha_{\langle 1 \rangle}(T) = \alpha_{\langle 2 \rangle}(T) = \dots = \alpha_{\langle k - 1 \rangle}(T) = 1$. Conversely, if $\alpha_{\langle 1 \rangle}(T) = \alpha_{\langle 2 \rangle}(T) = \dots = \alpha_{\langle k - 1 \rangle}(T)$, it can be checked that all $I_i$s are equal.
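These properties can be checked numerically. In the sketch below the values of $I_i$, $\sigma_i$, $\sigma_b$ and $T$ are arbitrary illustrative assumptions, and $I_b$ is set to $\sigma_b \big(\sum_{i \in \mathcal{K}^{\prime}} I_i^2/\sigma_i^2\big)^{1/2}$, which is our reading of how condition $\mathcal{C}_1$ constrains the large-budget ratios $w_i^* = I_i/S$.

```python
import math

# Illustrative inputs (assumed, not from the paper).
sigma = [1.0, 1.5, 2.0]      # sigma_i, i in K'
I = [1.0, 2.0, 4.0]          # I_i, i in K', sorted increasingly
sigma_b, T = 1.2, 50.0

# I_b pinned down so that condition C1 holds at w_i^* = I_i / S.
I_b = sigma_b * math.sqrt(sum((Ii / si) ** 2 for Ii, si in zip(I, sigma)))
S = I_b + sum(I)

# Coefficients of the quadratic in lambda (Lemma 2, case w_b^* != 1/2).
p = S * (2 * I_b - S)
q = (-4 * sigma_b ** 2 * sum(Ii ** 2 * math.log(Ii) / si ** 2
                             for Ii, si in zip(I, sigma))
     + 2 * (S - I_b) * (2 * sum(Ii * math.log(Ii) for Ii in I) + T + S))
r = (4 * sigma_b ** 2 * sum((Ii * math.log(Ii) / si) ** 2
                            for Ii, si in zip(I, sigma))
     - (2 * sum(Ii * math.log(Ii) for Ii in I) + T + S) ** 2)

lam = (-q + math.sqrt(q * q - 4 * p * r)) / (2 * p)

# Finite-budget ratios W_i(T), W_b(T) and the factors alpha_i(T).
w = [Ii * (lam - 2 * math.log(Ii)) / (T + S) for Ii in I]
w_b = sigma_b * math.sqrt(sum((wi / si) ** 2 for wi, si in zip(w, sigma)))
alpha = [(lam - 2 * math.log(Ii)) / (1 + T / S) for Ii in I]
```

With these inputs the ratios sum to one (condition $\mathcal{C}_3$), the $\alpha_i(T)$ decrease as $I_i$ increases, and $\alpha_{\langle 1 \rangle}(T) \geq 1 \geq \alpha_{\langle k-1 \rangle}(T)$, in line with Proposition 2.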
We show $\alpha_{\langle 1 \rangle}(T) \geq 1 $ by contradiction, and $\alpha_{\langle k - 1 \rangle}(T) \leq 1$ can be proved similarly. Without loss of generality, we assume $\alpha_{\langle 1 \rangle}(T) < 1$. By the preceding analysis, we know $\alpha_{\langle k - 1 \rangle}(T) \leq \alpha_{\langle k-2 \rangle}(T) \leq \dots \leq \alpha_{\langle 1 \rangle}(T) < 1$. According to Lemma 2, we have $W_i(T) = w^*_i \alpha_{i}(T) < w^*_i $ for non-best designs $i \in \mathcal{K}^{\prime}$, and $W_b(T) = \sigma_b \sqrt{\sum_{i \in \mathcal{K}^{\prime}}\frac{(w_i^* \alpha_{i}(T) )^{2} }{\sigma_i^2}} < w^*_b$ for the best design $b$. Then we have $\sum_{i \in \mathcal{K}} W_i(T) < \sum_{i \in \mathcal{K}} w^*_i = 1$, which contradicts the budget constraint $\sum_{i \in \mathcal{K}} W_i(T) = 1$. Therefore, $\alpha_{\langle 1 \rangle}(T)$ must be greater than or equal to 1. These results conclude the proof. \endgroup \section*{References} \end{document}
\begin{document} \title{The Weighted Barycenter Drawing Recognition Problem} \author{Peter Eades\inst{1} \and Patrick Healy\inst{2} \and Nikola S. Nikolov\inst{2}} \institute{ University of Sydney, \email{[email protected]} \and University of Limerick \email{patrick.healy,[email protected]}} \maketitle \begin{abstract} We consider the question of whether a given graph drawing $\Gamma$ of a triconnected planar graph $G$ is a weighted barycenter drawing. We answer the question with an elegant arithmetic characterisation using the faces of $\Gamma$. This leads to positive answers when the graph is a Halin graph, and to a polynomial time recognition algorithm when the graph is cubic. \end{abstract} \section{Introduction}\label{se:intro} The \emph{barycenter algorithm} of Tutte~\cite{tutte60,tutte63} is one of the earliest and most elegant of all graph drawing methods. It takes as input a graph $G=(V,E)$, a subgraph $F_0 = (V_0, E_0) $ of $G$, and a position $\gamma_a$ for each $a \in V_0$. The algorithm simply places each vertex $v \in V-V_0$ at the barycenter of the positions of its neighbours. The algorithm can be seen as the grandfather of force-directed graph drawing algorithms, and can be implemented easily by solving a system of linear equations. If $G$ is a planar triconnected graph, $F_0$ is the outside face of $G$, and the positions $\gamma_a$ for $a \in V_0$ are chosen so that $F_0$ forms a convex polygon, then the drawing output by the barycenter algorithm is planar and each face is convex. The barycenter algorithm can be generalised to planar graphs with positive edge weights, placing each vertex $i$ of $V-V_0$ at the weighted barycenter of the neighbours of $i$. This generalisation preserves the property that the output is planar and convex~\cite{DBLP:journals/cagd/Floater97a}.
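Concretely, computing a weighted barycenter drawing amounts to one linear solve per coordinate. The following sketch (a toy triangular prism with arbitrarily chosen positive weights and outer-face positions; not an example from this paper) assembles and solves the system:

```python
import numpy as np

# Triangular prism: outer face F0 = {0,1,2} fixed on a triangle;
# inner vertices 3,4,5 are placed by the weighted barycenter equations.
# Only weights on internal edges enter the system.
weight = {(0, 3): 2.0, (1, 4): 1.0, (2, 5): 1.0,
          (3, 4): 1.0, (4, 5): 1.0, (5, 3): 1.0}
w = {}
for (i, j), wt in weight.items():
    w[(i, j)] = w[(j, i)] = wt

pos = {0: np.array([0.0, 0.0]), 1: np.array([4.0, 0.0]),
       2: np.array([2.0, 3.0])}
internal = [3, 4, 5]
nbrs = {3: [0, 4, 5], 4: [1, 3, 5], 5: [2, 3, 4]}
idx = {v: k for k, v in enumerate(internal)}

# Assemble A x = b, where A is the submatrix of the weighted Laplacian
# indexed by internal vertices; fixed neighbours move to the right side.
A = np.zeros((3, 3))
b = np.zeros((3, 2))
for v in internal:
    A[idx[v], idx[v]] = sum(w[(v, u)] for u in nbrs[v])
    for u in nbrs[v]:
        if u in idx:
            A[idx[v], idx[u]] -= w[(v, u)]
        else:
            b[idx[v]] += w[(v, u)] * pos[u]

sol = np.linalg.solve(A, b)   # solves both x- and y-systems at once
for v in internal:
    pos[v] = sol[idx[v]]
```

By construction, every internal vertex of the resulting drawing lies at the weighted barycenter of its neighbours.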
Further, weighted barycenter methods have been used in a variety of theoretical and practical contexts~\cite{DBLP:conf/gd/FraysseixM03,DBLP:journals/comgeo/VerdierePV03,DBLP:journals/corr/abs-0708-0964,Thomassen83}. Examples of weighted barycenter drawings (the same graph with different weights) are in Fig.~\ref{fi:example}. In this paper we investigate the following question: given a straight-line planar drawing $\Gamma$ of a triconnected planar graph $G$, can we compute weights for the edges of $G$ so that $\Gamma$ is the weighted barycenter drawing of $G$? We answer the question with an elegant arithmetic characterisation, using the faces of $\Gamma$. This yields positive answers when the graph is a Halin graph, and leads to a polynomial time algorithm when the graph is cubic. Our motivation in examining this question partly lies in the elegance of the mathematics, but it was also posed to us by Veronika Irvine (see \cite{DBLP:conf/gd/BiedlI17,tesselace}), who needed the characterisation to create and classify ``grounds'' for bobbin lace drawings; this paper is a first step in this direction. Further, we note that our result relates to the problem of morphing from one planar graph drawing to another (see \cite{DBLP:conf/gd/Barrera-CruzHL14,FloaterGotsman}). Previous work has characterised drawings that arise from the Schnyder algorithm (see~\cite{DBLP:conf/wg/BonichonGHI10}) in this context. Finally, we note that this paper is the first attempt to characterise drawings that are obtained from force-directed methods. \begin{figure} \caption{ Weighted barycenter drawings of the same graph embedding with different weights.
} \label{fi:example} \end{figure} \section{Preliminaries: the weighted barycenter algorithm}\label{se:prelim} Suppose that $G = (V,E)$ denotes a triconnected planar graph and $w$ is a \emph{weight function} that assigns a non-negative real weight $w_{ij}$ to each edge $(i,j) \in E$. We assume that the weights are positive unless otherwise stated. We denote $|V|$ by $n$ and $|E|$ by $m$. In this paper we discuss planar straight-line drawings of such graphs; such a drawing $\Gamma$ is specified by a position $\gamma_i$ for each vertex $i \in V$. We say that $\Gamma$ is \emph{convex} if every face is a convex polygon. Throughout this paper, $F_0$ denotes the outer face of a plane graph $G$. Denote the number of vertices on $F_0$ by $f_0$. In a convex drawing, the edges of $F_0$ form a simple convex polygon $P_0$. Some terminology is convenient: we say that an edge or vertex on $F_0$ is \emph{external}; a vertex that is not external is \emph{internal}; a face $F$ (respectively edge, $e$) is \emph{internal} if $F$ (resp. $e$) is incident to an internal vertex, and \emph{strictly internal} if every vertex incident to $F$ (resp. $e$) is internal. The \emph{weighted barycenter algorithm} takes as input a triconnected planar graph $G = (V,E)$ with a weight function $w$, together with $F_0$ and $P_0$, and produces a straight-line drawing $\Gamma$ of $G$ with $F_0$ drawn as $P_0$. Specifically, it assigns a position $\gamma_i$ to each internal vertex $i$ such that $\gamma_i$ is the weighted barycenter of its neighbours in $G$. That is: \begin{equation} \label{eq:bary0} \gamma_i = \frac{1}{\sum_{j \in N(i)} w_{ij}} \sum_{j \in N(i)} w_{ij} \gamma_j \end{equation} for each internal vertex $i$. Here $N(i)$ denotes the set of neighbours of $i$.
If $\gamma_i = (x_i, y_i)$ then (\ref{eq:bary0}) consists of $2(n-f_0)$ linear equations in the $2(n-f_0)$ unknowns $x_i , y_i$. The equations (\ref{eq:bary0}) are called the \emph{(weighted) barycenter equations} for $G$. Noting that the matrix involved is a submatrix of the Laplacian of $G$, one can show that the equations have a unique solution that can be found by traditional (see for example~\cite{Trefethen97}) or specialised (see for example~\cite{DBLP:journals/siamcomp/SpielmanT11}) methods. The weighted barycenter algorithm, which can be viewed as a force-directed method, was defined by Tutte~\cite{tutte60,tutte63} and extended by Floater~\cite{DBLP:journals/cagd/Floater97a}; the classic theorem says that the output is planar and convex: \begin{theorem} (Tutte~\cite{tutte60,tutte63}, Floater~\cite{DBLP:journals/cagd/Floater97a}) The drawing output by the weighted barycenter algorithm is planar, and each face is convex. \label{th:tutte} \end{theorem} \section{The Weighted Barycenter Recognition Problem} \label{se:TheProblemt} This paper discusses the problem of finding weights $w_{ij}$ so that a given drawing is the weighted barycenter drawing with these weights. More precisely, we say that a drawing $\Gamma$ is a \emph{weighted barycenter drawing} if there is a positive weight $w_{ij}$ for each internal edge $(i,j)$ such that for each internal vertex $i$, equations (\ref{eq:bary0}) hold. \begin{description} \item [{\bf The Weighted Barycenter Recognition problem}] \item [{\em Input:}] A straight-line planar drawing $\Gamma$ of a triconnected plane graph $G = (V,E)$, such that the vertices on the convex hull of $\{ \gamma_i : i \in V \}$ form a face of $G$. \item [{\em Question:}] Is $\Gamma$ a weighted barycenter drawing?
\end{description} Thus we are given the location $\gamma_i = (x_i , y_i)$ of each vertex, and we must compute a positive weight $w_{ij}$ for each edge so that the barycenter equations (\ref{eq:bary0}) hold for each internal vertex. Theorem~\ref{th:tutte} implies that if $\Gamma$ is a weighted barycenter drawing, then each face of the drawing is convex; however, the converse is false, even for triangulations (see Appendix). \section{Linear Equations for the Weighted Barycenter Recognition problem} In this section we show that the weighted barycenter recognition problem can be expressed in terms of linear equations. The equations use \emph{asymmetric} weights $z_{ij}$ for each edge $(i,j)$; that is, $z_{ij}$ is not necessarily the same as $z_{ji}$. To model this asymmetry we replace each undirected edge $(i,j)$ of $G$ with two directed edges $(i,j)$ and $(j,i)$; this gives a directed graph $\overrightarrow{G} = (V,\overrightarrow{E})$. For each vertex $i$, let $N^+(i)$ denote the set of \emph{out-neighbours} of $i$; that is, $N^+(i) = \{j \in V : (i,j) \in \overrightarrow{E} \}.$ Since each face is convex, each internal vertex is inside the convex hull of its neighbours. Thus each internal vertex position is a convex linear combination of the vertex positions of its neighbours. That is, for each internal vertex $i$ there are non-negative weights $z_{ij}$ such that \begin{equation} \label{eq:bary3} \sum_{j \in N^+(i)} z_{ij} = 1 \text{~~~~and~~~~} \gamma_i = \sum_{j \in N^+(i)} z_{ij} \gamma_j . \end{equation} The values of $z_{ij}$ satisfying (\ref{eq:bary3}) can be determined in linear time. For a specific vertex $i$, the $z_{ij}$ for $j \in N^+(i)$ can be viewed as a kind of \emph{barycentric coordinates} for $i$. In the case that $|N^+(i)| = 3$, these coordinates are unique. Although equations (\ref{eq:bary0}) and (\ref{eq:bary3}) seem similar, they are not the same: one is directed, the other is undirected. 
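For a degree-three internal vertex, the coefficients $z_{ij}$ are exactly the classical barycentric coordinates of $\gamma_i$ with respect to its three neighbours, and can be read off from a $3 \times 3$ linear system. A small sketch with made-up coordinates:

```python
import numpy as np

# Vertex i strictly inside the triangle of its three neighbours
# (illustrative coordinates, not from the paper).
gamma_i = np.array([1.0, 1.0])
neigh = [np.array([0.0, 0.0]), np.array([3.0, 0.0]), np.array([1.0, 3.0])]

# Solve sum_j z_ij * gamma_j = gamma_i together with sum_j z_ij = 1.
P = np.column_stack(neigh)        # 2 x 3 matrix of neighbour positions
M = np.vstack([P, np.ones(3)])    # append the normalisation row
z = np.linalg.solve(M, np.append(gamma_i, 1.0))
```

Since every face of $\Gamma$ is convex, each internal vertex lies strictly inside its neighbour triangle, so the resulting $z_{ij}$ are positive, sum to one, and reproduce $\gamma_i$.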
In general $z_{ij} \neq z_{ji}$ for directed edges $(i,j)$ and $(j,i)$, while the weights $w_{ij}$ satisfy $w_{ij} = w_{ji}$. However we can choose a ``scale factor'' $s_i > 0$ for each vertex $i$, and scale equations (\ref{eq:bary3}) by $s_i$. That is, for each internal vertex $i$, \begin{equation} \label{eq:baryScaled} \gamma_i = \frac{1}{ \sum_{j \in N^+(i)} s_i z_{ij} } \sum_{j \in N^+(i)} s_i z_{ij} \gamma_j . \end{equation} The effect of this scaling is that we replace $z_{ij}$ by $s_i z_{ij}$ for each edge $(i,j)$. We would like to choose a scale factor $s_i > 0$ for each internal vertex $i$ such that for each strictly internal edge $(i,j) \in E$, $ s_i z_{ij} = s_j z_{ji}$; that is, we want to find a real positive $s_i$ for each internal vertex $i$ such that \begin{equation} \label{eq:scale} s_i z_{ij} - s_j z_{ji} = 0 \end{equation} for each strictly internal edge $(i,j)$. It can be shown easily that the existence of any nontrivial solution to (\ref{eq:scale}) implies the existence of a positive solution (see Appendix). We note that any solution of (\ref{eq:scale}) for strictly internal edges gives weights $w_{ij}$ such that the barycenter equations (\ref{eq:bary0}) hold. We choose $w_{ij} = s_i z_{ij}$ for each (directed) edge $(i,j)$ that is incident to an internal vertex $i$. Equations (\ref{eq:scale}) ensure that $w_{ij} = w_{ji}$ for each strictly internal edge. For edges which are internal but not strictly internal, we can simply choose $w_{ij} = s_i z_{ij}$ for any value of $s_i$, since $z_{ji}$ is undefined. Thus if equations (\ref{eq:scale}) have a nontrivial solution, then the drawing is a weighted barycenter drawing. \subsubsection{The main theorem.} We characterise the solutions of equations (\ref{eq:scale}) with an arithmetic condition on the faces of $\Gamma$. 
This considers the product of the weights $z_{ij}$ around directed cycles in $G$: if the product around each strictly internal face in the clockwise direction is the same as the product in the counter-clockwise direction, then equations (\ref{eq:scale}) have a nontrivial solution. \begin{theorem} \label{th:cycleProduct} Equations (\ref{eq:scale}) have a nontrivial solution if and only if for each strictly internal face $C = (v_0, v_1, \ldots , v_{k-1}, v_k = v_0)$ in $G$, we have \begin{equation} \label{eq:cycle} \prod_{i=0}^{k-1} z_{v_i,v_{i+1}} = \prod_{i=1}^{k} z_{v_{i},v_{i-1}}. \end{equation} \end{theorem} \begin{proof} For convenience we denote $\frac{z_{ji}}{z_{ij}}$ by $\zeta_{ij}$ for each directed edge $(i,j)$; note that $\zeta_{ij} = 1/\zeta_{ji}$. Equations (\ref{eq:scale}) can be re-stated as \begin{equation} \label{eq:zeta} s_i - \zeta_{ij} s_j = 0 \end{equation} for each strictly internal edge $(i,j)$, and the equations (\ref{eq:cycle}) for cycle $C$ can be re-stated as \begin{equation} \label{eq:cycleZeta} \prod_{i=0}^{k-1} \zeta_{v_i,v_{i+1}} = 1. \end{equation} First suppose that equations (\ref{eq:zeta}) have nontrivial solutions $s_i$ for all internal vertices $i$, and $C = (v_0, v_1, \ldots , v_{k-1}, v_k = v_0)$ is a strictly internal face in $G$. Now applying (\ref{eq:zeta}) around $C$ clockwise beginning at $v_0$, we obtain: $$ s_{v_0} = \zeta_{v_0,v_1} s_{v_1} ~ = ~ \zeta_{v_0,v_1} \zeta_{v_1,v_2} s_{v_2} ~ = ~ \zeta_{v_0,v_1} \zeta_{v_1,v_2} \zeta_{v_2,v_3} s_{v_3} ~ = ~\ldots $$ We can deduce that \begin{equation*} s_{v_0} = \left( \prod_{i=0}^{j-1} \zeta_{v_i,v_{i+1}} \right) s_{v_j} = \left( \prod_{i=0}^{k-1} \zeta_{v_i,v_{i+1}} \right) s_{v_k} = \left( \prod_{i=0}^{k-1} \zeta_{v_i,v_{i+1}} \right) s_{v_0} \end{equation*} and this yields equation (\ref{eq:cycleZeta}). Now suppose that equation (\ref{eq:cycleZeta}) holds for every strictly internal facial cycle of $G$.
We first show that equation (\ref{eq:cycleZeta}) holds for \emph{every} strictly internal cycle. Suppose that (\ref{eq:cycleZeta}) holds for two cycles $C_1$ and $C_2$ that share a single edge, $(u,v)$, and let $C_3$ be the sum of $C_1$ and $C_2$ (that is, $C_3 = ( C_1 \cup C_2 ) - \{ (u,v) \}$). Now traversing $C_3$ in clockwise order gives the clockwise edges of $C_1$ (omitting $(u,v)$) followed by the clockwise edges of $C_2$ (omitting $(v,u)$). But from equation (\ref{eq:cycleZeta}), the product of the edge weights $\zeta_{ij}$ in the clockwise order around $C_1$ is one, and the product of the edge weights $\zeta_{i'j'}$ in the clockwise order around $C_2$ is one. Thus the product of the edge weights $\zeta_{ij}$ in clockwise order around $C_3$ is $\frac{1}{\zeta_{uv} \zeta_{vu} } = 1$. That is, (\ref{eq:cycleZeta}) holds for $C_3$. Since the facial cycles form a cycle basis, it follows that (\ref{eq:cycleZeta}) holds for every cycle. Now choose a reference vertex $r$, and consider a depth first search tree $T$ rooted at $r$. Denote the set of directed edges on the directed path in $T$ from $i$ to $j$ by $E_{ij}$. Let $s_{r} = 1$, and for each internal vertex $i \neq r$, let \begin{equation} \label{eq:s} s_{i} = \prod_{(u,v) \in E_{ri}} \zeta_{uv}. \end{equation} Clearly equation~(\ref{eq:zeta}) holds for every edge of $T$. Now consider a back-edge $(i,j)$ for $T$ (that is, a strictly internal edge of $G$ that is not in $T$), and let $k$ denote the least common ancestor of $i$ and $j$ in $T$. Then from (\ref{eq:s}) we can deduce that \begin{equation} \label{eq:moo1} \frac{s_i}{s_j} = \frac{ \prod_{(u,v) \in E_{ri}} \zeta_{uv} }{ \prod_{(u',v')\in E_{rj}} \zeta_{u'v'} } = \frac{ \prod_{(u,v) \in E_{ki}} \zeta_{uv} }{ \prod_{(u',v')\in E_{kj}} \zeta_{u'v'} }. 
\end{equation} Now let $C$ be the cycle in $\Gamma$ that consists of the reverse of the directed path in $T$ from $k$ to $j$, followed by the directed path in $T$ from $k$ to $i$, followed by the edge $(i,j)$. Since equation~(\ref{eq:cycleZeta}) holds for $C$, we have: \begin{equation} \label{eq:moo2} 1 = \left( \prod_{(v',u') \in E_{jk}} \zeta_{v'u'} \right) \left( \prod_{(u,v) \in E_{ki}} \zeta_{uv} \right) \zeta_{ij} = \left( \frac{ \prod_{(u,v) \in E_{ki}} \zeta_{uv} } { \prod_{(u'v') \in E_{kj}} {\zeta_{u'v'}} } \right) \zeta_{ij} \end{equation} Combining equations (\ref{eq:moo1}) and (\ref{eq:moo2}) we have $s_i = \zeta_{ij} s_j$ and so equation (\ref{eq:zeta}) holds for each back edge $(i,j)$. We can conclude that (\ref{eq:zeta}) holds for all strictly internal edges. \qed \end{proof} \section{Applications} We list some implications of Theorem~\ref{th:cycleProduct} for cubic, Halin~\cite{DBLP:journals/jgaa/Eppstein16} and planar graphs with degree larger than three. Proofs of the corollaries below are straightforward. \begin{corollary} \label{cor:cubic1} A drawing $\Gamma$ of a cubic graph is a weighted barycenter drawing if and only if the system of equations (\ref{eq:scale}) has rank smaller than $n-f_0$. \qed \end{corollary} \begin{corollary} \label{cor:cubic2} For cubic graphs, there is a linear-time algorithm for the weighted barycenter recognition problem. \qed \end{corollary} For cubic graphs, the weights $z_{ij}$ are unique, and thus equations (\ref{eq:scale}) give a complete characterisation of weighted barycenter drawings. One can use Theorem~\ref{th:cycleProduct} to test whether a solution of equations (\ref{eq:scale}) exists, checking equations~(\ref{eq:cycle}) in linear time. \begin{corollary} \label{cor:halin} Suppose that $\Gamma$ is a convex drawing of a Halin graph such that the internal edges form a tree. Then $\Gamma$ is a weighted barycenter drawing.
\qed \end{corollary} \subsubsection{Graphs with degree larger than three.} For a vertex $i$ of degree $d_i > 3$, solutions for equations (\ref{eq:bary3}) are not unique. Nevertheless, these equations are linear, and we have three equations in $d_i$ variables. Thus, for each vertex $i$, the solutions $z_{ij}$, $j \in N(i)$, form a linear space of dimension at least $d_i-3$. In this general case, we have: \begin{corollary} \label{co:general} A drawing $\Gamma$ of a graph $G$ is a weighted barycenter drawing if and only if there are solutions $z_{ij}$ to equations (\ref{eq:bary3}) such that the cycle equation (\ref{eq:cycle}) holds for every internal face. \qed \end{corollary} Although Corollary~\ref{co:general} is quite elegant, it does not lead to an immediately practical algorithm because the equations (\ref{eq:cycle}) are not linear. \section{Conclusion}\label{se:conclusions} Force-directed algorithms are very common in practice, and drawings obtained from force-directed methods are instantly recognisable to most researchers in Graph Drawing. However, this paper represents the first attempt to give algorithms to recognise the output of a particular force-directed method, namely the weighted barycenter method. It would be interesting to know if the results of other force-directed methods can be automatically recognised. \subsubsection*{Acknowledgements.} We wish to thank Veronika Irvine for motivating discussions. \pagebreak \section*{Appendix} \subsection*{A triangulation which is not a weighted barycenter drawing.} The weighted barycenter algorithm can be viewed as a force-directed method, as follows. We define the \emph{energy} $\eta(i,j)$ of an internal edge $(i,j)$ by \begin{equation} \label{eq:energyEdge} \eta(i,j) = \frac{1}{2} w_{ij} \delta(\gamma_i,\gamma_j)^2 = \frac{1}{2} w_{ij} \left( (x_i - x_j)^2 + (y_i-y_j)^2 \right) \end{equation} where $\delta(\cdot,\cdot)$ is the Euclidean distance and $\gamma_i = (x_i , y_i)$.
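To make \eqref{eq:energyEdge} concrete, here is a minimal numerical sketch (hypothetical positions and weights, not part of the recognition algorithm): the gradient of the energy with respect to a free vertex is $\sum_{j} w_{ij}(\gamma_i - \gamma_j)$, which vanishes exactly when that vertex lies at the weighted barycenter of its neighbours.

```python
# Hypothetical sketch: edge energy eta(i,j) and the energy gradient
# at a free vertex whose neighbours are pinned.

def edge_energy(w, gamma_i, gamma_j):
    # eta(i,j) = (1/2) * w_ij * delta(gamma_i, gamma_j)^2
    return 0.5 * w * ((gamma_i[0] - gamma_j[0]) ** 2
                      + (gamma_i[1] - gamma_j[1]) ** 2)

def energy_gradient(gamma_i, neighbours):
    # d(energy)/d(gamma_i) = sum_j w_ij * (gamma_i - gamma_j);
    # zero exactly at the weighted barycenter of the neighbours.
    gx = sum(w * (gamma_i[0] - g[0]) for g, w in neighbours)
    gy = sum(w * (gamma_i[1] - g[1]) for g, w in neighbours)
    return gx, gy

# three pinned neighbours with (hypothetical) positive weights
neighbours = [((0.0, 0.0), 1.0), ((4.0, 0.0), 2.0), ((0.0, 4.0), 1.0)]
total_w = sum(w for _, w in neighbours)
# place the free vertex at the weighted barycenter of its neighbours
barycenter = (sum(w * g[0] for g, w in neighbours) / total_w,
              sum(w * g[1] for g, w in neighbours) / total_w)
grad = energy_gradient(barycenter, neighbours)
```

Moving the free vertex off the barycenter makes the gradient nonzero and the total energy strictly larger, which is the fact used in the lemma below.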
The energy $\eta(\Gamma)$ in the whole drawing is the sum of the internal edge energies. Taking partial derivatives with respect to each variable $x_i$ and $y_i$ reveals that $\eta(\Gamma)$ is minimised precisely when the barycenter equations (\ref{eq:bary0}) hold. \begin{figure} \caption{ A triangulation which is not a weighted barycenter drawing. } \label{fi:triangles3} \end{figure} \begin{lemma} The drawing in Fig.~\ref{fi:triangles3} is not a weighted barycenter drawing. \end{lemma} \begin{proof} Suppose that the drawing $\Gamma$ in Fig.~\ref{fi:triangles3} is a weighted barycenter drawing with weights $w_{ij}$. The total energy $\eta(\Gamma)$ in the drawing is given by summing equation (\ref{eq:energyEdge}) over all internal edges, and the drawing $\Gamma$ minimises $\eta(\Gamma)$. Further, the minimum energy drawing is unique. Consider the drawing $\Gamma'$ of this graph in which the inner triangle is rotated clockwise by a small angle $\epsilon$. The strictly internal edges remain the same length, while the edges between the inner and outer triangles become shorter. Thus, since $w_{ij} > 0$ for every such $(i,j)$, $\eta(\Gamma') <\eta(\Gamma)$. This contradicts the fact that $\eta$ is minimised at $\Gamma$. \qed \end{proof} \subsection*{Positive solutions for equations (\ref{eq:scale}).} \begin{lemma} If equations (\ref{eq:scale}) have a nontrivial solution, then they have a nontrivial solution in which every $s_i$ is positive. \end{lemma} \begin{proof} Suppose that the vector $s$ is a solution to equations (\ref{eq:scale}), and $s_i \leq 0$ for some internal vertex $i$. Since $z_{ij} > 0$ for each $i,j$, and $G$ is connected, it is easy to deduce from (\ref{eq:scale}) that $s_j \leq 0$ for every internal vertex $j$. Further, if $s_i = 0$ for some internal vertex $i$, then $s_j = 0$ for every internal vertex. Noting that $s$ is a solution to (\ref{eq:scale}) if and only if $-s$ is a solution, the lemma follows.
\qed \end{proof} \subsection*{Properties of the coefficient matrix of the equations for the scale factors.} Equations (\ref{eq:scale}) form a set of $m-f_0$ equations in the $n-f_0$ unknowns $s_i$. We can write (\ref{eq:scale}) as \begin{equation} \label{eq:B} B^T s=0 \end{equation} where $s$ is an $(n-f_0) \times 1$ vector and $B$ is an $(n-f_0) \times (m-f_0)$ matrix; precisely: \[ B_{ie} = \begin{cases} z_{ij} & \text{~if~} e \text{~is the edge~} (i,j) \in \overrightarrow{E}\\ -z_{ij} & \text{~if~}e \text{~is the edge~} (j,i) \in \overrightarrow{E}\\ 0 & \text{otherwise.} \end{cases} \] Note that $B$ is a weighted version of the directed incidence matrix of the graph $\overrightarrow{G}$. Adapting a classical result for incidence matrices yields a lower bound on the rank of $B$: \begin{lemma} The rank of $B$ is at least $n - f_0 - 1$. \end{lemma} \begin{proof} Since $G$ is triconnected, the induced subgraph of the internal vertices is connected. Consider the submatrix of $B^T$ consisting of the rows that correspond to the edges of a tree that spans the internal vertices. It is easy to see that this $(n-f_0-1) \times (n-f_0)$ submatrix has full row rank (note that it has a column with precisely one nonzero entry). Since $B$ and $B^T$ have the same rank, the lemma follows. \qed \end{proof} In fact, one can show (using the same method as in the proof of Theorem~\ref{th:cycleProduct}) that $B$ has rank exactly $n - f_0 - 1$ as long as the equations (\ref{eq:cycle}) hold for every strictly internal cycle. \subsection*{Proofs of the corollaries.} {\bf Corollary}~\ref{cor:cubic1} \begin{proof} In the case of a cubic graph, the weights $z_{ij}$ are unique. Thus, if the only solution to (\ref{eq:scale}) is $s_i = 0$ for every $i$, then the drawing is not a weighted barycenter drawing.
\qed \end{proof} {\bf Corollary}~\ref{cor:halin} \begin{proof} A \emph{Halin graph}~\cite{DBLP:journals/jgaa/Eppstein16} is a triconnected graph that consists of a tree, no vertex of which has exactly two neighbours, together with a cycle connecting the leaves of the tree. The cycle connects the leaves in an order such that the resulting graph is planar. A Halin graph is typically drawn so that the outer face is the cycle. This corollary can be deduced immediately from Theorem~\ref{th:cycleProduct} since such a graph has no strictly internal cycles. More directly, we can solve the equations (\ref{eq:scale}) by assigning $s_r = 1$ for the root $r$ and then adding one edge at a time. \qed \end{proof} \end{document}
\begin{document} \title[Plurisubharmonic defining functions]{A note on plurisubharmonic defining functions in $\mathbb{C}^{n}$} \author{J. E. Forn\ae ss, A.-K. Herbig} \subjclass[2000]{32T35, 32U05, 32U10} \keywords{Plurisubharmonic defining functions, Stein neighborhood basis, DF exponent} \thanks{Research of the first author was partially supported by an NSF grant.} \thanks{Research of the second author was supported by FWF grant P19147} \address{Department of Mathematics, \newline University of Michigan, Ann Arbor, Michigan 48109, USA} \email{[email protected]} \address{Department of Mathematics, \newline University of Vienna, Vienna, A-1090, Austria} \email{[email protected]} \date{} \begin{abstract} Let $\Omega\subset\subset\mathbb{C}^{n}$, $n\geq 3$, be a smoothly bounded domain. Suppose that $\Omega$ admits a smooth defining function which is plurisubharmonic on the boundary of $\Omega$. Then the Diederich--Forn\ae ss exponent can be chosen arbitrarily close to $1$, and the closure of $\Omega$ admits a Stein neighborhood basis. \end{abstract} \maketitle \section{Introduction} Let $\Omega\subset\subset\mathbb{C}^{n}$ be a smoothly bounded domain. Throughout, we suppose that $\Omega$ admits a $\mathcal{C}^{\infty}$-smooth defining function $\rho$ which is plurisubharmonic on the boundary, $b\Omega$, of $\Omega$. That is, \begin{align}\label{E:psh} H_{\rho}(\xi,\xi)(z):=\sum_{j,k=1}^{n}\frac{\partial^{2}\rho}{\partial z_{j}\partial\overline{z}_{k}}(z) \xi_{j}\overline{\xi}_{k}\geq 0\;\;\text{for all}\;z\in b\Omega,\;\xi\in\mathbb{C}^{n}. \end{align} The question we are concerned with is what condition \eqref{E:psh} tells us about the behaviour of the complex Hessian of $\rho$ - or of some other defining function of $\Omega$ - away from the boundary of $\Omega$. That $\rho$ is not necessarily plurisubharmonic in any neighborhood of $b\Omega$ can be seen easily, for an example see Section 2.3 in \cite{For-Her}. 
In \cite{For-Her}, we showed that if $n=2$, then for any $\epsilon>0$, $K>0$ there exist smooth defining functions $\rho_{i}$, $i=1,2$, and a neighborhood $U$ of $b\Omega$ such that \begin{align}\label{E:dim2result} H_{\rho_{i}}(\xi,\xi)(q_{i})\geq-\epsilon|\rho_{i}(q_{i})|\cdot|\xi|^{2} +K|\langle\partial\rho_{i}(q_{i}),\xi\rangle|^{2} \end{align} for all $\xi\in\mathbb{C}^{2}$ and $q_{1}\in\overline{\Omega}\cap U$, $q_{2}\in\Omega^{c}\cap U$. The estimates \eqref{E:dim2result} imply the existence of particular exhaustion functions for $\Omega$ and the complement of $\overline{\Omega}$, which is not a direct consequence of \eqref{E:psh}. A Diederich--Forn\ae ss exponent of a domain is a number $\tau\in(0,1]$ for which there exists a smooth defining function $s$ such that $-(-s)^{\tau}$ is strictly plurisubharmonic in the domain. It was shown in \cite{DF1,Ran} that all smoothly bounded, pseudoconvex domains have a Diederich--Forn\ae ss exponent. However, it is also known that there are smoothly bounded, pseudoconvex domains for which the largest Diederich--Forn\ae ss exponent has to be chosen arbitrarily close to $0$ (see \cite{DF2}). In \cite{For-Her}, we showed that \eqref{E:dim2result}, $i=1$, implies that the Diederich--Forn\ae ss exponent can be chosen arbitrarily close to $1$. We also showed that \eqref{E:dim2result}, $i=2$, yields that the complement of $\Omega$ can be exhausted by bounded, strictly plurisubharmonic functions. In particular, the closure of $\Omega$ admits a Stein neighborhood basis. For $n\geq 3$ we obtain the following: \begin{theorem}\label{T:Main} Let $\Omega\subset\subset\mathbb{C}^{n}$ be a smoothly bounded domain. Suppose that $\Omega$ has a smooth defining function which is plurisubharmonic on the boundary of $\Omega$. 
Then for any $\epsilon>0$ there exist a neighborhood $U$ of $b\Omega$ and smooth defining functions $r_{1}$ and $r_{2}$ such that \begin{align}\label{E:Main1} H_{r_{1}}(\xi,\xi)(q)\geq -\epsilon \left[|r_{1}(q)|\cdot|\xi|^{2}+\frac{1}{|r_{1}(q)|}\cdot\left|\langle\partial r_{1}(q),\xi\rangle\right|^{2}\right] \end{align} holds for all $q\in\Omega\cap U$, $\xi\in\mathbb{C}^{n}$, and \begin{align}\label{E:Main2} H_{r_{2}}(\xi,\xi)(q)\geq -\epsilon \left[r_{2}(q)\cdot|\xi|^{2}+\frac{1}{r_{2}(q)}\cdot\left|\langle\partial r_{2}(q),\xi\rangle\right|^{2}\right] \end{align} holds for all $q\in(\overline{\Omega})^{c}\cap U$, $\xi\in\mathbb{C}^{n}$. \end{theorem} Let us remark that our proof of Theorem \ref{T:Main} also works when $n=2$. However, the results of Theorem \ref{T:Main} are weaker than \eqref{E:dim2result}. Nevertheless, they are still strong enough to obtain that the Diederich--Forn\ae ss exponent can be chosen arbitrarily close to 1 and that the closure of the domain admits a Stein neighborhood basis. In particular, we have the following: \begin{corollary}\label{C:DF} Assume the hypotheses of Theorem \ref{T:Main} hold. Then \begin{enumerate} \item for all $\eta\in (0,1)$ there exists a smooth defining function $\tilde{r}_{1}$ of $\Omega$ such that $-(-\tilde{r}_{1})^{\eta}$ is strictly plurisubharmonic on $\Omega$, \item for all $\eta>1$ there exist a smooth defining function $\tilde{r}_{2}$ of $\Omega$ and a neighborhood $U$ of $\overline{\Omega}$ such that $\tilde{r}_{2}^{\eta}$ is strictly plurisubharmonic on $(\overline{\Omega})^{c}\cap U$. \end{enumerate} \end{corollary} We note that in \cite{DF3} it was proved that (i) and (ii) of Corollary \ref{C:DF} hold for so-called regular domains. Furthermore, in \cite{DF4} it was shown that pseudoconvex domains with real-analytic boundary are regular domains. This article is structured as follows. In Section \ref{S:prelim}, we give the setting and define our basic notions. 
Furthermore, we show in this section which piece of the complex Hessian of $\rho$ at a given point $p$ in $b\Omega$ constitutes an obstruction for inequality \eqref{E:Main1} to hold for a given $\epsilon>0$. In Section \ref{S:modification}, we construct a local defining function which does not possess this obstruction term to \eqref{E:Main1} at a given boundary point $p$. Since this fixes our problem with \eqref{E:Main1} only at this point $p$ (and at nearby boundary points at which the Levi form is of the same rank as at $p$), we will need to patch the newly constructed local defining functions without letting the obstruction term arise again. This is done in Section \ref{S:cutoff}. In Section \ref{S:proof}, we finally prove \eqref{E:Main1} and remark at the end how to obtain \eqref{E:Main2}. We conclude this paper with the proof of Corollary \ref{C:DF} in Section \ref{S:DF}. We would like to thank J. D. McNeal for fruitful discussions on this project, in particular we are very grateful to him for providing us with Lemma \ref{L:McNeal} and its proof. \section{Preliminaries and pointwise obstruction}\label{S:prelim} Let $(z_{1},\dots,z_{n})$ denote the coordinates of $\mathbb{C}^{n}$. We shall identify the vector $\langle\xi_{1},\dots,\xi_{n}\rangle$ in $\mathbb{C}^{n}$ with $\sum_{i=1}^{n}\xi_{i}\frac{\partial}{\partial z_{i}}$ in the $(1,0)$-tangent bundle of $\mathbb{C}^{n}$ at any given point. This means in particular that if $X$, $Y$ are $(1,0)$-vector fields with $X(z)=\sum_{i=1}^{n}X_{i}(z)\frac{\partial}{\partial z_{i}}$ and $Y(z)=\sum_{i=1}^{n}Y_{i}(z)\frac{\partial}{\partial z_{i}}$, then \begin{align*} H_{\rho}(X,Y)(z)=\sum_{j,k=1}^{n}\frac{\partial^{2}\rho}{\partial z_{j}\partial\overline{z}_{k}}(z) X_{j}(z)\overline{Y}_{k}(z). \end{align*} Suppose $Z$ is another $(1,0)$-vector field with $Z(z)=\sum_{l=1}^{n}Z_{l}(z)\frac{\partial}{\partial z_{l}}$. 
For notational convenience, and for lack of a better notation, we shall write \begin{align*} (ZH_{\rho})(X,Y)(z):= \sum_{j,k,l=1}^{n}\frac{\partial^{3}\rho}{\partial z_{j}\partial\overline{z}_{k}\partial z_{l}}(z) X_{j}(z)\overline{Y}_{k}(z) Z_{l}(z). \end{align*} We use the pointwise hermitian inner product $\langle .,.\rangle$ defined by $\langle\frac{\partial}{\partial z_{j}},\frac{\partial}{\partial z_{k}}\rangle=\delta_{j}^{k}$. Hoping that it will not cause any confusion, we also write $\langle .,.\rangle$ for contractions of vector fields and forms. We will employ the so-called (sc)-(lc) inequality: $|ab|\leq\tau|a|^{2}+\frac{1}{4\tau}|b|^{2}$ for $\tau>0$. Furthermore, we shall write $|A|\lesssim|B|$ to mean $|A|\leq c|B|$ for some constant $c>0$ which does not depend on any of the relevant parameters. In particular, we will only use this notation when $c$ depends solely on absolute constants, e.g., the dimension or quantities related to the given defining function $\rho$. Let us now work on proving inequality \eqref{E:Main1}. Since $b\Omega$ is smooth, there exists a neighborhood $U$ of $b\Omega$ and a smooth map \begin{align*} \pi:\overline{\Omega}\cap U&\longrightarrow b\Omega\\ q&\longmapsto\pi(q)=p \end{align*} such that $\pi(q)=p$ lies on the line normal to $b\Omega$ passing through $q$ and $|p-q|$ equals the Euclidean distance, $d_{b\Omega}(q)$, of $q$ to $b\Omega$. After possibly shrinking $U$, we can assume that $\partial\rho\neq 0$ on $U$. We set $N(z)=\frac{1}{|\partial\rho(z)|}\sum_{j=1}^{n}\frac{\partial\rho}{\partial\overline{z}_{j}}(z) \frac{\partial}{\partial z_{j}}$.
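We record here, for convenience, the direct computation showing that $N$ is the unit complex normal: since $\rho$ is real-valued, $\frac{\partial\rho}{\partial\overline{z}_{j}}=\overline{\frac{\partial\rho}{\partial z_{j}}}$, and hence

```latex
\begin{align*}
\langle\partial\rho(z),N(z)\rangle
=\frac{1}{|\partial\rho(z)|}\sum_{j=1}^{n}
\frac{\partial\rho}{\partial z_{j}}(z)\,\frac{\partial\rho}{\partial\overline{z}_{j}}(z)
=\frac{1}{|\partial\rho(z)|}\sum_{j=1}^{n}
\left|\frac{\partial\rho}{\partial z_{j}}(z)\right|^{2}
=|\partial\rho(z)|.
\end{align*}
```

In particular, $|N|=1$ and $N(\rho)=|\partial\rho|>0$ on $U$.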
If $f$ is a smooth function on $U$, then it follows from Taylor's theorem that \begin{align}\label{E:BasicTaylor} f(q)=f(p)-d_{b\Omega}(q)\left(\operatorname{Re} N\right)(f)(p)+\mathcal{O}\left(d_{b\Omega}^{2}(q)\right)\;\; \text{for}\;\;q\in \overline{\Omega}\cap U; \end{align} for details see, for instance, Section 2.1 in \cite{For-Her}\footnote{Equation \eqref{E:BasicTaylor} above differs from (2.1) in \cite{For-Her} by a factor of $2$ in the second term on the right hand side. This stems from mistakenly using the outward normal of length $1/2$ instead of the one of unit length in \cite{For-Her}. However, this mistake is inconsequential for the results in \cite{For-Her}.}. Let $p\in b\Omega\cap U$ be given. Let $W\in\mathbb{C}^{n}$ be a weak, complex tangential vector at $p$, i.e., $\langle\partial\rho(p),W\rangle=0$ and $H_{\rho}(W,W)(p)=0$. If $q\in\Omega\cap U$ with $\pi(q)=p$, then \eqref{E:BasicTaylor} implies \begin{align}\label{E:BasicTaylorH} H_{\rho}(W,W)(q)=H_{\rho}(W,W)(p)-d_{b\Omega}(q)\left(\operatorname{Re} N\right) \left(H_{\rho}(W,W)\right)(p)+\mathcal{O}(d^{2}_{b\Omega}(q))|W|^{2}. \end{align} Since $H_{\rho}(W,W)$ is a real-valued function, we have \begin{align*} \left(\operatorname{Re} N\right)\left(H_{\rho}(W,W)\right)=\operatorname{Re}\left[N\left(H_{\rho}(W,W)\right)\right]. \end{align*} Moreover, $H_{\rho}(W,W)$ is non-negative on $b\Omega\cap U$ and equals $0$ at $p$. That is, $H_{\rho}(W,W)_{|_{b\Omega\cap U}}$ attains a local minimum at $p$. Therefore, any tangential derivative of $H_{\rho}(W,W)$ vanishes at $p$. Since $N-\overline{N}$ is tangential to $b\Omega$, we obtain \begin{align*} \operatorname{Re}\left[N\left(H_{\rho}(W,W)\right)\right](p)=N\left(H_{\rho}(W,W)\right)(p) =(NH_{\rho})(W,W)(p), \end{align*} where the last equality holds since $W$ is a fixed vector.
Hence, \eqref{E:BasicTaylorH} becomes \begin{align}\label{E:BasicTaylorHW} H_{\rho}(W,W)(q)=-d_{b\Omega}(q)(NH_{\rho})(W,W)(p) +\mathcal{O}\left(d_{b\Omega}^{2}(q)\right)|W|^{2}. \end{align} Clearly, we have a problem with obtaining \eqref{E:Main1} when $(NH_{\rho})(W,W)$ is strictly positive at $p$. That is, if $H_{\rho}(W,W)$ is strictly decreasing along the real inward normal to $b\Omega$ at $p$, so that $H_{\rho}(W,W)$ becomes negative there, then \eqref{E:Main1} cannot hold for the complex Hessian of $\rho$ when $\epsilon>0$ is sufficiently close to zero. The question is whether we can find another smooth defining function $r$ of $\Omega$ such that $(NH_{r})(W,W)(p)$ is less than $(NH_{\rho})(W,W)(p)$. The construction of such a function $r$ is relatively easy and straightforward when $n=2$ (see Section 2.3 in \cite{For-Her} for a non-technical derivation of $r$). The difficulty in higher dimensions arises simply from the fact that the Levi form of a defining function might vanish in more than one complex tangential direction at a given boundary point. \section{Pointwise Modification of $\rho$}\label{S:modification} Let $\Sigma_{i}\subset b\Omega$ be the set of boundary points at which the Levi form of $\rho$ has rank $i$, $i\in\{0,\dots,n-1\}$. Note that $\cup_{i=0}^{j}\Sigma_{i}$ is closed in $b\Omega$ for any $j\in\{0,\dots,n-1\}$. Moreover, $\Sigma_{j}$ is relatively closed in $b\Omega\setminus \cup_{i=0} ^{j-1}\Sigma_{i}$ for $j\in\{1,\dots,n-1\}$. Of course, $\Sigma_{n-1}$ is the set of strictly pseudoconvex boundary points of $\Omega$. Let $p\in\Sigma_{i}$ for some $i\in\{0,\dots, n-2\}$ be given. Then there exist a neighborhood $V\subset U$ of $p$ and smooth, linearly independent $(1,0)$-vector fields $W^{\alpha}$, $1\leq\alpha\leq n-1-i$, on $V$, which are complex tangential to $b\Omega$ on $b\Omega\cap V$ and satisfy $H_{\rho}(W^{\alpha},W^{\alpha})=0$ on $\Sigma_{i}\cap V$. We consider those points $q\in\Omega\cap V$ with $\pi(q)=p$.
We shall work with the smooth function \begin{align*} r(z)=\rho(z)\cdot e^{-C\sigma(z)},\;\text{where}\;\;\sigma(z)=\sum_{\alpha=1}^{n-1-i}H_{\rho}(W^{\alpha},W^{\alpha})(z) \end{align*} for $z\in V$. Here, the constant $C>0$ is fixed and to be chosen later. Note that $r$ defines $b\Omega$ on $b\Omega\cap V$. Furthermore, $\sigma$ is a smooth function on $V$ which is non-negative on $b\Omega\cap V$ and vanishes on the set $\Sigma_{i}\cap V$. That means that $\sigma_{|_{b\Omega\cap V}}$ attains a local minimum at each point in $\Sigma_{i} \cap V$. Therefore, any tangential derivative of $\sigma$ vanishes on $\Sigma_{i}\cap V$. Moreover, if $z\in\Sigma_{i}\cap V$ and $T\in\mathbb{C}T_{z}b\Omega$ is such that $H_{\rho}(T,T)$ vanishes at $z$, then $H_{\sigma}(T,T)$ is non-negative at that point. Let $W\in \mathbb{C}^{n}$ be a vector contained in the span of the vectors $\left\{W^{\alpha}(p)\right\}_{\alpha=1}^{n-1-i}$. Then, using \eqref{E:BasicTaylorHW}, it follows that \begin{align}\label{E:explaincutoff} H_{r}(W,W)(q) =&e^{-C\sigma(q)}\left[H_{\rho}(W,W) -2C\operatorname{Re}\left(\langle\partial\rho,W\rangle\overline{\langle\partial\sigma,W\rangle }\right)\right.\notag\\ &\hspace{3.5cm}+\left.\rho\left(C^{2}\left|\langle\partial\sigma,W\rangle\right|^{2} -CH_{\sigma}(W,W)\right)\right](q)\notag\\ =&e^{-C\sigma(q)} \Bigl[ -d_{b\Omega}(q)(NH_{\rho})(W,W)(p)-C\rho(q)H_{\sigma}(W,W)(q)\bigr. +\mathcal{O}\left(d_{b\Omega}^{2}(q)\right)|W|^{2}\notag\\ &\hspace{1.5cm}+C^{2}\rho(q)\left|\langle\partial\sigma(q),W\rangle \right|^{2}\bigl.-2C\operatorname{Re}\left( \langle\partial\rho,W\rangle\overline{\langle\partial\sigma,W\rangle } \right)(q) \Bigr]. \end{align} Since $\langle\partial\sigma(p),W\rangle=0=\langle\partial\rho(p),W\rangle$, Taylor's theorem gives \begin{align*} \langle\partial\sigma(q),W\rangle=\mathcal{O}\left(r(q)\right)|W| \;\;\text{and}\;\; \langle\partial\rho(q),W\rangle=\mathcal{O}\left(r(q)\right)|W|.
\end{align*} Therefore, we obtain \begin{align}\label{E:BasicTaylorr} H_{r}(W,W)(q)\geq -d_{b\Omega}(q)e^{-C\sigma(q)}(NH_{\rho})(W,W)(p)&-Cr(q)H_{\sigma}(W,W)(q)\\ &+\mathcal{O}\left(r^{2}(q)\right)|W|^{2},\notag \end{align} where the constant in the last term depends on the choice of the constant $C$. However, in view of our claim \eqref{E:Main1}, this is inconsequential. From here on, we will not point out such negligible dependencies. We already know that $H_{\sigma}(W,W)(p)$ is non-negative, i.e., of the right sign to correct $(NH_{\rho})(W,W)(p)$ when necessary. The question is whether the sizes of $(NH_{\rho})(W,W)(p)$ and $H_{\sigma}(W,W)(p)$ are comparable in some sense. The following proposition clarifies this. \begin{proposition}\label{P:compare} There exists a constant $K>0$ such that \begin{align*} \left|(NH_{\rho})(W,W)(z_{0})\right|^{2} \leq K|W|^{2}\cdot H_{\sigma}(W,W)(z_{0}) \end{align*} holds for all $z_{0}\in\Sigma_{i}\cap V$ and $W\in\mathbb{C}T_{z_{0}}b\Omega$ with $H_{\rho}(W,W)(z_{0})=0$. \end{proposition} In order to prove Proposition \ref{P:compare}, we need the following lemma: \begin{lemma}\label{L:compare} Let $z_{0}\in b\Omega$ and $U$ a neighborhood of $z_{0}$. Let $Z$ be a smooth $(1,0)$-vector field defined on $U$, which is complex tangential to $b\Omega$ on $b\Omega\cap U$, and let $Y\in\mathbb{C}^{n}$ be a vector belonging to $ \mathbb{C}T_{z_{0}}b\Omega$. Suppose that $Y$ and $Z$ are such that \begin{align*} H_{\rho}(Y,Y)(z_{0})=0=H_{\rho}(Z,Z)(z_{0}). \end{align*} Set $X=\sum_{j=1}^{n}\overline{Y}(Z_{j})\frac{\partial}{\partial z_{j}}$. Then the following holds: \begin{enumerate} \item $X$ is complex tangential to $b\Omega$ at $z_{0}$, \item $\left(YH_{\rho}\right)(X,Z)(z_{0})=0$,\ \item $H_{H_{\rho}(Z,Z)}(Y,Y)(z_{0})\geq H_{\rho}(X,X)(z_{0})$. \end{enumerate} \end{lemma} \begin{proof}[Proof of Lemma \ref{L:compare}] (1) That $X$ is complex tangential to $b\Omega$ at $z_{0}$ was shown in Lemma 3.4 of \cite{For-Her}. 
(2) The plurisubharmonicity of $\rho$ says that both $H_{\rho}(Y,Y)_{|_{b\Omega\cap U}}$ and $H_{\rho}(Z,Z)_{|_{b\Omega\cap U}}$ attain a local minimum at $z_{0}$. In fact, the function $H_{\rho}(aY+bZ,aY+bZ)_{|_{b\Omega}}$, $a,b\in\mathbb{C}$, attains a local minimum at $z_{0}$. This means that any tangential derivative of either one of those three functions must vanish at that point. In particular, we have \begin{align*} 0&=\langle\partial H_{\rho}(aY+bZ,aY+bZ),X\rangle(z_{0})\\ &=|a|^{2}\langle\partial H_{\rho}(Y,Y),X\rangle(z_{0}) +2\operatorname{Re}\left(a\overline{b}\langle\partial H_{\rho}(Y,Z),X\rangle \right)(z_{0})+|b|^{2}\langle\partial H_{\rho}(Z,Z),X\rangle(z_{0})\\ &=2\operatorname{Re}\left(a\overline{b}\langle\partial H_{\rho}(Y,Z),X\rangle \right)(z_{0}). \end{align*} Since this is true for all $a,b\in\mathbb{C}$, it follows that $\langle\partial H_{\rho}(Y,Z),X\rangle$ must vanish at $z_{0}$. But the plurisubharmonicity of $\rho$ at $z_{0}$ yields \begin{align*} \langle\partial H_{\rho}(Y,Z),X\rangle=\left(XH_{\rho}\right)(Y,Z)(z_{0})= \left(YH_{\rho}\right)(X,Z)(z_{0}), \end{align*} which proves the claim. (3) Consider the function \begin{align*} f(z)=\left( H_{\rho}(Z,Z)\cdot H_{\rho}(X,X)-\left|H_{\rho}(X,Z)\right|^{2}\right)(z)\;\;\text{for}\;\;z\in U. \end{align*} Note that $f_{|_{b\Omega\cap U}}$ attains a local minimum at $z_{0}$. Since $Y$ is a weak direction at $z_{0}$, it follows that $H_{f}(Y,Y)(z_{0})$ is non-negative. This implies that \begin{align}\label{E:comparetemp} \left(H_{H_{\rho}(Z,Z)}(Y,Y)\cdot H_{\rho}(X,X)\right)(z_{0})\geq \left| \langle \partial H_{\rho}(X,Z), Y\rangle(z_{0}) \right|^{2}, \end{align} where we used that both $H_{\rho}(Z,Z)$ and any tangential derivative of $H_{\rho}(Z,Z)$ at $z_{0}$ are zero. We compute \begin{align*} \langle\partial H_{\rho}(X,Z),Y\rangle(z_{0}) =(YH_{\rho})(X,Z)(z_{0})+H_{\rho}(X,X)(z_{0}) +H_{\rho}\left(\sum_{j=1}^{n}Y(X_{j})\frac{\partial}{\partial z_{j}},Z \right)(z_{0}). 
\end{align*} The first term on the right hand side equals zero by part (2) of Lemma \ref{L:compare}, and the third term is zero as well since $\rho$ is plurisubharmonic at $z_{0}$ and $Z$ is a weak direction there. Therefore, \eqref{E:comparetemp} becomes \begin{align*} H_{\rho}(X,X)(z_{0})\leq H_{H_{\rho}(Z,Z)}(Y,Y)(z_{0}). \end{align*} \end{proof} Now we can proceed to show Proposition \ref{P:compare}. \begin{proof}[Proof of Proposition \ref{P:compare}] Recall that we are working with vectors $W\in\mathbb{C}^{n}$ contained in the span of $W^{1}(z_{0}),\dots,W^{n-1-i}(z_{0})$. We consider the function \begin{align*} h(z)=\left( \sigma\cdot H_{\rho}(N,N)-\sum_{\alpha=1}^{n-1-i}\left| H_{\rho}(N,W^{\alpha}) \right|^{2} \right)(z) \;\;\text{for}\;\;z\in U, \end{align*} where $\sigma=\sum_{\alpha=1}^{n-1-i}H_{\rho}(W^{\alpha},W^{\alpha})$. Again, since $\rho$ is plurisubharmonic on $b\Omega$, $h_{|_{b\Omega\cap V}}$ has a local minimum at $z_{0}$. This, together with $H_{\rho}(W,W)(z_{0})=0$, implies that $H_{h}(W,W)(z_{0})$ is non-negative. Since both $\sigma$ and $\langle\partial\sigma, W\rangle$ vanish at $z_{0}$, it follows that \begin{align*} &\left( H_{\sigma}(W,W)\cdot H_{\rho}(N,N) \right)(z_{0})\\ &\geq \sum_{\alpha=1}^{n-1-i}\left| (WH_{\rho})(N,W^{\alpha})+H_{\rho}\left(\sum_{j=1}^{n}W(N_{j})\frac{\partial}{\partial z_{j}},W^{\alpha} \right) +H_{\rho}\left( N,\sum_{j=1}^{n}\overline{W}(W_{j}^{\alpha})\frac{\partial}{\partial z_{j}} \right) \right|^{2}(z_{0})\\ &= \sum_{\alpha=1}^{n-1-i}\left| (WH_{\rho})(N,W^{\alpha})+H_{\rho}\left( N,\sum_{j=1}^{n}\overline{W}(W_{j}^{\alpha})\frac{\partial}{\partial z_{j}} \right) \right|^{2}(z_{0}), \end{align*} where the last step follows from $\rho$ being plurisubharmonic at $z_{0}$ and the $W^{\alpha}$'s being weak directions there. Moreover, we have that $(WH_{\rho})(N,W^{\alpha})$ equals $(NH_{\rho})(W,W^{\alpha})$ at $z_{0}$.
Writing $X^{\alpha}=\sum_{j=1}^{n}\overline{W}(W_{j}^{\alpha})\frac{\partial}{\partial z_{j}}$, we obtain \begin{align*} \left( H_{\sigma}(W,W)\cdot H_{\rho}(N,N) \right)(z_{0}) &\geq \sum_{\alpha=1}^{n-1-i}\left| (NH_{\rho})(W,W^{\alpha})+H_{\rho}(N,X^{\alpha}) \right|^{2}(z_{0})\\ &\geq \sum_{\alpha=1}^{n-1-i}\left( \frac{1}{2}\left| (NH_{\rho})(W,W^{\alpha}) \right|^{2}-3 \left|H_{\rho}(N,X^{\alpha})\right|^{2} \right)(z_{0}). \end{align*} Here the last step follows from the (sc)-(lc) inequality. Since $\rho$ is plurisubharmonic at $z_{0}$, we can apply the Cauchy--Schwarz inequality: \begin{align*} \left| H_{\rho}(N,X^{\alpha})(z_{0}) \right|^{2} &\leq \left( H_{\rho}(N,N)\cdot H_{\rho}(X^{\alpha},X^{\alpha}) \right)(z_{0})\\ &\leq \left( H_{\rho}(N,N)\cdot H_{H_{\rho}(W^{\alpha},W^{\alpha})}(W,W) \right)(z_{0}), \end{align*} where the last estimate follows by part (3) of Lemma \ref{L:compare} with $W$ and $W^{\alpha}$ in place of $Y$ and $Z$, respectively. Thus we have \begin{align*} \sum_{\alpha=1}^{n-1-i}\left|H_{\rho}(N,X^{\alpha})(z_{0}) \right|^{2} \leq \left(H_{\rho}(N,N)\cdot H_{\sigma}(W,W)\right)(z_{0}), \end{align*} which implies that \begin{align}\label{E:comparetemp2} \sum_{\alpha=1}^{n-1-i}\left| (NH_{\rho})(W,W^{\alpha})(z_{0}) \right|^{2} \leq 8\left(H_{\sigma}(W,W)\cdot H_{\rho}(N,N)\right)(z_{0}). \end{align} Since $W$ is a linear combination of $\{W^{\alpha}(z_{0})\}_{\alpha=1}^{n-1-i}$, we can write $W=\sum_{\alpha=1}^{n-1-i}a_{\alpha}W^{\alpha}(z_{0})$ for some scalars $a_{\alpha}\in\mathbb{C}$. Because of the linear independence of the $W^{\alpha}$'s on $V$, there exists a constant $K_{1}>0$ such that \begin{align*} \sum_{\alpha=1}^{n-1-i}|b_{\alpha}|^{2}\leq K_{1}\left| \sum_{\alpha=1}^{n-1-i}b_{\alpha}W^{\alpha}(z) \right|^{2}\;\text{for all}\;z\in b\Omega\cap V,\;b_{\alpha}\in\mathbb{C}.
\end{align*} Thus it follows that \begin{align*} \sum_{\alpha=1}^{n-1-i} \left| (NH_{\rho})(W,W^{\alpha})(z_{0}) \right|^{2} &\geq \frac{1}{K_{1}|W|^{2}} \sum_{\alpha=1}^{n-1-i}\left|(NH_{\rho})(W,a_{\alpha}W^{\alpha})(z_{0})\right|^{2}\\ &\geq \frac{1}{K_{1}(n-1-i)|W|^{2}} \left|(NH_{\rho})(W,W)(z_{0})\right|^{2}. \end{align*} Hence, \eqref{E:comparetemp2} becomes \begin{align*} \left|(NH_{\rho})(W,W)(z_{0})\right|^{2}\leq 8K_{1}(n-1-i)|W|^{2}\left(H_{\sigma}(W,W)\cdot H_{\rho}(N,N)\right)(z_{0}). \end{align*} Let $K_{2}>0$ be a constant such that $H_{\rho}(N,N)_{|_{b\Omega}}\leq K_{2}$ holds. Setting $K=8K_{1}K_{2}(n-1-i)$, it follows that \begin{align*} \left|(NH_{\rho})(W,W)(z_{0})\right|^{2}\leq K|W|^{2}H_{\sigma}(W,W)(z_{0}). \end{align*} \end{proof} Recall that we are considering a fixed boundary point $p\in\Sigma_{i}$ and all $q\in\Omega\cap V$, $\pi(q)=p$, for some sufficiently small neighborhood $V$ of $p$. After possibly shrinking $V$, it follows by Taylor's theorem that \begin{align*} H_{\sigma}(W,W)(q)=H_{\sigma}(W,W)(\pi(q))+\mathcal{O}\left(d_{b\Omega}(q)\right)|W|^{2} \end{align*} holds for all $q\in\Omega\cap V$ with $\pi(q)=p$. Using this and Proposition \ref{P:compare}, we get for $q\in\Omega\cap V$ with $\pi(q)=p$ that \begin{align*} \left|(NH_{\rho})(W,W)(p)\right|^{2}\leq K|W|^{2}\left[H_{\sigma}(W,W)(q) +\mathcal{O}\left(d_{b\Omega}(q)\right)|W|^{2}\right]. \end{align*} Therefore, our basic estimate \eqref{E:BasicTaylorr} of the complex Hessian of $r$ in direction $W$ becomes \begin{align*} H_{r}(W,W)(q)\geq - d_{b\Omega}(q)e^{-C\sigma(q)}(NH_{\rho})(W,W)(p) &-r(q)\frac{C}{K|W|^{2}}\left|(NH_{\rho})(W,W)(p)\right|^{2}\\ &+\mathcal{O}\left(r^{2}(q)\right)|W|^{2}. \end{align*} Let $c_{1}>0$ be such that $d_{b\Omega}(z)\leq c_{1}|\rho(z)|$ for all $z$ in $V$.
Then, if we choose \begin{align}\label{E:chooseC} C\geq\max_{z\in b\Omega, T\in\mathbb{C}^{n}, |T|=1} \left\{ 0,\frac{c_{1}\operatorname{Re}\left[(NH_{\rho})(T,T)(z)\right]-\frac{\epsilon}{2}}{\left|(NH_{\rho})(T,T)(z)\right|^{2}}K \right\}, \end{align} we obtain, after possibly shrinking $V$, \begin{align}\label{E:BasicTaylorrfinal} H_{r}(W,W)(q)\geq -\epsilon |r(q)|\cdot|W|^{2} \end{align} for all $q\in\Omega\cap V$ with $\pi(q)=p$ and $W\in\mathbb{C}^{n}$ in the span of $\{W^{\alpha}(p)\}_{\alpha=1}^{n-1-i}$. In fact, after possibly shrinking $V$, \eqref{E:BasicTaylorrfinal} holds with, say, $2\epsilon$ in place of $\epsilon$ for all $q\in\Omega\cap V$ satisfying $\pi(q)\in\Sigma_{i}\cap V$ and $W\in\mathbb{C}^{n}$ belonging to the span of $\{W^{\alpha}(\pi(q))\}_{\alpha=1}^{n-1-i}$. A problem with this construction is that $r$ is not necessarily plurisubharmonic at those weakly pseudoconvex boundary points which are not in $\Sigma_{i}$. This possible loss of plurisubharmonicity occurs because the $W^{\alpha}$'s are not necessarily weak directions at those points. This means that we cannot simply copy this construction with $r$ in place of $\rho$ to get good estimates near, say, $\Sigma_{i+1}$. Let us be more explicit. Suppose $\tilde{p}\in\Sigma_{i+1}\cap V$ is such that at least one of the $W^{\alpha}$'s is not a weak direction at $\tilde{p}$. That means, if $T$ is a weak complex tangential direction at $\tilde{p}$, then neither does $|\langle\partial\sigma(\tilde{p}),T\rangle|^{2}$ have to be zero nor does $H_{\sigma}(T,T)(\tilde{p})$ have to be non-negative. In view of \eqref{E:explaincutoff}, this says that it might actually happen that $(NH_{r})(T,T)(\tilde{p})$ is greater than $(NH_{\rho})(T,T)(\tilde{p})$ for such a vector $T$. That is, by removing the obstruction term at $p$ we might have worsened the situation at $\tilde{p}$.
One might think that this does not cause any real problems since we still need to introduce a correcting function $\tilde{\sigma}$ to remove the obstruction to \eqref{E:Main1} on the set $\Sigma_{i+1}\cap V$. However, it might be the case that $(NH_{\rho})(T,T)(\tilde{p})=0$. In this case we do not know whether $H_{\tilde{\sigma}}(T,T)$ is strictly positive at $\tilde{p}$, i.e., we do not know whether $H_{\tilde{\sigma}}(T,T)(\tilde{p})$ can make up for any obstructing terms at $\tilde{p}$ introduced by $\sigma$. This says that we need to smoothly cut off $\sigma$ in a manner such that, away from $\Sigma_{i}\cap V$, $|\langle\partial\sigma,T\rangle|^{2}$ stays close to zero and $H_{\sigma}(T,T)$ does not become too negative (relative to $\epsilon|T|^{2}$). The construction of such a cut-off function will be done in the next section. \section{The cutting off}\label{S:cutoff} Let us recall our setting: we are considering a given boundary point $p\in\Sigma_{i}$, $0\leq i \leq n-2$, $V$ a neighborhood of $p$ and smooth, linearly independent $(1,0)$-vector fields $\{W^{\alpha}\}_{\alpha=1}^{n-1-i}$ on $V$, which are complex tangential to $b\Omega$ on $b\Omega\cap V$ and satisfy $H_{\rho}(W^{\alpha},W^{\alpha})=0$ on $\Sigma_{i}\cap V$. From now on, we also suppose that $V$ and the $W^{\alpha}$'s are chosen such that the span of $\{W^{\alpha}(z)\}_{\alpha=1}^{n-1-i}$ contains the null space of the Levi form of $\rho$ at $z$ for all $z\in\Sigma_{j}\cap V$ for $j\in\{i+1,\dots,n-2\}$. This can be done by first selecting smooth $(1,0)$-vector fields $\{S^{\beta}(z)\}_{\beta=1}^{i}$ which are complex tangential to $b\Omega$ on $b\Omega\cap V$ for some neighborhood $V$ of $p$ and orthogonal to each other with respect to the Levi form of $\rho$ such that $H_{\rho}(S^{\beta},S^{\beta})>0$ holds on $b\Omega\cap V$ after possibly shrinking $V$.
Then one completes the basis of the complex tangent space with smooth $(1,0)$-vector fields $\{W^{\alpha}(z)\}_{\alpha=1}^{n-1-i}$ such that the $W^{\alpha}$'s are orthogonal to the $S^{\beta}$'s with respect to the Levi form of $\rho$. Let $V'\subset\subset V$ be another neighborhood of $p$. Let $\zeta\in C^{\infty}_{c}(V,[0,1])$ be a function which equals $1$ on $V'$. For given $m>2$, let $\chi_{m}\in C^{\infty}(\mathbb{R})$ be an increasing function with $\chi_{m}(x)=1$ for all $x\leq 1$ and $\chi_{m}(x)=e^{m}$ for all $x\geq e^{m}$ such that \begin{align*} \frac{x}{\chi_{m}(x)}\leq 2,\;\; \chi_{m}'(x)\leq 2,\;\text{and}\;\; x\cdot\chi_{m}''(x)\leq 4\;\;\text{for all}\;\; x\in[1,e^{m}]. \end{align*} Set $\chi_{m,\tau}(x)=\chi_{m}\left(\frac{x}{\tau}\right)$ for given $\tau>0$. The above properties then become \begin{align*} \frac{x}{\chi_{m,\tau}(x)}\leq 2\tau,\;\; \chi_{m,\tau}'(x)\leq \frac{2}{\tau},\;\text{and}\;\; x\cdot\chi_{m,\tau}''(x)\leq \frac{4}{\tau}\;\;\text{for all}\;\; x\in[\tau,\tau e^{m}]. \end{align*} Set $g_{m,\tau}(x)=1-\frac{\ln(\chi_{m,\tau}(x))}{m}$. It follows by a straightforward computation that \begin{align}\label{E:propsg} &g_{m,\tau}(x)=1\;\text{for}\;x\leq\tau,\;\;0\leq g_{m,\tau}(x)\leq 1\;\text{for all}\;x\in\mathbb{R}, \;\text{and}\notag\\ &|g_{m,\tau}'(x)|\leq\frac{4}{m}\cdot\frac{1}{x},\;\;g_{m,\tau}''(x)\geq-\frac{8}{m}\cdot\frac{1}{x^{2}}\;\; \text{for}\;x\in(\tau, \tau e^{m}). \end{align} For given $m,\;\tau> 0$ we define \begin{align*} s_{m,\tau}(z)=\zeta(z)\cdot\sigma(z)\cdot g_{m,\tau}(\sigma(z))\;\;\text{for}\;z\in V \end{align*} and $s_{m,\tau}=0$ outside of $V$. 
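The inequalities in \eqref{E:propsg}, though straightforward, are used repeatedly below, so we indicate the computation. For $x\in(\tau,\tau e^{m})$ we have
\begin{align*}
g_{m,\tau}'(x)=-\frac{\chi_{m,\tau}'(x)}{m\,\chi_{m,\tau}(x)},
\qquad\text{whence}\qquad
\left|g_{m,\tau}'(x)\right|\leq\frac{1}{m}\cdot\frac{2}{\tau}\cdot\frac{2\tau}{x}
=\frac{4}{m}\cdot\frac{1}{x},
\end{align*}
where we used $\chi_{m,\tau}'(x)\leq\frac{2}{\tau}$ and $\frac{x}{\chi_{m,\tau}(x)}\leq 2\tau$. Moreover, since $\left(\chi_{m,\tau}'\right)^{2}\geq 0$ and $\chi_{m,\tau}>0$,
\begin{align*}
g_{m,\tau}''(x)=-\frac{\chi_{m,\tau}''(x)}{m\,\chi_{m,\tau}(x)}
+\frac{\left(\chi_{m,\tau}'(x)\right)^{2}}{m\,\chi_{m,\tau}^{2}(x)}
\geq-\frac{1}{m}\cdot\frac{4}{\tau x}\cdot\frac{2\tau}{x}
=-\frac{8}{m}\cdot\frac{1}{x^{2}},
\end{align*}
by $x\cdot\chi_{m,\tau}''(x)\leq\frac{4}{\tau}$; if $\chi_{m,\tau}''(x)<0$, the last bound is immediate.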
This function has the properties described at the end of Section \ref{S:modification} if $m,\tau$ are chosen appropriately: \begin{lemma}\label{L:cutoff} For all $\delta>0$, there exist $m,\;\tau>0$ such that $s_{m,\tau}$ satisfies: \begin{enumerate} \item[(i)] $s_{m,\tau}=\zeta\sigma$ for $\sigma\in[0,\tau]$, \item[(ii)] $0\leq s_{m,\tau}\leq\delta$ on $b\Omega$. \end{enumerate} Moreover, if $z\in \left(b\Omega\cap V\right)\setminus\Sigma_{i}$ and $T\in\mathbb{C}T_{z}b\Omega$, then \begin{enumerate} \item[(iii)] $|\langle\partial s_{m,\tau}(z),T\rangle|\leq\delta|T|$, \item[(iv)] $H_{s_{m,\tau}}(T,T)(z)\geq-\delta|T|^{2}$ if $T\in\operatorname{span}\{W^{\alpha}(z)\}$. \end{enumerate} \end{lemma} Note that part (iv) of Lemma \ref{L:cutoff} in particular says that if $z\in\cup_{j=i+1}^{n-2}\Sigma_{j}\cap V$, then $H_{s_{m,\tau}}(T,T)(z)\geq-\delta|T|^{2}$ for all $T$ which are weak complex tangential vectors at $z$. To prove Lemma \ref{L:cutoff}, we will need $|\langle\partial\sigma,T\rangle|^{2} \lesssim\sigma|T|^{2}$ on $\overline{\operatorname{supp}(\zeta)}\cap b\Omega$. That this is in fact true we learned from J. D. McNeal. \begin{lemma}[\cite{McN}]\label{L:McNeal} Let $U\subset\subset\mathbb{R}^{n}$ be open. Let $f\in C^{2}(U)$ be a non-negative function on $U$. Then for any compact set $K\subset\subset U$, there exists a constant $c>0$ such that \begin{align}\label{E:McNeal} |\nabla f(x)|^{2}\leq c f(x)\;\;\text{for all}\;\;x\in K. \end{align} \end{lemma} Since the proof by McNeal is rather clever, and since we are not aware of it being published, we shall give it here. \begin{proof} Let $F$ be a smooth, non-negative function such that $F=f$ on $K$ and $F=0$ on $\mathbb{R}^{n}\setminus U$. 
For a given $x\in K$, we have for all $h\in\mathbb{R}^{n}$ that \begin{align}\label{E:McNealTaylor} 0\leq F(x+h)&=F(x)+\sum_{k=1}^{n}\frac{\partial F}{\partial x_{k}}(x)h_{k} +\frac{1}{2}\sum_{k,l=1}^{n}\frac{\partial^{2} F}{\partial x_{k}\partial x_{l}}(\xi)h_{k}h_{l}\notag\\ &=f(x)+\sum_{k=1}^{n}\frac{\partial f}{\partial x_{k}}(x)h_{k}+ \frac{1}{2}\sum_{k,l=1}^{n}\frac{\partial^{2} F}{\partial x_{k}\partial x_{l}}(\xi)h_{k}h_{l} \end{align} holds for some $\xi\in U$. Note that \eqref{E:McNeal} is true if $\left(\nabla f\right)(x)=0$. So assume now that $\left(\nabla f\right)(x)\neq 0$ and choose $h_{k}=\frac{\frac{\partial f}{\partial x_{k}}(x)}{\left|\left(\nabla f\right)(x)\right|}\cdot t$ for $t\in\mathbb{R}$. Then \eqref{E:McNealTaylor} becomes \begin{align*} 0\leq f(x)+\left|\left(\nabla f\right)(x)\right|t +nL\cdot\frac{\sum_{k=1}^{n}\left|\frac{\partial f}{\partial x_{k}}(x)\right|^{2}}{\left|\left(\nabla f\right) (x) \right|^{2}} \cdot t^{2}\;\;\text{for all}\;\;t\in\mathbb{R}, \end{align*} where $L=\frac{1}{2}\sup \left\{\left|\frac{\partial^{2}F}{\partial x_{k}\partial x_{l}}(\xi) \right|\;|\;\xi\in U,\;1\leq k,l\leq n\right\}$. Since the fraction in the last term equals $1$, this simplifies to \begin{align*} 0\leq f(x)+\left|\left(\nabla f\right)(x)\right|\cdot t+nL\cdot t^{2}\;\;\text{for all}\;\;t\in\mathbb{R}. \end{align*} In particular, the following must hold for all $t\in\mathbb{R}$: \begin{align*} -\frac{f(x)}{nL}+\frac{\left|\left(\nabla f\right)(x)\right|^{2}}{(2nL)^{2}} \leq \left( t+\frac{\left|\left(\nabla f\right)(x)\right|}{2nL} \right)^{2}, \end{align*} which, upon choosing $t=-\frac{\left|\left(\nabla f\right)(x)\right|}{2nL}$, implies that \begin{align*} \left|\left(\nabla f\right)(x)\right|^{2}\leq 4nL\cdot f(x). \end{align*} \end{proof} We can assume that $V$ is such that there exists a diffeomorphism $\phi:V\cap b\Omega \longrightarrow U$ for some open set $U\subset\subset\mathbb{R}^{2n-1}$. Set $f:=\sigma\circ\phi^{-1}$. Then $f$ satisfies the hypotheses of Lemma \ref{L:McNeal}.
Hence we get that there exists a constant $c>0$ such that $\left|\left(\nabla f\right)(x)\right|^{2}\leq cf(x)$ for all $x\in K=\phi(\overline{\operatorname{supp}(\zeta)})$. This implies that there exists a constant $c_{1}>0$, depending on $\phi$, such that \begin{align}\label{E:McNealapplied} \left|\langle\partial\sigma(z),T\rangle\right|^{2}\leq c_{1}\sigma(z)|T|^{2}\;\; \text{for all}\;\; z\in\overline{\operatorname{supp}(\zeta)}\;\text{and}\; T\in\mathbb{C}T_{z}b\Omega. \end{align} Now we can prove Lemma \ref{L:cutoff}. \begin{proof}[Proof of Lemma \ref{L:cutoff}] Note first that $s_{m,\tau}$ is identically zero on $b\Omega\setminus V$ for any $m>2$ and $\tau>0$. Now let $\delta>0$ be given, and let $m$ be a large positive number, whose value will be fixed in the proof of (iv). Below we will show how to choose $\tau>0$ once $m$ has been chosen. Part (i) follows directly from the definition of $s_{m,\tau}$ for any choice of $m>2$ and $\tau>0$. Part (ii) also follows straightforwardly if $\tau>0$ is such that $\tau e^{m}\leq\delta$. Notice that for all $z\in b\Omega\cap V$ with $\sigma(z)>\tau e^{m}$, $s_{m,\tau}(z)=0$, and hence (iii), (iv) hold trivially there. Thus, to prove (iii) and (iv) we only need to consider the two sets \begin{align*} S_{1}=\{z\in b\Omega\cap V\;|\;\sigma(z)\in(0,\tau)\}\;\;\text{and}\;\; S_{2}=\{z\in b\Omega\cap V\;|\;\sigma(z)\in[\tau,\tau e^{m}]\}. \end{align*} \emph{Proof of (iii):} If $z\in S_{1}$, then $s_{m,\tau}(z)=\zeta(z)\cdot\sigma(z)$ and if $T\in\mathbb{C}T_{z}b\Omega$, we get \begin{align*} \left|\langle\partial s_{m,\tau}(z),T\rangle \right|&= \left| \sigma\cdot\langle\partial\zeta,T\rangle+\zeta\cdot\langle\partial\sigma, T\rangle \right|(z)\\ &\overset{\eqref{E:McNealapplied}}{\leq} \left(c_{2}\sigma(z) +\left(c_{1}\sigma(z)\right)^{\frac{1}{2}}\right)|T|, \; \text{where}\;\; c_{2}=\max_{z\in b\Omega\cap V}|\partial\zeta(z)|.
\end{align*} Thus, if we choose $\tau>0$ such that $c_{2}\tau+(c_{1}\tau)^{\frac{1}{2}}\leq\delta$, then (iii) holds on the set $S_{1}$.\\ Now suppose that $z\in S_{2}$ and $T\in\mathbb{C}T_{z}b\Omega$, and compute: \begin{align*} \left| \langle\partial s_{m,\tau}(z),T\rangle \right| &=\left| \sigma\cdot g_{m,\tau}(\sigma)\cdot\langle\partial\zeta,T\rangle +\zeta\cdot\langle\partial\sigma,T\rangle \left( g_{m,\tau}(\sigma)+\sigma\cdot g_{m,\tau}'(\sigma) \right) \right|(z)\\ &\overset{\eqref{E:McNealapplied},\eqref{E:propsg}}{\leq} \left( c_{2}\sigma(z)+\left(c_{1}\sigma(z)\right)^{\frac{1}{2}}\cdot\left(1+\frac{4}{m}\right) \right)|T|. \end{align*} Thus, using that $1+\frac{4}{m}\leq 2$ for $m\geq 4$, if we choose $\tau>0$ such that $c_{2}\tau e^{m}+2(c_{1}\tau e^{m})^{\frac{1}{2}} \leq\delta$, then (iii) also holds on the set $S_{2}$. \emph{Proof of (iv):} Let us first consider the case when $z\in S_{1}$. Then, again, $s_{m,\tau}(z)=\zeta(z)\cdot\sigma(z)$ and if $T$ is in the span of $\{W^{\alpha}(z)\}_{\alpha=1}^{n-1-i}$, we obtain \begin{align*} H_{s_{m,\tau}}(T,T)(z)=\left[\sigma\cdot H_{\zeta}(T,T) +2\operatorname{Re}\left(\langle\partial\zeta,T\rangle\cdot \overline{\langle\partial\sigma,T\rangle}\right) +\zeta\cdot H_{\sigma}(T,T) \right](z). \end{align*} Let $c_{3}>0$ be a constant such that $H_{\zeta}(\xi,\xi)(z)\geq -c_{3}|\xi|^{2}$ for all $z\in b\Omega\cap V$, $\xi\in\mathbb{C}^{n}$. Then it follows, using \eqref{E:McNealapplied} again, that \begin{align*} H_{s_{m,\tau}}(T,T)(z)\geq -\left(c_{3}\sigma(z)+2c_{2}\left(c_{1}\sigma(z) \right)^{\frac{1}{2}}\right) |T|^{2} +\zeta(z)\cdot H_{\sigma}(T,T)(z). \end{align*} Note that for $z$ and $T$ as above, $H_{\sigma}(T,T)(z)\geq 0$ when $\sigma(z)=0$. Furthermore, the set \begin{align*} \left\{(z,T)\;|\;z\in b\Omega\cap\overline{\operatorname{supp}\zeta},\;\sigma(z)=0,\; T\in\operatorname{span}\{W^{1}(z),\dots,W^{n-1-i}(z)\}\right\} \end{align*} is a closed subset of the complex tangent bundle of $b\Omega$.
Thus there exists a neighborhood $U\subset V$ of $\{z\in b\Omega\cap\overline{\operatorname{supp}\zeta}\;|\;\sigma(z)=0\}$ such that \begin{align*} H_{\sigma}(T,T)(z)\geq-\frac{\delta}{2}|T|^{2} \end{align*} holds for all $z\in b\Omega\cap U$, $T$ in the span of $\{W^{\alpha}(z)\}_{\alpha=1}^{n-1-i}$. Let $\nu_{1}$ be the maximum of $\sigma$ on the closure of $b\Omega\cap U$, and let $\nu_{2}$ be the minimum of $\sigma$ on $\left(b\Omega\cap\overline{\operatorname{supp}\zeta}\right)\setminus U$. Now choose $\tau>0$ such that $\tau \leq\min\{\nu_{1},\frac{\nu_{2}}{2}\}$. Then $z\in S_{1}$ implies that $z\in b\Omega\cap U$ and therefore $\zeta(z)\cdot H_{\sigma}(T,T)(z)\geq-\frac{\delta}{2}|T|^{2}$ for all $T$ in the span of the $W^{\alpha}(z)$'s. If we also make sure that $\tau>0$ is such that $c_{3}\tau+2c_{2}\left(c_{1}\tau\right)^{\frac{1}{2}} \leq\frac{\delta}{2}$, then (iv) is true on $S_{1}$. Now suppose that $z\in S_{2}$ and $T$ in the span of $\{W^{\alpha}(z)\}_{\alpha=1}^{n-1-i}$. We compute \begin{align*} H_{s_{m,\tau}}&(T,T)(z)= \Bigl[\sigma g_{m,\tau}(\sigma) H_{\zeta}(T,T) +2\operatorname{Re}\left(\langle\partial\zeta,T\rangle\cdot \overline{\langle\partial\sigma,T\rangle} \right) \left( g_{m,\tau}(\sigma)+\sigma g_{m,\tau}'(\sigma) \right)\Bigr.\\ &\Bigl.+ \zeta\left|\langle\partial\sigma,T\rangle\right|^{2} \left(2g_{m,\tau}'(\sigma)+\sigma g_{m,\tau}''(\sigma)\right) +\zeta H_{\sigma}(T,T) (g_{m,\tau}(\sigma)+\sigma g_{m,\tau}'(\sigma)) \Bigr](z)\\ &=\operatorname{I}+\operatorname{II}+\operatorname{III}+\operatorname{IV}. \end{align*} If we choose $\tau>0$ such that $\tau e^{m}c_{3}\leq\frac{\delta}{4}$, then it follows that $\operatorname{I}\geq-\frac{\delta}{4}|T|^{2}$. 
Estimating the term $\operatorname{II}$ we get \begin{align*} \operatorname{II}\overset{\eqref{E:propsg}}{\geq}-2c_{2}\left|\langle\partial\sigma(z),T\rangle \right||T|\left(1+\frac{4}{m}\right) \overset{\eqref{E:McNealapplied}}{\geq} -4c_{2}\left(c_{1}\sigma(z)\right)^{\frac{1}{2}}|T|^{2} \geq-\frac{\delta}{4}|T|^{2}, \end{align*} if we choose $\tau>0$ such that $4c_{2}\left(c_{1}\tau e^{m}\right)^{\frac{1}{2}}\leq\frac{\delta}{4}$. To estimate term $\operatorname{IV}$, we only need to make sure that $\tau>0$ is so small that $z\in S_{2}$ implies that $2\zeta(z)\cdot H_{\sigma}(T,T)(z)\geq -\frac{\delta}{4}|T|^{2}$. This can be done similarly to the case when $z\in S_{1}$. Note that up to this point the size of the parameter $m$ played no role. That is, we obtain the above results for any choice of $m$ as long as $\tau>0$ is sufficiently small. The size of $m$ only matters for the estimates on term $\operatorname{III}$: \eqref{E:propsg} and \eqref{E:McNealapplied} yield \begin{align*} \operatorname{III}&\geq -\left|\langle\partial\sigma(z),T\rangle\right|^{2} \left(2\left|g_{m,\tau}'(\sigma(z))\right|+\frac{8}{m\sigma(z)}\right)\\ &\geq -\left|\langle\partial\sigma(z),T\rangle\right|^{2}\frac{16}{m\sigma(z)} \geq-\frac{16 c_{1}}{m}|T|^{2}. \end{align*} We now choose $m>0$ such that $\frac{16 c_{1}}{m}\leq\frac{\delta}{4}$, and then we choose $\tau>0$ according to our previous computations. \end{proof} \section{Proof of \eqref{E:Main1}}\label{S:proof} We shall prove \eqref{E:Main1} by induction over the rank of the Levi form of $\rho$. To start the induction we construct a smooth defining function $r_{0}$ of $\Omega$ which satisfies \eqref{E:Main1} on $U_{0}\cap\Omega$ for some neighborhood $U_{0}$ of $\Sigma_{0}$.
Let $\{V_{j,0}\}_{j\in J_{0}}$, $\{V_{j,0}'\}_{j\in J_{0}}\subset\subset\mathbb{C}^{n}$ be finite, open covers of $\Sigma_{0}$ with $V_{j,0}'\subset\subset V_{j,0}$ such that there exist smooth, linearly independent $(1,0)$-vector fields $W_{j,0}^{\alpha}$, $\alpha\in\{1,\dots,n-1\}$, defined on $V_{j,0}$, which are complex tangential to $b\Omega$ on $b\Omega\cap V_{j,0}$ and satisfy: \begin{enumerate} \item $H_{\rho}(W_{j,0}^{\alpha},W_{j,0}^{\alpha})=0$ on $\Sigma_{0}\cap V_{j,0}$ for all $j\in J_{0}$, \item the span of the $W_{j,0}^{\alpha}(z)$'s contains the null space of the Levi form of $\rho$ at $z$ for all boundary points $z$ in $\left(\cup_{i=1}^{n-2}\Sigma_{i}\right)\cap V_{j,0}$. \end{enumerate} We shall write $V_{0}=\cup_{j\in J_{0}}V_{j,0}$ and $V_{0}'=\cup_{j\in J_{0}}V_{j,0}'$. Choose smooth, non-negative functions $\zeta_{j,0}$, $j\in J_{0}$, such that \begin{align*} \sum_{j\in J_{0}}\zeta_{j,0}=1\;\text{on}\;V_{0}',\;\sum_{j\in J_{0}}\zeta_{j,0}\leq 1\;\text{on}\;V_{0}, \;\text{and}\;\overline{\operatorname{supp}\zeta_{j,0}}\subset V_{j,0}\;\text{for all}\;j\in J_{0}. \end{align*} Set $\sigma_{j,0}=\sum_{\alpha=1}^{n-1}H_{\rho}(W_{j,0}^{\alpha},W_{j,0}^{\alpha})$. For given $\epsilon>0$, choose $C_{0}$ according to \eqref{E:chooseC}. Then choose $m_{j,0}$ and $\tau_{j,0}$ as in Lemma \ref{L:cutoff} such that \begin{align*} s_{m_{j,0},\tau_{j,0}}(z)= \begin{cases} \zeta_{j,0}(z)\cdot\sigma_{j,0}(z)\cdot g_{m_{j,0},\tau_{j,0}}(\sigma_{j,0}(z)) &\text{if}\;\;z\in V_{j,0}\\ 0 &\text{if}\;\;z\in (V_{j,0})^{c} \end{cases} \end{align*} satisfies (i)-(iv) of Lemma \ref{L:cutoff} for $\delta_{0}=\frac{\epsilon}{C_{0}|J_{0}|}$. Finally, set $s_{0}=\sum_{j\in J_{0}}s_{m_{j,0},\tau_{j,0}}$ and define the smooth defining function $r_{0}=\rho e^{-C_{0}s_{0}}$.
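For later use, we record how the correction $s_{0}$ enters the complex Hessian of $r_{0}$: differentiating the product $r_{0}=\rho e^{-C_{0}s_{0}}$ directly gives, for $\xi\in\mathbb{C}^{n}$,
\begin{align*}
H_{r_{0}}(\xi,\xi)= e^{-C_{0}s_{0}}\Bigl[ H_{\rho}(\xi,\xi)
+\rho\left( C_{0}^{2}\left|\langle\partial s_{0},\xi\rangle\right|^{2}
-C_{0}H_{s_{0}}(\xi,\xi) \right)
-2C_{0}\operatorname{Re}\left( \overline{\langle\partial\rho,\xi\rangle}
\langle\partial s_{0},\xi\rangle \right) \Bigr].
\end{align*}
Note that the terms involving $s_{0}$ are multiplied by $\rho$ or paired with $\langle\partial\rho,\xi\rangle$, which is the structure exploited in the estimates below.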
By our choice of $r_{0}$ we have for all $q\in\Omega\cap V_{0}'$ with $\pi(q)\in\Sigma_{0}\cap V_{0}'$ that \begin{align*} H_{r_{0}}(W,W)(q)\geq H_{r_{0}}(W,W)(\pi(q))-\epsilon \left|r_{0}(q)\right|\cdot |W|^{2} =-\epsilon \left|r_{0}(q)\right|\cdot |W|^{2} \end{align*} for all $W\in\mathbb{C}T_{\pi(q)}b\Omega$. In fact, by continuity there exists a neighborhood $U_{0}\subset V_{0}'$ of $\Sigma_{0}$ such that \begin{align*} H_{r_{0}}(W,W)(q)\geq H_{r_{0}}(W,W)(\pi(q))-2\epsilon |r_{0}(q)||W|^{2} \end{align*} holds for all $q\in\Omega\cap U_{0}$ with $\pi(q)\in b\Omega\cap U_{0}$ and $W\in\mathbb{C}T_{\pi(q)}b\Omega$. Now let $\xi\in\mathbb{C}^{n}$. For each $q\in\Omega\cap U_{0}$ with $\pi(q)\in b\Omega\cap U_{0}$ we shall write $\xi=W+M$, where $W\in\mathbb{C}T_{\pi(q)}b\Omega$ and $M$ is in the span of $N(\pi(q))$. Note that then $|\xi|^{2}=|W|^{2}+|M|^{2}$. We get for the complex Hessian of $r_{0}$ at $q$: \begin{align*} H_{r_{0}}(\xi,\xi)(q)&=H_{r_{0}}(W,W)(q)+2\operatorname{Re}\left( H_{r_{0}}(W,M)(q) \right) +H_{r_{0}}(M,M)(q)\\ &\geq H_{r_{0}}(W,W)(\pi(q))-2\epsilon\left|r_{0}(q)\right|\cdot |W|^{2}+ 2\operatorname{Re}\left( H_{r_{0}}(W,M)(q) \right) +H_{r_{0}}(M,M)(q). \end{align*} Note that Taylor's theorem yields \begin{align*} H_{r_{0}}(W,M)(q)&=H_{r_{0}}(W,M)(\pi(q))+\mathcal{O}(d_{b\Omega}(q))|W||M|\\ &= e^{-C_{0}s_{0}(\pi(q))}\left(H_{\rho}(W,M)-C_{0}\overline{\langle\partial\rho,M\rangle} \langle\partial s_{0},W\rangle\right)(\pi(q))+\mathcal{O}(d_{b\Omega}(q))|W||M|. \end{align*} It follows by property (iii) of Lemma \ref{L:cutoff} that $|\langle\partial s_{0},W\rangle|\leq\frac{\epsilon}{C_{0}}|W|$ on $b\Omega$. After possibly shrinking $U_{0}$ we get \begin{align*} 2\operatorname{Re}\left(H_{r_{0}}(W,M)(\pi(q))\right)\geq-4\epsilon|\partial\rho| |W| |M| +e^{-C_{0}s_{0}(\pi(q))}2\operatorname{Re}\left(H_{\rho}(W,M)\right)(\pi(q)).
\end{align*} Putting the above estimates together, we have \begin{align*} H_{r_{0}}(\xi,\xi)(q)\geq &-2\epsilon|r_{0}(q)||W|^{2}-4\epsilon|\partial\rho||W||M|+H_{r_{0}}(M,M)(q)\\ &+e^{-C_{0}s_{0}(\pi(q))} \left[ H_{\rho}(W,W)(\pi(q))+2\operatorname{Re}\left(H_{\rho}(W,M)(\pi(q))\right) \right]. \end{align*} An application of the (sc)-(lc) inequality yields \begin{align*} -\epsilon |W||M|\geq-\epsilon\left(|r_{0}(q)||\xi|^{2} +\frac{1}{|r_{0}(q)|}\left|\langle\partial r_{0}(\pi(q)),\xi\rangle\right|^{2} \right), \end{align*} where we used that $|\xi|^{2}=|W|^{2}+|M|^{2}$. Taylor's theorem also gives us that \begin{align*} \left|\langle\partial r_{0}(\pi(q)),\xi\rangle\right| &=e^{-C_{0}s_{0}(\pi(q))}\cdot\left| \langle\partial\rho(\pi(q)),\xi\rangle \right|\\ &\leq e^{-C_{0}s_{0}(\pi(q))}\cdot\left( \left|\langle\partial\rho(q),\xi\rangle\right|+ \mathcal{O}\left(\rho(q)\right)|\xi| \right), \end{align*} where the constant in the last term is independent of $\epsilon$. After possibly shrinking $U_{0}$, we obtain for all $q\in\Omega\cap U_{0}$ \begin{align*} \left|\langle\partial r_{0}(\pi(q)),\xi\rangle\right| &\leq 2e^{-C_{0}s_{0}(q)}\cdot\left|\langle\partial\rho(q),\xi\rangle\right| +\mathcal{O}\left(r_{0}(q)\right)|\xi|\\ &\leq 2\left|\langle\partial r_{0}(q),\xi\rangle\right|+2\left|\rho(q)\right| \cdot\left|\langle\partial e^{-C_{0}s_{0}(q)},\xi\rangle\right| +\mathcal{O}\left(r_{0}(q)\right)|\xi|, \end{align*} where, again, the constant in the last term is independent of $\epsilon$. Using that $\left|\langle\partial s_{0},W\rangle\right|\leq\frac{\epsilon}{C_{0}}|W|$ on $b\Omega$, we get \begin{align*} \left|\langle\partial e^{-C_{0}s_{0}(q)},\xi\rangle\right| \leq 2\epsilon |W|+\mathcal{O}\left( \left|\langle\partial\rho(\pi(q)),\xi\rangle \right| \right), \end{align*} which implies that $|\langle\partial r_{0}(\pi(q)),\xi\rangle|\lesssim|\langle\partial r_{0}(q),\xi\rangle|+ |r_{0}(q)||\xi|$. 
Thus we have \begin{align*} -\epsilon|W||M|\gtrsim-\epsilon\left(|r_{0}(q)||\xi|^{2} +\frac{1}{|r_{0}(q)|}\left|\langle\partial r_{0}(q),\xi\rangle\right|^{2} \right). \end{align*} Since \begin{align*} H_{\rho}(W,W)(\pi(q))+2\operatorname{Re}\left(H_{\rho}(W,M)(\pi(q))\right)=H_{\rho}(\xi,\xi)(\pi(q))-H_{\rho}(M,M)(\pi(q)), \end{align*} and $H_{\rho}(\xi,\xi)(\pi(q))$ is non-negative, it follows that \begin{align*} H_{r_{0}}(\xi,\xi)(q)\gtrsim -\epsilon\left(|r_{0}(q)||\xi|^{2}+\frac{1}{|r_{0}(q)|} \left|\langle\partial r_{0}(q),\xi\rangle\right|^{2}\right) +\mu_{0}H_{\rho}(\xi,\xi)(\pi(q)) \end{align*} for some positive constant $\mu_{0}$. Since the constants in $\gtrsim$ do not depend on the choice of $\epsilon$, this proves \eqref{E:Main1} in an open neighborhood $U_{0}$ of $\Sigma_{0}$. Let $l\in\{0,\dots,n-3\}$ be fixed and suppose that there exist a smooth defining function $r_{l}$ of $\Omega$ and an open neighborhood $U_{l}\subset\mathbb{C}^{n}$ of $\cup_{i=0}^{l}\Sigma_{i}$ such that \begin{align}\label{E:ihypotheses1} H_{r_{l}}(\xi,\xi)(q)\geq -\epsilon \left( |r_{l}(q)||\xi|^{2}+\frac{1}{|r_{l}(q)|}\left|\langle\partial r_{l}(q),\xi\rangle\right|^{2} \right)+\mu_{l}H_{\rho}(\xi,\xi)(\pi(q)) \end{align} for all $q\in\Omega\cap U_{l}$ with $\pi(q)\in b\Omega\cap U_{l}$ and $\xi\in\mathbb{C}^{n}$. Here, $\mu_{l}$ is some positive constant. Furthermore, we suppose that the function $\vartheta_{l}$ defined by $r_{l}=\rho e^{-\vartheta_{l}}$ satisfies the following: \begin{align} \left| \langle\partial\vartheta_{l}(z),T\rangle\right|&\leq\epsilon|T|\;\;\text{for all}\;\;z\in b\Omega, T\in \mathbb{C}T_{z}b\Omega\label{E:ihypothesis2}\\ H_{\vartheta_{l}}(T,T)(z)&\geq-\epsilon|T|^{2}\;\;\text{for all}\;\; z\in\cup_{j=l+1}^{n-2}\Sigma_{j}, T\in\mathbb{C}T_{z}b\Omega\;\;\text{with}\;\;H_{\rho}(T,T)(z)=0\label{E:ihypothesis3}. \end{align} Let $k=l+1$.
We shall now show that there exist a smooth defining function $r_{k}$ and a neighborhood $U_{k}$ of $\cup_{i=0}^{k}\Sigma_{i}$ such that for some positive constant $\mu_{k}$ \begin{align}\label{E:claimistep} H_{r_{k}}(\xi,\xi)(q)\geq -\epsilon\left( |r_{k}(q)||\xi|^{2}+\frac{1}{|r_{k}(q)|}\left| \langle\partial r_{k}(q),\xi\rangle \right|^{2} \right)+\mu_{k}H_{\rho}(\xi,\xi)(\pi(q)) \end{align} holds for all $q\in \Omega\cap U_{k}$ with $\pi(q)\in b\Omega\cap U_{k}$ and $\xi\in\mathbb{C}^{n}$. Let $\{V_{j,k}\}_{j\in J_{k}}$, $\{V_{j,k}'\}_{j\in J_{k}}$ be finite, open covers of $\Sigma_{k}\setminus U_{k-1}$ such that \begin{enumerate} \item[(1)] $V_{j,k}'\subset\subset V_{j,k}$ and $\overline{V}_{j,k} \cap\left(\cup_{i=0}^{k-1}\Sigma_{i}\right)=\emptyset$ for all $j\in J_{k}$, and \item[(2)] there exist smooth, linearly independent $(1,0)$-vector fields $W_{j,k}^{\alpha}$, $\alpha\in\{1,\dots,n-1-k\}$, and $S_{j,k}^{\beta}$, $\beta\in\{1,\dots,k\}$, defined on $V_{j,k}$, which are complex tangential to $b\Omega$ on $b\Omega\cap V_{j,k}$ and satisfy the following: \begin{enumerate} \item[(a)] $H_{\rho}(W_{j,k}^{\alpha},W_{j,k}^{\alpha})=0$ on $\Sigma_{k}\cap V_{j,k},\; \alpha\in\{1,\dots,n-1-k\},\;j\in J_{k}$, \item[(b)] the span of $\{W_{j,k}^{1}(z),\dots,W_{j,k}^{n-1-k}(z)\}$ contains the null space of the Levi form of $\rho$ at all boundary points $z$ belonging to $\cup_{i=k+1}^{n-2}\Sigma_{i}\cap V_{j,k}$, \item[(c)] $H_{\rho}(S_{j,k}^{\beta}, S_{j,k}^{\beta})> 0$ on $b\Omega\cap \overline{V_{j,k}},\; \beta\in\{1,\dots,k\},\;j\in J_{k}$, \item[(d)] $H_{\rho}(S_{j,k}^{\beta},S_{j,k}^{\tilde{\beta}})=0$ for $\beta\neq\tilde{\beta}$ on $b\Omega\cap V_{j,k}$, $\beta,\tilde{\beta}\in\{1,\dots,k\},\;j\in J_{k}$. \end{enumerate} \end{enumerate} Note that the above vector fields $\{W_{j,k}^{\alpha}\}$ always exist in some neighborhood of a given point in $\Sigma_{k}$.
However, we might not be able to cover $\Sigma_{k}$ with finitely many such neighborhoods, when the closure of $\Sigma_{k}$ contains boundary points at which the Levi form is of lower rank. Moreover, if the latter is the case, then (c) above is also impossible. These are the reasons for proving \eqref{E:Main1} via induction over the rank of the Levi form of $\rho$. If $S$ is in the span of $\{S_{j,k}^{\beta}(z)\}_{\beta=1}^{k}$ and $W$ is in the span of $\{W_{j,k}^{\alpha}(z)\}_{\alpha=1}^{n-1-k}$ for some $z\in V_{j,k}$, then there is some constant $\kappa_{k}>0$ such that $|S|^{2}+|W|^{2}\leq\kappa_{k}|S+W|^{2}$ for all $j\in J_{k}$. We shall write $V_{k}=\cup_{j\in J_{k}}V_{j,k}$ and $V_{k}'=\cup_{j\in J_{k}}V_{j,k}'$. Let $\zeta_{j,k}$ be non-negative, smooth functions such that \begin{align*} \sum_{j\in J_{k}}\zeta_{j,k}=1\;\text{on}\;V_{k}',\;\sum_{j\in J_{k}}\zeta_{j,k}\leq 1\; \text{on}\;V_{k}\;\text{and}\;\overline{\operatorname{supp}\zeta_{j,k}}\subset V_{j,k}. \end{align*} Set $\sigma_{j,k}=\sum_{\alpha=1}^{n-k-1}H_{\rho}(W_{j,k}^{\alpha},W_{j,k}^{\alpha})$. Recall that $\epsilon>0$ is given. Choose $C_{k}$ according to \eqref{E:chooseC} with $\frac{\epsilon}{\kappa_{k}}$ in place of $\epsilon$ there. We now choose $m_{j,k},\;\tau_{j,k}>0$ such that \begin{align*} s_{m_{j,k},\tau_{j,k}}(z)= \begin{cases} \zeta_{j,k}(z)\cdot\sigma_{j,k}(z)\cdot g_{m_{j,k},\tau_{j,k}}(\sigma_{j,k}(z))&\text{if}\; z\in V_{j,k}\\ 0&\text{if}\;z\in (V_{j,k})^{c} \end{cases} \end{align*} satisfies (i)-(iv) of Lemma \ref{L:cutoff} with $\delta_{k}=\frac{\epsilon}{C_{k}|J_{k}|\kappa_{k}}$. Set $s_{k}=\sum_{j\in J_{k}}s_{m_{j,k},\tau_{j,k}}$ and define the smooth defining function $r_{k}=r_{k-1}e^{-C_{k}s_{k}}$. We claim that this choice of $r_{k}$ satisfies \eqref{E:claimistep}. We shall first see that \eqref{E:claimistep} is true for all $q\in\Omega\cap U_{k-1}\cap V_{k}$ with $\pi(q)\in b\Omega\cap U_{k-1}\cap V_{k}$.
A straightforward computation yields \begin{align}\label{E:istepint1} H_{r_{k}}(\xi,\xi)(q)= e^{-C_{k}s_{k}(q)} \biggl[ H_{r_{k-1}}(\xi,\xi)\biggr.&+r_{k-1}\left( C_{k}^{2}\left|\langle\partial s_{k},\xi\rangle\right|^{2}-C_{k}H_{s_{k}}(\xi,\xi) \right)\\ \biggl.&-2C_{k}\operatorname{Re}\left( \overline{\langle\partial r_{k-1},\xi\rangle}\langle\partial s_{k},\xi\rangle \right) \biggr](q).\notag \end{align} By the induction hypothesis \eqref{E:ihypotheses1} we have good control over the first term in \eqref{E:istepint1}: \begin{align*} e^{-C_{k}s_{k}(q)}H_{r_{k-1}}(\xi,\xi)(q)\geq &-\epsilon\left( |r_{k}(q)||\xi|^{2}+\frac{1}{|r_{k}(q)|}\left|\langle e^{-C_{k}s_{k}(q)}\partial r_{k-1}(q),\xi\rangle \right|^{2} \right)\\ &+e^{-C_{k}s_{k}(q)}\mu_{k-1}H_{\rho}(\xi,\xi)(\pi(q)). \end{align*} Note that \begin{align*} \left| \langle e^{-C_{k}s_{k}(q)}\partial r_{k-1}(q),\xi\rangle \right|^{2} \leq 2\left| \langle\partial r_{k}(q),\xi\rangle \right|^{2} +r_{k}^{2}(q)C_{k}^{2}\left| \langle\partial s_{k}(q),\xi\rangle \right|^{2}. \end{align*} Moreover, part (iii) of Lemma \ref{L:cutoff} implies that \begin{align}\label{E:istepint2} C_{k}^{2}\left| \langle\partial s_{k}(q),\xi\rangle \right|^{2} \leq 2\epsilon|\xi|^{2}+\mathcal{O}\left(\left| \langle\partial r_{k}(\pi(q)),\xi\rangle \right|^{2}\right) \leq 3\epsilon|\xi|^{2}+\mathcal{O}\left(\left| \langle\partial r_{k}(q),\xi\rangle \right|^{2}\right) \end{align} after possibly shrinking $U_{k-1}$ (in normal direction only). Thus we have \begin{align*} e^{-C_{k}s_{k}(q)}H_{r_{k-1}}(\xi,\xi)(q)\gtrsim -\epsilon\left( |r_{k}(q)||\xi|^{2}+\frac{1}{|r_{k}(q)|}\left|\langle \partial r_{k}(q),\xi\rangle \right|^{2} \right)+\mu H_{\rho}(\xi,\xi)(\pi(q)) \end{align*} for some positive constant $\mu\leq\mu_{k-1}$. Thus the first term on the right-hand side of \eqref{E:istepint1} is taken care of. Now suppose $q\in\Omega\cap U_{k-1}\cap V_{k}$ is such that $\pi(q)\in b\Omega\cap V_{j,k}$ for some $j\in J_{k}$.
To be able to deal with the term $H_{s_{k}}(\xi,\xi)(q)$ in \eqref{E:istepint1}, we shall write $\xi=S+W+M$, where \begin{align*} S\in\operatorname{span}\left(\{S_{j,k}^{\beta}(\pi(q))\}_{\beta=1}^{k} \right),\;\; W\in\operatorname{span}\left(\{W_{j,k}^{\alpha}(\pi(q))\}_{\alpha=1}^{n-1-k}\right),\;\;\text{and}\;\; M\in\operatorname{span}\left(N(\pi(q))\right). \end{align*} Then the (sc)-(lc) inequality gives \begin{align*} C_{k}H_{s_{k}}(\xi,\xi)(q)&\geq C_{k}H_{s_{k}}(W,W)(q)-\frac{\epsilon}{\kappa_{k}}|W|^{2}+ \mathcal{O}\left(|S|^{2}+|M|^{2}\right)\\ &\geq -2\frac{\epsilon}{\kappa_{k}}|W|^{2}+\mathcal{O}\left(|S|^{2}+|M|^{2}\right), \end{align*} where the last step holds since $s_{k}$ satisfies part (iv) of Lemma \ref{L:cutoff}. The last inequality together with \eqref{E:istepint2} lets us estimate the second term in \eqref{E:istepint1} as follows \begin{align*} e^{-C_{k}s_{k}(q)}r_{k-1}(q)&\left( C_{k}^{2}\left|\langle\partial s_{k},\xi\rangle\right|^{2}-C_{k}H_{s_{k}}(\xi,\xi) \right)(q)\\ &\gtrsim -\epsilon \left( |r_{k}||\xi|^{2}+\frac{1}{|r_{k}|}\left|\langle \partial r_{k},\xi\rangle \right|^{2} \right)(q)+\mathcal{O}(r_{k}(q))|S|^{2}. \end{align*} For the third term in \eqref{E:istepint1} we use \eqref{E:istepint2} again and obtain \begin{align*} -2C_{k}e^{-C_{k}s_{k}(q)}\operatorname{Re}\left( \overline{\langle\partial r_{k-1},\xi\rangle}\langle \partial s_{k},\xi\rangle \right)(q) \gtrsim -\epsilon\left(|r_{k}(q)||\xi|^{2}+\frac{1}{|r_{k}(q)|}\left|\langle\partial r_{k-1}(q),\xi\rangle \right|^{2}\right). 
\end{align*} Collecting all these estimates, using $\frac{1}{\kappa_{k}}|W|^{2}\leq|\xi|^{2}$, we now have for $q\in\Omega\cap U_{k-1}\cap V_{k}$ with $\pi(q)\in b\Omega\cap U_{k-1}\cap V_{k}$ and for some $\mu>0$ \begin{align*} H_{r_{k}}(\xi,\xi)(q)\gtrsim &-\epsilon \left(|r_{k}(q)||\xi|^{2}+\frac{1}{|r_{k}(q)|}\left|\langle\partial r_{k-1}(q),\xi\rangle \right|^{2}\right)\\ &\hspace{4cm}+\mu H_{\rho}(\xi,\xi)(\pi(q))+\mathcal{O}(r_{k}(q))|S|^{2}\\ \gtrsim&-\epsilon \left(|r_{k}(q)||\xi|^{2}+\frac{1}{|r_{k}(q)|}\left|\langle\partial r_{k-1}(q),\xi\rangle \right|^{2}\right) +\frac{\mu}{2}H_{\rho}(\xi,\xi)(\pi(q)). \end{align*} Here, the last estimate holds, after possibly shrinking $U_{k-1}\cap V_{k}$ (in normal direction only), since $\rho$ is plurisubharmonic on $b\Omega$. We still need to show that \eqref{E:claimistep} is true in some neighborhood of $\Sigma_{k}\setminus U_{k-1}$. Let $U\subset V_{k}$ be a neighborhood of $\Sigma_{k}\setminus U_{k-1}$, $q\in \Omega\cap U$ with $\pi(q)\in b\Omega\cap V_{j,k}$ for some $j\in J_{k}$ and $\xi\in\mathbb{C}^{n}$. Writing $\vartheta_{k}=\vartheta_{k-1}+C_{k}s_{k}$, we get \begin{align*} H_{r_{k}}(\xi,\xi)(q) =e^{-\vartheta_{k}(q)} \Bigl[ H_{\rho}(\xi,\xi)-&2\operatorname{Re}\left( \overline{\langle\partial\rho,\xi\rangle} \langle\partial\vartheta_{k},\xi\rangle \right)\Bigl.\\ &\Bigr.+\rho\left( \left|\langle\partial\vartheta_{k},\xi\rangle \right|^{2} -H_{\vartheta_{k-1}}(\xi,\xi)-C_{k}H_{s_{k}}(\xi,\xi) \right) \Bigr](q)\\ &=\operatorname{I}+\operatorname{II}+\operatorname{III}+\operatorname{IV}+\operatorname{V}. \end{align*} We write again $\xi=S+W+M$. 
By construction of $s_{k}$ and by the induction hypotheses \eqref{E:ihypothesis2} and \eqref{E:ihypothesis3} on $\vartheta_{k-1}$, we can do estimates similar to the ones below \eqref{E:istepint1} to obtain \begin{align*} \operatorname{II}+\operatorname{III}+\operatorname{IV}\gtrsim -\epsilon\left( |r_{k}(q)||\xi|^{2}+\frac{1}{|r_{k}(q)|}\left| \langle\partial r_{k}(q),\xi\rangle \right|^{2} \right)+\mathcal{O}(r_{k}(q))|S|^{2}\;\;\text{for}\;\;q\in U. \end{align*} So we are left with the terms $\operatorname{I}$ and $\operatorname{V}$. Let us first consider the term $\operatorname{I}$. By Taylor's theorem, we have \begin{align*} e^{-\vartheta_{k}(q)}H_{\rho}(\xi,\xi)(q) = e^{-\vartheta_{k}(q)}&\Bigl(H_{\rho}(\xi,\xi)(\pi(q)) -d_{b\Omega}(q)\operatorname{Re}\bigl[ (N H_{\rho})(\xi,\xi)(\pi(q))\bigr]\Bigr)\\ &+\mathcal{O}\left(r_{k}^{2}(q)\right)|\xi|^{2}. \end{align*} By the (sc)-(lc) inequality we have \begin{align*} \operatorname{Re} \left[(NH_{\rho})(\xi,\xi)(\pi(q))\right] \leq &\operatorname{Re}\left[(NH_{\rho})(W,W)(\pi(q))\right]\\ &+\frac{\epsilon}{c_{1}\kappa_{k}}|W|^{2}+ \mathcal{O}\left(\frac{c_{1}\kappa_{k}}{\epsilon}\right)(|S|^{2}+|M|^{2}), \end{align*} where $c_{1}>0$ is such that $d_{b\Omega}(q)\leq c_{1}|\rho(q)|$. Therefore we obtain for some $\mu>0$ \begin{align*} e^{-\vartheta_{k}(q)}H_{\rho}(\xi,\xi)(q) \geq &-d_{b\Omega}(q)e^{-\vartheta_{k}(q)}\operatorname{Re}\left[(NH_{\rho})(W,W)(\pi(q))\right] +r_{k}(q)\frac{\epsilon}{\kappa_{k}}|W|^{2}\\ &+\mu H_{\rho}(\xi,\xi)(\pi(q))+\mathcal{O}\left(r_{k}(q)\right)(|S|^{2}+|M|^{2}) +\mathcal{O}\left(r_{k}^{2}(q)\right)|\xi|^{2}. \end{align*} To estimate term $\operatorname{V}$ we use the (sc)-(lc) inequality again: \begin{align*} -r_{k}(q)C_{k}H_{s_{k}}(\xi,\xi)(q) \geq -r_{k}(q)\left(C_{k}H_{s_{k}}(W,W)(q)-\frac{\epsilon}{\kappa_{k}}|W|^{2}\right)+ \mathcal{O}\left(r_{k}(q)\right)(|S|^{2}+|M|^{2}).
\end{align*} After possibly shrinking $U$, we get for some $\mu>0$ \begin{align*} \operatorname{I}+\operatorname{V}\geq & -d_{b\Omega}(q)e^{-\vartheta_{k}(q)}\operatorname{Re}\left[(NH_{\rho})(W,W)(\pi(q))\right] +r_{k}(q)\left(-C_{k}H_{s_{k}}(W,W)(q)+\frac{2\epsilon}{\kappa_{k}}|W|^{2} \right)\\ &+\mu H_{\rho}(\xi,\xi)(\pi(q))+\mathcal{O}\left(r_{k}(q)\right)|M|^{2} +\mathcal{O}\left(r_{k}^{2}(q)\right)|\xi|^{2}. \end{align*} By our choice of $s_{k}$ and $C_{k}$, it follows that for all $q\in\Omega\cap U$ with $\pi(q)\in b\Omega\cap U$ we have \begin{align*} -d_{b\Omega}e^{-\vartheta_{k}(q)}\operatorname{Re}\left[ (NH_{\rho})(W,W)(\pi(q))\right] -r_{k}(q)C_{k}H_{s_{k}}(W,W)(q)\geq\frac{\epsilon}{\kappa_{k}}r_{k}(q)|W|^{2}. \end{align*} Putting our estimates for the terms $\operatorname{I}$--$\operatorname{V}$ together and letting $U_{k}$ be the union of $U_{k-1}$ and $U$, we obtain: for all $q\in\Omega\cap U_{k}$ with $\pi(q)\in b\Omega\cap U_{k}$, the function $r_{k}$ satisfies \begin{align*} H_{r_{k}}(\xi,\xi)(q)\gtrsim -\epsilon\left( |r_{k}(q)||\xi|^{2}+\frac{1}{|r_{k}(q)|}\left| \langle\partial r_{k}(q),\xi\rangle \right|^{2} \right)+\mu_{k}H_{\rho}(\xi,\xi)(\pi(q)) \end{align*} for some $\mu_{k}>0$. Since the constants in $\gtrsim$ do not depend on $\epsilon$ or on any other parameters which come up in the construction of $r_{k}$, \eqref{E:claimistep} follows. Moreover, by construction, $\vartheta_{k}$ satisfies \eqref{E:ihypothesis2} and \eqref{E:ihypothesis3}. Note that $\Sigma_{n-1}\setminus U_{n-2}$ is a closed subset of the set of strictly pseudoconvex boundary points. Thus for any smooth defining function $r$ there is some neighborhood $U$ of $\Sigma_{n-1}\setminus U_{n-2}$ such that \begin{align*} H_{r}(\xi,\xi)\gtrsim |\xi|^{2} +\mathcal{O}(|\langle\partial r(q),\xi\rangle|^{2})\;\;\text{for all}\;\;\xi\in\mathbb{C}^{n} \end{align*} holds on $U\cap\Omega$. This concludes the proof of \eqref{E:Main1}. 
The proof of \eqref{E:Main2} is essentially the same as the one of \eqref{E:Main1} except that a few signs change. That is, the basic estimate \eqref{E:BasicTaylorH} for $q\in\overline{\Omega}^{c}\cap U$ becomes \begin{align*} H_{\rho}(W,W)(q)=2d_{b\Omega}(q)NH_{\rho}(W,W)(\pi(q))+\mathcal{O}\left(d_{b\Omega}^{2}(q) \right)|W|^{2} \end{align*} for any vector $W\in\mathbb{C}^{n}$ which is a weak complex tangential direction at $\pi(q)$. So an obstruction for \eqref{E:Main2} to hold at $q\in\overline{\Omega}^{c}\cap U$ occurs when $NH_{\rho}(W,W)$ is negative at $\pi(q)$ -- note that this happens exactly when we have no problem with \eqref{E:Main1}. Since the obstruction terms to \eqref{E:Main1} and \eqref{E:Main2} only differ by a sign, one would expect that the necessary modifications of $\rho$ also just differ by a sign. In fact, let $\vartheta_{n-2}$ be as in the proof of \eqref{E:Main1} -- that is, $r_{1}=\rho e^{-\vartheta_{n-2}}$ satisfies \eqref{E:Main1} for a given $\epsilon>0$. Then $r_{2}=\rho e^{\vartheta_{n-2}}$ satisfies \eqref{E:Main2} for the same $\epsilon$. \section{Proof of Corollary \ref{C:DF}}\label{S:DF} We shall now prove Corollary \ref{C:DF}. We begin with part (i) by showing first that for any $\eta\in(0,1)$ there exist a $\delta>0$, a smooth defining function $r$ of $\Omega$ and a neighborhood $U$ of $b\Omega$ such that $h=-(-re^{-\delta|z|^{2}})^{\eta}$ is strictly plurisubharmonic on $\Omega\cap U$. Let $\eta\in(0,1)$ be fixed, and $r$ be a smooth defining function of $\Omega$. For notational ease we write $\phi(z)=\delta|z|^{2}$ for $\delta>0$. Here, $r$ and $\delta$ are to be chosen later. Let us compute the complex Hessian of $h$ on $\Omega\cap U$: \begin{align*} H_{h}(\xi,\xi)=&\eta(-r)^{\eta-2}e^{-\phi\eta} \Bigl[ (1-\eta)\Bigr.\left|\langle\partial r,\xi\rangle\right|^{2}-rH_{r}(\xi,\xi)\\ &+2r\eta \operatorname{Re}\left(\langle\partial r,\xi\rangle\overline{\langle\partial\phi,\xi\rangle}\right) \Bigl.
-r^{2}\eta\left|\langle\partial\phi,\xi\rangle\right|^{2} +r^{2}H_{\phi}(\xi,\xi)\Bigr]. \end{align*} An application of the (sc)-(lc) inequality gives \begin{align*} 2r\eta \operatorname{Re}\left(\langle\partial r,\xi\rangle\overline{\langle\partial\phi,\xi\rangle}\right) \geq -\frac{1-\eta}{2}\left|\langle\partial r,\xi\rangle\right|^{2} -\frac{2r^{2}\eta^{2}}{1-\eta}\left|\langle\partial\phi,\xi\rangle\right|^{2}. \end{align*} Therefore, we obtain for the complex Hessian of $h$ on $\Omega$ the following: \begin{align*} H_{h}(\xi,\xi) \geq \eta(-r)^{\eta-2}e^{-\phi\eta} \Biggl[\frac{1-\eta}{2}|\langle\partial r,\xi\rangle|^{2} \Biggr.&-rH_{r}(\xi,\xi)\\ &+r^{2} \left\{ -\frac{\eta(1+\eta)}{1-\eta}|\langle\partial\phi,\xi\rangle|^{2}+H_{\phi}(\xi,\xi) \right\} \Biggl.\Biggr]. \end{align*} Set $\delta=\frac{1-\eta}{2\eta(1+\eta)D}$, where $D=\max_{z\in\overline{\Omega}}|z|^{2}$. Then we get \begin{align*} H_{\phi}(\xi,\xi)-\frac{\eta(1+\eta)}{1-\eta}\left|\langle\partial\phi,\xi\rangle\right|^{2} = \delta\left(H_{|z|^{2}}(\xi,\xi)-\frac{\eta(1+\eta)}{1-\eta}\delta\left|\langle\overline{z}, \xi\rangle\right|^{2} \right) \geq \frac{\delta}{2}|\xi|^{2}. \end{align*} This implies that \begin{align}\label{E:generalDFest} H_{h}(\xi,\xi)\geq \eta(-r)^{\eta-2}e^{-\phi\eta} \left[ \frac{1-\eta}{2}|\langle \partial r,\xi\rangle|^{2} -rH_{r}(\xi,\xi)+\frac{\delta}{2}r^{2}|\xi|^{2} \right] \end{align} holds on $\Omega$. Set $\epsilon=\min\{\frac{1-\eta}{4},\frac{1-\eta}{8\eta(1+\eta)D}\}$. By \eqref{E:Main1} there exist a neighborhood $U$ of $b\Omega$ and a smooth defining function $r_{1}$ of $\Omega$ such that \begin{align*} H_{r_{1}}(\xi,\xi)(q) \geq -\epsilon\left(|r_{1}(q)||\xi|^{2}+\frac{1}{|r_{1}(q)|}|\langle\partial r_{1}(q),\xi\rangle|^{2}\right) \end{align*} holds for all $q\in\Omega\cap U$, $\xi\in\mathbb{C}^{n}$.
Setting $r=r_{1}$ and using \eqref{E:generalDFest}, we obtain \begin{align*} H_{h}(\xi,\xi)(q)\geq \eta(-h(q))\cdot\epsilon|\xi|^{2}\;\;\text{for}\;\;q\in\Omega\cap U,\;\xi\in\mathbb{C}^{n}. \end{align*} It follows by standard arguments that there exists a defining function $\widetilde{r}_{1}$ such that $-(-\widetilde{r}_{1})^{\eta}$ is strictly plurisubharmonic on $\Omega$; for details see pg.\ 133 in \cite{DF1}. This proves part (i) of Corollary \ref{C:DF}. A proof similar to the one of part (i), using \eqref{E:Main2} in place of \eqref{E:Main1}, shows that for each $\eta>1$ there exist a smooth defining function $\widetilde{r}_{2}$ of $\Omega$, a neighborhood $U$ of $b\Omega$ and a $\delta>0$ such that $(\widetilde{r}_{2}e^{\delta|z|^{2}})^{\eta}$ is strictly plurisubharmonic on $\overline{\Omega}^{c}\cap U$. This proves part (ii). \end{document}
\begin{document} \title{\LARGE \bf A Structured Optimal Controller with Feed-Forward for Transportation } \begin{abstract} We study an optimal control problem for a simple transportation model on a path graph. We give a closed-form solution for the optimal controller, which can also account for planned disturbances using feed-forward. The optimal controller is highly structured, which allows it to be implemented using only local communication, conducted through two sweeps through the graph. \end{abstract} \section{Introduction} In this paper we study a simple linear quadratic control transportation problem on a network. Such problems have well-known solutions based on the Riccati equation \cite{kalman1960}. This gives a static feedback law $$ u = Kx, $$ where $u$ is the input to the system, $x$ is the state of the system and $K$ is a matrix with real entries. This matrix is in general dense. This is undesirable in large-scale problems, since it implies that measurements from the entire network are required to compute the optimal inputs at every node. Furthermore, a centralized coordinator with knowledge of the entire system is required to determine the matrix $K$, and a complete redesign will be required in response to any changes in the network. These factors have led to the development of a range of general-purpose methods for structured control system design. Some notable themes include the notion of Quadratic Invariance \cite{RL06,LL11}, System Level Synthesis \cite{WMD18}, and the use of large-scale optimization techniques (e.g. \cite{LFJ13}). A downside of these approaches is that they improve scalability at the expense of performance. That is, they search over families of controllers that exclude the dense optimal controller for \eqref{eq:problem}. While this may be an acceptable trade-off in comparison with the alternative, it implicitly assumes that a dense feedback law cannot be implemented efficiently.
The main result of this paper is to show that the simple structure in our problem allows the optimal control law to be computed and implemented in a simple and scalable manner. The resulting control actions are the same as those from a Riccati approach, and could in principle be calculated that way. However, there are extra structural features in the control law that are obscured by the resulting dense feedback matrix representation, and it is not obvious how to exploit these to give a scalable implementation from the gain matrix obtained from the Riccati equation. \subsection{Problem Formulation} We consider the problem of transportation and production of goods on a directed path graph with vertices $v_1,v_2,\ldots{},v_N$ and directed edges $(v_N,v_{N-1}),\ldots{},(v_2,v_1)$. The dynamics are given by \begin{equation}\label{eq:dynamics} z_i[t+1] = z_i[t] - u_{i-1}[t] + u_{i}[t-\tau_{i}] + v_i[t] + d_i[t]. \end{equation} All the variables are considered to be defined relative to some equilibrium. In the above, $z_i[t] \in \mathds{R}$ is the quantity in node $i$ at time $t$. The system can be controlled using the variables $u_i[t]\in \mathds{R}$ and $v_i[t]\in \mathds{R}$. The variable $u_i[t]$ denotes the amount of the quantity that is transported from node $i+1$ to node $i$ (again relative to some equilibrium flows), and the transportation takes $\tau_{i}$ time units. For the last node $N$ it is assumed that $u_N[t] = 0$ for all $t$. The variable $v_i[t]$ denotes the flexible production or consumption of the quantity at the \emph{i}th node. Finally, $d_i[t]\in \mathds{R}$ is the fixed production/consumption at the \emph{i}th node. This will be treated as a forecast, or \emph{planned disturbance}, that is known to the designer, but cannot be changed. This model could for instance describe a water irrigation network \cite{cantoni2007control} or a simple supply chain system \cite{inventory}.
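To fix ideas, the update \eqref{eq:dynamics} is straightforward to simulate. The sketch below is our illustration and not part of the paper; the function name and the zero-based list layout (list index $i$ standing for node $i+1$, with \texttt{u\_hist[i][s]} holding $u_{i+1}[s]$) are our own conventions.

```python
# Simulation of the transportation dynamics (1):
#   z_i[t+1] = z_i[t] - u_{i-1}[t] + u_i[t - tau_i] + v_i[t] + d_i[t],
# with the conventions u_0[t] = u_N[t] = 0. Zero-based lists: index i
# corresponds to node i+1, and u_hist[i][s] holds u_{i+1}[s].

def step(z, u_hist, v, d, tau, t):
    """Advance all node levels by one time step at time t."""
    N = len(z)
    z_next = []
    for i in range(N):
        # outflow u_{i-1}[t] to the downstream neighbor (none for node 1)
        outflow = u_hist[i - 1][t] if i >= 1 else 0.0
        # delayed inflow u_i[t - tau_i] from upstream (none for node N)
        s = t - tau[i]
        inflow = u_hist[i][s] if (i < N - 1 and s >= 0) else 0.0
        z_next.append(z[i] - outflow + inflow + v[i] + d[i])
    return z_next
```

For example, with three nodes and $\tau_1=1$, the flow $u_1[0]$ dispatched at time $0$ arrives at node 1 in the update from $t=1$ to $t=2$.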
A state-space representation for \eqref{eq:dynamics} can be obtained by taking $z_i[t]$ together with the in-transit flows $u_i[t-\delta],\ 1\leq\delta\leq\tau_i$, as the system state. The goal is to optimally operate this network around some equilibrium point. The performance is measured by the cost of deviating from the equilibrium levels, $q_iz_i^2$, and the cost of the variable production, $r_iv_i^2$, where the $q_i$ and $r_i$ are strictly positive constants. We thus consider the following linear quadratic control problem on a graph with $N$ nodes, \begin{equation}\label{eq:problem}\begin{aligned} \minimize_{z,u,v} \quad & \sum_{t=0}^\infty\sum_{i=1}^N \big(q_iz_i[t]^2 + r_iv_i[t]^2 \big)\\ \st \quad & \text{dynamics in \eqref{eq:dynamics}}\\ & z[0] \text{ and } d_i[t] \text{ given}. \end{aligned}\end{equation} Note that there is no penalty on the internal flows $u_i$. This can for example be motivated by the transportation costs already being covered by the costs of the nominal flows (or, in the case of water irrigation networks, by gravity doing the moving). This problem is in effect a dynamic extension of the types of scheduling problems considered in transportation networks \cite{ahuja1995applications}, and could be used to complement such approaches by optimally adjusting a nominal schedule in real time using the feedback principle. A similar problem has been studied in a previous paper \cite{heyden2018structured}. However, we give several important extensions: we allow for non-homogeneous delays, production in every node, and optimal feed-forward for planned disturbances. Allowing for non-homogeneous delays is important, as heterogeneous delays arise in almost all applications. Taking planned disturbances into account allows for increased performance whenever such disturbances can be forecast. Finally, allowing for some variation in the consumption $v_i$ at each node will also generally increase performance whenever such variation is possible.
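Since the stage cost in \eqref{eq:problem} is separable across nodes and time, evaluating a candidate trajectory is simple. The helper below is our illustration, not part of the paper; it sums the finite-horizon truncation of the cost, with names and data layout of our own choosing.

```python
# Finite-horizon truncation of the cost in (2):
#   sum_t sum_i ( q_i * z_i[t]^2 + r_i * v_i[t]^2 ).
# z_traj[t][i] and v_traj[t][i] hold z_i[t] and v_i[t] (zero-based i).

def lq_cost(z_traj, v_traj, q, r):
    return sum(
        q[i] * z[i] ** 2 + r[i] * v[i] ** 2
        for z, v in zip(z_traj, v_traj)
        for i in range(len(q))
    )
```

Note that, as in \eqref{eq:problem}, the internal flows $u_i$ do not enter the cost at all.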
The effect the feed-forward of planned disturbances can have on the controller performance is illustrated in \cref{fig:example}, where we see that the controller with feed-forward anticipates the action of the disturbances, allowing their effect to be better spread through the graph and the node levels to be more tightly regulated. This results in a significant improvement in performance. \begin{figure} \caption{Example of the effect of feed-forward. The graph has five nodes and transportation delays $\tau_1 = 3$, $\tau_2 = 2$, $\tau_3 = 5$ and $\tau_4 =4$. There is a disturbance in node 3 from time 10 to 13 and in node 2 from time 12 to 15. We can see that the feed-forward manages to handle the disturbances better by spreading out their effect throughout the graph. To quantify the difference one can consider the cost in \eqref{eq:problem}, which is $3.11$ with feed-forward and $11.35$ without feed-forward.} \label{fig:example} \end{figure} \subsection{Result Preview} The key structural feature that we identify in the optimal control law for \eqref{eq:problem} is that the optimal inputs can be computed recursively by two sweeps through the graph (even though the control law that would be obtained from the Riccati equation would be dense). More specifically, two intermediate variables local to the \emph{i}th node, $\delta_i[t]$ and $\mu_i[t]$, can be computed recursively through relationships of the form \[ \begin{aligned} \delta_i[t]&=f(\text{local\_variables},\delta_{i-1}),\\ \mu_i[t]&=g(\text{local\_variables},\mu_{i+1}), \end{aligned} \] from which the optimal inputs $u_i[t]$ and $v_i[t]$ can be calculated based only on local variables. Conceptually this step is rather similar to solving a sparse system of equations with the structure of a directed path graph using back substitution. The details are given in \cref{alg:com_gen}, and this process is illustrated in \cref{fig:com_ill}.
This allows the optimal inputs to be computed by sweeping once through the graph from the first node to the final node to compute the $\delta$'s, and once from the final node to the first to compute the $\mu$'s. Both sweeps can be conducted in parallel. This represents a sort of middle ground between centralised control and decentralised control, in which global optimality is preserved whilst only requiring distributed communication. The price for this is that the sweep through the whole graph must be completed before the inputs can be applied, but for systems with reasonably long sample times (which is likely true in transportation or irrigation systems) this seems a modest price to pay. Interestingly, the controller parameters can be computed in a similar distributed manner, allowing the controller to also be synthesised in a simple and scalable manner. This is shown in \cref{alg:init}. \section{Results} In this section we present two algorithms that together allow for the solution of \eqref{eq:problem}. The first of these algorithms computes the parameters of a highly structured control law for solving \eqref{eq:problem}, whereas the second shows that the control law has a simple distributed implementation. These features will be discussed in \cref{sec:impl}. We will demonstrate that, under suitable assumptions on the planned disturbances $d_i[t]$, \cref{alg:init,alg:com_gen} give the optimal solution to \eqref{eq:problem}. This constitutes the main theoretical contribution of the paper. In the absence of the planned disturbances (i.e. with $d_i[t]\equiv{}0$), \eqref{eq:problem} is an infinite horizon LQ problem in standard form. It is of course highly desirable in applications to be able to include information about upcoming disturbances in the synthesis of the control law. However, if we are given an infinite horizon of disturbances, \eqref{eq:problem} is no longer tractable.
For the theoretical perspective, it turns out that the suitable assumption on the horizon length is as follows: \begin{assumption}\label{assump:horizon} Let the aggregate delay $\sigma_k$ in \eqref{eq:dynamics} be \begin{equation*} \sigma_k = \sum_{i=1}^{k-1} \tau_i. \end{equation*} Given a horizon length $H\geq0$, assume that $d_i[t] = 0$ for all $t> H+(\sigma_N-\sigma_i)$ and for all $1\leq i \leq N$. \end{assumption} Observe that if $d_i[t] = 0$ for all $t>H$ then \cref{assump:horizon} holds. Thus the assumption captures the natural notion of having a finite horizon $H$ of information about the disturbances $d_i[t]$ available when constructing the control input. In \cref{sec:simulations} we will investigate how the length of the horizon affects the performance of the controller. We will now state the main results of this paper. The following theorem shows that \eqref{eq:problem} can be solved by running two simple algorithms; one for calculating all the necessary parameters, and one for computing the optimal inputs. Both algorithms can be implemented using only local communication as discussed in \cref{sec:impl}. A graphical illustration of the implementation of \cref{alg:com_gen}, which is the algorithm used for the on-line implementation, can be found in \cref{fig:com_ill}. \begin{theorem}\label{thm:gen} Let \[ D_i[t] = \sum_{j=1}^i d_j[t-\sigma_j]. \] Assume that $H$ and $d_j[t]$ satisfy \cref{assump:horizon}. Then the optimal inputs $u_i[t]$ and $v_i[t]$ for the problem in \eqref{eq:problem} are given by running \cref{alg:com_gen} with the parameters calculated by \cref{alg:init}. \end{theorem} \begin{proof} See the appendix. A sketch of the proof can be found in \cref{sec:proof_idea}. \end{proof} \begin{remark} In most cases the choice of $H$ can be made without considering its effect on the controller implementation, and can instead be chosen based only on the nodes' ability to forecast their disturbances. 
In applications it would also be natural to incorporate new information on upcoming disturbances in a receding horizon fashion. This will be further discussed in \cref{sec:recedinghorizon}. $\lozenge$ \end{remark} \begin{remark} There is an asymmetry in \cref{assump:horizon} in that $(\sigma_N-\sigma_i)$ grows as $i$ decreases from $N$ to $1$. This means that this assumption allows nodes further down the graph to have longer horizons of planned disturbances. Of course there is no reason to believe that these nodes are better at predicting their disturbances. It is just that the derived theory can handle those disturbances in a straightforward manner, since the optimal controller lumps the disturbances into time-shifted sums, with a time shift proportional to $\sigma_i$. $\lozenge$ \end{remark} \begin{algorithm}[] \caption{Computation of control parameters.} \label{alg:init} \DontPrintSemicolon \SetNoFillComment \SetAlgoLined \KwIn{$q_i$, $r_i$, $\tau_i$, $H$} \KwOut{$\gamma_i$, $g_i(j)$, $P_i(\tau_i,1)$, $h_i$, $\phi_i(\Delta)$, $a_i$, $c_i$} \nonl \hrulefill \\ \tcc{First Sweep, upstream direction} $\gamma_1 = q_1, \quad \rho_1 = r_1$ \tcp*[r]{initialize first node} \textbf{send} $\gamma_1$ and $\rho_1$ to upstream neighbor\\ \For {node i = 2:N} { $\gamma_i = \frac{\gamma_{i-1}q_i}{\gamma_{i-1} + q_i}, \quad \rho_i = \frac{\rho_{i-1}r_i}{\rho_{i-1} + r_i}$\\ \textbf{send} $\gamma_i$ and $\rho_i$ to upstream neighbor } \nonl \dotfill\\ \tcc{Second Sweep, downstream direction} $X_N(H+2) = -\frac{\gamma_N}{2}+\sqrt{\gamma_N\rho_N+\frac{\gamma_N^2}{4}}$ \\ \For {node i = N:1} { $X_i(\tau_i) = \frac{\rho_i(X_{i+1}(1)+\gamma_{i})}{X_{i+1}(1)+\gamma_{i} + \rho_i}$ \tcp*{Not for node N} $X_i(t-1) = \frac{\rho_i(X_i(t)+\gamma_i)}{X_i(t) + \gamma_i+\rho_i}$, \tcp*{$1\leq t-1\leq\tau_i-1$, or for $i=N$, $1\leq t-1 \leq H+1$} $g_i(j) = \frac{X_i(j)}{X_i(j)+\gamma_i},\quad 2\leq j \leq \tau_i$ \\ $g_{i+1}(1) = \frac{X_{i+1}(1)}{X_{i+1}(1)+\gamma_{i}}$\\ $b_i = g_{i+1}(1)\prod_{j=2}^{\tau_i}g_i(j)$ \\ \textbf{send} $X_i(\tau_i)$, $b_i$ to downstream neighbor.\\ \tcp{for $1\leq l,m \leq \tau_i$} $P_i(1,m) = \frac{X_i(1)}{\rho_{i}}$ \\ $P_i(l,m) = (1-\frac{X_i(l)}{\rho_{i}})g_i(l)P_i(l-1,m) + \frac{X_i(l)}{\rho_{i}}, \quad l\leq m$ \\ $P_i(l,m) = (1-\frac{X_i(l)}{\rho_{i}})P_i(l-1,m) + \frac{X_i(l)}{\rho_{i}}, \quad l> m$ \\ } \nonl \dotfill \\ \tcc{Third Sweep, upstream direction} $h_1 = P_1(\tau_1,\tau_1)g_2(1)$\\ \textbf{send} $h_1$ to upstream neighbor.\\ \For {node i= 2:N-1} { $h_i = (1-P_i(\tau_i,1))b_ih_{i-1} + P_i(\tau_i,\tau_i)g_{i+1}(1)$ \\ \textbf{send} $h_i$ to upstream neighbor. } \nonl\dotfill \\ \tcc{Some final local calculations} \tcp{For $1\leq \Delta \leq \tau_i$; empty product, $\prod_{j=2}^1$ = 1} $\phi_i(\Delta) = \left(1-P_i(\tau_i,\Delta) -(1-P_i(\tau_i,1))h_{i-1}{\prod_{j=2}^{\Delta +1}}g_i(j) \right)$\\ $a_i = \frac{X_i(1)}{r_i}+\frac{\gamma_i}{q_i}(1-\frac{X_i(1)}{\rho_i})$\\ $c_i = -\Big(\frac{X_i(1)}{r_i} - \frac{\gamma_iX_i(1)}{q_i\rho_i} \Big)(1-h_{i-1}) + \frac{\gamma_i}{q_i}h_{i-1}$ \end{algorithm} \SetInd{0.5em}{0.5em} \begin{algorithm}[h!]
\caption{Distributed Controller Implementation.} \label{alg:com_gen} \SetNoFillComment \KwIn{$z_i[t]$, $u_i[t-(\tau_i-\Delta)]$, $d_i[t]$, $D_i[t+\sigma_i +\Delta]$} \KwOut{$u_i[t]$,$v_i[t]$} \nonl \hrulefill \\ \tcp{Let $\tau_N = H + 1$.} \tcc{Upstream sweep - Done in parallel with downstream sweep} \For{node i = 1:N} { $\Phi_i[t] = \phi_i(1)z_i[t] +$\\ \nonl$ \ \ \ \ \sum_{\Delta=0}^{\tau_i-1} \phi_i(\Delta+1)\Big(u_i[t-(\tau_i-\Delta)] + D_i[t+\sigma_i+\Delta]\Big)$\\ $\delta_i[t] = \Phi_i[t] + (1-P_{i}(\tau_{i},1))\delta_{i-1}[t]$\\ \textbf{send} $\delta_i[t]$ upstream } \nonl \dotfill\\ \tcc{Downstream sweep - Done in parallel with upstream sweep} \For{node i = N:1} { \tcp{Empty Product, $\prod_{j=2}^1$ = 1} $\pi_i[t] = z_i[t] + \sum_{\Delta=0}^{\tau_i-1} \Big(u_i[t-(\tau_i-\Delta)] + D_i[t+\sigma_i+\Delta]\Big)\prod_{j=2}^{\Delta+1}g_i(j)$\\ $\mu_i[t] = \pi_i[t] + b_i\mu_{i+1}[t]$\\ \textbf{send} $\mu_i[t]$ downstream} \nonl \dotfill\\ \tcc{Calculate outputs} $u_{i-1}[t] = (1-\frac{\gamma_i}{q_i})\big(z_i[t]+u_i[t-\tau_i]+D_{i}[t+\sigma_i]\big)$ \\ \nonl $\qquad \qquad \qquad \qquad \qquad -a_i\delta_{i-1}[t] + c_i\mu_i[t] + d_i[t] -D_{i}[t+\sigma_i]$\\ $v_i[t] = -\frac{X_i(1)}{r_i}\Big(\delta_{i-1}[t] + (1-h_{i-1})\mu_i[t]\Big)$ \end{algorithm} \section{Implementation}\label{sec:impl} \begin{figure*} \caption{ Illustration of the structured approach for calculating the optimal inputs using \cref{alg:com_gen}, for a 5 node example with $\tau_1 = 2$, $\tau_2 = 1$, $\tau_3 = 1$, $\tau_4 = 2$. The graph at the right of the figure illustrates the underlying dynamics of the network as in \eqref{eq:dynamics}. The left part of the figure illustrates the structure of the computations required to compute the optimal control according to \cref{alg:com_gen}. The solid circles correspond to node states, and the dashed circles to quantities in transit. The number in each dashed circle denotes the value of $\Delta$, which then maps to $u_k[t-(\tau_k-\Delta)]$.
The rectangles indicate the different intermediate calculations needed to determine the variables required to compute the optimal inputs (lines 2--3 and 7--8 in \cref{alg:com_gen}). These are horizontally aligned with the location in the network where they could be locally performed. The arrows indicate information flow. Each intermediate quantity can be calculated using only the quantities carried by the incoming arrows. An upstream sweep is performed (the red arrows) in order to calculate the variables $\delta_i[t]$. The local intermediate variables $\Phi_i[t]$ are calculated (line 2) and then aggregated into the $\delta_i[t]$'s (line 3), which are sequentially passed up the graph. For the downstream sweep, the local $\pi_i[t]$ variables are calculated (line 7) and aggregated into the $\mu_i[t]$'s (line 8). Both sweeps can be conducted in parallel, and once they have completed, the optimal inputs for the \emph{i}th node can be determined using the variables at the \emph{i}th location according to lines 11 and 12 in \cref{alg:com_gen}. } \label{fig:com_ill} \end{figure*} In this section we will discuss the structure in \cref{alg:init,alg:com_gen}, and explain how they can be used to implement an optimal feedback control law for solving \eqref{eq:problem} in a distributed manner. In both cases the order in which the computations occur is highly structured. This is illustrated for \cref{alg:com_gen}, which is the algorithm that must be run to compute the control inputs, in \cref{fig:com_ill}. MATLAB code for using these algorithms to calculate the optimal control inputs is available on GitHub\footnote{\texttt{https://github.com/Martin-Heyden/cdc-letters-2021}}, together with code to verify that \cref{thm:gen} holds numerically. We will also discuss how to calculate $D_i$ and incorporate updates to the planned disturbances in a receding horizon style in \cref{alg:com_D}.
\subsection{Algorithms 1 and 2, and the Optimal Control Law} The problem in \eqref{eq:problem} is at its heart an LQ problem, and the optimal controller is given by a static feedback law. The corresponding feedback matrix is generally dense, and that is the case for \eqref{eq:problem} as well. However, certain special structural features of the process \eqref{eq:dynamics} are inherited by the optimal control law. It is these features we exploit to give a scalable implementation in \cref{alg:init,alg:com_gen}, which we will now discuss. In terms of the algorithm variables, the optimal node production $v_i[t]$ for \eqref{eq:problem} is given by \begin{equation}\label{eq:opt_v} v_i[t] = -\frac{X_i(1)}{r_i}\Big(\delta_{i-1}[t] + (1-h_{i-1})\mu_i[t]\Big), \end{equation} and the optimal internal flows $u_i[t]$ are given by \begin{multline}\label{eq:opt_u} u_{i-1}[t] = (1-\frac{\gamma_i}{q_i})\big(z_i[t]+u_i[t-\tau_i]+D_{i}[t+\sigma_i]\big) - a_i\delta_{i-1}[t]\\ + c_i\mu_i[t] + d_i[t] -D_{i}[t+\sigma_i]. \end{multline} The parameters in these control laws (the symbols without a time index, which include $X_i(1)$) are calculated in a simple and structured manner by \cref{alg:init}. Of course, having an efficient method for computing the control law is less critical than having an efficient real-time implementation of the control law (which is performed by \cref{alg:com_gen}), since the control law can be computed ahead of time. However, the fact that this step is also highly structured indicates that the approach is scalable, since it allows for the control law to be simply and efficiently updated in response to changes to the dynamics in \eqref{eq:dynamics} (perhaps resulting from the introduction of more nodes). \cref{alg:init} computes all the parameters needed to give a closed-form solution for the problem in \eqref{eq:problem}.
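A small consistency check that may help the reader: the initialization $X_N(H+2)$ in \cref{alg:init} is precisely the positive fixed point of the local recursion $X_i(t-1)=\rho_i(X_i(t)+\gamma_i)/(X_i(t)+\gamma_i+\rho_i)$, since a fixed point $X$ satisfies $X^2+\gamma X-\gamma\rho=0$. The snippet below verifies this numerically for arbitrary positive parameters; it is our illustration, not part of the paper's algorithms.

```python
import math

# The initialization X_N(H+2) in Algorithm 1 equals the positive fixed
# point of the local recursion X <- rho*(X + gamma)/(X + gamma + rho):
# at a fixed point, X^2 + gamma*X - gamma*rho = 0, whose positive root
# is -gamma/2 + sqrt(gamma*rho + gamma^2/4).

def x_update(x, gamma, rho):
    return rho * (x + gamma) / (x + gamma + rho)

def x_init(gamma, rho):
    return -gamma / 2 + math.sqrt(gamma * rho + gamma ** 2 / 4)

gamma, rho = 0.7, 3.0          # arbitrary positive test parameters
x = x_init(gamma, rho)
assert abs(x_update(x, gamma, rho) - x) < 1e-12   # it is a fixed point
# ...and the recursion converges to it from other initial values:
y = 1.0
for _ in range(200):
    y = x_update(y, gamma, rho)
assert abs(y - x) < 1e-9
```

A consequence of the listing as printed is that the backward iteration for node $N$ leaves $X_N$ constant over the horizon, consistent with it being the stationary value of a scalar Riccati-type recursion.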
The origin of the parameters in \cref{alg:init} is discussed briefly in the proof idea in \cref{sec:proof_idea}, and full details are found in the proof in the appendix. The algorithm consists of three serial sweeps. The first sweep starts at node 1 and calculates $\gamma_i$ and $\rho_i$. The second sweep starts at node $N$ and calculates $X_i(t)$. The calculation of $X_i(t)$ has both local steps (line 10) and steps that require communication (line 9). Also during the second sweep, the parameters $g$, $b$ and $P$ are calculated locally. The third sweep starts at node 1 again and calculates the parameter $h$, which is needed to calculate the optimal production. Finally, after the third sweep, the parameters $\phi_i(\Delta)$, $a_i$ and $c_i$ are calculated in each node independently. The real-time implementation of the optimal control law also has a simple distributed structure. This is the role of \cref{alg:com_gen}, and the structure of the implementation is illustrated in \cref{fig:com_ill}. The algorithm proceeds via two sweeps through the graph. These sweeps are independent of one another, and can be conducted in parallel. In the upstream sweep (from node 1 to node $N$), a set of local variables ($\Phi_i[t]$ and $\delta_i[t]$) are computed according to lines 2--3. This is done sequentially, since the computation of $\delta_i[t]$ depends on $\delta_{i-1}[t]$. $\delta_{i-1}[t]$ then gives all the information node $i$ needs from downstream nodes. Similarly, the downstream sweep sequentially computes the $\pi_i[t]$ and $\mu_i[t]$ variables. Here $\mu_i[t]$ gives all the information needed from nodes upstream of node $i$. Once these two sweeps are completed, the optimal inputs can be calculated locally using lines 11 and 12.
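To make the communication pattern concrete, the following sketch mimics the two sweeps of \cref{alg:com_gen} in simplified form. The local data and the scalar coefficients are placeholders (the real algorithm uses $\Phi_i[t]$, $\pi_i[t]$ and the parameters from \cref{alg:init}); only the information flow mirrors the paper: each node combines local data with a single value received from one neighbor, and the two sweeps run independently of each other.

```python
# Schematic of the two sweeps in Algorithm 2 (placeholder arithmetic).
# Upstream: delta_i depends only on node i's data and delta_{i-1}.
# Downstream: mu_i depends only on node i's data and mu_{i+1}.

def two_sweeps(local_data, up_coef, down_coef):
    N = len(local_data)
    delta = [0.0] * (N + 1)            # delta[0]: boundary value
    for i in range(1, N + 1):          # sweep from node 1 to node N
        delta[i] = local_data[i - 1] + up_coef[i - 1] * delta[i - 1]
    mu = [0.0] * (N + 2)               # mu[N+1]: boundary value
    for i in range(N, 0, -1):          # sweep from node N to node 1
        mu[i] = local_data[i - 1] + down_coef[i - 1] * mu[i + 1]
    return delta[1:], mu[1 : N + 1]
```

Each node sends a single number to one neighbor per sweep, so the communication per node and time step is constant, and the total latency is one pass through the path in each direction.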
\subsection{Receding Horizon and Calculation of $D_i[t]$}\label{sec:recedinghorizon} We will now discuss how to implement the controller in a receding horizon style to account for updates and new information about the planned disturbances $d_i[t]$. In terms of both the optimal control problem in \eqref{eq:problem} and the controller implementation in \cref{alg:com_gen}, the planned disturbances are treated as fixed quantities that are known up to some horizon length $H$ into the future (and equal to zero thereafter, c.f. \cref{assump:horizon}). The idea is that $d_i[t]$ determines the anticipated consumption of the quantity at node $i$ and time $t$. Having this information available ahead of time allows the optimal control law to anticipate the predicted usage, and optimally `schedule' the transportation of the quantity through the network. As we will see in the examples, this can lead to a significant improvement in performance. However, in practice we would want to update the values of the $d_i[t]$'s as time passes and more up-to-date information becomes available. A natural way to do this is to use a receding horizon approach. In this setting we assume that at each point in time we essentially have a fresh problem, with a new set of planned disturbances. \cref{alg:com_gen} can then be used to compute the first optimal input for this problem, after which the problem resets, and we get a new horizon of planned disturbances. This ensures we always take the best action available to us with a given horizon of information about the disturbances. The question is then how to efficiently update the part of the control law that depends on the planned disturbances. The changes required to accommodate this are rather minor. The planned disturbances do not affect the control parameters or the distributed structure of the implementation of the control law.
To see this, observe from \cref{alg:init,alg:com_gen} that all of the information about the planned disturbances is handled through the variables $D_i[t]$ defined in \cref{thm:gen}. An illustration of the relationship between the individual disturbances $d_i$ and the shifted disturbance sums $D_i[t]$ can be found in \cref{fig:D_illustration}. \begin{figure} \caption{Illustration of the terms included in $D_i[t]$ for a three-node graph with $\tau_1 = \tau_2 = 2$. The first row of dots corresponds to $z_3[t],\ z_2[t],\ z_1[t]$ and the second corresponds to $z_3[t+1],\ z_2[t+1],\ z_1[t+1]$, and so on. Due to lack of space only $d_i[t]$ and $d_i[t+1]$ have been drawn. However, the pattern continues through the graph. From the figure we can see that $D_1[t+3] = D_2[t+3]-d_2[t]$, corresponding to the first part of \cref{alg:com_D}.} \label{fig:D_illustration} \end{figure} Thus the structure of the implementation of the control law remains the same, and only $D_i[t]$ needs to be updated as new information becomes available. This can be done efficiently by a sweep starting at the bottom of the graph. After all, although in the receding horizon framework we assume we have a `new' set of planned disturbances at each point in time, these will share a large amount of information with the planned disturbances from the previous time step. This is the role of \cref{alg:com_D}, which we now explain. \SetInd{0.5em}{1em} \begin{algorithm}[] \SetNoFillComment \KwIn{changed $d_i[s]$} \KwOut{updated $D_i$} \nonl \hrulefill \\ \tcc{Update Disturbances in parallel, $\mathcal{O}(1)$.
Necessary even if there are no new disturbances} \For{node i = N:1}{ \textbf{Send} $D_{i-1}[t+1+\sigma_{i}-1] = D_{i}[t+\sigma_{i}] - d_{i}[t]$ downstream.\\ \textbf{Discard} $D_i[t+\sigma_{i}]$ \\ } \nonl \dotfill\\ \tcc{Update Disturbances due to new planned disturbances} \For{node i = 1:N} { \tcc{ For $t+\sigma_i\leq s <t+\sigma_N+H$} \If{$d_i[s-\sigma_i]$ \emph{changed} or $D_{i-1}[s]$ \emph{received}} {\textbf{send} $D_i[s] = D_{i-1}[s] + d_i[s-\sigma_i] $ upstream } } \caption{Calculation of $D$} \label{alg:com_D} \end{algorithm} All the $D_i[t]$'s for which none of the underlying $d_j[t]$ have changed can easily be updated. For $1\leq \Delta \leq \tau_i-1$ the information in $D_i[t + \sigma_i + \Delta]$ will be useful in node $i$ at the next time step, as $$D_i[t+1+\sigma_i + \Delta-1] = D_i[t + \sigma_i + \Delta].$$ When $\Delta = 0$ the information can be used at the downstream neighbor, as $D_i$ satisfies \[ D_{i-1}[t+1+\sigma_{i}-1] = D_{i}[t+\sigma_{i}] - d_{i}[t]. \] These updates can be done in all nodes simultaneously, and the time they take is thus independent of the size of the graph. However, the shifted sums $D_i$ have to be initialized at time zero, and also updated when new disturbances $d_i$ are planned for times $t>0$. $D_i[t]$ only requires information from downstream, and can thus be calculated by a sweep starting at node one and going upstream. Starting at the first node, any of the $d_1[s],\ t\leq s\leq t+\sigma_N+H$ that have changed are sent to node two. Then for every node $i$, the aggregate $D_i[s],\ t+\sigma_{i} \leq s\leq t+\sigma_N+H$ is sent upstream if it has changed. This will be the case if node $i$ received $D_{i-1}[s],\ t+\sigma_{i}\leq s \leq t+\sigma_N+H$ from its downstream neighbor, or if $d_i[s],\ t\leq s \leq t+\sigma_N-\sigma_i+H$ has changed. The steps for updating the disturbances are summarized in \cref{alg:com_D}.
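As a sanity check, the identity behind the parallel update step can be verified numerically. The following sketch uses hypothetical delays and disturbance values, and assumes $\sigma_1 = 0$ with $\sigma_{i+1} = \sigma_i + \tau_i$; it builds $D_i[s]$ directly from the upstream-sweep relation $D_i[s] = D_{i-1}[s] + d_i[s-\sigma_i]$ (with $D_0 = 0$) and checks that $D_{i-1}[t+\sigma_i] = D_i[t+\sigma_i] - d_i[t]$:

```python
# Hypothetical example data; not taken from the paper's simulations.
tau = [2, 2, 3]                          # per-node delays tau_1..tau_N
N = len(tau)
sigma = [0]                              # assumption: sigma_1 = 0
for t_i in tau[:-1]:
    sigma.append(sigma[-1] + t_i)        # sigma_{i+1} = sigma_i + tau_i

H = 4
T = H + sigma[-1] + 1
d = [[0.0] * T for _ in range(N)]        # planned disturbances d_i[s]
d[0][1], d[1][3], d[2][0] = -1.0, -0.5, -2.0

def D(k, s):
    """Shifted aggregate D_k[s] = sum_{j<=k} d_j[s - sigma_j] (zero outside)."""
    return sum(d[j][s - sigma[j]] for j in range(k) if 0 <= s - sigma[j] < T)

# The parallel O(1) step of the algorithm relies on the identity
#   D_{i-1}[t + sigma_i] = D_i[t + sigma_i] - d_i[t].
for t in range(3):
    for i in range(1, N + 1):
        lhs = D(i - 1, t + sigma[i - 1])
        rhs = D(i, t + sigma[i - 1]) - d[i - 1][t]
        assert abs(lhs - rhs) < 1e-12
```

Since the identity holds by construction of the aggregates, the downstream shift in the first loop of the algorithm requires no communication beyond the single value $D_i[t+\sigma_i]$.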
While the algorithm is essentially a sweep through the graph in the upstream direction, it is best not to fold it into the upstream sweep of \cref{alg:com_gen}: the downstream sweep needs the shifted disturbance vectors, and would then have to wait for the upstream sweep to finish. On the other hand, the calculation does not rely on measurements from the system, and can thus be carried out either before or after \cref{alg:com_gen}. We are now ready to discuss how the choice of $H$ affects the implementation of the controller. Firstly, a larger $H$ leads to a very slight increase in the synthesis time, due to more iterations being required when computing $X_N$. Secondly, increasing $H$ increases the memory requirement in node $N$, which must store $D_N[t+\Delta]$ for $0\leq \Delta \leq H$. Finally, the communication bandwidth required when updating $D_i[t]$ depends on the number of new disturbances $d_j[t]$, but is upper bounded by $H$ if $d_i[t] = 0$ for $t>H$, and by $H+\sigma_N$ if $d_i[t] = 0$ for $t>H+\sigma_N-\sigma_i$. Thus if the bandwidth is limited and many new disturbances are expected to be planned, one might need to limit the size of $H$. Otherwise it can be freely chosen based on the nodes' ability to forecast disturbances. \section{Simulations}\label{sec:simulations} In this section we explore the effect the feed-forward horizon has on the controller performance through simulations. In \cref{fig:cost_vs_horizon} the performance for different horizon lengths is shown. Two random nodes are affected by disturbances of total size between minus one and zero, over a time interval of length between 1 and 5. The node level cost is given by $q_i = 1$. The production cost is given by $r_i = 10N$, where $N$ is the number of nodes; this is an attempt to keep the production cost similar for different values of $N$. Fifty simulations are performed for each case, with random disturbances as previously described.
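For concreteness, disturbances of the kind used in the simulations can be generated along the following lines. This is a hypothetical reconstruction: the exact distribution is not specified above, and spreading the total evenly over the interval is our assumption.

```python
import random

random.seed(42)                     # arbitrary seed for reproducibility
N, T = 10, 20                       # number of nodes and simulation length
d = [[0.0] * T for _ in range(N)]

# two random nodes, total size in [-1, 0], interval length between 1 and 5
for node in random.sample(range(N), 2):
    total = random.uniform(-1.0, 0.0)
    length = random.randint(1, 5)
    start = random.randint(0, T - length)
    for s in range(start, start + length):
        d[node][s] = total / length  # assumption: spread evenly over interval

# each affected node receives a total disturbance between -1 and 0
assert all(-1.0 <= sum(row) <= 0.0 for row in d)
```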
For all the cases with the same $N$, the disturbances are the same across all the different horizon and delay values. The horizon lengths are the same for all nodes, i.e., it is assumed that $d_i[t+d] = 0$ for $d>H$. We can see that a large part of the performance increase from having feed-forward can be achieved with short disturbance horizons. We can also see that for larger delays, and for more nodes, a longer horizon is needed to get the same effect. As a rule of thumb, at least for this example, it seems that extending the horizon beyond 2/3 of the total delay has almost no additional effect, while even a horizon of 1/3 of the total delay gives most of the performance increase. \begin{figure} \caption{Simulations comparing the effect the planning horizon has on the performance. The data points with the highest cost for each configuration correspond to not using the planned disturbances at all, while the rest correspond to using all planned disturbances announced up to $H$ time units ahead in every node.} \label{fig:cost_vs_horizon} \end{figure} \section{Proof Idea}\label{sec:proof_idea} In this section we describe the main idea behind the technique used to derive the results, which is to study a time-shifted sum of the node levels $z_i$ and a time-shifted sum of the productions $v_i$. This allows the problem to be solved in terms of these shifted sums, essentially reducing it to a problem with scalar variables. Outside the disturbance horizon the problem can be solved by a Riccati equation in one variable, while inside the disturbance horizon it is solved using dynamic programming, where each step has scalar variables. Now for the definitions of the shifted sums: let the shifted level sum $S_k$ and the shifted production sum $V_k$ be defined as \[ S_k[t] = \sum_{i=1}^k z_i[t-\sigma_i], \qquad V_k[t] = \sum_{i=1}^k v_i[t-\sigma_i]. \] Also let $\bar{V}_k[t] = V_k[t]+D_k[t]$ to shorten some expressions.
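The cancellation of the internal transportation from the shifted sums can be checked numerically. The sketch below uses hypothetical delays and random data, assumes $\sigma_1 = 0$ and $u_0 = u_N = 0$, simulates the node dynamics $z_k[t+1] = z_k[t] - u_{k-1}[t] + u_k[t-\tau_k] + v_k[t] + d_k[t]$ stated in the appendix, and verifies the one-step identity $S_k[t+1] = S_k[t] + \bar V_k[t] + u_k[t-\sigma_k-\tau_k]$ used in the proofs:

```python
import random

random.seed(1)
N = 3
tau = [None, 1, 2, 2]                # hypothetical delays tau_1..tau_N (1-based)
sigma = [None, 0]                    # assumption: sigma_1 = 0
for i in range(1, N):
    sigma.append(sigma[i] + tau[i])  # sigma_{i+1} = sigma_i + tau_i

T = 15
rnd = lambda: random.uniform(-1, 1)
u = [[0.0] * T for _ in range(N + 1)]    # u_0 = u_N = 0
for i in range(1, N):
    u[i] = [rnd() for _ in range(T)]     # arbitrary internal transportation
v = [[rnd() for _ in range(T)] for _ in range(N + 1)]
d = [[rnd() for _ in range(T)] for _ in range(N + 1)]

# simulate z_k[t+1] = z_k[t] - u_{k-1}[t] + u_k[t - tau_k] + v_k[t] + d_k[t]
z = [[0.0] * (T + 1) for _ in range(N + 1)]
for t in range(T):
    for k in range(1, N + 1):
        inc = u[k][t - tau[k]] if t - tau[k] >= 0 else 0.0
        z[k][t + 1] = z[k][t] - u[k - 1][t] + inc + v[k][t] + d[k][t]

S = lambda k, t: sum(z[i][t - sigma[i]] for i in range(1, k + 1))
Vbar = lambda k, t: sum(v[i][t - sigma[i]] + d[i][t - sigma[i]]
                        for i in range(1, k + 1))

# internal flows u_1..u_{k-1} cancel out of S_k; only the delayed u_k remains
for k in range(1, N + 1):
    for t in range(sigma[k] + tau[k], T - 1):
        step = S(k, t) + Vbar(k, t) + u[k][t - sigma[k] - tau[k]]
        assert abs(S(k, t + 1) - step) < 1e-9
```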
We illustrate the main idea by considering these shifted sums in a short example. Consider a path graph with $N>2$, $\tau_1 = 1$, and $\tau_2>1$. Then $S_2[3] = z_1[3]+z_2[2]$. It can be checked that \[S_2[3] = z_1[0]+z_2[0] + \bar V_1[0]+\bar V_2[1]+\bar V_2[2] + u_1[-1] + u_2[-\tau_2].\] Note that the internal transportation $u_1[0]$ and $u_1[1]$ have canceled, and the sum is thus independent of the internal transportation $u$ (except those with negative time index, which correspond to initial conditions). Also note that any values for $z_2[2]$ and $z_1[3]$ can be achieved as long as $z_1[3]+z_2[2] = S_2[3]$. This follows since $z_2[2]$ can take any value by choosing the appropriate value for $u_1[1]$. Thus the cost $q_2z_2[2]^2 + q_1z_1[3]^2$ only depends on the value of $S_2[3]$, which is independent of all internal transportation $u_i[t],\ t\geq 0$. This means that all inputs except $u_1[1]$ can be chosen without regard to their effect on the terms in $S_2[3]$. Furthermore, $S_2[3]$, and thus the corresponding cost, only depends on the sums $V_2[1]$ and $V_2[2]$, and not on the individual productions $v_1[1]$, $v_1[2]$, $v_2[0]$ and $v_2[1]$. This idea can be generalized. The cost function can be rewritten in terms of shifted levels, where each shifted level sum is independent of the internal transportation. Each shifted level can thus be minimized independently with respect to the internal transportation $u$. Just as in the example, the only constraint is that the shifted sum has the correct value. The optimal cost for a shifted vector $S_k[t]$ is given by the solution to \begin{equation}\label{eq:opt_zi} \begin{aligned} \minimize_{z_i} \quad & \sum_{i=1}^k q_iz_i[t+\sigma_k-\sigma_i]^2 \\ \textrm{subject to} \quad & S_k[t+\sigma_k] = c, \end{aligned}\end{equation} where $c$ depends on the initial conditions, $V_i$, and $D_i$. The problem has the solution $z_i[t+\sigma_k-\sigma_i] = \gamma_k/q_i\cdot c$ and cost $\gamma_kc^2$, where $\gamma_k$ is as given in \cref{alg:init}.
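The closed-form solution of \eqref{eq:opt_zi} is easy to verify numerically. The following sketch uses arbitrary test weights $q_i$ and constraint value $c$, and checks feasibility, the claimed optimal cost $\gamma_k c^2$ with $\gamma_k = (\sum_{i=1}^k 1/q_i)^{-1}$, and that no feasible perturbation decreases the cost:

```python
import random

random.seed(0)
q = [1.0, 2.5, 0.7, 4.0]   # arbitrary positive weights q_i
c = 3.0                    # arbitrary constraint value

gamma = 1.0 / sum(1.0 / qi for qi in q)          # gamma_k = (sum 1/q_i)^{-1}
z_opt = [gamma / qi * c for qi in q]             # z_i = gamma_k / q_i * c

assert abs(sum(z_opt) - c) < 1e-12               # feasibility: sum z_i = c
cost_opt = sum(qi * zi**2 for qi, zi in zip(q, z_opt))
assert abs(cost_opt - gamma * c**2) < 1e-12      # claimed cost gamma_k c^2

# any zero-sum perturbation keeps feasibility and cannot lower the cost
for _ in range(100):
    p = [random.uniform(-1, 1) for _ in q]
    mean = sum(p) / len(p)
    z = [zi + (pi - mean) for zi, pi in zip(z_opt, p)]
    assert sum(qi * zi**2 for qi, zi in zip(q, z)) >= cost_opt - 1e-9
```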
Once the optimal level $z_k[1]$ is calculated, the optimal value for $u_{k-1}[0]$ can be found from the dynamics, which gives \begin{multline*} u_{k-1}[0] = (1-\frac{\gamma_{k}}{q_k})(z_{k}[0] + u_{k}[-\tau_{k}]) + v_k[0] +d_k[0]\\ -\frac{\gamma_k}{q_k}\bar{V}_k[\sigma_k] -\frac{\gamma_k}{q_k}(m_{k-1}[0] + \sum_{i=1}^{k-1} \sum_{d=0}^{\tau_i-1}\bar{V}_i[\sigma_i+d]), \end{multline*} where $m_k[t] = \sum_{i=1}^k(z_i[t] + \sum_{\delta = 1}^{\tau_i}u_i[t-\delta])$. After inserting the optimal values for $v_k$ and $V_i$, the expression in \eqref{eq:opt_u} is obtained. Note that all the terms with coefficient $1$ correspond to what would be in node $k$ at time $t=1$ if $u_{k-1}[0] = 0$, and all terms with coefficient $-\gamma_k/q_k$ give the total quantity in $S_k[1]$. Furthermore, each shifted level sum only depends on the shifted production sums $V_k[t]$, and not on the individual productions $v_i[t],\ i\leq k$. The optimal way to produce a specific amount $V_k[t]$ with a shifted production vector can be found by solving a problem similar to \eqref{eq:opt_zi}, with the optimal $v_i$ given by $v_i[t-\sigma_i] = \rho_k/r_i\cdot V_k[t] \text{ for } i \leq k$, and the cost given by $\rho_kV_k[t]^2$, where $\rho_k$ is as given in \cref{alg:init}. The calculations of $\gamma_i$ and $\rho_i$ in the first sweep of \cref{alg:init} thus correspond to solving for the optimal distribution of a shifted level vector $S_k$ and the optimal production of a shifted production vector $V_k$. This is covered in \cref{lem:shifted_inv}. Assuming all $u_i$ are picked so that the shifted levels are optimized, the total level cost is given by \begin{equation}\label{eq:cost_S} \sum_{t=0}^\infty\sum_{i=1}^N q_i z_i[t]^2 = \sum_{i=1}^N q_i z_i[0]^2 + \sum_{i=1}^{N-1}\ \ \ \ \mathclap{\sum_{t=\sigma_i+1}^{\sigma_{i+1}}}\ \ \gamma_i S_i[t]^2 + \ \ \mathclap{\sum_{t=\sigma_N+1}^\infty}\ \ \gamma_NS_N[t]^2.
\end{equation} Similarly, assuming all $v_i$ are picked so that each shifted production vector is optimized, the total production cost is given by \begin{equation}\label{eq:cost_V} \sum_{t=0}^\infty\sum_{i=1}^N r_iv_i[t]^2 = \sum_{i=1}^{N-1}\sum_{t=\sigma_i}^{\sigma_{i+1}-1}\rho_i V_i[t]^2 + \sum_{t=\sigma_N}^\infty \rho_NV_N[t]^2. \end{equation} This allows the problem in \eqref{eq:problem} to be solved in terms of $V_i$ and $S_i$, reducing it to a problem in scalar variables. The scalar problem can be solved analytically, giving a closed-form solution. This is done by first solving for all $V_N[t]$ outside the disturbance horizon, that is, for $t >\sigma_N + H$. Since outside the disturbance horizon the dynamics of the shifted levels $S_N[t]$ are $S_N[t+1] = S_N[t] +V_N[t]$, those $V_N[t]$ are given by the solution to \[\begin{aligned} \minimize_{V_N[t]} \quad & \sum_{\mathclap{t=\sigma_N+H+1}}^\infty \gamma_NS_N[t]^2 + \rho_NV_N[t]^2 \\ \text{subject to} \quad & S_N[t+1] = S_N[t] +V_N[t]. \end{aligned}\] This problem can be solved through a Riccati equation in one variable, giving expressions for $V_N[t],\ t >\sigma_N + H$, in terms of $S_N[t]$, and, more importantly, a cost to go in terms of $S_N[\sigma_N+H+1]$, that is, $X_N(H+2)$ in \cref{alg:init}. Each shifted sum in \eqref{eq:cost_S} can be expressed in terms of initial conditions, shifted production vectors, and shifted disturbance vectors, \begin{equation*} S_k[\sigma_k+\Delta] = m_{k-1}[0]+ z_k[0] + \sum_{i=1}^{k-1}\sum_{d=\sigma_i}^{\sigma_{i+1}-1}\bar{V}_i[d] + \sum_{d = \sigma_k}^{\mathclap{\sigma_k+\Delta-1}} \bar{V}_k[d] . \end{equation*} Using the cost to go from the Riccati equation as the terminal cost allows the rest of the $V_i$'s to be found analytically using dynamic programming. When solving this problem the cost to go $X_i$ in \cref{alg:init} is used.
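The one-variable Riccati equation referred to here (made explicit in the appendix as $X = X - X^2/(\rho_N+X) + \gamma_N$) can be checked numerically; a minimal sketch with arbitrary positive test values for $\gamma_N$ and $\rho_N$:

```python
import math

gamma, rho = 0.8, 2.0    # arbitrary positive test values for gamma_N, rho_N

# closed-form fixed point: X = gamma/2 + sqrt(gamma*rho + gamma^2/4)
X_closed = gamma / 2 + math.sqrt(gamma * rho + gamma**2 / 4)

# iterating X <- X - X^2/(rho + X) + gamma converges to the same point
X = 1.0
for _ in range(200):
    X = X - X**2 / (rho + X) + gamma
assert abs(X - X_closed) < 1e-9

# the fixed point satisfies X^2/(rho + X) = gamma, i.e. the Riccati equation
assert abs(X_closed**2 / (rho + X_closed) - gamma) < 1e-12
```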
The parameter $g$ also appears naturally in the solution to each dynamic programming step, and the upstream aggregate $\mu_i$ in \cref{alg:com_gen} is used to simplify the expressions. This is covered in detail in \cref{lem:V_expr}. The resulting solution gives $V_i[t]$ in terms of initial conditions and the previous $V_k$'s in \eqref{eq:cost_V}. However, $V_1[0]$ is known, which gives $V_1[1]$, and so on. When rewriting $V_i[0]$ in terms of only initial conditions, the expressions can be simplified by using $\delta$ as defined in \cref{alg:com_gen}. This in turn requires $P$, $h$, and $\phi$, which were defined in \cref{alg:init}, and $\Phi$, which was defined in \cref{alg:com_gen}. For the details see \cref{lem:mVsum}. \section{Conclusions and Future Work} In this paper we studied an optimal control problem on a simple transportation model. We showed that the optimal controller is highly structured, allowing for a distributed implementation consisting of two sweeps through the graph. The optimal controller can also handle planned disturbances in an efficient way. We believe that the results presented here can be extended to more general graph structures. More specifically, for any graph with the structure of a directed tree, both the proof technique and the results could be extended. We plan to explore this in a future publication. \appendix \section{Appendix} The proof follows the structure of the proof idea. Before we start we restate the definition of $m_k$, which was mentioned in the proof idea: \[ m_k[t] = \sum_{i=1}^k\left( z_i[t] + \sum_{d=1}^{\tau_i} u_i[t-d]\right).\] Also, we let the product over an empty set be equal to one, e.g., $\prod_{i=2}^1 g_i = 1$. The proof derives the optimal inputs at time $t = 0$. As the problem has an infinite horizon, one can freely shift the time, and the results thus hold for all $t \geq 0$. We begin by showing that each shifted level can be optimally distributed, and find the corresponding internal flows.
\begin{lemma}\label{lem:shifted_inv} The following statements hold: \begin{enumerate}[(i)] \item Every shifted level $S_k$ satisfies \begin{multline*}S_k[t+\sigma_k+1] =\\ z_k[t] + u_k[t-\tau_k] +m_{k-1}[t] + \sum_{i=1}^{k-1} \sum_{d=0}^{\tau_i-1}\bar{V}_i[t+\sigma_i+d] +\bar{V}_k[t+\sigma_k] \end{multline*} \item Let $\gamma_k$ be defined as in \cref{alg:init}. The optimization problem \[\begin{aligned} \minimize_{z_i} \quad & \sum_{i=1}^k q_iz_i[t+\sigma_k-\sigma_i]^2 \\ \textrm{subject to} \quad & S_k[t+\sigma_k] = m, \end{aligned}\] has the solution $z_i[t+\sigma_k-\sigma_i] = \gamma_k/q_i\cdot m$ and the optimal value is given by $\gamma_km^2$. \item When $u$ is chosen optimally, the cost for \eqref{eq:problem} is given by \begin{equation*} \sum_{t=0}^\infty\sum_{i=1}^N q_iz_i[t]^2 = \sum_{i=1}^Nq_iz_i[0]^2 + \sum_{i=1}^{N-1}\sum_{t=\sigma_i+1}^{\sigma_{i+1}}\gamma_iS_i[t]^2 + \sum_{t=\sigma_N+1}^\infty \gamma_NS_N[t]^2. \end{equation*} Also, the optimal $u_{k-1}[0]$ is given by \begin{multline*} u_{k-1}[0] = (1-\frac{\gamma_{k}}{q_k})(z_{k}[0] + u_{k}[-\tau_{k}]) + v_k[0] +d_k[0]\\ -\frac{\gamma_k}{q_k}\bar{V}_k[\sigma_k] -\frac{\gamma_k}{q_k}(m_{k-1}[0] + \sum_{i=1}^{k-1} \sum_{d=0}^{\tau_i-1}\bar{V}_i[\sigma_i+d]). \end{multline*} \end{enumerate} \end{lemma} \begin{proof} For $k=1$, (i) reduces to the dynamics. Now assume that (i) holds for $k-1$. It follows from the definition of $S_k$ that \begin{equation}\label{eq:S_ind_shift} S_k[t+\sigma_k+1] = z_k[t+1] + S_{k-1}[t+\sigma_k+1]. \end{equation} It holds that \begin{equation}\label{eq:Sk_step} S_k[t+1] = S_k[t]+\bar{V}_k[t] + u_k[t-\sigma_k-\tau_k], \end{equation} since $u_i[t-\sigma_i-\tau_i]$ will cancel out for $i<k$. This allows $S_{k-1}[t+\sigma_k+1]$ to be rewritten as \begin{equation}\label{eq:sk_big_step} S_{k-1}[t+\sigma_k+1] = S_{k-1}[t+\sigma_{k-1}+1]+\ \ \mathclap{\sum_{\Delta=\sigma_{k-1}+1}^{\sigma_k}}\ \ \ \bar{V}_{k-1}[t+\Delta] + \sum_{\Delta =0}^{\tau_{k-1}-1} u_{k-1}[t-\Delta].
\end{equation} Using the induction assumption that (i) holds for $k-1$, \eqref{eq:sk_big_step}, and the dynamics \begin{equation}\label{eq:dyn_in_proof} z_k[t+1] = z_k[t] -u_{k-1}[t]+ u_k[t-\tau_k]+v_k[t]+d_k[t], \end{equation} allows \eqref{eq:S_ind_shift} to be rewritten as \begin{multline*} S_k[t+\sigma_k+1] = z_k[t] - u_{k-1}[t] + u_k[t-\tau_k]+v_k[t]+d_k[t] \\ +z_{k-1}[t] + u_{k-1}[t-\tau_{k-1}] +m_{k-2}[t] + \sum_{i=1}^{k-2} \sum_{d=0}^{\tau_i-1}\bar{V}_i[t+\sigma_i+d]\\ +\bar{V}_{k-1}[t+\sigma_{k-1}] +\ \ \mathclap{\sum_{\Delta=\sigma_{k-1}+1}^{\sigma_k}}\ \ \ \bar{V}_{k-1}[t+\Delta] + \sum_{\Delta =0}^{\tau_k-1} u_{k-1}[t-\Delta]. \end{multline*} In the above it holds that \[ z_{k-1}[t]+u_{k-1}[t-\tau_{k-1}] -u_{k-1}[t]+\sum_{\Delta =0}^{\mathclap{\tau_{k-1}-1}} u_{k-1}[t-\Delta] + m_{k-2}[t] = m_{k-1}[t] \] and \begin{multline*} v_k[t]+d_k[t] + \sum_{i=1}^{k-2} \sum_{d=0}^{\tau_i-1}\bar{V}_i[t+\sigma_i+d] + \bar{V}_{k-1}[t+\sigma_{k-1}] + \ \ \mathclap{\sum_{\Delta=\sigma_{k-1}+1}^{\sigma_k}}\ \ \ \bar{V}_{k-1}[t+\Delta]\\ = \sum_{i=1}^{k-1} \sum_{d=0}^{\tau_i-1}\bar{V}_i[t+\sigma_i+d] +\bar{V}_k[t+\sigma_k]. \end{multline*} Thus (i) holds for $k$ as well. For (ii), the proposed solution satisfies the constraint as \[ \sum_{i=1}^k \frac{1}{q_i} = \frac1{\gamma_k}. \] If the proposed solution were not optimal, then, as the problem is convex, it would be possible to improve it by increasing $z_i$ by $\epsilon$ and decreasing $z_j$ by $\epsilon$ for some $i,j\leq k$. However, \[ \frac{\partial}{\partial z_i} q_iz_i[t+\sigma_k-\sigma_i]^2 = 2\gamma_km \] for $z_i[t+\sigma_k-\sigma_i] = \gamma_k/q_i\cdot m$ and all $i$, and thus the proposed solution is optimal.
For (iii), note that the sum $\sum_{t=0}^\infty\sum_{i=1}^N q_iz_i[t]^2$ can be written in terms of shifted level vectors as follows, \begin{multline} \label{eq:cost_decomp} \sum_{t=0}^\infty\sum_{i=1}^N q_iz_i[t]^2 = \\ \sum_{i=1}^Nq_iz_i[0]^2 + \sum_{i=1}^{N-1}\sum_{t=\sigma_i+1}^{\sigma_{i+1}} \sum_{j=1}^iq_jz_j[t-\sigma_j]^2 + \ \ \mathclap{\sum_{t=\sigma_N+1}^\infty}\ \ \ \ \ \sum_{j=1}^N q_jz_j[t-\sigma_j]^2. \end{multline} The inner sums correspond to the objective in (ii). From (i) it follows that $S_i[t]$, $t\leq \sigma_{i+1}$, is independent of $u_j[t], \ \forall t \geq 0, \forall j$, and that $S_N[t]$ is independent of $u_j[t],\ \forall t,j$. Thus each shifted level sum in \eqref{eq:cost_decomp} is independent of the internal flows. Now consider arbitrary, but fixed, productions $V$ and disturbances $D$. Then by (i) the sums of the shifted levels are fixed. If there exists $u$ such that each sum over shifted levels in \eqref{eq:cost_decomp} attains the optimum of the problem in (ii), then those inputs must be optimal for the given $V$ and $D$. Choosing $u_{j-1}[t-\sigma_j-1]$ so that $z_j[t-\sigma_j]$ is optimal for (ii) for $2\leq j \leq i$ gives that all $z_j[t-\sigma_j]$ are optimal for $2\leq j \leq i$. However, since the constraint will always be satisfied, $z_1[t]$ will be optimal as well. Using (i), the optimal $z_k[1]$ from (ii) is given by \begin{equation*} z_k[1] = \frac{\gamma_k}{q_k}\Big(m_{k-1}[0] +z_k[0] + u_k[-\tau_k] + \sum_{i=1}^{k-1}\sum_{\Delta = 0}^{\tau_i-1}\bar{V}_i[\sigma_i +\Delta] + \bar{V}_k[\sigma_k] \Big). \end{equation*} Inserting the dynamics \eqref{eq:dyn_in_proof} into the LHS and solving for $u_{k-1}[0]$ gives the expression in (iii). \end{proof} Next we give the solution to the optimization problem that arises in each step of the dynamic programming carried out in the next lemma. \begin{lemma}\label{lem:aux} Let $X_i$ and $g_i(j)$ be defined as in \cref{alg:init}.
Then \begin{enumerate}[(i)] \item Let $j\geq1$. The optimization problem \[ \minimize_x \quad X_i(j+1)(a+b + x)^2 + \gamma_i(a+x)^2 + \rho_ix^2 \] has minimizer \[x = -\frac{X_i(j)}{\rho_i}(a+g_i(j+1)b),\] with optimal value $X_i(j)\cdot(a+g_i(j+1)b)^2 + f(b)$. \item The optimization problem \[ \minimize_x \quad X_{i+1}(1)(a+b+ x)^2 + \gamma_{i}(a+x)^2 + \rho_{i}x^2 \] has minimizer \[ x = -\frac{X_{i}(\tau_{i})}{\rho_{i}}(a+g_{i+1}(1)b), \] with optimal value $X_{i}(\tau_{i})\cdot(a+g_{i+1}(1)b)^2+f(b)$. \end{enumerate} \end{lemma} \begin{proof} We will show that the optimization problem \[ \minimize_x \quad c_1(a+b+x)^2 + c_2(a+x)^2+c_3x^2 \] has the solution \[ x = -\frac{c_1+c_2}{c_1+c_2+c_3}\big(a + \frac{c_1}{c_1+c_2}b\big) \] and that the minimal value is of the form \[ \frac{c_3(c_1+c_2)}{c_1+c_2+c_3}\left(a + \frac{c_1}{c_1+c_2}b\right)^2 + f(b), \] where $f(b)$ is independent of $a$. The lemma then follows by applying the above and using the definitions of $X_i$ and $g_i(j)$. There exists a unique solution as the problem is strictly convex. Differentiating the objective function with respect to $x$ gives that the optimal $x$ is \[ x = -\frac{1}{c_1+c_2+c_3}\big((c_1+c_2)a + c_1b\big), \] from which the proposed $x$ follows. The objective function can be rewritten as \[ c_1(a+b)^2 + c_2a^2 + 2[(c_1+c_2)a + c_1b]x + (c_1 + c_2 + c_3)x^2. \] Inserting the minimizer gives \[ \frac{1}{c_1+c_2+c_3}\Big((c_1+c_2+c_3)(c_1(a+b)^2+c_2a^2) - [(c_1+c_2)a + c_1b]^2\Big). \] The first term can be written as \begin{multline*} (c_1+c_2+c_3)(c_1(a+b)^2+c_2a^2) \\=(c_1+c_2+c_3)\big[(c_1+c_2)a^2 + 2c_1ab + c_1b^2 \big]\\ =(c_1+c_2)^2a^2 + 2(c_1+c_2)c_1ab + c_1^2b^2 + \\ c_3(c_1+c_2)a^2 + 2c_1c_3ab + (c_2+c_3)c_1b^2. \end{multline*} This gives that the objective function has the minimum value \[ \frac{1}{c_1+c_2+c_3}\Big[c_3(c_1+c_2)a^2 + 2c_1c_3ab + (c_2+c_3)c_1b^2\Big]. \] The last term is independent of $a$.
The dependence on $a$ is thus given by \[ \frac{c_3(c_1+c_2)}{c_1+c_2+c_3}\left(a^2 + \frac{2c_1}{c_1+c_2}ab\right) = \frac{c_3(c_1+c_2)}{c_1+c_2+c_3}\left(a + \frac{c_1}{c_1+c_2}b\right)^2 + f(b). \] \end{proof} Armed with the results from the previous lemma, we now apply dynamic programming to \eqref{eq:problem}. We will show that the problem can be solved in terms of the shifted levels $S_k$, the shifted productions $V_k$, and the shifted disturbances $D_k$. Outside of the horizon the problem can be solved using a Riccati equation. Using the cost to go given by the Riccati equation as initialization, we can apply dynamic programming using the results of the previous lemma. \begin{lemma}\label{lem:V_expr} Let $\gamma_k$, $\rho_k$, $X_k$, $g_k$, $\mu_k$ be defined as in \cref{alg:init,alg:com_gen}. For $1\leq k\leq N-1$ and $1\leq \Delta \leq\tau_k$, and for $k=N$ and $1 \leq \Delta \leq H+2$, let \begin{multline} \label{eq:xi} \xi_k[\Delta-1] = m_{k-1}[0] + z_k[0] + \sum_{i=1}^{k-1}\sum_{d = \sigma_i}^{\sigma_{i+1}-1}\bar{V}_i[d] \\ + \sum_{d=0}^{\tau_k-1} \Big(u_k[-(\tau_k-d)]+D_k[\sigma_{k}+d]\Big)\prod_{j=\Delta+1}^{d+1}g_k(j) \\ +\sum_{d = \sigma_k}^{\sigma_k + \Delta-2}V_k[d] + \mu_{k+1}[0]g_{k+1}(1)\prod_{j=\Delta+1}^{\tau_k} g_k(j). \end{multline} Then the optimal $V_k$ for \eqref{eq:problem} is given by \begin{equation*} V_k[\sigma_k+(\Delta-1)] = -\frac{X_k(\Delta)}{\rho_k}\xi_k[\Delta-1].
\end{equation*} The optimal individual productions are given by \[ v_k[\Delta-1] = \frac{\rho_k}{r_k}V_k[\sigma_k+(\Delta-1)].\] \end{lemma} \begin{proof} By \cref{lem:shifted_inv}-(i) and \eqref{eq:Sk_step}, each shifted inventory level $S_k[\sigma_k +\Delta]$ with $1\leq \Delta \leq \tau_{k}$ satisfies \begin{equation}\label{eq:Sk_delta_shift} S_k[\sigma_k+\Delta] = m_{k-1}[0]+ z_k[0] + \sum_{i=1}^{k-1}\sum_{d=\sigma_i}^{\sigma_{i+1}-1}\bar{V}_i[d] + \sum^{\tau_k}_{\mathclap{d = \tau_k-(\Delta-1)}} u_k[-d] + \sum_{\mathclap{d = \sigma_k}}^{\mathclap{\sigma_k+\Delta-1}} \bar{V}_k[d]. \end{equation} By (iii) in \cref{lem:shifted_inv} the cost can be rewritten as \begin{equation*} \sum_{t=0}^\infty\sum_{i=1}^N q_iz_i[t]^2 = \sum_{i=1}^Nq_iz_i[0]^2 + \sum_{i=1}^{N-1}\sum_{t=\sigma_i+1}^{\sigma_{i+1}}\gamma_iS_i[t]^2 + \sum_{\mathclap{t=\sigma_N+1}}^\infty \gamma_NS_N[t]^2. \end{equation*} Similarly, by \cref{lem:shifted_inv}-(ii), the optimal cost for a shifted production $V_i[t]$ is given by $\rho_iV_i[t]^2$, and the individual productions are given by $v_i[t] = \rho_i/r_i\cdot V_i[t+ \sigma_i]$. This gives the total production cost in terms of $V_i$ as \[ \sum_{t=0}^\infty\sum_{i=1}^N r_iv_i[t]^2 = \sum_{i=1}^{N-1}\sum_{t=\sigma_i}^{\sigma_{i+1}-1}\rho_iV_i[t]^2 + \sum_{t=\sigma_N}^\infty \rho_NV_N[t]^2. \] We can thus solve the problem in terms of $S_i$ and $V_i$, and then recover the optimal $v_i$. To that end, define the cost to go for $1\leq k \leq N-1$ and $1\leq \Delta \leq \tau_k$ as \begin{multline*} \Gamma_k[\Delta] = \sum_{t = \sigma_k+\Delta}^{\sigma_{k+1}}\Big(\gamma_kS_k[t]^2+\rho_kV_k[t-1]^2\Big) + \sum_{i=k+1}^{N-1}\sum_{t=\sigma_i+1}^{\sigma_{i+1}}\Big( \gamma_iS_i[t]^2+\rho_iV_i[t-1]^2\Big)\\ + \sum_{t=\sigma_N+1}^\infty \Big( \gamma_NS_N[t]^2+\rho_NV_N[t-1]^2\Big), \end{multline*} and for $k = N$ and $\Delta \geq 1$ \[ \Gamma_N[\Delta] = \sum_{t=\sigma_N+\Delta}^\infty \Big( \gamma_NS_N[t]^2+\rho_NV_N[t-1]^2\Big).
\] We will show, for $1\leq k \leq N-1$ and $1 \leq \Delta \leq \tau_k$, and for $k = N$ and $1\leq \Delta \leq H+2$, that \begin{equation}\label{eq:cost_to_go} \Gamma_k[\Delta] = X_{k}(\Delta)\xi_k[\Delta-1]^2 + f(b), \end{equation} where $f(b)$ is independent of $V_k[t]$ and can thus be ignored in the optimization of $V_k[t]$. Using \cref{lem:shifted_inv}-(i) combined with \eqref{eq:Sk_step}, and that $D_N[t] =0$ for $t>H+\sigma_N$, it follows that the optimal $V_N[t]$ for $t>\sigma_N+H$ are given by the solution to the problem \[\begin{aligned} \minimize_{V_N[t]} \quad & \sum_{t=\sigma_N+H+1}^\infty \gamma_NS_N[t]^2 + \rho_NV_N[t]^2 \\ \text{subject to} \quad & S_N[t+1] = S_N[t] +V_N[t] \\ &S_N[\sigma_N+H+1] = m_N[0] + \sum_{i=1}^{N-1}\ \ \ \ \mathclap{\sum_{\delta=\sigma_i}^{\sigma_{i+1}-1}}\ \ \ \bar{V_i}[\delta] +\ \mathclap{\sum_{\delta = \sigma_{N}}^{\sigma_N+H}}\ \ \ \ \bar{V}_N[\delta]. \end{aligned}\] This is a standard LQR problem and the solution can be found by solving the following Riccati equation \[ X = X-X^2/(\rho_N+ X)+\gamma_N \Rightarrow X = \frac{\gamma_N}{2}+\sqrt{\gamma_N\rho_N+\frac{\gamma_N^2}{4}}. \] Now let $X_N(H+2) = X-\gamma_N$. Then $\Gamma_N[H+2]$ is given by \[\Gamma_N[H+2] = S_N[\sigma_N+H+1]^2X_N(H+2).\] Note that the cost for $S_N[\sigma_N+H+1]$ is not part of $\Gamma_N[H+2]$, but it is part of the cost to go given by the solution $X$ to the Riccati equation. Furthermore, the optimal $V_N[t]$ for $t = \sigma_N+H+1$ is given by \begin{equation*} V_N[t] = -\frac{X}{X+\rho_N}S_N[t] = -\frac{X-\gamma_N}{\rho_N}S_N[t] = -\frac{X_N(H+2)}{\rho_N}S_N[t], \end{equation*} where the second equality follows from the Riccati equation. For $\Delta = H+2$ and $k=N$ the expression for $\xi_k[\Delta-1]$ reduces to \[ \xi_N[H+1] = m_N[0] + \sum_{i=1}^{N-1}\sum_{d = \sigma_i}^{\sigma_{i+1}-1}\bar{V}_i[d] + \sum_{d = \sigma_N}^{\sigma_N + H}\bar{V}_N[d], \] as $\mu_{N+1} = 0$, $u_N=0$ and $D_N[t] = 0$ for $t>\sigma_N+H$.
By \eqref{eq:Sk_delta_shift}, $\xi_N[H+1] = S_N[\sigma_N+H+1]$, and thus the lemma and \eqref{eq:cost_to_go} hold for $k=N$ and $\Delta = H+2$. Assume that \eqref{eq:cost_to_go} holds for $k+1$ and $\Delta = 1$. Then the optimal $V_{k}[\sigma_{k+1}-1]$ is given by the minimizer of \[ \Gamma_{k}[\tau_{k}] = \Gamma_{k+1}[1] + \gamma_{k}S_k[ \sigma_{k+1}]^2 + \rho_k V_k[\sigma_{k+1}-1]^2.\] Using the assumption for the cost to go in \eqref{eq:cost_to_go} gives that $\Gamma_{k+1}[1] =X_{k+1}(1)\xi_{k+1}[0]^2$, and thus the optimal $V_{k}[\sigma_{k+1}-1]$ is given by the minimizer of the problem \begin{equation*} \minimize_{V_{k}[\sigma_{k+1}-1]}\quad X_{k+1}(1)\xi_{k+1}[0]^2+\gamma_{k}S_{k}[\sigma_{k+1}]^2 + \rho_{k}V_{k}[\sigma_{k+1}-1]^2. \end{equation*} For $\Delta = 1$, \eqref{eq:xi} reduces to \begin{equation}\label{eq:delta1reduce} \xi_k[0] = m_{k-1}[0] +\sum_{i=1}^{k-1}\sum_{d = \sigma_i}^{\sigma_{i+1}-1}\bar{V}_i[d] + \mu_k[0], \end{equation} as \begin{equation*} \mu_k[0]= \pi_k[0] +\mu_{k+1}[0]g_{k+1}(1)\prod_{j=2}^{\tau_k} g_k(j) \end{equation*} and \[ \pi_k[0] = z_k[0] + \sum_{d=0}^{\tau_k-1} \Big(u_k[-(\tau_k-d)]+D_k[\sigma_{k}+d]\Big)\prod_{j=2}^{d+1}g_k(j). \] We also note that by \eqref{eq:Sk_delta_shift}, as $\sigma_{k+1} = \sigma_k + \tau_k$, \[ S_k[\sigma_{k+1}] = m_k[0] + \sum_{i=1}^{k}\sum_{d=\sigma_i}^{\sigma_{i+1}-1}\bar{V}_i[d]. \] Applying \cref{lem:aux}-(ii) with \[ \begin{aligned} a &= S_{k}[\sigma_{k+1}]-V_{k}[\sigma_{k+1}-1]\\ b &= \xi_{k+1}[0] - S_k[\sigma_{k+1}] = \mu_{k+1}[0] \\ x &= V_{k}[\sigma_{k+1}-1], \end{aligned} \] gives that the lemma and \eqref{eq:cost_to_go} hold for $k$ and $\Delta = \tau_{k}$, as \[ \xi_k[\tau_k-1] = m_k[0] + \sum_{i=1}^{k-1}\sum_{d=\sigma_i}^{\sigma_{i+1}-1}\bar V_i[d] + \sum_{d=\sigma_k}^{\mathclap{\sigma_k+\tau_k-2}}\bar{V}_k[d] + \mu_{k+1}[0]g_{k+1}(1). \] Assume that \eqref{eq:cost_to_go} holds for some $k$ and $\Delta+1$, where $1\leq \Delta\leq \tau_k-1$ if $k<N$ and $1\leq \Delta\leq H+1$ if $k = N$.
Then $V_k[\sigma_k+\Delta-1]$ can be found as the minimizer of \begin{equation*} \minimize_{V_k[\sigma_k+\Delta-1]} \quad X_k(\Delta+1)\xi_k[\Delta]^2 + \gamma_k S_k[\sigma_k+\Delta]^2 + \rho_k V_k[\sigma_k+\Delta-1]^2. \end{equation*} Using that two of the terms in \eqref{eq:Sk_delta_shift} can be rewritten as \begin{equation*} \sum^{\tau_k}_{\mathclap{d = \tau_k-(\Delta-1)}} u_k[-d] + \sum_{d = \sigma_k}^{\mathclap{\sigma_k+\Delta-1}} \bar{V}_k[d] =\sum_{d = 0}^{\Delta-1} \Big(u_k[-(\tau_k-d)]+D_k[\sigma_k+d]\Big) + \sum_{d = \sigma_k}^{\mathclap{\sigma_k+\Delta-1}} V_k[d], \end{equation*} we apply \cref{lem:aux}-(i) with $x =V_k[\sigma_k+\Delta-1]$, with $a = S_k[\sigma_k+\Delta]-V_k[\sigma_k+\Delta-1]$, which equals \begin{multline*} a = m_{k-1}[0]+ z_k[0] + \sum_{i=1}^{k-1}\sum_{d=\sigma_i}^{\sigma_{i+1}-1}\bar{V}_i[d] \\ + \sum_{d = 0}^{\Delta-1} \Big(u_k[-(\tau_k-d)]+D_k[\sigma_k+d]\Big) + \sum_{d = \sigma_k}^{\mathclap{\sigma_k+\Delta-2}} V_k[d], \end{multline*} and with $b = \xi_k[\Delta]-S_k[\sigma_k+\Delta]$, which gives \begin{equation*} b = \ \ \mathclap{\sum_{d=\Delta}^{\tau_k-1}} \ \ \ \Big(u_k[-(\tau_k-d)]+D_k[\sigma_{k}+d]\Big)\ \ \ \ \mathclap{\prod_{j=\Delta+2}^{d+1}} \ \ \ g_k(j) + \mu_{k+1}[0]g_{k+1}(1)\prod_{j=\Delta+2}^{\tau_k} g_k(j). \end{equation*} It then follows that \eqref{eq:cost_to_go} and the lemma hold for $k$ and $\Delta$ as well. Thus the lemma holds for all $1\leq \Delta \leq\tau_k$ for $1\leq k\leq N-1$, and for $1 \leq \Delta \leq H+2$ for $k=N$. \end{proof} All that remains now is to find expressions for $V_k[\sigma_k]$ in terms of the initial conditions. The following lemma allows us to do so, using the expressions for $V_k$ derived in the previous lemma. \begin{lemma}\label{lem:mVsum} Let $h_k$, $P_k(i,j)$, $\phi_k(\Delta)$, $\pi_k[0]$, $\mu_k[0]$, $\Phi_k[0]$, and $\delta_k[0]$ be defined as in \cref{alg:init,alg:com_gen}.
Then for $k\leq N-1$ \begin{equation}\label{eq:m_k_rewrite} m_k[0] + \sum_{i=1}^{k}\sum_{d = \sigma_i}^{\sigma_{i+1}-1}\bar{V}_i[d] = \delta_{k}[0] - h_k\mu_{k+1}[0]. \end{equation} \end{lemma} \begin{proof} Let $B_k[0] = z_k[0] + u_k[-\tau_k] + D_k[\sigma_k]$ and $B_k[i] = u_k[-(\tau_k-i)] + D_k[\sigma_{k}+i]$ for $1\leq i < \tau_k$. We will prove the lemma by showing that for $1\leq \Delta \leq \tau_k$ \begin{multline}\label{eq:m_k_partial} m_{k-1}[0] + \sum_{i=1}^{k-1}\sum_{d = \sigma_i}^{\sigma_{i+1}-1}\bar{V}_i[d] + \sum_{d = \sigma_k}^{\sigma_k+\Delta-1}V_k[d] =\\ (1-P_k(\Delta,1))\delta_{k-1}[0] -\Big[h_{k-1}b_k(1-P_k(\Delta,1))+P_k(\Delta,\Delta)g_{k+1}(1)\ \ \ \mathclap{\prod_{j = \Delta+1}^{\tau_k}}\ \ \ g_k(j)\Big]\mu_{k+1}[0] \\ - \sum_{d=0}^{\tau_k-1}B_k[d]\Big[ P_k(\Delta,\min(d+1,\Delta))\prod_{j=\Delta+1}^{d+1}g_k(j) +(1-P_k(\Delta,1))h_{k-1}\prod_{j=2}^{d+1}g_k(j) \Big]. \end{multline} More specifically, we will show that \begin{enumerate} \item \eqref{eq:m_k_partial} holds for $k=1$ and $\Delta = 1$. \item If \eqref{eq:m_k_partial} holds for some $k$ and $\Delta-1$, then it holds for $\Delta$ as well. \item If \eqref{eq:m_k_rewrite} holds for $k-1$, then \eqref{eq:m_k_partial} holds for $k$ and $\Delta = 1$. \item If \eqref{eq:m_k_partial} holds for $k$ and $\Delta = \tau_k$, then $\eqref{eq:m_k_rewrite}$ holds for $k$. \end{enumerate} For $k=1$ and $\Delta = 1$ the LHS of \eqref{eq:m_k_partial} is just $V_1[\sigma_1]$. The RHS of \eqref{eq:m_k_partial} equals $-X_1(1)/\rho_1\cdot\mu_1[0]$, as $P_1(1,m) = X_1(1)/\rho_1$, $\delta_0=0$, and $h_0 = 0$. By \cref{lem:V_expr} the RHS is also equal to $V_1[\sigma_1]$, since by \eqref{eq:delta1reduce} \[ \xi_1[0] = \mu_1[0]. \] Thus \eqref{eq:m_k_partial} holds for $k=1$ and $\Delta = 1$.
Applying \cref{lem:V_expr} on $V_k[\sigma_k+\Delta]$ for the LHS of \eqref{eq:m_k_partial} gives: \begin{multline}\label{eq:ind_step} m_{k-1}[0] + \sum_{i=1}^{k-1}\sum_{d = \sigma_i}^{\sigma_{i+1}-1}\bar{V}_i[d] + \sum_{d = \sigma_k}^{\sigma_k+\Delta-1}V_k[d] = \\ (1-\frac{X_k(\Delta)}{\rho_k})\Big( m_{k-1}[0] + \sum_{i=1}^{k-1}\sum_{d = \sigma_i}^{\sigma_{i+1}-1}\bar{V}_i[d] + \sum_{d = \sigma_k}^{\sigma_k+\Delta-2}V_k[d] \Big)\\ -\frac{X_k(\Delta)}{\rho_k}\Big[\sum_{d=0}^{\tau_k-1} B_k[d]\prod_{j=\Delta+1}^{d+1}g_k(j) + \mu_{k+1}[0]g_{k+1}(1)\prod_{j=\Delta+1}^{\tau_k} g_k(j)\Big] \end{multline} Now assume that \eqref{eq:m_k_partial} holds for $k$ and $\Delta-1$; we then show that it holds for $k$ and $\Delta$. Using \eqref{eq:ind_step}, the coefficients of the different terms in \eqref{eq:m_k_partial} are obtained as follows. For $\delta_{k-1}[0]$ we get \[ (1-\frac{X_k(\Delta)}{\rho_k})(1-P_k(\Delta-1,1)) = 1-P_k(\Delta,1). \] For the terms in front of $\mu_{k+1}[0]$ we get \begin{multline*} -\Big(1-\frac{X_k(\Delta)}{\rho_k}\Big)\Big(h_{k-1}b_k \big(1-P_k(\Delta-1,1) \big) +P_k(\Delta-1,\Delta-1)g_{k+1}(1)\prod_{j = \Delta}^{\tau_k} g_k(j)\Big) \\ - \frac{X_k(\Delta)}{\rho_k}g_{k+1}(1)\prod_{j=\Delta+1}^{\tau_k} g_k(j) \\ =-h_{k-1}b_k\big(1-P_k(\Delta,1)\big) - P_k(\Delta,\Delta)g_{k+1}(1)\prod_{j=\Delta+1}^{\tau_k} g_k(j), \end{multline*} and for the coefficient for $B_k[d]$, \begin{multline*} -(1-\frac{X_k(\Delta)}{\rho_k})\Big[P_k(\Delta-1,\min(d+1,\Delta-1))\prod_{j=\Delta}^{d+1}g_k(j) + \\ (1-P_k(\Delta-1,1))h_{k-1}\prod_{j=2}^{d+1}g_k(j)\Big] - \frac{X_k(\Delta)}{\rho_k}\prod_{j=\Delta+1}^{d+1}g_k(j)\\ = - P_k(\Delta,\min(d+1,\Delta))\prod_{j=\Delta+1}^{d+1}g_k(j) -(1-P_k(\Delta,1))h_{k-1}\prod_{j=2}^{d+1}g_k(j). \end{multline*} Thus \eqref{eq:m_k_partial} holds for $k$ and $\Delta$ as well. Assume that \eqref{eq:m_k_rewrite} holds for $k-1$. Then we can show that \eqref{eq:m_k_partial} holds for $k$ and $\Delta = 1$.
Using that \begin{equation}\label{eq:pi_k} \pi_k[0] = \sum_{d=0}^{\tau_k-1}\Big( B_k[d]\prod_{j=2}^{d+1}g_k(j)\Big), \end{equation} the RHS of \eqref{eq:m_k_partial} reduces to \begin{equation*} \big[1-P_k(1,1)\big]\delta_{k-1}[0] -\big[h_{k-1}(1-P_k(1,1))+P_k(1,1)\big](\pi_k[0] + b_k\mu_{k+1}[0]) . \end{equation*} Using \eqref{eq:ind_step} with $\Delta = 1$, the definition for $P_k(1,1)$, and inserting \eqref{eq:m_k_rewrite} gives that the LHS of \eqref{eq:m_k_partial} is equal to \begin{equation*} (1-P_k(1,1))\Big[\delta_{k-1}[0] - h_{k-1}\mu_{k}[0] \Big] - P_k(1,1)\Big[\sum_{d=0}^{\tau_k-1} B_k[d]\prod_{\mathclap{j=2}}^{d+1}g_k(j) + \mu_{k+1}[0]b_k\Big]. \end{equation*} Using \eqref{eq:pi_k} and the definition $\mu_k[0] = \pi_k[0] + b_k \mu_{k+1}[0]$ shows that the RHS and LHS are equal. Thus \eqref{eq:m_k_partial} holds for $k$ and $\Delta = 1$ if \eqref{eq:m_k_rewrite} holds for $k-1$. Finally, we will show that if \eqref{eq:m_k_partial} holds for $k$ and $\Delta = \tau_k$ then \eqref{eq:m_k_rewrite} holds for $k$. Using the definition of $h_k$, the RHS of \eqref{eq:m_k_partial} reduces to \begin{multline*} (1-P_k(\tau_k,1))\delta_{k-1}[0] +h_k\mu_{k+1}[0] \\ - \sum_{d=0}^{\tau_k-1}B_k[d]\Big[P_k(\tau_k,d+1) + \big(1-P_k(\tau_k,1)\big)h_{k-1}\prod_{j=2}^{d+1}g_k(j)\Big] \end{multline*} For $\Delta = \tau_k$ the LHS of \eqref{eq:m_k_rewrite} is equal to the LHS of \eqref{eq:m_k_partial} plus \[ z_k[0] +\sum_{d=1}^{\tau_k}u_k[-d] + \sum_{d=\sigma_k}^{\sigma_{k+1}-1} D_k[d] = \sum_{d=0}^{\tau_k-1}B_k[d]. \] Thus it holds that the LHS of \eqref{eq:m_k_rewrite} is equal to \begin{multline*} (1-P_k(\tau_k,1))\delta_{k-1}[0] +h_k\mu_{k+1}[0] \\ + \sum_{d=0}^{\tau_k-1}B_k[d]\Big[(1-P_k(\tau_k,d+1)) - \big(1-P_k(\tau_k,1)\big)h_{k-1}\prod_{j=2}^{d+1}g_k(j)\Big].
\end{multline*} Using the definition for $\phi_k(\Delta)$ in \cref{alg:init} and $\Phi_k$ and $\delta_k$ in \cref{alg:com_gen} shows that \eqref{eq:m_k_partial} gives \eqref{eq:m_k_rewrite} for $\Delta=\tau_k$, as \[ \sum_{d=0}^{\tau_k-1}B_k[d]\Big[(1-P_k(\tau_k,d+1)) - \big(1-P_k(\tau_k,1)\big)h_{k-1}\prod_{j=2}^{d+1}g_k(j)\Big] = \Phi_k[0]. \] \end{proof} We are now finally ready to prove the theorem, which follows from the previous lemmas. \emph{Proof of \cref{thm:gen}:} \Cref{lem:V_expr} with $\Delta = 1$ and \cref{lem:mVsum} give that \begin{equation}\label{eq:opt_V}\begin{aligned} V_k[\sigma_k] &= -\frac{X_k(1)}{\rho_k}\Big[m_{k-1}[0] + \sum_{i=1}^{k-1}\sum_{d = \sigma_i}^{\sigma_{i+1}-1}\bar{V}_i[d] + \mu_k[0] \Big]\\ & = -\frac{X_k(1)}{\rho_k}\Big[\delta_{k-1}[0] + (1-h_{k-1})\mu_k[0] \Big] \end{aligned}\end{equation} from which the optimal $v_k[0] = \rho_k/r_k\cdot V_k[\sigma_k]$ in \cref{alg:com_gen} follows. Using \cref{lem:shifted_inv}-(iii), \cref{lem:mVsum}, and $v_k[0] = \rho_k/r_k\cdot V_k[\sigma_k]$ gives that \begin{multline*} u_{k-1}[0] = (1-\frac{\gamma_k}{q_k})(z_k[0]+u_k[-\tau_k]+D_k[0]) +d_k[0]-D_k[0]\\ +V_k[\sigma_k]\Big(\frac{\rho_k}{r_k}-\frac{\gamma_k}{q_k} \Big) - \frac{\gamma_k}{q_k}\Big(\delta_{k-1}[0]-h_{k-1}\mu_k[0] \Big). \end{multline*} Inserting \eqref{eq:opt_V} gives that the optimal $u$ is as in \cref{alg:com_gen}. The results hold for $t\neq 0$ as well, since the problem has an infinite horizon and one can always shift the time variable so that the current time is zero. \qed \end{document}
\begin{document} \title{Controlling Directionality and Dimensionality of Wave Propagation through Separable Bound States in the Continuum} \author{Nicholas Rivera$^{1}$, Chia Wei Hsu $^{1,2}$, Bo Zhen $^{1,3}$, Hrvoje Buljan$^{4}$, John D. Joannopoulos$^{1}$ \& Marin Solja\v{c}i\'{c}$^{1}$} \affiliation{$^{1}$Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA \\ $^{2}$Department of Applied Physics, Yale University, New Haven, CT 06520, USA. \\ $^{3}$Physics Department and Solid State Institute, Technion, Haifa 32000, Israel \\ $^{4}$Department of Physics, University of Zagreb, Zagreb 10000, Croatia. } \begin{abstract} \noindent A bound state in the continuum (BIC) is an unusual localized state that is embedded in a continuum of extended states. Here, we present the general condition for BICs to arise from wave equation separability and show that the directionality and dimensionality of their resonant radiation can be controlled by exploiting perturbations of certain symmetry. Using this general framework, we construct new examples of separable BICs in realistic models of optical potentials for ultracold atoms, photonic systems, and systems described by tight binding. Such BICs with easily reconfigurable radiation patterns allow for applications such as the storage and release of waves at a controllable rate and direction, systems that switch between different dimensions of confinement, and experimental realizations in atomic, optical, and electronic systems. \end{abstract} \maketitle \section{Introduction} The usual frequency spectrum of waves in an inhomogeneous medium consists of localized waves whose frequencies lie outside a continuum of delocalized propagating waves.
For some rare systems, the bound states can be at the same frequency as an extended state. Such bound states in the continuum (BICs) represent a method of wave localization that does not forbid outgoing waves from propagating, unlike traditional methods of localizing waves such as potential wells in quantum mechanics, conducting mirrors in optics, band-gaps, and Anderson localization. BICs were first predicted theoretically in quantum mechanics by von Neumann and Wigner~\cite{neumann1929}. However, their BIC-supporting potential was highly oscillatory and could not be implemented in reality. The BIC was seen as a mathematical curiosity for the following 50 years until Friedrich and Wintgen showed that BICs could exist in models of multi-electron atoms~\cite{friedrich1985}. Since then, many theoretical examples of BICs have been demonstrated in quantum mechanics~\cite{robnik1986,nockel1992,duclos2001,prodanovic2013, ladron2003,ordonez2006,moiseyev2009}, electromagnetism~\cite{ochiai2001,ctyroky2001,watts2001,kawakami2002,apalkov2003,shipman2005,porter2005,bulgakov2008,marinica2008,molina2012,2013_Hsu_LSA,bulgakov2014,monticone2014,2014_Bulgakov_OL}, acoustics~\cite{porter2005}, and water waves~\cite{cobelli2011}, with some experimentally realized~\cite{plotnik2011,lee2012,2013_Hsu_Nature,corrielli2013,weimann2013,lepetit2014}. A few mechanisms explain most examples of BICs that have been discovered. The simplest mechanism is symmetry protection, in which the BIC is decoupled from its degenerate continua by symmetry mismatch~\cite{ochiai2001,fan2002,ladron2003,plotnik2011,lee2012,shipman2005,moiseyev2009}.
Alternatively, when two resonances are coupled, they can form BICs through Fabry-Perot interference~\cite{ordonez2006,marinica2008, 2013_Hsu_LSA, weimann2013} or through radiative coupling with tuned resonance frequencies~\cite{friedrich1985,lepetit2014,2014_Bulgakov_OL}. More generally, parametric BICs can occur for special values of parameters in the Hamiltonian~\cite{porter2005,bulgakov2008,2013_Hsu_Nature,yi2014,bulgakov2014,monticone2014}, which can in some cases be understood using topological arguments~\cite{zhen2014}. There exists yet another class of BICs, which has thus far been barely explored, and literature covering it is rare and scattered; these are BICs due to separability~\cite{robnik1986,nockel1992,prodanovic2013,ctyroky2001,watts2001,kawakami2002,apalkov2003,duclos2001}. In this paper, we develop the most general properties of separable BICs by providing the general criteria for their existence and by characterizing the continua of these BICs. We demonstrate that separable BICs enable control over both the directionality and the dimensionality of resonantly emitted waves by exploiting symmetry, which is not possible in other classes of BICs. Our findings may lead to applications such as tunable-$Q$ directional resonators and quantum systems which can be switched between quantum dot (0D), quantum wire (1D), and quantum well (2D) modes of operation.
While previous works on separable BICs have been confined to somewhat artificial systems, we also present readily experimentally observable examples of separable BICs in photonic systems with directionally resonant states, as well as examples in optical potentials for cold atoms. \section{Results} \subsection{General Condition for Separable BICs} We start by considering a simple example of a two-dimensional separable system - one where the Hamiltonian operator can be written as a sum of Hamiltonians that each act on distinct variables $x$ or $y$, i.e., \begin{equation} H = H_x(x) + H_y(y). \end{equation} Denoting the eigenstates of $H_x$ and $H_y$ as $\psi_{x}^i(x)$ and $\psi_{y}^j(y)$, with energies $E_{x}^i$ and $E_{y}^j$, it follows that $H$ is diagonalized by the basis of product states $\psi_{x}^i(x)\psi_{y}^j(y)$ with energies $E^{i,j}=E_{x}^i+E_{y}^j$. If $H_x$ and $H_y$ each have a continuum of extended states starting at zero energy, and these Hamiltonians each have at least one bound state, $\psi_x^0$ and $\psi_y^0$, respectively, then the continuum of $H$ starts at energy $\text{min}(E_{x}^0,E_{y}^0)$, where the $0$ subscript denotes ground states. Therefore, if there exists a bound state satisfying $E^{i,j}>\text{min}(E_{x}^0,E_{y}^0)$, then it is a bound state in the continuum. We schematically illustrate this condition being satisfied in Figure 1, where we illustrate a separable system in which $H_x$ has two bound states, $\psi_x^0$ and $\psi_x^1$, and $H_y$ has one bound state, $\psi_y^0$. The first excited state of $H_x$ combined with the ground state of $H_y$, $\psi_x^1\psi_y^0$, while spatially bounded, has larger energy, $E_x^1 + E_y^0$, than the lowest continuum energy, $E_x^0$, and is therefore a BIC. Extending separability to a Hamiltonian with a larger number of separable degrees of freedom is then straightforward.
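The threshold comparison above is mechanical once the separated bound-state energies are known. The following sketch flags the product bound states that lie in the continuum; the energies used are hypothetical values chosen to mimic the situation of Figure 1 (two bound states in $H_x$, one in $H_y$, both continua starting at zero):

```python
# Mechanical check of the separability criterion: flag product bound states
# psi_x^i psi_y^j whose energy exceeds the continuum threshold of H.

def separable_bics(Ex_bound, Ey_bound):
    """Return index pairs (i, j) of bound products whose total energy
    exceeds the continuum threshold min(E_x^0, E_y^0) of H = Hx + Hy."""
    threshold = min(Ex_bound[0], Ey_bound[0])
    return [(i, j)
            for i, Ex in enumerate(Ex_bound)
            for j, Ey in enumerate(Ey_bound)
            if Ex + Ey > threshold]

Ex_bound = [-2.0, -0.4]    # E_x^0, E_x^1 (arbitrary units, assumed)
Ey_bound = [-1.5]          # E_y^0

print(separable_bics(Ex_bound, Ey_bound))   # [(1, 0)]: psi_x^1 psi_y^0 is a BIC
```

With these numbers only the product of the first excited $x$ state and the $y$ ground state lies above the threshold, exactly as in the schematic.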
The Hamiltonian can be expressed in tensor product notation as $H = \sum\limits_{i=1}^N H_i$, where $H_i \equiv I^{\otimes i-1}\otimes h_i \otimes I^{\otimes N-i}$. In this expression, $N$ is the number of separated degrees of freedom, $h_i$ is the operator acting on the $i$-th variable, and $I$ is the identity operator. The variables may refer to the different particles in a non-interacting multi-particle system, or the spatial and polarization degrees of freedom of a single-particle system. Denote the $n_i$th eigenstate of $h_i$ by $|\psi_{i}^{n_i}\rangle$ with energy $E_i^{n_i}$. Then, the overall Hamiltonian $H$ is trivially diagonalized by the product states $|n_1,n_2,\cdots ,n_N\rangle \equiv |\psi_1^{n_1}\rangle\otimes|\psi_2^{n_2}\rangle\otimes\cdots\otimes|\psi_N^{n_N}\rangle$ with corresponding energies $E = E_1^{n_1} + E_2^{n_2} + \cdots + E_N^{n_N}$. Denoting the ground-state energy of $h_i$ by $E_i^{0}$ and defining the zeros of the $h_i$ such that their continua of extended states start at energy zero, the continuum of the overall Hamiltonian starts at $\underset{{1\leq i \leq N}}{\text{min}}(E_{i}^{0})$. Then, if the separated operators $\{ h_i \}$ are such that there exists a combination of separated bound states satisfying $E = \sum\limits_{i=1}^N E_{i}^{n_i} > \underset{{1\leq i \leq N}}{\text{min}}(E_{i}^{0})$, this combined bound state is a BIC of the overall Hamiltonian. For such separable BICs, coupling to the continuum is forbidden by the separability of the Hamiltonian. Stating the most general criteria for separable BICs allows us to straightforwardly extend the known separable BICs to systems in three dimensions, multi-particle systems, and systems described by the tight-binding approximation. \subsection{Properties of the Degenerate Continua} A unique property that holds for all separable BICs is that the delocalized modes degenerate to the BIC are always guided in at least one direction.
In many cases, there are multiple degenerate delocalized modes and they are guided in different directions. In 2D, we can associate partial widths $\Gamma_x$ and $\Gamma_y$ to the resulting resonances of these systems when the separable system is perturbed. These partial widths are the decay rates associated with energy-conserving transitions between the BIC and states delocalized in the $x$ and $y$ directions under perturbation, respectively. When separability is broken, we generally cannot decouple leakage in the $x$ and $y$ directions because the purely-$x$ and purely-$y$ delocalized continuum states mix. However, in this section, we show that one can control the radiation to be toward the $x$ (or $y$) direction only, by exploiting the symmetry of the perturbation. In Fig. 2(a), we show a two-dimensional potential in a Schr\"{o}dinger-like equation which supports a separable BIC. This potential is a sum of Gaussian wells in the $x$ and $y$-directions, given by \begin{equation} V(x,y) = -V_xe^{-\frac{2x^2}{\sigma_x^2}}-V_ye^{-\frac{2y^2}{\sigma_y^2}}. \end{equation} This type of potential is representative of realistic optical potentials for ultracold atoms and potentials in photonic systems. Solving the time-independent Schr\"{o}dinger equation for $\{V_x, V_y \}=\{1.4, 2.2\}$ and $\{\sigma_x,\sigma_y \} = \{5,4\}$ (in arbitrary units) gives the spectra shown in Fig. 2(b). Due to the $x$ and $y$ mirror symmetries of the system, the modes have either even or odd parity in both the $x$ and the $y$ directions. This system has several BICs. Here we focus on the BIC $|n_x,n_y\rangle = |2,1\rangle$ at energy $E^{2,1} =-1.04$, with the mode profile shown in Fig.~2(c); being the second excited state in $x$ and the first excited state in $y$, this BIC is even in $x$ and odd in $y$. It is only degenerate to continuum modes $|0,E^{2,1}-E_{x}^0\rangle$ extended in the $y$ direction (Fig. 2(d)), and $|E^{2,1}-E_{y}^0,0\rangle$ extended in the $x$ direction (Fig.
2(e)), where an $E$ label inside a ket denotes an extended state with energy $E$. If we choose a perturbation that preserves the mirror symmetry in the $y$-direction, as shown in the inset of Fig.~2(f), then the perturbed system still exhibits mirror symmetry in $y$. Since the BIC $|2,1\rangle$ is odd in $y$ and yet the $x$-delocalized continuum states $|E^{2,1}-E_{y}^0,0\rangle$ are even in $y$, there is no coupling between the two. As a result, the perturbed state radiates only in the $y$ direction, as shown in the calculated mode profile in Fig.~2(f). This directional coupling is a result of symmetry, and so it holds for arbitrary perturbation strengths. On the other hand, if we apply a perturbation that is odd in $x$ but not even in $y$, as shown in the inset of Fig.~2(g), then there is radiation in the $x$ direction only to first-order in time-dependent perturbation theory. Specifically, for weak perturbations of the Hamiltonian, $\delta V$, the first-order leakage rate is given by Fermi's Golden Rule for bound-to-continuum coupling, $\Gamma \sim \sum_c |\langle 2,1 |\delta V| \psi_c \rangle|^2\rho_c(E^{2,1})$, where $\rho_c(E)$ is the density of states of continuum $c$, and $c$ labels the distinct continua which have states at the same energy as the BIC. Since the BIC and the $y$-delocalized continuum states $|0,E^{2,1}-E_{x}^0\rangle$ are both even in $x$, the odd-in-$x$ perturbation does not couple the two modes directly, and $\Gamma_y$ is zero to the first order. As a result, the perturbed state radiates only in the $x$ direction, as shown in Fig.~2(g). At the second order in time-dependent perturbation theory, the BIC can make transitions to intermediate states $k$ at any energy, and thus the second-order transition rate, proportional to $\sum_c |T_{ci}|^2 = \sum_c |\sum_k \frac{\langle c|\delta V|k \rangle\langle k|\delta V|i\rangle}{E_i - E_k + i0^+}|^2$, does not vanish because the intermediate state can have even parity in the $x$-direction.
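The parity argument above can be checked directly: the golden-rule matrix element between two even-in-$x$ states vanishes for any odd-in-$x$ perturbation. A numerical sketch with hypothetical smooth profiles (stand-ins, not the actual eigenfunctions of Fig. 2):

```python
import numpy as np

# Parity selection rule: an odd-in-x perturbation has a vanishing first-order
# matrix element between two even-in-x states, while a perturbation without
# definite parity generally does not.  All profiles below are hypothetical.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

bic_x = np.exp(-x**2) * (x**2 - 0.5)     # even-in-x factor of the BIC
cont_x = np.exp(-x**2)                   # even-in-x factor of a continuum state
dV_odd = x * np.exp(-x**2 / 4)           # odd-in-x perturbation
dV_generic = np.exp(-(x - 1.0)**2)       # perturbation with no definite parity

elem_odd = np.sum(bic_x * dV_odd * cont_x) * dx       # vanishes by symmetry
elem_generic = np.sum(bic_x * dV_generic * cont_x) * dx
print(elem_odd, elem_generic)
```

The first integral vanishes to numerical precision regardless of the detailed profiles, which is why the directional decoupling survives at first order for any perturbation strength of the stated parity.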
Another unique aspect of separability is that by using separable BICs in 3D, the dimensionality of the confinement of a wave can be switched between one, two, and three by tailoring perturbations applied to a single BIC mode. The ability to do this allows for a device which can simultaneously act as a quantum well, a quantum wire, and a quantum dot. We demonstrate this degree of control using a separable potential generated by the sum of three Gaussian wells (in $x$, $y$, and $z$ directions) of the form in Equation 2 with strengths $\{0.4,0.4,1 \}$ and widths $\{ 12,12,3 \}$, all in arbitrary units. The identical $x$ and $y$ potentials have four bound states at energies $E_x = E_y = -0.33, -0.20, -0.10 \text{ and } -0.029$. The $z$ potential has two bound states at energies $E_z =-0.61 \text{ and } -0.059$. All of the continua start at zero. The BIC state $|1,1,1\rangle$ at energy $E^{1,1,1}=-0.47$ is degenerate to $|0,0,E_z\rangle,|0,1,E_z\rangle,|1,0,E_z\rangle,|E_x,m,0\rangle,|n,E_y,0\rangle$ and $|E_x,E_y,0\rangle$, where $E_i$ denotes an energy above zero, and $m$ and $n$ denote bound states of the $x$ and $y$ wells. For perturbations which are even in the $z$ direction, the BIC $|1,1,1\rangle$ does not couple to states delocalized in $x$ and $y$ ($|E_x,m,0\rangle,|n,E_y,0\rangle,\text{ and }|E_x,E_y,0\rangle$), because they have opposite parity in $z$, so the BIC radiates in the $z$-direction only. On the other hand, for perturbations which are even in the $x$ and $y$ directions, the coupling to states delocalized in $z$ ($|0,0,E_z\rangle,|0,1,E_z\rangle,\text{ and }|1,0,E_z\rangle$) vanishes, meaning that the BIC radiates in the $xy$-plane. \subsection{Proposals for the experimental realizations of separable BICs} BICs are generally difficult to experimentally realize because they are fragile under perturbations of system parameters.
On the other hand, separable BICs are straightforward to realize and also robust with respect to changes in parameters that preserve the separability of the system. This reduces the difficulty of experimentally realizing separable BICs to ensuring separability. While this is still nontrivial in general, it can be achieved straightforwardly in systems that respond linearly to the intensity of electromagnetic fields. In the next two realistic examples of BICs that we demonstrate, we use detuned light sheets to generate separable potentials in photonic systems and also for ultracold atoms. \textit{Paraxial optical systems $-$} As a first example of this, consider electromagnetic waves propagating paraxially along the $z$-direction, in an optical medium with spatially non-uniform index of refraction $n(x,y) = n_{0} + \delta n(x,y)$, where $n_0$ is the constant background index of refraction and $\delta n \ll n_{0}$. The slowly varying amplitude of the electric field $\psi(x,y,z)$ satisfies the two-dimensional Schr\"{o}dinger equation (see Ref. 34 and references therein), \begin{equation} i\frac{\partial \psi}{\partial z}= -\frac{1}{2k}\nabla_{\perp}^2 \psi-\frac{k \delta n }{n_0}\psi. \label{paraxial} \end{equation} Here, $\nabla_{\perp}^2=\partial^2/\partial x^2+\partial^2/\partial y^2$, and $k=2\pi n_0/\lambda$, where $\lambda$ is the wavelength in vacuum. The modes of the potential $\delta n(x,y)$ are of the form $\psi=A_j(x,y)e^{i\beta_j z}$, where $A_j$ is the profile, and $\beta_j$ the propagation constant of the $j$th mode. In the simulations we use $n_0=2.3$, and $\lambda=485$~nm. Experimentally, there are several ways of producing potentials $\delta n(x,y)$ of different type, e.g., periodic\cite{fleischer2005}, random\cite{schwartz2007}, and quasicrystal\cite{freedman2006}.
One of the very useful techniques is the so-called optical induction technique where the potential is generated in a photosensitive material (e.g., photorefractives) by employing light~\cite{fleischer2003}. We consider here a potential generated by two perpendicular light sheets, which are slightly detuned in frequency, such that the time-averaged interference vanishes and the total intensity is the sum of the intensities of the individual light sheets. The light sheets are much narrower in one dimension ($x$ for one, $y$ for the other) than the other two, and therefore each light sheet can be approximated as having an intensity that depends only on one coordinate, making the index contrast separable. This is schematically illustrated in Fig. 3(a). If the sheets are Gaussian along the narrow dimension, then the potential is of the form $\delta n(x,y)=\delta n_0 [\exp(-2(x/\sigma)^2)+\exp(-2(y/\sigma)^2)]$. It is reasonable to use $\sigma=30$~$\mu$m and $\delta n_0 = 5.7 \times 10^{-4}$. For these parameters, the one dimensional Gaussian wells have four bound states, with $\beta$ values of (in mm$^{-1}$): $2.1,1.3,0.55,\text{ and } 0.11$. There are eight BICs: $|1,2\rangle,|2,1\rangle,|1,3\rangle,|3,1\rangle$, $|2,3\rangle$, $|3,2\rangle, |2,2\rangle$, and $|3,3\rangle$. Among them, $|1,3\rangle$ and $|3,1\rangle$ are symmetry protected. Additionally, the BICs $|1,3\rangle$ and $|3,1\rangle$ can be used to demonstrate directional resonance in the $x$ or $y$ directions, respectively, by applying perturbations even in $x$ or $y$, respectively. Therefore, this photonic system serves as a platform to demonstrate both separable BICs and directional resonances. \textit{Optical potentials for ultracold atoms $-$} The next example that we consider can serve as a platform for the first experimental measurement of BICs in quantum mechanics. Consider a non-interacting neutral Bose gas in an optical potential. 
Optical potentials are created by employing light sufficiently detuned from the resonance frequencies of the atom, where the scattering due to spontaneous emission can be neglected, and the atoms are approximated as moving in a conservative potential. The macroscopic wavefunction of the system is then determined by solving the Schr\"{o}dinger equation with a potential that is proportional to the intensity of the light~\cite{bloch2005}. As an explicit example, consider an ultracold Bose gas of $^{87}$Rb atoms. An optical potential is made by three Gaussian light sheets with equal intensity $I_1 = I_2 = I_3$, widths $20~\mu$m, and wavelengths centered at $\lambda=1064$~nm~\cite{henderson2009}. This is schematically illustrated in Fig. 3(b). The intensity is such that this potential has depth equal to ten times the recoil energy $E_r = \frac{h^2}{2m\lambda^2}$. By solving the Schr\"{o}dinger equation numerically, we find many BICs; the continuum energy starts at a reduced energy $\xi_c = \frac{2mEx_0^2}{\hbar^2} = -296.24$, where $x_0$ is chosen to be $1~\mu$m. The reduced depth of the trap is $-446.93$. Each one-dimensional Gaussian supports 138 bound states. There are very many BICs in such a system. For concreteness, an example of one is $|30,96,96\rangle$, with reduced energy $-146.62$. \textit{Tight Binding Models$-$} The final example that we consider here is an extension of the separable BIC formalism to systems which are well-approximated by a tight-binding Hamiltonian that is separable.
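Returning briefly to the cold-atom example above, the recoil-energy scale is easy to verify numerically; a sketch using standard constants (the $^{87}$Rb atomic mass value used here is an input assumption):

```python
# Order-of-magnitude check of the recoil energy E_r = h^2 / (2 m lambda^2)
# for 87Rb at lambda = 1064 nm, as quoted in the cold-atom example.
h = 6.62607015e-34                 # Planck constant, J s
m_Rb = 86.909 * 1.66053907e-27     # 87Rb mass, kg (assumed value)
lam = 1064e-9                      # light-sheet wavelength, m

E_r = h**2 / (2 * m_Rb * lam**2)
f_r = E_r / h                      # recoil frequency, Hz
print(E_r, f_r)                    # ~1.3e-30 J, i.e. a recoil frequency ~2 kHz
```

A trap depth of ten recoil energies is therefore on the order of tens of kilohertz in frequency units, a routine depth for optical potentials of this kind.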
Consider the following one-dimensional tight-binding Hamiltonian, $H_i$, which models a one-dimensional lattice of non-identical sites: \begin{equation} H_i = \sum\limits_{k}\varepsilon_i^{k} |k\rangle\langle k|+t_i \sum \limits_{\langle lm \rangle}(|l\rangle\langle m| + |m\rangle\langle l| ), \end{equation} where $\langle lm \rangle$ denotes nearest-neighbors, $\varepsilon_i^k $ is the on-site energy of site $k$, and $k,l,\text{ and }m$ run from $-\infty$ to $\infty$. Suppose $\varepsilon_i^k = -V$ for $|k| < N$, and zero otherwise. For two Hamiltonians of this form, $H_1$ and $H_2$, $H_1 \otimes I + I \otimes H_2$ describes the lattice in Fig. 3(c). If we take $H_1 = H_2$ with $\{ V,t,N \} = \{ -1,-0.3,2 \}$, in arbitrary units, the bound state energies of the 1D-lattices are numerically determined to be $-0.93,-0.74,-0.46,\text{ and } -0.16$. Therefore the states $|2,2\rangle, |2,3\rangle, |3,2\rangle, |3,3\rangle, |3,1\rangle \text{ and } |1,3\rangle$ are BICs. The last two of these are also symmetry-protected from the continuum as they are odd in $x$ and $y$ while the four degenerate continuum states are always even in at least one direction. Of course, many different physical systems can be adjusted to approximate the system from Eq. (4), so this opens a path for observing separable BICs in a wide variety of systems. \section{Summary} With the general criterion for separable BICs, we have extended the existing handful of examples to a wide variety of wave systems including: three-dimensional quantum mechanics, paraxial optics, and lattice models which can describe 2D waveguide arrays, quantum dot arrays, optical lattices, and solids. These BICs exist in other wave equations such as the 2D Maxwell equations, and the inhomogeneous scalar and vector Helmholtz equations, meaning that their physical domain of existence extends to 2D photonic crystals, microwave optics, and also acoustics. 
To our knowledge, they do not exist (except perhaps approximately) in isotropic media in the full 3D Maxwell equations $\nabla \times \nabla \times \mathbf{E} = \nabla(\nabla \cdot \mathbf{E}) - \nabla^2\mathbf{E} = \epsilon \frac{\omega^2}{c^2}\mathbf{E}$ because the operator $\nabla(\nabla \cdot \text{ })$ couples the different coordinates and renders the equation non-separable. We have demonstrated numerically the existence of separable BICs in models of trapped Bose-Einstein condensates and photonic potentials, showing that separable BICs can be found in realistic systems. These simple and realistic models can facilitate the observation of BICs in quantum mechanics, which to date has not been conclusively done. More importantly, we have demonstrated two new properties unique to separable BICs: the ability to control the direction of emission of the resonance using perturbations, and also the ability to control the dimensionality of the resulting resonance. This may lead to two applications. In the first, perturbations are used as a switch which can couple waves into a cavity, store them, and release them in a fixed direction. In the second, the number of dimensions of confinement of a wave can be switched between one, two, and three by exploiting perturbation parity. The property of dimensional and directional control of resonant radiation serves as another potential advantage of BICs over traditional methods of localization. \section{Acknowledgments} The authors would like to acknowledge Prof. Steven G. Johnson, Prof. Marc Kastner, Prof. Silvija Grade\v{c}ak, Wujie Huang, and Dr. Wenlan Chen for useful discussions and advice. Work of M.S. was supported as part of S3TEC, an EFRC funded by the U.S. DOE, under Award Number DE-SC0001299 / DE-FG02-09ER46577. This work was also supported in part by the MRSEC Program of the National Science Foundation under award number DMR-1419807. This work was also supported in part by the U. S. Army Research Laboratory and the U. S.
Army Research Office through the Institute for Soldier Nanotechnologies, under contract number W911NF-13-D-0001. \section{Author Contributions} N.R. came up with the idea for separable BICs, and controlling directionality and dimensionality with them, in addition to doing the numerical simulations. C.W.H., B.Z., H.B., and M.S. helped come up with readily observable physical examples of separable BICs in addition to helping develop the concept of separable BICs. M.S. and J.D.J. provided critical supervision for the work. N.R. wrote the manuscript with critical reading and editing from B.Z., C.W.H., H.B., J.D.J., and M.S. \begin{figure*} \caption{A schematic illustration demonstrating the concept of a separable BIC in two dimensions.} \end{figure*} \begin{figure*} \caption{(a) A separable potential which is a sum of a purely $x$-dependent Gaussian well and a purely $y$-dependent Gaussian well. (b) The relevant states of the spectrum of the $x$-potential, $y$-potential, and total potential. (c) A BIC supported by this double well. (d,e) Continuum states degenerate in energy to the BIC. (f) A $y$-delocalized continuum state resulting from an even-$y$-parity perturbation of the BIC supporting potential. (g) An $x$-delocalized continuum state resulting from an odd-$x$-parity perturbation.} \end{figure*} \begin{figure*} \caption{Separable physical systems with BICs. (a) A photorefractive optical crystal whose index is weakly modified by two detuned intersecting light sheets with different intensities. (b) An optical potential formed by the intersection of three slightly detuned light sheets with different intensities. (c) A tight-binding lattice.} \end{figure*} \end{document}
\begin{document} \title{Quantum Entanglement under Non-Markovian Dynamics of Two Qubits Interacting with a Common Electromagnetic Field} \author{C.~Anastopoulos$^{1}$\footnote{Corresponding author. Email address: [email protected]}, S.~Shresta$^{2,3}$\footnote{Present address: MITRE Corporation 7515 Colshire Drive, MailStop N390 McLean, VA 22102. Email Address: sanjiv$\[email protected]}, and B.~L. Hu$^{2}$\footnote{Email address: [email protected]}} \affiliation{$^1$Department of Physics, University of Patras, 26500 Patras, Greece, \\$^2$Department of Physics, University of Maryland, College Park, Maryland 20742-4111 \\ $^3$NIST, Atomic Physics Division, Gaithersburg, MD 20899-8423} \date{January 31, 2007} \begin{abstract} We study the non-equilibrium dynamics of a pair of qubits made of two-level atoms separated in space by a distance $r$ and interacting with one common electromagnetic field but not directly with each other. Our calculation makes a weak coupling assumption but no Born or Markov approximation.
We derive a non-Markovian master equation for the evolution of the reduced density matrix of the two-qubit system after integrating out the electromagnetic field modes. It contains a Markovian part with a Lindblad-type operator and a non-Markovian contribution, the physics of which is the main focus of this study. We use the concurrence function as a measure of quantum entanglement between the two qubits. Two classes of states are studied in detail: Class A is a one-parameter family of states which are superpositions of the highest energy $|I \rangle \equiv |11 \rangle$ and lowest energy $|O \rangle \equiv |00 \rangle$ states, {\it viz.}, $|A \rangle \equiv \sqrt p|I \rangle + \sqrt {(1-p)} |O \rangle$, with $ 0 \le p \le 1 $; and Class B states $|B \rangle$ are linear combinations of the symmetric $|+ \rangle = \frac{1}{\sqrt 2} (|01 \rangle + |10 \rangle)$ and the antisymmetric $|- \rangle = \frac{1}{\sqrt 2} (|01 \rangle - |10 \rangle)$ Bell states. We obtain similar behavior for the Bell states as in earlier results derived by using the Born-Markov approximation \cite{FicTan06} on the same model. However, for the Class $|A \rangle$ states the behavior is qualitatively different: under the non-Markovian evolution we do not see sudden death of quantum entanglement and subsequent revivals, except when the qubits are sufficiently far apart. (The existence of sudden death was first reported for two qubits in two disjoint cavity electromagnetic fields \cite{YuEbePRL}, and the dark period and revival were found from calculations using the Born-Markov approximation \cite{FicTan06}.) For an initial Bell state, our findings based on non-Markovian dynamics agree with those obtained under the Born-Markov approximation. We provide explanations for such differences of behavior both between these two classes of states and between the predictions from the Markovian and non-Markovian dynamics. 
We also study the decoherence of this two-qubit system and find that the decoherence rate in the case of one qubit initially in an excited state does not change significantly with the qubit separation, whereas it does when one qubit is initially in the ground state. Furthermore, when the two qubits are close together, the coherence of the whole system is preserved longer than in the single-qubit case or when the two qubits are far apart. \end{abstract} \maketitle \section{Introduction} Investigation of quantum entanglement is of both practical and theoretical significance: It is viewed as a basic resource for quantum information processing (QIP) \cite{NielsenChuang} and it is a basic issue in understanding the nature of nonlocality in quantum mechanics \cite{EPR,Bell,GHZ}. However, even its very definition and accurate characterization are by no means easy, especially for multi-partite states (see, e.g., \cite{PeresBook,KarolBook,AlickiBook,Wootters,Bennett,Wer,VidWer}). Nonetheless there are useful criteria proposed for the separability of a bipartite state, pure and mixed (e.g., \cite{Peres,Horod,Simon,Duan,Barnum,Brennen,HHH}), theorems proven (e.g., \cite{Ruskai,AlickiHorod}), and new mathematical tools introduced (e.g., \cite{Brody,Levay}), which add to the advances of the last decade in this new endeavor \cite{Knill}. Realistic quantum systems designed for QIP cannot avoid interactions with their environments, which can alter their quantum coherence and entanglement properties. Thus quantum decoherence and disentanglement are two essential obstacles to overcome in the design of quantum computers and the implementation of QIP. Environment-induced decoherence in the context of QIP has been under study for over a decade \cite{PazZurRev}, and studies of environment-induced disentanglement have seen a rapid increase in recent years. There are now experimental proposals to measure finite-time disentanglement induced by a reservoir \cite{Franca}. 
The relation between decoherence and disentanglement is an interesting one because both are attributable to the decay of quantum interference in the system upon interaction with an environment. (See, e.g., \cite{VKG,RajRendell,Diosi,Dodd,DoddHal}.) In addition to the mathematical investigations mentioned above, which could provide rather general characterizations of quantum entanglement, detailed studies of physical models targeting actual designs of quantum computer components can add precious insight into its behavior in concrete settings. Two classes of models relevant to condensed matter and atomic-optical QIP schemes are of particular interest to us. The first class consists of the quantum Brownian motion model (QBM) and the spin-boson model (SBM). Quantum decoherence has been studied in detail in both models, and results on quantum disentanglement are also appearing (see \cite{VKG,Diosi,Dodd,DoddHal} for QBM under high temperature and negligible dissipation, and \cite{CHY} for an attempt towards the full non-Markovian regime). The second class of models describes atoms or ions in a cavity with a quantum electromagnetic field at zero or finite temperature. The model consists of two two-level atoms (2LA) in an electromagnetic field (EMF). For a primary source on this topic, see, e.g., \cite{Agarwal}. For a more recent description of its dynamics under the Born-Markov approximation, see the review \cite{FicTan}. \subsection{Two-atom entanglement via an electromagnetic field} Quantum decoherence and entanglement between one 2LA and an EMF has been treated by us and many other authors earlier \cite{AH,SADH}, and recently by Cummings and Hu towards the strong coupling regime \cite{CumHu06}; these works provide insight into how the atom-field interaction affects their entanglement. There is a recent report of exact solutions for a 2LA in an EMF using the underlying pseudo-supersymmetry of the system \cite{SSG}. 
In the two-atom-EMF model, each of the two atoms can be assumed to interact only with its own cavity EMF or with a common EMF, and the atoms can also interact with each other directly. The noninteracting case of separate fields was first studied by Yu and Eberly \cite{YuEbePRL,YuEbePRB}, where `sudden death' of quantum entanglement was sighted. The noninteracting case of a common field was studied recently by Ficek and Tanas \cite{FicTan06} and by An {\it et al.} \cite{AnWangLuo}. Quantum decoherence of $N$ qubits in an electromagnetic field was studied by Palma, Suominen and Ekert \cite{PSK}. For entanglement of ions in cavities, see, e.g., \cite{BPH}. For the purpose of quantum information processing, we have emphasized earlier in our studies of quantum decoherence that it is absolutely essential to keep track fully of the mutual influence of, or the interplay between, the system and the environment. If one chooses to focus only on the system dynamics, one needs to take into consideration the back-action of the environment, and vice versa. In our prior work \cite{AH,SADH}, we used the influence functional formalism with a Grassmannian algebra for the qubits (system) and a coherent state path integral representation for the EMF (environment). Here, we employ a more standard operator method through perturbation theory, because the assumption of an initial vacuum state for the EMF allows a full resummation of the perturbative series, thus leading to an exact and closed expression for the evolution of the reduced density matrix of the two qubits. This approach incorporates the back-action of the environment on the system self-consistently and in so doing generates non-Markovian dynamics. We shall see that these features make a fundamental difference in the depiction of the evolution of quantum entanglement in the qubit dynamics. 
\subsection{The importance of including back-action self-consistently} Since quantum entanglement is a more delicate quantity to track down than decoherence, an accurate description is even more crucial. For this, one needs to pay extra attention to back-actions. For example, in the case of two 2LA (system) in a cavity EMF (environment), the two parties are equally important. This means that we should include both the back-action of the field on the atoms, and the atoms on the field. In a more complete treatment as attempted here, we obtain results qualitatively different from earlier treatments where the back-action is not fully included or properly treated \cite{FicTan}. Some special effects like `sudden death' \cite{YuEbePRL} can in this broader context be seen as consequences only of rather special arrangements: Each atom interacting with its own EMF precludes the fields from talking to each other and in turn cuts off the atoms' natural inclination (by the dictum of quantum mechanics) to be entangled. In effect, this is only a limiting case of the full dynamics we explored here for the two-qubit entanglement via the EMF. This limit corresponds to the qubits being separated by distances much larger than the correlation length characterizing the total system. For a wide range of spatial separations within the correlation length, entanglement is robust: Our results for the full atom-field dynamics reveal that there is no sudden death. 
\subsection{Non-Markovian dynamics from back-action} It is common knowledge in nonequilibrium statistical mechanics \cite{Zwanzig} that for two interacting subsystems the two ordinary differential equations governing each subsystem can be written as an integro-differential equation governing one such subsystem, thus rendering its dynamics non-Markovian, with the memory of the other subsystem's dynamics registered in the nonlocal kernels (which are responsible for the appearance of dissipation and noise should the other subsystem possess a much greater number of degrees of freedom and be coarse-grained in some way). Thus inclusion of back-action self-consistently in general engenders non-Markovian dynamics. Invoking the Markov approximation as is usually done may lead to the loss of valuable information, especially for quantum systems. These assumptions need to be scrutinized carefully with due consideration of the different time scales in the system and the specific physical quantities that are of interest in the study. For monitoring the evolution of quantum entanglement, which is usually a more delicate process than coherence, if one lacks detailed knowledge of how the different important processes interplay, our experience is that it is safer not to put in any ad hoc assumption at the beginning (e.g., Markovian approximation, high temperature, white noise) but to start with a first-principles treatment of the dynamics (which is likely non-Markovian) involving all subsystems concerned and respecting full self-consistency. This is because entanglement can be artificially and unknowingly curtailed or removed in these ad hoc assumptions. What is described here is not a procedural, but a substantive issue, if one seeks to coherently follow or manipulate any quantum system, as in QIP, because doing it otherwise can generate quantitatively inaccurate or even qualitatively wrong results. 
Thus the inclusion of backreaction (which depends on the type of coupling and the features of the environment) usually leads to non-Markovian dynamics \footnote{This does not preclude the possibility that certain types of backreaction effects, when included, lead to the same type of open system dynamics as the original (test-field) dynamics, which, if already in the Lindblad form, remains Markovian (e.g., renormalization of the coupling constants, invoking a mean field approximation).}. Also, under extreme conditions such as imposing an infinite cutoff frequency and at high temperatures, the dynamics of, say, a quantum harmonic oscillator bilinearly coupled to an Ohmic bath can become Markovian \cite{CalLeg83}. Other factors leading to or effecting non-Markovian behavior include the choice of special initial conditions. For example, the factorizable initial condition introduces a fiducial or special choice of time into the dynamics which destroys time-homogeneity. A word about terminology might be helpful here: One usually refers to Markovian dynamics as that governed by a master equation with constant-in-time coefficients, i.e., described by a Lindblad operator, and non-Markovian for all other types of dynamics. A more restricted condition limits the definition of non-Markovian dynamics to cases with non-trivial (nonlocal in time) integral kernels appearing in the master equation. This more stringent definition would refer to dynamics (depicted by master equations containing coefficients which are) both time-homogeneous and non-time-homogeneous as Markovian. We use the first and more common convention of terminology, in which the master equation (\ref{Lindbladlike}), which is local in time but non-time-homogeneous, would be non-Markovian. The Markovian regime emerges in the limit when the two qubits are far separated. 
This feature is similar to the HPZ master equation \cite{HPZ} for quantum Brownian motion, where Markovian (time-homogeneous) dynamics appears only in specific limiting conditions (high temperature and Ohmic distribution of environmental modes, as alluded to above). Our present study of the two-qubit (2qb)-EMF system is also aimed at addressing a common folklore, namely, that in quantum optics one does not need to worry about non-Markovian effects. We will see that there is a memory effect residing in the off-diagonal components of the reduced density matrix of the two-qubit system, which comes from virtual photon exchange processes mediated by the field and which depends on the qubit separation. Perhaps the simplest yet strongest reason for the need to take non-Markovian effects seriously is that results from the Markovian approximation are incomplete and lead to qualitatively wrong predictions. \subsection{Relation to prior work and organization of this paper} In this paper we study the non-Markovian dynamics of a pair of qubits (2LA) separated in space by a distance $r$ and interacting with one common electromagnetic field (EMF) through a Jaynes-Cummings type interaction Hamiltonian. We use the concurrence function as a measure of quantum entanglement between the two qubits. The same model was studied before in detail by Ficek and Tanas \cite{FicTan} using the Born-Markov approximation. In a more recent paper \cite{FicTan06} they show the existence of dark periods and revival of quantum entanglement, in contrast to the findings of Yu and Eberly, which assume two qubits in disjoint EMFs. Our calculation makes a weak coupling assumption and ignores two- and higher-photon exchanges between the qubits, but it makes no Born or Markov approximation. We derive a non-Markovian master equation, which differs from the usual one of the Lindblad type: it contains extra terms that correspond to off-diagonal elements of the density matrix propagator. 
We concentrate on two classes of states: superpositions of the highest and lowest energy states, and the usual antisymmetric $|- \rangle$ and symmetric $|+\rangle $ Bell states \cite{Bell}; we observe very different behavior for them. These are described in detail in the Discussions section. The difference between our results and those of Ref. \cite{FicTan06} highlights the effect of non-Markovian (with memory) evolution of quantum entanglement. In short, we find similar behavior in the Class B (Bell) states but qualitatively different behavior in the evolution of Class A states. Ref. \cite{FicTan06} found that their evolution leads generically to sudden death of entanglement and a subsequent revival. In our more complete treatment of the atom-field dynamics we indeed see the former effect present for large values of the inter-qubit distance. However, sudden death is absent for short distances, while there is no regime in which a revival of entanglement can take place. This calls for caution. Another set of papers close to our work reported here is that of \cite{JakJam}, who considered two 2LA in an infinite temperature field bath. When the atoms are separated by a large distance the authors assume that they are located inside two independent baths. (The severance of the field is subject to the same criticism as above: a small but finite quantum entanglement cannot be equated to zero, because the small amount can later grow.) For these conditions and under the Markovian approximation, the time evolution of the two-atom system is given by the ergodic dynamical semi-group. They ignore without justification the effect of distance on the interaction between the qubits. A paper of interest not directly related to the present model but which does show the dependence of the disentanglement rate on distance, like ours reported here, is that by Roszak and Machnikowski \cite{RosMac}. 
They consider a system of excitons with a different coupling, and with a very different infrared behavior of the bath modes. The latter seems not to be relevant to the two-atom case here. This paper is organized as follows: Section 2 contains the main derivation. We write down the Hamiltonian for two 2-level atoms (2LA) interacting with a common electromagnetic field (EMF) at zero temperature, and we compute the relevant matrix elements for the propagator of the total system by resummation of the perturbative series (Appendix A). We then determine the evolution of the reduced density matrix of the atoms, which is expressed in terms of seven functions of time. We compute these functions using an approximation that amounts to keeping the contribution of the lowest loop order for the exchange of photons between the qubits. In Section 3 we examine the evolution of the reduced density matrix for two classes of initial states. We then describe the time evolution of quantum entanglement, and its dependence on the spatial separation, in these states via the concurrence plotted for some representative cases. In Section 4 we study the decoherence of this system when the two qubits are initially disentangled. We consider two cases, corresponding to one of the qubits being initially in the ground state or in an excited state. We compare these results with the single-qubit cases and highlight the lessening of decoherence due to the presence of the other qubit(s). In Section 5 we discuss and compare our results on disentanglement with the work of Yu and Eberly for two 2LA in separate EMF baths, and with the work of Ficek and Tanas for two 2LA in a common EMF bath under the Born-Markov approximation. We identify the point of departure of quantum dynamics under the Markovian approximation from the full non-Markovian dynamics and thereby demonstrate the limitations of the Born-Markov approximation. 
Finally, we discuss the domain of validity of the rotating wave approximation in describing these systems. In Appendix C we sketch an alternative derivation through the Feynman-Vernon influence functional technique, in which Grassmann variables are employed for the study of the atomic degrees of freedom. \section{Two Atoms Interacting via a Common Electromagnetic Field} \subsection{The Hamiltonian} We consider two 2-level atoms (2LA), acting as two qubits, labeled 1 and 2, and an electromagnetic field described by the free Hamiltonian $\hat{H}_0$ \begin{eqnarray} \hat{H}_0 = \hbar \sum_{\bf k} \omega_{\bf k}\hat{b}^{\dagger}_{\bf k} \hat{b}_{\bf k} +\hbar\omega_o \hat{S}_+^{(1)} \hat{S}_-^{(1)} +\hbar\omega_o \hat{S}_+^{(2)} \hat{S}_-^{(2)} \end{eqnarray} where $\omega_{\bf k}$ is the frequency of the ${\bf k}^{\mbox{th}}$ electromagnetic field mode and $\omega_o$ the atomic frequency between the two levels of the atom, assumed to be the same for the two atoms. The electromagnetic field creation~(annihilation) operator is $\hat{b}^{\dagger}_{\bf k}$~($\hat{b}_{\bf k}$), while $\hat{S}_+^{(n)}$~($\hat{S}_-^{(n)}$) are the spin raising~(lowering) operators for the $n^{\mbox{th}}$ atom. We define the relative position vector pointing from atom $1$ to atom $2$ as ${\bf r} ={\bf r}_2 -{\bf r}_1$, and we assume without loss of generality that ${\bf r}_1 + {\bf r}_2 = 0$. The two 2LAs do not interact with each other directly but only through the common electromagnetic field, via the interaction Hamiltonian \begin{eqnarray} \hat{H}_I = \hbar\sum_{\bf k} g_{\bf k}\left( \hat{b}^{\dagger}_{\bf k} ( e^{-i{\bf k}\cdot{\bf r}/2} \hat{S}_{-}^{(1)} + e^{i{\bf k}\cdot{\bf r}/2} \hat{S}_{-}^{(2)} ) + \hat{b}_{\bf k} ( e^{i{\bf k}\cdot{\bf r}/2} \hat{S}_{+}^{(1)} + e^{-i{\bf k}\cdot{\bf r}/2} \hat{S}_{+}^{(2)} ) \right), \label{Hint} \end{eqnarray} where $g_{\bf k } = \frac{\lambda}{\sqrt{\omega_{\bf k}}}$, $\lambda$ being the coupling constant. We have assumed that the dipole moments of the atoms are parallel. 
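As a sanity check on this setup, the Hamiltonian above can be built numerically for a small number of field modes with a photon-number cutoff. The sketch below is not part of the paper; the mode frequencies, the ${\bf k}\cdot{\bf r}/2$ phases and the cutoff are illustrative assumptions. It verifies that the Jaynes-Cummings type coupling (\ref{Hint}) conserves the total excitation number, the structure underlying the sector-by-sector matrix elements computed in the next subsection.

```python
import numpy as np

# Finite-mode sketch (not from the paper) of H = H0 + HI: two qubits coupled
# to two field modes with a photon-number cutoff. All numerical values
# (mode frequencies, k.r/2 phases, cutoff) are illustrative assumptions.
n_modes, n_ph = 2, 2
omega_o, lam = 1.0, 0.1
omega_k = [0.9, 1.1]          # mode frequencies (illustrative)
kr = [0.3, -0.5]              # k.r/2 phases (illustrative)

Iq, Im = np.eye(2), np.eye(n_ph)
Sm = np.array([[0, 1], [0, 0]], dtype=complex)               # S_-: |1> -> |0>
Sp = Sm.conj().T                                             # S_+
b = np.diag(np.sqrt(np.arange(1, n_ph)), 1).astype(complex)  # truncated mode op

def emb(q1, q2, mode_ops):
    """Kronecker embedding: qubit1 x qubit2 x mode1 x mode2."""
    out = np.kron(q1, q2)
    for m in mode_ops:
        out = np.kron(out, m)
    return out

def mode_op(j, op):
    ops = [Im] * n_modes
    ops[j] = op
    return emb(Iq, Iq, ops)

# free Hamiltonian (hbar = 1)
H0 = omega_o * (emb(Sp @ Sm, Iq, [Im] * n_modes)
                + emb(Iq, Sp @ Sm, [Im] * n_modes))
for j in range(n_modes):
    H0 = H0 + omega_k[j] * mode_op(j, b.conj().T @ b)

# interaction: sum_k g_k ( b_k^dag (e^{-ik.r/2} S1- + e^{ik.r/2} S2-) + h.c. )
half = np.zeros_like(H0)
for j in range(n_modes):
    g = lam / np.sqrt(omega_k[j])
    half = half + g * mode_op(j, b.conj().T) @ (
        np.exp(-1j * kr[j]) * emb(Sm, Iq, [Im] * n_modes)
        + np.exp(1j * kr[j]) * emb(Iq, Sm, [Im] * n_modes))
HI = half + half.conj().T
H = H0 + HI

# total excitation number: sum_k b_k^dag b_k + S+^(1) S-^(1) + S+^(2) S-^(2)
N = (emb(Sp @ Sm, Iq, [Im] * n_modes) + emb(Iq, Sp @ Sm, [Im] * n_modes)
     + sum(mode_op(j, b.conj().T @ b) for j in range(n_modes)))
comm = H @ N - N @ H   # vanishes: the coupling conserves excitation number
```

Because each term $b^{\dagger}_{\bf k} S_-^{(n)}$ removes one atomic excitation and adds one photon, $[H, N] = 0$ even in the truncated mode space.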
The total Hamiltonian of the atom-field system is \begin{equation} \hat{H} = \hat{H}_0 + \hat{H}_I. \end{equation} \subsection{Perturbative expansion and resummation} We assume that at $t = 0$ the state of the combined system of atoms+field is factorized and that the initial state of the EMF is the vacuum $|O \rangle$. For this reason we need to identify the action of the evolution operator $e^{-i\hat{H}t}$ on vectors of the form $|O \rangle \otimes | \psi \rangle$, where $|\psi \rangle$ is a vector on the Hilbert space of the two 2LA's. For this purpose, we use the resolvent expansion of the Hamiltonian \begin{eqnarray} e^{-i\hat{H}t} = \int \frac{dE e^{-iE t}}{E - \hat{H} + i \eta} \end{eqnarray} and we expand \begin{eqnarray} (E - \hat{H})^{-1} = (E - \hat{H}_0)^{-1} + (E - \hat{H}_0)^{-1} \hat{H}_I (E - \hat{H}_0)^{-1} \nonumber \\ + (E - \hat{H}_0)^{-1} \hat{H}_I (E - \hat{H}_0)^{-1} \hat{H}_I (E - \hat{H}_0)^{-1} + \ldots \label{expansion} \end{eqnarray} Of relevance for the computation of the reduced density matrix of the two qubits are matrix elements of the form $\langle z; i',j'| (E - \hat{H})^{-1}| O; i,j \rangle$, where $z$ refers to a coherent state of the EM field and $i, j = 0,1$, the value $i = 0$ corresponding to the ground state of a single qubit and $i = 1$ to the excited state. We compute the matrix elements above through the perturbation expansion (\ref{expansion}). It turns out that we can effect a resummation of the perturbative series and thus obtain an exact expression for the matrix elements--see Appendix A for details of the resummation. 
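The expansion (\ref{expansion}) is a geometric (Neumann) series in $(E-\hat{H}_0)^{-1}\hat{H}_I$, and its convergence for $E$ away from the spectrum is easy to check on a small matrix model. The snippet below is a numerical illustration (the matrices and the value of $E$ are arbitrary assumptions, not taken from the paper): it compares the summed series against direct inversion.

```python
import numpy as np

# Numerical check (not from the paper) that the resolvent expansion
# (E - H)^{-1} = (E - H0)^{-1} + (E - H0)^{-1} HI (E - H0)^{-1} + ...
# converges as a geometric series when ||(E - H0)^{-1} HI|| < 1.
rng = np.random.default_rng(0)

def random_hermitian(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

n = 6
H0 = np.diag(rng.uniform(0.0, 1.0, size=n))   # free part, spectrum in [0, 1]
HI = 0.05 * random_hermitian(n)               # weak interaction (illustrative)
E = 2.0 + 0.1j                                # E - H0 safely invertible here

G0 = np.linalg.inv(E * np.eye(n) - H0)
exact = np.linalg.inv(E * np.eye(n) - H0 - HI)

# sum the series G0 + G0 HI G0 + G0 HI G0 HI G0 + ...
G, term = G0.copy(), G0.copy()
for _ in range(50):
    term = term @ HI @ G0
    G = G + term

err = np.max(np.abs(G - exact))   # machine-precision agreement expected
```

In the paper the analogous series is not truncated but resummed exactly in the vacuum sector, which is what yields the closed matrix elements below.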
The non-vanishing matrix elements are the following \begin{eqnarray} \langle z; 0,0| (E - \hat{H})^{-1}| O; 0, 1 \rangle &=& \sum_{\bf k} \frac{g_{\bf k} z^*_{\bf k} e^{ i \frac{{\bf k} \cdot {\bf r}}{2}}}{(E - \omega_o - \alpha(E) - \beta(E,r)e^{i {\bf k} \cdot {\bf r}})(E - \omega_{\bf k})} \\ \langle z; 0,1| (E - \hat{H})^{-1}| O; 0, 1 \rangle &=& \frac{1}{2}\left[ \frac{1}{E - \omega_o - \alpha(E) - \beta(E,r)} \right. \nonumber \\ &+& \left. \frac{1}{E - \omega_o - \alpha(E) + \beta(E,r)} \right] \\ \langle z; 1,0| (E - \hat{H})^{-1}| O; 0, 1 \rangle &=& \frac{1}{2}\left[ \frac{1}{E - \omega_o - \alpha(E) - \beta(E,r)} \right. \nonumber \\ &-& \left. \frac{1}{E - \omega_o - \alpha(E) + \beta(E,r)} \right] \\ \langle z; 0,0| (E - \hat{H})^{-1}| O; 0, 0 \rangle &=& E^{-1} \\ \langle z; 1,1| (E - \hat{H})^{-1}| O; 1, 1 \rangle &=& \frac{1}{E - 2\omega_o - 2 \alpha (E - \omega_o)- f(E,r)} \\ \langle z; 0,0| (E - \hat{H})^{-1}| O; 1, 1 \rangle &=& \sum_{{\bf k k'}} \frac{\hat{H}_{{\bf k k'}} z^*_{{\bf k}} z^*_{{\bf k'}}}{E - 2\omega_o - 2 \alpha (E - \omega_o)- f(E,r)}\\ \left( \begin{array}{cc} \langle z; 0,1| (E - \hat{H})^{-1}| O; 1, 1 \rangle \\ \langle z; 1, 0| (E - \hat{H})^{-1}| O; 1, 1 \rangle \end{array} \right) &=& \sum_{\bf k k'} \frac{g_{\bf k'} z^*_{\bf k}}{(E - 2 \omega_o)(E - \omega_o - \omega_{\bf k'})} \left( \begin{array}{cc} e^{-i\frac{{\bf k} \cdot {\bf r}}{2}} \\ e^{i\frac{{\bf k} \cdot {\bf r}}{2}} \end{array} \right) (1 - L)^{-1}_{\bf kk'} \end{eqnarray} In the equations above the functions $\alpha(E), \beta(E,r)$ are \begin{eqnarray} \alpha(E) :&=& \sum_{\bf k} \frac{g_{\bf k}^2}{E - \omega_{\bf k}} \\ \beta(E,r) :&=& \sum_{\bf k} \frac{g_{\bf k}^2}{E - \omega_{\bf k}} e^{i{\bf k} \cdot {\bf r} }. \end{eqnarray} The definitions of the kernel $H_{\bf k k'}$ and of the function $f$ involve complicated expressions. 
However, the term involving $H_{\bf kk'}$ does not contribute to the evolution of the reduced density matrix, while the function $f(E,r)$ is of order $\lambda^4$ and it can be ignored in the approximation we effect in Sec. II.E. Thus for the purpose of this investigation, the explicit definitions of $H$ and $f$ are not needed, and hence not given here. Finally, the matrix $L$ is defined as \begin{eqnarray} L := \left( \begin{array}{cc} \Xi & \Theta \\ \bar{\Theta} & \bar{\Xi} \end{array} \right), \end{eqnarray} where $\Xi$ and $\Theta$ are matrices on the space of momenta \begin{eqnarray} \Xi_{\bf k k'} = \frac{1}{E - \omega_o - \omega_{\bf k}} \left( \alpha(E - \omega_{\bf k}) \delta_{\bf kk'} + g_{\bf k} g_{\bf k'} (\frac{1}{E-2 \omega_o} + \frac{e^{i ({\bf k} - {\bf k'}) \cdot {\bf r}}}{E - \omega_{\bf k} - \omega_{\bf k'}} )\right) \label{Xi} \\ \Theta_{\bf k k'} = \frac{1}{E - \omega_o - \omega_{\bf k}} \left( \beta(E-\omega_{\bf k}, r) \delta_{\bf kk'} + g_{\bf k} g_{\bf k'} (\frac{1}{E-2 \omega_o} + \frac{ 1}{E - \omega_{\bf k} - \omega_{\bf k'}} )\right), \label{Theta} \end{eqnarray} and the overbar denotes complex conjugation. \subsection{The matrix elements of the propagator} The next step is to Fourier transform the matrix elements of the resolvent in order to obtain the matrix elements of the evolution operator. Explicitly, \begin{eqnarray} \langle z; 0, 0| e^{-i\hat{H}t}| O; 0, 1 \rangle &=& \sum_{\bf k} e^{i {\bf k}\cdot {\bf r}/2} z^*_{\bf k} s_{\bf k}(t) \label{00}\\ \langle z; 0, 1| e^{-i\hat{H}t}| O; 0, 1 \rangle &=& \int \frac{dE e^{-iEt}}{2} \left[ \frac{1}{E - \omega_o - \alpha(E) - \beta(E,r)} \right. \nonumber \\ &+& \left.\frac{1}{E - \omega_o - \alpha(E) + \beta(E,r)} \right] =: v_+(t) \label{iv+}\\ \langle z; 1, 0| e^{-i\hat{H}t}| O; 0, 1 \rangle &=& \int \frac{dE e^{-iEt}}{2} \left[ \frac{1}{E - \omega_o - \alpha(E) - \beta(E,r)} \right. \nonumber \\ &-& \left. 
\frac{1}{E - \omega_o - \alpha(E) + \beta(E,r)} \right] =: v_-(t) \label{iv-}\\ \langle z; 0, 0| e^{-i\hat{H}t}| O; 0, 0 \rangle &=& 1 \\ \langle z; 1, 1| e^{-i\hat{H}t}| O; 1, 1 \rangle &=& \int \frac{dE e^{-iEt}}{E - 2 \omega_o - 2 \alpha(E - \omega_o) -f(E, r)} =: u(t) \label{iu}\\ \langle z; 0, 0| e^{-i\hat{H}t}| O; 1, 1 \rangle &=& \int dE e^{-iEt} \sum_{\bf k k'} \frac{ \hat{H}_{\bf k k'} z^*_{\bf k} z^*_{\bf k'} }{ E - 2 \omega_o - 2 \alpha(E - \omega_o) -f(E, r)} \\ \left( \begin{array}{cc} \langle z; 0, 1| e^{-i\hat{H}t}| O; 1, 1 \rangle \\ \langle z; 1, 0| e^{-i\hat{H}t}| O; 1, 1 \rangle \end{array} \right) &=& \sum_{\bf k} z^*_{\bf k} \left( \begin{array}{cc} e^{-i\frac{{\bf k} \cdot {\bf r}}{2}} \nu_{\bf k}(t) \\ e^{i\frac{{\bf k} \cdot {\bf r}}{2}} \nu'_{\bf k}(t) \label{11} \end{array} \right), \end{eqnarray} where we defined the functions $s_{\bf k}(t), \nu_{\bf k}(t), \nu_{\bf k}'(t)$ as \begin{eqnarray} s_{\bf k}(t) &=& \int \frac{dE e^{-iEt}}{(E-\omega_o - \alpha(E) - \beta(E,r) e^{i {\bf k} \cdot {\bf r}})(E - \omega_{\bf k})} \label{sk}\\ \left( \begin{array}{cc} \nu_{\bf k}(t) \\ \nu'_{\bf k}(t) \end{array} \right) &=& \int \frac{dE e^{-iEt}}{E - 2 \omega_o} \sum_{\bf k'} (1 - L)^{-1}_{\bf kk'} \left( \begin{array}{cc} \frac{g_{\bf k'}}{E - \omega_o - \omega_{\bf k'}} \\ \frac{g_{\bf k'}}{E - \omega_o - \omega_{\bf k'}} \end{array} \right) \label{nu} \end{eqnarray} \subsection{The reduced density matrix} We next compute the elements of the reduced density matrix for the qubit system by integrating out the EM field degrees of freedom \begin{eqnarray} \rho^{ij}_{i'j'}(t) = \sum_{i_0,j_0,i'_0,j'_0} \rho^{i_0,j_0}_{i'_0,j'_0} (0) \int [dz] [dz^*] \langle O; i'_0j'_0|e^{i\hat{H}t}| z; i', j' \rangle \langle z,i,j|e^{-i\hat{H}t} |O, i_0,j_0 \rangle, \label{rdm} \end{eqnarray} where $[dz]$ is the standard Gaussian integration measure for the coherent states of the EM field. Substituting Eqs. 
(\ref{00}-\ref{11}) into (\ref{rdm}) we obtain through a tedious but straightforward calculation the elements of the reduced density matrix \begin{eqnarray} \rho^{11}_{11}(t) &=& \rho^{11}_{11}(0) |u|^2(t) \label{1111} \\ \rho^{11}_{01}(t) &=& \rho^{11}_{01}(0) u(t) v^*_+(t) + \rho^{11}_{10}(0) u(t) v^*_-(t) \\ \rho^{11}_{10}(t) &=& \rho^{11}_{10}(0) u(t) v^*_+(t) + \rho^{11}_{01}(0) u(t) v^*_-(t) \\ \rho^{11}_{00}(t) &=& \rho^{11}_{00}(0) u(t) \\ \rho^{01}_{00}(t) &=& \rho^{01}_{00}(0) v_+(t) + \rho^{10}_{00}(0) v_-(t) + \rho^{11}_{01}(0) \mu_1(t) + \rho^{11}_{10}(0) \mu_2(t) \\ \rho^{10}_{00}(t) &=& \rho^{10}_{00}(0) v_+(t) + \rho^{01}_{00}(0) v_-(t) + \rho^{11}_{01}(0) \mu^*_2(t) + \rho^{11}_{10}(0) \mu^*_1(t) \\ \rho^{01}_{01}(t) &=& \rho^{01}_{01}(0) |v_+|^2(t) + \rho^{01}_{10}(0) v_+(t) v^*_-(t) + \rho^{10}_{10}(0) |v_-|^2(t) \nonumber \\ &+& \rho^{10}_{01}(0) v_-(t) v^*_+(t) + \rho^{11}_{11}(0) \kappa_1(t) \\ \rho^{01}_{10}(t) &=& \rho^{01}_{10}(0) |v_+|^2(t) + \rho^{10}_{01}(0) |v_-|^2(t) + \rho^{01}_{01}(0) v_+(t) v_-^*(t)\nonumber \\ &+& \rho^{10}_{10}(0) v_-(t) v^*_+(t) + \rho^{11}_{11}(0) \kappa_2(t) \\ \rho^{00}_{00}(t) &=& 1 - \rho^{11}_{11}(t) - \rho^{01}_{01}(t) - \rho^{10}_{10}(t) \label{0000} \end{eqnarray} where \begin{eqnarray} \mu_1(t) &=& \sum_{\bf k} g_{\bf k} \nu_{\bf k}(t) s^*_{\bf k}(t) \label{defm1} \\ \mu_2(t) &=& \sum_{\bf k} g_{\bf k} \nu_{\bf k}(t) s^*_{\bf k}(t) e^{-i {\bf k} \cdot {\bf r}} \label{defm2}\\ \kappa_1(t) &=& \sum_{\bf k} |\nu_{\bf k}|^2(t) \label{defk1}\\ \kappa_2(t) &=& \sum_{\bf k} \nu_{\bf k}(t) \nu'^*_{\bf k}(t) e^{- i {\bf k} \cdot {\bf r}}, \label{defk2} \end{eqnarray} and the functions $u(t), v_{\pm}(t)$ were defined in Eqs. (\ref{iu}), (\ref{iv+}) and (\ref{iv-}). \subsection{Explicit forms for the evolution functions} Eqs. (\ref{1111}-\ref{0000}) provide an {\em exact} expression for the evolution of the reduced density matrix for the system of two qubits interacting with the EM field in the vacuum state. 
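For the initial state $|11\rangle$ the map above closes on the populations alone, which makes for a quick consistency check. The sketch below uses the explicit forms of $|u|^2$ and $\kappa_1 = \Gamma_0 \kappa$ derived later in this section; the numerical rates are illustrative, and $\rho^{10}_{10} = \rho^{01}_{01}$ is assumed here by the $1 \leftrightarrow 2$ exchange symmetry (an assumption of this sketch, not an equation quoted from the paper).

```python
import numpy as np

# Populations for the initial state |11>: rho^11_11 = |u|^2, rho^01_01 = kappa_1,
# rho^10_10 = kappa_1 (1 <-> 2 exchange symmetry, assumed here), and
# rho^00_00 = 1 - rest.  Gamma_0, Gamma_r values are illustrative
# (Gamma_r < Gamma_0 always, since sin(x)/x < 1).
Gamma_0, Gamma_r = 1.59e-3, 7.2e-4

def populations(t):
    u2 = np.exp(-4.0 * Gamma_0 * t)                 # |u(t)|^2
    kappa = (np.exp(-2.0 * Gamma_0 * t)
             * (np.exp(-Gamma_0 * t) - np.exp(-Gamma_r * t)) ** 2
             / (Gamma_0 - Gamma_r))
    p11 = u2
    p01 = Gamma_0 * kappa      # kappa_1(t)
    p10 = p01                  # symmetry assumption
    p00 = 1.0 - p11 - p01 - p10
    return p11, p01, p10, p00

# populations(0.0) == (1, 0, 0, 0); the one-excitation population p01 first
# grows (photon emission by one qubit) and then decays back to zero as the
# excitation leaks irreversibly into the field.
```

The nontrivial content of the check is that $\rho^{00}_{00}$, defined as the complement, stays non-negative, i.e. no probability is over-counted between the two-excitation and one-excitation sectors.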
The evolution is determined by seven functions of time $u, v_{\pm}, \kappa_{1,2}, \mu_{1,2}$, for which we have provided the full definitions. To study the details of the qubits' evolution we must obtain explicit forms for these functions. For analytic expressions, we resort to an approximation: assuming weak coupling ($\lambda^2 \ll 1$), we ignore the contribution of all processes that involve the exchange of two or more photons between the two qubits. \subsubsection{The functions $u, v_{\pm}$} With the approximation above, the contribution of the function $f$ drops out from the definition of $u$. Thus we obtain \begin{eqnarray} u(t) &=& \int \frac{dE e^{-iEt}}{E - 2 \omega_o - 2 \alpha(E - \omega_o) } \\ v_{\pm}(t) &=& \int dE e^{-iEt} \frac{1}{2}\left[ \frac{1}{E - \omega_o - \alpha(E) - \beta(E,r)} \pm \frac{1}{E - \omega_o - \alpha(E) + \beta(E,r)} \right]. \end{eqnarray} We evaluate these expressions using an additional approximation. In performing the Fourier transform, we only keep the contribution of the poles in the integral and ignore that coming from a branch cut that appears due to the presence of a logarithm in the exact expression of $\alpha(E)$--see Ref. \cite{AH} for details. We then obtain \begin{eqnarray} u(t) &=& e^{ - 2 i \omega_o t - 2 \Gamma_0 t} \label{u} \\ v_{\pm}(t) &=& \frac{e^{- i \omega_o t - \Gamma_0 t}}{2} \left( e^{ - i \sigma t - \Gamma_r t} \pm e^{i \sigma t + \Gamma_r t} \right). \label{v+-} \end{eqnarray} In the equations above, we have effected a renormalization of the frequency $\omega_o$ by a constant divergent term--see \cite{AH}. 
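The pole-approximation forms (\ref{u}) and (\ref{v+-}) can be coded directly; at $t=0$ they reduce to $u=1$, $v_+=1$, $v_-=0$, as required for the propagator to start from the identity. The parameter values below are purely illustrative; $\Gamma_0$, $\Gamma_r$ and $\sigma$ are the decay rates and frequency shift defined in the next subsection.

```python
import numpy as np

# Direct evaluation of Eqs. (u) and (v+-).  The numerical values of
# omega_o, Gamma_0, Gamma_r, sigma are illustrative assumptions only.
omega_o = 1.0
Gamma_0 = 0.01
Gamma_r = 0.008    # |Gamma_r| < Gamma_0 always, since |sin(x)/x| < 1
sigma   = 0.002

def u(t):
    return np.exp(-2j * omega_o * t - 2 * Gamma_0 * t)

def v_plus(t):
    return 0.5 * np.exp(-1j * omega_o * t - Gamma_0 * t) * (
        np.exp(-1j * sigma * t - Gamma_r * t)
        + np.exp(1j * sigma * t + Gamma_r * t))

def v_minus(t):
    return 0.5 * np.exp(-1j * omega_o * t - Gamma_0 * t) * (
        np.exp(-1j * sigma * t - Gamma_r * t)
        - np.exp(1j * sigma * t + Gamma_r * t))

# At t = 0: u = 1, v_+ = 1, v_- = 0 (identity propagator).  For t > 0,
# |v_+|^2 + |v_-|^2 = exp(-2*Gamma_0*t) * cosh(2*Gamma_r*t) <= 1:
# probability leaks from the one-excitation qubit sector into the field.
```

The inequality in the last comment holds precisely because $\Gamma_r < \Gamma_0$, which is guaranteed by the explicit expressions given below.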
The parameters $\Gamma_0, \Gamma_r$ and $\sigma(r)$ are defined as \begin{eqnarray} \Gamma_0 := - Im \, \alpha(\omega_o) \\ -\sigma(r) + i \Gamma_r := \beta(\omega_o, r), \end{eqnarray} and they read explicitly \begin{eqnarray} \Gamma_0 &=& \frac{\lambda^2 \omega_o}{2 \pi} \\ \Gamma_r &=& \frac{\lambda^2 \sin \omega_o r }{2 \pi r} \\ \sigma(r) &=& \frac{\lambda^2}{2 \pi^2 r} \left[ - \cos \omega_o r \left[\frac{\pi}{2} - Si(\omega_o r)\right] \right. \nonumber \\ &+& \left. \sin \omega_o r \left[\log(e^{\gamma} \omega_o r) + \int_0^{\omega_o r} dz \, \frac{1 - \cos z}{z}\right] \right], \end{eqnarray} where $Si$ is the sine-integral function. The term $\sigma(r)$ is a frequency shift caused by the vacuum fluctuations. It breaks the degeneracy of the two-qubit system and generates an effective dipole coupling between the qubits. In the limit $r \rightarrow 0$, this term becomes infinite. One should recall, however, that the physical range of $r$ is always larger than $a_B$, the Bohr radius of the atoms. As $r \rightarrow \infty$, $\sigma(r) \rightarrow 0$. The constant $\Gamma_0$ corresponds to the rate of emission from individual qubits. It coincides with the rate of emission obtained from the consideration of a single qubit interacting with the electromagnetic field. The function $\Gamma_r$ is specific to the two-qubit system. It arises from Feynman diagrams that involve an exchange of photons between the qubits. Heuristically, it expresses the number of virtual photons per unit time exchanged between the qubits. As such, $\Gamma_r^{-1}$ is the characteristic time-scale for the exchange of information between the qubits. As $r \rightarrow 0$, $\Gamma_r \rightarrow \Gamma_0$, and as $r \rightarrow \infty$, $\Gamma_r \rightarrow 0$. Note that the ratio $\Gamma_r/\Gamma_0 = \frac{\sin \omega_o r}{\omega_o r}$, while smaller than unity, is of the order of unity as long as $r$ is not much larger than $\omega_o^{-1}$. 
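These expressions are straightforward to evaluate numerically: the identity $\int_0^x dz\,(1-\cos z)/z = \gamma + \log x - \mathrm{Ci}(x)$ lets one express $\sigma(r)$ through the standard sine and cosine integrals. The sketch below (with illustrative values of $\lambda$ and $\omega_o$, not taken from the paper) checks the limits quoted above.

```python
import numpy as np
from scipy.special import sici   # returns (Si(x), Ci(x))

lam = 0.1        # coupling constant lambda (illustrative)
omega_o = 1.0    # atomic frequency (illustrative)

def Gamma_0():
    return lam ** 2 * omega_o / (2 * np.pi)

def Gamma_r(r):
    return lam ** 2 * np.sin(omega_o * r) / (2 * np.pi * r)

def sigma(r):
    x = omega_o * r
    si, ci = sici(x)
    gamma = np.euler_gamma
    # int_0^x (1 - cos z)/z dz = gamma + log(x) - Ci(x)
    integral = gamma + np.log(x) - ci
    return lam ** 2 / (2 * np.pi ** 2 * r) * (
        -np.cos(x) * (np.pi / 2 - si)
        + np.sin(x) * (np.log(np.exp(gamma) * x) + integral))

# Gamma_r / Gamma_0 = sin(omega_o r)/(omega_o r): tends to 1 as r -> 0,
# to 0 as r -> infinity, and vanishes exactly at r = n*pi/omega_o;
# sigma(r) diverges as r -> 0 and decays (up to a log) as r -> infinity.
```

The resonance zeros of $\Gamma_r$ at $r = n\pi\omega_o^{-1}$ are the ones discussed in the next paragraph.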
It is interesting to note that $\Gamma_r = 0$ for $r = n \pi \omega_o^{-1}$, where $n$ is an integer. This is a resonant behaviour, similar to that of a classical oscillating dipole when $r = n \lambda/2$, where $\lambda$ is the oscillation wavelength. \subsubsection{The functions $\kappa_{1,2}(t)$} We first compute the functions $\nu_{\bf k}, \nu'_{\bf k}$ of Eq. (\ref{nu}), keeping terms up to second loop order \begin{eqnarray} \left( \begin{array}{cc} \nu_{\bf k}(t) \\ \nu'_{\bf k}(t) \end{array} \right) &=& \int \frac{dE e^{-iEt}}{E - 2 \omega_o} \sum_{\bf k'} \frac{g_{\bf k'}} {E - \omega_o - \omega_{\bf k'}}\left( \begin{array}{cc} \delta_{\bf kk'} + \Xi_{\bf kk'} + \Theta_{\bf kk'} \\\delta_{\bf kk'} + \bar{\Xi}_{\bf kk'} + \bar{\Theta}_{\bf kk'}\\ \end{array} \right), \end{eqnarray} where $\Xi$ and $\Theta$ are given by Eqs. (\ref{Xi}) and (\ref{Theta}). The summation over ${\bf k'}$ yields, to within terms of order $\lambda^5$, \begin{eqnarray} \left( \begin{array}{cc} \nu_{\bf k}(t) \\ \nu'_{\bf k}(t) \end{array} \right) &=& \int \frac{dE e^{-iEt}}{E - 2 \omega_o} \frac{g_{\bf k}}{E - \omega_o - \omega_{\bf k}} \left[ 1 + \frac{ \alpha(E - \omega_{\bf k}) + \beta(E - \omega_{\bf k})}{E - \omega_o - \omega_{\bf k}}\right] \left[ 1 + 2 \frac{\alpha(E - \omega_o)}{E - 2 \omega_o} \right] \nonumber \\ &\times& \left( \begin{array}{cc} 1 + \sum_{\bf k'} \frac{g_{\bf k'}^2 (1 + e^{i ({\bf k} - {\bf k'}) \cdot {\bf r}})} {(E - \omega_{\bf k} - \omega_{\bf k'}) (E - \omega_o - \omega_{\bf k'})} \\ 1 + \sum_{\bf k'} \frac{g_{\bf k'}^2 (1 + e^{-i ({\bf k} - {\bf k'}) \cdot {\bf r}})} {(E - \omega_{\bf k} - \omega_{\bf k'}) (E - \omega_o - \omega_{\bf k'})} \end{array} \right). \label{nuF} \end{eqnarray} The terms in brackets in the first line of the equation above can be absorbed in the leading-order denominators--see Eq. (\ref{nnn}). The term in the second line, however, only gives rise to a (time-independent) multiplicative term of the form $1 + O(\lambda^2)$.
Hence, if we keep the leading order terms in the expression of $\nu_{\bf k}$, we may ignore this term. Then $\nu_{\bf k} = \nu'_{\bf k}$, and within an error of order $\lambda^5$ \begin{eqnarray} \nu_{\bf k}(t) = g_{\bf k} \int \frac{dE e^{-iEt}}{[E - 2 \omega_o - 2 \alpha (E - \omega_o)][E - \omega_o - \omega_{\bf k} - \alpha(E - \omega_{\bf k}) - \beta(E - \omega_{\bf k}, r)]}. \label{nnn} \end{eqnarray} Using the same approximation as in Sec. II.E.1 for the Fourier transform, we obtain \begin{eqnarray} \nu_{\bf k}(t) = g_{\bf k} \frac{e^{ - i \omega_o t - \Gamma_0 t}}{\omega_o - \omega_{\bf k} - \sigma -i \Gamma_0 + i \Gamma_r} \left( e^{- i \omega_o t - \Gamma_0 t} - e^{ -i \omega_{\bf k} t - i \sigma t - \Gamma_r t} \right). \end{eqnarray} We then substitute the expression above for $\nu_{\bf k}$ into Eqs. (\ref{defk1}) and (\ref{defk2}) to get \begin{eqnarray} \kappa_1(t) = \frac{\lambda^2}{2 \pi^2} e^{-2 \Gamma_0t} \int_0^{\infty} k dk \, \frac{e^{-2 \Gamma_0 t} + e^{ - 2\Gamma_r t} - 2 e^{ - (\Gamma_0 + \Gamma_r) t} \cos [ (\omega_o - k - \sigma)t]}{(k - \omega_o + \sigma)^2 + (\Gamma_0 - \Gamma_r)^2} \\ \kappa_2(t) = \frac{\lambda^2}{2 \pi^2 r} e^{-2 \Gamma_0t} \int_0^{\infty} dk \, \frac{e^{-2 \Gamma_0 t} + e^{ - 2\Gamma_r t} - 2 e^{ - (\Gamma_0 + \Gamma_r) t} \cos [ (\omega_o - k - \sigma)t]}{(k - \omega_o + \sigma)^2 + (\Gamma_0 - \Gamma_r)^2} \sin kr. \end{eqnarray} For $\omega_o t >> 1$, it is a reasonable approximation to replace the Lorentzian by a delta function. Hence, \begin{eqnarray} \kappa_1(t) = \Gamma_0 \kappa(t) \label{kappa1} \\ \kappa_2(t) = \Gamma_r \kappa(t) \label{kappa2} \end{eqnarray} where \begin{eqnarray} \kappa(t) \simeq \frac{1}{\Gamma_0 - \Gamma_r} e^{-2 \Gamma_0t} (e^{-\Gamma_0 t} - e^{- \Gamma_rt})^2 \label{kappa}. \end{eqnarray} \subsubsection{The functions $\mu_{1,2}$} We first compute the functions $s_{\bf k}(t)$ of Eq.
(\ref{sk}) \begin{eqnarray} s_{\bf k}(t) = \frac{ e^{-i \omega_o t - \Gamma_0 t -(\Gamma_r + i \sigma) e^{i {\bf k} \cdot{\bf r}} t} - e^{- i \omega_{\bf k}t}}{\omega_o - \omega_{\bf k} - i \Gamma_0 + (\sigma - i \Gamma_r) e^{i {\bf k} \cdot{\bf r}}}. \end{eqnarray} Substituting into Eqs. (\ref{defm1}) and (\ref{defm2}) we obtain \begin{eqnarray} \mu_1(t) = \frac{\lambda^2}{4 \pi^2} \int_{-1}^{1} d \xi \; \int k dk \; e^{ - i \omega_o t - \Gamma_0 t}\frac{e^{- i \omega_o t - \Gamma_0 t} - e^{ -i \omega_{\bf k} t - i \sigma t - \Gamma_r t}}{\omega_o - \omega_{\bf k} - \sigma -i \Gamma_0 + i \Gamma_r} \nonumber\\ \times \frac{ e^{i \omega_o t - \Gamma_0 t -(\Gamma_r - i \sigma) e^{-i {\bf k} \cdot{\bf r}} t} - e^{ i \omega_{\bf k}t}}{\omega_o - \omega_{\bf k} + i \Gamma_0 + (\sigma + i \Gamma_r) e^{-i {\bf k} \cdot{\bf r}}}\\ \mu_2(t) = \frac{\lambda^2}{4 \pi^2} \int_{-1}^{1} d \xi \; \int k dk e^{-i {\bf k} \cdot {\bf r}} \; e^{ - i \omega_o t - \Gamma_0 t}\frac{e^{- i \omega_o t - \Gamma_0 t} - e^{ -i \omega_{\bf k} t - i \sigma t - \Gamma_r t}}{\omega_o - \omega_{\bf k} - \sigma -i \Gamma_0 + i \Gamma_r} \nonumber\\ \times \frac{ e^{i \omega_o t - \Gamma_0 t -(\Gamma_r - i \sigma) e^{-i {\bf k} \cdot{\bf r}} t} - e^{ i \omega_{\bf k}t}}{\omega_o - \omega_{\bf k} + i \Gamma_0 + (\sigma + i \Gamma_r) e^{-i {\bf k} \cdot{\bf r}}}.
\end{eqnarray} An approximate evaluation of the $\xi$-integral followed by the further approximation $\frac{1}{x+ i \epsilon} \simeq i \pi \delta(x)$, gives an estimate of the leading order contribution \begin{eqnarray} \mu_1(t) = \Gamma_0 [\mu(t) + i \nu(t)] \label{mu1}\\ \mu_2(t) = \Gamma_r [\mu(t) + i \nu(t)] \label{mu2} \end{eqnarray} where $\mu + i \nu$ is the complex-valued function \begin{eqnarray} \mu(t) + i \nu(t) \simeq \frac{1 }{\Gamma_0 + \frac{ 2 \sin \omega_o r}{\omega_o r} \Gamma_r - i \sigma ( 1 + 2 \frac{\sin \omega_o r}{\omega_o r})} e^{- i \omega_o t - \Gamma_0 t}\nonumber \\ \times ( e^{ - \Gamma_0 t} - e^{- \Gamma_rt} )( e^{ - \Gamma_0 t - 2 \frac{\sin \omega_o r}{\omega_o r} [\Gamma_r - i \sigma] t} - e^{ i \sigma t} ). \label{munu} \end{eqnarray} \subsection{The master equation} Given the explicit form of the functions computed in Sec. II.E, we can write the solution of the evolution equations in the following form \begin{eqnarray} \rho^{I}_{I} (t) &=& e^{-4 \Gamma_0 t} \rho^{I}_{I}(0) \label{dmp1}\\ \rho^{I}_{O}(t) &=& e^{ - 2 i \omega_o t - 2 \Gamma_0 t} \rho^{I}_{O}(0) \\ \rho^{I}_-(t) &=& e^{-i \omega_o t - 3 \Gamma_0 t + i \sigma t + \Gamma_r t} \rho^{I}_-(0) \\ \rho^{I}_+ (t) &=& e^{-i \omega_o t - 3 \Gamma_0 t - i \sigma t -\Gamma_r t} \rho^{I}_+(0) \\ \rho^-_O(t) &=& e^{-i \omega_o t -\Gamma_0 t + i \sigma t + \Gamma_r t} \rho^-_O(0) + i (\Gamma_0 + \Gamma_r) \nu(t) \rho^{I}_+(0) + (\Gamma_0 - \Gamma_r) \mu(t) \rho^{I}_-(0) \\ \rho^+_{O}(t) &=& e^{-i \omega_o t -\Gamma_0 t - i \sigma t - \Gamma_r t} \rho^+_{O}(0) + (\Gamma_0 + \Gamma_r) \mu(t) \rho^{I}_+(0) + i (\Gamma_0 - \Gamma_r) \nu(t) \rho^{I}_-(0) \\ \rho^+_+(t) &=& e^{-2 \Gamma_0 t - 2 \Gamma_r t} \rho^+_+(0) + (\Gamma_0 + \Gamma_r) \kappa(t) \rho^{I}_{I}(0) \\ \rho^-_-(t) &=& e^{-2 \Gamma_0 t - 2 \Gamma_r t} \rho^-_-(0) + (\Gamma_0 - \Gamma_r) \kappa(t) \rho^{I}_{I}(0) \label{rss}\\ \rho^+_-(t) &=& e^{2 i \sigma t - 2 \Gamma_0 t} \rho^+_-(0).
\label{dmpf} \end{eqnarray} In the equations above we wrote the density matrix in a basis defined by $|I \rangle, |O\rangle, |+\rangle, |-\rangle$, where \begin{eqnarray} |+ \rangle = \frac{1}{\sqrt{2}} \left( |01 \rangle + |10 \rangle \right) \\ |- \rangle = \frac{1}{\sqrt{2}} \left( |01 \rangle - |10 \rangle \right). \end{eqnarray} We note that in this approximation, the diagonal elements of the density matrix propagator (i.e. the ones that map $\rho^a_b(0)$ to $\rho^a_b(t)$, where $a,b \in \{O, +, -, I\}$) decay exponentially (except for $\rho^{O}_{O}$, which is determined by the other matrix elements through normalization). We shall see in Sec. V that this behavior is in accordance with the Born-Markov approximation. The situation is different for the off-diagonal elements of the density matrix propagator, i.e. the ones that map $\rho^a_b(0)$ to $\rho^{a'}_{b'}(t)$ for $(a, b) \neq (a', b')$. They are given by more complex functions of time and they differ from the ones predicted by the Born-Markov approximation. Since we have the solution $\rho(t) = M_t[\rho(0)]$, where $M_t$ is the density matrix propagator defined by Eqs. (\ref{dmp1}--\ref{dmpf}), we can identify the master equation through the relation $\dot{\rho}(t) = \dot{M}_t[ M^{-1}_t[\rho(t)]]$.
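The functions $\kappa_{1,2}(t)$ of Eqs. (\ref{kappa1}--\ref{kappa}), which control the off-diagonal propagator elements feeding $\rho^{\pm}_{\pm}$ from $\rho^{I}_{I}$, are easily explored numerically. The following minimal sketch (rate values are illustrative, in units where $\Gamma_0 = 1$) checks that $\kappa$ vanishes at $t = 0$, decays at late times, and that the antisymmetric weight $\kappa_1 - \kappa_2 = (\Gamma_0 - \Gamma_r) \kappa(t)$ stays non-negative:

```python
import math

Gamma0 = 1.0   # illustrative units where Gamma_0 = 1
Gammar = 0.6   # any 0 < Gamma_r < Gamma_0

def kappa(t):
    # Eq. (kappa): kappa(t) = e^{-2 Gamma_0 t} (e^{-Gamma_0 t} - e^{-Gamma_r t})^2 / (Gamma_0 - Gamma_r)
    return math.exp(-2 * Gamma0 * t) * (math.exp(-Gamma0 * t) - math.exp(-Gammar * t))**2 / (Gamma0 - Gammar)

def kappa1(t):  # Eq. (kappa1)
    return Gamma0 * kappa(t)

def kappa2(t):  # Eq. (kappa2)
    return Gammar * kappa(t)

# kappa vanishes at t = 0 (no population transferred yet) and at late times
# (everything eventually reaches the ground state).
print(kappa(0.0), kappa(50.0))
# The |-><-| weight kappa1 - kappa2 = (Gamma0 - Gammar) kappa(t) is non-negative.
print(all(kappa1(t) >= kappa2(t) for t in [0.1 * i for i in range(1, 100)]))
```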
We obtain \begin{eqnarray} \dot{\rho}^{I}_{I} &=& - 4 \Gamma_0 \rho^I_I \label{ME1}\\ \dot{\rho}^{I}_- &=& ( -i \omega_o - 3 \Gamma_0 + i \sigma + \Gamma_r) \rho^{I}_- \\ \dot{\rho}^{I}_+ &=& ( -i \omega_o - 3 \Gamma_0 - i \sigma - \Gamma_r) \rho^{I}_+ \\ \dot{\rho}^{I}_{O} &=& (-2 i \omega_o - 2 \Gamma_0) \rho^{I}_{O} \\ \dot{\rho}^-_{O} &=& (-i \omega_o - \Gamma_0 +i \sigma + \Gamma_r) \rho^-_{O} + (\Gamma_0 + \Gamma_r) \alpha_1(t) \rho^{I}_+ + (\Gamma_0 - \Gamma_r) \alpha_2(t) \rho^{I}_- \\ \dot{\rho}^+_{O} &=& (-i \omega_o - \Gamma_0 - i \sigma - \Gamma_r) \rho^+_{O} + (\Gamma_0 + \Gamma_r) \alpha_3(t) \rho^{I}_+ + (\Gamma_0 - \Gamma_r) \alpha_4(t) \rho^{I}_- \\ \dot{\rho}^+_{+} &=& -2 (\Gamma_0 + \Gamma_r) \rho^+_+ + (\Gamma_0 + \Gamma_r) \alpha_5(t) \rho^{I}_{I} \\ \dot{\rho}^-_- &=& -2 (\Gamma_0 - \Gamma_r) \rho^-_- + (\Gamma_0 - \Gamma_r) \alpha_6(t) \rho^{I}_{I} \\ \dot{\rho}^+_- &=& 2(i\sigma - \Gamma_0) \rho^+_-. \label{MEf} \end{eqnarray} Explicit expressions for the functions of time $\alpha_i(t), i = 1, \ldots, 6$ appearing in Eqs. (\ref{ME1}--\ref{MEf}) are given in Appendix B. We see that the evolution equation for the reduced density matrix of the two qubits, while local in time, does not have constant-in-time coefficients. Hence, it does not correspond to a Markov master equation of the Lindblad type. Again, we note that the non-Markovian behavior is solely found in the off-diagonal terms of the evolution law and that the diagonal ones involve constant coefficients. To facilitate comparison with the expressions obtained from the Born-Markov approximations we cast equations (\ref{ME1}--\ref{MEf}) into an operator form.
\begin{eqnarray} \dot{\hat{\rho}} &=& - i [\hat{H}_0 + \hat{H}_i, \hat{\rho}] - \sum_{i,j=1}^2 \Gamma_{ij} (\hat{S}_+^{(i)} \hat{S}_-^{(j)} \hat{\rho} + \hat{\rho} \hat{S}_+^{(i)} \hat{S}_-^{(j)} - 2 \hat{S}_-^{(i)} \hat{\rho} \hat{S}_+^{(j)}) \nonumber \\ &+& (\Gamma_0 + \Gamma_r) {\bf F_t}[\hat{\rho}] + (\Gamma_0 - \Gamma_r) {\bf G_t}[\hat{\rho}]. \label{Lindbladlike} \end{eqnarray} The first term on the right-hand side of Eq. (\ref{Lindbladlike}) corresponds to the usual Hamiltonian evolution: the total Hamiltonian is a sum of the free Hamiltonian $\hat{H}_0$ and of a dipole interaction Hamiltonian \begin{eqnarray} \hat{H}_i = - \sigma (\hat{S}_- \otimes \hat{S}_+ + \hat{S}_+ \otimes \hat{S}_-). \end{eqnarray} The second term on the right-hand side of Eq. (\ref{Lindbladlike}) is the usual Lindblad term (see e.g., \cite{FicTan}), where we defined $\Gamma_{11} = \Gamma_{22} = \Gamma_0$ and $\Gamma_{12} = \Gamma_{21} = \Gamma_r$. The last two terms contain effects that pertain to the off-diagonal terms of the reduced density matrix propagator and they are non-Markovian: ${\bf F_t}$ and ${\bf G_t}$ are trace-preserving linear operators on the space of density matrices and their explicit form is given in Appendix B. Assuming that the Markovian regime corresponds to the vanishing of ${\bf F_t}$ and ${\bf G_t}$, we find from Eqs. (\ref{FF}--\ref{GG}) that in this regime the functions $\alpha_i(t)$ in Eqs. (\ref{ME1}--\ref{MEf}) should reduce to the following constants \begin{eqnarray} \alpha_1 = \alpha_4 = 0; \hspace{2cm} \alpha_3 = \alpha_5 = \alpha_6 = 2; \hspace{2cm} \alpha_2 = -2. \label{coefficients} \end{eqnarray} In Sec. V, we shall discuss in more detail the physical origin of non-Markovian behavior in this two-qubit system. \section{Disentanglement of two qubits} In this section, we employ the results obtained above to study the evolution of the two qubits initially in an entangled state.
We shall focus on the process of disentanglement induced by their interaction with the field. \subsection{Class A states: Initial superposition of $|00 \rangle$ and $|11\rangle$} We first examine the class of initial states, which we call Class A, of the form \begin{eqnarray} | \psi_o \rangle = \sqrt{1-p} |00\rangle + \sqrt{p} |11 \rangle, \label{gggg} \end{eqnarray} where $0 \leq p \leq 1$. Recall our definition $|I \rangle = |11 \rangle$ and $|O \rangle = |00 \rangle$. From Eqs. (\ref{1111}--\ref{0000}) we obtain \begin{eqnarray} \hat{\rho}(t) = p e^{-4 \Gamma_0 t} |I \rangle \langle I| + e^{-2 \Gamma_0 t} \sqrt{p(1-p)} ( e^{2 i \omega_o t} |I\rangle \langle O| + e^{-2 i \omega_o t} \, |O \rangle \langle I|) \nonumber \\ + p [ \kappa_1(t) - \kappa_2(t)] |- \rangle \langle -| + p [\kappa_1 (t) + \kappa_2(t)] |+ \rangle \langle +| + [1 - p e^{-4 \Gamma_0 t} - 2 p \kappa_1(t)] |O \rangle \langle O|, \end{eqnarray} where the functions $\kappa_1(t)$ and $\kappa_2(t)$ are given by Eqs. (\ref{kappa1}--\ref{kappa2}). These results are quite different from those reported in Ref. \cite{FicTan06}, which were obtained under the Born-Markov approximation. While the $|I \rangle \langle I|$ and $|I\rangle \langle O|$ terms are essentially the same, the $|- \rangle \langle -|$ and $|+ \rangle \langle +|$ ones are not, as they involve non-diagonal elements of the density matrix propagator. For comparison, we reproduce here the explicit form of these matrix elements in our calculation \begin{eqnarray} \rho^+_+ &=& p \frac{\Gamma_0 + \Gamma_r}{\Gamma_0 - \Gamma_r} e^{-2 \Gamma_0t} (e^{-\Gamma_0 t} - e^{- \Gamma_rt})^2 \\ \rho^-_- &=& p e^{-2 \Gamma_0t} (e^{-\Gamma_0 t} - e^{- \Gamma_rt})^2, \end{eqnarray} and in that of Ref.
\cite{FicTan06} (translated into our notation): \begin{eqnarray} \rho^+_+ (t) &=& p \frac{\Gamma_0 + \Gamma_r}{\Gamma_0 - \Gamma_r} e^{-2 \Gamma_0 t} ( e^{-2 \Gamma_rt} - e^{- 2 \Gamma_0 t}) \label{FTt} \\ \rho^-_-(t) &=& p \frac{\Gamma_0 - \Gamma_r}{\Gamma_0 + \Gamma_r} e^{-2 \Gamma_0 t} ( e^{2 \Gamma_rt} - e^{- 2 \Gamma_0 t}). \label{FTs} \end{eqnarray} For large values of $r$, $\Gamma_r << \Gamma_0$ and the expressions above coincide. However, for $\omega_o r << 1$ their difference is substantial. In this regime, $\Gamma_0 \simeq \Gamma_r$ and at times $\Gamma_0 t \sim 1$ we obtain $(\Gamma_0 - \Gamma_r)t <<1$. According to the Markovian results of \cite{FicTan06}, in this regime the $|+ \rangle \langle +|$ term is of order $O(\lambda^0)$ and hence comparable in size to the other terms appearing in the evolution of the density matrix. However, according to our results, which are based on the full non-Markovian dynamics, the $|+ \rangle \langle +|$ term is of order $\frac{\Gamma_0 - \Gamma_r}{\Gamma_0}$ and hence much smaller. In general, for $\omega_o r << 1$, we find that the $|- \rangle \langle -|$ and $|+ \rangle \langle +|$ terms contribute little to the evolution of the reduced density matrix and they can be ignored. Since these terms are responsible for the sudden death and subsequent revival of entanglement studied in \cite{FicTan06}, we conclude that these effects are absent in this regime. Indeed, this can be verified by the study of concurrence as a function of time, as appearing in Figs. 1 and 2. For large inter-qubit separations and for specific initial states ($p > 0.5$) there is sudden death of entanglement--but no subsequent revivals.
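The size comparison just described can be checked directly from the two pairs of expressions above. In the numerical sketch below (rate values are illustrative, in units where $\Gamma_0 = 1$), the ratio of our $\rho^+_+$ to the Born-Markov result of Eq. (\ref{FTt}) is of order $(\Gamma_0 - \Gamma_r) t$ when $\Gamma_r \simeq \Gamma_0$, while the two expressions approach each other at late times when $\Gamma_r << \Gamma_0$:

```python
import math

def rho_pp_ours(p, G0, Gr, t):
    # Our result: p (G0+Gr)/(G0-Gr) e^{-2 G0 t} (e^{-G0 t} - e^{-Gr t})^2
    return p * (G0 + Gr) / (G0 - Gr) * math.exp(-2 * G0 * t) * (math.exp(-G0 * t) - math.exp(-Gr * t))**2

def rho_pp_markov(p, G0, Gr, t):
    # Born-Markov result, Eq. (FTt): p (G0+Gr)/(G0-Gr) e^{-2 G0 t} (e^{-2 Gr t} - e^{-2 G0 t})
    return p * (G0 + Gr) / (G0 - Gr) * math.exp(-2 * G0 * t) * (math.exp(-2 * Gr * t) - math.exp(-2 * G0 * t))

p = 1.0
# Small separation (Gr ~ G0), at G0 t ~ 1: our term is smaller by ~ (G0 - Gr) t / 2.
print(rho_pp_ours(p, 1.0, 0.99, 1.0) / rho_pp_markov(p, 1.0, 0.99, 1.0))
# Large separation (Gr << G0), late times: the two expressions approach each other.
print(rho_pp_ours(p, 1.0, 0.01, 5.0) / rho_pp_markov(p, 1.0, 0.01, 5.0))
```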
\begin{figure} \caption{A plot of the concurrence as a function of time for initial Class A state (\ref{gggg}) with $p = 0.5$ for different values of $\omega_o r$.} \label{ghzconc} \end{figure} \begin{figure} \caption{A plot of the concurrence as a function of time for initial Class A state (\ref{gggg}) for different values of $p$ and fixed inter-qubit distance ($\omega_o r = 1$).} \label{ghzconc2} \end{figure} \subsection{Initial antisymmetric $|- \rangle$ Bell state} We next consider the case of an initial antisymmetric $|- \rangle$ state for the two qubits. We find \begin{eqnarray} \hat{\rho}(t) = e^{ -2 [\Gamma_0 - \Gamma_r]t} |- \rangle \langle -| + \left(1 -e^{ -2 [\Gamma_0 - \Gamma_r]t}\right)|O \rangle \langle O | \end{eqnarray} We see that the decay rate is $\Gamma_0 - \Gamma_r$. The effect of photon exchange between the qubits essentially slows down the overall emission of photons from the two-qubit system (sub-radiant behavior). As the qubits are brought closer, the decay is slower. At the limit $r \rightarrow 0$, there is no decay. The results agree qualitatively with those obtained in Ref. \cite{FicTan} through the Born-Markov approximation. Fig. 3 shows the time evolution of concurrence for an initial antisymmetric $|- \rangle$ state and for different inter-qubit distances. \begin{figure} \caption{A plot of the concurrence as a function of time for the initial antisymmetric $|- \rangle$ Bell state and for different values of the inter-qubit distance $r$ (in units of $\omega_o^{-1}$). The decay of concurrence proceeds more slowly when the qubits are closer together.} \label{singlet conc} \end{figure} \subsection{Initial symmetric $|+ \rangle$ Bell state} For an initial symmetric $|+ \rangle$ Bell state the reduced density matrix of the two qubits is \begin{eqnarray} \hat{\rho}(t) = e^{ -2 [\Gamma_0 + \Gamma_r]t} |+ \rangle \langle +| + \left(1 -e^{ -2 [\Gamma_0 + \Gamma_r]t}\right) |O \rangle \langle O|. 
\end{eqnarray} Here we have a super-radiant decay with rate given by $\Gamma_0 + \Gamma_r$. Note that for both the antisymmetric $|- \rangle $ and symmetric $|+ \rangle $ states, a resonance appears when $r = n \pi \omega_o^{-1}$: there $\Gamma_r = 0$ and both states decay at the single-qubit rate $\Gamma_0$. Fig. 4 shows the behavior of concurrence, in qualitative agreement with the corresponding results of Ref. \cite{FicTan}. \begin{figure} \caption{Plot of the concurrence as a function of time for initial symmetric $|+ \rangle$ Bell state and for different values of the inter-qubit distance $r$ (in units of $\omega_o^{-1}$).} \label{triplet conc} \end{figure} \section{Decoherence of qubits} In this section we study the evolution of factorized initial states, in order to understand how the decoherence of one qubit is affected by the presence of another. \subsection{One qubit in vacuum state} We assume that the first qubit is prepared initially in a superposition of the $|0 \rangle$ and $|1 \rangle$ states, and that the second qubit lies in the ground state. Therefore, the initial state is of the form \begin{eqnarray} \left(\sqrt{p} |1 \rangle + \sqrt{1-p} |0 \rangle \right) \otimes |0 \rangle = \sqrt{p} |10 \rangle + \sqrt{1-p} |00 \rangle. \end{eqnarray} From Eqs. (\ref{1111}--\ref{0000}), we obtain the density matrix of the combined qubit system \begin{eqnarray} \hat{\rho}(t) = p \left( |v_+|^2 |10\rangle \langle 10| + |v_-|^2 |01 \rangle \langle 01 | + v_-^* v_+ |01 \rangle \langle 10 | + v_- v_+^* |10 \rangle \langle 01| \right) \nonumber \\ + \sqrt{p(1-p)} \left( v_+^* |10 \rangle \langle 00| + v_-^* |01 \rangle \langle 00| + v_+ |00 \rangle \langle 10| + v_- |00 \rangle \langle 01| \right) \nonumber \\ + \left(1 - p (|v_+|^2 + |v_-|^2) \right) |00 \rangle \langle 00|, \end{eqnarray} where the functions $v_{\pm}(t)$ are given by Eq. (\ref{v+-}). The two qubits become entangled through their interaction via the EM field bath.
To study the decoherence in the first qubit, we trace out the degrees of freedom of the second one, thus constructing the reduced density matrix $\hat{\tilde{\rho}}_1$ \begin{eqnarray} \hat{\tilde{\rho}}_1(t) = p |v_+|^2 |1 \rangle \langle 1| + \sqrt{p(1-p)} \left( v_+ |0\rangle \langle 1| + v_+^* |1 \rangle \langle 0| \right) + \left( 1 - p|v_+|^2 \right) |0 \rangle \langle 0|. \end{eqnarray} At the limit of large inter-qubit distances, $\Gamma_r = 0 = \sigma(r)$, whence $v_+ \simeq e^{- i \omega_o t - \Gamma_0 t}$, and the off-diagonal elements decay within a characteristic time-scale of order $\Gamma_0^{-1}$. These results coincide with those for the single qubit case--see Refs. \cite{AH, SADH}. However, in the regime $\omega_o r << 1$, the results are substantially different. The entanglement with the second qubit leads to a departure from pure exponential decay. In this regime, $\Gamma_r \simeq \Gamma_0$. This implies that for times longer than $\Gamma_0^{-1}$ a substantial fraction of the off-diagonal elements persists. This fraction eventually decays to zero within a time-scale of order $[\Gamma_0 - \Gamma_r]^{-1} >> \Gamma_0^{-1}$. Hence, the qubit preserves its coherence longer. (At the limit $r \rightarrow 0$ there is no decoherence.) The reduced density matrix of the second qubit is \begin{eqnarray} \hat{\tilde{\rho}}_2(t) = p |v_-|^2 |1 \rangle \langle 1| + \sqrt{p(1-p)} \left( v_- |0\rangle \langle 1| + v_-^* |1 \rangle \langle 0| \right) + \left( 1 - p |v_-|^2 \right) |0 \rangle \langle 0|. \end{eqnarray} Note that at the limit of small inter-qubit distances the asymptotic behavior (for $\Gamma_0 t >> 1$) of $\hat{\tilde{\rho}}_1(t)$ is identical to that of $\hat{\tilde{\rho}}_2(t)$. The second qubit (even though initially in its ground state) develops a persistent quantum coherence as a result of the interaction with the first one.
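The slow-down of decoherence at small separations described above can be read off Eq. (\ref{v+-}) directly. In the sketch below (illustrative units with $\Gamma_0 = 1$, and the shift $\sigma$ set to zero to isolate the decay rates), the coherence $|v_+|$ at $\Gamma_0 t = 5$ is essentially gone for distant qubits ($\Gamma_r \simeq 0$) but still of order $1/2$ when $\Gamma_r \simeq \Gamma_0$:

```python
import cmath

Gamma0, omega = 1.0, 50.0   # illustrative values (arbitrary units)

def v_plus(t, Gammar, sigma=0.0):
    # Eq. (v+-): v_+(t) = e^{-i w_o t - Gamma_0 t}/2 (e^{-i s t - Gamma_r t} + e^{i s t + Gamma_r t})
    return (cmath.exp(-1j * omega * t - Gamma0 * t) / 2
            * (cmath.exp(-1j * sigma * t - Gammar * t) + cmath.exp(1j * sigma * t + Gammar * t)))

t = 5.0
print(abs(v_plus(t, 0.0)))            # large separation: ~ e^{-Gamma_0 t}, essentially zero
print(abs(v_plus(t, 0.99 * Gamma0)))  # small separation: coherence of order 1/2 persists
```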
\subsection{One qubit in excited state} We also consider the case of a factorized initial state, in which the second qubit is excited \begin{eqnarray} \left(\sqrt{p} |1 \rangle + \sqrt{1-p} |0 \rangle \right) \otimes |1 \rangle = \sqrt{p} |I \rangle + \sqrt{1-p} |01 \rangle. \end{eqnarray} This system behaves differently from that of Sec. IV.A. The matrix elements of the reduced density matrix read \begin{eqnarray} \rho^{11}_{11} &=& p e^{ - 4 \Gamma_0 t} \\ \rho^{11}_{01} &=& \sqrt{p(1-p)} e^{ - 2 i \omega_o t - 2 \Gamma_0 t} v_-^* \\ \rho^{01}_{00} &=& \sqrt{p(1-p)} \mu_1(t) \\ \rho^{10}_{00} &=& \sqrt{p(1-p)} \mu_2^*(t)\\ \rho^{01}_{01} &=& (1-p) |v_+|^2 + p \kappa_1 \\ \rho^{01}_{10} &=& (1-p) v_+ v^*_- + p \kappa_2, \end{eqnarray} where the functions $\mu_{1,2}$ are given by Eqs. (\ref{mu1}--\ref{munu}), the functions $\kappa_{1,2}$ by Eqs. (\ref{kappa1}) and (\ref{kappa2}) and the functions $v_{\pm}$ by Eq. (\ref{v+-}). The reduced density matrix of the first qubit is \begin{eqnarray} \hat{\tilde{\rho}}_1(t) = \left((1-p)|v_+|^2 + p \kappa_1 + p e^{- 4 \Gamma_0t} \right) |1 \rangle \langle 1| + \left(1 - (1-p)|v_+|^2 - p \kappa_1 - p e^{- 4 \Gamma_0t} \right) |0 \rangle \langle 0| \nonumber\\ + \sqrt{p(1-p)} \left( (\mu_2^* + e^{ -2 i \omega_o t - 2 \Gamma_0 t} v_+^*) |0 \rangle \langle 1| + (\mu_2 + e^{ 2 i \omega_o t - 2 \Gamma_0 t} v_+^*) |1 \rangle \langle 0| \right). \end{eqnarray} At times $t >> \Gamma_0^{-1}$, the decay of the off-diagonal elements is dominated by the function $\mu_2$, which then reads \begin{eqnarray} \mu_2(t) \simeq \frac{\Gamma_r e^{- i (\omega_o - \sigma) t - \Gamma_0 t} }{\Gamma_0 + \frac{ 2 \sin \omega_o r}{\omega_o r} \Gamma_r - i \sigma ( 1 + 2 \frac{\sin \omega_o r}{\omega_o r})} ( e^{ - \Gamma_r t} - e^{- \Gamma_0t} ). \end{eqnarray} In the limit of large inter-qubit distance $r$, the off-diagonal element falls like $e^{ - \Gamma_0 t}$ (with a prefactor $\Gamma_r$ that vanishes as $r \rightarrow \infty$), while in the small $r$ limit it falls like $e^{ - 2 \Gamma_0 t}$.
Comparing with the case of Sec. IV.A, we see that the initial excitation of the second qubit leads to a lesser degree of entanglement of the total system, as it cannot absorb any virtual photons emitted from the first one. For this reason, the asymptotic decoherence rate does not vary significantly with the inter-qubit distance. \section{Discussion} In this section, we first summarize our results for the evolution of various initial states; we then discuss the origin of the non-Markovian behavior and, finally, a possible limitation of our results due to the restricted domain of validity of the rotating-wave approximation. \subsection{Description of the results} For an initial Class A state (\ref{gggg}), the $|I \rangle$ component decays to the vacuum, but it also evolves into a linear combination of antisymmetric $|- \rangle $ and symmetric $|+ \rangle $ Bell states. However, if the qubits are close together the evolution to Bell states is suppressed. This behavior is qualitatively different from that of Ref. \cite{FicTan06}, which was obtained through the Born-Markov approximation. The corresponding terms differ by many orders of magnitude at the physically relevant time-scales. As a consequence, we find that there is neither sudden death nor revival of entanglement in this regime. In retrospect, this difference should not be considered surprising. The Born-Markov method involves two approximations: i) that the back-action of the field on the atoms is negligible and ii) that all memory effects in the system are insignificant. When the qubits lie within a distance much smaller than their characteristic wavelength, it is not possible to ignore the back-action. The virtual photons exchanged by the qubits (at a rate given by $\Gamma_r$) substantially alter the state of the electromagnetic field.
On the other hand, the effect of virtual photon exchange between the qubits drops off quickly at large separations $r$--the two qubits decay almost independently of each other. Hence, the Born-Markov approximation--reliable for the case of two separate qubits each interacting with an individual field--also gives reasonable results for the two qubits interacting with a common field. In this regime sudden death is possible, but not revival of entanglement. In this sense, our results effectively reduce to those of Ref. \cite{YuEbePRL}: when the distance between the qubits is much larger than any characteristic correlation length scale of the system, it is as though the two qubits were placed in different reservoirs. Therefore, our results for initial states of Bell type are qualitatively similar to those obtained in Ref. \cite{FicTan} through the Born-Markov approximation. The symmetric $|+ \rangle $ state decays super-radiantly and the antisymmetric $|- \rangle $ one sub-radiantly. For small values of $\omega_o r$, the antisymmetric $|- \rangle $ state decays very slowly and entanglement is preserved. Concerning decoherence, when the inter-qubit separation is large and the second qubit lies in its ground state, our two-qubit calculation reproduces previous results on the single-qubit case \cite{AH, SADH}. However, if the qubits are close together the coherence is preserved longer. These results seem to suggest that in a many-qubit system, the inter-qubit quantum coherence can be sustained for times larger than the decoherence time of the single-qubit case. This may suggest some physical mechanisms to resist decoherence in realistic multi-qubit systems. \subsection{The origin and significance of the non-Markovian behavior} With the exact solutions we have obtained for the two-qubit--EMF system (under weak coupling, but with no Born or Markov approximation), we now elaborate on the origin and significance of the non-Markovian behavior, continuing the discussion started in the Introduction.
In the evolution equations (\ref{dmp1}--\ref{dmpf}) we note that the diagonal terms of the reduced density matrix propagator all decay exponentially. Their decay rate is therefore constant. It is well known that this feature is a sign of Markovian behavior. In fact, it characterizes the domain of validity of Fermi's golden rule: one could obtain the decay rates by direct application of this rule. Hence, as far as this part of the evolution is concerned, our results are fully compatible with the Markovian predictions. However, the behavior of the non-diagonal terms in the reduced density matrix propagator is different. A look at Eqs. (\ref{dmp1}--\ref{dmpf}) will convince the reader that the only non-zero such terms are ones that describe the effect of successive decays, for example the $|11 \rangle $ state first decaying into $|- \rangle$ and then $|- \rangle $ decaying into the ground state $|00 \rangle$. Hence, the $\rho^-_-(t)$ term consists of one component that contains the remnant of the $|- \rangle \langle -|$ part of the initial state and another component that incorporates the decay of the $|11 \rangle \langle 11|$ part of the initial state towards the state $|- \rangle$. In our calculation, the latter term is encoded into the functions $\kappa_{1,2}(t)$, which are obtained by squaring the amplitudes $\nu_{\bf k}(t)$ as in Eqs. (\ref{defk1}--\ref{defk2}). At second loop order, these amplitudes are obtained from the summation of two Feynman diagrams--see Eq. (\ref{nuF}). The structure of the poles in Eq. (\ref{nuF}) reveals that the first Feynman diagram describes the decay of the $|11 \rangle$ state, while the second one corresponds to processes involving the $|01 \rangle$ and $|10 \rangle$ states. When we construct the evolution functions $\kappa_{1,2}(t)$, we obtain terms that are both diagonal and off-diagonal with respect to the two types of process.
It is the presence of the off-diagonal terms that is primarily (but not solely) responsible for the deviation of our results from the Markovian prediction. In the Markov approximation, the corresponding term involves summation (subtraction) of probabilities rather than amplitudes. To justify the last statement, we note from Eqs. (\ref{1111}--\ref{0000}) that the evolution decouples the diagonal from the off-diagonal elements of the density matrix (the Markovian equations also have this property). Hence, the probabilities $p_a(t) = \rho^a_a(t), a \in \{ I, +, -, O\}$ evolve autonomously. Time-homogeneity (i.e., Lindblad-type time evolution) implies that their evolution can be given by a transfer matrix \begin{eqnarray} \dot{p}_a(t) = \sum_b T_a^b p_b(t). \label{transf} \end{eqnarray} Here $T_a^b$ is the {\em constant} decay rate for the process $ b \rightarrow a$. Noting that Eqs. (\ref{1111}--\ref{0000}) contain no transitions between $+$ and $-$ and no transitions from $-$ to $I$, Eq. (\ref{transf}) yields \begin{eqnarray} \dot{p}_-(t) = - 2(\Gamma_0 - \Gamma_r) p_-(t) + w p_{I}(t), \label{ps} \end{eqnarray} where $w = T^-_{I}$. Since $p_{I}(t)$ is determined by Eq. (\ref{ME1}) as $p_{I}(0) e^{-4 \Gamma_0t}$, we obtain for the initial condition $p_-(0) = 0$, \begin{eqnarray} p_-(t) \sim p_{I}(0) (e^{-2 (\Gamma_0 - \Gamma_r)t} - e^{-4 \Gamma_0 t}), \label{ccc} \end{eqnarray} in full agreement with Eq. (\ref{FTs}) obtained from the Lindblad master equation. The derivation of Eq. (\ref{ccc}) provides an example of a more general fact: the Markovian assumption forces the off-diagonal terms of the density matrix propagator (in this case the one mapping $\rho^{I}_{I}(0)$ to $\rho^-_-(t)$) to be subsumed by the diagonal ones. The off-diagonal elements can have no independent dynamics of their own, unlike what would be the case if they were derived from a full calculation.
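The closed form (\ref{ccc}) is easily checked numerically. The sketch below (illustrative rate values; $w$ is an arbitrary transfer rate, since its value only sets the overall scale) integrates Eq. (\ref{ps}) with a standard fourth-order Runge-Kutta step and compares against the exact solution:

```python
import math

Gamma0, Gammar, w = 1.0, 0.5, 0.3   # illustrative rates; w = T^-_I
pI0 = 1.0

def p_minus_exact(t):
    # Solution of dp_-/dt = -2(G0 - Gr) p_- + w pI0 e^{-4 G0 t}, with p_-(0) = 0:
    # p_-(t) = w pI0 / (2 G0 + 2 Gr) * (e^{-2(G0-Gr) t} - e^{-4 G0 t})
    return w * pI0 / (2 * Gamma0 + 2 * Gammar) * (math.exp(-2 * (Gamma0 - Gammar) * t) - math.exp(-4 * Gamma0 * t))

def rhs(t, p):
    return -2 * (Gamma0 - Gammar) * p + w * pI0 * math.exp(-4 * Gamma0 * t)

def rk4(T, n=10000):
    # Fourth-order Runge-Kutta integration of Eq. (ps) from 0 to T.
    h, t, p = T / n, 0.0, 0.0
    for _ in range(n):
        k1 = rhs(t, p)
        k2 = rhs(t + h / 2, p + h * k1 / 2)
        k3 = rhs(t + h / 2, p + h * k2 / 2)
        k4 = rhs(t + h, p + h * k3)
        p += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return p

print(abs(rk4(2.0) - p_minus_exact(2.0)) < 1e-8)
```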
One should also note that the diagonal terms of the propagator are obtained from first-order perturbation theory. Since they determine the off-diagonal terms within the Markov approximation, the latter contain only information obtained at first order. However, in our calculation the off-diagonal terms involve second-order effects and hence reveal dynamical correlations that are inaccessible within the context of the Born-Markov approximation. For the reasons above, the matrix elements (\ref{FTt}--\ref{FTs}) of Ref. \cite{FicTan06}--obtained under the Markovian approximation--carry the characteristic superradiant and subradiant behavior of the decay of the $|+ \rangle$ and $|- \rangle$ states respectively, a property that does not arise from our calculation. Moreover, we note that Eq. (\ref{ps}) obtained by making the Markov approximation is related to arguments from {\em classical probability}, i.e. it involves addition of the probabilities associated to different processes \footnote{To be precise, in an exponential decay, it is possible to define a probability density for the occupation number of any state as a function of time. This is in general not possible in quantum theory. Eq. (\ref{ccc}) is then essentially the Kolmogorov additivity condition for these probabilities. }. On the other hand, a proper quantum mechanical calculation involves the addition of amplitudes--e.g. the ones appearing in the definition of the functions $\kappa_{1,2}(t)$--and as such it must also contain `interference' terms. In effect, the Markovian approximation introduces by hand a degree of partial `decoherence' (or classicality), and we think that this explains why in general it predicts a faster classicalization of the system than the full analysis does. We believe that the feature discussed above is generic. In the single-qubit system it was not present because the structure was too simple: there could be no intermediate decays.
However, this effect should in principle be present in any system that contains intermediate states. The Markovian approximation would then be valid only if specific conditions hold that render the `interference' terms negligible--for example, if there exists a sharp separation of the relevant timescales. To summarize, the Markov approximation essentially ignores interference terms that are relevant to processes involving successive decays. These processes appear through off-diagonal terms in the reduced density matrix propagator. The Markov approximation misrepresents the intrinsic dynamics of these terms and ties them--by forcing additivity of probabilities--to the evolution of the diagonal ones. As a result, the off-diagonal terms are subsumed by the diagonal ones. \subsection{The use of the rotating wave approximation} Finally, we add a few words on the accuracy of our model for the interaction between the 2LAs and the EMF. The Hamiltonian (\ref{Hint}) is obtained from the study of the interaction of the atomic degrees of freedom with the electromagnetic field. Its derivation involves the dipole and the rotating wave (RW) approximations--see Appendix A in Ref. \cite{AH}. The RW approximation consists in ignoring rapidly oscillating terms in the interaction-picture Hamiltonian and keeping only the ones that correspond to resonant coupling. (One ignores processes during which a photon is emitted {\em and} the atom becomes excited.) The terms dropped in the RW approximation oscillate with a frequency of order $\omega_o$. For a single-qubit system the RW approximation is self-consistent. However, in the two-qubit system we keep terms in the Hamiltonian that vary in space as $e^{i {\bf k} \cdot {\bf r}}$. For the RW approximation to be consistent, one has to assume that ${\bf k} \cdot {\bf r} \ll \omega_o t$. Since ${\bf k}$ is peaked around the resonance frequency, this condition is equivalent to $r \ll t$.
Since the physically interesting time-scale for the study of disentanglement and decoherence corresponds to $\Gamma_0^{-1} \sim [\lambda^{2} \omega_o]^{-1}$, we expect the RW approximation to be reliable in this context, as long as $\lambda^2 \omega_o r \ll 1$. This is sufficient for realistic situations, in which $r$ is bounded by the size of the cavity and $\lambda^2 \ll 1$. However, the condition above does not hold in the formal limit $r \rightarrow \infty$. In this regime the RW approximation may break down. Indeed, in Sec. IV.B the reduced density matrix for one qubit in the limit $r \rightarrow \infty$ does not coincide with the corresponding expression obtained in the study of an isolated single qubit. The presence of an {\em excited} second qubit, even if it is situated far away, affects the time evolution significantly. This effect is also present in the Born-Markov approximation. This is arguably an unphysical behavior, and we believe that it arises as an artefact of the RW approximation. \\ \noindent{\bf Acknowledgement} It is a pleasure for one of us (BLH) to hear the lectures of Dr. Ting Yu and the seminar of Professor Joe Eberly explicating their work on disentanglement of two qubits. We also thank the referee for urging us to derive the master equation so the non-Markovian features can be demonstrated more explicitly. This work is supported by grants from the NSF-ITR program (PHY-0426696), NIST and NSA-LPS. CA was supported by a Pythagoras II grant (EPEAEK). \begin{appendix} \section{Perturbative resummation} We notice that the action of the operator $[(E-H_0)^{-1}H_I]^2$ on any linear combination of the vectors $| O;0,1 \rangle $ and $| O; 1, 0 \rangle$ yields another linear combination of these vectors.
If we denote by $p_n $ and $q_n$ the coefficients of $| O;0,1 \rangle $ and $| O; 1,0 \rangle$ respectively after the $n$-th action of $[(E-H_0)^{-1}H_I]^2$ (the $2n$-th order of perturbation theory) we obtain \begin{eqnarray} \left( \begin{array}{l} p_n \\ q_n \end{array} \right) = \frac{1}{E - \Omega}\left( \begin{array}{ll} \alpha(E) & \beta^*(E,r) \\ \beta(E,r) & \alpha(E) \end{array} \right) \left( \begin{array}{l} p_{n-1} \\ q_{n-1} \end{array} \right) = : A \left( \begin{array}{l} p_{n-1} \\ q_{n-1} \end{array} \right). \end{eqnarray} Consequently, \begin{eqnarray} \left( \begin{array}{l} p_n \\ q_n \end{array} \right) = A^n \left( \begin{array}{l} p_0 \\ q_0 \end{array} \right), \end{eqnarray} and summing over all $n$ we obtain \begin{eqnarray} \sum_{n=0}^{\infty} \left( \begin{array}{l} p_n \\ q_n \end{array} \right) = \left( \sum_{n=0}^{\infty} A^n \right) \left( \begin{array}{l} p_0 \\ q_0 \end{array} \right) = (1 -A)^{-1} \left( \begin{array}{l} p_0 \\ q_0 \end{array} \right). \end{eqnarray} To compute $\langle z; 0,1|(E-H)^{-1}|O;0,1 \rangle$ and $\langle z; 1,0|(E-H)^{-1}|O;0,1 \rangle$ according to the perturbation series we have $p_0 = (E-\omega_o)^{-1}$, $q_0 = 0$. The results in the main text then follow. The resummation for $\langle z; 0,0|(E-H)^{-1}|O;0,1 \rangle$ proceeds from the fact that the terms in the $(2n+1)$-th order of the perturbative expansion are of the form \begin{eqnarray} \sum_{\bf k} \frac{g_{\bf k} \zeta^n_{\bf k} e^{i {\bf k} \cdot {\bf r}/2} }{E - \omega_{\bf k}} b^{\dagger}_{\bf k} | O; 0 , 0 \rangle. \end{eqnarray} We then note that \begin{eqnarray} \zeta_{\bf k}^n = \left[\frac{ \alpha(E) + \beta(E,r)e^{i {\bf k} \cdot {\bf r}} }{E - \omega_o}\right] \zeta_{\bf k}^{n-1}, \end{eqnarray} where $\zeta^0_{\bf k} = (E- \omega_o)^{-1}$. This geometric series can be summed to give the result quoted in the main text. A similar summation is achieved for the terms $\langle z; 0,1|(E-H)^{-1}|O; 1, 1 \rangle $ and $\langle z; 1, 0|(E-H)^{-1}|O; 1, 1 \rangle $.
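The resummation $\sum_{n} A^n = (1-A)^{-1}$ used above is simply the geometric series of the $2\times 2$ matrix $A$, convergent whenever the spectral radius of $A$ is below one. A quick numerical illustration (the entries stand in for $\alpha(E)/(E-\Omega)$ and $\beta(E,r)/(E-\Omega)$ and are placeholder values, not taken from the text):

```python
import numpy as np

# Illustrative stand-in for A = [[alpha, beta*], [beta, alpha]]/(E - Omega),
# with entries chosen so that the spectral radius is < 1.
alpha, beta = 0.3, 0.2 + 0.1j
A = np.array([[alpha, np.conj(beta)],
              [beta,  alpha]])

v0 = np.array([1.0, 0.0])            # (p_0, q_0) for the |O;0,1> matrix element
partial = np.zeros(2, dtype=complex)
An_v = v0.astype(complex)
for n in range(200):                 # truncated sum over A^n v0
    partial += An_v
    An_v = A @ An_v

resummed = np.linalg.solve(np.eye(2) - A, v0)   # (1 - A)^{-1} v0
print(np.allclose(partial, resummed))  # True
```

The same geometric structure underlies the scalar resummation of $\zeta_{\bf k}^n$ below the matrix case.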
In the $2n$-th order of perturbation theory the action of the series on $|O; 1, 1 \rangle$ yields \begin{eqnarray} \sum_{\bf k} \frac{g_{\bf k} }{(E - 2 \Omega) (E - \Omega - \omega_{\bf k})} \left( e^{-i {\bf k}\cdot {\bf r}/2} \gamma^n_{\bf k} |O; 0, 1 \rangle + e^{i {\bf k}\cdot {\bf r}/2} \delta^n_{\bf k} |O; 1, 0 \rangle \right). \end{eqnarray} \section{The time-dependent terms of the master equation} The time-dependent coefficients $\alpha_i$, $i = 1, \ldots, 6$, appearing in the master equation (\ref{ME1}--\ref{MEf}) are defined as \begin{eqnarray} \alpha_1(t) &=& i \frac{d}{d t}\left(\nu(t) e^{i \omega_o t + \Gamma_0 t - i \sigma t - \Gamma_r t}\right) e^{2 \Gamma_0 t + 2 i \sigma t + 2 \Gamma_r t} \label{alpha1} \\ \alpha_2(t) &=& \frac{d}{d t} \left( \mu(t) e^{i\omega_o t + \Gamma_0 t - i \sigma t - \Gamma_r t} \right) e^{2 \Gamma_0 t} \\ \alpha_3(t) &=& \frac{d}{d t} \left( \mu(t) e^{i\omega_o t + \Gamma_0 t + i \sigma t + \Gamma_r t} \right) e^{2 \Gamma_0 t} \\ \alpha_4(t) &=& i \frac{d}{d t}\left(\nu(t) e^{i \omega_o t + \Gamma_0 t + i \sigma t + \Gamma_r t}\right) e^{2 \Gamma_0 t - 2 i \sigma t - 2 \Gamma_r t}\\ \alpha_5(t) &=& \frac{d}{d t}\left( \kappa(t) e^{2\Gamma_0 t + 2 \Gamma_r t}\right) e^{2 \Gamma_0 t - 2 \Gamma_r t} \\ \alpha_6(t) &=& \frac{d}{d t}\left( \kappa(t) e^{2\Gamma_0 t - 2 \Gamma_r t}\right) e^{2 \Gamma_0 t + 2 \Gamma_r t}. \label{alpha6} \end{eqnarray} Using the notation \begin{eqnarray} \hat{A}_{+I} &=& |+ \rangle \langle I|, \\ \hat{A}_{-I} &=& |- \rangle \langle I|, \\ \hat{A}_{OI} &=& |O \rangle \langle I|, \\ \hat{A}_{++} &=& |+ \rangle \langle +|, \\ \hat{A}_{+-} &=& |+ \rangle \langle -|, \\ \hat{A}_{--} &=& |- \rangle \langle -|, \end{eqnarray} the linear functionals ${\bf F}_t$ and ${\bf G}_t$ of Eq.
(\ref{Lindbladlike}) are defined as \begin{eqnarray} {\bf F_t}[\hat{\rho}] &=& -[2 - \alpha_5(t)] (\hat{A}_{+I} \hat{\rho} \hat{A}^{\dagger}_{+I} - \hat{A}_{OI} \hat{\rho} \hat{A}_{OI}^{\dagger}) - [2 - \alpha_3(t)] \hat{A}_{OI} \hat{\rho} \hat{A}_{++} - [2 - \alpha_3^*(t)] \hat{A}_{++} \hat{\rho} \hat{A}_{OI}^{\dagger} \nonumber \\ &+& \alpha_1(t) \hat{A}_{OI} \hat{\rho} \hat{A}_{+-} + \alpha_1^*(t) \hat{A}_{+-}^{\dagger} \hat{\rho} \hat{A}_{OI}^{\dagger} \label{FF}\\ {\bf G_t}[\hat{\rho}] &=& - [2 - \alpha_6(t)] (\hat{A}_{-I} \hat{\rho} \hat{A}^{\dagger}_{-I} - \hat{A}_{OI} \hat{\rho} \hat{A}_{OI}^{\dagger}) +[2 + \alpha_2(t)] \hat{A}_{OI} \hat{\rho} \hat{A}_{--} + [2 + \alpha_2^*(t)] \hat{A}_{--} \hat{\rho} \hat{A}_{OI}^{\dagger} \nonumber \\ &+& \alpha_4(t) \hat{A}_{OI} \hat{\rho} \hat{A}_{+-}^{\dagger} + \alpha_4^*(t) \hat{A}_{+-}^{\dagger} \hat{\rho} \hat{A}_{OI}^{\dagger} \label{GG}. \end{eqnarray} \section{A Grassmann path-integral derivation} In Refs. \cite{AH,SADH} the evolution of the reduced density matrix for a 2LA interacting with the EM field was determined with a version of the Feynman-Vernon path integral method, using Grassmann variables for the atomic degrees of freedom. The same method can be applied to the present problem, even though the perturbative approach turned out to be more convenient because the vacuum initial state allowed for a summation of the perturbative series. However, the path-integral method may allow for a simpler treatment of other systems, such as an N-qubit system or the EM field at a finite temperature. For this reason, we include a sketch of the path-integral treatment in this Appendix, noting that it leads to the same results as the ones obtained in Sec. II. 
\subsection{Coherent state path integral} The coherent states for the EMF (bosonic) and 2LA (qubit) states are defined respectively by \begin{eqnarray} \label{emcoherentstates} |z_{\bf k} \rangle &=& \exp(z_{\bf k} b_{\bf k}^\dagger) |0_{\bf k} \rangle \\ \label{gmncoherentstates} |\eta^{(n)} \rangle &=& \exp(\eta^{(n)} S_+^{(n)}) |0\rangle_{(n)}. \end{eqnarray} The states $|0_{\bf k} \rangle$ and $|0\rangle_{(n)}$ are the ground states of the electromagnetic field's ${\bf k}^{\mbox{th}}$ mode in vacuum and of the $n^{\mbox{th}}$ qubit in its lower state, respectively. The bosonic coherent states are labelled by a complex number, $z_{\bf k}$, and the qubit coherent states are labelled by an anti-commuting~(i.e.~Grassmannian) number, $\eta^{(n)}$. A non-interacting eigenstate of the two qubits and the electromagnetic field can be written as the direct product of coherent states, \begin{equation} |\{z_{\bf k}\},\eta^{(1)},\eta^{(2)} \rangle =|\{z_{\bf k}\}\rangle \otimes |\eta^{(1)}\rangle \otimes |\eta^{(2)} \rangle. \end{equation} In a path-integral approach the transition amplitude between chosen initial and final states is built by dividing the evolution into many infinitesimal time steps. The resulting path integral corresponds to the matrix element of the evolution operator. In the coherent-state path-integral representation the Hamiltonian is given by \begin{eqnarray}\label{hamiltonian2} \frac{\langle \bar{\eta}^{(1)}, \bar{\eta}^{(2)}, \{\bar{z}_{k}\} | H | \eta^{\prime(1)}, \eta^{\prime(2)}, \{z^{\prime}_{k}\} \rangle}{\exp[\bar{\eta}^{(1)} \eta^{\prime(1)} +\bar{\eta}^{(2)} \eta^{\prime(2)} +\sum_{\bf k} \bar{z}_{k} z^{\prime}_{k}]}= \hbar \omega_o \sum_n \bar{\eta}^{(n)} \eta^{\prime(n)} + \hbar \sum_{\bf k} \left[ \omega_{{\bf k}} \bar{z}_{\bf k} z^{\prime}_{\bf k} + \sum_n \left(\lambda_{{\bf k}}^{(n)} \bar{\eta}^{(n)} z^{\prime}_{\bf k} +\bar{\lambda}_{{\bf k}}^{(n)} \bar{z}_{\bf k} \eta^{\prime(n)} \right) \right].
\end{eqnarray} Spatial dependence is carried by the coupling constants, \begin{eqnarray}\label{spatialdependence} \lambda_{\bf k}^{(n)} = \frac{\lambda^{(n)}}{\omega_{\bf k}} e^{-i{\bf k}\cdot{\bf r}_n} \\ \bar{\lambda}_{\bf k}^{(n)} = \frac{\bar{\lambda}^{(n)}}{\omega_{\bf k}} e^{+i{\bf k}\cdot{\bf r}_n}. \end{eqnarray} The transition amplitude for $N$ atoms after the $j$-th infinitesimal time step is \begin{equation} \label{jstepprop} K(j\epsilon,0) = \exp\bigg\{ \sum_n \bar{\eta}_{j}^{(n)} \bigg[\sum_m \psi_j^{(nm)}\eta_{0}^{(m)} +\sum_{{\bf k}{\bf l}} g_{j{\bf k}{\bf l}}^{(n)}z_{0{\bf l}}\bigg] +\sum_{\bf k} \bar{z}_{j{\bf k}} \bigg[\sum_{\bf l} f_{j{\bf k}{\bf l}}z_{0{\bf l}} +\sum_m \phi_{j{\bf k}}^{(m)}\eta_{0}^{(m)}\bigg] \bigg\}. \end{equation} The functions in the path integral are determined by the finite-difference equations \begin{equation} \label{inductive1} \begin{array}{lll} \psi_{j+1}^{(nm)} = (1-i \omega_o \epsilon) \psi_{j}^{(nm)} -i\epsilon\sum_{{\bf k}} \lambda_{j{\bf k}}^{(n)} \phi_{j{\bf k}}^{(m)} && \psi_0^{(nm)} =\delta_{nm} \\ \phi_{j+1,{\bf k}}^{(m)} = (1-i \omega_{{\bf k}}\epsilon)\phi_{j{\bf k}}^{(m)} -i\epsilon \sum_n \bar{\lambda}_{j{\bf k}}^{(n)} \psi_{j}^{(nm)} && \phi_{0{\bf k}}^{(m)} =0 \end{array} \end{equation} \begin{equation} \label{inductive2} \begin{array}{lll} g_{j+1,{\bf k}{\bf l}}^{(n)} = (1-i \omega_o\epsilon) g_{j{\bf k}{\bf l}}^{(n)} -i\epsilon \lambda_{j{\bf k}}^{(n)} f_{j{\bf k}{\bf l}} && g_{0{\bf k}{\bf l}}^{(n)} = 0 \\ f_{j+1,{\bf k}{\bf l}} = (1-i\omega_{{\bf k}}\epsilon) f_{j{\bf k}{\bf l}} -i\epsilon \sum_m \bar{\lambda}_{j{\bf k}}^{(m)} \sum_{\bf q} g_{j{\bf q}{\bf l}}^{(m)} && f_{0{\bf k}{\bf l}} =\delta_{{\bf k}{\bf l}}. \end{array} \end{equation} \subsection{Reduced Density Matrix} The reduced density matrix of the two 2LAs is \begin{equation} \label{evolved1} \rho(t) = \int d\mu(\eta_0^{(2)})d\mu(\eta_0^{\prime(2)})d\mu(\eta_0^{(1)})d\mu(\eta_0^{\prime(1)}) \rho(0) J_R(t,0).
\end{equation} The dynamics of this open system is described by the density matrix propagator \begin{equation}\label{reduced propagator} J_R(t,0)\vert_{T=0} = \exp\bigg\{ \sum_{nm} \bigg[\bar{\eta}_t^{(n)} \psi(t)^{(nm)} \eta_0^{(m)} +\bar{\eta}_0^{\prime(n)} \bar{\psi}^{\prime}(t)^{(nm)} \eta^{\prime(m)}_t +\bar{\eta}_0^{\prime(n)} \sum_{\bf k} \bar{\phi}^{\prime}_{k}(t) \phi_{k}(t) \eta_0^{(m)}\bigg] \bigg\} \end{equation} A general initial two-qubit density matrix written in the Grassmann representation is: \begin{eqnarray} \rho(0)= \left( \begin{array}{ccccccc} 1 && \bar{\eta}_0^{(2)} && \bar{\eta}_0^{(1)} && \bar{\eta}_0^{(1)}\bar{\eta}_0^{(2)}\end{array} \right) \left( \begin{array}{lllllll} \rho^{00}_{00} && \rho^{00}_{01} && \rho^{00}_{10} && \rho^{00}_{11} \\ \\ \rho^{01}_{00} && \rho^{01}_{01} && \rho^{01}_{10} && \rho^{01}_{11} \\ \\ \rho^{10}_{00} && \rho^{10}_{01} && \rho^{10}_{10} && \rho^{10}_{11} \\ \\ \rho^{11}_{00} && \rho^{11}_{01} && \rho^{11}_{10} && \rho^{11}_{11} \end{array} \right) \left( \begin{array}{c} 1 \\ \\ \eta_0^{\prime(2)} \\ \\ \eta_0^{\prime(1)} \\ \\ \eta_0^{\prime(1)}\eta_0^{\prime(2)} \end{array} \right). \end{eqnarray} In addition to Eqs. 
(\ref{inductive1})--(\ref{inductive2}), we derive finite-difference equations for the expanded functions appearing in the reduced density matrix elements: \begin{eqnarray} \psi_{j+1}^{(ab)}\psi_{j+1}^{(nm)} &=& (1-2i\omega_o \epsilon)\psi_{j}^{(ab)}\psi_{j}^{(nm)} -i\epsilon\sum_{{\bf k}} \lambda_{j{\bf k}}^{(n)} \phi_{j{\bf k}}^{(m)}\psi_{j}^{(ab)} -i\epsilon\sum_{{\bf k}} \lambda_{j{\bf k}}^{(a)} \phi_{j{\bf k}}^{(b)}\psi_{j}^{(nm)} \\ \psi_{j+1}^{(ab)}\phi_{j+1,{\bf k}}^{(m)} &=& (1-i\omega_o\epsilon-i\omega_{\bf k}\epsilon)\psi_{j}^{(ab)}\phi_{j{\bf k}}^{(m)} -i\epsilon\sum_{n} \bar{\lambda}_{j{\bf k}}^{(n)} \psi_{j}^{(nm)}\psi_{j}^{(ab)} -i\epsilon\sum_{{\bf l}} \lambda_{j{\bf l}}^{(a)} \phi_{j{\bf l}}^{(b)}\phi_{j{\bf k}}^{(m)} \\ \phi_{j+1,{\bf l}}^{(b)}\phi_{j+1,{\bf k}}^{(m)} &=& (1-i\omega_{\bf l}\epsilon-i\omega_{\bf k}\epsilon)\phi_{j{\bf l}}^{(b)}\phi_{j{\bf k}}^{(m)} -i\epsilon\sum_{n} \bar{\lambda}_{j{\bf k}}^{(n)} \psi_{j}^{(nm)}\phi_{j{\bf l}}^{(b)} -i\epsilon\sum_{a} \bar{\lambda}_{j{\bf l}}^{(a)} \psi_{j}^{(ab)}\phi_{j{\bf k}}^{(m)} \end{eqnarray} For a two-atom system, it is convenient to represent the functions appearing in the propagator by column vectors, \begin{eqnarray} \label{columnvec1} [\psi] &=& \left( \begin{array}{c} \psi^{(22)} \\ \psi^{(21)} \\ \psi^{(12)} \\ \psi^{(11)} \end{array}\right) \;\;\;\;\;\;\;\; \label{columnvec2} [\phi_{\bf k}] = \left( \begin{array}{c} \phi_{\bf k}^{(2)} \\ \phi_{\bf k}^{(1)} \end{array}\right) \;\;\;\;\;\;\; \label{columnvec3} [\psi\psi] = \left( \begin{array}{c} \psi^{(11)}\psi^{(22)} \\ \psi^{(21)}\psi^{(12)} \end{array}\right) \\ \label{columnvec4} [\psi\phi_{{\bf k}}] &=& \left( \begin{array}{c} \psi^{(22)}\phi_{\bf k}^{(1)} \\ \psi^{(21)}\phi_{\bf k}^{(2)} \\ \psi^{(12)}\phi_{\bf k}^{(1)} \\ \psi^{(11)}\phi_{\bf k}^{(2)} \end{array}\right) \;\;\;\;\;\;\;\;\; \label{columnvec5} [\phi\phi_{{\bf k}{\bf l}}] = \left( \begin{array}{c} \phi_{\bf k}^{(1)}\phi_{\bf l}^{(2)} \\ \phi_{\bf k}^{(2)}\phi_{\bf l}^{(1)}
\end{array}\right). \end{eqnarray} In the continuum limit these finite-difference equations yield a set of differential equations for the column vectors above: \begin{eqnarray} \label{functional1} \frac{d}{dt} [\psi] &=& -i\omega_o [\psi] -i\sum_{\bf k} G^{\dagger}_{\bf k} [\phi_{{\bf k}}] \\ \label{functional2} \frac{d}{dt} [\phi_{\bf k}] &=& -i\omega_{\bf k} [\phi_{\bf k}] -iG_{\bf k} [\psi] \\ \label{functional3} \frac{d}{dt} [\psi\psi] &=& -2i\omega_o [\psi\psi] -i\sum_{\bf k} A_{\bf k} [\psi\phi_{{\bf k}}] \\ \label{functional4} \frac{d}{dt} [\psi\phi_{{\bf k}}] &=& -i(\omega_o +\omega_{\bf k}) [\psi\phi_{{\bf k}}] -i A^{\dagger}_{\bf k} [\psi\psi] -i\sum_{\bf l} L_{\bf l} [\phi\phi_{{\bf k}{\bf l}}]\\ \label{functional5} \frac{d}{dt} [\phi\phi_{{\bf k}{\bf l}}] &=& -i(\omega_{\bf l} +\omega_{\bf k}) [\phi\phi_{{\bf k}{\bf l}}] -i G_{{\bf l}} [\psi\phi_{{\bf k}}] -i D_{{\bf k}} [\psi\phi_{{\bf l}}], \end{eqnarray} in terms of the matrices \begin{eqnarray} G_{\bf k} &=& \left(\begin{array}{cccc} \bar{\lambda}_{\bf k}^{(2)} & 0 & \bar{\lambda}_{\bf k}^{(1)} & 0 \\ 0 & \bar{\lambda}_{\bf k}^{(2)} & 0 & \bar{\lambda}_{\bf k}^{(1)} \end{array}\right) \;\;\;\;\;\;\; A_{\bf k} = \left(\begin{array}{cccc} \lambda_{\bf k}^{(1)} & 0 & 0 & \lambda_{\bf k}^{(2)} \\ 0 & \lambda_{\bf k}^{(1)} & \lambda_{\bf k}^{(2)} & 0 \end{array}\right) \\ D_{{\bf k}} &=& \left(\begin{array}{cccc} 0 & \bar{\lambda}_{\bf k}^{(2)} & 0 & \bar{\lambda}_{\bf k}^{(1)}\\ \bar{\lambda}_{\bf k}^{(2)} & 0 & \bar{\lambda}_{\bf k}^{(1)} & 0 \end{array}\right) \;\;\;\;\;\;\; L_{\bf l} = \left(\begin{array}{cc} 0 & \lambda_{\bf l}^{(2)} \\ 0 & \lambda_{\bf l}^{(2)} \\ \lambda_{\bf l}^{(1)} & 0 \\ \lambda_{\bf l}^{(1)} & 0 \end{array}\right). \end{eqnarray} The solution of Eqs. (\ref{functional1}--\ref{functional5}) allows one to fully reconstruct the propagating amplitude and, from it, the elements of the reduced density matrix for the qubits.
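As a sanity check of this scheme, note that for a single qubit coupled to a single mode the pair (\ref{functional1})--(\ref{functional2}) reduces to a two-component Schr\"odinger equation in the one-excitation sector, whose exact (Rabi-type) solution is known. The sketch below iterates the Euler discretization of Eq. (\ref{inductive1}) for that toy case and compares it with the exact amplitude; all parameter values are illustrative, not taken from the text.

```python
import cmath, math

# Illustrative parameters: one qubit (w0) and one field mode (wk), coupling lam.
w0, wk, lam = 1.0, 1.2, 0.05
eps, steps = 1e-4, 20000            # step size epsilon, total time t = 2.0

# Euler iteration of Eq. (inductive1), specialized to one qubit / one mode:
psi, phi = 1.0 + 0j, 0.0 + 0j       # psi_0 = 1, phi_0 = 0
for _ in range(steps):
    psi, phi = (psi + eps * (-1j * w0 * psi - 1j * lam * phi),
                phi + eps * (-1j * wk * phi - 1j * lam * psi))

# Exact one-excitation amplitude: <up| exp(-iHt) |up> with H = [[w0, lam],
# [lam, wk]], obtained by diagonalizing the 2x2 Hermitian matrix by hand.
t = eps * steps
mean, half = (w0 + wk) / 2, (w0 - wk) / 2
Om = math.sqrt(half**2 + lam**2)    # Rabi frequency
psi_exact = cmath.exp(-1j * mean * t) * (cmath.cos(Om * t)
                                         - 1j * (half / Om) * cmath.sin(Om * t))

print(abs(psi - psi_exact))         # small Euler discretization error
```

The full coupled system (\ref{functional1}--\ref{functional5}) is integrated in the same way, mode by mode.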
These equations can be solved by implementing an approximation scheme similar to that employed for the operator method in the main text. We do not provide the details, but only note that the results of the two methods coincide. \end{appendix} \end{document}
\begin{document} \newcommand{\ket}[1]{\mbox{$|#1\rangle$}} \newcommand{\bra}[1]{\mbox{$\langle#1|$}} \newcommand{\ketbra}[2]{\mbox{$|#1\rangle \langle#2|$}} \newcommand{\braket}[2]{\mbox{$\langle#1|#2\rangle$}} \newcommand{\bracket}[3]{\mbox{$\langle#1|#2|#3\rangle$}}
\newcommand{\op}[2]{\mbox{$|#1\rangle\langle#2|$}} \newcommand{\hak}[1]{\left[ #1 \right]} \newcommand{\vin}[1]{\langle #1 \rangle} \newcommand{\abs}[1]{\left| #1 \right|} \newcommand{\tes}[1]{\left( #1 \right)} \newcommand{\braces}[1]{\left\{ #1 \right\}} \newcommand{\initial}[2]{|\psi^#1_#2(0)\rangle} \newcommand{\initialbra}[2]{\langle\psi^#1_#2(0)} \newcommand{\decohered}[2]{|\psi^#1_#2(\gamma)\rangle} \newcommand{\decoheredbra}[2]{\langle\psi^#1_#2(\gamma)} \newcommand{\gendecohered}[1]{|\psi^G(#1)\rangle} \newcommand{\gendecoheredbra}[1]{\langle\psi^G(#1)} \newcommand{\projector}[2]{|\xi^#1_#2 \rangle} \title[Non-monotonic projection probabilities]{Non-monotonic dependence of projection probabilities as a function of distinguishability} \author{Gunnar Bj\"{o}rk, Saroosh Shabbir} \address{Department of Applied Physics, Royal Institute of Technology (KTH)\\ AlbaNova University Center, SE - 106 91 Stockholm, Sweden} \ead{[email protected]} \date{\today} \begin{abstract} Typically, quantum superpositions, and thus measurement projections of quantum states involving interference, decrease (or increase) monotonically as a function of increasing distinguishability. Distinguishability, in turn, can be a consequence of decoherence, for example caused by the (simultaneous) loss of excitation or due to inadequate mode matching (whether deliberate or inadvertent). It is known that for some cases of multi-photon interference, non-monotonic decay of projection probabilities occurs; this has so far been attributed to interference between four or more photons. We show that such non-monotonic behaviour of projection probabilities is not unnatural, and can also occur for single-photon and even semiclassical states. Thus, while the effect traces its roots to indistinguishability and thus interference, the states for which it can be observed need not have particular quantum features.
\end{abstract} \maketitle \section{Introduction} Interference is the manifestation of the quantum mechanical law stating that when a final state can be reached through two or more indistinguishable ``paths'', the path probability amplitudes should be added before any final-state probability is calculated. Since probability amplitudes are represented by complex numbers and may change signs, path probability amplitudes can thus ``annihilate'' each other, resulting in destructive interference. On the contrary, if the paths to the final state are distinguishable, even if only in principle, then the probabilities and not the amplitudes of the paths should be added. Since probabilities are real and non-negative, adding probabilities cannot result in destructive interference. Perhaps the best known experiment to demonstrate the general validity of this law, i.e., of interference due to indistinguishability, is the Hanbury Brown and Twiss stellar interferometer \cite{HBT,HBT2}. The surprise for the scientific community at the time was that intensities interfered, rather than wave amplitudes. However, this experiment firmly established that path indistinguishability is what causes interference.\\ Thus, it is perhaps to be expected that constructive (destructive) interference probabilities should decrease (increase) monotonically with increasing distinguishability. In either case, one might expect a monotonic behaviour when interference is probed as a function of distinguishability. The Hong-Ou-Mandel (HOM) experiment \cite{HOM} is a prototype for examining the interference of increasingly distinguishable particles, and it indeed shows a monotonic dependence on distinguishability.
However, recent extensions of HOM interference to higher photon numbers \cite{Ra, Tichy} have shown that under certain circumstances the measured interference signal varies non-monotonically with increasing distinguishability of the interfering particles (photons in this case).\\ The quantitative relationship between path distinguishability and interference visibility has been studied in the past in different contexts \cite{Jaeger,Englert,Durr,Bjork,Gavenda}. Here, we do not concern ourselves with visibility, but only look at how certain (projective) measurements of interference depend on the degree of distinguishability, and specifically whether they vary monotonically with increasing distinguishability or not. We thus do not study interference as a function of the relative phase between the paths, but for a fixed phase. Moreover, while first-order interference oscillates sinusoidally with a path relative-phase period of $2 \pi$, the higher (even) order interference functions we will study (like the Hong-Ou-Mandel dip) do not oscillate as a function of the path relative phase.\\ The authors of Refs. \cite{Ra, Tichy} associate distinguishable events with classical ``particles'' (or ``paths'') and indistinguishable ones with quantum trajectories. They subsequently attribute the non-monotonicity to a quantum-to-classical transition of multi-particle interference. However, our analysis shows that, contrary to what one may believe, interference probabilities can be non-monotonic for single particles as well, whether they are bosons or fermions. Moreover, the interference probabilities analysed and measured in \cite{Ra} were sums over final events that are in principle distinguishable, some increasing and some decreasing with increasing distinguishability. Taking the analysis further, we shall demonstrate that, more fundamentally, isolated probabilities (projection probabilities onto pure, final states) can also vary non-monotonically.
Finally, we shall show that this behaviour also occurs classically, specifically with a classical light source, both with conventional photodetectors and with single-photon detectors. Thus we find that the non-monotonic behaviour is natural for all interfering particles or fields and is, in general, not a manifestation of a classical-to-quantum transition, or ``exclusive to setups with four or more particles that interfere simultaneously'' \cite{Ra}. \section{Analysis of the Hong-Ou-Mandel experiment} The Hong-Ou-Mandel (HOM) dip \cite{HOM} demonstrates how projection probabilities change as a function of distinguishability, where distinguishability is introduced by delaying one input photon with respect to the other in the second input port of a beam splitter (Fig.~\ref{Fig: schematic}a). The delay causes the loss of interference of the photons at the beam splitter, since time resolution of the output can in principle convey ``which-path'' information. \begin{flushleft} \begin{figure} \caption{\textbf{a.} Schematic of the Hong-Ou-Mandel experiment. \textbf{b.} Early (left) and late (right) photon modes for each degree of freedom (red and green). Note that the t-axis represents the time at which the photons impinge on the beam splitter. $\gamma$ is the distinguishability transformation, which increases downwards, so the top (bottom) figure represents the case when the temporal modes are perfectly indistinguishable (distinguishable).} \label{Fig: schematic} \end{figure} \end{flushleft} We model the HOM experiment with the four-mode input state \begin{equation} \decohered{{\scriptsize \textrm{HOM}}}{2} = \cos(\gamma)\ket{1,1}\otimes \ket{0,0} + \sin(\gamma)\ket{1,0}\otimes \ket{0,1},\end{equation} where $\gamma$ parameterizes the delay, and hence distinguishability, between the two photons and ranges over $0 \leq \gamma \leq \pi/2$; the subscript 2 indicates that it is a two-photon state.
The two states to the left of the tensor multiplication sign refer to early photon modes, indistinguishable except for one degree of freedom such as path (illustrated in Fig.~\ref{Fig: schematic}b) or polarization, whereas the two states to the right refer to late photon modes, perfectly distinguishable from the early modes, but indistinguishable among themselves except for, e.g., the path. In this case, the time of arrival can fully determine which photon impinges on the beam splitter first, hence the designations early and late for the two photon modes. Thus when $\gamma=0$ the input photons are perfectly indistinguishable and can display maximal interference when the two modes interact in a 50:50 beam splitter described by the unitary operator $\hat{U}$, \emph{viz.}, $\hat{U}\ket{1,1}\otimes \ket{0,0} = (\ket{2,0}-\ket{0,2})\otimes \ket{0,0}/\sqrt{2}$.\\ Qualitatively, the indistinguishability of the input state can be defined as its projection probability onto the maximally interfering state, i.e., $I \equiv | \initialbra{{\scriptsize \textrm{HOM}}}{2}\decohered{{\scriptsize \textrm{HOM}}}{2}|^2=\cos^2(\gamma).$ Note that $I$ is not a good measure in quantitative terms: for different perfectly distinguishable modes $I$ may not be zero, as we shall see below. However, for our purposes this simple measure suffices, as it is a monotonically decreasing function of $\gamma$ for all our investigated methods of increasing distinguishability. The output state after the beam splitter is \begin{eqnarray}\fl \hat{U} \decohered{{\scriptsize \textrm{HOM}}}{2}=\frac{\cos(\gamma)}{\sqrt{2}}(\ket{2,0}-\ket{0,2})\otimes \ket{0,0} \nonumber \\ + \frac{\sin(\gamma)}{2}(\ket{1,0}+\ket{0,1})\otimes (\ket{1,0}-\ket{0,1}).\end{eqnarray} Unfortunately, it is rather difficult to measure the projection onto the state $\hat{U}\initial{{\scriptsize \textrm{HOM}}}{2}$.
The remedy in a typical experiment is instead to project onto $\ket{1,0,0,1}\bra{1,0,0,1}$ and thus measure the \textit{distinguishability}. The state $\ket{1,0,0,1}$ represents the case when an early photon takes the first ``path'' and a late photon takes the second ``path''. If the photons are measured in coincidence, but the coherence length and the time delay of the modes are much shorter than the coincidence measurement window, they will still be recorded as coincident. This is a typical experimental situation. In this case the state $\ket{0,1,1,0}$ will also be detected as coincident, and although the two states are orthogonal they will both contribute to this measurement of interference. Mathematically, since the events are distinguishable in practice (by, e.g., noting in which mode the early photon arrived) the respective measurement probabilities should be added. Then, the output projection probability can be written as \begin{equation} \fl |\bra{1,0,0,1}\hat{U} \decohered{{\scriptsize \textrm{HOM}}}{2}|^2 + |\bra{0,1,1,0}\hat{U} \decohered{{\scriptsize \textrm{HOM}}}{2}|^2 = \frac{\sin^2(\gamma)}{2} = \frac{1-I}{2}. \end{equation} Thus, if $I$ is monotonic, so is this measurement of interference.\\ More generally, consider a state $\gendecohered{\gamma}$ of one or more photons, where $\gamma$ parameterizes the degree of distinguishability. For $\gamma=0$ the input modes are perfectly indistinguishable (except for one degree of freedom) when they interfere in some device described by the unitary transformation $\hat{U}$. If the input modes are made partially distinguishable, e.g., by shifting the modes in time, in polarization or in frequency, so that a state with $\gamma \neq 0$ is prepared, then the indistinguishability can be defined as the state's overlap with the maximally interfering state, i.e., $I_{\scriptsize \textrm{HOM}} \equiv |\gendecoheredbra{0}\gendecohered{\gamma}|^2$.
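Since every mode here carries at most one photon, the projection amplitudes can be read off as coefficients of creation-operator monomials (bosonic creation operators commute among themselves, so ordinary polynomials suffice). A small symbolic sketch of the coincidence calculation above; the symbol names for the early ($a$, $b$), late ($\tilde b$) input modes and the output modes ($c$, $d$, $\tilde c$, $\tilde d$) are ours, not the text's:

```python
import sympy as sp

g = sp.symbols('gamma', real=True)
c, d, ct, dt = sp.symbols('c d ct dt')   # output-mode creation operators

s2 = sp.sqrt(2)
# 50:50 beam splitter images of the input creation operators:
A_early = (c + d) / s2    # early photon, first path
B_early = (c - d) / s2    # early photon, second path
B_late  = (ct - dt) / s2  # late photon, second path

# |psi_HOM(gamma)> = cos(g)|1,1>|0,0> + sin(g)|1,0>|0,1> as a polynomial:
psi_out = sp.expand(sp.cos(g) * A_early * B_early
                    + sp.sin(g) * A_early * B_late)

# Coincidence signal: |<1,0,0,1|...>|^2 + |<0,1,1,0|...>|^2
P = psi_out.coeff(c * dt)**2 + psi_out.coeff(d * ct)**2
print(sp.simplify(P - sp.sin(g)**2 / 2))  # 0, i.e. P = (1 - I)/2
```

At $\gamma = 0$ the coincidence terms cancel entirely, which is the HOM dip.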
After the unitary (interference) transformation, it is natural to define the degree of indistinguishability as the output state's projection probability onto the output state exhibiting maximal interference, i.e., $\hat{U}\gendecohered{0}$. In the following we shall define $\hat{U}\gendecohered{0} \gendecoheredbra{0}|\hat{U}^\dagger$ as the \textit{proper} indistinguishability projector.\\ Hence, independent of the exact nature of the interference between the photons, the degree of indistinguishability at the output will be $I_{\scriptsize \textrm{out}} = |\gendecoheredbra{0}|\hat{U}^\dagger \hat{U}\gendecohered{\gamma}|^2 = I_{\scriptsize \textrm{in}} $. From this, almost trivial, consideration we conclude that if the indistinguishability of the input varies in a monotonic fashion, then the \textit{properly measured} interference, and thus the indistinguishability of the output modes, also varies monotonically. However, if one chooses a different measurement to probe the interference propensity, then one may obtain a non-monotonic probability as a function of distinguishability, e.g., by taking a sum of projections in which some terms increase with distinguishability while others decrease. We demonstrate this in the two photon-pair HOM experiment analysed below.\\ \section{Two photon-pair HOM experiment} The case where a pair of photons is delayed with respect to another pair at the input of the beam splitter \cite{Ra,Tichy} can be similarly modelled as \begin{eqnarray} \fl \decohered{{\scriptsize \textrm{HOM}}}{4} = \cos^2(\gamma)\ket{2,2}\otimes \ket{0,0} + \sqrt{2}\cos(\gamma)\sin(\gamma)\ket{2,1}\otimes \ket{0,1} \nonumber \\+ \sin^2(\gamma)\ket{2,0}\otimes \ket{0,2}.\end{eqnarray} Using the ``recipe'' above, the indistinguishability between the two pairs of photons can be quantified by $I = \cos^4(\gamma)$. The indistinguishability thus decreases monotonically with $\gamma$ in the relevant interval $0\leq \gamma\leq \pi/2$.
In the case of two photons in each output, the coincidence detection window can again be set so that the photons in the output states $\ket{2,2,0,0}$, $\ket{2,1,0,1}$, $\ket{1,2,1,0}$, $\ket{2,0,0,2}$, $\ket{1,1,1,1}$, and $\ket{0,2,2,0}$ are all defined to be coincident (i.e., two photons are detected in each of the two spatial or polarization modes within the coincidence window). The individual probabilities for these projections are as follows: \begin{eqnarray} |\bra{2,2,0,0}\hat{U} \ket{\xi(\gamma)} |^2 &=& \frac{\cos^4(\gamma)}{4}, \nonumber \\ |\bra{2,1,0,1}\hat{U} \ket{\xi(\gamma)} |^2 &=& \frac{\cos^2(\gamma)\sin^2(\gamma)}{8}, \nonumber \\ |\bra{1,2,1,0}\hat{U} \ket{\xi(\gamma)} |^2 &=& \frac{\cos^2(\gamma)\sin^2(\gamma)}{8}, \label{Eq: projs 2 pair}\\ |\bra{2,0,0,2}\hat{U} \ket{\xi(\gamma)} |^2 &=& \frac{\sin^4(\gamma)}{16}, \nonumber \\ |\bra{1,1,1,1}\hat{U} \ket{\xi(\gamma)} |^2 &=& \frac{\sin^4(\gamma)}{4}, \nonumber \\ |\bra{0,2,2,0}\hat{U} \ket{\xi(\gamma)} |^2 &=& \frac{\sin^4(\gamma)}{16}. \nonumber \end{eqnarray} \begin{figure} \caption{Probabilities of the $(m,n,\tilde{m},\tilde{n})$ events for the two-pair input HOM experiment, where the latter two digits with tilde refer to late output modes of the beam splitter. \textbf{a.} Sum of (2,2,0,0), (2,1,0,1), (1,2,1,0), (2,0,0,2), (1,1,1,1) and (0,2,2,0) event probabilities. \textbf{b.} (2,2,0,0) event probability. \textbf{c.} Sum of (4,0,0,0), (3,0,1,0) and (2,0,2,0) event probabilities. \textbf{d.} (4,0,0,0) event probability.} \label{Fig: projection probabilities} \end{figure} We see that only the first of these projection probabilities is proportional to the indistinguishability of the input state.
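A minimal numerical sketch (our addition) of these six probabilities confirms that only the first one tracks $I=\cos^4(\gamma)$, and that their sum collapses to a three-term expression:

```python
import numpy as np

# The six coincidence projection probabilities listed in Eq. (projs 2 pair)
def projection_probs(g):
    c2, s2 = np.cos(g)**2, np.sin(g)**2
    return [c2**2 / 4, c2 * s2 / 8, c2 * s2 / 8,
            s2**2 / 16, s2**2 / 4, s2**2 / 16]

for g in np.linspace(0.0, np.pi / 2, 51):
    probs = projection_probs(g)
    # only the first probability is proportional to I = cos^4(gamma)
    assert abs(probs[0] - np.cos(g)**4 / 4) < 1e-12
    # the sum collapses to cos^4/4 + cos^2 sin^2/4 + 3 sin^4/8
    total = (np.cos(g)**4 / 4 + (np.cos(g) * np.sin(g))**2 / 4
             + 3 * np.sin(g)**4 / 8)
    assert abs(sum(probs) - total) < 1e-12
```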
Collecting the terms contributing to the coincidence signal (under the assumption that the detection coincidence window $>$ the maximal time delay between the modes $>$ the coherence time) we get the coincidence count probability \begin{equation} P_4^{\scriptsize \textrm{HOM}} = \frac{\cos^4(\gamma)}{4} + \frac{\cos^2(\gamma)\sin^2(\gamma)}{4} + \frac{3 \sin^4(\gamma)}{8}.\end{equation} The probability in this case is not monotonic with respect to the distinguishability parameter $\gamma$, as can be seen in Fig. \ref{Fig: projection probabilities}, and as was experimentally verified in \cite{Ra}. To see this more clearly, the equation can be rewritten as \begin{equation} P_4^{\scriptsize \textrm{HOM}} = \frac{1}{8}\left (3 I -4 \sqrt{I} + 3 \right ).\end{equation} The function is non-monotonic partially because it measures a sum of terms, three growing with increasing distinguishability and one decreasing. Moreover, the second and third probabilities in (\ref{Eq: projs 2 pair}) are non-monotonic by themselves. For the bunching event (e.g., when all photons exit in the lower path or in the same polarization mode), the measured coincidence probability is the sum of projections onto the $\ket{4,0,0,0}$, $\ket{3,0,1,0}$, and $\ket{2,0,2,0}$ states. However, in this case the coincidence probability fortuitously turns out to be monotonic nonetheless \cite{Xiang}, as shown in Fig.~\ref{Fig: projection probabilities}. \\ \section{Distinguishability transformations for single particle states} We now turn to single particle states. In this case there is no difference between bosons and fermions, and for the latter this analysis is general and sufficient since, by the Pauli exclusion principle, fermions can only interfere into final states containing one excitation.
We show that, analogously to the deliberate distinguishability transformation in the two-photon HOM experiment, one can also prepare a single-particle state in an increasingly distinguishable manner in various ways: one way would be to send a single photon into a variable-reflectivity, lossless beam splitter. The larger the probability amplitude for the state corresponding to finding the photon in one of the arms as compared to the other, the more distinguishable the photon paths become. A similar method would be to prepare the photon in a linearly polarized mode. If the photon's polarization is diagonal to the horizontal-vertical (HV) polarization basis, then the two paths expressed in this basis are indistinguishable, but as the polarization of the photon is rotated towards, e.g., the horizontal direction, the paths become increasingly distinguishable in the HV basis. Such an initial state can be written as \begin{equation} \ket{\psi^D_1(\gamma)} = \cos(\pi/4 + \gamma/2) \ket{1,0} + \sin(\pi/4 + \gamma/2) \ket{0,1}. \label{Eq:Initial indistinguishability} \end{equation} For $\gamma=0$ it is not possible to distinguish or guess in which of the two modes one would ``find'' the particle, if one were to measure, while for $\gamma=\pi/2$ one can be confident, \textit{a priori} of any measurement, that the particle will be found in the second mode. This state is projected onto the projector state $ \ket{\xi_1} = \cos(\beta_n) \ket{1,0} + e^{-i \theta_n}\sin(\beta_n) \ket{0,1}, \label{Eq:Projector} $ where $\beta_n$ and $\theta_n$ are variables that can be used to implement any pure, two-mode, single-photon projector. Note that here, for simplicity, we have incorporated the beam splitter, or any other unitary transformation, into this projector, and so, referring to Fig.~\ref{Fig: schematic}, this projector state represents $\hat{U} \ket{\xi}$.
If the two modes are taken to be the horizontally and vertically, linearly polarized modes, the projector can be implemented through a variable birefringence plate implementing $\theta_n$, followed by a polarizer oriented with its transmission polarization at the angle $\beta_n$, which in turn is followed by a single-photon detector. The projection probability $P^D_1$ for this input state is \begin{eqnarray} \fl P^D_1 = \cos^2(\beta_n) \left [1-\sin(\gamma) \right]/2 + \left[\cos(\theta_n)\sin(2\beta_n) \cos(\gamma)\right]/2 \nonumber \\ + \sin^2(\beta_n) \left [1+\sin(\gamma)\right ]/2. \label{Eq:Projector D} \end{eqnarray} \begin{figure} \caption{\textbf{a.} The Stokes vector evolution for the different distinguishability models. Dotted (red) line indicates a deliberate distinguishability transformation due to a polarization rotation, Eq. (\ref{Eq:Initial indistinguishability}). Dashed (blue) line indicates the increase of distinguishability due to linear loss, Eq. (\ref{Eq:Initial loss}). Solid (black) line indicates increasing phase noise, Eq. (\ref{Eq:Initial phase noise}). \textbf{b.} Projection probabilities for $P_1^D$ (dotted, red) and $P_1^L$ (dashed, blue) as a function of distinguishability.} \label{Fig: Bloch} \end{figure} This projection probability is not monotonic with respect to $\gamma$ for certain values of the other parameters, as can be inferred from Fig.~\ref{Fig: Bloch} \textbf{a}. For polarized photons the figure shows the equatorial plane of the Poincar\'{e} sphere. For the projector state $\ket{\xi_1^D}=\cos(\pi/8) \ket{1,0}-\sin(\pi/8)\ket{0,1}$, corresponding to $\beta_n=\pi/8$ and $\theta_n=\pi$ and marked by the black dot in Fig.~\ref{Fig: Bloch} \textbf{a}, the overlap with the partially distinguishable state on the dotted, red line is zero where the thin black line emanating from the black dot and crossing the origin intersects the dotted curve. (The states, being antipodal on the Poincar\'{e} sphere, are orthogonal.)
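The geometric argument can be backed by a short numerical sketch (our addition; the helper names are ours), comparing the closed form of $P^D_1$ with the direct overlap $|\langle\xi_1|\psi^D_1(\gamma)\rangle|^2$ and locating the zero for $\beta_n=\pi/8$, $\theta_n=\pi$:

```python
import numpy as np

# Closed form of the projection probability, Eq. (Eq:Projector D)
def p_d1(g, beta, theta):
    return (np.cos(beta)**2 * (1 - np.sin(g)) / 2
            + np.cos(theta) * np.sin(2 * beta) * np.cos(g) / 2
            + np.sin(beta)**2 * (1 + np.sin(g)) / 2)

# Direct overlap of the input state with the projector state
def overlap(g, beta, theta):
    psi = np.array([np.cos(np.pi / 4 + g / 2), np.sin(np.pi / 4 + g / 2)])
    xi = np.array([np.cos(beta), np.exp(-1j * theta) * np.sin(beta)])
    return abs(np.conj(xi) @ psi)**2

beta, theta = np.pi / 8, np.pi          # the projector discussed in the text
for g in np.linspace(0.0, np.pi / 2, 51):
    assert abs(p_d1(g, beta, theta) - overlap(g, beta, theta)) < 1e-12
assert p_d1(np.pi / 4, beta, theta) < 1e-12   # zero minimum at gamma = pi/4
assert abs(p_d1(0.0, beta, theta) - p_d1(np.pi / 2, beta, theta)) < 1e-12
```

The final two assertions exhibit the non-monotonic dip: the probability starts and ends at the same nonzero value while vanishing in between.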
For states with either more or less distinguishability the overlap is finite since the states are non-orthogonal. Hence, the projection probability will be non-monotonic as a function of $\gamma$, with zero as a minimum, as can be seen in Fig.~\ref{Fig: Bloch} \textbf{b}. The projection measurement $\ket{\xi_1^D}\bra{\xi_1^D}$ corresponds to the interference between two spatial modes in an unbalanced beam splitter followed by a photo detector in one of the arms. In a polarization setting it corresponds to a half-wave plate oriented with its axis parallel to the linear polarization basis, followed by a polarizer set at 22.5 degrees and a photo detector. However, if the \emph{proper} distinguishability projector for this basis, $\ket{\xi^P_1}=(\ket{0,1} + \ket{1,0})/\sqrt{2}$, is chosen, then the measured probability equals the indistinguishability $I^D_1=\cos^2(\gamma/2)$, which goes from 1 to 0.5 monotonically. Other models of distinguishability can be analysed in a similar fashion. In the case where one of the modes experiences linear loss, if the loss is large enough we can be convinced that any detected photon must have resided in ``the other'' mode. In this case, to preserve unitarity and normalization, at least a three-mode model is needed. The input state can thus be modelled as \begin{equation} \ket{\psi^L_1(\gamma)} = \left [\cos(\gamma) \ket{1,0,0} + \ket{0,1,0}+ \sin(\gamma) \ket{0,0,1} \right ]/\sqrt{2}. \label{Eq:Initial loss} \end{equation} If we assume that any excitation in the rightmost (ancillary) loss mode remains undetected, the projection probability $P^L_1$ onto the single-photon projector $\ket{\xi_1}\bra{\xi_1}$ is \begin{equation} P^L_1 = \left [ \cos^2(\gamma)\cos^2(\beta_n) + \cos(\theta_n)\sin(2\beta_n)\cos(\gamma) + \sin^2(\beta_n) \right ]/2. \label{Eq:Projector B}\end{equation} This projection probability is also not monotonic with respect to the parameter $\gamma$ for certain values of the other parameters, as can easily be seen from Fig.
\ref{Fig: Bloch} \textbf{a} and \textbf{b}, blue, dashed line (which is also drawn for $\beta_n=\pi/8$ and $\theta_n=\pi$). The indistinguishability $I^L_1=[1+\cos(\gamma)]^2/4$, however, is monotonic in the relevant interval $0 \leq \gamma \leq \pi/2$. As a third model, we shall analyse distinguishability brought about by phase noise. The initial state can be written as \begin{equation} \ket{\psi^N_1} = \left( \ket{1,0} + e^{i \phi_n} \ket{0,1} \right)/\sqrt{2}, \label{Eq:Initial phase noise} \end{equation} where $\phi_n$ is a random variable representing the (relative-)phase fluctuations between the modes. The projection probability of this state onto $\ket{\xi_1}\bra{\xi_1}$ is \begin{equation} P^N_1 = \left [ 1 + \langle \cos(\theta_n + \phi_n)\rangle \sin(2\beta_n) \right] /2, \label{Eq:Projector with noise} \end{equation} where $\langle \ \rangle$ denotes an ensemble average. If we take $\phi_n$ to have a symmetric distribution around a zero mean, we have $\langle \cos(\theta_n + \phi_n)\rangle = \cos(\theta_n)\langle \cos(\phi_n)\rangle$. In the following we are not interested in modelling any particular physical source of phase noise, but can instead simply use the parametrization $\langle \cos(\phi_n)\rangle = \cos(\gamma)$, $\gamma \in [0,\pi/2]$, to characterize the degree of distinguishability between the two modes due to the fluctuating relative phase. Thus we arrive at our final formula \begin{equation} P^N_1 = \left [ 1 + \cos(\theta_n)\sin(2\beta_n)\cos(\gamma) \right ]/2. \label{Eq:Projector A}\end{equation} From this formula we can see that since $\cos(\gamma)$ is a monotonic function in the interval $[0,\pi/2]$, so is $P^N_1$, irrespective of the value of any of the other parameters. Thus, for this model any single-photon projection probability described by $\ket{\xi_1}$ will either increase or decrease monotonically, or be flat. This can be seen in Fig.
\ref{Fig: Bloch} \textbf{a}, where the density matrix is transformed along a radial ray in the Poincar\'{e} sphere as a function of increasing phase noise or distinguishability. The indistinguishability in this case is $I^N_1=[1+\cos(\gamma)]/2$, which is monotonic when $0 \leq \gamma \leq \pi/2$.\\ It is well known that, when analysing linear interference, one can get the classical results by extrapolating the case of single-photon input, i.e., replacing the input probability amplitudes with field amplitudes and interpreting the output probability as a mean output intensity. From this analogy it is clear that if classical fields were used with the mode distinguishability models above, and the corresponding classical interference measurements were made, the same non-monotonic output intensity curves would be obtained. This shows that non-monotonic interference probability does not, in general, stem from a quantum-to-classical transition. \section{Experiment} The analysis and the predictions of non-monotonicity for the single-photon states above are simple enough that we see no need to prove them by experiment. Here we shall therefore discuss the more interesting case of a distinguishability (mode) transformation for the two-photon case, experimentally implemented as a polarization rotation. We saw in the analysis of the two-photon Hong-Ou-Mandel experiment that a monotonic behavior followed. This is not the case for all two-photon experiments, as we shall show. To this end, two photons prepared in a diagonal, linearly polarized mode will have the maximum indistinguishability with respect to the HV modes. If the diagonally polarized state is rotated to become, e.g., horizontally polarized, the distinguishability becomes perfect.
The initial two-mode state can be written as \begin{eqnarray} \fl \ket{\psi^D_2} = \sin^2(\pi/4 + \gamma/2) \ket{2,0} + \sqrt{2} \sin(\pi/4 + \gamma/2)\cos(\pi/4 + \gamma/2) \ket{1,1} \nonumber \\+ \cos^2(\pi/4 + \gamma/2) \ket{0,2}, \label{Eq:Initial two-photon indistinguishability} \end{eqnarray} where $\gamma$ is the relative angle between the input and output polarizations. The two-photon interference projector we shall use for this state is \begin{equation} \ket{\xi^D_2} = \frac{1}{\sqrt{3}}\left ( \sqrt{2} \ket{2,0} + \ket{1,1} \right ). \label{Eq:Two photon projector D} \end{equation} It is deliberately chosen, on the one hand, to involve interference between the two states $\ket{2,0}$ and $\ket{1,1}$ and, on the other hand, not to be the proper indistinguishability projector. Computing the projection probability, one gets \begin{equation} P^D_2 = \frac{4}{3} \sin^2(\pi/4 + \gamma/2)\cos^2(\gamma/2). \label{Eq: prediction}\end{equation} For a diagonally polarized input state, $\gamma=0$, one gets $P^D_2=2/3$. \begin{figure}\label{Fig: setup} \end{figure} In \cite{Hofmann} it was shown that any $N$-photon, two-mode projector can be implemented probabilistically by writing the state, expressed in terms of creation operators of the two modes, as a product of single-photon creation operators. For the state $\ket{\xi^D_2}$ above the decomposition becomes \begin{equation} \ket{\xi^D_2} = \sqrt{\frac{2}{3}} \frac{\left ( \hat{a}^{\dagger} + \hat{b}^{\dagger} \right ) }{\sqrt{2}} \hat{a}^{\dagger} \ket{0,0}. \label{Eq: Implementation D} \end{equation} The consequence is that the projector can probabilistically be implemented by splitting the diagonally polarized, two-photon state in a non-polarizing 50:50 beam splitter; there is a 50 \% chance that one photon exits from each beam splitter output port. Then, after one port, a polarizer is inserted that transmits diagonally polarized photons, followed by a photo detector.
Each time a diagonally polarized photon exits this port an ideal detector will click. After the second port one inserts a polarizer that transmits horizontally polarized photons. Hence, if one diagonally polarized photon exits this port there is a 50 \% chance that an ideal detector clicks. If one looks at the probability $P^E_2$ that the two detectors click in coincidence one finds that (for ideal detectors) the probability is $1/4$. Hofmann proved that $P^E_2 \propto P^D_2$ \cite{Hofmann}. Comparing the two expressions for a diagonally polarized, two-photon input state, we get $1/4$ and $2/3$, respectively. Hence, the coincidence probability $P_2^E$ will be connected to the projection probability $P_2^D$ as \begin{equation} P^E_2 = \frac{3}{8} P^D_2.\end{equation} Taking non-unit photo detection efficiency into account one obtains $P^E_2 = 3 \eta^2 P^D_2/8$, where it has been assumed that the detector efficiency $\eta$ is the same for both detectors. Fig.~\ref{Fig: exp} shows the experimental results, implemented via post-selection of the two-photon component of a weakly excited coherent state input \cite{Shabbir}, along with the theoretical curve. As can be seen, and as predicted by (\ref{Eq: prediction}), the curve is non-monotonic. \begin{figure} \caption{Two-photon distinguishability polarization transformation. The black squares represent experimental data and the red curve is $A P^D_2$ with $A$ chosen for best fit. The data is normalized with respect to the maximum number of counts. Error bars show $\pm \sigma$.} \label{Fig: exp} \end{figure} Using classical light, with field amplitude $E_0$, the output of the above experiment would be $(E_0/2)^4 \cos^2(\gamma-\theta_1)\cos^2(\gamma-\theta_2)$, with $\theta_1$ and $\theta_2$ equal to $0$ and $\pi/4$, respectively. This is also a non-monotonic function of the input polarization angle $\gamma$.
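The predicted non-monotonicity of Eq. (\ref{Eq: prediction}) is straightforward to verify numerically; the following sketch (our addition, with illustrative helper names) also checks the closed form against the direct overlap with the projector of Eq. (\ref{Eq:Two photon projector D}):

```python
import numpy as np

# Closed form of the projection probability, Eq. (Eq: prediction)
def p_d2(g):
    return 4 / 3 * np.sin(np.pi / 4 + g / 2)**2 * np.cos(g / 2)**2

# Direct overlap: coefficients of |2,0>, |1,1>, |0,2> vs. the projector
def p_d2_overlap(g):
    a, b = np.sin(np.pi / 4 + g / 2), np.cos(np.pi / 4 + g / 2)
    psi = np.array([a**2, np.sqrt(2) * a * b, b**2])
    xi = np.array([np.sqrt(2), 1.0, 0.0]) / np.sqrt(3)
    return abs(xi @ psi)**2

for g in np.linspace(0.0, np.pi / 2, 51):
    assert abs(p_d2(g) - p_d2_overlap(g)) < 1e-12

assert abs(p_d2(0.0) - 2 / 3) < 1e-12        # diagonal input: 2/3
assert abs(p_d2(np.pi / 2) - 2 / 3) < 1e-12  # horizontal input: back to 2/3
assert p_d2(np.pi / 4) > p_d2(0.0)           # rises in between: non-monotonic
```

The probability climbs from $2/3$ at $\gamma=0$ to a maximum near $\gamma=\pi/4$ and falls back to $2/3$ at $\gamma=\pi/2$, matching the shape of the measured curve.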
\section{Discussion} In this paper we have discussed various ways by which states can be made increasingly distinguishable with respect to a given measurement basis, and we have shown that, by a suitable choice of interference projectors, non-monotonic projection probabilities can be demonstrated for most states, including single particle states and classical states detected either classically or with single particle detectors. Hence, in general non-monotonicity can neither be regarded as a consequence of \emph{quantum} interference between multiple photons nor as a quantum-to-classical transition phenomenon \cite{Ra, Tichy}. However, given an initial state in a particular measurement basis, a suitable ``maximally indistinguishable'' projector can be defined. We have named such a projector the \textit{proper} indistinguishability projector. With respect to this choice, the interference projection probability always changes monotonically as a function of increasing distinguishability. Thus we see that the choice of measurement projector critically determines how the measurement probability behaves as a function of the distinguishability between the interfering modes, particles, or states. \ack We gratefully acknowledge discussions with Prof. Y.-H. Kim and Mr. Y.-S. Ra. This work was supported by the Swedish Research Council (VR) through grant 621-2011-4575 and through its support of the Linn\ae us Excellence Center ADOPT. \section*{References} \end{document}
\begin{document} \title{Measuring the Casimir-Polder potential with Unruh-DeWitt detector excitations} \author{Kacper D\k{e}bski} \email{[email protected]} \affiliation{Institute of Theoretical Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland} \author{Piotr T. Grochowski} \email{[email protected]} \affiliation{Center for Theoretical Physics, Polish Academy of Sciences, Aleja Lotnik\'ow 32/46, 02-668 Warsaw, Poland} \author{Andrzej Dragan} \email{[email protected]} \affiliation{Institute of Theoretical Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, 117543 Singapore, Singapore} \date{\today} \begin{abstract} We propose a new method to measure the Casimir-Polder force in an optical cavity by means of atomic excitations. We utilize a framework in which the Unruh-DeWitt Hamiltonian mimics the full matter-field interaction and find a connection between the excitation rate of the two-level atom and the Casimir-Polder energy, allowing one to be mapped onto the other. We argue that such a realization opens a route to studying Casimir-Polder potentials through measurements of the excited-state population in large, spatially compact ensembles of atoms. \end{abstract} \maketitle \section{Introduction} The idea of light quantization is one of the cornerstones of quantum theory. Among its many proofs, experiments involving a weak electromagnetic field played a crucial role in establishing the consensus. The seminal paper of Casimir and Polder shows that even in the limit of a photonless state -- the quantum vacuum -- a neutral atom near a dielectric wall feels a force mediated through the quantum fluctuations~\cite{Casimir1948}.
Not before the 1990s was such a behavior experimentally demonstrated, providing another confirmation of the quantization of light and opening perspectives on investigating the quantum vacuum~\cite{Sukenik1993,Lamoreaux1997,Mohideen1998}. The presence of Casimir-Polder forces was demonstrated in various setups, involving different types of plates, from conducting to dielectric ones, and different types of probes, from atomic, through mechanical, to Bose-Einstein condensates~\cite{Schwinger1978, Balian1978, Plunien1986,Bordag2001,Harber2005a,Obrecht2007,Klimchitskaya2009}. In most cases, Casimir forces are attractive; however, it was proposed that a repulsive character of the interaction is also possible~\cite{Dzyaloshinskii1961,Boyer1974,Bordag2001,Kenneth2002,Levin2010}, as recently shown experimentally~\cite{Munday2009}. The QED minimal coupling is usually the model of choice in describing Casimir phenomena; however, there are other models providing an effective description of the electromagnetic field. One of the minimal models allowing the system to exhibit both the attractive and the repulsive character of the Casimir force is the Unruh-DeWitt (UDW) model~\cite{DeWitt1979} in the optical cavity~\cite{Alhambra2014}. Routinely used as a pointlike particle detector in quantum field theory~\cite{Crispino2008,Birrell1982,Louko2008,Hodgkinson2013,Brown2013}, the UDW model consists of a two-level atom coupled to a scalar field. Despite being noticeably simpler than the full QED Hamiltonian, it was shown to be a reasonable approximation to the atom-field interaction when no orbital angular momentum is exchanged~\cite{Martin-Martinez2013a}. Moreover, it was used to study Casimir-Polder forces~\cite{Passante2003, Rizzuto2004,Spagnolo2006,Rizzuto2007}. In this work, we analyze the UDW model in an optical cavity, focusing on the following two aspects. Using second-order perturbation theory, we calculate the Casimir-Polder (CP) potential as a function of the atom's position in the cavity.
We find that under some assumptions, it is intrinsically connected to the excitation probability of the two-level system that models the atom. As a consequence, we propose a simple experimental scenario of measuring the CP force by means of atomic excitations in a Bose-Einstein condensate placed near the wall. The work is structured as follows. In Sec. II we introduce the model, underline its connection to the full QED interaction Hamiltonian, and compute both the CP potential and the excitation rate of the UDW detector. In Sec. III we find the connection between the two and present the toy experimental proposal. The final Sec. IV concludes the manuscript with a recapitulation and an outlook. \section{Model} We will work in natural units, $\hbar=c=e=1$. Let us consider a scalar field of mass $m$ governed by the Klein-Gordon equation: \begin{equation} \left(\Box+m^2 \right) \hat{\phi}=0 \end{equation} in a cavity of length $L$ fulfilling Dirichlet boundary conditions, $\hat{\phi}(x=0)=\hat{\phi}(x=L)=0$, with the following mode solutions: \begin{equation} u_n(x,t)=\frac{1}{\sqrt{\omega_n L}} \sin{\left(k_n x\right)}e^{-i\omega_n t}\equiv u_n(x)e^{-i\omega_n t}, \end{equation} where $\omega_n=\sqrt{k_n^2+m^2}$, $k_n=\frac{n\pi}{L}$, $n \in \mathbb{N}$. Using these modes, the field $\hat{\phi}$ can be decomposed as: \begin{equation} \hat{\phi}(x)=\sum_{n}\left[\hat{a}_n^{\dagger} u_{n}^*(x)+\hat{a}_n u_n(x)\right], \end{equation} where $\hat{a}_n$ and $\hat{a}_n^{\dagger}$ are annihilation and creation bosonic operators satisfying the canonical commutation relations, $\left[\hat{a}_n,\hat{a}_k^{\dagger}\right]=\delta_{nk}$ and $\left[\hat{a}_n,\hat{a}_k\right]=\left[\hat{a}_n^{\dagger},\hat{a}_k^{\dagger}\right]=0$. At a distance $d$ from the boundary let us place a two-level system corresponding to the simplest model of an atom with an energy gap $\Omega$. Such a description approximates e.g.
a hydrogen atom which is not placed in a strong classical background field, so that no transition is resonantly coupled to the cavity. As we are interested in working with the vacuum state of the cavity, this proves to be a valid approximation. Then, the full Hamiltonian of the considered model includes the free Hamiltonians of the scalar field and of the atom, and a term accounting for the interaction between both of them, $\hat{H}_{\mathrm{I}}$. One of the simplest possible choices of the interaction between the scalar field and the two-level system is the pointlike Unruh-DeWitt Hamiltonian. In the Schr\"odinger picture it takes the following form: \begin{equation} \hat{H}_{\mathrm{UDW}}=\lambda~\hat{\mu}_{\mathrm{S}}~\hat{\phi}(x), \label{eqn:UdWhamiltonian} \end{equation} where $\lambda$ is a dimensionless coupling constant and $\hat{\mu}_{\mathrm{S}}=\hat{\sigma}^{+}+\hat{\sigma}^{-}=\ket{g}\bra{e}+\ket{e}\bra{g}$ is the monopole moment of the detector, with $\ket{g}$ the ground state of the two-level system and $\ket{e}$ its excited state. Moreover, $\hat{\phi}(x)$ is the scalar field operator evaluated at the point at which the pointlike detector is placed. In the spirit of electromagnetic field considerations, this first-order term would be called a paramagnetic one. This simple model of interaction can be extended to a more realistic form by including a second-order term in the Hamiltonian corresponding to the diamagnetic, self-interaction term of the full QED Hamiltonian. This quadratic term of the Unruh-DeWitt Hamiltonian has the form: \begin{align} \hat{H}_{\mathrm{UDW}}^2&=\left(\lambda\left(\ket{g}\bra{e}+\ket{e}\bra{g}\right)\hat{\phi}\right)^2 \nonumber \\ &=\lambda^2\left(\ket{g}\bra{g}+\ket{e}\bra{e}\right)\hat{\phi}^2=\lambda^2\hat{\phi}^2. \end{align} It is worth noticing that such a term does not change the detector state.
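This can be illustrated with a small matrix sketch (our addition): for a single truncated field mode with quadrature $q$ standing in for $\hat{\phi}$, the square of the linear coupling acts as the identity on the detector space:

```python
import numpy as np

N = 12                                     # field-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # truncated annihilation operator
q = a + a.T                                # field quadrature, stand-in for phi
mu = np.array([[0.0, 1.0], [1.0, 0.0]])    # sigma^+ + sigma^- = sigma_x

H = np.kron(mu, q)                         # linear UDW coupling (lambda = 1)
# (mu x q)^2 = mu^2 x q^2 = 1 x q^2: diagonal in the detector space,
# so the quadratic (diamagnetic) term cannot flip |g> to |e>
assert np.allclose(H @ H, np.kron(np.eye(2), q @ q))
```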
At this point, let us recall some key results from Ref.~\cite{Martin-Martinez2013a} comparing the QED Hamiltonian and the UDW one. The minimal electromagnetic coupling in the Coulomb gauge reads: \begin{equation} \hat{H}_{\mathrm{QED}} = -\frac{1}{m} \vec{A}(\vec{x}) \cdot \vec{p}+\frac{1}{2 m} \left[ \vec{A}(\vec{x})\right]^2. \end{equation} The main difference is the vector character of the EM interaction, in contrast to the scalar one that we consider. However, a scalar field can be readily utilized to describe electric and magnetic contributions separately, given appropriate boundary conditions. Indeed, such a description has been used to analyze the Casimir-Polder interaction in the past. It has to be noted that such a scalar model does not allow any exchange of orbital angular momentum; however, we restrict ourselves to the simple case of atomic transitions that obey this rule. As mentioned above, the QED Hamiltonian consists of two terms -- paramagnetic and diamagnetic ones. Simplified light-matter interaction Hamiltonians often neglect the second term when working with weak fields. The minimal coupling in the vacuum implies interaction only with the quantum fluctuations $\langle\vec{A}^2\rangle$. However, in the vacuum, the value of $\langle\vec{A}^2\rangle$ depends on the region in which the atom resides -- or, in the language of quantum field theory, on the region these quantum fluctuations are smeared over. While approaching the limit of a pointlike atom (detector), this variance diverges, which introduces the necessity of allowing for a finite size of the atom, unlike in the simple UDW model. In the original Casimir and Polder paper, such a problem was also present -- it was taken care of by means of introducing a regularizing factor $e^{-\gamma k}$ in the integrals over momentum space.
However, we follow a procedure used in Ref.~\cite{Martin-Martinez2013a}, where an explicit spatial form of the ground state of the atom is assumed: \begin{equation}\label{pro} \Psi(x)=\frac{e^{-x/a_0}}{a_0}, \end{equation} where $a_0$ is some characteristic length associated with the spherically symmetric atomic profile (meant to be of the order of magnitude of the Bohr radius). Such an approach modifies the UDW Hamiltonian by effectively coupling the detector to an effective field, \begin{equation} \hat{\phi}_{\mathrm{R}}(x)=\sum_{n}f_n\left[\hat{a}_n^{\dagger} u_{n}^*(x)+\hat{a}_n u_n(x)\right], \end{equation} where \begin{equation}f_n=\frac{2}{\left(a_0 k_n\right)^2+1} \end{equation} are the Fourier transforms of the spatial profile \eqref{pro} evaluated at momentum $k_n$. Such a momentum-space profile is typical for zero angular momentum orbitals and can describe the simplest case of a hydrogen atom and its lowest transition, 1s $\rightarrow$ 2s. The next simplification of the UDW model involves assuming equal contributions from both of the nondiagonal parts of the Hamiltonian acting on the space spanned by the internal states of the atom. In a general case, their relative weights can be unequal, but in the case of a spherical symmetry of both the ground and the excited states, they happen to be equal. The other difference between the QED and UDW Hamiltonians comes from the fact that in the former the relative strength of the para- and diamagnetic terms is given explicitly. This is not the case in the latter, as it has to be computed for specific profiles of the ground and the excited states. It can be done; however, we will take advantage of our model by considering a general, dimensionless parameter quantifying this relative strength.
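As a cross-check (our addition), the factors $f_n$ can be recovered numerically as the Fourier transform of the profile, here taken in its symmetric form $e^{-|x|/a_0}/a_0$; the grid parameters below are illustrative:

```python
import numpy as np

a0, k = 0.7, 3.1                        # illustrative length and momentum
x = np.linspace(-60 * a0, 60 * a0, 400001)
dx = x[1] - x[0]
profile = np.exp(-np.abs(x) / a0) / a0  # assumed symmetric atomic profile
# symmetric profile: the Fourier transform reduces to a cosine transform
y = profile * np.cos(k * x)
ft = dx * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoidal rule
assert abs(ft - 2 / ((a0 * k)**2 + 1)) < 1e-6
```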
Combining all of these considerations, we finally get the extended version of the UDW Hamiltonian that mimics the QED one: \begin{equation}\label{ham} \hat{H}_{\mathrm{I}}=\lambda\left(\ket{g}\bra{e}+\ket{e}\bra{g}\right)\hat{\phi}_{\mathrm{R}}+\alpha\frac{\lambda^2}{\Omega}\hat{\phi}_{\mathrm{R}}^2, \end{equation} where $\alpha$ is a dimensionless constant and a free parameter tuning the relative strength between the para- and diamagnetic terms. The energy $\Omega$ is introduced here to provide the correct units. Such a Hamiltonian can effectively mimic some forms of the full QED interaction~\cite{Martin-Martinez2013a}. It has to be noted that such a Hamiltonian is only a one-dimensional toy model, utilized to capture the qualitative effects coming from the full electromagnetic one. By keeping the free parameters $\lambda$ and $\alpha$ explicitly in the calculations, we will show that some interesting conclusions stay the same for arbitrary values of these parameters. \subsection{Casimir-Polder potential} The first step is to find how the full energy of the system changes with the position of the atom in the cavity. The difference between this full energy, $E$, and the sum of the ground state energies of the noninteracting cavity and atom, $E_0$, is called the Casimir-Polder potential, $E_{\mathrm{CP}}$. The usual Casimir-Polder force acting on the atom in the fixed cavity is then understood as a spatial derivative of the Casimir-Polder potential, $F=-\nabla E_{\mathrm{CP}}$. We consider a system prepared in the state $\ket{g,0}$ -- the scalar field is in the vacuum state and the two-level system is in its ground state. The system is then slightly perturbed by the extended UDW Hamiltonian~\eqref{ham}, with $\lambda$ being the perturbation parameter. We will calculate the resulting energy to second order in perturbation theory.
It takes the form: \begin{eqnarray} E&=&E_0+E^{(1)}+E^{(2)}+\mathcal{O}(\lambda^4), \nonumber \\ E^{(1)}&=&\bra{g,0}\hat{H}_{\mathrm{I}}\ket{g,0}, \nonumber \\ E^{(2)}&=&\sum_{n=0}^{\infty}\sum_{s=\{g,e\}}\frac{|\bra{s,n}\hat{H}_{\mathrm{I}}\ket{g,0}|^2}{E_0-(E_0+\omega_n+\Omega_s)}, \end{eqnarray} where the state $\ket{s,n}$, $s\in\{g,e\}$, corresponds to an arbitrary final state of the atom with the scalar field in the state $\ket{n}$ of energy $\omega_n$. Furthermore, $\Omega_s$ is the energy of the detector in the state $\ket{s}$, meaning that $\Omega_g=0$ and $\Omega_e=\Omega$. Then, we have: \begin{eqnarray} E^{(1)}&=&\frac{\alpha\lambda^2}{\Omega}\bra{0}\hat{\phi}_{\mathrm{R}}^2\ket{0}=\frac{\alpha\lambda^2}{\Omega L}\sum_{n=1}^{\infty}\frac{f_n^2 \sin^2{\left(k_n x\right)}}{\omega_n},\nonumber\\ E^{(2)}&=&-\sum_{n=1}^{\infty}\frac{\lambda^2}{\omega_n+\Omega}|\bra{n}\hat{\phi}_{\mathrm{R}}\ket{0}|^2+\mathcal{O}(\lambda^4)\nonumber\\ &=&-\sum_{n=1}^{\infty}\frac{\lambda^2}{\omega_n+\Omega}\frac{f_n^2 \sin^2{\left(k_n x\right)}}{\omega_n L}+\mathcal{O}(\lambda^4).\nonumber \end{eqnarray} It is useful to note that the term $f_n$ makes $E^{(1)}$ convergent. The whole second-order Casimir-Polder potential then reads \begin{align} E_{\mathrm{CP}}&=E^{(1)}+E^{(2)} \nonumber \\ &=\lambda^2\sum_{n=1}^{\infty}\frac{f_n^2\sin^2{\left(k_n x\right)}}{\omega_n L \left(\omega_n+\Omega\right)}\left[\left(\alpha-1\right)+\alpha \frac{\omega_n}{\Omega}\right]. \label{eqn:ECP} \end{align} One can immediately see that depending on the parameter $\alpha$, the Casimir-Polder potential, and consequently the Casimir-Polder force, can be either positive or negative. This confirms the usual phenomenology in which Casimir forces can be either repulsive or attractive, depending on the physical scenario involved. \subsection{Probability of excitation} The next step is to assume the same physical model but now with the interaction lasting for a finite time $\sigma$.
As the electromagnetic interaction cannot be switched on or off, we choose to interpret the finite interaction time as the time between the creation of the setup and a destructive measurement. We aim to find the probability of measuring the excited state of the atom, given that it was initially prepared unexcited in the cavity. Therefore, we have to define a time-dependent interaction Hamiltonian allowing for a finite-time interaction. We modify the previously presented model by adding a time-dependent switching function $\chi(t)$. The modified, time-dependent version of the extended UDW Hamiltonian in the Schr\"odinger picture has the following form: \begin{equation} \hat{H}_{\mathrm{UDW}}(t) =\chi(t)\left[\lambda~\hat{\mu}_{S} (t)~\hat{\phi}_{\mathrm{R}}(x) + \alpha \frac{\lambda^2}{\Omega}\left(\hat{\phi}_{\mathrm{R}}(x)\right)^2 \right]. \end{equation} We assume that the interaction starts and ends rapidly, so that $\chi(t)=1$ for $t\in(0,\sigma)$ and $\chi(t)=0$ for any other time. As mentioned before, the full Hamiltonian also includes the time-independent free scalar field and free two-level system parts. We work in the Dirac picture, because the full Hamiltonian contains the time-independent $\hat{H}_0=\sum_n \omega_n \hat{a}_n^{\dagger}\hat{a}_n+\Omega\hat{\sigma}^{+}\hat{\sigma}^{-}$ and a time-dependent component coming from the Unruh-DeWitt interaction. The evolution in such a scenario is given by an operator of the form $\hat{U} =\mathcal{T}\exp{\left(-i\int_{-\infty}^{\infty} \mathrm{d}t\,\hat{H}_{\mathrm{I}}^{(D)}(t)\right)}$, where $(D)$ denotes an operator in the Dirac picture. As a result of the interaction, the state of the field can change; however, we are interested only in finding the probability of the detector's excitation. The final state after the interaction between the detector and the scalar field can be written as $\ket{e,l}$, where $l\in\mathbb{N}\cup\{0\}$.
Using the Born rule, we can write the probability of excitation $p_{g\xrightarrow{}e}$ in the form: \begin{equation} p_{g\xrightarrow{}e}=\sum_{l\in\mathbb{N}\cup\{0\}}|\bra{e,l} \int_{-\infty}^{\infty} \mathrm{d}t \hat{H}_{\mathrm{I}}^{(D)}(t) \ket{g,0}|^2. \label{eqn:p2} \end{equation} The extended UDW Hamiltonian in the Dirac representation reads: \begin{eqnarray} \label{ham2} \hat{H}_{\mathrm{I}}^{(D)}(t) =\chi(t)\left[\lambda~\hat{\mu}^{(D)}~\hat{\phi}_{\mathrm{R}}^{(D)} + \frac{\alpha\lambda^2}{\Omega}\left(\hat{\phi}_{\mathrm{R}}^{(D)}\right)^2 \right], \end{eqnarray} where: \begin{eqnarray} \hat{\mu}^{(D)}&=&\left(e^{i\Omega t} \hat{\sigma}^{+}+e^{-i\Omega t} \hat{\sigma}^{-}\right),\\ \hat{\phi}_{\mathrm{R}}^{(D)}(x)&=&\sum_{n}f_n\left[\hat{a}_n^{\dagger} u_{n}(x)e^{i\omega_n t}+H.c.\right]. \end{eqnarray} Only the first part, linear in the coupling constant $\lambda$, contains an operator changing the state of the detector. The second-order term does not contribute to the probability of excitation given by Eq.~\eqref{eqn:p2}, because $\bra{e} \frac{\alpha\lambda^2}{\Omega}\left(\hat{\phi}_{\mathrm{R}}^{(D)}\right)^2\ket{g}=0$. After a direct calculation, plugging~\eqref{ham2} into~\eqref{eqn:p2}, we get: \begin{equation} p_{g\xrightarrow{}e}=4\lambda^2\sum_{n=1}^{\infty}\frac{f_n^2\sin^2{\left(k_n x\right)}}{\omega_n L}\frac{\sin^2{\left[\frac{1}{2}\sigma\left(\omega_n+\Omega\right)\right]}}{\left(\omega_n+\Omega\right)^2}. \label{eqn:skonczoneprawd} \end{equation} The above result corresponds to an arbitrarily chosen interaction time $\sigma$, but if the measurement apparatus does not have a time resolution good enough to work within the scale of an atomic transition, $1/\Omega$, one has to consider a coarse-grained version of the above formula.
Such an averaged-out excitation probability reads: \begin{align} p_{g\xrightarrow{}e}^{\text{av}}&=4\lambda^2\lim_{T\to\infty}\frac{1}{T}\int_0^T \mathrm{d}\sigma \sum_{n=1}^{\infty}\frac{f_n^2\sin^2{\left(k_n x\right)}}{\omega_n L}\frac{\sin^2{\left[\frac{1}{2}\sigma\left(\omega_n+\Omega\right)\right]}}{\left(\omega_n+\Omega\right)^2} \nonumber \\ &= 2\lambda^2 \sum_{n=1}^{\infty}\frac{f_n^2\sin^2{\left(k_n x\right)}}{\omega_n L \left(\omega_n+\Omega\right)^2} , \label{eqn:pav} \end{align} where we used the fact that $\sin^2$ averages to $1/2$. This new quantity no longer depends on the interaction time and is an explicit function of the distance from the wall. In the next Section we show the connection between this averaged probability and the Casimir-Polder force. \section{Retrieving Casimir-Polder potential from the average excitation rate} We now proceed to compare both results. Both the energy and the probability, given by Eqs. \eqref{eqn:ECP} and \eqref{eqn:pav}, are represented by infinite series. We find that the contributing terms in both of these series occur only for values of $n$ small enough (in comparison to the energy gap $\Omega$) that $\omega_n \ll \Omega$. For a numerical analysis of this fact, see the Appendix. Therefore, we can treat $\frac{\omega_n}{\Omega}$ as a small parameter and expand both \eqref{eqn:ECP} and \eqref{eqn:pav} up to the first subleading order: \begin{equation} \label{cpeneeny} E_{\mathrm{CP}}\approx \Omega \sum_{n} p_n (x) \left[ \left( \alpha-1\right)+\frac{\omega_n}{\Omega} \right] \end{equation} and \begin{equation} p_{g\xrightarrow{}e}^{\text{av}}\approx 2 \sum_{n} p_n (x) \left[1-2\frac{\omega_n}{\Omega} \right] \end{equation} where \begin{equation} p_n (x) = \frac{\lambda^2 f_n^2 \sin^2\left( k_n x \right) }{\omega_n L \Omega^2}. \end{equation} The universal function \begin{equation} F(x)=\sum_{n} p_n (x) \end{equation} that reproduces the general shape of both the Casimir-Polder potential and the averaged excitation probability is shown in Fig.~\ref{fig1}, together with its derivative, corresponding to the Casimir-Polder force.
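The shape of the universal function is easy to reproduce numerically by truncating the series. The following minimal sketch assumes Dirichlet modes with $\omega_n=\sqrt{(n\pi/L)^2+m^2}$ and substitutes an illustrative exponential cutoff for the regulator $f_n$, whose exact form is not essential for the qualitative shape:

```python
import math

# Truncated evaluation of the universal function F(x) = sum_n p_n(x).
# Parameters follow Fig. 1 (L, m, lambda, Omega).  The regulator f_n is
# not specified here; an exponential cutoff is assumed for illustration.
L, m, lam, Omega = 1.0, 1e-3, 1e-2, 1.0
omega_c = 50.0 * math.pi / L   # hypothetical cutoff scale inside f_n

def omega(n):
    """Dispersion of the n-th Dirichlet mode, omega_n = sqrt(k_n^2 + m^2)."""
    return math.sqrt((n * math.pi / L) ** 2 + m * m)

def p_n(n, x):
    f_n = math.exp(-omega(n) / omega_c)          # assumed regulator
    return (lam ** 2 * f_n ** 2 * math.sin(n * math.pi * x / L) ** 2
            / (omega(n) * L * Omega ** 2))

def F(x, N=2000):
    return sum(p_n(n, x) for n in range(1, N + 1))

# F vanishes at the walls and is symmetric about the cavity centre
print(F(0.01), F(0.25), F(0.5), F(0.75))
```

Plotting $F(x)$ over $x\in(0,L)$ with these stand-in parameters reproduces the qualitative profile of Fig.~\ref{fig1}.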
\begin{figure} \caption{(Top) Universal function $F(x)$, proportional to both the Casimir-Polder potential and the average excitation probability, for the atom at position $x/L$ in the cavity. The parameters taken for the plot read: $L=1$, $m=1 \cdot 10^{-3}$, $\lambda =1 \cdot 10^{-2} $, $\Omega=1$. (Bottom) The derivative of the universal function $F(x)$, corresponding to the Casimir-Polder force in the optical cavity. } \label{fig1} \end{figure} Up to the leading order in $\omega_n/\Omega$, we have a proportionality between the Casimir-Polder energy and the averaged excitation probability of the atom: \begin{equation} \label{res} E_{\mathrm{CP}} \approx \frac{1}{2} \Omega \left(\alpha - 1 \right) p_{g\xrightarrow{}e}^{\text{av}}. \end{equation} The proportionality constant is a function of the energy gap and the internal properties of the atom, implicitly contained in the parameter $\alpha$. Unless $\alpha$ is extremely fine-tuned to be 1, the proportionality between $E_{\mathrm{CP}}$ and $p_{g\xrightarrow{}e}^{\text{av}}$ is preserved. One has to note that there is no realistic value of $\alpha$ for the three-dimensional electromagnetic model of the interaction between the atom and the field, as the presented one-dimensional toy model is aimed only at grasping the qualitative effects present in the system. However, within this toy model, taking $a_0=10^{-2}$ in natural units, $\alpha$ can be calculated to be $\alpha \sim 1/400$~\cite{Alhambra2014a}. This shows that, at least for hydrogen-like atoms in the 1D toy model, a value $\alpha \sim 1$ is not typical. The result~\eqref{res} shows that the Casimir-Polder potential can be indirectly recovered by analyzing the rate of atomic excitation near the wall. However, the probability of exciting a single atom through the interaction with the vacuum fluctuations is extremely small.
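As a sanity check of the proportionality~\eqref{res}, both series can be summed directly in a regime where $\omega_n\ll\Omega$ holds for all contributing modes. The sketch below again uses an assumed exponential cutoff in place of $f_n$, so it tests the scaling rather than any specific numbers:

```python
import math

# Direct summation of E_CP (Eq. eqn:ECP) and p_av (Eq. eqn:pav) with
# omega_n << Omega for all contributing modes, to check the relation
# E_CP ~ (1/2) Omega (alpha - 1) p_av.  The regulator f_n is again
# replaced by an assumed exponential cutoff.
L, m, lam, x = 1.0, 1e-3, 1e-2, 0.3
Omega, alpha = 1000.0, 2.0
omega_c = 20.0     # hypothetical cutoff keeping contributing omega_n << Omega

def omega(n):
    return math.sqrt((n * math.pi / L) ** 2 + m * m)

def weight(n):
    f_n = math.exp(-omega(n) / omega_c)
    return f_n ** 2 * math.sin(n * math.pi * x / L) ** 2 / (omega(n) * L)

N = 500
E_cp = lam ** 2 * sum(weight(n) / (omega(n) + Omega)
                      * ((alpha - 1) + alpha * omega(n) / Omega)
                      for n in range(1, N + 1))
p_av = 2 * lam ** 2 * sum(weight(n) / (omega(n) + Omega) ** 2
                          for n in range(1, N + 1))

ratio = E_cp / p_av
print(ratio, 0.5 * Omega * (alpha - 1))
```

With these stand-in parameters the computed ratio $E_{\mathrm{CP}}/p_{g\xrightarrow{}e}^{\text{av}}$ lands within a few percent of $\frac12\Omega(\alpha-1)$, the deviation being of order $\omega_n/\Omega$ as expected.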
This probability can be increased somewhat by a proper choice of atomic species and atomic transition; however, the most straightforward way is to use a large number of atoms placed within the same region. The natural candidate for such an ensemble is a Bose-Einstein condensate placed near the wall of the optical cavity. Modern ultracold experiments provide a clean environment to probe such settings. The first advantage is the localization of the atoms by means of optical trapping -- an ultracold sample can be placed in a highly localized region of space (given by the usual Thomas-Fermi radius of the BEC). The second is fine control over the parameters of the neutral atoms involved -- the size of the atomic cloud is tuned with the trapping and with the two-body interaction between the atoms, which can be effectively turned off by means of Feshbach resonances. Indeed, Bose-Einstein condensates have already been utilized to probe Casimir forces through their effect on the collective modes excited in the atomic cloud. In Ref.~\cite{Obrecht2007} a BEC of rubidium was placed near a dielectric surface and the frequency of the dipole mode was probed. This allowed a measurement of the nonzero-temperature Casimir force acting on individual atoms. We, however, propose a different scheme in which the Casimir force is not measured directly through the change of the collective motion; rather, the potential itself is analyzed through the excitation of atoms. First, the BEC is treated as a spatially tightly packed collection of a large number of individual atoms, well described by the Thomas-Fermi profile, and not as an interacting many-body system. Our hydrogen-like-atom assumption is a shortcoming; however, condensates of such atoms have been produced.
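The envisaged measurement -- many single-shot determinations of the excited-state population in the cloud -- amounts to weighting the single-atom probability with the condensate density. A minimal numerical sketch, assuming a one-dimensional Thomas-Fermi profile for the cloud and the same illustrative stand-in for the regulator $f_n$:

```python
import math

# Average the single-atom excitation probability over a cloud with a 1D
# Thomas-Fermi density n(u; x0) = (15 N / 16 R) [1 - ((u - x0)/R)^2]^2
# centred at x0.  p_av below is a truncated-series stand-in with an
# assumed exponential regulator f_n, so the numbers are illustrative only.
L, m, lam, Omega = 1.0, 1e-3, 1e-2, 1.0
omega_c = 50.0 * math.pi / L

def p_av(u, n_modes=500):
    total = 0.0
    for n in range(1, n_modes + 1):
        w = math.sqrt((n * math.pi / L) ** 2 + m * m)
        f_n = math.exp(-w / omega_c)
        total += (f_n ** 2 * math.sin(n * math.pi * u / L) ** 2
                  / (w * L * (w + Omega) ** 2))
    return 2 * lam ** 2 * total

def n_excited(x0, R_tf=0.05, n_atoms=1e5, steps=200):
    """Midpoint-rule estimate of N_exc(x0) = integral of p_av(u) n(u; x0) du."""
    du = 2 * R_tf / steps
    total = 0.0
    for i in range(steps):
        u = x0 - R_tf + (i + 0.5) * du
        dens = (15 * n_atoms / (16 * R_tf)) * (1 - ((u - x0) / R_tf) ** 2) ** 2
        total += p_av(u) * dens * du
    return total

# For a narrow cloud the average reduces to n_atoms * p_av at the centre
print(n_excited(0.5), 1e5 * p_av(0.5))
```

For Thomas-Fermi radii much smaller than the cavity length, the averaged signal simply tracks the single-atom probability, which is the behaviour seen in Fig.~\ref{fig2}.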
We stress that our aim is not to perform the full three-dimensional calculation of the realistic QED system, but rather to show that, at the level of a toy model mimicking QED, there exists a direct relation between the Casimir-Polder energy and the excitation probability of an atom involved. It is worth checking how averaging over the typical density of a BEC changes this relation. The Thomas-Fermi single-particle density of a BEC, with perpendicular degrees of freedom integrated out and situated at a distance $x_0$ from the boundary, reads: \begin{equation} \label{bec} n(x;x_0)=\frac{15 N}{16 R_{TF}} \left[1-\left(\frac{x-x_0}{R_{TF}} \right)^2 \right]^2, \end{equation} where $N$ is the total number of atoms in the condensate and $R_{TF}$ is the Thomas-Fermi radius. If we follow the excitation scheme in which multiple single-shot experiments measuring the population of the excited state (e.g., via in situ imaging) are performed, the averaged population of the excited state takes the form: \begin{equation} \label{ave} N_{\text{exc}} (x)=\int p_{g\xrightarrow{}e}^{\text{av}}(u) n(u;x) \dd u. \end{equation} The effect of such averaging, for different sizes of the BEC, is shown in Fig.~\ref{fig2}. It appears that averaging over the density of the BEC does not introduce appreciable changes to the spatial dependence of the population of excited states in comparison to the Casimir-Polder potential. \begin{figure} \caption{Comparison of the time-averaged excitation probability averaged over the density of an atomic Bose-Einstein condensate for different Thomas-Fermi radii (red, dotted line -- $R_{TF} =0.05$, black, dashed line -- $R_{TF} =0.01$, orange, solid line -- $R_{TF} \rightarrow 0$). The spatial averaging does not introduce an appreciable quantitative difference, yielding the same curve for each radius, corresponding to the Casimir-Polder potential. Note that each curve starts at $R_{TF}$ due to the finite size of the BEC used as a probe.
The parameters taken for the plot read: $L=1$, $m=1 \cdot 10^{-3}$, $\lambda =1 \cdot 10^{-2} $, $\Omega=1$. } \label{fig2} \end{figure} \section{Recapitulation and outlook} To summarize, we have analyzed a system consisting of an atom (described by a two-dimensional Hilbert space) interacting with a scalar field within a one-dimensional cavity. We have argued that an extended version of the Unruh-DeWitt Hamiltonian coupled to the scalar Klein-Gordon field provides a qualitatively reasonable approximation to the full light-matter interaction when the vacuum state of the cavity is involved. Using second-order perturbation theory, we have calculated the Casimir-Polder energy of the system and the excitation probability of the atom placed at a fixed distance from the wall of the cavity. We have shown that, up to the leading order, these two quantities coincide up to a multiplicative constant that depends on the internal structure of the atom. As a result, we have argued that a suitable experimental setup involving measurement of the population of the excited atomic state, e.g., in an ultracold system of a two-level Bose-Einstein condensate, can be used to measure Casimir-Polder phenomena. Moreover, we have checked how averaging over the BEC density affects the above-mentioned relation between the CP energy and the excitation probability. As a future line of work, a natural next step would be the consideration of a full three-dimensional system in a realistic experimental scenario. Another potential research direction could involve studying the applicability of the Unruh-DeWitt Hamiltonian in describing neutral atoms in optical cavities. \section{Acknowledgments} P.T.G. appreciates discussions with K. Rz\k{a}\.{z}ewski.
\begin{thebibliography}{29} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Casimir}\ and\ \citenamefont {Polder}(1948)}]{Casimir1948} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~B.~G.}\ \bibnamefont {Casimir}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Polder}},\ }\href {\doibase 10.1103/PhysRev.73.360} {\bibfield {journal} {\bibinfo {journal} {Phys. 
Rev.}\ }\textbf {\bibinfo {volume} {73}},\ \bibinfo {pages} {360} (\bibinfo {year} {1948})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sukenik}\ \emph {et~al.}(1993)\citenamefont {Sukenik}, \citenamefont {Boshier}, \citenamefont {Cho}, \citenamefont {Sandoghdar},\ and\ \citenamefont {Hinds}}]{Sukenik1993} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~I.}\ \bibnamefont {Sukenik}}, \bibinfo {author} {\bibfnamefont {M.~G.}\ \bibnamefont {Boshier}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Cho}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Sandoghdar}}, \ and\ \bibinfo {author} {\bibfnamefont {E.~A.}\ \bibnamefont {Hinds}},\ }\href {\doibase 10.1103/PhysRevLett.70.560} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {70}},\ \bibinfo {pages} {560} (\bibinfo {year} {1993})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lamoreaux}(1997)}]{Lamoreaux1997} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~K.}\ \bibnamefont {Lamoreaux}},\ }\href {\doibase 10.1103/PhysRevLett.78.5} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo {pages} {5} (\bibinfo {year} {1997})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mohideen}\ and\ \citenamefont {Roy}(1998)}]{Mohideen1998} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Mohideen}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Roy}},\ }\href {\doibase 10.1103/PhysRevLett.81.4549} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {4549} (\bibinfo {year} {1998})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schwinger}\ \emph {et~al.}(1978)\citenamefont {Schwinger}, \citenamefont {DeRaad},\ and\ \citenamefont {Milton}}]{Schwinger1978} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Schwinger}}, \bibinfo {author} {\bibfnamefont {L.~L.}\ \bibnamefont {DeRaad}}, \ and\ \bibinfo {author} {\bibfnamefont {K.~A.}\ \bibnamefont {Milton}},\ }\href {\doibase 10.1016/0003-4916(78)90172-0} {\bibfield {journal} {\bibinfo {journal} {Ann. Phys. (N. Y).}\ }\textbf {\bibinfo {volume} {115}},\ \bibinfo {pages} {1} (\bibinfo {year} {1978})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Balian}\ and\ \citenamefont {Duplantier}(1978)}]{Balian1978} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Balian}}\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Duplantier}},\ }\href {\doibase 10.1016/0003-4916(78)90083-0} {\bibfield {journal} {\bibinfo {journal} {Ann. Phys. (N. 
Y).}\ }\textbf {\bibinfo {volume} {112}},\ \bibinfo {pages} {165} (\bibinfo {year} {1978})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Plunien}\ \emph {et~al.}(1986)\citenamefont {Plunien}, \citenamefont {M{\"{u}}ller},\ and\ \citenamefont {Greiner}}]{Plunien1986} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Plunien}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {M{\"{u}}ller}}, \ and\ \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Greiner}},\ }\href {\doibase 10.1016/0370-1573(86)90020-7} {\enquote {\bibinfo {title} {{The Casimir effect}},}\ } (\bibinfo {year} {1986})\BibitemShut {NoStop} \bibitem [{\citenamefont {Bordag}\ \emph {et~al.}(2001)\citenamefont {Bordag}, \citenamefont {Mohideen},\ and\ \citenamefont {Mostepanenko}}]{Bordag2001} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Bordag}}, \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Mohideen}}, \ and\ \bibinfo {author} {\bibfnamefont {V.~M.}\ \bibnamefont {Mostepanenko}},\ }\href {\doibase 10.1016/S0370-1573(01)00015-1} {\enquote {\bibinfo {title} {{New developments in the Casimir effect}},}\ } (\bibinfo {year} {2001}),\ \Eprint {http://arxiv.org/abs/0106045} {arXiv:0106045 [quant-ph]} \BibitemShut {NoStop} \bibitem [{\citenamefont {Harber}\ \emph {et~al.}(2005)\citenamefont {Harber}, \citenamefont {Obrecht}, \citenamefont {McGuirk},\ and\ \citenamefont {Cornell}}]{Harber2005a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont {Harber}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Obrecht}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {McGuirk}}, \ and\ \bibinfo {author} {\bibfnamefont {E.~A.}\ \bibnamefont {Cornell}},\ }\href {\doibase 10.1103/PhysRevA.72.033610} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A - At. Mol. Opt. 
Phys.}\ }\textbf {\bibinfo {volume} {72}} (\bibinfo {year} {2005}),\ 10.1103/PhysRevA.72.033610},\ \Eprint {http://arxiv.org/abs/0506208} {arXiv:0506208 [cond-mat]} \BibitemShut {NoStop} \bibitem [{\citenamefont {Obrecht}\ \emph {et~al.}(2007)\citenamefont {Obrecht}, \citenamefont {Wild}, \citenamefont {Antezza}, \citenamefont {Pitaevskii}, \citenamefont {Stringari},\ and\ \citenamefont {Cornell}}]{Obrecht2007} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Obrecht}}, \bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont {Wild}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Antezza}}, \bibinfo {author} {\bibfnamefont {L.~P.}\ \bibnamefont {Pitaevskii}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Stringari}}, \ and\ \bibinfo {author} {\bibfnamefont {E.~A.}\ \bibnamefont {Cornell}},\ }\href {\doibase 10.1103/PhysRevLett.98.063201} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {98}} (\bibinfo {year} {2007}),\ 10.1103/PhysRevLett.98.063201},\ \Eprint {http://arxiv.org/abs/0608074} {arXiv:0608074 [physics]} \BibitemShut {NoStop} \bibitem [{\citenamefont {Klimchitskaya}\ \emph {et~al.}(2009)\citenamefont {Klimchitskaya}, \citenamefont {Mohideen},\ and\ \citenamefont {Mostepanenko}}]{Klimchitskaya2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~L.}\ \bibnamefont {Klimchitskaya}}, \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Mohideen}}, \ and\ \bibinfo {author} {\bibfnamefont {V.~M.}\ \bibnamefont {Mostepanenko}},\ }\href {\doibase 10.1103/RevModPhys.81.1827} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. 
Phys.}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {1827} (\bibinfo {year} {2009})},\ \Eprint {http://arxiv.org/abs/0902.4022} {arXiv:0902.4022} \BibitemShut {NoStop} \bibitem [{\citenamefont {Dzyaloshinskii}\ \emph {et~al.}(1961)\citenamefont {Dzyaloshinskii}, \citenamefont {Lifshitz},\ and\ \citenamefont {Pitaevskii}}]{Dzyaloshinskii1961} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.~E.}\ \bibnamefont {Dzyaloshinskii}}, \bibinfo {author} {\bibfnamefont {E.~M.}\ \bibnamefont {Lifshitz}}, \ and\ \bibinfo {author} {\bibfnamefont {L.~P.}\ \bibnamefont {Pitaevskii}},\ }\href {\doibase 10.1070/pu1961v004n02abeh003330} {\bibfield {journal} {\bibinfo {journal} {Sov. Phys. Uspekhi}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {153} (\bibinfo {year} {1961})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Boyer}(1974)}]{Boyer1974} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~H.}\ \bibnamefont {Boyer}},\ }\href {\doibase 10.1103/PhysRevA.9.2078} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {2078} (\bibinfo {year} {1974})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kenneth}\ \emph {et~al.}(2002)\citenamefont {Kenneth}, \citenamefont {Klich}, \citenamefont {Mann},\ and\ \citenamefont {Revzen}}]{Kenneth2002} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Kenneth}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Klich}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mann}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Revzen}},\ }\href {\doibase 10.1103/PhysRevLett.89.033001} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {89}} (\bibinfo {year} {2002}),\ 10.1103/PhysRevLett.89.033001},\ \Eprint {http://arxiv.org/abs/0202114} {arXiv:0202114 [quant-ph]} \BibitemShut {NoStop} \bibitem [{\citenamefont {Levin}\ \emph {et~al.}(2010)\citenamefont {Levin}, \citenamefont {McCauley}, \citenamefont {Rodriguez}, \citenamefont {Reid},\ and\ \citenamefont {Johnson}}]{Levin2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Levin}}, \bibinfo {author} {\bibfnamefont {A.~P.}\ \bibnamefont {McCauley}}, \bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont {Rodriguez}}, \bibinfo {author} {\bibfnamefont {M.~T.}\ \bibnamefont {Reid}}, \ and\ \bibinfo {author} {\bibfnamefont {S.~G.}\ \bibnamefont {Johnson}},\ }\href {\doibase 10.1103/PhysRevLett.105.090403} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {105}} (\bibinfo {year} {2010}),\ 10.1103/PhysRevLett.105.090403},\ \Eprint {http://arxiv.org/abs/1003.3487} {arXiv:1003.3487} \BibitemShut {NoStop} \bibitem [{\citenamefont {Munday}\ \emph {et~al.}(2009)\citenamefont {Munday}, \citenamefont {Capasso},\ and\ \citenamefont {Parsegian}}]{Munday2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~N.}\ \bibnamefont {Munday}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Capasso}}, \ and\ \bibinfo {author} {\bibfnamefont {V.~A.}\ \bibnamefont {Parsegian}},\ }\href {\doibase 10.1038/nature07610} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {457}},\ \bibinfo {pages} {170} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {DeWitt}(1979)}]{DeWitt1979} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~S.}\ \bibnamefont {DeWitt}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {Gen. Relativ. An Einstein Centen. 
Surv.}}},\ \bibinfo {editor} {edited by\ \bibinfo {editor} {\bibfnamefont {S.~W.}\ \bibnamefont {Hawking}}\ and\ \bibinfo {editor} {\bibfnamefont {W.}~\bibnamefont {Israel}}}\ (\bibinfo {publisher} {Cambridge University Press},\ \bibinfo {address} {Cambridge},\ \bibinfo {year} {1979})\ pp.\ \bibinfo {pages} {680--745}\BibitemShut {NoStop} \bibitem [{\citenamefont {Alhambra}\ \emph {et~al.}(2014{\natexlab{a}})\citenamefont {Alhambra}, \citenamefont {Kempf},\ and\ \citenamefont {Mart{\'{i}}n-Mart{\'{i}}nez}}]{Alhambra2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {{\'{A}}.~M.}\ \bibnamefont {Alhambra}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kempf}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Mart{\'{i}}n-Mart{\'{i}}nez}},\ }\href {\doibase 10.1103/PhysRevA.89.033835} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A - At. Mol. Opt. Phys.}\ }\textbf {\bibinfo {volume} {89}} (\bibinfo {year} {2014}{\natexlab{a}}),\ 10.1103/PhysRevA.89.033835}\BibitemShut {NoStop} \bibitem [{\citenamefont {Crispino}\ \emph {et~al.}(2008)\citenamefont {Crispino}, \citenamefont {Higuchi},\ and\ \citenamefont {Matsas}}]{Crispino2008} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~C.~B.}\ \bibnamefont {Crispino}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Higuchi}}, \ and\ \bibinfo {author} {\bibfnamefont {G.~E.~A.}\ \bibnamefont {Matsas}},\ }\href {\doibase 10.1103/RevModPhys.80.787} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. 
Phys.}\ }\textbf {\bibinfo {volume} {80}},\ \bibinfo {pages} {787} (\bibinfo {year} {2008})},\ \Eprint {http://arxiv.org/abs/0710.5373} {arXiv:0710.5373} \BibitemShut {NoStop} \bibitem [{\citenamefont {Birrell}\ and\ \citenamefont {Davies}(1982)}]{Birrell1982} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~D.}\ \bibnamefont {Birrell}}\ and\ \bibinfo {author} {\bibfnamefont {P.~C.~W.}\ \bibnamefont {Davies}},\ }\href {https://books.google.pl/books/about/Quantum{\_}Fields{\_}in{\_}Curved{\_}Space.html?id=SEnaUnrqzrUC{\&}redir{\_}esc=y} {\emph {\bibinfo {title} {{Quantum fields in curved space}}}}\ (\bibinfo {publisher} {Cambridge University Press},\ \bibinfo {year} {1982})\ p.\ \bibinfo {pages} {340}\BibitemShut {NoStop} \bibitem [{\citenamefont {Louko}\ and\ \citenamefont {Satz}(2008)}]{Louko2008} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Louko}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Satz}},\ }\href {\doibase 10.1088/0264-9381/25/5/055012} {\bibfield {journal} {\bibinfo {journal} {Class. Quantum Gravity}\ }\textbf {\bibinfo {volume} {25}} (\bibinfo {year} {2008}),\ 10.1088/0264-9381/25/5/055012},\ \Eprint {http://arxiv.org/abs/0710.5671} {arXiv:0710.5671} \BibitemShut {NoStop} \bibitem [{\citenamefont {Hodgkinson}(2013)}]{Hodgkinson2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Hodgkinson}},\ }\emph {\bibinfo {title} {{Particle detectors in curved spacetime quantum field theory}}},\ \href@noop {} {Ph.D. 
thesis} (\bibinfo {year} {2013}),\ \Eprint {http://arxiv.org/abs/1309.7281v2} {arXiv:1309.7281v2} \BibitemShut {NoStop} \bibitem [{\citenamefont {Brown}\ \emph {et~al.}(2013)\citenamefont {Brown}, \citenamefont {Mart{\'{i}}n-Mart{\'{i}}nez}, \citenamefont {Menicucci},\ and\ \citenamefont {Mann}}]{Brown2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~G.}\ \bibnamefont {Brown}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Mart{\'{i}}n-Mart{\'{i}}nez}}, \bibinfo {author} {\bibfnamefont {N.~C.}\ \bibnamefont {Menicucci}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~B.}\ \bibnamefont {Mann}},\ }\href {\doibase 10.1103/PhysRevD.87.084062} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. D - Part. Fields, Gravit. Cosmol.}\ }\textbf {\bibinfo {volume} {87}} (\bibinfo {year} {2013}),\ 10.1103/PhysRevD.87.084062}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mart{\'{i}}n-Mart{\'{i}}nez}\ \emph {et~al.}(2013)\citenamefont {Mart{\'{i}}n-Mart{\'{i}}nez}, \citenamefont {Montero},\ and\ \citenamefont {{Del Rey}}}]{Martin-Martinez2013a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Mart{\'{i}}n-Mart{\'{i}}nez}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Montero}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {{Del Rey}}},\ }\href {\doibase 10.1103/PhysRevD.87.064038} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. D - Part. Fields, Gravit. 
Cosmol.}\ }\textbf {\bibinfo {volume} {87}} (\bibinfo {year} {2013}),\ 10.1103/PhysRevD.87.064038},\ \Eprint {http://arxiv.org/abs/1207.3248} {arXiv:1207.3248} \BibitemShut {NoStop} \bibitem [{\citenamefont {Passante}\ \emph {et~al.}(2003)\citenamefont {Passante}, \citenamefont {Persico},\ and\ \citenamefont {Rizzuto}}]{Passante2003} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Passante}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Persico}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Rizzuto}},\ }\href {\doibase 10.1016/S0375-9601(03)01131-9} {\bibfield {journal} {\bibinfo {journal} {Phys. Lett. Sect. A Gen. At. Solid State Phys.}\ }\textbf {\bibinfo {volume} {316}},\ \bibinfo {pages} {29} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rizzuto}\ \emph {et~al.}(2004)\citenamefont {Rizzuto}, \citenamefont {Passante},\ and\ \citenamefont {Persico}}]{Rizzuto2004} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Rizzuto}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Passante}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Persico}},\ }\href {\doibase 10.1103/PhysRevA.70.012107} {\enquote {\bibinfo {title} {{Dynamical casimir-polder energy between an excited- and a ground-state atom}},}\ } (\bibinfo {year} {2004})\BibitemShut {NoStop} \bibitem [{\citenamefont {Spagnolo}\ \emph {et~al.}(2006)\citenamefont {Spagnolo}, \citenamefont {Passante},\ and\ \citenamefont {Rizzuto}}]{Spagnolo2006} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Spagnolo}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Passante}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Rizzuto}},\ }\href {\doibase 10.1103/PhysRevA.73.062117} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A - At. Mol. Opt. 
Phys.}\ }\textbf {\bibinfo {volume} {73}} (\bibinfo {year} {2006}),\ 10.1103/PhysRevA.73.062117}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rizzuto}(2007)}]{Rizzuto2007} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Rizzuto}},\ }\href {\doibase 10.1103/PhysRevA.76.062114} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A - At. Mol. Opt. Phys.}\ }\textbf {\bibinfo {volume} {76}} (\bibinfo {year} {2007}),\ 10.1103/PhysRevA.76.062114}\BibitemShut {NoStop} \bibitem [{\citenamefont {Alhambra}\ \emph {et~al.}(2014{\natexlab{b}})\citenamefont {Alhambra}, \citenamefont {Kempf},\ and\ \citenamefont {Mart{\'{i}}n-Mart{\'{i}}nez}}]{Alhambra2014a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {{\'{A}}.~M.}\ \bibnamefont {Alhambra}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kempf}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Mart{\'{i}}n-Mart{\'{i}}nez}},\ }\href {\doibase 10.1103/PhysRevA.89.033835} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A - At. Mol. Opt. Phys.}\ }\textbf {\bibinfo {volume} {89}} (\bibinfo {year} {2014}{\natexlab{b}}),\ 10.1103/PhysRevA.89.033835}\BibitemShut {NoStop} \end{thebibliography} \appendix \section{Low frequency approximation} In this Appendix we demonstrate that the infinite series given by Eqs.~\eqref{eqn:ECP} and \eqref{eqn:pav} can be well approximated by a sum of a finite number of terms. We include only the lowest modes of the field in the sum, so it can be called a low-frequency approximation. Furthermore, we want to show that the frequency of the last mode included in the approximating sum always stays much smaller than the energy gap $\Omega$. \begin{figure} \caption{Fidelity of the finite-sum approximation used to find the value of the Casimir-Polder potential or the probability of excitation for a detector in the middle of the cavity.
The parameters taken for the plot read: $x=\frac{L}{2},~L=1,~m=1\cdot10^{-3},~\lambda=1\cdot10^{-2}$ (in the case of the Casimir-Polder potential there is also the coupling constant $\alpha$). Fidelity of: (Top) the probability of excitation, (Center) the Casimir-Polder potential for $\alpha=1\cdot10^{-1}$, (Bottom) the Casimir-Polder potential for $\alpha=2$. } \label{fig3} \end{figure} Let us start from the most general case and consider a convergent infinite series $S=\sum_{n=1}^{\infty}a_n$. The value of $S$ can be approximated by the partial sum $S_N=\sum_{n=1}^{N}a_n$; the larger $N$ is, the better $S_N$ approximates $S$. We can ask how many terms of the series need to be summed to achieve a given quality of the approximation. To make this question well defined, we have to decide how to measure the quality of the approximation of the series. One possibility is to use a fidelity function defined as: \begin{equation}\label{fid} \mathcal{F}_{a_n}^{N}=\frac{a_{N}}{\sum_{n=1}^{N}a_n}. \end{equation} This function tells us how large the contribution of the last term to the finite sum $S_N$ is. The smaller $\mathcal{F}_{a_n}^{N}$ is, the better the approximation of $S$ by $S_N$ becomes. In the case presented above, we want to verify whether the series that define the Casimir-Polder potential and the probability of excitation can be approximated by a sum including just $N$ terms such that $\omega_N$ is still much smaller than $\Omega$. Using the fidelity function~\eqref{fid} we can find the value of $\mathcal{F}_{E_n(\Omega)}^{N}$, where $E_n$ is such that $E_{\mathrm{CP}}=\sum_{n=1}^{\infty}E_n$. To answer our question we can plot $\mathcal{F}_{E_n(\Omega=\omega_K)}^{N}$ as a function of $N$ and of $K$ such that $\omega_K=\Omega$. Similarly, for the probability of excitation we plot $\mathcal{F}_{(p_{g\xrightarrow{}e})_n(\Omega)}^{N}$, where $(p_{g\xrightarrow{}e})_n$ is such that $p_{g\xrightarrow{}e}=\sum_{n}(p_{g\xrightarrow{}e})_n$.
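To illustrate how the fidelity \eqref{fid} behaves for a convergent series, the following minimal Python sketch evaluates it for the model series $a_n = 2^{-n}$. This is a hypothetical stand-in for the actual mode sums $E_n$ and $(p_{g\xrightarrow{}e})_n$, whose closed forms are not reproduced here; the helper name \texttt{fidelity} is ours. For this model series the fidelity is $2^{-N}/(1-2^{-N})$, which decays roughly geometrically as $N$ grows, i.e.\ the quality of the partial-sum approximation improves with each added term.

```python
def fidelity(terms):
    """Fidelity of a partial sum: contribution of the last term,
    F = a_N / sum_{n=1}^{N} a_n, cf. Eq. (fid) in the text."""
    return terms[-1] / sum(terms)

# Model series a_n = 2^{-n}: the partial sum is S_N = 1 - 2^{-N},
# so the fidelity equals 2^{-N} / (1 - 2^{-N}) and roughly halves
# with every extra term included in the sum.
for N in (5, 10, 20):
    terms = [2.0 ** (-n) for n in range(1, N + 1)]
    print(N, fidelity(terms))
```

The same function applied to the numerically evaluated mode contributions would reproduce the fidelity maps shown in figure \ref{fig3}.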
For simplicity, we will consider only a detector standing in the middle of the cavity. As a result, only odd modes of the field have a non-zero contribution to the final value. Figure \ref{fig3} shows the fidelity of the approximated series defining the Casimir-Polder potential and the probability of excitation. We can see that the lines connecting the points of the same value of the fidelity function have a convex shape. It turns out that for every parameter describing the quality of the approximation by an $N$-term sum, we can choose $\Omega$ for which $\omega_N\ll\Omega$. For instance, let us consider the line of constant fidelity shown in the top panel of figure \ref{fig3}. The same value of the fidelity occurs for the pair $(N,K)\approx(7,7)$ and for $(N,K)\approx(12,20)$. It means that for $\Omega=\omega_{2\cdot7+1}$ one has to sum $N=2\cdot7+1$ modes of the field to achieve the same fidelity as for $\Omega=\omega_{2\cdot20+1}$ with only $N=2\cdot12+1$ modes. For the series defining the Casimir-Polder potential with coupling $\alpha>1$ the situation is even better, because the lines of constant fidelity decrease with $K$, so the larger $\Omega=\omega_{2K+1}$ is, the fewer modes need to be summed to achieve a given fidelity. \end{document}
\begin{document} \title[Stationary Nonlocal Cahn-Hilliard-Navier-Stokes System]{On the Stationary Nonlocal Cahn-Hilliard-Navier-Stokes System: Existence, Uniqueness and Exponential Stability \Addresses } \author[T. Biswas, S. Dharmatti, L. N. M. Perisetti and M. T. Mohan] {Tania Biswas\textsuperscript{1}, Sheetal Dharmatti\textsuperscript{1*}, Perisetti Lakshmi Naga Mahendranath\textsuperscript{1} and Manil T Mohan\textsuperscript{2}} \begin{abstract} The Cahn-Hilliard-Navier-Stokes system describes the evolution of two isothermal, incompressible, immiscible fluids in a bounded domain. In this work, we consider the stationary nonlocal Cahn-Hilliard-Navier-Stokes system in two and three dimensions with singular potential. We prove the existence of a weak solution for the system using pseudo-monotonicity arguments and Browder's theorem. Further, we establish the uniqueness and regularity results for the weak solution of the stationary nonlocal Cahn-Hilliard-Navier-Stokes system for constant mobility parameter and viscosity. Finally, in two dimensions, we establish that the stationary solution is exponentially stable (for convex singular potentials) under suitable conditions on the mobility parameter and viscosity. \end{abstract} \maketitle \section{Introduction}\label{sec1}\setcounter{equation}{0} We consider a mathematical model of two isothermal, incompressible, immiscible fluids evolving in two or three dimensional bounded domains. This system of equations is well known as the \emph{Cahn-Hilliard-Navier-Stokes (CHNS) system}, also known as the $\mathrm{H}$-model. The Cahn-Hilliard-Navier-Stokes model describes the chemical interactions between the two phases at the interface, captured using a Cahn-Hilliard approach, as well as the hydrodynamic properties of the mixture, obtained by using the Navier-Stokes equations with surface tension terms acting at the interface (see \cite{MR2580516}).
If the two fluids have the same constant density, the temperature differences are negligible, and the diffusive interface between the two phases has a small but non-zero thickness, then we obtain the well-known ``H-model" (see \cite{MR1404829}). The equations for the evolution of the Cahn-Hilliard-Navier-Stokes/H-model are given by \begin{equation}\label{1.1} \mathopen{}\mathclose\bgroup\originalleft\{ \begin{aligned} \varphi_t + \mathbf{u}\cdot \nabla \varphi &=\text{ div} (m(\varphi) \nabla \mu), \ \text{ in } \ \Omega \times (0,T),\\ \mu &= a \varphi - \mathrm{J}\ast \varphi + \mathrm{F}'(\varphi),\\ \rho \mathbf{u}_t - 2 \text{div } ( \nu(\varphi) \mathrm{D}\mathbf{u} ) + (\mathbf{u}\cdot \nabla )\mathbf{u} + \nabla \uppi &= \mu \nabla \varphi + \mathbf{h}, \ \text{ in } \ \Omega \times (0,T),\\ \text{div }\mathbf{u}&= 0, \ \text{ in } \ \Omega \times (0,T), \\ \frac{\partial \mu}{\partial\mathbf{n}} &= 0 \ , \mathbf{u}=\mathbf{0} \ \text{ on } \ \partial \Omega \times [0,T],\\ \mathbf{u}(0) &= \mathbf{u}_0, \ \ \varphi(0) = \varphi _0 \ \text{ in } \ \Omega, \end{aligned} \aftergroup\egroup\originalright. \end{equation} where $\Omega\subset \mathbb{R}^n,\ n=2,3$ and $\mathbf{u}(x,t)$ and $\varphi(x,t)$ denote the average velocity of the fluid and the relative concentration, respectively. These equations are of nonlocal type because of the presence of the term $\mathrm{J}$, which is the \emph{spatially-dependent internal kernel}, and $\mathrm{J} \ast \varphi$ denotes the spatial convolution over $\Omega$. The mobility parameter is denoted by $m$, $\mu$ is the \emph{chemical potential}, $\uppi$ is the \emph{pressure}, $a$ is defined by $a(x) := \int _\Omega \mathrm{J}(x-y) \/\mathrm{d}\/ y$, $\mathrm{F}$ is the configuration potential, which accounts for the presence of the two phases, $\nu$ is the \emph{kinematic viscosity} and $\mathbf{h}$ is the external forcing term acting on the mixture.
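The second equation in \eqref{1.1} identifies $\mu$ as the first variation of the nonlocal free energy $\mathcal{E}(\varphi)=\frac{1}{4}\int_\Omega\int_\Omega \mathrm{J}(x-y)(\varphi(x)-\varphi(y))^2\/\mathrm{d}\/ x \/\mathrm{d}\/ y+\int_\Omega\mathrm{F}(\varphi(x))\/\mathrm{d}\/ x$ recalled below. Indeed, a short formal computation using the symmetry $\mathrm{J}(x)=\mathrm{J}(-x)$ gives, for any direction $\psi$,
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\mathcal{E}(\varphi+\varepsilon\psi)\Big|_{\varepsilon=0}
&=\frac{1}{2}\int_\Omega\int_\Omega \mathrm{J}(x-y)(\varphi(x)-\varphi(y))(\psi(x)-\psi(y))\/\mathrm{d}\/ x \/\mathrm{d}\/ y+\int_\Omega \mathrm{F}'(\varphi(x))\psi(x)\/\mathrm{d}\/ x\\
&=\int_\Omega\int_\Omega \mathrm{J}(x-y)(\varphi(x)-\varphi(y))\psi(x)\/\mathrm{d}\/ y \/\mathrm{d}\/ x+\int_\Omega \mathrm{F}'(\varphi(x))\psi(x)\/\mathrm{d}\/ x\\
&=\int_\Omega \big(a(x)\varphi(x)-(\mathrm{J}\ast\varphi)(x)+\mathrm{F}'(\varphi(x))\big)\psi(x)\/\mathrm{d}\/ x,
\end{align*}
that is, $\mu=a\varphi-\mathrm{J}\ast\varphi+\mathrm{F}'(\varphi)$ with $a(x)=\int_\Omega \mathrm{J}(x-y)\/\mathrm{d}\/ y$, as stated in \eqref{1.1}.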
The strain tensor $\mathrm{D}\mathbf{u}$ is the symmetric part of the gradient of the flow velocity vector, i.e., $\mathrm{D}\mathbf{u}=\frac{1}{2}\mathopen{}\mathclose\bgroup\originalleft(\nabla\mathbf{u}+(\nabla\mathbf{u})^{\top}\aftergroup\egroup\originalright)$. The chemical potential $\mu$ is the first variation of the free energy functional: \begin{align*} \mathcal{E}(\varphi) := \frac{1}{4}\int_\Omega \int_\Omega \mathrm{J}(x-y)(\varphi(x)-\varphi(y))^2\/\mathrm{d}\/ x \/\mathrm{d}\/ y + \int_\Omega \mathrm{F}(\varphi(x))\/\mathrm{d}\/ x. \end{align*} Various simplified models of this system have been studied by several mathematicians and physicists. The local version of the system (see \cite{MR1700669, MR2580516}) is obtained by replacing the $\mu$ equation by $ \mu =- \Delta \varphi + \mathrm{F}' (\varphi) $, which is the first variation of the free energy functional $$\mathcal{E}(\varphi) := \int_{\Omega} \mathopen{}\mathclose\bgroup\originalleft( \frac{1}{2} | \nabla \varphi(x)|^2 + \mathrm{F}(\varphi (x))\aftergroup\egroup\originalright)\, \/\mathrm{d}\/ x.$$ Another simplification that appears in the literature is to assume a constant mobility parameter and/or a constant viscosity. The solvability, uniqueness and regularity of the system (\ref{1.1}) and of other simplified Cahn-Hilliard-Navier-Stokes models are well studied in the literature, though most of the works are recent ones. Typically, two types of potentials are considered in the literature: regular potentials and singular potentials. In general, singular potentials are difficult to handle and in such cases, $\mathrm{F}(\cdot)$ is usually approximated by polynomials in order to make the mathematical analysis easier (see \cite{MR2347608}). The nonlocal Cahn-Hilliard-Navier-Stokes system with regular potential has been analysed by M. Grasselli et al. in \cite{MR2834896, MR3518604, MR3090070}, etc.
Taking advantage of the results for the regular potential, they have also studied in \cite{MR3019479} the existence of a weak solution of the system with singular potential. Furthermore, they proved the existence of the global attractor in 2D and of the trajectory attractor in 3D. Strong solutions for the nonlocal Cahn-Hilliard-Navier-Stokes system were discussed in \cite{MR3903266}. Uniqueness results for the same were established in \cite{MR3518604}. In \cite{MR3688414}, the authors considered the nonlocal Cahn-Hilliard equation with singular potential and constant mobility and studied well-posedness and regularity results. Moreover, they established the strict separation property in dimension 2. Regularity results in the case of degenerate mobility were studied in \cite{frigeri10regularity}. The local Cahn-Hilliard-Navier-Stokes system with singular free energies has been studied in \cite{MR2563636, MR1700669}. Further, on the applications side, the optimal control of nonlocal Cahn-Hilliard-Navier-Stokes equations and the robust control of local Cahn-Hilliard-Navier-Stokes equations have been addressed in the works \cite{MR3456388,MR4104524,MR3565933,MR3436705, MR4108622,MR4131779}, etc. Solvability results for the stationary nonlocal Cahn-Hilliard equations with singular potential were discussed in \cite{MR2108884}, whereas the authors in \cite{MR2347608} proved the convergence to the equilibrium solution of the Cahn-Hilliard system with logarithmic free energy. The existence of an equilibrium solution for the steady state Navier-Stokes equations is well known in the literature and can be found in the book \cite{MR1318914}. In \cite{MR3524178}, the authors discussed the existence of a weak solution to the stationary local Cahn-Hilliard-Navier-Stokes equations.
The author in \cite{MR3555135} studied a coupled Cahn-Hilliard-Navier-Stokes model with delays in a two-dimensional bounded domain and discussed the asymptotic behaviour of the weak solutions and the stability of the stationary solutions. In this work, our main aim is to study the well-posedness of the nonlocal steady state system corresponding to the model described in \eqref{1.1} in dimensions $2$ and $3$ and to examine the stability properties of this solution in dimension $2$ (for convex singular potentials). Throughout this paper, we consider $\mathrm{F}$ to be a singular potential. A typical example is the logarithmic potential: \begin{align}\label{2} \mathrm{F} (\varphi) = \frac{\theta}{2} ((1+ \varphi) \ln (1+\varphi) + (1-\varphi) \ln (1-\varphi)) - \frac{\theta_c}{2} \varphi^2 ,\quad \varphi \in (-1,1), \end{align} where $\theta,\theta_c>0$. The logarithmic terms are related to the entropy of the system, and note that $\mathrm{F}$ is convex if and only if $\theta\geq \theta_c$. If $\theta\geq \theta_c$, the mixed phase is stable, while if $0<\theta<\theta_c$, the mixed phase is unstable and phase separation occurs. To the best of our knowledge, solvability results for the stationary nonlocal Cahn-Hilliard-Navier-Stokes equations are not available in the literature. In the current work, using techniques similar to the ones developed in \cite{MR3524178}, we resolve this issue. We prove the existence of a weak solution to the stationary nonlocal Cahn-Hilliard-Navier-Stokes system in dimensions 2 and 3 with variable mobility and viscosity. Further, we answer the questions of uniqueness and regularity of the solution for the equations with constant viscosity and mobility parameters. In dimensions $2$ and $3$, we show that the weak solution possesses higher regularity. The uniqueness of weak solutions is established under certain conditions on the viscosity and mobility parameters.
Lastly, for constant viscosity and mobility parameters and a convex singular potential, we establish that the strong solution of the steady state equations in dimension $2$ stabilizes exponentially. The main difficulty to tackle while obtaining these results is the nonlocal term in the equation, which needs careful estimation. Moreover, to the best of our knowledge, apart from the existence and regularity results for the steady state equations obtained here, this is the first work to discuss the exponential stability of the nonlocal Cahn-Hilliard-Navier-Stokes model. These results can be useful to study stabilisation properties of control problems associated with the system. The rest of the paper is organised as follows: In the next section, we explain the functional setting for the solvability of the stationary nonlocal Cahn-Hilliard-Navier-Stokes equations \eqref{steadysys} (given below). We define the weak formulation of our system in section \ref{sec3}. The existence of a weak solution to the nonlocal Cahn-Hilliard-Navier-Stokes equations \eqref{steadysys} is proved using pseudo-monotonicity arguments and Browder's theorem in this section (see Theorem \ref{mainthm}). In further studies, we assume the mobility parameter and viscosity to be constant. Section \ref{sec4} is devoted to the study of the uniqueness of a weak solution for the system \eqref{steadysys} and some regularity results. We establish the uniqueness of weak solutions under certain assumptions on the mobility parameter and viscosity (see Theorem \ref{unique-steady}). Further, we derive some regularity results for the solution. Finally, in section \ref{se4}, we establish that the stationary solution in two dimensions is exponentially stable (see Theorem \ref{thmexp}) under certain restrictions on the mobility parameter and viscosity.
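Before moving on, we record the elementary computation behind the convexity statement for the logarithmic potential \eqref{2}: for $\varphi\in(-1,1)$,
\begin{align*}
\mathrm{F}'(\varphi)=\frac{\theta}{2}\ln\frac{1+\varphi}{1-\varphi}-\theta_c\varphi,\qquad
\mathrm{F}''(\varphi)=\frac{\theta}{1-\varphi^2}-\theta_c\geq\theta-\theta_c,
\end{align*}
with equality at $\varphi=0$, so that $\mathrm{F}$ is convex on $(-1,1)$ if and only if $\theta\geq\theta_c$. Note also that $\mathrm{F}'(\varphi)\to\pm\infty$ as $\varphi\to\pm 1$, which is exactly the singular behaviour of the derivative exploited in the assumptions stated later (there formulated for the convex part of the potential).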
\section{Stationary Nonlocal Cahn-Hilliard-Navier-Stokes System}\label{se3}\setcounter{equation}{0} In this section, we consider the stationary nonlocal Cahn-Hilliard-Navier-Stokes system in two and three dimensional bounded domains. Here, we consider the case where the kinematic viscosity and the mobility parameter depend on $\varphi$. Let us consider the following steady state system associated with the equation \eqref{1.1}: \begin{equation}\label{steadysys} \mathopen{}\mathclose\bgroup\originalleft\{ \begin{aligned} \mathbf{u}\cdot \nabla \varphi &= \text{ div} (m(\varphi) \nabla \mu), \ \text{ in }\ \Omega,\\ \mu &= a \varphi - \mathrm{J}\ast \varphi + \mathrm{F}'(\varphi), \ \text{ in }\ \Omega,\\ - 2 \text{div } ( \nu(\varphi) \mathrm{D}\mathbf{u} ) + (\mathbf{u}\cdot \nabla )\mathbf{u} + \nabla \uppi &= \mu \nabla \varphi + \mathbf{h}, \ \text{ in }\ \Omega,\\ \text{div }\mathbf{u}&= 0, \ \text{ in }\ \Omega, \\ \frac{\partial \mu}{\partial\mathbf{n}} &= 0 \ , \mathbf{u}=\mathbf{0} \ \text{ on } \ \partial \Omega,\\ \end{aligned} \aftergroup\egroup\originalright. \end{equation} with the average of $\varphi$ given by $$\frac{1}{|\Omega|}\int_\Omega \varphi (x) \/\mathrm{d}\/ x = k \in (-1,1),$$ where $|\Omega|$ is the Lebesgue measure of $\Omega$. Our main aim in this work is to study the existence, uniqueness, regularity and stability of the system \eqref{steadysys}. For solvability, we formulate the problem in an abstract setup and use the well-known Browder's theorem to establish the existence of a weak solution to the system \eqref{steadysys}. We further study the regularity, uniqueness and exponential stability of the system with constant viscosity and mobility parameters by establishing a-priori estimates, under certain conditions on the viscosity and mobility. \subsection{Functional setting} We first explain the functional spaces needed to obtain our main results.
Let us define \begin{align*} \mathbb{G}_{\mathrm{div}} &:= \Big\{ \mathbf{u} \in \mathrm{L}^2(\Omega;\mathbb{R}^n) : \text{div }\mathbf{u}=0,\ \mathbf{u}\cdot \mathbf{n} \big|_{\partial \Omega}=0 \Big\}, \\ \mathbb{V}_{\mathrm{div}} &:= \Big\{\mathbf{u} \in \mathrm{H}^1_0(\Omega;\mathbb{R}^n): \text{div }\mathbf{u}=0\Big\},\\ \mathrm{H}&:=\mathrm{L}^2(\Omega;\mathbb{R}),\ \mathrm{V}:=\mathrm{H}^1(\Omega;\mathbb{R}), \end{align*} where $n=2,3$. Let us denote by $\| \cdot \|$ and $(\cdot, \cdot)$ the norm and the scalar product, respectively, on both $\mathrm{H}$ and $\mathbb{G}_{\mathrm{div}}$. The duality between any Hilbert space $\mathbb{X}$ and its topological dual $\mathbb{X}'$ is denoted by $_{\mathbb{X}'}\langle \cdot,\cdot\rangle_{\mathbb{X}}$. We know that $\mathbb{V}_{\mathrm{div}}$ is endowed with the scalar product $$(\mathbf{u},\mathbf{v})_{\mathbb{V}_{\mathrm{div}} }= (\nabla \mathbf{u}, \nabla \mathbf{v})=2(\mathrm{D}\mathbf{u},\mathrm{D}\mathbf{v}),\ \text{ for all }\ \mathbf{u},\mathbf{v}\in\mathbb{V}_{\mathrm{div}}.$$ The norm on $\mathbb{V}_{\mathrm{div}}$ is given by $\|\mathbf{u}\|_{\mathbb{V}_{\mathrm{div}}}^2:=\int_{\Omega}|\nabla\mathbf{u}(x)|^2\/\mathrm{d}\/ x=\|\nabla\mathbf{u}\|^2$. In the sequel, we use the notations $\mathbb{H}^m(\Omega):=\mathrm{H}^m(\Omega;\mathbb{R}^n)=\mathrm{W}^{m,2}(\Omega;\mathbb{R}^n)$ and $\mathrm{H}^m(\Omega):=\mathrm{H}^m(\Omega;\mathbb{R})=\mathrm{W}^{m,2}(\Omega;\mathbb{R})$ for Sobolev spaces of order $m$.
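The second equality in the definition of the scalar product on $\mathbb{V}_{\mathrm{div}}$, namely $(\nabla\mathbf{u},\nabla\mathbf{v})=2(\mathrm{D}\mathbf{u},\mathrm{D}\mathbf{v})$, is a consequence of the divergence-free and no-slip conditions: for smooth $\mathbf{u},\mathbf{v}\in\mathbb{V}_{\mathrm{div}}$ (and then for all of $\mathbb{V}_{\mathrm{div}}$ by density),
\begin{align*}
2(\mathrm{D}\mathbf{u},\mathrm{D}\mathbf{v})=(\nabla\mathbf{u},\nabla\mathbf{v})+\sum_{i,j=1}^{n}\int_{\Omega}\partial_{i}u_{j}(x)\,\partial_{j}v_{i}(x)\/\mathrm{d}\/ x,
\end{align*}
and the last sum vanishes, since integrating by parts (the boundary term vanishes because $\mathbf{u}$ has zero trace) gives
\begin{align*}
\sum_{i,j=1}^{n}\int_{\Omega}\partial_{i}u_{j}\,\partial_{j}v_{i}\/\mathrm{d}\/ x=-\sum_{j=1}^{n}\int_{\Omega}u_{j}\,\partial_{j}(\text{div }\mathbf{v})\/\mathrm{d}\/ x=0.
\end{align*}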
Let us also define \begin{align*} \mathrm{L}^2_{(k)}(\Omega)& := \mathopen{}\mathclose\bgroup\originalleft\{ f \in \mathrm{L}^2(\Omega;\mathbb{R}) : \frac{1}{|\Omega |} \int_\Omega f(x)\/\mathrm{d}\/ x = k \aftergroup\egroup\originalright\},\\ \mathrm{H}^1_{(0)} (\Omega)&:= \mathrm{H}^1(\Omega;\mathbb{R}) \cap\mathrm{L}^2_{(0)}(\Omega)= \mathopen{}\mathclose\bgroup\originalleft\{ f \in \mathrm{H}^1(\Omega;\mathbb{R}) : \int_\Omega f(x)\/\mathrm{d}\/ x = 0 \aftergroup\egroup\originalright\}\ ,\\ \mathrm{H}^{-1}_{(0)}(\Omega)&:=\mathrm{H}_{(0)}^1(\Omega)'. \end{align*} Note that $\mathrm{L}^2_{(0)}(\Omega)$ is a Hilbert space equipped with the usual inner product in $\mathrm{L}^2(\Omega)$. Since $\Omega$ is a bounded smooth domain and the average of $f$ is zero in $\mathrm{H}^1_{(0)} (\Omega)$, using the Poincar\'e-Wirtinger inequality (see Lemma \ref{poin} below), we have $\|f\|\leq C_{\Omega}\|\nabla f\|$, for all $f\in \mathrm{H}^1_{(0)} (\Omega)$. Using this fact, one can also show that $\mathrm{H}^1_{(0)} (\Omega)$ is a Hilbert space equipped with the inner product \begin{align*} (\varphi ,\psi)_{\mathrm{H}^1_{(0)}} = (\nabla \varphi, \nabla \psi), \ \text{ for all }\ \varphi, \psi \in \mathrm{H}^1_{(0)}(\Omega) . \end{align*} We can prove the following dense and continuous embedding: \begin{align} \label{zeroembedding} \mathrm{H}^1_{(0)} (\Omega)\hookrightarrow \mathrm{L}^2_{(0)}(\Omega)\equiv \mathrm{L}^2_{(0)}(\Omega)' \hookrightarrow \mathrm{H}^{-1}_{(0)}(\Omega). \end{align} Note that the embedding is compact (see for example, Theorem 1, Chapter 5, \cite{MR2597943}). The projection $\mathrm{P}_0 : \mathrm{L}^2(\Omega) \rightarrow \mathrm{L}^2_{(0)}(\Omega)$ onto $\mathrm{L}^2$-space with mean value zero is defined by \begin{align} \label{P0} \mathrm{P}_0 f := f - \frac{1}{|\Omega |} \int_\Omega f(x)\/\mathrm{d}\/ x \end{align} for every $f \in \mathrm{L}^2(\Omega)$. 
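Note that $\mathrm{P}_0$ defined in \eqref{P0} is precisely the orthogonal projection onto $\mathrm{L}^2_{(0)}(\Omega)$: since $f-\mathrm{P}_0 f$ is the constant $\frac{1}{|\Omega|}\int_\Omega f(x)\/\mathrm{d}\/ x$, we have, for every $g\in\mathrm{L}^2_{(0)}(\Omega)$,
\begin{align*}
(f-\mathrm{P}_0 f,g)=\Big(\frac{1}{|\Omega|}\int_\Omega f(x)\/\mathrm{d}\/ x\Big)\int_\Omega g(y)\/\mathrm{d}\/ y=0,
\end{align*}
so that $f-\mathrm{P}_0 f\perp \mathrm{L}^2_{(0)}(\Omega)$ and, in particular, $\|f\|^2=\|\mathrm{P}_0 f\|^2+\frac{1}{|\Omega|}\big(\int_\Omega f(x)\/\mathrm{d}\/ x\big)^2$.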
For every $f \in \mathrm{V}'$, we denote $\overline{f}$ the average of $f$ over $\Omega$, that is, $\overline{f} := |\Omega|^{-1} {_{\mathrm{V}'}}\langle f, 1 \rangle_{\mathrm{V}}$. Let us also introduce the spaces (see \cite{MR3518604} for more details) \begin{align*}\mathrm{V}_0 &=\mathrm{H}_{(0)}^{1}(\Omega)= \{ v \in \mathrm{V} : \overline{v} = 0 \},\\ \mathrm{V}_0' &=\mathrm{H}_{(0)}^{-1}(\Omega)= \{ f \in \mathrm{V}' : \overline{f} = 0 \},\end{align*} and the operator $\mathcal{A} : \mathrm{V} \rightarrow \mathrm{V}'$ is defined by \begin{align*}\,_{\mathrm{V}'}\langle \mathcal{A} u ,v \rangle_{\mathrm{V}} := \int_\Omega \nabla u(x) \cdot \nabla v(x) \/\mathrm{d}\/ x, \ \text{for all } \ u,v \in \mathrm{V}.\end{align*} Clearly $\mathcal{A}$ is linear and it maps $\mathrm{V}$ into $\mathrm{V}_0'$ and its restriction $\mathcal{B}$ to $\mathrm{V}_0$ onto $\mathrm{V}_0'$ is an isomorphism. We know that for every $f \in \mathrm{V}_0'$, $\mathcal{B}^{-1}f$ is the unique solution with zero mean value of the \emph{Neumann problem}: $$ \mathopen{}\mathclose\bgroup\originalleft\{ \begin{array}{ll} - \Delta u = f, \ \mbox{ in } \ \Omega, \\ \frac{\partial u}{\partial\mathbf{n}} = 0, \ \mbox{ on } \ \partial \Omega. \end{array} \aftergroup\egroup\originalright. 
$$ In addition, we have \begin{align} \!_{\mathrm{V}'}\langle \mathcal{A}u , \mathcal{B}^{-1}f \rangle_{\mathrm{V}} &= \!_{\mathrm{V}'}\langle f ,u \rangle_{\mathrm{V}}, \ \text{ for all } \ u\in \mathrm{V}, \ f \in \mathrm{V}_0' , \label{bes}\\ \!_{\mathrm{V}'}\langle f , \mathcal{B}^{-1}g \rangle_{\mathrm{V}} &= \!_{\mathrm{V}'}\langle g ,\mathcal{B}^{-1}f \rangle_{\mathrm{V}} = \int_\Omega \nabla(\mathcal{B}^{-1}f)\cdot \nabla(\mathcal{B}^{-1}g)\/\mathrm{d}\/ x, \ \text{for all } \ f,g \in \mathrm{V}_0'.\label{bes1} \end{align} Note that $\mathcal{B}$ can be also viewed as an unbounded linear operator on $\mathrm{H}$ with domain $\mathrm{D}(\mathcal{B}) = \mathopen{}\mathclose\bgroup\originalleft\{v \in \mathrm{H}^2(\Omega)\cap \mathrm{V}_0 : \frac{\partial v}{\partial\mathbf{n}}= 0\text{ on }\partial\Omega \aftergroup\egroup\originalright\}$. Below, we give some facts about the elliptic regularity theory of Laplace operator $ B_N = -\Delta + \mathrm{I} $ with Neumann boundary conditions. \begin{lemma}[$ L^p $ regularity for Neumann Laplacian, \cite{SJL_1961-1962____A6_0}, Theorem 9.26, \cite{MR2759829}] \label{Lp_reg} Let us assume that $ u $ satisfies $ B_N u = f$ and $ \frac{\partial u}{\partial n} =0$ in weak sense. Then the following holds: \begin{itemize} \item [(i)] Let $ f \in (W^{1,p'}(\Omega))'$, with $ 1<p'<\infty $. Then $ u \in W^{1,p}(\Omega) $, where $ \frac{1}{p}+\frac{1}{p'}=1 $ and there exists a constant $ C>0 $ such that \begin{align*} \|u\|_{W^{1,p}(\Omega)} \leq C \|f\|_{(W^{1,p'}(\Omega))'}. \end{align*} \item [(ii)] Let $f \in L^p(\Omega)$, with $1 < p < \infty$. Then, $u \in W^{2,p} (\Omega), -\Delta u + u = f$ for a.e. $x \in \Omega, \frac{\partial u}{\partial n}= 0$ on $\partial \Omega$ in the sense of traces and there exists $C > 0$ such that \begin{align*} \|u\|_{W^{2,p}(\Omega)} \leq C \|f\|_{L^p(\Omega)}. 
\end{align*} \end{itemize} \end{lemma} \subsection{Linear and non-linear operators} Let us define the Stokes operator $\mathrm{A} : \mathrm{D}(\mathrm{A})\subset \mathbb{G}_{\mathrm{div}} \to \mathbb{G}_{\mathrm{div}}$. In the case of the no slip boundary condition $$\mathrm{A}=-\mathrm{P}\Delta,\ \mathrm{D}(\mathrm{A})=\mathbb{H}^2(\Omega) \cap \mathbb{V}_{\mathrm{div}},$$ where $\mathrm{P} : \mathbb{L}^2(\Omega) \to \mathbb{G}_{\mathrm{div}}$ is the \emph{Helmholtz-Hodge orthogonal projection}. Note also that we have $$\!_{\mathbb{V}_{\mathrm{div}}'}\langle\mathrm{A}\mathbf{u}, \mathbf{v}\rangle_{\mathbb{V}_{\mathrm{div}}} = (\mathbf{u}, \mathbf{v})_{\mathbb{V}_{\mathrm{div}}} = (\nabla\mathbf{u}, \nabla\mathbf{v}), \text{ for all } \ \mathbf{u}, \mathbf{v} \in \mathbb{V}_{\mathrm{div}}.$$ It should also be noted that $\mathrm{A}^{-1} : \mathbb{G}_{\mathrm{div}} \to \mathbb{G}_{\mathrm{div}}$ is a self-adjoint compact operator on $\mathbb{G}_{\mathrm{div}}$ and, by the classical \emph{spectral theorem}, there exists a sequence $\{\lambda_j\}$ with $0<\lambda_1\leq \lambda_2\leq\cdots\leq\lambda_j\leq\cdots\to+\infty$ and a family $\mathbf{e}_j \in \mathrm{D}(\mathrm{A}),$ which is orthonormal in $\mathbb{G}_{\mathrm{div}}$ and such that $\mathrm{A}\mathbf{e}_j =\lambda_j\mathbf{e}_j$. We know that $\mathbf{u} \in\mathbb{G}_{\mathrm{div}}$ can be expressed as $\mathbf{u}=\sum\limits_{j=1}^{\infty}\langle \mathbf{u},\mathbf{e}_j\rangle \mathbf{e}_j,$ so that $\mathrm{A}\mathbf{u}=\sum\limits_{j=1}^{\infty}\lambda_j\langle \mathbf{u},\mathbf{e}_j\rangle \mathbf{e}_j,$ for all $\mathbf{u}\in \mathrm{D}(\mathrm{A})\subset \mathbb{G}_{\mathrm{div}}$. Thus, it is immediate that \begin{align} \|\nabla\mathbf{u}\|^2=\langle \mathrm{A}\mathbf{u},\mathbf{u}\rangle =\sum_{j=1}^{\infty}\lambda_j|\langle \mathbf{u},\mathbf{e}_j\rangle|^2\geq \lambda_1\sum_{j=1}^{\infty}|\langle \mathbf{u},\mathbf{e}_j\rangle|^2=\lambda_1\|\mathbf{u}\|^2.
\end{align} For $\mathbf{u},\mathbf{v},\mathbf{w} \in \mathbb{V}_{\mathrm{div}}$, we define the trilinear operator $b(\cdot,\cdot,\cdot)$ as $$b(\mathbf{u},\mathbf{v},\mathbf{w}) = \int_\Omega (\mathbf{u}(x) \cdot \nabla)\mathbf{v}(x) \cdot \mathbf{w}(x)\/\mathrm{d}\/ x=\sum_{i,j=1}^n\int_{\Omega}u_i(x)\frac{\partial v_j(x)}{\partial x_i}w_j(x)\/\mathrm{d}\/ x,$$ and the bilinear operator $\mathrm{B}$ from $\mathbb{V}_{\mathrm{div}} \times \mathbb{V}_{\mathrm{div}} $ into $\mathbb{V}_{\mathrm{div}}'$ is defined by, $$ \!_{\mathbb{V}_{\mathrm{div}}'}\langle \mathrm{B}(\mathbf{u},\mathbf{v}),\mathbf{w} \rangle_{\mathbb{V}_{\mathrm{div}}} := b(\mathbf{u},\mathbf{v},\mathbf{w}), \ \text{ for all } \ \mathbf{u},\mathbf{v},\mathbf{w} \in \mathbb{V}_\text{{div}}.$$ An integration by parts yields, \begin{equation}\label{2.7} \mathopen{}\mathclose\bgroup\originalleft\{ \begin{aligned} b(\mathbf{u},\mathbf{v},\mathbf{v}) &= 0, \ \text{ for all } \ \mathbf{u},\mathbf{v} \in\mathbb{V}_\text{{div}},\\ b(\mathbf{u},\mathbf{v},\mathbf{w}) &= -b(\mathbf{u},\mathbf{w},\mathbf{v}), \ \text{ for all } \ \mathbf{u},\mathbf{v},\mathbf{w}\in \mathbb{V}_\text{{div}}. \end{aligned} \aftergroup\egroup\originalright.\end{equation} For more details about the linear and non-linear operators, we refer the readers to \cite{MR0609732}. Let us now provide some important inequalities, which are used frequently in the paper. \begin{lemma}[Gagliardo-Nirenberg inequality, Theorem 2.1, \cite{MR1230384}] \label{gn} Let $\Omega\subset\mathbb{R}^n$ and $\mathbf{u}\in\mathrm{W}^{1,p}_0(\Omega;\mathbb{R}^n)$, $p\geq 1$. 
Then for any fixed numbers $p,q\geq 1$, there exists a constant $C>0$ depending only on $n,p,q$ such that \begin{align}\label{gn0} \|\mathbf{u}\|_{\mathbb{L}^r}\leq C\|\nabla\mathbf{u}\|_{\mathbb{L}^p}^{\theta}\|\mathbf{u}\|_{\mathbb{L}^q}^{1-\theta},\;\theta\in[0,1], \end{align} where the numbers $p, q, r, n$ and $\theta$ satisfy the relation $$\theta=\mathopen{}\mathclose\bgroup\originalleft(\frac{1}{q}-\frac{1}{r}\aftergroup\egroup\originalright)\mathopen{}\mathclose\bgroup\originalleft(\frac{1}{n}-\frac{1}{p}+\frac{1}{q}\aftergroup\egroup\originalright)^{-1}.$$ \end{lemma} A particular case of Lemma \ref{gn} is the well-known inequality due to Ladyzhenskaya (see Lemmas 1 and 2, Chapter 1, \cite{MR0254401}), which is given below: \begin{lemma}[Ladyzhenskaya's inequality]\label{lady} For $\mathbf{u}\in\mathrm{C}_0^{\infty}(\Omega;\mathbb{R}^n), n = 2, 3$, there exists a constant $C$ such that \begin{align}\label{lady1} \|\mathbf{u}\|_{\mathbb{L}^4}\leq C^{1/4}\|\mathbf{u}\|^{1-\frac{n}{4}}\|\nabla\mathbf{u}\|^{\frac{n}{4}},\text{ for } n=2,3, \end{align} where $C=2,4,$ for $n=2,3$ respectively. \end{lemma} Note that the above inequality holds even in unbounded domains. For $n=3$, $r=6$, $p=q=2$, from \eqref{gn0}, we find $\theta=1$ and \begin{align*} \|\mathbf{u}\|_{\mathbb{L}^6}\leq C\|\nabla\mathbf{u}\|=C\|\mathbf{u}\|_{\mathbb{V}_{\mathrm{div}}}. \end{align*} For $n=2$, the following estimate holds: \begin{align*} |b(\mathbf{u},\mathbf{v},\mathbf{w})| \leq \sqrt{2}\|\mathbf{u}\|^{1/2}\| \nabla \mathbf{u}\|^{1/2}\|\mathbf{v}\|^{1/2}\| \nabla \mathbf{v}\|^{1/2}\| \nabla \mathbf{w}\|, \end{align*} for every $\mathbf{u},\mathbf{v},\mathbf{w} \in \mathbb{V}_{\mathrm{div}}$.
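The two-dimensional estimate above can be obtained by combining the antisymmetry property $b(\mathbf{u},\mathbf{v},\mathbf{w})=-b(\mathbf{u},\mathbf{w},\mathbf{v})$ from \eqref{2.7}, H\"older's inequality and Ladyzhenskaya's inequality \eqref{lady1} with $C=2$:
\begin{align*}
|b(\mathbf{u},\mathbf{v},\mathbf{w})|=|b(\mathbf{u},\mathbf{w},\mathbf{v})|
\leq \|\mathbf{u}\|_{\mathbb{L}^4}\|\nabla\mathbf{w}\|\|\mathbf{v}\|_{\mathbb{L}^4}
\leq \sqrt{2}\,\|\mathbf{u}\|^{1/2}\|\nabla\mathbf{u}\|^{1/2}\|\mathbf{v}\|^{1/2}\|\nabla\mathbf{v}\|^{1/2}\|\nabla\mathbf{w}\|,
\end{align*}
each $\mathbb{L}^4$-norm contributing a factor $2^{1/4}$. The analogous three-dimensional bound follows in the same way from \eqref{lady1} with $C=4$ and exponents $1/4$ and $3/4$.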
Hence, for all $\mathbf{u}\in\mathbb{V}_{\mathrm{div}},$ we have \begin{align} \label{be} \|\mathrm{B}(\mathbf{u},\mathbf{u})\|_{\mathbb{V}_{\mathrm{div}}'}\leq \sqrt{2}\|\mathbf{u}\|\|\nabla\mathbf{u}\|\leq \sqrt{\frac{2}{\lambda_1}}\|\mathbf{u}\|_{\mathbb{V}_{\mathrm{div}}}^2 , \end{align} by using the Poincar\'e inequality. Similarly, for $n=3$, we have \begin{align*} |b(\mathbf{u},\mathbf{v},\mathbf{w})| \leq 2\|\mathbf{u}\|^{1/4}\| \nabla \mathbf{u}\|^{3/4}\|\mathbf{v}\|^{1/4}\| \nabla \mathbf{v}\|^{3/4}\| \nabla \mathbf{w}\|, \end{align*} for every $\mathbf{u},\mathbf{v},\mathbf{w} \in \mathbb{V}_{\mathrm{div}}$. Hence, for all $\mathbf{u}\in\mathbb{V}_{\mathrm{div}},$ using the Poincar\'e inequality, we have \begin{align} \|\mathrm{B}(\mathbf{u},\mathbf{u})\|_{\mathbb{V}_{\mathrm{div}}'}\leq 2\|\mathbf{u}\|^{1/2}\|\nabla\mathbf{u}\|^{3/2}\leq \frac{2}{\lambda_1^{1/4}}\|\mathbf{u}\|_{\mathbb{V}_{\mathrm{div}}}^2. \end{align} We also need the following general version of the Gagliardo-Nirenberg interpolation inequality to prove the regularity results. For functions $\mathbf{u}: \Omega\to\mathbb{R}^n$ defined on a bounded Lipschitz domain $\Omega\subset\mathbb{R}^n$, the Gagliardo-Nirenberg interpolation inequality is given by: \begin{lemma}[Gagliardo-Nirenberg interpolation inequality, Theorem on page 125, \cite{MR109940}]\label{GNI} Let $\Omega\subset\mathbb{R}^n$, $\mathbf{u}\in\mathrm{W}^{m,p}(\Omega;\mathbb{R}^n), p\geq 1$ and fix $1 \leq p,q \leq \infty$ and a natural number $m$.
Suppose also that a real number $\theta$ and a natural number $j$ are such that \begin{align} \label{theta} \theta=\mathopen{}\mathclose\bgroup\originalleft(\frac{j}{n}+\frac{1}{q}-\frac{1}{r}\aftergroup\egroup\originalright)\mathopen{}\mathclose\bgroup\originalleft(\frac{m}{n}-\frac{1}{p}+\frac{1}{q}\aftergroup\egroup\originalright)^{-1}\end{align} and $\frac{j}{m} \leq \theta \leq 1.$ Then for any $\mathbf{u}\in\mathrm{W}^{m,p}(\Omega;\mathbb{R}^n),$ we have \begin{align}\label{gn1} \|\nabla^j\mathbf{u}\|_{\mathbb{L}^r}\leq C\mathopen{}\mathclose\bgroup\originalleft(\|\nabla^m\mathbf{u}\|_{\mathbb{L}^p}^{\theta}\|\mathbf{u}\|_{\mathbb{L}^q}^{1-\theta}+\|\mathbf{u}\|_{\mathbb{L}^s}\aftergroup\egroup\originalright), \end{align} where $s > 0$ is arbitrary and the constant $C$ depends upon the domain $\Omega,m,n$. \end{lemma} If $1 < p < \infty$ and $m - j -\frac{n}{p}$ is a non-negative integer, then it is necessary to assume also that $\theta\neq 1$. Note that for $\mathbf{u}\in\mathrm{W}_0^{1,p}(\Omega;\mathbb{R}^n)$, Lemma \ref{gn} is a special case of the above inequality, since taking $j=0$, $m=1$ and $\frac{1}{s}=\frac{\theta}{p}+\frac{1-\theta}{q}$ in \eqref{gn1} and applying the Poincar\'e inequality yields \eqref{gn0}. It should also be noted that \eqref{gn1} can also be written as \begin{align}\label{gn2} \|\nabla^j\mathbf{u}\|_{\mathbb{L}^r}\leq C\|\mathbf{u}\|_{\mathbb{W}^{m,p}}^{\theta}\|\mathbf{u}\|_{\mathbb{L}^q}^{1-\theta}. \end{align} By taking $j=1$, $r=4$, $n=m=p=q=s=2$ in (\ref{theta}), we get $\theta=\frac{3}{4},$ and from \eqref{gn1} we get \begin{align}\label{gu} \|\nabla\mathbf{u}\|_{\mathbb{L}^4}\leq C\mathopen{}\mathclose\bgroup\originalleft(\|\Delta\mathbf{u}\|^{3/4}\|\mathbf{u}\|^{1/4}+\|\mathbf{u}\|\aftergroup\egroup\originalright).
\end{align} Also, taking $j=1$, $r=4$, $n=3$, $m=p=q=s=2$ in (\ref{theta}), we get $\theta=\frac{7}{8},$ and \begin{align}\label{gua} \|\nabla\mathbf{u}\|_{\mathbb{L}^4}\leq C\mathopen{}\mathclose\bgroup\originalleft(\|\Delta\mathbf{u}\|^{7/8}\|\mathbf{u}\|^{1/8}+\|\mathbf{u}\|\aftergroup\egroup\originalright). \end{align} \begin{lemma} [Poincar\'e-Wirtinger inequality, Corollary 12.28, \cite{MR3726909}]\label{poin} Assume that $1 \leq p < \infty$ and that $\Omega$ is a bounded connected open subset of the $n$-dimensional Euclidean space $\mathbb{R}^n$ whose boundary is of class $\mathrm{C}$. Then there exists a constant $C_{\Omega,p}>0$, such that for every function $\phi \in \mathrm{W}^{1,p}(\Omega)$, $$\|\phi-\overline{\phi}\|_{\mathrm{L}^{p}(\Omega )}\leq C_{\Omega,p}\|\nabla \phi\|_{\mathrm{L}^{p}(\Omega )},$$ where $\overline{\phi}={\frac {1}{|\Omega |}}\int _{\Omega }\phi(y)\,\mathrm {d} y$ is the average value of $\phi$ over $\Omega$. \end{lemma} \subsection{Basic assumptions} Let us now make the following assumptions on $\mathrm{J}$ and $\mathrm{F}$ in order to establish the solvability results of the system \eqref{steadysys}. We suppose that the potential $\mathrm{F}$ can be written in the following form $$\mathrm{F}= \mathrm{F}_1 + \mathrm{F}_2$$ where $\mathrm{F}_1 \in \mathrm{C}^{(2+2q)}(-1,1)$ with $q \in \mathbb{N}$ fixed, and $\mathrm{F}_2 \in \mathrm{C}^2(-1,1)$. Now, we list the assumptions on $\nu, \mathrm{J}, \mathrm{F}_1, \mathrm{F}_2 $ and mobility $m$ (cf. \cite{MR3019479}). \begin{itemize} \item[(A1)] $ \mathrm{J} \in \mathrm{W}^{1,1}(\mathbb{R}^2;\mathbb{R}), \ \mathrm{J}(x)= \mathrm{J}(-x) \; \text {and} \ a(x) = \int_\Omega \mathrm{J}(x-y)\/\mathrm{d}\/ y \geq 0,$ a.e., in $\Omega$. 
\item[(A2)] The function $\nu$ is locally Lipschitz on $\mathbb{R}$ and there exist $\nu_1, \nu_2 >0$ such that $$ \nu_1 \leq \nu(s) \leq \nu_2, \ \text{ for all }\ s \in \mathbb{R}.$$ \item[(A3)] There exist $C_1>0$ and $\epsilon_0 >0$ such that $$\mathrm{F}_1^{(2+2q)} (s) \geq C_1, \ \text{ for all } \ s \in (-1,-1+\epsilon_0] \cup [1-\epsilon_0,1).$$ \item[(A4)] There exists $\epsilon_0 >0$ such that, for each $k=0,1,\ldots,2+2q$ and each $j=0,1,\ldots,q$, $$ \mathrm{F}_1^{(k)}(s) >0, \ \text{ for all }\ s \in [1-\epsilon_0,1),$$ $$ \mathrm{F}_1^{(2j+2)}(s) \geq 0, \ \mathrm{F}_1^{(2j+1)}(s) \leq 0, \ \text{ for all }\ s \in (-1,-1+\epsilon_0].$$ \item[(A5)] There exists $\epsilon_0 >0$ such that $\mathrm{F}_1^{(2+2q)}$ is non-decreasing in $[1-\epsilon_0,1)$ and non-increasing in $(-1,-1+\epsilon_0]$. \item[(A6)] There exist $\alpha , \beta \in \mathbb{R}$ with $\alpha + \beta > - \min\limits_{[-1,1]} \mathrm{F}''_2$ such that $$ \mathrm{F}_1''(s) \geq \alpha, \ \text{ for all }\ s \in (-1,1), \quad a(x) \geq \beta, \ \text{ a.e. }\ x \in \Omega.$$ \item[(A7)] $\lim\limits_{s \rightarrow -1} \mathrm{F}_1'(s) = -\infty$ and $\lim\limits_{s \rightarrow 1} \mathrm{F}_1'(s) = \infty$. \item[(A8)] The function $m$ is locally Lipschitz on $\mathbb{R}$ and there exist $m_1 , m_2 >0$ such that \begin{align*} m_1 \leq m(s) \leq m_2 , \ \text{ for all }\ s \in \mathbb{R}. \end{align*} \end{itemize} \begin{remark} \cite{MR3000606} We can represent the potential $\mathrm{F}$ as a quadratic perturbation of a convex function. That is, \begin{align}\label{decomp of F} \mathrm{F}(s)= \mathrm{G} (s) - \frac{\kappa}{2} s^2, \end{align} where $\mathrm{G}$ is strictly convex and $\kappa>0$. \end{remark} We further assume that \begin{itemize} \item[(A9)] there exists $C_0 > 0$ such that $\mathrm{F}''(s) + a(x) \geq C_0$, for all $s \in (-1,1)$ and a.e. $x \in \Omega$, and $ \|\mathrm{J} \|_{\mathrm{L}^1} \leq {C_0} + \kappa$.
\end{itemize} \begin{remark}\label{remark J} The assumption $\mathrm{J} \in \mathrm{W}^{1,1}(\mathbb{R}^2;\mathbb{R})$ can be weakened. Indeed, it can be replaced by $\mathrm{J} \in \mathrm{W}^{1,1}(\mathrm{B}_\delta;\mathbb{R})$, where $\mathrm{B}_\delta := \{z \in \mathbb{R}^2 : |z| < \delta \}$ with $\delta := \mathrm{diam}(\Omega)=\sup\limits_{x,y\in \Omega}d(x,y)$, where $d(\cdot,\cdot)$ is the Euclidean metric on $\mathbb{R}^2$, or also by \begin{eqnarray} \label{Estimate J} \sup_{x\in \Omega} \int_\Omega \left( |\mathrm{J}(x-y)| + |\nabla \mathrm{J}(x-y)| \right) \/\mathrm{d}\/ y < +\infty. \end{eqnarray} \end{remark} Note that \eqref{Estimate J} states precisely that $\sup\limits_{x \in \Omega} \|\mathrm{J}(x-\cdot)\|_{\mathrm{W}^{1,1}(\Omega)} $ is finite, which is the quantity actually used in the sequel, and it clearly holds under assumption (A1). \begin{remark}\label{remark F} Assumptions (A3)-(A6) are satisfied in the case of the physically relevant logarithmic double-well potential \eqref{2}, for any fixed positive integer $q$. In particular, setting \begin{align*} \mathrm{F}_1 (s)= \frac{\theta}{2}((1+s)\ln(1+s)+(1-s)\ln(1-s)), \qquad \mathrm{F}_2(s) =- \frac{\theta_cs^2}{2}, \end{align*} Assumption (A6) is satisfied if and only if $\beta > \theta_c - \theta$, since $\mathrm{F}_1''(s)=\frac{\theta}{1-s^2}\geq \theta$ on $(-1,1)$ and $\min\limits_{[-1,1]} \mathrm{F}_2''=-\theta_c$. \end{remark} \section{Existence of Weak Solution}\label{sec3}\setcounter{equation}{0} In this section, we establish the existence of a weak solution to the system \eqref{steadysys} using pseudo-monotonicity arguments and Browder's theorem. Let us first give the definition of a \emph{weak solution} of the system \eqref{steadysys}. \begin{definition} \label{weakdef} Let $\mathbf{h} \in \mathbb{V}'_{{\mathrm{div}}}$ and $k\in (-1,1)$ be fixed.
A triple $(\mathbf{u},\mu , \varphi) \in \mathbb{V}_{{\mathrm{div}}} \times \mathrm{V} \times (\mathrm{V} \cap \mathrm{L}^2_{(k)}(\Omega) ) $ is called a \emph{weak solution} of the problem \eqref{steadysys} if \begin{align} \label{weakphi} \int_\Omega (\mathbf{u} \cdot \nabla \varphi) \psi \, \/\mathrm{d}\/ x =& -\int_\Omega m(\varphi) \nabla \mu \cdot \nabla \psi \, \/\mathrm{d}\/ x, \\ \label{weakmu} \int_\Omega \mu \psi \, \/\mathrm{d}\/ x=& \int_\Omega (a \varphi -\mathrm{J} * \varphi) \psi \, \/\mathrm{d}\/ x +\int_\Omega \mathrm{F}'(\varphi) \psi \, \/\mathrm{d}\/ x, \\ \label{weakform nse} \int_\Omega (\mathbf{u} \cdot \nabla) \mathbf{u} \cdot \mathbf{v} \, \/\mathrm{d}\/ x+ \int_\Omega 2\nu (\varphi) \mathrm{D}\mathbf{u} \cdot \mathrm{D}\mathbf{v}\, \/\mathrm{d}\/ x =& \int_\Omega\mu \nabla \varphi \cdot \mathbf{v} \, \/\mathrm{d}\/ x + \langle \mathbf{h} , \mathbf{v} \rangle, \end{align} for every $\psi \in \mathrm{V}$ and $\mathbf{v} \in \mathbb{V}_{{\mathrm{div}}}$. \end{definition} Our aim is to establish the existence of a weak solution of the system \eqref{steadysys} in the sense of Definition \ref{weakdef}. However, some difficulties arise when working with the above definition. The most important one is that $\mathrm{L}^2_{(k)}(\Omega)$ is not a vector space for $k\neq 0$. Nevertheless, we can assume that $k=0$ without loss of generality: otherwise, replace $\varphi$ by $\widetilde{\varphi}:= \varphi - k$ and $\mathrm{F}$ by $\mathrm{F}_k$ with $\mathrm{F}_k(x) := \mathrm{F}(x+k)$ for all $x \in \mathbb{R}$. Thus, in order to establish the existence of a weak solution of the system \eqref{steadysys}, we first reformulate the problem \eqref{weakphi}-\eqref{weakform nse}. We prove the existence of a solution to the reformulated problem \eqref{reformphi}-\eqref{nsreform} (see below) instead of \eqref{weakphi}-\eqref{weakform nse}. We establish the equivalence of these two problems in Lemma \ref{reformtoweak}.
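The reformulation below rests on an elementary observation which we record here; it uses only the symmetry of $\mathrm{J}$ assumed in (A1) and the fact that constant functions belong to $\mathrm{V}$. Since $\mathrm{J}(x-y)=\mathrm{J}(y-x)$, interchanging the roles of $x$ and $y$ in the second integral gives \begin{align*} \int_\Omega (a \varphi -\mathrm{J} * \varphi) \/\mathrm{d}\/ x = \int_\Omega \int_\Omega \mathrm{J}(x-y) \varphi(x) \/\mathrm{d}\/ y \/\mathrm{d}\/ x - \int_\Omega \int_\Omega \mathrm{J}(x-y) \varphi(y) \/\mathrm{d}\/ y \/\mathrm{d}\/ x = 0. \end{align*} Hence, choosing $\psi \equiv 1$ in \eqref{weakmu} yields $\int_\Omega \mu \/\mathrm{d}\/ x = \int_\Omega \mathrm{F}'(\varphi) \/\mathrm{d}\/ x$, so that $\mu - \frac{1}{|\Omega|}\int_\Omega \mathrm{F}'(\varphi)\/\mathrm{d}\/ x$ has zero mean value.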
We replace $\mu$ by $\mu_0$, which has zero mean value; this will help in proving the coercivity of an operator later in this section. Let us set $ \mu_0 =\mu - \frac{1}{|\Omega |} \int_\Omega \mathrm{F}'(\varphi)\/\mathrm{d}\/ x$. Then the reformulated problem associated with \eqref{weakphi}-\eqref{weakform nse} is given by \begin{align} \label{reformphi} \int_\Omega (\mathbf{u} \cdot \nabla \varphi) \psi \,\/\mathrm{d}\/ x =& -\int_\Omega m(\varphi) \nabla \mu_0 \cdot \nabla \psi\, \/\mathrm{d}\/ x, \\ \label{mu-reform} \int_\Omega \mu_0 \psi \,\/\mathrm{d}\/ x=& \int_\Omega (a \varphi -\mathrm{J} * \varphi) \psi \,\/\mathrm{d}\/ x +\int_\Omega \mathrm{P}_0 (\mathrm{F}'(\varphi)) \psi \,\/\mathrm{d}\/ x, \\ \label{nsreform} \int_\Omega (\mathbf{u} \cdot \nabla)\mathbf{u} \cdot\mathbf{v} \,\/\mathrm{d}\/ x+ \int_\Omega 2\nu (\varphi) \mathrm{D}\mathbf{u} \cdot \mathrm{D}\mathbf{v}\, \/\mathrm{d}\/ x =& \int_\Omega\mu_0 \nabla \varphi \cdot \mathbf{v} \,\/\mathrm{d}\/ x + \langle \mathbf{h} , \mathbf{v} \rangle, \end{align} where $\mathbf{u} \in \mathbb{V}_{\mathrm{div}} , \mu_0 \in \mathrm{V}_0, \varphi \in \mathrm{V}_0$. Now we show that proving the existence of a solution to the equations \eqref{reformphi}-\eqref{nsreform} also gives a solution to \eqref{weakphi}-\eqref{weakform nse}. \begin{lemma}\label{reformtoweak} Let $(\mathbf{u},\mu_0,\varphi) \in\mathbb{V}_{{\mathrm{div}}}\times \mathrm{V}_0 \times \mathrm{V}_0$ be a solution to the system \eqref{reformphi}-\eqref{nsreform}. Then $(\mathbf{u},\mu,\varphi)$ is a solution to the weak formulation \eqref{weakphi}-\eqref{weakform nse}, where $ \mu := \mu_0 + \frac{1}{|\Omega |} \int_\Omega \mathrm{F}'(\varphi)\/\mathrm{d}\/ x$. \end{lemma} \begin{proof} Let $(\mathbf{u},\mu_0, \varphi) \in \mathbb{V}_{\mathrm{div}}\times \mathrm{V}_0 \times \mathrm{V}_0$ be a solution of the system \eqref{reformphi}-\eqref{nsreform}. Let $\overline{\mu}= \frac{1}{|\Omega|} \int_\Omega \mathrm{F}'(\varphi)\/\mathrm{d}\/ x$.
Since $\overline{\mu}$ is a constant and $\mathbf{v}\in\mathbb{V}_{\mathrm{div}}$ is divergence free, an integration by parts yields $ \int_\Omega \overline{\mu} \nabla \varphi \cdot \mathbf{v} \/\mathrm{d}\/ x=0 $. Then, we have \begin{align*} \int_\Omega (\mathbf{u} \cdot \nabla) \mathbf{u} \cdot \mathbf{v} \/\mathrm{d}\/ x+ \int_\Omega 2\nu (\varphi) \mathrm{D}\mathbf{u} \cdot \mathrm{D}\mathbf{v} \/\mathrm{d}\/ x =& \int_\Omega\mu_0 \nabla \varphi \cdot \mathbf{v} \/\mathrm{d}\/ x + \langle \mathbf{h} , \mathbf{v} \rangle+ \int_\Omega \overline{\mu} \nabla \varphi \cdot \mathbf{v} \/\mathrm{d}\/ x \\ =& \int_\Omega\mu \nabla \varphi \cdot \mathbf{v} \/\mathrm{d}\/ x + \langle \mathbf{h} , \mathbf{v} \rangle , \end{align*} which gives the equation \eqref{weakform nse}. Moreover, since $\mu$ and $\mu_0$ differ by the constant $\overline{\mu}$, we have $\nabla \mu = \nabla \mu_0$, and hence \eqref{weakphi} follows from \eqref{reformphi}. It remains to prove \eqref{weakmu}. From \eqref{mu-reform}, we have \begin{align} \label{3.3} \int_\Omega \mu_0 \psi \/\mathrm{d}\/ x=& \int_\Omega (a \varphi -\mathrm{J} * \varphi) \psi \/\mathrm{d}\/ x +\int_\Omega \mathrm{P}_0 (\mathrm{F}'(\varphi)) \psi \/\mathrm{d}\/ x. \end{align} Using \eqref{P0} and substituting the value of $\mu_0$ in \eqref{3.3}, we get \begin{align*} \int_\Omega \mu \psi \/\mathrm{d}\/ x=& \int_\Omega (a \varphi -\mathrm{J} * \varphi) \psi \/\mathrm{d}\/ x +\int_\Omega (\mathrm{P}_0 (\mathrm{F}'(\varphi))+ \overline{\mu} )\psi \/\mathrm{d}\/ x \\ =& \int_\Omega (a \varphi -\mathrm{J} * \varphi) \psi \/\mathrm{d}\/ x +\int_\Omega \mathrm{F}'(\varphi)\psi \/\mathrm{d}\/ x, \end{align*} which completes the proof. \end{proof} \subsection{Preliminaries} In order to formulate the problem \eqref{reformphi}-\eqref{nsreform} in the framework of Browder's theorem (see Theorem \ref{Browder} below), we need some preliminaries, which we state below. Let $\mathbb{X}$ be a Banach space and $\mathbb{X}'$ be its topological dual.
Let $\mathrm{T}$ be a function from $\mathbb{X}$ to $\mathbb{X}'$ with domain $\mathrm{D}=\mathrm{D}(\mathrm{T}) \subseteq \mathbb{X}.$ \begin{definition}[Definition 2.3, \cite{MR3014456}] The function $\mathrm{T}$ is said to be \begin{itemize} \item[(i)] \emph{demicontinuous} if for every sequence $u_k \in \mathrm{D}$ and $u \in \mathrm{D}$ with $u_k \rightarrow u$ in $\mathbb{X}$, we have $\mathrm{T}(u_k) \rightharpoonup \mathrm{T}(u)$ in $\mathbb{X}'$, \item[(ii)] \emph{hemicontinuous} if for every $u \in \mathrm{D}, v \in \mathbb{X}$ and for every sequence of positive real numbers $t_k$ such that $u+t_k v \in \mathrm{D},$ it holds that $t_k \rightarrow 0$ implies $\mathrm{T}(u+t_k v) \rightharpoonup \mathrm{T}(u)$ in $\mathbb{X}'$, \item[(iii)] \emph{locally bounded} if for every sequence $u_k \in \mathrm{D}$ and $u \in \mathrm{D}$ with $u_k \rightarrow u $ in $\mathbb{X}$, the sequence $\mathrm{T}(u_k)$ is bounded in $\mathbb{X}'$. \end{itemize} \end{definition} From the above definition, it is clear that a demicontinuous function is hemicontinuous and locally bounded. \begin{definition}[Definition 2.1 (iv), \cite{MR3014456}] We say that $\mathrm{T}$ is \emph{pseudo-monotone} if, for every sequence $u_k$ in $\mathbb{X}$ such that $u_k \rightharpoonup u$ in $\mathbb{X}$ and \begin{equation*} \limsup_{k \rightarrow \infty} \,_{\mathbb{X}'}\langle \mathrm{T}(u_k), u_k - u\rangle_{\mathbb{X}} \leq 0, \end{equation*} it holds that \begin{equation*} \liminf_{k \rightarrow \infty} \,_{\mathbb{X}'}\langle \mathrm{T}(u_k), u_k-v \rangle_{\mathbb{X}} \geq \,_{\mathbb{X}'}\langle \mathrm{T}(u),u-v \rangle_{\mathbb{X}}, \end{equation*} for every $v \in \mathbb{X}$.
Moreover, $\mathrm{T}$ is said to be \emph{monotone} if $$ \,_{\mathbb{X}'}\langle \mathrm{T}(u)-\mathrm{T}(v),u-v\rangle_{\mathbb{X}} \geq 0, \ \text{ for every }\ u ,v\in \mathrm{D}.$$ \end{definition} \begin{definition} A mapping $\mathrm{T}: \mathbb{X} \rightarrow \mathbb{X}'$ is said to be \emph{maximal monotone} if it is monotone and its graph \begin{align*} \mathrm{G}(\mathrm{T})= \left\{ (u,w) : w \in \mathrm{T}(u) \right\} \subset \mathbb{X} \times \mathbb{X}' \end{align*} is not properly contained in the graph of any other monotone operator. In other words, for $u\in\mathbb{X}$ and $w\in\mathbb{X}'$, the inequality $\,_{\mathbb{X}'}\langle w-\mathrm{T}(v),u-v\rangle_{\mathbb{X}}\geq 0$, for all $v\in\mathbb{X}$, implies $w=\mathrm{T}(u)$. \end{definition} \begin{definition}[Definition 2.3, \cite{MR3014456}] Let $\mathbb{X}$ and $\mathbb{Y}$ be Banach spaces. A bounded linear operator $\mathrm{T}:\mathbb{X} \rightarrow \mathbb{Y}$ is said to be a \emph{completely continuous operator} if $u_k \rightharpoonup u$ in $\mathbb{X}$ implies $\mathrm{T}(u_k) \rightarrow \mathrm{T}(u)$ in $\mathbb{Y}$. \end{definition} Note that complete continuity implies pseudo-monotonicity (Corollary 2.12, \cite{MR3014456}). \begin{lemma}[Lemma 5.1, \cite{MR3524178} ] \label{pseudo-monotone} Let $\mathbb{X}$ be a real, reflexive Banach space and $\widetilde{\mathrm{T}} : \mathbb{X} \times \mathbb{X} \rightarrow \mathbb{X}'$ be such that for all $u \in \mathbb{X}$: \begin{enumerate} \item $\widetilde{\mathrm{T}} (u, \cdot): \mathbb{X} \rightarrow \mathbb{X}'$ is monotone and hemicontinuous. \item $\widetilde{\mathrm{T}} ( \cdot,u): \mathbb{X} \rightarrow \mathbb{X}'$ is completely continuous. \end{enumerate} Then the operator $\mathrm{T}:\mathbb{X} \rightarrow \mathbb{X}'$ defined by $\mathrm{T}(u) := \widetilde{\mathrm{T}}(u,u)$ is pseudo-monotone.
\end{lemma} \begin{definition} Let $\mathbb{X}$ be a real Banach space and $f:\mathbb{X} \rightarrow (-\infty, \infty]$ be a functional on $\mathbb{X}$. A linear functional $g \in \mathbb{X}'$ is called a \emph{subgradient} of $f$ at $u$ if $f(u) \neq \infty$ and \begin{align*} f(v) \geq f(u) + \!_{\mathbb{X}'}\langle g, v-u \rangle_{\mathbb{X}}, \end{align*} holds for all $v \in \mathbb{X}$. \end{definition} A subgradient of a functional need not be unique. The set of all subgradients of $f$ at $u$ is called the \emph{subdifferential} of $f$ at $u$ and is denoted by $\partial f (u)$. We say that $f$ is \emph{G\^ateaux differentiable} at $u$ in $\mathbb{X}$ if $\partial f(u)$ consists of exactly one element (see \cite{MR1195128}). \begin{lemma}[Theorem A, \cite{MR262827}] \label{rockfellar} If $f$ is a lower semicontinuous, proper convex function on $\mathbb{X}$ (that is, $f$ is a convex function taking values in the extended real number line such that $f(u)<+\infty $ for at least one $u\in\mathbb{X}$ and $f(u)>-\infty $ for every $u\in\mathbb{X}$), then $\partial f$ is a maximal monotone operator from $\mathbb{X}$ to $\mathbb{X}'$. \end{lemma} Now we state Browder's theorem, which we use to prove the existence of a solution to the problem \eqref{reformphi}-\eqref{nsreform}. \begin{theorem}[Theorem 32.A. in \cite{MR1033498}, Browder] \label{Browder} \begin{enumerate} \item Let $\mathbb{Y}$ be a non-empty, closed and convex subset of a real and reflexive Banach space $\mathbb{X}$. \item Let $\mathrm{T}: \mathbb{Y} \rightarrow \mathcal{P}(\mathbb{X}')$ be a maximal monotone operator, where $\mathcal{P}(\mathbb{X}')$ denotes the power set of $\mathbb{X}'$. \item Let $\mathrm{S}: \mathbb{Y} \rightarrow \mathbb{X}'$ be a pseudo-monotone, bounded and demicontinuous mapping.
\item If the set $\mathbb{Y}$ is unbounded, then assume that the operator $\mathrm{S}$ is $\mathrm{T}$-coercive with respect to the fixed element $b \in \mathbb{X}'$, that is, there exist an element $u_0 \in \mathbb{Y} \cap \mathrm{D} (\mathrm{T})$ and $R>0$ such that \begin{align*} \!_{\mathbb{X}'}\langle \mathrm{S}(u), u-u_0 \rangle_{\mathbb{X}} > \,_{\mathbb{X}'}\langle b, u-u_0 \rangle_{\mathbb{X}}, \end{align*} for all $u \in \mathbb{Y}$ with $\| u\|_{\mathbb{X}}>R$. \end{enumerate}Then the problem \begin{align} b\in \mathrm{T} (u) + \mathrm{S}(u) \end{align} has a solution $u \in \mathbb{Y} \cap \mathrm{D}(\mathrm{T})$. \end{theorem} \subsection{The functional $f$} We mainly follow the work of \cite{MR3524178} (on the local Cahn-Hilliard-Navier-Stokes equations) to establish the solvability results for the system \eqref{steadysys}. Before we proceed to prove our main result, we first consider the following functional and study its properties. Let us define \begin{align} \label{functional} f(\varphi):= \frac{1}{4}\int_\Omega \int_\Omega \mathrm{J}(x-y)(\varphi(x)-\varphi(y))^2\/\mathrm{d}\/ x \/\mathrm{d}\/ y + \int_\Omega \mathrm{G}(\varphi(x))\/\mathrm{d}\/ x, \end{align} where $\varphi \in \mathrm{L}^2_{(0)}(\Omega)$ and $\mathrm{G}$ is given in \eqref{decomp of F}. In view of the decomposition \eqref{decomp of F}, we extend $\mathrm{G}$ by setting $\mathrm{G}(x)= +\infty$ for $x \notin [-1, 1]$. The domain of $f$ is given by \begin{align} \label{domain0f_f} \mathrm{D}(f) = \left\{ \varphi \in \mathrm{L}^2_{(0)}(\Omega) : \mathrm{G}(\varphi) \in \mathrm{L}^1(\Omega) \right\}. \end{align} For $\varphi \notin \mathrm{D}(f)$, we set $f(\varphi)= +\infty$. Note that $ \mathrm{D}(f) \neq \varnothing$. Given a functional $f:\mathrm{L}^2_{(0)}(\Omega) \rightarrow (-\infty,\infty]$, its subgradient maps from $\mathrm{L}^2_{(0)}(\Omega)$ to $ \mathcal{P}((\mathrm{L}^2_{(0)}(\Omega))')$.
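To see that $\mathrm{D}(f) \neq \varnothing$, one can simply test with the zero function; the following one-line computation, which only uses that $\mathrm{G}(0)=\mathrm{F}(0)$ is finite (cf. \eqref{decomp of F}), verifies this: \begin{align*} f(0)= \frac{1}{4}\int_\Omega \int_\Omega \mathrm{J}(x-y)\,(0-0)^2\/\mathrm{d}\/ x \/\mathrm{d}\/ y + \int_\Omega \mathrm{G}(0)\/\mathrm{d}\/ x = |\Omega|\, \mathrm{G}(0) < \infty. \end{align*}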
We write $\partial_{\mathrm{L}^2_{(0)}}f$ for the subgradient of the functional $f:\mathrm{L}^2_{(0)}(\Omega) \rightarrow (-\infty,\infty]$. Since $\mathrm{V}_0 \hookrightarrow\mathrm{L}^2_{(0)}(\Omega)$, we can also consider $f$ as a functional from $\mathrm{V}_0$ to $(-\infty,\infty]$, and hence we have to distinguish between its different subgradients. If we consider $f:\mathrm{V}_0 \rightarrow (-\infty,\infty]$, then the subgradient of $f$ is denoted by $\partial_{\mathrm{V}_0} f$. \begin{proposition} \label{gatuexderivative} Let $f$ be defined as in \eqref{functional}. Then $f$ is G\^ateaux differentiable on $\mathrm{L}^2_{(0)} (\Omega)$ and we have \begin{align}\label{sub} \partial_{\mathrm{L}^2_{(0)}} f(\varphi) = a \varphi -\mathrm{J}*\varphi + \mathrm{P}_0 \mathrm{G}'(\varphi), \end{align} and \begin{align*} \mathrm{D}(\partial_{\mathrm{L}^2_{(0)}}f) = \left\{ \varphi \in \mathrm{L}^2_{(0)}(\Omega) : \mathrm{G}'(\varphi) \in \mathrm{H} \right\}. \end{align*} Furthermore, it holds that \begin{align}\label{3.10} \|\partial_{\mathrm{L}^2_{(0)}}f(\varphi)\|\leq \left(\|a\|_{\mathrm{L}^{\infty}}+\|\mathrm{J}\|_{\mathrm{L}^1}\right)\|\varphi\|+\|\mathrm{G}'(\varphi)\|\leq 2a^*\|\varphi\|+\|\mathrm{G}'(\varphi)\|. \end{align} \end{proposition} \begin{proof} Let $h \in \mathrm{L}^2_{(0)}(\Omega)$. Then, we have \begin{align*} & \frac{f(\varphi + \epsilon h)-f(\varphi)}{\epsilon} \nonumber\\&= \frac{1}{\epsilon} \left[\frac{1}{4}\int_\Omega \int_\Omega \mathrm{J}(x-y)((\varphi+ \epsilon h)(x)-(\varphi+\epsilon h)(y))^2\/\mathrm{d}\/ x \/\mathrm{d}\/ y + \int_\Omega \mathrm{G}((\varphi+\epsilon h)(x))\/\mathrm{d}\/ x \right. \\ & \hspace{2cm}\left.
-\left(\frac{1}{4}\int_\Omega \int_\Omega \mathrm{J}(x-y)(\varphi(x)-\varphi(y))^2\/\mathrm{d}\/ x \/\mathrm{d}\/ y + \int_\Omega \mathrm{G}(\varphi(x))\/\mathrm{d}\/ x \right) \right] \\ &= \frac{1}{4}\int_\Omega \int_\Omega \mathrm{J}(x-y)(2(\varphi(x)-\varphi(y))(h(x)-h(y)) + \epsilon (h(x)-h(y))^2)\/\mathrm{d}\/ x \/\mathrm{d}\/ y \\ & \hspace{1cm}+ \frac{1}{\epsilon} \int_\Omega (\mathrm{G}((\varphi+\epsilon h)(x)) - \mathrm{G}(\varphi(x)) )\/\mathrm{d}\/ x . \end{align*} Hence \begin{align} \lim_{\epsilon \rightarrow 0 }\frac{f(\varphi + \epsilon h)-f(\varphi)}{\epsilon} = (a \varphi- \mathrm{J}*\varphi + \mathrm{G}'(\varphi), h). \end{align} Since $\mathrm{P}_0$ is the orthogonal projection onto $\mathrm{L}^2_{(0)}(\Omega)$ and $h \in \mathrm{L}^2_{(0)}(\Omega)$, we get \eqref{sub}. Note that \eqref{3.10} is an immediate consequence of \eqref{sub}. \end{proof} \begin{lemma} \label{lowersemi} The functional $f: \mathrm{L}^2_{(0)}(\Omega) \rightarrow (-\infty,\infty]$ defined by \eqref{functional} is proper convex and lower semicontinuous with $\mathrm{D}(f) \neq \varnothing$. \end{lemma} \begin{proof} \textbf{Claim (1).} \emph{$f$ is proper convex with $\mathrm{D}(f) \neq \varnothing$:} In order to prove convexity, we use the fact that $f$ is convex if and only if $$(\partial_{\mathrm{L}^2_{(0)}} f(\varphi)-\partial_{\mathrm{L}^2_{(0)}} f(\psi) , \varphi - \psi) \geq 0,$$ for all $\varphi,\psi\in\mathrm{L}^2_{(0)}(\Omega)$.
Observe that, by the mean value theorem, for a.e. $x \in \Omega$ there exists $\theta=\theta(x)\in(0,1)$ such that $\mathrm{F}'(\varphi)- \mathrm{F}'(\psi) = \mathrm{F}''(\psi + \theta (\varphi-\psi))(\varphi- \psi)$. Using this and the definition of $\mathrm{P}_0$, we find that \begin{align*} &\hspace*{-0.3 cm}(a \varphi - \mathrm{J}*\varphi+ \mathrm{P}_0 \mathrm{G}'(\varphi) -(a \psi - \mathrm{J}*\psi+ \mathrm{P}_0 \mathrm{G}'(\psi) ), \varphi-\psi) \\ &=(a \varphi - \mathrm{J}*\varphi+ \mathrm{F}'(\varphi) +\kappa \varphi-(a \psi - \mathrm{J}*\psi+ \mathrm{F}'(\psi)+\kappa\psi ), \varphi-\psi) \\ &= (a(\varphi-\psi)-\mathrm{J}*(\varphi-\psi)+ \mathrm{F}'(\varphi)- \mathrm{F}'(\psi)+ \kappa(\varphi- \psi) ,\varphi-\psi) \\ &= (a(\varphi-\psi)-\mathrm{J}*(\varphi-\psi)+ \mathrm{F}''(\psi + \theta (\varphi-\psi))(\varphi- \psi)+ \kappa(\varphi- \psi) ,\varphi-\psi) \\ &\geq C_0 \| \varphi- \psi\|^2 - \|\mathrm{J}\|_{\mathrm{L}^1} \| \varphi- \psi\|^2 + \kappa \| \varphi- \psi\|^2 \\ &\geq (C_0 + \kappa - \|\mathrm{J}\|_{\mathrm{L}^1})\| \varphi- \psi\|^2 \\ &\geq 0, \end{align*} where we used (A9) and Young's inequality for convolutions. Hence $f$ is a convex functional on $\mathrm{L}^2_{(0)}(\Omega) $. Since the domain of $f$ is $\mathrm{D}(f) = \left\{ \varphi \in \mathrm{L}^2_{(0)}(\Omega) : \mathrm{G}(\varphi) \in \mathrm{L}^1(\Omega) \right\} \neq \varnothing,$ it is immediate that $f$ is proper convex. \textbf{Claim (2).} \emph{$f$ is lower semicontinuous:} Let $\varphi_k \in \mathrm{L}^2_{(0)}(\Omega)$ and $\varphi_k \rightarrow \varphi $ in $\mathrm{L}^2_{(0)}(\Omega)$ as $k \rightarrow \infty$. Our aim is to establish that $ f(\varphi)\leq \liminf\limits_{k \rightarrow \infty} f(\varphi_k).$ It is enough to consider the case $\liminf\limits_{k \rightarrow \infty} f(\varphi_k) < +\infty$. Thus, passing to a subsequence if necessary, we can assume that $f(\varphi_k) \leq \mathrm{M}$ for some constant $\mathrm{M}$. This implies that $\varphi_k \in \mathrm{D}(f)$, for all $k\in\mathbb{N}$.
Since $\varphi_k \rightarrow \varphi$ in $\mathrm{L}^2_{(0)}(\Omega)$, up to a subsequence (not relabeled), $\varphi_k \rightarrow \varphi$ a.e. in $\Omega$. Since $\mathrm{G}:[-1,1] \rightarrow \mathbb{R}$ is a continuous function, by adding a suitable constant we can assume without loss of generality that $\mathrm{G} \geq 0$, and by continuity we have \begin{align*} \mathrm{G}(\varphi) \leq \liminf_{k \rightarrow \infty} \mathrm{G}(\varphi_k), \ \text{ a.e. in } \Omega, \end{align*} which gives from Fatou's lemma that \begin{align} \label{lowerG} \int_\Omega \mathrm{G}(\varphi(x)) \/\mathrm{d}\/ x \leq \liminf_{k \rightarrow \infty} \int_\Omega \mathrm{G}(\varphi_k (x)) \/\mathrm{d}\/ x . \end{align} Now consider the functional $\mathrm{I}(\cdot)$ defined by \begin{align*} \mathrm{I}(\varphi) :=\int_\Omega \int_\Omega \mathrm{J}(x-y)(\varphi(x)-\varphi(y))^2\/\mathrm{d}\/ x \/\mathrm{d}\/ y = 2\left[(a \varphi, \varphi) -(\mathrm{J}*\varphi, \varphi)\right], \end{align*} where the second equality follows by expanding the square and using the symmetry of $\mathrm{J}$. We show that the functional $\mathrm{I}(\cdot)$ is continuous. We consider \begin{align*} \frac{1}{2}|\mathrm{I}(\varphi_k)-\mathrm{I}(\varphi)|=&|(a \varphi_k, \varphi_k)-(\mathrm{J}*\varphi_k, \varphi_k) - (a \varphi, \varphi) +(\mathrm{J}*\varphi, \varphi)| \\ \leq & |(a(\varphi_k- \varphi), \varphi_k )|+|(a \varphi, \varphi_k-\varphi)| + |(\mathrm{J}*(\varphi_k-\varphi), \varphi_k)| + |(\mathrm{J}*\varphi,\varphi_k-\varphi)|\\ \leq & \|a\|_{\mathrm{L}^\infty} \,(\|\varphi_k\| + \| \varphi\|) \|\varphi_k-\varphi\| + \|\mathrm{J}\|_{\mathrm{L}^1}(\|\varphi_k\| + \| \varphi\|) \|\varphi_k-\varphi\|, \end{align*} where we used the Cauchy-Schwarz inequality and Young's inequality for convolutions. Then, we have $|\mathrm{I}(\varphi_k)-\mathrm{I}(\varphi)|\to 0 \ \ \text{as } k \rightarrow \infty $, since $\varphi_k \rightarrow \varphi $ in $\mathrm{L}^2_{(0)}(\Omega)$ as $k \rightarrow \infty$. Since continuity implies lower semicontinuity, we have \begin{align} \label{lowerJ} \int_\Omega \int_\Omega \mathrm{J}(x-y)(\varphi(x)-\varphi(y))^2\/\mathrm{d}\/ x \/\mathrm{d}\/ y =\liminf_{k \rightarrow \infty} \int_\Omega \int_\Omega \mathrm{J}(x-y)(\varphi_k(x)-\varphi_k(y))^2\/\mathrm{d}\/ x \/\mathrm{d}\/ y.
\end{align} Combining \eqref{lowerG} and \eqref{lowerJ}, we get \begin{align*} f(\varphi)\leq \liminf_{k \rightarrow \infty} f(\varphi_k). \end{align*} This proves that $f$ is lower semicontinuous. \end{proof} \begin{remark} \label{lower_H1} The proper convexity of $f: \mathrm{V}_0 \rightarrow (-\infty,\infty]$ is immediate, since $\mathrm{V}_0 \hookrightarrow\mathrm{L}^2_{(0)}(\Omega)$. Let $(\varphi_k)_{k \in \mathbb{N}} \subset \mathrm{V}_0 $ be such that $\varphi_k \rightarrow \varphi$ in $\mathrm{V}_0$. Then, from Lemma \ref{poin}, we can easily see that $\varphi_k \rightarrow \varphi $ in $\mathrm{L}^2_{(0)}(\Omega)$. Therefore, using Lemma \ref{lowersemi}, we get that $f: \mathrm{V}_0 \rightarrow (-\infty,\infty]$ is lower semicontinuous. \end{remark} \begin{proposition} \label{maximalmonotone} The subgradients $\partial_{\mathrm{L}^2_{(0)}} f$ and $\partial_{\mathrm{V}_0} f$ are maximal monotone operators. \end{proposition} \begin{proof} In Lemma \ref{lowersemi}, we have proved that $f:\mathrm{L}^2_{(0)}(\Omega) \rightarrow (-\infty,\infty]$ is proper convex and lower semicontinuous. By using Lemma \ref{rockfellar}, we obtain that the operator $\partial_{\mathrm{L}^2_{(0)}} f : \mathrm{L}^2_{(0)}(\Omega) \rightarrow \mathcal{P}((\mathrm{L}^2_{(0)}(\Omega))')$ is maximal monotone. As for the operator $\partial_{\mathrm{V}_0} f$, Remark \ref{lower_H1} gives that $f: \mathrm{V}_0 \rightarrow (-\infty,\infty]$ is proper convex and lower semicontinuous. Hence, once again using Lemma \ref{rockfellar}, we get that $\partial_{\mathrm{V}_0} f$ is also maximal monotone. \end{proof} \begin{lemma}[Lemma 3.7, \cite{MR3524178}] Consider the functional $f$ as in \eqref{functional}. Then for every $\varphi \in \mathrm{D}(\partial_{\mathrm{L}^2_{(0)}}f) \cap \mathrm{V}_0$ we have that $\partial_{\mathrm{L}^2_{(0)}}f(\varphi) \subseteq \partial_{\mathrm{V}_0}f(\varphi)$.
\end{lemma} \begin{lemma}[Lemma 3.8, \cite{MR3524178}] \label{subgrad H1_0} Let $\varphi \in \mathrm{D}(\partial_{\mathrm{V}_0} f)$ and $w \in \partial_{\mathrm{V}_0}f(\varphi)$. If $w \in \mathrm{L}^2_{(0)}(\Omega)$, then $\varphi \in \mathrm{D}(\partial_{\mathrm{L}^2_{(0)}}f)$ and \begin{align*} w=\partial_{\mathrm{L}^2_{(0)}}f(\varphi) = a \varphi - \mathrm{J}*\varphi + \mathrm{P}_0 \mathrm{G}'(\varphi). \end{align*} \end{lemma} \subsection{Abstract Formulation} In this subsection, we define the following spaces in order to set up the problem in the framework of Browder's theorem (see Theorem \ref{Browder}). Let \begin{align*} &\mathbb{X} :=\mathbb{V}_{\mathrm{div}}\times \mathrm{V}_0 \times \mathrm{V}_0. \end{align*} Let us define \begin{align} \label{Zdef} \mathrm{Z}:=\left\{ \varphi \in \mathrm{V}_0 :\varphi (x) \in [-1,1],\ \text{ a.e.} \right\}, \end{align} and \begin{align*} \mathbb{Y}:=\mathbb{V}_{\mathrm{div}}\times \mathrm{V}_0 \times \mathrm{Z}. \end{align*} Clearly, $\mathbb{Y}$ is a non-empty, closed and convex subset of $\mathbb{X}$. Let $\mathrm{D}(\mathrm{T}):= \mathbb{V}_{\mathrm{div}}\times \mathrm{V}_0 \times \mathrm{D}(\partial_{\mathrm{V}_0}f)$, and define a mapping $\mathrm{T}:\mathbb{Y} \rightarrow \mathcal{P}(\mathbb{X}')$ by \begin{equation} \label{operator T} \mathrm{T}(\mathbf{u},\mu_0, \varphi) := \left\{\begin{array}{cl} \left\{ \left( \begin{array}{c} 0\\ 0\\ \partial_{\mathrm{V}_0} f(\varphi) \end{array} \right) \right\}, & \text{if } (\mathbf{u},\mu_0, \varphi) \in \mathrm{D}(\mathrm{T}), \\ \varnothing, & \text{otherwise.} \end{array}\right.
\end{equation} We define the operator $\mathrm{S}: \mathbb{Y} \rightarrow \mathbb{X}'$ as \begin{align} \label{operator S} \nonumber \!_{\mathbb{X}'}\langle \mathrm{S}\mathbf{x},\mathbf{y} \rangle_{\mathbb{X}} &:= \int_\Omega (\mathbf{u} \cdot \nabla) \mathbf{u} \cdot \mathbf{v} \/\mathrm{d}\/ x+ \int_\Omega 2\nu (\varphi) \mathrm{D}\mathbf{u} \cdot \mathrm{D}\mathbf{v} \/\mathrm{d}\/ x - \int_\Omega\mu_0 \nabla \varphi \cdot \mathbf{v} \/\mathrm{d}\/ x + \int_\Omega (\mathbf{u} \cdot \nabla \varphi)\, \eta \/\mathrm{d}\/ x \\ &\quad+\int_\Omega m(\varphi) \nabla \mu_0 \cdot \nabla \eta \/\mathrm{d}\/ x -\int_\Omega \mu_0 \psi \/\mathrm{d}\/ x -\int_\Omega \mathrm{P}_0 (\kappa \varphi) \psi \/\mathrm{d}\/ x, \end{align} for all $\mathbf{x}=(\mathbf{u},\mu_0, \varphi) \in \mathbb{Y}$ and $\mathbf{y}=(\mathbf{v},\eta, \psi) \in \mathbb{X}$, and the element $\mathbf{b} \in \mathbb{X}'$ is defined by \begin{align*} \!_{\mathbb{X}'}\langle \mathbf{b},\mathbf{y} \rangle_{\mathbb{X}} := \langle \mathbf{h} , \mathbf{v} \rangle, \end{align*} for all $\mathbf{y}\in \mathbb{X}$.
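As a quick check, let us verify that $\mathbf{b}$ is indeed an element of $\mathbb{X}'$; here we assume that $\mathbb{X}$ is equipped with a norm dominating the norm of each factor, for instance the sum of the factor norms. For every $\mathbf{y}=(\mathbf{v},\eta, \psi) \in \mathbb{X}$, \begin{align*} |\!_{\mathbb{X}'}\langle \mathbf{b},\mathbf{y} \rangle_{\mathbb{X}}| = |\langle \mathbf{h} , \mathbf{v} \rangle| \leq \|\mathbf{h}\|_{\mathbb{V}_{\mathrm{div}}'}\, \|\mathbf{v}\|_{\mathbb{V}_{\mathrm{div}}} \leq \|\mathbf{h}\|_{\mathbb{V}_{\mathrm{div}}'}\, \|\mathbf{y}\|_{\mathbb{X}}, \end{align*} so that $\mathbf{b}$ is a bounded linear functional on $\mathbb{X}$ with $\|\mathbf{b}\|_{\mathbb{X}'} \leq \|\mathbf{h}\|_{\mathbb{V}_{\mathrm{div}}'}$.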
From the relations \eqref{operator T} and \eqref{operator S}, the problem $\mathbf{b} \in \mathrm{T}(\mathbf{u}, \mu_0,\varphi) +\mathrm{S}(\mathbf{u}, \mu_0,\varphi)$ with $(\mathbf{u}, \mu_0,\varphi) \in \mathbb{Y}\cap \mathrm{D}(\mathrm{T}) $ can be written as \begin{equation} \label{operatorform} \left( \begin{array}{c} 0 \\ 0 \\ \partial_{\mathrm{V}_0} f(\varphi) \end{array} \right) + \left( \begin{array}{c} (\mathbf{u} \cdot \nabla)\mathbf{u} -\text{div}(2\nu(\varphi) \mathrm{D}\mathbf{u})-\mu_0 \nabla \varphi\\ -\text{div}(m(\varphi) \nabla \mu_0)+\mathbf{u} \cdot \nabla \varphi \\ -\mu_0-\mathrm{P}_0(\kappa \varphi) \end{array} \right) = \left( \begin{array}{c} \mathbf{h} \\ 0\\ 0 \end{array} \right), \end{equation} in $\mathbb{X}'$. If we can prove that \eqref{operatorform} has a solution, then this solution solves the reformulated problem \eqref{reformphi}-\eqref{nsreform}. This is the content of the next lemma. Afterwards, we discuss the existence of a solution to \eqref{operatorform}. \begin{lemma} \label{oper_reform} Let $(\mathbf{u}, \mu_0,\varphi)\in \mathbb{Y} \cap \mathrm{D}(\mathrm{T})$ satisfy $ \mathbf{b} \in \mathrm{T}(\mathbf{u}, \mu_0,\varphi) + \mathrm{S}(\mathbf{u}, \mu_0,\varphi)$. Then $(\mathbf{u}, \mu_0,\varphi)$ is a solution of the reformulated problem \eqref{reformphi}-\eqref{nsreform}. \end{lemma} \begin{proof} Let $(\mathbf{u},\mu_0,\varphi)$ be such that $ \mathbf{b} \in \mathrm{T}(\mathbf{u}, \mu_0,\varphi) + \mathrm{S}(\mathbf{u}, \mu_0,\varphi)$. From the first and second equations of \eqref{operatorform}, it clearly follows that \eqref{reformphi} and \eqref{nsreform} are satisfied for all $\mathbf{v} \in \mathbb{V}_{\mathrm{div}}$ and $\psi \in \mathrm{V}_0$.
Now, from the third equation of \eqref{operatorform}, there exists $w \in \partial_{\mathrm{V}_0}f(\varphi)$ such that \begin{align*} w = \mu_0 +\mathrm{P}_0(\kappa \varphi) \ \text{ in } \ \mathrm{V}_0'. \end{align*} Since $\mu_0 + \mathrm{P}_0(\kappa \varphi) \in \mathrm{L}^2_{(0)}(\Omega)$ and $\varphi \in \mathrm{D}(\partial_{\mathrm{V}_0}f)$, we can see from Lemma \ref{subgrad H1_0} that $w=a\varphi - \mathrm{J}*\varphi + \mathrm{P}_0(\mathrm{G}'(\varphi))$ in $\mathrm{V}_0'$. This gives, for every $\psi \in \mathrm{V}_0$, \begin{align*} \int_\Omega (\mu_0 + \mathrm{P}_0(\kappa \varphi )) \psi \/\mathrm{d}\/ x = \int_\Omega ( a\varphi - \mathrm{J}*\varphi ) \psi \/\mathrm{d}\/ x+ \int_\Omega \mathrm{P}_0(\mathrm{G}'(\varphi))\psi\/\mathrm{d}\/ x . \end{align*} Hence \begin{align*} \int_\Omega \mu_0 \psi \/\mathrm{d}\/ x & = \int_\Omega (a\varphi - \mathrm{J}*\varphi )\psi\/\mathrm{d}\/ x+ \int_\Omega \mathrm{P}_0(\mathrm{G}'(\varphi)-\kappa \varphi)\psi \/\mathrm{d}\/ x \\ &= \int_\Omega (a\varphi - \mathrm{J}*\varphi) \psi \/\mathrm{d}\/ x + \int_\Omega \mathrm{P}_0(\mathrm{F}'(\varphi))\psi \/\mathrm{d}\/ x, \end{align*} for all $\psi\in \mathrm{V}_0$, which is precisely \eqref{mu-reform}. \end{proof} In Lemma \ref{oper_reform}, we showed that if $(\mathbf{u}, \mu_0, \varphi)$ satisfies $\mathbf{b} \in \mathrm{T}(\mathbf{u}, \mu_0, \varphi)+\mathrm{S}(\mathbf{u}, \mu_0, \varphi)$, then $(\mathbf{u}, \mu_0, \varphi)$ satisfies the reformulation \eqref{reformphi}-\eqref{nsreform}. Now we show that there exists $(\mathbf{u}, \mu_0, \varphi) \in \mathbb{Y} \cap \mathrm{D}(\mathrm{T})$ satisfying $\mathbf{b} \in \mathrm{T}(\mathbf{u}, \mu_0, \varphi)+\mathrm{S}(\mathbf{u}, \mu_0, \varphi)$. To this purpose, we use Browder's theorem (see Theorem \ref{Browder}). \begin{lemma}\label{browderproof} Let $\mathrm{T},\mathrm{S}$ be as defined in \eqref{operator T} and \eqref{operator S}.
Given $\mathbf{b} \in \mathbb{X}'$, there exists a triple $\mathbf{x}=(\mathbf{u}, \mu_0, \varphi) \in \mathbb{Y} \cap \mathrm{D}(\mathrm{T})$ such that $\mathbf{b} \in \mathrm{T}(\mathbf{x}) + \mathrm{S}(\mathbf{x})$. \end{lemma} \begin{proof} Let us prove that the operators $\mathrm{T}$ and $\mathrm{S}$, and the spaces $\mathbb{X}$ and $\mathbb{Y}$, satisfy the hypotheses of Browder's theorem (see Theorem \ref{Browder}) in the following steps. \textbf{(1)} The set $\mathbb{Y}$ is a non-empty, closed, and convex subset of $\mathbb{X}$. Moreover, $\mathbb{X}$ is a reflexive real Banach space, since $\mathbb{V}_{\mathrm{div}}$ and $\mathrm{V}_0$ are reflexive. \textbf{(2)} Now we show that the operator $\mathrm{T}:\mathbb{Y} \rightarrow \mathcal{P}(\mathbb{X}') $ is maximal monotone. Let us first show that $\mathrm{D}(\partial_{\mathrm{V}_0}f) \subseteq \mathrm{Z}$. To this end, take $\varphi \in \mathrm{D}(\partial_{\mathrm{V}_0}f)$. Then $f(\varphi) < \infty$, since $\mathrm{D}(\partial_{\mathrm{V}_0}f) \subseteq \mathrm{D}(f)$. This gives $\varphi (x) \in [-1,1]$ for a.e. $x\in\Omega$, since $\mathrm{G}(s)=+\infty $ for $s \notin [-1,1]$. Hence $\varphi \in \mathrm{Z}$ and $\mathrm{D}(\partial_{\mathrm{V}_0}f) \subseteq \mathrm{Z}$. From Proposition \ref{maximalmonotone}, the operator $ \partial_{\mathrm{V}_0} f: \mathrm{V}_0 \rightarrow \mathcal{P}(\mathrm{V}_0')$ is maximal monotone. This implies that the operator $\mathrm{T} : \mathbb{X} \rightarrow \mathcal{P}(\mathbb{X}')$ is maximal monotone. Observe that \begin{align*} \mathrm{D}(\mathrm{T})= \mathbb{V}_{\mathrm{div}}\times \mathrm{V}_0 \times \mathrm{D}(\partial_{\mathrm{V}_0}f) \subseteq \mathbb{V}_{\mathrm{div}}\times \mathrm{V}_0 \times \mathrm{Z} = \mathbb{Y} \subset \mathbb{X}. \end{align*} Moreover, by the definition of $\mathrm{T}$, for $(\mathbf{u}, \mu_0, \varphi) \notin \mathrm{D}(\mathrm{T})$, we have $\mathrm{T}(\mathbf{u}, \mu_0, \varphi) = \varnothing$.
Hence it follows that $\mathrm{T}: \mathbb{Y} \rightarrow \mathcal{P}(\mathbb{X}')$ is a maximal monotone operator. \textbf{(3)} We write $\mathrm{S} = \mathrm{S}_1 + \ldots +\mathrm{S}_7$ and show that each $\mathrm{S}_i$, for $i=1,\ldots,7$, is pseudo-monotone. Let us define \begin{align*} & _{\mathbb{X}'}\langle \mathrm{S}_1(\mathbf{u}, \mu_0, \varphi), (\mathbf{v}, \eta , \psi) \rangle_{\mathbb{X}} := \int_\Omega (\mathbf{u} \cdot \nabla ) \mathbf{u} \cdot \mathbf{v} \/\mathrm{d}\/ x, \\ & _{\mathbb{X}'}\langle \mathrm{S}_2(\mathbf{u}, \mu_0, \varphi), (\mathbf{v}, \eta , \psi) \rangle_{\mathbb{X}} := \int_\Omega 2\nu (\varphi) \mathrm{D}\mathbf{u} \cdot \mathrm{D}\mathbf{v} \/\mathrm{d}\/ x,\\ & _{\mathbb{X}'}\langle \mathrm{S}_3(\mathbf{u}, \mu_0, \varphi), (\mathbf{v}, \eta , \psi) \rangle_{\mathbb{X}} := \int_\Omega \mu_0 \nabla \varphi \cdot \mathbf{v} \/\mathrm{d}\/ x, \\ & _{\mathbb{X}'}\langle \mathrm{S}_4(\mathbf{u}, \mu_0, \varphi), (\mathbf{v}, \eta , \psi) \rangle_{\mathbb{X}} := \int_\Omega m(\varphi) \nabla \mu_0 \cdot \nabla \eta \/\mathrm{d}\/ x, \\ & _{\mathbb{X}'}\langle \mathrm{S}_5(\mathbf{u}, \mu_0, \varphi), (\mathbf{v}, \eta , \psi) \rangle_{\mathbb{X}} := \int_\Omega (\mathbf{u} \cdot \nabla \varphi )\eta \/\mathrm{d}\/ x, \\ & _{\mathbb{X}'}\langle \mathrm{S}_6(\mathbf{u}, \mu_0, \varphi), (\mathbf{v}, \eta , \psi) \rangle_{\mathbb{X}} := \int_\Omega \mu_0 \psi \/\mathrm{d}\/ x, \\ & _{\mathbb{X}'}\langle \mathrm{S}_7(\mathbf{u}, \mu_0, \varphi), (\mathbf{v}, \eta , \psi) \rangle_{\mathbb{X}} := \int_\Omega \mathrm{P}_0(\kappa \varphi ) \psi \/\mathrm{d}\/ x. \end{align*} Since complete continuity implies pseudo-monotonicity, we show that $\mathrm{S}_1, \mathrm{S}_3, \mathrm{S}_5,\mathrm{S}_6$ and $\mathrm{S}_7$ are completely continuous operators. Let us denote $\mathbf{x}_n = (\mathbf{u}_n, \mu_{0n}, \varphi_n)$, $\mathbf{x} = (\mathbf{u}, \mu_0, \varphi)$ and $\mathbf{y}=(\mathbf{v}, \eta , \psi)$.
Assume that $\mathbf{x}_n \rightharpoonup \mathbf{x}$ in $\mathbb{Y}$. This means $\mathbf{u}_n \rightharpoonup \mathbf{u} $ in $\mathbb{V}_{\mathrm{div}}$, $\mu_{0n} \rightharpoonup \mu_0 $ in $\mathrm{V}_0$ and $\varphi_n \rightharpoonup \varphi $ in $\mathrm{V}_0$, which in turn gives $\mathbf{u}_n \rightarrow \mathbf{u} $ in $\mathbb{G}_{\mathrm{div}}$, $\mu_{0n} \rightarrow \mu_0 $ in $\mathrm{L}^2_{(0)}(\Omega)$ and $\varphi_n \rightarrow \varphi $ in $\mathrm{L}^2_{(0)}(\Omega)$, using the compact embeddings $\mathbb{V}_{\mathrm{div}}\hookrightarrow\mathbb{G}_{\mathrm{div}}$ and $\mathrm{V}_0\hookrightarrow\mathrm{L}^2_{(0)}(\Omega)$. We have to show that $\mathrm{S}_1 \mathbf{x}_n$ converges strongly to $\mathrm{S}_1 \mathbf{x} $ in the $\mathbb{X}'$-norm. Using \eqref{2.7}, H\"older's and Ladyzhenskaya's inequalities, for the space dimension $n=2$, we get \begin{align} |\,_{\mathbb{X}'}\langle \mathrm{S}_1 \mathbf{x}_n, \mathbf{y} \rangle_{\mathbb{X}}-\,_{\mathbb{X}'}\langle \mathrm{S}_1 \mathbf{x}, \mathbf{y} \rangle_{\mathbb{X}}| \nonumber=& |b(\mathbf{u}_n, \mathbf{u}_n,\mathbf{v})- b(\mathbf{u},\mathbf{u},\mathbf{v})| \nonumber \\ \nonumber=& |-b(\mathbf{u}_n, \mathbf{v}, \mathbf{u}_n -\mathbf{u}) -b(\mathbf{u}_n-\mathbf{u},\mathbf{v},\mathbf{u})| \\ \nonumber\leq & \|\mathbf{u}_n \|_{\mathbb{L}^4} \| \nabla \mathbf{v}\| \ \|\mathbf{u}_n-\mathbf{u} \|_{\mathbb{L}^4} + \|\mathbf{u} \|_{\mathbb{L}^4} \| \nabla \mathbf{v}\| \ \|\mathbf{u}_n-\mathbf{u} \|_{\mathbb{L}^4} \\ \label{S12} \leq & 2^{1/4} \mathopen{}\mathclose\bgroup\originalleft(\|\mathbf{u}_n\|^{1/2} \| \nabla \mathbf{u}_n\|^{1/2} + \|\mathbf{u}\|^{1/2} \| \nabla \mathbf{u}\|^{1/2} \aftergroup\egroup\originalright) \|\mathbf{u}_n - \mathbf{u} \|_{\mathbb{L}^4} \| \mathbf{v}\|_{\mathbb{V}_{\mathrm{div}}}.
\end{align} For the space dimension $n=3$, using H\"older's and Ladyzhenskaya's inequalities, we get \begin{align} \label{S13} |\,_{\mathbb{X}'}\langle \mathrm{S}_1 \mathbf{x}_n, \mathbf{y} \rangle_{\mathbb{X}} -\,_{\mathbb{X}'}\langle \mathrm{S}_1 \mathbf{x}, \mathbf{y} \rangle_{\mathbb{X}}| \nonumber \leq & \| \mathbf{u}_n \|_{\mathbb{L}^4} \| \nabla \mathbf{v} \| \ \| \mathbf{u}_n - \mathbf{u} \|_{\mathbb{L}^4} + \| \mathbf{u}_n - \mathbf{u} \|_{\mathbb{L}^4} \| \nabla \mathbf{v} \| \|\mathbf{u} \|_{\mathbb{L}^4} \\ \leq & 2^{1/2}\mathopen{}\mathclose\bgroup\originalleft(\|\mathbf{u}_n\|^{1/4}\|\nabla\mathbf{u}_n\|^{3/4}+\|\mathbf{u}\|^{1/4}\|\nabla\mathbf{u}\|^{3/4}\aftergroup\egroup\originalright)\|\mathbf{u}_n-\mathbf{u}\|_{\mathbb{L}^4}\|\mathbf{v}\|_{\mathbb{V}_{\mathrm{div}}}. \end{align} Let us now estimate $|\,_{\mathbb{X}'}\langle \mathrm{S}_3\mathbf{x}_n,\mathbf{y} \rangle_{\mathbb{X}} - \,_{\mathbb{X}'}\langle \mathrm{S}_3 \mathbf{x}, \mathbf{y} \rangle_{\mathbb{X}} |$ and $|\,_{\mathbb{X}'}\langle \mathrm{S}_5\mathbf{x}_n,\mathbf{y} \rangle_{\mathbb{X}}- \,_{\mathbb{X}'}\langle \mathrm{S}_5 \mathbf{x}, \mathbf{y} \rangle_{\mathbb{X}}|$.
We perform an integration by parts, use H\"older's inequality and the embedding $\mathbb{H}_0^1(\Omega)\hookrightarrow\mathbb{L}^4(\Omega)$ to estimate $|\,_{\mathbb{X}'}\langle \mathrm{S}_3\mathbf{x}_n,\mathbf{y} \rangle_{\mathbb{X}} - \,_{\mathbb{X}'}\langle \mathrm{S}_3 \mathbf{x}, \mathbf{y} \rangle_{\mathbb{X}} |$ as \begin{align} \nonumber |\,_{\mathbb{X}'}\langle \mathrm{S}_3\mathbf{x}_n,\mathbf{y} \rangle_{\mathbb{X}} - \,_{\mathbb{X}'}\langle \mathrm{S}_3 \mathbf{x}, \mathbf{y} \rangle_{\mathbb{X}} |&= \mathopen{}\mathclose\bgroup\originalleft|-\int_\Omega \mu_{0n} \nabla \varphi_n \cdot \mathbf{v} \/\mathrm{d}\/ x +\int_\Omega \mu_0 \nabla \varphi \cdot \mathbf{v} \/\mathrm{d}\/ x\aftergroup\egroup\originalright| \\ \nonumber &=\mathopen{}\mathclose\bgroup\originalleft|\int_\Omega \mu_{0n} \nabla (\varphi - \varphi_n) \cdot \mathbf{v} \/\mathrm{d}\/ x + \int_\Omega (\mu_0-\mu_{0n})\nabla \varphi \cdot \mathbf{v} \/\mathrm{d}\/ x \aftergroup\egroup\originalright|\\ \nonumber & \leq \| \nabla \mu_{0n}\| \ \| \mathbf{v}\|_{\mathbb{L}^4} \|\varphi - \varphi_n \|_{\mathrm{L}^4} + \|\mu_0-\mu_{0n} \|_{\mathrm{L}^4} \|\nabla \varphi \| \ \|\mathbf{v} \|_{\mathbb{L}^4} \\ \label{S3} & \leq C\mathopen{}\mathclose\bgroup\originalleft( \| \nabla \mu_{0n}\| \|\varphi - \varphi_n \|_{\mathrm{L}^4} + \|\mu_0-\mu_{0n} \|_{\mathrm{L}^4} \|\nabla \varphi \|\aftergroup\egroup\originalright) \| \mathbf{v} \|_{\mathbb{V}_{\mathrm{div}}}.
\end{align} Similarly, using an integration by parts together with the divergence-free condition and the boundary conditions, we have \begin{align} \nonumber |\,_{\mathbb{X}'}\langle \mathrm{S}_5\mathbf{x}_n,\mathbf{y} \rangle_{\mathbb{X}}- \,_{\mathbb{X}'}\langle \mathrm{S}_5 \mathbf{x}, \mathbf{y} \rangle_{\mathbb{X}}| &= \mathopen{}\mathclose\bgroup\originalleft|\int_\Omega (\mathbf{u}_n \cdot \nabla \varphi_n) \eta \/\mathrm{d}\/ x -\int_\Omega (\mathbf{u} \cdot \nabla \varphi) \eta \/\mathrm{d}\/ x \aftergroup\egroup\originalright|\\ \nonumber &= \mathopen{}\mathclose\bgroup\originalleft|\int_\Omega (\mathbf{u}_n-\mathbf{u}) \cdot \nabla \varphi_n \eta \/\mathrm{d}\/ x+\int_\Omega \mathbf{u} \cdot \nabla (\varphi_n -\varphi)\eta \/\mathrm{d}\/ x\aftergroup\egroup\originalright| \\ \label{S5} &\leq \mathopen{}\mathclose\bgroup\originalleft( \|\mathbf{u}_n-\mathbf{u} \|_{\mathbb{L}^4}\| \varphi_n\|_{\mathrm{L}^4} +\| \mathbf{u}\|_{\mathbb{L}^4} \ \|\varphi_n -\varphi \| _{\mathrm{L}^4} \aftergroup\egroup\originalright)\|\nabla \eta \|. \end{align} Using H\"older's inequality, we obtain \begin{align}\label{S6} |\,_{\mathbb{X}'}\langle \mathrm{S}_6 \mathbf{x}_n, \mathbf{y} \rangle_{\mathbb{X}}-\,_{\mathbb{X}'}\langle \mathrm{S}_6 \mathbf{x}, \mathbf{y} \rangle_{\mathbb{X}}| \leq & \int_\Omega |\mu_{0n} - \mu_0|\,|\psi| \/\mathrm{d}\/ x \leq \|\mu_{0n} - \mu_0 \| \, \|\psi \| \end{align} and \begin{align}\label{S7} |\,_{\mathbb{X}'}\langle \mathrm{S}_7 \mathbf{x}_n, \mathbf{y} \rangle_{\mathbb{X}} -\,_{\mathbb{X}'}\langle \mathrm{S}_7 \mathbf{x}, \mathbf{y} \rangle_{\mathbb{X}}| &= \mathopen{}\mathclose\bgroup\originalleft|\int_\Omega\mathrm{P}_0(\kappa \varphi_n) \psi \/\mathrm{d}\/ x - \int_\Omega \mathrm{P}_0(\kappa \varphi) \psi \/\mathrm{d}\/ x \aftergroup\egroup\originalright| \leq \kappa \int_{\Omega} |\varphi_n - \varphi | |\psi| \/\mathrm{d}\/ x \nonumber\\& \leq \kappa \|\varphi_n -\varphi \| \|\psi \|.
\end{align} From \eqref{S12}-\eqref{S13}, we get \begin{align*} & \| \mathrm{S}_1 \mathbf{x}_n -\mathrm{S}_1\mathbf{x}\|_{\mathbb{X}'} \\ &\leq 2^{1/2} \mathopen{}\mathclose\bgroup\originalleft(\|\mathbf{u}_n\|^{1/2} \| \nabla \mathbf{u}_n\|^{1/2} + \|\mathbf{u}\|^{1/2} \| \nabla \mathbf{u}\|^{1/2} \aftergroup\egroup\originalright)( \|\nabla\mathbf{u}_n \|^{1/2}+\|\nabla\mathbf{u} \|^{1/2}) \|\mathbf{u}_n - \mathbf{u} \|^{1/2}\ \ (n=2), \end{align*} and \begin{align*} & \| \mathrm{S}_1 \mathbf{x}_n -\mathrm{S}_1\mathbf{x}\|_{\mathbb{X}'} \\ &\leq 2\mathopen{}\mathclose\bgroup\originalleft(\|\mathbf{u}_n\|^{1/4}\|\nabla\mathbf{u}_n\|^{3/4}+\|\mathbf{u}\|^{1/4}\|\nabla\mathbf{u}\|^{3/4}\aftergroup\egroup\originalright)( \|\nabla\mathbf{u}_n \|^{3/4}+\|\nabla\mathbf{u} \|^{3/4})\|\mathbf{u}_n-\mathbf{u}\|^{1/4} \ \ (n=3), \end{align*} and both converge to $0$ as $n\to\infty$, since the weakly convergent sequence $\{\mathbf{u}_n\}$ is bounded in $\mathbb{V}_{\mathrm{div}}$ and $\mathbf{u}_n\to\mathbf{u}$ in $\mathbb{G}_{\mathrm{div}}$ by the compact embedding $\mathbb{V}_{\mathrm{div}}\hookrightarrow\mathbb{G}_{\mathrm{div}}$. Using the boundedness of $\{\mu_{0n}\}$ and $\{\varphi_n\}$ in $\mathrm{V}_0$ and the compact embeddings $\mathrm{V}_0 \hookrightarrow \mathrm{L}^4(\Omega)$ (valid for $n\leq3$) and $\mathrm{V}_0 \hookrightarrow \mathrm{L}^2_{(0)}(\Omega)$, from \eqref{S3}-\eqref{S7}, we have \begin{align*} \| \mathrm{S}_3\mathbf{x}_n- \mathrm{S}_3\mathbf{x}\|_{\mathbb{X}'} &\leq \| \nabla \mu_{0n}\| \|\varphi - \varphi_n \|_{\mathrm{L}^4} + \|\mu_0-\mu_{0n} \|_{\mathrm{L}^4} \|\nabla \varphi \| \rightarrow 0, \\ \| \mathrm{S}_5\mathbf{x}_n- \mathrm{S}_5\mathbf{x} \|_{\mathbb{X}'} &\leq \|\mathbf{u}_n-\mathbf{u} \|_{\mathbb{L}^4}\| \varphi_n\|_{\mathrm{L}^4} +\| \mathbf{u}\|_{\mathbb{L}^4} \ \|\varphi_n -\varphi \| _{\mathrm{L}^4} \rightarrow 0, \\ \|\mathrm{S}_6 \mathbf{x}_n- \mathrm{S}_6 \mathbf{x} \|_{\mathbb{X}'} & \leq \| \mu_{0n}- \mu_0\| \rightarrow 0, \\ \|\mathrm{S}_7 \mathbf{x}_n- \mathrm{S}_7 \mathbf{x} \|_{\mathbb{X}'} &\leq \kappa \| \varphi_n - \varphi \| \rightarrow 0, \end{align*} as $n\to\infty$. This proves that $\mathrm{S}_1, \mathrm{S}_3, \mathrm{S}_5, \mathrm{S}_6$ and $\mathrm{S}_7$ are completely continuous and hence pseudo-monotone.
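For the reader's convenience, we record the Ladyzhenskaya interpolation inequalities used in \eqref{S12} and \eqref{S13}, with the constants adopted throughout this paper (only the structure of these inequalities, not the precise values of the constants, matters for the convergence arguments): for $\mathbf{v}\in\mathbb{H}^1_0(\Omega)$,
\begin{align*}
\|\mathbf{v}\|_{\mathbb{L}^4} \leq 2^{1/4}\|\mathbf{v}\|^{1/2}\|\nabla \mathbf{v}\|^{1/2} \ \ (\Omega\subset\mathbb{R}^2) \quad \text{ and } \quad \|\mathbf{v}\|_{\mathbb{L}^4} \leq \sqrt{2}\,\|\mathbf{v}\|^{1/4}\|\nabla \mathbf{v}\|^{3/4} \ \ (\Omega\subset\mathbb{R}^3).
\end{align*}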
In order to prove the pseudo-monotonicity of $\mathrm{S}_4$, we use Lemma \ref{pseudo-monotone}. Let us define the operator $\widetilde{\mathrm{S}}_4 : \mathbb{X} \times \mathbb{X} \rightarrow \mathbb{X}'$ by \begin{align*} \,_{\mathbb{X}'}\langle \widetilde{\mathrm{S}}_4 (\mathbf{x}_1,\mathbf{x}_2),\mathbf{y} \rangle_{\mathbb{X}} := \int_\Omega m (\varphi_1) \nabla \mu_{0_2} \cdot \nabla \eta\/\mathrm{d}\/ x, \end{align*} where $\mathbf{x}_i = (\mathbf{u}_i, \mu_{0_i},\varphi_i) \in \mathbb{X}$, $i=1,2$, and $\mathbf{y}=(\mathbf{v},\eta,\psi) \in \mathbb{X}$. If $\mathbf{x}=(\mathbf{u},\mu_0,\varphi)\in\mathbb{X}$, then \begin{align*} \,_{\mathbb{X}'}\langle \widetilde{\mathrm{S}}_4 (\mathbf{x},\mathbf{x}_1 - \mathbf{x}_2) , \mathbf{x}_1 - \mathbf{x}_2 \rangle_{\mathbb{X}} &= \int_\Omega m(\varphi) \nabla (\mu_{0_1}- \mu_{0_2}) \cdot \nabla (\mu_{0_1}- \mu_{0_2})\/\mathrm{d}\/ x\\&=\int_\Omega m(\varphi) |\nabla (\mu_{0_1}- \mu_{0_2}) |^2\/\mathrm{d}\/ x \geq 0, \end{align*} since $m(\cdot)\geq 0$. This implies that $\widetilde{\mathrm{S}}_4(\mathbf{x}, \cdot)$ is monotone. Now for each fixed $\mathbf{x}\in\mathbb{X}$, it can easily be seen that $\,_{\mathbb{X}'}\langle \widetilde{\mathrm{S}}_4(\mathbf{x},\mathbf{x}_1+t_k\mathbf{x}_2),\mathbf{y}\rangle_{\mathbb{X}}\to \,_{\mathbb{X}'}\langle \widetilde{\mathrm{S}}_4(\mathbf{x},\mathbf{x}_1),\mathbf{y}\rangle_{\mathbb{X}}$, as $t_k\to0$. Hence, the operator $\widetilde{\mathrm{S}}_4(\mathbf{x}, \cdot)$ is hemicontinuous for each fixed $\mathbf{x}\in\mathbb{X}$. In order to prove the complete continuity of $\widetilde{\mathrm{S}}_4(\cdot ,\mathbf{x})$ for all $\mathbf{x} \in \mathbb{X}$, we consider a sequence $\{\widetilde{\mathbf{x}}_k\} \subseteq \mathbb{X}$ such that $\widetilde{\mathbf{x}}_k \rightharpoonup \widetilde{\mathbf{x}}$ in $\mathbb{X}$, where $\widetilde{\mathbf{x}}=(\widetilde{\mathbf{u}}, \widetilde{\mu}_0, \widetilde{\varphi}) \in \mathbb{X}$.
Then \begin{align*} |\,_{\mathbb{X}'}\langle \widetilde{\mathrm{S}}_4(\widetilde{\mathbf{x}}_k ,\mathbf{x}) - \widetilde{\mathrm{S}}_4(\widetilde{\mathbf{x}} ,\mathbf{x}) , \mathbf{y}\rangle_{\mathbb{X}}| &= \mathopen{}\mathclose\bgroup\originalleft|\int_\Omega (m(\widetilde{\varphi}_k) \nabla \mu_0 - m(\widetilde{\varphi}) \nabla \mu_0) \cdot \nabla \eta\/\mathrm{d}\/ x\aftergroup\egroup\originalright| \\ &\leq \| m(\widetilde{\varphi}_k) \nabla \mu_0 - m(\widetilde{\varphi}) \nabla \mu_0 \| \ \|\eta \|_{\mathrm{V}_0}. \end{align*} Since $\widetilde{\varphi}_k \rightharpoonup \widetilde{\varphi}$ in $\mathrm{V}_0$, the compact embedding $\mathrm{V}_0\hookrightarrow\mathrm{L}^2_{(0)}(\Omega)$ gives $\widetilde{\varphi}_k \rightarrow \widetilde{\varphi}$ in $\mathrm{L}_{(0)}^2(\Omega)$; since $m$ is a continuous function, it remains to show that $\| m(\widetilde{\varphi}_k) \nabla \mu_0 - m(\widetilde{\varphi}) \nabla \mu_0 \| \rightarrow 0$ as $k\to\infty$. Let us define $$\mathrm{H}(\widetilde{\varphi})(x):=g(x,\widetilde{\varphi}(x))=m(\widetilde{\varphi}(x))\nabla \mu_0(x).$$ Then Lemma 1.19 of \cite{ruzicka2006nichtlineare} yields that $\mathrm{H}:\mathrm{L}^2_{(0)}(\Omega)\to\mathrm{L}^2(\Omega)$ is continuous and bounded. Since $\widetilde{\varphi}_k \rightarrow \widetilde{\varphi}$ in $\mathrm{L}_{(0)}^2(\Omega)$, it follows that $\| m(\widetilde{\varphi}_k) \nabla \mu_0 - m(\widetilde{\varphi}) \nabla \mu_0 \| \rightarrow 0$ as $k\to\infty$, and hence $\widetilde{\mathrm{S}}_4(\cdot,\mathbf{x})$ is completely continuous for all $\mathbf{x}\in\mathbb{X}$. Using Lemma \ref{pseudo-monotone}, we get that the operator $\mathrm{S}_4(\mathbf{x})=\widetilde{\mathrm{S}}_4(\mathbf{x},\mathbf{x})$ is pseudo-monotone. A similar proof works for $\mathrm{S}_2$, since it is of the same form as $\mathrm{S}_4$ and $\nu(\cdot)$ is a continuously differentiable function. Since each operator $\mathrm{S}_i$ is bounded, the operator $\mathrm{S}$ is also bounded. Since pseudo-monotonicity together with local boundedness implies demicontinuity (see Lemma 2.4, \cite{MR3014456} or Proposition 27.7, \cite{MR1033498}), $\mathrm{S}$ is demicontinuous.
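To summarize step \textbf{(3)}, the decomposition of $\mathrm{S}$ verified above reads
\begin{align*}
\mathrm{S}=\underbrace{\mathrm{S}_2+\mathrm{S}_4}_{\text{pseudo-monotone by Lemma \ref{pseudo-monotone}}}+\underbrace{\mathrm{S}_1+\mathrm{S}_3+\mathrm{S}_5+\mathrm{S}_6+\mathrm{S}_7}_{\text{completely continuous}},
\end{align*}
and since a finite sum of pseudo-monotone operators is again pseudo-monotone (see, e.g., \cite{MR1033498}, Chapter 27), the operator $\mathrm{S}:\mathbb{X}\to\mathbb{X}'$ is pseudo-monotone and bounded.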
\textbf{(4)} Observe that $\mathbb{Y}$ is unbounded. Choose $\mathbf{x}_0 =(\mathbf{0},0,0) \in \mathbb{Y} \cap \mathrm{D}(\mathrm{T})$. Then we have to show that there is $R>0$ such that \begin{align*} \,_{\mathbb{X}'}\langle \mathrm{S}(\mathbf{x})-\mathbf{b}, \mathbf{x} \rangle_{\mathbb{X}} >0, \ \text{ for all } \ \mathbf{x} \in \mathbb{Y} \ \text{ with } \ \|\mathbf{x}\|_{\mathbb{X}} >R. \end{align*} For $\mathbf{x}=(\mathbf{u},\mu_0,\varphi)\in\mathbb{Y}$, let us consider \begin{align}\label{com} _{\mathbb{X}'}\langle \mathrm{S}(\mathbf{x})-\mathbf{b}, \mathbf{x} \rangle_{\mathbb{X}} \nonumber&= \int_\Omega2 \nu(\varphi)\mathrm{D}\mathbf{u} \cdot \mathrm{D}\mathbf{u} \/\mathrm{d}\/ x + \int_\Omega m(\varphi) \nabla \mu_0 \cdot \nabla \mu_0 \/\mathrm{d}\/ x- \int_\Omega \mu_0 \varphi \/\mathrm{d}\/ x \\ \nonumber&\quad- \int_\Omega \mathrm{P}_0(\kappa \varphi) \varphi \/\mathrm{d}\/ x - \langle \mathbf{h} , \mathbf{u} \rangle\\ &=: \mathrm{I}_1+ \mathrm{I}_2+ \mathrm{I}_3 + \mathrm{I}_4 +\mathrm{I}_5. \end{align} Since $|\varphi(x)| \leq 1$ a.e. and $\nu(\cdot)$ is continuous and strictly positive, so that $\nu(\varphi)\geq \min_{s\in[-1,1]}\nu(s)>0$, Korn's inequality yields \begin{align*} \mathrm{I}_1 = \int_\Omega 2\nu(\varphi) \mathrm{D}\mathbf{u} \cdot\mathrm{D}\mathbf{u} \/\mathrm{d}\/ x\geq 2\min_{s\in[-1,1]}\nu(s)\int_{\Omega}|\mathrm{D}\mathbf{u}|^2\/\mathrm{d}\/ x \geq \widetilde{C}_1 \|\mathbf{u} \| ^2 _{\mathbb{V}_{\mathrm{div}}}. \end{align*} Now, since the mean value of $\mu_0$ is $0$ and $m(\cdot)$ is a continuous, strictly positive function, we also obtain \begin{align*} \mathrm{I}_2=\int_\Omega m(\varphi) \nabla \mu_0 \cdot \nabla \mu_0 \/\mathrm{d}\/ x=\int_\Omega m(\varphi) |\nabla \mu_0|^2 \/\mathrm{d}\/ x \geq \widetilde{C}_2 \| \mu_0\|^2_{\mathrm{V}_0}.
\end{align*} Using H\"older's, Poincar\'e's and Young's inequalities, we get \begin{align*} \mathrm{I}_3&=\int_\Omega \mu_0 \varphi \/\mathrm{d}\/ x \leq \int_{\Omega}|\mu_0|\/\mathrm{d}\/ x\leq |\Omega|^{1/2}\|\mu_0\|_{\mathrm{L}^2}\leq \widetilde{C}_3 \| \mu_0 \|_{\mathrm{V}_0},\\ \mathrm{I}_4&=\int_\Omega \mathrm{P}_0(\kappa \varphi) \varphi \/\mathrm{d}\/ x = \kappa \| \varphi \|^2 \leq \widetilde{C}_4,\\ \mathrm{I}_5&= \langle \mathbf{h} , \mathbf{u} \rangle \leq \|\mathbf{h}\|_{\mathbb{V}_{\mathrm{div}}'}\|\mathbf{u}\|_{\mathbb{V}_{\mathrm{div}}} \leq \frac{1}{2\widetilde{C}_1} \| \mathbf{h}\|_{\mathbb{V}'_{\mathrm{div}}}^2 +\frac{ \widetilde{C}_1}{2} \|\mathbf{u} \|^2_{\mathbb{V}_{\mathrm{div}}}, \end{align*} for some constants $\widetilde{C}_1, \widetilde{C}_2, \widetilde{C}_3, \widetilde{C}_4 >0$. Combining the above inequalities and substituting in \eqref{com} yields \begin{align*} _{\mathbb{X}'}\langle \mathrm{S}(\mathbf{x})-\mathbf{b}, \mathbf{x} \rangle_{\mathbb{X}} & \geq \frac{\widetilde{C}_1}{2} \|\mathbf{u} \| ^2_{\mathbb{V}_{\mathrm{div}}} + \underbrace{\| \mu_0 \|_{\mathrm{V}_0}\mathopen{}\mathclose\bgroup\originalleft(\widetilde{C}_2 \| \mu_0 \|_{\mathrm{V}_0} -\widetilde{C}_3\aftergroup\egroup\originalright)}_{:=g\mathopen{}\mathclose\bgroup\originalleft(\|\mu_0\|_{\mathrm{V}_0}\aftergroup\egroup\originalright)}-\widetilde{C}_4 - \frac{1}{2\widetilde{C}_1} \| \mathbf{h}\|_{\mathbb{V}_{\mathrm{div}}'}^2 \end{align*} for $\mathbf{x}=(\mathbf{u}, \mu_0, \varphi) \in \mathbb{Y}$. Since $\mathbf{h}\in\mathbb{G}$ is fixed, we can choose a constant $K>0$ large enough such that \begin{align*} \widetilde{C}_4 + \frac{1}{2\widetilde{C}_1}\| \mathbf{h}\|_{\mathbb{V}'_{\mathrm{div}}}^2 \leq K. 
\end{align*} Furthermore, it can easily be seen that $$\lim_{x\to+\infty}g(x)=+\infty.$$ Since $g(\cdot)$ grows to infinity while the remaining negative terms are bounded by $K$, we can choose $R>0$ such that \begin{align*} _{\mathbb{X}'}\langle \mathrm{S}(\mathbf{x})-\mathbf{b}, \mathbf{x} \rangle_{\mathbb{X} }>0 \ \text{ for all } \ \mathbf{x}=(\mathbf{u},\mu_0,\varphi)\in\mathbb{Y} \ \text{ such that } \ \|\mathbf{x}\|_{\mathbb{X}} >R. \end{align*} Hence, using Theorem \ref{Browder}, there exists $(\mathbf{u}, \mu_0,\varphi) \in \mathbb{Y} \cap \mathrm{D}(\mathrm{T})$ such that $\mathbf{b} \in \mathrm{T}(\mathbf{u}, \mu_0,\varphi) +\mathrm{S}(\mathbf{u}, \mu_0,\varphi)$, which completes the proof. \end{proof} \begin{theorem} \label{mainthm} Let $\Omega \subset \mathbb{R}^n$, $n=2,3$, be a bounded domain. Let $\mathbf{h} \in \mathbb{V}'_{{\mathrm{div}}}$ and $k \in (-1,1)$. Then, under the Assumptions (A1)-(A9), there exists a weak solution $(\mathbf{u}, \mu,\varphi )$ to the system \eqref{steadysys} such that \begin{align*} \mathbf{u} \in \mathbb{V}_{{\mathrm{div}}}, \quad \quad \varphi \in \mathrm{V} \cap \mathrm{L}^2_{(k)}(\Omega) \quad \text{and} \quad \mu \in \mathrm{V}, \end{align*} and $(\mathbf{u}, \mu,\varphi )$ satisfies the weak formulation \eqref{weakphi}-\eqref{weakform nse}. \end{theorem} \begin{proof} Lemma \ref{browderproof} gives the existence of $(\mathbf{u}, \mu_0,\varphi) \in \mathbb{Y} \cap \mathrm{D}(\mathrm{T})$ such that $\mathbf{b} \in \mathrm{T}(\mathbf{u}, \mu_0,\varphi) +\mathrm{S}(\mathbf{u}, \mu_0,\varphi)$. From Lemma \ref{oper_reform}, we get that $(\mathbf{u}, \mu_0,\varphi)$ satisfies the reformulated problem \eqref{reformphi}-\eqref{nsreform}. Hence, from Lemma \ref{reformtoweak}, we know that $(\mathbf{u}, \mu,\varphi)$ satisfies the weak formulation \eqref{weakphi}-\eqref{weakform nse}, which completes the proof.
\end{proof} \begin{remark}\label{F'phi} Note that, since $\mu \in \mathrm{V}$, $\varphi \in \mathrm{V}\cap \mathrm{L}^2_{(k)}(\Omega)$ and $\|a\|_{\mathrm{L}^\infty} \leq \|\mathrm{J}\|_{\mathrm{L}^1}$, from \eqref{weakmu}, we have \begin{align*} \mathopen{}\mathclose\bgroup\originalleft| \int_\Omega\mathrm{F}'(\varphi)\psi \/\mathrm{d}\/ x\aftergroup\egroup\originalright|= \mathopen{}\mathclose\bgroup\originalleft| \int_\Omega\mu \psi \/\mathrm{d}\/ x- \int_\Omega(a \varphi-\mathrm{J} * \varphi)\psi \/\mathrm{d}\/ x \aftergroup\egroup\originalright|\leq (\|\mu \| + \|a\|_{\mathrm{L}^{\infty}} \|\varphi\| + \|\mathrm{J}\|_{\mathrm{L}^1} \|\varphi\| ) \|\psi\|, \end{align*} for all $\psi \in \mathrm{H}$, so that $\|\mathrm{F}'(\varphi)\| \leq \|\mu\| + 2 \|\mathrm{J}\|_{\mathrm{L}^1}\|\varphi\| < \infty$. \end{remark} \section{Uniqueness of Weak Solutions and Regularity}\setcounter{equation}{0}\label{sec4} In this section, we prove that, under suitable smallness conditions on the data, the weak solution to the system \eqref{steadysys} obtained in Theorem \ref{mainthm} is unique, and we establish some regularity results for the solution. Throughout this section, we assume that the viscosity coefficient $\nu$ and the mobility parameter $m$ are positive constants. \subsection{Uniqueness} Recall that our existence result ensures that $\varphi(\cdot) \in [-1, 1]$ a.e. (see \eqref{Zdef}). Then, we have the following uniqueness theorem. \begin{theorem}\label{unique-steady} Let $(\mathbf{u}_i,\mu_i,\varphi_i) \in \mathbb{V}_{{\mathrm{div}}} \times \mathrm{V} \times (\mathrm{V}\cap \mathrm{L}^2_{(k)}(\Omega))$ for $i=1,2$ be two \emph{weak solutions} of the following system, satisfied in the weak sense:
\begin{equation} \mathopen{}\mathclose\bgroup\originalleft\{ \begin{aligned}\label{3.2} \mathbf{u} \cdot \nabla \varphi &= m\Delta \mu, \ \emph{ in }\ \Omega, \\ \mu &= a \varphi - \mathrm{J}*\varphi + \mathrm{F}'(\varphi), \ \emph{ in }\ \Omega, \\ -\nu \Delta \mathbf{u} + (\mathbf{u} \cdot \nabla) \mathbf{u} + \nabla \uppi &= \mu \nabla \varphi + \mathbf{h}, \ \emph{ in }\ \Omega,\\ \emph{div }\mathbf{u} &= 0, \ \emph{ in }\ \Omega, \\ \frac{\partial \mu}{ \partial \mathbf{n}}=0, \ \mathbf{u}&=0, \ \emph{ on }\ \partial\Omega, \end{aligned} \aftergroup\egroup\originalright. \end{equation} where $\mathbf{h}\in\mathbb{V}_{{\mathrm{div}}}'$. For $n=2$, we have $\mathbf{u}_1=\mathbf{u}_2$ and $\varphi_1=\varphi_2$, provided \begin{align*} & (i) \ \nu^2 > \frac{2\sqrt{2}}{\sqrt{\lambda_1}} \|\mathbf{h}\|_{\mathbb{V}'_{\mathrm{div}}} + \frac{12 \nu }{ \lambda_1m C_0} \|\nabla \mathrm{J} \|_{\mathbb{L}^1}^2,\\ &(ii) \ (\nu m)^2 \mathopen{}\mathclose\bgroup\originalleft( \frac{C_0}{4} -\frac{C}{C_0} \|\nabla \mathrm{J}\|^2_{\mathbb{L}^1} \aftergroup\egroup\originalright)> \nu m \mathopen{}\mathclose\bgroup\originalleft(\frac{C}{\lambda_1}\aftergroup\egroup\originalright) + \frac{2 C }{ C_0} \mathopen{}\mathclose\bgroup\originalleft( \frac{2}{\lambda_1} \aftergroup\egroup\originalright)^{\frac{1}{2}} \| \mathbf{h} \|^2 _{\mathbb{V}'_{\mathrm{div}}} \ \text{ with }\ \|\nabla\mathrm{J}\|^2_{\mathbb{L}^1}<\frac{C^2_0}{4C}. \end{align*} Similarly, for $n=3$, we have $\mathbf{u}_1=\mathbf{u}_2$ and $\varphi_1=\varphi_2$, provided \begin{align*} &(i)\ \nu^2 > \mathopen{}\mathclose\bgroup\originalleft( \frac{16}{\sqrt{\lambda_1}} \aftergroup\egroup\originalright)^{\frac{1}{2}} \|\mathbf{h}\|_{\mathbb{V}'_{\mathrm{div}}} +\frac{12 \nu }{ \lambda_1m C_0} \|\nabla \mathrm{J} \|_{\mathbb{L}^1}^2, \\ &(ii)\ (\nu m)^2 \mathopen{}\mathclose\bgroup\originalleft( \frac{C_0}{4} -\frac{C}{C_0} \|\nabla \mathrm{J}\|_{\mathbb{L}^1}^2 \aftergroup\egroup\originalright)> \nu m
\mathopen{}\mathclose\bgroup\originalleft(\frac{C}{\lambda_1}\aftergroup\egroup\originalright) + \frac{2 C }{ C_0} \mathopen{}\mathclose\bgroup\originalleft( \frac{4}{\sqrt{\lambda_1}} \aftergroup\egroup\originalright)^{\frac{1}{2}} \| \mathbf{h} \|^2 _{\mathbb{V}'_{\mathrm{div}}}, \ \text{ with }\ \|\nabla\mathrm{J}\|^2_{\mathbb{L}^1}<\frac{C^2_0}{4C}, \end{align*} where $C$ is a generic constant depending on the embedding constants. \end{theorem} \begin{proof} Let us first derive a simple bound for the velocity field. We take the inner product of the first equation in \eqref{3.2} with $\mu$ and of the third equation with $\mathbf{u}$ to obtain \begin{align} \label{phi} (\mathbf{u} \cdot \nabla \varphi, \mu)= -m \| \nabla \mu \|^2 \end{align} and \begin{align} \label{u} \nu \| \nabla \mathbf{u}\|^2 = (\mu \nabla \varphi, \mathbf{u}) +\langle\mathbf{h}, \mathbf{u}\rangle. \end{align} Adding \eqref{phi} and \eqref{u}, we obtain \begin{align}\label{1p} m\|\nabla\mu\|^2+\nu\|\nabla\mathbf{u}\|^2=\langle\mathbf{h}, \mathbf{u}\rangle, \end{align} where we used the fact that $(\mathbf{u} \cdot \nabla \varphi, \mu)=(\mu \nabla \varphi, \mathbf{u})$. From \eqref{1p}, we infer that \begin{align*} \nu\|\nabla\mathbf{u}\|^2 \leq m\|\nabla\mu\|^2 + \nu\|\nabla\mathbf{u}\|^2 =\langle\mathbf{h},\mathbf{u}\rangle \leq \| \mathbf{h} \|_{\mathbb{V}_{\mathrm{div}}'} \|\mathbf{u} \|_{\mathbb{V}_{\mathrm{div}}}. \end{align*} Finally, we have \begin{align} \label{gradu estimate} \nu \|\nabla\mathbf{u}\| \leq \| \mathbf{h} \|_{\mathbb{V}_{\mathrm{div}}'}. \end{align} Let $(\mathbf{u}_1, \mu_1,\varphi_1) $ and $(\mathbf{u}_2,\mu_2, \varphi_2)$ be two weak solutions of the system \eqref{3.2}. Note that the averages of $\varphi_1$ and $\varphi_2$ are the same and equal to $k$, which gives $\overline{\varphi}_1 - \overline{\varphi}_2=0$.
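Note that the a priori bound \eqref{gradu estimate} holds for any weak solution of \eqref{3.2}; in particular, it applies to both solutions under consideration, and we will use it below in the form
\begin{align*}
\nu \|\nabla\mathbf{u}_i\| \leq \| \mathbf{h} \|_{\mathbb{V}_{\mathrm{div}}'}, \quad i=1,2.
\end{align*}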
We can rewrite the third equation in \eqref{3.2} as \begin{equation}\label{nonlin u rewritten} -\nu \Delta \mathbf{u} + (\mathbf{u}\cdot \nabla )\mathbf{u} + \nabla \widetilde{\uppi}_{\mathbf{u}} = - \nabla a\frac{{\varphi}^2}{2} - (\mathrm{J}\ast \varphi)\nabla \varphi + \mathbf{h} \ \text{ in }\ \mathbb{V}_{{\mathrm{div}}}', \end{equation} where $\widetilde{\uppi}_{\mathbf{u}} :=\widetilde{\uppi} = \uppi -\mathopen{}\mathclose\bgroup\originalleft( \mathrm{F}(\varphi) + a\frac{{\varphi}^2}{2}\aftergroup\egroup\originalright)$. Let us define $\mathbf{u}^e:=\mathbf{u}_1-\mathbf{u}_2$, $\mu^e:=\mu_1-\mu_2$, $\varphi^e:=\varphi_1-\varphi_2$ and $\widetilde{\uppi}^e:=\widetilde{\uppi}_{\mathbf{u}_1}-\widetilde{\uppi}_{\mathbf{u}_2}$. Then, for every $\mathbf{v} \in \mathbb{V}_{\mathrm{div}}$ and $\psi \in \mathrm{V}$, $(\mathbf{u}^e,\varphi^e)$ satisfies the following: \begin{align*} (\mathbf{u}^e \cdot \nabla \varphi_1, \psi)+(\mathbf{u}_2\cdot\nabla\varphi^e,\psi )&= -m(\nabla \mu^e , \nabla \psi), \\ \nu (\nabla \mathbf{u}^e, \nabla \mathbf{v} )+ b(\mathbf{u}_1,\mathbf{u}^e,\mathbf{v})+ b(\mathbf{u}^e,\mathbf{u}_2,\mathbf{v}) & = -\frac{1}{2}\mathopen{}\mathclose\bgroup\originalleft(\varphi^e( \varphi_1+\varphi_2)\nabla a , \mathbf{v}\aftergroup\egroup\originalright)- ((\mathrm{J}*\varphi^e)\nabla\varphi_2 , \mathbf{v})\nonumber\\ &\quad- ((\mathrm{J}*\varphi_1)\nabla\varphi^e,\mathbf{v}). \end{align*} Let us choose $\mathbf{v} = \mathbf{u}^e$ and $\psi = \mathcal{B}^{-1} \varphi^e$ to get \begin{align} \label{s48} (\mathbf{u}^e \cdot \nabla \varphi_1, \mathcal{B}^{-1} \varphi^e)+(\mathbf{u}_2\cdot\nabla\varphi^e, \mathcal{B}^{-1} \varphi^e) &= m\langle \Delta \mu^e ,\mathcal{B}^{-1} \varphi^e \rangle, \\ \label{s49} \nu\|\nabla\mathbf{u}^e\|^2&= -b(\mathbf{u}^e,\mathbf{u}_2,\mathbf{u}^e)-\frac{1}{2}(\nabla a\varphi^e( \varphi_1+\varphi_2),\mathbf{u}^e)\nonumber\\&\quad -( (\mathrm{J}*\varphi^e)\nabla\varphi_2,\mathbf{u}^e) - ((\mathrm{J}*\varphi_1)\nabla\varphi^e,\mathbf{u}^e),
\end{align} where we used the fact that $b(\mathbf{u}_1, \mathbf{u}^e,\mathbf{u}^e)=0$. Using \eqref{bes}, a Taylor expansion and (A9), we estimate the term $(\nabla \mu^e ,\nabla \mathcal{B}^{-1} \varphi^e)$ appearing in \eqref{s48} as \begin{align}\label{3.13} -\langle-\Delta \mu^e,\mathcal{B}^{-1} \varphi^e\rangle & =-(\mu^e, \varphi^e ) = -(a \varphi^e - \mathrm{J}*\varphi^e + \mathrm{F}'(\varphi_1)-\mathrm{F}'(\varphi_2),\varphi^e ) \nonumber \\ & = -((a + \mathrm{F}''(\varphi_2+ \theta \varphi^e)) \varphi^e, \varphi^e ) + (\mathrm{J}*\varphi^e, \varphi^e) \nonumber\\ & \leq -C_0 \| \varphi^e \|^2 + (\mathrm{J}*\varphi^e, \varphi^e), \end{align} for some $\theta \in (0,1)$. Using \eqref{3.13} in \eqref{s48}, we obtain \begin{align}\label{3.14} mC_0\|\varphi^e\|^2 \leq -(\mathbf{u}^e\cdot\nabla\varphi_1,\mathcal{B}^{-1}\varphi^e)-(\mathbf{u}_2\cdot\nabla\varphi^e,\mathcal{B}^{-1}\varphi^e) + m(\mathrm{J}\ast \varphi^e , \varphi^e). \end{align} Let us now estimate the terms on the right-hand side of \eqref{s49} and \eqref{3.14} one by one. We use H\"older's, Ladyzhenskaya's, Young's and Poincar\'{e}'s inequalities to estimate $|b(\mathbf{u}^e,\mathbf{u}_2,\mathbf{u}^e)|$. For $n=2$, we get \begin{equation}\label{3.15a} |b(\mathbf{u}^e,\mathbf{u}_2,\mathbf{u}^e)|\leq \|\mathbf{u}^e\|_{\mathbb{L}^4}^2\|\nabla\mathbf{u}_2\|\leq \sqrt{2}\|\mathbf{u}^e\|\|\nabla\mathbf{u}^e\|\|\nabla\mathbf{u}_2\|\leq \mathopen{}\mathclose\bgroup\originalleft( \frac{2}{\lambda_1} \aftergroup\egroup\originalright)^{\frac{1}{2}} \|\nabla\mathbf{u}^e\|^2 \|\nabla\mathbf{u}_2\|, \end{equation} and for $n=3$, we have \begin{equation}\label{3.15b} |b(\mathbf{u}^e,\mathbf{u}_2,\mathbf{u}^e)|\leq \|\mathbf{u}^e\|_{\mathbb{L}^4} ^2\|\nabla\mathbf{u}_2\| \leq 2\|\mathbf{u}^e\|^{\frac{1}{2}} \|\nabla\mathbf{u}^e\|^{\frac{3}{2}} \|\nabla\mathbf{u}_2\| \leq \mathopen{}\mathclose\bgroup\originalleft( \frac{4}{\sqrt{\lambda_1}} \aftergroup\egroup\originalright)^{\frac{1}{2}} \|\nabla\mathbf{u}^e\|^2 \|\nabla\mathbf{u}_2\|.
\end{equation} Since $(\mathbf{u}_i, \mu_i, \varphi_i)$, $i=1,2$, are weak solutions of \eqref{3.2} with $\varphi_i(\cdot) \in [-1, 1] $ a.e., we have $\|\varphi_i\|_{\mathrm{L}^\infty} \leq 1$. Using H\"older's and Young's inequalities and the boundedness of $\varphi_1$ and $\varphi_2$, we obtain \begin{align}\label{3.16a} \mathopen{}\mathclose\bgroup\originalleft|\frac{1}{2}(\nabla a\varphi^e( \varphi_1+\varphi_2),\mathbf{u}^e)\aftergroup\egroup\originalright| &\leq \frac{1}{2}\|\nabla a\|_{\mathbb{L}^{\infty}}\|\varphi^e\|\|\varphi_1+\varphi_2\|_{\mathrm{L}^{\infty}}\|\mathbf{u}^e\| \nonumber\\ &\leq \frac{1}{2}\|\nabla a\|_{\mathbb{L}^{\infty}}\|\varphi^e\|(\|\varphi_1\|_{\mathrm{L}^{\infty}} +\|\varphi_2\|_{\mathrm{L}^{\infty}}) \|\mathbf{u}^e\| \nonumber\\ &\leq \frac{m C_0}{8}\|\varphi^e\|^2+\frac{2}{m C_0}\|\nabla a\|_{\mathbb{L}^{\infty}}^2\|\mathbf{u}^e\|^2. \end{align} In order to estimate $((\mathrm{J}*\varphi^e)\nabla\varphi_2,\mathbf{u}^e)$ and $((\mathrm{J}*\varphi_1)\nabla\varphi^e,\mathbf{u}^e)$, we write these terms using an integration by parts and the divergence-free condition as \begin{align*} ((\mathrm{J}*\varphi^e)\nabla\varphi_2,\mathbf{u}^e) = -((\nabla\mathrm{J}*\varphi^e)\varphi_2,\mathbf{u}^e), \\ ((\mathrm{J}*\varphi_1)\nabla\varphi^e,\mathbf{u}^e) = -((\nabla\mathrm{J}*\varphi_1)\varphi^e,\mathbf{u}^e). \end{align*} Using H\"older's and Ladyzhenskaya's inequalities, and Young's inequality for convolution, we estimate $|((\nabla\mathrm{J}*\varphi^e)\varphi_2,\mathbf{u}^e)|$ as \begin{align}\label{3.17a} |((\nabla\mathrm{J}*\varphi^e)\varphi_2,\mathbf{u}^e)| &\leq \| \nabla \mathrm{J}*\varphi^e\|_{\mathbb{L}^2}\|\varphi_2\|_{\mathrm{L}^\infty} \|\mathbf{u}^e\|\leq \| \nabla \mathrm{J}\|_{\mathbb{L}^1} \|\varphi^e\| \|\mathbf{u}^e\| \nonumber \\ &\leq \frac{m C_0}{8}\|\varphi^e\|^2 + \frac{2}{m C_0}\| \nabla \mathrm{J}\|_{\mathbb{L}^1}^2 \|\mathbf{u}^e\|^2.
\end{align} Similarly, we obtain \begin{align}\label{3.18a} \left|((\nabla\mathrm{J}*\varphi_1)\varphi^e,\mathbf{u}^e)\right| &\leq \frac{m C_0}{8}\|\varphi^e\|^2 + \frac{2}{m C_0}\| \nabla \mathrm{J}\|_{\mathbb{L}^1}^2 \|\mathbf{u}^e\|^2 . \end{align} Substituting \eqref{3.15a} and \eqref{3.16a}-\eqref{3.18a} in \eqref{s49}, then using \eqref{gradu estimate} and the fact that $\| \nabla a \|_{\mathbb{L}^\infty} \leq \|\nabla \mathrm{J} \|_{\mathbb{L}^1}$, for $n =2$, we obtain \begin{subequations} \begin{align}\label{3.19a} \nu\|\nabla\mathbf{u}^e\|^2 & \leq \frac{3 m C_0}{8}\|\varphi^e\|^2 + \frac{1}{\nu} \left( \frac{2}{\lambda_1} \right)^{\frac{1}{2}} \|\mathbf{h}\|_{\mathbb{V}'_{\mathrm{div}}} \|\nabla\mathbf{u}^e\|^2 +\frac{6}{ m C_0} \|\nabla \mathrm{J} \|_{\mathbb{L}^1}^2 \|\mathbf{u}^e\|^2. \end{align} Combining \eqref{3.15b} and \eqref{3.16a}-\eqref{3.18a} and substituting them in \eqref{s49}, for $n=3$, we get \begin{align} \label{3.19b} \nu\|\nabla\mathbf{u}^e\|^2 & \leq \frac{3 m C_0}{8}\|\varphi^e\|^2 + \frac{1}{\nu} \left( \frac{4}{\sqrt{\lambda_1}} \right)^{\frac{1}{2}} \|\mathbf{h}\|_{\mathbb{V}'_{\mathrm{div}}} \|\nabla\mathbf{u}^e\|^2 +\frac{6}{ m C_0} \|\nabla \mathrm{J} \|_{\mathbb{L}^1}^2 \|\mathbf{u}^e\|^2. \end{align} \end{subequations} Now we estimate the terms on the right hand side of \eqref{3.14}. To estimate $(\mathbf{u}^e\cdot\nabla\varphi_1,\mathcal{B}^{-1}\varphi^e)$, we use an integration by parts, $\mathbf{u}^e\big|_{\partial\Omega}=0$ and the divergence free condition of $\mathbf{u}^e$ to obtain \begin{align*} (\mathbf{u}^e\cdot \nabla \varphi_1,\mathcal{B}^{-1}\varphi^e) = -(\mathbf{u}^e\cdot \nabla \mathcal{B}^{-1}\varphi^e,\varphi_1).
\end{align*} Using H\"older's, Ladyzhenskaya's, Poincar\'e's and Young's inequalities, we estimate the above term as \begin{align}\label{3.20a} |(\mathbf{u}^e\cdot \nabla \mathcal{B}^{-1}\varphi^e,\varphi_1)| & \leq \|\mathbf{u}^e\| \, \|\nabla \mathcal{B}^{-1}\varphi^e\| \, \|\varphi_1\|_{\mathrm{L}^\infty}\nonumber\\ &\leq \|\mathbf{u}^e\| \, \|\nabla \mathcal{B}^{-1}\varphi^e \| \nonumber\\ &\leq \mathopen{}\mathclose\bgroup\originalleft( \frac{1}{\sqrt{\lambda_1}} \aftergroup\egroup\originalright) \|\nabla\mathbf{u}^e\|\|\nabla \mathcal{B}^{-1}\varphi^e \|\nonumber\\ &\leq \frac{\nu}{2} \|\nabla\mathbf{u}^e\|^2 +\frac{1}{2\nu \lambda_1} \|\mathcal{B}^{-1/2}\varphi^e \|^2, \end{align} where $\lambda_1$ is the first eigenvalue of the Stokes operator $\mathrm{A}$. Next we estimate $(\mathbf{u}_2\cdot\nabla\varphi^e,\mathcal{B}^{-1}\varphi^e)$ in the following way: \begin{align}\label{3.21} |(\mathbf{u}_2\cdot\nabla\varphi^e,\mathcal{B}^{-1}\varphi^e)| &\leq |(\mathbf{u}_2\cdot\nabla \mathcal{B}^{-1}\varphi^e,\varphi^e)| \nonumber \\ &\leq \|\mathbf{u}_2\|_{\mathbb{L}^4} \|\nabla \mathcal{B}^{-1} \varphi^e\|_{\mathbb{L}^4}\|\varphi^e\| \nonumber \\ &\leq \frac{m C_0}{8}\|\varphi^e\|^2 + \frac{2}{m C_0}\|\nabla \mathcal{B}^{-1}\varphi^e \|_{\mathbb{L}^4}^2\|\mathbf{u}_2\|_{\mathbb{L}^4}^2 \nonumber \\ & \leq \frac{m C_0}{8}\|\varphi^e\|^2 + \frac{2C}{m C_0} \|\nabla \mathcal{B}^{-1}\varphi^e\|_{\mathbb{H}^1}^2 \|\mathbf{u}_2\|_{\mathbb{L}^4}^2 \end{align} where we used the embedding $\mathbb{H}^1 \hookrightarrow \mathbb{L}^4$. 
One can see that the $\mathrm{H}^2$-norm of $\zeta \in \mathrm{D}(\mathcal{B})$ is equivalent to the $\mathrm{L}^2$-norm of $(\mathcal{B} + \mathrm{I})\zeta $, that is, $$\| \zeta \|_{\mathrm{H}^2} \cong \|(\mathcal{B} + \mathrm{I})\zeta\|.$$ Now, since $ \mathcal{B}^{-1}\varphi^e \in \mathrm{D}(\mathcal{B})$ and $\mathcal{B}$ is a linear isomorphism, we have \begin{align*} \|\nabla \mathcal{B}^{-1}\varphi^e\|_{\mathbb{H}^1} \leq C\|\mathcal{B}^{-1}\varphi^e \|_{\mathrm{H}^2} \leq C \|(\mathcal{B}+\mathrm{I})\mathcal{B}^{-1}\varphi^e\| \leq C \|\varphi^e\|. \end{align*} Substituting this in \eqref{3.21} and using Ladyzhenskaya's inequality and \eqref{gradu estimate}, for $n=2$, we obtain \begin{subequations} \begin{align}\label{3.22} |(\mathbf{u}_2\cdot\nabla\varphi^e,\mathcal{B}^{-1}\varphi^e)| &\leq \frac{m C_0}{8}\|\varphi^e\|^2 + \frac{2C}{m C_0} \|\varphi^e\|^2 \, \|\mathbf{u}_2\|_{\mathbb{L}^4}^2 \nonumber \\ &\leq \frac{m C_0}{8}\|\varphi^e\|^2 + \frac{2 C }{m C_0}\|\varphi^e\|^2 \left( \frac{\sqrt{2}}{\nu ^2\sqrt{\lambda_1}} \right) \| \mathbf{h} \|^2 _{\mathbb{V}'_{\mathrm{div}}}, \end{align} and for $n=3$, we get \begin{align} \label{s50} |(\mathbf{u}_2\cdot\nabla\varphi^e,\mathcal{B}^{-1}\varphi^e )| \leq \frac{m C_0}{8}\|\varphi^e\|^2 + \frac{2C}{m C_0}\|\varphi^e\|^2 \left( \frac{4}{\sqrt{\lambda_1}} \right)^{\frac{1}{2}} \frac{1}{\nu^2} \| \mathbf{h} \|^2 _{\mathbb{V}'_{\mathrm{div}}} . \end{align} \end{subequations} It only remains to estimate $(\mathrm{J}\ast \varphi^e , \varphi^e)$.
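Before doing so, observe that $\varphi^e=\varphi_1-\varphi_2$ has zero spatial average, since both solutions belong to $\mathrm{V}\cap\mathrm{L}^2_{(k)}(\Omega)$ and therefore share the same mean:
\begin{align*}
\overline{\varphi^e}=\overline{\varphi_1}-\overline{\varphi_2}=k-k=0,
\end{align*}
so that $\mathcal{B}^{-1/2}\varphi^e$ is well defined.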
Since $\overline{\varphi^e}=0$, using the fact that $\|\mathcal{B}^{1/2} \varphi\|^2 = (\mathcal{B} \varphi,\varphi) = \|\nabla \varphi\|^2$ for all $\varphi \in \mathrm{D}(\mathcal{B})$, we can write \begin{align*} |(\mathrm{J}*\varphi^e,\varphi^e)|&=|(\mathcal{B}^{1/2}(\mathrm{J}*\varphi^e-\overline{\mathrm{J}*\varphi^e}), \mathcal{B}^{-1/2} \varphi^e)| \\ &\leq \| \mathcal{B}^{1/2}(\mathrm{J}*\varphi^e-\overline{\mathrm{J}*\varphi^e})\| \| \mathcal{B}^{-1/2} \varphi^e\| \\ &\leq \|\nabla \mathrm{J} * \varphi^e\| \| \mathcal{B}^{-1/2} \varphi^e\| \leq \|\nabla \mathrm{J}\|_{\mathbb{L}^1} \|\varphi^e\| \| \mathcal{B}^{-1/2} \varphi^e\| \\ &\leq \frac{C_0}{4} \|\varphi^e\|^2 + \frac{1}{C_0}\|\nabla \mathrm{J} \|_{\mathbb{L}^1}^2\| \mathcal{B}^{-1/2} \varphi^e\|^2, \end{align*} which implies \begin{align}\label{3.23} m|(\mathrm{J}*\varphi^e,\varphi^e)|\leq \frac{mC_0}{4} \|\varphi^e\|^2 + \frac{m}{C_0}\|\nabla \mathrm{J} \|_{\mathbb{L}^1}^2\| \mathcal{B}^{-1/2} \varphi^e\|^2. \end{align} Combining \eqref{3.20a}, \eqref{3.22} and \eqref{3.23} and substituting them in \eqref{3.14}, for $n=2$, we infer \begin{subequations} \begin{align} \label{3.24a} \frac{5m C_0}{8}\|\varphi^e\|^2 & \leq \frac{\nu}{2}\|\nabla\mathbf{u}^e\|^2 + \left( \frac{1}{2\nu \lambda_1}+\frac{m}{C_0}\|\nabla \mathrm{J} \|_{\mathbb{L}^1}^2 \right)\|\mathcal{B}^{-1/2}\varphi^e \|^2 + \frac{2 C }{\nu ^2m C_0} \left( \frac{2}{\lambda_1} \right)^{\frac{1}{2}} \| \mathbf{h} \|^2 _{\mathbb{V}'_{\mathrm{div}}}\|\varphi^e \|^2 .
\end{align} Combining \eqref{3.20a}, \eqref{s50} and \eqref{3.23} and substituting them in \eqref{3.14}, for $n=3$, we find \begin{align}\label{3.24b} \frac{5m C_0}{8} \|\varphi^e\|^2 &\leq \frac{\nu}{2}\|\nabla\mathbf{u}^e\|^2 + \left( \frac{1}{2\nu \lambda_1}+\frac{m}{C_0}\|\nabla \mathrm{J} \|_{\mathbb{L}^1}^2 \right) \|\mathcal{B}^{-1/2}\varphi^e\|^2 + \frac{2 C }{\nu^2 m C_0} \left( \frac{4}{\sqrt{\lambda_1}} \right)^{\frac{1}{2}}\| \mathbf{h} \|^2_{\mathbb{V}'_{\mathrm{div}}} \|\varphi^e\|^2 . \end{align} \end{subequations} Adding \eqref{3.19a} and \eqref{3.24a}, for $n=2$, we obtain \begin{subequations} \begin{align} &\frac{\nu}{2} \|\nabla\mathbf{u}^e\|^2 + \frac{m C_0}{4}\|\varphi^e\|^2 \leq \frac{1}{\nu}\left( \frac{2}{\lambda_1} \right)^{\frac{1}{2}} \|\mathbf{h}\|_{\mathbb{V}'_{\mathrm{div}}} \|\nabla\mathbf{u}^e\|^2 + \frac{6}{ m C_0} \| \nabla \mathrm{J}\|_{\mathbb{L}^1} ^2 \|\mathbf{u}^e\|^2 \nonumber \\ & \hspace{4cm}+ \left( \frac{1}{2 \nu \lambda_1}+\frac{m}{C_0}\|\nabla \mathrm{J} \|_{\mathbb{L}^1}^2 \right) \|\mathcal{B}^{-1/2}\varphi^e \|^2 + \frac{2 C }{\nu ^2m C_0} \left( \frac{2}{\lambda_1} \right)^{\frac{1}{2}} \| \mathbf{h} \|^2 _{\mathbb{V}'_{\mathrm{div}}}\|\varphi^e\|^2.
\end{align} Combining \eqref{3.19b} and \eqref{3.24b}, for $n =3$, we get \begin{align} & \frac{\nu}{2} \|\nabla\mathbf{u}^e\|^2 + \frac{m C_0}{4} \|\varphi^e\|^2 \leq \frac{1}{\nu} \left( \frac{4}{\sqrt{\lambda_1}} \right)^{\frac{1}{2}} \|\mathbf{h}\|_{\mathbb{V}'_{\mathrm{div}}}\|\nabla\mathbf{u}^e\|^2 +\frac{6}{m C_0} \| \nabla \mathrm{J}\|_{\mathbb{L}^1}^2 \|\mathbf{u}^e\|^2\nonumber \\ &\hspace{4cm}+ \left( \frac{1}{2 \nu\lambda_1} + \frac{m}{C_0}\|\nabla \mathrm{J}\|^2_{\mathbb{L}^1}\right) \|\mathcal{B}^{-1/2}\varphi^e \|^2 + \frac{2C }{\nu^2 m C_0} \left( \frac{4}{\sqrt{\lambda_1}} \right)^{\frac{1}{2}}\| \mathbf{h} \|^2_{\mathbb{V}'_{\mathrm{div}}} \|\varphi^e\|^2 . \end{align} \end{subequations} Now, using the continuous embedding $\mathrm{L}^2_{(0)} \hookrightarrow \mathrm{V}_0'$, that is, $ \|\mathcal{B}^{-1/2}\varphi^e \|^2 \leq C \| \varphi^e \|^2$, and Poincar\'e's inequality, we further obtain \begin{align}\label{3.25} &\left[ \lambda_1 \left( \frac{\nu}{2} - \frac{1}{\nu}\left( \frac{2}{\lambda_1} \right)^{\frac{1}{2}} \|\mathbf{h}\|_{\mathbb{V}'_{\mathrm{div}}} \right) - \frac{6}{ m C_0} \|\nabla \mathrm{J} \|_{\mathbb{L}^1}^2 \right] \| \mathbf{u}^e\|^2 \nonumber \\ & +\Bigg[\frac{mC_0}{4} -\frac{mC}{C_0}\|\nabla \mathrm{J}\|^2_{\mathbb{L}^1}- \frac{C}{2 \nu \lambda_1} - \frac{2C }{\nu ^2m C_0} \left( \frac{2}{\lambda_1} \right)^{\frac{1}{2}} \| \mathbf{h} \|^2 _{\mathbb{V}'_{\mathrm{div}}} \Bigg]\|\varphi^e\|^2\leq 0.
\end{align} From \eqref{3.25}, uniqueness for $n= 2$ follows provided the quantities in both brackets of the above inequality are strictly positive. Thus we conclude that, for \begin{align*} & (i) \ \nu^2 > \frac{2\sqrt{2}}{\sqrt{\lambda_1}} \|\mathbf{h}\|_{\mathbb{V}'_{\mathrm{div}}} + \frac{12 \nu }{ \lambda_1m C_0} \|\nabla \mathrm{J} \|_{\mathbb{L}^1}^2,\\ &(ii) \ (\nu m)^2 \left( \frac{C_0}{4} -\frac{C}{C_0} \|\nabla \mathrm{J}\|^2_{\mathbb{L}^1} \right)> \nu m \left(\frac{C}{\lambda_1}\right) + \frac{2 C }{ C_0} \left( \frac{2}{\lambda_1} \right)^{\frac{1}{2}} \| \mathbf{h} \|^2 _{\mathbb{V}'_{\mathrm{div}}} \ \text{ with }\ \|\nabla\mathrm{J}\|^2_{\mathbb{L}^1}<\frac{C^2_0}{4C}, \end{align*} uniqueness follows in two dimensions. Similarly, for $n=3$, we obtain \begin{align*} & \left[ \lambda_1 \Bigg(\frac{\nu}{2 }-\frac{1}{\nu} \left( \frac{4}{\sqrt{\lambda_1}} \right)^{\frac{1}{2}} \|\mathbf{h}\|_{\mathbb{V}'_{\mathrm{div}}} \Bigg)-\frac{6}{ m C_0} \|\nabla \mathrm{J} \|_{\mathbb{L}^1}^2 \right] \| \mathbf{u}^e\|^2 \nonumber \\ &+\Bigg[ \frac{mC_0}{4} - \frac{mC}{C_0} \|\nabla \mathrm{J}\|_{\mathbb{L}^1}^2 - \frac{C}{2\nu \lambda_1} - \frac{2 C }{\nu^2 m C_0} \left( \frac{4}{\sqrt{\lambda_1}} \right)^{\frac{1}{2}}\| \mathbf{h} \|^2_{\mathbb{V}'_{\mathrm{div}}} \Bigg] \|\varphi^e \|^2 \leq 0.
\end{align*} Hence, the uniqueness follows provided \begin{align*} &(i)\ \nu^2 > \left( \frac{16}{\sqrt{\lambda_1}} \right)^{\frac{1}{2}} \|\mathbf{h}\|_{\mathbb{V}'_{\mathrm{div}}} +\frac{12 \nu }{ \lambda_1m C_0} \|\nabla \mathrm{J} \|_{\mathbb{L}^1}^2, \\ &(ii)\ (\nu m)^2 \left( \frac{C_0}{4} -\frac{C}{C_0} \|\nabla \mathrm{J}\|_{\mathbb{L}^1}^2 \right)> \nu m \left(\frac{C}{\lambda_1}\right) + \frac{2 C }{ C_0} \left( \frac{4}{\sqrt{\lambda_1}} \right)^{\frac{1}{2}} \| \mathbf{h} \|^2 _{\mathbb{V}'_{\mathrm{div}}}, \ \text{ with }\ \|\nabla\mathrm{J}\|^2_{\mathbb{L}^1}<\frac{C^2_0}{4C}, \end{align*} which completes the proof. \end{proof} \subsection{Regularity of the weak solution} In this subsection, we establish regularity results for the weak solution to the system \eqref{3.2}. Let $(\mathbf{u}, \mu, \varphi ) \in \mathbb{V}_{\mathrm{div}} \times \mathrm{V} \times (\mathrm{V}\cap \mathrm{L}^2_{(k)}(\Omega))$, with $\varphi(x) \in (-1,1)$ a.e., be the unique weak solution of the system \eqref{3.2}. In the next theorem, we prove higher order regularity results for the system \eqref{3.2}. \begin{theorem}\label{regularity} Let $\mathbf{h}\in \mathbb{G}_{\mathrm{div}}$ and suppose the assumptions of Theorem \ref{unique-steady} are satisfied. Then the weak solution $(\mathbf{u}, \mu, \varphi)$ of the system \eqref{3.2} satisfies \begin{align}\label{s3} \mathbf{u}\in\mathbb{H}^2(\Omega),\ \ \mu \in \mathrm{H}^2(\Omega) \ \text{ and } \ \varphi \in \mathrm{W}^{1,p} (\Omega), \end{align} where $2\leq p < \infty$ if $n=2$ and $2 \leq p \leq 6$ if $n=3$. \end{theorem} \begin{proof} \textbf{Step 1:} Let $\mathbf{h} \in \mathbb{G}_{\mathrm{div}}$.
Then, from Theorems \ref{mainthm} and \ref{unique-steady}, we know that there exists a unique weak solution $(\mathbf{u}, \varphi, \mu)$ satisfying \begin{align} \mathbf{u} \in \mathbb{V}_{\mathrm{div}}, \quad \varphi \in \mathrm{V} \cap \mathrm{L}^2_{(k)} \quad \text{and} \quad \mu \in \mathrm{V}. \label{reg1} \end{align} Now, we recall the weak formulation \eqref{nonlin u rewritten}, that is, \begin{align} \label{reg2} \nu (\nabla \mathbf{u}, \nabla \mathbf{v}) + b(\mathbf{u},\mathbf{u}, \mathbf{v}) = - \left\langle \nabla a\frac{\varphi^2}{2} - (\mathrm{J}\ast \varphi)\nabla \varphi , \mathbf{v} \right\rangle+ \langle \mathbf{h},\mathbf{v} \rangle, \ \text{ for all }\ \mathbf{v} \in \mathbb{V}_{\mathrm{div}}, \end{align} where $\nabla a(x) = \int_\Omega \nabla \mathrm{J}(x-y) \/\mathrm{d}\/ y$. Using the weak regularity \eqref{reg1}, the bound $|\varphi (x)| \leq 1$ a.e. and Young's inequality for convolution, we observe that \begin{align*} \| (\nabla a) \varphi^2\|& \leq \|\nabla a\|_{\mathbb{L}^{\infty}}\|\varphi\|_{\mathrm{L}^4}^2\leq C ,\\ \|(\mathrm{J}\ast \varphi) \nabla \varphi\|& \leq \|\mathrm{J}\|_{\mathrm{L}^1}\|\varphi\|_{\mathrm{L}^\infty}\|\nabla \varphi\| \leq C. \end{align*} Hence, the right hand side of \eqref{reg2} belongs to $\mathbb{L}^2(\Omega)$. Then, from the regularity theory for the stationary Navier-Stokes equations (see Chapter II, \cite{MR0609732}), we have \begin{align} \mathbf{u} \in \mathbb{H}^2(\Omega). \label{reg4} \end{align} \vskip 0.1cm \noindent \textbf{Step 2:} From the first equation of \eqref{3.2}, we have \begin{align} m ( \nabla \mu, \nabla \psi) = (\mathbf{u} \cdot \nabla \varphi, \psi),\ \text{ for all }\ \psi \in \mathrm{V}. \label{reg3} \end{align} Using the Sobolev inequality, \eqref{reg4} and \eqref{reg1}, we observe that \begin{align} \|\mathbf{u} \cdot \nabla \varphi \| \leq C\|\mathbf{u}\|_{\mathbb{L}^\infty}\|\nabla \varphi\| \leq C \|\mathbf{u}\|_{\mathbb{H}^2} \|\nabla \varphi\| \leq C.
\end{align} Since $\frac{\partial \mu }{\partial \mathbf{n}}=0$, from the $\mathrm{L}^p$ regularity of the Neumann Laplacian (see Lemma \ref{Lp_reg}) for $p=2$, we conclude that \begin{align} \label{reg8} \mu \in \mathrm{H}^2(\Omega). \end{align} \vskip 0.1cm \noindent \textbf{Step 3:} An application of the Gagliardo-Nirenberg inequality together with \eqref{reg8} gives \begin{align}\label{s9} \|\nabla\mu\|_{\mathbb{L}^p}&\leq C\|\mu\|_{\mathrm{H}^2}^{1-\frac{1}{p}}\|\mu\|^{\frac{1}{p}} < \infty, \end{align} for $2\leq p < \infty$ in the case $n=2$, and \begin{align} \label{gradmu_n3} \|\nabla\mu\|_{\mathbb{L}^p} & \leq C\|\mu\|_{\mathrm{H}^2}^{\frac{5p-6}{4p}}\|\mu\|^{\frac{6-p}{4p}} < \infty, \end{align} for $2 \leq p \leq 6$ in the case $n=3$. Let us take the gradient of $\mu= a\varphi- \mathrm{J}*\varphi + \mathrm{F}'(\varphi)$, multiply it by $\nabla\varphi|\nabla\varphi|^{p-2}$, integrate the resulting identity over $\Omega$, and use (A9), H\"older's and Young's inequalities and Young's inequality for convolution to obtain \begin{align}\label{s5} \int_{\Omega}\nabla\varphi|\nabla\varphi|^{p-2}\cdot \nabla\mu\/\mathrm{d}\/ x&=\int_{\Omega}\nabla\varphi|\nabla\varphi|^{p-2}\cdot\left(a\nabla\varphi+\nabla a\varphi-\nabla\mathrm{J}*\varphi+\mathrm{F}''(\varphi)\nabla\varphi\right)\/\mathrm{d}\/ x\nonumber\\ &=\int_{\Omega}(a+\mathrm{F}''(\varphi))|\nabla\varphi|^p\/\mathrm{d}\/ x+\int_{\Omega}\nabla\varphi|\nabla\varphi|^{p-2}\cdot\left(\nabla a\varphi-\nabla\mathrm{J}*\varphi\right)\/\mathrm{d}\/ x\nonumber\\ &\geq C_0\int_{\Omega}|\nabla\varphi|^p\/\mathrm{d}\/ x-\left(\|\nabla a\|_{\mathbb{L}^{\infty}}+\|\nabla\mathrm{J}\|_{\mathbb{L}^1}\right) \|\varphi \|_{\mathrm{L}^p} \| \nabla \varphi \|_{\mathbb{L}^p}^{p-1} \nonumber\\ &\geq \frac{C_0}{2}\|\nabla\varphi\|_{\mathbb{L}^p}^p
-\frac{1}{p}\left(\frac{2(p-1)}{C_0p}\right)^{p-1}\left(\|\nabla a\|_{\mathbb{L}^{\infty}}+\|\nabla\mathrm{J}\|_{\mathbb{L}^1}\right)^p\|\varphi\|_{\mathrm{L}^p}^p. \end{align} Here, in the last step, we applied Young's inequality in the form $AB^{p-1}\leq \frac{\varepsilon(p-1)}{p}B^p+\frac{1}{p\varepsilon^{p-1}}A^p$ with $\varepsilon=\frac{C_0p}{2(p-1)}$. Using Young's inequality again, we also have \begin{align}\label{s6} \left| \int_{\Omega}\nabla\varphi|\nabla\varphi|^{p-2}\cdot \nabla\mu\/\mathrm{d}\/ x\right|&\leq \int_{\Omega}|\nabla\varphi|^{p-1}|\nabla\mu|\/\mathrm{d}\/ x\nonumber\\&\leq \frac{C_0}{4}\|\nabla\varphi\|_{\mathbb{L}^p}^p+\frac{1}{p}\left(\frac{4(p-1)}{C_0p}\right)^{p-1}\|\nabla\mu\|_{\mathbb{L}^p}^p. \end{align} Combining \eqref{s5} and \eqref{s6}, we get \begin{align} \label{s7} \frac{C_0}{4}\|\nabla\varphi\|_{\mathbb{L}^p}^p\leq\frac{1}{p}\left(\frac{2(p-1)}{C_0p}\right)^{p-1}\left[2^{p-1}\|\nabla\mu\|_{\mathbb{L}^p}^p+ \left(\|\nabla a\|_{\mathbb{L}^{\infty}}+\|\nabla\mathrm{J}\|_{\mathbb{L}^1}\right)^p\|\varphi\|_{\mathrm{L}^p}^p\right]. \end{align} Using \eqref{s9} and \eqref{gradmu_n3}, we get \begin{align*} \|\nabla\varphi\|_{\mathbb{L}^p} < \infty, \end{align*} for $2\leq p < \infty$ if $n=2$, and $2 \leq p \leq 6$ if $n=3$. \end{proof} \section{Exponential Stability}\label{se4}\setcounter{equation}{0} The stability analysis of nonlinear dynamical systems has a long history, starting from the works of Lyapunov. For the solutions of ordinary or partial differential equations describing dynamical systems, different kinds of stability may be considered. One of the most important is the stability of solutions near a point of equilibrium (stationary solutions).
In the qualitative theory of ordinary and partial differential equations and in control theory, Lyapunov's notion of (global) asymptotic stability of an equilibrium is a key concept. It is important to note that asymptotic stability does not quantify the rate of convergence. In fact, there is a stronger form of stability which demands an exponential rate of convergence. The notion of exponential stability is far stronger, as it assures a minimum rate of decay, that is, an estimate of how fast the solutions converge to the equilibrium. In particular, exponential stability implies uniform asymptotic stability. Stability analysis of fluid dynamic models has been one of the essential areas of applied mathematics, with a good number of applications in engineering and physics (cf. \cite{MR0609732, MR3186318}, etc.). In this section, we consider the singular potential $\mathrm{F}$ in $(-1,1)$ of the form \begin{align*} \mathrm{F} (\varphi) = \frac{\theta}{2} ((1+ \varphi) \log (1+\varphi) + (1-\varphi) \log (1-\varphi)) ,\quad \varphi \in (-1,1). \end{align*} Observe that in this case the potential $\mathrm{F}$ is convex and $\kappa =0$ in \eqref{decomp of F}. For such potentials, we prove that the stationary solution $(\mathbf{u}^e,\varphi^e)$ of the system \eqref{steadysys} with constant mobility parameter $m$ and constant kinematic viscosity $\nu$ is exponentially stable in 2-D. That is, our aim is to establish that: \begin{itemize} \item there exist constants $M>0$ and $\alpha>0$ such that \begin{align*} \|\mathbf{u}(t)-\mathbf{u}^e\|^2+\|\varphi (t)-\varphi^e\|^2\leq M e^{-\alpha t}, \end{align*} for all $t\geq 0$.
\end{itemize} \subsection{Global solvability of the two dimensional CHNS system} We consider the following initial-boundary value problem: \begin{equation}\label{4.1} \left\{ \begin{aligned} \varphi_t+ \mathbf{u} \cdot \nabla \varphi &= m\Delta \mu, \ \text{ in }\ \Omega\times(0,T), \\ \mu &= a \varphi - \mathrm{J}*\varphi + \mathrm{F}'(\varphi), \ \text{ in }\ \Omega\times(0,T), \\ \mathbf{u}_t -\nu \Delta \mathbf{u} + (\mathbf{u} \cdot \nabla) \mathbf{u}+ \nabla \uppi &= \mu \nabla \varphi +\mathbf{h}, \ \text{ in }\ \Omega\times(0,T),\\ \text{div }\mathbf{u}& = 0, \ \text{ in }\ \Omega\times(0,T), \\ \frac{\partial \mu}{ \partial \mathbf{n}}=0,& \ \mathbf{u}=0, \ \text{ on }\ \partial\Omega\times[0,T], \\ \mathbf{u}(0) = \mathbf{u}_0,& \quad \varphi(0)=\varphi_0, \ \text{ in }\ \Omega. \end{aligned} \right. \end{equation} Let us now recall the global solvability results available in the literature for the system \eqref{4.1}. We first give the definition of a weak solution for the system \eqref{4.1}. \begin{definition}[weak solution, \cite{MR3019479}] Let $\mathbf{u}_0\in\mathbb{G}_{\mathrm{div}}$, $\varphi_0\in \mathrm{H}$ with $\mathrm{F}(\varphi_0)\in\mathrm{L}^1(\Omega)$ and $0<T<\infty$ be given.
A couple $(\mathbf{u},\varphi)$ is a \emph{weak solution} to the system \eqref{4.1} on $[0,T]$ corresponding to $[\mathbf{u}_0,\varphi_0]$ if \begin{itemize} \item [(i)] $\mathbf{u},\varphi$ and $\mu$ satisfy \begin{equation}\label{sol} \left\{ \begin{aligned} & \mathbf{u} \in \mathrm{L}^{\infty}(0,T;\mathbb{G}_{\mathrm{div}}) \cap \mathrm{L}^2(0,T;\mathbb{V}_{\mathrm{div}}), \\ & \mathbf{u}_t \in \mathrm{L}^{4/3}(0,T;\mathbb{V}_{\mathrm{div}}'),\text{ if } d=3, \\ & \mathbf{u}_t \in \mathrm{L}^{2}(0,T;\mathbb{V}_{\mathrm{div}}'),\text{ if } d=2, \\ & \varphi \in \mathrm{L}^{\infty}(0,T;\mathrm{H}) \cap \mathrm{L}^2(0,T;\mathrm{V}), \\ & \varphi_t \in \mathrm{L}^{2}(0,T;\mathrm{V}'), \\ & \mu = a \varphi - \mathrm{J}*\varphi + \mathrm{F}'(\varphi) \in \mathrm{L}^2(0,T;\mathrm{V}), \end{aligned} \right. \end{equation} and $$\varphi \in \mathrm{L}^\infty (Q), \quad |\varphi (x,t)| <1 \ \text{ a.e. } (x,t) \in Q:=\Omega \times (0,T);$$ \item [(ii)] for every $\psi\in\mathrm{V}$, every $\mathbf{v} \in \mathbb{V}_{\mathrm{div}}$ and for almost every $t\in(0,T)$, we have \begin{align} \langle \varphi_t,\psi\rangle + m(\nabla \mu, \nabla \psi) &=\int_{\Omega}(\mathbf{u}\cdot\nabla\psi)\varphi\/\mathrm{d}\/ x,\\ \langle \mathbf{u}_t,\mathbf{v}\rangle +\nu(\nabla\mathbf{u},\nabla\mathbf{v})+b(\mathbf{u},\mathbf{u},\mathbf{v})&=-\int_{\Omega}(\varphi\nabla\mu)\cdot\mathbf{v}\/\mathrm{d}\/ x+\langle\mathbf{h},\mathbf{v}\rangle; \end{align} \item [(iii)] the initial conditions $\mathbf{u}(0)=\mathbf{u}_0,\ \varphi(0)=\varphi_{0}$ hold in the weak sense. \end{itemize} \end{definition} Next, we discuss the existence and uniqueness results for weak solutions of the system \eqref{4.1} available in the literature. \begin{theorem}[Existence, Theorem 1, \cite{MR3019479}]\label{exist} Assume that (A1)-(A7) are satisfied for some fixed positive integer $q$.
Let $\mathbf{u}_0 \in \mathbb{G}_{\mathrm{div}}$, $\varphi_0 \in \mathrm{L}^\infty(\Omega)$ be such that $\mathrm{F}(\varphi_0) \in \mathrm{L}^1(\Omega)$ and $\mathbf{h} \in \mathrm{L}^2_{\text{loc}}([0,\infty); \mathbb{V}_{\mathrm{div}}')$. In addition, assume that $|\overline{\varphi_0}|<1$. Then, for every given $T>0$, there exists a weak solution $(\mathbf{u},\varphi)$ to the system \eqref{4.1} on $[0,T]$ such that $\overline{\varphi}(t) = \overline{\varphi_0}$ for all $t \in [0,T]$ and \begin{align*} \varphi \in \mathrm{L}^\infty (0,T; \mathrm{L}^{2+2q}(\Omega)). \end{align*} Furthermore, setting \begin{align}\mathscr{E}(\mathbf{u}(t),\varphi(t)) = \frac{1}{2} \|\mathbf{u}(t)\|^2 + \frac{1}{4} \int_\Omega \int_\Omega \mathrm{J}(x-y) (\varphi(x,t) - \varphi(y,t))^2 \/\mathrm{d}\/ x \/\mathrm{d}\/ y + \int_\Omega \mathrm{F}(\varphi(t))\/\mathrm{d}\/ x,\end{align} the following energy estimate holds: \begin{align}\label{energy} \mathscr{E}(\mathbf{u}(t),\varphi(t)) + \int_s^t \left(2 \|\sqrt{\nu (\varphi)} D \mathbf{u}(\tau)\|^2 + m\| \nabla\mu(\tau) \|^2 \right)\/\mathrm{d}\/ \tau \leq \mathscr{E}(\mathbf{u}(s),\varphi(s)) + \int_s^t \langle \mathbf{h}(\tau), \mathbf{u}(\tau) \rangle\/\mathrm{d}\/ \tau, \end{align} for all $t \geq s$ and for a.a. $s \in (0,\infty)$. If $d=2$, the weak solution $(\mathbf{u},\varphi)$ satisfies the following energy identity: $$\frac{\/\mathrm{d}\/}{\/\mathrm{d}\/ t}\mathscr{E}(\mathbf{u}(t),\varphi(t)) + 2\|\sqrt{\nu (\varphi)} D \mathbf{u}(t) \|^2+ m\| \nabla \mu(t) \|^2 = \langle \mathbf{h}(t) , \mathbf{u}(t) \rangle,$$ that is, equality in \eqref{energy} holds for every $t \geq 0.$ \end{theorem} \begin{theorem}[Uniqueness, Theorem 3, \cite{MR3518604}]\label{unique} Let $d=2$ and assume that (A1)-(A7) are satisfied for some fixed positive integer $q$.
Let $\mathbf{u}_0 \in \mathbb{G}_{\mathrm{div}}$, $\varphi_0 \in \mathrm{L}^\infty (\Omega)$ with $\mathrm{F}(\varphi_0) \in \mathrm{L}^1(\Omega)$, $|\overline{\varphi_0}|<1$ and $\mathbf{h} \in \mathrm{L}^2_{\text{loc}}([0,\infty);\mathbb{V}_{\mathrm{div}}')$. Then, the weak solution $(\mathbf{u},\varphi)$ corresponding to $(\mathbf{u}_0,\varphi_0)$ given by Theorem \ref{exist} is unique. Furthermore, for $i=1,2$, let $\mathbf{z}_i := (\mathbf{u}_i,\varphi_i)$ be two weak solutions corresponding to two initial data $\mathbf{z}_{0i} := (\mathbf{u}_{0i},\varphi_{0i})$ and external forces $\mathbf{h}_i$, with $\mathbf{u}_{0i} \in \mathbb{G}_{\mathrm{div}}$, $\varphi_{0i} \in \mathrm{L}^\infty (\Omega)$ such that $\mathrm{F}(\varphi_{0i}) \in \mathrm{L}^1(\Omega)$, $|\overline{\varphi_{0i}}| < \eta$, for some constant $\eta \in [0,1)$, and $\mathbf{h}_i \in \mathrm{L}^2_{\text{loc}}([0,\infty);\mathbb{V}_{\mathrm{div}}')$. Then the following continuous dependence estimate holds: \begin{align*} &\| \mathbf{u}_2(t) - \mathbf{u}_1(t) \|^2 + \| \varphi_2(t) - \varphi_1(t) \|^2_{\mathrm{V}'} \\ &\quad + \int_0^t \left( \frac{C_0}{2} \| \varphi_2(\tau) - \varphi_1(\tau) \|^2 + \frac{\nu}{4} \|\nabla( \mathbf{u}_2(\tau) - \mathbf{u}_1(\tau)) \|^2 \right) \/\mathrm{d}\/ \tau \\ &\leq \left( \|\mathbf{u}_2(0) - \mathbf{u}_1(0) \|^2 + \| \varphi_2(0) - \varphi_1(0) \|^2_{\mathrm{V}'} \right) \Lambda_0(t) \\ &\quad + |\overline{\varphi}_2(0) - \overline{\varphi}_1(0)| \mathbb{Q} \left( \mathscr{E}(\mathbf{z}_{01}),\mathscr{E}(\mathbf{z}_{02}),\| \mathbf{h}_1 \|_{\mathrm{L}^2(0,t;\mathbb{V}_{\mathrm{div}}')},\| \mathbf{h}_2 \|_{\mathrm{L}^2(0,t;\mathbb{V}_{\mathrm{div}}')}, \eta \right) \Lambda_1(t) \\ &\quad + \| \mathbf{h}_2 - \mathbf{h}_1 \|^2_{\mathrm{L}^2(0,t;\mathbb{V}_{\mathrm{div}}')} \Lambda_2(t) , \end{align*} for all $t \in [0,T]$,
where $\Lambda_0(t)$, $\Lambda_1(t)$ and $\Lambda_2(t)$ are continuous functions which depend on the norms of the two solutions. The functions $\Lambda_i(t)$ also depend on $\mathrm{F}$, $\mathrm{J}$ and $\Omega$, and $\mathbb{Q}$ depends on $\mathrm{F}$, $\mathrm{J}$, $\Omega$ and $\eta$. \end{theorem} \begin{remark}[Remark 3, \cite{MR3019479}] The above theorems also imply $\mathbf{u}\in\mathrm{C}([0,T];\mathbb{G}_{\mathrm{div}})$ and $\varphi\in \mathrm{C}([0,T];\mathrm{H})$, for all $T>0$. \end{remark} Now, we prove that the stationary solution of \eqref{4.1} is exponentially stable in two dimensions. Let $(\mathbf{u}^e, \varphi^e)$ be the steady-state solution of the system \eqref{3.2}. From Theorem \ref{mainthm}, we know that there exists a unique weak solution of the system \eqref{3.2}. Recall that for $\mathbf{h}\in\mathbb{G}_{\mathrm{div}}$, the weak solution $(\mathbf{u}^e, \mu^e, \varphi^e)$ of the system \eqref{3.2} has the following regularity: \begin{align*} \nabla \varphi^e \in\mathbb{L}^p(\Omega),\ \mu^{e} \in \mathrm{H}^2(\Omega)\ \text{ and } \ \mathbf{u}^e\in\mathbb{H}^2(\Omega), \qquad 2 \leq p < \infty. \end{align*} Now, in the system \eqref{4.1}, we assume that $\mathbf{h}\in\mathbb{G}_{\mathrm{div}}$ is independent of time and that the initial data satisfy $\mathbf{u}_0\in\mathbb{G}_{\mathrm{div}}$, $\varphi_0\in\mathrm{L}^{\infty}(\Omega)$. Then, there exists a unique weak solution $(\mathbf{u},\varphi)$ of \eqref{4.1} with the regularity given in \eqref{sol}. \begin{theorem}\label{thmexp} Let $\mathbf{u}_0\in\mathbb{G}_{\mathrm{div}}$, $\varphi_0\in\mathrm{L}^{\infty}(\Omega)$ and $\mathbf{h}\in\mathbb{G}_{\mathrm{div}}$.
Then, under the assumptions of Theorem \ref{unique-steady}, for \begin{align*} &(i)\ \nu^2>\frac{4}{\lambda_1 }\|\nabla\mathbf{u}^e\|^2,\\ &(ii)\ (C_0-\|\mathrm{J}\|_{\mathrm{L}^1})^2>2 C_{\Omega}^2\left(\frac{1}{2m}\|\mathbf{u}^e\|_{\mathbb{L}^{\infty}}^2+\frac{4}{\nu \sqrt{\lambda_1}}\|\nabla\mu^e\|_{\mathbb{L}^4}^2\right), \ \text{ with }\ C_0>\|\mathrm{J}\|_{\mathrm{L}^1}, \\ &(iii) \ \overline{\varphi}_0 = \overline{\varphi}^e, \end{align*} the stationary solution $(\mathbf{u}^e, \varphi^e)$ of \eqref{3.2}, with the regularity given in Theorem \ref{regularity}, is exponentially stable. That is, there exist constants $M>0$ and $\varrho>0$ such that \begin{align*} \|\mathbf{u}(t)-\mathbf{u}^e\|^2+\|\varphi(t)-\varphi^e\|^2\leq M e^{-\varrho t},\end{align*} for all $t\geq 0$, where $(\mathbf{u}, \varphi)$ is the unique weak solution of the system \eqref{4.1}, $$M=\frac{\|\mathbf{y}_0\|^2 + 4\|\mathrm{J}\|_{\mathrm{L}^{1}}(\|\varphi_0\|^2+\|\varphi^{e}\|^2)+ \|\mathrm{F}(\varphi_0) \|_{\mathrm{L}^1} + \|\mathrm{F}(\varphi^e)\|_{\mathrm{L}^1} + \|\mathrm{F}'(\varphi^e)\| (\|\varphi_0\|+\|\varphi^{e}\|)}{\min\{(C_0-\|\mathrm{J}\|_{\mathrm{L}^1}),1\}}$$ and \begin{align} \varrho=\min\left\{\left(\lambda_1\nu-\frac{4}{\nu} \|\nabla\mathbf{u}^e\|^2\right) ,\left[\frac{m(C_0-\|\mathrm{J}\|_{\mathrm{L}^1})}{2C_{\Omega}^2}-\frac{1}{(C_0-\|\mathrm{J}\|_{\mathrm{L}^1})}\left(\|\mathbf{u}^e\|_{\mathbb{L}^{\infty}}^2+\frac{4}{\nu \sqrt{\lambda_1}}\|\nabla\mu^e\|_{\mathbb{L}^4}^2\right)\right]\right\}>0.
\end{align} From the hypotheses of Theorems \ref{exist} and \ref{unique}, the setting of Theorem \ref{mainthm} (see also Lemma \ref{browderproof}) and Remark \ref{F'phi}, note that the quantity $M$ is finite. \end{theorem} \begin{proof} Let us define $\mathbf{y}:=\mathbf{u}-\mathbf{u}^e$, $\psi:=\varphi- \varphi^e$, $\widetilde{\mu} := \mu - \mu^e$ and $\widetilde{\uppi} :=\uppi - \uppi^e$. Then we know that $(\mathbf{y},\psi)$ satisfies the following system in the weak sense: \begin{eqnarray}\label{4.3} \left\{ \begin{aligned} \psi_t + \mathbf{y} \cdot \nabla \psi + \mathbf{y} \cdot \nabla \varphi^e + \mathbf{u}^e \cdot \nabla \psi =& m\Delta \widetilde{\mu}, \ \text{ in }\ \Omega\times (0,T),\\ \widetilde{\mu}=& a\psi - \mathrm{J}*\psi + \mathrm{F}'(\psi+\varphi^e )- \mathrm{F}'(\varphi^e),\\ \mathbf{y}_t- \nu \Delta \mathbf{y} + (\mathbf{y}\cdot \nabla ) \mathbf{y} + (\mathbf{y} \cdot \nabla ) \mathbf{u}^e+ (\mathbf{u}^e \cdot \nabla ) \mathbf{y} + \nabla \widetilde{\uppi} =& \widetilde{\mu} \nabla \psi + \widetilde{\mu} \nabla \varphi^e + \mu^e \nabla \psi,\\&\qquad \qquad \qquad \ \text{ in }\ \Omega\times (0,T), \\ \text{div }\mathbf{y} = &0, \ \text{ in }\ \Omega\times (0,T),\\ \frac{\partial \widetilde{\mu}}{ \partial \mathbf{n}}=0, \ \mathbf{y}=&0, \ \text{ on }\ \partial \Omega\times(0,T),\\ \mathbf{y}(0) = \mathbf{y}_0, \ \psi(0)=&\psi_0, \ \text{ in }\ \Omega. \end{aligned} \right.
\end{eqnarray} Now consider the third equation of \eqref{4.3} and take the inner product with $\mathbf{y}(\cdot)$ to obtain \begin{align} \label{4.4} &\frac{1}{2}\frac{\/\mathrm{d}\/}{\/\mathrm{d}\/ t}\|\mathbf{y}(t)\|^2+ \nu \|\nabla \mathbf{y}(t)\|^2 + b(\mathbf{y}(t),\mathbf{u}^e,\mathbf{y}(t)) \nonumber\\&\quad= (\widetilde{\mu}(t)\nabla\psi(t),\mathbf{y}(t))+ (\widetilde{\mu}(t) \nabla \varphi^e,\mathbf{y}(t)) + (\mu^e \nabla \psi(t),\mathbf{y}(t)), \end{align} where we used the fact that $b(\mathbf{y},\mathbf{y},\mathbf{y})=b(\mathbf{u}^e,\mathbf{y},\mathbf{y})=0$ and $(\nabla\widetilde{\uppi},\mathbf{y})=(\widetilde{\uppi},\nabla\cdot\mathbf{y})=0$. An integration by parts yields \begin{align*} (\widetilde{\mu}\nabla\psi,\mathbf{y})=-(\psi\nabla\widetilde{\mu},\mathbf{y})-(\psi\widetilde{\mu},\nabla\cdot\mathbf{y})=-(\psi\nabla\widetilde{\mu},\mathbf{y}), \end{align*} where we used the boundary data and the divergence-free condition on $\mathbf{y}$. Similarly, we have $(\widetilde{\mu} \nabla \varphi^e,\mathbf{y}) =-(\varphi^e\nabla \widetilde{\mu} ,\mathbf{y})$ and $ (\mu^e \nabla \psi,\mathbf{y})=-( \psi\nabla \mu^e ,\mathbf{y})$. Thus, from \eqref{4.4}, we have \begin{align} \label{4.5} &\frac{1}{2}\frac{\/\mathrm{d}\/}{\/\mathrm{d}\/ t}\|\mathbf{y}(t)\|^2+ \nu \|\nabla \mathbf{y}(t)\|^2 + b(\mathbf{y}(t),\mathbf{u}^e,\mathbf{y}(t)) \nonumber\\&\quad = -(\psi(t)\nabla\widetilde{\mu}(t),\mathbf{y}(t))-(\varphi^e\nabla \widetilde{\mu}(t) ,\mathbf{y}(t))-( \psi(t)\nabla \mu^e ,\mathbf{y}(t)). \end{align} Taking the inner product of the first equation in \eqref{4.3} with $\widetilde{\mu}(\cdot)$, we obtain \begin{align} \label{4.6} (\psi_t(t),\widetilde{\mu}(t) )+ m\|\nabla \widetilde{\mu}(t)\|^2= -( \mathbf{y}(t) \cdot \nabla \psi (t), \widetilde{\mu}(t))- (\mathbf{y}(t) \cdot \nabla \varphi^e, \widetilde{\mu}(t)) - (\mathbf{u}^e \cdot \nabla \psi(t), \widetilde{\mu}(t)).
\end{align} Using an integration by parts, divergence free condition and boundary value of $\mathbf{y}$, we get \begin{align*} ( \mathbf{y} \cdot \nabla \psi ,\widetilde{\mu})=-(\psi\nabla\cdot\mathbf{y},\widetilde{\mu})-(\psi\nabla\widetilde{\mu},\mathbf{y})=-(\psi\nabla\widetilde{\mu},\mathbf{y}). \end{align*} Similarly, we have $(\mathbf{y} \cdot \nabla \varphi^e, \widetilde{\mu}) =-( \varphi^e \nabla \widetilde{\mu}, \mathbf{y}) $ and $ (\mathbf{u}^e \cdot \nabla \psi, \widetilde{\mu})= -( \psi \nabla \widetilde{\mu}, \mathbf{u}^e)$. Thus, from \eqref{4.6}, it is immediate that \begin{align} \label{4.7} (\psi_t(t),\widetilde{\mu} (t))+ m\|\nabla \widetilde{\mu}(t)\|^2= (\psi(t)\nabla\widetilde{\mu}(t),\mathbf{y}(t))+( \varphi^e \nabla \widetilde{\mu}(t), \mathbf{y}(t)) + ( \psi (t) \nabla \widetilde{\mu}(t), \mathbf{u}^e). \end{align} Adding \eqref{4.5} and \eqref{4.7}, we infer that \begin{align}\label{4.8} & \frac{1}{2}\frac{\/\mathrm{d}\/}{\/\mathrm{d}\/ t}\|\mathbf{y}(t)\|^2 + \nu \|\nabla \mathbf{y}(t)\|^2 +(\psi_t(t),\widetilde{\mu}(t) )+ m\|\nabla \widetilde{\mu}(t)\|^2\nonumber\\ &\quad= - b(\mathbf{y}(t),\mathbf{u}^e,\mathbf{y}(t))-( \psi(t)\nabla \mu^e ,\mathbf{y}(t))+ ( \psi(t) \nabla \widetilde{\mu}(t), \mathbf{u}^e). 
\end{align} We estimate the term $(\psi_t,\widetilde{\mu} )$ from \eqref{4.8} as \begin{align}\label{4.10} (\psi_t, \widetilde{\mu})&= (\psi_t,a\psi - \mathrm{J}*\psi + \mathrm{F}'(\psi+\varphi^e )- \mathrm{F}'(\varphi^e) ) \\ \nonumber &=\frac{\/\mathrm{d}\/}{\/\mathrm{d}\/ t} \mathopen{}\mathclose\bgroup\originalleft\{ \frac{1}{2} \|\sqrt{a} \psi \|^2 -\frac{1}{2} (\mathrm{J}*\psi,\psi) + \int_\Omega \mathrm{F}(\psi+\varphi^e) \/\mathrm{d}\/ x- (\mathrm{F}'(\varphi^e),\psi)\aftergroup\egroup\originalright\} \nonumber\\ &= \frac{\/\mathrm{d}\/}{\/\mathrm{d}\/ t} \mathopen{}\mathclose\bgroup\originalleft\{ \frac{1}{2} \|\sqrt{a} \psi \|^2 -\frac{1}{2} (\mathrm{J}*\psi,\psi) + \int_\Omega \mathrm{F}(\psi+\varphi^e) \/\mathrm{d}\/ x-\int_\Omega \mathrm{F}(\varphi^e) \/\mathrm{d}\/ x- \int_{\Omega}\mathrm{F}'(\varphi^e)\psi\/\mathrm{d}\/ x\aftergroup\egroup\originalright\}, \label{6.16} \end{align} since $\frac{\/\mathrm{d}\/}{\/\mathrm{d}\/ t}\mathopen{}\mathclose\bgroup\originalleft(\int_\Omega \mathrm{F}(\varphi^e) \/\mathrm{d}\/ x\aftergroup\egroup\originalright)=0$. Using H\"older's, Ladyzhenskaya's and Young's inequalities, we estimate $b(\mathbf{y},\mathbf{u}^e,\mathbf{y})$ as \begin{align}\label{4.13} |b(\mathbf{y},\mathbf{u}^e,\mathbf{y})|\leq \|\nabla\mathbf{u}^e\|\|\mathbf{y}\|_{\mathbb{L}^4}^2\leq \sqrt{2}\|\nabla\mathbf{u}^e\|\|\mathbf{y}\|\|\nabla\mathbf{y}\|\leq \frac{\nu}{4}\|\nabla\mathbf{y}\|^2+\frac{2}{\nu}\|\nabla\mathbf{u}^e\|^2\|\mathbf{y}\|^2. 
\end{align} We estimate the term $( \psi\nabla \mu^e ,\mathbf{y})$ from \eqref{4.8} using H\"older's, Ladyzhenskaya's, Poincar\'e's and Young's inequalities as \begin{align}\label{4.14} |( \psi\nabla \mu^e ,\mathbf{y})|&\leq \|\psi\|\|\nabla\mu^e\|_{\mathbb{L}^4}\|\mathbf{y}\|_{\mathbb{L}^4}\leq \sqrt{2}\|\psi\|\|\nabla\mu^e\|_{\mathbb{L}^4}\|\mathbf{y}\|^{1/2}\|\nabla\mathbf{y}\|^{1/2}\nonumber\\ &\leq \sqrt{2} \lambda_1^{-1/4}\|\psi\| \|\nabla\mu^e\|_{\mathbb{L}^4} \|\nabla\mathbf{y}\|\nonumber\\ &\leq \frac{\nu}{4}\|\nabla\mathbf{y}\|^2+ \frac{2}{\nu \sqrt{\lambda_1}}\|\nabla\mu^e\|_{\mathbb{L}^4}^2 \|\psi\|^2. \end{align} Similarly, we estimate the term $ ( \psi \nabla \widetilde{\mu}, \mathbf{u}^e)$ from \eqref{4.8} as \begin{align}\label{4.15} |( \psi \nabla \widetilde{\mu}, \mathbf{u}^e)|&\leq\|\psi\|\|\nabla \widetilde{\mu}\|\|\mathbf{u}^e\|_{\mathbb{L}^{\infty}}\leq \frac{m}{2}\|\nabla \widetilde{\mu}\|^2+\frac{1}{2m}\|\mathbf{u}^e\|_{\mathbb{L}^{\infty}}^2\|\psi\|^2. \end{align} Combining \eqref{6.16}--\eqref{4.15} and substituting in \eqref{4.8}, we obtain \begin{align} \label{4.16} & \frac{\/\mathrm{d}\/}{\/\mathrm{d}\/ t}\mathopen{}\mathclose\bgroup\originalleft\{\frac{1}{2}\|\mathbf{y}\|^2 + \frac{1}{2} \|\sqrt{a} \psi \|^2 -\frac{1}{2} (\mathrm{J}*\psi,\psi) + \int_\Omega \mathrm{F}(\psi+\varphi^e) \/\mathrm{d}\/ x-\int_\Omega \mathrm{F}(\varphi^e) \/\mathrm{d}\/ x- \int_{\Omega}\mathrm{F}'(\varphi^e)\psi\/\mathrm{d}\/ x\aftergroup\egroup\originalright\} \nonumber \\ &+ \frac{\nu}{2} \|\nabla \mathbf{y}\|^2 + \frac{m}{2}\|\nabla \widetilde{\mu}\|^2 \leq \mathopen{}\mathclose\bgroup\originalleft(\frac{1}{2m}\|\mathbf{u}^e\|_{\mathbb{L}^{\infty}}^2+\frac{2}{\nu \sqrt{\lambda_1}}\|\nabla\mu^e\|_{\mathbb{L}^4}^2\aftergroup\egroup\originalright)\|\psi\|^2+\frac{2}{\nu}\|\nabla\mathbf{u}^e\|^2\|\mathbf{y}\|^2.
\end{align} Using Taylor's series expansion and Assumption (A9), we also obtain \begin{align}\label{433} (\widetilde{\mu},\psi)&=(a\psi - \mathrm{J}*\psi + \mathrm{F}'(\psi+\varphi^e )- \mathrm{F}'(\varphi^e), \psi)\nonumber\\ &=(a\psi+\mathrm{F}''(\varphi^e +\theta\psi)\psi,\psi)-(\mathrm{J}*\psi,\psi)\nonumber\\ &\geq (C_0-\|\mathrm{J}\|_{\mathrm{L}^1})\|\psi\|^2, \end{align} for some $0<\theta<1$. Observe that $(\overline{\tilde{\mu}}, \psi)=\overline{\tilde{\mu}}(1,\psi)=0$, since $\overline{\varphi}_0=\overline{\varphi}= \overline{\varphi}^e$. Thus an application of the Poincar\'e inequality yields \begin{align}\label{exp1} (\tilde{\mu}, \psi)=(\tilde{\mu}-\overline{\tilde{\mu}}, \psi)\leq\|\tilde{\mu}-\overline{\tilde{\mu}}\|\|\psi\|\leq C_\Omega\|\psi\| \|\nabla \tilde{\mu}\|. \end{align} Hence, from \eqref{433} and \eqref{exp1}, we get \begin{align} \label{exp3} \|\psi\| \leq \frac{C_\Omega}{C_0 - \|\mathrm{J}\|_{\mathrm{L}^1}} \|\nabla \tilde{\mu}\|, \end{align} which, combined with \eqref{exp1}, implies that \begin{align} (\tilde{\mu}, \psi) \leq \frac{C_\Omega^2}{C_0 - \|\mathrm{J}\|_{\mathrm{L}^1}}\|\nabla \tilde{\mu}\|^2. \label{exp2} \end{align} Using Poincar\'e's inequality and \eqref{exp3} in \eqref{4.16}, we get \begin{align} & \frac{\/\mathrm{d}\/}{\/\mathrm{d}\/ t}\mathopen{}\mathclose\bgroup\originalleft\{\frac{1}{2}\|\mathbf{y}\|^2 + \frac{1}{2} \|\sqrt{a} \psi \|^2 -\frac{1}{2} (\mathrm{J}*\psi,\psi) + \int_\Omega \mathrm{F}(\psi+\varphi^e) \/\mathrm{d}\/ x-\int_\Omega \mathrm{F}(\varphi^e) \/\mathrm{d}\/ x- \int_{\Omega}\mathrm{F}'(\varphi^e)\psi\/\mathrm{d}\/ x\aftergroup\egroup\originalright\} \nonumber \\ &+ \mathopen{}\mathclose\bgroup\originalleft( \frac{\nu \lambda_1}{2} -\frac{2}{\nu}\|\nabla\mathbf{u}^e\|^2 \aftergroup\egroup\originalright)\|\mathbf{y}\|^2+ \mathopen{}\mathclose\bgroup\originalleft( \frac{m}{2} -\mathopen{}\mathclose\bgroup\originalleft( \frac{1}{2m}\|\mathbf{u}^e\|_{\mathbb{L}^{\infty}}^2+\frac{2}{\nu
\sqrt{\lambda_1}}\|\nabla\mu^e\|_{\mathbb{L}^4}^2 \aftergroup\egroup\originalright) \frac{C_\Omega^2}{(C_0 - \|\mathrm{J}\|_{\mathrm{L}^1})^2}\aftergroup\egroup\originalright) \|\nabla \widetilde{\mu}\|^2 \leq 0. \end{align} Using Taylor's formula, we have the following identities: \begin{align} \label{exp4} &\frac{1}{2} \|\sqrt{a} \psi \|^2 -\frac{1}{2} (\mathrm{J}*\psi,\psi)+\int_\Omega \mathopen{}\mathclose\bgroup\originalleft[\mathrm{F}(\psi+\varphi^e)-\mathrm{F}(\varphi^e)- \mathrm{F}'(\varphi^e)\psi\aftergroup\egroup\originalright]\/\mathrm{d}\/ x \nonumber \\ & =(a\psi - \mathrm{J}*\psi, \psi)+ \frac{1}{2}\int_{\Omega}\mathrm{F}''(\varphi^{e}+\theta\psi)\psi^2\/\mathrm{d}\/ x, \nonumber \\ &=(a\psi - \mathrm{J}*\psi + \mathrm{F}'(\psi+\varphi^e )- \mathrm{F}'(\varphi^e), \psi) = (\tilde{\mu}, \psi) \end{align} for some $0<\theta<1$. Using (A9) and Young's inequality for convolutions, we know that \begin{align*} \int_{\Omega}\mathopen{}\mathclose\bgroup\originalleft(a+\mathrm{F}''(\varphi^{e}+\theta\psi)\aftergroup\egroup\originalright)\psi^2\/\mathrm{d}\/ x\geq C_0\|\psi\|^2\geq \|\mathrm{J}\|_{\mathbb{L}^1}\|\psi\|^2 \geq (\mathrm{J}*\psi,\psi). \end{align*} Thus, we have \begin{align*} \int_{\Omega}\mathopen{}\mathclose\bgroup\originalleft(a+\mathrm{F}''(\varphi^{e}+\theta\psi)\aftergroup\egroup\originalright)\psi^2\/\mathrm{d}\/ x- (\mathrm{J}*\psi,\psi)\geq 0. 
\end{align*} Hence from \eqref{exp2}, we obtain \begin{align}\label{436} & \frac{\/\mathrm{d}\/}{\/\mathrm{d}\/ t}\mathopen{}\mathclose\bgroup\originalleft[\|\mathbf{y}\|^2 + (\widetilde{\mu},\psi)\aftergroup\egroup\originalright] + \mathopen{}\mathclose\bgroup\originalleft(\nu \lambda_1-\frac{4}{\nu}\|\nabla\mathbf{u}^e\|^2\aftergroup\egroup\originalright) \|\mathbf{y}\|^2 \nonumber\\& \quad + \mathopen{}\mathclose\bgroup\originalleft[\frac{(C_0-\|\mathrm{J}\|_{\mathrm{L}^1})}{2C_{\Omega}^2}-\frac{1}{(C_0-\|\mathrm{J}\|_{\mathrm{L}^1})}\mathopen{}\mathclose\bgroup\originalleft(\frac{1}{2m}\|\mathbf{u}^e\|_{\mathbb{L}^{\infty}}^2+\frac{4}{\nu \sqrt{\lambda_1}}\|\nabla\mu^e\|_{\mathbb{L}^4}^2\aftergroup\egroup\originalright)\aftergroup\egroup\originalright] (\widetilde{\mu},\psi)\leq 0. \end{align} Now for $C_0>\|\mathrm{J}\|_{\mathrm{L}^1}$ and \begin{align} \nu^2&>\frac{4}{\lambda_1 }\|\nabla\mathbf{u}^e\|^2, \ \text {and }\ (C_0-\|\mathrm{J}\|_{\mathrm{L}^1})^2>2 C_{\Omega}^2\mathopen{}\mathclose\bgroup\originalleft(\frac{1}{2m}\|\mathbf{u}^e\|_{\mathbb{L}^{\infty}}^2+\frac{4}{\nu \sqrt{\lambda_1}}\|\nabla\mu^e\|_{\mathbb{L}^4}^2\aftergroup\egroup\originalright), \end{align} we use the variation of constants formula (see Lemma \ref{C3}) to find \begin{align} \label{s51} \|\mathbf{y}(t)\|^2 + (\widetilde{\mu}(t),\psi(t))\leq \mathopen{}\mathclose\bgroup\originalleft(\|\mathbf{y}(0)\|^2 + (\widetilde{\mu}(0),\psi(0))\aftergroup\egroup\originalright)e^{-\varrho t}, \end{align} for all $t\geq 0$, where \begin{align} \varrho=\min\mathopen{}\mathclose\bgroup\originalleft\{\mathopen{}\mathclose\bgroup\originalleft(\lambda_1\nu-\frac{4}{\nu} \|\nabla\mathbf{u}^e\|^2\aftergroup\egroup\originalright) ,\mathopen{}\mathclose\bgroup\originalleft[\frac{(C_0-\|\mathrm{J}\|_{\mathrm{L}^1})}{2C_{\Omega}^2}-\frac{1}{(C_0-\|\mathrm{J}\|_{\mathrm{L}^1})}\mathopen{}\mathclose\bgroup\originalleft(\frac{1}{2m}\|\mathbf{u}^e\|_{\mathbb{L}^{\infty}}^2+\frac{4}{\nu 
\sqrt{\lambda_1}}\|\nabla\mu^e\|_{\mathbb{L}^4}^2\aftergroup\egroup\originalright)\aftergroup\egroup\originalright]\aftergroup\egroup\originalright\}>0. \end{align} From \eqref{433}, we also have $ (C_0-\|\mathrm{J}\|_{\mathrm{L}^1})\|\psi\|^2\leq (\widetilde{\mu},\psi),$ so that from \eqref{s51}, we infer that \begin{align}\label{442} \|\mathbf{y}(t)\|^2 + (C_0-\|\mathrm{J}\|_{\mathrm{L}^1})\|\psi(t)\|^2\leq \mathopen{}\mathclose\bgroup\originalleft(\|\mathbf{y}(0)\|^2 + (\widetilde{\mu}(0),\psi(0))\aftergroup\egroup\originalright)e^{-\varrho t}. \end{align} Using H\"older's inequality, Young's inequality for convolutions and assumption (A1) in the identity \eqref{exp4}, we obtain \begin{align} (\widetilde{\mu}(0),\psi(0)) &= (a \psi_0, \psi_0)-(\mathrm{J}*\psi_0, \psi_0) + \int_\Omega \mathrm{F}(\varphi_0) \/\mathrm{d}\/ x- \int_\Omega \mathrm{F}(\varphi^e)\/\mathrm{d}\/ x - \int_\Omega \mathrm{F}'(\varphi^e) \psi_0\/\mathrm{d}\/ x\nonumber \\ &\leq 2\|\mathrm{J}\|_{\mathrm{L}^{1}}\|\psi_0\|^2+ \|\mathrm{F}(\varphi_0) \|_{\mathrm{L}^1} + \|\mathrm{F}(\varphi^e)\|_{\mathrm{L}^1} + \|\mathrm{F}'(\varphi^e)\|_{\mathrm{L}^1} \|\psi_0\|_{\mathrm{L}^{\infty}} , \end{align} where we have used the boundedness of $\psi_0$ and $\|a\|_{\mathrm{L}^{\infty}} \leq \|\mathrm{J}\|_{\mathrm{L}^{1}}$. Hence, from \eqref{442}, we finally have \begin{align}\label{444} \|\mathbf{y}(t)\|^2 + \|\psi(t)\|^2 \leq \mathopen{}\mathclose\bgroup\originalleft( \frac{\|\mathbf{y}_0\|^2 + 2\|\mathrm{J}\|_{\mathrm{L}^{1}}\|\psi_0\|^2+ \|\mathrm{F}(\varphi_0) \|_{\mathrm{L}^1} + \|\mathrm{F}(\varphi^e)\|_{\mathrm{L}^1} + \|\mathrm{F}'(\varphi^e)\| \|\psi_0\| }{\min\{(C_0-\|\mathrm{J}\|_{\mathrm{L}^1}),1\}} \aftergroup\egroup\originalright) e^{-\varrho t}, \end{align} which completes the proof.
\end{proof} \appendix \section{Variation of Constants Formula} \renewcommand{\thesection}{\Alph{section}} \numberwithin{equation}{section}\setcounter{equation}{0} In this appendix, we give a variant of the variation of constants formula, which is useful when we have two or more differentiable functions with different constant coefficients. \begin{lemma}[Lemma C.3, \cite{MR4120851}]\label{C3} Assume that the differentiable functions $y(\cdot),z(\cdot):[0,T]\to[0,\infty)$ and the constants $a_1,a_2,k_1,k_2>0$ satisfy: \begin{align}\label{c1} \frac{\/\mathrm{d}\/ }{\/\mathrm{d}\/ t}(a_1 y(t)+a_2z(t))+k_1y(t)+k_2z(t)\leq 0, \end{align} for all $t\in[0,T]$. Then, we have \begin{align}\label{c2} y(t)+z(t)\leq C(y(0)+z(0))e^{-\uprho t},\ \text{ where }\ C=\frac{\max\{a_1,a_2\}}{\min\{a_1,a_2\}}\ \text{ and }\ \uprho=\min\mathopen{}\mathclose\bgroup\originalleft\{\frac{k_1}{a_1},\frac{k_2}{a_2}\aftergroup\egroup\originalright\}. \end{align} \end{lemma} \begin{proof} Since $a_1>0$, from \eqref{c1}, we have \begin{align*} \frac{\/\mathrm{d}\/ }{\/\mathrm{d}\/ t}\mathopen{}\mathclose\bgroup\originalleft(y(t)+\frac{a_2}{a_1}z(t)\aftergroup\egroup\originalright)+\frac{k_1}{a_1}\mathopen{}\mathclose\bgroup\originalleft(y(t)+\frac{k_2}{k_1}z(t)\aftergroup\egroup\originalright)\leq 0. \end{align*} Now, for $\frac{a_2}{a_1}\leq \frac{k_2}{k_1}$, from the above inequality, we also have \begin{align*} \frac{\/\mathrm{d}\/ }{\/\mathrm{d}\/ t}\mathopen{}\mathclose\bgroup\originalleft(y(t)+\frac{a_2}{a_1}z(t)\aftergroup\egroup\originalright)+\frac{k_1}{a_1}\mathopen{}\mathclose\bgroup\originalleft(y(t)+\frac{a_2}{a_1}z(t)\aftergroup\egroup\originalright)\leq 0.
\end{align*} From the above relation, it is immediate that \begin{align*} \frac{\/\mathrm{d}\/}{\/\mathrm{d}\/ t}\mathopen{}\mathclose\bgroup\originalleft[e^{\frac{k_1}{a_1}t}\mathopen{}\mathclose\bgroup\originalleft(y(t)+\frac{a_2}{a_1}z(t)\aftergroup\egroup\originalright)\aftergroup\egroup\originalright]\leq 0, \end{align*} which easily implies \begin{align}\label{c3} a_1y(t)+a_2z(t)\leq (a_1y(0)+a_2z(0))e^{-\frac{k_1}{a_1}t}. \end{align} Dividing instead by $a_2>0$ and repeating the calculation in the complementary case $\frac{a_1}{a_2}\leq \frac{k_1}{k_2}$, we arrive at \begin{align}\label{c4} a_1y(t)+a_2z(t)\leq (a_1y(0)+a_2z(0))e^{-\frac{k_2}{a_2}t}. \end{align} Combining \eqref{c3} and \eqref{c4}, we finally obtain \eqref{c2}. \end{proof} \end{document}
\begin{document} \title{A non-abelian, non-Sidon, completely bounded $\Lambda(p)$ set} \author{Kathryn E. Hare} \address{Dept. of Pure Mathematics\\ University of Waterloo\\ Waterloo, Ont., \\ Canada} \email{[email protected]} \thanks{This research was supported in part by NSERC\ grant RGPIN 2016-03719} \author{Parasar Mohanty} \address{Dept. of Mathematics and Statistics\\ Indian Inst. of Tech.\\ Kanpur, India, 208016} \email{[email protected]} \subjclass{Primary: 43A46, 43A30; Secondary: 42A55} \begin{abstract} The purpose of this note is to construct an example of a discrete non-abelian group $G$ and a subset $E$ of $G$, not contained in any abelian subgroup, that is a completely bounded $\Lambda (p)$ set for all $p<\infty ,$ but is neither a Leinert set nor a weak Sidon set. \end{abstract} \maketitle \section{Introduction} The study of lacunary sets, such as Sidon sets and $\Lambda (p)$ sets, constitutes an interesting theme in the theory of Fourier series on the circle group ${\mathbb{T}}$. It has many applications in harmonic analysis and in the theory of Banach spaces, and various combinatorial and arithmetic properties of these sets have been studied extensively. These concepts have also been investigated in the context of more general compact abelian groups (with their discrete dual groups) and compact non-abelian groups; see \cite{GH}, \cite{LR}, \cite{R} and the references cited therein. The study of these sets in the setting of discrete non-abelian groups was pioneered by Bo\.{z}ejko \cite{B}, Fig\`{a}-Talamanca \cite{FP} and Picardello \cite{Pi}. In abelian groups, there are various equivalent ways to define Sidon sets and these sets are plentiful. Indeed, every infinite subset of a discrete abelian group contains an infinite Sidon set. It was shown in \cite{Pi} that every weak Sidon set is $ \Lambda (p)$ for all $p<\infty $.
In \cite{Le} Leinert introduced the concept of a $\Lambda (\infty )$ set, a notion only of interest in the non-abelian setting because in abelian groups such sets are necessarily finite. In striking contrast to the abelian situation, Leinert showed that the free group with two generators contains an infinite subset which is both weak Sidon and $\Lambda (\infty ),$ but does not contain any infinite Sidon subsets. In \cite{Ha}, Harcharras studied the concept of completely bounded $\Lambda (p)$ sets, a property more restrictive than $\Lambda (p),$ but still possessed by Sidon sets. The converse is not true, as every infinite discrete abelian group admits a completely bounded $\Lambda (p)$ set which is not Sidon; see \cite{HM}. In this paper, we construct a non-amenable group $G$ and a set $E$ not contained in any abelian subgroup of $G,$ which is completely bounded $ \Lambda (p)$ for every $p<\infty ,$ but is neither $\Lambda (\infty )$ nor weak Sidon. It remains open if every infinite discrete group contains such a set $E$. \section{Definitions} Throughout this paper, $G$ will be an infinite discrete group. To define Sidon and $\Lambda (p)$ sets in this setting one requires the concepts of the Fourier algebra, $A(G)$, the von Neumann algebra, $VN(G),$ and the Fourier-Stieltjes algebra, $B(G)$, as developed by P. Eymard in \cite{E} for locally compact groups. We also need the concept of non-commutative $L^{p}$-spaces introduced by I.E. Segal. We refer the reader to \cite{PX} for details on these latter spaces. \begin{defn} (i) The set $E\subseteq G$ is said to be a \textbf{strong (weak) Sidon set } if for all $f\in c_{0}(E)$ (resp., $l_{\infty }(E))$ there exists $g\in A(G)$ (resp., $B(G))$ such that $f(x)=g(x)\;$for all $x\in E$. (ii) The set $E\subseteq G$ is said to be a \textbf{Sidon set }if there is a constant $C$ such that for all functions $f,$ compactly supported in $E,$ we have $\Vert f\Vert _{1}\leq C\Vert f\Vert _{VN(G)}$.
The least such constant $C$ is known as the \textbf{Sidon constant} of $E$. \end{defn} These definitions are well known to be equivalent in the commutative setting. For any discrete group it is the case that strong Sidon sets are Sidon and Sidon sets are weak Sidon. Finite sets are always strong Sidon sets. In \cite{Pi} it is shown that $E\subseteq G$ is Sidon if and only if for every $f\in l_{\infty }(E)$ there is some $g\in B_{\rho }(G)$ that extends $f$, where $B_{\rho }(G)$ is the dual of the reduced $C^{\ast }$-algebra $C_{\rho }^{\ast }(G)$. Since in an amenable group $B_{\rho }(G)=B(G),$ weak Sidon sets are Sidon in this setting. Very recently, Wang \cite{Wa} showed that every Sidon set in any discrete group is a strong Sidon set. It remains open if every infinite amenable group contains an infinite Sidon subset. Picardello \cite{Pi} defined the notion of $\Lambda (p)$ sets in this setting and Harcharras \cite{Ha} introduced completely bounded $\Lambda (p)$ sets. For these, we require further notation. Let $\leftthreetimes $ denote the left regular representation of $G$ into $\mathcal{B}(l_{2}(G))$ and denote by $L^{p}(\tau _{0})$ the non-commutative $L^{p}$-space associated with the von Neumann algebra generated by $\leftthreetimes (G)$ with respect to the usual trace $\tau _{0}$. Let $L^{p}(\tau )$ denote the non-commutative $L^{p}$-space associated with the von Neumann algebra generated by $\leftthreetimes (G)\otimes \mathcal{B}(l_{2})$ with respect to the trace $\tau =\tau _{0}\otimes tr$, where $tr$ denotes the usual trace in $\mathcal{B}(l_{2})$. Observe that $L^{p}(\tau )$ has a canonical operator space structure obtained from complex interpolation in the operator space category. We refer the reader to \cite{Pis} for more details. \begin{defn} (i) Let $2<p<\infty $.
The set $E\subseteq G$ is said to be a $\Lambda (p)$ \textbf{\ set} if there exists a constant $C_{1}>0$ such that for all finitely supported functions $f$ we have \begin{equation} \left\Vert \sum\limits_{t\in E}f(t)\leftthreetimes (t)\right\Vert _{L^{p}(\tau _{0})}\leq C_{1}\left( \sum\limits_{t\in E}|f(t)|^{2}\right) ^{ \frac{1}{2}}. \label{Lambdap} \end{equation} (ii) The set $E\subseteq G$ is said to be a \textbf{completely bounded }$ \Lambda (p)$\textbf{\ set, }denoted $\Lambda ^{cb}(p),$ if there exists a constant $C_{2}>0$ such that \begin{equation} \Vert \sum\limits_{t\in E}\leftthreetimes (t)\otimes x_{t}\Vert _{L^{p}(\tau )}\leq C_{2}\max \left( \Vert (\sum_{t\in E}x_{t}^{\ast }x_{t})^{1/2}\Vert _{S_{p}},\Vert (\sum_{t\in E}x_{t}x_{t}^{\ast })^{1/2}\Vert _{S_{p}}\right) , \label{CBLp} \end{equation} where $(x_{t})$ is a finitely supported family of operators in $S_{p}$, the Schatten $p$-class on $l_{2}$. The least such constants $C_{1}$ (or $C_{2})$ are known as the $\Lambda (p)$ \textbf{\ }(resp.,\textbf{\ }$\Lambda ^{cb}(p)$\textbf{) constants} of $E$. \end{defn} It is known that every infinite set contains an infinite $\Lambda (p)$ set \cite{B} and that every weak Sidon set is a $\Lambda (p)$ set for each $ p<\infty $ \cite{Pi}. Completely bounded $\Lambda (p)$ sets are clearly $ \Lambda (p),$ but the converse is not true, as seen in \cite{Ha}. Extending these notions to $p=\infty $ gives the Leinert and $L$-sets. \begin{defn} (i) The set $E\subseteq G$ is called a \textbf{Leinert or }$\Lambda (\infty )$\textbf{\ set} if there exists a constant $C>0$ such that for every function $f\in l_{2}(E)$ we have $\Vert f\Vert _{VN(G)}\leq C\Vert f\Vert _{2}$. (ii) The sets of interpolation for the completely bounded multipliers of $ A(G)$ are called\textbf{\ }$L$\textbf{-sets}. \end{defn} It is well known that the Leinert sets are the sets of interpolation for multipliers of $A(G),$ so any $L$-set is Leinert; see \cite{Po}.
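To place these definitions in a familiar context, we note (as a standard observation, not needed in the sequel) that when $G=\mathbb{Z}$ the von Neumann algebra generated by $\leftthreetimes (G)$ is $L^{\infty }(\mathbb{T})$, acting by multiplication after the Fourier transform, and $L^{p}(\tau _{0})$ is the classical $L^{p}(\mathbb{T})$. The defining inequality (\ref{Lambdap}) then reads \begin{equation*} \left\Vert \sum\limits_{t\in E}f(t)e^{2\pi it\theta }\right\Vert _{L^{p}(\mathbb{T})}\leq C_{1}\left( \sum\limits_{t\in E}|f(t)|^{2}\right) ^{\frac{1}{2}}, \end{equation*} which is the classical definition of a $\Lambda (p)$ set in $\mathbb{Z}$.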
The set $E$ is said to satisfy the \textbf{Leinert condition} if every tuple $ (a_{1},...,a_{2s})\in E^{2s},$ with $a_{i}\neq a_{i+1},$ satisfies the independence-like relation \begin{equation} a_{1}a_{2}^{-1}a_{3}\dots a_{2s-1}a_{2s}^{-1}\neq e. \label{leinert} \end{equation} Here $e$ is the identity of $G$. It can be shown (\cite{Po}) that any set that satisfies the Leinert condition is an $L$-set. It was seen in \cite{HM} that in abelian groups there are sets that are completely bounded $\Lambda (p)$ for all $p<\infty ,$ but not Sidon. Thus the inclusion of the weak Sidon sets among the $\Lambda ^{cb}(p)$ sets is strict for groups with infinite abelian subgroups. The purpose of this paper is to show that this strict inclusion also occurs with sets not contained in \textit{any} abelian subgroup. In fact, we prove, more generally, the following result. \begin{theorem} There is a discrete group $G$ that admits both infinite $L$-sets and weak Sidon sets, and an infinite subset $E$ of $G$ that is $\Lambda ^{cb}(p)$ for all $p<\infty ,$ but not a Leinert set, an $L$-set or a weak Sidon set. Moreover, any subset of $E$ consisting of commuting elements is finite. \label{mainthm} \end{theorem} \section{Results and Proofs} \subsection{Preliminary results} To show that the set we will construct is not a Leinert or weak Sidon set, it is helpful to first establish some arithmetic properties of $\Lambda (p)$ and Leinert sets. We recall that a set $E\subseteq G$ is said to be \textbf{ quasi-independent} if all the sums \begin{equation*} \left\{ \sum\limits_{x\in A}x:A\subset E,|A|<\infty \right\} \end{equation*} are distinct. Quasi-independent sets in abelian groups are the prototypical Sidon sets. The first part of the following lemma is well known for abelian groups. \begin{lemma} Let $G$ be a discrete group. (i) Suppose $q>2$ and $E\subseteq G$ is a $ \Lambda (q)$ set with $\Lambda (q)$ constant $A$.
If $a\in G$ has order $p_{n}\geq 2n$, then \begin{equation*} \left\vert E\bigcap \{a,a^{2},...,a^{n}\}\right\vert \leq 10A^{2}n^{2/q} \text{.} \end{equation*} (ii) Suppose $E\subseteq G$ is a Leinert set with Leinert constant $B$ and let $F\subseteq E$ be a finite commuting, quasi-independent subset. Then $ \left\vert F\right\vert \leq 6^{3}B^{2}$. \label{mainlem} \end{lemma} \begin{proof} We will write $1_{X}$ for the characteristic function of a set $X$. (i) Define the function $K_{n}$ on $G$ by \begin{equation*} K_{n}(x)=\sum_{j=-2n}^{2n}\left( 1-\frac{\left\vert j\right\vert }{2n}\right) 1_{\{a^{j}\}}(x). \end{equation*} Let $J_{n}$ denote the function on $\mathbb{Z}_{p_{n}}$ (or $\mathbb{Z}$ if $ p_{n}=\infty $) defined in the analogous fashion. It is well known that the $ A(G)$ and $VN(G)$ norms for the function $K_{n}$ are dominated by the corresponding norms of the function $J_{n}$ on $\mathbb{Z}_{p_{n}}$. As $L^{q^{\prime }}(\tau _{0})$ (for $q^{\prime }$ the dual index to $q$) is an interpolation space between $A(G)$ and $VN(G)$, it follows that \begin{eqnarray*} \left\Vert K_{n}\right\Vert _{L^{q^{\prime }}(\tau _{0})} &\leq &\left\Vert K_{n}\right\Vert _{A(G)}^{1/q^{\prime }}\left\Vert K_{n}\right\Vert _{VN(G)}^{1/q} \\ &\leq &\left\Vert J_{n}\right\Vert _{A(\mathbb{Z}_{p_{n}})}^{1/q^{\prime }}\left\Vert J_{n}\right\Vert _{VN(\mathbb{Z}_{p_{n}})}^{1/q}\leq (4n+1)^{1/q}\text{.} \end{eqnarray*} Suppose $E\bigcap \{a,a^{2},...,a^{n}\}$ consists of the $M$ elements $ \{a^{s_{j}}\}_{j=1}^{M}$ and put \begin{equation*} k_{n}(x)=\sum_{j=1}^{M}1_{\{a^{s_{j}}\}}(x). \end{equation*} Since $E$ has $\Lambda (q)$ constant $A,$ the generalized H\"older inequality implies \begin{eqnarray*} \frac{M}{2} &\leq &\sum_{j=1}^{M}K_{n}(a^{s_{j}})=\sum_{x\in G}K_{n}(x)k_{n}(x) \\ &\leq &\left\Vert K_{n}\right\Vert _{L^{q^{\prime }}(\tau _{0})}\left\Vert k_{n}\right\Vert _{L^{q}(\tau _{0})}\leq (4n+1)^{1/q}A\left\Vert k_{n}\right\Vert _{2} \\ &=&(4n+1)^{1/q}A\sqrt{M}.
\end{eqnarray*} Consequently, $M\leq 2(4n+1)^{2/q}A^{2}\leq 10A^{2}n^{2/q}$, as claimed. (ii) Let $H$ be the abelian group generated by $F$. Being quasi-independent, $F$ is a Sidon subset of $H$ with Sidon constant at most $6\sqrt{6}$ (\cite[p.~115]{GH}). Consider the function $h=1_{F}$ defined on $H$ and $g=1_{F}$ defined on $G$. The Sidon property, together with the fact that $\left\Vert h\right\Vert _{VN(H)}=\left\Vert g\right\Vert _{VN(G)},$ ensures that \begin{equation*} \left\vert F\right\vert =\left\Vert h\right\Vert _{\ell ^{1}}\leq 6\sqrt{6} \left\Vert h\right\Vert _{VN(H)}=6\sqrt{6}\left\Vert g\right\Vert _{VN(G)}. \end{equation*} Since $E$ has Leinert constant $B$, we have $\left\Vert f\right\Vert _{VN(G)}\leq B\left\Vert f\right\Vert _{2}$ for any function $f$ defined on $ G$ and supported on $E$. In particular, this is true for the function $g$, hence \begin{equation*} \left\vert F\right\vert \leq 6\sqrt{6}\left\Vert g\right\Vert _{VN(G)}\leq 6 \sqrt{6}B\sqrt{\left\vert F\right\vert }. \end{equation*} Squaring yields $\left\vert F\right\vert \leq 6^{3}B^{2}$. \end{proof} \subsection{Proof of Theorem \protect\ref{mainthm}} \begin{proof} We will let $G$ be the free product of the cyclic groups $Z_{p_{n}}$, $n\in \mathbb{N}$, where $p_{n}>2^{n+1}$ are distinct odd primes. If $a_{n}$ is a generator of $Z_{p_{n}},$ then $\{a_{n}\}_{n=1}^{\infty }$ is both a weak Sidon and Leinert set, as shown in \cite{Pi}. The set $E$ will be the union of finite sets $E_{n}\subseteq Z_{p_{n}}$, where $\left\vert E_{n}\right\vert =n^{2}$ and $E_{n}\subset \{a_{n},...,a_{n}^{2^{n}}\}$. The fact that any commuting subset of $E$ is finite is obvious from the definition of $E$.\newline We recall the following notation from \cite{Ha}: We say that a subset $ \Lambda \subseteq G$ has the $Z(p)$ property if $Z_{p}(\Lambda )<\infty $ where \begin{equation*} Z_{p}(\Lambda )=\sup_{x\in G}\left\vert \left\{ (x_{1},...,x_{p})\in \Lambda ^{p}:x_{i}\neq x_{j},x_{1}^{-1}x_{2}x_{3}^{-1}\cdots x_{p}^{(-1)^{p}}=x\right\} \right\vert .
\end{equation*} In \cite{Ha}, Harcharras proved that if $2<p<\infty ,$ then every subset $ \Lambda $ of $G$ with the $Z(p)$ property is a $\Lambda ^{cb}(2p)$ set. We will construct the sets $E_{n}$ so that they have the property that for every even $s\geq 2$ there is an integer $n_{s}$ such that $Z_{s}\left( \bigcup_{n\geq n_{s}}E_{n}\right) \leq s!$. Consequently, $\bigcup_{n\geq n_{s}}E_{n}$ will be $\Lambda ^{cb}(2s)$ for all $s<\infty $. As finite sets are $\Lambda ^{cb}(p)$ for all $p<\infty ,$ and a finite union of $\Lambda ^{cb}(p)$ sets is again $\Lambda ^{cb}(p)$, it will follow that $E$ is $ \Lambda ^{cb}(p)$ for all $p<\infty $. We now proceed to construct the sets $E_{n}$ by an iterative argument. Temporarily fix $n$ and take $g_{1}=a_{n}$. Inductively assume that for $ N<n^{2}$, $\{g_{i}\}_{i=1}^{N}$ $\subseteq \{a_{n},...,a_{n}^{2^{n}}\}$ have been chosen with the property that if \begin{equation} \prod\limits_{j=1}^{N}g_{j}^{\varepsilon _{j}}=1\text{ for }\varepsilon _{j}=0,\pm 1,\pm 2, \sum_{j}|\varepsilon _{j}|\leq 2s,\text{ then all } \varepsilon _{j}=0. \tag{$\mathcal{P}_{N}$} \end{equation} Now choose \begin{equation*} g_{N+1}\neq \prod\limits_{j=1}^{N}g_{j}^{\varepsilon _{j}}\text{ for any } \varepsilon _{j}=0,\pm 1,\pm 2\text{ and }\sum_{j}|\varepsilon _{j}|\leq 2s \end{equation*} and \begin{equation*} g_{N+1}^{2}\neq \prod\limits_{j=1}^{N}g_{j}^{\varepsilon _{j}}\text{ for any }\varepsilon _{j}=0,\pm 1,\pm 2\text{ and }\sum_{j}|\varepsilon _{j}|\leq 2s. \end{equation*} There are at most $\binom{N}{2s}5^{2s}\leq C_{s}N^{2s}$ terms that $g_{N+1}$ must avoid and similarly for $g_{N+1}^{2}$ as the squares of elements of $ Z_{p_{n}}$ are all distinct. Provided $2C_{s}N^{2s}\leq 2^{n}$, we can make such a choice of $g_{N+1}\in \{a_{n},...,a_{n}^{2^{n}}\}$. Of course, it is immediate that property ($\mathcal{P}_{N+1}$) then holds. This can be done for every $N<n^{2}$ as long as $n$ is suitably large, say for $n\geq n_{s}$.
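To make the threshold explicit (this is just the counting behind the phrase ``suitably large''), note that since $N<n^{2}$, the requirement $2C_{s}N^{2s}\leq 2^{n}$ at every step of the induction follows from \begin{equation*} 2C_{s}n^{4s}\leq 2^{n}, \end{equation*} and as the left side grows only polynomially in $n$ while the right side grows exponentially, there is an integer $n_{s}$, depending only on $s$, beyond which this inequality holds.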
The set $E_{n}$ will be taken to be $\{g_{j}\}_{j=1}^{n^{2}}$. Now we need to check the claim that $Z_{s}(\bigcup\limits_{n\geq n_{s}}E_{n})\leq s!$. Towards this, suppose \begin{equation} x_{1}x_{2}^{-1}\cdots x_{s}^{-1}=y_{1}y_{2}^{-1}\cdots y_{s}^{-1} \label{P1} \end{equation} where $x_{i}$ are all distinct, $y_{j}$ are all distinct and all $ x_{i},y_{j}\in \bigcup\limits_{n\geq n_{s}}E_{n}$. The free product property guarantees that if this is true, then, considering only the elements $x_{i_{k}}$ and $y_{j_{l}}$ which belong to a given $E_{n}$, we must have $\prod\limits_{k}x_{i_{k}}^{\delta _{k}}=$ $ \prod\limits_{l}y_{j_{l}}^{\varepsilon _{l}}$ for the appropriate choices of $\delta _{k},\varepsilon _{l}\in \{\pm 1\}$. As there are at most $s$ choices for each of $x_{i_{k}}$ and $y_{j_{l}}$, our property ($\mathcal{P}_{N}$) ensures that this can happen only if $\{x_{i_{k}}:\delta _{k}=1\}$ $ =\{y_{j_{l}}:\varepsilon _{l}=1\}$ and similarly for the terms with $-1$ exponents. Hence we can only satisfy (\ref{P1}) if, upon reordering, $ \{x_{1},x_{3},...,x_{s-1}\}=\{y_{1},y_{3},...,y_{s-1}\},$ and similarly for the terms with even labels. (We remark that for non-abelian groups, this is only a necessary but not, in general, sufficient condition for (\ref{P1}).) This suffices to establish that \begin{equation*} Z_{s}(\bigcup\limits_{n\geq n_{s}}E_{n})\leq ((s/2)!)^{2}\leq s! \end{equation*} and hence, as explained above, $E$ is a $\Lambda ^{cb}(p)$ set for all $ p<\infty $. Next, we will verify that $E$ is not a weak Sidon set. We proceed by contradiction. According to \cite{Pi}, if it were, then $E$ would be a $ \Lambda (p)$ set for each $p>2,$ with $\Lambda (p)$ constant bounded by $C \sqrt{p}$ for a constant $C$ independent of $p$.
Appealing to Lemma \ref{mainlem}(i), we have \begin{equation*} n^{2}=\left\vert E_{n}\right\vert =\left\vert E\bigcap \{a_{n},...,a_{n}^{2^{n}}\}\right\vert \leq 10C^{2}p2^{2n/p}. \end{equation*} Taking $p=2n$ for sufficiently large $n$ gives a contradiction. Finally, to see that $E$ is not a Leinert set, we first observe that an easy combinatorial argument shows that any set of $N$ distinct elements contains a quasi-independent subset of cardinality at least $\log N/\log 3$: if $F$ is a maximal quasi-independent subset, then every element of the set is a word in the elements of $F$ with exponents $0,\pm 1$, whence $N\leq 3^{\left\vert F\right\vert }$. Thus we can obtain quasi-independent subsets $F_{n}\subseteq E_{n}$ with $\left\vert F_{n}\right\vert \rightarrow \infty $. But according to Lemma \ref{mainlem}(ii), this would be impossible if $E$ were a Leinert set. As $E$ is not Leinert, it is also not an $L$ set. This concludes the proof. \end{proof} \end{document}
\begin{document} \title{\bfseries \Large De-biased lasso for stratified Cox models with application to the national kidney transplant data} \begin{abstract} The Scientific Registry of Transplant Recipients (SRTR) system has become a rich resource for understanding the complex mechanisms of graft failure after kidney transplant, a crucial step for allocating organs effectively and implementing appropriate care. As the transplant centers where patients were treated may strongly confound graft failure, Cox models stratified by center can eliminate these confounding effects. Also, since recipient age is a proven non-modifiable risk factor, a common practice is to fit models separately by recipient age group. The moderate sample sizes, relative to the number of covariates, in some age groups may lead to biased maximum stratified partial likelihood estimates and unreliable confidence intervals, even when samples still outnumber covariates. To draw reliable inference on a comprehensive list of risk factors measured from both donors and recipients in SRTR, we propose a de-biased lasso approach via quadratic programming for fitting stratified Cox models. We establish asymptotic properties and verify via simulations that our method produces consistent estimates and confidence intervals with nominal coverage probabilities. Accounting for nearly 100 confounders in SRTR, the de-biased method detects that the graft failure hazard increases nonlinearly with donor age among all recipient age groups, and that organs from older donors more adversely impact younger recipients. Our method also delineates the associations between graft failure and many risk factors such as recipients' primary diagnoses (e.g. polycystic disease, glomerular disease, and diabetes) and donor-recipient mismatches for human leukocyte antigen loci across recipient age groups.
These results may inform the refinement of donor-recipient matching criteria for stakeholders.\\[0.5cm] \textbf{Keywords:} Confidence intervals; diverging number of covariates; end-stage renal disease; graft failure free survival; statistical inference. \end{abstract} \section{Introduction} \label{sec:intro} For patients with end-stage renal disease, one of the most lethal and prevalent diseases in the U.S. \citep{saran2020us}, successful renal transplantation is effective for improving quality of life and prolonging survival \citep{wolfe1999comparison,kostro2016quality,ju2019patient}. The success of kidney transplantation hinges upon various factors related to the quality of transplant operations, the quality of donated kidneys, and the physical conditions of recipients \citep{rodger2012approach,legendre2014factors}, and it is crucial to evaluate and understand how these risk factors impact renal graft failure in order to increase the chance of success \citep{hamidi2016identifying,legendre2014factors}. With the scarcity of organs and an increasing number of waitlisted candidates \citep{bastani2015worsening}, the results can inform more efficient strategies for kidney allocation \citep{rao2009alphabet,smith2012kidney} as well as evidence-based post-transplant care \citep{baker2017renal}. Therefore, how to quantify the impacts of important factors associated with prognosis, particularly renal graft failure, remains a central question in kidney transplantation. The Scientific Registry of Transplant Recipients (SRTR) system, a federally funded organization that keeps records of transplant information from recipients and donors, has become a rich resource for studying post-kidney transplantation prognosis \citep{dickinson2008srtr}. Leveraging the SRTR data, one can develop a valid tool for characterizing the influences of risk factors on graft failure, a key step towards post-transplant prognosis.
Most previous studies, which focused only on a small number of factors, i.e. kidney diagnosis, recipient age, recipient race, recipient gender, number of human leukocyte antigen (HLA) mismatches, donor age, donor race, donor gender, serum creatinine level and cold ischemia time \citep{alexander1994effect}, might have pre-excluded other important factors and not fully captured the complex mechanisms governing graft failure. The SRTR data contain comprehensive information on recipients and donors, such as recipient primary insurance and employment, procedure type, infection of multiple viruses, history of transplant, transfusion and drug abuse, and pre-transplant comorbidities. The data provide a unique opportunity for assessing the associations between graft failure and an extended list of variables simultaneously, which may reduce confounding \citep{wang2011gee}. Specifically, since donor age is a major criterion for donor-recipient matching \citep{kasiske2002matching,rao2009comprehensive,veroux2012age}, the data enable us to examine its effect on graft failure by adjusting for confounders, including pre-existing comorbidities. There are several statistical challenges. On the one hand, as recipients received care in various transplant centers, the center-specific effects may confound the covariate effects of interest. This motivates us to consider Cox models stratified by transplant centers, a commonly used model in the relevant context \lu{without the need to explicitly model the potentially time-varying center effects \citep{he2021stratified}}. On the other hand, recipient age is a strong risk factor and there may exist complicated interactions between recipients' age and other characteristics \citep{keith2004effect}. 
For ease of interpretation and by convention \citep{morales2012risk,faravardeh2013predictors}, we have opted to divide our analyzable patient population (the adult recipients with kidneys transplanted during 2000 and 2001) into $[18, 45]$, $(45, 60]$ and $60+$ years old groups (Table \ref{tab:character}), and fit models separately for these three groups. Allowing model parameters to be age group-specific, we have avoided parametrically modeling the interactions between recipient age and the other risk factors. When the number of covariates is relatively large (94 in our data) compared to, though still less than, the sample size (for example, 1448 patients with 1013 events in the $60+$ years old recipient group), the conventional maximum stratified partial likelihood estimation (MSPLE) may yield untrustworthy point estimates, confidence intervals and hypothesis testing results, as illustrated in our simulations. \begin{table}[ht] \centering \caption{Study population characteristics by recipient age group} \begin{tabular}{llll} \hline Recipient age group & $[18, 45]$ & $(45, 60]$ & $60+$ \\ Variable & \multicolumn{3}{c}{Mean (SD) / Count (\%) } \\ \hline \# Centers & 84 (--) & 107 (--) & 43 (--) \\ \# Patients & 3388 (100\%) & 4359 (100\%) & 1448 (100\%) \\ \# Events & 1588 (46.9\%) & 2334 (53.5\%) & 1013 (70.0\%) \\ Recipient age & 35.7 (7.0) & 53.0 (4.2) & 66.6 (4.3) \\ Donor age (years) & & & \\ $\qquad$ $\le 10$ & 276 (8.1\%) & 223 (5.1\%) & 61 (4.2\%) \\ $\qquad$ $(10, 20]$ & 580 (17.1\%) & 611 (14.0\%) & 137 (9.5\%) \\ $\qquad$ $(20, 30]$ & 633 (18.7\%) & 683 (15.7\%) & 179 (12.4\%) \\ $\qquad$ $(30, 40]$ & 505 (14.9\%) & 599 (13.7\%) & 174 (12.0\%) \\ $\qquad$ $(40, 50]$ & 753 (22.2\%) & 947 (21.7\%) & 256 (17.7\%) \\ $\qquad$ $(50, 60]$ & 498 (14.7\%) & 893 (20.5\%) & 318 (22.0\%) \\ $\qquad$ $60+$ & 143 (4.2\%) & 403 (9.2\%) & 323 (22.3\%) \\ Recipient gender & & & \\ $\qquad$ Male & 1997 (58.9\%) & 2671 (61.3\%) & 913 (63.1\%) \\ $\qquad$ Female & 1391 
(41.1\%) & 1688 (38.7\%) & 535 (36.9\%) \\ Donor gender & & & \\ $\qquad$ Male & 2039 (60.2\%) & 2563 (58.8\%) & 803 (55.5\%) \\ $\qquad$ Female & 1349 (39.8\%) & 1796 (41.2\%) & 645 (44.5\%) \\ \hline \end{tabular} \label{tab:character} \end{table} For proper inferences, we consider an asymptotic framework with a diverging number of covariates, wherein the number of covariates, though smaller than the sample size, can increase with the sample size \citep{he2000parameters,wang2011gee}. \lu{Lasso provides a very popular tool for simultaneous variable selection and estimation with high-dimensional covariates \citep{tibshirani1997lasso}. For unstratified Cox models, \citet{huang2013oracle} and \citet{kong2014non} presented the oracle inequalities for the lasso estimator. However, with penalization, lasso estimates are biased towards zero \citep{van2014asymptotically}, and they do not possess regular limiting distributions even under linear regression with a fixed number of covariates \citep{fu2000asymptotics}. Conditional inference based on the selected model is not valid either, owing to its failure to account for uncertainty in model selection. Hence, lasso cannot be directly applied to draw statistical inference.} There is literature \lu{on inference for} unstratified Cox proportional hazards models under the related asymptotic framework. For example, \citet{fang2017testing} proposed decorrelated score tests for a low-dimensional component in the regression parameters, and \citet{kong2018high}, \citet{yu2018confidence} and \citet{xia2021cox} proposed to correct asymptotic biases of the lasso estimator following the framework of \citet{van2014asymptotically}, \citet{zhang2014confidence} and \citet{javanmard2014confidence}, which originated from high-dimensional linear or generalized linear models.
For Cox models, all of these methods, except \citet{xia2021cox} which considered the ``large $n$, diverging $p$'' scenario, assumed sparsity on the inverse information matrix. This sparse matrix assumption, however, may not hold for models beyond linear regression, leading to insufficient bias correction and under-covered confidence intervals. Moreover, as these methods were not designed for modeling stratified data, they are not directly applicable to the analysis of the SRTR data. \lu{To our knowledge, the current literature lacks inferential methods with theoretical rigor for stratified Cox models with a diverging number of covariates.} We propose a de-biased lasso approach for Cox models stratified by transplant centers, which solves a series of quadratic programming problems to estimate the inverse information matrix, and corrects the biases from the lasso estimator for valid statistical inference. Our asymptotic results enable us to draw inference on any linear combinations of model parameters, including the low-dimensional targets in \citet{fang2017testing} and \citet{kong2018high} as special cases and fundamentally deviating from the stepwise regression adopted by \citet{rao2009comprehensive}. \lu{When the number of covariates is relatively large compared to the sample size, our approach yields less biased estimates and more properly covered confidence intervals than MSPLE as well as the methods of \citet{fang2017testing, kong2018high, yu2018confidence} adapted to the stratified setting.} Therefore, it is well-suited for analyzing the SRTR data, especially among the oldest recipient group that has the smallest sample size. Applications of our method to the SRTR data have generated reliable estimation and inference results for the effects of an expanded list of donor and recipient factors. 
We find that receiving kidneys from older donors is associated with an increased hazard of graft failure after adjusting for many confounding factors, and that the dependence on donors' age is non-linear. The results may inform more comprehensive assessments of post-transplant prognosis and kidney allocation. The article is organized as follows. We introduce the proposed de-biased lasso approach in Section \ref{sec:method} and establish the asymptotic results in Section \ref{sec:theory}, which form the basis of inference for the SRTR data. We conduct simulations in Section \ref{sec:simul} and demonstrate that our method outperforms MSPLE in bias correction and confidence interval coverage. In Section \ref{sec:app}, we analyze the SRTR data by using the proposed de-biased approach. Finally, we provide a few concluding remarks in Section \ref{sec:discuss}, the detailed list of covariates considered in the analysis of SRTR data in Appendix \ref{appA}, and regularity conditions in Appendix \ref{appB}. Technical details and proofs are deferred to the Supplementary Material. \section{De-biased lasso for stratified Cox models via quadratic programming} \label{sec:method} We apply stratified Cox models to evaluate the impacts of risk factors on post-transplant graft failure using the SRTR data. For each recipient age group defined in the first row of Table \ref{tab:character}, let $K$ be the total number of transplant centers, and $n_k$ be the number of recipients in the $k$-th transplant center, $k=1, \cdots, K$. With $i$ indexing recipients within the $k$-th transplant center, let $T_{ki}$ denote the graft failure free survival time, i.e. the time from transplantation to graft failure or death, whichever comes first [a common endpoint in transplantation \citep{kasiske2011relationship}], $X_{ki}$ be a $p$-dimensional covariate vector, and $C_{ki}$ be the censoring time. We assume random censoring, that is, $T_{ki}$ and $C_{ki}$ are independent given $X_{ki}$. 
In the SRTR data, $p=94$ and $X_{ki}$ includes risk factors from both donors and recipients, such as gender, ABO blood type, history of diabetes and duration, angina/coronary artery disease, symptomatic peripheral vascular disease, drug treated systemic hypertension, drug treated COPD, and mismatch for each HLA locus between donors and recipients; see a full list of covariates in Appendix \ref{appA}. Let $\delta_{ki} = 1(T_{ki} \le C_{ki})$ be the event indicator and $Y_{ki} = \min(T_{ki}, C_{ki})$ be the observed time. With a center-specific baseline hazard function $\lambda_{0k}(t)$, a stratified Cox model for $T_{ki}$ stipulates that its conditional hazard at $t$ given $X_{ki}$ is \[ \lambda_{ki}(t | X_{ki}) = \lambda_{0k}(t) \exp\{X_{ki}^T \beta^0\}, \] where $\beta^0 = (\beta^0_1, \ldots, \beta^0_p)^T \in \mathbb{R}^p$ is the vector of common regression coefficients across all centers. \lu{It is reasonable to assume that the true regression coefficients $\beta^0$ are the same across strata \citep{kalbfleisch2002statistical}, while the center effects, though not of primary interest here, are accounted for via different baseline hazards $\lambda_{0k}(t)$'s.} \subsection{Estimation method} \label{subsec:dslasso} The MSPLE of $\beta$ minimizes the following {\em negative} log stratified partial likelihood function \begin{equation} \label{eq:negloglik1} \ell(\beta) = -\displaystyle \frac{1}{N} \sum_{k=1}^K \sum_{i=1}^{n_k} \left[ \beta^T X_{ki} - \log \left\{ \frac{1}{n_k} \sum_{j=1}^{n_k} 1(Y_{kj} \ge Y_{ki}) \exp(\beta^T X_{kj}) \right\} \right] \delta_{ki}, \end{equation} where $N = \sum_{k=1}^K n_k$. In SRTR, the number of risk factors, though smaller than the sample size, is fairly large. In this case, our numerical examination shows that MSPLEs are biased and their confidence intervals do not yield nominal coverage. 
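For concreteness, the objective in \eqref{eq:negloglik1} can be evaluated directly from stratified data. The following is a minimal NumPy sketch (the function name and data layout are our own, not from the authors' implementation, and no efficiency is attempted):

```python
import numpy as np

def neg_log_stratified_plik(beta, strata):
    """Negative log stratified partial likelihood, eq. (2).

    strata: list of (X, Y, delta) per stratum, where X is the
    (n_k, p) covariate matrix, Y the observed times, and delta
    the event indicators.
    """
    N = sum(X.shape[0] for X, _, _ in strata)
    total = 0.0
    for X, Y, delta in strata:
        n_k = X.shape[0]
        eta = X @ beta                      # linear predictors X_{ki}^T beta
        for i in range(n_k):
            if delta[i]:
                # stratum-specific risk set {j : Y_kj >= Y_ki}
                denom = np.exp(eta[Y >= Y[i]]).sum() / n_k
                total += eta[i] - np.log(denom)
    return -total / N
```

The risk sets are formed within each stratum, which is exactly what removes the center-specific baseline hazards $\lambda_{0k}(t)$ from the criterion.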
We consider a de-biased approach that has been shown to yield valid inference in linear regression \citep{van2014asymptotically,zhang2014confidence,javanmard2014confidence}. Here we assume that $p<N$ but grows with $N$, which falls into the ``large $N$, diverging $p$'' framework. We extend the de-biased lasso to accommodate stratified Cox models. For a vector $x = (x_1, \ldots, x_p)^T \in \mathbb{R}^p$, define $x^{\otimes 0} = 1$, $x^{\otimes 1} = x$ and $x^{\otimes 2} = x x^T$. Let $\dot{\ell}(\beta)$ and $\ddot{\ell}(\beta)$ be the first and the second order derivatives of $\ell(\beta)$ with respect to $\beta$, i.e. \begin{align*} \dot{\ell}(\beta) &= - \displaystyle \frac{1}{N} \sum_{k=1}^K \sum_{i=1}^{n_k} \left\{ X_{ki} - \displaystyle \frac{\widehat{\mu}_{1k}(Y_{ki}; \beta) }{\widehat{\mu}_{0k} (Y_{ki}; \beta) } \right\} \delta_{ki}, \\ \ddot{\ell}(\beta) & = \displaystyle \frac{1}{N} \sum_{k=1}^K \sum_{i=1}^{n_k} \left\{ \displaystyle \frac{\widehat{\mu}_{2k}(Y_{ki}; \beta)}{\widehat{\mu}_{0k}(Y_{ki}; \beta)} - \left[ \displaystyle \frac{\widehat{\mu}_{1k}(Y_{ki}; \beta)}{\widehat{\mu}_{0k}(Y_{ki}; \beta)} \right]^{\otimes 2} \right\} \delta_{ki}, \end{align*} where $\widehat{\mu}_{rk} (t; \beta) = {n_k}^{-1} \sum_{j=1}^{n_k} 1(Y_{kj} \ge t) X_{kj}^{\otimes r} \exp\{X_{kj}^T \beta\}, ~ r=0, 1, 2$. The lasso estimate, $\widehat{\beta}$, minimizes the penalized negative log stratified partial likelihood, \begin{equation} \label{eq:lasso} \widehat{\beta} = {\arg\min}_{\beta \in \mathbb{R}^p} \{ \ell(\beta) + \lambda \| \beta \|_1 \}, \end{equation} where $\lambda > 0$ is a tuning parameter that encourages sparse solutions. Here, $\| x \|_q = (\sum_{j=1}^p |x_j|^q)^{1/q}$ is the $\ell_q$-norm for $x \in \mathbb{R}^{p}$, $q \ge 1$. As $\widehat{\beta}$ is typically biased, we can obtain the de-biased lasso estimator by a Taylor expansion of $\dot{\ell}(\beta^0)$ around $\widehat{\beta}$. 
To proceed, let $\widehat{M}$ be a $p \times p$ matrix and $\widehat{M}_j$ its $j$th row. Pre-multiplying $\widehat{M}_j$ on both sides of the Taylor expansion and collecting terms, we have the following equality for the $j$th component of $\beta$: \begin{equation} \label{eq:derive_bhat_M} \widehat{\beta}_j - \beta^0_j + \overbrace{ \left( - \widehat{M}_j \dot{\ell}(\widehat{\beta}) \right) }^{I_j} + \overbrace{\left( - \widehat{M}_j {\Delta} \right) }^{II_j} + \overbrace{\left( \widehat{M}_j \ddot{\ell}(\widehat{\beta}) - {e}_j^T \right) \left( \widehat{\beta} - \beta^0 \right)}^{III_j} = - \widehat{M}_j \dot{\ell}(\beta^0), \end{equation} where the remainder $\Delta \in \mathbb{R}^p$ in $II_j$ can be shown to be asymptotically negligible given the convergence rate of the lasso estimator $\widehat{\beta}$, and so is $III_j$ if $\widehat{M}_j \ddot{\ell}(\widehat{\beta}) - e_j^T$ converges to zero at a certain rate, as discussed later in Section \ref{sec:theory}. Hence, the de-biased lasso estimator corrects the bias of $\widehat{\beta}_j$ with a one-step update of \begin{equation} \label{eq:dslasso0} \widehat{b}_j = \widehat{\beta}_j - \widehat{\Theta}_j \dot{\ell}(\widehat{\beta}), \end{equation} which replaces $\widehat{M}_j$ in \eqref{eq:derive_bhat_M} with the $j$-th row of $\widehat{\Theta}$, an estimate of the inverse information matrix $\Theta_{\beta^0}$, and $ - \widehat{\Theta}_j \dot{\ell}(\widehat{\beta})$ is the bias correction term for $\widehat{\beta}_j$. Here, $\Theta_{\beta^0}$ is the inverse of the population version of $\widehat{\Sigma}$ given in \eqref{eq:hatsigma} below; see the explicit definition of $\Theta_{\beta^0}$ underneath \eqref{popsig}. Denote by $\widehat{b} = (\widehat{b}_1, \ldots, \widehat{b}_p)^T$ the vector of the de-biased lasso estimates, and, for compactness, write (\ref{eq:dslasso0}) in matrix form \begin{equation} \label{eq:dslasso} \widehat{b} = \widehat{\beta} - \widehat{\Theta} \dot{\ell}(\widehat{\beta}).
\end{equation} Unlike $\widehat{\beta}$, the de-biased estimator $\widehat{b}$ in (\ref{eq:dslasso}) is no longer sparse. Motivated by \citet{javanmard2014confidence} on high-dimensional inference in linear regression, we propose to obtain $\widehat{\Theta}$ by solving a series of quadratic programming problems. First, we compute \begin{equation} \label{eq:hatsigma} \widehat{\Sigma} = \displaystyle \frac{1}{N}\sum_{k=1}^K \sum_{i=1}^{n_k} \delta_{ki} \left[ X_{ki} - \widehat{\eta}_k(Y_{ki}; \widehat{\beta}) \right]^{\otimes 2}, \end{equation} where $\widehat{\eta}_k(t; \beta) = \widehat{\mu}_{1k}(t; \beta) / \widehat{\mu}_{0k}(t; \beta)$ is the vector of weighted average covariates. We use $\widehat{\Sigma}$, in lieu of $\ddot{\ell} (\widehat{\beta})$, for ease of proving theoretical properties. Indeed, as shown in the Supplementary Material, $\| \widehat{\Sigma} - \ddot{\ell} (\widehat{\beta}) \|_{\infty} \stackrel{p}{\rightarrow}0 $ at a desirable rate under the conditions in Section \ref{sec:theory}. Next, for each $j = 1, \ldots, p$, we solve a quadratic programming problem \begin{equation} \label{eq:qp} \min_{ m \in \mathbb{R}^p} \left\{ m^T \widehat{\Sigma} m: \| \widehat{\Sigma} m - e_j \|_{\infty} \le \gamma \right\}, \end{equation} where $\gamma > 0$ is a tuning parameter that is different from the lasso tuning parameter $\lambda$, $e_j$ is a unit directional vector with only the $j$th element being one, and $\| \cdot \|_{\infty}$ denotes the max norm, i.e. $\| A \|_{\infty} = \max_{i,j} |A_{ij}|$ for a real matrix $A$ (and the sup-norm for a vector). Denote by $m^{(j)}$ the solution to (\ref{eq:qp}). We obtain a $p \times p$ matrix $\widehat{\Theta} = (m^{(1)}, \ldots, m^{(p)})^T$. The constraint $\| \widehat{\Sigma} m - e_j \|_{\infty} \le \gamma$ in \eqref{eq:qp} controls deviations of the de-biased estimates from the lasso estimates.
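To make the construction concrete, the quadratic program \eqref{eq:qp} can be sketched for small $p$ as follows. This is an illustrative Python version using SciPy's SLSQP solver rather than the R \texttt{solve.QP()} routine used later in the paper; the sup-norm constraint is rewritten as two sets of linear inequalities, and the function name is our own:

```python
import numpy as np
from scipy.optimize import minimize

def qp_row(Sigma_hat, j, gamma):
    """One row of Theta-hat: minimize m^T Sigma_hat m subject to
    ||Sigma_hat m - e_j||_inf <= gamma, as in eq. (7)."""
    p = Sigma_hat.shape[0]
    e_j = np.zeros(p)
    e_j[j] = 1.0
    # the sup-norm constraint, split into two smooth (linear) pieces
    cons = [{'type': 'ineq', 'fun': lambda m: gamma - (Sigma_hat @ m - e_j)},
            {'type': 'ineq', 'fun': lambda m: gamma + (Sigma_hat @ m - e_j)}]
    m0 = np.linalg.solve(Sigma_hat, e_j)   # the gamma = 0 solution, feasible
    res = minimize(lambda m: m @ Sigma_hat @ m, m0,
                   method='SLSQP', constraints=cons)
    return res.x
```

With $\widehat{\Sigma}$ equal to the identity and $\gamma = 0.1$, for instance, the solution for $j=1$ shrinks the first coordinate from $1$ to $0.9$ and leaves the rest at zero, illustrating how $\gamma$ trades off closeness to $\widehat{\Sigma}^{-1}$ against a smaller quadratic form.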
In an extreme case of $\gamma = 1$, an admissible solution is $m=0$, and therefore there is no bias correction in the de-biased estimator; in another extreme case of $\gamma=0$, $m^{(j)}$ is the $j$th column of $\widehat{\Sigma} ^{-1}$. We implement \eqref{eq:qp} by using R \texttt{solve.QP()}, which can be programmed in parallel for large $p$. We name the method {\sl de-biased lasso via quadratic programming} (hereafter, DBL-QP). \subsection{Tuning parameter selection} \label{subsec:tuning} For the DBL-QP method, the lasso tuning parameter $\lambda$ can be selected via 5-fold cross-validation as in \citet{simon2011regularization}. The selection of $\gamma$ is crucial as, for example, Figure \ref{fig:tuning} reveals that $\gamma$ should be selected within a specific range (shaded in figures) to achieve the most desirable bias correction and confidence interval coverage probability. It also shows the large bias and poor coverage resulting from MSPLE. Inappropriate tuning can yield even more biased estimates with poorer coverage than MSPLE. Results of lasso and oracle estimates are also provided as references, where oracle estimates are obtained from the reduced model that only contains truly nonzero coefficients. \begin{figure}\label{fig:tuning} \end{figure} Intuitively, $\gamma$ should be chosen near zero, resulting in a de-biased estimator with estimates of large coefficients close to oracle estimates. We do not recommend evaluating cross-validation criteria by plugging in the de-biased estimates because of accumulative estimation errors. We opt for a hard-thresholding approach that more effectively removes noise from the de-biased estimates: we retain the de-biased lasso estimate for $\beta^0_j$ only if the null hypothesis $\beta_j^0 = 0$ is rejected; otherwise, we set it to zero (shown in Algorithm \ref{algo1}). The set $\widehat{A}$ in Step 2.2 of Algorithm \ref{algo1} is expected to estimate well the set of truly associated variables. 
Specifically, we set $\widehat{A}$ to be the index set of variables whose Wald statistics satisfy $ \sqrt{N} | \widehat{b}_j | / \widehat{\Theta}_{jj}^{1/2} > z_{\alpha/(2p)}$, where $z_{\alpha/(2p)}$ is the upper $\{ \alpha/(2p) \}$-th quantile of the standard normal distribution. The cutoff is determined by Theorem \ref{thm:main} and Bonferroni correction for multiple testing. When implementing cross-validation, we can either take the stratum as the sampling unit and randomly split strata, or randomly split observations within each stratum, to form training and testing subsets. We find that the former improves the stability of tuning parameter selection when there are a number of small-sized strata. \begin{algorithm}[ht] \caption{Selection of the tuning parameter $\gamma$ using cross-validation \label{algo1}} \begin{algorithmic} \item[\textbf{Step 1}] Pre-determine a grid of points for $\gamma$ in [0,1], denoted as $\gamma^{(g)}, g=1, \cdots, G$, and set each $cv_g = 0$. \item[\textbf{Step 2}] Randomly assign the $K$ strata into $M$ folds, leaving one fold for testing and the others for training. Set $q = 1$. \item[\quad \textbf{Step 2.1}] While $q \le M$, use the $q$th training set to compute the de-biased lasso estimator with $\gamma^{(g)}, g=1, \cdots, G$, denoted as $\widehat{b}^{(gq)}$, and define the active set $\widehat{A}^{(gq)}$. \item[\quad \textbf{Step 2.2}] Define the thresholded de-biased lasso estimator $\widehat{b}^{(gq)}_{thres,j} = \widehat{b}^{(gq)}_{j} \cdot 1(j \in \widehat{A}^{(gq)})$, $j = 1, \ldots, p$, i.e. setting components of $\widehat{b}^{(gq)}$ outside the active set $\widehat{A}^{(gq)}$ to 0. \item[\quad \textbf{Step 2.3}] Compute the negative log partial likelihood on the $q$th testing set $\ell^{(q)} (\widehat{b}^{(gq)}_{thres})$. \item[\quad \textbf{Step 2.4}] Set $cv_g \leftarrow cv_g + N^{(q)} \ell^{(q)} (\widehat{b}^{(gq)}_{thres})$, for $g = 1, \cdots, G$, where $N^{(q)}$ is the total number of observations in the $q$th testing set.
\item[\quad \textbf{Step 2.5}] Set $q \leftarrow q + 1$ and go to Step 2.1. \item[\textbf{Step 3}] Let $\widehat{g} = \arg\min_{g} cv_g$. The final tuning parameter value is $\gamma^{(\widehat{g})}$. \end{algorithmic} \end{algorithm} \section{Valid statistical inference based on the de-biased lasso estimator} \label{sec:theory} This section presents asymptotic results, which lay the groundwork for using the de-biased lasso estimator described in Section \ref{sec:method} to draw inference on the risk factors of graft failure in the SRTR analysis. The pertinent large-sample framework posits that the number of strata $K$ is fixed, the smallest stratum size $n_{min} = \min_{1 \le k \le K} n_k \rightarrow \infty$, and $ {n_k} / {N} \rightarrow r_k > 0 $ as $n_{min} \rightarrow \infty$, $k=1, \cdots, K$. \lu{This framework conforms to the real-world setting of our concern, where the number of transplant centers nationwide is finite, and the number of patients or transplant events in each center increases over the years.} We provide regularity conditions \lu{and their discussion} in Appendix \ref{appB}, \lu{and present all} the proofs in the Supplementary Material. Let $\mu_{rk} (t; \beta) = {E} [1(Y_{k1} \ge t) X_{k1}^{\otimes r} \exp\{X_{k1}^T \beta\}]$ be the limit of $\widehat{\mu}_{rk} (t; \beta)$, $r = 0, 1, 2, ~ k=1, \cdots, K$. The limit of the weighted covariate average $\widehat{\eta}_k(t; \beta) = \widehat{\mu}_{1k} (t; \beta) / \widehat{\mu}_{0k} (t; \beta)$ is then $\eta_{k0}(t; \beta) = \mu_{1k} (t; \beta) / \mu_{0k} (t; \beta)$. Let \[ \Sigma_{\beta^0, k} = {E} [\{ X_{ki} - \eta_{k0}(Y_{ki}; \beta^0) \}^{\otimes 2} \delta_{ki}] \] be the information matrix for the $k$-th stratum, $k=1, \cdots, K$. The overall information matrix across all strata then becomes the weighted average of the stratum-specific information matrices, \begin{equation} \label{popsig} \Sigma_{\beta^0} = \sum_{k=1}^K r_k \Sigma_{\beta^0, k}.
\end{equation} The inverse information matrix is $\Theta_{\beta^0} = \Sigma_{\beta^0}^{-1}$, which is to be approximated by $\widehat{\Theta}$ obtained in Section \ref{subsec:dslasso}. The following theorem establishes the asymptotic normality of any linear combination of the estimated regression parameters, {$c^T \widehat{b}$} for some loading vector $c \in \mathbb{R}^p$, obtained by the proposed DBL-QP method. For an $m \times r$ matrix $A = (a_{ij})$, define the $\ell_1$-induced matrix norm $\| A \|_{1,1} = \max_{1 \le j \le r} \sum_{i=1}^m |a_{ij}|$. For two positive sequences $\{a_n\}$ and $\{b_n\}$, we write $a_n \asymp b_n$ if there exist two constants $C$ and $C^{\prime}$ such that $0 < C \le a_n / b_n \le C^{\prime} < \infty$. Let $s_0$ be the number of nonzero elements of $\beta^0$. \begin{theorem} \label{thm:main} Assume that the tuning parameters $\lambda$ and $\gamma$ satisfy $\lambda \asymp \sqrt{\log(p)/n_{min}}$ and \\ $\gamma \asymp \| \Theta_{\beta^0} \|_{1,1} \{ \max_{1\le k \le K} | n_k/N - r_k| + s_0 \lambda \} $, and that $\| \Theta_{\beta^0} \|_{1,1}^2 \{ \max_{1 \le k \le K}|n_k/N - r_k| + s_0 \lambda \} p\sqrt{\log(p)} \rightarrow 0$ as $n_{min} \rightarrow \infty$. Under Assumptions \ref{assump1}--\ref{assump5} given in Appendix \ref{appB}, for any $c \in \mathbb{R}^p$ such that $\| c \|_2 = 1$, $\| c \|_1 \le a_*$ with $a_* < \infty$ being an absolute positive constant, and $\{ c^T \Theta_{\beta^0} c \}^{-1} = \mathcal{O}(1)$, we have \[ \displaystyle \frac{\sqrt{N} c^T (\widehat {b} - \beta^0)}{(c^T \widehat{\Theta} c)^{1/2} } \overset{\mathcal{D}}{\rightarrow} \mathcal{N}(0,1). \] \end{theorem} \lu{Note that, instead of listing it as a regularity condition in Appendix \ref{appB}, we assume $\{ c^T \Theta_{\beta^0} c \}^{-1} = \mathcal{O}(1)$ in the above theorem because the vector $c$ is also defined here. 
A similar condition is assumed in \cite{van2014asymptotically} [Theorem 3.3 (vi)], which is weaker than uniformly bounding the maximum eigenvalue of $\Sigma_{\beta^0}$. } Testing $H_{0}: c^T \beta^0 - a_0=0$ versus $H_{1}: c^T \beta^0 - a_0 \ne 0$, for a given vector $c \in \mathbb{R}^p$ and constant $a_0$, has a wide range of applications. For example, by setting $a_0=0$ and $c$ to be a basis vector with only one element being 1 and all the others 0, we can draw inference on any covariate in the presence of all the other covariates. In particular, we will draw inference on the pairwise differences in graft failure risk among donor age groups, e.g. between $(10, 20]$ and $(20,30]$ (the reference level) years old, and among patients with different primary kidney diagnoses (diabetes is the reference level); see Section \ref{sec:app}. Given an appropriately chosen $c$ and with $T = \sqrt{N} (c^T \widehat{b} - a_0) / (c^T \widehat{\Theta} c)^{1/2}$, we construct a two-sided test function \[ \phi(T) = \left\{ \begin{array}{ll} 1 & \quad \mathrm{if} \ |T| > z_{\alpha/2} \\ 0 & \quad \mathrm{if} \ |T| \le z_{\alpha/2} \end{array} \right. \] where $z_{\alpha/2}$ is the upper $(\alpha/2)$-th quantile of the standard normal distribution. Corollary \ref{coro:power} provides the asymptotic type I error and power of the test $\phi(T)$, and Corollary \ref{coro:ci} formalizes the construction of level $\alpha$ confidence intervals for $c^T \beta^0$ which ensure the nominal coverage probability asymptotically. \begin{corollary} \label{coro:power} Under the conditions specified in Theorem \ref{thm:main}, ${P} (\phi(T)=1 | H_0) \rightarrow \alpha$ as $n_{min} \rightarrow \infty$. Moreover, under $H_1: c^T \beta^0 - a_0 \ne 0$, ${P} (\phi(T) = 1 | H_1) \rightarrow 1$. \end{corollary} \begin{corollary} \label{coro:ci} Suppose that the conditions in Theorem \ref{thm:main} hold.
Construct the random confidence interval $\mathcal{R}(\alpha) = \left[ c^T \widehat{b} - z_{\alpha/2} (c^T \widehat{\Theta} c / N)^{1/2},~ c^T \widehat{b} + z_{\alpha/2} (c^T \widehat{\Theta} c / N)^{1/2}\right]$. Then ${P} (c^T \beta^0 \in \mathcal{R}(\alpha)) \rightarrow 1 - \alpha$ as $n_{min} \rightarrow \infty$, where the probability is taken under the true $\beta^0$. \end{corollary} Our asymptotic results facilitate simultaneous inference on multiple contrasts in the context of post-transplant renal graft failure. For example, an important question to address is whether donor age is associated with graft failure. With categorized donor age in our data analysis, simultaneous comparisons among the seven categories, e.g. $\le 10, (10, 20], (20,30], (30,40], (40,50], (50, 60]$ and $60+$, naturally form multiple null contrasts. These contrasts can be formulated by $J \beta^0$, where $J$ is an $m\times p$ matrix, and $m$ represents the number of linear combinations or contrasts. The following theorem and corollary summarize the results for inference on multiple contrasts, $J \beta^0$. See an application of the asymptotic results to the SRTR data with $(m,p)=(6,94)$ in Section \ref{sec:app}. \begin{theorem} \label{thm:simul} Suppose that $J$ is an $m\times p$ matrix with $rank(J) = m$, $\| J \|_{\infty, \infty} = \mathcal{O}(1)$ and $J \Theta_{\beta^0} J^T \rightarrow F$, where $F$ is a nonrandom $m \times m$ positive definite matrix. Assume that the tuning parameters $\lambda$ and $\gamma$ satisfy $\lambda \asymp \sqrt{\log(p)/n_{min}}$ and $\gamma \asymp \| \Theta_{\beta^0} \|_{1,1} \{ \max_{1\le k \le K} | n_k/N - r_k| + s_0 \lambda \} $, and that $\| \Theta_{\beta^0} \|_{1,1}^2 \{ \max_{1 \le k \le K}|n_k/N - r_k| + s_0 \lambda \} p\sqrt{\log(p)} \rightarrow 0$ as $n_{min} \rightarrow \infty$. 
Under Assumptions \ref{assump1}--\ref{assump3}, \ref{assump5} and \ref{assump6} given in Appendix \ref{appB}, we have \[ \sqrt{N} J (\widehat{b} - \beta^0) \overset{\mathcal{D}}{\rightarrow} \mathcal{N}_m (0,F). \] \end{theorem} Here, $\| A \|_{\infty,\infty} = \max_{1 \le i \le m} \sum_{j=1}^r |a_{ij}|$ is the $\ell_{\infty}$-induced matrix norm for an $m \times r$ matrix $A = (a_{ij})$. The theorem implies the following corollary, which constructs test statistics and multi-dimensional confidence regions with proper asymptotic type I error rates and nominal coverage probabilities. \begin{corollary} \label{coro:thm2} Suppose the conditions in Theorem \ref{thm:simul} hold. For an $m \times p$ matrix $J$ as specified in Theorem \ref{thm:simul}, and under $H_0: J \beta^0 = a^0 \in \mathbb{R}^m$, \[T^{\prime} = N (J \widehat{b} - a^0)^T \widehat{F}^{-1} (J \widehat{b} - a^0) \overset{\mathcal{D}}{\rightarrow} \chi^2_m, \] where $\widehat{F} = J \widehat{\Theta} J^T$. Moreover, for an $\alpha \in (0,1)$, define the random set $\mathcal{R}^{\prime}(\alpha) = \{ a \in \mathbb{R}^m: N (J \widehat{b} - a)^T \widehat{F}^{-1} (J \widehat{b} - a) < \chi^2_{m, \alpha} \}$, where $\chi^2_{m, \alpha}$ is the upper $\alpha$-th quantile of $\chi^2_m$. Then ${P} (J\beta^0 \in \mathcal{R}^{\prime}(\alpha)) \rightarrow 1 - \alpha$ as $n_{min} \rightarrow \infty$, where the probability is taken under the true $\beta^0$. \end{corollary} \section{Simulation study} \label{sec:simul} We conduct simulations to examine the finite sample performance of the proposed DBL-QP approach in correcting estimation biases and maintaining nominal coverage probabilities of confidence intervals. For comparisons, we also perform MSPLE, the oracle estimation, \lu{and the three inference methods [``Nodewise'' for \citet{kong2018high}, ``CLIME'' for \citet{yu2018confidence}, and ``Decor'' for \citet{fang2017testing}] that are adapted to stratified Cox models}. 
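The test statistics in Corollaries \ref{coro:power} and \ref{coro:thm2} are straightforward to compute once $\widehat{b}$ and $\widehat{\Theta}$ are available. The following Python sketch is an illustration only (the variable names \texttt{b\_hat} and \texttt{Theta\_hat} are ours, and this is not the packaged implementation): it evaluates the Wald statistic $T = \sqrt{N}(c^T \widehat{b} - a_0)/(c^T \widehat{\Theta} c)^{1/2}$ together with its confidence interval, and the contrast statistic $T' = N (J\widehat{b}-a^0)^T \widehat{F}^{-1}(J\widehat{b}-a^0)$ with $\widehat{F} = J\widehat{\Theta}J^T$.

```python
import numpy as np
from scipy import stats

def wald_test(b_hat, Theta_hat, N, c, a0, alpha=0.05):
    """Two-sided Wald test of H0: c^T beta = a0, with the level-(1-alpha)
    confidence interval from Corollary-style asymptotic normality."""
    T = np.sqrt(N) * (c @ b_hat - a0) / np.sqrt(c @ Theta_hat @ c)
    z = stats.norm.ppf(1 - alpha / 2)            # upper alpha/2 quantile
    half = z * np.sqrt(c @ Theta_hat @ c / N)    # CI half-width
    ci = (c @ b_hat - half, c @ b_hat + half)
    return T, abs(T) > z, ci

def contrast_test(b_hat, Theta_hat, N, J, a0, alpha=0.05):
    """Chi-square test of H0: J beta = a0 for m simultaneous contrasts."""
    F_hat = J @ Theta_hat @ J.T                  # estimated F = J Theta J^T
    d = J @ b_hat - a0
    T2 = N * d @ np.linalg.solve(F_hat, d)       # N d^T F^{-1} d
    crit = stats.chi2.ppf(1 - alpha, df=J.shape[0])
    return T2, T2 > crit
```

For instance, with $\widehat{\Theta} = I_3$, $N=100$ and $\widehat{b} = (0.5, 0, 0)^T$, testing the first coefficient gives $T = 5$ and a rejection at $\alpha = 0.05$.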
The following scenarios pertain to four combinations of $(K, n_k, p)$, where $K, n_k$ and $p$ are the number of strata, the stratum-specific sample size and the number of covariates, respectively. Specifically, Scenarios 1--3 refer to $(K, n_k, p)=(10, 100,10)$, $(10, 100,100)$, and $(5, 200,100)$, respectively. In Scenario 4, $K=40$, $p = 100$, and the $n_k$'s are simulated from a Poisson distribution with mean 40 and then fixed in all of the replications. This scenario mimics the situation of the recipient group aged over 60, the smallest group in the SRTR data. Covariates $X_{ki}$ are simulated from $N_p (0, \Sigma_x)$ and truncated at $\pm 3$, where $\Sigma_x$ has an AR(1) structure with the $(i,j)$-th entry being $0.5^{|i-j|}$. The true regression parameter vector $\beta^0$ is sparse: its first element $\beta_1^0$ varies from 0 to 2 by an increment of 0.2, four additional elements are assigned values of 1, 1, 0.3 and 0.3 with their positions randomly generated and then fixed for all of the simulations, and all other elements are zero. The underlying survival times $T_{ki}$ are simulated from an exponential distribution with hazard $\lambda(t|X_{ki}) = \lambda_{0k} \exp\{X_{ki}^T\beta^0\}$, where the $\lambda_{0k}$ are generated from $\mathrm{Uniform}(0.5,1)$ and then fixed throughout. As in \citet{fang2017testing} and \citet{fan2002variable}, the censoring times $C_{ki}$ are simulated independently from an exponential distribution with hazard $\lambda_c(t|X_{ki}) = 0.2 \lambda_{0k} \exp\{X_{ki}^T\beta^0\}$, resulting in an overall censoring rate of around 20\%. For the lasso estimator, we use 5-fold within-stratum cross-validation to select $\lambda$. In Scenarios 1--3 with small numbers of strata, each stratum serves as a cross-validation fold for the selection of $\gamma$; in Scenario 4 with 40 strata, we perform 10-fold cross-validation as described in Algorithm \ref{algo1} and randomly assign 4 strata to each fold.
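The stratum-level data-generating mechanism above can be sketched in a few lines of Python. This is a minimal illustration under the stated design (AR(1) covariates truncated at $\pm 3$, exponential event times with hazard $\lambda_{0k}\exp\{X_{ki}^T\beta^0\}$, and censoring hazard $0.2\,\lambda_{0k}\exp\{X_{ki}^T\beta^0\}$); the function name is ours, not part of any package.

```python
import numpy as np

rng = np.random.default_rng(2023)

def simulate_stratum(n_k, beta, lam0k, rho=0.5, trunc=3.0):
    """Simulate one stratum: AR(1) covariates truncated at +/- trunc,
    exponential survival times with hazard lam0k * exp(X beta), and
    independent exponential censoring with hazard 0.2 * lam0k * exp(X beta)."""
    p = len(beta)
    idx = np.arange(p)
    Sigma = rho ** np.abs(idx[:, None] - idx[None, :])   # AR(1) covariance
    X = np.clip(rng.multivariate_normal(np.zeros(p), Sigma, size=n_k),
                -trunc, trunc)
    haz = lam0k * np.exp(X @ beta)           # subject-specific hazard
    T = rng.exponential(1.0 / haz)           # latent event times
    C = rng.exponential(1.0 / (0.2 * haz))   # censoring times
    Y = np.minimum(T, C)                     # observed follow-up time
    delta = T <= C                           # event indicator
    return X, Y, delta
```

With proportional censoring hazard $0.2$ times the event hazard, the censoring probability is $0.2/1.2 \approx 17\%$ per subject, consistent with the roughly 20\% overall rate reported above.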
For each parameter configuration, we simulate 100 datasets, based on which we compare estimation biases of $\beta^0_1$, 95\% confidence interval coverage probabilities, model-based standard errors, and empirical standard errors across the \lu{six} methods. \begin{landscape} \begin{figure} \caption{Estimation bias, 95\% coverage, model-based standard error, and empirical standard error for $\beta^0_1$ of six different methods. Black horizontal lines are references to 0 for bias or 95\% for coverage probability.} \label{fig:4scenarios} \end{figure} \end{landscape} Figure \ref{fig:4scenarios} shows that, in Scenario 1, which features a small number of covariates ($p=10$), all \lu{six} methods perform well \lu{and similarly}; in Scenarios 2--4 with a relatively large number of covariates ($p=100$), which is close to the number of covariates in the real data we will analyze, \lu{our proposed DBL-QP estimator effectively corrects the biases of the lasso estimates and maintains good confidence interval coverage (excluding the practically impossible ``Oracle'' estimator), whereas MSPLE, Nodewise, Decor and CLIME all present larger biases than DBL-QP as $\beta^0_1$ increases from 0 to 2. CLIME, Nodewise and MSPLE have worse confidence interval coverage in general. As de-biased lasso methods, CLIME and Nodewise produce much smaller model-based standard error estimates, which also contribute to their poor coverage probabilities. This is likely because both methods use penalized estimators for inverse information matrix estimation, and such penalization induces biases towards zero.} To recapitulate, the proposed DBL-QP provides less biased estimates and better confidence interval coverage than \lu{the conventional MSPLE and three other competitors (Nodewise, Decor and CLIME adapted to the stratified setup)} when the sample size is moderate relative to the number of covariates, although \lu{all methods} give almost identical results when $p$ is rather small.
Hence, when $p < N$, our proposed DBL-QP approach is at least as good as \lu{all the other methods}, and should be recommended for use. \section{Analysis of the SRTR kidney transplant data} \label{sec:app} \lu{The SRTR data set features 94 covariates from both donors and recipients, and the number of covariates is relatively large for some recipient groups. With its reliable performance as demonstrated in simulations, we apply our DBL-QP approach to analyze the SRTR data, while using MSPLE as a benchmark.} The outcome is graft failure free survival, the time from transplant to graft failure or death, whichever comes first. Our primary goal is to investigate the joint associations of these covariates with graft failure separately for the three recipient groups defined in Table \ref{tab:character}. By simultaneously considering all available donor and recipient covariates, we aim to account for confounding and provide asymptotically valid inference for the covariate effects, which differs from post hoc inference that only focuses on a smaller set of covariates selected by stepwise selection. The effect of donor age, in the presence of other risk factors, is worth investigating, as the debatable ``one-size-fits-all'' practice of donor-recipient age matching may not serve the overall benefit of transplantation \citep{keith2004effect,veroux2012age,dayoub2018effects}. \subsection{Data details} Included in our analysis are 9,195 recipients who received kidney-only transplants from deceased donors, had no prior solid organ transplants, and were at least 18 years old at the time of transplantation during 2000 and 2001. We restrict attention to this cohort period in order to eliminate cohort effects. Moreover, this group of patients had longer follow-up than those from later cohort years. See Appendix \ref{appA} for a full list of the variables included in the analysis.
In the three recipient age groups, respectively, the sample sizes are 3388, 4359 and 1448, the censoring rates are 53.1\%, 46.5\% and 30.0\%, the median numbers of patients within each transplant center are 32, 31 and 27, and the restricted mean survival times by $13$ years are 9.1, 8.6 and 7.1 years. To select the tuning parameters, we implement 5-fold cross-validation by randomly selecting one fifth of transplant centers without replacement as testing data and the rest as training data. \subsection{Results} We begin by examining the overall effect of donor age on graft failure and testing the null hypothesis that, within each recipient group and after adjusting for the other risk factors, all the donor age groups, i.e. $\le 10, (10, 20], (20,30], (30,40], (40,50], (50, 60]$ and $60+$, have the same risk of graft failure. Based on Theorem \ref{thm:simul} and Corollary \ref{coro:thm2}, with $(m,p)=(6,94)$, we perform tests for the null contrasts, and the resulting statistics reject the null hypotheses for all three recipient groups (within recipients aged 18-45: $\chi^2=40.4$, df=6, p-value=$3.9 \times 10^{-7}$; recipients aged 45-60: $\chi^2=34.5$, df=6, p-value=$5.3 \times 10^{-6}$; recipients aged over 60: $\chi^2=14.2$, df=6, p-value=$2.8 \times 10^{-2}$). Indeed, Figure \ref{fig:don_age}, which depicts the risk-adjusted effect of donor age across the three recipient age groups, shows a general trend of increasing hazards for those receiving kidneys from older donors, likely due to renal aging. The estimates and confidence intervals obtained by our proposed DBL-QP differ from those obtained by MSPLE, and the differences are most obvious in the $60+$ year recipient group, which has the smallest sample size. As shown in our simulations, MSPLE may produce biased estimates with improper confidence intervals, especially when the sample size is relatively small.
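The group-level p-values are directly recoverable from the $\chi^2$ statistics with 6 degrees of freedom; a two-line Python check (our own illustration) reproduces them up to rounding of the reported statistics.

```python
from scipy.stats import chi2

# Reported chi-square statistics for the donor-age contrasts (df = 6)
chisq = {"18-45": 40.4, "45-60": 34.5, "60+": 14.2}
pvals = {group: chi2.sf(stat, df=6) for group, stat in chisq.items()}
# These evaluate to roughly 3.8e-07, 5.4e-06 and 2.7e-02, matching the
# reported 3.9e-07, 5.3e-06 and 2.8e-02 up to rounding of the statistics.
```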
On the other hand, the proposed DBL-QP method may shed new light on the donor aging effect, which appears to be non-linear with respect to donor age. First, using the results of Theorem \ref{thm:main} and Corollary \ref{coro:ci}, our tests detect no significant differences in hazards between those receiving kidneys from donors aged under 10 or $(10, 20]$ years and the $(20, 30]$ (reference level) category, within all three recipient age groups. Second, significantly increased hazards are observed as early as donor age 30-40, as compared to the reference level of $(20, 30]$, in the 18-45 years old recipient group, with an estimated hazard ratio (HR) of 1.16 (95\% CI: 1.01--1.34, p-value=$4.1\times 10^{-2}$). In contrast, there are no significant differences between receiving organs from $(30,40]$ years old donors and the reference level of $(20, 30]$, among the 45-60 years old recipients (HR= 0.96, 95\% CI: 0.85--1.09, p-value=$5.1\times 10^{-1}$) and the $60+$ years old recipients (HR=1.07, 95\% CI: 0.88--1.30, p-value=$5.0\times 10^{-1}$). Third, kidneys from $60+$ years old donors confer the highest hazards, with the estimated risk-adjusted HRs (compared to the reference level $(20, 30]$) being 1.83 (95\% CI: 1.48--2.28, p-value=$4.3 \times 10^{-8}$), 1.40 (95\% CI: 1.21--1.61, p-value=$4.1\times 10^{-6}$) and 1.37 (95\% CI: 1.14--1.63, p-value=$5.2 \times 10^{-4}$) in the three recipient age groups respectively. This means that recipients aged 18-45 tend to experience a greater hazard of graft failure than older recipients when receiving kidneys from donors over 60 years old. Caution needs to be exercised when allocating kidneys from older donors to young patients \citep{lim2010donor,kabore2017age,dayoub2018effects}.
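As a sanity check, each reported Wald p-value is consistent with its HR and 95\% CI: on the log-HR scale, the standard error is approximately the CI width divided by $2 z_{0.025}$. A small Python helper (our own, not part of the software) backs out the implied p-value.

```python
import numpy as np
from scipy.stats import norm

def wald_p_from_hr(hr, lo, hi, level=0.95):
    """Back out the two-sided Wald p-value implied by a hazard ratio and
    its confidence interval: on the log scale, se ~ (log hi - log lo)/(2 z)."""
    z_half = norm.ppf(0.5 + level / 2)          # 1.96 for a 95% CI
    se = (np.log(hi) - np.log(lo)) / (2 * z_half)
    z = np.log(hr) / se
    return 2 * norm.sf(abs(z))

# HR 1.83 (95% CI 1.48--2.28) in the 18-45 group implies p ~ 4.2e-08,
# consistent with the reported 4.3e-08 up to rounding.
```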
\begin{figure} \caption{Estimated hazard ratios and the corresponding $95\%$ confidence intervals of different donor age categories with reference to the $(20, 30]$ donor age category, after adjusting for all other variables, in three recipient groups. } \label{fig:don_age} \end{figure} Our method also delineates the associations of clinical indicators with graft failure, provides more reliable inference, and compares the relative strengths across recipient age groups. \lu{Naively applying the lasso selects 64, 44 and 27 covariates with non-zero coefficients in the 18-45, 45-60, and 60$+$ years old recipient groups, respectively. In contrast, the proposed DBL-QP identifies 22, 22 and 14 significant covariates in these three recipient groups, respectively, through rigorous hypothesis tests of size 0.05 based on the asymptotic distribution.} Figure \ref{fig:ci_three} shows the estimated coefficients and their 95\% confidence intervals for covariates that are significant at level 0.05 in at least one recipient group. We highlight several noteworthy results. First, recipients' primary kidney diagnosis plays a critical role in kidney graft failure \citep{wolfe1991survival}. Compared to recipients with a primary diagnosis of diabetes (the reference level), those with polycystic kidneys (variable 2 in Figure \ref{fig:ci_three}) have a reduced risk of graft failure, with highly significant lower HRs of 0.54 (95\% CI: 0.42--0.70, p-value=$3.6 \times 10^{-6}$), 0.65 (95\% CI: 0.57--0.75, p-value=$4.4 \times 10^{-9}$) and 0.74 (95\% CI: 0.60--0.92, p-value=$5.3 \times 10^{-3}$) for the three age groups respectively.
Compared to diabetes, a primary diagnosis of glomerular disease (variable 26 in Figure \ref{fig:ci_three}) is significantly associated with a reduced risk of graft failure only in the $60+$ years old recipient group (HR=0.79, 95\% CI: 0.66--0.96, p-value=$1.4 \times 10^{-2}$), and a primary diagnosis of hypertensive nephrosclerosis (variable 29 in Figure \ref{fig:ci_three}) is significantly associated with a higher hazard of graft failure only in the 45-60 years old recipient group (HR=1.12, 95\% CI: 1.01--1.23, p-value=$2.5 \times 10^{-2}$). Second, since diabetes is the most prevalent condition among end-stage renal patients \citep{kovesdy2010glycemic}, we code recipients' diabetic status at transplant as non-diabetic (reference level), diabetic for 0-20 years (variable 13 in Figure \ref{fig:ci_three}), and 20+ years (variable 3 in Figure \ref{fig:ci_three}). Our stratified analysis reveals that diabetes is a stronger risk factor for young recipients aged between 18 and 45 years old than for older recipients, regardless of the duration of diabetes. Third, instead of using the total number of mismatches as done in the literature, we consider the number of mismatches separately for each HLA locus to more precisely pinpoint the effects of mismatching loci. Our results reveal that the HLA-DR mismatches (variable 9 in Figure \ref{fig:ci_three}) are more strongly associated with graft failure than the HLA-A (variable 18 in Figure \ref{fig:ci_three}) and HLA-B mismatches (non-significant in any recipient group), which is consistent with a meta-analysis based on 500,000 recipients \citep{shi2018impact}. Finally, to study the granular impact of recipient age on graft failure \citep{karim2014recipient}, we treat recipient age (divided by 10) as a continuous variable (variable 4 in Figure \ref{fig:ci_three}) in the model within each recipient age group.
Interestingly, we find that increasing age is associated with a higher hazard in the two older recipient groups (HR=1.31, 95\% CI: 1.19--1.44, p-value=$1.3\times 10^{-8}$, for recipients aged 45-60; HR=1.22, 95\% CI: 1.07--1.40, p-value=$3.6\times 10^{-3}$, for recipients aged 60+), but with a lower hazard of graft failure in the 18-45 recipient age group (HR=0.89, 95\% CI: 0.83--0.95, p-value=$5.2 \times 10^{-4}$). This is likely because younger patients generally had poorer adherence to treatment, resulting in higher risks of graft loss \citep{kabore2017age}. The results also reinforce the necessity of separate analyses for different recipient age groups. \begin{figure} \caption{Estimated regression coefficients in the stratified Cox models using the proposed DBL-QP method, and the corresponding $95\%$ confidence intervals, presented by recipient age group. The covariates included are significant at level 0.05 in at least one recipient group, after adjusting for all other covariates.} \label{fig:ci_three} \end{figure} As a side note, we compare DBL-QP and MSPLE in terms of the estimated coefficients and standard errors. Figure \ref{fig:QPvsMPLE_all} shows that in the 45-60 age group with the largest number of subjects, the point estimates obtained by the two methods almost coincide with each other, whereas in the $60+$ age group with the smallest sample size, MSPLE tends to have larger absolute estimates than the de-biased lasso. Moreover, the standard errors estimated by MSPLE tend to be larger than those by our method across all the age groups. These observations agree with the results of our simulations (Scenarios 2--4), which show that MSPLE yields large biases in estimated coefficients and standard errors, especially when the sample size is relatively small, whereas our proposed DBL-QP method draws more valid inferences by maintaining proper type I errors and coverage probabilities.
\begin{figure} \caption{Comparison between the coefficient estimates (top) and the model-based standard errors (bottom) by the de-biased lasso (DBL-QP) and the maximum stratified partial likelihood estimation (MSPLE) in three recipient age groups. The solid red lines are 45-degree reference lines, and the dotted blue lines in the top figures represent the fitted linear regression of the DBL-QP estimates on the MSPLE estimates.} \label{fig:QPvsMPLE_all} \end{figure} \section{Concluding remarks} \label{sec:discuss} This work is motivated by an urgent call for a better understanding of the complex mechanisms behind post-kidney transplant graft failure. Our modeling framework is Cox models stratified by transplant centers, which have strong confounding effects on graft failure. To adjust for confounders to the extent possible, we have included an extended list of 94 covariates from recipients and donors, which has not been done in the literature. A particular scientific question to address is the debatable donor-recipient age matching criterion in kidney transplantation. Fitting separate models by recipient age enables direct assessments of the donor age effects in different recipient age groups, which differs from using the donor-recipient age difference as in \citet{ferrari2011effect}. Specifically, we have followed a common practice of fitting separate models in age groups of 18-45, 45-60 and $60+$ years. The commonly used MSPLE yielded biased estimates and unreliable inference in some smaller age groups, even though the samples outnumbered the covariates. In particular, the $60+$ years recipient group had only 1448 recipients in 43 different transplant centers, and MSPLE yielded more extreme estimates for the effects of donor ages over 30 years (Figure \ref{fig:don_age}). Our simulation results also confirmed this problematic phenomenon.
Therefore, a statistical method that can guarantee reliable estimates and valid inference is much needed for delineating the associations of interest with graft failure when the number of covariates is relatively large in stratified Cox models. Inspired by the de-biased lasso method for linear regression \citep{javanmard2014confidence}, we have developed a de-biased lasso approach via quadratic programming for stratified Cox models. \lu{Despite progress made in high-dimensional inference for Cox models, virtually no work has considered stratified settings, theoretically or empirically. We have shown that in the ``large $N$, diverging $p$'' scenario, our approach possesses desirable asymptotic properties and finite-sample performance, and is more suitable for the analysis of the SRTR data than the competing methods illustrated in our simulation studies. Computationally, based on a previous work on Cox models without stratification \citep{xia2021cox}, for the estimation of $\Theta_{\beta^0}$, the computational speed using \texttt{solve.QP} in R was much faster than that using the R packages \texttt{clime} or \texttt{flare} adopted by \citet{yu2018confidence}. } Applications of our method to the SRTR data generated new biological findings. 
After categorizing donors' age and controlling for other risk factors listed in Appendix \ref{appA}, we find that organs from older donors are associated with an increased hazard of graft failure and that the dependence on donors' age is non-linear: within the youngest recipient group (18-45 years), significant differences from the reference donor age category (20-30 years) were detected as early as when donors reached 30-40 years old, whereas significant differences were detected only when donors reached 50-60 or 60+ years within the two older recipient groups, respectively; in other words, within the youngest recipient group, the contrasts between older and younger donors, such as $60+$ versus 20-30 years, were larger than in the other two recipient groups. These results, which have not been reported in the literature, may provide new empirical evidence to aid stakeholders, such as patients, families, physicians and policy makers, in decisions on donor-recipient age matching. \lu{A few technical points are noteworthy. First,} our work deals with the ``large $N$, diverging $p$'' scenario, as embedded in the motivating data, and approximates $\Theta_{\beta^0}$ via quadratic programming without positing any sparsity conditions on $\Theta_{\beta^0}$. This distinguishes our work from the related literature \citep{fang2017testing,yu2018confidence,kong2018high} \lu{in the ``large $p$, small $N$'' scenario} that relies upon sparsity conditions on the numbers of non-zero elements in the rows of $\Theta_{\beta^0}$, which \lu{are seldom discussed in depth and may neither hold nor have explicit interpretations for Cox models}. \lu{For example, when the rows of $\Theta_{\beta^0}$ are not sparse, our dimension requirement for $p$ is less stringent than that in \citet{yu2018confidence}, by a factor of $\sqrt{\log(Np)}$.
} \lu{Moreover, when $p > N$, the several de-biased methods mentioned above may not yield reliable inference results, as empirically $\Theta_{\beta^0}$ cannot be estimated well, and biases in the lasso estimator are often not sufficiently corrected for in this scenario for Cox models. New approaches, such as sample-splitting approaches \citep{fei2021estimation} that bypass the estimation of $\Theta_{\beta^0}$, can be consulted.} \lu{Second, tuning parameter selection is critical in high-dimensional inference. Our proposed method deploys a single tuning parameter $\gamma$ for de-biasing the estimates of all $\beta_j$'s. This is a computationally feasible and commonly adopted strategy, with satisfactory performance in our numerical studies, and can be extended to adapt to the variability of individual coefficient estimation. For example, one may consider the following estimation procedure for the $j$th row of $\widehat{\Theta}$ along the lines of the adaptive CLIME \citep{cai2016estimating}: \[ \min_m \{ m^T \widehat{\Sigma} m: | (\widehat{\Sigma} m - e_j)_k | \le \gamma_{jk}, k = 1, \ldots, p \}. \] Here, the $\gamma_{jk}$'s would be adaptively estimated through a carefully designed procedure. However, designing such a procedure requires complicated theoretical analysis in Cox models, unstratified or stratified, to determine the desirable rates of the $\gamma_{jk}$'s, among other tasks. Given that such complexity is beyond the scope of this paper, we will not pursue this route here in detail but leave it for future research.} \lu{ Third, though primarily focusing on the associations between the risk factors and survival (through Theorem \ref{thm:main}), the proposed method can be used for patient risk scoring and conditional survival probability estimation. For example, the de-biased estimates may be plugged into Breslow's estimator \citep{kalbfleisch2002statistical} for stratum-specific baseline hazards.
The conditional survival probability estimation may not go beyond the time point $\tau$ due to censoring. } \lu{ Lastly, we use Cox models stratified by transplant centers to account for but avoid explicitly modeling the center effects. Alternatively, random effects models can be used for clustered survival data analysis; for example, \citet{vaida2000proportional} generalized the usual frailty model to allow multivariate random effects. However, in a random effects model, the distribution of random effects needs to be specified, and the coefficients only have conditional interpretations, given a cluster. We may pursue this elsewhere. } \lu{We have implemented the proposed DBL-QP method with cross-validation in R and Rcpp, which is available online at \href{https://github.com/luxia-bios/StratifiedCoxInference/}{https://github.com/luxia-bios/StratifiedCoxInference/} with simulated examples.} \section*{Appendix} \begin{appendix} \section{SRTR data}\label{appA} The SRTR dataset analyzed in this article can be accessed by applying through the OPTN website \href{https://optn.transplant.hrsa.gov}{https://optn.transplant.hrsa.gov}. The interpretation and reporting of the SRTR data results are solely the responsibility of the authors and should not be viewed as official opinions of the SRTR or the United States Government. The 94 covariates, including dummy variables, are derived from the following factors. Donor factors include: ABO blood type, age, cytomegalovirus antibody, hepatitis C virus antibody, cause of death, cardiac arrest since event leading to declaration of death, serum creatinine, medication given to donor (DDAVP, dopamine and dobutamine), gender, height, history of cancer, cigarette smoking, history of drug abuse, hypertension, diabetes, inotropic support, inotropic agents at time of incision, non-heart beating donor, local or shared organ transplant, race, and weight. 
Recipient factors include: ABO blood type, history of diabetes and duration, angina/coronary artery disease, symptomatic peripheral vascular disease, drug treated systemic hypertension, drug treated COPD, gender (and previous pregnancies for females), sensitization (whether peak and/or current panel-reactive antibodies exceed 20\%), previous malignancy, peptic ulcer disease, symptomatic cerebrovascular disease, race, total serum albumin, age at transplant, number of HLA mismatches (A, B and DR), cytomegalovirus status, total cold ischemic time, primary kidney diagnoses, pre-transplant dialysis and duration, the Epstein–Barr virus serology status, employment status, hepatitis B virus status, hepatitis C virus status, height, pre-implantation kidney biopsy, pre-transplant blood transfusions, transplant procedure type, warm ischemic time and weight. \section{Regularity conditions}\label{appB} \renewcommand{\theassumption}{B.\arabic{assumption}} Assumptions \ref{assump1}--\ref{assump5} below ensure that Theorem \ref{thm:main} holds. \begin{assumption}\label{assump1} Covariates are almost surely uniformly bounded, i.e. $\| X_{ki} \|_{\infty} \le M$ for some positive constant $M<\infty$ for all $k$ and $i$. \end{assumption} \begin{assumption} \label{assump2} $| X_{ki}^T \beta^0 | \le M_1$ uniformly for all $k$ and $i$ with some positive constant $M_1 < \infty$ almost surely. \end{assumption} \begin{assumption} \label{assump3} The follow-up time stops at a finite time point $\tau > 0$, with probability $\pi_0 = \min_{k} P (Y_{ki} \ge \tau) > 0$.
\end{assumption} \begin{assumption} \label{assump4} \lu{ For any $t\in [0, \tau]$, \[ \frac{c^T \Theta_{\beta^0}}{c^T \Theta_{\beta^0} c} \left[ \sum_{k=1}^K r_k \int_0^t \left\{ \mu_{2k}(u; \beta^0) - \frac{\mu_{1k}(u; \beta^0) \mu_{1k}(u; \beta^0)^T}{\mu_{0k}(u; \beta^0)} \right\} \lambda_{0k}(u) d u \right] \Theta_{\beta^0} c \rightarrow v(t; c) \] as $n \rightarrow \infty$ for some function $v(t; c) > 0$ of $t$ that also depends on the choice of $c$. } \end{assumption} \begin{assumption} \label{assump5} \lu{There exists a constant $\epsilon_0>0$ such that $\lambda_{\mathrm{min}}(\Sigma_{\beta^0}) \ge \epsilon_0$, where $\lambda_{\mathrm{min}} (\cdot)$ is the smallest eigenvalue of a matrix. } \end{assumption} For inference on multiple linear combinations or contrasts as described in Theorem \ref{thm:simul}, Assumption \ref{assump4} needs to be replaced with the following Assumption \ref{assump6}, which is a multivariate version of Assumption \ref{assump4}. \begin{assumption} \label{assump6} \lu{ For any $\omega \in \mathbb{R}^{m}$ and any $t\in [0, \tau]$, \[ \frac{\omega^T J \Theta_{\beta^0} }{\omega^T J \Theta_{\beta^0} J^T \omega} \left[ ~ \sum_{k=1}^K r_k \int_0^t \left\{ \mu_{2k}(u; \beta^0) - \frac{\mu_{1k}(u; \beta^0) \mu_{1k}(u; \beta^0)^T}{\mu_{0k}(u; \beta^0)} \right\} d\Lambda_{0k}(u) \right] \Theta_{\beta^0} J^T \omega \] converges to $v^{\prime}(t; \omega, J)$ as $n \rightarrow \infty$, for some function $v^{\prime}(t; \omega, J) > 0$ of $t$, that also depends on the choice of $\omega$ and $J$. } \end{assumption} \lu{It is common in the literature of high-dimensional inference to assume bounded covariates as in Assumption B.1. \citet{fang2017testing} and \citet{kong2018high} also posed Assumption B.2 for Cox models, i.e. uniform boundedness on the multiplicative hazard. Under Assumption B.1, Assumption B.2 can be implied by the bounded overall signal strength $\| \beta^0 \|_1$. 
Assumption B.3 is a common assumption in survival analysis \citep{andersen1982cox}. Assumption B.4 and its multivariate version, Assumption B.6, ensure the convergence of the variation process, which is key in applying the martingale central limit theorem. They are less stringent compared with the boundedness assumption on $\| \Theta_{\beta^0} X_{ki} \|_{\infty}$ that is equivalent to the assumptions for statistical inference in \citet{van2014asymptotically} on high-dimensional generalized linear models and in \citet{fang2017testing} on high-dimensional Cox models. The boundedness of the smallest eigenvalue of $\Sigma_{\beta^0}$ away from zero in Assumption B.5 is common in inference for high-dimensional models \citep{van2014asymptotically,kong2018high}. Since we focus on random designs, unlike \citet{huang2013oracle}, \citet{yu2018confidence} and \citet{fang2017testing}, we do not directly assume the compatibility condition on $\ddot{\ell}(\beta^0)$; instead, we impose Assumption B.5 on the population-level matrix $\Sigma_{\beta^0}$, which leads to the compatibility condition holding for a given data set with probability tending to one.} \end{appendix} \setcounter{section}{0} \renewcommand{\thesection}{S.\arabic{section}} \renewcommand{\theequation}{S.\arabic{equation}} \renewcommand{\thelemma}{S.\arabic{lemma}} \begin{center} {\bf \Large Supplement to ``De-biased lasso for stratified Cox models with application to the national kidney transplant data"} \\[1em] {\large Lu Xia\textsuperscript{a}, Bin Nan\textsuperscript{b}, and Yi Li\textsuperscript{c} \\[1em] } {\small \textsuperscript{a}Department of Biostatistics, University of Washington, Seattle, WA \\ \textsuperscript{b}Department of Statistics, University of California, Irvine, CA \\ \textsuperscript{c}Department of Biostatistics, University of Michigan, Ann Arbor, MI } \end{center} For completeness of presentation, we first provide some useful lemmas and their proofs, and then give the proofs of the main
theorems in this supplementary material. Since the corollaries are direct consequences of the corresponding theorems, their proofs are straightforward and thus omitted. \section{Technical Lemmas} For the $i$th subject in the $k$th stratum, define the counting process $N_{ki}(t) = 1(Y_{ki} \le t, \delta_{ki} = 1)$. The corresponding intensity process is $A_{ki}(t; \beta) = \int_0^t 1 (Y_{ki} \ge s) \exp(X_{ki}^T \beta) d\Lambda_{0k}(s)$, where $\Lambda_{0k}(t) = \int_{0}^{t} \lambda_{0k}(s) ds$ is the baseline cumulative hazard function for the $k$th stratum, $k=1, \cdots, K$, $i = 1, \cdots, n_k$. Let $M_{ki}(t; \beta) = N_{ki}(t) - A_{ki}(t; \beta)$; then $M_{ki}(t; \beta^0)$ is a martingale with respect to the filtration $\mathcal{F}_{ki}(t) = \sigma \{ N_{ki}(s), 1(Y_{ki} \ge s), X_{ki}: s \in (0, t] \}$. Recall that the stratum-specific weighted covariate process is $\widehat{\eta}_k(t; \beta) = \widehat{\mu}_{1k} (t; \beta) / \widehat{\mu}_{0k} (t; \beta)$, where $\widehat{\mu}_{rk} (t; \beta) = (1/ n_k) \sum_{i=1}^{n_k} 1(Y_{ki} \ge t) X_{ki}^{\otimes r} \exp\{X_{ki}^T \beta\}$. Their population-level counterparts are $\mu_{rk} (t; \beta) = {E} [1(Y_{k1} \ge t) X_{k1}^{\otimes r} \exp\{X_{k1}^T \beta\}]$ and $\eta_{k0}(t; \beta) = \mu_{1k} (t; \beta) / \mu_{0k} (t; \beta)$, $r = 0, 1, 2$, $k=1, \cdots, K$. It is easily seen that the process $\{X_{ki} - \widehat{\eta}_k(t; \beta^0)\}$ is predictable with respect to the filtration $\mathcal{F}(t) = \sigma \{ N_{ki}(s), 1(Y_{ki} \ge s), X_{ki}: s \in (0, t], k=1, \cdots, K, ~ i=1, \cdots, n_k \}$.
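The mean-zero martingale property $E[M_{ki}(t; \beta^0)] = 0$, which drives the arguments below, can be checked numerically. The following Python sketch is our own illustration under simplifying assumptions (a scalar bounded covariate, an exponential baseline hazard $\lambda_0$, and independent exponential censoring), for which $A(t; \beta^0) = \lambda_0 e^{x\beta^0} (Y \wedge t)$ in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def martingale_mean(t, lam0=0.7, beta=0.5, cens_rate=0.3, n=200_000):
    """Monte-Carlo estimate of E[M(t)] for M(t) = N(t) - A(t; beta^0);
    the martingale property implies the estimate is ~0 for every t."""
    x = np.clip(rng.normal(size=n), -3, 3)   # bounded covariate (Assumption B.1)
    haz = lam0 * np.exp(beta * x)            # subject-specific hazard
    T = rng.exponential(1.0 / haz)           # event times
    C = rng.exponential(1.0 / cens_rate)     # independent censoring times
    Y = np.minimum(T, C)
    delta = T <= C
    N_t = (Y <= t) & delta                   # counting process N(t)
    A_t = haz * np.minimum(Y, t)             # A(t) = int_0^t 1(Y>=s) haz ds
    return float(np.mean(N_t - A_t))
```

With $n = 2 \times 10^5$ draws, the Monte-Carlo error of the mean is on the order of $10^{-3}$, so the estimate sits near zero at any $t$, as the martingale property dictates.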
\begin{lemma} \label{chap4:lemma:mom} Under Assumptions B.1--B.3, for $k = 1, \cdots, K$, we have \begin{align*} & \sup_{t \in [0, \tau]} | \widehat{\mu}_{0k}(t; \beta^0) - \mu_{0k}(t; \beta^0) | = \mathcal{O}_{P}(\sqrt{\log(p) / n_k}), \\ & \sup_{t \in [0, \tau]} \| \widehat{\mu}_{1k}(t; \beta^0) - \mu_{1k}(t; \beta^0) \|_{\infty} = \mathcal{O}_{P}(\sqrt{\log(p)/n_k}), \\ & \sup_{t \in [0, \tau]} \| \widehat{\eta}_k(t; \beta^0) - \eta_{k0}(t; \beta^0) \|_{\infty} = \mathcal{O}_{P}(\sqrt{\log(p)/n_k}). \end{align*} \end{lemma} Lemma \ref{chap4:lemma:mom} is the extension of the results of Lemma A1 in \citet{xia2021cox} to each of the $K$ strata. We omit its proof here. \begin{lemma} \label{chap4:lemma:lead} Assume $ \| \Theta_{\beta^0} \|_{1,1}^2 \{ \sqrt{\log(p) / n_{min}} + \max_k | n_k/N - r_k | \} \to 0$, and $\{ c^T \Theta_{\beta^0} c \}^{-1} = \mathcal{O}(1)$. Under Assumptions B.1--B.5, for any $c \in \mathbb{R}^p$ such that $\| c \|_2 = 1$ and $\| c \|_1 \le a_*$ with some absolute constant $a_* < \infty$, we have \[ \displaystyle \frac{ \sqrt{N} c^T \Theta_{\beta^0} \dot{\ell}(\beta^0)}{\sqrt{c^T \Theta_{\beta^0} c}} \overset{\mathcal{D}}{\rightarrow} N(0,1). 
\] \end{lemma} \begin{proof}[\textbf{Proof of Lemma \ref{chap4:lemma:lead}}] We rewrite \begin{align} \label{chap4:eq:clt_decomp} \displaystyle \frac{- \sqrt{N} c^T \Theta_{\beta^0} \dot{\ell}(\beta^0)}{\sqrt{c^T \Theta_{\beta^0} c}} & = \displaystyle \frac{1}{\sqrt{N}} \sum_{k=1}^K \sum_{i=1}^{n_k} \frac{c^T \Theta_{\beta^0}}{\sqrt{c^T \Theta_{\beta^0} c}} \left\{ X_{ki} - \frac{\widehat{\mu}_{1k}(Y_{ki}; \beta^0)}{\widehat{\mu}_{0k}(Y_{ki}; \beta^0)} \right\} \delta_{ki} \nonumber \\ & = \frac{1}{\sqrt{N}} \sum_{k=1}^K \sum_{i=1}^{n_k} \int_{0}^{\tau} \frac{c^T \Theta_{\beta^0}}{\sqrt{c^T \Theta_{\beta^0} c}} \left\{ X_{ki} - \frac{\widehat{\mu}_{1k}(t; \beta^0)}{\widehat{\mu}_{0k}(t; \beta^0)} \right\} dN_{ki}(t) \nonumber \\ & = \frac{1}{\sqrt{N}} \sum_{k=1}^K \sum_{i=1}^{n_k} \int_{0}^{\tau} \frac{c^T \Theta_{\beta^0}}{\sqrt{c^T \Theta_{\beta^0} c}} \left\{ X_{ki} - \frac{\widehat{\mu}_{1k}(t; \beta^0)}{\widehat{\mu}_{0k}(t; \beta^0)} \right\} dM_{ki}(t). \end{align} Denote $\displaystyle U(t) = \frac{1}{\sqrt{N}} \sum_{k=1}^K \sum_{i=1}^{n_k} \int_{0}^{t} \frac{c^T \Theta_{\beta^0}}{\sqrt{c^T \Theta_{\beta^0} c}} \left\{ X_{ki} - \frac{\widehat{\mu}_{1k}(s; \beta^0)}{\widehat{\mu}_{0k}(s; \beta^0)} \right\} dM_{ki}(s)$. Then the variation process for $U(t)$ is \begin{align} \langle U \rangle (t) & \displaystyle = \sum_{k=1}^K \sum_{i=1}^{n_k} \frac{1}{N} \int_0^t \frac{c^T \Theta_{\beta^0}}{c^T \Theta_{\beta^0} c} \left\{ X_{ki} - \widehat{\eta}_k(u; \beta^0) \right\}^{\otimes 2} 1(Y_{ki} \ge u) e^{X_{ki}^T \beta^0} d\Lambda_{0k}(u) \Theta_{\beta^0} c \nonumber \\ & = \displaystyle \frac{c^T \Theta_{\beta^0}}{c^T \Theta_{\beta^0} c} \left[ \sum_{k=1}^K \frac{n_k}{N} \int_0^t \left\{ \widehat{\mu}_{2k}(u; \beta^0) - \frac{\widehat{\mu}_{1k}(u; \beta^0) \widehat{\mu}_{1k}^T(u; \beta^0)}{\widehat{\mu}_{0k}(u; \beta^0)} \right\} d\Lambda_{0k}(u) \right] \Theta_{\beta^0} c. 
\end{align} Following the proof of Lemma A2 in \citet{xia2021cox}, we have \begin{align*} & \left\| \displaystyle \int_0^t \left\{ \widehat{\mu}_{2k} (u; \beta^0) - \frac{\widehat{\mu}_{1k}(u; \beta^0) \widehat{\mu}_{1k}(u; \beta^0)^T}{\widehat{\mu}_{0k}(u; \beta^0)} \right\} d \Lambda_{0k}(u) - \right. \\ & \qquad \displaystyle \left. \int_0^t \left\{ {\mu}_{2k} (u; \beta^0) - \frac{{\mu}_{1k}(u; \beta^0) {\mu}_{1k}(u; \beta^0)^T}{{\mu}_{0k}(u; \beta^0)} \right\} d \Lambda_{0k}(u) \right\|_{\infty} = \mathcal{O}_{P}(\sqrt{\log(p) / n_k}) \end{align*} uniformly for all $t \in [0, \tau]$. Then, \begin{align*} & \displaystyle \frac{c^T \Theta_{\beta^0}}{c^T \Theta_{\beta^0} c} \left[ \sum_{k=1}^K \frac{n_k}{N} \int_0^t \left\{ \widehat{\mu}_{2k}(u; \beta^0) - \frac{\widehat{\mu}_{1k}(u; \beta^0) \widehat{\mu}_{1k}^T(u; \beta^0)}{\widehat{\mu}_{0k}(u; \beta^0)} \right\} d\Lambda_{0k}(u) \right] \Theta_{\beta^0} c \\ = & \frac{c^T \Theta_{\beta^0}}{c^T \Theta_{\beta^0} c} \left[ \sum_{k=1}^K r_k \int_0^t \left\{ \mu_{2k}(u; \beta^0) - \frac{\mu_{1k}(u; \beta^0) \mu_{1k}^T(u; \beta^0)}{\mu_{0k}(u; \beta^0)} \right\} d\Lambda_{0k}(u) \right] \Theta_{\beta^0} c ~ + ~ \\ & \qquad \mathcal{O}_{P} \left\{ \| c \|_1^2 \| \Theta_{\beta^0} \|_{1,1}^2 \left( \max_k |n_k/N - r_k| + \sqrt{\log(p)/ n_{min}} \right) \right\}. \end{align*} By Assumption B.4, $\langle U \rangle (t) \rightarrow_{P} v(t; c) > 0, ~ t \in [0, \tau]$. For any $\epsilon > 0$, define $G_{ki}(u) = \displaystyle \frac{1}{\sqrt{N}} \frac{c^T \Theta_{\beta^0}}{\sqrt{c^T \Theta_{\beta^0} c}} \left\{ X_{ki} - \frac{\widehat{\mu}_{1k}(u; \beta^0)}{\widehat{\mu}_{0k}(u; \beta^0)} \right\}$ and the truncated process $ \displaystyle U_{\epsilon}(t) = \sum_{k=1}^K \sum_{i=1}^{n_k} \int_{0}^{t} G_{ki}(u) 1(|G_{ki}(u)| > \epsilon) dM_{ki}(u). 
$ The variation process of $U_{\epsilon}(t)$ is \[ \langle U_{\epsilon} \rangle (t) = \sum_{k=1}^K \sum_{i=1}^{n_k} \int_{0}^{t} G^2_{ki}(u) 1(|G_{ki}(u)| > \epsilon) dA_{ki}(u), \] where $d A_{ki}(u) = 1(Y_{ki} \ge u) e^{X_{ki}^T \beta^0} d\Lambda_{0k}(u)$. Since \[ | \sqrt{N} G_{ki}(u) | \le a_* \| \Theta_{\beta^0} \|_{1,1} 2M \{ c^T \Theta_{\beta^0} c \}^{-1/2} = \mathcal{O}(\| \Theta_{\beta^0} \|_{1,1}), \] we have $1(|G_{ki}(u)| > \epsilon) = 0$ almost surely for all sufficiently large $N$, because $ \| \Theta_{\beta^0} \|_{1,1} / \sqrt{N} \asymp \| \Theta_{\beta^0} \|_{1,1} / \sqrt{n_{min}} \to 0$. So $\langle U_{\epsilon} \rangle (t) \rightarrow_{P} 0$. By the martingale central limit theorem, we obtain the desired result. \end{proof} \begin{lemma} \label{chap4:lemma:lasso} Under Assumptions B.1--B.3 and B.5, for $\lambda \asymp \sqrt{\log(p)/n_{min}}$, the lasso estimator $\widehat{\beta}$ satisfies \[ \| \widehat{\beta} - \beta^0 \|_1 = \mathcal{O}_{P}(s_0 \lambda), ~ \displaystyle \frac{1}{N} \sum_{k=1}^K \sum_{i=1}^{n_k} |X_{ki}^T (\widehat{\beta} - \beta^0)|^2 = \mathcal{O}_{P}(s_0 \lambda^2). \] \end{lemma} \begin{proof}[\textbf{Proof of Lemma \ref{chap4:lemma:lasso}}] This result is adapted from the proof in \citet{kong2014non}, with modifications as follows. An intermediate replacement for the negative log-likelihood in the $k$th stratum \[ \ell^{(k)}(\beta) = \displaystyle - \frac{1}{n_k} \sum_{i=1}^{n_k} \left[ \beta^T X_{ki} - \log \left\{ \frac{1}{n_k} \sum_{j=1}^{n_k} 1(Y_{kj} \ge Y_{ki}) \exp(\beta^T X_{kj}) \right\} \right] \delta_{ki} \] can be defined as \[ \widetilde{\ell}^{(k)}(\beta) = - \displaystyle \frac{1}{n_k} \sum_{i=1}^{n_k} \left\{ \beta^T X_{ki} - \log \mu_{0k}(Y_{ki}; \beta) \right\} \delta_{ki}, \] which is a sum of $n_k$ independent and identically distributed terms. The target parameter is $$\bar{\beta} = \arg\min_{\beta} \displaystyle {E} \left\{ \sum_{k=1}^K \frac{n_k}{N} \widetilde{\ell}^{(k)}(\beta) \right\}.
$$ Then the excess risk for any given $\beta$ is \[ \mathcal{E}(\beta) = \displaystyle {E} \left\{ \sum_{k=1}^K \frac{n_k}{N} \widetilde{\ell}^{(k)}(\beta) \right\} - \displaystyle {E} \left\{ \sum_{k=1}^K \frac{n_k}{N} \widetilde{\ell}^{(k)}(\bar\beta) \right\}. \] We refer to \citet{kong2014non} for the remaining details. \end{proof} \begin{lemma} \label{chap4:lemma:const_sol} Under Assumptions B.1--B.3 and B.5, and assuming $\lambda \asymp \sqrt{\log(p)/n_{min}}$, it holds with probability going to one that $\| \Theta_{\beta^0} \widehat{\Sigma} - I_p \|_{\infty} \le \gamma$, with $\gamma \asymp \| \Theta_{\beta^0} \|_{1,1} \{ \max_{1\le k \le K} | n_k/N - r_k | + s_0 \lambda \}$. \end{lemma} Lemma \ref{chap4:lemma:const_sol} shows that when $\gamma \asymp \| \Theta_{\beta^0} \|_{1,1} \{ \max_{1\le k \le K} | n_k/N - r_k | + s_0 \lambda \}$, the $j$th row of $\Theta_{\beta^0}$ ($j=1, \cdots, p$) will be feasible in the constraint of the corresponding quadratic programming problem with probability going to one. \begin{proof}[\textbf{Proof of Lemma \ref{chap4:lemma:const_sol}}] We first derive the rate for $\| \widehat{\Sigma} - \Sigma_{\beta^0} \|_{\infty}$. Note that \begin{align*} & \hspace{-0.4in} \| \widehat{\Sigma} - \Sigma_{\beta^0} \|_{\infty} \\ \le & \ \left\| \displaystyle \frac{1}{N} \sum_{k=1}^K \sum_{i=1}^{n_k} \int_{0}^{\tau} \left[ \{ X_{ki} - \widehat{\eta}_k(t; \widehat{\beta}) \}^{\otimes 2} - \{ X_{ki} - \eta_{k0}(t; \beta^0) \}^{\otimes 2} \right] dN_{ki}(t) \right\|_{\infty} \\ & \quad + \left\| \displaystyle \frac{1}{N} \sum_{k=1}^K \sum_{i=1}^{n_k} \int_{0}^{\tau} \{ X_{ki} - \eta_{k0}(t; \beta^0) \}^{\otimes 2} dN_{ki}(t) - \Sigma_{\beta^0} \right\|_{\infty} \\ \equiv & \ a_{N1} + a_{N2}.
\end{align*} By the boundedness Assumption B.1, Lemma \ref{chap4:lemma:mom} and Lemma \ref{chap4:lemma:lasso}, \begin{align*} a_{N1} \le & \left\| \displaystyle \frac{1}{N} \sum_{k=1}^K \sum_{i=1}^{n_k} \int_{0}^{\tau} \{ X_{ki} - \widehat{\eta}_{k}(t; \widehat{\beta}) \} \{ \eta_{k0}(t; \beta^0) - \widehat{\eta}_{k}(t; \widehat{\beta}) \}^T dN_{ki}(t) \right\|_{\infty} \\ & + \left\| \displaystyle \frac{1}{N} \sum_{k=1}^K \sum_{i=1}^{n_k} \int_{0}^{\tau} \{ \eta_{k0}(t; \beta^0) - \widehat{\eta}_{k}(t; \widehat{\beta}) \} \{ X_{ki} - \eta_{k0}(t; \beta^0) \}^T dN_{ki}(t) \right\|_{\infty} \\ \le & \displaystyle \frac{4M}{N} \sum_{k=1}^K \sum_{i=1}^{n_k} \int_{0}^{\tau} \| \eta_{k0}(t; \beta^0) - \widehat{\eta}_{k}(t; \widehat{\beta}) \|_{\infty} dN_{ki}(t) \\ \le & \displaystyle \frac{4M}{N} \sum_{k=1}^K \sum_{i=1}^{n_k} \int_{0}^{\tau} \| \eta_{k0}(t; \beta^0) - \widehat{\eta}_{k}(t; \beta^0) \|_{\infty} dN_{ki}(t) \\ & + \displaystyle \frac{4M}{N} \sum_{k=1}^K \sum_{i=1}^{n_k} \int_{0}^{\tau} \| \widehat{\eta}_{k}(t; \beta^0) - \widehat{\eta}_{k}(t; \widehat{\beta}) \|_{\infty} dN_{ki}(t) \\ \le & 4M \mathcal{O}_{P}(\sqrt{\log(p)/ n_{min}}) + 4M \mathcal{O}_{P}(s_0 \lambda) = \mathcal{O}_{P}(s_0 \lambda), \end{align*} where the last inequality is a result of Lemma \ref{chap4:lemma:mom} and the fact that $\sup_{t \in [0, \tau]} \| \widehat{\eta}_{k}(t; \beta^0) - \widehat{\eta}_{k}(t; \widehat{\beta}) \|_{\infty} = \mathcal{O}_{P}(\| \widehat{\beta} - \beta^0 \|_1) = \mathcal{O}_{P}(s_0\lambda)$; see the proof of Lemma A4 in \citet{xia2021cox}. 
Since $\Sigma_{\beta^0} = \sum_{k=1}^K r_k \Sigma_{\beta^0, k}$, \begin{align*} a_{N2} \le & \displaystyle \left\| \sum_{k=1}^K \frac{n_k}{N} \left[ \frac{1}{n_k} \sum_{i=1}^{n_k} \int_{0}^{\tau} \{ X_{ki} - \eta_{k0}(t; \beta^0) \}^{\otimes 2} dN_{ki}(t) - \Sigma_{\beta^0, k} \right] \right\|_{\infty} \\ & + \displaystyle \left\| \sum_{k=1}^K \left( \frac{n_k}{N} - r_k \right) \Sigma_{\beta^0,k} \right\|_{\infty} \\ \le & \displaystyle \sum_{k=1}^K \frac{n_k}{N} \left\| \frac{1}{n_k} \sum_{i=1}^{n_k} \int_{0}^{\tau} \{ X_{ki} - \eta_{k0}(t; \beta^0) \}^{\otimes 2} dN_{ki}(t) - \Sigma_{\beta^0, k} \right\|_{\infty} + \displaystyle \left\| \sum_{k=1}^K \left( \frac{n_k}{N} - r_k \right) \Sigma_{\beta^0,k} \right\|_{\infty}. \end{align*} The proof of Lemma A4 in \citet{xia2021cox} shows that, for $k=1, \cdots, K$, \[ \displaystyle \left\| \frac{1}{n_k} \sum_{i=1}^{n_k} \int_{0}^{\tau} \{ X_{ki} - \eta_{k0}(t; \beta^0) \}^{\otimes 2} dN_{ki}(t) - \Sigma_{\beta^0, k} \right\|_{\infty} = \mathcal{O}_{P}(\sqrt{\log(p) / n_k}) \] by Hoeffding's concentration inequality. So $a_{N2} = \mathcal{O}_{P}(\sqrt{\log(p)/ n_{min}}) + \mathcal{O}(\max_{k} | n_k/N - r_k|)$. Then, combining the bounds on $a_{N1}$ and $a_{N2}$, $\| \widehat{\Sigma} - \Sigma_{\beta^0} \|_{\infty} = \mathcal{O}_{P}(s_0 \lambda + \max_k | n_k/N - r_k |)$. Finally, it is easy to see that \[ \| \Theta_{\beta^0} \widehat{\Sigma} - I_p \|_{\infty} = \| \Theta_{\beta^0} (\widehat{\Sigma} - \Sigma_{\beta^0} ) \|_{\infty} \le \| \Theta_{\beta^0} \|_{1,1} \| \widehat{\Sigma} - \Sigma_{\beta^0} \|_{\infty}, \] and $\| \Theta_{\beta^0} \widehat{\Sigma} - I_p \|_{\infty} = \mathcal{O}_{P} ( \| \Theta_{\beta^0} \|_{1,1} \{ s_0 \lambda + \max_k | n_k/N - r_k | \})$. 
\end{proof} \begin{lemma} \label{chap4:lemma:diff_theta} Under the assumptions in Lemma \ref{chap4:lemma:const_sol}, if we further assume that $\limsup_{n_{min} \rightarrow \infty} p \gamma \le 1 - \epsilon^{\prime}$ for some constant $\epsilon^{\prime} \in (0,1)$, then we have $\| \widehat{\Theta} - \Theta_{\beta^0} \|_{\infty} = \mathcal{O}_{P}(\gamma \| \Theta_{\beta^0} \|_{1,1})$. \end{lemma} The proof of Lemma \ref{chap4:lemma:diff_theta} follows that of Lemma A5 in \citet{xia2021cox} and is thus omitted. \begin{lemma} \label{chap4:lemma:max_score} Under Assumptions B.1--B.3, for each $t > 0$, \[ P (\| \dot{\ell}(\beta^0) \|_{\infty} > t) \le 2Kpe^{- C n_{min}t^2}, \] where $C>0$ is an absolute constant. \end{lemma} \begin{proof}[\textbf{Proof of Lemma \ref{chap4:lemma:max_score}}] Since $\dot{\ell}(\beta^0) = \sum_{k=1}^K \frac{n_k}{N} \dot{\ell}^{(k)}(\beta^0)$, we have \begin{align*} P \left( \| \dot{\ell}(\beta^0) \|_{\infty} > t \right) & \le P \left( \sum_{k=1}^K \frac{n_k}{N} \| \dot{\ell}^{(k)}(\beta^0) \|_{\infty} > t \right) \\ & \le \sum_{k=1}^K P \left( \| \dot{\ell}^{(k)}(\beta^0) \|_{\infty} > t \right) \\ & \le \sum_{k=1}^K 2pe^{- C n_k t^2}. \end{align*} Note that $\| X_{ki} - \widehat{\eta}_k(t; \beta^0) \|_{\infty} \le 2M$ holds uniformly for all $k$ and $i$. Then the last inequality is a direct result of Lemma 3.3(ii) in \citet{huang2013oracle} when applied to each of the $K$ strata. \end{proof} \section{Proofs of Main Theorems} \begin{proof}[\textbf{Proof of Theorem 3.1}] Let $\dot{\ell}_j(\beta)$ be the $j$th element of the derivative $\dot{\ell}(\beta)$. By the mean value theorem, there exists $\widetilde{\beta}^{(j)}$ between $\widehat{\beta}$ and $\beta^0$ such that $\dot{\ell}_j(\widehat{\beta}) - \dot{\ell}_j(\beta^0) = \left. \frac{\partial \dot{\ell}_j(\beta)}{\partial \beta^T} \right|_{\beta=\widetilde{\beta}^{(j)}} (\widehat{\beta} - \beta^0)$. Denote the $p \times p$ matrix $D = \left( \left.
\frac{\partial \dot{\ell}_1(\beta)}{\partial \beta} \right|_{\beta=\widetilde{\beta}^{(1)}}, \cdots, \left. \frac{\partial \dot{\ell}_p(\beta)}{\partial \beta} \right|_{\beta=\widetilde{\beta}^{(p)}} \right)^T$. By the definition of the de-biased estimator $\widehat{b}$, we may decompose $c^T (\widehat{b} - \beta^0) $ as \begin{align*} c^T (\widehat{b} - \beta^0) & = - c^T \Theta_{\beta^0} \dot{\ell}(\beta^0) - c^T (\widehat{\Theta} - \Theta_{\beta^0}) \dot{\ell}(\beta^0) \nonumber \\ & \quad - c^T (\widehat{\Theta} \widehat{\Sigma} - I_p) (\widehat{\beta} - \beta^0) + c^T \widehat{\Theta} (\widehat{\Sigma} - D) (\widehat{\beta} - \beta^0) \\ & = - c^T \Theta_{\beta^0} \dot{\ell}(\beta^0) + (i) + (ii) + (iii), \end{align*} where $(i) = - c^T (\widehat{\Theta} - \Theta_{\beta^0}) \dot{\ell}(\beta^0), (ii) = - c^T (\widehat{\Theta} \widehat{\Sigma} - I_p) (\widehat{\beta} - \beta^0)$ and $(iii) = c^T \widehat{\Theta} (\widehat{\Sigma} - D) (\widehat{\beta} - \beta^0)$. We first show $\sqrt{N}(i) = o_{P}(1)$ and $\sqrt{N}(ii) = o_{P}(1)$. By Lemma \ref{chap4:lemma:diff_theta} and Lemma \ref{chap4:lemma:max_score}, \begin{align*} | \sqrt{N}(i) | & \le \sqrt{N} \| c\|_1 \cdot \| \widehat{\Theta} - \Theta_{\beta^0} \|_{\infty, \infty} \cdot \| \dot{\ell}(\beta^0) \|_{\infty} \\ & \le \sqrt{N} a_{*} \mathcal{O}_{P}( p \gamma \| \Theta_{\beta^0} \|_{1,1}) \mathcal{O}_{P}(\sqrt{\log(p)/n_{min}}) \\ & = \mathcal{O}_{P}( \| \Theta_{\beta^0} \|_{1,1} p \gamma \sqrt{\log(p)} ) \\ & = o_{P}(1), \end{align*} where the last equality is a direct result of the assumption that $\| \Theta_{\beta^0} \|_{1,1}^2 \{ \max_k|n_k/N-r_k| + s_0\lambda \} p \sqrt{\log(p)} \to 0$ when $\lambda\asymp \sqrt{\log(p)/n_{min}}$. 
By Lemma \ref{chap4:lemma:lasso}, \begin{align*} | \sqrt{N}(ii) | & \le \sqrt{N} \| c\|_1 \| (\widehat{\Theta} \widehat{\Sigma} - I_p) (\widehat{\beta} - \beta^0) \|_{\infty} \\ & \le \sqrt{N} a_* \| \widehat{\Theta} \widehat{\Sigma} - I_p \|_{\infty} \| \widehat{\beta} - \beta^0 \|_1 \\ & \le \sqrt{N} a_* \gamma \| \widehat{\beta} - \beta^0 \|_1 \\ & = \mathcal{O}_{P}(\sqrt{N} \gamma s_0 \lambda) \\ & \le \mathcal{O}_{P}( \sqrt{N} \| \Theta_{\beta^0} \|_{1,1} \{ \max_k |n_k/N - r_k| + s_0 \lambda \} p \sqrt{\log(p)/n_{min}} ) \\ & = o_{P}(1). \end{align*} We then show that $\sqrt{N}(iii) = o_{P}(1)$. Note that $\widehat{\Sigma} - D = (\widehat{\Sigma} - \Sigma_{\beta^0}) + (\Sigma_{\beta^0} - \ddot{\ell}(\beta^0)) + (\ddot{\ell}(\beta^0) - D)$. By the proof of Lemma \ref{chap4:lemma:const_sol}, we see that with $\lambda \asymp \sqrt{\log(p) / n_{min}}$, $\| \widehat{\Sigma} - \Sigma_{\beta^0} \|_{\infty} = \mathcal{O}_{P}(s_0 \lambda + \max_k | n_k/ N - r_k |)$. Based on the proof of Theorem 1 in \citet{xia2021cox}, for each stratum, $\| \ddot{\ell}^{(k)}(\beta^0) - D^{(k)} \|_{\infty} = \mathcal{O}_{P}(\sqrt{\log(p)/n_k})$, where $D^{(k)} = \left( \left. \frac{\partial \dot{\ell}^{(k)}_1(\beta)}{\partial \beta} \right|_{\beta=\widetilde{\beta}^{(1)}}, \cdots, \left. \frac{\partial \dot{\ell}^{(k)}_p(\beta)}{\partial \beta} \right|_{\beta=\widetilde{\beta}^{(p)}} \right)^T$. Since the overall negative log partial likelihood is $\displaystyle \ell(\beta) = \sum_{k=1}^K \frac{n_k}{N} \ell^{(k)}(\beta)$ and $D= \displaystyle \sum_{k=1}^K \frac{n_k}{N} D^{(k)}$, we have $\| \ddot{\ell}(\beta^0) - D \|_{\infty} = \mathcal{O}_{P}(\sqrt{\log(p)/n_{min}})$. Also, $\| \Sigma_{\beta^0, k} - \ddot{\ell}^{(k)}(\beta^0) \|_{\infty} = \mathcal{O}_{P}(\sqrt{\log(p)/n_k})$ by the proof of Theorem 1 in \citet{xia2021cox}.
Then \begin{align*} \| \Sigma_{\beta^0} - \ddot{\ell}(\beta^0) \|_{\infty} & \le \left\| \sum_{k=1}^K r_k \Sigma_{\beta^0, k} - \sum_{k=1}^K \frac{n_k}{N} \Sigma_{\beta^0, k} \right\|_{\infty} + \left\| \sum_{k=1}^K \frac{n_k}{N} \Sigma_{\beta^0, k} - \sum_{k=1}^K \frac{n_k}{N} \ddot{\ell}^{(k)}(\beta^0) \right\|_{\infty} \\ & \le K \max_k(| n_k / N - r_k | \| \Sigma_{\beta^0, k} \|_{\infty}) + K \mathcal{O}_{P}(\sqrt{\log(p)/n_{min}}) \\ & = \mathcal{O}_{P}(\max_k |n_k/N - r_k| + \sqrt{\log(p)/n_{min}}). \end{align*} Therefore, for $\lambda \asymp \sqrt{\log(p)/n_{min}}$, $\| \widehat{\Sigma} - D \|_{\infty} = \mathcal{O}_{P}(s_0 \lambda + \max_k |n_k/N - r_k|)$, and \begin{align*} | \sqrt{N} (iii) | & \le \sqrt{N} \| c \|_1 \| \widehat{\Theta} \|_{\infty, \infty} \| \widehat{\Sigma} - D \|_{\infty} \| \widehat{\beta} - \beta^0 \|_1 \\ & \le \mathcal{O}_{P}\left(\sqrt{N} \| \Theta_{\beta^0} \|_{1,1} (s_0 \lambda + \max_k | n_k/N - r_k |) \right) s_0 \lambda \\ & \le \mathcal{O}_{P}\left( \sqrt{N /n_{min}} \| \Theta_{\beta^0} \|_{1,1} (s_0 \lambda + \max_k | n_k/N - r_k | ) p \sqrt{\log(p) } \right)\\ & = o_{P}(1). \end{align*} Finally, for the variance, \begin{align*} | c^T ( \widehat{\Theta} - \Theta_{\beta^0}) c | & \le \| c \|_1^2 \| \widehat{\Theta} - \Theta_{\beta^0} \|_{\infty} \\ & \le a_*^2 \mathcal{O}_{P}(\gamma \| \Theta_{\beta^0} \|_{1,1}) = o_{P}(1). \end{align*} By Slutsky's theorem and Lemma \ref{chap4:lemma:lead}, $\sqrt{N} c^T (\widehat{b} - \beta^0) / (c^T \widehat{\Theta} c)^{1/2} \overset{\mathcal{D}}{\rightarrow} N(0,1)$. \end{proof} \begin{proof}[\textbf{Sketch proof of Theorem 3.4}] Theorem 3.4 can be easily proved using the Cram\'{e}r--Wold device. For any $\omega \in \mathbb{R}^m$, since the dimension of $\omega$ is a fixed integer, we can invoke Theorem 3.1 by taking $c=J^T \omega$. Note that $\| J^T \omega \|_1 \le \| J^T \|_{1,1} \| \omega \|_1 = \| J \|_{\infty, \infty} \| \omega \|_1 = \mathcal{O}(1)$. \end{proof} \end{document}
\begin{document} \title[$C^*$-algebras and Perron--Frobenius theory]{A note on lattice ordered $C^*$-algebras and Perron--Frobenius theory} \author{Jochen Gl\"uck} \email{[email protected]} \address{Jochen Gl\"uck, Institute of Applied Analysis, Ulm University, 89069 Ulm, Germany} \keywords{Lattice ordered $C^*$-algebra; Perron--Frobenius theory; completely positive semigroup; commutativity from order} \subjclass[2010]{46L05; 46B40; 47B65} \date{\today} \begin{abstract} A classical result of Sherman says that if the space of self-adjoint elements in a $C^*$-algebra $\mathcal{A}$ is a lattice with respect to its canonical order, then $\mathcal{A}$ is commutative. We give a new proof of this theorem which shows that it is intrinsically connected with the spectral theory of positive operator semigroups. Our methods also show that some important Perron--Frobenius like spectral results fail to hold in any non-commutative $C^*$-algebra. \end{abstract} \maketitle \section{Introduction} Let us consider the space $\mathcal{A}_{\operatorname{sa}}$ of self-adjoint elements of a $C^*$-algebra $\mathcal{A}$. There is a canonical order on $\mathcal{A}_{\operatorname{sa}}$ which is given by $a \le b$ if and only if $b-a$ is positive semi-definite. With respect to this order $\mathcal{A}_{\operatorname{sa}}$ is an ordered real Banach space, meaning that the \emph{positive cone} $(\mathcal{A}_{\operatorname{sa}})_+ := \{a \in \mathcal{A}_{\operatorname{sa}}| \, a \ge 0\}$ is closed, convex and invariant with respect to multiplication by scalars $\alpha \ge 0$ and that it fulfils $(\mathcal{A}_{\operatorname{sa}})_+ \cap -(\mathcal{A}_{\operatorname{sa}})_+ = \{0\}$. If $\mathcal{A}$ is commutative, then it follows from Gelfand's representation theorem for commutative $C^*$-algebras that $\mathcal{A}_{\operatorname{sa}}$ is actually lattice ordered, i.e.\ that all elements $a,b \in \mathcal{A}_{\operatorname{sa}}$ have an \emph{infimum} (or \emph{greatest lower bound}) $a \land b$. 
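To make this order concrete, here is a small numerical sketch (illustrative only, with ad hoc matrices not taken from the text): on $2\times 2$ Hermitian matrices, $a \le b$ means that $b-a$ is positive semi-definite. For commuting diagonal elements the entrywise minimum is a common lower bound, as Gelfand theory predicts, while a generic non-commuting pair need not even be comparable:

```python
import numpy as np

def is_psd_leq(a, b, tol=1e-10):
    """Canonical order on Hermitian matrices: a <= b iff b - a is positive semi-definite."""
    return np.linalg.eigvalsh(b - a).min() >= -tol

# Commuting (diagonal) self-adjoint elements: the entrywise minimum is a lower bound.
a = np.diag([1.0, 3.0])
b = np.diag([2.0, 0.0])
inf_ab = np.diag(np.minimum(np.diag(a), np.diag(b)))
print(is_psd_leq(inf_ab, a), is_psd_leq(inf_ab, b))   # True True

# A non-commuting Hermitian pair that is incomparable: neither a <= c nor c <= a.
c = np.array([[2.0, 1.0], [1.0, 0.5]])
print(is_psd_leq(a, c), is_psd_leq(c, a))             # False False
```

The incomparable pair illustrates why an infimum in $M_2(\mathbb{C})_{\operatorname{sa}}$ is a delicate matter; Sherman's theorem, discussed next, makes this precise.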
In 1951 Sherman proved that the converse implication also holds, i.e.\ if $\mathcal{A}_{\operatorname{sa}}$ is lattice ordered, then $\mathcal{A}$ is commutative \cite[Theorems~1 and~2]{Sherman1951}. Since then a great wealth of results has appeared which vary and generalise Sherman's theorem in several directions. We only quote a few of them: in \cite[Theorems~1 and~2]{Fukamiya1954} it is shown that if the ordered space $\mathcal{A}_{\operatorname{sa}}$ is only assumed to have the Riesz decomposition property, then $\mathcal{A}$ is commutative. In \cite[Theorem~2]{Archbold1974} it is proved that if for a fixed element $a \in \mathcal{A}_{\operatorname{sa}}$ and all $b \in \mathcal{A}_{\operatorname{sa}}$ the infimum $a \land b$ exists, then $\mathcal{A}$ is commutative. In \cite[Corollary~11 and Theorem~12]{Kadison1951}, \cite[Theorem~3.7]{Archbold1972}, \cite[Theorem~3.5]{Cho-Ho1973} and \cite[Corollary~2.4]{Green1977} it is shown that if $\mathcal{A}$ is in some sense too far from being commutative, then $\mathcal{A}_{\operatorname{sa}}$ is even a so-called \emph{anti-lattice}, meaning that two elements $a,b\in \mathcal{A}_{\operatorname{sa}}$ have an infimum only if $a \le b$ or $b \le a$. In \cite{Curtis1958} Sherman's result is adapted to more general Banach algebras. The ``commutativity-from-order'' theme was also considered from a somewhat different viewpoint in \cite{Topping1965} and a more recent contribution to the theory can be found in \cite{Spain2001}. It might not come as a surprise that the proofs for many of the above mentioned results use methods which are rather typical for the theory of operator algebras. The main goal of this note is to show that Sherman's theorem can also be proved by using only elementary properties of $C^*$-algebras if we instead employ the so-called \emph{Perron--Frobenius theory}; this notion is usually used to refer to the spectral theory of positive operators and semigroups, in particular on Banach lattices.
We present our new proof of Sherman's result in Section~\ref{section:a-new-proof-of-shermans-theorem}. Since our approach relies on non-trivial spectral theoretic results, we do not claim that our proof is simpler than the original one; our main point is that our proof employs completely different methods and thus sheds some new light on Sherman's theorem. In particular, the proof establishes a beautiful connection between commutativity of operator algebras and Perron--Frobenius theory. In Section~\ref{section:perron-frobenius-theory-on-c-star-algebras} we have a somewhat deeper look into this connection by proving that certain Perron--Frobenius type results can never hold on a non-commutative $C^*$-algebra. \section{A new proof of Sherman's theorem} \label{section:a-new-proof-of-shermans-theorem} In this section we give a new proof of Sherman's commutativity result by means of Perron--Frobenius spectral theory. For every Banach space $E$ we denote by $\mathcal{L}(E)$ the space of bounded linear operators on $E$. \begin{theorem}[Sherman] \label{thm:sherman} Let $\mathcal{A}$ be a $C^*$-algebra and let the space $\mathcal{A}_{\operatorname{sa}}$ of self-adjoint elements in $\mathcal{A}$ be endowed with the canonical order. Then $\mathcal{A}_{\operatorname{sa}}$ is lattice ordered if and only if $\mathcal{A}$ is commutative. \end{theorem} \begin{proof} The implication ``$\Leftarrow$'' follows readily from Gelfand's representation theorem for commutative $C^*$-algebras. To prove the implication ``$\Rightarrow$'', assume that $\mathcal{A}_{\operatorname{sa}}$ is lattice ordered. Fix $a \in \mathcal{A}_{\operatorname{sa}}$. It suffices to show that $a$ commutes with every element $c \in \mathcal{A}_{\operatorname{sa}}$. 
For every $t \in \mathbb{R}$ we define an operator $T_t \in \mathcal{L}(\mathcal{A}_{\operatorname{sa}})$ by $T_tc = e^{-ita}\,c\,e^{ita}$ (where the exponential function is computed in the unitization of $\mathcal{A}$ in case that $\mathcal{A}$ does not contain a unit). Obviously, the operator family $\mathcal{T} := (T_t)_{t \in \mathbb{R}} \subseteq \mathcal{L}(\mathcal{A}_{\operatorname{sa}})$ is a contractive positive $C_0$-group on $\mathcal{A}_{\operatorname{sa}}$. The orbit $t \mapsto T_tc$ is differentiable for every $c \in \mathcal{A}_{\operatorname{sa}}$ and its derivative at $t = 0$ is given by $-iac + ica$; hence, the $C_0$-group $\mathcal{T}$ has a bounded generator $L \in \mathcal{L}(\mathcal{A}_{\operatorname{sa}})$ which is given by $Lc = -iac + ica$ for all $c \in \mathcal{A}_{\operatorname{sa}}$. Let $\sigma(L)$ denote the spectrum of (the complex extension of) $L$; then $\sigma(L)$ is contained in $i\mathbb{R}$ since the $C_0$-group $\mathcal{T}$ is bounded and $\sigma(L)$ is non-empty and bounded since $L \in \mathcal{L}(\mathcal{A}_{\operatorname{sa}})$. Now we use that $\mathcal{A}_{\operatorname{sa}}$ is a lattice to show that $\sigma(L) = \{0\}$: since the norm is monotone on $\mathcal{A}_{\operatorname{sa}}$ (meaning that we have $\|a\| \le \|b\|$ whenever $0 \le a \le b$, see \cite[Theorem~2.2.5(3)]{Murphy1990}), we can find an equivalent norm on $\mathcal{A}_{\operatorname{sa}}$ which renders it a Banach lattice (see Section~\ref{section:perron-frobenius-theory-on-c-star-algebras} for a description of this norm). However, if an operator $L$ with spectral bound $0$ generates a bounded, eventually norm continuous and positive $C_0$-semigroup on a Banach lattice, then it follows from Perron--Frobenius theory that $0$ is a dominant spectral value of $L$, i.e.\ that we have $\sigma(L) \cap i\mathbb{R} = \{0\}$; see \cite[Corollary~C-III-2.13]{Arendt1986}. Hence, $\sigma(L) = \{0\}$. 
Since $L$ generates a bounded $C_0$-group it now follows from the semigroup version of Gelfand's $T = \id$ theorem \cite[Corollary~4.4.12]{Arendt2011} that $\mathcal{T}$ is trivial, i.e.\ that $T_t$ is the identity operator on $\mathcal{A}_{\operatorname{sa}}$ for all $t \in \mathbb{R}$. Hence $L = 0$, so $0 = Lc = -iac + ica$ for all $c \in \mathcal{A}_{\operatorname{sa}}$. This shows that $a$ commutes with all elements of $\mathcal{A}_{\operatorname{sa}}$. \end{proof} In the above proof we quoted a result of Perron--Frobenius theory for positive operator semigroups from \cite[Corollary~C-III-2.13]{Arendt1986}, and this result is in turn based on a rather deep theorem in \cite[Theorem~C-III-2.10]{Arendt1986}. However, we only needed a simple special case since the generator of our semigroup is bounded. Let us demonstrate how this special case can be treated without employing the entire machinery of Perron--Frobenius theory: \begin{proposition} \label{prop:perron-frobenius-for-bounded-generator} Let $E$ be a Banach lattice and let $L \in \mathcal{L}(E)$ be an operator with spectral bound $s(L) = 0$. If $e^{tL}$ is positive for every $t \ge 0$, then $\sigma(L) \cap i \mathbb{R} = \{0\}$. \end{proposition} \begin{proof} Assume that $e^{tL} \ge 0$ for all $t \ge 0$. Since $E$ is a Banach lattice it follows that $L + \|L\| \ge 0$, see \cite[Theorem~C-II-1.11]{Arendt1986}. One can prove by a simple resolvent estimate that the spectral radius of a positive operator on a Banach lattice is contained in the spectrum, see e.g.~\cite[Proposition~V.4.1]{Schaefer1974}. Hence $r(L + \|L\|) \in \sigma(L + \|L\|)$, so we conclude that the spectral bound $s(L+\|L\|) = \|L\|$ coincides with the spectral radius $r(L + \|L\|)$. Thus, $\sigma(L + \|L\|) \cap (\|L\| + i\mathbb{R}) = \{\|L\|\}$, which proves the assertion.
\end{proof} \begin{remark} \label{rem:gelfands_t-id-theorem} In our proof of Theorem~\ref{thm:sherman} we used another non-trivial result, namely the semigroup version of Gelfand's $T = \id$ theorem. Let us note that we can instead use Gelfand's $T = \id$ theorem for single operators: if we know that $\sigma(L) = \{0\}$ and that the group $(e^{tL})_{t \in \mathbb{R}}$ is bounded, then it follows from the spectral mapping theorem for the holomorphic functional calculus (or for eventually norm continuous semigroups) that $\sigma(e^{tL}) = \{1\}$ for every $t \in \mathbb{R}$; since every $e^{tL}$ is doubly power bounded, we conclude from Gelfand's $T = \id$ theorem for single operators (see e.g.~\cite[Theorem~B.17]{Engel2000}) that $e^{tL} = \id_{\mathcal{A}_{\operatorname{sa}}}$ for all $t \in \mathbb{R}$. \end{remark} Despite what was said in Proposition~\ref{prop:perron-frobenius-for-bounded-generator} and Remark~\ref{rem:gelfands_t-id-theorem}, our proof of Sherman's theorem still relies on Gelfand's $T = \id$ theorem for single operators, which is a non-trivial result. Thus, our proof is not elementary; yet, all its non-elementary ingredients are essentially independent of $C^*$-algebra theory. \section{Perron--Frobenius theory on $C^*$-algebras} \label{section:perron-frobenius-theory-on-c-star-algebras} Our proof of Theorem~\ref{thm:sherman} suggests that certain Perron--Frobenius type spectral results can only be true on commutative $C^*$-algebras. Let us discuss this in a bit more detail: indeed, it was demonstrated in \cite[pp.\,387--388]{Arendt1986} and \cite[Section~4]{Luczak2010} by means of concrete examples that typical Perron--Frobenius results which are true on Banach lattices fail in general on non-commutative $C^*$-algebras. On the other hand, some results can even be shown in the non-commutative setting if one imposes additional assumptions (such as irreducibility and complete positivity) on the semigroup or the operator under consideration.
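As a toy numerical instance of this failure (an illustration with an ad hoc choice of $a$, not taken from the cited references), consider the generator $Lc = -iac + ica$ from the proof of Theorem~\ref{thm:sherman} on $M_2(\mathbb{C})$: the group $e^{tL}c = e^{-ita}\,c\,e^{ita}$ acts by unitary conjugation and is therefore positive, yet for a non-scalar $a$ the boundary spectrum of $L$ is strictly larger than $\{0\}$, so the spectral dominance available on Banach lattices fails:

```python
import numpy as np

# Generator L c = -i a c + i c a on 2x2 matrices, in vectorized form.
# With row-major flattening, vec(A C B) = kron(A, B.T) @ vec(C).
a = np.diag([1.0, 2.0]).astype(complex)   # ad hoc non-scalar self-adjoint element
I2 = np.eye(2, dtype=complex)
L = -1j * np.kron(a, I2) + 1j * np.kron(I2, a.T)

spec = np.linalg.eigvals(L)
# Spectrum is purely imaginary, {0, 0, i, -i}: the spectral bound is 0, but the
# boundary spectrum contains +-i, so 0 is NOT a dominant spectral value.
print(np.round(np.sort(spec.imag), 8))

# Nevertheless e^{tL} c = e^{-ita} c e^{ita} is positive, since conjugation by a
# unitary preserves positive semi-definiteness. L is diagonal here (a is diagonal),
# so the exponential can be taken entrywise:
t = 0.7
E = np.diag(np.exp(t * np.diag(L)))
c = np.array([[2.0, 1.0], [1.0, 1.0]], dtype=complex)  # positive semi-definite
ct = (E @ c.flatten()).reshape(2, 2)
print(np.linalg.eigvalsh(ct).min() > 0)  # True: e^{tL} c stays positive
```

The eigenvalues $\pm i$ of $L$ are exactly the differences $i(\lambda_j - \lambda_k)$ of the eigenvalues of $a$, matching the heuristic that non-commutativity of $a$ forces non-trivial boundary spectrum.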
For single matrices and operators, such results can for instance be found in \cite{Evans1978, Groh1981, Groh1982, Groh1983}. For time-continuous operator semigroups we refer for example to \cite[Section~D-III]{Arendt1986}; the papers \cite{Albeverio1978, Luczak2010} contain results for both the single operator and the time-continuous case, and in \cite[Section~6.1]{Batkai2012} Perron--Frobenius type results on $W^*$-algebras are proved by a different approach, using a version of the so-called Jacobs--DeLeeuw--Glicksberg decomposition. An example for an application of Perron--Frobenius theory on $C^*$-algebras to the analysis of quantum systems can be found in \cite[Theorem~2.2]{Jaksic2014}. In this section we are headed in a different direction: we use the idea of our proof of Theorem~\ref{thm:sherman} to show that certain Perron--Frobenius type results are \emph{never} true on non-commutative $C^*$-algebras. To state our result we need a bit of notation. Some of the following notions have already been used tacitly above, but to avoid any ambiguity in the formulation of the next theorem, it is important to recall them explicitly here. If $E$ is a Banach space, then we denote by $\mathcal{L}(E)$ the space of all bounded linear operators on $E$; the dual space of $E$ is denoted by $E'$ and the adjoint of an operator $L \in \mathcal{L}(E)$ is denoted by $L' \in \mathcal{L}(E')$. By an \emph{ordered Banach space} we mean a tuple $(E,E_+)$, often only denoted by $E$, where $E$ is a real Banach space, and $E_+ \subset E$ is a closed and pointed cone in $E$, meaning that $E_+$ is closed, that $\alpha E_+ + \beta E_+ \subseteq E_+$ for all $\alpha,\beta \in [0,\infty)$ and $E_+ \cap -E_+ = \{0\}$. On an ordered Banach space there is a canonical order relation $\le$ which is given by ``$x \le y$ iff $y - x \in E_+$''. 
The cone $E_+$ is called \emph{generating} if $E = E_+ - E_+$ and it is called \emph{normal} if there exists a constant $c > 0$ such that $\|x\| \le c\|y\|$ whenever $0 \le x \le y$. If $\mathcal{A}$ is a $C^*$-algebra, then the space $\mathcal{A}_{\operatorname{sa}}$ of self-adjoint elements in $\mathcal{A}$ is usually endowed with the cone $(\mathcal{A}_{\operatorname{sa}})_+ := \{a \in \mathcal{A}_{\operatorname{sa}}: \; \sigma(a) \subseteq [0,\infty)\}$; thus one obtains the canonical order on $\mathcal{A}_{\operatorname{sa}}$ that we already considered in Theorem~\ref{thm:sherman}. Note that the cone $(\mathcal{A}_{\operatorname{sa}})_+$ is normal (since $\|a\| \le \|b\|$ whenever $0 \le a \le b$, see \cite[Theorem~2.2.5(3)]{Murphy1990}) and generating \cite[p.\,45]{Murphy1990}. Let $E$ be an ordered Banach space. For all $x,y \in E$ the \emph{order interval} $[x,y]$ is given by $[x,y] := \{f \in E: \, x \le f \le y\}$. The space $E$ is said to have the \emph{Riesz decomposition property} if $[0,x] + [0,y] = [0,x+y]$ for every $x,y \in E_+$. If $E$ is lattice ordered, then the cone $E_+$ is generating and $E$ has the Riesz decomposition property \cite[Corollary~1.55]{Aliprantis2007}, but the converse is not in general true. If $E$ is an ordered Banach space with generating and normal cone and if the induced order makes $E$ a vector lattice, then $\|x\|_{\operatorname{BL}} := \sup_{0 \le z \le |x|} \|z\|$ defines an equivalent norm on $E$ which renders it a Banach lattice (use the Riesz decomposition property of $E$ to see that $\|\cdot\|_{\operatorname{BL}}$ satisfies the triangle inequality and use e.g.\ \cite[Theorem~2.37(3)]{Aliprantis2007} to see that the norm $\|\cdot\|_{\operatorname{BL}}$ is indeed equivalent to the original norm). Let $E$ be an ordered Banach space. An operator $L \in \mathcal{L}(E)$ is called \emph{positive} if $LE_+ \subseteq E_+$; this is denoted by $L \ge 0$. 
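The role of the Riesz decomposition property in the renorming observation above can be made explicit; here is a brief sketch, under the assumptions stated there. If $x,y \in E$ and $0 \le z \le |x+y|$, then $z \in [0, |x| + |y|]$ since $|x+y| \le |x| + |y|$, so the Riesz decomposition property yields a splitting $z = z_1 + z_2$ with $z_1 \in [0,|x|]$ and $z_2 \in [0,|y|]$; hence
\begin{align*}
	\|z\| \le \|z_1\| + \|z_2\| \le \|x\|_{\operatorname{BL}} + \|y\|_{\operatorname{BL}},
\end{align*}
and taking the supremum over all such $z$ gives the triangle inequality $\|x+y\|_{\operatorname{BL}} \le \|x\|_{\operatorname{BL}} + \|y\|_{\operatorname{BL}}$.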
By $E'_+$ we denote the set of all positive functionals on $E$; here, a functional $x' \in E'$ is called \emph{positive} if $\langle x', x\rangle \ge 0$ for all $x \in E_+$. It follows from the Hahn--Banach separation theorem that a vector $x \in E$ is contained in $E_+$ if and only if $\langle x', x\rangle \ge 0$ for all $x' \in E'_+$; hence, an operator $L \in \mathcal{L}(E)$ is positive if and only if $L'E'_+ \subseteq E'_+$. If the positive cone in $E$ is generating, then $E'_+ \cap - E'_+ = \{0\}$ and thus, the dual space $E'$ is also an ordered Banach space. Let $E$ be an ordered Banach space. We call an operator $L \in \mathcal{L}(E)$ \emph{quasi-positive} if $L + \alpha \id_E \ge 0$ for some $\alpha \ge 0$, we call it \emph{exponentially positive} if $e^{tL} \ge 0$ for all $t \in [0,\infty)$ and we call it \emph{cross positive} if $\langle x', Lx \rangle \ge 0$ for all $x \in E_+$ and all $x' \in E'_+$ which fulfil $\langle x', x \rangle = 0$. We point out, however, that those properties are sometimes named differently in the literature. If $A: E \supseteq D(A) \to E$ is a closed linear operator on a real Banach space $E$, then $\sigma(A)$ denotes the spectrum of the complex extension of $A$ to some complexification of $E$; similarly, all other notions from spectral theory are understood to be properties of the complex extension of $A$. The \emph{spectral bound} of $A$ is the number $s(A) := \sup\{\re \lambda: \, \lambda \in \sigma(A)\} \in [-\infty,\infty]$ and if $s(A) \in \mathbb{R}$, then the \emph{boundary spectrum} of $A$ is defined to be the set $\sigma_{{\operatorname{bnd}}}(A) := \{\lambda \in \sigma(A): \; \re \lambda = s(A)\}$. If $L \in \mathcal{L}(E)$, then the \emph{spectral radius} of $L$ is denoted by $r(L)$ and the \emph{peripheral spectrum} of $L$ is defined to be the set $\sigma_{\operatorname{per}}(L) := \{\lambda \in \sigma(L): \; |\lambda| = r(L)\}$.
Many results in Perron--Frobenius theory ensure the \emph{cyclicity} of parts of the spectrum; here, a set $S \subseteq \mathbb{C}$ is called \emph{cyclic} if $re^{i\theta} \in S$ ($r \ge 0$, $\theta \in \mathbb{R}$) implies that $re^{in\theta} \in S$ for all integers $n \in \mathbb{Z}$. The set $S$ is called \emph{additively cyclic} if $\alpha + i\beta \in S$ ($\alpha, \beta \in \mathbb{R}$) implies that $\alpha + in\beta \in S$ for all integers $n \in \mathbb{Z}$. An important result in Perron--Frobenius theory states that on any given Banach lattice $E$, every positive power bounded operator with spectral radius $1$ has cyclic peripheral spectrum; in fact, an even somewhat stronger result is true, see \cite[Theorem~V.4.9]{Schaefer1974}. However, it is still an open problem whether \emph{every} positive operator on a Banach lattice has cyclic peripheral spectrum; see \cite{Glueck2015, Glueck2016} for a detailed discussion of this and for some recent progress on this question. An analogous result for $C_0$-semigroups says that if the generator of a positive and bounded $C_0$-semigroup has spectral bound $0$, then its boundary spectrum is additively cyclic; see \cite[Theorem~C-III-2.10]{Arendt1986} for a slightly stronger result. Theorem~\ref{thm:perron-frobenius-on-c-star-algebras} below shows that the above mentioned results are \emph{never} true on a non-commutative $C^*$-algebra.
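To illustrate the notion of cyclicity in the simplest commutative situation, consider for instance the cyclic shift $S \in \mathcal{L}(\mathbb{R}^3)$ given by $S(x_1,x_2,x_3) = (x_3,x_1,x_2)$, where $\mathbb{R}^3$ carries its standard lattice order. The operator $S$ is positive and power bounded with $r(S) = 1$, and the spectrum of its complex extension is
\begin{align*}
	\sigma(S) = \sigma_{\operatorname{per}}(S) = \{1, e^{2\pi i/3}, e^{4\pi i/3}\},
\end{align*}
which is indeed a cyclic subset of $\mathbb{C}$, in accordance with the result on Banach lattices quoted above.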
In fact, they are not even true for \emph{completely positive} operators and semigroups on such spaces; recall that a linear operator $L \in \mathcal{L}(\mathcal{A}_{\operatorname{sa}})$, whose complex extension to $\mathcal{A}$ is denoted by $L_\mathbb{C}$, is called \emph{completely positive} if the operator $L_\mathbb{C} \otimes \id_{\mathbb{C}^{k \times k}} \in \mathcal{L}(\mathcal{A} \otimes \mathbb{C}^{k \times k})$ (more precisely: its restriction to the self-adjoint part of the $C^*$-algebra $\mathcal{A} \otimes \mathbb{C}^{k \times k}$) is positive for every $k \in \mathbb{N}_0$; see \cite[Section~II.6.9]{Blackadar2006} for some details. We call a $C_0$-semigroup $(e^{tA})_{t \ge 0}$ on $\mathcal{A}_{\operatorname{sa}}$ \emph{completely positive} if the operator $e^{tA}$ is completely positive for every $t \ge 0$. Completely positive operators play an important role in quantum physics. Here we are going to show that the above mentioned Perron--Frobenius type results are not even true for completely positive operators and semigroups if the $C^*$-algebra $\mathcal{A}$ is non-commutative. To show this we only need the simple fact that for every $a \in \mathcal{A}$ the operator $L \in \mathcal{L}(\mathcal{A}_{\operatorname{sa}})$ given by $Lc = a^*ca$ is completely positive. \begin{theorem} \label{thm:perron-frobenius-on-c-star-algebras} Let $\mathcal{A}$ be a $C^*$-algebra and let the space $\mathcal{A}_{\operatorname{sa}}$ of self-adjoint elements in $\mathcal{A}$ be endowed with the canonical order. The following assertions are equivalent: \begin{enumerate}[\upshape (i)] \item $\mathcal{A}$ is commutative. \item $\mathcal{A}_{\operatorname{sa}}$ is lattice ordered. \item $\mathcal{A}_{\operatorname{sa}}$ has the Riesz decomposition property. \item Every cross positive operator in $\mathcal{L}(\mathcal{A}_{\operatorname{sa}})$ is quasi-positive. \item Every exponentially positive operator in $\mathcal{L}(\mathcal{A}_{\operatorname{sa}})$ is quasi-positive. 
\item Every power bounded positive operator in $\mathcal{L}(\mathcal{A}_{\operatorname{sa}})$ with spectral radius $1$ has cyclic peripheral spectrum. \item Every power bounded completely positive operator in $\mathcal{L}(\mathcal{A}_{\operatorname{sa}})$ with spectral radius $1$ has cyclic peripheral spectrum. \item If the generator $A$ of a bounded positive $C_0$-semigroup on $\mathcal{A}_{\operatorname{sa}}$ has spectral bound $0$, then its boundary spectrum is additively cyclic. \item If the generator $A$ of a bounded completely positive $C_0$-semigroup on $\mathcal{A}_{\operatorname{sa}}$ has spectral bound $0$, then its boundary spectrum is additively cyclic. \item Every norm continuous, bounded and completely positive $C_0$-group $(e^{tL})_{t \in \mathbb{R}}$ on $\mathcal{A}_{\operatorname{sa}}$ is trivial, i.e.\ it fulfils $e^{tL} = \id_{\mathcal{A}_{\operatorname{sa}}}$ for all $t \in \mathbb{R}$. \end{enumerate} \end{theorem} \begin{proof} We show ``(i) $\Rightarrow$ (ii) $\Rightarrow$ (iv) $\Rightarrow$ (v) $\Rightarrow$ (x) $\Rightarrow$ (i)'' as well as ``(ii) $\Rightarrow$ (iii) $\Rightarrow$ (v)'', ``(ii) $\Rightarrow$ (vi) $\Rightarrow$ (vii) $\Rightarrow$ (x)'' and ``(ii) $\Rightarrow$ (viii) $\Rightarrow$ (ix) $\Rightarrow$ (x)''. ``(i) $\Rightarrow$ (ii)'' This follows from Gelfand's representation theorem for commutative $C^*$-algebras. ``(ii) $\Rightarrow$ (iv)''. Since $\mathcal{A}_{\operatorname{sa}}$ is lattice ordered and since the positive cone in $\mathcal{A}_{\operatorname{sa}}$ is normal, there is an equivalent norm on $\mathcal{A}_{\operatorname{sa}}$ which renders it a Banach lattice. Hence, it follows from \cite[Theorem~C-II-1.11]{Arendt1986} that every cross positive operator on $\mathcal{A}_{\operatorname{sa}}$ is quasi-positive. ``(iv) $\Rightarrow$ (v)'' Assume that (iv) is true and let $L \in \mathcal{L}(\mathcal{A}_{\operatorname{sa}})$ be exponentially positive. 
Due to (iv) it suffices to show that $L$ is cross positive, so let $0 \le a \in \mathcal{A}_{\operatorname{sa}}$ and $0 \le \varphi \in \mathcal{A}_{\operatorname{sa}}'$ such that $\langle \varphi, a \rangle = 0$. Then we obtain \begin{align*} \langle \varphi, La \rangle = \lim_{t \downarrow 0} \frac{\langle \varphi, (e^{tL} - \id_{\mathcal{A}_{\operatorname{sa}}})a \rangle}{t} = \lim_{t \downarrow 0} \frac{\langle \varphi, e^{tL}a \rangle}{t} \ge 0. \end{align*} Hence, $L$ is cross positive and it now follows from (iv) that $L$ is quasi-positive. ``(v) $\Rightarrow$ (x)'' Suppose that (v) is true, let $L \in \mathcal{L}(\mathcal{A}_{\operatorname{sa}})$ and assume that the group $(e^{tL})_{t \in \mathbb{R}}$ is bounded and completely positive. We argue as in Proposition~\ref{prop:perron-frobenius-for-bounded-generator}: we have $\emptyset \not= \sigma(L) \subseteq i\mathbb{R}$ and we know from (v) that $L + \alpha \id_{\mathcal{A}_{\operatorname{sa}}}$ is positive for some $\alpha \ge 0$. Since the positive cone in $\mathcal{A}_{\operatorname{sa}}$ is generating and normal, it follows that the spectral radius of the operator $L + \alpha \id_{\mathcal{A}_{\operatorname{sa}}}$ is contained in its spectrum \cite[Section~2.2 in the Appendix]{Schaefer1999}; hence, the spectral radius equals the spectral bound $s(L + \alpha \id_{\mathcal{A}_{\operatorname{sa}}})$, which is in turn equal to $\alpha$. Thus, $\sigma(L + \alpha \id_{\mathcal{A}_{\operatorname{sa}}}) = \sigma(L + \alpha \id_{\mathcal{A}_{\operatorname{sa}}}) \cap (i\mathbb{R} + \alpha) = \{\alpha\}$, so we conclude that $\sigma(L) = \{0\}$. It now follows from the semigroup analogue of Gelfand's $T = \id$ theorem \cite[Corollary~4.4.12]{Arendt2011} that $e^{tL} = \id_{\mathcal{A}_{\operatorname{sa}}}$ for all $t \in \mathbb{R}$.
``(x) $\Rightarrow$ (i)'' Let us argue as in the proof of Theorem~\ref{thm:sherman}: fix $a \in \mathcal{A}_{\operatorname{sa}}$ and define $T_t \in \mathcal{L}(\mathcal{A}_{\operatorname{sa}})$ by $T_tc = e^{-ita}ce^{ita}$ for every $t \in \mathbb{R}$ (where the exponential function is computed in the unitization of $\mathcal{A}$ if $\mathcal{A}$ does not contain a unit itself). Then $(T_t)_{t \in \mathbb{R}}$ is a bounded, completely positive and operator norm continuous group on $\mathcal{A}_{\operatorname{sa}}$; its generator $L \in \mathcal{L}(\mathcal{A}_{\operatorname{sa}})$ is given by $Lc = -iac + ica$ for all $c \in \mathcal{A}_{\operatorname{sa}}$. It follows from (x) that $L = 0$ and hence, $a$ commutes with every $c \in \mathcal{A}_{\operatorname{sa}}$, which in turn proves that $\mathcal{A}$ is commutative. ``(ii) $\Rightarrow$ (iii)'' Every vector lattice has the Riesz decomposition property, see e.g.\ \cite[Corollary~1.55]{Aliprantis2007}. ``(iii) $\Rightarrow$ (v)'' Suppose that $\mathcal{A}_{\operatorname{sa}}$ has the Riesz decomposition property. Since the cone in $\mathcal{A}_{\operatorname{sa}}$ is normal, it follows that the dual space $\mathcal{A}_{\operatorname{sa}}'$ is a vector lattice \cite[Theorem~2.47]{Aliprantis2007}. Moreover, the cone in $\mathcal{A}_{\operatorname{sa}}'$ is normal, too \cite[Theorem~2.42]{Aliprantis2007}, so $\mathcal{A}_{\operatorname{sa}}'$ is a Banach lattice with respect to an equivalent norm. If $L \in \mathcal{L}(\mathcal{A}_{\operatorname{sa}})$ is exponentially positive, then so is its adjoint $L' \in \mathcal{L}(\mathcal{A}_{\operatorname{sa}}')$ and hence, $L'$ is quasi-positive according to \cite[Theorem~C-II-1.11]{Arendt1986}. This in turn implies that $L$ is quasi-positive itself. ``(ii) $\Rightarrow$ (vi)'' If $\mathcal{A}_{\operatorname{sa}}$ is lattice ordered, then it is a Banach lattice with respect to some equivalent norm (since the cone in $\mathcal{A}_{\operatorname{sa}}$ is normal). 
Hence, assertion (vi) follows from Perron--Frobenius theory of single operators on Banach lattices, see \cite[Theorem~V.4.9]{Schaefer1974}. ``(vi) $\Rightarrow$ (vii)'' This is obvious. ``(vii) $\Rightarrow$ (x)'' Suppose that (vii) is true and that $L \in \mathcal{L}(\mathcal{A}_{\operatorname{sa}})$ generates a completely positive and bounded group $(e^{tL})_{t \in \mathbb{R}}$. Then the spectrum of $L$ is bounded and contained in the imaginary axis. Applying the spectral mapping theorem for small $t>0$ we conclude that the peripheral spectrum of $e^{tL}$ can only be cyclic if it is contained in $\{1\}$. Hence, $\sigma(L) = \{0\}$, so assertion (x) follows from the semigroup analogue of Gelfand's $T = \id$ theorem \cite[Corollary~4.4.12]{Arendt2011}. ``(ii) $\Rightarrow$ (viii)'' Assume that $\mathcal{A}_{\operatorname{sa}}$ is lattice ordered. We use again that $\mathcal{A}_{\operatorname{sa}}$ is then a Banach lattice for some equivalent norm. Hence, assertion (viii) follows from Perron--Frobenius theory for positive $C_0$-semigroups on Banach lattices, see \cite[Theorem~C-III-2.10]{Arendt1986}. ``(viii) $\Rightarrow$ (ix)'' This is obvious. ``(ix) $\Rightarrow$ (x)'' If (ix) holds and if $L \in \mathcal{L}(\mathcal{A}_{\operatorname{sa}})$ generates a completely positive and bounded group $(e^{tL})_{t \in \mathbb{R}}$, then it follows from (ix) that $\sigma(L) = \{0\}$. Hence we can again employ the semigroup analogue of Gelfand's $T = \id$ theorem \cite[Corollary~4.4.12]{Arendt2011} to conclude that (x) holds. \end{proof} The above proof shows that many (not all) of the implications in Theorem~\ref{thm:perron-frobenius-on-c-star-algebras} are also true on general ordered Banach spaces with generating and normal cone. However, since we are mainly interested in $C^*$-algebras in this note, we omit a detailed discussion of this. \end{document}
\begin{document} \title{Congruences between modular forms and related modules } \author{Miriam Ciavarella} \date{ } \maketitle \begin{abstract} We fix a prime $\ell$ and let $M$ be an integer such that $\ell\not|M$; let $f\in S_2(\Gamma_1(M\ell^2))$ be a newform which is supercuspidal at $\ell$, of a fixed type related to the nebentypus, and special at a finite set of primes. Let ${\bf T}^\psi$ be the local quaternionic Hecke algebra associated to $f$. The algebra ${\bf T}^\psi$ acts on a module $\mathcal M^\psi_f$ coming from the cohomology of a Shimura curve. Applying the Taylor-Wiles criterion and a recent theorem of Savitt, ${\bf T}^\psi$ is the universal deformation ring of a global Galois deformation problem associated to $\overline\rho_f$. Moreover $\mathcal M^\psi_f$ is free of rank 2 over ${\bf T}^\psi$. If $f$ occurs at minimal level, then by a generalization of a result of Conrad, Diamond and Taylor and by the classical Ihara lemma, we prove a level-raising theorem and a result about congruence ideals. The extension of these results to the non-minimal case is an open problem. \end{abstract} Keywords: modular form, deformation ring, Hecke algebra, quaternion algebra, congruences. \\ 2000 AMS Mathematics Subject Classification: 11F80 \section*{Introduction} The principal aim of this article is to establish some results about isomorphisms of complete intersections between a universal deformation ring and a local Hecke algebra, and about cohomological modules which are free over a Hecke algebra. From these results it is possible to deduce that there is an isomorphism between the quaternionic cohomological congruence module for a modular form and the classical congruence module.\\ Our work takes place in a line of research which has its origin in the works of Wiles and Taylor-Wiles on the Shimura-Taniyama-Weil conjecture.
Recall that the problem addressed by Wiles in \cite{wi} is to prove that a certain ring homomorphism $\phi:\mathcal R_\mathcal D\to{\bf T}_\mathcal D$ is an isomorphism (of complete intersection), where $\mathcal R_\mathcal D$ is the universal deformation ring for a mod $\ell$ Galois representation arising from a modular form and ${\bf T}_\mathcal D$ is a certain Hecke algebra. \\ To extend our results to a more general class of representations, allowing ramification at a finite set of primes, we need a small generalization of a result of Conrad, Diamond and Taylor. We will describe it using a recent theorem of Savitt, and we will deduce from it two interesting results about congruences.\\ Our first result extends a work of Terracini \cite{Lea} to a more general class of types and allows us to work with modular forms having a non-trivial nebentypus. Our arguments are largely identical to Terracini's in many places; the debt to Terracini's work will be clear throughout the paper. One important change is that, since we work with Galois representations which are not semistable at $\ell$ but only potentially semistable, we use a recent theorem of Savitt \cite{savitt}, which proves a conjecture of Conrad, Diamond and Taylor (\cite{CDT}, conjectures 1.2.2 and 1.2.3) on the size of certain deformation rings parametrizing potentially Barsotti-Tate Galois representations, extending results of Breuil and M\'{e}zard (conjecture 2.3.1.1 of \cite{BM}, classifying Galois lattices in semistable representations in terms of \lq\lq strongly divisible modules\rq\rq) to the potentially crystalline case in Hodge-Tate weights $(0,1)$.
Given a prime $\ell$, we fix a newform $f\in S_2(\Gamma_0(N\Delta'\ell^2),\psi)$ with nebentypus $\psi$ of order prime to $\ell$, special at primes dividing $\Delta'$ and such that its local representation $\pi_{f,\ell}$ of $GL_2({\bf Q}_\ell)$ is associated to a fixed regular character $\chi$ of ${\bf F}_{\ell^2}^\times$ satisfying $\chi|_{{\bf Z}_\ell^\times}=\psi_\ell|_{{\bf Z}_\ell^\times}$. We consider the residual Galois representation $\overline\rho$ associated to $f$. We denote by ${\bf T}^\psi$ the ${\bf Z}_\ell$-subalgebra of $\prod_{h\in\mathcal B}\mathcal O_{h,\lambda}$ where $\mathcal B$ is the set of normalized newforms in $S_2(\Gamma_0(N\Delta'\ell^2),\psi)$ which are supercuspidal of type $\tau=\chi^\sigma\oplus\chi$ at $\ell$ and whose associated representation is a deformation of $\overline\rho$. By the Jacquet-Langlands correspondence and the Matsushima-Murakami-Shimura isomorphism, one can see such forms in a local component ${\mathcal M^\psi}$ of the $\ell$-adic cohomology of a Shimura curve. By imposing suitable conditions on the type $\tau$, we describe, for each prime $p$ dividing the level, a local deformation condition on $\overline\rho_p$ and, applying the Taylor-Wiles criterion in the version of Diamond \cite{D} and Fujiwara, we prove that the algebra ${\bf T}^\psi$ is characterized as the universal deformation ring ${\mathcal R^\psi}$ of our global Galois deformation problem. We point out that, in order to prove the existence of a family of sets realizing simultaneously the conditions of a Taylor-Wiles system, we make large use of Savitt's theorem \cite{savitt}: assuming the existence of a newform $f$ as above, the tangent space of the deformation functor has dimension one over the residue field. Our first result is the following: \begin{theorem} \begin{itemize} \item[a)] $\Phi:{\mathcal R^\psi}\to{\bf T}^\psi$ is an isomorphism of complete intersection; \item[b)] ${\mathcal M^\psi}$ is a free ${\bf T}^\psi$-module of rank 2.
\end{itemize} \end{theorem} \noindent We observe that in \cite{CDT} the authors assume that the type $\tau$ is strongly acceptable for $\overline\rho_\ell$. In this way they ensure the existence of a modular form under their hypotheses. Since we are interested in studying the quaternionic cohomological module associated to the modular form $f$, as a general hypothesis we suppose that there exists a modular form $f$ satisfying our conditions; in other words, we are assuming that our cohomological module is non-zero. \noindent Keeping the definitions as in \cite{CDT}, Savitt's result allows us to suppress the assumption of acceptability in the definition of strong acceptability; thus it is possible to extend Conrad, Diamond and Taylor's result \cite{CDT} by relaxing the hypotheses on the residual representation.\\ \noindent Under the hypothesis that $f$ occurs with minimal level (i.e.\ the ramification at primes $p$ dividing the Artin conductor of the Galois representation $\rho_f$ is equal to the ramification of $\overline\rho_f$ at $p$), the module ${\mathcal M^\psi}$, used to construct the Taylor-Wiles system, can also be seen as a part of a module $\mathcal M^{\rm mod}$ coming from the cohomology of a modular curve, as described in \cite{CDT} \S5.3. Applying the extended methods of Conrad, Diamond and Taylor and Ihara's lemma for the cohomology of modular curves \cite{CDT}, the first part of our result can be extended by allowing ramification at a set of primes $S$ disjoint from $N\Delta'\ell$. In this way it is possible to obtain results of the form: \begin{itemize} \item $\Phi_S:\mathcal R^\psi_S\to{\bf T}^\psi_S$ is an isomorphism of complete intersections, \end{itemize} where $\mathcal R^\psi_S$ is a universal deformation ring allowing ramification at primes in $S$ and ${\bf T}^\psi_S$ is a local Hecke algebra.
We observe that, since there is no analogue of Ihara's lemma for the cohomology of the Shimura curve, we do not have any information about the corresponding module $\mathcal M^\psi_S$ coming from the cohomology of a Shimura curve. In particular, in the general case $S\not=\emptyset$, it is not possible to show that $\mathcal M^\psi_S$ is free over ${\bf T}^\psi_S$. We observe that, as a consequence of the generalization of the result of Conrad, Diamond and Taylor, two results about raising the level of modular forms and about congruence ideals follow. Let $S_1,S_2$ be two subsets of $\Delta_2$; we slightly modify the deformation problem by imposing the condition sp at primes $p$ in $S_2$ and by allowing ramification at primes in $S_1$. We denote by $\eta_{S_1,S_2}$ the congruence ideal of a modular form relative to the set $\mathcal B_{S_1,S_2}$ of the newforms of weight 2, nebentypus $\psi$ and level dividing $N\Delta_1\Delta_2\ell$ which are supercuspidal of type $\tau$ at $\ell$ and special at primes in $\Delta_1S_2$. We prove that there is an isomorphism of complete intersections between the universal deformation ring $\mathcal R^\psi_{S_1,S_2}$ and the Hecke algebra ${\bf T}^\psi_{S_1,S_2}$ acting on the space $\mathcal B_{S_1,S_2}$, and $$\eta_{\Delta_2,\emptyset}=C\eta_{S_1,S_2}$$ where $C$ is a constant depending on the modular form. In particular we prove the following result: \begin{theorem} Let $f=\sum a_nq^n$ be a normalized newform in $S_2(\Gamma_0(M\ell^2),\psi)$, supercuspidal of type $\tau=\chi\oplus\chi^\sigma$ at $\ell$ and special at the primes in a finite set $\Delta'$, and let $q$ be a prime such that $(q,M\ell^2)=1$ and $q\not\equiv-1\ {\rm mod}\ \ell$. Then there exists $g\in S_2(\Gamma_0(qM\ell^2),\psi)$, supercuspidal of type $\tau$ at $\ell$ and special at every prime $p|\Delta'$, such that $f\equiv g\ {\rm mod}\ \lambda$ if and only if $$a_q^2\equiv\psi(q)(1+q)^2\ {\rm mod}\ \lambda.$$
\end{theorem} We observe that our results concerning the cohomological modules hold only at the minimal level, since a quaternionic analogue of Ihara's lemma is not available in this case. Let $S$ be a finite set of primes not dividing $M\ell$; we fix $f\in S_2(\Gamma_0(N\Delta'\ell^2 S),\psi)$ supercuspidal of type $\tau$ at $\ell$ and special at primes $p|\Delta'$. If we modify our Galois deformation problem by allowing ramification at primes in $S$, we obtain a new universal deformation ring $\mathcal R_S$ and a new Hecke algebra ${\bf T}^\psi_S$ acting on the newforms giving rise to such representations. We make the following conjecture: \begin{conjecture}\label{conj1} \begin{itemize} \item $\mathcal R_S\to{\bf T}^\psi_S$ is an isomorphism of complete intersection; \item let $\mathcal M^\psi_S$ be the module $H^1({\bf X}_1(NS),\mathcal O)_{\mathfrak m_S}^{\widehat\psi}$ coming from the cohomology of the Shimura curve ${\bf X}_1(NS)$ associated to the open compact subgroup of $B_{\bf A}^{\times,\infty}$, $V_1(NS)=\prod_{p\not|NS\ell}R_p^\times\prod_{p|NS}K_p^1(N)\times(1+u_\ell R_\ell)$, where $K_p^1(N)$ is defined in section \ref{shi} and $u_\ell$ is a uniformizer of $B_\ell^\times$. Then $\mathcal M^\psi_S$ is a free ${\bf T}^\psi_S$-module of rank 2. \end{itemize} \end{conjecture} \noindent Conjecture \ref{conj1} easily follows from the following conjecture: \begin{conjecture}\label{noscon} Let $q$ be a prime number such that $q\not|N\Delta'\ell^2$. We fix a maximal non-Eisenstein ideal of the Hecke algebra ${\bf T}_0^{\widehat\psi}(N)$ acting on the group $H^1({\bf X}_1(N),\mathcal O)^{\widehat\psi}$. Let ${\bf X}_1(N)$ be the Shimura curve $${\bf X}_1(N)=B^\times\setminus B_{\bf A}^\times/K_\infty^+V_1(N)$$ where $$V_1(N)=\prod_{p\not|N\ell}R_p^\times\prod_{p|N}K_p^1(N)\times(1+u_\ell R_\ell)$$ where $K_p^1(N)$ is defined in section \ref{shi}, and $u_\ell$ is a uniformizer of $B_\ell^\times$.
The map $$\alpha_\mathfrak m:H^1({\bf X}_1(N),\mathcal O)_\mathfrak m^{\widehat\psi}\times H^1({\bf X}_1(N),\mathcal O)_\mathfrak m^{\widehat\psi}\to H^1({\bf X}_1(Nq),\mathcal O)_{\mathfrak m^q}^{\widehat\psi}$$ is such that $\alpha\otimes_\mathcal O k$ is injective, where $\mathfrak m^q$ is the inverse image of the ideal $\mathfrak m$ under the natural map ${\bf T}_0^{\widehat\psi}(Nq)\to{\bf T}_0^{\widehat\psi}(N)$ and $k=\mathcal O/\lambda$. \end{conjecture} \noindent This conjecture would provide an analogue for Shimura curves of Ihara's lemma in the case $\ell|\Delta$. In \cite{DT} and in \cite{DTi}, Diamond and Taylor show that if $\ell$ does not divide the discriminant of the indefinite quaternion algebra, then the analogue of conjecture \ref{noscon} holds. \section{Notations} \noindent For a rational prime $p$, ${\bf Z}_p$ and ${\bf Q}_p$ denote the ring of $p$-adic integers and the field of $p$-adic numbers, respectively. If $A$ is a ring, then $A^\times$ denotes the group of invertible elements of $A$. We will denote by ${\bf A}$ the ring of rational ad\'eles, and by ${\bf A}^\infty$ the finite ad\'eles.\\ Let $B$ be a quaternion algebra over ${\bf Q}$; we will denote by $B_{\bf A}$ the adelization of $B$, by $B_{\bf A}^\times$ the topological group of invertible elements in $B_{\bf A}$ and by $B_{\bf A}^{\times,\infty}$ the subgroup of finite ad\'eles.\\ Let $R$ be a maximal order in $B$. For a rational place $v$ of ${\bf Q}$ we put $B_v=B\otimes_{\bf Q}{\bf Q}_v$; if $p$ is a finite place we put $R_p=R\otimes_{\bf Z}{\bf Z}_p$.
\\ If $p$ is a prime not dividing the discriminant of $B$, including $p=\infty$, we fix an isomorphism $i_p:B_p\to M_2({\bf Q}_p)$ such that if $p\not=\infty$ we have $i_p(R_p)=M_2({\bf Z}_p)$.\\ We write $GL_2^+({\bf R})=\{g\in GL_2({\bf R})|\ det\ g>0\}$ and $K_\infty={\bf R}^\times O_2({\bf R}),$ $K_\infty^+={\bf R}^\times SO_2({\bf R}).$ If $K$ is a field, let $\overline K$ denote an algebraic closure of $K$; we put $G_K={\rm Gal}(\overline K/K)$. For a local field $K$, $K^{unr}$ denotes the maximal unramified extension of $K$ in $\overline K$; we put $I_K={\rm Gal}(\overline K/K^{unr})$, the inertia subgroup of $G_K$. For a prime $p$ we put $G_p=G_{{\bf Q}_p}$, $I_p=I_{{\bf Q}_p}$. If $\rho$ is a representation of $G_{\bf Q}$, we write $\rho_p$ for the restriction of $\rho$ to a decomposition group at $p$. \section{The local Hecke algebra ${\bf T}^\psi$}\label{de} We fix a prime $\ell>2$. Let ${\bf Z}_{\ell^2}$ denote the integer ring of ${\bf Q}_{\ell^2}$, the unramified quadratic extension of ${\bf Q}_\ell$. Let $M\not=1$ be a square-free integer not divisible by $\ell$. We fix an eigenform $f$ in $S_2(\Gamma_1(M\ell^2))$; then $f\in S_2(\Gamma_0(M\ell^2),\psi)$ for some Dirichlet character $\psi:({\bf Z}/M\ell^2{\bf Z})^\times\to\overline{\bf Q}^\times.$\\ By abuse of notation, we also denote by $\psi$ the adelisation of the Dirichlet character $\psi$, and we denote by $\psi_p$ the composition of $\psi$ with the inclusion ${\bf Q}_p^\times\to {\bf A}^\times$.\\ We fix a regular character $\chi:{\bf Z}_{\ell^2}^\times\to \overline{\bf Q}^\times$ of conductor $\ell$ such that $\chi|_{{\bf Z}_\ell^\times}=\psi_\ell|_{{\bf Z}_\ell^\times}$ and we extend $\chi$ to ${\bf Q}_{\ell^2}^\times$ by putting $\chi(\ell)=-\psi_\ell(\ell)$.
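Let us make this extension explicit (a routine observation, using that ${\bf Q}_{\ell^2}/{\bf Q}_\ell$ is unramified, so that $\ell$ is also a uniformizer of ${\bf Q}_{\ell^2}$): every $x\in{\bf Q}_{\ell^2}^\times$ factors uniquely as $x=\ell^n u$ with $n\in{\bf Z}$ and $u\in{\bf Z}_{\ell^2}^\times$, and the extended character is given by $$\chi(\ell^n u)=(-\psi_\ell(\ell))^n\chi(u).$$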
We observe that $\chi$ is not uniquely determined by $\psi$ and, if we fix an embedding of $\overline{\bf Q}$ in $\overline{\bf Q}_\ell$, we can regard the values of $\chi$ as lying in this field.\\ By local class field theory, $\chi$ can be regarded as a character of $I_\ell$, and we can consider the type $\tau=\chi\oplus\chi^\sigma:I_\ell\to GL_2(\overline{\bf Q}_\ell)$, where $\sigma$ denotes the complex conjugation.\\ We fix a decomposition $M=N\Delta'$ where $\Delta'$ is a product of an odd number of primes. If we choose $f\in S_2(\Gamma_1(M\ell^2))$ such that the automorphic representation $\pi_f=\otimes_v\pi_{f,v}$ of $GL_2({\bf A})$ associated to $f$ is supercuspidal of type $\tau=\chi\oplus\chi^\sigma$ at $\ell$ and special at every prime $p|\Delta'$, then $\pi_{f,\ell}=\pi_\ell(\chi)$, where $\pi_\ell(\chi)$ is the representation of $GL_2({\bf Q}_\ell)$ associated to $\chi$, with central character $\psi_\ell$ and conductor $\ell^2$ (see \cite{Ge}, \S 2.8). Moreover, under our hypotheses, the nebentypus $\psi$ factors through $({\bf Z}/N\ell{\bf Z})^\times$. As a general hypothesis, we assume that $\psi$ has order prime to $\ell$.\\ Let $WD(\pi_\ell(\chi))$ be the $2$-dimensional representation of the Weil-Deligne group at $\ell$ associated to $\pi_\ell(\chi)$ by the local Langlands correspondence. Since, by local class field theory, we can identify ${\bf Q}_{\ell^2}^\times$ with $W_{{\bf Q}_{\ell^2}}^{ab}$, we can regard $\chi$ as a character of $W_{{\bf Q}_{\ell^2}}$; by (\cite{Ca} \S 11.3), we have \begin{equation}\label{ind} WD(\pi_\ell(\chi))=Ind^{W_{{\bf Q}_{\ell}}}_{W_{{\bf Q}_{\ell^2}}}(\chi)\otimes|\ |_\ell^{-1/2}.
\end{equation} \noindent Let $\rho_f:G_{\bf Q}\to GL_2(\overline{\bf Q}_\ell)$ be the Galois representation associated to $f$ and $\overline\rho:G_{\bf Q}\to GL_2(\overline{\bf F}_\ell)$ be its reduction modulo $\ell$.\\ As in \cite{Lea}, we impose the following conditions on $\overline\rho$: \begin{equation}\label{con1} \overline\rho\ {\rm is\ absolutely\ irreducible}; \end{equation} \begin{equation}\label{cond2} {\rm if}\ p|N\ {\rm then}\ \overline\rho(I_p)\not=1; \end{equation} \begin{equation} \label{rara} {\rm if}\ p|\Delta'\ {\rm and}\ p^2\equiv 1 {\rm mod}\ \ell\ {\rm then}\ \overline\rho(I_p)\not=1; \end{equation} \begin{equation}\label{end} {\rm End}_{\overline{\bf F}_\ell[G_\ell]}(\overline\rho_\ell)=\overline{\bf F}_\ell. \end{equation} \begin{equation}\label{c3} {\rm if}\ \ell=3,\ \overline\rho\ \ {\rm is\ not\ induced\ from\ a\ character\ of\ }\ {\bf Q}(\sqrt{-3}). \end{equation} \noindent Let $K=K(f)$ be a finite extension of ${\bf Q}_\ell$ containing ${\bf Q}_{\ell^2}$, ${\rm Im}(\psi)$ and the eigenvalues for $f$ of all Hecke operators. Let $\mathcal O$ be the ring of integers of $K$, $\lambda$ be a uniformizer of $\mathcal O$, $k=\mathcal O/(\lambda)$ be the residue field. \noindent Let $\mathcal B$ denote the set of normalized newforms $h$ in $S_2(\Gamma_0(M\ell^2),\psi)$ which are supercuspidal of type $\chi$ at $\ell$, special at primes dividing $\Delta'$ and whose associated representation $\rho_h$ is a deformation of $\overline\rho$. For $h\in\mathcal B$, let $h=\sum_{n=1}^\infty a_n(h)q^n$ be the $q$-expansion of $h$ and let $\mathcal O_h$ be the $\mathcal O$-algebra generated in ${\bf Q}_\ell$ by the Fourier coefficients of $h$. 
Let ${\bf T}^\psi$ denote the sub-$\mathcal O$-algebra of $\prod_{h\in\mathcal B}\mathcal O_h$ generated by the elements $\widetilde T_p=(a_p(h))_{h\in\mathcal B}$ for $p\not|M\ell$.\\ \section{Deformation problem} Our next goal is to state a global Galois deformation condition on $\overline\rho$ which is a good candidate for having ${\bf T}^\psi$ as a universal deformation ring. \subsection{The global deformation condition of type $(\rm{sp},\tau,\psi)_Q$}\label{univ} First of all we observe that our local Galois representation $\rho_{f,\ell}=\rho_\ell$ is of type $\tau$ (\cite{CDT}).\\ We let $\Delta_1$ be the product of primes $p|\Delta'$ such that $\overline\rho(I_p)\not=1$, and $\Delta_2$ be the product of primes $p|\Delta'$ such that $\overline\rho(I_p)=1$.\\ We denote by $\mathcal C_\mathcal O$ the category of local complete noetherian $\mathcal O$-algebras with residue field $k$. Let $\epsilon:G_p\to{\bf Z}_\ell^\times$ be the cyclotomic character and $\omega:G_p\to{\bf F}_\ell^\times$ be its reduction mod $\ell$. By analogy with \cite{Lea}, we define the global deformation condition of type $(\rm{sp},\tau,\psi)_Q$: \begin{definition}\label{def} Let $Q$ be a square-free integer, prime to $M\ell$.
We consider the functor $\mathcal F_Q$ from $\mathcal C_\mathcal O$ to the category of sets which associates to an object $A\in\mathcal C_\mathcal O$ the set of strict equivalence classes of continuous homomorphisms $\rho:{G_{\bf Q}}\to GL_2(A)$ lifting $\overline\rho$ and satisfying the following conditions: \begin{itemize} \item[a$_Q$)] $\rho$ is unramified outside $MQ\ell$; \item[b)] if $p|\Delta_1N$ then $\rho(I_p)\simeq\overline\rho(I_p)$; \item[c)] if $p|\Delta_2$ then $\rho_p$ satisfies the sp-condition, that is ${\rm tr}(\rho(F))^2=(p\mu(p)+\mu(p))^2=\psi_p(p)(p+1)^2$ for a lift $F$ of ${\rm Frob}_p$ in $G_p$; \item[d)] $\rho_\ell$ is weakly of type $\tau$; \item[e)] ${\rm det}(\rho)=\epsilon\psi$, where $\epsilon:G_{\bf Q}\to{\bf Z}_\ell^\times$ is the cyclotomic character. \end{itemize} \end{definition} \noindent It is easy to prove that the functor $\mathcal F_Q$ is representable. \noindent Let $\mathcal R^\psi_Q$ be the universal ring associated to the functor $\mathcal F_Q$. We put $\mathcal F=\mathcal F_0$, ${\mathcal R^\psi}=\mathcal R^\psi_0$.\\ We observe that if $\overline\rho(I_p)=1$ then, by the Ramanujan-Petersson conjecture proved by Deligne, the sp-condition rules out those deformations of $\overline\rho$ arising from modular forms which are not special at $p$. The resulting deformation space includes the restrictions to $G_p$ of representations coming from forms in $S_2(\Gamma_0(N\Delta'\ell^2),\psi)$ which are special at $p$, but it does not contain those coming from forms in $S_2(\Gamma_0(N\Delta'\ell^2),\psi)$ which are principal series at $p$. \section{Cohomological modules coming from the Shimura curves}\label{shi} We fix a prime $\ell>2$. Let $\Delta'$ be a product of an odd number of primes, different from $\ell$. We put $\Delta=\ell\Delta'$. Let $B$ be the indefinite quaternion algebra over ${\bf Q}$ of discriminant $\Delta$. Let $R$ be a maximal order in $B$. Let $N$ be an integer prime to $\Delta$.
We put $$K_p^0(N)=i_p^{-1}\left\{\left( \begin{array} [c]{cc} a & b\\ c & d \end{array} \right)\in GL_2({\bf Z}_p)\ |\ c\equiv 0\ {\rm mod}\ N\right\}$$ $$K_p^1(N)=i_p^{-1}\left\{\left( \begin{array} [c]{cc} a & b\\ c & d \end{array} \right)\in GL_2({\bf Z}_p)\ |\ c\equiv 0\ {\rm mod}\ N,\ a\equiv 1\ {\rm mod}\ N\right\}.$$ Let $s$ be a prime $s\not|N\Delta$. We define $$V_0(N,s)=\prod_{p\not|Ns}R^\times_p\times \prod_{p|N}K_p^0(N)\times K_s^1(s^2)$$ and $$V_1(N,s)=\prod_{p\not|N\ell s}R^\times_p\times\prod_{p|N}K_p^1(N)\times K_s^1(s^2)\times (1+u_\ell R_\ell).$$ We observe that there is an isomorphism $$V_0(N,s)/V_1(N,s)\simeq({\bf Z}/N{\bf Z})^\times\times{\bf F}_{\ell^2}^\times.$$ We will consider the character $\widehat\psi$ of $V_0(N,s)$ with kernel $V_1(N,s)$ defined as follows: $$\widehat\psi:=\prod_{p|N}\psi_p\times\chi:({\bf Z}/N{\bf Z})^\times\times{\bf F}_{\ell^2}^\times\to{\bf C}^\times$$ and we shall consider the space $S_2(V_0(N,s),\widehat\psi)$ of quaternionic modular forms with nebentypus $\widehat\psi$.\\ For $i=0,1$ let $\Phi_i(N,s)=(GL_2^+({\bf R})\times V_i(N,s))\cap B_{\bf Q}^\times$ and we consider the Shimura curves: $${\bf X}_i(N,s)=B_{\bf Q}^\times\setminus B^\times_{\bf A}/K_\infty^+\times V_i(N,s).$$ The finite commutative group $\Omega=({\bf Z}/N{\bf Z})^\times\times{\bf F}_{\ell^2}^\times$ naturally acts on the $\mathcal O$-module $H^*({\bf X}_1(N,s),\mathcal O)$ via its action on ${\bf X}_1(N,s)$. Since there is an injection of $H^*({\bf X}_1(N,s),\mathcal O)$ in $H^*({\bf X}_1(N,s),K)$, by (\cite{H2}, \S 7) the cohomology group $H^1({\bf X}_1(N,s),\mathcal O)$ is also equipped with the action of the Hecke operators $T_p$, for $p\not=\ell$, and of the diamond operators $\langle n\rangle$ for $n\in({\bf Z}/N{\bf Z})^\times$. The Hecke action commutes with the action of $\Omega$, since we do not have a $T_\ell$ operator.
The two actions are $\mathcal O$-linear.\\ We can write $\Omega=\Omega_1\times\Omega_2$ where $\Omega_1$ is the $\ell$-Sylow subgroup of $\Omega$ and $\Omega_2$ is the subgroup of $\Omega$ with order prime to $\ell$. Since $\Omega_2\subseteq\Omega$, $\Omega_2$ acts on $H^*({\bf X}_1(N,s),\mathcal O)$ and so $H^*({\bf X}_1(N,s),\mathcal O)=\bigoplus_\varphi H^*({\bf X}_1(N,s),\mathcal O)^\varphi$ where $\varphi$ runs over the characters of $\Omega_2$ and $H^*({\bf X}_1(N,s),\mathcal O)^\varphi$ is the sub-Hecke-module of $H^*({\bf X}_1(N,s),\mathcal O)$ on which $\Omega_2$ acts by the character $\varphi$. Since, by hypothesis, $\psi$ has order prime to $\ell$, $H^*({\bf X}_1(N,s),\mathcal O)^{\widehat\psi}=H^*({\bf X}_1(N,s),\mathcal O)^\varphi$ for some character $\varphi$ of $\Omega_2$. So $H^*({\bf X}_1(N,s),\mathcal O)^{\widehat\psi}$ is a direct summand of $H^*({\bf X}_1(N,s),\mathcal O)$. \noindent It follows easily from the Hochschild-Serre spectral sequence that $$H^*({\bf X}_1(N,s),\mathcal O)^{\widehat\psi}\simeq H^*({\bf X}_0(N,s),\mathcal O({\widehat\psi}))$$ where $\mathcal O({\widehat\psi})$ is the sheaf $B^\times\setminus B_{\bf A}^\times\times\mathcal O/K_\infty^+\times V_0(N,s),$ $B^\times$ acts on $B^\times_{\bf A}\times\mathcal O$ on the left by $\alpha\cdot(g,m)=(\alpha g,m)$ and $K_\infty^+\times V_0(N,s)$ acts on the right by $(g,m)\cdot v=(g,m)\cdot(v_\infty,v^\infty)=(gv,{\widehat\psi}(v^\infty)m)$ where $v_\infty$ and $v^\infty$ are respectively the infinite and finite parts of $v$. By translating to the cohomology of groups we obtain (see \cite{sl}, Appendix) $H^1({\bf X}_1(N,s),\mathcal O)^{\widehat\psi}\simeq H^1(\Phi_0(N,s),\mathcal O(\widetilde\psi)),$ where $\widetilde\psi$ is the restriction of ${\widehat\psi}$ to $\Phi_0(N,s)/\Phi_1(N,s)$ and $\mathcal O(\widetilde\psi)$ is $\mathcal O$ with the action of $\Phi_0(N,s)$ given by $\gamma:a\mapsto\widetilde\psi^{-1}(\gamma)a$.
\noindent The Hecke action on $H^1(\Phi_0(N,s),\mathcal O(\widetilde\psi))$ and the structure of $H^1({\bf X}_1(N,s),K)^{\widehat\psi}$ as a module over the Hecke algebra are well known. Let ${\bf T}^{\widehat\psi}_0(N,s)$ be the $\mathcal O$-algebra generated by the Hecke operators $T_p$, $p\not=\ell$, and the diamond operators, acting on $H^1({\bf X}_1(N,s),\mathcal O)^{\widehat\psi}$. \begin{proposition}\label{ra} $H^1({\bf X}_1(N,s),K)^{\widehat\psi}$ is free of rank 2 over $${\bf T}^{\widehat\psi}_0(N,s)\otimes K={\bf T}^{\widehat\psi}_0(N,s)_K.$$ \end{proposition} \noindent The proof of proposition \ref{ra} easily follows from the following lemmas: \begin{lemma} Let $L/K$ be a Galois extension of fields, let $V$ be a vector space over $K$ and let $T$ be a $K$-algebra acting on $V$. If $V\otimes L$ is free of rank $n$ over $T\otimes L$ then $V$ is free of rank $n$ over $T$. \end{lemma} \begin{proof} Let $G={\rm Gal}(L/K)$. By Galois descent, since $V\otimes L$ is free of rank $n$ over $T\otimes L$, we have $V\simeq(V\otimes L)^G\simeq((T\otimes L)^n)^G\simeq((T\otimes L)^G)^n\simeq T^n.$ \end{proof} \begin{lemma} Let ${\bf T}_0^{\widehat\psi}(N,s)_{\bf C}$ denote the algebra generated over ${\bf C}$ by the operators $T_p$ for $p\not=\ell$ acting on $H^1({\bf X}_1(N,s),{\bf C})^{\widehat\psi}$. Then $H^1({\bf X}_1(N,s),{\bf C})^{\widehat\psi}$ is free of rank 2 over ${\bf T}^{\widehat\psi}_0(N,s)_{\bf C}$. \end{lemma} \noindent We observe that the proof of this lemma follows from the same analysis as in (\cite{Lea}, proof of proposition 1.2), by defining a homomorphism $$JL:S_2(V_1(N,s))\to S_2(\Gamma_0(s^2\Delta')\cap\Gamma_1(N\ell^2))$$ which is injective when restricted to the space $S_2(V_0(N,s),\widehat\psi)$ and equivariant for the Hecke operators.
\section{The $\mathcal O$-module ${\mathcal M^\psi}$ }\label{tw} Throughout this section we largely mirror section 3 of \cite{Lea}, and we formulate a result that generalizes a result of Terracini to the case of nontrivial nebentypus.\\ We set $\Delta=\Delta'\ell$; let $B$ be the indefinite quaternion algebra over ${\bf Q}$ of discriminant $\Delta$. Let $R$ be a maximal order in $B$.\\ It is convenient to choose an auxiliary prime $s\not|M\ell$, $s>3$, such that no lift of $\overline\rho$ can be ramified at $s$; such a prime exists by \cite{DT}, Lemma 2. We consider the group $\Phi_0=\Phi_0(N,s)$; it is easy to verify that the group $\Phi_0$ has no elliptic elements (\cite{Lea}). \noindent There exists an eigenform $\widetilde f\in S_2(\Gamma_0(Ms^2\ell^2),\psi)$ such that $\rho_f=\rho_{\widetilde f}$ and $T_s\widetilde f=0$. By the Jacquet-Langlands correspondence, the form $\widetilde f$ determines a character ${\bf T}^{\widehat\psi}_0(N,s)\to k$ sending the operator $t$ to the class ${\rm mod}\ \lambda$ of the eigenvalue of $t$ for $\widetilde f$. The kernel of this character is a maximal ideal $\mathfrak m$ in ${\bf T}^{\widehat\psi}_0(N,s)$. We define ${\mathcal M^\psi}=H^1({\bf X}_1(N,s),\mathcal O)^{\widehat\psi}_\mathfrak m.$ By combining Proposition 4.7 of \cite{DDT} with the Jacquet-Langlands correspondence we see that there is a natural isomorphism ${\bf T}^\psi\simeq{\bf T}_0^{\widehat\psi}(N,s)_\mathfrak m.$ Therefore, by Proposition \ref{ra}, ${\mathcal M^\psi}\otimes_\mathcal O K\ {\rm is\ free\ of\ rank\ 2\ over}\ {\bf T}^\psi\otimes_\mathcal O K.$ \noindent Let $\mathcal B$ denote the set of newforms $h$ of weight two, nebentypus $\psi$, level dividing $M\ell^2s^2$, special at primes $p$ dividing $\Delta'$, supercuspidal of type $\tau$ at $\ell$ and such that $\overline\rho_h\sim\overline\rho$.
For a newform $h\in\mathcal B$, we let $K_h$ denote the field over ${\bf Q}_\ell$ generated by its coefficients $a_n(h)$, $\mathcal O_h$ denote the ring of integers of $K_h$ and $\lambda$ be a uniformizer of $\mathcal O_h$. We let $A_h$ denote the subring of $\mathcal O_{h}$ consisting of those elements whose reduction mod $\lambda$ is in $k$. We know that, with respect to a suitable basis, $\rho_h:G_{\bf Q}\to GL_2(A_h)$ is a deformation of $\overline\rho$ satisfying our global deformation problem.\\ The universal property of ${\mathcal R^\psi}$ furnishes a unique homomorphism $\pi_h:{\mathcal R^\psi}\to A_h$ such that the composite $G_{\bf Q}\to GL_2({\mathcal R^\psi})\to GL_2(A_h)$ is equivalent to $\rho_h$. Since ${\mathcal R^\psi}$ is topologically generated by the traces of $\rho^{\rm univ}({\rm Frob}_p)$ for $p\not=\ell$ (see \cite{Ma}, \S1.8), we conclude that the map ${\mathcal R^\psi}\to\prod_{h\in\mathcal B}\mathcal O_h$ such that $r\mapsto(\pi_h(r))_{h\in\mathcal B}$ has image ${\bf T}^\psi$. Thus there is a surjective homomorphism of $\mathcal O$-algebras $\Phi:{\mathcal R^\psi}\to{\bf T}^\psi.$ Our goal is to prove the following \begin{theorem}\label{goal} \begin{itemize} \item[a)] ${\mathcal R^\psi}$ is a complete intersection of dimension 1; \item[b)] $\Phi:{\mathcal R^\psi}\to{\bf T}^\psi$ is an isomorphism; \item[c)] ${\mathcal M^\psi}$ is a free ${\bf T}^\psi$-module of rank 2.
\end{itemize} \end{theorem} \subsection{Proof of theorem \ref{goal}} \noindent In order to prove theorem \ref{goal}, we shall apply the Taylor-Wiles criterion in the version of Diamond and Fujiwara, and we continue to follow section 3 of \cite{Lea} closely.\\ \noindent We shall prove the existence of a family $\mathcal Q$ of finite sets $Q$ of prime numbers, not dividing $M\ell$, and of an ${\mathcal R^\psi}_Q$-module ${\mathcal M^\psi}_Q$ for each $Q\in\mathcal Q$ such that the system $({\mathcal R^\psi}_Q,{\mathcal M^\psi}_Q)_{Q\in\mathcal Q}$ satisfies the conditions (TWS1), (TWS2), (TWS3), (TWS4), (TWS5) and (TWS6) of \cite{Lea}. \noindent If these conditions are satisfied, the family $({\mathcal R^\psi}_Q,{\mathcal M^\psi}_Q)_{Q\in\mathcal Q}$ will be called a {\bf Taylor-Wiles system} for $({\mathcal R^\psi},{\mathcal M^\psi})$. Then theorem \ref{goal} will follow from the isomorphism criterion (\cite{D}, theorem 2.1) developed by Wiles and Taylor-Wiles.\\ As in section 3.1 of \cite{Lea}, let $Q$ be a finite set of prime numbers not dividing $N\Delta$ and such that \begin{itemize} \item[(A)] $q\equiv 1\ {\rm mod}\ \ell,\ \ \forall q\in Q$; \item[(B)] if $q\in Q$, $\overline\rho({\rm Frob}_q)$ has distinct eigenvalues $\alpha_{1,q}$ and $\alpha_{2,q}$ contained in $k$. \end{itemize} We will define the modules $\mathcal M_Q$. If $q\in Q$ we put $$K'_q=\left\{\alpha\in R_q^\times\ |\ i_q(\alpha)\in \left( \begin{array} [c]{cc} H_q & *\\ q{\bf Z}_q & * \end{array} \right)\right\}$$ where $H_q$ is the subgroup of $({\bf Z}/q{\bf Z})^\times$ consisting of elements of order prime to $\ell$.
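With $H_q$ as above, the group through which the diamond operators at the primes of $Q$ will act can be made explicit; the following sketch, in which $\Delta_Q$ denotes the group introduced in \cite{Lea}, records it.

```latex
% Each factor is the maximal \ell-power quotient of ({\bf Z}/q{\bf Z})^\times,
% since H_q is its prime-to-\ell part; it is nontrivial because
% q \equiv 1 mod \ell by condition (A):
\Delta_Q\;=\;\prod_{q\in Q}\big({\bf Z}/q{\bf Z}\big)^\times/H_q .
```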
By analogy with the definition of $V_Q$ in section 3.3 of \cite{Lea}, we define $$V_Q'(N,s)=\prod_{p\not|NQs}R_p^\times\times\prod_{p|N}K_p^0(N)\times K_s^1(s^2)\times\prod_{q|Q}K'_q$$ $$V_Q(N,s)=\prod_{p\not|NQs}R_p^\times\times\prod_{p|NQ}K_p^0(NQ)\times K_s^1(s^2)$$ $$\Phi_Q=(GL_2^+({\bf R})\times V_Q(N,s))\cap B^\times,\ \ \Phi_Q'=(GL_2^+({\bf R})\times V_Q'(N,s))\cap B^\times.$$ \noindent Then $\Phi_Q/\Phi_Q'\simeq\Delta_Q$ acts on $H^1(\Phi_Q',\mathcal O(\widetilde\psi))$. Let ${\bf T}_Q^{'\widehat\psi}(N,s)$ (resp. ${\bf T}_Q^{\widehat\psi}(N,s)$) be the Hecke $\mathcal O$-algebra generated by the Hecke operators $T_p$, $p\not=\ell$, and the diamond operators (that is, the Hecke operators coming from $\Delta_Q$), acting on $H^1({\bf X}'_Q(N,s),\mathcal O)^{\widehat\psi}$ (resp. $H^1({\bf X}_Q(N,s),\mathcal O)^{\widehat\psi}$) where ${\bf X}'_Q(N,s)$ (resp. ${\bf X}_Q(N,s)$) is the Shimura curve associated to $V'_Q(N,s)$ (resp. $V_Q(N,s)$). There is a natural surjection $\sigma_Q:{\bf T}_Q^{'\widehat\psi}(N,s)\to {\bf T}_Q^{\widehat\psi}(N,s)$. Since the diamond operator $\langle n\rangle$ depends only on the image of $n$ in $\Delta_Q$, ${\bf T}_Q^{'\widehat\psi}(N,s)$ is naturally an $\mathcal O[\Delta_Q]$-algebra. Let $\widetilde\alpha_{i,q}$, for $i=1,2$, be the two roots in $\mathcal O$ of the polynomial $X^2-a_q(f)X+q$ reducing to $\alpha_{i,q}$ for $i=1,2$. There is a unique eigenform $\widetilde f_Q\in S_2(\Gamma_0(MQs^2\ell^2),\widehat\psi)$ such that $\rho_{\widetilde f_Q}=\rho_f$, $a_s(\widetilde f_Q)=0$, $a_q(\widetilde f_Q)=\widetilde\alpha_{2,q}$ for $q|Q$, where $\widetilde\alpha_{2,q}$ is the lift of $\alpha_{2,q}$.\\ By the Jacquet-Langlands correspondence, the form $\widetilde f_Q$ determines a character $\theta_Q:{\bf T}_Q^{\widehat\psi}(N,s)\to k$ sending $T_p$ to $a_p(\widetilde f_Q)\ {\rm mod}\ \lambda$ and the diamond operators to 1.
We define $\widetilde\mathfrak m_Q={\rm ker}\,\theta_Q,$ $\mathfrak m_Q=\sigma^{-1}_Q(\widetilde\mathfrak m_Q)$, and $\mathcal M_Q=H^1(\Phi'_Q,\mathcal O(\widetilde\psi))_{\mathfrak m_Q}.$ Then the map $\sigma_Q$ induces a surjective homomorphism ${\bf T}_Q^{'\widehat\psi}(N,s)_{\mathfrak m_Q}\to {\bf T}_Q^{\widehat\psi}(N,s)_{\widetilde\mathfrak m_Q}$ whose kernel contains $I_Q({\bf T}_Q^{'\widehat\psi}(N,s))_{\mathfrak m_Q}$. \\ If $\mathcal Q$ is a family of finite sets $Q$ of primes satisfying conditions (A) and (B), conditions (TWS1) and (TWS2) hold, as proved in \cite{Lea}, proposition 3.2; by the same methods as in \S 6 of \cite{DDT1} and in \S 4, \S 5 of \cite{Des}, it is easy to prove that our system $({\mathcal R^\psi}_Q,\mathcal M_Q)_{Q\in\mathcal Q}$ realizes conditions (TWS3), (TWS4), (TWS5) simultaneously. \\ We put $\delta_q=\left( \begin{array} [c]{cc} q & 0\\ 0 & 1 \end{array} \right).$ Let $\eta_q$ be the id\`{e}le in $B_{\bf A}^\times$ defined by $\eta_{q,v}=1$ if $v\not|q$ and $\eta_{q,q}=i^{-1}_q\left( \begin{array} [c]{cc} q & 0\\ 0 & 1 \end{array} \right).$ By strong approximation, write $\eta_q=\delta_qg_\infty u$ with $\delta_q\in B^\times$, $g_\infty\in GL_2^+({\bf R})$, $u\in V_{Q'}(N,s)$. We define a map \begin{eqnarray} H^1(\Phi_Q,\mathcal O(\widetilde\psi))&\to& H^1(\Phi_{Q'},\mathcal O(\widetilde\psi))\\ x&\mapsto& x|_{\eta_q} \end{eqnarray} as follows: let $\xi$ be a cocycle representing the cohomology class $x$ in $H^1(\Phi_{Q},\mathcal O(\widetilde\psi))$; then $x|_{\eta_q}$ is represented by the cocycle $$\xi|_{\eta_q}(\gamma)=\widehat\psi(\delta_q)\cdot\xi(\delta_q\gamma\delta_q^{-1}).$$ We observe that if $\mathcal Q$ is a family of finite sets $Q$ of primes satisfying conditions (A) and (B), then condition (TWS6) holds for the system $({\mathcal R^\psi}_Q,\mathcal M_Q)_{Q\in\mathcal Q}$.
The proof is essentially the same as in \cite{Lea}, using the following lemma: \begin{lemma} $$T_p(x|_{\eta_q})=(T_p(x)|_{\eta_q})\ \ \ {\rm if}\ \ p\not|MQ'\ell,$$ \begin{equation}\label{r} T_q(x|_{\eta_q})=q\widehat\psi(q)res_{\Phi_Q/\Phi_{Q'}}x, \end{equation} $$T_q(res_{\Phi_Q/\Phi_{Q'}}x)=res_{\Phi_Q/\Phi_{Q'}}(T_q(x))-x|_{\eta_q}.$$ \end{lemma} \proof We prove (\ref{r}). We put $\widetilde\delta_q=\left( \begin{array} [c]{cc} 1 & 0\\ 0 & q \end{array} \right),$ and we decompose the double coset $\Phi_{Q'}\widetilde\delta_q\Phi_{Q'}=\coprod_{i=1}^q\Phi_{Q'}\widetilde\delta_qh_i$ with $h_i\in\Phi_{Q'}$; if $\gamma\in\Phi_{Q'}$, then: \begin{eqnarray} T_q(\xi|_{\eta_q})(\gamma)&=&\sum_{i=1}^q\widehat\psi(h_i\widetilde\delta_q)\xi|_{\eta_q}(\widetilde\delta_qh_i\gamma h_{j(i)}^{-1}\widetilde\delta^{-1}_q)\nonumber\\ &=&\sum_{i=1}^q\widehat\psi(h_i)\widehat\psi(\widetilde\delta_q)\widehat\psi(\delta_q)\xi(\delta_q\widetilde\delta_qh_i\gamma h_{j(i)}^{-1}\widetilde\delta^{-1}_q\delta^{-1}_q)\nonumber\\ &=&\widehat\psi(q)\sum_{i=1}^q\widehat\psi(h_i)\xi(h_i\gamma h_{j(i)}^{-1}) \end{eqnarray} where the last equality holds since $\delta_q\widetilde\delta_q=\left( \begin{array} [c]{cc} q & 0\\ 0 & q \end{array} \right)$. From the cocycle relations we have $T_q(x|_{\eta_q})=q\widehat\psi(q)res_{\Phi_Q/\Phi_{Q'}}x.$\qed \noindent The conditions defining the functor $\mathcal F_Q$ characterize a global Galois deformation problem with fixed determinant (\cite{M}, \S 26).
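Before listing the local computations, we recall the shape of the dimension formula being invoked; this is the standard formula obtained from the Poitou-Tate sequence (as in \cite{DDT}), written here with $W=ad^0\overline\rho$ and $L_v\subseteq H^1(G_v,W)$ the local conditions.

```latex
{\rm dim}_k\,Sel_Q(W)-{\rm dim}_k\,Sel^*_Q(W(1))
 \;=\;{\rm dim}_k\,H^0(G_{\bf Q},W)-{\rm dim}_k\,H^0(G_{\bf Q},W(1))
 \;+\;\sum_{v}\Big({\rm dim}_k\,L_v-{\rm dim}_k\,H^0(G_v,W)\Big).
```

With the local terms computed below, the global terms vanish, the place $\ell$ contributes $1-0=1$, the infinite place contributes $0-1=-1$, the places dividing $N\Delta_1\Delta_2$ contribute $0$, and each $q|Q$ (where the deformation is unrestricted, so $L_q=H^1(G_q,W)$) contributes $2-1=1$; summing gives the identity (\ref{f}).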
We let $ad^0\overline\rho$ denote the subrepresentation of the adjoint representation of $\overline\rho$ on the space of trace-zero endomorphisms and we let $ad^0\overline\rho(1)={\rm Hom}(ad^0\overline\rho,\mu_\ell)\simeq Symm^2(\overline\rho)$, with the action given by $(g\varphi)(v)=g\varphi(g^{-1}v).$ The local deformation conditions a$_Q$), b), c), d) allow one to define, for each place $v$ of ${\bf Q}$, a subgroup $L_v$ of $H^1(G_v, ad^0\overline\rho)$, the tangent space of the deformation functor (see \cite{M}). We will describe the computation of the local terms of the dimension formula coming from the Poitou-Tate sequence: \begin{itemize} \item ${\rm dim}_kH^0(G_{\bf Q},ad^0\overline\rho)={\rm dim}_kH^0(G_{\bf Q},ad^0\overline\rho(1))=0$, by the same argument as in \cite{Des}, p. 441. \item ${\rm dim}_kL_\ell=1$. In fact let ${\bf R}^D_{\mathcal O,\ell}$ be the local universal deformation ring associated to the local deformation problem of being weakly of type $\tau$ \cite{CDT}. Since, in dimension 2, potentially Barsotti-Tate is equivalent to potentially crystalline (hence potentially semi-stable) of Hodge-Tate weights $(0,1)$ (see \cite{FM}, theorem C2), this allows us to apply Savitt's result (\cite{savitt}, theorem 6.22). Since under our hypothesis ${\bf R}^D_{\mathcal O,\ell}\not=0$, we deduce that there is an isomorphism $\mathcal O[[X]]\simeq{\bf R}^D_{\mathcal O,\ell}$. \item ${\rm dim}_kH^0(G_\ell,ad^0\overline\rho)=0$, because of hypothesis (\ref{end}). \item ${\rm dim}_kH^1(G_p/I_p,(ad^0\overline\rho)^{I_p})-{\rm dim}_kH^0(G_p,ad^0\overline\rho)=0$ for $p|N\Delta_1$.\\ If we let $W=ad^0\overline\rho$, this follows from the exact sequence \begin{displaymath} 0\to H^0(G_p,W)\to H^0(I_p,W)\stackrel{\scriptstyle{\rm Frob}_p-1}{\to}H^0(I_p,W)\to H^1(G_p/I_p,W^{I_p})\to 0.
\end{displaymath} \item ${\rm dim}_kL_p=1$ for $p|\Delta_2$; in fact the following lemma holds: \begin{lemma}\label{versal} The versal deformation ring of the local deformation problem of satisfying the sp-condition is $\mathcal O[[X,Y]]/(X,XY)=\mathcal O[[Y]].$ \end{lemma} \item ${\rm dim}_kH^0(G_p,ad^0\overline\rho)=1$, since the eigenvalues of $\overline\rho({\rm Frob}_p)$ are distinct. \item ${\rm dim}_kH^1(G_q,ad^0\overline\rho)=2$ if $q|Q$; in fact $H^1(G_q/I_q,W)=W/({\rm Frob}_q-1)W$ has dimension 1, because in the basis $\left\{\left( \begin{array} [c]{cc} 1 & 0\\ 0 & -1 \end{array} \right), \left( \begin{array} [c]{cc} 0 & 1\\ 0 & 0 \end{array} \right), \left( \begin{array} [c]{cc} 0 & 0\\ 1 & 0 \end{array} \right)\right\}$ of $W$, ${\rm Frob}_q$ is the matrix $$\left( \begin{array} [c]{ccc} 1 & 0 & 0\\ 0 & \alpha_{1,q}\alpha_{2,q}^{-1} & 0\\ 0 & 0 & \alpha_{1,q}^{-1}\alpha_{2,q} \end{array} \right)$$ and $\alpha_{1,q}\not=\alpha_{2,q}$ by hypothesis. We observe that: \begin{eqnarray} H^1(I_q,W)^{G_q/I_q}&=&\{\alpha\in {\rm Hom}({\bf Z}_q^\times,W)\ |\ ({\rm Frob}_q-1)\alpha=0\}\nonumber\\ &\simeq& W[{\rm Frob}_q-1]\nonumber \end{eqnarray} is again one-dimensional, and $H^2(G_q/I_q,W)=0$ since $G_q/I_q\simeq\widehat{{\bf Z}}.$ The desired result follows from the inflation-restriction exact sequence. \item ${\rm dim}_kH^0(G_q,ad^0\overline\rho)=1$ if $q|Q$, since the eigenvalues of ${\rm Frob}_q$ on $ad^0\overline\rho$ are $1, \alpha_{1,q}\alpha_{2,q}^{-1}, \alpha_{1,q}^{-1}\alpha_{2,q}$ and the latter two are different from 1 by hypothesis. \item ${\rm dim}_kH^1(G_\infty,ad^0\overline\rho)=0$, since $|G_\infty|=2\not=\ell$.
\item ${\rm dim}_kH^0(G_\infty,ad^0\overline\rho)=1$, since the eigenvalues of complex conjugation on $ad^0\overline\rho$ are $\{1,-1,-1\}.$ \end{itemize} \proof[Proof of lemma \ref{versal}] \noindent We first observe that it is possible to characterize the deformations of $\overline\rho_p$ in the unramified case, if $p^2\not\equiv 1\ {\rm mod}\ \ell$, and, as in Lemma 2.1 of \cite{Lea}, it is easy to prove that the versal deformation ring ${\mathcal R^\psi}'_p$ is generated by two elements $X,Y$ such that $XY=0$. \\ It is immediate to see that the sp-condition is equivalent to requiring that every homomorphism $\varphi:{\mathcal R^\psi}'_p\to A$ associated to a deformation $\rho$ of $\overline\rho_{G_p}$ over an $\mathcal O$-algebra $A\in\mathcal C_\mathcal O$ satisfies $\varphi(X)=0$.\qed \noindent The dimension formula allows us to obtain the following identity: \begin{eqnarray}\label{f} {\rm dim}_kSel_Q(ad^0\overline\rho)-{\rm dim}_kSel^*_Q(ad^0\overline\rho(1))=|Q| \end{eqnarray} and, since the minimal number of topological generators of ${\mathcal R^\psi}_Q$ is equal to ${\rm dim}_kSel_Q(ad^0\overline\rho)$, we obtain that the $\mathcal O$-algebra ${\mathcal R^\psi}_Q$ can be generated topologically by $|Q|+{\rm dim}_kSel^*_Q(ad^0\overline\rho(1))$ elements. Applying the same arguments as in \cite{DDT}, the proof of theorem \ref{goal} follows. \section{Quaternionic congruence module} In this section we will keep the notation of the previous sections.\\ As remarked in section \ref{shi}, there is an injective map $$JL:S_2(V_0(N),\widehat\psi)\to S_2(\Gamma_0(\Delta')\cap\Gamma_1(N\ell^2))$$ where $V_0(N)=\prod_{p\not|N}R_p^\times\times\prod_{p|N}K_p^0(N)$ is an open compact subgroup of $B_{\bf A}^{\times,\infty}$.
We put $V_{\widehat\psi}=JL(S_2(V_0(N),\widehat\psi))$, the subspace of $S_2(\Gamma_0(\Delta')\cap\Gamma_1(N\ell^2))$ generated by those new eigenforms with nebentypus $\psi$ which are supercuspidal of type $\tau$ at $\ell$ and special at primes dividing $\Delta'$. Let $K\subset\overline{\bf Q}_\ell$ be a finite extension containing ${\bf Q}_{\ell^2}$; we consider $f\in S_2(\Gamma_0(N\Delta'\ell^2),\psi)$ a newform, supercuspidal of type $\tau$ at $\ell$ and special at primes dividing $\Delta'$, and let $X$ be the subspace of $V_{\widehat\psi}(K)$ spanned by $f$. We remark that there is an isomorphism between the $K$-algebra ${\bf T}_0^{\widehat\psi}(K)={\bf T}_0^{\widehat\psi}(N)\otimes_\mathcal O K$ generated over $K$ by the operators $T_p$, $p\not=\ell$, acting on $H^1({\bf X}_1(N),K)^{\widehat\psi}$ and the Hecke algebra generated by all the Hecke operators acting on $V_{\widehat\psi}(K)$. Thus $V_{\widehat\psi}=X\oplus Y$ where $Y$ is the orthogonal complement of $X$ with respect to the Petersson product over $V_{\widehat\psi}$ and there is an isomorphism $${\bf T}_0^{\widehat\psi}(K)\simeq{\bf T}_{0}^{\widehat\psi}(K)_X\oplus{\bf T}_{0}^{\widehat\psi}(K)_Y$$ where ${\bf T}_{0}^{\widehat\psi}(K)_X={\bf T}_0^{\widehat\psi}(N)_X\otimes K$ and ${\bf T}_{0}^{\widehat\psi}(K)_Y={\bf T}_0^{\widehat\psi}(N)_Y\otimes K$ are the $K$-algebras generated by the Hecke operators acting on $X$ and $Y$ respectively.\\ As in the classical case, it is possible to define the quaternionic congruence module for $f$ and, by the Jacquet-Langlands correspondence, it is easy to prove that $\widetilde M^{\rm quat}=\mathcal O/(\lambda^n)$ where $n$ is the smallest integer such that $\lambda^ne_f\in{\bf T}_0^{\widehat\psi}(N)$, where $e_f$ is the projector onto the coordinate corresponding to $f$.
There are isomorphisms $$\widetilde M^{\rm quat}\simeq\frac{{\bf T}_0^{\widehat\psi}(N)_X\oplus{\bf T}_0^{\widehat\psi}(N)_Y}{{\bf T}_0^{\widehat\psi}(N)}\simeq\frac{e_f{\bf T}_0^{\widehat\psi}(N)}{e_f{\bf T}_0^{\widehat\psi}(N)\cap{\bf T}_0^{\widehat\psi}(N)}$$ where the first one is an isomorphism of $\mathcal O$-modules and the second one is obtained by considering the projection map of ${\bf T}_0^{\widehat\psi}(N)_X\oplus{\bf T}_0^{\widehat\psi}(N)_Y$ onto the first component.\\ Now let $\mathfrak m$ be the maximal ideal of ${\bf T}_0^{\widehat\psi}(N)$ defined in section \ref{tw}. Since $e_f{\bf T}_0^{\widehat\psi}(N)_\mathfrak m=e_f{\bf T}_0^{\widehat\psi}(N)$, by the results in the previous sections $$\widetilde M^{\rm quat}=\mathcal O/(\lambda^n)=\frac{e_f{\bf T}^\psi}{e_f{\bf T}^\psi\cap{\bf T}^\psi}=(\widetilde L^{\rm quat})^2$$ where $\widetilde L^{\rm quat}$ is the quaternionic cohomological congruence module for $f$ $$\widetilde L^{\rm quat}=\frac{e_fH^1({\bf X}_1(N),\mathcal O)^{\widehat\psi}}{e_fH^1({\bf X}_1(N),\mathcal O)^{\widehat\psi}\cap H^1({\bf X}_1(N),\mathcal O)^{\widehat\psi}}.$$ \section{A generalization of the result of Conrad, Diamond and Taylor using Savitt's theorem}\label{gen} In \cite{CDT}, Conrad, Diamond and Taylor assume that the type $\tau$ is strongly acceptable for $\overline\rho|_{G_\ell}$ and they consider the global Galois deformation problem of being of {\bf type $(S,\tau)$}, where $S$ is a set of rational primes which does not contain $\ell$. Savitt's theorem allows one to suppress the assumption of acceptability in the definition of strong acceptability, and their result, theorem 5.4.2, still follows.
They first suppose that $S=\emptyset$ and prove their result using the improvement on the method of Taylor and Wiles \cite{taywi} found by Diamond \cite{D} and Fujiwara \cite{Fu}; then they prove their result for an arbitrary $S$ by induction on $S$, using Ihara's Lemma.\\ In particular, if $S$ is a set of rational primes not dividing $N\Delta'\ell$, we consider a newform $f$ of weight 2, level dividing $SM\ell^2$, with nebentypus $\psi$ (not trivial), supercuspidal of type $\tau=\chi\oplus\chi^\sigma$ at $\ell$ and such that $\overline\rho_f=\overline\rho$ satisfies the conditions (\ref{con1}), (\ref{cond2}), (\ref{end}) and (\ref{c3}) of section \ref{de}. As a general hypothesis, we assume that $f$ occurs with type $\tau$ and minimal level, that is, $\overline\rho_f$ is ramified at every prime in $\Delta'$.\\ We consider deformations of type $(S,\tau)$ of $\overline\rho_f$ unramified outside the level of $f$ and such that ${\rm det}(\rho)=\epsilon\psi$; we will call this deformation problem of type $(S,\tau,\psi)$. Then Savitt's theorem assures that the tangent space of the deformation functor at $\ell$ is still one-dimensional and so it is possible to carry out the same construction as in \cite{CDT}. Let $\mathcal R_S^{\rm mod,\psi}$ be the classical type $(S,\tau,\psi)$ universal deformation ring which parametrizes representations of type $(S,\tau,\psi)$ with residual representation $\overline\rho$ and let ${\bf T}_S^{\rm mod,\psi}$ be the classical Hecke algebra acting on the space of the modular forms of type $(S,\tau,\psi)$. If we denote by $\mathcal M_S^{\rm mod}$ the cohomological module defined in \S 5.3 of \cite{CDT}, which is essentially the \lq\lq$\tau$-part\rq\rq\ of the first cohomology group of a modular curve of level depending on $S$, let $\mathcal M^{\rm mod,\psi}_S$ be the $\psi$-part of $\mathcal M_S^{\rm mod}$.
Then the following proposition holds: \begin{proposition}\label{gcdt} The map $$\Phi_S^{\rm mod,\psi}:\mathcal R_S^{\rm mod,\psi}\to{\bf T}_S^{\rm mod,\psi}$$ is a complete intersection isomorphism and $\mathcal M^{\rm mod,\psi}_S$ is a free ${\bf T}_S^{\rm mod,\psi}$-module of rank 2. \end{proposition} \noindent In particular we observe that: \begin{lemma} There is an isomorphism of ${\bf T}^\psi$-modules between $H^1({\bf X}_1(N),\mathcal O)^{\widehat\psi}_\mathfrak m$ and $\mathcal M^{{\rm mod},\psi}_\emptyset$ where ${\bf X}_1(N)$ is the Shimura curve associated to $V_1(N)=\prod_{p\not|N\ell}R_p^\times\prod_{p|N}K_p^1(N)\times(1+u_\ell R_\ell)$. \end{lemma} \proof We observe that if $f$ occurs with type $\tau$ and minimal level then ${\mathcal R^\psi}\simeq\mathcal R^{{\rm mod},\psi}_\emptyset$. By theorem \ref{goal} and proposition \ref{gcdt}, there is an isomorphism between the Hecke algebras ${\bf T}^\psi\simeq{\bf T}^{{\rm mod},\psi}_\emptyset$, thus ${\mathcal M^\psi}\simeq \mathcal M^{{\rm mod},\psi}_\emptyset$ as ${\bf T}^\psi$-modules.\qed \noindent We will describe some consequences of this result. \section{Congruence ideals}\label{con} In this section we generalize the results about congruence ideals of Terracini \cite{Lea}, considering modular forms with nontrivial nebentypus.\\ Let $\Delta_1$ be a set of primes, disjoint from $\ell$. By an abuse of notation, we shall sometimes denote by $\Delta_1$ also the product of the primes in this set.\\ As before, let $f$ be a newform in $S_2(\Gamma_0(N\Delta_1\ell^2),\psi)$, supercuspidal of type $\tau$ at $\ell$, and as a general hypothesis we assume that the residual representation $\overline\rho$ associated to $f$ occurs with type $\tau$ and minimal level.
\\ We observe that if $\overline\rho_\ell$ has the form as at p. 525 of \cite{CDT}, then $f$ satisfies the above hypothesis.\\ We assume that the character $\psi$ satisfies the conditions of section \ref{de}, that $\overline\rho$ is absolutely irreducible and that $\overline\rho_\ell$ has trivial centralizer.\\ Let $\Delta_2$ be a finite set of primes $p$, not dividing $\Delta_1\ell$, such that $p^2\not\equiv 1\ {\rm mod}\ \ell$ and ${\rm tr}(\overline\rho({\rm Frob}_p))^2\equiv\psi(p)(p+1)^2\ {\rm mod}\ \ell$. We let $\mathcal B_{\Delta_2}$ denote the set of newforms $h$ of weight 2, character $\psi$ and level dividing $N\Delta_1\Delta_2\ell$ which are special at $\Delta_1$, supercuspidal of type $\chi$ at $\ell$ and such that $\overline\rho_h=\overline\rho$. We choose an $\ell$-adic ring $\mathcal O$ with residue field $k$, sufficiently large, so that every representation $\rho_h$ for $h\in\mathcal B_{\Delta_2}$ is defined over $\mathcal O$ and ${\rm Im}(\psi)\subseteq\mathcal O$. For every pair of disjoint subsets $S_1, S_2$ of $\Delta_2$ we denote by $\mathcal R^\psi_{S_1,S_2}$ the universal solution over $\mathcal O$ for the deformation problem of $\overline\rho$ consisting of the deformations $\rho$ satisfying: \begin{itemize} \item[a)] $\rho$ is unramified outside $N\Delta_1 S_1S_2\ell$; \item[b)] if $p|\Delta_1N$ then $\rho(I_p)=\overline\rho(I_p)$; \item[c)] if $p|S_2$ then $\rho_p$ satisfies the sp-condition; \item[d)] $\rho_\ell$ is weakly of type $\tau$; \item[e)] ${\rm det}(\rho)=\epsilon\psi$ where $\epsilon:G_{\bf Q}\to{\bf Z}^\times_\ell$ is the cyclotomic character.
\end{itemize} Let $\mathcal B_{S_1,S_2}$ be the set of newforms in $\mathcal B_{\Delta_2}$ of level dividing $N\Delta_1S_1S_2\ell$ which are special at $S_2$ and let ${\bf T}^\psi_{S_1,S_2}$ be the sub-$\mathcal O$-algebra of $\prod_{h\in\mathcal B_{S_1,S_2}}\mathcal O$ generated by the elements $\widetilde T_p=(a_p(h))_{h\in\mathcal B_{S_1,S_2}}$ for $p$ not in $\Delta_1\cup S_1\cup S_2\cup\{\ell\}$. Since $\mathcal R^\psi_{S_1,S_2}$ is generated by traces, we know that there exists a surjective homomorphism of $\mathcal O$-algebras $\mathcal R^\psi_{S_1,S_2}\to{\bf T}^\psi_{S_1,S_2}$. Moreover, by the results obtained in section \ref{gen}, we have that $\mathcal R^\psi_{S_1,\emptyset}\to{\bf T}^\psi_{S_1,\emptyset}$ is an isomorphism of complete intersections, for any subset $S_1$ of $\Delta_2$.\\ If $\Delta_1\not=1$ then each ${\bf T}^\psi_{\emptyset,S_2}$ acts on a local component of the cohomology of a suitable Shimura curve, obtained by taking an indefinite quaternion algebra of discriminant $S_2\ell$ or $S_2\ell p$ for a prime $p$ in $\Delta_1$. Therefore, theorem \ref{goal} gives the following: \begin{corollary} Suppose that $\Delta_1\not=1$ and that $\mathcal B_{\emptyset,S_2}\not=\emptyset;$ then the map $$\mathcal R^\psi_{\emptyset,S_2}\to{\bf T}^\psi_{\emptyset,S_2}$$ is an isomorphism of complete intersections.
\end{corollary} If $p\in S_2$ there is a commutative diagram: $$\begin{array}{ccc} \mathcal R^\psi_{S_1p,S_2/p} & \to & \mathcal R^\psi_{S_1,S_2}\\ \downarrow & & \downarrow\\ {\bf T}^\psi_{S_1p,S_2/p} & \to & {\bf T}^\psi_{S_1,S_2} \end{array}$$ where all the arrows are surjections.\\ For every $p|\Delta_2$ the deformation over $\mathcal R^\psi_{\Delta_2,\emptyset}$ restricted to $G_p$ gives maps $$\mathcal R^{\psi}_p=\mathcal O[[X,Y]]/(XY)\to\mathcal R^\psi_{\Delta_2,\emptyset}.$$ The image $x_p$ of $X$ and the ideal $(y_p)$ generated by the image $y_p$ of $Y$ in $\mathcal R^\psi_{\Delta_2,\emptyset}$ do not depend on the choice of the map. By an abuse of notation, we shall also write $x_p,y_p$ for the images of $x_p,y_p$ in every quotient of $\mathcal R^\psi_{\Delta_2,\emptyset}$. If $h$ is a form in $\mathcal B_{\Delta_2,\emptyset}$, we denote by $x_p(h),y_p(h)\in\mathcal O$ the images of $x_p,y_p$ by the map $\mathcal R^\psi_{\Delta_2,\emptyset}\to\mathcal O$ corresponding to $\rho_h$.\\ \begin{lemma} If $h\in\mathcal B_{\Delta_2}$ and $p|\Delta_2$, then: \begin{itemize} \item[a)] $x_p(h)=0$ if and only if $h$ is special at $p$; \item[b)] if $h$ is unramified at $p$ then $(x_p(h))=(a_p(h)^2-\psi(p)(p+1)^2)$; \item[c)] $y_p(h)=0$ if and only if $h$ is unramified at $p$; \item[d)] if $h$ is special at $p$, the order at $(\lambda)$ of $y_p(h)$ is the greatest positive integer $n$ such that $\rho_h(I_p)\equiv\sqrt{\psi(p)}\otimes 1\ {\rm mod}\ \lambda^n.$ \end{itemize} \end{lemma} \begin{proof} It is an immediate consequence of the definition of the sp-condition (proof of lemma \ref{versal}).
Statement b) follows from the fact that $$a_p(h)={\rm tr}(\rho_h({\rm Frob}_p))$$ and $$\rho_h({\rm Frob}_p)=\left( \begin{array} [c]{cc} \pm p\sqrt{\psi(p)}+x_p(h) & 0\\ 0 & p\psi(p)/(\pm p\sqrt{\psi(p)}+x_p(h)) \end{array} \right);$$ a direct expansion then shows that $a_p(h)^2-\psi(p)(p+1)^2=x_p(h)\,u$ for a unit $u\in\mathcal O$ (here one uses $p^2\not\equiv 1\ {\rm mod}\ \ell$), which gives the equality of ideals in b). \end{proof} \noindent In particular $(y_p)$ is the kernel of the map $\mathcal R^\psi_{S_1,S_2}\to\mathcal R^\psi_{S_1/p,S_2}$. \noindent If $h\in\mathcal B_{S_1,S_2}$ let $\theta_{h,S_1,S_2}:{\bf T}^\psi_{S_1,S_2}\to\mathcal O$ be the character corresponding to $h$. \noindent We consider the congruence ideal of $h$ relative to $\mathcal B_{S_1,S_2}$: $$\eta_{h,S_1,S_2}=\theta_{h,S_1,S_2}(Ann_{{\bf T}^\psi_{S_1,S_2}}({\rm ker}\ \theta_{h,S_1,S_2})).$$ \noindent It is known that $\eta_{h,S_1,S_2}$ controls congruences between $h$ and linear combinations of forms different from $h$ in $\mathcal B_{S_1,S_2}$. \begin{theorem}\label{cong} Suppose $\Delta_1\not=1$ and $\Delta_2$ as above. Then \begin{itemize} \item[a)] $\mathcal B_{\emptyset,\Delta_2}\not=\emptyset$; \item[b)] for every subset $S\subseteq\Delta_2$, the map $\mathcal R^\psi_{S,\Delta_2/S}\to{\bf T}^\psi_{S,\Delta_2/S}$ is an isomorphism of complete intersections; \item[c)] for every $h\in\mathcal B_{\emptyset,\Delta_2}$, $\eta_{h,S,\Delta_2/S}=(\prod_{p|S}y_p(h))\eta_{h,\emptyset,\Delta_2}.$ \end{itemize} \end{theorem} \noindent The proof of this theorem is essentially the same as in \cite{Lea}.\\ If we combine point c) of theorem \ref{cong} with the results in Section 5.5 of \cite{CDT}, we obtain: \begin{corollary} If $h\in\mathcal B_{S_1,S_2},$ then \begin{displaymath} \eta_{h,\Delta_2,\emptyset}=\prod_{p|\frac{\Delta_2}{S_1S_2}}x_p(h)\prod_{p|S_2}y_p(h)\eta_{h,S_1,S_2}.
\end{displaymath} \end{corollary} In particular, from this corollary we deduce the following theorem: \begin{theorem} Let $f=\sum a_nq^n$ be a normalized newform in $S_2(\Gamma_0(M\ell^2),\psi)$, supercuspidal of type $\tau=\chi\oplus\chi^\sigma$ at $\ell$, with minimal level and special at the primes in a finite set $\Delta'$, and let $q$ be a prime such that $(q,M\ell^2)=1$ and $q\not\equiv-1\ {\rm mod}\ \ell$. Then there exists $g\in S_2(\Gamma_0(qM\ell^2),\psi)$, supercuspidal of type $\tau$ at $\ell$ and special at every prime $p|\Delta'$, such that $f\equiv g\ {\rm mod}\ \lambda$ if and only if $$a_q^2\equiv\psi(q)(1+q)^2\ {\rm mod}\ \lambda.$$ \end{theorem} \noindent The problem of removing the hypothesis of minimal level is still open; it could be solved by proving conjecture \ref{noscon}. \section{Problem: extension of results to the non-minimal case} Let $\ell\geq 2$ be a prime number. Let $\Delta'$ be a product of an odd number of primes, different from $\ell$. We put $\Delta=\Delta'\ell$. Let $B$ be the indefinite quaternion algebra over ${\bf Q}$ of discriminant $\Delta$. Let $R$ be a maximal order in $B$.\\ Let $N$ be a positive integer. We observe that in our deformation problem, in section \ref{de}, we have assumed that the representation $\overline\rho$ associated to $f\in S_2(\Gamma_0(N\Delta\ell),\psi)$ occurs with type $\tau$ and minimal level at $N$ (not necessarily at $\Delta'$). Let now $S$ be a finite set of rational primes not dividing $M\ell$, where $M=N\Delta'$, and fix a modular newform $f\in S_2(\Gamma_0(N\Delta\ell S),\psi)$ of weight 2 and level $N\Delta\ell S$, supercuspidal of type $\tau$ at $\ell$, special at the primes dividing $\Delta'$ and with nebentypus $\psi$. Let $\rho$ be the Galois representation associated to $f$ and let $\overline\rho$ be its reduction modulo $\ell$; we suppose that conditions (\ref{con1}), (\ref{cond2}), (\ref{rara}), (\ref{end}) and (\ref{c3}) hold.
\noindent We denote by $\Delta_1$ the product of the primes $p|\Delta'$ such that $\overline\rho(I_p)\not=1$ and by $\Delta_2$ the product of the primes $p|\Delta'$ such that $\overline\rho(I_p)=1$. As usual we assume that $p^2\not\equiv 1\ {\rm mod}\ \ell$ if $p|\Delta_2$. We say that the representation $\rho$ is of {\bf type $(S,{\rm sp},\tau,\psi)$} if conditions b), c), d), e) of definition \ref{def} hold and \begin{itemize} \item[a)] $\rho$ is unramified outside $M\ell S$. \end{itemize} \noindent This is a deformation condition. Let $\mathcal R^\psi_S$ be the universal deformation ring which parametrizes representations of type $(S,{\rm sp},\tau,\psi)$ with residual representation $\overline\rho$ and let ${\bf T}^\psi_S$ be the Hecke algebra acting on the space of modular forms of type $(S,{\rm sp},\tau,\psi)$.\\ Since the dimensions of the Selmer groups do not satisfy the control conditions, it is not possible to construct a Taylor-Wiles system by considering a deformation problem of type $(S,{\rm sp},\tau,\psi)$ with $S\not=\emptyset$, as in section \ref{tw}; a still open problem is to prove theorem \ref{goal} for $\mathcal R^\psi_S$, ${\bf T}^\psi_S$ and ${\mathcal M^\psi}_S$. If $S=\emptyset$ then we recover theorem \ref{goal}. A possible approach to this problem is to use, as in the classical case, the results of De Smit, Rubin, Schoof \cite{DRS} and Diamond \cite{Dia}, by induction on the cardinality of $S$. Assuming inductively that the result is true for $S$, we have to verify that it holds for $S'=S\cup\{q\}$, where $q$ is a prime number not dividing $M\ell S$.
Following the literature, the principal ingredients needed to prove the inductive step are: \begin{itemize} \item a duality result about $\mathcal M^\psi_S$ (to appear), showing the existence of a perfect pairing $\mathcal M^\psi_S\times \mathcal M^\psi_S\to\mathcal O$ which induces an isomorphism $\mathcal M^\psi_S\to{\rm Hom}_\mathcal O(\mathcal M^\psi_S,\mathcal O)$ of ${\bf T}^\psi_S$-modules; \item conjecture \ref{noscon}, saying that the natural ${\bf T}^\psi_{S'}$-injection $\left(\mathcal M^\psi_S\right)^2\hookrightarrow \mathcal M^\psi_{S'}$ remains injective when tensored with $k$. \end{itemize} For simplicity, we shall assume that $q^2\not\equiv 1\ {\rm mod}\ \ell$ for every $q\in S$. \noindent Then, as observed in the previous section, there is an isomorphism: $$\frac{\mathcal R^\psi_{S'}}{(y_q)}\ \widetilde\to\ \mathcal R^\psi_{S}$$ that induces an isomorphism on the Hecke algebras: $$\frac{{\bf T}^\psi_{S'}}{(y_q)_{{\bf T}^\psi_{S'}}}\ \widetilde\to\ {\bf T}^\psi_{S}$$ where $(y_q)_{{\bf T}^\psi_{S'}}$ is the image of $(y_q)$ in ${\bf T}^\psi_{S'}$; but, for a general $S$, we have no information about $\mathcal M^\psi_S=H^1({\bf X}_1(NS),\mathcal O)_{\mathfrak m_S}^{\widehat\psi}$ as a ${\bf T}_S^{\widehat\psi}$-module, where $\mathfrak m_S$ is the inverse image of $\mathfrak m$ with respect to the natural map ${\bf T}_S^{\widehat\psi}\to {\bf T}_\emptyset^{\widehat\psi}$. \end{document}
\begin{document} \title{Nonexistence of tight spherical design of harmonic index $4$} \author{Takayuki Okuda and Wei-Hsuan Yu} \subjclass[2010]{Primary 52C35; Secondary 14N20, 90C22, 90C05} \keywords{spherical design, equiangular lines} \address{ Department of Mathematics, Graduate School of Science, Hiroshima University 1-3-1 Kagamiyama, Higashi-Hiroshima, 739-8526 Japan} \email{[email protected]} \address{Department of Mathematics, Michigan State University, 619 Red Cedar Road, East Lansing, MI 48824} \email{[email protected]} \thanks{The first author is supported by Grant-in-Aid for JSPS Fellow No.25-6095 and the second author is supported in part by NSF grants CCF1217894, DMS1101697} \date{} \maketitle \begin{abstract} We give a new upper bound of the cardinality of a set of equiangular lines in $\mathbb{R}^n$ with a fixed angle $\theta$ for each $(n,\theta)$ satisfying certain conditions. Our techniques are based on semi-definite programming methods for spherical codes introduced by Bachoc--Vallentin [J.~Amer.~Math.~Soc.~2008]. As a corollary to our bound, we show the nonexistence of spherical tight designs of harmonic index $4$ on $S^{n-1}$ with $n \geq 3$. \end{abstract} \section{Introduction} The purpose of this paper is to give a new upper bound of the cardinality of a set of equiangular lines with certain angles (see Theorem \ref{thm:rel}). As a corollary to our bound, we show the nonexistence of tight designs of harmonic index $4$ on $S^{n-1}$ with $n \geq 3$ (see Theorem \ref{thm:nonex-tight}). Throughout this paper, $S^{n-1} := \{ x \in \mathbb{R}^n \mid \| x \| = 1 \}$ denotes the unit sphere in $\mathbb{R}^{n}$. By Bannai--Okuda--Tagami \cite{ban13}, a finite subset $X$ of $S^{n-1}$ is called \emph{a spherical design of harmonic index $t$ on $S^{n-1}$} (or shortly, \emph{a harmonic index $t$-design on $S^{n-1}$}) if $\sum_{{\bold x} \in X} f({\bold x}) = 0$ for any harmonic polynomial function $f$ on $\mathbb{R}^{n}$ of degree $t$. 
Our concern in this paper is with tight harmonic index $4$-designs. A harmonic index $t$-design $X$ is said to be \emph{tight} if $X$ attains the lower bound given by \cite[Theorem 1.2]{ban13}. In particular, for $t = 4$, a harmonic index $4$-design on $S^{n-1}$ is tight if its cardinality is $(n+1)(n+2)/6$. For $n=2$, we can construct tight harmonic index $4$-designs as two points ${\bold x}$ and ${\bold y}$ on $S^{1}$ with the inner-product $\langle {\bold x},{\bold y} \rangle_{\mathbb{R}^2} = \pm \sqrt{1/2}$. The paper \cite[Theorem 4.2]{ban13} showed that if tight harmonic index $4$-designs on $S^{n-1}$ exist, then $n$ must be $2$ or $3(2k-1)^2 -4$ for some integer $k \geq 3$. As a main result of this paper, we show that the latter cases do not occur. That is, the following theorem holds: \begin{thm}\label{thm:nonex-tight} For each $n \geq 3$, no tight spherical design of harmonic index $4$ on $S^{n-1}$ exists. \end{thm} A set of lines in $\mathbb{R}^n$ is called \emph{an equiangular line system} if the angle between each pair of lines is constant. By definition, an equiangular line system can be considered as a spherical two-distance set with the inner product set $\{ \pm \cos \theta \}$ for some constant $\theta$. This constant $\theta$ is called \emph{the common angle} of the equiangular line system. Recent developments on this topic can be found in \cite{barg14, grea14}. By \cite[Proposition 4.2]{ban13}, any tight harmonic index $4$-design on $S^{n-1}$ can be considered as an equiangular line system with the common angle $\arccos \sqrt{3/(n+4)}$. The proof of Theorem \ref{thm:nonex-tight} will be reduced to a new relative upper bound (see Theorem \ref{thm:rel}) for the cardinalities of equiangular line systems with a fixed common angle. Note that in some cases, our relative bound is better than the Lemmens--Seidel relative bound (see Section \ref{sec:main} for more details).
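A quick numerical check of the $n=2$ construction above (our own illustration, not part of the paper): on $S^1$ the degree-$4$ harmonics are spanned by $\cos 4\theta$ and $\sin 4\theta$, and two unit vectors with inner product $\pm\sqrt{1/2}$ are separated by an angle of $\pi/4$ or $3\pi/4$, so both harmonics sum to zero over such a pair.

```python
import math

def harmonic4_sums(theta, delta):
    """Sum the degree-4 harmonics cos(4t) and sin(4t) over the pair of
    points {theta, theta + delta} on the unit circle S^1."""
    pts = [theta, theta + delta]
    return (sum(math.cos(4 * t) for t in pts),
            sum(math.sin(4 * t) for t in pts))

# inner product +sqrt(1/2) <-> angle gap pi/4; -sqrt(1/2) <-> gap 3*pi/4.
# Either pair annihilates every harmonic of degree 4, hence forms a
# harmonic index 4-design of cardinality 2 = (n+1)(n+2)/6 for n = 2.
for delta in (math.pi / 4, 3 * math.pi / 4):
    for theta in (0.0, 0.3, 1.234):
        c, s = harmonic4_sums(theta, delta)
        assert abs(c) < 1e-12 and abs(s) < 1e-12
```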
The paper is organized as follows: In Section \ref{sec:main}, as a main theorem of this paper, we give a new relative bound for the cardinalities of equiangular line systems with a fixed common angle satisfying certain conditions. Theorem \ref{thm:nonex-tight} then follows as a corollary to our relative bound. In Section \ref{sec:proof}, our relative bound is proved based on the method by Bachoc--Vallentin \cite{bac08a}. \section{Main results}\label{sec:main} In this paper, we denote by $M(n)$ and $M_{\cos \theta}(n)$ the maximum number of equiangular lines in $\mathbb{R}^n$ and that with the fixed common angle $\theta$, respectively. By definition, \[ M(n) = \sup_{0 \leq \alpha < 1} M_\alpha(n). \] The important problems for equiangular lines are to give upper and lower estimates of $M(n)$ or of $M_\alpha(n)$ for fixed $\alpha$. One can find a summary of recent progress on this topic in \cite{barg14, grea14}. Let us fix $0 \leq \alpha < 1$. Then for a finite subset $X$ of $S^{n-1}$ with $I(X) \subset \{ \pm \alpha \}$, we can easily find an equiangular line system with the common angle $\arccos \alpha$ and the cardinality $|X|$, where \[ I(X) := \{ \langle {\bold x},{\bold y} \rangle_{\mathbb{R}^n} \mid {\bold x},{\bold y} \in X \text{ with } {\bold x} \neq {\bold y} \} \] is the set of inner-product values of distinct vectors in $X \subset S^{n-1} \subset \mathbb{R}^n$. The converse is also true. In particular, we have \begin{align*} M_{\alpha}(n) = \max \{ |X| \mid X \subset S^{n-1} \text{ with } I(X) \subset \{ \pm \alpha \} \}, \end{align*} and therefore, our problem can be considered as a problem in special kinds of spherical two-distance sets. In this paper, we are interested in upper estimates of $M_\alpha(n)$. According to \cite{lem73}, Gerzon gave the upper bound on $M(n)$ as $M(n) \leq n(n+1)/2$ and therefore, we have \[ M_\alpha(n) \leq \frac{n(n+1)}{2} \quad \text{ for any } \alpha. \] This upper bound is called the Gerzon absolute bound.
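As a concrete illustration of these definitions (ours, not from the paper): the six diagonals of the regular icosahedron form an equiangular line system in $\mathbb{R}^3$ with common angle $\arccos(1/\sqrt{5})$, so $M_{1/\sqrt{5}}(3)\geq 6$, which matches the Gerzon bound $n(n+1)/2$ for $n=3$.

```python
import itertools, math

phi = (1 + math.sqrt(5)) / 2  # golden ratio
# one vertex from each antipodal pair of the icosahedron (0, +-1, +-phi) etc.
raw = [(0, 1, phi), (0, 1, -phi), (1, phi, 0),
       (1, -phi, 0), (phi, 0, 1), (-phi, 0, 1)]
norm = math.sqrt(1 + phi ** 2)
X = [tuple(c / norm for c in v) for v in raw]

alpha = 1 / math.sqrt(5)
for u, v in itertools.combinations(X, 2):
    ip = sum(a * b for a, b in zip(u, v))
    # every pair of distinct lines meets at the common angle arccos(1/sqrt(5))
    assert abs(abs(ip) - alpha) < 1e-12

assert len(X) == 3 * 4 // 2  # Gerzon bound n(n+1)/2 with n = 3
```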
Lemmens and Seidel \cite{lem73} showed that \begin{equation*} M_\alpha(n) \le \frac {n(1-\alpha^2)}{1-n\alpha^2} \quad \text{ in the cases where } 1-n\alpha^2 > 0. \end{equation*} This inequality is sometimes called the Lemmens--Seidel relative bound as opposed to the Gerzon absolute bound. As a main theorem of this paper, we give other upper estimates of $M_\alpha(n)$ as follows: \begin{thm}\label{thm:rel} Let us take $n \geq 3$ and $\alpha \in (0,1)$ with \[ 2-\frac{6\alpha-3}{\alpha^2} < n < 2 + \frac{6\alpha+3}{\alpha^2}. \] Then \[ M_\alpha(n) \leq 2 + \frac{(n-2)}{\alpha} \max \left\{ \frac{(1-\alpha)^3}{(n-2)\alpha^2 +6\alpha-3}, \frac{(1+\alpha)^3}{-(n-2)\alpha^2 + 6\alpha+3} \right\}. \] In particular, for an integer $l \geq 2$, if \[ 3l^2-6l+2 < n < 3l^2 + 6l+2 \] then \[ M_{1/l}(n) \leq 2 + (n-2) \max \left\{ \frac{(l-1)^3}{-3l^2 + 6l + (n-2)}, \frac{(l+1)^3}{3l^2 + 6l -(n-2)} \right\}. \] \end{thm} Recall that by \cite[Proposition 4.2, Theorem 4.2]{ban13} for $n \geq 3$, if there exists a tight harmonic index $4$-design $X$ on $S^{n-1}$, then $n = 3(2k-1)^2-4$ for some $k \geq 3$ and \begin{align*} M_{\sqrt{3/(n+4)}}(n) = (n+1)(n+2)/6. \end{align*} However, as a corollary to Theorem \ref{thm:rel}, we have the following upper bound of $M_{\sqrt{3/(n+4)}}(n)$ and obtain Theorem \ref{thm:nonex-tight}. \begin{cor}\label{cor:tight4-eq} Let us put $n_k := 3(2k-1)^2-4$ and $\alpha_k := \sqrt{3/(n_k+4)} = 1/(2k-1)$. Then for each integer $k \geq 2$, \begin{align*} M_{\alpha_k}(n_k) &\leq 2 (k-1) (4k^3-k-1) (< (n_k+1)(n_k+2)/6). \end{align*} \end{cor} It should be remarked that in the setting of Corollary \ref{cor:tight4-eq}, the Lemmens--Seidel relative bound does not work since \[ 1-n_k \alpha_k^2 = -2(4 k^2-4k-1)/(2k-1)^2 < 0 \] and our bound is better than the Gerzon absolute bound. The proof of Theorem \ref{thm:rel} is given in Section \ref{sec:proof} based on Bachoc--Vallentin's SDP method for spherical codes \cite{bac08a}. 
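The closed form in Corollary \ref{cor:tight4-eq} can be verified against the second bound of Theorem \ref{thm:rel} by exact arithmetic (a sketch of ours): with $l=2k-1$ and $n_k-2=3l^2-6$, the two denominators reduce to $6(l-1)$ and $6(l+1)$, the second term dominates, and the maximum simplifies to the stated polynomial in $k$.

```python
from fractions import Fraction

def rel_bound(n, l):
    """Bound of the theorem for M_{1/l}(n), valid when
    3l^2 - 6l + 2 < n < 3l^2 + 6l + 2."""
    assert 3 * l * l - 6 * l + 2 < n < 3 * l * l + 6 * l + 2
    t1 = Fraction((l - 1) ** 3, -3 * l * l + 6 * l + (n - 2))
    t2 = Fraction((l + 1) ** 3, 3 * l * l + 6 * l - (n - 2))
    return 2 + (n - 2) * max(t1, t2)

for k in range(2, 50):
    l = 2 * k - 1
    n = 3 * l * l - 4                                # n_k = 3(2k-1)^2 - 4
    b = rel_bound(n, l)
    assert b == 2 * (k - 1) * (4 * k ** 3 - k - 1)   # closed form of the corollary
    assert b < Fraction((n + 1) * (n + 2), 6)        # below the tight-design cardinality
```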
The origins of applications of the linear programming method in coding theory can be traced back to the work of Delsarte \cite{del73}. Applications of the semidefinite programming (SDP) method in coding theory and distance geometry gained momentum after the pioneering work of Schrijver \cite{Schrijver05code} that derived SDP bounds on codes in the Hamming and Johnson spaces. Schrijver's approach was based on the so-called Terwilliger algebra of the association scheme. A similar idea for spherical codes was formulated by Bachoc and Vallentin \cite{bac08a} for the kissing number problem. Barg and Yu \cite{barg13} modified it to determine the maximum size of spherical two-distance sets in $\mathbb{R}^n$ for most values of $n \leq 93$. In our proof, we restrict the method to obtain upper bounds for equiangular line sets. We can see in \cite{GST06upperbounds,Schrijver05code} and \cite{bac08a, bac09opti, bac09sdp, Musin08bounds} that the SDP method works well for studying binary codes and spherical codes, respectively. In particular, for equiangular lines, Barg and Yu \cite{barg14} give the best known upper bounds on $M(n)$ for some $n$ with $n \leq 136$ by the SDP method. Our bounds in Corollary \ref{cor:tight4-eq} are the same as those of \cite{barg14} in low dimensions. However, in some cases, software is needed to complete the SDP method. It should be emphasized that our theorem offers upper bounds on $M_{\alpha_k}(n_k)$ for arbitrarily large $n_k$, and the proof can be carried out by hand calculations without using any convex optimization software. \section{Proof of our relative bound}\label{sec:proof} To prove Theorem \ref{thm:rel}, we apply Bachoc--Vallentin's SDP method for spherical codes in \cite{bac08a} to spherical two-distance sets. The explicit statement of it was given by Barg--Yu \cite{barg13}. We use symbols $P^{n}_l(u)$ and $S^n_l(u,v,t)$ in the sense of \cite{bac08a}.
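For readers who wish to evaluate the linear conditions below by hand, the polynomials $P^n_l$ are the Gegenbauer polynomials with parameter $(n-2)/2$, normalized so that $P^n_l(1)=1$. A small numerical sketch of ours, using the standard three-term Gegenbauer recurrence (the normalization is our reading of the Bachoc--Vallentin convention):

```python
import math

def gegenbauer_normalized(n, l, u):
    """P^n_l(u): Gegenbauer polynomial with parameter lam = (n-2)/2,
    normalized so that P^n_l(1) = 1."""
    if l == 0:
        return 1.0
    lam = (n - 2) / 2
    c_prev, c_cur = 1.0, 2 * lam * u          # C_0, C_1 (unnormalized)
    c1_prev, c1_cur = 1.0, 2 * lam            # same recurrence at u = 1
    for k in range(2, l + 1):
        c_prev, c_cur = c_cur, (2 * (k + lam - 1) * u * c_cur
                                - (k + 2 * lam - 2) * c_prev) / k
        c1_prev, c1_cur = c1_cur, (2 * (k + lam - 1) * c1_cur
                                   - (k + 2 * lam - 2) * c1_prev) / k
    return c_cur / c1_cur

# sanity checks: P^n_2(u) = (n u^2 - 1)/(n - 1), and for n = 3 these are
# the Legendre polynomials, e.g. P_3(u) = (5u^3 - 3u)/2
for n in (3, 5, 23):
    for u in (-0.7, 0.0, 0.4, 1.0):
        assert abs(gegenbauer_normalized(n, 2, u) - (n * u * u - 1) / (n - 1)) < 1e-12
assert abs(gegenbauer_normalized(3, 3, 0.4) - (5 * 0.4 ** 3 - 3 * 0.4) / 2) < 1e-12
```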
It should be noted that the definition of $S^{n}_l(u,v,t)$ is different from that of \cite{bac09opti} and \cite{barg13} (see also \cite[Remark 3.4]{bac08a} for these differences). In order to state it, we define \begin{align*} W(x)&:= \begin{pmatrix}1&0\\0&0\end{pmatrix} + \begin{pmatrix}0&1\\1&1\end{pmatrix} (x_1+x_2)/3 + \begin{pmatrix} 0&0\\0&1 \end{pmatrix} (x_3+x_4+x_5+x_6), \\ S^{n}_l(x;\alpha,\beta) &:= S^{n}_{l}(1,1,1)+S^{n}_l(\alpha,\alpha,1)x_1 + S^{n}_l(\beta,\beta,1) x_2 + S^{n}_l(\alpha,\alpha,\alpha) x_3 \\ & \quad \quad + S^{n}_l(\alpha,\alpha,\beta) x_4 + S^{n}_l(\alpha,\beta,\beta) x_5 + S^{n}_l(\beta,\beta,\beta) x_6 \end{align*} for each $x = (x_1,x_2,x_3,x_4,x_5,x_6) \in \mathbb{R}^6$ and $\alpha, \beta \in [-1,1)$. We remark that $W(x)$ is a symmetric matrix of size $2$ and $S^{n}_{l}(x;\alpha,\beta)$ is a symmetric matrix of infinite size indexed by $\{ (i,j) \mid i,j = 0,1,2,\dots \}$. \begin{fact}[Bachoc--Vallentin \cite{bac08a} and Barg--Yu \cite{barg13}]\label{fact:SDP-problem} Let us fix $\alpha, \beta \in [-1,1)$. Then any finite subset $X$ of $S^{n-1}$ with $I(X) \subset \{ \alpha,\beta \}$ satisfies \[ |X| \leq \max \{ 1 + (x_1 + x_2)/3 \mid x = (x_1,\dots,x_6) \in \Omega^{n}_{\alpha,\beta} \} \] where the subset $\Omega^{n}_{\alpha,\beta}$ of $\mathbb{R}^{6}$ is defined by \[ \Omega_{\alpha,\beta}^{n} := \{\, x = (x_1,\dots,x_6) \in \mathbb{R}^{6} \mid \text{ $x$ satisfies the following four conditions } \,\}. \] \begin{enumerate} \item $x_i \geq 0$ for each $i = 1,\dots,6$. \item $W(x)$ is positive semi-definite.
\item $3 + P^{n}_l(\alpha) x_1 + P^{n}_l (\beta) x_2 \geq 0$ for each $l=1,2,\dots.$ \item Any finite principal minor of $S^{n}_l(x;\alpha,\beta)$ is positive semi-definite for each $l = 0,1,2,\dots.$ \end{enumerate} \end{fact} To prove Theorem \ref{thm:rel}, we use the following ``linear version'' of Fact \ref{fact:SDP-problem}: \begin{cor}\label{cor:triangleLP} In the same setting as Fact $\ref{fact:SDP-problem}$, \[ |X| \leq \max \{ 1 + (x_1 + x_2)/3 \mid x = (x_1,\dots,x_6) \in \widetilde{\Omega}^{n}_{\alpha,\beta} \} \] where the subset $\widetilde{\Omega}^{n}_{\alpha,\beta}$ of $\mathbb{R}^{6}$ is defined by \[ \widetilde{\Omega}_{\alpha,\beta}^{n} := \{\, x = (x_1,\dots,x_6) \in \mathbb{R}^{6} \mid \text{ $x$ satisfies the following three conditions } \,\}. \] \begin{enumerate} \item $x_i \geq 0$ for each $i = 1,\dots,6$. \item $\det W(x) \geq 0$. \item $(S^{n}_l)_{i,i}(x;\alpha,\beta) \geq 0$ for each $l,i = 0,1,2,\dots$, where $(S^{n}_l)_{i,i}(x;\alpha,\beta)$ is the $(i,i)$-entry of the matrix $S^{n}_l(x;\alpha,\beta)$. \end{enumerate} \end{cor} By Corollary \ref{cor:triangleLP}, the proof of Theorem \ref{thm:rel} can be reduced to showing the following proposition: \begin{prop}\label{prop:SDPrel} Let $n \geq 3$ and $0 < \alpha <1$. Then the following hold: \begin{enumerate} \item \begin{align*} \max \{ 1+(x_1+x_2)/3 \mid x \in \widetilde{\Omega}^{n}_{\alpha,-\alpha} \} \leq 2 + (n-2)\frac{(1-\alpha)^3}{\alpha((n-2)\alpha^2 +6\alpha-3)} \end{align*} if $(1-\alpha)^3(-(n-2)\alpha^2 + 6\alpha+3) \geq (1+\alpha)^3((n-2)\alpha^2 +6\alpha-3) \geq 0$. \item \begin{align*} \max \{ 1+(x_1+x_2)/3 \mid x \in \widetilde{\Omega}^{n}_{\alpha,-\alpha} \} \leq 2 + (n-2) \frac{(1+\alpha)^3}{\alpha(-(n-2)\alpha^2 + 6\alpha+3)} \end{align*} if $(1+\alpha)^3((n-2)\alpha^2 +6\alpha-3) \geq (1-\alpha)^3(-(n-2)\alpha^2 + 6\alpha+3) \geq 0$.
\end{enumerate} \end{prop} For the proof of Proposition \ref{prop:SDPrel}, we need the following explicit formulas for $(S^{n}_3)_{1,1}$, which are obtained by direct computation: \begin{lem}\label{lem:S3explicite} For each $-1 < \alpha < 1$, \begin{align*} (S^{n}_3)_{1,1}(1,1,1) &= 0 \\ (S^{n}_3)_{1,1}(\alpha,\alpha,1) &= \frac{n(n+2)(n+4)(n+6)}{3(n-1)(n+1)(n+3)} \alpha^2 (1-\alpha^2)^3 \\ (S^{n}_3)_{1,1}(\alpha,\alpha,\alpha) &= -\frac{n(n+2)(n+4)(n+6)}{(n-2)(n-1)(n+1)(n+3)} (\alpha-1)^3 \alpha^3 ((n-2)\alpha^2-6\alpha-3) \\ (S^{n}_3)_{1,1}(\alpha,\alpha,-\alpha) &= -\frac{n(n+2)(n+4)(n+6)}{(n-2)(n-1)(n+1)(n+3)} \alpha^3 (\alpha+1)^3 ((n-2)\alpha^2 +6\alpha-3). \end{align*} \end{lem} \begin{proof}[Proof of Proposition $\ref{prop:SDPrel}$] Fix $\alpha$ with $0 < \alpha < 1$ and take any $x \in \widetilde{\Omega}^{n}_{\alpha,-\alpha}$. For simplicity we put $X = (x_1+x_2)/3$, $Y = x_3+x_5$ and $Z = x_4+x_6$. By computing $\det W(x)$, we have \begin{align} -X(X-1)+Y+Z \geq 0. \label{eq:Wrel} \end{align} Furthermore, we have $(S^{n}_3)_{1,1}(x;\alpha,-\alpha) \geq 0$, and hence, by Lemma \ref{lem:S3explicite}, \begin{multline*} (n-2) \frac{(1-\alpha^2)^3}{\alpha} X - (1-\alpha)^3 (-(n-2)\alpha^2 + 6\alpha+3) Y \\ - (1+\alpha)^3 ((n-2)\alpha^2 +6\alpha-3) Z \geq 0. \end{multline*} Therefore, in the cases where \[ (1-\alpha)^3(-(n-2)\alpha^2 + 6\alpha+3) \geq (1+\alpha)^3((n-2)\alpha^2 +6\alpha-3) \geq 0, \] we obtain \begin{align*} (n-2) \frac{(1-\alpha^2)^3}{\alpha} X - (1+\alpha)^3 ((n-2)\alpha^2 +6\alpha-3) (Y+Z) \geq 0. \end{align*} By combining with \eqref{eq:Wrel}, \begin{align*} (n-2) \frac{(1-\alpha^2)^3}{\alpha} X - (1+\alpha)^3 ((n-2)\alpha^2 +6\alpha-3) X(X-1) \geq 0. \end{align*} Thus we have \[ 2 + (n-2) \frac{(1-\alpha)^3}{\alpha((n-2)\alpha^2 +6\alpha-3)} \geq X+1 = 1 + (x_1+x_2)/3.
\] By similar arguments, in the cases where \[ (1+\alpha)^3((n-2)\alpha^2 +6\alpha-3) \geq (1-\alpha)^3(-(n-2)\alpha^2 + 6\alpha+3) \geq 0, \] we have \[ 2 + (n-2) \frac{(1+\alpha)^3}{\alpha(-(n-2)\alpha^2 +6\alpha+3)} \geq X+1 = 1 + (x_1+x_2)/3. \] \end{proof} \begin{rem} Harmonic index $4$-designs are defined by using the functional space $\mathop{\mathrm{Harm}}\nolimits_{4}(S^{n-1})$. Therefore, it seems natural to consider $\mathop{\mathrm{Harm}}\nolimits_4(S^{n-1})$ in Bachoc--Vallentin's SDP method. In our proof, the functional space \[ H^{n-1}_{3,4} \subset \bigoplus_{m=0}^{4} H^{n-1}_{m,4} = \mathop{\mathrm{Harm}}\nolimits_{4}(S^{n-1}) \] $($see \cite{bac08a} for the notation of $H^{n-1}_{m,l}$$)$ plays an important role in showing the nonexistence of tight designs of harmonic index $4$ since $(S^n_3)_{1,1}$ comes from $H^{n-1}_{3,4}$. We checked that if we consider $H^{n-1}_{0,4} \oplus H^{n-1}_{1,4} \oplus H^{n-1}_{2,4} \oplus H^{n-1}_{4,4}$ instead of $H^{n-1}_{3,4}$, our upper bound cannot be obtained for small $k$. However, we cannot find any conceptual reason for the importance of $H^{n-1}_{3,4}$. \end{rem} \section*{Acknowledgements.} The authors would like to give heartfelt thanks to Eiichi Bannai, Alexander Barg and Makoto Tagami, whose suggestions and comments were of inestimable value for this paper. The authors also would like to thank Akihiro Munemasa, Hajime Tanaka and Ferenc Sz{\"o}ll{\H o}si for their valuable comments. \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\href}[2]{#2} \end{document}
\begin{document} \title{Multi-Agent Submodular Optimization} \author{Richard Santiago \and F. Bruce Shepherd} \institute{Richard Santiago \at McGill University, Montreal, Canada \\ \email{[email protected]} \and F. Bruce Shepherd \at University of British Columbia, Vancouver, Canada \\ \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} Recent years have seen many algorithmic advances in the area of submodular optimization: (SO) $\min/\max~f(S): S \in \mathcal{F}$, where $\mathcal{F}$ is a given family of feasible sets over a ground set $V$ and $f:2^V \rightarrow \mathbb{R}$ is submodular. This progress has been coupled with a wealth of new applications for these models. Our focus is on a more general class of {\em multi-agent submodular optimization} (MASO) $\min/\max \sum_{i=1}^{k} f_i(S_i): S_1 \uplus S_2 \uplus \cdots \uplus S_k \in \mathcal{F}$. Here we use $\uplus$ to denote disjoint union, and hence this model is attractive when resources are being allocated across $k$ agents, each with its own submodular cost function $f_i()$. This was introduced in the minimization setting by Goel et al. In this paper we explore the extent to which the approximability of the multi-agent problems is linked to their single-agent versions, referred to informally as the {\em multi-agent gap}. We present different reductions that transform a multi-agent problem into a single-agent one. For minimization, we show that (MASO) has an $O(\alpha \cdot \min \{k, \log^2 (n)\})$-approximation whenever (SO) admits an $\alpha$-approximation over the convex formulation. In addition, we discuss the class of ``bounded blocker'' families where there is a provably tight $O(\log n)$ multi-agent gap between (MASO) and (SO). For maximization, we show that monotone (resp. nonmonotone) (MASO) admits an $\alpha (1-1/e)$ (resp. $\alpha \cdot 0.385$) approximation whenever monotone (resp.
nonmonotone) (SO) admits an $\alpha$-approximation over the multilinear formulation; and the $1-1/e$ multi-agent gap for monotone objectives is tight. We also discuss several families (such as spanning trees, matroids, and $p$-systems) that have an (optimal) multi-agent gap of 1. These results substantially expand the family of tractable models for submodular maximization. \keywords{submodular optimization \and approximation algorithms \and multi-agent} \end{abstract} \section{Introduction} \label{sec:intro} A function $f:2^V \to \mathbb{R}$ is \emph{submodular} if $f(S) + f(T) \geq f(S \cup T) + f(S \cap T)$ for any $S,T \subseteq V$. We say that $f$ is \emph{monotone} if $f(S) \leq f(T)$ whenever $S \subseteq T$. Throughout, all submodular functions are nonnegative, and we usually assume that $f(\emptyset)=0$. Our functions are given by a \emph{value oracle}, where for a given set $S$ an algorithm can query the oracle to find its value $f(S)$. For a family $\mathcal{F}$ of feasible sets on a finite ground set $V$ we consider the following broad class of submodular optimization (SO) problems: \begin{equation} \label{eqn:SA} \mbox{SO($\mathcal{F}$) ~~~~Min~ / ~Max ~$f(S):S\in\mathcal{F}$} \end{equation} where $f$ is a nonnegative submodular set function on $V$. There has been an impressive recent stream of activity around these problems for a variety of set families $\mathcal{F}$. We explore the connections between these (single-agent) problems and their multi-agent incarnations. In the {\em multi-agent (MA)} version, we have $k$ agents, each of which has an associated nonnegative submodular set function $f_{i}$, $i \in [k]$. As before, we are looking for sets $S\in\mathcal{F}$; however, we now have a two-phase task: the elements of $S$ must also be partitioned amongst the agents. Hence we have set variables $S_{i}$ and seek to optimize $\sum_{i}f_{i}(S_{i})$.
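Before moving on, a standard concrete instance of the submodular inequality above (our toy illustration, with a hypothetical small instance): coverage functions $f(S)=|\bigcup_{i\in S}A_i|$ are monotone and submodular, which can be checked exhaustively on a small ground set.

```python
from itertools import chain, combinations

# hypothetical toy instance: f(S) = size of the union of the sets indexed by S;
# coverage functions are the canonical monotone submodular examples
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}, 4: {"a", "e"}}
V = list(sets)

def f(S):
    return len(set().union(*(sets[i] for i in S))) if S else 0

def subsets(ground):
    return chain.from_iterable(combinations(ground, r) for r in range(len(ground) + 1))

for S in subsets(V):
    for T in subsets(V):
        S_, T_ = set(S), set(T)
        # submodularity: f(S) + f(T) >= f(S u T) + f(S n T)
        assert f(S_) + f(T_) >= f(S_ | T_) + f(S_ & T_)
        # monotonicity: f(S) <= f(T) whenever S is a subset of T
        if S_ <= T_:
            assert f(S_) <= f(T_)
```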
This leads to the multi-agent submodular optimization (MASO) versions: \begin{equation} \label{eqn:MA} \mbox{MASO($\mathcal{F}$) ~~~~Min~ / ~Max ~$\sum_{i=1}^{k} f_{i}(S_{i}):S_{1}\uplus S_{2}\uplus \cdots\uplus S_{k}\in\mathcal{F}$.} \end{equation} \iffalse We mention that while we focus on a partition constraint (the notation ``$+$'' indicates disjoint union), in some settings, the problem is of interest even under the standard union constraint. \fi The special case when $\mathcal{F}=\{V\}$ has been previously examined both for minimization (the minimum submodular cost allocation problem \cite{hayrapetyan2005network,svitkina2010facility,ene2014hardness,chekuri2011submodular}) and maximization (submodular welfare problem \cite{lehmann2001combinatorial,vondrak2008optimal}). For general families $\mathcal{F}$, however, we are only aware of the development in Goel et al. \cite{goel2009approximability} for the minimization setting. A natural first question is whether any multi-agent problem could be directly reduced (or encoded) to a single-agent one over the same ground set $V$. Goel et al. give an explicit example where such a reduction does not exist. More emphatically, they show that when $\mathcal{F}$ consists of vertex covers in a graph, the {\em single-agent (SA)} version (i.e., (\ref{eqn:SA})) has a 2-approximation while the MA version has an inapproximability lower bound of $\Omega(\log n)$. Our first main objective is to explain the extent to which approximability for multi-agent problems is intrinsically connected to their single-agent versions, which we also refer to as the {\em primitive} associated with $\mathcal{F}$. We refer to the {\em multi-agent (MA) gap} as the approximation-factor loss incurred by moving to the MA setting. \iffalse Goel et al. show this is not the case. The following simple submodular-cost problem has a MA cost function which cannot be realized by a single submodular cost function over the same ground set $V$. 
There are three tasks $\{A,B,C\}$ and two agents $\{1,2\}$. Each agent can do one of the tasks at cost 1 each. In addition agent 1 can do tasks $\{A,B\}$ at cost of only 1, and $\{A,B,C\}$ at cost of only 2. Agent 2 can do $\{A,C\}$ at cost of only 1, and $\{A,B,C\}$ at cost of only 2. We can associate a cost $c(S)$ for any subset $S \subseteq \{A,B,C\}$ which corresponds to the best allocation of tasks in $S$ to the agents. This function $c(S)$ is not, however, submodular even though each agent's cost function was submodular. To see this, note that the marginal cost of adding $B$ to $\{A\}$ is 0 which is smaller than the marginal cost of adding it to $\{A,C\}$. \fi Our second objective is to extend the multi-agent model and show that in some cases this larger class remains tractable. Specifically, we define the {\em capacitated multi-agent submodular optimization (CMASO) problem} as follows: \begin{equation} \label{ma} \mbox{CMASO$(\mathcal{F})$}~~~~~~~ \begin{array}{rc} \max / \min & \sum_{i=1}^{k}f_{i}(S_{i}) \\ \mbox{s.t.} & S_{1}\uplus \cdots\uplus S_{k}\in\mathcal{F}\\ & S_{i}\in\mathcal{F}_{i}\,,\,\forall i\in[k] \end{array} \end{equation} \noindent where we are supplied with subfamilies $\mathcal{F}_i$. Many existing applications fit into this framework and some of these can be enriched through the added flexibility of the capacitated model. \iffalse For instance, one may include set bounds on the variables: $L_{i}\subseteq S_{i}\subseteq U_{i}$ for each $i$, or simple cardinality constraints: $|S_{i}| \leq b_{i}$ for each $i$. A well-studied (\cite{fleischer2006sap,goundan2007revisiting,calinescu2011maximizing}) application of CMASO in the maximization setting is the Separable Assignment Problem (SAP) , which corresponds to the setting where the objectives are taken to be modular, the $\mathcal{F}_i$ downward closed (i.e. hereditary) families, and $\mathcal{F}$ to be the trivial $2^V$. 
We believe the CMASO framework has considerable potential for future applications and \fi We illustrate this with concrete examples in Section~\ref{sec:applications}. Prior work in both the single and multi-agent settings is summarized in Section~\ref{sec:related work}. We present our main results next. \subsection{Our contributions} \label{sec:reduce} We first discuss the minimization side of MASO (i.e. (\ref{eqn:MA})). Here the work of \cite{ene2014hardness} showed that for general nonnegative submodular functions the problem is in fact inapproximable within any multiplicative factor even in the case where $\mathcal{F}=\{V\}$ and $k=3$ (since it is NP-Hard to decide whether the optimal value is zero). Hence we focus almost completely on nonnegative monotone submodular objectives $f_i$. In fact, even in the single-agent setting with a nonnegative monotone submodular function $f$, there exist a number of polynomial hardness results over fairly simple set families $\mathcal{F}$; examples include minimizing a submodular function subject to a cardinality constraint \cite{svitkina2011submodular} or over the family of spanning trees \cite{goel2009approximability}. We show, however, that if the SA primitive for a family $\mathcal{F}$ admits approximation via a natural convex relaxation (see Appendices~\ref{sec:blocking} and \ref{sec:relaxations}) based on the Lov\'asz extension, then we may extend this to its multi-agent version with a modest blow-up in the approximation factor. \begin{theorem} \label{thm:klog} Suppose there is a (polytime) $\alpha(n)$-approximation for monotone SO($\mathcal{F}$) minimization via the blocking convex relaxation. Then there is a (polytime) $O(\alpha(n) \cdot \min\{k, \log^2 (n)\})$-approximation for monotone MASO($\mathcal{F}$) minimization. 
\end{theorem} We remark that the $O(\log^2 (n))$ approximation loss due to having multiple agents (i.e.\ the MA gap) is in the right ballpark, since the vertex cover problem has a factor $2$-approximation for single-agent and a tight $O(\log n)$-approximation for the MA version \cite{goel2009approximability}. We also discuss how Goel et al.'s $O(\log n)$-approximation for MA vertex cover is a special case of a more general phenomenon. Their analysis only relies on the fact that the feasible family (or at least its upwards closure) has a {\em bounded blocker property}. Given a family $\mathcal{F}$, the {\em blocker} $\mathcal{B}(\mathcal{F})$ of $\mathcal{F}$ consists of the minimal sets $B$ such that $B \cap F \neq \emptyset$ for each $F \in \mathcal{F}$. We say that $\mathcal{B}(\mathcal{F})$ is {\em $\beta$-bounded} if $|B| \leq \beta$ for all $B \in \mathcal{B}(\mathcal{F})$. Families with bounded blockers have been previously studied in the SA minimization setting, where the works \cite{koufogiannakis2013greedy,iyer2014monotone} show that $\beta$-approximations are always available. Our next result (combined with these) establishes an $O(\log n)$ MA gap for bounded blocker families, thus improving the $O(\log^2 (n))$ factor in Theorem \ref{thm:klog} for general families. We remark that this $O(\log n)$ MA gap is tight due to examples like vertex covers ($2$-approximation for SA and a tight $O(\log n)$-approximation for MA) or submodular facility location ($1$-approximation for SA and a tight $O(\log n)$-approximation for MA). \begin{theorem} \label{thm:beta} Let $\mathcal{F}$ be a family with a $\beta$-bounded blocker. Then there is a randomized $O(\beta \log n)$-approximation algorithm for monotone MASO($\mathcal{F}$) minimization.
\end{theorem} While our work focuses almost completely on monotone objectives, we show in Section \ref{sec:MAfromSA} that upwards closed families with a bounded blocker remain tractable under some special types of nonmonotone objectives introduced by Chekuri and Ene. \iffalse While our work focuses almost completely on monotone objectives, for the case of upwards closed families with a bounded blocker we can show the following. \begin{theorem} Let $\mathcal{F}$ be an upwards closed family with a $\beta$-bounded blocker. Let the objectives be of the form $f_i = g_i + h$ where each $g_i$ is nonnegative monotone submodular and $h$ is nonnegative submodular. Then there is a randomized $O(k \beta \log n)$-approximation algorithm for the associated multi-agent problem. Moreover, in the case where $h$ is also symmetric there is an $O(\beta \log n)$-approximation algorithm. \end{theorem} \fi We conclude our minimization work by discussing a class of families which behaves well for MA minimization despite not having a bounded blocker. More specifically, in Section~\ref{sec:rings} we observe that crossing (and ring) families have an MA gap of $O(\log n)$. \begin{theorem} There is a tight $\ln (n)$-approximation for monotone MASO($\mathcal{F}$) minimization over crossing families $\mathcal{F}$. \end{theorem} We now discuss our contributions for the maximization setting. Our main result here establishes that if the SA primitive for a family $\mathcal{F}$ admits approximation via its multilinear relaxation (see Section \ref{sec:max-SA-MA-formulations}), then we may extend this to its multi-agent version with a constant factor loss. \begin{theorem} \label{thm:max-MA-gap} If there is a (polytime) $\alpha(n)$-approximation for monotone SO($\mathcal{F}$) maximization via its multilinear relaxation, then there is a (polytime) $(1-1/e) \cdot \alpha(n)$-approximation for monotone MASO($\mathcal{F}$) maximization. 
Furthermore, given a downwards closed family $\mathcal{F}$, if there is a (polytime) $\alpha(n)$-approximation for nonmonotone SO($\mathcal{F}$) maximization via its multilinear relaxation, then there is a (polytime) $0.385 \cdot \alpha(n)$-approximation for nonmonotone MASO($\mathcal{F}$) maximization. \end{theorem} We remark that the $(1-1/e)$ MA gap in the monotone case is tight due to examples like $\mathcal{F}=\{V\}$, where there is a trivial $1$-approximation for the SA problem and a tight $(1-1/e)$-approximation for the MA version \cite{vondrak2008optimal}. In Section \ref{sec:MASA} we describe a simple generic reduction that shows that for some families an (optimal) MA gap of $1$ holds. \iffalse In Section \ref{sec:MASA} we also describe a generic reduction that transforms any CMASO($\mathcal{F}$) problem (i.e. any multi-agent problem (\ref{ma})) into a SO($\mathcal{F}'$) problem. We use the simple idea of viewing assignments of elements $v$ to agents $i$ in a {\em multi-agent bipartite graph}. This was first proposed by Lehmann et al in \cite{lehmann2001combinatorial} and also used in Vondrak's work \cite{vondrak2008optimal}. In those cases $\mathcal{F}=\{V\}$ and $\mathcal{F}_i = 2^V$. Here we discuss the impact of the reduction on more general families $\mathcal{F}$ and $\mathcal{F}_i$, where there is a priori no reason that this should be well-behaved. We show how several properties of the objective and certain family of feasible sets stay \emph{invariant} (i.e. preserved) under the reduction. These lead to several algorithmic consequences, the most important ones being for maximization where we show an (optimal) MA gap of 1 for several families. \fi \begin{theorem} \label{thm:max-invariance} Let $\mathcal{F}$ be a matroid, a $p$-matroid intersection, or a $p$-system. Then, if there is a (polytime) $\alpha$-approximation algorithm for monotone (resp. 
nonmonotone) SO($\mathcal{F}$) maximization, there is a (polytime) $\alpha$-approximation algorithm for monotone (resp. nonmonotone) MASO($\mathcal{F}$) maximization. \end{theorem} In the setting of CMASO (i.e. (\ref{ma})) our results provide additional modelling flexibility. They imply that one maintains decent approximations even while adding interesting side constraints. For instance, for a monotone maximization instance of CMASO where $\mathcal{F}$ corresponds to a $p$-matroid intersection and the $\mathcal{F}_i$ are all matroids, our results from Section \ref{sec:MASA} lead to a $(p+1+\epsilon)$-approximation algorithm. We believe that these, combined with other results from Section \ref{sec:MASA}, substantially expand the family of tractable models for maximization. While the impact of this reduction is more for maximization, it also has some interesting consequences in the minimization setting. We discuss in Section \ref{sec:reduction-properties} how some of our results help explain why, for the families of spanning trees, perfect matchings, and $st$-paths, the approximation factors revealed in \cite{goel2009approximability} for the monotone minimization problem are the same for both the single-agent and multi-agent versions.
\subsection{Some applications of (capacitated) multi-agent optimization} \label{sec:applications} In this section we present several problems in the literature which are special cases of Problem (\ref{eqn:MA}) and the more general Problem (\ref{ma}). We also indicate how the extra generality of {\sc CMASO} (i.e. (\ref{ma})) gives modelling advantages. We start with the \underline{maximization} setting. \begin{example}[The Submodular Welfare Problem] ~ The most basic problem in the maximization setting arises when we take the feasible space $\mathcal{F}=\{V\}$. This describes a well-known model (introduced in \cite{lehmann2001combinatorial}) for allocating goods to agents, each of which has a monotone submodular valuation (utility) function over baskets of goods. This is formulated as (\ref{eqn:MA}) by considering nonnegative monotone functions $f_{i}$ and $\mathcal{F}=\{V\}$. The CMASO framework allows us to incorporate additional constraints into this problem by defining the families $\mathcal{F}_{i}$ appropriately. For instance, one can impose cardinality constraints on the number of elements that an agent can take, or only allow agent $i$ to take a set $S_i$ of elements satisfying some bounds $L_i \subseteq S_i \subseteq U_i$. \end{example} \begin{example}[The Separable Assignment Problem] ~ An instance of the Separable Assignment Problem (SAP) consists of $m$ items and $n$ bins. Each bin $j$ has an associated downwards closed collection of feasible sets $\mathcal{F}_j$, and a modular function $v_{j}(i)$ that denotes the value of placing item $i$ in bin $j$.
The goal is to choose disjoint feasible sets $S_j \in \mathcal{F}_j$ so as to maximize $\sum_{j=1}^n v_j(S_j)$. This well-studied problem (\cite{fleischer2006sap,goundan2007revisiting,calinescu2011maximizing}) corresponds to a CMASO instance where all the objectives are modular, $\mathcal{F}=2^V$, and the families $\mathcal{F}_i$ are downwards closed. \end{example} We next discuss an example where using matroid-capacity constraints $\mathcal{F}_i$ in CMASO is beneficial. \begin{example}[Recommendation Systems and Matroid Constraints] ~ This is a widely deployed class of problems that entails the targeting of product ads to a mass of (largely unknown) buyers or ``channels''. In \cite{chen2016conflict} a ``meta'' problem is considered where (known) prospective buyers are recommended to interested sellers. This type of recommendation system incurs additional constraints such as (i) bounds on the size of the buyer list provided to each seller (e.g., constrained by a seller's budget) and (ii) bounds on how often a buyer appears on a list (to not bombard buyers). These constraints are modelled as a ``$b$-matching'' problem in a bipartite buyer-seller graph $G_B$. They also consider a more sophisticated model which incorporates ``conflict-aware'' constraints on the buyer list for each seller, e.g., no more than one buyer from a household should be recommended to a seller. They model conflicts using extra edges amongst the buyer nodes and they specify an upper bound on the number of allowed conflict edges induced by a seller's recommendation list. Heuristics for this (linear-objective) model \cite{chen2016conflict} are successfully developed on Ebay data, even though the computational problem is shown to be NP-hard. In fact, subsequent work \cite{chen2016group} shows that conflict-aware $b$-matching suffers an inapproximability bound of $O(n^{1-\epsilon})$. We now propose an alternative model which admits an $O(1)$-approximation. 
Moreover, we allow a more general submodular multi-agent objective $\sum_i f_i(B_i)$ where $B_i$ are the buyers recommended to seller $i$. To formulate this in the CMASO model (\ref{ma}) we consider the same complete buyer-seller bipartite graph from previous work. We now represent a buyer list $B_i$ as a set of edges $S_i$. To ensure that each buyer $v$ is recommended at most its allowed maximum of $b(v)$ times, we add the constraint that the number of edges in $F=\cup_i S_i$ which are incident to buyer node $v$ is at most $b(v)$. The family $\mathcal{F}$ of such sets $F$ forms a partition matroid. Hence the problem can be formulated as: \[ \begin{array}{cc} \max & \sum_{i=1}^{k}f_{i}(S_{i})\\ \mbox{s.t.} & S_1 \uplus S_2 \uplus \cdots \uplus S_k \in \mathcal{F} \\ & S_i \in \mathcal{F}_i, \;\; \forall i \in [k] \end{array} \] We now define $\mathcal{F}_i$ to enforce conflict constraints for the seller as follows. Let $V_i$ denote the edges from seller node $i$ to allowable buyers for $i$ (possibly $V_i$ is all buyers). We may then partition $V_i$ into ``households'' $V_{ij}$. To model conflicts, we insist that $S_i$ include at most one element from each $V_{ij}$. The resulting family $\mathcal{F}_i$ is a partition or laminar matroid. Our results imply that this new version has a polytime $O(1)$-approximation (in the value oracle model). \end{example} \begin{example}[Sensor Placement] ~ The problem of placing sensors and information gathering has been popular in the submodularity literature \cite{krause2007near,krause1973optimal,krause08efficient}. We are given a set of sensors $V$ and a set of possible locations $\{1,2,\ldots,k\}$ where the sensors can be placed. There is also a budget constraint restricting the number of sensors that can be deployed. The goal is to place sensors at some of the locations so as to maximize the total ``informativeness''. Consider a multi-agent objective function $\sum_{i \in [k]} f_i(S_i)$, where $f_i(S_i)$ measures the informativeness of placing sensors $S_i$ at location $i$. It is then natural to consider a diminishing return (i.e. submodularity) property for the $f_i$'s. We can then formulate the problem as MASO($\mathcal{F}$) where $\mathcal{F}:=\{ S \subseteq V: |S| \leq b \}$ imposes the budget constraint. We can also use CMASO for additional modelling flexibility. For instance, we may define $\mathcal{F}_i=\{ S\subseteq V_i: |S| \leq b_i \}$ where $V_i$ are the allowed sensors for location $i$ and $b_i$ is an upper bound on the sensors located there. \end{example} We now discuss Problems (\ref{eqn:MA}) and (\ref{ma}) in the \underline{minimization} setting. \begin{example}[Minimum Submodular Cost Allocation] \label{ex:MSCA} ~ The most basic problem in the minimization setting arises when we simply take $\mathcal{F}=\{V\}$. This problem, $\min \sum_{i=1}^{k}f_{i}(S_{i}): S_{1}\uplus \cdots\uplus S_{k}=V$, has been widely considered in the literature for both monotone \cite{svitkina2010facility} and nonmonotone functions \cite{chekuri2011submodular,ene2014hardness}, and is referred to as the {\sc Minimum Submodular Cost Allocation (MSCA) problem}\footnote{Sometimes referred to as submodular procurement auctions.} (introduced in \cite{hayrapetyan2005network,svitkina2010facility} and further developed in \cite{chekuri2011submodular}). This is formulated as (\ref{eqn:MA}) by taking $\mathcal{F}=\{V\}$.
The CMASO framework allows us to incorporate additional constraints into this problem. The most natural are to impose cardinality constraints on the number of elements that an agent can take, or only allow agent $i$ to take a set $S_i$ of elements satisfying some bounds $L_i \subseteq S_i \subseteq U_i$. \end{example} \begin{example}[Multi-agent Minimization] ~ Goel et al.\ \cite{goel2009approximability} consider the special cases of MASO($\mathcal{F}$) where the objectives are nonnegative monotone submodular and $\mathcal{F}$ is either the family of vertex covers, spanning trees, perfect matchings, or shortest $st$ paths. \end{example} \subsection{Related work} \label{sec:related work} {\bf Single Agent Optimization.} The high-level view of the tractability status for unconstrained (i.e., $\mathcal{F}=2^V$) submodular optimization is that both maximization and minimization generally behave well.
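As a concrete illustration of the unconstrained maximization side, the following Python sketch implements the deterministic ``double greedy'' algorithm of Buchbinder et al.\ for maximizing a nonnegative (possibly nonmonotone) submodular function; the deterministic variant guarantees a $1/3$-approximation, and its randomized counterpart achieves the tight $1/2$. The cut-function instance is illustrative only, not an example from this paper:

```python
def double_greedy(f, V):
    """Deterministic double greedy for unconstrained maximization of a
    nonnegative submodular f; returns a set X with f(X) >= OPT / 3."""
    X, Y = set(), set(V)
    for v in V:
        a = f(X | {v}) - f(X)   # marginal gain of adding v to X
        b = f(Y - {v}) - f(Y)   # marginal gain of removing v from Y
        if a >= b:
            X.add(v)
        else:
            Y.remove(v)
    return X  # X == Y when the loop ends

# Example: a graph cut function (nonnegative, nonmonotone, submodular)
# on a 4-cycle with one chord; the data is made up for illustration.
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
cut = lambda S: sum(1 for u, v in edges if (u in S) != (v in S))

S = double_greedy(cut, [1, 2, 3, 4])
```

On this instance the algorithm happens to return $\{1,3\}$, an optimal cut of value $4$; in general only the $1/3$ guarantee holds.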
Minimizing a submodular set function is a classical combinatorial optimization problem which can be solved in polytime \cite{grotschel2012geometric,schrijver2000combinatorial,iwata2001combinatorial}. Unconstrained maximization on the other hand is known to be inapproximable for general submodular set functions but admits a polytime constant-factor approximation algorithm when $f$ is nonnegative \cite{buchbinder2015tight,feige2011maximizing}. In the constrained maximization setting, the classical work \cite{nemhauser1978analysis,nemhauser1978best,fisher1978analysis} already established an optimal $(1-1/e)$-approximation factor for maximizing a nonnegative monotone submodular function subject to a cardinality constraint, and a $(1/(k+1))$-approximation for maximizing a nonnegative monotone submodular function subject to $k$ matroid constraints. This approximation is almost tight in the sense that there is an (almost matching) factor $\Omega(\log(k)/k)$ inapproximability result \cite{hazan2006complexity}. \iffalse in classical work of Fisher, Nemhauser, and Wolsey \cite{nemhauser1978analysis,nemhauser1978best,fisher1978analysis} it is shown that the greedy algorithm achieves an optimal $(1-\frac{1}{e})$-approximation factor for maximizing a monotone submodular function subject to a cardinality constraint, and a $(k+1)$-approximation for maximizing a monotone submodular function subject to $k$ matroid constraints. This approximation is almost tight in the sense that there is an (almost matching) factor $\Omega(k/\log(k))$ inapproximability result. Some of these results have been refined or improved recently. 
\fi For nonnegative monotone functions, \cite{vondrak2008optimal,calinescu2011maximizing} give an optimal $(1-1/e)$-approximation based on multilinear extensions when $\mathcal{F}$ is a matroid; \cite{kulik2009maximizing} provides a $(1-1/e-\epsilon)$-approximation when $\mathcal{F}$ is given by a constant number of knapsack constraints, and \cite{lee2010submodular} gives a local-search algorithm that achieves a $(1/k-\epsilon)$-approximation (for any fixed $\epsilon>0$) when $\mathcal{F}$ is a $k$-matroid intersection. For nonnegative nonmonotone functions, a $0.385$-approximation is the best factor known \cite{buchbinder2016constrained} for maximization under a matroid constraint; in \cite{lee2009non} a $1/(k+O(1))$-approximation is given for $k$ matroid constraints with $k$ fixed. A simple ``multi-greedy'' algorithm \cite{gupta2010constrained} matches the approximation of Lee et al. but is polytime for any $k$. Vondrak \cite{vondrak2013symmetry} gives a $\frac{1}{2}(1-\frac{1}{\nu})$-approximation under a matroid base constraint where $\nu$ denotes the fractional base packing number. Finally, Chekuri et al.\ \cite{vondrak2011submodular} introduce a general framework based on relaxation-and-rounding that allows for combining different types of constraints. This leads, for instance, to $0.38/k$ and $0.19/k$ approximations for maximizing nonnegative submodular monotone and nonmonotone functions respectively under the combination of $k$ matroids and $\ell=O(1)$ knapsack constraints. For constrained minimization, the news is worse \cite{goel2009approximability,svitkina2011submodular,iwata2009submodular}. If $\mathcal{F}$ consists of spanning trees (bases of a graphic matroid), Goel et al.\ \cite{goel2009approximability} show a lower bound of $\Omega(n)$, while in the case where $\mathcal{F}$ corresponds to the cardinality constraint $\{S:|S|\geq k\}$, Svitkina and Fleischer \cite{svitkina2011submodular} show a lower bound of $\tilde{\Omega}(\sqrt{n})$. There are a few exceptions. The problem can be solved exactly when $\mathcal{F}$ is a ring family (\cite{schrijver2000combinatorial}), triple family (\cite{grotschel2012geometric}), or parity family (\cite{goemans1995minimizing}). In the context of NP-Hard problems, there are almost no cases where good (say $O(1)$ or $O(\log n)$) approximations exist. We have that the submodular vertex cover admits a $2$-approximation (\cite{goel2009approximability,iwata2009submodular}), and the $k$-uniform hitting set has an $O(k)$-approximation. {\bf Multi-agent Problems.} In the maximization setting the main multi-agent problem studied is the Submodular Welfare Maximization ($\mathcal{F}=\{V\}$), for which the initial $1/2$-approximation \cite{lehmann2001combinatorial} was improved to $1-1/e$ by Vondrak \cite{vondrak2008optimal}, who introduced the continuous greedy algorithm. This approximation is in fact optimal \cite{khot2005inapproximability,mirrokni2008tight}. We are not aware of maximization work for Problem (\ref{eqn:MA}) for a nontrivial family $\mathcal{F}$. For the multi-agent minimization setting, MSCA (i.e. $\mathcal{F}=\{V\}$) is the most studied application of Problem (\ref{eqn:MA}).
For nonnegative monotone functions, MSCA is equivalent to the Submodular Facility Location problem considered in \cite{svitkina2010facility}, where a tight $O(\log n)$ approximation is given. If the functions $f_{i}$ are nonnegative and nonmonotone, then no multiplicative factor approximation exists \cite{ene2014hardness}. If, however, the functions can be written as $f_{i}=g_{i}+h$ for some nonnegative monotone submodular $g_{i}$ and a nonnegative symmetric submodular function $h$, an $O(\log n)$ approximation is given in \cite{chekuri2011submodular}. In the more general case where $h$ is nonnegative submodular, an $O(k \log n)$ approximation is provided in \cite{ene2014hardness}, and this is tight \cite{mirzakhani2014sperner}. \iffalse Chekuri and Ene \cite{chekuri2011approximation} consider a variant where all the agents have the same objective function $f$, and moreover, agent $i$ is forced to take some given fixed element $s_i \in V$. More specifically, given a set of elements $\{s_1, s_2, \ldots, s_k \} \subseteq V$ they consider the problem $\min \sum_{i=1}^k f(S_i)$ subject to $s_i \in S_i$ and the partition constraint $S_1 + S_2 + \cdots + S_k = V$. A $2$-approximation and a $(1.5-1/k)$-approximation are given for this problem in the cases where $f$ is nonnegative submodular and nonnegative symmetric submodular respectively. \fi Goel et al.\ \cite{goel2009approximability} consider the minimization case of (\ref{eqn:MA}) for nonnegative monotone submodular functions, in which $\mathcal{F}$ is a nontrivial collection of subsets of $V$ (i.e. $\mathcal{F}\subset2^{V}$) and there is no restriction on the $\mathcal{F}_{i}$ (i.e. $\mathcal{F}_{i}=2^{V}$ for all $i$). In particular, given a graph $G$ they consider the families of vertex covers, spanning trees, perfect matchings, and shortest $st$-paths. They provide a tight $O(\log n)$ approximation for the vertex cover problem, and show polynomial hardness for the other cases.
To the best of our knowledge, \cite{goel2009approximability} is the only work on Problem (\ref{eqn:MA}) for nontrivial collections $\mathcal{F}$. \section{Multi-agent submodular minimization} \label{sec:multimin} In this section we seek generic reductions from multi-agent minimization problems to their single-agent primitives. We mainly focus on the case of nonnegative monotone submodular objective functions, and we work with a natural convex relaxation obtained via the Lov\'asz extension of a set function (cf. Appendices \ref{sec:blocking} and \ref{sec:relaxations}). We show that if the SA primitive admits an approximation via such a relaxation, then we may extend it to the MA version at an $O(\min\{k, \log^2 (n)\})$ factor loss. \iffalse Our main result in the minimization setting is the following. \begin{theorem} Assume there is a (polytime) $\alpha(n)$-approximation for monotone SO($\mathcal{F}$) minimization based on rounding the blocking convex relaxation. Then there is a (polytime) $O(\alpha(n) \cdot \min\{k, \log (n) \log (\frac{n}{\log n})\})$-approximation for monotone MASO($\mathcal{F}$) minimization. \end{theorem} \fi As noted already, the $O(\log^2 (n))$ approximation factor loss due to having multiple agents is in the right ballpark, since for vertex covers there is a factor $2$-approximation for SA submodular minimization and a tight $O(\log n)$-approximation for the multi-agent version \cite{goel2009approximability}. In Section \ref{sec:MAfromSA} we discuss an extension of this vertex cover result to a larger class of families with an MA gap of $O(\log n)$. \subsection{The single-agent and multi-agent formulations} \label{sec:SA-MA-formulations} Due to monotonicity, one may often assume that we are working with a family $\mathcal{F}$ which is {\em upwards-closed} (sometimes referred to as a {\em blocking family}), i.e. if $F \subseteq F'$ and $F \in \mathcal{F}$, then $F' \in \mathcal{F}$.
This can be done without loss of generality even if we seek polytime algorithms, since separation over a polytope with vertices $\{\chi^F: F \in \mathcal{F}\}$ implies separation over its dominant. We refer the reader to Appendix~\ref{sec:blocking} for details. \iffalse Due to monotonicity, one may often assume that we are working with a family $\mathcal{F}$ which is {\em upwards-closed}, aka a {\em blocking family} (cf. \cite{iyer2014monotone}). The advantage is that to certify whether $F \in \mathcal{F}$, we only need to check that $F \cap B \neq \emptyset$ for each element $B$ of the family $\mathcal{B}(\mathcal{F})$ of minimal blockers of $\mathcal{F}$. We discuss the details in Appendix \ref{sec:blocking}. \fi For a set function $f:\{0,1\}^V \to \mathbb{R}$ with $f(\emptyset)=0$ one can define its {\em Lov\'asz extension} $f^L:\mathbb{R}_+^V \to \mathbb{R}$ (introduced in \cite{lovasz1983submodular}) as follows. Let $0 < v_1 < v_2 < ... < v_m$ be the distinct positive values taken in some vector $z \in \mathbb{R}_+^V$, and let $v_0=0$. For each $i \in \{0,1,...,m\}$ define the set $S_i:=\{ j: z_j > v_i\}$. In particular, $S_0$ is the support of $z$ and $S_m=\emptyset$. One then defines (see Appendix \ref{sec:LE-def} for equivalent definitions): \[ f^L(z) = \sum_{i=0}^{m-1} (v_{i+1}-v_i) f(S_i). \] It follows from the definition that $f^L$ is positively homogeneous, that is $f^L(\alpha z)=\alpha f^L(z)$ for any $\alpha > 0$ and $z \in \mathbb{R}_+^V$. Moreover, it is also straightforward to see that $f^L$ is a monotone function if $f$ is. We have the following result due to Lov\'asz. \begin{lemma} [Lov\'asz \cite{lovasz1983submodular}] The function $f^L$ is convex if and only if $f$ is submodular. 
\end{lemma} This now gives rise to natural convex relaxations for the single-agent and multi-agent problems (see Appendix \ref{sec:relaxations}) based on some upwards closed relaxation $\{z \geq 0:Az\geq r\}$ of the integral polyhedron $conv(\{\chi^S:S\in \mathcal{F}\})$. In particular, let us denote $P(\mathcal{F}):=\{z \geq 0:Az\geq r\}$, and assume $A$ is a matrix with nonnegative integral entries and $r$ is a vector with positive integral components (if $r_i = 0$ then the $i$th constraint is always satisfied and we can remove it). For simplicity, we also assume that the entries of $A$ are polynomially bounded in $n$. The {\em single-agent Lov\'asz extension formulation} (used in \cite{iwata2009submodular,iyer2014monotone}) is: \begin{equation} \label{SA-LE} (\mbox{SA-LE}) ~~~~ \min f^L(z): z \in P(\mathcal{F}), \end{equation} and the {\em multi-agent Lov\'asz extension formulation} (used in \cite{chekuri2011submodular} for $\mathcal{F}=\{V\}$) is: \begin{equation} \label{MA-LE} (\mbox{MA-LE}) ~~~~ \min \sum_{i=1}^k f^L_i(z_i): z_1 + z_2 + \cdots + z_k \in P(\mathcal{F}). \end{equation} \iffalse The relaxation (SA-LE) has been already considered (\cite{iwata2009submodular,iyer2014monotone}) for different types of families $\mathcal{F}$, while we are only aware of (MA-LE) being used (\cite{chekuri2011submodular}) in the case $\mathcal{F}=\{V\}$. \fi By standard methods (see Appendix \ref{sec:relaxations}) one may solve these problems in polytime if one can separate over the relaxation $P(\mathcal{F})$. This is often the case for many natural families such as spanning trees, perfect matchings, $st$-paths, and vertex covers. \subsection{Rounding the (MA-LE) formulation for upwards closed families $\mathcal{F}$} It is shown in \cite{chekuri2011submodular} that in the setting of monotone objectives and $\mathcal{F} = \{V\}$, a fractional solution of (MA-LE) can be rounded into an integral one at an $O(\log n)$ factor loss.
Moreover, they show this still holds for some special types of nonmonotone objectives. \begin{theorem}[\cite{chekuri2011submodular}] \label{thm:monot-MSCA} Let $z_1+z_2+\cdots+z_k$ be a feasible solution for (MA-LE) in the setting where $\mathcal{F}=\{V\}$ (i.e. $\sum_{i\in [k]} z_i = \chi^V$) and $f_i = g_i + h$ where the $g_i$ are nonnegative monotone submodular and $h$ is nonnegative symmetric submodular. Then there is a randomized rounding procedure that outputs an integral feasible solution $\bar{z}_1+\bar{z}_2+\cdots+\bar{z}_k$ such that $\sum_{i\in [k]} f^L_i (\bar{z}_i) \leq O(\log n) \sum_{i \in [k]} f_i^L(z_i)$ on expectation. That is, we get a partition $S_1,S_2,\ldots,S_k$ of $V$ such that $\sum_{i \in [k]}f_i(S_i) \leq O(\log n) \sum_{i \in [k]} f_i^L(z_i)$ on expectation. \end{theorem} Our next result shows that the above rounding procedure can be adapted in a straightforward way to the setting where we have a general upwards closed family $\mathcal{F}$. We defer the proof to Appendix \ref{sec:nonmonotone}. \begin{theorem} \label{thm:sym-MSCA} Consider an instance of (MA-LE) where $\mathcal{F}$ is an upwards closed family and $f_i = g_i + h$ where the $g_i$ are nonnegative monotone submodular and $h$ is nonnegative symmetric submodular. Let $z_1+z_2+\cdots+z_k$ be a feasible solution such that $\sum_{i \in [k]} z_i \geq \chi^U$ for some $U \in \mathcal{F}$. Then there is a randomized rounding procedure that outputs an integral feasible solution $\bar{z}_1+\bar{z}_2+\cdots+\bar{z}_k$ such that $\sum_{i \in [k]} \bar{z}_i \geq \chi^U$ and $\sum_{i\in [k]} f^L_i (\bar{z}_i) \leq O(\log |U|) \sum_{i \in [k]} f_i^L(z_i)$ on expectation. That is, we get a subpartition $S_1,S_2,\ldots,S_k$ such that $\biguplus_{i \in [k]} S_i \supseteq U \in \mathcal{F}$ and $\sum_{i \in [k]}f_i(S_i) \leq O(\log |U|) \sum_{i \in [k]} f_i^L(z_i)$ on expectation.
\end{theorem} \subsection{A multi-agent gap of $O(\min\{k, \log^2 (n)\})$} \label{sec:generic-min-approx} In this section we present the proof of Theorem \ref{thm:klog}. The main idea behind our reductions is the following. We start with an optimal solution $z^* = z_1^* + z_2^* + \cdots + z^*_k$ to the multi-agent relaxation (MA-LE) and build a new feasible solution $\hat{z} = \hat{z}_1 + \hat{z}_2 + \cdots + \hat{z}_k$ where the $\hat{z}_i$ have supports $V_i$ that are pairwise disjoint. We interpret the $V_i$ as the set of items associated (or pre-assigned) to agent $i$. Once we have such a pre-assignment we consider the single-agent problem $\min g(S): S \in \mathcal{F}$ where \begin{equation} \label{g-function} g(S)=\sum_{i=1}^k f_i(S \cap V_i). \end{equation} It is clear that $g$ is nonnegative monotone submodular since the $f_i$ are as well. Moreover, for any solution $S \in \mathcal{F}$ for this single-agent problem we obtain a MA solution of the same cost by setting $S_i = S \cap V_i$, since we then have $g(S) = \sum_{i\in [k]} f_i (S \cap V_i) = \sum_{i \in [k]} f_i(S_i)$. For a set $S \subseteq V$ and a vector $z \in [0,1]^V $ we denote by $z|_S$ the truncation of $z$ to elements of $S$. That is, we set $z|_S (v) = z(v)$ for each $v \in S$ and to zero otherwise. Then notice that by definition of $g$ we have that $g^L(z) = \sum_{i \in [k]} f^L_i(z|_{V_i})$. Moreover, if we also have that the $V_i$ are pairwise disjoint, then $\sum_{i \in [k]} f^L_i(z|_{V_i}) = \sum_{i \in [k]} f^L_i(z_i)$. We formalize this observation in the following result. \begin{proposition} \label{prop:g-function} Let $z = z_1 + z_2 + \cdots + z_k$ be a feasible solution to (MA-LE) where the vectors $z_i$ have pairwise disjoint supports $V_i$. 
Then $g^L(z) = \sum_{i \in [k]} f^L_i(z|_{V_i}) = \sum_{i \in [k]} f^L_i(z_i).$ \end{proposition} The next two results show how one can get a feasible solution $\hat{z} = \hat{z}_1 + \hat{z}_2 + \cdots + \hat{z}_k$ where the $\hat{z}_i$ have pairwise disjoint supports, by losing a factor of $O( \log^2 (n) )$ and $k$ respectively. We remark that these two results combined prove Theorem \ref{thm:klog}. \begin{theorem} \label{thm:min-log} Suppose there is a (polytime) $\alpha(n)$-approximation for monotone SO($\mathcal{F}$) minimization based on rounding (SA-LE). Then there is a (polytime) $O(\alpha(n) \log (n) \log (\frac{n}{\log n}) )$-approximation for monotone MASO($\mathcal{F}$) minimization. \end{theorem} \begin{proof} Let $z^* = z_1^* + z_2^* + \cdots + z^*_k$ denote an optimal solution to (MA-LE) with value $OPT_{frac}$. In order to apply a black-box single-agent rounding algorithm we must create a different multi-agent solution. This is done in several steps, the first few of which are standard. The key steps are the {\em fracture, expand and return} steps which arise later in the process. Let $a_{max}$ denote the largest entry of the matrix $A$. Call an element $v$ {\em small} if $z^*(v) \leq \frac{1}{2 n \cdot a_{max}}$. Then note that the total contribution of small elements in any given constraint is at most a half, i.e. for any row $a_i$ of the matrix $A$ we have $a_i \cdot z^*|_{small} \leq \frac{1}{2}$. We obtain a new feasible solution $z' = z'_1 + z'_2 + \cdots + z'_k$ by removing all small elements from the support of the $z^*_i$ and then doubling the resulting vectors. Notice that this is indeed feasible since $A z' \geq 2(r-\frac{1}{2} \cdot \mathbf{1}) = 2r - \mathbf{1} \geq r$, where $\mathbf{1}$ denotes the vector of all ones. Moreover, by monotonicity and homogeneity of the $f^L_i$, the cost of $z'$ is at most $2 \cdot OPT_{frac}$. We now prune the solution $z' = z'_1 + z'_2 + \cdots + z'_k$ a bit more.
Let $Z_j$ be the elements $v$ such that $z'(v) \in (2^{-(j+1)},2^{-j}]$ for $j=0,1,2,\ldots,L$. Since $z'(v) > \frac{1}{2n\cdot a_{max}}$ for any element in the support, and we assume that $a_{max}$ is polynomially bounded in $n$, we have that $L = O(\log n)$. We call $Z_j$ {\em bin $j$} and define $r_j = 2^j$. We round up each $v \in Z_j$ so that $z'(v)=2^{-j}$ by augmenting the $z'_i$ values by at most a factor of $2$. We may do this simultaneously for all $v$ by possibly ``truncating'' the values associated to some of the elements. As before, this is fine since the $f^L_i$ are monotone. In the end, we call this a {\em uniform solution} $z'' = z''_1 + z''_2 + \cdots + z''_k$ in the sense that each $z''(v)$ is of the form $2^{-j}$. Note that its cost is at most $4 \cdot OPT_{frac}$. {\sc Fracture.} We now {\em fracture} the vectors $z''_i$ by defining vectors $z''_{i,j} = z''_i |_{Z_j}$ for each $i \in [k]$ and each $j \in \{0,1,\ldots,L\}$, where recall that the notation $z|_S$ denotes the truncation of $z$ to elements of $S$. Notice that $z''_i = \sum_{j=0}^L z''_{i,j}$. {\sc Expand.} Now for each $j \in \{0,1,\ldots,L\}$ we blow up the vectors $z''_{i,j}$ by a factor $r_j$. (Don't worry, this scaling is temporary.) Since $z''(v) = \frac{1}{r_j}$ for each $v \in Z_j$, this means that the resulting values yield a (possibly fractional) cover of $Z_j$. We can then use the rounding procedure discussed in Theorem \ref{thm:monot-MSCA} (with ground set $Z_j$ and taking $h\equiv 0$) to get an integral solution $z'''_{i,j}$ such that $\sum_i f^L_i (z'''_{i,j}) \leq O(\log |Z_j|) \sum_i f^L_i (r_j \cdot z''_{i,j})$ on expectation.
Moreover, we have that the cost of this new solution satisfies \begin{multline*} \sum_{i=1}^k f^L_i (\hat{z}_i) = \sum_{i=1}^k f^L_i ( \sum_{j=0}^L \frac{1}{r_j} z'''_{i,j} ) \leq \sum_{i=1}^k \sum_{j=0}^L \frac{1}{r_j} f^L_i ( z'''_{i,j} ) = \sum_{j=0}^L \frac{1}{r_j} \sum_{i=1}^k f^L_i ( z'''_{i,j} ) \\ \leq O( \sum_{j=0}^L \sum_{i=1}^k \log (|Z_j|) f^L_i (z''_{i,j}) ) \leq O( \sum_{j=0}^L \log (|Z_j|) \sum_{i=1}^k f^L_i (z''_i) ) \leq O( L \cdot \log (\frac{n}{L})) \cdot OPT_{MA} , \end{multline*} where in the first inequality we use the convexity and homogeneity of the $f^L_i$, in the second inequality we use again the homogeneity together with the upper bound for $\sum_i f^L_i (z'''_{i,j})$, in the third inequality we use monotonicity and the fact that $z''_{i,j} \leq z''_i$ for all $j$, and in the last one we use that $\sum_{i=1}^k f^L_i (z''_i) \leq 4 \cdot OPT_{frac} \leq 4 \cdot OPT_{MA}$ and \begin{equation*} \sum_{j=0}^L \log |Z_j| = \log (\prod_{j=0}^L |Z_j|) \leq \log (\frac{\sum_{j=0}^L |Z_j|}{L+1})^{L+1} = (L+1) \cdot \log (\frac{n}{L+1}) = O( L \cdot \log (\frac{n}{L})), \end{equation*} where the inequality follows from the AM-GM inequality. \iffalse \begin{eqnarray*} \sum_i f^L_i (\hat{z}_i) & = & \sum_i f^L_i ( \sum_{j=0}^L \frac{1}{r_j} z'''_{i,j} ) \leq \sum_i \sum_{j=0}^L \frac{1}{r_j} f^L_i ( z'''_{i,j} ) = \sum_{j=0}^L \frac{1}{r_j} \sum_i f^L_i ( z'''_{i,j} ) \leq O(\log n) \sum_i \sum_{j=0}^L f^L_i (z''_{i,j}) \\ & \leq & O(\log n) (L+1) \sum_i f^L_i (z''_i) \leq O(\log^2 (n)) \sum_i f^L_i (z^*_i) \leq O(\log^2 (n)) \cdot OPT(\mbox{MA}) , \end{eqnarray*} where in the first inequality we use the convexity and homogeneity of the $f^L_i$, in the second inequality we use again the homogeneity together with the upper bound for $\sum_i f^L_i (z'''_{i,j})$, and in the third inequality we use monotonicity and the fact that $z''_{i,j} \leq z''_i$ for all $j$. 
\fi {\sc Single-Agent Rounding.} In the last step we use the function $g$ defined in (\ref{g-function}), with sets $V_i$ corresponding to the support of the $\hat{z}_i$. Given our $\alpha$-approximation rounding assumption for (SA-LE), we can round $\hat{z}$ to find a set $\hat{S}$ such that $g(\hat{S})\leq \alpha g^L(\hat{z})$. Then, by setting $\hat{S}_i = \hat{S} \cap V_i$ we obtain a MA solution satisfying: \begin{equation*} \sum_{i=1}^k f_i(\hat{S}_i) = g(\hat{S}) \leq \alpha g^L(\hat{z}) = \alpha \sum_{i=1}^k f^L_i (\hat{z}_i) \leq \alpha \cdot O( L \cdot \log (\frac{n}{L})) \cdot OPT_{MA}, \end{equation*} where the second equality follows from Proposition \ref{prop:g-function}. Since $L = O( \log n )$, this completes the proof. \qed \end{proof} We now give an approximation in terms of the number of agents, which becomes preferable when $k < \log^2 (n)$. \begin{lemma} \label{lem:min-k} Suppose there is a (polytime) $\alpha(n)$-approximation for monotone SO($\mathcal{F}$) minimization based on rounding (SA-LE). Then there is a (polytime) $k \alpha(n)$-approximation for monotone MASO($\mathcal{F}$) minimization. \end{lemma} \begin{proof} Let $z^* = z_1^* + z_2^* + \cdots + z^*_k$ denote an optimal solution to (MA-LE) with value $OPT_{frac}$. We build a new feasible solution $\hat{z} = \hat{z}_1 + \hat{z}_2 + \cdots + \hat{z}_k$ as follows. For each element $v \in V$ let $i' = \argmax_{i \in [k]} z^*_i(v)$, breaking ties arbitrarily. Then set $\hat{z}_{i'}(v)=k z^*_{i'}(v)$ and $\hat{z}_{i}(v)=0$ for each $i\neq i'$. By construction we have $\hat{z} \geq z^*$, and hence this is indeed a feasible solution. Moreover, by construction we also have that $\hat{z}_i \leq k z_i^*$ for each $i \in [k]$. Hence, given the monotonicity and homogeneity of the $f^L_i$ we have \begin{equation*} \sum_{i=1}^k f^L_i(\hat{z}_i) \leq \sum_{i=1}^k f^L_i(k z^*_i) = k \sum_{i=1}^k f^L_i(z^*_i) = k \cdot OPT_{frac} \leq k \cdot OPT_{MA}.
\end{equation*} Since the $\hat{z}_i$ have disjoint supports $V_i$, we can now use the function $g$ defined in (\ref{g-function}) and apply the single-agent rounding argument as in Theorem \ref{thm:min-log}. This completes the proof. \qed \end{proof} \iffalse We note that the above result does not particularly depend on the blocking formulation $P^*(\mathcal{F})$. It still holds true for any other upwards closed relaxation of $conv(\{\chi^S:S\in \mathcal{F}\})$ that we decide to use in the (SA-LE) convex formulation. \fi The above lemma has interesting consequences in the case where $\mathcal{F}=\{V\}$. This is the submodular facility location problem considered by Svitkina and Tardos in \cite{svitkina2010facility}. They give an $O(\log n)$-approximation where $n$ denotes the number of customers/clients/demands. Lemma \ref{lem:min-k} implies we also have a $k$-approximation, which is preferable in facility location problems where the number of customers swamps the number of facility locations (for instance, for Amazon). \begin{corollary} \label{Cor facility-location} There is a polytime $k$-approximation for submodular facility location, where $k$ denotes the number of facilities. \end{corollary} \begin{proof} The single-agent version of the problem is the trivial $\min f(S): S \in \{V\}$. Hence a polytime exact algorithm is available for the single-agent problem and thus by Lemma \ref{lem:min-k} a polytime $k$-approximation is available for the multi-agent version. \qed \end{proof} \iffalse The above two corollaries are in fact special cases of using the reduction from Section \ref{sec:MASA} combined with the following result. \begin{lemma} \label{lemma partition} Let $\mathcal{P}=(E,\mathcal{I})$ be a partition matroid over $E$ with $\mathcal{I}=\{S\subseteq E:|S\cap E_{i}|\leq k_{i}\}$, and $f$ a nonnegative monotone submodular function.
Consider the problem $\min f(S):S\in\mathbb{B}(\mathcal{P})^{\uparrow}$, where $\mathbb{B}(\mathcal{P})$ denotes the set of all bases of the matroid $\mathcal{P}$, and $\mathbb{B}(\mathcal{P})^{\uparrow}$ its upwards closure. Then there is a $\max_{i}\{|E_{i}|-k_{i}+1\}$-approximation. If $P^*(\mathbb{B}(\mathcal{P})^{\uparrow})$ has a polytime separation oracle, then this is a polytime algorithm. \end{lemma} \begin{proof} The blockers in this case are given by $\mathcal{B}(\mathbb{B}(\mathcal{P})^{\uparrow})=\{B\subseteq E:|B\cap E_{i}|\geq|E_{i}|-k_{i}+1\}$. Now the result follows from Theorem \ref{thm:device} and Corollary \ref{cor:sep-algorithmic}. \qed \end{proof} Notice that both Corollary \ref{Cor Non-separable} and Corollary \ref{Cor facility-location} follow from applying the reduction from Section \ref{sec:MASA} and then using Lemma \ref{lemma partition} with $E_{i}=\delta(v_{i})$ and $k_{i}=1$ for each $i\in[n]$. \fi \subsection{A tight multi-agent gap of $O(\log n)$ for bounded blocker families} \label{sec:MAfromSA} In Section \ref{sec:generic-min-approx} we established an $O( \log^2 (n) )$ MA gap whenever there is a SA approximation algorithm based on the (SA-LE) formulation. For the vertex cover problem, however, there is an improved MA gap of $O(\log n)$ due to Goel et al. In this section we generalize their result by describing a larger class of families with such an MA gap. Recall that due to monotonicity, one may often assume that we are working with a family $\mathcal{F}$ which is {\em upwards-closed}, aka a {\em blocking family} (cf. \cite{iyer2014monotone}). The advantage is that to certify whether $F \in \mathcal{F}$, we only need to check that $F \cap B \neq \emptyset$ for each element $B$ of the family $\mathcal{B}(\mathcal{F})$ of minimal blockers of $\mathcal{F}$. We discuss the details in Appendix \ref{sec:blocking}.
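The certification step just described is simple enough to state in code. The following toy sketch (ours, for illustration only; the instance is hypothetical and not part of the paper's algorithms) checks membership in an upwards-closed family via its minimal blockers, using vertex covers of a triangle as the running example:

```python
# Toy illustration: membership in an upwards-closed family F is certified by
# checking that F intersects every minimal blocker in B(F).
# Running example: vertex covers of a triangle on {1,2,3}; the minimal
# blockers are exactly the edges.

def in_family(F, blockers):
    # F is feasible iff it intersects every minimal blocker.
    return all(F & B for B in blockers)

edges = [frozenset(e) for e in ({1, 2}, {2, 3}, {1, 3})]  # B(F) for vertex covers
assert in_family({1, 2}, edges)      # covers all three edges
assert not in_family({3}, edges)     # misses the edge {1,2}
assert in_family({1, 2, 3}, edges)   # supersets remain feasible (upwards-closed)
```

Note that the certificate is only efficient when $\mathcal{B}(\mathcal{F})$ is of polynomial size or admits a separation oracle, as discussed below.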
The blocking relaxation for a family $\mathcal{F}$ is then given by $P^*(\mathcal{F}):=\{z \geq 0: z(B) \geq 1 ~\textit{for all $B \in \mathcal{B}(\mathcal{F})$} \}$. In this section we consider the formulations (SA-LE) and (MA-LE) in the special case where the fractional relaxation of the integral polyhedron is given by $P^*(\mathcal{F})$. The $2 \ln (n)$-approximation algorithm of Goel et al. for multi-agent vertex cover relies only on the fact that the feasible set family has the following {\em bounded blocker property}. We call a clutter (family of noncomparable sets) $\mathcal{F}$ {\em $\beta$-bounded} if $|F| \leq \beta$ for all $F \in \mathcal{F}$. We then say that $\mathcal{F}$ has a $\beta$-bounded blocker if $|B|\leq \beta$ for each $B \in \mathcal{B}(\mathcal{F})$. The main SA minimization result for such families is the following. \iffalse \cite{koufogiannakis2013greedy} gives a greedy algorithm that achieves a $\beta$-approximation for such families. Moreover, this is a polytime algorithm when $\mathcal{B}(\mathcal{F})$ is of polynomial size. Subsequently, \cite{iyer2014monotone} achieved the same $\beta$-approximation factor via a convex program. In addition, their algorithm is polytime whenever $P^*(\mathcal{F})$ has a polytime separation oracle (which is a weaker assumption than the one from \cite{koufogiannakis2013greedy}). \fi \begin{theorem}[\cite{iyer2014monotone,koufogiannakis2013greedy}] \label{thm:SABBagain} Let $\mathcal{F}$ be a family with a $\beta$-bounded blocker. Then there is a $\beta$-approximation algorithm for monotone $SO(\mathcal{F})$ minimization. If $P^*(\mathcal{F})$ has a polytime separation oracle, then this is a polytime algorithm. \end{theorem} Our next result establishes an $O(\log n)$ MA gap for families with a bounded blocker. 
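The pigeonhole mechanism behind Theorem \ref{thm:SABBagain} can be made concrete. The sketch below is a toy illustration on a hypothetical instance (it is not the algorithm of \cite{iyer2014monotone,koufogiannakis2013greedy}): given a fractional point of the blocking relaxation, thresholding at $1/\beta$ yields a feasible set, since each blocker $B$ with $|B| \leq \beta$ and $z(B) \geq 1$ must contain an element of value at least $1/\beta$.

```python
# Toy illustration of beta-bounded blocker rounding: if z(B) >= 1 for every
# minimal blocker B with |B| <= beta, then U = {v : z(v) >= 1/beta} hits
# every blocker by pigeonhole, so U is feasible.

def threshold_round(z, blockers, beta):
    U = {v for v, val in z.items() if val >= 1.0 / beta}
    assert all(U & B for B in blockers)  # U intersects every minimal blocker
    return U

# Hypothetical instance: vertex covers of a triangle, blockers = edges, beta = 2.
edges = [frozenset(e) for e in ({1, 2}, {2, 3}, {1, 3})]
z = {1: 0.8, 2: 0.8, 3: 0.2}  # fractional: z(B) >= 1 on each edge
U = threshold_round(z, edges, beta=2)
assert U == {1, 2}  # element 3 falls below the 1/beta threshold and is dropped
```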
In fact, while our work has focused on monotone objectives (due to the inapproximability results for general submodular $f_i$), the next result extends to some special types of nonmonotone objectives. These were introduced in \cite{chekuri2011submodular} and \cite{ene2014hardness}, where a tractable middle-ground is found for the minimum submodular cost allocation problem (where $\mathcal{F} = \{V\}$). They work with objectives $f_i=g_i+h$ where the $g_i$ are monotone submodular and $h$ is symmetric submodular (in \cite{chekuri2011submodular}) or just submodular (in \cite{ene2014hardness}). \iffalse we establish a $\beta \ln (n)$-approximation for the monotone multi-agent minimization problem associated to families with a $\beta$-bounded blocker. We remark that this $O(\ln n)$-gap between SA and MA instances for families with bounded blockers is tight, as one can see in the examples of vertex covers ($2$-approximation for SA and a tight $O(\ln n)$-approximation for MA) or submodular facility location ($1$-approximation for SA and a tight $\ln (n)$-approximation for MA). \fi \iffalse Notice that this result indeed generalizes the $2\ln(n)$-approximation for multi-agent vertex cover, since the family of vertex cover of a graph is $2$-bounded. \fi \iffalse For the single-agent nonnegative monotone problem it was shown in \cite Bilmes that families with $\beta$-bounded blockers always admit a $\beta$-approximation. \fi \iffalse \begin{theorem} \label{BB alpha log(n) approx}Let $\mathcal{F}$ be a family with a $\beta$-bounded blocker. Then there is a $\beta\ln(n)$-approximation algorithm for the associated multi-agent nonnegative monotone problem. If $P^*(\mathcal{F})$ has a polytime separation oracle, then this is a polytime algorithm. \end{theorem} \begin{proof} Let $x^{*}$ be an optimal solution to (\ref{MA-LP}).
Recall that if $P^*(\mathcal{F})$ has a polytime separation oracle then we have that $x^*$ can be computed in polynomial time (see Appendix \ref{sec:ma extensions}). Consider $Q=\{v\in V:\sum_{S\ni v}\sum_{i\in[k]}x^{*}(S,i)\geq\frac{1}{\beta}\}$. Since $\mathcal{F}$ has a $\beta$-bounded blocker it follows that $Q\in\mathcal{F}$, and moreover, $\beta x^{*}$ is a fractional cover of $Q$. That is, $\sum_{S}\sum_{i}\beta x^{*}(S,i)\chi^{S}\geq\chi^{Q}$. We now appeal to standard analysis of the greedy algorithm applied to the fractional set cover (of $Q$) with set costs given by $c((S,i))=f_{i}(S)$. This results in an integral cover of $Q$ whose cost is augmented by a factor of at most $\ln n$ times the fractional cover. Denote this integral cover by $(S^{1},1),...,(S^{m_{1}},1),(S^{1},2),\ldots,$ $(S^{m_{2}},2),...,(S^{m_{k}},k)$. Since $\sum_{S}\sum_{i}x^{*}(S,i)f_{i}(S)\leq OPT(MA)$ we have that the integral cover cost satisfies \begin{eqnarray*} \sum_{i=1}^{k}\sum_{j=1}^{m_{i}}c(S^{j},i) & \leq & \ln(n)\sum_{S}\sum_{i}\beta x^{*}(S,i)f_{i}(S)\\ & = & \beta\ln(n)\sum_{S}\sum_{i}x^{*}(S,i)f_{i}(S)\\ & \leq & \beta \ln(n)OPT(MA). \end{eqnarray*} In addition, by letting $S_{i}=\cup_{j=1}^{m_{i}}(S^{j},i)$, by submodularity (or just subadditivity) we have \[ \sum_{i=1}^{k}f_{i}(S_{i})=\sum_{i=1}^{k}f_{i}(\cup_{j=1}^{m_{i}}(S^{j},i))\leq\sum_{i=1}^{k}\sum_{j=1}^{m_{i}}f_{i}((S^{j},i))=\sum_{i=1}^{k}\sum_{j=1}^{m_{i}}c(S^{j},i). \] Finally, if $S_{i}\cap S_{j}\neq\emptyset$ for some $i\neq j$ throw away the common shared elements from one of them arbitrarily. Denote by $S_{1}^{*}$,...,$S_{k}^{*}$ these new sets, and notice that $S_{i}^{*}\subseteq S_{i}$ and $S_{1}^{*}\uplus ...\uplus S_{k}^{*}=Q$. 
By monotonicity of the functions $f_{i}$'s we have \[ \sum_{i\in[k]}f_{i}(S_{i}^{*})\leq\sum_{i\in[k]}f_{i}(S_{i})\leq\beta\ln(n)OPT(\mbox{MA}) \] and the result follows.\qed \end{proof} \fi We remark that by taking $h\equiv 0$ (which is symmetric submodular), we obtain a result for monotone functions. We note that in this setting we do not need $\mathcal{F}$ to be upwards closed, since due to monotonicity we can work with the upwards closure of $\mathcal{F}$ without loss of generality as previously discussed in Section \ref{sec:SA-MA-formulations} (see Appendix \ref{sec:blocking} for further details). Moreover, as previously pointed out, this $O(\log n)$ MA gap is tight due to examples like vertex covers ($2$-approximation for SA and a tight $O(\log n)$-approximation for MA) or submodular facility location ($1$-approximation for SA and a tight $O(\log n)$-approximation for MA). \begin{theorem} \label{BB-nonmonotone}Let $\mathcal{F}$ be an upwards closed family with a $\beta$-bounded blocker. Let the objectives be of the form $f_i = g_i + h$ where each $g_i$ is nonnegative monotone submodular and $h$ is nonnegative symmetric submodular. Then there is a randomized $O(\beta \log n)$-approximation algorithm for the associated $MASO(\mathcal{F})$ minimization problem. If $P^*(\mathcal{F})$ has a polytime separation oracle, then this is a polytime algorithm. \end{theorem} \begin{proof} Let $z^* = \sum_{i \in [k]} z^*_i$ be an optimal solution to (MA-LE) based on the blocking relaxation $P^*(\mathcal{F})$ with value $OPT_{frac}$. Consider the new feasible solution given by $\beta z^* = \sum_{i \in [k]} \beta z^*_i$ and let $U=\{v\in V: \beta z^*(v) \geq 1\}$. Since $\mathcal{F}$ has a $\beta$-bounded blocker it follows that $U\in\mathcal{F}$. We now have that $\sum_{i \in [k]} \beta z^*_i$ is a feasible solution such that $\sum_{i \in [k]} \beta z^*_i \geq \chi^U$.
Thus, we can use Theorem \ref{thm:sym-MSCA} to get an integral feasible solution $\sum_{i \in [k]} \bar{z}_i$ such that $\sum_{i \in [k]} \bar{z}_i \geq \chi^U$ and $\sum_{i\in [k]} f^L_i (\bar{z}_i) \leq O(\log |U|) \sum_{i \in [k]} f_i^L(\beta z^*_i) \leq \beta \cdot O(\log n) \cdot OPT_{frac}$ on expectation. \qed \end{proof} It is shown in \cite{ene2014hardness} (see their Proposition 10) that given any nonnegative submodular function $h$, one may define a nonnegative symmetric submodular function $h'$ such that for any partition $S_1,S_2,\ldots,S_k$ we have $\sum_i h'(S_i) \leq k \sum_i h(S_i)$. This, with our previous result, yields the following corollary. \iffalse They also show that the $k$ factor loss due to the non-symmetry is in fact tight. \fi \begin{corollary} \label{BB-nonmonotone-again}Let $\mathcal{F}$ be an upwards closed family with a $\beta$-bounded blocker. Let the objectives be of the form $f_i = g_i + h$ where each $g_i$ is nonnegative monotone submodular and $h$ is nonnegative submodular. Then there is a randomized $O(k \beta \log n)$-approximation algorithm for the associated $MASO(\mathcal{F})$ minimization problem. \end{corollary} One may wish to view the above results through the lens of MA gaps, leading to the question: what is the associated SA primitive? For $g_i+h$ objectives, the SA version should be for general (or symmetric) nonnegative submodular objectives. Moreover, as we only use upwards closed families, one may deduce that these single-agent versions have $\beta$-approximations via the concept of the monotone closure of a nonmonotone objective from \cite{iyer2014monotone}. Hence our results establish MA gaps of $O(\log n)$ (resp. $O(k \log n)$) in these nonmonotone settings (and the factor of $k$ is tight \cite{mirzakhani2014sperner}).
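The disjoint-support splitting used in Lemma \ref{lem:min-k} is equally concrete. The following toy sketch (illustrative only; the fractional vectors are hypothetical) assigns each element to the agent maximizing its fractional value and scales that entry by $k$, which preserves feasibility in any upwards-closed relaxation since $k \cdot \max_i z_i(v) \geq \sum_i z_i(v)$:

```python
# Toy illustration of the splitting in Lemma lem:min-k: hand each element v
# to the agent i' maximizing z_i(v) and scale that entry by k. The supports
# become pairwise disjoint, the new solution dominates the old one
# coordinatewise, and each new vector is at most k times the original one.

def split_to_disjoint_supports(z):
    # z: list of k dicts mapping element -> fractional value.
    k = len(z)
    zhat = [{} for _ in range(k)]
    elements = set().union(*(zi.keys() for zi in z))
    for v in elements:
        i_star = max(range(k), key=lambda i: z[i].get(v, 0.0))
        zhat[i_star][v] = k * z[i_star].get(v, 0.0)
    return zhat

z = [{'a': 0.3, 'b': 0.1}, {'a': 0.2, 'b': 0.5}]  # hypothetical fractional solution
zhat = split_to_disjoint_supports(z)
assert set(zhat[0]) & set(zhat[1]) == set()  # pairwise disjoint supports
for v in ('a', 'b'):                         # coordinatewise domination
    assert sum(zi.get(v, 0.0) for zi in zhat) >= sum(zi.get(v, 0.0) for zi in z)
```

On this instance element `a` is handed to the first agent and `b` to the second, after which the pre-assignment function $g$ of (\ref{g-function}) applies.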
Before concluding this section, we note that Theorems~\ref{thm:klog} and \ref{thm:SABBagain} imply a $k \beta$-approximation for families with bounded blockers, which becomes preferable when $k < O(\log n)$. \begin{corollary} \label{cor:kgapBB} Let $\mathcal{F}$ be a family with a $\beta$-bounded blocker. Then there is a $k \beta$-approximation algorithm for the associated monotone $MASO(\mathcal{F})$ minimization problem. \end{corollary} \iffalse Many natural classes $\mathcal{F}$ have been shown to behave poorly for (nonnegative, monotone) submodular minimization. For instance, polynomial inapproximability has been established in the case where $\mathcal{F}$ is the family of edge covers (\cite{iwata2009submodular}), spanning trees (\cite{goel2009approximability}), or perhaps most damning is $\tilde{\Omega}(\sqrt{n})$-inapproximability for minimizing $f(S)$ subject to the simple cardinality constraint $|S| \geq k$\footnote{which is equivalent to the constraint $|S|=k$ since f is monotone} \cite{svitkina2011submodular}. These results all show that it is difficult to distinguish between two similar submodular functions, one of which ``hides'' the optimal set. At some level, the proofs leverage the fact that $f$ can have wild (so-called) curvature \cite{conforti1984submodular}. A few cases exist where good approximations hold. For instance, for ring families the problem can be solved exactly by adapting any algorithm that works for unconstrained submodular minimization (\cite{orlin2009faster,schrijver2000combinatorial}). Gr\"{o}tschel, Lov\'asz, and Schrijver (\cite{grotschel2012geometric}) show that the problem can be also solved exactly when $\mathcal{F}$ is a \emph{triple family}. Examples of these kind of families include the family of all sets of odd cardinality, the family of all sets of even cardinality, or the family of sets having odd intersection with a fixed $T \subseteq V$. 
More generally, Goemans and Ramakrishnan (\cite{goemans1995minimizing}) extend this last result by showing that the problem can be solved exactly when $\mathcal{F}$ is a \emph{parity family} (a more general class of families). In the context of NP-Hard problems, the cases in which a good (say $O(1)$ or $O(\log(n))$) approximation exists are almost non-existent. We have that the submodular vertex cover admits a $2$-approximation (\cite{goel2009approximability,iwata2009submodular}), and the $k$-uniform hitting set has $O(k)$-approximation. It is natural to examine the core reasons why these approximations algorithms work in the hope of harnessing ingredients in other contexts of interest. We emphasize that our work in this section focuses only on nonnegative normalized monotone functions. We also restrict attention to upwards-closed families $\mathcal{F}$, that is, if $F \subseteq F'$ and $F \in \mathcal{F}$, then $F' \in \mathcal{F}$. Due to monotonicity this is usually a benign assumption (see Appendix \ref{sec:blocking}). For instance, if we have a well-described formulation (or approximation) for the polytope $P(\mathcal{F})$ whose vertices are $\{\chi^F: F \in \mathcal{F}\}$, then we can also separate over its upwards closure $P(\mathcal{F})^\uparrow:=\{z \geq x: x \in P\}$. A second issue one must address when working with the upwards-closure of a family $\mathcal{F}$ is whether, given $F'$ in the closure, one may find a set $F \in \mathcal{F}$ with $F \subseteq F'$ in polytime. This is also the case if a polytime separation oracle is available for $P(\mathcal{F})$. \fi \iffalse The algorithms used in each case differ, and work for different reasons. The VC algorithm works since we are minimizing over a blocking family $\mathcal{F}$ whose blocker has bounded size sets. Facility location is based on a greedy algorithm whose \textquotedbl{}bang - per -buck\textquotedbl{} subroutine is based on detecting whether a given submod function takes on negative values. 
The alg of C-E\dots{} no clue. \\ {\em TO DO:\\ \\ Look up: k-hitting set result from Vondrak's slides. The paper with the hardness result considers the problem: vertex cover for k-regular hypergraphs.\\ } \fi \iffalse \subsubsection{Bounded Blockers and Single-Agent Minimization} \label{subsec:BB-gap} Algorithms for vertex cover and $k$-uniform hitting sets rely only on the fact that the feasible space has a {\em bounded blocker property}. We call a clutter (set family) $\mathcal{F}$ {\em $\beta$-bounded} if $|F| \leq \beta$ for all $F \in \mathcal{F}$. The next result is the main algorithmic device of this section and has several natural applications including implications for the class of multi-agent problems over $\mathcal{F}$. These are discussed in the following sections. \begin{theorem} \label{thm:device} Consider the (nonnegative, normalized, monotone) submodular minimization primitive \[ \min f(S):S\in\mathcal{F} \] and its natural LP (\ref{eqn:X}). If $\mathcal{B}(\mathcal{F})$ is $\beta$-bounded, then (\ref{eqn:X}) has integrality gap of at most $\beta$. \end{theorem} \begin{proof} Now assume we have that $x^{*}$ is an optimal solution for the primal (\ref{eqn:X}) with value {\sc opt}. We claim that $Q=\{v\in V:\sum_{S:S\ni v}x^{*}(S)\geq\frac{1}{\beta}\}$ is an $\beta$-approximation. Recall that $Q\in\mathcal{F}$ if and only if $Q\cap B\neq\emptyset$ for each $B\in\mathcal{B}(\mathcal{F})$. Pick an arbitrary $B\in\mathcal{B}(\mathcal{F})$, since $x^{*}$ is a feasible solution we must have that $\sum_{v\in B}\sum_{S\ni v}x^{*}(S)\geq1$. Hence, there must exist some $v_{0}\in B$ such that $\sum_{S\ni v_{0}}x^{*}(S)\geq\frac{1}{|B|}\geq\frac{1}{\beta}$, where the last inequality follows from the fact that $\mathcal{B}(\mathcal{F})$ is $\beta$-bounded. It follows that $v_{0}\in Q$ and thus $Q\cap B\supseteq\{v_{0}\}\neq\emptyset$. 
Since $\beta x^{*}$ is a fractional cover of $Q$, we may now apply Corollary~\ref{cor:convexbound} (the latter part in particular) to deduce that $f(Q) \leq \beta$ {\sc opt}. \qed \end{proof} In many circumstances this result becomes algorithmic. For instance, as discussed in Lemma~\ref{lem:polybound} in Appendix \ref{sec:extended-lp} we have the following. \begin{corollary} \label{cor:algorithmic} If $\mathcal{B}(\mathcal{F})$ is $\beta$-bounded and $|\mathcal{B}(\mathcal{F})| \in poly(n)$, then there is a polytime $\beta$-approximation for (\ref{eqn:X}). This is the case in particular if $\beta=O(1)$. \end{corollary} We note that it is not necessary to have a $O(1)$-bounded blocker to derive an $O(1)$-approximation algorithm; for instance we could have that $\mathcal{F}$ is polynomially bounded in some cases so an exact algorithm exists. The inapproximability result of Khot et al. \cite{khot2008vertex} could be seen as a type of converse to Theorem~\ref{thm:device}; it suggests that if the blockers are large, and suitably rich, then submodular minimization over $\mathcal{F}$ is doomed. \begin{theorem}[Khot, Regev \cite{khot2008vertex}] Assuming the Unique Games conjecture, there is a $(k-\epsilon)$-inapproximability factor for the problem of finding a minimum size vertex cover in a $k$-uniform hypergraph. \end{theorem} The restriction that $|\mathcal{B}(\mathcal{F})| \in poly(n)$ could be quite severe. In Appendix~\ref{sec:combined} we discuss how LP (\ref{eqn:X}) can still be solved in polytime under a much weaker assumption. In particular we show the following. \begin{corollary} \label{cor:sep-algorithmic} Assume there is a polytime separation oracle for the blocking formulation $P^*(\mathcal{F})$. Then LP (\ref{eqn:X}) can be solved in polytime. 
\end{corollary} \fi \iffalse \subsection{Multi-agent minimization over $\mathcal{F}=\{V\}$} \label{sec:MA min F=V} In this section we discuss the multi-agent framework (\ref{ma}) in the minimization setting and in the special case where the functions $f_{i}$ are nonnegative and $\mathcal{F}=\{V\}$. That is, we focus on the problem \[ \begin{array}{cc} \min & \sum_{i=1}^{k}f_{i}(S_{i})\\ \mbox{s.t.} & S_{1}\uplus \cdots\uplus S_{k}=V\\ & S_{i}\in \mathcal{F}_i\,,\,\forall i\in[k] \end{array} \] where the functions $f_{i}$ are nonnegative submodular. As previously discussed in Section~\ref{sec:applications} (see Example \ref{ex:MSCA}), the special case of this problem where the agents are allowed to take any subset of $V$ (i.e. $\mathcal{F}_{i}=2^{V}$ for all $i$) has been widely considered in the literature and it has been referred to as the Minimum Submodular Cost Allocation (MSCA) problem. We summarize the results for MSCA here. \begin{theorem} Consider MSCA for nonnegative functions $f_{i}$. We have the following. \begin{enumerate} \item There is a polytime (tight) $O(\ln (n))$-approx for the case where the functions are monotone (\cite{hayrapetyan2005network,svitkina2010facility}). \item There is a polytime (tight) $O(\ln (n))$-approx for the case where the functions $f_{i}$ can be written as $f_{i}=g_{i}+h$ for some nonnegative monotone submodular $g_{i}$ and a nonnegative symmetric submodular function $h$ (\cite{chekuri2011submodular}). \item There is a polytime (tight) $O(k\cdot\ln (n))$-approx for the case where the functions $f_{i}$ can be written as $f_{i}=g_{i}+h$ for some nonnegative monotone submodular $g_{i}$ and a nonnegative submodular function $h$ (\cite{ene2014hardness,mirzakhani2014sperner}). \item For general functions $f_i$ and $k\geq3$ this problem does not admit any finite multiplicative approximation factor (\cite{ene2014hardness}). 
\end{enumerate} \end{theorem} We now discuss the monotone MSCA problem with additional constraints on the sets that the agents can take. Two of the most natural settings to consider are to impose cardinality constraints on the number of elements that an agent can take, or to allow agent $i$ to only take elements from some subset $V_{i}\subseteq V$, or both. The rest of this section will be dealing with multi-agent problems that incorporate one (or both) of such constraints. The following result is an observation that derives from the work of Svitkina-Tardos \cite{svitkina2010facility} on the submodular facility location problem. \begin{corollary} The problem \[ \begin{array}{cc} \min & \sum_{i=1}^{k}f_{i}(S_{i})\\ \mbox{s.t.} & S_{1}\uplus \cdots\uplus S_{k}=V\\ & S_{i}\subseteq V_{i}, \end{array} \] where the functions $f_{i}$ are nonnegative monotone submodular admits a $\ln(\max_{i\in[k]}|V_{i}|)$-approximation. \end{corollary} \begin{proof} Svitkina-Tardos \cite{svitkina2010facility} provide a reduction of the problem \[ \begin{array}{cc} \min & \sum_{i=1}^{k}f_{i}(S_{i})\\ \mbox{s.t.} & S_{1}\uplus \cdots\uplus S_{k}=V \end{array} \] where the functions $f_{i}$ are nonnegative monotone submodular to a set cover instance. Then, the greedy algorithm (using unconstrained submodular minimization as a subroutine) is used to obtain an optimal $\ln(n)$-approximation. The same reduction applied in this setting gives a $t$-set cover instance (i.e. an instance where all the sets have size at most $t$) with $t=\max_{i\in[k]}|V_{i}|$. Now the result follows from the fact that greedy gives a $\ln(t)$-approximation for the $t$-set cover problem. \qed \end{proof} We now consider the above problem with an additional cardinality constraint on the sets the agents can take. We use the techniques from the lifting reduction presented in Section \ref{sec:MASA} and a known result from matching theory in bipartite graphs to obtain the desired approximation. 
We first introduce some definitions that will be useful for the proof. Let $G=(A+B,E)$ be a bipartite graph with $|A|=k$ and $A=\{a_1,...,a_k\}$. We call a subset of edges $M\subseteq E$ \emph{saturating} if $cov(M)=B$. Also, given some nonnegative integers $b_{1},...,b_{k}$, we say that $M$ is a \emph{$(b_{1},...,b_{k})$-matching} if $|M\cap\delta(a_{i})|\leq b_{i}$ for each $i\in[k]$. \begin{lemma} Consider the following multi-agent minimization problem \[ \begin{array}{cc} \min & \sum_{i\in[k]}g_{i}(S_{i})\\ \mbox{s.t.} & S_{1}\uplus \cdots\uplus S_{k}=V\\ & S_{i}\subseteq V_{i}\\ & |S_{i}|\leq b_{i}. \end{array} \] Then, if the functions $g_{i}$ are nonnegative monotone submodular there exists a polytime $(\max_{i}b_{i})$-factor approximation. \end{lemma} \begin{proof} Apply the generic reduction from Section \ref{sec:MASA}, and let $G=([k]+V,E)$ be the corresponding bipartite graph (i.e., with $E=\uplus_{i\in[k]}\{(i,v):v\in V_{i}\}$) and $f$ the corresponding function. Define weights $w:E\to\mathbb{R}_{+}$ such that $w_{e}=f(e)$ for each $e\in E$. Let $M\subseteq E$ be any saturating $(b_{1},...,b_{k})$-matching in $G$. By submodularity of $f$ we have that $w(M)\geq f(M)$. Moreover, since $g$ can be written as $g(S_{1},...,S_{k})=\sum_{i\in[k]}g_{i}(S_{i})$, we have that $f(M)=\sum_{i\in[k]}f_{i}(M_{i})$ for some monotone submodular functions $f_{i}$ where $M_{i}:=M\cap\delta(i)$ and $f_{i}(M_{i})=g_{i}(cov(M_{i}))$. Moreover, by monotonicity of the $f_{i}$ we have \[ f_{i}(M_{i})\geq\max_{e\in M_{i}}f_{i}(e)=\max_{e\in M_{i}}f(e)\geq\frac{1}{|M_{i}|}\sum_{e\in M_{i}}f(e)=\frac{1}{|M_{i}|}w(M_{i})\geq\frac{1}{b_{i}}w(M_{i}) \] where the first equality follows from the fact that $f(e)=f_{i}(e)$ for every $e\in\delta(i)$, and the last inequality from the fact that $M$ is a saturating $(b_{1},...,b_{k})$-matching and hence $|M_{i}|\leq b_{i}$ for every $i\in[k]$. 
Hence, it follows that for each saturating $(b_{1},...,b_{k})$-matching $M$ we have \[ w(M)\geq f(M)=\sum_{i\in[k]}f_{i}(M_{i})\geq\sum_{i\in[k]}\frac{1}{b_{i}}w(M_{i})\geq\sum_{i\in[k]}(\frac{1}{\max_{i\in [k]}b_{i}})w(M_{i})=\frac{1}{\max_{i\in [k]}b_{i}}w(M). \] In particular, if $M^{*}$ is a minimum saturating $(b_{1},...,b_{k})$-matching for the weights $w$, we have \[ \begin{array}{cccccc} w(M^{*}) & \geq f(M^{*})\geq & \min & f(M) & \geq & \frac{1}{\max_{i \in [k]}b_{i}}w(M^{*}).\\ & & \mbox{s.t.} & M\mbox{ is a saturating }(b_{1},...,b_{k})\mbox{\mbox{-matching}} \end{array} \] This is, $f(M^{*})$ is a $(\max_{i}b_{i})$-factor approximation. We conclude the argument by noticing that $w$ is a modular function, and hence we can find a minimum saturating $(b_{1},...,b_{k})$-matching in polynomial time. \qed \end{proof} \iffalse \textcolor{red}{NOT SURE WHETHER THE NEXT COROLLARY SHOULD STAY. IT WAS WRITTEN MAINLY TO COMPARE IT TO THE MV CASE, WHICH WAS VERY HARD} \begin{corollary} \label{cor:b_i=1} Consider the above problem in the special case where $k=n$ and all the $b_{i}=1$. Then, an optimal solution to the problem is given by a minimum perfect matching with respect to the weights $w$ as defined in the proof above. Moreover, this solution is still optimal even in the case where we do not require anymore the original functions $g_{i}$ to be nonnegative or monotone. \end{corollary} \fi \iffalse Surprisingly, the above problem becomes very hard to approximate if we allow general multivariate submodular functions instead of decomposable ones. This is true even under the special conditions of Corollary \ref{cor:b_i=1} in which $k=n$ and $b_{i}=1$ for all the agents. 
\begin{lemma} \label{lem:b_i=1Hard} There is an information-theoretic hardness lower bound of $\Omega(n)$ for the multivariate problem \[ \begin{array}{cc} \min & g(S_{1},\ldots,S_{k})\\ \mbox{s.t.} & S_{1}\uplus \cdots\uplus S_{k}=V\\ & S_{i}\subseteq V_{i}\\ & |S_{i}| \leq 1 \end{array} \] where $g$ is a nonnegative monotone multivariate submodular function. \end{lemma} \begin{proof} We prove this in the special case where $k=n$. Notice that in this case the constraint $|S_i|\leq 1$ becomes $|S_i| = 1$. We reduce an instance of the submodular perfect matching (SPM) problem to the problem above. In \cite{goel2009approximability}, the following is shown for any fixed $\epsilon,\delta > 0$. Any randomized $(\frac{n^{1-3\epsilon}}{1+\delta})$-approximation algorithm for the submodular minimum cost perfect matching problem on bipartite graphs (with at least one perfect matching), requires exponentially many queries. So let us consider an instance of SPM consisting of a bipartite graph $G=(A+B,E)$ with $|A|=|B|$ and a nonnegative monotone submodular function $f:2^{E}\to\mathbb{R}_{+}$. The goal is to find a perfect matching $M\subseteq E$ of minimum submodular cost. We can transform this to the general multivariate problem above by applying the reduction from Section \ref{sec:MASA} in the opposite direction. More formally, let $k=|A|,\:V=B,\mbox{ and }V_{i}=cov(\delta(a_{i}))$ where $A=\{a_{1},...,a_{k}\}$. Also, define a multivariate function $g:2^{V_{1}}\times...\times2^{V_{k}}\to\mathbb{R}_{+}$ by $g(S_{1},...,S_{k})=f(\uplus_{i\in[k]}\{(a_{i},b):b\in S_{i}\})$ for any $(S_{1},...,S_{k})\in2^{V_{1}}\times \cdots \times2^{V_{k}}$ (i.e. $S_{i}\subseteq V_{i}$). We have $g$ is multivariate submodular by Claim~\ref{Claim f submodular}. 
It follows that a solution to \[ \begin{array}{cc} \min & g(S_{1},\ldots,S_{k})\\ \mbox{s.t.} & S_{1}\uplus \cdots\uplus S_{k}=V\\ & S_{i}\subseteq V_{i}\\ & |S_{i}|=1 \end{array} \] (where here $|V|=k$) is a solution to the original SPM instance. \qed \end{proof} \fi We conclude this section by showing that monotone MSCA (which admits an $O(\ln(n))$-approximation) becomes very hard when additional cardinality constraints are added. \begin{claim} The problem \[ \begin{array}{cc} \min & \sum_{i=1}^{k}f_{i}(S_{i})\\ \mbox{s.t.} & S_{1}\uplus \cdots\uplus S_{k}=V\\ & |S_{i}|\leq b_{i} \end{array} \] where the $f_{i}$ are nonnegative monotone submodular functions is $\Omega(\frac{\sqrt{n}}{\ln n})$-hard. This is even the case when $k=2$. \end{claim} \begin{proof} It is known (\cite{svitkina2011submodular}) that the problem $\min f(S): |S|=m$ where $f$ is a nonnegative monotone submodular function is $\Omega(\frac{\sqrt{n}}{\ln n})$-hard. We reduce a general instance of this problem to the problem of the claim. To see this notice that by monotonicity of $f$ we have \[ \begin{array}{ccccccccccc} \min & f(S) & = & \min & f(S) & = & \min & f(S) & = & \min & f(S)+f'(S')\\ \mbox{s.t.} & |S|=m & & \mbox{s.t.} & |S|\geq m & & \mbox{s.t.} & S+S'=V & & \mbox{s.t.} & S+S'=V\\ & & & & & & & |S'|\leq|V|-m & & & |S'|\leq|V|-m \end{array} \] where $f'(S')=0$ for all $S'\subseteq V$. \qed \end{proof} \fi \subsection{A tight multi-agent gap of $O(\log n)$ for ring and crossing families} \label{sec:rings} It is well known (\cite{schrijver2000combinatorial}) that submodular minimization can be solved exactly in polynomial time over a ring family. In this section we observe that the MA problem over this type of constraint admits a tight $\ln (n)$-approximation. More generally, we consider {\em crossing families}. A family $\mathcal{F}$ of subsets of $V$ forms a ring family (aka lattice family) if for each $A,B \in \mathcal{F}$ we also have $A \cap B, A \cup B \in \mathcal{F}$. 
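The closure conditions defining a ring family are easy to test exhaustively for a small, explicitly listed family, as in the following minimal Python sketch (the family shown is purely illustrative):

```python
# Sanity check of the ring (lattice) family closure conditions for an
# explicitly listed family; the family below is illustrative only.
family = {frozenset(s) for s in [{"u"}, {"u", "a"}, {"u", "b"}, {"u", "a", "b"}]}

def is_ring_family(F):
    # Closed under pairwise intersection and union.
    return all(A & B in F and A | B in F for A in F for B in F)

assert is_ring_family(family)

# For a nonempty ring family, closure under intersection implies that the
# intersection of all members is itself a member: the unique minimal set M.
M = frozenset.intersection(*family)
assert M in family
print("ring family with minimal set", set(M))
```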
A crossing family is one where these closure properties are required only for pairs $A,B$ such that $A \setminus B, B \setminus A, A \cap B, V- (A \cup B)$ are all non-empty. Hence any ring family is a crossing family. For any crossing family $\mathcal{F}$ and any $u,v \in V$, let $\mathcal{F}_{uv}=\{A \in \mathcal{F}: u \in A, v \not\in A\}$. It is easy to see that $\mathcal{F}_{uv}$ is a ring family. Moreover, we may solve the original MA problem by solving the associated MA problem for each non-empty $\mathcal{F}_{uv}$ and then selecting the best output solution. We assume from now on that we are given a ring family in such a way that we may compute its minimal set $M$ (which is unique). This is a standard assumption when working with ring families (cf. the submodular minimization algorithm described in \cite{schrijver2000combinatorial}). Then, due to monotonicity and the fact that $\mathcal{F}$ is closed under intersections, it is not hard to see that the original problem reduces to the facility location problem $$ \min \sum_{i=1}^k f_i(S_i): S_1 \uplus \cdots \uplus S_k = M \; , $$ which admits a tight $(\ln |M|)$-approximation (\cite{svitkina2010facility}). In particular, for the special case where we have the trivial ring family $\mathcal{F} = \{V\}$ we get a tight $\ln (n)$-approximation. The next result summarizes these observations. \begin{theorem} There is a tight $\ln (n)$-approximation for monotone $MASO(\mathcal{F})$ minimization over crossing families $\mathcal{F}$. \end{theorem} \section{Multi-agent submodular maximization} \label{sec:MASA} In this section we describe two different reductions. The first one reduces the capacitated multi-agent problem (\ref{ma}) to a single-agent problem, and it is based on the simple idea of taking $k$ disjoint copies of the original ground set. We show that several properties of the objective and family of feasible sets stay \emph{invariant} (i.e. preserved) under the reduction. We use this to establish an (optimal) MA gap of $1$ for several families.
Examples of such families include spanning trees, matroids, and $p$-systems. Our second reduction is based on the multilinear extension of a set function. We establish that if the SA primitive admits approximation via its multilinear relaxation (see Section \ref{sec:max-SA-MA-formulations}), then we may extend this to its MA version with a constant factor loss, in both the monotone and nonmonotone settings. Moreover, for the monotone case our MA gap is tight. \subsection{The lifting reduction} \label{sec:lifting-reduction} In this section we describe a generic reduction of (\ref{ma}) to a single-agent problem $$\max/\min f(S): S\in\mathcal{L}.$$ The argument is based on the idea of viewing assignments of elements $v$ to agents $i$ in a {\em multi-agent bipartite graph}. This simple idea (which is equivalent to making $k$ disjoint copies of the ground set) already appeared in the classical work of Fisher et al.\ \cite{fisher1978analysis}, and has since been widely used \cite{lehmann2001combinatorial,vondrak2008optimal,calinescu2011maximizing,singh2012bisubmodular}. We briefly review the reduction here for completeness and to fix notation. Consider the complete bipartite graph $G=([k]+V,E)$. Every subset of edges $S \subseteq E$ can be written uniquely as $S = \uplus_{i \in [k]} (\{i\} \times S_i)$ for some sets $S_i \subseteq V$. This allows us to go from a multi-agent objective (such as the one in (\ref{ma})) to a univariate objective $f:2^{E}\to\mathbb{R}$ over the lifted space. Namely, for each set $S \subseteq E$ we define $f(S)=\sum_{i \in [k]} f_i(S_i)$; the function $f$ is well-defined precisely because this decomposition of $S$ is unique.
We consider two families of sets over $E$ that capture the original constraints: $$ \mathcal{F}':=\{S\subseteq E:S_{1}\uplus \cdots\uplus S_{k}\in\mathcal{F}\} \hspace{15pt} \mbox{and} \hspace{15pt} \mathcal{H}:=\{S\subseteq E:S_{i}\in\mathcal{F}_{i},\;\forall i\in[k]\}. $$ \noindent We now have: \[ \begin{array}{cccccccc} \max/\min & \sum_{i \in [k]} f_i(S_i) & = & \max/\min & f(S) & = & \max/\min & f(S)\\ \mbox{s.t.} & S_{1}\uplus \cdots\uplus S_{k}\in\mathcal{F} & & \mbox{s.t.} & S\in\mathcal{F}' \cap \mathcal{H} & & \mbox{s.t.} & S\in\mathcal{L}\\ & S_{i}\in\mathcal{F}_{i}\,,\,\forall i\in[k] \end{array}, \] where in the last step we just let $\mathcal{L}:=\mathcal{F}' \cap\mathcal{H}$. \iffalse Notice that for the robust problem discussed in Theorem \ref{thm:max robust} we get the SA problem \begin{equation*} \begin{array}{cccccccc} \max & \min & \sum_{i \in [k]} f_i(S_i - A_i) & = & \max & \min & f(S-A). \\ S_1 \uplus \cdots \uplus S_k \in \mathcal{F} & A_i \subseteq S_i & & & S \in \mathcal{L} & A \subseteq S& \\ S_{i} \in \mathcal{F}_{i} & \sum_i |A_i|\leq \tau & & & & |A| \leq \tau & \end{array} \end{equation*} \fi This reduction is interesting if our new function $f$ and family of sets $\mathcal{L}$ have properties that allow us to handle them computationally. This will depend on the original structure of the functions $f_i$ and the set families $\mathcal{F}$ and $\mathcal{F}_{i}$. In terms of the objective, it is straightforward to check (as previously pointed out in \cite{fisher1978analysis}) that if the $f_i$ are (nonnegative, respectively monotone) submodular functions, then $f$ as defined above is also (nonnegative, respectively monotone) submodular. In Section \ref{sec:reduction-properties} we discuss several properties of the families $\mathcal{F}$ and $\mathcal{F}_i$ that are preserved under this reduction.
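In code, lifting the objective is a one-liner, and the preservation of submodularity can be checked by brute force on a toy instance. The Python sketch below uses illustrative coverage functions as the agent objectives (all data is hypothetical):

```python
from itertools import chain, combinations

V = ["a", "b", "c"]
k = 2

# Toy agent objectives: coverage functions (submodular). Hypothetical data.
COVER = {0: {"a": {1, 2}, "b": {2, 3}, "c": {4}},
         1: {"a": {1}, "b": {5}, "c": {5, 6}}}

def f_i(i, S_i):
    return len(set().union(*(COVER[i][v] for v in S_i))) if S_i else 0

# Lifted ground set E = [k] x V and lifted objective f(S) = sum_i f_i(S_i),
# where S_i = {v : (i, v) in S}.
E = [(i, v) for i in range(k) for v in V]

def f(S):
    return sum(f_i(i, {v for (j, v) in S if j == i}) for i in range(k))

def subsets(ground):
    return chain.from_iterable(combinations(ground, r) for r in range(len(ground) + 1))

# Brute-force check of submodularity: f(A) + f(B) >= f(A | B) + f(A & B).
for A in map(frozenset, subsets(E)):
    for B in map(frozenset, subsets(E)):
        assert f(A) + f(B) >= f(A | B) + f(A & B)
print("lifted objective is submodular on this instance")
```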
\subsection{The single-agent and multi-agent formulations} \label{sec:max-SA-MA-formulations} For a set function $f:2^V \to \mathbb{R}$ we define its \emph{multilinear extension} $f^M:[0,1]^V \to \mathbb{R}$ (introduced in \cite{calinescu2007maximizing}) as \begin{equation*} f^M(z)=\sum_{S \subseteq V} f(S) \prod_{v \in S} z_v \prod_{v \notin S} (1-z_v). \end{equation*} An alternative way to define $f^M$ is in terms of expectations. Consider a vector $z \in [0,1]^V$ and let $R^z$ denote a random set that contains each element $v$ independently with probability $z_{v}$. Then $f^M(z)= \mathbb{E}[f(R^z)]$, where the expectation is taken over random sets generated from the probability distribution induced by $z$. This gives rise to natural single-agent and multi-agent relaxations. The {\em single-agent multilinear extension relaxation} is: \begin{equation} \label{SA-ME} (\mbox{SA-ME}) ~~~~ \max f^M(z): z \in P(\mathcal{F}), \end{equation} and the {\em multi-agent multilinear extension relaxation} is: \begin{equation} \label{MA-ME} (\mbox{MA-ME}) ~~~~ \max \sum_{i=1}^k f^M_i(z_i): z_1 + z_2 + \cdots + z_k \in P(\mathcal{F}), \end{equation} where $P(\mathcal{F})$ denotes some relaxation of $conv(\{\chi^S:S\in \mathcal{F}\})$. While the relaxation (SA-ME) has been used extensively \cite{calinescu2011maximizing,lee2009non,feldman2011unified,ene2016constrained,buchbinder2016constrained} in the submodular maximization literature, we are not aware of any previous work using the multi-agent relaxation (MA-ME). The following result shows that when $f$ is nonnegative submodular and the formulation $P(\mathcal{F})$ is downwards closed and admits a polytime separation oracle, the relaxation (SA-ME) can be solved approximately in polytime. \begin{theorem}[\cite{buchbinder2016constrained,vondrak2008optimal}] \label{thm:multilinear-solve-monot} Let $f:2^V \to \mathbb{R}_+$ be a nonnegative submodular function and $f^M:[0,1]^V \to \mathbb{R}_+$ its multilinear extension.
Let $P \subseteq [0,1]^V$ be any downwards closed polytope that admits a polytime separation oracle, and denote $OPT = \max f^M(z): z\in P$. Then there is a polytime algorithm (\cite{buchbinder2016constrained}) that finds $z^* \in P$ such that $f^M(z^*) \geq 0.385 \cdot OPT$. Moreover, if $f$ is monotone there is a polytime algorithm (\cite{vondrak2008optimal}) that finds $z^* \in P$ such that $f^M(z^*) \geq (1-1/e) OPT$. \end{theorem} For monotone objectives the assumption that $P$ is downwards closed is without loss of generality. This is not the case, however, when the objective is nonmonotone. Moreover, this restriction is unavoidable, as Vondr{\'a}k \cite{vondrak2013symmetry} showed that no algorithm can find $z^* \in P$ such that $f^M(z^*) \geq c \cdot OPT$ for any constant $c>0$ when $P$ admits a polytime separation oracle but is not downwards closed. We can solve (MA-ME) to the same approximation factor as (SA-ME). This follows from the fact that the MA problem has the form $\{ \max g(w) : w \in W \subseteq \mathbb{R}^{nk} \}$ where $g(w)=g(z_1,z_2,\ldots,z_k)=\sum_{i \in [k]} f^M_i(z_i)$ and $W$ is the downwards closed polytope $\{w=(z_1,...,z_k): \sum_i z_i \in P(\mathcal{F})\}$. Clearly we have a polytime separation oracle for $W$ given that we have one for $P(\mathcal{F})$. Moreover, it is straightforward to check (see Lemma \ref{lem:max-multilinear} in Appendix \ref{sec:Appendix-Invariance}) that $g(w)=f^M(w)$, where $f$ is the function on the lifted space after applying the lifting reduction from Section \ref{sec:lifting-reduction}. Thus, $g$ is the multilinear extension of a nonnegative submodular function, and we can now use Theorem \ref{thm:multilinear-solve-monot}. \subsection{A tight multi-agent gap of $1-1/e$} \label{sec:max-MA-gap} In this section we present the proof of Theorem \ref{thm:max-MA-gap}. The high-level idea behind our reduction is the same as in the minimization setting (see Section \ref{sec:generic-min-approx}).
That is, we start with an (approximate) optimal solution $z^* = z_1^* + z_2^* + \cdots + z^*_k$ to the multi-agent (MA-ME) relaxation and build a new feasible solution $\hat{z} = \hat{z}_1 + \hat{z}_2 + \cdots + \hat{z}_k$ where the $\hat{z}_i$ have supports $V_i$ that are pairwise disjoint. We then use for the SA rounding step the single-agent problem (as previously defined in (\ref{g-function}) for the minimization setting) $\max g(S): S \in \mathcal{F}$ where $g(S)=\sum_{i \in [k]} f_i(S \cap V_i)$. Similarly to Proposition \ref{prop:g-function} which dealt with the Lov\'asz extension, we have the following result for the multilinear extension. \begin{proposition} \label{prop:max-g-function} Let $z = \sum_{i\in [k]} z_i$ be a feasible solution to (MA-ME) where the vectors $z_i$ have pairwise disjoint supports $V_i$. Then $g^M(z) = \sum_{i \in [k]} f^M_i(z|_{V_i}) = \sum_{i \in [k]} f^M_i(z_i).$ \end{proposition} We now have all the ingredients to prove our main result in the maximization setting. \begin{theorem} If there is a (polytime) $\alpha(n)$-approximation for monotone SO($\mathcal{F}$) maximization based on rounding (SA-ME), then there is a (polytime) $(1-1/e) \cdot \alpha(n)$-approximation for monotone MASO($\mathcal{F}$) maximization. Furthermore, given a downwards closed family $\mathcal{F}$, if there is a (polytime) $\alpha(n)$-approximation for nonmonotone SO($\mathcal{F}$) maximization based on rounding (SA-ME), then there is a (polytime) $0.385 \cdot \alpha(n)$-approximation for nonmonotone MASO($\mathcal{F}$) maximization. \end{theorem} \begin{proof} We discuss first the case of monotone objectives. {\sc STEP 1.} Let $z^* = z_1^* + z_2^* + \cdots + z^*_k$ denote an approximate solution to (MA-ME) obtained via Theorem \ref{thm:multilinear-solve-monot}, and let $OPT_{frac}$ be the value of an optimal solution. We then have that $\sum_{i \in [k]} f^M_i(z^*_i) \geq (1-1/e) OPT_{frac} \geq (1-1/e) OPT_{MA}$. 
{\sc STEP 2.} For an element $v \in V$ let $\mathbf{e_v}$ denote the characteristic vector of $\{v\}$, i.e. the vector in $\mathbb{R}^V$ which has value $1$ in the $v$-th component and zero elsewhere. Notice that by definition of the multilinear extension we have that the functions $f^M_i$ are linear along directions $\mathbf{e_v}$ for any $v \in V$. It then follows that the function \begin{equation*} h(t) = f^M_i (z^*_i + t \mathbf{e_v} ) + f^M_{i'} (z^*_{i'} - t \mathbf{e_v} ) + \sum_{j\in [k], j\neq i,i'} f^M_j(z^*_j) \end{equation*} is also linear for any $v\in V$ and $i \neq i' \in [k]$, since it is the sum of linear functions (in $t$). In particular, given any $v \in V$ such that there exist $i \neq i' \in [k]$ with $z^*_i(v),z^*_{i'}(v)>0$, there is always a choice so that increasing one component and decreasing the other by the same amount does not decrease the objective value. We use this as follows. Let $v \in V$ be such that there exist $i \neq i' \in [k]$ with $z^*_i(v),z^*_{i'}(v)>0$. Then, we either set $z^*_i(v) = z^*_i(v) + z^*_{i'}(v)$ and $z^*_{i'}(v) = 0$, or $z^*_{i'}(v) = z^*_i(v) + z^*_{i'}(v)$ and $z^*_i(v) = 0$, whichever does not decrease the objective value. We repeat until the vectors $z^*_i$ have pairwise disjoint support. Let us denote these new vectors by $\hat{z}_i$ and let $\hat{z}= \sum_{i\in [k]} \hat{z}_i$. Then notice that the vector $z^* = \sum_{i\in [k]} z^*_i$ remains invariant after performing each of the above updates (i.e. $\hat{z} = z^*$), and hence the new vectors $\hat{z}_i$ remain feasible. \iffalse Let $\bar{z} = (z^*_1,z^*_2,\ldots,z^*_k)$. From Lemma \ref{lem:max-multilinear} we know that $f^M(\bar{z}) = \sum_{i \in [k]} f^M_i(z^*_i)$. Moreover, since $f^M$ is the multilinear extension of a submodular function $f$, it follows from Lemma \ref{lem:multilinear-convex} that $f^M$ is convex in the directions $\bf{e_i} - \bf{e_j}$.
That is, given any two components $z^*_i(v_j)$ and $z^*_{i'}(v_{j'})$ of the vector $\bar{z}=(z^*_1,z^*_2,\ldots,z^*_k)$, there is always a choice so that increasing one component and decreasing the other by the same amount does not decrease the objective value. We use this as follows. Let $v \in V$ be such that there exist $i \neq i' \in [k]$ with $z^*_i(v),z^*_{i'}(v)>0$. Then, we either set $z^*_i(v) = z^*_i(v) + z^*_{i'}(v)$ and $z^*_{i'}(v) = 0$, or $z^*_{i'}(v) = z^*_i(v) + z^*_{i'}(v)$ and $z^*_i(v) = 0$, whichever does not decrease the objective value. We repeat until the vectors $z^*_i$ have pairwise disjoint support. Notice that the vector $z^* = \sum_{i\in [k]} z^*_i$ remains invariant after performing each of these changes, and hence the new modified vectors $z^*_i$ remain feasible at all times. Let us denote the new vectors by $\hat{z}_i$ and let $\hat{z}= \sum_{i\in [k]} \hat{z}_i$. \fi {\sc STEP 3.} In the last step we use the function $g$ defined in (\ref{g-function}), with sets $V_i$ corresponding to the support of the $\hat{z}_i$. Given our $\alpha$-approximation rounding assumption for (SA-ME), we can round $\hat{z}$ to find a set $\hat{S}$ such that $g(\hat{S})\geq \alpha g^M(\hat{z})$. Then, by setting $\hat{S}_i = \hat{S} \cap V_i$ we obtain a MA solution satisfying \begin{equation*} \sum_{i \in [k]} f_i(\hat{S}_i) = g(\hat{S}) \geq \alpha g^M(\hat{z}) = \alpha \sum_i f^M_i (\hat{z}_i) \geq \alpha \sum_i f^M_i (z^*_i) \geq \alpha (1-1/e) OPT_{MA}, \end{equation*} where the second equality follows from Proposition \ref{prop:max-g-function}. This completes the proof for monotone objectives. In the nonmonotone case the proof is very similar. Here we restrict our attention to downwards closed families, since then we can get a $0.385$-approximation at STEP 1 via Theorem \ref{thm:multilinear-solve-monot}. We then apply STEP 2 and 3 in the same fashion as we did for monotone objectives. 
This leads to a $0.385 \cdot \alpha(n)$-approximation for the multi-agent problem. \qed \end{proof} \subsection{Invariance under the lifting reduction} \label{sec:reduction-properties} In Section \ref{sec:max-MA-gap} we established a MA gap of $(1-1/e)$ for monotone objectives and of $0.385$ for nonmonotone objectives and downwards closed families based on the multilinear formulations. In this section we describe several families with an (optimal) MA gap of $1$. Examples of these family classes include spanning trees, matroids, and $p$-systems. Moreover, the reduction in this case is completely black box, and hence does not depend on the multilinear (or any other particular) formulation. We saw in Section \ref{sec:lifting-reduction} that if the original functions $f_i$ are all submodular, then the lifted function $f$ is also submodular. We now focus on the properties of the original families $\mathcal{F}_i$ and $\mathcal{F}$ that are also preserved under the lifting reduction. We show, for instance, that if the family $\mathcal{F}$ induces a matroid (or more generally a $p$-system) over the original ground set $V$, then so does the family $\mathcal{F}'$ over the lifted space $E$.
We summarize these results in Table \ref{table:properties-preserved}, and present most of the proofs in this section. We next discuss some of the algorithmic consequences of these invariance results. \begin{table}[H] \caption{Invariant properties under the lifting reduction} \label{table:properties-preserved} \resizebox{\linewidth}{!}{ \begin{tabular}{|c|c|c|c|} \hline & \textbf{Multi-agent problem} & \textbf{Single-agent (i.e. reduced) problem} & \textbf{Result}\tabularnewline \hline 1 & $f_i$ submodular & $f$ submodular & \cite{lehmann2001combinatorial} \tabularnewline \hline 2 & $f_i$ monotone & $f$ monotone & \cite{lehmann2001combinatorial} \tabularnewline \hline 3 & $(V,\mathcal{F})$ a $p$-system & $(E,\mathcal{F}')$ a $p$-system & Prop \ref{prop:p-systems} \tabularnewline \hline 4 & $\mathcal{F}$ = bases of a $p$-system & $\mathcal{F}'$ = bases of a $p$-system & Corollary \ref{cor:p-systems bases} \tabularnewline \hline 5 & $(V,\mathcal{F})$ a matroid & $(E,\mathcal{F}')$ a matroid & Corollary \ref{cor:matroid-invariant}\tabularnewline \hline 6 & $\mathcal{F}$ = bases of a matroid & $\mathcal{F}'$ = bases of a matroid & Corollary \ref{cor:matroid-bases} \tabularnewline \hline 7 & $(V,\mathcal{F})$ a $p$-matroid intersection & $(E,\mathcal{F}')$ a $p$-matroid intersection & Appendix \ref{sec:Appendix-Invariance} \tabularnewline \hline 8 & $(V,\mathcal{F}_{i})$ a matroid for all $i\in[k]$ & $(E,\mathcal{H})$ a matroid & Appendix \ref{sec:Appendix-Invariance} \tabularnewline \hline 9 & $\mathcal{F}_{i}$ a ring family for all $i\in[k]$ & $\mathcal{H}$ a ring family & Appendix \ref{sec:Appendix-Invariance} \tabularnewline \hline 10 & $\mathcal{F}=$ forests (resp.
spanning trees) & $\mathcal{F}'=$ forests (resp. spanning trees) & Section \ref{sec:reduction-properties}\tabularnewline \hline 11 & $\mathcal{F}=$ matchings (resp. perfect matchings) & $\mathcal{F}'=$ matchings (resp. perfect matchings) & Section \ref{sec:reduction-properties}\tabularnewline \hline 12 & $\mathcal{F}=$ $st$-paths & $\mathcal{F}'=$ $st$-paths & Section \ref{sec:reduction-properties}\tabularnewline \hline \end{tabular} } \end{table} In the setting of MASO (i.e. (\ref{eqn:MA})) this invariance allows us to leverage several results from the single-agent to the multi-agent setting. These are based on the following result, which uses the fact that the size of the lifted space $E$ is $nk$. \begin{theorem} \label{thm:max-invariance1} Let $\mathcal{F}$ be a matroid, a $p$-matroid intersection, or a $p$-system. If there is a (polytime) $\alpha(n)$-approximation algorithm for monotone (resp. nonmonotone) SO($\mathcal{F}$) maximization (resp. minimization), then there is a (polytime) $\alpha(nk)$-approximation algorithm for monotone (resp. nonmonotone) MASO($\mathcal{F}$) maximization (resp. minimization). \end{theorem} In both the monotone and nonmonotone maximization settings, the approximation factors $\alpha(n)$ for the family classes described in the theorem above are independent of $n$. Hence, we immediately get that $\alpha(nk)=\alpha(n)$ for these cases, and thus the approximation factors for the corresponding MA problems are the same as for the SA versions. In our MA gap terminology this implies a MA gap of 1 for such problems. In the setting of CMASO (i.e. (\ref{ma})) the results described in entries $8$ and $9$ of Table \ref{table:properties-preserved} provide additional modelling flexibility. This allows us to combine several constraints while keeping the approximation factors fairly good.
For instance, for a monotone maximization instance of CMASO where $\mathcal{F}$ corresponds to a $p$-matroid intersection and the $\mathcal{F}_i$ are all matroids, the above invariance results lead to a $(p+1+\epsilon)$-approximation. We now prove some of the results from Table \ref{table:properties-preserved}. We start by presenting some definitions that will be useful. For a subset of edges $S \subseteq E$ we define its \emph{coverage} $cov(S)$ as the set of nodes $v \in V$ saturated by $S$. That is, $v\in cov(S)$ if there exists $i\in [k]$ such that $(i,v)\in S$. By the definition of $\mathcal{F}'$, it is straightforward to see that for each $S \subseteq E$ we have \begin{equation} \label{def:family-H} S \in \mathcal{F}' \iff cov(S) \in \mathcal{F} \mbox{ and } |S|=|cov(S)|. \end{equation} For a set $S \subseteq E$, a set $B\subseteq S$ is called a \emph{basis} of $S$ if $B$ is an inclusion-wise maximal independent subset of $S$. Our next result describes how bases and their cardinalities behave under the lifting reduction.
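The coverage map and the characterization (\ref{def:family-H}) are straightforward to put in code. The following Python sketch is ours; the set representation and the membership oracle \texttt{in\_F} are illustrative assumptions, not notation from the text:

```python
# Sketch of cov and the membership test (def:family-H). A lifted set S is
# represented as a set of (agent, element) pairs, and the original family F
# as a membership oracle in_F -- both representations are our own choices.

def cov(S):
    """Coverage of a lifted set S: the original elements it saturates."""
    return {v for (_i, v) in S}

def in_lifted_family(S, in_F):
    """S is in F' iff cov(S) is in F and no element gets two agents."""
    return in_F(cov(S)) and len(S) == len(cov(S))

# Toy example: F = all subsets of {"a", "b"}.
in_F = lambda U: U <= {"a", "b"}
S_good = {(1, "a"), (2, "b")}   # a valid assignment
S_bad = {(1, "a"), (2, "a")}    # "a" assigned to two agents: |S| > |cov(S)|
```

The second conjunct of the test is exactly the condition $|S|=|cov(S)|$, ruling out sets that assign one element to several agents.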
\begin{lemma} \label{lem:bases-invariance} Let $S$ be an arbitrary subset of $E$. Then for any basis $B$ (over $\mathcal{F}'$) of $S$ there exists a basis $B'$ (over $\mathcal{F}$) of $cov(S)$ such that $|B'|=|B|$. Moreover, for any basis $B'$ of $cov(S)$ there exists a basis $B$ of $S$ such that $|B|=|B'|$. \end{lemma} \begin{proof} For the first part, let $B$ be a basis of $S$ and take $B':=cov(B)$. Since $B \in \mathcal{F}'$ we have by (\ref{def:family-H}) that $B'\in \mathcal{F}$ and $|B'| = |B|$. Now, if $B'$ is not a basis of $cov(S)$ then we can find an element $v \in cov(S)-B'$ such that $B'+v \in \mathcal{F}$. Moreover, since $v \in cov(S)$ there exists $i \in [k]$ such that $(i,v)\in S$. But then we have that $B+(i,v) \subseteq S$ and $B+(i,v)\in \mathcal{F}'$, contradicting the fact that $B$ is a basis of $S$. For the second part, let $B'$ be a basis of $cov(S)$. For each $v \in B'$ let $i_v$ be such that $(i_v,v) \in S$, and take $B:=\{(i_v,v): v \in B'\}$. It is clear by definition of $B$ that $cov(B)=B'$ and $|B|=|B'|$. Hence $B \in \mathcal{F}'$ by (\ref{def:family-H}). If $B$ is not a basis of $S$ there exists an edge $(i,v)\in S-B$ such that $B+(i,v)\in \mathcal{F}'$. But then by (\ref{def:family-H}) we have that $cov(B+(i,v))\in \mathcal{F}$ and $B'\subsetneq cov(B+(i,v))\subseteq cov(S)$, a contradiction since $B'$ is a basis of $cov(S)$. \qed \end{proof} We say that $(V,\mathcal{F})$ is a \emph{$p$-system} if for each $U \subseteq V$, the cardinality of the largest basis of $U$ is at most $p$ times the cardinality of the smallest basis of $U$. The following result is a direct consequence of Lemma \ref{lem:bases-invariance}. \begin{proposition} \label{prop:p-systems} If $(V,\mathcal{F})$ is a $p$-system, then $(E,\mathcal{F}')$ is a $p$-system.
\end{proposition} \begin{corollary} \label{cor:p-systems bases} If $\mathcal{F}$ corresponds to the set of bases of a $p$-system $(V,\mathcal{I})$, then $\mathcal{F}'$ also corresponds to the set of bases of some $p$-system $(E,\mathcal{I}')$. \end{corollary} \begin{proof} Consider $(E,\mathcal{I}')$ where $\mathcal{I}':=\{S\subseteq E: cov(S)\in \mathcal{I} \mbox{ and } |cov(S)|=|S|\}$. Then by Proposition \ref{prop:p-systems} we have that $(E,\mathcal{I}')$ is a $p$-system. It is now straightforward to check that $\mathcal{F}'$ corresponds precisely to the set of bases of $(E,\mathcal{I}')$. \qed \end{proof} The following two results follow from Proposition \ref{prop:p-systems} and Corollary \ref{cor:p-systems bases}, together with the fact that matroids are precisely the class of $1$-systems. \begin{corollary} \label{cor:matroid-invariant} If $(V,\mathcal{F})$ is a matroid, then $(E,\mathcal{F}')$ is a matroid. \end{corollary} \begin{corollary} \label{cor:matroid-bases} If $\mathcal{F}$ is the set of bases of some matroid $\mathcal{M}=(V,\mathcal{I})$, then $\mathcal{F}'$ is the set of bases of some matroid $\mathcal{M}'=(E,\mathcal{I}')$. \end{corollary} We now focus on families defined over the set of edges of some graph $G$. To be consistent with our previous notation, we denote by $V$ the set of edges of $G$, since this is the ground set of the original problem.
The lifting reduction is based on the idea of making $k$ disjoint copies of each original element and visualizing the new ground set (or lifted space) as edges in a bipartite graph. However, when the original ground set corresponds to the set of edges of some graph $G$, we may instead think of the lifted space as the set of edges of the graph $G'$ obtained by taking $k$ disjoint copies of each original edge. We think of the edge that corresponds to the $i$th copy of $v$ as assigning element $v$ to agent $i$. We can formalize this by defining a mapping $\pi:E\to E'$ that takes an edge $(i,v)\in E$ from the lifted space to the edge in $G'$ that corresponds to the $i$th copy of the original edge $v$. It is clear that $\pi$ is a bijection. Moreover, notice that given any graph $G$ and family $\mathcal{F}$ of forests (as subsets of edges) of $G$, the bijection $\pi : E \to E'$ also satisfies that $\bar{\mathcal{F}}:=\{\pi (S): S \in \mathcal{F}'\}$ is precisely the family of forests of $G'$. That is, there is a one-to-one correspondence between forests of $G'$ and assignments $S_1 \uplus S_2 \uplus \cdots \uplus S_k = S \in \mathcal{F}$ for the original MA problem where $S$ is a forest of $G$. The same holds for spanning trees, matchings, perfect matchings, and $st$-paths. This observation becomes algorithmically useful given that $G'$ has the same number of nodes as $G$. Thus, any approximation factor or hardness result for the above combinatorial structures that depends on the number of nodes (and not on the number of edges) of the underlying graph remains the same in the MA setting. We note that this explains why for the family of spanning trees, perfect matchings, and $st$-paths, the approximation factors established in \cite{goel2009approximability} for the monotone minimization problem are the same for both the SA and MA versions. Our next result summarizes this.
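As a concrete illustration, the construction of $G'$ can be sketched in a few lines of Python (the edge-list representation and the labeling of copies are our own assumptions): picking the $i$th copy of an edge encodes assigning that edge to agent $i$, while the node set is untouched.

```python
# Lifted graph G': k parallel copies of each edge of G, one per agent.
# The edge-list representation and the (i, e) labeling are our own choices;
# the pair (i, e) plays the role of the bijection pi from the text.

def lift_graph(edges, k):
    """Return the edges of G': the i-th copy of edge e is labeled (i, e)."""
    return [(i, e) for e in edges for i in range(1, k + 1)]

edges_G = [("u", "v"), ("v", "w")]      # a 2-edge path
lifted = lift_graph(edges_G, k=3)       # k * |E(G)| parallel edges in G'

# G' has more edges than G but exactly the same node set.
nodes_G = {x for e in edges_G for x in e}
nodes_Gp = {x for (_i, e) in lifted for x in e}
```

Since the node set is preserved, any guarantee parameterized by the number of nodes carries over verbatim to the lifted instance.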
\begin{theorem} \label{thm:max-invariance2} Let $\mathcal{F}$ be the family (seen as edges) of forests, spanning trees, matchings, perfect matchings, or $st$-paths of a graph $G$ with $m$ nodes and $n$ edges. Then, if there is a (polytime) $\alpha(n)$-approximation algorithm for monotone (resp. nonmonotone) SO($\mathcal{F}$) maximization (resp. minimization), there is a (polytime) $\alpha(nk)$-approximation algorithm for monotone (resp. nonmonotone) MASO($\mathcal{F}$) maximization (resp. minimization). Moreover, if there is a (polytime) $\alpha(m)$-approximation algorithm for monotone (resp. nonmonotone) SO($\mathcal{F}$) maximization (resp. minimization), then there is a (polytime) $\alpha(m)$-approximation algorithm for monotone (resp. nonmonotone) MASO($\mathcal{F}$) maximization (resp. minimization). \end{theorem} \section{Conclusion} \label{sec:conclusions} A number of interesting questions remain. Perhaps the main one is whether the $O(\log^2 (n))$ MA gap for minimization can be improved to $O(\log n)$. We have shown this is the case for bounded blocker and crossing families. Another question is whether the $\alpha \log^2 (n)$ and $\alpha k$ approximations can be made truly black box, i.e., independent of the convex formulation. In separate work \cite{santiago2016multivariate} we discuss multivariate submodular objectives. We show that our reductions for maximization remain well-behaved algorithmically, and this opens up more tractable models. This is the topic of planned future work. \appendix \begin{center} \large \textbf{APPENDIX} \end{center} \section{Upwards-closed (aka blocking) families} \label{sec:blocking} In this section, we give some background for blocking families. As our work for minimization is restricted to monotone functions, we can often convert an arbitrary set family into its upwards-closure (i.e., a blocking version of it) and work with it instead. We discuss this reduction as well.
The technical details discussed in this section are fairly standard and we include them for completeness. Several of these results have already appeared in \cite{iyer2014monotone}. \subsection{Blocking families and a natural relaxation for the integral polyhedron} \label{sec:block} A set family $\mathcal{F}$ over a ground set $V$ is {\em upwards-closed} if $F \in \mathcal{F}$ and $F \subseteq F'$ imply that $F' \in \mathcal{F}$; such families are sometimes referred to as {\em blocking families}. Examples include vertex covers, and set covers more generally, whereas spanning trees do not form such a family. For a blocking family $\mathcal{F}$ one often works with the induced sub-family $\mathcal{F}^{min}$ of minimal sets. Then $\mathcal{F}^{min}$ is a {\em clutter}, that is, it does not contain a pair of comparable sets $F \subset F'$. If $\mathcal{F}$ is a clutter, then $\mathcal{F}=\mathcal{F}^{min}$ and there is an associated {\em blocking} clutter $\mathcal{B}(\mathcal{F})$, which consists of the minimal sets $B$ such that $B \cap F \neq \emptyset$ for each $F \in \mathcal{F}$. We refer to $\mathcal{B}(\mathcal{F})$ as the {\em blocker} of $\mathcal{F}$. One also checks that for an arbitrary upwards-closed family $\mathcal{F}$, we have the following. \begin{claim}[Lehman] \label{claim:blocker} \begin{enumerate} \item $F \in \mathcal{F}$ if and only if $F \cap B \neq \emptyset$ for all $B \in \mathcal{B}(\mathcal{F}^{min})$. \item $\mathcal{B}(\mathcal{B}(\mathcal{F}^{min})) = \mathcal{F}^{min}$. \end{enumerate} \end{claim} \noindent Thus the significance of blockers is that one may assert membership in an upwards-closed family $\mathcal{F}$ by checking intersections with sets from the blocker $\mathcal{B}(\mathcal{F}^{min})$. If we define $\mathcal{B}(\mathcal{F})$ to be the family of minimal sets which intersect every element of $\mathcal{F}$, then one checks that $\mathcal{B}(\mathcal{F})=\mathcal{B}(\mathcal{F}^{min})$.
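For small ground sets the blocker, and with it both parts of Claim \ref{claim:blocker}, can be verified by brute force. The following Python sketch is ours (the frozenset representation is an assumption for illustration):

```python
from itertools import chain, combinations

# Brute-force blocker of a clutter over a tiny ground set.

def all_subsets(ground):
    s = list(ground)
    return (frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

def minimal_sets(family):
    """Keep only the inclusion-wise minimal members."""
    family = set(family)
    return {A for A in family if not any(B < A for B in family)}

def blocker(clutter, ground):
    """Minimal sets meeting every member of the clutter."""
    hitting = {A for A in all_subsets(ground) if all(A & F for F in clutter)}
    return minimal_sets(hitting)

F = {frozenset({1, 2}), frozenset({2, 3})}
B = blocker(F, {1, 2, 3})    # {{2}, {1, 3}}
BB = blocker(B, {1, 2, 3})   # Lehman duality: B(B(F)) == F
```

On this toy clutter the blocker consists of $\{2\}$ and $\{1,3\}$, and applying the blocker operation twice recovers $\mathcal{F}$, as in part 2 of the claim.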
These observations lead to a natural relaxation for minimization problems over the integral polyhedron $P(\mathcal{F}) := conv(\{\chi^F: \textit{$F \in \mathcal{F}$}\})$. The {\em blocking formulation} for $\mathcal{F}$ is: \begin{equation} \label{eqn:polyhedron} P^*(\mathcal{F}) = \{z \in \mathbb{R}^V_{\geq 0}: z(B) \geq 1~~\forall B \in \mathcal{B}(\mathcal{F}^{min})=\mathcal{B}(\mathcal{F})\}. \end{equation} \noindent Clearly we have $P(\mathcal{F}) \subseteq P^*(\mathcal{F})$. \subsection{Reducing to blocking families} Now consider an arbitrary set family $\mathcal{F}$ over $V$. We may define its {\em upwards closure} by $\mathcal{F}^{\uparrow}=\{F': F \subseteq F' \textit{ for some $F \in \mathcal{F}$}\}$. In this section we argue that in order to solve a monotone optimization problem over sets in $\mathcal{F}$ it is often sufficient to work over its upwards-closure. As already noted $\mathcal{B}(\mathcal{F})=\mathcal{B}(\mathcal{F}^{\uparrow}) = \mathcal{B} (\mathcal{F}^{min})$ and hence one approach is via the blocking formulation $P^*(\mathcal{F})=P^*(\mathcal{F}^{\uparrow})$. This requires two ingredients. First, we need a separation algorithm for the blocking relaxation, but indeed this is often available for many natural families such as spanning trees, perfect matchings, $st$-paths, and vertex covers. The second ingredient needed is the ability to turn an integral solution $\chi^{F'}$ from $P^*(\mathcal{F}^{\uparrow})$ or $P(\mathcal{F}^\uparrow)$ into an integral solution $\chi^F \in P(\mathcal{F})$. We now argue that this is the case if a polytime separation algorithm is available for the blocking relaxation $P^*(\mathcal{F}^{\uparrow})$ or for the polytope $P(\mathcal{F}):= conv(\{\chi^F: \textit{$F \in \mathcal{F}$}\})$. For a polyhedron $P$, we denote its {\em dominant} by $P^{\uparrow} := \{z: z \geq x \textit{~for some~} x \in P \}$.
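Once the blocker is available, membership of a point in the blocking formulation (\ref{eqn:polyhedron}) is a direct check. A minimal Python sketch (the dictionary representation of $z$ is our own assumption):

```python
# Membership in the blocking relaxation P*(F): z >= 0 componentwise and
# z(B) >= 1 for every blocker set B. Representations are our own choices.

def in_blocking_relaxation(z, blocker_sets):
    if any(val < 0 for val in z.values()):
        return False
    return all(sum(z[v] for v in B) >= 1 for B in blocker_sets)

# Vertex covers of the single edge {u, v}: here the blocker is {{u, v}}.
blocker_sets = [{"u", "v"}]
z_feasible = {"u": 0.5, "v": 0.5}     # fractional point in P*(F)
z_infeasible = {"u": 0.25, "v": 0.5}  # violates z(B) >= 1
```

The feasible fractional point above also illustrates that $P^*(\mathcal{F})$ is in general strictly larger than $P(\mathcal{F})$.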
The following observation is straightforward. \begin{claim} \label{claim:lattice-points} Let $H$ be the set of vertices of the hypercube in $\mathbb{R}^V$. Then $$H \cap P(\mathcal{F}^\uparrow) = H \cap P(\mathcal{F})^\uparrow = H \cap P^*(\mathcal{F}^\uparrow).$$ In particular we have that $\chi^S \in P(\mathcal{F})^\uparrow \iff \chi^S \in P^*(\mathcal{F}^\uparrow)$. \end{claim} We can now use this observation to prove the following. \begin{lemma} \label{lem:dominant-reduction} Assume we have a separation algorithm for $P^*(\mathcal{F}^\uparrow)$. Then for any $\chi^{S} \in P^*(\mathcal{F}^\uparrow)$ we can find in polytime $\chi^{M} \in P(\mathcal{F})$ such that $\chi^{M} \leq \chi^{S}$. \end{lemma} \begin{proof} Write $S=\{1,2,\ldots,k\}$. We run the following routine until no more elements can be removed: \vspace*{10pt} For $i \in S$\\ \hspace*{20pt} If $\chi^{S-i} \in P^*(\mathcal{F}^\uparrow)$ then $S=S-i$ \vspace*{10pt} Let $\chi^M$ be the output. We show that $\chi^M \in P(\mathcal{F})$.
Since $\chi^M \in P^*(\mathcal{F}^\uparrow)$, by Claim \ref{claim:lattice-points} we know that $\chi^M \in P(\mathcal{F})^\uparrow$. Then by definition of dominant there exists $x\in P(\mathcal{F})$ such that $x\leq \chi^M \in P(\mathcal{F})^\uparrow$. It follows that the vector $x$ can be written as $x = \sum_{i} \lambda_{i} \chi^{U_i}$ for some $U_i \in \mathcal{F}$ and $\lambda_i \in (0,1]$ with $\sum_i \lambda_i =1$. Clearly we must have that $U_i \subseteq M$ for all $i$, otherwise $x$ would have a non-zero component outside $M$. In addition, if for some $i$ we have $U_i \subsetneq M$, then there must exist some $j \in M$ such that $U_i \subseteq M-j \subsetneq M$. Hence $M-j \in \mathcal{F}^{\uparrow}$, and thus $\chi^{M-j} \in P(\mathcal{F})^\uparrow$ and $\chi^{M-j} \in P^*(\mathcal{F}^\uparrow)$. But then when component $j$ was considered in the algorithm above, we would have had $S$ such that $M \subseteq S$ and so $\chi^{S-j} \in P^*(\mathcal{F}^\uparrow)$ (that is $\chi^{S-j} \in P(\mathcal{F})^\uparrow$), and so $j$ should have been removed from $S$, contradiction. \qed \end{proof} We point out that for many natural set families $\mathcal{F}$ we can work with the relaxation $P^*(\mathcal{F}^\uparrow)$ assuming that it admits a separation algorithm. Then, if we have an algorithm which produces $\chi^{F'} \in P^*(\mathcal{F}^\uparrow)$ satisfying some approximation guarantee for a monotone problem, we can use Lemma \ref{lem:dominant-reduction} to construct in polytime $F \in \mathcal{F}$ which obeys the same guarantee. Moreover, notice that for Lemma \ref{lem:dominant-reduction} to work we do not need an actual separation oracle for $P^*(\mathcal{F}^\uparrow)$, but rather all we need is to be able to separate over $0-1$ vectors only. 
Hence, since the polyhedra $P^*(\mathcal{F}^\uparrow), \, P(\mathcal{F}^\uparrow)$ and $P(\mathcal{F})^\uparrow$ have the same $0-1$ vectors (see Claim \ref{claim:lattice-points}), a separation oracle for either $P(\mathcal{F}^\uparrow)$ or $P(\mathcal{F})^\uparrow$ would be enough for the routine of Lemma \ref{lem:dominant-reduction} to work. We now show that this is the case if we have a polytime separation oracle for $P(\mathcal{F})$. The following result shows that if we can separate efficiently over $P(\mathcal{F})$ then we can also separate efficiently over the dominant $P(\mathcal{F})^\uparrow$. \begin{claim} If we can separate over a polyhedron $P$ in polytime, then we can also separate over its dominant $P^\uparrow$ in polytime. \end{claim} \begin{proof} Given a vector $y$, we can decide whether $y \in P^\uparrow$ by solving \begin{align*} x + s = y \\ x \in P \\ s \geq 0. \end{align*} Since we can easily separate over the first and third constraints, and a separation oracle for $P$ is given (i.e. we can also separate over the set of constraints imposed by the second line), it follows that we can separate over the above set of constraints in polytime. \qed \end{proof} Now we can apply the same mechanism from Lemma \ref{lem:dominant-reduction} to turn feasible sets from $\mathcal{F}^{\uparrow}$ into feasible sets in $\mathcal{F}$.
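In code, the mechanism of Lemma \ref{lem:dominant-reduction} is a simple greedy pruning loop. The sketch below is ours; a membership oracle on $0$-$1$ vectors stands in for the separation algorithm, which by the discussion above is all the routine needs:

```python
# Greedy pruning as in Lemma (lem:dominant-reduction): keep dropping
# elements while the 0-1 point stays feasible. A membership oracle on 0-1
# vectors stands in for the separation algorithm (our simplifying assumption).

def prune_to_minimal(S, feasible):
    """Return a minimal subset M of S whose indicator vector is still feasible."""
    M = set(S)
    changed = True
    while changed:
        changed = False
        for x in sorted(M):           # snapshot; safe to remove from M below
            if feasible(M - {x}):
                M.remove(x)
                changed = True
    return M

# Toy monotone example: feasible 0-1 points are vertex covers of a 2-edge path.
edges = [("a", "b"), ("b", "c")]
is_cover = lambda M: all(u in M or v in M for (u, v) in edges)
M = prune_to_minimal({"a", "b", "c"}, is_cover)   # the minimal cover {"b"}
```

For an upwards-closed family the output is a minimal feasible set, which is exactly the kind of set the lemma recovers from a point in the dominant.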
\begin{corollary} \label{cor:dominant-reduction2} Assume we have a separation algorithm for $P(\mathcal{F})^\uparrow$. Then for any $\chi^{S} \in P(\mathcal{F})^\uparrow$ we can find in polytime $\chi^{M} \in P(\mathcal{F})$ such that $\chi^{M} \leq \chi^{S}$. \end{corollary} We conclude this section by remarking that if we have an algorithm which produces $\chi^{F'} \in P(\mathcal{F}^\uparrow)$ satisfying some approximation guarantee for a monotone problem, we can use Corollary \ref{cor:dominant-reduction2} to construct $F \in \mathcal{F}$ which obeys the same guarantee. \iffalse We conclude this section by pointing out that for many natural set families $\mathcal{F}$ we can work with the relaxation $P^*(\mathcal{F}^\uparrow)$ assuming that it admits a separation algorithm.
Then, if we have an algorithm which produces $\chi^{F'} \in P^*(\mathcal{F}^\uparrow)$ satisfying some approximation guarantee for a monotone problem, we can use Lemma \ref{lem:dominant-reduction} to construct $F \in \mathcal{F}$ which obeys the same guarantee. \fi \iffalse Alternatively, the reduction also shows that if we have a polytime separation oracle for $P(\mathcal{F})$ and an algorithm which produces $\chi^{F'}\in P(\mathcal{F})^\uparrow$ satisfying some approximation guarantee for a monotone problem, we can use \ref{} to construct $F \in \mathcal{F}$ which obeys the same guarantee. \fi \section{Relaxations for constrained submodular minimization} \label{sec:relaxations} Submodular optimization techniques for minimization on a set family have involved two standard relaxations, one being linear \cite{goel2009approximability} and one being convex \cite{chekuri2011submodular,iwata2009submodular,iyer2014monotone}. We introduce the latter in this section. \iffalse \subsection{Blocking Families and a Natural Relaxation for $\mathcal{P}(\mathcal{F})$} We will be working with upwards-closed set families $\mathcal{F}'$, over a ground set $V$. which are {\em upwards-closed}, that is, if $F \subseteq F'$ and $F \in \mathcal{F}'$, then $F' \in \mathcal{F}'$. These are sometimes referred to as {\em blocking families}. Two of the families just discussed are blocking families - vertex-covers and cardinality-constrained families - whereas spanning trees are not. For blocking families one normally works with the induced sub-family $\mathcal{F}$ of minimal sets. Then $\mathcal{F}$ has the property that it is a {\em clutter}, that is, $\mathcal{F}$ does not contain a pair of sets $F \subset F'$. The upwards-closed family determined by a clutter $\mathcal{F}$ is denoted by $\mathcal{F}^{\uparrow} = \{F: F \supseteq F', F' \in \mathcal{F}\}$. 
For such a clutter there is an associated {\em blocking} clutter $\mathcal{B}(\mathcal{F})$, which consists of the minimal sets $B$ such that $B \cap F \neq \emptyset$ for each $F \in \mathcal{F}$. We refer to $\mathcal{B}(\mathcal{F})$ as the {\em blocker} of $\mathcal{F}$. The following is straightforward. \begin{claim} \label{claim:blocker} \begin{enumerate} \item $F \in \mathcal{F}^{\uparrow}$ if and only if $F \cap B \neq \emptyset$ for all $B \in \mathcal{B}(\mathcal{F})$. \item $\mathcal{B}(\mathcal{B}(\mathcal{F}))) = \mathcal{F}$. \end{enumerate} \end{claim} \noindent The significance of this is that one may assert membership in $\mathcal{F}^{\uparrow}$ by checking intersections on sets from the blocker $\mathcal{B}(\mathcal{F})$. That is $F \in \mathcal{F}$ if and only if $F \cap B \neq \emptyset$ for each $B \in \mathcal{B}(\mathcal{F})$. This simple observation gives a starting point for exploring the integral polyhedron $P(\mathcal{F}) = \{ z \in \mathbb{R}^V: z \geq \chi^F \textit{for some $F \in \mathcal{F}$}\}$. The blocking condition gives rise to the {\em natural linear relaxation} for $P(\mathcal{F})$: \begin{equation} \label{eqn:polyhedron} P^*(\mathcal{F}) = \{z \in \mathbb{R}^V_{\geq 0}: z(B) \geq 1~~\forall ~B \in \mathcal{B}(\mathcal{F})\} \end{equation} \noindent and so $F \in \mathcal{F}$ if and only if $\chi^F \in P^*(\mathcal{F})$. \fi \subsection{A convex relaxation} \label{sec:LE-def} We will be working with upwards-closed set families $\mathcal{F}$, and their blocking relaxations $P^*(\mathcal{F})$. As we now work with arbitrary vectors $z \in [0,1]^n$, we must specify how our objective function $f(S)$ behaves on all points $z \in P^*(\mathcal{F})$. Formally, we call $g:[0,1]^V \rightarrow \mathbb{R}$ an {\em extension} of $f$ if $g(\chi^S) = f(S)$ for each $S \subseteq V$. For a submodular objective function $f(S)$ there can be many extensions of $f$ to $[0,1]^V$ (or to $\mathbb{R}^V$). 
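To make the notion of an extension concrete, here is a small illustrative Python sketch (the toy function \texttt{f} and all names are ours, not from the text). It compares two standard extensions of a submodular function on a three-element ground set: the Lov\'asz extension (defined in the next paragraph via level sets) and the multilinear extension. Both agree with $f$ on characteristic vectors, but they differ at fractional points, illustrating that extensions are far from unique.

```python
from itertools import chain, combinations

def f(S):
    """Toy submodular function on V = {0,1,2}: f(S) = min(|S|, 2)."""
    return min(len(set(S)), 2)

def lovasz(f, z):
    """Lovasz extension: integrate f over the level sets {j : z_j > t}."""
    breaks = [0.0] + sorted(set(v for v in z if v > 0)) + [1.0]
    return sum((hi - lo) * f([j for j, zj in enumerate(z) if zj > lo])
               for lo, hi in zip(breaks, breaks[1:]))

def multilinear(f, z):
    """Multilinear extension: E[f(R)], each j included independently w.p. z_j."""
    n = len(z)
    total = 0.0
    for R in chain.from_iterable(combinations(range(n), r) for r in range(n + 1)):
        p = 1.0
        for j in range(n):
            p *= z[j] if j in R else 1.0 - z[j]
        total += p * f(R)
    return total
```

Both evaluate to $f(S)$ on every $\chi^S$, yet at $z=(1/2,1/2,1/2)$ the Lov\'asz extension gives $1$ while the multilinear extension gives $11/8$.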
The most popular one has been the so-called {\em Lov\'asz Extension} (introduced in \cite{lovasz1983submodular}) due to several of its desirable properties. We present two of several equivalent definitions for the Lov\'asz Extension. Let $0 < v_1 < v_2 < ... < v_m \leq 1$ be the distinct positive values taken in some vector $z \in [0,1]^V$. We also define $v_0=0$ and $v_{m+1}=1$ (which may be equal to $v_m$). Define for each $i \in \{0,1,...,m\}$ the set $S_i=\{ j: z_j > v_i\}$. In particular, $S_0$ is the support of $z$ and $S_m=\emptyset$. One then defines: \[ f^L(z) = \sum_{i=0}^{m} (v_{i+1}-v_i) f(S_i). \] It is not hard to check that the following is an equivalent definition of $f^L$ (e.g. see \cite{chekuri2011submodular}). For a vector $z \in [0,1]^V$ and $\theta \in [0,1]$, let $z^\theta := \{v \in V: z(v)\geq \theta\}$. We then have that $$f^L(z) = \mathbb{E}_{\theta \in [0,1]} f(z^\theta) = \int_{0}^{1} f(z^\theta) d\theta,$$ where the expectation is taken over the uniform distribution in $[0,1]$. \begin{lemma} [Lov\'asz \cite{lovasz1983submodular}] The function $f^L$ is convex if and only if $f$ is submodular. \end{lemma} One could now attack constrained submodular minimization by solving the problem \begin{equation} \label{SA-LE-appendix} (\mbox{SA-LE}) ~~~~ \min f^L(z): z \in P^*(\mathcal{F}), \end{equation} and then seek rounding methods for the resulting solution. This is the approach used in \cite{chekuri2011submodular,iwata2009submodular,iyer2014monotone}. We refer to the above as the {\em single-agent Lov\'asz extension formulation}, abbreviated as (SA-LE). \iffalse We will later use two additional properties of the Lov\'asz Extension. We say that a function $g$ is (positively) {\em homogeneous} if for each $c \geq 0$, we have $g(cz) = c g(z)$. \begin{lemma} \label{lem:homogext} The Lov\'asz Extension of $f$ is homogeneous if $f$ is submodular and normalized (i.e. $f(\emptyset)=0$). 
That is for any $x \in [0,1]^n$ and $\alpha \in [0,1]$ $f^L(\alpha x)=\alpha f^L(x)$. In particular, for any set $Q$ we have $f^L((\alpha) \chi^Q) = \alpha f(Q)$. \end{lemma} \begin{proof} This follows by construction. Recall that \[ f^L(x) = \sum_{i=0}^{m} (v_{i+1}-v_i) f(S_i) \] where $v_0 = 0$, $v_{m+1} = 1$, and $0 \leq v_1 < v_2 < ... < v_m \leq 1$ are the distinct values taken in the vector $x \in [0,1]^n$. The sets $S_i$ are defined as $S_i=\{ j: x_j > v_i\}$ for each $i \in \{0,1,...,m\}$. It follows directly that \[ f^L(\alpha x) = \sum_{i=0}^{m-1} (\alpha) (v_{i+1}-v_i) f(S_i) + (1-\alpha v_m) f(\emptyset) = \alpha f^L(x) \] \qed \end{proof} \begin{lemma} \label{lem:monotoneext} If $f(S)$ is a monotone set function, then $f^L$ is a monotone function. \end{lemma} \fi \iffalse This can be used to establish the following. \begin{corollary} \label{cor:convexbound} Let $g:[0,1]^V \rightarrow \mathbb{R}$ be a convex, monotonic, homogeneous function. Suppose that $z \leq \sum_S x(S) \chi^S$, then $g(z) \leq \sum_S x(S) g(\chi^S)$. In particular, if $f^L$ is the extension of a normalized, monotonic, submodular function $f(S)$ and $\chi^Z \leq \sum_S x(S) \chi^S$, then $f(Z) \leq \sum_S x(S) f(S)$. \end{corollary} \begin{proof} We prove the latter statement, the first being similar. Without loss of generality, $C:=\sum_{S\subseteq V} x(S)>0$. Hence: \[ \begin{array}{cclr} f(Q) & = & f^L(\chi^{Q}) & (\mbox{by definition})\\ & = & \frac{C}{C}f^L(\chi^{Q})\\ & = & Cf^L(\frac{1}{C}\chi^{Q}) & (\mbox{by homogeneity})\\ & \leq & Cf^L(\frac{1}{C}\sum_{S\subseteq V} x(S)\chi^{S}) & (\mbox{by monotonicity})\\ & \leq & C\sum_{S\subseteq V}(\frac{x(S)}{C})f^L(\chi^{S}) & (\mbox{by convexity})\\ & = & \sum_{S\subseteq V}x(S)f^L(\chi^{S})\\ & = & \sum_{S\subseteq V} x(S)f(S). & (\mbox{by definition}). 
\end{array} \] \qed \end{proof} \fi \iffalse \subsection{An extended LP formulation} \label{sec:extended-lp} By Claim~\ref{claim:blocker} (see Appendix \ref{sec:blocking}), an integer programming formulation for the single-agent problem is \begin{equation} \label{eqn:X1} \begin{array}{cc} \min & \sum_{S\subseteq V}f(S)x(S)\\ \mbox{s.t.} & \sum_{v\in B}\sum_{S\ni v}x(S)\geq1,\,\forall B\in\mathcal{B}(\mathcal{F})\\ & x(S)\geq0. \end{array} \end{equation} The corresponding dual is given by \begin{equation} \label{eqn:Y1} \begin{array}{cc} \max & \sum_{B\in\mathcal{B}(\mathcal{F})}y_{B}\\ \mbox{s.t.} & \sum_{v\in S}\sum_{B\ni v}y_{B}\leq f(S),\,\forall S\subseteq V\\ & y_{B}\geq0. \end{array} \end{equation} Using the ideas from \cite{goel2009approximability}, one may solve these LPs in polytime if the blocking family is not too large. \begin{lemma} \label{lem:polybound} If $f$ is submodular and $|\mathcal{B}(\mathcal{F})| \in poly(n)$, then the linear programs (\ref{eqn:X1},\ref{eqn:Y1}) can be solved in polytime. \end{lemma} \iffalse \begin{proof} It is well-known \cite{grotschel2012geometric} that a class of LPs can be solved in polytime if (i) it admits a polytime separation algorithm and (ii) has a polynomially bounded number of variables. Moreover, if this holds, then the class of dual problems can also be solved in polytime. Hence we focus on the dual (\ref{eqn:Y1}) for which condition (ii) is assured by our assumption on $|\mathcal{B}(\mathcal{F})|$. Let's now consider its separation problem. For a fixed vector $y \geq 0$ we define $z_{y}(S):=\sum_{v\in S}\sum_{v\in B} y_{B}$ for any $S\subseteq V$. Notice that $z_{y}$ is a modular function. Hence, $f-z_{y}$ is submodular and so we can solve the problem $\min_{S\subseteq V}f(S)-z_{y}(S)$ exactly in polytime. It follows that $y$ is feasible if and only if this minimum is nonnegative. Thus, we can separate in the dual efficiently. 
\qed \end{proof} \fi The restriction that $|\mathcal{B}(\mathcal{F})| \in poly(n)$ is quite severe and we now show how it may be weakened. \fi \subsection{Tractability of the single-agent formulation (SA-LE)} \label{sec:combined} In this section we show that one may solve (SA-LE) approximately as long as a polytime separation algorithm for $P^*(\mathcal{F})$ is available. This is useful in several settings and in particular for our methods which rely on the multi-agent Lov\'asz extension formulation (discussed in the next section). \iffalse Recall from Appendix~\ref{sec:block} that $P^*(\mathcal{F}) = \{z \in \mathbb{R}^V_{\geq 0}: z(B) \geq 1~~\forall B \in \mathcal{B}(\mathcal{F}^{min})\}$. To do this, we work with a combination of the LP and the convex relaxation from the prior section. For a feasible solution $x$ to (\ref{eqn:X1}), we define its {\em image} as \[ z = \sum_S x(S) \chi^S. \] We sometimes work with an extended formulation \[ Q = \{(x,z): \mbox{$z$ is the image of feasible $x$ for (\ref{eqn:X1})}\}. \] One establishes easily that $P^*(\mathcal{F})$ is just a projection of $Q$. In fact, the convex and LP relaxations are equivalent in the following sense, due to the pleasant properties of the Lov\'asz Extension. \begin{lemma} \label{lem:equivopts} \[ \min \{ f^L(z): z \in P^*(\mathcal{F})\} = \min \{ f^L(z): (x,z) \in Q \} = \min \{ \sum_S x(S) f(S): \mbox{$(x,z) \in Q$} \}. \] Moreover, given a feasible solution $z^* \in P^*(\mathcal{F})$ we may compute $x^*$ such that $(x^*,z^*) \in Q$ and $\sum_S x^*(S) f(S) = f^L(z^*)$. This may be done in time polynomially bounded in $n$ and the encoding size of $z^*$. \end{lemma} \begin{proof} The first equality follows from the preceding lemma. For the second equality we consider $(x,z) \in Q$. By Corollary~\ref{cor:convexbound}, we have that $f^L(z) \leq \sum_S x(S) f(S)$. Hence $\leq$ holds. 
Conversely, suppose that $z^*$ is an optimal solution for the convex optimization problem and let $x^*$ be determined by the ``level sets'' associated with $z^*$ in the definition of $f^L$. By definition we have $\sum_S x^*(S) f(S) = f^L(z^*)$. Moreover, one easily checks that since $z^*(B) \geq 1$ for each blocking set $B$, we also have that $x^*$ is feasible for (\ref{eqn:X1}). Hence $\geq$ holds as well. \qed \end{proof} We make essential use of the following theorem as follows. It shows that the optimal solution for the convex programme coincides with that of a linear program. The latter LP, however, has optimal solutions have polybounded encoding sizes. This means that a polytime weak optimization routine (as given by Ellipsoid) can be turned into an polytime exact optimization routine for the convex programme (and hence by the Lemma, also for the LP). We discuss this in further detail below. \fi {\bf Polytime Algorithms.} One may apply the Ellipsoid Method to obtain a polytime algorithm which approximately minimizes a convex function over a polyhedron $K$ as long as various technical conditions hold. For instance, one could require that there are two ellipsoids $E(a,A) \subseteq K \subseteq E(a,B)$ whose encoding descriptions are polynomially bounded in the input size for $K$. We should also have polytime (or oracle) access to the convex objective function defined over ${\bf R}^n$. In addition, one must be able to solve the subgradient problem for $f$ in polytime.\footnote{For a given $y$, find a subgradient of $f$ at $y$.} One may check that the subgradient problem is efficiently solvable for Lov\'asz extensions of polynomially encodable submodular functions. We call $f$ {\em polynomially encodable} if the values $f(S)$ have encoding size bounded by a polynomial in $n$ (we always assume this for our functions).
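Concretely, for a normalized submodular $f$ (i.e.\ $f(\emptyset)=0$) a standard way to solve the subgradient problem for $f^L$ is Edmonds' greedy order: sort the coordinates of $z$ decreasingly and take the marginal values of $f$ along that order. A hedged Python sketch, with an illustrative placeholder $f$:

```python
def f(S):
    """Illustrative normalized submodular function: f(S) = min(|S|, 2)."""
    return min(len(S), 2)

def lovasz(f, z):
    """Lovasz extension via the level-set formula from the text."""
    breaks = [0.0] + sorted(set(v for v in z if v > 0)) + [1.0]
    return sum((hi - lo) * f({j for j, zj in enumerate(z) if zj > lo})
               for lo, hi in zip(breaks, breaks[1:]))

def greedy_subgradient(f, z):
    """Marginal gains of f along a decreasing order of z give a subgradient of f^L."""
    order = sorted(range(len(z)), key=lambda j: -z[j])
    w, prefix, prev = [0.0] * len(z), set(), f(set())
    for j in order:
        prefix.add(j)
        cur = f(prefix)
        w[j] = cur - prev
        prev = cur
    return w
```

One checks that $w\cdot z = f^L(z)$ and $f^L(y)\geq f^L(z)+w\cdot(y-z)$ for all $y$, so $w$ is indeed a subgradient at $z$.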
If these conditions hold, then methods from \cite{grotschel2012geometric} imply that for any $\epsilon >0$ we may find an approximately feasible solution for $K$ which is approximately optimal. By approximate here we mean for instance that the objective value is within $\epsilon$ of the real optimum. This can be done in time polynomially bounded in $n$ (size of input say) and $\log \frac{1}{\epsilon}$. Let us give a few details for our application. \iffalse In our setting, the rightmost LP from the previous lemma is essentially identical to the linear program (\ref{eqn:X1}); it just works over $Q$ which is like carrying some extra meaningless variables $z$. The preceding result says that we can work instead over the more compact (in terms of variables) space $P^*(\mathcal{F})$ at the price of using a convex objective function. \fi Our convex problem's feasible space is $P^*(\mathcal{F})$ and it is easy to verify that our optimal solutions will lie in the $0-1$ hypercube $H$. So we may define the feasible space to be $H$ and the objective function to be $g(z)=f^L(z)$ if $z \in H \cap P^*(\mathcal{F})$ and $=\infty$ otherwise. (Clearly $g$ is convex in ${\bf R}^n$ since it is a pointwise maximum of two convex functions; alternatively, one may define the Lov\'asz Extension on ${\bf R}^n$ which is also fine.) Note that $g$ can be evaluated in polytime by the definition of $f^L$ as long as $f$ is polynomially encodable. We can now easily find an ellipsoid inside $H$ and one containing $H$ each of which has poly encoding size. We may thus solve the convex problem to within $\pm \epsilon$-optimality in time bounded by a polynomial in $n$ and $\log \frac{1}{\epsilon}$. \iffalse Note that our lemmas guarantee to produce $(x^*,z^*) \in Q$ so we have exact feasibility. \fi \begin{corollary} \label{cor:ellipsoid} Consider a class of problems $\mathcal{F}, f$ for which $f$'s are submodular and polynomially-encodable in $n=|V|$. 
If there is a polytime separation algorithm for the family of polyhedra $P^*(\mathcal{F})$, then the convex program (SA-LE) can be solved to accuracy of $\pm \epsilon$ in time bounded by a polynomial in $n$ and $\log \frac{1}{\epsilon}$. \end{corollary} \iffalse One final comment about exact solvability of these problems. \iffalse Since our convex problem is equivalent to an LP (over $Q$ with nonzero coefficients on the $x$-variables), we know that an optimal solution $(x^*,z^*)$ will be determined at a vertex of the polyhedron $Q$. \fi \textcolor{red}{It is straightforward to check by contradiction that $z^*$ is in fact a vertex of $P^*(\mathcal{F})$.} As any such vertex is defined by an $n \times n$ nonsingular system, and hence by Cramer's Rule, the encoding size of $z^*$ is polynomially bounded (in $n$). \iffalse But then $x^*$ could also be viewed as a vertex to a polyhedron of the form: $\{x: A x = z^*, \, x \geq 0 \}$. The support of any such vertex is again determined by a nonsingular linear system with at most $n$ rows. As $z^*$ is polybounded, so is the encoding size of $x^*$. \fi Since $f$ and $z^*$ are {\em polynomially-encodable}, we have the the optimal solution to (SA-LE) is polynomially bounded. Hence we can pre-select a polybounded $\epsilon > 0$ which guarantees us to find the exact optimum, not just an $\epsilon$-optimal one. Thus we have the following. \begin{corollary} Consider a class of problems $\mathcal{F}, f$ for which $f$'s are submodular and polynomially-encodable in $n=|V|$. If there is a polytime separation algorithm for the family of polyhedra $P^*(\mathcal{F})$, then one may find an exact optimum $z^*$ for the convex program (SA-LE) in polytime. \end{corollary} \fi \iffalse FBS: did not get to this. The function $f^L$ is indeed well-behaved and this actually follows from Lemma~\ref{lem:equivopts}. 
Since the lefthand optimum is equal to the righthand, we know that the encoding complexity of the lefthand optimal solution is bounded by that of the LP. ...{\bf not clear.} I thought it was standard Cramer's Rule, but this may be trickier than I thought. I was hoping we would get a lemma that could replace Lemma \ref{lem:polybound}. \fi \subsection{The multi-agent formulation} \label{sec:ma extensions} The single-agent formulation (SA-LE) discussed above has a natural extension to the multi-agent setting. This was already introduced in \cite{chekuri2011submodular} for the case $\mathcal{F}=\{V\}$. \begin{equation} \label{MA-LE-appendix} (\mbox{MA-LE}) ~~~~ \min \sum_{i=1}^k f^L_i(z_i): z_1 + z_2 + \cdots + z_k \in P^*(\mathcal{F}). \end{equation} We refer to the above as the {\em multi-agent Lov\'asz extension formulation}, abbreviated as (MA-LE). We remark that we can solve (MA-LE) as long as we have polytime separation of $P^*(\mathcal{F})$. This follows the approach from the previous section (see Corollary \ref{cor:ellipsoid}) except our convex program now has $k$ vectors of variables $z_1,z_2, \ldots ,z_k$ (one for each agent) such that $z=\sum_i z_i$. \iffalse The extended formulation now forces $z \in P^*(\mathcal{F})$, and we also have constraints forcing that each $z_i$ is the image of an associated $(x_i(S))$ vector, i.e. $z_i = \sum_S x(S,i)\chi^S$. As in the SA case, for any feasible $z^*=z_1^* + \cdots + z_k^* \in P^*(\mathcal{F})$ we can find in polytime an $x^*$ that is feasible for (\ref{MA-LP1}) and such that $\sum_i f^L_i(z_i^*)=\sum_i \sum_S f_i(S) x^*(S,i)$. This uses the same plan as for Lemma \ref{lem:equivopts}. Namely, given a vector $z_i$ we can use its level sets to determine an associate ``pre-image'' vector $(x(S,i): S \subseteq V)$. 
Our arguments for tractability of the multi-agent formulation also depend on having a polytime weak optimization routine for the problem $\min \sum_i f^L_i(z_i): z_1 + z_2 + \ldots + z_k \in P^*(\mathcal{F})$. \fi This problem has the form $\{ \min g(w) : w \in W \subseteq {\bf R}^{nk} \}$ where $g$ is convex and $W$ is the full-dimensional convex body $\{w=(z_1,...,z_k): \sum_i z_i \in P^*(\mathcal{F})\}$. Clearly we have a polytime separation routine for $W$, and hence we may apply Ellipsoid as in the single-agent case. \begin{corollary} \label{cor:sep-sol-LP} Assume there is a polytime separation oracle for $P^*(\mathcal{F})$. Then we can solve the multi-agent formulation (MA-LE) in polytime. \end{corollary} \begin{corollary} \label{cor:SA-MA-LP} Assume we can solve the single-agent formulation (SA-LE) in polytime. Then we can also solve the multi-agent formulation (MA-LE) in polytime. \end{corollary} \begin{proof} If we can solve (SA-LE) in polytime then we can also separate over $P^*(\mathcal{F})$ in polynomial time. Now the statement follows from Corollary~\ref{cor:sep-sol-LP}. \qed \end{proof} \section{Dealing with some special types of nonmonotone objectives} \label{sec:nonmonotone} We present the proof of Theorem \ref{thm:sym-MSCA} discussed in Section \ref{sec:SA-MA-formulations}. \iffalse We present a detailed proof of the results discussed on Section \ref{sec:BBnonmonotone} for nonmonotone objectives for completeness. The proof follows closely the arguments presented in \cite{chekuri2011submodular} (for the symmetric case) and in \cite{ene2014hardness} (for the general case). Both of these papers worked in the setting where $\mathcal{F} = \{V\}$. \fi \iffalse \begin{theorem} \label{thm:sym-MSCA-appendix} Consider an instance of (MA-LE) where $\mathcal{F}$ is an upwards closed family and $f_i = g_i + h$ where the $g_i$ are nonnegative monotone submodular and $h$ is nonnegative symmetric submodular. 
Let $z_1+z_2+\cdots+z_k$ be a feasible solution such that $\sum_{i \in [k]} z_i \geq \chi^U$ for some $U \in \mathcal{F}$. Then there is a randomized rounding procedure that outputs an integral feasible solution $\bar{z}_1+\bar{z}_2+\cdots+\bar{z}_k$ such that $\sum_{i \in [k]} \bar{z}_i \geq \chi^U$ and $\sum_{i\in [k]} f^L_i (\bar{z}_i) \leq O(\log n) \sum_{i \in [k]} f_i^L(z_i)$ on expectation. That is, we get a subpartition $S_1,S_2,\ldots,S_k$ such that $\biguplus_{i \in [k]} S_i \supseteq U$ and $\sum_{i \in [k]}f_i(S_i) \leq O(\log n) \sum_{i \in [k]} f_i^L(z_i)$ on expectation. \end{theorem} \fi \begin{proof}[Proof of Theorem \ref{thm:sym-MSCA}] Let $z = \sum_{i \in [k]} z_i$ be a feasible solution to (MA-LE) such that $\sum_{i \in [k]} z_i \geq \chi^U$ for some $U \in \mathcal{F}$. Consider the CE-Rounding procedure below, originally described in \cite{chekuri2011submodular} for the case $\mathcal{F}=\{V\}$. \begin{algorithm}[ht] \caption{CE-Rounding} \label{alg:CE-rounding} $S \leftarrow \emptyset$ \quad /* set of assigned elements */\\ $S_i \leftarrow \emptyset$ for all $i \in [k]$ /* set of elements assigned to agent $i$ */\\ \While{$S \notin \mathcal{F}$}{ Pick $i\in [k]$ uniformly at random\\ Pick $\theta \in [0,1]$ uniformly at random \\ $S(i,\theta):= \{v \in V: z_i(v)\geq \theta\}$ \\ $S_i \leftarrow S_i \cup S(i,\theta) $\\ $S \leftarrow S \cup S(i,\theta) $\\ } /* Uncross $S_1,S_2,\ldots,S_k$ */\\ $S'_i \leftarrow S_i$ for all $i \in [k]$\\ \While{there exist $i \neq j$ such that $S'_i \cap S'_j \neq \emptyset$}{ \eIf{$h(S'_i) + h(S'_j - S'_i) \leq h(S'_i - S'_j) + h(S'_j)$}{ $S'_j \leftarrow S'_j - S'_i$\\}{ $S'_i \leftarrow S'_i - S'_j$\\} } Output $(S'_1,S'_2,\ldots,S'_k)$ \end{algorithm} It is discussed in \cite{chekuri2011submodular} and not difficult to see that the first while loop assigns all the elements from $U$ in $O(k \log |U|)$ iterations with high probability.
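As a sanity check, both phases of the rounding can be simulated on a toy instance. The Python sketch below is illustrative only (our own instance, not from the text): $\mathcal{F}=\{V\}$, $k=2$, and $h$ is the cut function of a path, which is symmetric submodular. Each uncrossing step keeps whichever of the two difference operations has the smaller total cost, so the final sets are disjoint, still cover $V$, and the total symmetric cost does not increase.

```python
import random

def h(S):
    """Cut function of the path 0-1-2-3: symmetric, submodular, nonnegative."""
    return sum(1 for a, b in [(0, 1), (1, 2), (2, 3)] if (a in S) != (b in S))

rng = random.Random(0)
V = set(range(4))
k = 2
z = [[0.5, 0.5, 1.0, 0.0],   # agent 0's fractional assignment
     [0.5, 0.5, 0.0, 1.0]]   # agent 1's; z[0] + z[1] covers chi^V

# first loop: repeatedly sample an agent i and a level set S(i, theta)
S_list, covered = [set() for _ in range(k)], set()
while covered != V:  # F = {V}, so stop once every element is assigned
    i, theta = rng.randrange(k), rng.random()
    level = {v for v in V if z[i][v] >= theta}
    S_list[i] |= level
    covered |= level

# uncrossing: make the sets disjoint without increasing total symmetric cost
before = sum(h(S) for S in S_list)
Sp = [set(S) for S in S_list]
done = False
while not done:
    done = True
    for i in range(k):
        for j in range(k):
            if i != j and Sp[i] & Sp[j]:
                if h(Sp[i]) + h(Sp[j] - Sp[i]) <= h(Sp[i] - Sp[j]) + h(Sp[j]):
                    Sp[j] -= Sp[i]
                else:
                    Sp[i] -= Sp[j]
                done = False
after = sum(h(S) for S in Sp)
```

For symmetric submodular $h$, posimodularity guarantees that the cheaper of the two options never increases $h(S'_i)+h(S'_j)$; this is the content of Lemma 3.1 in \cite{chekuri2011submodular}.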
Since $\mathcal{F}$ is upwards closed and $U \in \mathcal{F}$, this implies that the first while loop terminates in $O(k \log |U|)$ iterations with high probability. Moreover, it is clear that the uncrossing step takes a polynomial number of iterations. Let $S_1,S_2,\ldots,S_k$ be the output after the first while loop and $S'_1,S'_2,\ldots,S'_k$ the final output of the rounding. At each iteration of the first while loop, the expected cost associated with the random set $S(i,\theta)$ is given by $$\mathbb{E}_{i,\theta}[f_i(S(i,\theta))] = \frac{1}{k} \sum_{i=1}^k \mathbb{E}_{\theta}[f_i(S(i,\theta))]= \frac{1}{k} \sum_{i=1}^k f^L_i (z_i).$$ Hence, given the subadditivity of the objectives (since the functions are submodular and nonnegative), the expected cost increase at each iteration of the first while loop is upper bounded by $\frac{1}{k} \sum_{i=1}^k f^L_i (z_i)$. Since the first while loop terminates w.h.p. in $O(k \log |U|)$ iterations, it now follows by linearity of expectation that the total expected cost of $\sum_i f_i(S_i)$ is at most $O(\log |U|) \sum_{i=1}^k f^L_i (z_i)$. Finally, we use a result (see Lemma 3.1) from \cite{chekuri2011submodular} that guarantees that if $h$ is symmetric submodular, then the uncrossing step of the rounding satisfies $\sum_i h(S'_i) \leq \sum_i h(S_i)$. Moreover, by monotonicity of the $g_i$ it is also clear that $\sum_i g_i(S'_i) \leq \sum_i g_i(S_i)$. Thus, we have that $S'_1 \uplus S'_2 \uplus \cdots \uplus S'_k = S \in \mathcal{F}$ and $\sum_i f_i(S'_i) \leq \sum_i f_i(S_i) \leq O(\log |U|) \sum_{i=1}^k f^L_i (z_i)$ in expectation. This concludes the argument. \qed \end{proof} \iffalse \begin{theorem} Let $\mathcal{F}$ be an upwards closed family with a $\beta$-bounded blocker. Let the objectives be of the form $f_i = g_i + h$ where each $g_i$ is nonnegative monotone submodular. Then we have the following results for the associated multi-agent problem.
\begin{itemize} \item[\bfseries{(1)}] There is a randomized $O(\beta\log n)$-approximation algorithm in the case where $h$ is nonnegative symmetric submodular. \item[\bfseries{(2)}] There is a randomized $O(k \beta\log n)$-approximation algorithm in the case where $h$ is nonnegative submodular. \end{itemize} If $P^*(\mathcal{F})$ has a polytime separation oracle, then these are polytime algorithms. \end{theorem} \begin{proof} \textbf{(1)}. Let $z^* = z^*_1 + z^*_2 + \cdots + z^*_k$ be an optimal solution to (MA-LE) with value $OPT_{frac}$. Consider the vector $\beta z^* = \sum_{i\in [k]} \beta z^*_i$ and let $U=\{v\in V: \beta z^*(v) \geq 1\}$. Since $\mathcal{F}$ has a $\beta$-bounded blocker it follows that $U\in\mathcal{F}$. We now use the following rounding procedure described in the work of \cite{chekuri2011submodular}. \begin{algorithm}[ht] \label{alg:CE-rounding} $S \leftarrow \emptyset$ \quad /* set of assigned elements */\\ $S_i \leftarrow \emptyset$ for all $i \in [k]$ /* set of elements assigned to agent $i$ */\\ \While{$S \notin \mathcal{F}$}{ Pick $i\in [k]$ uniformily at random\\ Pick $\theta \in [0,1]$ uniformily at random \\ $S(i,\theta):= \{v \in V: \beta z^*_i(v)\geq \theta\}$ \\ $S_i \leftarrow S_i \cup S(i,\theta) $\\ $S \leftarrow S \cup S(i,\theta) $\\ } /* Uncross $S_1,S_2,\ldots,S_k$ */\\ $S'_i \leftarrow S_i$ for all $i \in [k]$\\ \While{there exist $i \neq j$ such that $S'_i \cap S'_j \neq \emptyset$}{ \eIf{$h(S'_i) + h(S'_j - S'_i) \leq h(S'_i) + h(S'_j)$}{ $S'_j \leftarrow S'_j - S'_i$\\}{ $S'_i \leftarrow S'_i - S'_j$\\} } Output $(S'_1,S'_2,\ldots,S'_k)$ \end{algorithm} It is discussed in \cite{chekuri2011submodular} and not difficult to see that the first while loop assigns all the elements from $U$ in $O(k \log n)$ iterations with high probability. Since $\mathcal{F}$ is upwards closed and $U \in \mathcal{F}$, this implies that the first while loop terminates in $O(k \log n)$ iterations with high probability. 
Moreover, it is clear that the uncrossing step takes a polynomial number of iterations. Let $S_1,S_2,\ldots,S_k$ be the output after the first while loop and $S'_1,S'_2,\ldots,S'_k$ the final output of the rounding. At each iteration of the first while loop, the expected cost associated to the random set $S(i,\theta)$ is given by $$\mathbb{E}_{i,\theta}[f_i(S(i,\theta))] = \frac{1}{k} \sum_{i=1}^k \mathbb{E}_{\theta}[f_i(S(i,\theta))]= \frac{1}{k} \sum_{i=1}^k f^L_i (\beta z^*_i) = \frac{1}{k} \sum_{i=1}^k \beta f^L_i ( z^*_i) = \frac{\beta}{k} OPT_{frac}.$$ Hence, given the subadditivity of the objectives (since the functions are submodular and nonnegative), the expected cost increase at each iteration of the first while loop is upper bounded by $\frac{\beta}{k} OPT_{frac}$. Since the first while loop terminates w.h.p. in $O(k \log n)$ iterations, it now follows by linearity of expectation that the total expected cost of $\sum_i f_i(S_i)$ is at most $O(\beta \log n) OPT_{frac}$. Finally, we use a result (see Lemma 3.1) from \cite{chekuri2011submodular} that guarantees that if $h$ is symmetric submodular, then the uncrossing step of the rounding satisfies $\sum_i h(S'_i) \leq \sum_i h(S_i)$. Moreover, by monotonicity of the $g_i$ it is also clear that $\sum_i g_i(S'_i) \leq \sum_i g_i(S_i)$. Thus, we have that $S'_1 \uplus S'_2 \uplus \cdots \uplus S'_k = S \in \mathcal{F}$ and $\sum_i f_i(S'_i) \leq \sum_i f_i(S_i) \leq O(\beta \log n) OPT_{frac}$. This concludes the argument. \textbf{(2)}. For the case where $h$ is just nonnegative submodular we use a result (see Proposition 10) from \cite{ene2014hardness}. Given a nonnegative submodular function $h$, they consider the nonnegative symmetric submodular function defined as $h'(S) = h(S) + h(V-S)$. They then show that for any partition $S_1,S_2,\ldots,S_k$ the function $h'$ satisfies that $\sum_i h'(S_i) \leq k \sum_i h(S_i)$. 
Now, given a nonnegative submodular function $h$, we can get a randomized $O(\beta \log n)$-approximation via \textbf{(1)} for the problem with $f_i = g_i + h'$. Moreover, by the above observation from \cite{ene2014hardness} it follows that then this is a randomized $O(k \beta \log n)$-approximation for the original problem with $f_i = g_i + h$. We point out that in subsequent work (\cite{mirzakhani2014sperner}) it was shown that the $k$ factor loss due to the non-symmetry is in fact necessary. \qed \end{proof} \fi \iffalse \section{Applications of bounded blockers \textcolor{red}{I WOULD STRONGLY REMOVE THIS FROM THIS SUMBISSION. WE AGREED THAT WE WANT TO TONE DOWN BOUNDED BLOCKERS AS MUCH AS POSSIBLE, SO I THINK THIS SHOULD GO. IT WAS ORIGINALLY ADDED TO THE PAPER IN ORDER TO ARGUE HOW OUR ORIGINAL SA BOUNDED BLOCKER RESULT COULD BE USEFUL, BUT NOW THAT WE KNOW OTHER PEOPLE HAVE DONE ALREADY AND THE BOUNDED BLOCKER NOTION HAS BEEN ALREADY DISCUSSED OUT THERE, I DO NOT SEE ANY REASON WHY THIS SHOULD STAY.}} \label{sec:BB-applications} We discuss several problems which can be addressed by bounded blockers. We start with some of the simplest examples which also indicate how one might find uses for Theorem~\ref{thm:device}. We later discuss applications to multi-agent minimization. \begin{example}[{\bf Edge Covers}] {\em Let $G=(V,E)$ be an undirected graph and $\mathcal{F}$ denote the family of edge covers of $G$. It follows that $\mathcal{B}(\mathcal{F})$ consists of the $n$ sets $\delta(v)$ for each $v \in V$. Theorem~\ref{thm:device} immediately implies that that the minimum submodular edge cover problem is $\Delta$-approximable for graphs of max degree $\Delta$. In particular, the $\Omega(n/log(n))$ inapproximability result from \cite{iwata2009submodular} is pinpointed on graphs with at least one high-degree node. 
} \end{example} This example is not surprising but gives clarity to the message that whenever the blocker $\mathcal{B}(\mathcal{F})$ induces a bounded-degree hypergraph, Theorem~\ref{thm:device} applies. Let's explore a few questions which can be approached like this. In \cite{svitkina2010facility} an $O(\ln n)$-approximation is given for submodular facility location, but for instances where the number of customers $n$ dwarfs the number of facilities $k$ the following is relevant. \begin{example}[{\bf Facility Location}]{\em Suppose we have a submodular facility location problem with $k$ facilities and $n$ customers. We may represent solutions as subsets of edges in a $k \times n$ bipartite graph $G$ which include exactly one edge incident to each customer node. As our submodular functions are monotone, we can work with the associated blocking family $\mathcal{F}$ for which $\mathcal{B}(\mathcal{F})$ consists of the edges $\delta_G(v)$, where $v$ is a customer node. A $k$-approximation now follows from Theorem~\ref{thm:device}. } \end{example} We consider the several examples from the broad task of finding {\em low-cost dense networks}. \begin{example}[{\bf Lightly Pruned Networks}] {\em Suppose we seek low-cost subgraphs of some graph $G$ where nodes can only have a limited number $\tau$ of their edges removed. In other words, the degree of each node $v$ should remain at least $deg(v)-\tau$. The family of subgraphs (viewed as edge subsets) $\mathcal{F}$ has as its blocker the set of all sub-stars of size $\tau+1$: $\mathcal{B}(\mathcal{F})=\{E' \subseteq \delta(v): |E'| = \tau+1, v \in V(G)\}$. When $\tau$ is small (constant or polylog) one obtains sensible approximations. This can be extended to notions of density which incorporate some tractable family of cuts $\delta(S) \in \mathcal{C}$. 
That is, where the designed subgraph must retain all but a bounded number $\tau$ of edges from each $\delta(S)$.} \end{example} \begin{example}[{\bf Girth-Widening}]{\em For an undirected graph $G$ we seek to minimize $f(J)$ where $J$ is subset of edges for which $G-J$ does not have any short cycles. For instance, given a threshold $\tau$ we want $G-J$ to not have any cycles of length at most $\tau$. We seek $J \in \mathcal{F}$ whose blocker consists of all cycles with at most $\tau$ edges. The separation can be solved in this setting since constrained shortest paths has a polytime algorithm if one of the two metrics is unit-cost. } \end{example} Decomposing (or clustering) graphs into lower diameter subgraphs is a scheme which occasionally arises (for instance, the tree-embedding work of \cite{fakcharoenphol2003tight} employs a diameter-reduction step). We close with an example in this general vein. \begin{example}[{\bf Diameter-Reduction}] {\em Suppose we would like to find a submodular-sparse cut in a graph $G$ which decomposes it into subgraphs $G_1,G_2, \ldots$ each of which has a bounded diameter $\tau$. We are then looking at a submodular minimization problem whose bounded blocker consists of paths of length exactly $\tau+1$. \\} \end{example} \fi \section{Invariance under the lifting reduction} \label{sec:Appendix-Invariance} \begin{corollary} \label{Corollary p-matroid intersection}If $(V,\mathcal{F})$ is a p-matroid intersection, then so is $(E,\mathcal{F}')$.\end{corollary} \begin{proof} Let $\mathcal{F}=\cap_{i=1}^{p}\mathcal{I}_{i}$ for some matroids $(V,\mathcal{I}_{i})$. 
Then we have that \begin{align*} \mathcal{F}'= & \{S\subseteq E:S_{1}\uplus \cdots\uplus S_{k}\in\mathcal{F}\}\\ = & \{S\subseteq E:S_{1}\uplus \cdots\uplus S_{k}\in\cap_{i=1}^{p}\mathcal{I}_{i}\}\\ = & \{S\subseteq E:S_{1}\uplus \cdots\uplus S_{k}\in\mathcal{I}_{i},\forall i\in[p]\}\\ = & \bigcap_{i\in[p]}\{S\subseteq E:S_{1}\uplus \cdots\uplus S_{k}\in\mathcal{I}_{i}\}\\ = & \bigcap_{i\in[p]}\mathcal{I}_{i}'. \end{align*} Moreover, from Corollary \ref{cor:matroid-invariant} we know that $(E,\mathcal{I}_{i}')$ is a matroid for each $i\in[p]$, and the result follows. \qed \end{proof} \iffalse \begin{corollary} \label{Corollary bases-}Assume $\mathcal{F}$ is the set of bases of some matroid $\mathcal{M}=(V,\mathcal{I})$, then $\mathcal{H}$ is the set of bases of some matroid $\mathcal{M}'=(E,\mathcal{I}')$. \end{corollary} \begin{proof} Let $\mathcal{F}=\mathcal{B}(\mathcal{M})$ and define $\mathcal{I}'=\{S\subseteq E:S_{1}\uplus \cdots\uplus S_{k}\in\mathcal{I}\}.$ From Corollary \ref{cor:matroid-invariant} we know that $\mathcal{M}'=(E,\mathcal{I}')$ is a matroid. We show that $\mathcal{H}=\mathcal{B}(\mathcal{M}')$. Let $r_{\mathcal{M}}$ and $r_{\mathcal{M}'}$ denote the rank of $\mathcal{M}$ and $\mathcal{M}'$ respectively. Then notice that \begin{align*} \mathcal{H}= & \{S\subseteq E:S_{1}\uplus \cdots\uplus S_{k}\in\mathcal{F}\}\\ = & \{S\subseteq E:S_{1}\uplus \cdots\uplus S_{k}\in\mathcal{B}(\mathcal{M})\}\\ = & \{S\subseteq E:S_{1}\uplus \cdots\uplus S_{k}=I\in\mathcal{I},|I|=r_{\mathcal{M}}\}\\ = & \{S\subseteq E:S_{1}\uplus \cdots\uplus S_{k}\in\mathcal{I},|S|=r_{\mathcal{M}}\}, \end{align*} where the last equality follows from the fact that $|I|=|S_{1}\uplus \cdots\uplus S_{k}|=\sum_{i\in[k]}|S_{i}|=|S|$. On the other hand we have that \begin{align*} \mathcal{B}(\mathcal{M}')= & \{S\subseteq E:S\in\mathcal{I}',|S|=r_{\mathcal{M}'}\}\\ = & \{S\subseteq E:S_{1}\uplus \cdots\uplus S_{k}\in\mathcal{I},|S|=r_{\mathcal{M}'}\}. 
\end{align*} Hence, $\mathcal{H}=\mathcal{B}(\mathcal{M}')$ if and only if $r_{\mathcal{M}}=r_{\mathcal{M}'}$. But notice that for any $S\in\mathcal{I}'$ we have that $|S|=|I|$ for some $I\in\mathcal{I}$, and hence $r_{\mathcal{M}'}\leq r_{\mathcal{M}}$. Also, for any $I\in\mathcal{I}$ we can find some $S\in\mathcal{I}'$ (e.g. set $S_{1}=I$ and $S_{i}=\emptyset$ for $i\neq1$) such that $|S|=|I|$, and thus $r_{\mathcal{M}'}\geq r_{\mathcal{M}}$. It follows that $r_{\mathcal{M}}=r_{\mathcal{M}'}$ and hence $\mathcal{H}=\mathcal{B}(\mathcal{M}')$ as we wanted to show.\qed \end{proof} \fi \iffalse \begin{proposition} \label{Claim F =00003D 2^V}Let $\mathcal{F}=2^{V}$, then $(E,\mathcal{H})$ is a partition matroid.\end{proposition} \begin{proof} Assume $\mathcal{F}=2^{V}$, then \[ \begin{array}{ccl} \mathcal{H} & = & \{S\subseteq E:S_{1}\uplus \cdots\uplus S_{k}\in2^{V}\}\\ & = & \{S\subseteq E:S_{i}\cap S_{j}=\emptyset,\;\forall i\neq j\}\\ & = & \{S\subseteq E:\forall v\in V,\,|S\cap\delta(v)|\leq1\}\\ & = & \{S\subseteq E:\forall v\in V,\,|S\cap([k]\times\{v\})|\leq1\}. \end{array} \] Since $\{[k]\times\{v\}:v\in V\}$ is a partition of $E$, it follows that $(E,\mathcal{H})$ is a partition matroid over $E$.\qed \end{proof} \begin{proposition} \label{Claim F =00003D V}If $\mathcal{F}=\{V\}$, then $\mathcal{H}$ corresponds to the set of bases of a partition matroid.\end{proposition} \begin{proof} Let $\mathcal{F}=\{V\}$, then $\mathcal{H}=\{S\subseteq E:S_{1}\uplus \cdots\uplus S_{k}=V\}$. From Claim \ref{Claim F =00003D 2^V} we know that $\mathcal{M}=(E,\mathcal{I})$ is a partition matroid over $E$, where \[ \mathcal{I}:=\{S\subseteq E:\forall v\in V,\,|S\cap([k]\times\{v\})|\leq1\}. \] Moreover, it is easy to see that the set of bases of $\mathcal{M}$ corresponds exactly to $\{S\subseteq E:S_{1}\uplus \cdots\uplus S_{k}=V\}$. 
Since $\mathcal{H}=\{S\subseteq E:S_{1}\uplus \cdots\uplus S_{k}=V\}$, the claim follows.\qed \end{proof} \fi We now discuss some invariant properties with respect to the families $\mathcal{F}_i$. \begin{proposition} \label{Claim F_i matroids}If $(V,\mathcal{F}_{i})$ is a matroid for each $i\in[k]$, then $(E,\mathcal{H})$ is also a matroid.\end{proposition} \begin{proof} Let $\mathcal{M}_{i}:=(\{i\}\times V,\mathcal{I}_{i})$ for $i\in[k]$, where $\mathcal{I}_{i}:=\{\{i\}\times S:S\in\mathcal{F}_{i}\}$. Since $(V,\mathcal{F}_{i})$ is a matroid, we have that $\mathcal{M}_{i}$ is also a matroid. Moreover, by taking the matroid union of the $\mathcal{M}_{i}$'s we get $(E,\mathcal{H})$. Hence, $(E,\mathcal{H})$ is a matroid. \qed \end{proof} \begin{proposition} \label{claim ring families} If $\mathcal{F}_{i}$ is a ring family over $V$ for each $i\in[k]$, then $\mathcal{H}$ is a ring family over $E$.\end{proposition} \begin{proof} Let $S,T\in\mathcal{H}$ and notice that $S\cup T=\biguplus_{i\in[k]}(\{i\}\times(S_{i}\cup T_{i})$) and $S\cap T=\biguplus_{i\in[k]}(\{i\}\times(S_{i}\cap T_{i})$). Since $\mathcal{F}_{i}$ is a ring family for each $i\in[k]$, it follows that $S_{i}\cup T_{i}\in\mathcal{F}_{i}$ and $S_{i}\cap T_{i}\in\mathcal{F}_{i}$ for each $i\in[k]$. Hence $S\cup T\in\mathcal{H}$ and $S\cap T\in\mathcal{H}$, and thus $\mathcal{H}$ is a ring family over $E$. \qed \end{proof} We saw in Section \ref{sec:lifting-reduction} that if the original functions $f_i$ are all submodular, then the lifted function $f$ is also submodular. Recall that for any set $S \subseteq E$ in the lifted space, there are unique sets $S_i \subseteq V$ such that $S = \uplus_{i \in [k]} (\{i\} \times S_i)$. We think of $S_i$ as the set of items assigned to agent $i$. In a similar way, given any vector $\bar{z} \in [0,1]^E$, there are unique vectors $z_i \in [0,1]^V$ such that $\bar{z} = (z_1,z_2,\ldots,z_k)$, where we think of $z_i$ as the vector associated to agent $i$. 
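The lifting construction and the ring-family closure argument above can be checked mechanically on a small instance. The following sketch (plain Python, brute force over all subsets; the concrete families and names are illustrative, not from the paper) builds the lifted family from families $\mathcal{F}_i$ over $E=[k]\times V$ and verifies closure under union and intersection when each $\mathcal{F}_i$ is a ring family:

```python
from itertools import combinations

def powerset(xs):
    """All subsets of xs, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def lift(families, V):
    """Lifted family {S <= [k] x V : S_i in F_i for every agent i},
    where S_i = {v : (i, v) in S} is the part of S assigned to agent i."""
    k = len(families)
    E = [(i, v) for i in range(k) for v in V]
    lifted = set()
    for S in powerset(E):
        parts = [frozenset(v for (i, v) in S if i == j) for j in range(k)]
        if all(parts[j] in families[j] for j in range(k)):
            lifted.add(S)
    return lifted

# Two chains (hence ring families) over V = {'a', 'b'}
F1 = {frozenset(), frozenset({'a'}), frozenset({'a', 'b'})}
F2 = {frozenset(), frozenset({'b'}), frozenset({'a', 'b'})}
H = lift([F1, F2], ['a', 'b'])

# Closure under union and intersection, as the ring-family proposition asserts
closed = all((S | T) in H and (S & T) in H for S in H for T in H)
```

Since every member of the lifted family is determined by an independent choice of one set from each $\mathcal{F}_i$, the lifted family here has exactly $3\times 3=9$ members.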
The following lemma establishes the relationship between the values $f^M(\bar{z})$ and $\sum_{i \in [k]} f_i^M(z_i)$. \begin{lemma} \label{lem:max-multilinear} Let the functions $f_i$ and $f$ be as described in the lifting reduction in Section \ref{sec:lifting-reduction}. Then for any vector $\bar{z}=(z_1,z_2,\ldots,z_k) \in [0,1]^E$, where $z_i \in [0,1]^V$ is the vector associated with agent $i$, we have that $f^M(\bar{z})=f^M(z_1,z_2,\ldots,z_k)=\sum_{i \in [k]} f_i^M(z_i)$. \end{lemma} \begin{proof} We use the definition of the multilinear extension in terms of expectations (see Section \ref{sec:max-SA-MA-formulations}). Recall that for a vector $z \in [0,1]^V$, $R^z$ denotes a random set that contains each element $v$ independently with probability $z_{v}$. We use $\mathbb{P}_{z}(S)$ to denote $\mathbb{P}[R^z=S]$. We then have \begin{eqnarray*} f^M (\bar{z}) & = & \mathbb{E} [f(R^{\bar{z}})] = \sum_{S \subseteq E} f(S) \mathbb{P}_{\bar{z}}(S) \\ & = & \sum_{S_1 \subseteq V} \sum_{S_2 \subseteq V} \cdots \sum_{S_k \subseteq V} \Big[\sum_{i=1}^k f_i (S_i)\Big] \cdot \mathbb{P}_{(z_1,z_2,\ldots,z_k)}(S_1,S_2,\ldots,S_k) \\ & = & \sum_{i=1}^k \sum_{S_1 \subseteq V} \sum_{S_2 \subseteq V} \cdots \sum_{S_k \subseteq V} f_i (S_i) \cdot \mathbb{P}_{(z_1,z_2,\ldots,z_k)}(S_1,S_2,\ldots,S_k)\\ & = & \sum_{i=1}^k \sum_{S_i \subseteq V} f_i (S_i) \sum_{S_j \subseteq V, j\neq i} \mathbb{P}_{(z_1,z_2,\ldots,z_k)}(S_1,S_2,\ldots,S_k)\\ & = & \sum_{i=1}^k \sum_{S_i \subseteq V} f_i(S_i) \mathbb{P}_{z_i}(S_i) = \sum_{i=1}^k \mathbb{E}[f_i(R^{z_i})] = \sum_{i=1}^k f^M_i(z_i). \end{eqnarray*} \qed \end{proof} \end{document}
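The separability identity $f^M(\bar{z})=\sum_{i\in[k]} f_i^M(z_i)$ can be checked exactly on a tiny instance by enumerating all subsets. A minimal sketch (the functions $f_1,f_2$ and all numerical values are illustrative choices, not from the paper):

```python
from itertools import combinations
from math import sqrt

def multilinear(f, ground, z):
    """Exact multilinear extension f^M(z) = E[f(R^z)], where R^z contains
    each element e of 'ground' independently with probability z[e]."""
    val = 0.0
    for r in range(len(ground) + 1):
        for S in combinations(ground, r):
            S = set(S)
            prob = 1.0
            for e in ground:
                prob *= z[e] if e in S else 1.0 - z[e]
            val += f(S) * prob
    return val

V, k = [0, 1, 2], 2
# Two submodular functions on V (illustrative)
f1 = lambda S: min(len(S), 2)
f2 = lambda S: sqrt(len(S))
E = [(i, v) for i in range(k) for v in V]
# Lifted function: f(S) = f1(S_1) + f2(S_2), S_i the part assigned to agent i
f = lambda S: (f1({v for (i, v) in S if i == 0})
               + f2({v for (i, v) in S if i == 1}))

z1 = {0: 0.3, 1: 0.8, 2: 0.5}
z2 = {0: 0.6, 1: 0.1, 2: 0.9}
zbar = {(0, v): z1[v] for v in V}
zbar.update({(1, v): z2[v] for v in V})

lhs = multilinear(f, E, zbar)                              # f^M(z-bar)
rhs = multilinear(f1, V, z1) + multilinear(f2, V, z2)      # sum_i f_i^M(z_i)
```

The two quantities agree up to floating-point roundoff, mirroring the chain of equalities in the proof: the coordinates of the product distribution are independent, so the expectation factorizes agent by agent.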
\begin{document} \footnote{{\it 2000 Mathematics Subject Classification.} Primary:32F45. {\it Key words and phrases.} Lempert function, Arakelian's theorem.} \begin{abstract} We prove that the multipole Lempert function is monotone under inclusion of pole sets. \end{abstract} \maketitle Let $D$ be a domain in $\mathbb C^n$ and let $A=(a_j)_{j=1}^l$, $1\le l\le\infty$, be a countable (i.e.\ $l=\infty$) or a non--empty finite (i.e.\ $l\in\mathbb N$) subset of $D$. Moreover, fix a function $\boldsymbol p:D\longrightarrow\mathbb R_+$ with $$ |\bs{p}|:=\{a\in D:\boldsymbol p(a)>0\}=A. $$ $\boldsymbol p$ is called a {\it pole function for $A$} on $D$ and $|\bs{p}|$ its {\it pole set}. If $B\subset A$ is a non--empty subset, we put $\boldsymbol p_B:=\boldsymbol p$ on $B$ and $\boldsymbol p_B:=0$ on $D\setminus B$. Then $\boldsymbol p_B$ is a pole function for $B$. For $z\in D$ we set $$ l_D(\boldsymbol p,z)=\inf\{\prod_{j=1}^l|\lambda_j|^{\boldsymbol p(a_j)}\}, $$ where the infimum is taken over all subsets $(\lambda_j)_{j=1}^l$ of $\mathbb D$ (in this paper, $\mathbb D$ is the open unit disc in $\mathbb C$) for which there is an analytic disc $\varphi\in\mathcal O(\mathbb D,D)$ with $\varphi(0)=z$ and $\varphi(\lambda_j)=a_j$ for all $j$. Here we call $l_D(\boldsymbol p,\cdot)$ {\it the Lempert function with $\boldsymbol p$-weighted poles at $A$} \cite{Wik1,Wik2}; see also \cite{jarpfl}, where this function is called {\it the Coman function for $\boldsymbol p$}. Recently, F.~Wikstr\"om \cite{Wik1} proved that if $A$ and $B$ are finite subsets of a convex domain $D\subset\mathbb C^n$ with $\varnothing\neq B\subset A$ and if $\boldsymbol p$ is a pole function for $A$, then $l_D(\boldsymbol p,\cdot)\le l_D(\boldsymbol p_B,\cdot)$ on $D$. On the other hand, in \cite{Wik2} F.~Wikstr\"om gave an example of a complex space for which this inequality fails to hold, and he asked whether it remains true for an arbitrary domain in $\mathbb C^n$.
The main purpose of this note is to present a positive answer to that question, even for countable pole sets (in particular, it follows that the infimum in the definition of the Lempert function is always taken over a non-empty set). \begin{theorem} For any domain $D\subset\mathbb C^n$, any countable or non-empty finite subset $A$ of $D$, and any pole function $\boldsymbol p$ for $A$ we have $$ l_D(\boldsymbol p,\cdot)=\inf\{l_D(\boldsymbol p_B,\cdot):\varnothing\neq B \text{ a finite subset of } A\}. $$ Therefore, $$ l_D(\boldsymbol p,\cdot)=\inf\{l_D(\boldsymbol p_B,\cdot):\varnothing\neq B\subset A\}. $$ \end{theorem} \vskip 0.5cm The proof of this result will be based on the following \ \begin{theorem*}[Arakelian's Theorem \cite{Ara}] Let $E\subset\Omega\subset\mathbb C$ be a relatively closed subset of the domain $\Omega$. Assume that $\Omega^*\setminus E$ is connected and locally connected. (Here $\Omega^*$ denotes the one--point compactification of $\Omega$.) If $f$ is a complex-valued continuous function on $E$ that is holomorphic in the interior of $E$ and if $\varepsilon>0$, then there is a $g\in\mathcal O(\Omega)$ with $|g(z)-f(z)|<\varepsilon$ for any $z\in E$. \end{theorem*} \ \begin{proof} Fix a point $z\in D$. First, we shall verify the inequality $$ l_D(\boldsymbol p,z)\le\inf\{l_D(\boldsymbol p_B,z):\varnothing\neq B \text{ a finite subset of } A\}.\leqno{(1)} $$ Take a non--empty proper finite subset $B$ of $A$. Without loss of generality we may assume that $B=A_m:=(a_j)_{j=1}^m$ for a certain $m\in\mathbb N$, where $A=(a_j)_{j=1}^l$, $m<l\le\infty$. Now, let $\varphi:\Bbb D\to D$ be an analytic disc with $\varphi(\lambda_j)=a_j,$ $0\le j\le m,$ where $\lambda_0:=0$ and $a_0:=z.$ Fix $t\in[\max_{0\le j\le m}|\lambda_j|,1)$ and put $\lambda_j=1-\frac{1-t}{j^2}$,\; $j\in A(m)$, where $A(m):=\{m+1,\dots,l\}$ if $l<\infty$, respectively $A(m):=\{j\in\mathbb N:j>m\}$ if $l=\infty$.
Consider a continuous curve $\varphi_1:[t,1)\to D$ such that $\varphi_1(t)=\varphi(t)$ and $\varphi_1(\lambda_j)=a_j$, $j\in A(m)$. Define $$ f=\begin{cases} \varphi & \text{on } \overline{t\Bbb D},\\ \varphi_1 & \text{on } [t,1), \end{cases} $$ on the set $F_t=\overline{t\Bbb D}\cup[t,1)\subset\mathbb D$. Observe that $F_t$ satisfies the geometric condition in Arakelian's Theorem. Since $(\lambda_j)_{j=0}^l$ satisfies the Blaschke condition, for any $k$ we may find a Blaschke product $B_k$ with zero set $(\lambda_j)_{j=0,j\neq k}^l$. Moreover, we denote by $d$ the function $\operatorname{dist}(\partial D,f)$ on $F_t$, where the distance arises from the $l^\infty$--norm. Let $\eta_1$, $\eta_2$ be continuous real-valued functions on $F_t$ with \begin{gather*} \eta_1,\eta_2\le\log\frac{d}{9},\qquad \eta_1,\eta_2=\min_{\overline{t\mathbb D}} \log\frac{d}{9}\ \hbox{ on }\overline{t\mathbb D},\\ \text{ and }\quad \eta_1(\lambda_j)=\eta_2(\lambda_j)+\log(2^{-j-1}|B_j(\lambda_j)|),\quad j\in A(m). \end{gather*} Applying Arakelian's theorem three times, we may find functions $\zeta_1, \zeta_2\in\mathcal O(\mathbb D)$ and a holomorphic mapping $h$ on $\mathbb D$ such that $$ |\zeta_1-\eta_1|\le 1,\;|\zeta_2-\eta_2|\le 1,\hbox{ and }|h-f|\le \varepsilon|e^{\zeta_1-1}|\le \varepsilon e^{\eta_1}\text{ on } F_t, $$ where $\varepsilon:= \min\{\tfrac{|B_j(\lambda_j)|}{2^{j+1}}:j=0,\dots,m\}<1$ (in the last case apply Arakelian's theorem componentwise to the mapping $e^{1-\zeta_1}f$). In particular, we have \begin{gather*} |h-f|\le\frac{d}{9}\quad \text{ on } F_t,\\ |\gamma_j|\leq e^{\eta_1(\lambda_j)}2^{-j-1}|B_j(\lambda_j)| =e^{\eta_2(\lambda_j)}2^{-j-1}|B_j(\lambda_j)|,\quad j=0,\dots,m,\\ |\gamma_j|\le e^{\eta_1(\lambda_j)}= e^{\eta_2(\lambda_j)}2^{-j-1}|B_j(\lambda_j)|,\;j\in A(m), \end{gather*} where $\gamma_j:=h(\lambda_j)-f(\lambda_j)$, $j\in\mathbb Z_+$ if $l=\infty$, respectively $0\leq j\leq l$ if $l\in\mathbb N$.
Then, by virtue of $e^{\eta_2(\lambda_j)} \le e^{1+\operatorname{Re} \zeta_2(\lambda_j)}$, the function $$ g:=e^{\zeta_2}\sum_{j=0}^l\frac{B_j }{e^{\zeta_2(\lambda_j)}B_j(\lambda_j)}\gamma_j $$ is holomorphic on $\mathbb D$ with $g(\lambda_j)=\gamma_j$ and $$ |g|\le e^{\operatorname{Re}\zeta_2+1}\le e^{\eta_2+2}\le\frac{e^2}{9}d\quad \text{ on } F_t. $$ For $q_t:=h-g$ it follows that $q_t(\lambda_j)=f(\lambda_j)$ and $$ |q_t-f|\le\frac{e^2+1}{9}d<d\quad\text{ on }F_t. $$ Thus we have found a holomorphic mapping $q_t$ on $\mathbb D$ with $q_t(\lambda_j)=a_j$ and $q_t(F_t)\subset D$. Hence there is a simply connected domain $E_t$ such that $F_t\subset E_t\subset\mathbb D$ and $q_t(E_t)\subset D$. Let $\rho_t:\mathbb D\to E_t$ be the Riemann mapping with $\rho_t(0)=0,$ $\rho_t'(0)>0$, and $\rho_t(\lambda_j^t)=\lambda_j$. Considering the analytic disc $q_t\circ\rho_t:\mathbb D\to D$, we get that $$ l_D(\boldsymbol p,z)\le\prod_{j=1}^l|\lambda_j^t|^{\boldsymbol p(a_j)}\leq\prod_{j=1}^m|\lambda_j^t|^{\boldsymbol p(a_j)}. $$ Note that by the Carath\'eodory kernel theorem, $\rho_t$ tends, locally uniformly, to the identity map of $\mathbb D$ as $t\to 1$. This shows that the last product converges to $\prod_{j=1}^m|\lambda_j|^{\boldsymbol p(a_j)}$. Since $\varphi$ was an arbitrary competitor for $l_D(\boldsymbol p_{A_m},z)$, the inequality (1) follows. On the other hand, the existence of an analytic disc whose graph contains $A$ and $z$ implies that $$ l_D(\boldsymbol p,z)\ge\limsup_{m\to\infty}l_D(\boldsymbol p_{A_m},z), $$ which completes the proof. \end{proof} \begin{remark}{\rm Looking at the above proof shows that we have proved an approximation and simultaneous interpolation result, i.e.\ the constructed function $q_t$ approximates and interpolates the given function $f$. We first mention that the proof of Theorem 1 could be simplified using a non-trivial result on interpolation sequences due to L.~Carleson (see, for example, Chapter 7, Theorem 3.1 in \cite{and}).
Moreover, it is possible to prove the following general result, which extends an approximation and simultaneous interpolation result by P.~M.~Gauthier, W.~Hengartner, and A.~A.~Nersesyan (see \cite{gauhen}, \cite{ner}): Let $D\subset\mathbb C$ be a domain, $E\subset D$ a relatively closed subset satisfying the condition in Arakelian's theorem, and $\Lambda\subset E$ such that $\Lambda$ has no accumulation point in $D$ and $\Lambda\cap\operatorname{int} E$ is a finite set. Then for given functions $f,h\in\mathcal C(E)\cap\mathcal O(\operatorname{int} E)$ there exists a $g\in\mathcal O(D)$ such that $$ |g-f|<e^{\operatorname{Re} h} \text { on } E,\quad \text{ and } \quad f(\lambda)=g(\lambda),\;\lambda\in\Lambda. $$ It is even possible to prescribe a finite number of derivatives for $g$ at all the points in $\Lambda$. } \end{remark} \ As a byproduct we get the following result. \begin{corollary} Let $D\subset\mathbb C^n$ be a domain and let $\boldsymbol p, \boldsymbol q:D\to\mathbb R_+$ be two pole functions on $D$ with $\boldsymbol p\leq\boldsymbol q$ and $|\bs{q}|$ at most countable. Then $l_D(\boldsymbol q,z)\leq l_D(\boldsymbol p,z)$, $z\in D$. \end{corollary} Hence, the Lempert function is monotone with respect to pole functions with an at most countable support. \begin{remark}{\rm The Lempert function is, in general, not strictly monotone under inclusion of pole sets. In fact, take $D:=\mathbb D\times\mathbb D$, $A:=\{a_1,a_2\}\subset\mathbb D$, where $a_1\neq a_2$, and observe that $l_D(\boldsymbol p,(0,0))=|a_1|$, where $\boldsymbol p:=\chi|_{A\times\{a_1\}}$ (use the product property in \cite{dietra}, \cite{jarpfl}, \cite{nikzwo}). }\end{remark} \begin{remark}{\rm Let $D\subset\mathbb C^n$ be a domain, $z\in D$, and let now $\boldsymbol p:D\longrightarrow\mathbb R_+$ be a ``general'' pole function, i.e.\ $|\bs{p}|$ is uncountable.
Then there are two cases: 1) There is a $\varphi\in\mathcal O(\mathbb D,D)$ with $\varphi(\lambda_{\varphi,a})=a$, $\lambda_{\varphi,a}\in\mathbb D$, for all $a\in|\bs{p}|$ and $\varphi(0)=z$. Defining \begin{multline*} l_D(\boldsymbol p,z):=\inf\{\prod|\lambda_{\psi,a}|^{\boldsymbol p(a)}:\psi\in\mathcal O(\mathbb D,D) \text{ with }\\ \psi(\lambda_{\psi,a})=a \text{ for all } a\in|\bs{p}|, \psi(0)=z\} \end{multline*} then $l_D(\boldsymbol p,z)=0$. Observe that $l_D(\boldsymbol p,z)=\inf\{l_D(\boldsymbol p_B,z):\varnothing \neq B\text{ a finite subset of } A\}$. 2) There is no analytic disc as in 1). In that case we may define $^{1)}$\footnote{1) Compare the definition of the Coman function (for the second case) in \cite{jarpfl}.} $$ l_D(\boldsymbol p,z):=\inf\{l_D(\boldsymbol p_B,z):\varnothing\neq B\text { a finite subset of } A\}. $$ Example \ref{ex} below may show that the definition in 2) is more sensitive than the one used in \cite{jarpfl}. } \end{remark} \vskip 0.5cm Before giving the example we use the above definition of $l_D(\boldsymbol p,\cdot)$ for an arbitrary pole function $\boldsymbol p$ to conclude. \begin{corollary} Let $D\subset\mathbb C^n$ be a domain and let $\boldsymbol p, \boldsymbol q :D\longrightarrow\mathbb R_+$ be arbitrary pole functions with $\boldsymbol p\leq\boldsymbol q$. Then $l_D(\boldsymbol q,\cdot)\leq l_D(\boldsymbol p,\cdot)$. \end{corollary} \begin{example}\label{ex} {\rm Put $D:=\mathbb D\times\mathbb D$ and let $A\subset\mathbb D$ be uncountable, e.g. $A=\mathbb D$. Then there is no $\varphi\in\mathcal O(\mathbb D,D)$ passing through $A\times\{0\}$ and $(0,w)$, $w\in\mathbb D_*$. Put $\boldsymbol p:=\chi|_{A\times\{0\}}$ on $D$ as a pole function. Let $B\subset A$ be a non--empty finite subset. 
Then applying the product property (see \cite{dietra}, \cite{jarpfl}, \cite{nikzwo}), we get $$ l_D(\boldsymbol p_B,(0,w))=g_D(B\times\{0\},(0,w))=\max\{g_{\mathbb D}(B,0),g_{\mathbb D}(0,w)\}, $$ where $g_{\mathbb D}(A,\cdot)$ denotes the Green function in $\mathbb D$ with respect to the pole set $A$. Therefore, $l_D(\boldsymbol p,(0,w))=|w|$ $^{1)}$.} \end{example} To conclude this note we mention that the Lempert function is not holomorphically contractible, even if the holomorphic map is a proper covering. \begin{example} {\rm Let $\pi:\mathbb D_*\longrightarrow\mathbb D_*$, $\pi(z):=z^2$. Obviously, $\pi$ is proper and a covering. Fix two different points $a_1, a_2\in\mathbb D_*$ with $a_1^2=a_2^2=:c$. For a point $z\in\mathbb D_*$ we know that $$ l_{\mathbb D_*}(\chi|_{\{c\}},z^2)=\min\{l_{\mathbb D_*}(\chi|_{\{a_1\}},z),l_{\mathbb D_*}(\chi|_{\{a_2\}},z)\} \geq l_{\mathbb D_*}(\chi|_A,z), $$ where $A:=\{a_j:j=1,2\}$ and $\chi|_B$ is the characteristic function of the set $B\subset \mathbb D_*$. Assume that $l_{\mathbb D_*}(\chi|_{\{a_1\}},z)\leq l_{\mathbb D_*}(\chi|_{\{a_2\}},z)$. Then this left side is nothing other than the classical Lempert function for the pair $(a_1,z)$. Recalling how to calculate it via a covering map, one easily concludes that $l_{\mathbb D_*}(\chi|_A,z)< l_{\mathbb D_*}(\chi|_{\{a_1\}},z)$. Hence $l_{\mathbb D_*}(\boldsymbol p ,\pi(z))>l_{\mathbb D_*}(\boldsymbol p\circ\pi,z)$, where $\boldsymbol p:=\chi|_{\{c\}}$. Therefore, the Lempert function with a multipole behaves worse than the Green function with a multipole.} \end{example} {\bf Acknowledgments.} This note was written during the stay of the first named author at the University of Oldenburg, supported by a grant from the DFG (September--October 2004). He would like to thank both institutions for their support. We thank the referee for pointing out an error in the first version of this paper. \end{document}
\begin{document} \title{Construction of Antisymmetric Variational Quantum States \\ with Real-Space Representation} \author{Takahiro Horiba} \email{[email protected]} \author{Soichi Shirai} \author{Hirotoshi Hirai} \affiliation{Toyota Central R\&D Labs., Inc., 41-1 Yokomichi, Nagakute, Aichi 480-1192, Japan} \date{\today} \begin{abstract} Electronic state calculations using quantum computers are mostly based on second quantization, which is suitable for qubit representation. Another way to describe electronic states on a quantum computer is first quantization, which is expected to achieve smaller scaling with respect to the number of basis functions than second quantization. Among basis functions, a real-space basis is an attractive option for quantum dynamics simulations in the fault-tolerant quantum computation (FTQC) era. A major difficulty in first quantization with a real-space basis is state preparation for many-body electronic systems. This difficulty stems from the antisymmetry of electrons, and it is not straightforward to construct antisymmetric quantum states on a quantum circuit. In the present paper, we provide a design principle for constructing a variational quantum circuit to prepare an antisymmetric quantum state. The proposed circuit generates the superposition of exponentially many Slater determinants, that is, a multi-configuration state, which provides a systematic approach to approximating the exact ground state. We implemented the variational quantum eigensolver (VQE) to obtain the ground state of a one-dimensional hydrogen molecular system. As a result, the proposed circuit well reproduced the exact antisymmetric ground state and its energy, whereas the conventional variational circuit yielded neither an antisymmetric nor a symmetric state. Furthermore, we analyzed the many-body wave functions based on quantum information theory, which illustrated the relation between the electron correlation and the quantum entanglement.
\end{abstract} \maketitle \section{Introduction} Quantum computers are currently attracting increasing attention as promising hardware for materials computations~\cite{mcardle2020quantum, bauer2020quantum, ma2020quantum}, and a number of studies on a variety of material systems have been conducted ~\cite{nam2020ground, shirai2022calculation, shirai2023computational, hirai2023excited, colless2018computation, kassal2009quantum, rall2020quantum,kassal2008polynomial, oftelie2020towards, ollitrault2021molecular}. Such materials computations are mostly based on second quantization~\cite{peruzzo2014variational, mcclean2016theory, cerezo2021variational}, which is suitable for describing electronic states on quantum computers. An alternative way to describe electronic states on quantum computers is first quantization, in which a wave function is specified by the expansion coefficients of the basis functions. With quantum computers, it is possible to obtain a wave function represented by an exponential number of basis functions with a polynomial number of qubits~\cite{zalka1998efficient}. Therefore, first quantization offers the possibility of achieving smaller scaling with respect to the number of basis functions than second quantization~\cite{abrams1997simulation, berry2018improved, babbush2019quantum, chan2023grid, su2021fault}. Among basis functions, a real-space basis is an attractive option because a systematic improvement of computational accuracy and a rapid convergence to the continuum limit can be expected by increasing the number of qubits~\cite{hirose2005first}. In addition, a real-space basis can be applied to systems with a variety of boundary conditions~\cite{ohba2012linear, hirose2005first}, and thus is suitable for quantum dynamics calculations~\cite{childs2022quantum}. 
Recently, Chan et al.\ proposed quantum circuits for computing real-space quantum dynamics based on first quantization~\cite{chan2023grid}, which represents a promising blueprint for quantum simulations in the fault-tolerant quantum computing (FTQC) era. However, first quantization has a significant challenge, namely state preparation. Let us consider preparing the ground state of the system, which is a typical choice of initial state for dynamics calculations. As a state preparation method, we consider quantum phase estimation (QPE)~\cite{nielsen2002quantum}, which has been employed in several studies on first quantization~\cite{berry2018improved, chan2023grid}. By using QPE, it is possible to distill the ground state from an input state that has sufficient overlap with the ground state~\cite{berry2018improved}. Thus, the problem is how to prepare such an input state. In first quantization with a real-space basis, preparing an input state takes a tremendous number of gate operations because the probability amplitudes of the many-body wave function need to be encoded into a state vector of a quantum circuit. Although several amplitude-encoding methods have been proposed~\cite{holmes2020efficient, koppe2023amplitude, ollitrault2020nonadiabatic, nakaji2022approximate}, it is not at all straightforward to prepare an approximate ground state with any of them. This state preparation problem is often avoided by using oracle circuits. Compared to constructing such oracle circuits, variational methods such as the variational quantum eigensolver (VQE)~\cite{peruzzo2014variational, mcclean2016theory, cerezo2021variational} are considered to be relatively feasible approaches. In fact, a number of studies have proposed preparing an input state using the VQE in second quantization~\cite{halder2023iterative,dong2022ground}. Unfortunately, it is also not straightforward to implement the VQE in first quantization.
This is due to the antisymmetry of electrons, which are Fermi particles. In second quantization, antisymmetry does not need to be considered for variational quantum states, because antisymmetry is naturally introduced as the anticommutation relations of creation and annihilation operators. By contrast, in first quantization, antisymmetry is imposed on the many-body wave function itself, and thus variational quantum states must satisfy antisymmetry. Nevertheless, there is no guarantee that quantum states generated by conventional variational quantum circuits will satisfy antisymmetry, which raises the possibility that the VQE will yield non-antisymmetric states. Therefore, a variational quantum circuit that generates only antisymmetric quantum states is required in order to obtain the electronic ground state by the VQE. In the present paper, we provide a design principle for constructing such an antisymmetrized variational quantum circuit. Our proposed circuit consists of two types of variational circuits that achieve antisymmetry-preserving transformations of a state vector on a Hilbert space. It is noteworthy that the proposed circuit generates a multi-configuration (MC) state. That is, it is possible to generate superpositions of an exponentially large number of Slater determinants by alternately layering these two types of variational circuits. This scheme provides a systematic approach to approximating the exact ground state. To verify the validity of our method, we performed the VQE calculation for a one-dimensional hydrogen molecular (1D-$\rm{H_2}$) system, and demonstrated that the proposed circuit well reproduced the exact antisymmetric, or fermionic, ground state and its energy. In addition to implementing the VQE, we analyzed the many-body wave functions based on quantum information theory. 
Such an analysis reveals the microscopic electronic structure of a many-body wave function represented in real space and illustrates the relation between the electron correlation and the quantum entanglement. \section{Method} \begin{figure*} \caption{Quantum circuit to generate an antisymmetrized variational quantum state for one-dimensional two-electron systems} \label{fig:overall_circuit} \end{figure*} In this section, we introduce the first-quantized VQE framework based on the real-space representation and describe our proposed variational quantum circuit. We also describe the setting of the numerical experiments for a 1D-$\rm{H_2}$ system. \subsection{First-quantized VQE with a real-space basis} To begin with, we briefly describe the first-quantized formulation of the many-body electron problem. The many-body Schr\"{o}dinger equation for an $\eta$-electron molecular system is expressed by the following equation: \begin{equation} H\ket{\psi(\bm{r_1},\bm{r_2},\cdots,\bm{r_\eta})} = E\ket{\psi(\bm{r_1},\bm{r_2},\cdots,\bm{r_\eta})}, \label{eq:shrodinger}\\ \end{equation} where $\ket{\psi(\bm{r_1},\bm{r_2},\cdots,\bm{r_\eta})}$ is the antisymmetric many-body wave function to be constructed on a quantum circuit. The molecular Hamiltonian $H$ with the Born-Oppenheimer approximation is expressed in atomic units as follows: \begin{equation} H = \sum_{i=1}^{\eta}\left[-\frac{{\bm{\nabla_i}}^2}{2}-\sum_{p}\frac{Z_{p}}{|\bm{r_i}-\bm{R_p}|}\right]+\sum_{i<j}\frac{1}{|\bm{r_i}-\bm{r_j}|}, \label{eq:hamiltonian}\\ \end{equation} where $r_i$ is the $i$th electron coordinate, $R_p$ is the $p$th nuclear coordinate, and $Z_p$ is the atomic number of the $p$th nucleus. The VQE in this study is based on this first-quantized Hamiltonian and the many-body wave function represented in real space. Next, we introduce the real-space basis. 
Let us consider the expansion of the many-body wave function by the real-space basis $\ket{\delta(\bm{r_1},\bm{r_2},\cdots,\bm{r_\eta})}$ (delta functions located at grid points $\bm{r_1},\bm{r_2},\cdots,\bm{r_\eta}$). The expansion coefficient of a many-body wave function is given as \begin{equation} \psi(\bm{r'_1},\bm{r'_2},\cdots,\bm{r'_\eta}) = \braket{\delta(\bm{r'_1},\bm{r'_2},\cdots,\bm{r'_\eta})|\psi(\bm{r_1},\bm{r_2},\cdots,\bm{r_\eta})}.\\ \end{equation} The real-space basis of an $\eta$-electron system is the tensor product of the one-electron bases $\ket{\delta(\bm{r_i})}$, as follows: \begin{equation} \ket{\delta(\bm{r_1},\bm{r_2},\cdots,\bm{r_\eta})} = \ket{\delta(\bm{r_1})}\ket{\delta(\bm{r_2})}\cdots\ket{\delta(\bm{r_\eta})}.\\ \end{equation} Each one-electron basis state $\ket{\delta(\bm{r_i})}$ encodes three spatial coordinates $x, y, z$ and one spin coordinate $s$. Assuming that $L$ qubits are assigned to each spatial dimension and one qubit to spin, $\ket{\delta(\bm{r_i})}$ is expressed as \begin{equation} \ket{\delta(\bm{r_i})} = \ket{{{x_1}}^{(i)}\cdots{{x_L}}^{(i)}} \ket{{{y_1}}^{(i)}\cdots{{y_L}}^{(i)}} \ket{{{z_1}}^{(i)}\cdots{{z_L}}^{(i)}} \ket{s^{(i)}}, \\ \end{equation} where ${x_k}^{(i)},{y_k}^{(i)},{z_k}^{(i)},s^{(i)} \in \left\{0,1\right\}, \forall k \in[1,L]$. The number of qubits that constitute $\ket{\delta(\bm{r_1},\bm{r_2},\cdots,\bm{r_\eta})}$ is $\eta(3L+1)$, and thus the many-body wave function is expanded in $2^{\eta(3L+1)}$ basis functions. Quantum computers are expected to realize such an exponentially large number of basis functions with a polynomial number of qubits, which is a significant advantage over classical computers. In order to implement the VQE, it is necessary to measure the energy expectation value of a quantum state $\ket{\psi(\bm{r_1},\bm{r_2},\cdots,\bm{r_\eta})}$.
The energy expectation value of the molecular Hamiltonian $H$ in Eq.~\ref{eq:hamiltonian} is expressed as follows: \begin{equation} \begin{split} E &= \braket{\psi(\bm{r_1},\bm{r_2},\cdots,\bm{r_\eta})|H|\psi(\bm{r_1},\bm{r_2},\cdots,\bm{r_\eta})} \\ &= E_K + E_V^{e-n} + E_V^{e-e}, \end{split} \end{equation} where $E_K$ is the electron kinetic energy, $E_V^{e-n}$ is the electron-nuclei Coulomb attraction energy, and $E_V^{e-e}$ is the electron-electron Coulomb repulsion energy. The kinetic and Coulomb energy operators of the Hamiltonian are diagonal in momentum space and real space, respectively. The momentum-space basis is obtained by the quantum Fourier transform (QFT) of the real-space basis. Letting $\ket{\psi(\bm{k_i})}=U_{\mathrm{QFT}}\ket{\psi(\bm{r_i})}$ be the one-body momentum-space wave function, the kinetic energy $E_K$ is expressed as \begin{equation} E_K =\sum_{i=1}^{\eta} \frac{{\bm{k_i}}^2}{2}|\psi(\bm{k_i})|^2. \end{equation} The Coulomb energies $E_V^{e-n}, E_V^{e-e}$ are expressed in terms of one-body and two-body real-space bases as follows: \begin{equation} \begin{split} E_V^{e-n} &= -\sum_{i=1}^{\eta}\sum_{p}\frac{Z_{p}}{|\bm{r_i}-\bm{R_p}|}|\psi(\bm{r_i})|^2, \\ E_V^{e-e} &= \sum_{i<j}\frac{1}{|\bm{r_i}-\bm{r_j}|}|\psi(\bm{r_i},\bm{r_j})|^2. \end{split} \end{equation} Probability distributions $|\psi(\bm{k_i})|^2$, $|\psi(\bm{r_i})|^2$, and $|\psi(\bm{r_i,r_j})|^2$ can be obtained by measuring the output of the quantum circuit (after applying the QFT, in the kinetic case) in the computational basis (Pauli $Z$ basis). Note that, as pointed out by Chan et al., this method is not efficient in terms of sampling cost and is sensitive to discretization errors of grids~\cite{chan2023grid}. According to their work, QPE should be used for measuring energy expectation values in future quantum calculations, but this naive method was adopted in the present study because of a lack of sufficient computational resources to simulate QPE on a classical computer.
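As a classical point of reference for these formulas, the sketch below (NumPy; the grid sizes, bond geometry, and soft-Coulomb softening parameter are illustrative assumptions, not the paper's settings) evaluates the kinetic energy of a one-body wave packet as a diagonal sum in momentum space, mimicking the QFT step with a unitary FFT, and then exactly diagonalizes a soft-Coulomb 1D two-electron Hamiltonian of the form given above:

```python
import numpy as np

# --- One-body kinetic energy via a unitary FFT (classical analogue of the
# --- QFT step): E_K = sum_k (k^2 / 2) |psi(k)|^2
N, Lbox = 64, 20.0
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=Lbox / N)
psi = np.exp(-x**2 / 2.0)             # harmonic-oscillator ground state as a test case
psi /= np.linalg.norm(psi)
psi_k = np.fft.fft(psi) / np.sqrt(N)  # unitary normalization
E_K = float(np.sum(0.5 * k**2 * np.abs(psi_k)**2))  # ~0.25 a.u. for this Gaussian

# --- Exact diagonalization of a soft-Coulomb 1D two-electron Hamiltonian
# --- (illustrative classical stand-in for the 1D-H2 reference calculation)
M = 24
xg = np.linspace(-5.0, 5.0, M)
h = xg[1] - xg[0]
R_nuc, soft = (-0.7, 0.7), 1.0        # hypothetical bond geometry / softening
T = (-0.5 / h**2) * (np.diag(np.ones(M - 1), 1)
                     + np.diag(np.ones(M - 1), -1) - 2.0 * np.eye(M))
Vn = -sum(1.0 / np.sqrt((xg - Rp)**2 + soft**2) for Rp in R_nuc)
h1 = T + np.diag(Vn)                  # one-body Hamiltonian on the grid
I = np.eye(M)
Vee = 1.0 / np.sqrt((xg[:, None] - xg[None, :])**2 + soft**2)
H2 = np.kron(h1, I) + np.kron(I, h1) + np.diag(Vee.ravel())
w, v = np.linalg.eigh(H2)
E0, psi0 = w[0], v[:, 0].reshape(M, M)
# With the antisymmetric singlet spin part, the spatial ground state is
# exchange symmetric: psi0 is (numerically) equal to its transpose.
```

The symmetric spatial function combined with the antisymmetric singlet spin part is exactly the kind of fermionic ground state the antisymmetrized circuit has to represent.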
The remaining issue in the first-quantized VQE is how to construct a variational quantum circuit that generates antisymmetric ansatz states. In the following, we describe the design principle for constructing a variational quantum circuit that settles this issue. \subsection{Antisymmetrized variational quantum circuit} To explain the design principles of circuits, we consider the one-dimensional two-electron system, which is the minimum model required to describe the antisymmetry of electrons. In the following part of this section, we discuss two-electron systems, but we would like to note that the proposed method is applicable not only to two-electron systems but more generally to systems with larger numbers of electrons. The overall circuit architecture is shown in Fig.~\ref{fig:overall_circuit}. The proposed circuit consists of three parts: (1) seed state preparation, (2) one-body and two-body space variational circuits, and (3) measurement of energy expectations. Since the measurement procedure of energy expectations has already been described, we here describe only the parts of the circuit that generate antisymmetric ansatz states. In the first part of the circuit, an antisymmetric state, i.e., the seed state, is prepared, at which point the state vector of the circuit belongs to the antisymmetric subspace. The subsequent one-body and two-body space variational circuits transform the state vector into the ground state while keeping it in the antisymmetric subspace. Such a preparation of an antisymmetric state and antisymmetry-preserving transformations are the main design principles of the proposed circuit. In the following, we describe each part of the circuit. \subsubsection{Seed state preparation} The simplest antisymmetric state of a two-electron system is the Greenberger-Horne-Zeilinger (GHZ) state, as follows: \begin{equation} \ket{\psi_{\mathrm{GHZ}}}=\frac{1}{\sqrt{2}}(\ket{0}_1\ket{1}_2-\ket{1}_1\ket{0}_2).
\end{equation} This state can be generated by a series of an X (NOT) gate, an H (Hadamard) gate, and a CNOT (controlled NOT) gate as shown in Fig.~\ref{fig:overall_circuit}. In this circuit, the spin coordinates $\ket{s^{(i)}}$ are antisymmetrized, meaning that this state is a singlet state in which the two spins are antiparallel $\downarrow_1\uparrow_2 - \uparrow_1\downarrow_2$ (0 is assumed to be $\downarrow$ and 1 to be $\uparrow$). This state is expressed as follows, including the spatial coordinates (the normalization constant $1/\sqrt{2}$ is omitted): \begin{equation} \ket{\psi_{\mathrm{seed}}(\bm{r_1},\bm{r_2})} = \ket{\bm{0}}_1\ket{0}_1 \otimes \ket{\bm{0}}_2\ket{1}_2 - \ket{\bm{0}}_1\ket{1}_1 \otimes \ket{\bm{0}}_2\ket{0}_2, \end{equation} where the spatial coordinate of each electron is $\ket{\bm{0}}=\ket{00\cdots0}$. Obviously, this state is antisymmetric under the exchange of two electrons and can be used as a seed state. For more-than-two-electron systems, seed states cannot be constructed in such a simple way. As an example, for three-electron systems, the following Slater determinant consisting of the three states $\ket{00}$, $\ket{01}$, and $\ket{10}$ is one of the antisymmetric states (the normalization constant $1/\sqrt{3!}$ is omitted). \begin{equation} \begin{split} \ket{\psi_{00,01,10}}&= \begin{vmatrix} \ket{00}_1 & \ket{01}_1 & \ket{10}_1 \\ \ket{00}_2 & \ket{01}_2 & \ket{10}_2 \\ \ket{00}_3 & \ket{01}_3 & \ket{10}_3 \\ \end{vmatrix}\\ &=\ket{00}_1\ket{01}_2\ket{10}_3+\ket{01}_1\ket{10}_2\ket{00}_3\\ &\quad+\ket{10}_1\ket{00}_2\ket{01}_3-\ket{10}_1\ket{01}_2\ket{00}_3\\ &\quad-\ket{01}_1\ket{00}_2\ket{10}_3-\ket{00}_1\ket{10}_2\ket{01}_3. \end{split} \end{equation} Previous research has proposed methods for constructing quantum circuits to prepare such a Slater determinant, though it is not as simple as that for the GHZ state. 
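As a sanity check, the antisymmetry of this three-electron state can be verified with a small classical sketch; giving each electron two qubits (local dimension 4) is an illustrative choice.

```python
# Sketch: numerically checking that the three-electron Slater state above is
# antisymmetric under exchange of two electrons. Each electron carries two
# qubits; indices 0..3 stand for |00>, |01>, |10>, |11>.
import numpy as np
import itertools

d = 4
e = np.eye(d)                  # e[i] = one-electron basis state |i>

def prod(i, j, k):
    """Product state |i>_1 |j>_2 |k>_3 as a flat vector."""
    return np.kron(np.kron(e[i], e[j]), e[k])

# Slater determinant built from |00>, |01>, |10> (indices 0, 1, 2):
# sum over permutations weighted by the permutation sign.
psi = np.zeros(d**3)
for perm in itertools.permutations((0, 1, 2)):
    inv = sum(perm[a] > perm[b] for a in range(3) for b in range(a+1, 3))
    psi += (-1)**inv * prod(*perm)

# Exchange electrons 1 and 2 by transposing the corresponding tensor axes.
swap12 = psi.reshape(d, d, d).transpose(1, 0, 2).reshape(-1)
assert np.allclose(swap12, -psi)   # antisymmetric, as required for fermions
```

The same sign flip holds for every electron pair, which is exactly the property the Hadamard test described below probes.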
Berry et al. provided an implementation of the Fisher-Yates shuffle~\cite{durstenfeld1964algorithm} on a quantum circuit, which can generate superpositions of an exponential number of permutation elements with polynomial computational complexity~\cite{berry2018improved}. Following their method, it is possible to systematically generate seed states for more-than-two-electron systems. Another way to prepare a seed state is a variational approach, which prepares a seed state using conventional variational circuits. In this approach, the Hadamard test (swap test) can be employed as an objective function. Letting $\mathrm{SWAP}_{ij}$ be the swap operator acting on the subspace of the $i$th and $j$th electrons, the output of the Hadamard test for the quantum state $\ket{\psi}$ becomes \begin{equation} p_0 = \frac{1+\braket{\psi|\mathrm{SWAP}_{ij}|\psi}}{2}, \quad p_1 = \frac{1-\braket{\psi|\mathrm{SWAP}_{ij}|\psi}}{2}, \end{equation} where $p_0$ and $p_1$ are the measurement probabilities of the $\ket{0}$ and $\ket{1}$ states of an ancilla qubit. If $\ket{\psi}$ is an antisymmetric wave function, then $\mathrm{SWAP}_{ij}\ket{\psi} = -\ket{\psi}$ and the output of the Hadamard test becomes $p_0=0,p_1=1$ (for symmetric wave functions, $p_0=1,p_1=0$). Therefore, an approximately antisymmetric state can be obtained by updating variational parameters to maximize the output $p_1$ of the Hadamard tests for all two-electron swap operations. By repeatedly performing the Hadamard test on an approximately antisymmetric state, a pure antisymmetric state can eventually be distilled, which can then be used as a seed state. \subsubsection{One-body space variational circuit} The proposed variational quantum circuit consists of two components: a one-body space variational circuit and a two-body ($\eta$-body) space variational circuit. We first explain the one-body space variational circuit.
A one-body space variational circuit consists of unitary operators that act on the subspace of each electron (one-body space). Consider $\mathcal{U}^{(1)}(\theta) = \left[U^{(1)}(\theta)\right]^{\otimes\eta}$ as the one-body space variational circuit, where $U^{(1)}(\theta)$ is a unitary operator (variational circuit) acting on a one-body space. Since $U^{(1)}(\theta)$ transforms a state vector equally in each one-body space, $\mathcal{U}^{(1)}(\theta)$ does not destroy the antisymmetry of the ansatz state. For two-electron systems, $\mathcal{U}^{(1)}(\theta)=U^{(1)}(\theta)\otimes U^{(1)}(\theta)$ acting on the seed state (GHZ state) $\ket{\psi_{\mathrm{seed}}(\bm{r_1}, \bm{r_2})}$ yields \begin{equation} \begin{split} \ket{\psi(\bm{r_1}, \bm{r_2})} &= \mathcal{U}^{(1)}(\theta) \ket{\psi_{\mathrm{seed}}(\bm{r_1}, \bm{r_2})}\\ &=U^{(1)}(\theta) \ket{\bm{0}}_1\ket{0}_1 \otimes U^{(1)}(\theta) \ket{\bm{0}}_2\ket{1}_2 \\ &\quad- U^{(1)}(\theta) \ket{\bm{0}}_1\ket{1}_1 \otimes U^{(1)}(\theta) \ket{\bm{0}}_2\ket{0}_2\\ &=\ket{\alpha(\theta)}_1 \otimes \ket{\beta(\theta)}_2 - \ket{\beta(\theta)}_1 \otimes \ket{\alpha(\theta)}_2. \end{split} \end{equation} As can be seen, $\mathcal{U}^{(1)}(\theta)$ transforms $\ket{\bm{0}}\ket{0},\ket{\bm{0}}\ket{1}$ into $\ket{\alpha(\theta)}=U^{(1)}(\theta)\ket{\bm{0}}\ket{0}$, $\ket{\beta(\theta)}=U^{(1)}(\theta)\ket{\bm{0}}\ket{1}$ in each one-body space, and antisymmetry is preserved under this transformation. The resulting state can be expressed by a single Slater determinant consisting of $\ket{\alpha(\theta)}$ and $\ket{\beta(\theta)}$ as follows: \begin{equation} \begin{split} \ket{\psi_{\mathrm{SD}}(\bm{r_1}, \bm{r_2})} &=\ket{\alpha(\theta)}_1 \otimes \ket{\beta(\theta)}_2 - \ket{\beta(\theta)}_1 \otimes \ket{\alpha(\theta)}_2\\ &= \begin{vmatrix} \ket{\alpha(\theta)}_1 & \ket{\beta(\theta)}_1 \\ \ket{\alpha(\theta)}_2 & \ket{\beta(\theta)}_2 \end{vmatrix}. 
\end{split} \label{eq:hf_state} \end{equation} This indicates that a one-body space variational circuit can only explore quantum states within the HF approximation. By using the two-body ($\eta$-body) space variational circuit described in the following, we can explore quantum states beyond the HF approximation. \subsubsection{Two-body space variational circuit} The usual strategy to go beyond the HF approximation is based on configuration interaction (CI) theory, in which a many-body wave function is approximated by a linear combination, or superposition, of multiple Slater determinants. As previously mentioned, a one-body space circuit generates an electronic state expressed by a single Slater determinant. Therefore, a superposition of multiple one-body space circuits is expected to generate a superposition of multiple Slater determinants, that is, an MC state. For a two-electron system, consider two different operators $U_a\otimes U_a, U_b\otimes U_b$, where $U_a$ and $U_b$ are unitary operators acting on a one-body space. Their superposition is expressed as \begin{equation} U^{(2)}(\theta) = c_a(\theta) \cdot U_a \otimes U_a + c_b(\theta) \cdot U_b \otimes U_b , \end{equation} where $c_a(\theta)$, $c_b(\theta)$ are superposition coefficients parametrized by $\theta$. Operator $U^{(2)}(\theta)$ acting on a single Slater determinant $\ket{\psi_{\mathrm{SD}}}$ yields \begin{equation} \begin{split} &U^{(2)}(\theta)\ket{\psi_{\mathrm{SD}}} \\ &= c_a(\theta) \begin{vmatrix} U_a\ket{\alpha}_1 & U_a\ket{\beta}_1 \\ U_a\ket{\alpha}_2 & U_a\ket{\beta}_2 \\ \end{vmatrix} + c_{b}(\theta) \begin{vmatrix} U_b\ket{\alpha}_1 & U_b\ket{\beta}_1 \\ U_b\ket{\alpha}_2 & U_b\ket{\beta}_2 \\ \end{vmatrix}\\ &=c_a(\theta)\ket{\psi^{a}_{\mathrm{SD}}}+c_b(\theta)\ket{\psi^{b}_{\mathrm{SD}}}.
\label{eq:mc_state} \end{split} \end{equation} As expected, a superposition of two different Slater determinants $\ket{\psi^{a}_{\mathrm{SD}}}$, $\ket{\psi^{b}_{\mathrm{SD}}}$, is generated by $U^{(2)}(\theta)$. \begin{figure} \caption{Implementation of $U^{(2)}(\theta)$. Top: implementations of Ising gate and real-valued symmetry-preserving (RSP) ansatz. Bottom: conceptual picture of generation of superposed HF states by an $R_{zz}(\theta)$ gate.} \label{fig:multi_conf} \end{figure} One of the simplest implementations of such an operator is the Ising gate~\cite{jones2003robust}. Ising gates such as $R_{xx}(\theta)$, $R_{yy}(\theta)$, and $R_{zz}(\theta)$ are represented by a superposition of $I \otimes I$ and $P \otimes P$ ($I$ is the identity operator, $P$ is the Pauli operator $X$, $Y$, $Z$); thus, they can be employed as $U^{(2)}(\theta)$. For example, $R_{zz}(\theta)$ shown in Fig.~\ref{fig:multi_conf} is represented by \begin{equation} \begin{split} R_{zz}(\theta) &= \begin{pmatrix} e^{-i\theta/2} & 0 & 0 & 0 \\ 0 & e^{i\theta/2} & 0 & 0 \\ 0 & 0 & e^{i\theta/2} & 0 \\ 0 & 0 & 0 & e^{-i\theta/2} \\ \end{pmatrix}\\ &= \cos\frac{\theta}{2} \cdot I \otimes I -i\sin\frac{\theta}{2} \cdot Z \otimes Z. \end{split} \end{equation} As can be seen from Fig.~\ref{fig:multi_conf}, the Ising gates act on a two-body space; thus, we refer to $U^{(2)}(\theta)$ as a two-body space variational circuit. For $\eta$-electron systems, an $\eta$-body space variational circuit $U^{(\eta)}(\theta)$ is represented by $U^{(\eta)}(\theta) = \sum_{i} c_i(\theta) {U_i}^{\otimes \eta}$. Such operators can be easily implemented using Ising gates with cascaded CNOT gates across the $\eta$-body space~\cite{whitfield2011simulation, kuhn2019accuracy}. The Ising gate is not the only option as a two-body space variational circuit. 
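Two of the claims above, the Pauli decomposition of $R_{zz}(\theta)$ and the preservation of antisymmetry by gates of the form $U \otimes U$, can be checked numerically. The random one-qubit $U$ below is an illustrative stand-in for $U^{(1)}(\theta)$, and one qubit per electron is used for brevity.

```python
# Sketch: (i) R_zz(theta) = cos(theta/2) I(x)I - i sin(theta/2) Z(x)Z, and
# (ii) both U(x)U and R_zz preserve antisymmetry of a two-electron state.
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
theta = 0.7

Rzz = np.diag(np.exp(-1j * theta / 2 * np.array([1, -1, -1, 1])))
assert np.allclose(Rzz, np.cos(theta/2) * np.kron(I2, I2)
                        - 1j * np.sin(theta/2) * np.kron(Z, Z))

# Antisymmetric seed |01> - |10>, one qubit per electron for brevity.
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
SWAP = np.eye(4)[[0, 2, 1, 3]]

rng = np.random.default_rng(0)     # random unitary via QR decomposition
U, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

for gate in (np.kron(U, U), Rzz):
    psi = gate @ psi
    assert np.allclose(SWAP @ psi, -psi)   # antisymmetry preserved
```

The second assertion passes because both $U \otimes U$ and $R_{zz}$ commute with the swap operator, which is the algebraic content of the design principle.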
For example, the real-valued symmetry-preserving (RSP) ansatz~\cite{ibe2022calculating} shown in Fig.~\ref{fig:multi_conf} can also be used as a two-body space variational circuit. The RSP ansatz is represented by the superposition of the following operators: \begin{equation} \begin{split} U_{\mathrm{RSP}}(\theta) &= \begin{pmatrix} \cos \theta & 0 & 0 & -\sin \theta \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ \sin \theta & 0 & 0 & \cos \theta \\ \end{pmatrix}\\ &=\frac{1}{2} \left[ X \otimes X + Y \otimes Y + \cos\theta( I \otimes I + Z \otimes Z )\right. \\ &\qquad \left.- i\sin\theta(X \otimes Y + Y \otimes X) \right]. \label{eq:rsp} \end{split} \end{equation} Here, terms $X\otimes Y, Y\otimes X$ appear as tensor products of different operators $X$ and $Y$. Either of these terms alone destroys the antisymmetry of ansatz states, but pairs of them preserve antisymmetry as follows: \begin{equation} \begin{split} &(X \otimes Y + Y \otimes X)\ket{\psi_{\mathrm{SD}}} \\ &= \begin{vmatrix} X\ket{\alpha}_1 & Y\ket{\beta}_1 \\ X\ket{\alpha}_2 & Y\ket{\beta}_2 \\ \end{vmatrix} + \begin{vmatrix} Y\ket{\alpha}_1 & X\ket{\beta}_1 \\ Y\ket{\alpha}_2 & X\ket{\beta}_2 \\ \end{vmatrix}\\ &=\ket{\psi'_{\mathrm{SD}}}+\ket{\psi''_{\mathrm{SD}}}. \end{split} \end{equation} In Eq.~\ref{eq:rsp}, the RSP ansatz is represented by a real-valued unitary operator, which is convenient for describing real-valued eigenstates of one-dimensional systems. Unfortunately, however, it is not obvious how to extend this ansatz to a more-than-two-body space. Although the Ising gates are considered a more suitable option for generalizing to a system with a larger number of electrons, in the present study, the RSP ansatz was employed for convenience. As illustrated in Fig.~\ref{fig:multi_conf}, a single Slater determinant is split each time $U^{(2)}(\theta)$ acts on it. 
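Before turning to the layered construction, the decomposition in Eq.~\ref{eq:rsp} can be verified directly; the angle below is arbitrary.

```python
# Sketch: verifying that the RSP matrix equals its Pauli-string
# decomposition, and that it is unitary.
import numpy as np

theta = 1.1
c, s = np.cos(theta), np.sin(theta)
U_rsp = np.array([[c, 0, 0, -s],
                  [0, 0, 1,  0],
                  [0, 1, 0,  0],
                  [s, 0, 0,  c]], dtype=complex)

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
kron = np.kron

decomp = 0.5 * (kron(X, X) + kron(Y, Y)
                + c * (kron(I2, I2) + kron(Z, Z))
                - 1j * s * (kron(X, Y) + kron(Y, X)))
assert np.allclose(U_rsp, decomp)
assert np.allclose(U_rsp.conj().T @ U_rsp, np.eye(4))   # unitarity
```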
Therefore, it is expected that an exponentially large number of Slater determinants will be generated by repeatedly applying the two-body space variational circuit followed by the one-body space variational circuit. However, such consecutive application of two-body variational circuits alone does not increase the number of superposed Slater determinants as one might expect. This is because a product of multiple Pauli operators reduces to a single Pauli operator due to the multiplication rules of Pauli operators ($P_i^2=I$, $P_1P_2=iP_3$). This reduction of Pauli operators can be prevented by using the alternating layered structure of one-body and two-body space variational circuits as shown in Fig.~\ref{fig:overall_circuit}. If the one-body space variational circuit does not commute with the Pauli operators, then the reduction of Pauli operators between two-body space variational circuits is prevented. Therefore, by repeatedly applying one-body and two-body space variational circuits alternately to a seed state, the number of superposed Slater determinants can be increased exponentially, providing a systematic approach to approximating the exact ground state. During the optimization process of the VQE framework, variational parameters in one-body and two-body circuits are simultaneously optimized so as to minimize the energy expectation value of an ansatz state. As can be seen from Eqs.~\ref{eq:hf_state}--\ref{eq:mc_state}, optimizing one-body and two-body circuits corresponds to optimizing the electronic states composing each Slater determinant (electronic orbitals) and the superposition coefficients of the Slater determinants (CI coefficients), respectively. Therefore, this framework can be regarded as the multi-configuration self-consistent field (MCSCF) method, which is one of the post-Hartree-Fock \textit{ab initio} quantum chemistry methods.
A notable advantage of the proposed method is its ability to perform MCSCF calculations based on an exponential number of electron configurations with polynomial computational complexity. \subsection{Settings for Numerical Simulations} To verify the validity of our method, we performed the VQE calculation with the proposed circuit to obtain the ground state of a 1D-$\rm{H_2}$ system. The Hamiltonian of this system is expressed as follows: \begin{equation} \begin{split} H&=\sum_{i=1}^{2}\left[-\frac{1}{2}\frac{d^2}{dr_i^2}-\sum_{p=1}^{2}\frac{1}{\left|r_i-R_p+\epsilon\right|}\right]\\ &\qquad +\frac{1}{|r_1-r_2+\epsilon|}+\frac{1}{|R_1-R_2|}, \label{eq:hydrogen} \end{split} \end{equation} where $r_1, r_2$ and $R_1, R_2$ are the positions of electrons and protons, respectively. To avoid division by zero in the Coulomb interaction terms, a small real value $\epsilon$ is added to the denominators of those terms (soft Coulomb potential). \begin{figure} \caption{Probability amplitudes $\psi(r_1,r_2)$ of the degenerate ground states (one fermionic state ($S^2=0, S_z=0$) and three bosonic states ($S^2=1, S_z=0,\pm1$)) at interatomic distance $|R_1-R_2| = $ 0.5 bohrs ($R_1, R_2$ are shown by pink dots). Each many-body wave function is shown in the four subspaces corresponding to the spin configurations $\uparrow_1\uparrow_2, \uparrow_1\downarrow_2, \downarrow_1\uparrow_2, \downarrow_1\downarrow_2$. Note that the correspondence between colors and values is different for each plot.} \label{fig:exact} \end{figure} Note here that the ground states of fermions and bosons are completely degenerate. Figure~\ref{fig:exact} shows the four degenerate ground states obtained by the exact diagonalization of Eq.~\ref{eq:hydrogen} at the interatomic distance $|R_1-R_2| = $ 0.5 bohrs. As shown in Fig.~\ref{fig:exact}, the fermionic ground state is the singlet state ($S^2=0, S_z=0$), whose spin part is antisymmetric and whose spatial part is symmetric.
On the other hand, the bosonic ground states are the triplet states ($S^2=1, S_z=0,\pm1$), whose spin and spatial parts are both symmetric. Since the Hamiltonian of the system depends only on the spatial coordinates, there is no energy difference between fermionic and bosonic ground states whose spatial symmetries are the same. Therefore, without taking into account the symmetry of the variational quantum circuit, the quantum states obtained by the VQE can be a mixture of fermionic and bosonic states, even though an accurate energy value is obtained. We demonstrate such convergence to a symmetry-neglected state in the numerical simulations. The following describes the calculation conditions of the system and the quantum circuits used for numerical simulations. Six qubits (5 qubits for the spatial coordinate and 1 qubit for the spin coordinate) represent the one-body space, and thus 12 qubits were used for the two-electron system. Spatial coordinates of electrons $r_1, r_2$ ranged from $-0.5$ bohrs to $0.5$ bohrs and the grid width $\varDelta r$ was $1/2^{5}$ bohrs = 0.03125 bohrs. $\epsilon$ was set to $\varDelta r/2$. The spatial grid employed in this study was limited to a very coarse one, due to the limited computational resources of our classical computer, but the realization of the FTQC is expected to allow calculations with a grid fine enough to reproduce the electronic structure of real materials. For the variational circuits, the hardware efficient (HE) ansatz~\cite{kandala2017hardware} and the real-valued symmetry-preserving (RSP) ansatz~\cite{ibe2022calculating} were employed. The HE ansatz consists of $R_y$ gates and CNOT gates~\cite{shee2022qubit}, which in the present study had 6 layers. \begin{figure} \caption{Circuit architectures employed by the VQE. (a) Symmetry-neglected (SN) architecture; (b) Hartree-Fock (HF) architecture; (c) Multi-configuration (MC) architecture.
HE in the figure denotes the hardware efficient ansatz with 6 layers and RSP denotes the real-valued symmetry-preserving ansatz.} \label{fig:circ_sn} \label{fig:circ_hf} \label{fig:circ_mc} \label{fig:circ} \end{figure} In order to demonstrate the effects of the antisymmetrization and the multi-configuration, we implemented the VQE with three different circuits. The architectures of these circuits are shown in Fig.~\ref{fig:circ}. The first architecture is the symmetry-neglected (SN) architecture shown in Fig.~\ref{fig:circ}(a), consisting of consecutive HE ansatz blocks acting on the two-body space. This architecture does not preserve the antisymmetry of the seed state, and thus the VQE is expected to yield a state that is neither antisymmetric (fermionic) nor symmetric (bosonic). The number of blocks of the HE ansatz was 6. The second architecture is the Hartree-Fock (HF) architecture shown in Fig.~\ref{fig:circ}(b), consisting of consecutive one-body space variational circuits. In this architecture, the antisymmetry of the seed state is preserved, but the VQE yields a ground state within the HF approximation. The third architecture is the multi-configuration (MC) architecture shown in Fig.~\ref{fig:circ}(c), consisting of alternately layered one-body and two-body space variational circuits. In this architecture, the VQE can achieve a ground state beyond the HF approximation, and its energy is expected to be lower than that obtained with the HF architecture. To confirm this energy stabilization, potential energy curves were calculated for the HF and MC architectures at 16 points of interatomic distance $|R_1-R_2|$ ranging from $\varDelta r$ to $16\varDelta r$ = 0.5 bohrs. The number of one-body space variational circuits was 15 for both architectures and that of two-body space variational circuits was 14 for the MC architecture.
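For reference, the exact diagonalization of Eq.~\ref{eq:hydrogen} used as the baseline can be sketched classically. For simplicity the soft-Coulomb regularization below is written symmetrically as $1/(|r|+\epsilon)$, which makes the discretized Hamiltonian commute exactly with the electron exchange operator, and the grid is coarser than the one used in the simulations.

```python
# Sketch: 1D two-electron soft-Coulomb Hamiltonian on a small real-space
# grid; grid size, box and proton positions are illustrative choices.
import numpy as np

N, a = 16, 0.5                          # grid points per electron, half-box
r = np.linspace(-a, a, N)
dr = r[1] - r[0]
eps = dr / 2                            # soft-Coulomb regularization
R1, R2 = -0.25, 0.25                    # proton positions

# One-body kinetic energy -1/2 d^2/dr^2 (central finite differences).
T = -0.5 * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
            - 2.0 * np.eye(N)) / dr**2
# One-body electron-nucleus attraction (symmetrized soft-Coulomb form).
V1 = np.diag(-1.0 / (np.abs(r - R1) + eps) - 1.0 / (np.abs(r - R2) + eps))

I = np.eye(N)
H = np.kron(T + V1, I) + np.kron(I, T + V1)
# Electron-electron repulsion, diagonal in the two-electron real-space basis.
r1, r2 = np.meshgrid(r, r, indexing="ij")
H += np.diag((1.0 / (np.abs(r1 - r2) + eps)).ravel())
H += np.eye(N * N) / abs(R1 - R2)       # constant proton-proton repulsion

# Exchange operator P: psi(r1, r2) -> psi(r2, r1).
P = np.eye(N * N).reshape(N, N, N, N).transpose(1, 0, 2, 3).reshape(N * N, N * N)
assert np.allclose(P @ H, H @ P)        # H respects exchange symmetry

E, V = np.linalg.eigh(H)
v0 = V[:, 0]                            # spatial ground state
assert v0 @ (P @ v0) > 0.99             # node-free, exchange symmetric
```

Because $[H, P] = 0$, eigenstates split into spatially symmetric and antisymmetric sectors, which is the origin of the fermion-boson degeneracy discussed above.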
For the optimization of variational parameters, the steepest descent method with the Adam optimizer~\cite{kingma2014adam} was employed and the number of optimization steps was set to 10000. All calculations for quantum circuits were performed by a noiseless state vector simulator. \section{Result and discussion} \begin{figure*} \caption{Many-body wave functions obtained by the VQE with (a) the SN architecture, (b) the HF architecture and (c) the MC architecture at the interatomic distance $|R_1-R_2| = $ 0.5 bohrs ($R_1, R_2$ are shown by pink dots).} \label{fig:wf_sn} \label{fig:wf_hf} \label{fig:wf_mc} \label{fig:wave_func} \end{figure*} \subsection{Results of the VQE Calculations} Figure~\ref{fig:wave_func} shows the many-body wave functions obtained by the VQE with the different architectures at an interatomic distance $|R_1-R_2| = $ 0.5 bohrs. In the case of the wave function obtained with the SN architecture shown in Fig.~\ref{fig:wave_func}(a), the spatial coordinate is symmetric under the exchange of two electrons, but its spin coordinate is neither antisymmetric nor symmetric ($\uparrow_1\uparrow_2+\uparrow_1\downarrow_2\neq\uparrow_1\uparrow_2+\downarrow_1\uparrow_2$). This indicates that unless the symmetry of the variational quantum circuit is taken into account, the resulting state of the VQE can converge to such a symmetry-neglected state. This is a notable difference from the second-quantized VQE. In contrast, as shown in Figs.~\ref{fig:wave_func}(b) and (c), the wave functions obtained with the HF and MC architectures are the antisymmetric singlet states, as expected. The shape of the wave function obtained with the MC architecture well reproduces that of the exact fermionic ground state shown in Fig.~\ref{fig:exact}, while that with the HF architecture clearly differs from it.
As will be described later, this difference in the shape of the wave functions reflects the difference in representability between the HF approximation and CI theory. The difference between them is also reflected in their ground state energies. Figure~\ref{fig:pec_ee}(a) shows the potential energy curves obtained with the HF and MC architectures. It can be seen that the ground state energies obtained with the MC architecture reproduce the results of the exact diagonalization well, whereas those obtained with the HF architecture converge to higher values. This demonstrates the energy stabilization due to the multi-configuration character, in other words, the effect of electron correlation, which is a pillar of quantum chemistry theories~\cite{jensen2017introduction, helgaker2013molecular}. \begin{figure} \caption{(a) Potential energy curves and (b) entanglement entropy obtained by exact diagonalization (black dashed lines) and VQE with the MC architecture (blue solid lines) and the HF architecture (red solid lines). The calculated potential energy curves are significantly different from those of the three-dimensional hydrogen molecular system, which is attributed to the difference in dimensions between the systems~\cite{hirai2022molecular}.} \label{fig:pec} \label{fig:ee} \label{fig:pec_ee} \end{figure} \subsection{Analysis of Many-Body States based on Quantum Information Theory} Although the numerical experiments in this section confirmed that the proposed method can reproduce the exact fermionic ground state well, the obtained many-body wave function by itself provides little insight into the microscopic electronic structure. The electronic structure is mostly understood from the electron orbital picture; however, since our method is based on a real-space basis, the orbital picture of the obtained state is not obvious.
Furthermore, a lack of an orbital picture makes it impossible to quantitatively evaluate the multi-configuration character of the obtained state. In order to get a clear orbital picture and evaluate the multi-configuration character of obtained states, we analyzed many-body quantum states from the perspective of quantum information theory. Quantum information has been attracting increasing attention in recent years as a key to understanding electron dynamics in chemical reactions and strongly correlated materials ~\cite{molina2015quantum, brandejs2019quantum, esquivel2011quantum, rissler2006measuring, boguslawski2013orbital, lin2013spatial, soh2019long, zhu2020entanglement, lanata2014principle, chen2019incoherent}. We will first quantify the multi-configuration character of many-body states. To evaluate the multi-configuration character, we employed the entanglement entropy, which is typically employed to measure the degree of entanglement between subsystems. We shall now briefly explain the definition of entanglement entropy. A many-body wave function of the two-electron system can be decomposed into multiple product states by the Schmidt decomposition, as follows: \begin{equation} \ket{\psi(r_1,r_2)} = \sum_i \lambda_i \ket{\mu_i(r_1)} \otimes \ket{\chi_i(r_2)}, \label{eq:Schmidt} \end{equation} where $\lambda_i$ are the Schmidt coefficients (superposition coefficients of product states) and $\ket{\mu_i(r_1)}, \ket{\chi_i(r_2)}$ are the Schmidt basis on subsystems. The entanglement entropy $S$ is defined by the Shannon entropy of the probability distribution $|\lambda_i|^2$ as \begin{equation} S = -\sum_i |\lambda_i|^2 \log_2|\lambda_i|^2. \label{eq:ee} \end{equation} Considering the fact that the Shannon entropy of a normal distribution is proportional to the logarithm of its variance, the larger the variance of probability distribution $|\lambda_i|^2$, that is, the more product states are superposed, the larger the entanglement entropy. 
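A minimal classical sketch of Eqs.~\ref{eq:Schmidt} and \ref{eq:ee}: for a two-electron wave function stored as a coefficient matrix $\psi(r_1, r_2)$, the Schmidt coefficients are its singular values. The two test states below are illustrative.

```python
# Sketch: entanglement entropy from the Schmidt (singular value)
# decomposition of a two-electron coefficient matrix psi(r1, r2).
import numpy as np

def entanglement_entropy(psi_matrix):
    """Shannon entropy of the squared, normalized Schmidt coefficients."""
    lam = np.linalg.svd(psi_matrix, compute_uv=False)
    p = lam**2 / np.sum(lam**2)
    p = p[p > 1e-12]                  # drop numerically zero modes
    return -np.sum(p * np.log2(p))

N = 8
a, b = np.zeros(N), np.zeros(N)
a[0], b[1] = 1.0, 1.0                 # two orthonormal one-body states

product = np.outer(a, b)                              # single product state
slater = (np.outer(a, b) - np.outer(b, a)) / np.sqrt(2)  # Slater determinant

assert np.isclose(entanglement_entropy(product), 0.0)
assert np.isclose(entanglement_entropy(slater), 1.0)
```

A product state gives $S = 0$, while a single Slater determinant of two orthonormal orbitals gives $S = 1$; MC states exceed this baseline.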
Since the entanglement entropy of a single Slater determinant $\ket{\psi_{\rm{HF}}}=1/\sqrt2(\ket{\alpha}_1 \otimes \ket{\beta}_2 - \ket{\beta}_1 \otimes \ket{\alpha}_2)$ is 1, that of an MC state is expected to be larger than 1. Figure~\ref{fig:pec_ee}(b) shows the entanglement entropies calculated for the ground states obtained by the exact diagonalization and the MC architecture. As expected, the entanglement entropy of the MC state is larger than 1 and increases as interatomic distance increases, which is in good agreement with the exact result. This behavior of entanglement entropy indicates that the ground state becomes more multi-configurational as the interatomic distance approaches the dissociation limit, which is a well-known fact in the field of conventional quantum chemistry~\cite{jensen2017introduction, helgaker2013molecular}. We can now extract the electron orbital picture from the many-body state represented in real space. From the expression of the Schmidt decomposition in Eq.~\ref{eq:Schmidt}, Schmidt basis $\ket{\mu_i(r_1)},\ket{\chi_i(r_2)}$ can be regarded as one-body wave functions, that is, the electron orbitals of the electrons, and the Schmidt coefficients as their contributions to the many-body state. Therefore, the Schmidt basis and the Schmidt coefficients provide insight into the electron orbital picture of many-body wave functions. \begin{figure}\label{fig:svd} \end{figure} Figure~\ref{fig:svd} shows the results of the Schmidt decomposition performed for one anti-parallel spin subspace $\ket{\psi(r_{1\downarrow},r_{2\uparrow})}$ of the exact ground state (upper left part of Fig.~\ref{fig:exact}) and the MC state (Fig.~\ref{fig:wave_func}(c)). The left side of the figure shows the first three Schmidt coefficients $\lambda_0, \lambda_1, \lambda_2$ of each interatomic distance. 
As shown, the contribution of the zeroth product state $\lambda_0$ is nearly constant at all interatomic distances, whereas that of the first product state $\lambda_1$ increases as interatomic distance increases, which is consistent with the behavior of the entanglement entropy shown in Fig.~\ref{fig:pec_ee}(b). This suggests that the configuration interaction between the zeroth and first product states, that is, their superposition, explains the energy stabilization due to the multi-configuration character. The right side of the figure shows the product states constituting the MC state at the interatomic distance $|R_1-R_2| = $ 0.5 bohrs. The electron orbitals constituting each product state are indicated by black lines in each figure. Using these electron orbitals, we will illustrate the energy stabilization due to the multi-configuration character in the following discussion. The electron orbitals constituting the zeroth and first product states have peaks at the proton positions $R_1, R_2$ (pink dots in figure), and these orbitals correspond to bonding and antibonding orbitals. Following the typical explanation of the linear combination of atomic orbitals (LCAO) approximation, let $\ket{p_i(r)}$ be the electron orbitals distributed around the $i$th proton. Then the bonding and antibonding orbitals are represented as $\ket{\psi_{\mathrm{\sigma}}(r)}=\ket{p_1(r)}+\ket{p_2(r)}$ and $\ket{\psi_{\mathrm{\sigma^{*}}}(r)}=\ket{p_1(r)}-\ket{p_2(r)}$, respectively. 
The bonding state of two electrons $\ket{\psi_{\mathrm{\sigma\sigma}}(r_1,r_2)}$ is expressed as the tensor product of the bonding orbitals $\ket{\psi_{\mathrm{\sigma}}(r)}$ as follows: \begin{equation} \begin{split} &\ket{\psi_{\mathrm{\sigma\sigma}}(r_1,r_2)}\\ &=\ket{\psi_{\mathrm{\sigma}}(r_1)}\otimes\ket{\psi_{\mathrm{\sigma}}(r_2)} \\ &= \ket{p_1(r_1)} \otimes \ket{p_2(r_2)} + \ket{p_2(r_1)} \otimes \ket{p_1(r_2)}\\ &\qquad + \ket{p_1(r_1)} \otimes \ket{p_1(r_2)} + \ket{p_2(r_1)} \otimes \ket{p_2(r_2)}, \end{split} \end{equation} The four terms in the above expression correspond to the four peaks in the zeroth product state and in the HF ground state shown in Fig.~\ref{fig:wave_func}(b); thus, \begin{equation} \ket{\psi_{\mathrm{\sigma\sigma}}(r_1,r_2)} \sim \ket{\mu_0(r_1)}\otimes\ket{\chi_0(r_2)} \sim \ket{\psi_{\mathrm{HF}}(r_1,r_2)}. \end{equation} The antibonding state, or two-electron excited state, $\ket{\psi_{\mathrm{\sigma^{*}\sigma^{*}}}(r_1,r_2)}$ is expressed as the tensor product of the antibonding orbitals $\ket{\psi_{\mathrm{\sigma^{*}}}(r)}$ as follows: \begin{equation} \begin{split} &\ket{\psi_{\mathrm{\sigma^{*}\sigma^{*}}}(r_1,r_2)}\\ &=-\ket{\psi_{\mathrm{\sigma^{*}}}(r_1)}\otimes\ket{\psi_{\mathrm{\sigma^{*}}}(r_2)} \\ &= \ket{p_1(r_1)} \otimes \ket{p_2(r_2)} + \ket{p_2(r_1)} \otimes \ket{p_1(r_2)}\\ &\qquad - \ket{p_1(r_1)} \otimes \ket{p_1(r_2)} - \ket{p_2(r_1)} \otimes \ket{p_2(r_2)}, \end{split} \end{equation} where $-1$ is taken as a global phase. The four terms in the above expression correspond to the four peaks in the first product state; thus, \begin{equation} \ket{\psi_{\mathrm{\sigma^{*}\sigma^{*}}}(r_1,r_2)} \sim \ket{\mu_1(r_1)}\otimes\ket{\chi_1(r_2)}. 
\end{equation} Superposing bonding and antibonding states equally yields \begin{equation} \begin{split} &\ket{\psi_{\mathrm{\sigma\sigma+\sigma^{*}\sigma^{*}}}(r_1,r_2)}\\ &=\frac{1}{2}\ket{\psi_{\mathrm{\sigma\sigma}}(r_1,r_2)} + \frac{1}{2}\ket{\psi_{\mathrm{\sigma^{*}\sigma^{*}}}(r_1,r_2)}\\ &= \ket{p_1(r_1)} \otimes \ket{p_2(r_2)} + \ket{p_2(r_1)} \otimes \ket{p_1(r_2)}. \end{split} \end{equation} It can be seen that energetically unfavorable electron configurations $\ket{p_1(r_1)} \otimes \ket{p_1(r_2)}$, $\ket{p_2(r_1)} \otimes \ket{p_2(r_2)}$, where both electrons are distributed around the same proton, are cancelled out and a lower energy state is achieved. The two terms in the above expression correspond to the two peaks in the MC state $\ket{\psi_{\mathrm{MC}}(r_1,r_2)}$ shown in Fig.~\ref{fig:wave_func}(c); thus, \begin{equation} \ket{\psi_{\mathrm{\sigma\sigma+\sigma^{*}\sigma^{*}}}(r_1,r_2)} \sim \ket{\psi_{\mathrm{MC}}(r_1,r_2)}. \end{equation} This state represents the electron configuration where electrons are distributed around different protons to avoid each other, reminiscent of correlated, or entangled, electrons. To summarize, a multi-configuration state is inherently an entangled state consisting of multiple product states, and electron correlation can be interpreted as quantum entanglement in the electron system. Although this discussion of electron orbitals may be just a textbook subject~\cite{jensen2017introduction, helgaker2013molecular}, it is worth noting that such an orbital picture can be extracted from many-body states represented in real space by using the Schmidt decomposition. In addition, minor orbital contributions, other than bonding and antibonding orbitals, can be considered in our method.
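The cancellation of the ionic configurations can be verified with orthonormal stand-ins for the atomic orbitals $\ket{p_1}$, $\ket{p_2}$:

```python
# Sketch: equal superposition of the bonding (sigma-sigma) and antibonding
# (sigma*-sigma*) states leaves only the covalent terms p1(x)p2 + p2(x)p1.
import numpy as np

N = 6
p1, p2 = np.zeros(N), np.zeros(N)
p1[0], p2[1] = 1.0, 1.0               # orthonormal atomic-orbital stand-ins

sigma = p1 + p2                       # bonding orbital (unnormalized)
sigma_star = p1 - p2                  # antibonding orbital

bonding = np.outer(sigma, sigma)                 # |sigma sigma>
antibonding = -np.outer(sigma_star, sigma_star)  # with the global phase -1

mixed = 0.5 * bonding + 0.5 * antibonding
covalent = np.outer(p1, p2) + np.outer(p2, p1)
assert np.allclose(mixed, covalent)   # ionic terms p1p1 and p2p2 cancel
```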
Indeed, as shown in the left part of Fig.~\ref{fig:svd}, the exact ground state and the MC state include the contribution of the second product state $\lambda_2\ket{\mu_2(r_{1\downarrow})}\otimes\ket{\chi_2(r_{2\uparrow})}$, whose electron orbitals have nodes at the proton positions and are distributed in the interstitial region. Since a real-space basis is a complete basis set of a discrete space, it can represent both localized atomic orbitals and such delocalized orbitals, freeing us from the need for careful selection of basis functions in typical quantum chemical calculations. From this point of view, a real-space basis is an attractive option for quantum simulation in the FTQC era. The results shown in this section demonstrate one possible framework of quantum chemistry in the FTQC era, where we understand electronic structures based on electron orbitals derived from many-body wave functions represented in real space. In this study, many-body wave functions were analyzed by a classical computer; however, since it is practically impossible to obtain the state vector of a quantum circuit with even a few dozen qubits, such an analysis must ultimately be performed on a quantum computer. Fortunately, several methods have been proposed to perform singular value decomposition on quantum computers~\cite{rebentrost2018quantum, bravo2020quantum, wang2021variational}, which may make such an analysis feasible with reasonable computational resources. A remaining concern for our method is the barren plateau problem~\cite{mcclean2018barren, cerezo2021cost, wang2021noise} in the optimization of the variational state. In fact, we observed vanishing gradients of the variational parameters during the optimization process, which required a large number of optimization steps. Barren plateaus are among the most serious problems for variational quantum algorithms.
Nevertheless, as mentioned in the Introduction, the true ground state can be distilled by the QPE from the approximate ground state prepared by the VQE, so the incomplete convergence of the VQE due to barren plateaus can be tolerated to some extent. The complementary combination of the VQE and the QPE will allow the state preparation for first-quantized methods.
\section{Summary and Conclusion}
In this paper, we propose a state preparation method for first-quantized quantum simulations based on the real-space representation. We employ a variational method for preparing ground states, and provide a design principle for constructing antisymmetrized variational quantum circuits. Our proposed circuits are capable of generating a superposition of an exponentially large number of Slater determinants, that is, MC states, with a polynomial number of quantum gates. We performed VQE calculations for a 1D-$\rm{H_2}$ system and confirmed that the proposed circuit reproduces the exact fermionic ground state well. In addition to performing the VQE, we quantitatively evaluated the multi-configuration character of many-body states by using the entanglement entropy between the two electrons, and extracted the electron-orbital picture from many-body states represented in real space by the Schmidt decomposition. Quantum computers, as demonstrated in this study, have great potential to simulate many-body electron systems and to shed light on their quantum information. We believe that our proposed method will contribute to realizing first-quantized simulation of electron dynamics and will bring a deeper understanding of electron systems to materials science in the FTQC era.
\nocite{*}
\end{document}
\begin{document} \begin{abstract} The purpose of this article is to shed light on the question of how many and what kind of cusps a rational cuspidal curve on a Hirzebruch surface can have. We use birational transformations to construct rational cuspidal curves with four cusps on the Hirzebruch surfaces and find associated results for these curves. \end{abstract} \title{Rational cuspidal curves with four cusps on Hirzebruch surfaces} \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} Let $C$ be a reduced and irreducible curve on a smooth complex surface $X$. A point $p$ on $C$ is called a \emph{cusp} if it is singular and if the germ $(C,p)$ of $C$ at $p$ is irreducible. A curve $C$ is called \emph{cuspidal} if all of its singularities are cusps, and $j$-cuspidal if it has $j$ cusps $p_1, \ldots, p_j$. Let $g$ denote the geometric genus of the curve, and recall that the curve is called \emph{rational} if $g=0$. If two curves $C$ and $C'$ without common components meet at a point $p$, then $p$ is called an \emph{intersection point} of $C$ and $C'$. If $f$ and $f'$ are local equations of $C$ and $C'$ at $p$, then by the \emph{intersection multiplicity} $(C \cdot C')_p$ of $C$ and $C'$ at $p$ we mean $$(C \cdot C')_p := \dim_{\ensuremath{\mathbb{C}}}\ensuremath{\mathscr{O}}_{X,p}/(f,f').$$ We sometimes view the intersection of $C$ and $C'$ as a $0$-cycle, and express this by the notation $C \cdot C'$, where $$C \cdot C' = \sum _{p \in C \cap C'}(C \cdot C')_pp.$$ For any two divisors $C$ and $C'$ on $X$, we calculate the \emph{intersection number} $C \ensuremath{\,.\,} C'$ using linear equivalence and the pairing ${\mathrm{Pic}(X) \times \mathrm{Pic}(X) \rightarrow \ensuremath{\mathbb{Z}}}$ \cite[Theorem V 1.1, pp.357--358]{Hart:1977}.
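As a concrete illustration of the local definition (our example, standard in the literature rather than taken from this article), consider a parabola and its tangent line in the affine plane:

```latex
% Example: X = \mathbb{C}^2, C = \mathscr{V}(y) and C' = \mathscr{V}(y - x^2),
% meeting only at p = (0,0), where C' is tangent to C.
\[
  (C \cdot C')_p
  = \dim_{\ensuremath{\mathbb{C}}} \ensuremath{\mathscr{O}}_{X,p}/(y,\, y - x^2)
  = \dim_{\ensuremath{\mathbb{C}}} \ensuremath{\mathbb{C}}[x]_{(x)}/(x^2)
  = 2,
\]
% so, as 0-cycles, C \cdot C' = 2p: the multiplicity 2 detects the tangency.
```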
By \cite[Theorem V 3.9, p.391]{Hart:1977}, there exists for any curve $C$ on a surface $X$ a sequence of $t$ monoidal transformations, $$V=V_t \xrightarrow{\sigma_t} V_{t-1} \xrightarrow{} \cdots \xrightarrow{} V_1 \xrightarrow{\sigma_1} V_0=X,$$ such that the reduced total inverse image of $C$ under the composition $\sigma:V \rightarrow X$, $$D := \sigma^{-1}(C)_{\mathrm{red}},$$ is a \emph{simple normal crossing divisor} (SNC-divisor) on the smooth complete surface $V$ (see \cite{Iitaka}). The pair $(V,D)$ and the transformation $\sigma$ are referred to as an \emph{embedded resolution} of $C$, and it is called a \emph{minimal embedded resolution} of $C$ when $t$ is the smallest integer such that $D$ is an SNC-divisor. Let $p$ be a cusp on a curve $C$, let $m$ denote the \emph{multiplicity} of $p$, and let $m_i$ denote the multiplicity of the infinitely near points $p_i$ of $p$. Then the \emph{multiplicity sequence} $\overline{m}$ of the cusp $p$ is defined to be the sequence of integers $$\overline{m}=[m,m_1,\ldots,m_{t-1}],$$ where $t$ is the number of monoidal transformations in the local minimal embedded resolution of the cusp, and we have $m_{t-1}=1$ (see \cite{Brieskorn}). We follow the convention of compacting the notation by omitting the number of ending 1s in the sequence and indexing the number of repeated elements; for example, we write $$[6,6,3,3,3,2,1,1]=[6_2,3_3,2].$$ Note that there are further relations between the elements in the sequence (see \cite{FlZa95}). The collection of multiplicity sequences of a cuspidal curve will be referred to as its \emph{cuspidal configuration}. The question of how many and what kind of cusps a rational cuspidal curve on a given surface can have naturally arises, and in this article we give new answers to the last part of this question for rational cuspidal curves on Hirzebruch surfaces. The first part of the question is addressed in a subsequent article \cite{MOEONC}.
Our main results can be summarized in the following theorem. \begin{thm*} For all $e\geq0$ and $h \in \{0,1\}$, except the pairs $(e,k)=(0,0)$ and $(h,k)=(0,0)$, the following rational cuspidal curves with four cusps exist on $\ensuremath{\mathbb{F}_e}$ and $\ensuremath{\mathbb{F}_h}$. \begin{table}[H] \renewcommand\thesubtable{} \setlength{\extrarowheight}{2pt} \centering {\noindent \begin{tabular*}{1\linewidth}{@{}c@{\extracolsep{\fill}}l@{}l@{}c} \toprule {\textnormal{\textbf{Type}}} & {\textnormal{\textbf{Cuspidal configuration}}} & \textnormal{\textbf{For}} & \textnormal{\textbf{Surface}}\\ \toprule {$(2k+1,4)$} & $[4_{k-1+e},2_3],[2],[2],[2]$& $k \geq 0$ & $\ensuremath{\mathbb{F}_e}$\\ {$(3k+1-h,5)$} & $[4_{2k-1+h},2_3],[2],[2],[2]$& $k \geq 0$ & $\ensuremath{\mathbb{F}_h}$ \\ {$(2k+2-h,4)$} & $[3_{2k-1+h},2],[2_3],[2],[2]$ & $k \geq 0$ &$\ensuremath{\mathbb{F}_h}$ \\ {$(k+1-h,3)$} & $[2_{n_1}],[2_{n_2}],[2_{n_3}],[2_{n_4}]$& $k \geq 2$ and $\sum n_i = 2k+h$ & $\ensuremath{\mathbb{F}_h}$\\ {$(0,3)$} & $[2],[2],[2],[2]$& & $\mathbb{F}_2$\\ \bottomrule \end{tabular*}} \label{tab:fourcusps} \end{table} \end{thm*} The lack of examples of curves with more than four cusps leads us to the following conjecture. \begin{conj*} A rational cuspidal curve on a Hirzebruch surface has at most four cusps. \end{conj*} Associated to these curves, we find results regarding the Euler characteristic of the logarithmic tangent sheaf $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})$ and the maximal multiplicity of a cusp on a rational cuspidal curve. Moreover, we give an example of a real rational cuspidal curve with four real cusps on a Hirzebruch surface. \subsection{Structure} In Section \ref{MN} we motivate the study of rational cuspidal curves on Hirzebruch surfaces by recalling the history of the study of such curves on the projective plane. In Section \ref{PR} we give the basic definitions and preliminary results for (rational) cuspidal curves on Hirzebruch surfaces.
Section \ref{4C} contains the main results of this article. Here we construct several series of rational cuspidal curves with four cusps on Hirzebruch surfaces. In Section \ref{AR} we present associated results, and in particular we compute $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})$ for the curves we have found in Section \ref{4C}. Moreover, we find a lower bound for the multiplicity of the cusp with the highest multiplicity. Additionally, we construct a real rational cuspidal curve with four real cusps. In the appendix we construct some of the rational cuspidal curves with four cusps on Hirzebruch surfaces that were found in Section \ref{4C}. \section{Motivation: the case of plane curves}\label{MN} Let $\ensuremath{\mathbb{P}^2}$ denote the projective plane with coordinates $(x:y:z)$ and coordinate ring $\ensuremath{\mathbb{C}}[x,y,z]$. A reduced and irreducible curve $C$ on $\ensuremath{\mathbb{P}^2}$ is given as the zero set $\ensuremath{\mathscr{V}}(F)$ of a homogeneous, reduced and irreducible polynomial $F(x,y,z) \in \ensuremath{\mathbb{C}}[x,y,z]_d$ for some $d$. In this case, the polynomial $F$ and the curve $C$ are said to have degree $d$. Let $q$ be a smooth point on a plane curve $C$ with tangent $T$. Recall that the point $q$ is called an \emph{inflection point} if the local intersection multiplicity $(C \cdot T)_q$ of $C$ and $T$ at $q$ is $\geq 3$. Plane rational cuspidal curves have been studied quite intensively, both classically and over the last 20 years. Classically, the study was part of the process of classifying plane curves, and additionally bounds on the number of cusps were produced (see \cite{Clebsch, Lefschetz, Salmon, Telling, Veronese, Wieleitner}). In the modern context, rational cuspidal curves with many cusps play an important role in the study of open surfaces (see \cite{Fent, FlZa94, Wak}), and the study of these curves was further motivated in the mid 1990s by Sakai in \cite{Sakai}.
The two tasks at hand were first to classify all rational and elliptic cuspidal plane curves, and second to find the maximal number of cusps on a rational cuspidal plane curve. The first complete classification of rational cuspidal curves of degree $d \leq 5$ that we have found, up to cuspidal configuration, is by Namba in \cite{Namba}, and a list of the curves of degree $d=5$ can be found in Table \ref{tab:degree5} \cite[Theorem 2.3.10, pp.179--182]{Namba}. The subclassification of the curves is due to the appearance of inflection points. The remarkable thing about this list of curves is that there are fewer curves with many cusps than expected. In fact, for curves of degree $d \leq 5$ there are only three curves with three cusps and only one with four cusps. For curves in higher degrees, less is known, but there is a classification of curves of degree $d=6$ by Fenske in \cite{Fent}, and in that case the highest number of cusps is three, and up to cuspidal configuration there are only two such curves. 
\begin{table}[htb] \renewcommand\thesubtable{} \setlength{\extrarowheight}{2pt} \centering {\small {\noindent \begin{tabular*}{\linewidth}{@{}c@{\extracolsep{\fill}}l@{}c} \toprule {\textbf{Curve}} &{\textbf{Cuspidal conf.}}& {\textbf{Parametrization}}\\ \toprule $C_{1A}$&$\quad[4]$& $(s^5:st^4:t^5)$\\ $C_{1B}$&$\quad[4]$&$(s^5-s^4t:st^4:t^5)$\\ $C_{1C}$&$\quad[4]$&$(s^5+as^4t-(1+a)s^2t^3:st^4:t^5), \; a\neq-1$\\ $C_2$&$\quad [2_6]$&$(s^4t:s^2t^3-s^5:t^5-2s^3t^2)$\\ \midrule $C_{3A}$&$\quad [3,2],[2_2]$ &$(s^5:s^3t^2:t^5)$\\ $C_{3B}$&$\quad [3,2],[2_2]$ &$(s^5:s^3t^2:st^4+t^5)$\\ $C_4$&$\quad [3], [2_3]$&$(s^4t-\frac{1}{2}s^5:s^3t^2:\frac{1}{2}st^4+t^5)$\\ $C_5$&$\quad [2_4],[2_2]$&$(s^4t-s^5:s^2t^3-\frac{5}{32}s^5:-\frac{47}{128}s^5+\frac{11}{16}s^3t^2+st^4+t^5)$\\ \midrule $C_6$&$\quad [3],[2_2],[2]$&$(s^4t-\frac{1}{2}s^5:s^3t^2:-\frac{3}{2}st^4+t^5)$\\ $C_7$&$\quad [2_2],[2_2],[2_2]$&$(s^4t-s^5:s^2t^3-\frac{5}{32}s^5:-\frac{125}{128}s^5-\frac{25}{16}s^3t^2-5st^4+t^5)$\\ \midrule $C_8$&$\quad [2_3],[2],[2],[2]$&$(s^4t:s^2t^3-s^5:t^5+2s^3t^2)$\\ \bottomrule \end{tabular*}} } \caption[Plane rational cuspidal curves of degree five.]{Plane rational cuspidal curves of degree five.} \label{tab:degree5} \end{table} In two articles from the 1990s Flenner and Zaidenberg construct two series of rational cuspidal curves with three cusps, where one cusp has a relatively high multiplicity \cite{FlZa95, FlZa97}. A third series was found and further exploration of such curves was done by Fenske in \cite{Fen99a,Fent}. The known rational cuspidal curves with three or more cusps can be listed as follows. \begin{enumerate}[$(I)$] \item For $d=5$, a rational cuspidal curve with three or more cusps has one of these cuspidal configurations \cite[Theorem 2.3.10, pp.179--182]{Namba}: \begin{align*} &[3],[2_2],[2],\\ &[2_2],[2_2],[2_2],\\ &[2_3],[2],[2],[2]. 
\end{align*} \item For any $a\geq b \geq 1$ there exists a rational cuspidal plane curve $C$ of degree $d=a+b+2$ with three cusps \cite[Theorem 3.5, p.448]{FlZa95}: \begin{equation*} [d-2],[2_a],[2_b]. \end{equation*} \item For any $a \geq 1$, there exists a rational cuspidal plane curve $C$ of degree $d=2a+3$ with three cusps \cite[Theorem 1.1, p.94]{FlZa97}: \begin{equation*} [d-3,2_a],[3_a],[2]. \end{equation*} \item For any $a \geq 1$, there exists a rational cuspidal plane curve $C$ of degree $d=3a+4$ with three cusps \cite[Theorem 1.1, p.512]{Fen99a}: \begin{equation*} [d-4,3_a],[4_a,2_2],[2]. \end{equation*} \end{enumerate} All these curves are constructed explicitly by successive Cremona transformations of plane curves of low degree. In cases $(II)$ and $(III)$ it is proved by Flenner and Zaidenberg in \cite{FlZa95,FlZa97} that these are the only tricuspidal curves with maximal multiplicity of this kind. The same is proved by Fenske in \cite{Fen99a} for case $(IV)$ under the assumption that $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})=0$. Note that this result was originally proved with $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})\leq0$, but by a result of Tono in \cite{Tono05} only $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})=0$ is needed. The curves in $(II)$, $(III)$ and $(IV)$ constitute three series of cuspidal curves with three cusps, with infinitely many curves in each series. The lack of examples of curves with more than four cusps leads to a conjecture originally proposed by Orevkov. \begin{conj} A plane rational cuspidal curve cannot have more than four cusps. \end{conj} Further research by Fenske in \cite[Section 5]{Fent} and Piontkowski in \cite{Piontkowski} implies that there are no other tricuspidal curves, hence the above conjecture was extended \cite{Piontkowski}. \begin{conj} A plane rational cuspidal curve cannot have more than four cusps.
If it has three or more cusps, then it is given in the above list. \end{conj} Associated to these results, in \cite{FlZa94} Flenner and Zaidenberg conjectured that $\ensuremath{\mathbb{Q}}$-acyclic affine surfaces $Y$ with logarithmic Kodaira dimension $\overline{\kappa}(Y)= 2$ are \emph{rigid} and \emph{unobstructed}. Complements of rational cuspidal curves with three or more cusps on the projective plane are examples of such surfaces. In that case, the conjecture says that for $(V,D)$ the minimal embedded resolution of the plane curve, the Euler characteristic of the logarithmic tangent sheaf vanishes, $$\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})=\mathrm{h}^2(V,\ensuremath{\Theta_V \left\langle D \right\rangle})-\mathrm{h}^1(V,\ensuremath{\Theta_V \left\langle D \right\rangle})+\mathrm{h}^0(V,\ensuremath{\Theta_V \left\langle D \right\rangle})=0,$$ and in particular, $\mathrm{h}^1(V,\ensuremath{\Theta_V \left\langle D \right\rangle})=\mathrm{h}^2(V,\ensuremath{\Theta_V \left\langle D \right\rangle})=0$. Note that it is known by \cite[Theorem 6]{Iit} that $\mathrm{h}^0(V,\ensuremath{\Theta_V \left\langle D \right\rangle})=0$. This conjecture is referred to as the \emph{rigidity conjecture}. In the study of plane rational cuspidal curves, results have, moreover, been found that bound the multiplicity of the cusp with the highest multiplicity. This gives restrictions on the possible cuspidal configurations of such a curve. The first result on this matter is by Matsuoka and Sakai in \cite{MatsuokaSakai}, where it is shown that $$d<3\hat{m},$$ where, with $m_j$ denoting the multiplicity of the cusp $p_j$, $\hat{m}:=\max\{m_j\}$.
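For orientation, the bound $d<3\hat{m}$ is easily checked against the degree-5 curves of Table \ref{tab:degree5}, whose maximal cusp multiplicities are $\hat{m}\in\{4,3,2\}$; a small illustrative check (our code, with a function name of our choosing):

```python
def matsuoka_sakai_holds(d, m_hat):
    """Check the Matsuoka-Sakai bound d < 3*m_hat for a plane rational
    cuspidal curve of degree d with maximal cusp multiplicity m_hat
    (illustrative helper, not from the article)."""
    return d < 3 * m_hat

# The degree-5 rational cuspidal curves of Table degree5 have
# maximal multiplicities 4 ([4]), 3 ([3,...]) and 2 ([2,...]).
assert all(matsuoka_sakai_holds(5, m) for m in (4, 3, 2))
```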
Improved inequalities were found by Orevkov in \cite{Ore}. For $\alpha=\frac{3 + \sqrt{5}}{2}$, the bound is improved in general to $$d < \alpha (\hat{m}+1)+\frac{1}{\sqrt{5}}.$$ For curves with $\ol{\kappa}(\Po \setminus C)=-\infty$, this is further improved to $d < \alpha \hat{m}$, and for curves with $\ol{\kappa}(\Po \setminus C)=2$, the bound is $$d < \alpha (\hat{m}+1)-\frac{1}{\sqrt{5}}.$$ Note also the result by Tono in \cite{Tono05} on the number of cusps on a plane cuspidal curve. For plane \emph{rational} cuspidal curves, the upper bound on the number of cusps is 8. We deal with this question for cuspidal curves on Hirzebruch surfaces in a subsequent article \cite{MOEONC}. \section{Notation and preliminary results}\label{PR} In this section we recall what we mean by a curve on a Hirzebruch surface and state preliminary results that give restrictions on the cuspidal configurations of rational cuspidal curves in this case. We first recall some basic facts about the Hirzebruch surfaces. Let $\mathbb{F}_e$ denote the Hirzebruch surface of type $e$ for any $e \geq 0$. Recall that $\ensuremath{\mathbb{F}_e}$ is a projective ruled surface, with $\ensuremath{\mathbb{F}_e}=\mathbb{P}(\ensuremath{\mathscr{O}} \oplus \ensuremath{\mathscr{O}}(-e))$ and morphism $\pi: \mathbb{F}_e \longrightarrow \ensuremath{\mathbb{P}^1}$. Moreover, $p_a(\mathbb{F}_e)=0$ and $p_g(\mathbb{F}_e)=0$ \cite[Corollary V 2.5, p.371]{Hart:1977}. For any $e \geq 0$, the surface $\ensuremath{\mathbb{F}_e}$ can be considered as the toric variety associated to a fan $\Sigma_e \subset \ensuremath{\mathbb{Z}}^2$, where the rays of the fan $\Sigma_e$ are generated by the vectors \begin{displaymath} v_1=\begin{bmatrix}1\\0 \end{bmatrix}, \quad v_2=\begin{bmatrix}0\\1 \end{bmatrix}, \quad v_3=\begin{bmatrix}-1\\e \end{bmatrix}, \quad v_4=\begin{bmatrix}0 \\-1 \end{bmatrix}.
\end{displaymath} The coordinate ring of $\ensuremath{\mathbb{F}_e}$ (see \cite{COX}) is denoted by $S_e := \mathbb{C}[x_{0},x_{1},y_{0},y_{1}]$, where the variables can be given a grading by $\mathbb{Z}^{2}$, \begin{eqnarray*} \deg x_0 & = & (1,0),\\ \deg x_1 & = & (1,0),\\ \deg y_0 & = & (0,1),\\ \deg y_1 & = & (-e,1). \end{eqnarray*} Let $S_e(a,b)$ denote the $(a,b)$-graded part of $S_e$, \begin{displaymath} S_e(a,b):=\mathrm{H}^0(\ensuremath{\mathbb{F}_e},\ensuremath{\mathscr{O}}_{\ensuremath{\mathbb{F}_e}}(a,b)) = \bigoplus_{\substack{\alpha_{0}+\alpha_{1}-e\beta_{1} = a \\ \beta_{0}+\beta_{1} = b}} \mathbb{C}x_{0}^{\alpha_{0}}x_{1}^{\alpha_{1}}y_{0}^{\beta_{0}}y_{1}^{\beta_{1}}. \end{displaymath} A reduced and irreducible curve $C$ on $\ensuremath{\mathbb{F}_e}$ is given as the zero set $\ensuremath{\mathscr{V}}(F)$ of a reduced and irreducible polynomial $F(x_0,x_1,y_0,y_1) \in S_e(a,b)$. In this case, the polynomial $F$ is said to have bidegree $(a,b)$ and the curve $C$ is said to be of type $(a,b)$. In the language of divisors, let $L$ be a \emph{fiber} of $\pi: \mathbb{F}_e \longrightarrow \ensuremath{\mathbb{P}^1}$ and $M_0$ the \emph{special section} of $\pi$. The Picard group of $\ensuremath{\mathbb{F}_e}$, $\mathrm{Pic}(\mathbb{F}_e)$, is isomorphic to $\ensuremath{\mathbb{Z}} \oplus \ensuremath{\mathbb{Z}}$. 
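The graded pieces $S_e(a,b)$ introduced above can be enumerated directly from the constraints $\alpha_0+\alpha_1-e\beta_1=a$ and $\beta_0+\beta_1=b$; a minimal sketch (our code, with a hypothetical function name):

```python
def dim_Se(e, a, b):
    """Dimension of the (a, b)-graded piece S_e(a, b): count monomials
    x0^a0 x1^a1 y0^b0 y1^b1 with a0 + a1 - e*b1 == a and b0 + b1 == b,
    all exponents nonnegative."""
    count = 0
    for b1 in range(b + 1):      # b0 = b - b1 is then determined
        d = a + e * b1           # required total x-degree a0 + a1
        if d >= 0:
            count += d + 1       # choices of (a0, a1) with a0 + a1 = d
    return count

# On F_0 = P^1 x P^1 this recovers h^0(O(a, b)) = (a + 1)(b + 1):
assert dim_Se(0, 3, 4) == 4 * 5
```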
We choose $L$ and $M \sim eL+M_0$ as a generating set of ${\rm{Pic}}(\ensuremath{\mathbb{F}_e})$, and then we have \cite[Theorem V 2.17, p.379]{Hart:1977} $$L^2=0, \qquad L \ensuremath{\,.\,} M=1, \qquad M^2=e.$$ The canonical divisor $K$ on $\ensuremath{\mathbb{F}_e}$ can be expressed as \cite[Corollary V 2.11, p.374]{Hart:1977} $$K \sim (e-2)L-2M \; \text{ and }\; K^2=8.$$ Any irreducible curve $C\neq L,M_0$ corresponds to a divisor on $\mathbb{F}_e$ given by \cite[Proposition V 2.20, p.382]{Hart:1977} $$C \sim aL+bM,\quad b>0, \,a\geq 0.$$ Recall that there are \emph{birational transformations} between these surfaces and the projective plane, and these can be given in a quite explicit way (see \cite{MOEPHD}). Using the birational transformations, we are able to construct curves on one surface from a curve on another surface by taking the strict transform. A first result concerning cuspidal curves on $\mathbb{F}_e$ regards the genus $g$ of the curve. \begin{cor}[Genus formula]\label{genusfe} A cuspidal curve $C$ of type $(a,b)$ with cusps $p_j$, for $j=1,\ldots,s$, and multiplicity sequences $\overline{m}_j=[m_0,m_1, \ldots, m_{t_j-1}]$ on the Hirzebruch surface $\mathbb{F}_e$ has genus $g$, where $$g=\frac{(b-1)(2a-2+be)}{2}-\sum_{j=1}^s \sum_{i=0}^{t_j-1} \frac{m_i(m_i-1)}{2}.$$ \end{cor} \begin{proof} Recall that $C \sim aL+bM$, $K \sim (e-2)L-2M$, $L^2=0$, $L \ensuremath{\,.\,} M = 1$ and $M^2=e$. By the general genus formula \cite[Example V 3.9.2, p.393]{Hart:1977}, we have $$g=\frac{(aL+bM) \ensuremath{\,.\,} (aL+bM+(e-2)L-2M)}{2}+1-\sum_{j=1}^s \delta_j,$$ where $\delta_j$ is the \emph{delta invariant} of the cusp $p_j$. This gives \begin{align*} g&=\frac{b^2e-2be+ab+be-2a+ab-2b}{2}+1-\sum_{j=1}^s \sum_{i=0}^{t_j-1} \frac{m_i(m_i-1)}{2}\\ &=\frac{(b-1)(2a-2+be)}{2}-\sum_{j=1}^s \sum_{i=0}^{t_j-1} \frac{m_i(m_i-1)}{2}.
\end{align*} \end{proof} Secondly, the structure of the Hirzebruch surfaces gives restrictions on the multiplicity sequence of a cusp on a curve on such a surface. \begin{thm} \label{thm:multb} Let $p$ be a cusp on a reduced and irreducible curve $C$ of type $(a,b)$ with multiplicity sequence $\overline{m}=[m,m_1,\ldots, m_{t-1}]$ on a Hirzebruch surface $\ensuremath{\mathbb{F}_e}$. Then $m \leq b$. \end{thm} \begin{proof} The coordinates of the point $p$ determine a unique fiber $L$. By intersection theory, $m \leq (L \cdot C)_p$ \cite[Exercise I 5.4, p.36]{Hart:1977}. Moreover, $(L \cdot C)_p \leq L \ensuremath{\,.\,} C$. By intersection theory again, $L \ensuremath{\,.\,} C = b$. Hence, $m \leq (L \cdot C)_p \leq L \ensuremath{\,.\,} C = b$. \end{proof} Further restrictions on the type of points on a curve on $\ensuremath{\mathbb{F}_e}$ can be found using Hurwitz's theorem \cite[Corollary IV 2.4, p.301]{Hart:1977}. First, the general result in this situation. \begin{thm}[Hurwitz's theorem for $\ensuremath{\mathbb{F}_e}$]\label{RHFe} Let $C$ be a curve of genus $g$ and type $(a,b)$, where $b > 0$, on a Hirzebruch surface $\ensuremath{\mathbb{F}_e}$ with $e>0$. Let $\tilde{C}$ denote the normalization of $C$, and let $\nu$ be the composition of the normalization map $\tilde{C} \rightarrow C$ and the projection map $C \rightarrow \ensuremath{\mathbb{P}^1}$ of degree $b$. Let $e_p$ denote the ramification index of a point $p \in \tilde{C}$ with respect to $\nu$. Then the following equality holds, $$2b+2g-2 = \sum_{p \in \tilde{C}} (e_p-1).$$ When $e=0$, for curves $C$ of genus $g$ and type $(a,b)$, with $a,b>0$, we have $$2\min\{a,b\}+2g-2 = \sum_{p \in \tilde{C}} (e_p-1).$$ \end{thm} \begin{proof} The result follows from \cite[Corollary IV 2.4, p.301]{Hart:1977}. With $\nu$ as above, we get $$2b+2g-2 = \sum_{p \in \tilde{C}} (e_p-1).$$ When $e=0$, use the projection map of lower degree, i.e., $\min\{a,b\}$. 
\end{proof} An immediate corollary gives restrictions on the multiplicities of cusps on a curve. \begin{cor}\label{RHM} Let $C$ be a cuspidal curve of type $(a,b)$ and genus $g$ with $s>0$ cusps $p_j$ with multiplicities $m_j$ on a Hirzebruch surface $\ensuremath{\mathbb{F}_e}$ with $e>0$. Then the following inequality holds, $$2b+2g-2 \geq \sum_{j=1}^s (m_j-1).$$ When $e=0$, the same is true with $\min\{a,b\}$ instead of $b$. \end{cor} \begin{proof} The cusps $p_j$ of $C$ give branch points with ramification index greater than or equal to the multiplicity $m_j$, so the result follows from Theorem \ref{RHFe}. \end{proof} \section{Rational cuspidal curves with four cusps}\label{4C} In this section we give examples of rational cuspidal curves with four cusps on Hirzebruch surfaces, and our aim is to shed light on the question of how many and what kind of cusps a rational cuspidal curve on a Hirzebruch surface can have. We are not able to construct many rational cuspidal curves with four cusps on the Hirzebruch surfaces. Indeed, on each $\ensuremath{\mathbb{F}_e}$, with $e \geq 0$, we construct one infinite series of rational cuspidal curves with four cusps. On $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ and $\ensuremath{\mathbb{F}_1}$ we construct another three infinite series of rational cuspidal curves with four cusps, and on $\mathbb{F}_2$ we construct a single additional rational cuspidal curve with four cusps. The following theorem presents the series of rational fourcuspidal curves that consists of curves on all the Hirzebruch surfaces. In Appendix \ref{Appendix:A} we construct some of the curves from a plane fourcuspidal curve using birational maps and \texttt{Maple} \cite{Maple}.
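As a consistency check (our code, with hypothetical helper names), the cuspidal configurations appearing below can be tested against the genus formula of Corollary \ref{genusfe} and the inequality of Corollary \ref{RHM}; for instance, for the curves of type $(1,4)$ on $\ensuremath{\mathbb{F}_e}$ with cuspidal configuration $[4_{e-1},2_3],[2],[2],[2]$:

```python
def delta(ms):
    """Delta invariant of a cusp with multiplicity sequence ms."""
    return sum(m * (m - 1) // 2 for m in ms)

def genus(e, a, b, cusps):
    """Genus formula of Corollary genusfe for a cuspidal curve of type
    (a, b) on the Hirzebruch surface F_e; cusps is a list of
    multiplicity sequences (our helper, not from the article)."""
    return (b - 1) * (2 * a - 2 + b * e) // 2 - sum(delta(c) for c in cusps)

# Type (1,4) curves with configuration [4_{e-1}, 2_3], [2], [2], [2]
# are rational for every e >= 1, and the Hurwitz-type inequality
# 2b + 2g - 2 >= sum (m_j - 1) of Corollary RHM is satisfied:
for e in range(1, 8):
    cusps = [[4] * (e - 1) + [2, 2, 2], [2], [2], [2]]
    g = genus(e, 1, 4, cusps)
    assert g == 0
    assert 2 * 4 + 2 * g - 2 >= sum(c[0] - 1 for c in cusps)
```

For $e \geq 2$ the inequality of Corollary \ref{RHM} in fact holds with equality, since $\sum_j (m_j-1) = 3+1+1+1 = 6 = 2b-2$.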
\begin{thm} \label{thm:4cusp} For all $e \geq 0$ and $k \geq 0$, except for the pair ${(e,k)=(0,0)}$, there exists on the Hirzebruch surface $\ensuremath{\mathbb{F}_e}$ a rational cuspidal curve $C_{e,k}$ of type $(2k+1,4)$ with four cusps and cuspidal configuration $$[4_{k-1+e},2_3],[2],[2],[2].$$ \end{thm} \begin{proof} We will show that for each $e \geq 0$ there is an infinite series of curves on $\ensuremath{\mathbb{F}_e}$, and we show this by induction on $k$. The proof is split into two parts, and we treat the cases of odd and even $k$ separately. We construct the series of curves $C_{e,0}$ for $e \geq 1$, and then we construct the initial series $C_{e,1}$ and $C_{e,2}$, with $e \geq0$. We only treat the induction to prove the existence of $C_{e,k}$ for odd values of $k$, as the proof for even values of $k$ is completely parallel. Let $C$ be the rational cuspidal curve of degree 5 on $\ensuremath{\mathbb{P}^2}$ with cuspidal configuration $[2_3],[2],[2],[2]$. Let $p$ be the cusp with multiplicity sequence $[2_3]$, and let $T$ be the tangent line to $C$ at $p$. Then $T \cdot C=4 p+r$, with $r$ a smooth point on $C$. Blowing up at $r$, the strict transform of $C$ is a curve $C_{1,0}$ of type $(1,4)$ on $\ensuremath{\mathbb{F}_1}$ with cuspidal configuration $[2_3],[2],[2],[2]$. Letting $T_{1,0}$ denote the strict transform of $T$ and $p_{1,0}$ the strict transform of $p$, we have ${T_{1,0} \cdot C_{1,0}=4 p_{1,0}}$. We observe that $p_{1,0}$ is fiber tangential. Let $E_{1}$ denote the special section on $\ensuremath{\mathbb{F}_1}$, and let $s_{1,0}=E_{1} \cap T_{1,0}$. From $C_{1,0}$ we can proceed with the construction of curves on Hirzebruch surfaces in three ways. First, we show by induction on $e$ that the curves $C_{e,0}$ exist on the Hirzebruch surfaces $\ensuremath{\mathbb{F}_e}$, for all $e \geq 1$.
We have already seen that $C_{1,0}$ exists on $\ensuremath{\mathbb{F}_1}$, and that there exists a fiber $T_{1,0}$ with the property that ${T_{1,0} \cdot C_{1,0}=4 p_{1,0}}$ for the first cusp $p_{1,0}$. Now assume $e \geq 2$ and that the curve $C_{e-1,0}$ of type $(1,4)$ exists on $\mathbb{F}_{e-1}$ with cuspidal configuration $[4_{e-2},2_3],[2],[2],[2]$, where $p_{e-1,0}$ denotes the first cusp and $T_{e-1,0}$ has the property that ${T_{e-1,0} \cdot C_{e-1,0}=4 p_{e-1,0}}$. Then, with $E_{e-1}$ the special section of $\mathbb{F}_{e-1}$, blowing up at the intersection $s_{e-1,0} \in E_{e-1} \cap T_{e-1,0}$ and contracting $T_{e-1,0}$, we get $C_{e,0}$ on $\ensuremath{\mathbb{F}_e}$ of type $(1,4)$ with cuspidal configuration $[4_{e-1},2_3],[2],[2],[2]$. Moreover, we note that there exists a fiber $T_{e,0}$ with ${T_{e,0} \cdot C_{e,0}=4 p_{e,0}}$. So the series exists for all $e \geq 1$ when $k=0$. Second, note that from the curve $C_{1,0}$ on $\ensuremath{\mathbb{F}_1}$ it is possible to construct the curve $C_{0,1}$ on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ by blowing up at $p_{1,0}$ before contracting $T_{1,0}$. The curve $C_{0,1}$ is a curve of type $(3,4)$ with cuspidal configuration ${[2_3],[2],[2],[2]}$, and there is a fiber $T_{0,1}$ such that ${T_{0,1}\cdot C_{0,1}=4p_{0,1}}$. Blowing up at a point ${s_{0,1} \in T_{0,1} \setminus \{p_{0,1}\}}$ and contracting $T_{0,1}$ result in the curve $C_{1,1}$ on $\ensuremath{\mathbb{F}_1}$ of type $(3,4)$ with cuspidal configuration $[4,2_3],[2],[2],[2]$. Moreover, there exists a fiber $T_{1,1}$ with ${T_{1,1} \cdot C_{1,1}=4 p_{1,1}}$ and $p_{1,1} \notin E_1$. The same induction on $e$ as above proves that the series exists for $k=1$.
Third, note that from the curve $C_{1,0}$ on $\ensuremath{\mathbb{F}_1}$ it is possible to construct the curve $C_{0,2}$ on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ by blowing up at a point ${t_{1,0} \in T_{1,0} \setminus \{p_{1,0}, s_{1,0}\}}$ before contracting $T_{1,0}$. The curve $C_{0,2}$ is a curve of type $(5,4)$ with cuspidal configuration $[4,2_3],[2],[2],[2]$, and there is a fiber $T_{0,2}$ such that ${T_{0,2}\cdot C_{0,2}=4p_{0,2}}$. Blowing up at a point ${s_{0,2} \in T_{0,2} \setminus \{p_{0,2}\}}$ and contracting $T_{0,2}$ give the curve $C_{1,2}$ on $\ensuremath{\mathbb{F}_1}$ of type $(5,4)$ with cuspidal configuration ${[4_2,2_3],[2],[2],[2]}$. Moreover, there exists a fiber $T_{1,2}$ with ${T_{1,2} \cdot C_{1,2}=4 p_{1,2}}$ and $p_{1,2} \notin E_1$. The same induction on $e$ as above proves that the series exists for $k=2$. Next assume $k \geq 3$, with $k$ odd, and that there exists a series of curves $C_{e,k-2}$ of type $(2k-3,4)$ on $\ensuremath{\mathbb{F}_e}$ for all $e \geq 0$ with cuspidal configuration $[4_{e+k-3},2_3],[2],[2],[2]$. Then, in particular, the curve $C_{1,k-2}$ on $\ensuremath{\mathbb{F}_1}$ with cuspidal configuration \linebreak $[4_{k-2},2_3],[2],[2],[2]$ exists. Moreover, there exists a fiber $T_{1,k-2}$ on $\ensuremath{\mathbb{F}_1}$ such that ${T_{1,k-2} \cdot C_{1,k-2}=4 p_{1,k-2}}$, where $p_{1,k-2}$ denotes the cusp with multiplicity sequence $[4_{k-2},2_3]$. With $E_1$ the special section on $\ensuremath{\mathbb{F}_1}$, let ${s_{1,k-2} \in E_1 \cap T_{1,k-2}}$. We now blow up at a point $t_{1,k-2} \in T_{1,k-2} \setminus \{p_{1,k-2},s_{1,k-2}\}$ and subsequently contract $T_{1,k-2}$. This gives the curve $C_{0,k}$ on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ of type $(2k+1,4)$ with cuspidal configuration $[4_{k-1},2_3],[2],[2],[2]$. With $T_{0,k}$ the strict transform of the exceptional line of the latter blowing up, we have ${T_{0,k} \cdot C_{0,k}=4 p_{0,k}}$. 
Blowing up at a point $s_{0,k} \in T_{0,k} \setminus \{p_{0,k}\}$ and contracting $T_{0,k}$ gives the curve $C_{1,k}$ on $\ensuremath{\mathbb{F}_1}$ of type $(2k+1,4)$ with cuspidal configuration $[4_k,2_3],[2],[2],[2]$. Moreover, there is a fiber $T_{1,k}$ with the property that ${T_{1,k}\cdot C_{1,k}=4p_{1,k}}$. With the same induction on $e$ as above, we get the series of curves $C_{e,k}$. \end{proof} There are three infinite series of rational fourcuspidal curves that can be found on the Hirzebruch surfaces $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ and $\ensuremath{\mathbb{F}_1}$. Before we list these three series, we consider the rational cuspidal curves with four cusps on $\ensuremath{\mathbb{F}_1}$ that we can get by blowing up a single point on $\ensuremath{\mathbb{P}^2}$. These curves represent examples from the series. \begin{thm} Let $C$ be the rational cuspidal curve with four cusps of degree 5 on $\ensuremath{\mathbb{P}^2}$. The following rational cuspidal curves with four cusps on $\mathbb{F}_1$ can be constructed from $C$ by blowing up a single point on $\ensuremath{\mathbb{P}^2}$. \begin{table}[H] \renewcommand\thesubtable{} \setlength{\extrarowheight}{2pt} \centering {\noindent \begin{tabular*}{0.75\linewidth}{@{}c@{\extracolsep{\fill}}c@{}c@{}l} \toprule \textnormal{\textbf{\# Cusps}} & \textnormal{\textbf{Curve}} & \textnormal{\textbf{Type}} & \textnormal{\textbf{Cuspidal configuration}} \\ \toprule \multirow{3}{0pt}{4} & $C_1$ & $(0,5)$& $[2_3],[2],[2],[2]$ \\ &$C_2$ &$(1,4)$ & $[2_3],[2],[2],[2]$ \\ &$C_3$ &$(2,3)$ & $[2_2],[2],[2],[2]$ \\ \bottomrule \end{tabular*}} \caption {Rational cuspidal curves on $\mathbb{F}_1$ with four cusps.} \label{tab:prccw4} \end{table} \end{thm} \begin{proof} The curve $C_1$ is constructed by blowing up any point $r$ on $\ensuremath{\mathbb{P}^2} \setminus C$. Note that if $r$ is on the tangent line to a cusp on $C$, then $C_1$ has cusps that are fiber tangential. 
If $r$ is only on tangent lines of smooth points on $C$, then $C_1$ has smooth fiber tangential points. The curve $C_2$ is constructed by blowing up any smooth point $r$ on $C$. Again, if $r$ is on a tangent line of $C$, $C_2$ will have points that are fiber tangential. The curve $C_3$ is constructed by blowing up the cusp with multiplicity sequence $[2_3]$. \end{proof} \noindent The fact that we can construct curves that have nontransversal intersection with some fiber(s) is crucial in the later constructions. Although we do not get new cuspidal configurations in this first step, cusps can sometimes be constructed later on. We now give the three series of rational cuspidal curves with four cusps on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ and $\ensuremath{\mathbb{F}_1}$. For notational purposes we denote these surfaces by $\ensuremath{\mathbb{F}_h}$, with $h \in \{0,1\}$, in the theorems. \begin{thm} For $h\in\{0,1\}$ and all integers $k\geq 0$, except the pair $(h,k)=(0,0)$, there exists on the Hirzebruch surface $\ensuremath{\mathbb{F}_h}$ a rational cuspidal curve $C_{h,k}$ of type $(3k+1-h,5)$ with four cusps and cuspidal configuration $$[4_{2k-1+h},2_3],[2],[2],[2].$$ \end{thm} \begin{proof} The proof is by construction and induction on $k$. Let $C$ be a rational cuspidal curve of degree 5 on $\ensuremath{\mathbb{P}^2}$ with cuspidal configuration ${[2_3],[2],[2],[2]}$. Let $p$ be the cusp with multiplicity sequence $[2_3]$, and let $T$ be the tangent line to $C$ at $p$. There is a smooth point $r \in C$, such that $T \cdot C=4 p+r$. Blowing up at any point $t \in T \setminus \{p,r\}$, we get the curve $C_{1,0}$ of type $(0,5)$ and cuspidal configuration ${[2_3],[2],[2],[2]}$ on $\ensuremath{\mathbb{F}_1}$. Moreover, with $T_{1,0}$ the strict transform of $T$ and $p_{1,0}$, $r_{1,0}$ the strict transforms of the points $p$ and $r$, we have ${T_{1,0} \cdot C_{1,0}=4 p_{1,0}+r_{1,0}}$.
Now assume that the curve $C_{1,k-1}$ of type $(3(k-1),5)$ exists on $\ensuremath{\mathbb{F}_1}$ with cuspidal configuration $[4_{2(k-1)},2_3],[2],[2],[2]$, and the intersection $T_{1,k-1} \cdot C_{1,k-1}=4 p_{1,k-1}+r_{1,k-1}$ for a fiber $T_{1,k-1}$ and points as above. Then blowing up at $r_{1,k-1}$ and contracting $T_{1,k-1}$, we get a curve $C_{0,k}$ on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ of type $(3k+1,5)$ and cuspidal configuration $[4_{2k-1},2_3],[2],[2],[2]$. Moreover, there is a fiber $T_{0,k}$ with the property that $T_{0,k} \cdot C_{0,k}=4 p_{0,k}+r_{0,k}$. Blowing up at $r_{0,k}$ and contracting $T_{0,k}$, we get a rational cuspidal curve $C_{1,k}$ of type $(3k,5)$ on $\ensuremath{\mathbb{F}_1}$ with cuspidal configuration $[4_{2k},2_3],[2],[2],[2]$. \end{proof} \begin{thm} For $h\in\{0,1\}$ and all integers $k\geq 0$, except the pair $(h,k)=(0,0)$, there exists on the Hirzebruch surface $\ensuremath{\mathbb{F}_h}$ a rational cuspidal curve of type $(2k+2-h,4)$ with four cusps and cuspidal configuration $$[3_{2k-1+h},2],[2_3],[2],[2].$$ \end{thm} \begin{proof} The proof is by construction and induction on $k$. Let $C$ be the rational cuspidal curve of degree 5 on $\ensuremath{\mathbb{P}^2}$ with cuspidal configuration ${[2_3],[2],[2],[2]}$. Let $q$ be one of the cusps with multiplicity sequence $[2]$, and let $T$ be the tangent line to $C$ at $q$. Then there are smooth points $r,s \in C$, such that ${T \cdot C=3 q+r+s}$. Blowing up at $s$, we get the curve $C_{1,0}$ of type $(1,4)$ and cuspidal configuration $[2_3],[2],[2],[2]$ on $\ensuremath{\mathbb{F}_1}$. Moreover, with $T_{1,0}$ the strict transform of $T$ and $p_{1,0}$, $r_{1,0}$ the strict transforms of the points $q$ and $r$, we have $T_{1,0} \cdot C_{1,0}=3 p_{1,0}+r_{1,0}$.
Now assume that the curve $C_{1,k-1}$ of type $(2k-1,4)$ exists on $\ensuremath{\mathbb{F}_1}$ with cuspidal configuration $[3_{2k-2},2],[2_3],[2],[2]$, and the intersection $T_{1,k-1} \cdot C_{1,k-1}=3 p_{1,k-1}+r_{1,k-1}$ for a fiber $T_{1,k-1}$ and points as above. Then blowing up at $r_{1,k-1}$ and contracting $T_{1,k-1}$, we get a curve $C_{0,k}$ on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ of type $(2k+2,4)$ and cuspidal configuration $[3_{2k-1},2],[2_3],[2],[2]$. Moreover, there is a fiber $T_{0,k}$ with the property that $T_{0,k} \cdot C_{0,k}=3 p_{0,k}+r_{0,k}$. Blowing up at $r_{0,k}$ and contracting $T_{0,k}$, we get a rational cuspidal curve $C_{1,k}$ of type $(2k+1,4)$ on $\ensuremath{\mathbb{F}_1}$ with cuspidal configuration $[3_{2k},2],[2_3],[2],[2]$. \end{proof} \begin{thm}\label{2erne} For $h\in\{0,1\}$, all integers $k \geq 2$, and every choice of $n_j \in \mathbb{N}$, with $j=1,\ldots,4$, such that $\sum_{j=1}^4 n_j=2k+h$, there exists on the Hirzebruch surface $\ensuremath{\mathbb{F}_h}$ a rational cuspidal curve of type $(k+1-h,3)$ with four cusps and cuspidal configuration $$[2_{n_1}],[2_{n_2}],[2_{n_3}],[2_{n_4}].$$ \end{thm} \begin{proof} We prove the existence of the curves on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ by induction on $k$. In the proof we show that any curve on $\ensuremath{\mathbb{F}_1}$ can be reached from a curve on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ by an elementary transformation; hence we prove the theorem for $h\in \{0,1\}$. First we observe that for a choice of $n_j$ with $\sum_{j=1}^4 n_j=2k$, either all four $n_j$ are odd, two are odd and two are even, or all four are even. We split the proof into these three cases, and prove only the first case completely. The other two can be dealt with in the same way once we have proved the existence of a first curve. We now prove the theorem when all $n_j$ are odd.
Let $C$ be a rational cuspidal curve on $\ensuremath{\mathbb{P}^2}$ of degree 4 with three cusps and cuspidal configuration ${[2],[2],[2]}$ for cusps $p_j$, $j=1,2,3$. Let $p_4$ be a general smooth point on $C$ and let $T$ be the tangent line to $C$ at $p_4$. Then ${T \cdot C = 2 p_4+t_1+t_2}$, where $t_1,t_2$ are two smooth points on $C$. Blowing up at $t_1$ and $t_2$ and contracting $T$, we get a rational cuspidal curve on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ of type $(3,3)$ with four ordinary cusps. Fixing notation, we say that we have a curve $C_2$ on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ of type $(3,3)$ and four cusps $p^2_j$, $j=1,\ldots,4$, all with multiplicity sequence $[2]$. Since the choice of $p_4 \in \ensuremath{\mathbb{P}^2}$ was general, there are four $(1,0)$-curves $L^2_j$ such that $$L^2_j \cdot C_2=2 p^2_j+r^2_j,$$ for smooth points $r^2_j \in C_2$. Now assume that we have a curve $C_{k-1}$ on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ of type $((k-1)+1,3)$, with cuspidal configuration ${[2_{n_1-2}],[2_{n_2}],[2_{n_3}],[2_{n_4}]}$ such that all $n_j$ are odd, and such that there exist fibers $L^{k-1}_j$ with $$L^{k-1}_j \cdot C_{k-1}=2 p^{k-1}_j+r^{k-1}_j,$$ for smooth points $r^{k-1}_j$ on $C_{k-1}$. We blow up at $r^{k-1}_1$, contract the corresponding $L^{k-1}_1$ and get a curve $C_{1,k-1}$ on $\ensuremath{\mathbb{F}_1}$ of type $(k-1,3)$ with cuspidal configuration ${[2_{n_1-1}],[2_{n_2}],[2_{n_3}],[2_{n_4}]}$. Moreover, since $r^{k-1}_1$ was not fiber tangential, we have that $r^{1,k-1}_1 \notin E_1$, and the strict transform of the exceptional fiber of the blowing up, $L^{1,k-1}_1$, has intersection with $C_{1,k-1}$, $$L^{1,k-1}_1 \cdot C_{1,k-1}=2 p^{1,k-1}_1+r^{1,k-1}_1.$$ Blowing up at $r^{1,k-1}_1$ and contracting $L^{1,k-1}_1$ bring us back to $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ and a curve $C_{k}$ of type $(k+1,3)$ and cuspidal configuration $[2_{n_1}],[2_{n_2}],[2_{n_3}],[2_{n_4}]$. 
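The cuspidal configurations appearing in this induction are consistent with the genus formula: the delta-invariants of the cusps of a rational cuspidal curve of type $(a,b)$ on $\ensuremath{\mathbb{F}_e}$ must sum to the arithmetic genus $(a-1)(b-1)+\tfrac{1}{2}eb(b-1)$, and a cusp with multiplicity sequence $[2_n]$ has delta-invariant $n$. The following script is our own illustrative sanity check, not part of the proof:

```python
# Consistency check (illustrative, not part of the proof): for a rational
# cuspidal curve of type (a,b) on F_e, the genus formula forces the
# delta-invariants of the cusps to sum to p_a; a cusp [2_n] has delta = n.

def pa(a, b, e):
    # arithmetic genus of a curve of type (a,b) on the Hirzebruch surface F_e
    return (a - 1) * (b - 1) + e * b * (b - 1) // 2

for k in range(2, 100):
    # C_k of type (k+1,3) on P^1 x P^1 (e = 0): n_1 + ... + n_4 = 2k
    assert pa(k + 1, 3, 0) == 2 * k
    # C_{1,k-1} of type (k-1,3) on F_1: the n_j now sum to 2(k-1)+1
    assert pa(k - 1, 3, 1) == 2 * (k - 1) + 1
```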
This takes care of the case when all $n_j$ are odd. To prove the theorem when two $n_j$ are even or all $n_j$ are even, we only show that there exist curves on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ of the right type and cuspidal configurations $[2_2],[2_2],[2],[2]$ and ${[2_2],[2_2],[2_2],[2_2]}$. The rest of the argument is then similar to the above. To get the first curve, we blow up $C_2$ at $r^2_1$ and $r^2_2$ and contract $L^2_1$ and $L^2_2$. This gives a curve $C_3$ of type $(4,3)$ with cuspidal configuration ${[2_2],[2_2],[2],[2]}$. The curve is on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ since it can be shown with direct calculations in \verb+Maple+ that $r^2_1$ and $r^2_2$ are not on the same $(0,1)$-curve. To get the second curve, we blow up at the analogous $r^3_3$ and $r^3_4$ on the curve $C_3$, before contracting $L^3_3$ and $L^3_4$. We are again on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ by a similar argument to the above, and the curve $C_4$ is of type $(5,3)$ and has cuspidal configuration $[2_2],[2_2],[2_2],[2_2]$. \end{proof} \noindent Note that the construction of the curves in Theorem \ref{2erne} can also be done from the rational cuspidal cubic on $\ensuremath{\mathbb{P}^2}$. \begin{proof}[Alternative proof of Theorem \ref{2erne}] Let $C$ be the rational cuspidal cubic on $\ensuremath{\mathbb{P}^2}$. Let $s$ be a general point on $\ensuremath{\mathbb{P}^2}$, where general here means that $s$ is neither on $C$, nor on the tangent line to the cusp, nor on the tangent line to the inflection point on $C$. For example we can choose ${y^2z-x^3}$ as the defining polynomial of $C$, and take ${s=(0:1:1)}$. Then the polar curve of $C$ with respect to the point $s$, given by the defining polynomial ${2yz+y^2}$, intersects $C$ in three smooth points, ${(2^{\frac{2}{3}}:-2:1)}$, ${(2^{-\frac{1}{3}}(-1+\sqrt{3}i):-2:1)}$ and ${(2^{-\frac{1}{3}}(-1-\sqrt{3}i):-2:1)}$.
Blowing up at $s$ brings us to $\ensuremath{\mathbb{F}_1}$ and a curve of type $(0,3)$ with one ordinary cusp, say $p_4$. We additionally have three fibers $L_j$, $j=1,\ldots,3$, with the property that ${L_j \cdot C=2 p_j+r_j}$ for smooth points $p_j$ and $r_j$ on $C$. Blowing up at the $r_j$'s and contracting the $L_j$'s, we get the desired series of curves. \end{proof} The series in Theorem \ref{2erne} can be extended to a series of rational cuspidal curves with less than four cusps in an obvious way. We state this as a corollary. \begin{cor} For $h\in\{0,1\}$, all integers $k \geq 0$, and every choice of $n_j \in \mathbb{N} \cup \{0\}$, with ${j=1,\ldots,4}$, such that ${\sum_{j=1}^4 n_j=2k+h}$, there exists on the Hirzebruch surface $\ensuremath{\mathbb{F}_h}$ a rational cuspidal curve of type $(k+1-h,3)$ with $s \in \{0,1,2,3,4\}$ cusps and cuspidal configuration $$[2_{n_1}],[2_{n_2}],[2_{n_3}],[2_{n_4}].$$ \end{cor} \begin{proof} These curves can be constructed from the curves in Theorem \ref{2erne} by a similar construction. In order to construct the curves with less than four cusps we have to blow up cusps on the curves in the series from Theorem \ref{2erne}. \end{proof} Last in this section we provide an example of a curve not represented in any of the above series. This is the only example we have found of such a curve, and in particular the only such curve on $\mathbb{F}_2$. \begin{thm} On $\mathbb{F}_2$ there exists a rational cuspidal curve of type $(0,3)$ with four cusps and cuspidal configuration $$[2],[2],[2],[2].$$ \end{thm} \begin{proof} Let $C$ be the plane rational fourcuspidal curve of degree $5$. Let $p$ be the cusp $[2_3]$ and $p_i$, $i=1,2,3$, the cusps with multiplicity sequence $[2]$. Let $T$ be the tangent line to $C$ at $p$. Let $L_i$ denote the line through $p$ and $p_i$, with $i=1,2,3$. 
There are smooth points $s$ and $r_i$, $i=1,2,3$, on $C$, such that $$T \cdot C=4 p + s, \qquad L_i \cdot C=2 p+2 p_i + r_i.$$ Blowing up at $p$ gives a $(2,3)$-curve on $\ensuremath{\mathbb{F}_1}$ with cuspidal configuration ${[2_2],[2],[2],[2]}$. Let $C'$ denote the strict transform of $C$, $T'$ and $L_i'$ the strict transforms of $T$ and $L_i$, and let $E'$ be the special section on $\ensuremath{\mathbb{F}_1}$. Let $p'$ be the cusp $[2_2]$, $p_i'$ the other cusps, and $s'$ and $r_i'$ the strict transforms of the points $s$ and $r_i$. Then we have the following intersections, $$E' \cdot C'=2 p', \qquad T'\cdot C'=2 p'+s', \qquad L_i'\cdot C'=2 p_i' + r_i'.$$ Since $p' \in E'$, blowing up at $p'$ and contracting $T'$, we get a cuspidal curve on $\mathbb{F}_2$ of type $(0,3)$ and cuspidal configuration ${[2],[2],[2],[2]}$. \end{proof} \section{Associated results}\label{AR} The main result in this section is that the rigidity conjecture proposed by Flenner and Zaidenberg for plane rational cuspidal curves can \emph{not} be extended to the case of rational cuspidal curves on Hirzebruch surfaces. First in this section we state and prove two lemmas for rational cuspidal curves on Hirzebruch surfaces, the first analogous to a lemma by Flenner and Zaidenberg \cite[Lemma 1.3, p.148]{FlZa94}, and the other a lemma bounding the sum of the so-called $M$-numbers of the curve. Second, we use these lemmas to give an explicit formula for $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})$ in this case. We calculate this value for the curves constructed in Section \ref{4C}, and with that we provide examples of curves for which $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle}) \neq 0$. Third, we use the two mentioned lemmas and other results to find a lower bound on the highest multiplicity of a cusp on a rational cuspidal curve on a Hirzebruch surface. Last in this section, we investigate real cuspidal curves on Hirzebruch surfaces. 
\subsection{Two lemmas} We now state and prove two lemmas for rational cuspidal curves on Hirzebruch surfaces. First, we prove a lemma that is a variant of \cite[Lemma 1.3, p.148]{FlZa94}. \begin{lem}\label{lemflzadef}\label{lemflzadeffe} Let $C$ be a rational cuspidal curve on $\ensuremath{\mathbb{F}_e}$. Let $(V,D)$ be the minimal embedded resolution of $C$, and let $K_V$ be the canonical divisor on $V$. Moreover, let $D_1, \ldots, D_r$ be the irreducible components of $D$, $\Theta_V$ the tangent sheaf of $V$, $\mathscr{N}_{D/V}$ the normal sheaf of $D$ in $V$, and let $c_2$ be the second Chern class of $V$. Then the following hold. \begin{enumerate} \item[$(0)$] $D$ is a rational tree. \item[$(1)$] $\chi(\Theta_V)=8-2r$. \item[$(2)$] $K_V^2=9-r$. \item[$(3)$] $c_2:=c_2(V)=3+r$. \item[$(4)$] $\displaystyle \chi \left(\bigoplus \mathscr{N}_{D_i/V}\right)=r+\sum_{i=1}^rD_i^2$. \item[$(5)$] $\displaystyle \chi(\ensuremath{\Theta_V \left\langle D \right\rangle})=K_V \ensuremath{\,.\,} (K_V+D)-1.$ \end{enumerate} \end{lem} \begin{proof} Note that the proof is very similar to the proof of \cite[Lemma 1.3, p.148]{FlZa94}; only small details are changed. \begin{enumerate} \item[$(0)$] Since $(V,D)$ is the minimal embedded resolution of $C$, $D$ is an SNC-divisor. Since $C$ is a rational curve, $\tilde{C}$ is smooth and rational, and since all exceptional divisors are smooth and rational as well, all components of $D$ are smooth and rational. The dual graph of $D$, say $\Gamma$, is necessarily a connected graph, and since $C$ is cuspidal, $\Gamma$ will not contain cycles. Thus, $D$ is by definition a rational tree. \item[$(3)$] Since $V$ is obtained from $\ensuremath{\mathbb{F}_e}$ by $r-1$ blowing ups, we have that the Chern class $c_2:=c_2(V)$ is $$c_2(V)=c_2(\ensuremath{\mathbb{F}_e})+r-1.$$ Moreover, by \cite[Corollary V 2.5, p.371]{Hart:1977} we have that $\chi(\mathscr{O}_{\ensuremath{\mathbb{F}_e}})=1$.
With $K$ the canonical divisor on $\ensuremath{\mathbb{F}_e}$, we apply the formula in \cite[Remark V 1.6.1, p.363]{Hart:1977}, \begin{align*} 12\chi(\mathscr{O}_{\ensuremath{\mathbb{F}_e}})&=K^2+c_2(\ensuremath{\mathbb{F}_e})\\ 12 &=8 + c_2(\ensuremath{\mathbb{F}_e}).\\ \end{align*} We get $c_2(\ensuremath{\mathbb{F}_e})=4$, hence, \begin{align*} c_2(V)&=4+r-1\\ &=3+r. \end{align*} \item[$(2)$] We have by \cite[Proposition V 3.4, p.387]{Hart:1977} that $\chi(\mathscr{O}_V)=\chi(\mathscr{O}_{\ensuremath{\mathbb{F}_e}})=1$. By the formula in \cite[Remark V 1.6.1, p.363]{Hart:1977} again, we get \begin{align*} K_V^2&=12\chi(\mathscr{O}_V)-c_2\\ &=12-(3+r)\\ &=9-r. \end{align*} \item[$(4)$] Since $D_i$ is a rational curve on the surface $V$ for all $i$, we have that $g(D_i)=0$. By \cite[Proof of Proposition II 8.20, p.182]{Hart:1977} we have that $$\mathscr{N}_{D_i/V}\cong \mathscr{L}(D_i) \otimes \ensuremath{\mathscr{O}}_{D_i}.$$ Hence, by the Riemann--Roch theorem for curves \cite[p.362]{Hart:1977}, \begin{align*} \displaystyle \chi \left(\bigoplus \mathscr{N}_{D_i/V}\right)&= \chi \left(\bigoplus \mathscr{L}(D_i) \otimes \ensuremath{\mathscr{O}}_{D_i}\right)\\ &=\sum_{i=1}^r\left(D_i^2+1\right)\\ &=r+\sum_{i=1}^rD_i^2. \end{align*} \item[$(1)$] By the Hirzebruch--Riemann--Roch theorem for surfaces \cite[Theorem A 4.1, p.432]{Hart:1977}, we have that for any locally free sheaf $\mathscr{E}$ on $V$ of rank $s$ with Chern classes $c_i(\mathscr{E})$, $$\chi(\mathscr{E})=\frac{1}{2}c_1(\mathscr{E}) \ensuremath{\,.\,} \bigl(c_1(\mathscr{E})+c_1(\Theta_V)\bigr)-c_2(\mathscr{E})+s\cdot \chi\left(\ensuremath{\mathscr{O}}_V\right).$$ Moreover, by \cite[Example A 4.1.2, p.433]{Hart:1977}, $\Theta_V$ has rank $s=2$ and ${c_1(\Theta_V)=-K_V}$. We have by the previous results, \begin{align*} \chi(\Theta_V)&=\frac{1}{2}(-K_V) \ensuremath{\,.\,} (-2K_V)-c_2(\Theta_V)+2\chi\left(\ensuremath{\mathscr{O}}_V\right)\\ &=K_V^2-c_2+2\\ &=9-r-(3+r)+2\\ &=8-2r. 
\end{align*} \item[$(5)$] Observe first that since $D$ is an SNC-divisor, we have by direct calculation \begin{align*}\displaystyle D^2&=\sum_{i=1}^rD_i^2 + \sum_{i\neq j}D_iD_j\\ &=\sum_{i=1}^rD_i^2+(1+2(r-2)+1)\\ &=\sum_{i=1}^rD_i^2+2r-2. \end{align*} Since $D$ is an effective divisor, we have by definition that ${p_a(D)=1-\chi(\ensuremath{\mathscr{O}}_D)}$. Since $D$ is a rational tree, by \cite[Lemma 1.2, p.148]{FlZa94}, $p_a(D)=0$. So by the adjunction formula \cite[Exercise V 1.3, p.366]{Hart:1977}, $$K_V \ensuremath{\,.\,} D=-D^2-2.$$ Using the additivity of $\chi$ on the short exact sequence (see \cite[pp.147,162]{FlZa94}), $$0 \longrightarrow \ensuremath{\Theta_V \left\langle D \right\rangle} \longrightarrow \Theta_V \longrightarrow \bigoplus \mathscr{N}_{D_i/V} \longrightarrow 0,$$ and the above results and remarks, we get \begin{align*} \chi(\ensuremath{\Theta_V \left\langle D \right\rangle})&=\chi(\Theta_V)-\chi \left(\bigoplus \mathscr{N}_{D_i/V}\right)\\ &=(8-2r)-\Bigl(r+\sum_{i=1}^rD_i^2 \Bigr)\\ &=8-2r-(r+D^2-2r+2)\\ &=6-r-D^2\\ &=K_V^2-D^2-3\\ &=K_V^2+2K_V\ensuremath{\,.\,} D-2(-D^2-2)-D^2-3\\ &=(K_V+D)^2+1\\ &=K_V \ensuremath{\,.\,} (K_V+D)-1. \end{align*} \end{enumerate} \end{proof} The second lemma bounds the sum of the so-called $M$-numbers by the type of the curve; this work is inspired by Orevkov (see \cite{Ore}). For a cusp $p$, the associated \emph{$M$-number} can be defined as \begin{equation*} {M} := \eta+\omega-1, \end{equation*} where \begin{equation*} \eta := \sum_{i=0}^{t-1}\left(m_i-1\right), \end{equation*} and $\omega$ is the number of blowing ups in the minimal embedded resolution whose center is an intersection point of the strict transforms of two exceptional curves of the resolution, i.e., an \emph{inner} blowing up.
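The $M$-number can be evaluated mechanically from the multiplicity sequence via the closed formula stated next in the text. The helper below is our own illustration; it assumes the convention that the multiplicity sequence (entries $\geq 2$) is extended by the multiplicity $1$ of the last blowing up of the minimal embedded resolution, and any further $1$'s contribute nothing to either sum:

```python
from math import ceil

def m_number(seq):
    """M-number of a cusp from its multiplicity sequence (entries >= 2).

    Convention assumed here: one trailing 1 is appended, accounting for the
    last blowing up of the minimal embedded resolution; additional 1's would
    contribute nothing to either sum below.
    """
    m = list(seq) + [1]
    eta = sum(mi - 1 for mi in m)                                    # sum (m_i - 1)
    inner = sum(ceil(m[i - 1] / m[i]) - 1 for i in range(1, len(m)))  # inner blowing ups
    return eta + inner - 1

# An ordinary cusp [2] has M = 1, and the cusp [2_3] has M = 3.
```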
Moreover, the $M$-number can be expressed in terms of the multiplicity sequence $\overline{m}$, \begin{equation*} {M} = \sum_{i=0}^{t-1}\left(m_i-1\right)+\sum_{i=1}^{t-1}\left(\left\lceil\frac{m_{i-1}}{m_i}\right\rceil-1\right)-1, \end{equation*} where $\lceil a \rceil$ is the smallest integer $\geq a$. See \cite[Definition 1.5.23, p.44]{Fent} and \cite[p.659]{Ore} for more details. Before we state and prove the lemma, recall that $\ol{\kappa}(\Fe \setminus C)$ denotes the logarithmic Kodaira dimension of the complement to $C$ in $\ensuremath{\mathbb{F}_e}$ (see \cite{Iitaka}). \begin{lem}\label{kod0M} For a rational cuspidal curve $C$ of type $(a,b)$ on $\ensuremath{\mathbb{F}_e}$ with $s$ cusps and $\ol{\kappa}(\Fe \setminus C)\geq 0$, we have $$\sum_{j=1}^s M_j \leq 2(a+b)+be.$$ \end{lem} \begin{proof} Let $(V,D)$ and ${\sigma=\sigma_1 \circ \ldots \circ \sigma_t}$ be the minimal embedded resolution of $C$. Write ${\sigma^*(C)=\tilde{C}+\sum_{i=1}^t m_{i-1}E_i}$, with $\tilde{C}$ the strict transform of $C$ under $\sigma$, $m_{i-1}$ the multiplicity of the center of $\sigma_i$ and $E_i$ the exceptional curve of $\sigma_i$. Then by induction and \cite[Proposition V 3.2, p.387]{Hart:1977} we find that \begin{align*} \tilde{C}^2 &=\Bigl(\sigma^*(C)-\sum_{i=1}^t m_{i-1}E_i\Bigr)^2 \notag\\ &=C^2-\sum_{i=0}^{t-1}m_i^2 \notag \\ &=b^2e+2ab-\sum_{i=0}^{t-1}m_i^2. \end{align*} \noindent By the genus formula, we may rewrite this, \begin{align*} \tilde{C}^2&=b^2e+2ab-\sum_{i=0}^{t-1}m_i^2\\ &=be+2(a+b)-2-\sum_{i=0}^{t-1} m_i. \end{align*} \noindent Moreover, we have that for ${D=\tilde{C}+\sum_{i=1}^t E_i'}$, with $E_i'$ the strict transform of $E_i$ under the composition $\sigma_{i+1} \circ \cdots \circ \sigma_t$, \begin{align*} D^2&=\tilde{C}^2+2\tilde{C} \ensuremath{\,.\,} \Bigl(\sum_{i=1}^t E_i'\Bigr)+\Bigl(\sum_{i=1}^t E_i'\Bigr)^2 \notag\\ &=\tilde{C}^2+2s+\Bigl(\sum_{i=1}^t E_i'\Bigr)^2. 
\end{align*} \noindent Now we split the latter term in this sum into the sum of the strict transforms of the exceptional divisors for each cusp, $$\sum_{i=1}^t E_i'=\sum_{j=1}^s E_{p_j},$$ where $s$ denotes the number of cusps. By \cite[Lemma 2, p.235]{MatsuokaSakai}, we have $$\omega_j=-E_{p_j}^2-1.$$ \noindent Combining the above results, we get \begin{align*} D^2&=be+2(a+b)-2- \sum_{i=0}^{t-1}m_i+2s-\sum_{j=1}^s (\omega_j+1)\\ &=be+2(a+b)-2-\sum_{i=0}^{t-1} m_i-\sum_{j=1}^s (\omega_j-1). \end{align*} \noindent By the proof of Lemma \ref{lemflzadef}, we have $$6-r-D^2=(K_V+D)^2+1.$$ Note that $r$ denotes the number of components of the divisor $D$. This number is equal to the total number of blowing ups needed to resolve the singularities, plus one component from the strict transform of the curve itself. Following the notation established, we have $r=t+1$. Moreover, by assumption, $\ol{\kappa}(\Fe \setminus C) \geq 0$. By the logarithmic Bogomolov--Miyaoka--Yau-inequality (B--M--Y-inequality) in the form given by Orevkov \cite[Theorem 2.1, p.660]{Ore} and the topological Euler characteristic of the complement to the curve (see \cite{MOEONC}), we then have that $$(K_V+D)^2 \leq 6.$$ \noindent So we get \begin{align*} 0&\leq 1+r+D^2\\ & \leq 1+r+be+2(a+b)-2-\sum_{i=0}^{t-1} m_i-\sum_{j=1}^s (\omega_j-1)\\ & \leq -1+1+be+2(a+b)-\sum_{i=0}^{t-1}(m_i-1)-\sum_{j=1}^s (\omega_j-1)\\ &\leq be+2(a+b)-\sum_{j=1}^s M_j. \end{align*} \noindent Hence, $$\sum_{j=1}^s M_j \leq 2(a+b)+be.$$ \end{proof} \subsection{An expression for $\chi\left(\ensuremath{\Theta_V \left\langle D \right\rangle}\right)$} In this section we give a formula for $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})$ for curves $C$ on $\ensuremath{\mathbb{F}_e}$. 
Complements to rational cuspidal curves with three or more cusps on Hirzebruch surfaces can be shown to have $\ol{\kappa}(\Fe \setminus C)=2$ (see \cite{MOEONC}), however, these open surfaces are no longer $\ensuremath{\mathbb{Q}}$-acyclic, so we do not expect the rigidity conjecture of Flenner and Zaidenberg to hold in this case. Indeed, we calculate the value of $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})$ for the curves provided in Section \ref{4C}, and observe that for these curves we do not necessarily have $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})=0$. \begin{thm}\label{chithetafe} For an irreducible rational cuspidal curve $C$ of type $(a,b)$ on $\ensuremath{\mathbb{F}_e}$ with $s$ cusps $p_j$ with respective $M$-numbers $M_j$, we have $$\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})=7-2a-2b-be+\sum_{j=1}^s M_j.$$ \end{thm} \begin{proof} By \cite[Proposition 2.4, p.445]{FlZa94}, $$K_V \ensuremath{\,.\,} (K_V+D)=K_{\ensuremath{\mathbb{F}_e}} \ensuremath{\,.\,} (K_{\ensuremath{\mathbb{F}_e}}+C)+\sum_{j=1}^s M_j.$$ \noindent By Lemma \ref{lemflzadeffe}, we then get \begin{align*} \chi(\ensuremath{\Theta_V \left\langle D \right\rangle})&= K_V \ensuremath{\,.\,} (K_V+D)-1\\ &= K_{\ensuremath{\mathbb{F}_e}} \ensuremath{\,.\,} (K_{\ensuremath{\mathbb{F}_e}}+C)+\sum_{j=1}^s M_j-1\\ &= ((e-2)L-2M) \ensuremath{\,.\,} \Bigl((a+e-2)L+(b-2)M\Bigr)-1 + \sum_{j=1}^s M_j\\ &= 7-2a-2b-be+\sum_{j=1}^s M_j. \end{align*} \end{proof} With the above result in mind, we investigate $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})$ further. Let $C$ be a rational cuspidal curve of type $(a,b)$ on $\ensuremath{\mathbb{F}_e}$, and let $(V,D)$ be as before. 
By the above, we have that \begin{align*} \chi(\ensuremath{\Theta_V \left\langle D \right\rangle}) &:= \mathrm{h}^0(V,\ensuremath{\Theta_V \left\langle D \right\rangle})-\mathrm{h}^1(V,\ensuremath{\Theta_V \left\langle D \right\rangle})+\mathrm{h}^2(V,\ensuremath{\Theta_V \left\langle D \right\rangle})\\ &\, = K_V \ensuremath{\,.\,} (K_V+D)-1\\ &\, = 7-2(a+b)-be+\sum_{j=1}^sM_j. \end{align*} Moreover, when $\ol{\kappa}(\Fe \setminus C) \geq 0$, we see from Lemma \ref{kod0M} that $$\chi(\ensuremath{\Theta_V \left\langle D \right\rangle}) \leq 7.$$ If $\ol{\kappa}(\Fe \setminus C)=2$, then it follows from a result by Iitaka in \cite[Theorem 6]{Iit} that $\mathrm{h}^0(V,\ensuremath{\Theta_V \left\langle D \right\rangle})=0$. Then we have $$\chi(\ensuremath{\Theta_V \left\langle D \right\rangle}) = \mathrm{h}^2(V,\ensuremath{\Theta_V \left\langle D \right\rangle})-\mathrm{h}^1(V,\ensuremath{\Theta_V \left\langle D \right\rangle}).$$ In \cite[Lemma 4.1, p.219]{Tono05} Tono shows that if first, $\ol{\kappa}(\Fe \setminus C)=2$, and second, the pair $(V,D)$ is \emph{almost minimal} (see \cite{Miyanishi}), then $K_V \ensuremath{\,.\,} (K_V+D) \geq 0$. For plane curves, this result by Tono and a result by Wakabayashi in \cite{Wak} imply that rational cuspidal curves with three or more cusps have $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle}) \geq 0$. Similarly, a rational cuspidal curve on a Hirzebruch surface that fulfills the two prerequisites has $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle}) \geq -1$. While a result similar to Wakabayashi's ensures that having three or more cusps implies $\ol{\kappa}(\Fe \setminus C)=2$ \cite{MOEONC}, rationality, however, is no longer a guarantee for almost minimality (see \cite[p.98]{MOEPHD}). Therefore, for rational cuspidal curves with three or more cusps on a Hirzebruch surface, $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})$ is not necessarily bounded below.
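The values of $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})$ recorded in Table \ref{tab:fourcuspchitheta}, as well as the bound of Lemma \ref{kod0M}, can be reproduced mechanically from Theorem \ref{chithetafe}. The following script is our own verification sketch, not part of the argument; it assumes the convention that the multiplicity sequence of a cusp is extended by the multiplicity $1$ of the last blowing up when evaluating the closed formula for the $M$-number:

```python
from math import ceil

def m_number(seq):
    # M-number via the closed formula in terms of the multiplicity sequence;
    # a trailing 1 is appended (convention assumed here).
    m = list(seq) + [1]
    return (sum(mi - 1 for mi in m)
            + sum(ceil(m[i - 1] / m[i]) - 1 for i in range(1, len(m))) - 1)

def chi(a, b, e, cusps):
    # Theorem: chi(Theta_V<D>) = 7 - 2a - 2b - be + sum of the M-numbers.
    return 7 - 2 * a - 2 * b - b * e + sum(m_number(c) for c in cusps)

for k in range(1, 25):
    # First series: type (2k+1,4) on F_e with [4_{k-1+e},2_3],[2],[2],[2].
    for e in range(0, 25):
        if k - 1 + e < 1:
            continue  # the cusp [4_{k-1+e},2_3] must actually contain a 4
        cusps = [[4] * (k - 1 + e) + [2, 2, 2], [2], [2], [2]]
        a, b = 2 * k + 1, 4
        assert chi(a, b, e, cusps) == 1 - k - e
        # Lemma: the M-numbers sum to at most 2(a+b) + be.
        assert sum(m_number(c) for c in cusps) <= 2 * (a + b) + b * e
    for h in (0, 1):
        # Second series: type (3k+1-h,5) on F_h with [4_{2k-1+h},2_3],...
        cusps = [[4] * (2 * k - 1 + h) + [2, 2, 2], [2], [2], [2]]
        assert chi(3 * k + 1 - h, 5, h, cusps) == -1
        # Third series: type (2k+2-h,4) on F_h with [3_{2k-1+h},2],[2_3],...
        cusps = [[3] * (2 * k - 1 + h) + [2], [2, 2, 2], [2], [2]]
        assert chi(2 * k + 2 - h, 4, h, cusps) == 0

# The single curve of type (0,3) on F_2 with four ordinary cusps:
assert chi(0, 3, 2, [[2], [2], [2], [2]]) == -1
```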
For rational cuspidal curves with four cusps on Hirzebruch surfaces $\ensuremath{\mathbb{F}_e}$ and $\ensuremath{\mathbb{F}_h}$, where $e \geq 0$ and $h \in \{0,1\}$, the values of $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})$ are given in Table \ref{tab:fourcuspchitheta}.
\begin{table}[H] \renewcommand\thesubtable{} \setlength{\extrarowheight}{2pt} \centering {\noindent \begin{tabular*}{1\linewidth}{@{}c@{\extracolsep{\fill}}l@{}c@{}c} \toprule {\textbf{Type}} & {\textbf{Cuspidal configuration}} & $\mathbf{\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})}$ & {\textbf{Surface}}\\ \toprule {$(2k+1,4)$} & $[4_{k-1+e},2_3],[2],[2],[2]$& $1-k-e$ & $\ensuremath{\mathbb{F}_e}$\\ {$(3k+1-h,5)$} & $[4_{2k-1+h},2_3],[2],[2],[2]$& $-1$ & $\ensuremath{\mathbb{F}_h}$ \\ {$(2k+2-h,4)$} & $[3_{2k-1+h},2],[2_3],[2],[2]$ & $0$ &$\ensuremath{\mathbb{F}_h}$ \\ {$(k+1-h,3)$} & $[2_{n_1}],[2_{n_2}],[2_{n_3}],[2_{n_4}]$& $-1$ & $\ensuremath{\mathbb{F}_h}$\\ {$(0,3)$} & $[2],[2],[2],[2]$& $-1$ & $\mathbb{F}_2$\\ \bottomrule \end{tabular*}} \caption[$\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})$ for rational cuspidal curves with four cusps on $\ensuremath{\mathbb{F}_e}$.]{$\chi(\ensuremath{\Theta_V \left\langle D \right\rangle})$ for rational cuspidal curves with four cusps on $\ensuremath{\mathbb{F}_e}$ and $\ensuremath{\mathbb{F}_h}$. For the first three series, $k \geq 0$. For the fourth series, $k \geq 2$ and $\sum_{j=1}^4 n_j=2k+h$.} \label{tab:fourcuspchitheta} \end{table} An important observation from this table is that $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle}) \leq 0$ for all these curves. We reformulate this observation in a conjecture (cf. \cite{Bobins, FlZa94}). \begin{conj} Let $C$ be a rational cuspidal curve with four or more cusps on a Hirzebruch surface $\ensuremath{\mathbb{F}_e}$. Then $\chi(\ensuremath{\Theta_V \left\langle D \right\rangle}) \leq 0$. \end{conj} \subsection{On the multiplicity} In the following we establish a result on the multiplicities of the cusps on a rational cuspidal curve on a Hirzebruch surface. Note that this work is inspired by Orevkov (see \cite{Ore}). Assume that $C$ is a rational cuspidal curve on a Hirzebruch surface $\ensuremath{\mathbb{F}_e}$.
Let $p_1,\ldots,p_s$ denote the cusps of $C$, and $m_{p_1},\ldots,m_{p_s}$ their multiplicities. Renumber the cusps such that $m_{p_1}\geq m_{p_2} \geq \ldots \geq m_{p_s}$. Then for curves with $\ol{\kappa}(\Fe \setminus C) \geq 0$ we are able to establish a lower bound on $m_{p_1}$. \begin{thm} A rational cuspidal curve $C$ of type $(a,b)$ on $\ensuremath{\mathbb{F}_e}$, with $\ol{\kappa}(\Fe \setminus C) \geq 0$ and $s$ cusps, must have at least one cusp $p_1$ with multiplicity $m:= m_{p_1}$ that satisfies the inequality below, $$m > \frac{3}{2}+a+b-\frac{1}{2}\sqrt{1+20(a+b)+4(a^2+b^2)+4be(1-b)}.$$ \end{thm} \begin{proof} Using Lemma \ref{kod0M} and a lemma by Orevkov \cite[Lemma 4.1 and Corollary 4.2, pp.663--664]{Ore}, we get \begin{align*} 2(a+b)+be&\geq \sum_{j=1}^s M_j\\ & = M_{1}+\sum_{j=2}^sM_{j}\\ &> \frac{\mu_{1}}{m}+m-3+\sum_{j=2}^s M_{j}\\ &=\frac{(b-1)(2a-2+be)}{m}+m-3+\sum_{j=2}^s\Bigl(M_{j}-\frac{\mu_{j}}{m}\Bigr)\\ &\geq \frac{2ab-2(a+b)+2+b^2e-be}{m}+m-3+\sum_{j=2}^s\Bigl(M_{j}-\frac{\mu_{j}}{m_{j}}\Bigr)\\ &\geq \frac{2ab-2(a+b)+2+b^2e-be}{m}+m-3. \end{align*} \noindent This means that $$0 > \frac{2ab-2(a+b)+2+b^2e-be}{m}+m-3-2(a+b)-be.$$ \noindent Let $$g(a,b,m)= \frac{2ab-2(a+b)+2+b^2e-be}{m}+m-3-2(a+b)-be.$$ Factoring $g$, we have that $g<0$ for $$m > \frac{3}{2}+a+b-\frac{1}{2}\sqrt{1+20(a+b)+4(a^2+b^2)+4be(1-b)}.$$ \end{proof} \begin{cor} A rational cuspidal curve $C$ of type $(a,b)$ with two or more cusps on a Hirzebruch surface $\ensuremath{\mathbb{F}_e}$ must have at least one cusp $p_1$ with multiplicity $m$ that satisfies the inequality below, $$m > \frac{3}{2}+a+b-\frac{1}{2}\sqrt{1+20(a+b)+4(a^2+b^2)+4be(1-b)}.$$ \end{cor} \begin{proof} The corollary follows directly from the result in \cite{MOEONC} on the logarithmic Kodaira dimension of the complement to a curve with two cusps. \end{proof} \begin{rem} Note that this theorem will exclude some potential curves.
For example, a rational cuspidal curve of type $(a,4)$ on $\mathbb{F}_1$ with two or more cusps (see \cite{MOEONC}) must have at least one cusp with multiplicity $m\geq3$ for any $a\geq6$. We also have that any rational cuspidal curve of type $(a,5)$ on $\mathbb{F}_1$ with two or more cusps must have at least one cusp with multiplicity $m\geq3$ for any $a\geq3$. Similarly, any rational cuspidal curve of type $(a,b)$ on $\mathbb{F}_1$ with two or more cusps and $b\geq 6$ must have at least one cusp with multiplicity $m\geq3$. \end{rem} \subsection{Real cuspidal curves} It is well known that the known plane rational cuspidal curves with three cusps can be defined over $\ensuremath{\mathbb{R}}$ \cite{FlZa95,FlZa97,Fen99a}. That is not the case for the plane rational cuspidal quintic curve with cuspidal configuration ${[2_3],[2],[2],[2]}$ (see \cite{MOEPHD}). On the Hirzebruch surfaces, the question of whether all cusps on real cuspidal curves can have real coordinates is still hard to answer. Recall that we call $C=\ensuremath{\mathscr{V}}(F)$ a real curve if the polynomial $F$ has real coefficients. However, all known curves on $\ensuremath{\mathbb{F}_e}$ can be constructed from curves on $\ensuremath{\mathbb{P}^2}$. Since the birational transformations can be given as real transformations, if it is possible to arrange the curve on $\ensuremath{\mathbb{P}^2}$ such that the preimages of the cusps have real coordinates, then the cusps will have real coordinates on the curve on the Hirzebruch surface as well. Note, however, that this arrangement may not always be attainable. Considering the rational cuspidal curves on $\ensuremath{\mathbb{F}_e}$ with four cusps, we see that most of them are constructed from the plane rational cuspidal quintic with cuspidal configuration ${[2_3],[2],[2],[2]}$. Hence, we expect that the cusps on these curves cannot all have real coordinates when the curve is real.
Contrary to this intuition, however, there are examples of four-cuspidal curves on which all the cusps have real coordinates. \begin{prp} The series of rational cuspidal curves of type ${(k+1-h,3)}$, $k \geq 2$, with four cusps and cuspidal configuration ${[2_{n_1}],[2_{n_2}],[2_{n_3}],[2_{n_4}]}$, where the indices satisfy $\sum_{j=1}^4n_j=2k+h$, on the Hirzebruch surfaces $\ensuremath{\mathbb{F}_h}$, $h \in \{0,1\}$, has the property that all cusps can be given real coordinates on a real curve. \end{prp} \begin{proof} We have seen that the series of curves can be constructed using the plane rational cuspidal quartic $C$ with three cusps. Let $$y^2z^2+x^2z^2+x^2y^2-2xyz(x+y+z)$$ be a real defining polynomial of $C$. Then it is possible to find a tangent line to $C$ that intersects $C$ in three real points. For example, choose the line $T$ defined by $$\frac{2048}{125}x+\frac{2048}{27}y-\frac{1048576}{3375}z=0.$$ This line is tangent to $C$ at the point ${(\frac{64}{9}: \frac{64}{25}: 1)}$, and it intersects $C$ transversally at the points ${(16: \frac{16}{25}: 1)}$ and ${(\frac{4}{9}: 4: 1)}$. With this configuration, there exists a birational transformation from $\ensuremath{\mathbb{P}^2}$ to $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ that preserves the real coordinates of the cusps on $C$ and constructs a fourth cusp with real coordinates on the strict transform of $C$ on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$. We blow up the two real points at the transversal intersections and contract the tangent line $T$, using the birational map from $\ensuremath{\mathbb{P}^2}$ to $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ (see \cite{MOEPHD}).
In coordinates, the map is given by a composition of a (real) change of coordinates on $\ensuremath{\mathbb{P}^2}$ and the map $\phi$, $$\begin{array}{cccc} \phi: &\ensuremath{\mathbb{P}^2} &\dashrightarrow &\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1} \\ &(x:y:z) &\mapsto& (x:y;z:x), \end{array}$$ with inverse $$\begin{array}{cccc} \phi^{-1}:& \ensuremath{\mathbb{P}^1 \times \mathbb{P}^1} &\dashrightarrow &\ensuremath{\mathbb{P}^2} \\ &(x_0:x_1;y_0:y_1) &\mapsto &(x_0y_1:x_1y_1:x_0y_0). \end{array}$$ The strict transform of $C$ is a real curve $C'$ on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ of type $(3,3)$ and cuspidal configuration ${[2],[2],[2],[2]}$, and all the cusps have real coordinates. On $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$, since the cusps $p_j$ have real coordinates, a fiber, say $L_j$, intersecting $C'$ at a cusp is real. Using the defining polynomial of $L_j$ to substitute one of the variables $x_0$ or $x_1$ in the defining polynomial of $C'$ and removing the factor of $x_i^3$, we are left with a polynomial with real coefficients in $y_0$ and $y_1$ of degree $3$. This polynomial has a double real root and one simple, hence real, root. The double root corresponds to the $y$-coordinates of the cusp $p_j$, and the simple root to the $y$-coordinates of a smooth intersection point $r_j$ of $C'$ and $L_j$. Successively blowing up at any $r_j$ and contracting the corresponding $L_j$ leads to the desired series of curves. Since the points we blow up have real coordinates, the transformations preserve the real coordinates of the cusps. Hence, all the curves in the series can have four cusps with real coordinates. \end{proof} An image of a real rational cuspidal curve of type $(3,3)$ with four ordinary cusps on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ is given in Figure \ref{4realcusps}.
In the figure, the surface $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ is embedded in $\mathbb{P}^3$ using the Segre embedding, and we have chosen a suitable affine covering of $\mathbb{P}^3$. The image was created in cooperation with Georg Muntingh using \verb+surfex+ \cite{Surfex}. \begin{center} \begin{figure}\label{4realcusps} \end{figure} \end{center} \appendix \section{Construction of curves}\label{Appendix:A} \noindent We show by two examples the explicit construction of some of the rational cuspidal curves in Theorem \ref{thm:4cusp}. This is done with the computer program \verb+Maple+ \cite{Maple} and the package \verb+algcurves+. See \cite{MOEPHD} for a description of the maps. \subsection{The $(1,4)$-curves of Theorem \ref{thm:4cusp}} \begin{ex} The curves of type $(1,4)$ on $\ensuremath{\mathbb{F}_e}$, $e \geq 1$, can be constructed and checked with the following code. Here we have done this up to $e=2$, but the procedure generalizes to any $e \geq 1$. We first take the defining polynomial of the plane rational cuspidal quintic with four cusps, and verify its cuspidal configuration with the command \verb+singularities+. The \verb+Maple+ output of the command \verb+singularities+ is a list with elements of the form $$[[x,y,z],m,\delta,b],$$ where $[x,y,z]$ denotes the plane projective coordinates, $m$ the multiplicity, $\delta$ the delta invariant, and $b$ the number of branches of the singularity. {\scriptsize \begin{verbatim} > with(algcurves): > F := y^4*z-2*x*y^2*z^2+x^2*z^3+2*x^2*y^3-18*x^3*y*z-27*x^5: > singularities(F, x, y); {[[0, 0, 1], 2, 3, 1], [[RootOf(27*_Z^3+16*z^3), -(9/4)*RootOf(27*_Z^3+16*z^3)^2/z, 1], 2, 1, 1]} \end{verbatim} } \noindent We next move the curve such that we can blow up the appropriate point on $\ensuremath{\mathbb{P}^2}$.
By inspection of the defining polynomial, we find the tangent line to the curve at the cusp with multiplicity sequence $[2_3]$ to be $\ensuremath{\mathscr{V}}(x)$, and its smooth intersection point with the curve is $(0:1:0)$. We then change coordinates and move $(0:1:0)$ to $(0:0:1)$. {\scriptsize \begin{verbatim} > z := 1: sort(F); -27*x^5+2*x^2*y^3-18*x^3*y+y^4-2*x*y^2+x^2 > singularities(F*x, x, y); unassign('z'): {[[0, 0, 1], 3, 7, 2], [[0, 1, 0], 2, 1, 2], [[(4/31)*RootOf(27*_Z^3+108*_Z-92)-24/31-(9/31)*RootOf(27*_Z^3+108*_Z-92)^2,... ...-24/31-(27/31)*RootOf(27*_Z^3+108*_Z-92)-(9/31)*RootOf(27*_Z^3+108*_Z-92)^2, 1], 2, 1, 1]} > x := xa: y := za: z := ya: \end{verbatim} } \noindent Now we blow up $(0:0:1)$, take the strict transform and check that we have the $(1,4)$-curve on $\ensuremath{\mathbb{F}_1}$ by finding its singularities in all four affine coverings. Note that \verb+Maple+ reports spurious singularities, since the \verb+algcurves+ package considers curves as objects in $\ensuremath{\mathbb{P}^2}$. The actual singularities have coordinates of the form $[\cdot,\cdot,1]$.
{\scriptsize \begin{verbatim} > xa := x0*y1: ya := x1*y1: za := y0: > factor(F); -y1*(-y0^4*x1+2*y1^2*x0*y0^2*x1^2-y1^4*x0^2*x1^3-2*y1*x0^2*y0^3+18*y1^3*x0^3*y0*x1+27*y1^4*x0^5) > F := -y0^4*x1+2*y1^2*x0*y0^2*x1^2-y1^4*x0^2*x1^3-2*y1*x0^2*y0^3+18*y1^3*x0^3*y0*x1+27*y1^4*x0^5: > x0 := 1: y0 := 1: singularities(F, x1, y1); unassign('x0', 'y0'): {[[0, 1, 0], 3, 3, 3], [[1, 0, 0], 4, 9, 1], [[RootOf(16*_Z^3+27), -(4/9)*RootOf(16*_Z^3+27), 1], 2, 1, 1]} > x0 := 1: y1 := 1: singularities(F, x1, y0); unassign('x0', 'y1'): {[[1, 0, 0], 2, 3, 1], [[RootOf(16*_Z^3+27), (4/3)*RootOf(16*_Z^3+27)^2, 1], 2, 1, 1]} > x1 := 1: y1 := 1: singularities(F, x0, y0); unassign('x1', 'y1'): {[[0, 0, 1], 2, 3, 1], [[RootOf(27*_Z^3+16), -(9/4)*RootOf(27*_Z^3+16)^2, 1], 2, 1, 1]} > x1 := 1: y0 := 1: singularities(F, x0, y1); unassign('x1', 'y0'): {[[0, 1, 0], 5, 13, 4], [[1, 0, 0], 4, 12, 4], [[RootOf(27*_Z^3+16), (3/4)*RootOf(27*_Z^3+16), 1], 2, 1, 1]} \end{verbatim} } \noindent The curve on $\ensuremath{\mathbb{F}_1}$ is positioned in such a way that we can apply the Hirzebruch one up transformation repeatedly, and get the $(1,4)$-curve on $\ensuremath{\mathbb{F}_e}$ for any $e$. After one transformation we have a curve on $\mathbb{F}_2$ with the prescribed singularities. The following code verifies the latter claim. 
{\scriptsize \begin{verbatim} > x0 := x0a: x1 := x1a: y0 := y0a: y1 := x0a*y1a: > F; -y0a^4*x1a+2*x0a^3*y1a^2*y0a^2*x1a^2-x0a^6*y1a^4*x1a^3-2*x0a^3*y1a*y0a^3 +18*x0a^6*y1a^3*y0a*x1a+27*x0a^9*y1a^4 > x0a := 1: y0a := 1: singularities(F, x1a, y1a); unassign('x0a', 'y0a'): {[[0, 1, 0], 3, 3, 3], [[1, 0, 0], 4, 9, 1], [[RootOf(16*_Z^3+27), -(4/9)*RootOf(16*_Z^3+27), 1], 2, 1, 1]} > x0a := 1: y1a := 1: singularities(F, x1a, y0a); unassign('x0a', 'y1a'): {[[1, 0, 0], 2, 3, 1], [[RootOf(16*_Z^3+27), (4/3)*RootOf(16*_Z^3+27)^2, 1], 2, 1, 1]} > x1a := 1: y1a := 1: singularities(F, x0a, y0a); unassign('x1a', 'y1a'): {[[0, 0, 1], 4, 9, 1], [[0, 1, 0], 5, 16, 4], [[RootOf(27*_Z^3+16), 4/3, 1], 2, 1, 1]} > x1a := 1: y0a := 1: singularities(F, x0a, y1a); unassign('x1a', 'y0a'): {[[0, 1, 0], 9, 45, 4], [[1, 0, 0], 4, 18, 4], [[RootOf(27*_Z^3+16), 3/4, 1], 2, 1, 1]} \end{verbatim} } \end{ex} \subsection{A $(5,4)$-curve of Theorem \ref{thm:4cusp}} \begin{ex} Instead of constructing the entire series of $(1,4)$-curves, we may take the $(1,4)$-curve on $\ensuremath{\mathbb{F}_1}$ and construct a $(5,4)$-curve on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$. We then have to change coordinates before we apply the Hirzebruch one down transformation. The curve on $\ensuremath{\mathbb{F}_1}$ has a cusp at $(0:1;0:1)$, which we must move, say to $(0:1;1:1)$, before we apply the transformation to $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$. Then we check that we have the $(5,4)$-curve on $\ensuremath{\mathbb{P}^1 \times \mathbb{P}^1}$ with the prescribed singularities.
{\scriptsize \begin{verbatim} > x0 := x0a: x1 := x1a: y0 := y0a-x1a*y1a: y1 := y1a: > x0a := x0b: x1a := x1b: y0a := x0*y0b: y1a := y1b: > F; -x1b*x0b^4*y0b^4+4*x0b^3*y0b^3*x1b^2*y1b-6*x0b^2*y0b^2*x1b^3*y1b^2 +4*x0b*y0b*x1b^4*y1b^3-x1b^5*y1b^4+2*y1b^2*x0b^3*x1b^2*y0b^2 -4*y1b^3*x0b^2*x1b^3*y0b+2*y1b^4*x0b*x1b^4+y1b^4*x0b^2*x1b^3 -2*y1b*x0b^5*y0b^3+6*y1b^2*x0b^4*y0b^2*x1b-6*y1b^3*x0b^3*y0b*x1b^2 +18*y1b^3*x0b^4*x1b*y0b-18*y1b^4*x0b^3*x1b^2+27*y1b^4*x0b^5 > x0b := 1: y0b := 1: singularities(F, x1b, y1b); unassign('x0b', 'y0b'): {[[0, 1, 0], 5, 10, 5], [[1, 0, 0], 4, 15, 1], [[RootOf(16*_Z^3+27), 4/9-(16/27)*RootOf(16*_Z^3+27)+(16/81)*RootOf(16*_Z^3+27)^2, 1], 2, 1, 1]} > x0b := 1: y1b := 1: singularities(F, x1b, y0b); unassign('x0b', 'y1b'): {[[1, 1, 0], 2, 3, 1], [[RootOf(16*_Z^3+27), RootOf(16*_Z^3+27)+(4/3)*RootOf(16*_Z^3+27)^2, 1], 2, 1, 1]} > x1b := 1: y1b := 1: singularities(F, x0b, y0b); unassign('x1b', 'y1b'): {[[0, 1, 0], 4, 15, 1], [[1, 0, 0], 3, 3, 3], [[RootOf(27*_Z^3+16), -(9/4)*RootOf(27*_Z^3+16)-(27/16)*RootOf(27*_Z^3+16)^2, 1], 2, 1, 1]} > x1b := 1: y0b := 1: singularities(F, x0b, y1b); unassign('x1b', 'y0b'): {[[0, 0, 1], 4, 9, 1], [[0, 1, 0], 5, 10, 5], [[1, 0, 0], 4, 6, 4], [[RootOf(27*_Z^3+16), 4/9-(1/3)*RootOf(27*_Z^3+16)+RootOf(27*_Z^3+16)^2, 1], 2, 1, 1]} \end{verbatim} } \end{ex} \end{document}
\begin{document} \title{On twin prime distribution and associated biases} \author{Shaon Sahoo} \address{Indian Institute of Technology, Tirupati 517506, India} \email{[email protected]} \keywords{Twin primes, Arithmetic progression, Chebyshev's bias} \begin{abstract} We study some new aspects of the twin prime distribution, focusing especially on how the prime pairs are distributed in arithmetic progressions when they are classified according to the residues of their first members. A modified totient function is seen to play a significant role in this study; we analyze this function and use it to construct a new heuristic for the twin prime conjecture. For the twin primes, we also discuss a sieve similar to Eratosthenes' sieve and a formula similar to Legendre's formula for the prime counting function. We end our work with a discussion of three types of biases in the distribution of twin primes. Where possible, we compare our results with the corresponding results from the distribution of primes. \end{abstract} \maketitle \tableofcontents \section{Introduction} \label{sec1} Important proven results are scarce in the study of twin primes. The statements about the infinitude of twin primes and the corresponding distribution are the well-known conjectures in this field. Although these conjectures remain unproven so far, there have been some important breakthroughs towards proving them. Chen's work \cite{chen73} and more recent work by Zhang \cite{zhang14} may be noted in this regard. In this work we study some new aspects of the distribution of twin primes; we especially focus on studying the twin primes in arithmetic progressions. Many important results are known when primes are studied in residue classes. For example, Dirichlet's work (1837) shows that, if $(a,q) = 1$, there exist infinitely many primes $p$ such that $p\equiv a~(\text{mod}~q)$. In analogy with these studies, one can also study twin primes after classifying them according to the residues of their first members.
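As a concrete illustration of this classification, the following sketch (our own code, not part of the paper's analysis; the bound $10^5$ and the modulus $q=10$ are arbitrary choices) counts twin prime pairs $\hat{p}\le x$ according to the residue of the first member:

```python
def sieve(n):
    """Boolean primality table for 0..n via the sieve of Eratosthenes."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return is_prime

def twin_primes_by_residue(x, q):
    """Count twin prime pairs (p, p+2) with p <= x, classified by p mod q."""
    is_prime = sieve(x + 2)
    counts = {}
    for p in range(2, x + 1):
        if is_prime[p] and is_prime[p + 2]:
            counts[p % q] = counts.get(p % q, 0) + 1
    return counts

# For q = 10, only the classes 1, 7 and 9 receive many pairs; the
# classes 3 and 5 contain just the single pairs (3,5) and (5,7).
print(twin_primes_by_residue(10**5, 10))
```

The output already hints at the dichotomy made precise in Section \ref{sec3}: a residue class $a$ with $(a+2,q)>1$ contains at most one pair.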
In this study of twin primes, a modified totient function ($\phi_2$) is seen to play a role similar to that of Euler's totient function in the study of primes. We analyze this modified totient function and an associated modified M\"{o}bius function ($\mu_2$). For the twin primes, we also discuss in this work a sieve similar to the sieve of Eratosthenes and a formula similar to Legendre's formula for the prime counting function. We also provide a new heuristic for the twin prime conjecture using the modified totient function $\phi_2$. While studying primes in residue classes, one of the most interesting results to emerge is the existence of biases or, as they are popularly known, the prime number races \cite{granville06}. Chebyshev (1853) first noted that certain residue classes are, surprisingly, preferred over others. For example, if $\pi(x;q;a)$ denotes the number of primes $\le x$ in the residue class $a$ (mod $q$), it is seen that $\pi(x;4;3)>\pi(x;4;1)$ on most occasions. A detailed explanation of this bias (Chebyshev's) is given by Rubinstein and Sarnak in their 1994 paper \cite{rubinstein94}. More recently (2016), a different type of bias was reported by Oliver and Soundararajan \cite{oliver16}. They found that a prime of a certain residue class is more likely to be followed by a prime of another residue class. A conjectural explanation was presented to understand this bias. In the last part of this paper we discuss similar biases in the distribution of the twin prime pairs. In addition, we report a third type of bias: if $(\hat{p}_n-\hat{p}_{n-1})$ represents the difference between the first members of two consecutive twin prime pairs, then $(\hat{p}_n-\hat{p}_{n-1}\pm1)$ are more likely to be prime numbers than composites. We also find that $(\hat{p}_n-\hat{p}_{n-1}-1)$ is more likely to be a prime than $(\hat{p}_n-\hat{p}_{n-1}+1)$.
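Chebyshev's observation is easy to reproduce on a small scale; the sketch below (our own code, with an arbitrary cutoff of $10^5$) counts primes in the two reduced residue classes modulo 4:

```python
def primes_up_to(n):
    """List of primes <= n via the sieve of Eratosthenes."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return [p for p, b in enumerate(is_prime) if b]

primes = primes_up_to(10**5)
pi_4_1 = sum(1 for p in primes if p % 4 == 1)
pi_4_3 = sum(1 for p in primes if p % 4 == 3)
# Chebyshev's bias: the class 3 (mod 4) is typically ahead of 1 (mod 4).
print(pi_4_3, pi_4_1)
```

At $x=10^5$ the class $3$ (mod 4) is indeed in the lead, although the lead is known to change on rare occasions.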
The numerical results presented here are based on the analysis of the first 500 million primes ($\pi(x_0)=5\times 10^8$) and the corresponding roughly 30 million twin prime pairs ($\pi_2(x_0)=29,981,546$). Here $x_0$ is taken as the last prime we consider, i.e. 11,037,271,757. Besides these studies, we also state a conjecture regarding the arithmetic progression of twin primes in line with the Green-Tao theorem and give some numerical examples of this type of progression. \section{Notations and definitions} \label{sec2} In this paper the letters $i,j,a,b,q,n$ and $N$ represent positive integers. The letters $x,y,z$ and $c$ represent real numbers, and $p$ always represents a prime number, with $p_n$ being the $n$-th prime number. Here $\hat{p}$ represents a twin prime pair whose first member is $p$; $\hat{p}_n$ denotes the $n$-th twin prime pair. Unless mentioned otherwise, any statement involving a twin prime pair will be interpreted with respect to the first member of the pair, e.g., $\hat{p} \le x$ means that the first prime of the pair is less than or equal to $x$, and $\hat{p} \equiv a$ (mod $q$) means that the first member of the pair is congruent to $a$ (mod $q$). We denote the gcd of $a$ and $b$ by $(a,b)$. In this paper we denote a reduced residue class of $q$ by a number $a$ if $(a,q) = 1$ and $0<a<q$. A pair of integers is called a {\it coprime pair} to a third integer if both integers are individually coprime to the third integer; e.g., 6 and 14 form a coprime pair to 25. A coprime pair to an integer is called a {\it twin coprime pair} if the integers in the pair are separated by 2. The functions $\mu(n)$ and $\phi(n)$ respectively denote the M\"{o}bius function and Euler's totient function. The functions $\omega(n)$ and $\omega_o(n)$ denote respectively the number of distinct prime divisors and distinct odd prime divisors of $n$ (for odd $n$, $\omega(n) = \omega_o(n)$, and for even $n$, $\omega(n) = \omega_o(n)+1$).
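For reference, the functions $\omega(n)$ and $\omega_o(n)$ can be computed directly by trial-division factorization (a minimal sketch with our own function names):

```python
def omega(n):
    """Number of distinct prime divisors of n."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:  # strip all copies of this prime
                n //= d
        d += 1
    if n > 1:                  # leftover prime factor
        count += 1
    return count

def omega_o(n):
    """Number of distinct odd prime divisors of n."""
    return omega(n) - (1 if n % 2 == 0 else 0)

# Example: omega(12) = 2 (primes 2 and 3), while omega_o(12) = 1.
```

Note that for even $n$ this returns $\omega(n) = \omega_o(n)+1$, as stated above.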
\\ \noindent {\it Definitions:}\\ \indent $\phi_2(q):=\#\{a\le q~|(a,q)=1 ~{\rm and}~ (a+2,q)=1\}$.\\ \indent $\mu_2(n):=\mu(n)2^{\omega_o(n)}$.\\ \indent $\pi(x):= \#\{p \le x \}$.\\ \indent $\pi(x;q;a):= \#\{p \le x ~|p\equiv a (\text{mod}~q)\}$.\\ \indent $\pi(x;q;a_i|a_j):= \#\{p_n \le x ~| p_n\equiv a_i (\text{mod}~q)~\text{and}~p_{n+1}\equiv a_j (\text{mod}~q)\}$.\\ \indent $\delta(x;q;a):= \frac{\pi(x;q;a)}{\pi(x)}$.\\ \indent $\delta(x;q;a_i|a_j):= \frac{\pi(x;q;a_i|a_j)}{\pi(x;q;a_i)}$.\\ \indent $\Delta(x;q;a_i,a_j):= \pi(x;q;a_i) - \pi(x;q;a_j)$.\\ \indent $\bar{\Delta}(x;q;a_i,a_j):= \Delta(x;q;a_i,a_j)\frac{\ln(x)}{\sqrt{x}}$.\\ \indent $\pi^{\pm}(x):= \#\{p_n \le x~| (p_{n+1}-p_n \pm 1) ~\text{is a prime} \}$.\\ \indent $\delta^{\pm}(x):=\frac{\pi^{\pm}(x)}{\pi(x)}$.\\ \indent $\pi_2(x):=\#\{\hat{p} \le x\}$.\\ \indent $\pi_2(x;q;a):=\#\{\hat{p} \le x ~| \hat{p}\equiv a (\text{mod}~q)\}$.\\ \indent $\pi_2(x;q;a_i|a_j):= \#\{\hat{p}_n \le x ~| \hat{p}_n\equiv a_i (\text{mod}~q)~\text{and}~\hat{p}_{n+1}\equiv a_j (\text{mod}~q)\}$.\\ \indent $\delta_2(x;q;a):= \frac{\pi_2(x;q;a)}{\pi_2(x)}$.\\ \indent $\delta_2(x;q;a_i|a_j):= \frac{\pi_2(x;q;a_i|a_j)}{\pi_2(x;q;a_i)}$.\\ \indent $\Delta_2(x;q;a_i,a_j):= \pi_2(x;q;a_i) - \pi_2(x;q;a_j)$. \\ \indent $\bar{\Delta}_2(x;q;a_i,a_j):= \Delta_2(x;q;a_i,a_j)\frac{\ln^2(x)}{10\sqrt{x}}$.\\ \indent $\pi^{\pm}_2(x):= \#\{\hat{p}_n \le x~| (\hat{p}_{n+1}-\hat{p}_n \pm 1) ~\text{is a prime} \}$.\\ \indent $\delta^{\pm}_2(x):=\frac{\pi^{\pm}_2(x)}{\pi_2(x)}$.\\ \section{Arithmetic progressions and twin primes} \label{sec3} Unlike the case of prime numbers, where each reduced residue class contains infinitely many primes (Dirichlet's theorem), in the case of twin primes some reduced residue classes may not have more than one prime pair. This is because the existence of the second prime puts an extra constraint on the first one. For example, there is no twin prime pair in the residue class $a=1$ (mod 6).
For $q=10$, the residue class $a=3$ has only one prime pair, namely 3 and 5. These observations can be concisely presented in the following theorem. \begin{theorem} The reduced residue class $a$ (mod $q$) cannot have more than one twin prime pair when $(a+2,q)>1$. For such $a$, if both $a$ and $a+2$ are prime numbers, then they represent the only twin prime pair in the residue class $a$. \label{th3.1} \end{theorem} This theorem can be proved simply by noting that for any prime $p\equiv a$ (mod $q$), $p+2$ is forced to be composite if $(a+2,q)\neq 1$. The only exception may occur when the number $a+2$ is itself a prime number. In such a case, if $a$ is also a prime number, then the pair consisting of $a$ and $a+2$ is the only twin prime pair in the class $a$. It may be mentioned in passing that some non-reduced residue classes may have one twin prime pair. For example, when $q=9$, the residue class $a=3$ has one prime pair, namely 3 and 5. Theorem \ref{th3.1} helps us to come up with a conjecture for twin primes in line with Dirichlet's theorem for prime numbers. For that we first define below an admissible class in the present context of twin prime distribution. \begin{definition} A residue class $a$ (mod $q$) is said to be admissible if $(a,q)=1$ and $(a+2,q)=1$. \label{df3.1} \end{definition} With this definition of an admissible class, we now propose the following conjecture. \begin{conjecture} Each admissible residue class of a given modulus contains infinitely many twin prime pairs. \label{cj3.1} \end{conjecture} A stronger version of this conjecture can be formulated with the definition of a {\it modified totient function} $\phi_2(q)$. This function gives the number of {\it admissible} residue classes modulo $q$. \begin{definition} The function $\phi_2(q)$ denotes the number of admissible residue classes modulo $q$, i.e., $\phi_2(q)=\#\{a<q~|(a,q)=1 ~{\rm and}~ (a+2,q)=1\}$.
\label{df3.2} \end{definition} Clearly $\phi_2(q)\le \phi(q)$, where $\phi(q)$ is Euler's totient function. While calculating $\phi_2(q)$, a reduced residue class $a$ is excluded if there exists a prime divisor $p$ of $q$ such that $p|a+2$. This observation helps us to deduce that $\phi_2(2^n)=\phi(2^n)=2^{n-1}$ for $n\ge 1$, and for any odd prime $p$, $\phi_2(p^m)=\phi(p^m)-p^{m-1}=p^m(1-2/p)$ when $m\ge 1$. In Section \ref{sec7} we prove that \begin{equation} \begin{split} \phi_2(q) &= q \prod_{p|q}(1-\frac 2 p) \text{~~for odd q, and}\\ \nonumber \phi_2(q) &= q (1-\frac 1 2) \prod_{p>2,~p|q}(1-\frac 2 p) \text{~~for even q}. \end{split} \end{equation} If we take $\phi_2(1) = 1$, then the function $\phi_2(q)$ is multiplicative. The above formula for $\phi_2(q)$ is stated as a theorem in Section \ref{sec4}. The function $\phi_2(q)$ can also be expressed in terms of the principal Dirichlet character ($\chi_1$) in the following way: $\phi_2(q)=\sum_{i=1}^q\chi_1(i)\chi_1(i+2)=\sum_{k=1}^{\phi(q)}\chi_1(a_k+2)$, where the second sum is over the reduced residues ($a_k$). The values of the function for some integers are as follows: $\phi_2(4)= \phi(4)=2$, $\phi_2(6)= \phi(6)-1=1$, $\phi_2(7)= \phi(7)-1=5$ and $\phi_2(9)= \phi(9)-3=3$. Using the function $\phi_2(q)$, we now propose the following conjecture regarding the twin prime counting function for a given admissible class. \begin{conjecture} For a given $q$ and an admissible class $a$ (mod $q$), we have \begin{equation} {\displaystyle \lim_{x\rightarrow\infty}} \frac{\pi_2(x)}{\pi_2(x;q;a)} = \phi_2(q). \nonumber \end{equation} \label{cj3.2} \end{conjecture} We end this section with a conjecture on the arithmetic progression of twin primes in line with the Green-Tao theorem \cite{greentao08}. For some positive integers $a$ and $b$, let the sequence $a+kb$ represent (first members of) twin primes for $k=0, 1, 2, \cdots, l-1$ ($l>2$).
This sequence is said to represent an arithmetic progression of twin primes with $l$ terms. \begin{conjecture} For every natural number $l$ ($l>2$), there exist arithmetic progressions of twin primes with $l$ terms. \label{cj3.3} \end{conjecture} Some examples of arithmetic progressions of twin primes are as follows: $41+420k$ for $k = 0, 1, \ldots, 5$; $51341+16590k$ for $k = 0, 1, \ldots, 6$; $2823809+570570k$ for $k = 0, 1, \ldots, 6$. Each number in a progression represents the first member of a twin prime pair. \section{Functions $\mu_2(n)$ and $\phi_2(n)$} \label{sec4} We begin this section with the following definition of a {\it modified M\"{o}bius function} $\mu_2(n)$. \begin{definition} If $\omega_o(n)$ denotes the number of odd prime divisors of $n$, then the {\it modified M\"{o}bius function} is defined as \begin{equation} \mu_2(n)=\mu(n)2^{\omega_o(n)},\nonumber \end{equation} where $\mu(n)$ is the M\"{o}bius function. \label{df4.1} \end{definition} The function $\mu_2(n)$ is multiplicative. The values of $\omega_o(n)$ for some $n$ are as follows: $\omega_o(1)=0$, $\omega_o(2)=0$, $\omega_o(3)=1$ and $\omega_o(6)=1$. The main reason we defined $\mu_2(n)$ is that it is possible to express $\phi_2(n)$ in terms of this modified M\"{o}bius function. The following theorem (Theorem \ref{th4.1}) gives two different expressions for $\phi_2(n)$. The proof of the theorem is given in Section \ref{sec7}. \begin{theorem} The function $\phi_2(n)$, as defined in Definition \ref{df3.2}, takes the following product form \begin{equation} \phi_2(n) = n (1-\frac{\theta_n}{2}) \prod_{p>2,~p|n}(1-\frac 2 p), \label{eq_ph2} \end{equation} where $\theta_n$ is either 0 or 1 depending on whether $n$ is odd or even respectively. Furthermore, using the $\mu_2(n)$ function, one can express $\phi_2(n)$ as the following divisor sum: \begin{equation} \phi_2(n) = n \sum_{d|n} \frac{\mu_2(d)}{d}.
\label{eq_p2m2} \end{equation} \label{th4.1} \end{theorem} In the following we discuss some properties of the functions $\mu_2(n)$ and $\phi_2(n)$. \begin{lemma} The divisor sum of the function $\mu_2$ is given by \begin{equation} \sum_{d|n}\mu_2(d)= \left\{\begin{array}{ll} 0 & if~~ n~is~ even \\ \nonumber (-1)^{\omega(n)} & if~~ n~is~ odd. \end{array}\right. \end{equation} \label{lm4.1} \end{lemma} \begin{proof} The formula is true for $n=1$ since $\mu_2(1)=\mu(1)2^{0} = 1$. Assume now that $n>1$ and is written $n=p_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}$. First we take $n$ to be even with $p_1 = 2$ and $a_1\ge1$. We note that for any odd divisor $d$, there is an even divisor $2d$ so that $\mu_2(d)+\mu_2(2d) = 0$. It may be noted that we have $\mu_2(2^a d)=0$ for $a>1$. This shows that $\sum_{d|n}\mu_2(d) = 0$ when $n$ is even. We now take $n$ to be odd. Using Lemma \ref{lm8.1}, we get the following. \begin{equation} \sum_{d|n}\mu_2(d) = \sum_{d|n} \mu(d) 2^{\omega_o(d)} = \prod_{p|n}(1-2)=(-1)^{\omega(n)}. \nonumber \end{equation} \end{proof} \begin{lemma} If $n=\prod_{p|n}p^{a_p}$, then the divisor sum of the function $\phi_2$ is given by \begin{equation} \sum_{d|n}\phi_2(d) = n \prod_{p>2,~p|n}\frac{\phi_2(p)+p^{-a_p}}{\phi(p)}. \nonumber \end{equation} \label{lm4.2} \end{lemma} \begin{proof} For odd $n$, taking advantage of the multiplicative property of $\phi_2$, we write $\sum_{d|n}\phi_2(d) = \prod_{p|n}\{1+\phi_2(p)+\phi_2(p^2)+\cdots+\phi_2(p^{a_p})\}$. Now since $\phi_2(p^m)=p^m(1-\frac 2p)$ for odd $p$ and $m\ge1$, we get $\sum_{d|n}\phi_2(d) = \prod_{p|n}\frac{p^{a_p}(p-2)+1}{p-1} = n \prod_{p|n}\frac{\phi_2(p)+p^{-a_p}}{\phi(p)}$. For even $n$, there will be an additional multiplicative factor in this result; the factor is $(1+\phi_2(2)+\phi_2(2^2)+\cdots+\phi_2(2^{a_2})) = 2^{a_2}$. This factor is a part of $n$.
\end{proof} \begin{theorem} Let $D(\mu_2,s)$ denote the Dirichlet series for $\mu_2$, i.e., $D(\mu_2,s):=\sum_{n=1}^{\infty}\frac{\mu_2(n)}{n^s}$. We have \begin{equation} D(\mu_2,s) = (1-\frac{1}{2^s})\prod_{p>2}(1-\frac{2}{p^s}) {~~ for ~~} Re(s) >1.\nonumber \end{equation} \label{th4.2} \end{theorem} \begin{proof} First we note that $2^{\omega_o(n)}\le 2^{\omega(n)}\le d(n)$, where $d(n)$ counts the number of divisors of $n$. It is known that $\sum_{n=1}^{\infty}\frac{d(n)}{n^s}$ is an absolutely convergent series for $Re(s)>1$ (chapter 11, \cite{apostol76}); this implies that the series $\sum_{n=1}^{\infty}\frac{\mu_2(n)}{n^s}$ also converges absolutely for $Re(s)>1$. Using Lemma \ref{lm8.2}, we have $D(\mu_2,s)=\prod_p\{1+\frac{\mu_2(p)}{p^s}\} = (1-\frac{1}{2^s})\prod_{p>2}(1-\frac{2}{p^s})$. \end{proof} \begin{theorem} Let $D(\phi_2,s)$ denote the Dirichlet series for $\phi_2$, i.e., $D(\phi_2,s):=\sum_{n=1}^{\infty}\frac{\phi_2(n)}{n^s}$. We have \begin{equation} D(\phi_2,s)= \zeta(s-1) D(\mu_2,s) \text{~~ for ~~} Re(s) >2,\nonumber \end{equation} where $\zeta$ is the Riemann zeta function. \label{th4.3} \end{theorem} \begin{proof} We first note that $\phi_2(n)\le n$. So the series $\sum_{n=1}^{\infty}\frac{\phi_2(n)}{n^s}$ is absolutely convergent for $Re(s)>2$. The theorem can now be proved simply by taking $f(n)=\mu_2(n)$ and $g(n)=n$ in Theorem \ref{th8.1}. \end{proof} \noindent {\bf Remark.} Here we note that $\frac{1}{D(\mu_2,s)}= (1-\frac{1}{2^s})^{-1}\prod_{p>2}(1-\frac{2}{p^s})^{-1} = (1+\frac{1}{2^s}+\frac{1}{2^{2s}}+\cdots)\prod_{p>2}(1+\frac{2}{p^s}+\frac{2^2}{p^{2s}}+\cdots) = \sum_{n=1}^{\infty}\frac{2^{\Omega_o(n)}}{n^s}$, where $\Omega_o(n)$ counts the number of odd prime divisors of $n$ with multiplicity.
Now in analogy with the relation $\left(\sum_{n=1}^{\infty}\frac{\mu(n)}{n^s}\right)^{-1} = \zeta(s)$, one may define $\zeta_2(s):=\sum_{n=1}^{\infty}\frac{2^{\Omega_o(n)}}{n^s}$ so that Theorems \ref{th4.2} and \ref{th4.3} can be written respectively as $D(\mu_2,s) = 1/\zeta_2(s)$ and $D(\phi_2,s)=\zeta(s-1)/\zeta_2(s)$. \begin{theorem} We have the following weighted average of $\mu_2(n)$: \begin{equation} \sum_{n\le x} \mu_2(n)\left[\frac{x}{n}\right] = L(x) + L(\frac x2) + 2L(\frac x4) + 4L(\frac x8) + \cdots, \nonumber \end{equation} where $L(x) = \sum_{n\le x} (-1)^{\omega(n)}$. It is understood that $L(x)=0$ for $0 < x <1$. \label{th4.4} \end{theorem} \begin{proof} Using Theorem \ref{th8.3} with $f(n) = \mu_2(n)$ and $g(n) = 1$, we get \begin{align*} \sum_{n\le x} \mu_2(n)\left[\frac{x}{n}\right] = & \sum_{n\le x} \sum_{d|n} \mu_2(d) & \\ = & \sum_{odd~ n \le x} (-1)^{\omega(n)} & \text{(using Lemma \ref{lm4.1})}\\ = & L(x) - \sum_{n \le x/2} (-1)^{\omega(2n)} & \text{(where $L(x) = \sum_{n \le x} (-1)^{\omega(n)}$)} \end{align*} Since $\sum_{n \le x/2} (-1)^{\omega(2n)} = \sum_{odd~ n \le x/2} (-1)^{\omega(2n)}+\sum_{n \le x/4} (-1)^{\omega(4n)}$ and $\omega(2n) = \omega(n) + 1$ for odd $n$, we get $K(x) = L(x) + K(\frac x2) - \sum_{n \le x/4} (-1)^{\omega(4n)}$, where $K(x)=\sum_{n\le x} \mu_2(n)\left[\frac{x}{n}\right]$. Now, manipulating the last term repeatedly, we get the following series: $K(x) = L(x) + K(\frac x2) + K(\frac x4) + K(\frac x8) + \cdots$. This series can be further manipulated to find $K(x)$ in the following way: $K(x) = L(x) + L(\frac x2) + 2L(\frac x4) + 4L(\frac x8) + \cdots$. \end{proof} \section{Function $\phi_2(n)$ and twin primes} \label{sec5} We noted in Conjecture \ref{cj3.2} how $\phi_2(n)$ can be helpful in studying the twin prime distribution. In this section we will present some more observations on how $\phi_2(n)$ can be useful in the study of twin primes.
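Before doing so, we note that the identities of Theorem \ref{th4.1} are easy to test numerically. The following sketch (our own illustration; the range 200 is an arbitrary choice) compares the divisor-sum expression for $\phi_2$ with a direct count of admissible residues:

```python
from math import gcd

def phi2_direct(q):
    """phi_2(q): count residues a <= q with gcd(a, q) = gcd(a+2, q) = 1."""
    return sum(1 for a in range(1, q + 1)
               if gcd(a, q) == 1 and gcd(a + 2, q) == 1)

def mu2(n):
    """mu_2(n) = mu(n) * 2^omega_o(n), computed by trial division."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0                # square factor, so mu(n) = 0
            result *= -1 if d == 2 else -2
        d += 1
    if n > 1:                           # leftover prime factor
        result *= -1 if n == 2 else -2
    return result

def phi2_divisor_sum(n):
    """phi_2(n) = n * sum_{d|n} mu_2(d)/d (the divisor sum of Theorem 4.1)."""
    return sum(mu2(d) * (n // d) for d in range(1, n + 1) if n % d == 0)

# Agrees with the sample values phi_2(4)=2, phi_2(6)=1, phi_2(7)=5, phi_2(9)=3.
assert all(phi2_direct(q) == phi2_divisor_sum(q) for q in range(1, 200))
```

The same two functions also make Conjecture \ref{cj3.2} easy to explore empirically for small moduli.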
First we mention a simple application of $\phi_2(n)$ in the form of the following theorem. \begin{theorem} For a given $l \ge 1$, consider a set $\mathcal{P}$ of any $l$ distinct primes. There always exists a number $m>1$ so that $(p,m(m+2))=1$ for every prime $p \in \mathcal{P}$. \label{th5.1} \end{theorem} \begin{proof} When $l=1$ and $p\ge 5$, from Theorem \ref{th4.1}, we get $\phi_2(p) \ge 2$. This confirms the existence of at least one $m>1$ so that $(p,m(m+2))=1$. It is also easy to explicitly check that the theorem is correct when $p=2$ or 3.\\ When $l=2$ and the two primes are 2 and 3, we can explicitly check that the theorem is valid; for example, $m=5$ works here. For any other pair of primes $p_1$ and $p_2$, it is easy to see that $\phi_2(p_1p_2)\ge 2$.\\ When $l\ge 3$, we can define $P_l = {\displaystyle \prod_{p \in \mathcal{P}}}p$, and note that $\phi_2(P_l) \ge 2$. This confirms the existence of at least one $m>1$ so that $(p,m(m+2))=1$ for every prime $p \in \mathcal{P}$. \end{proof} \begin{corollary} For a given $n>2$, there always exists a number $m>n$ so that $(p,m(m+2))=1$ for every prime $p\le n$. \label{cor5.1} \end{corollary} \begin{proof} This corollary can be shown to be a consequence of Theorem \ref{th5.1}. We will give here a separate direct proof of the corollary. Let $P_n = {\displaystyle \prod_{p\le n}}p$. From Theorem \ref{th4.1}, we have $\phi_2(P_n)={\displaystyle \prod_{odd~p\le n}}(p-2)\ge 1$. We recall that, between 1 and $P_n$, there are $\phi_2(P_n)$ twin coprime pairs to $P_n$. In this case all these pairs (one pair when $2<n<5$) lie above $n$. This proves the corollary. \end{proof} This corollary tells us that it is always possible to find a twin coprime pair to the primorial ($P_n$) up to a given number ($n>2$). It may be mentioned here that we can have a stronger statement than Corollary \ref{cor5.1} by noting that there are $\phi_2(P_n)$ twin coprime pairs in $(n,P_n)$.
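Corollary \ref{cor5.1} and the remark following it can be illustrated numerically; the sketch below (our own code; $n=7$ is chosen for readability) lists the twin coprime pairs to the primorial $P_7 = 210$:

```python
from math import gcd

def primorial_up_to(n):
    """Product of all primes <= n (trial-division primality test)."""
    P = 1
    for p in range(2, n + 1):
        if all(p % d for d in range(2, int(p ** 0.5) + 1)):
            P *= p
    return P

def twin_coprime_pairs(n):
    """Pairs (m, m+2) with n < m < P_n and gcd(m(m+2), P_n) = 1."""
    P = primorial_up_to(n)
    return [(m, m + 2) for m in range(n + 1, P)
            if gcd(m, P) == 1 and gcd(m + 2, P) == 1]

# phi_2(210) = 1*3*5 = 15 pairs, and all of them lie above n = 7.
pairs = twin_coprime_pairs(7)
print(len(pairs), pairs[0])
```

The first pair found is $(11,13)$, and the count $15$ matches $\phi_2(P_7)=\prod_{odd~p\le 7}(p-2)$.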
It is also interesting to note that Theorem \ref{th5.1} implies that there are infinitely many primes. According to the Hardy-Littlewood conjecture, $\pi_2(x) \sim 2C_2\frac{x}{(\ln x)^2}$, where $C_2$ is the so-called twin prime constant; this constant is given by the following product form: $C_2= \prod_{p>2}\frac{p(p-2)}{(p-1)^2}$. In terms of $\phi(n)$ and $\phi_2(n)$, this constant can be written in the following series forms, \begin{equation} C_2 = \sum_{odd~n=1}^{\infty}\frac{\mu(n)}{\phi^2(n)},~~\text{and} \label{eq_c2} \end{equation} \begin{equation} \frac{1}{C_2} = \sum_{odd~n=1}^{\infty} \frac{\mu^2(n)}{n\phi_2(n)}. \label{eq_inc2} \end{equation} Both equations can be proved by expressing their right-hand sides as Euler products (see also Lemma \ref{lm8.2}; to use it in the present context, leave out the sum over the even numbers from the left side and drop $p=2$ from the right side of the equation). Equation \ref{eq_c2} also appears in \cite{golomb60}. Let $P_n$ denote the product of the first $n$ primes, i.e., $P_n=\prod_{k=1}^{n}p_k$. It is easy to check that \begin{equation} 2C_2 = {\displaystyle \lim_{n\rightarrow\infty}} \frac{P_n \phi_2(P_n)}{\phi^2(P_n)} = {\displaystyle \lim_{n\rightarrow\infty}} \frac{\phi_2(P^2_n)}{\phi^2(P_n)}. \label{eq_c2ro} \end{equation} The last expression in Equation \ref{eq_c2ro} is especially interesting here. It helps us to view the Hardy-Littlewood twin prime conjecture from a different angle. We explain this in the following. \paragraph{\bf Heuristics for twin prime conjecture.} We take $X = P_x = \prod_{p\le x}p$ for large $x$. The number of integers which are coprime to and smaller than $X$ is $\phi(X)$. It may be mentioned here that all the coprimes, except 1, lie in $[x,X]$. The total number of the coprime pairs of all possible gaps is $\frac 12 \phi(X) ( \phi(X)-1) \sim \frac 12 \phi^2(X)$.
Among these pairs, the number of the twin coprime pairs (pairs with gap = 2) is $\phi_2(X)$. All these twin coprime pairs lie in $[x,X]$. So in $[x,X]$, the fraction of the coprime pairs which are the twin coprime pairs is $\frac{2\phi_2(X)}{\phi^2(X)}$. Now the number of primes in $[x,X]$ is $\pi(X)-\pi(x)\sim \pi(X)$. The number of all possible prime pairs (with any non-zero gap) is $\frac 12 \pi(X)(\pi(X)-1) \sim \frac 12 \pi^2(X)$. Among these prime pairs, the number of the twin prime pairs in $[x,X]$ would then be: $\pi_2(X) -\pi_2(x) \sim \frac 12 \pi^2(X) \times \frac{\phi_2(X)}{\frac 12 \phi^2(X)}$. From the prime number theorem we know $\pi(X) \sim \frac{X}{\ln X}$. So for large $X$, we finally get $\pi_2(X) \sim \frac{X^2}{\ln^2 X} \times \frac{\phi_2(X)}{\phi^2(X)} = \frac{X}{\ln^2 X} \times \frac{X\phi_2(X)}{\phi^2(X)}$. Now $\frac{X\phi_2(X)}{\phi^2(X)}$ converges to $2C_2$ as noted in Equation \ref{eq_c2ro}; this implies: $\pi_2(X) \sim 2C_2 \frac{X}{\ln^2 X}$. The above argument can also be given in the following ways. We note that $\frac {\pi^2(X)}{\phi^2(X)}$ denotes the fraction of the coprime pairs which are the prime pairs (of any possible gaps). Now there are $\phi_2(X)$ twin coprime pairs; among them, the number of twin prime pairs would then be $\frac {\pi^2(X)}{\phi^2(X)} \times \phi_2(X) \sim 2C_2 \frac{X}{\ln^2 X}$. We also note that $\frac {\pi(X)}{\phi(X)}$ is the probability of a coprime to be a prime. So $\left(\frac {\pi(X)}{\phi(X)}\right)^2$ is the probability that the two members of a coprime pair are both primes. This suggests that the number of the twin prime pairs among $\phi_2(X)$ twin coprime pairs is $\left(\frac {\pi(X)}{\phi(X)}\right)^2 \times \phi_2(X) \sim 2C_2 \frac{X}{\ln^2 X}$.
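All three displays, Equations \ref{eq_c2}, \ref{eq_inc2} and \ref{eq_c2ro}, can be checked numerically. The sketch below is an illustration, not a proof: it truncates the series at $10^5$ and the products at moderate prime bounds, so the stated tolerances are deliberately loose.

```python
from fractions import Fraction

def primes_upto(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

N = 10 ** 5

# twin prime constant from its product form, truncated at N
C2 = 1.0
for p in primes_upto(N):
    if p > 2:
        C2 *= p * (p - 2) / (p - 1) ** 2

# smallest-prime-factor sieve over the odd numbers, for fast factorization
spf = list(range(N + 1))
for p in range(3, int(N ** 0.5) + 1, 2):
    if spf[p] == p:
        for q in range(p * p, N + 1, 2 * p):
            if spf[q] == q:
                spf[q] = p

s1 = 0.0   # eq_c2:   sum over odd n of mu(n)/phi(n)^2
s2 = 0.0   # eq_inc2: sum over odd n of mu(n)^2/(n*phi2(n))
for n in range(1, N + 1, 2):
    m, sign, phi, phi2, squarefree = n, 1, 1, 1, True
    while m > 1:
        p = spf[m]
        m //= p
        if m % p == 0:          # repeated prime factor, so mu(n) = 0
            squarefree = False
            break
        sign, phi, phi2 = -sign, phi * (p - 1), phi2 * (p - 2)
    if squarefree:
        s1 += sign / phi ** 2
        s2 += 1.0 / (n * phi2)

# eq_c2ro: X*phi2(X)/phi(X)^2 for the primorial X of the primes <= 300,
# computed exactly; the prime 2 contributes the factor 2*1/1^2 = 2
r = Fraction(2)
for p in primes_upto(300):
    if p > 2:
        r *= Fraction(p * (p - 2), (p - 1) ** 2)
```

Both partial sums land within $10^{-3}$ of $C_2\approx0.6601618$ and $1/C_2$ respectively, and the exact rational ratio `r` is already within $10^{-3}$ of $2C_2$ at the primorial of the primes up to 300.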
In the preceding heuristics, the reason we take $X$ to be a large primorial (product of primes up to some large number) is that the density of coprimes vanishes for this special case ($\phi(X)/X \rightarrow 0$ as $X \rightarrow \infty$; see Theorem \ref{th5.2}). This vanishing density is a desired property in our heuristics as we know that the density of primes has this property ($\pi(N)/N \rightarrow 0$ as $N \rightarrow \infty$). If we take any arbitrary form of $X$ then the density of coprimes may not vanish. We may also note here that, as $X \rightarrow \infty$, $\frac{X\phi_2(X)}{\phi^2(X)}$ goes to a limit when the special form of $X$ is taken. On the other hand, if $N$ increases arbitrarily then the function $\frac{N\phi_2(N)}{\phi^2(N)}$ does not have any limit. Next we discuss the estimated values of $\phi(X)/X$ and $\phi_2(X)/X$ when $X$ is the product of primes up to $x$. \begin{theorem}[Mertens' result] As $x\rightarrow \infty$, we have ${\displaystyle \prod_{p\le x}(1-\frac{1}{p}) = \frac{e^{-\gamma}}{\ln x} + O(\frac{1}{\ln^2 x})}$. Here $\gamma \approx 0.577216$ is the Euler-Mascheroni constant. \label{th5.2} \end{theorem} The proof of this result can be found in most standard textbooks. \begin{theorem} As $x\rightarrow \infty$, we have ${\displaystyle (1- \frac 12) \prod_{2<p\le x}(1-\frac{2}{p}) = 2C_2\frac{e^{-2\gamma}}{\ln^2 x} + O(\frac{1}{\ln^3 x})}$. Here $C_2$ is the twin prime constant and $\gamma$ is the Euler-Mascheroni constant. \label{th5.3} \end{theorem} \begin{proof} It is possible to prove Theorem \ref{th5.3} directly using the result from Theorem \ref{th5.2}. It can be done in the following way.\\ ${\displaystyle (1- \frac 12) \prod_{2<p\le x}(1-\frac{2}{p}) = \left(2\prod_{2<p\le x} \frac{1-\frac 2p}{(1- \frac 1p)^2}\right) \prod_{p\le x}(1- \frac 1p)^2}$.
\\ Now as $x\rightarrow \infty$, ${\displaystyle 2\prod_{2<p\le x} \frac{1-\frac 2p}{(1- \frac 1p)^2} \rightarrow 2C_2}$ (the tail of this convergent product contributes an error negligible at the order considered here), and the last part can be estimated from Theorem \ref{th5.2}: ${\displaystyle \prod_{p\le x}(1- \frac 1p)^2 = \frac{e^{-2\gamma}}{\ln^2 x} + O(\frac{1}{\ln^3 x})}$. This finally gives us\\ ${\displaystyle (1- \frac 12) \prod_{2<p\le x}(1-\frac{2}{p}) = 2C_2\frac{e^{-2\gamma}}{\ln^2 x} + O(\frac{1}{\ln^3 x})}$. \end{proof} The function $\phi_2(n)$ also gives a non-trivial upper bound for $\pi_2(n)$. We make a formal statement in terms of the following theorem. \begin{theorem} For $n>2$, $\pi_2(n)\le \phi_2(n)+\omega(n)$. \label{th5.4} \end{theorem} By taking $n$ to be a large primorial (product of primes up to about $\ln n$), we see that for $n \rightarrow \infty$, using Theorem \ref{th5.3}, $\pi_2(n) \le \phi_2(n)+\omega(n) \sim \frac {2C_2 n e^{-2\gamma}}{(\ln \ln n)^2}$. This upper bound, of course, is not very tight; a much better bound can be found directly from $\pi(n)$. We note that, except 5, no other prime number is part of two twin prime pairs; accordingly: $\pi_2(n)\le \frac 12 \pi(n) \sim \frac{n}{2 \ln n}$. \section{Eratosthenes sieve and Legendre's formula for twin primes} \label{sec6} The Eratosthenes sieve gives us an algorithm to find the prime numbers in a list of integers. Legendre's formula uses the algorithm to write the prime-counting function in the following mathematical form: $\pi(x) - \pi(\sqrt{x})+1 = \sum_{d|P(\sqrt{x})} \mu(d) \left[\frac{x}{d}\right]$, where $P(z) = \prod_{p\le z}p$. Here we first discuss a similar sieve to detect the twin primes in $[1,x]$. This sieve may be called the Eratosthenes sieve for the twin primes. We then use this sieve to write a mathematical formula for the twin-prime counting function, which may be called Legendre's formula for twin primes. \paragraph{\bf Eratosthenes sieve for twin primes.} Assume that we are to find the first members of twin primes in $[1,N]$ with $N>2$.
To start the sieving process, we first take a list of integers from 2 to $N+2$. We then carry out the following steps. Take the lowest integer ($n$) available in the list (initially $n=2$), and then remove (slashes ``/" in the example below) the integers from the list which are multiples of but greater than $n$. Then remove (strike-outs ``---" in the example below) the integers from the list which are two less than the integers removed in the last step. Circle the lowest integer available in the list (3 in the first instance). Now repeat the process with the circled number (new $n$) and continue the process till we reach $N$ in the list. At the end of the process, the circled numbers are the first members of the twin primes in $[1,N]$. \noindent {\bf Example.} To find the first members of twin primes below 30, we take the following list of numbers from 2 to 32. We then carry out the above-mentioned steps to circle out the twin primes below 30. \noindent \sout{2} \circled{$\left. 3\right.$} \sout{$\slashed{4}$} \circled{$\left. 5\right.$} \sout{$\slashed{6}$} \sout{7} \sout{$\slashed{8}$} $\slashed{9}$ \sout{$\slashed{10}$} \circled{11} \sout{$\slashed{12}$} \sout{13} \sout{$\slashed{14}$} $\slashed{15}$ \sout{$\slashed{16}$} \circled{17} \sout{$\slashed{18}$} \sout{19} \sout{$\slashed{20}$} $\slashed{21}$ \sout{$\slashed{22}$} \sout{23} \sout{$\slashed{24}$} \sout{$\slashed{25}$} \sout{$\slashed{26}$} $\slashed{27}$ \sout{$\slashed{28}$} \circled{29} \sout{$\slashed{30}$} 31 $\slashed{32}$ \noindent So the twin primes (first members) below 30 are 3, 5, 11, 17 and 29. \paragraph{\bf Legendre's formula for twin primes.} The main idea behind the sieve above can be used to find a formula for $\pi_2(x)$. We write this formula as a theorem below. The proof of the theorem is given in Section \ref{sec7}.
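The sieve just described is easy to put in code. A minimal sketch (the function name is ours): slashing every proper multiple of each $n\ge 2$ marks exactly the composites in $[2,N+2]$, and the strike-out step then removes every $k$ with $k+2$ composite, so the circled survivors are the $n\le N$ with $n$ and $n+2$ both prime.

```python
def twin_first_members(N):
    """First members of twin primes in [1, N], via the sieve of this section."""
    limit = N + 2
    slashed = [False] * (limit + 1)
    for n in range(2, limit + 1):
        for m in range(2 * n, limit + 1, n):
            slashed[m] = True                    # the "/" marks
    # a number survives if it is neither slashed nor struck out,
    # i.e. neither n nor n + 2 is a proper multiple of anything
    return [n for n in range(2, N + 1)
            if not slashed[n] and not slashed[n + 2]]
```

Running it with $N=30$ reproduces the circled numbers 3, 5, 11, 17, 29 of the example above.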
\begin{theorem} If $P(z) = {\displaystyle \prod_{p\le z}}p$ and $[x]$ denotes the greatest integer $\le x$, we have \begin{equation} \begin{split} \pi_2(x) - \pi_2(\sqrt{x}) & = \sum_{ab|P(\sqrt{x})}\mu(ab) \left[\frac{x-l_{a,b}}{ab}\right], \end{split} \label{eq6.1} \end{equation} where $a$ takes only odd values and $b$ takes both odd and even values. Here $l_{a,b}$ is the lowest (positive) integer which is divisible by $a$ and simultaneously $l_{a,b}+2$ is divisible by $b$. If we consider $l_{a,b}=at_{a,b}$, then $t_{a,b}$ is the solution of the congruence $a t_{a,b}+2\equiv 0~(mod~b)$ with $0< t_{a,b} \le b$ when $b>1$ (the boundary value $t_{a,b}=b$ occurs only for $b=2$, where $t_{a,b}=2$). For $b=1$, we take $t_{a,b}=1$. \label{th6.1} \end{theorem} \section{Proofs of Theorems \ref{th4.1} and \ref{th6.1} } \label{sec7} We will prove Theorem \ref{th4.1} by two methods: first by the principle of cross-classification (inclusion-exclusion principle) and then by the Chinese remainder theorem. The first method is also used for proving Theorem \ref{th6.1}. \noindent {\bf Proof of Theorem \ref{th4.1}}\\ {\it Method 1:} We first assume that $n$ is an odd number and $p_1, \cdots, p_r$ are the distinct prime divisors of $n$. Let $S = \{1,2,\cdots,n\}$ and $S_k = \{m\le n : p_k|m(m+2)\}$. It is clear that \begin{equation} \phi_2(n) = N\left(S -\bigcup_{i=1}^r S_i\right), \nonumber \end{equation} where $N(T)$ denotes the number of elements in a subset $T$ of $S$. Now by the principle of cross-classification (see Theorem \ref{th8.2} for notations and formal statement), we have \begin{equation} \phi_2(n) = N(S) - \sum_{1\le i \le r}N(S_i) + \sum_{1\le i<j \le r}N(S_iS_j)- \cdots + (-1)^rN(S_1S_2\cdots S_r). \nonumber \end{equation} We have $N(S)=n$ and $N(S_i)=2\frac{n}{p_i}$.
In general, to calculate $N(S_{k_1}\cdots S_{k_l})$, we write $N(S_{k_1}\cdots S_{k_l}) = {\displaystyle \sum_{d|P_l}} N(A_{P_l}^{d})$ where $P_l$ is the product of given $l$ primes corresponding to the sets $S_{k_1}, S_{k_2}, \cdots, S_{k_l}$, and $A_{P_l}^{d}=\{m\le n : d|m ~\text{and}~ d'|m+2\}$ with $dd' = P_l$. For $d'=1$, $N(A_{P_l}^{d})=\frac {n}{P_l}$. For $d'>1$, $N(A_{P_l}^{d})$ is given by the number of solutions for $a$ in the congruence $d a + 2 \equiv 0 ~(\text{mod}~d')$ so that $0<d a \le n$. This congruence has a unique solution modulo $d'$; let $t$ be the solution where $1\le t < d'$. All the solutions for $a$ would then be in the form $t+s d'$ where $s$ is so that $0<(t+s d') d \le n$. We now have $s \le \frac {n}{P_l} - \frac{t}{d'}$. Since $\frac {n}{P_l}$ is an integer and $\frac{t}{d'}<1$, the maximum value of $s$ can be $\frac {n}{P_l}-1$. On the other hand, the minimum value of $s$ is 0. This gives $N(A_{P_l}^{d})= \frac {n}{P_l}$. Since there are $2^l$ distinct divisors of $P_l$, we get $N(S_{k_1}\cdots S_{k_l}) = 2^l \frac {n}{P_l}$. Now using the principle of cross-classification, we get $\phi_2(n) = n - 2\sum_{1\le i \le r} \frac{n}{p_i} + 2^2 \sum_{1\le i<j \le r} \frac{n}{p_ip_j} - \cdots + (-1)^r 2^r \frac{n}{p_1 \cdots p_r} = n \sum_{d|n} \frac{\mu_2(d)}{d}$. If $n$ is even, we can take $p_1=2$ and correspondingly $S_1=\{m\le n : p_1|m \}$. We now have $N(S_1)=\frac{n}{p_1}$. To find $N(S_1S_{k_1}\cdots S_{k_l})$, where the sets $S_{k_1}, S_{k_2}, \cdots, S_{k_l}$ correspond to $l$ different odd primes, we write $N(S_1S_{k_1}\cdots S_{k_l}) = {\displaystyle \sum_{d|P_l}} N(A_{P_l}^{d})$. Here $P_l$ is the product of given $l$ odd primes, and $A_{P_l}^{d}=\{m\le n : d|m ~\text{and}~ 2d'|m+2\}$ with $dd' = P_l$. For $d'=1$, $N(A_{P_l}^{d})=\frac {n}{2P_l}$. For $d'>1$, $N(A_{P_l}^{d})$ is given by the number of solutions for $a$ in the congruence $d a + 2 \equiv 0 ~(\text{mod}~2d')$ so that $0<da \le n$.
Proceeding as before, we find that $N(A_{P_l}^{d})= \frac {n}{2P_l}$ and $N(S_1S_{k_1}\cdots S_{k_l}) = 2^l \frac {n}{2P_l}$. Using these values, we finally get $\phi_2(n)=n \sum_{d|n} \frac{\mu_2(d)}{d}$. We may note here that the distinctiveness of even $n$ is taken care of by $\mu_2(n)$, which is defined by $\mu_2(n)=\mu(n)2^{\omega_o(n)}$ where $\omega_o(n)$ is the number of odd prime divisors of $n$. \noindent {\it Method 2:} Let $n=p_1^{a_1}p_2^{a_2}\cdots p_r^{a_r}$. Now consider the system of congruences with $i$-th congruence as $x\equiv b_i$ (mod $p_i^{a_i}$), where $(b_i,p_i)=(b_i+2,p_i)=1$ and $0<b_i<p_i^{a_i}$. In the congruence, $b_i$ can take $\phi_2(p_i^{a_i})$ values, where, as discussed in Section \ref{sec3}, $\phi_2(p^a) = p^a(1-\frac 2p)$ for an odd prime $p$, and $\phi_2(p^a) = p^a(1-\frac 1p)$ for the even prime $p=2$. For a given set $(b_1,b_2,\cdots b_r)$, the system of congruences has a unique solution modulo $n$ according to the Chinese remainder theorem. So for all possible values of $(b_1,b_2,\cdots b_r)$, there are ${\displaystyle \prod_{i=1}^{r}} \phi_2(p_i^{a_i})$ distinct solutions which lie above 0 and below $n$, so that $\phi_2(n)={\displaystyle \prod_{i=1}^{r}}\phi_2(p_i^{a_i})$. \noindent {\bf Proof of Theorem \ref{th6.1}}\\ To find the number of twin primes in $[\sqrt{x},x]$, we can suitably use the concept behind the Eratosthenes sieve for twin primes discussed in Section \ref{sec6}. To translate this concept into mathematical form, we adopt here the approach which is used in the proof (Method 1) of Theorem \ref{th4.1}. As before, we take $p_1=2$ and $S_1=\{m\le x : p_1|m \}$. For any odd prime $p_k$, we take $S_k = \{m\le x : p_k|m(m+2)\}$. These sets help us to write \begin{equation} \pi_2(x)-\pi_2(\sqrt{x}) = N\left(S -\bigcup_{1\le k \le r} S_k\right), \nonumber \end{equation} where $r = \pi(\sqrt{x})$ and $N(T)$ denotes the number of elements in a subset $T$ of $S$.
Now by the principle of cross-classification (Theorem \ref{th8.2}), we have \begin{equation} \begin{split} \pi_2(x)-\pi_2(\sqrt{x}) = & N(S) - \sum_{1\le i \le r}N(S_i) + \sum_{1\le i<j \le r}N(S_iS_j)- \cdots \\ & + (-1)^rN(S_1S_2\cdots S_r). \label{pi2x} \end{split} \end{equation} We have $N(S)=[x]$, $N(S_1)=[\frac{x}{p_1}]$ for $p_1=2$, and $N(S_i)=[\frac{x}{p_i}]+[\frac{x+2}{p_i}]$ for the odd primes $p_i$. In general, to find $N(S_{k_1}\cdots S_{k_l})$, where the sets $S_{k_1}, S_{k_2}, \cdots, S_{k_l}$ correspond to $l$ different odd primes, we write $N(S_{k_1}\cdots S_{k_l}) = {\displaystyle \sum_{d|P_l}} N(A_{P_l}^{d})$ where $P_l$ is the product of given $l$ odd primes and $A_{P_l}^{d}=\{m\le x : d|m ~\text{and}~ d'|m+2\}$ with $dd' = P_l$. For $d'=1$, $N(A_{P_l}^{d})=[\frac {x}{P_l}]$. For $d'>1$, $N(A_{P_l}^{d})$ is given by the number of solutions for $a$ in the congruence $d a + 2 \equiv 0 ~(\text{mod}~d')$ so that $0<d a \le x$. This congruence has a unique solution modulo $d'$; let $t_{d,d'}$ be the solution where $1\le t_{d,d'} < d'$. All the solutions for $a$ would then be in the form $t_{d,d'}+s d'$ where $s$ is so that $0<d (t_{d,d'}+s d') \le x$. Since $t_{d,d'} < d'$, $s$ cannot be a negative integer. In other words, when $d t_{d,d'}>x$, i.e., when $\frac {x}{P_l} - \frac{t_{d,d'}}{d'} < 0$, we do not have any solution for $a$; hence in such a case $N(A_{P_l}^{d})=0$. Since $\frac{t_{d,d'}}{d'}<1$, we have $-1<\frac {x}{P_l} - \frac{t_{d,d'}}{d'}$. So in the case when $d t_{d,d'}>x$, we can write $N(A_{P_l}^{d})=1+\left[ \frac {x}{P_l} - \frac{t_{d,d'}}{d'} \right]$. When $d t_{d,d'}\le x$ or $\frac {x}{P_l} - \frac{t_{d,d'}}{d'}\ge 0$, then $s$ can take any value from 0 to $\left[\frac {x}{P_l} - \frac{t_{d,d'}}{d'}\right]$. Therefore, in this case also $N(A_{P_l}^{d})=1+\left[ \frac {x}{P_l} - \frac{t_{d,d'}}{d'} \right]$.
To find $N(S_1S_{k_1}\cdots S_{k_l})$, where the sets $S_{k_1}, S_{k_2}, \cdots, S_{k_l}$ correspond to $l$ different odd primes, we write $N(S_1S_{k_1}\cdots S_{k_l}) = {\displaystyle \sum_{d|P_l}} N(A_{P_l}^{d})$. Here $P_l$ is the product of given $l$ odd primes, and $A_{P_l}^{d}=\{m\le x : d|m ~\text{and}~ 2d'|m+2\}$ with $dd' = P_l$. For $d'=1$, $N(A_{P_l}^{d})=[\frac {x}{2P_l}]$. For $d'>1$, $N(A_{P_l}^{d})$ is given by the number of solutions for $a$ in the congruence $da + 2 \equiv 0 ~(\text{mod}~2d')$ so that $0<da \le x$. Let $t_{d,2d'}$ be the unique solution of the congruence, where $0 < t_{d,2d'} < 2d'$. All the solutions for $a$ would then be in the form $t_{d,2d'}+2s d'$ where $s$ is so that $0< d (t_{d,2d'}+2s d') \le x$. Proceeding as before, we find that $N(A_{P_l}^{d})=1+\left[ \frac {x}{2P_l} - \frac{t_{d,2d'}}{2d'} \right]$. If we write $N(S) = N(1)$, $N(S_i) = N(p_i)$, $N(S_iS_j) = N(p_ip_j)$ and so on, we can rewrite Equation \ref{pi2x} in the following way, \begin{equation} \pi_2(x)-\pi_2(\sqrt{x}) = \sum_{a|P(\sqrt{x})}\mu(a)N(a), \label{pi2x2} \end{equation} where $P(\sqrt{x})= {\displaystyle \prod_{p\le\sqrt{x}}p}$. Now $N(a)=\sum_{d|a}' N(A_{P_l}^{d})$ where $\sum'$ indicates the sum over only the odd divisors ($d$) of $a$ and $N(A_{P_l}^{d})=1+\left[ \frac {x}{a} - \frac{t_{d,d'}}{d'} \right]$ with $d'=a/d$ for $d'>1$. Here $0<t_{d,d'}\le d'$ (the boundary value $t_{d,d'}=d'$ occurs only for $d'=2$) and $d t_{d,d'} +2 \equiv 0 ~(\text{mod}~d')$. For $d'=1$, we take $t_{d,d'}=1$. Using the result that $\sum_{d|n}\mu(d) = 0$ for $n>1$, we finally get from Equation \ref{pi2x2}, $\pi_2(x)-\pi_2(\sqrt{x}) = {\displaystyle \sum_{dd'|P(\sqrt{x})}\mu(dd')\left[ \frac {x}{dd'} - \frac{t_{d,d'}}{d'} \right]}$, where $d$ is always odd and $d'$ can take both odd and even values. \section{Some standard results} \label{sec8} In this section we will provide some useful standard results without their proofs.
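As a concrete sanity check of the twin-prime Legendre formula just proved, both sides of Theorem \ref{th6.1} can be evaluated by brute force for small $x$. In this sketch (function names are ours) $l_{a,b}$ is found by direct search as the least $l>0$ with $a\,|\,l$ and $b\,|\,l+2$, and Python's floor division reproduces $[\,\cdot\,]$, including the value $-1$ when $l_{a,b}>x$.

```python
import math
from itertools import product

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def pi2(x):
    # number of first members n <= x with n and n + 2 both prime
    return sum(1 for n in range(2, int(x) + 1)
               if is_prime(n) and is_prime(n + 2))

def legendre_twin(x):
    """Right-hand side of Theorem 6.1 for integer x."""
    primes = [p for p in range(2, math.isqrt(x) + 1) if is_prime(p)]
    # each prime <= sqrt(x) goes into a, into b, or into neither;
    # the prime 2 is never allowed into the odd part a
    choices = [(0, 2) if p == 2 else (0, 1, 2) for p in primes]
    total = 0
    for assign in product(*choices):
        a = b = 1
        for p, c in zip(primes, assign):
            if c == 1:
                a *= p
            elif c == 2:
                b *= p
        l = a                       # least l > 0 with a | l and b | l + 2
        while (l + 2) % b:
            l += a
        mu_ab = (-1) ** sum(c != 0 for c in assign)
        total += mu_ab * ((x - l) // (a * b))
    return total
```

For example, at $x=30$ the sum collapses to $29-14-9-9-5-5+4+4+2+2+1+1+1+1=3=\pi_2(30)-\pi_2(\sqrt{30})$.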
\begin{theorem} Consider two functions $F(s)$ and $G(s)$ which are represented by the following two Dirichlet series, \begin{equation} \begin{split} F(s) &= \sum_{n=1}^{\infty}\frac{f(n)}{n^s} ~~for~~ Re(s)>a, ~~and \\ \nonumber G(s) & = \sum_{n=1}^{\infty}\frac{g(n)}{n^s} ~~for~~ Re(s)>b. \end{split} \end{equation} Then in the half-plane where both the series converge absolutely, we have \begin{equation} F(s)G(s)= \sum_{n=1}^{\infty}\frac{h(n)}{n^s}, \nonumber \end{equation} where $h=f\ast g$, the Dirichlet convolution of $f$ and $g$: \begin{equation} h(n) = \sum_{d|n}f(d)g(\frac nd). \nonumber \end{equation} \label{th8.1} \end{theorem} Here the series for $F(s)$ and $G(s)$ are assumed to be absolutely convergent for $Re(s)>a$ and $Re(s)>b$ respectively. The proof of the theorem can be found in any standard number theory textbook (e.g., Chapter 11 of \cite{apostol76}). \begin{lemma} For a multiplicative function $f$, we have \begin{equation} \begin{split} \sum_{d|n}\mu(d)f(d) = &\prod_{p|n}(1-f(p)) ~~\text{and} \\\nonumber \sum_{d|n}\mu^2(d)f(d) = &\prod_{p|n}(1+f(p)). \end{split} \end{equation} \label{lm8.1} \end{lemma} \begin{lemma} Let $f$ be a multiplicative function so that the series $\sum_n f(n)$ is absolutely convergent. Then the sum of the series can be expressed as the following absolutely convergent infinite product over all primes, \begin{equation} \sum_{n=1}^{\infty} f(n) = \prod_{p} \{1+f(p)+f(p^2)+ \cdots\}. \nonumber \end{equation} \label{lm8.2} \end{lemma} \begin{theorem}[Principle of cross-classification] Let $S$ be a non-empty finite set and let $N(T)$ denote the number of elements of any subset $T$ of $S$.
If $S_1, S_2, ..., S_n$ are given subsets of $S$, then \begin{equation} \begin{split} N\left(S-\bigcup_{i=1}^n S_i\right) = & ~N(S) - \sum_{1\le i \le n}N(S_i) + \sum_{1\le i<j \le n}N(S_iS_j)\\ \nonumber &-\sum_{1\le i<j<k \le n}N(S_iS_jS_k)+ \cdots + (-1)^nN(S_1S_2\cdots S_n), \end{split} \end{equation} where $S-T$ consists of those elements of $S$ which are not in $T$, and $S_iS_j$, $S_iS_jS_k$, etc. denote respectively $S_i\cap S_j$, $S_i\cap S_j\cap S_k$, etc. \label{th8.2} \end{theorem} \begin{theorem} If $h(n)=\sum_{d|n}f(d)g(\frac nd)$, let\\ \begin{equation} H(x) = \sum_{n\le x} h(n), ~~ F(x) = \sum_{n\le x} f(n), ~~ \text{and} ~~ G(x) = \sum_{n\le x} g(n). \nonumber \end{equation} We then have \\ \begin{equation} H(x) = \sum_{n\le x} f(n) G(\frac xn) = \sum_{n\le x} g(n)F(\frac xn). \nonumber \end{equation} \label{th8.3} \end{theorem} The proof of the theorem can be found in, for example, Chapter 3 of \cite{apostol76}. \section{Biases in twin prime distribution} \label{sec10} Our analysis of the first 500 million prime numbers and the corresponding roughly 30 million twin prime pairs ($\pi(x_0)=5\times 10^8$ and $\pi_2(x_0)=29,981,546$) shows three different types of biases both in the distributions of the prime numbers and the twin prime pairs. For plotting different arithmetic functions, we take a data point after every 50 primes ($x_i=p_{_{50i}}$) while presenting the results concerning prime numbers, and we take a data point after every 25 twin prime pairs ($x_i=\hat{p}_{_{25i}}$) while presenting the results concerning twin primes. In the following we present our findings. \subsection{Type-I bias} If $\pi_2(x;4;1)$ and $\pi_2(x;4;3)$ represent the number of twin prime pairs $\le x$ whose first members lie in the residue class $a=1$ and $a=3$ (mod 4) respectively, we find that $\pi_2(x;4;1)>\pi_2(x;4;3)$ for most values of $x$. This can be seen from Table \ref{tab1}.
The bias is also evident from Figure \ref{fig1} where we plot the functions $\delta_2(x;4;3) = \frac{\pi_2(x;4;3)}{\pi_2(x)}$ and $\bar{\Delta}_2(x;4;3,1) = (\pi_2(x;4;3) - \pi_2(x;4;1))\frac{\ln x}{10\sqrt{x}}$. In the definition of $\bar{\Delta}_2(x;4;3,1)$, the factor 10 in the denominator is an overall scaling factor. Ideally, without bias, $\delta_2(x;4;3)$ should be 0.5 and $\bar{\Delta}_2(x;4;3,1)$ should be zero. We also plot the corresponding functions for the prime numbers; we see that, while the prime numbers are biased towards the residue class $a=3$, the twin prime pairs are in contrast biased towards the residue class $a=1$. \begin{table}[th] \renewcommand \arraystretch{1.5} \noindent\[ \begin{array}{r|r r r r} x & \pi(x;4;3) & \Delta(x;4;3,1) & \pi_2(x;4;3) & \Delta_2(x;4;3,1)\\ \hline 10^{7} & 332398 & 218 & 29498 & 16 \\ 5\cdot10^{7} & 1500681 & 229 & 119330 & -441 \\ 10^{8} & 2880950 & 446 & 219893 & -526 \\ 5\cdot10^{8} & 13179058 & 2250 & 919192 & -1786 \\ 10^{9} & 25424042 & 551 & 1711775 & -956 \\ 5\cdot10^{9} & 117479241 & 4260 & 7308783 & -600 \\ 10^{10} & 227529235 & 5960 & 13706087 & -505 \\ \end{array} \] \caption{(Type-I bias) Here biases in case of primes and twin primes are respectively quantified by the functions $\Delta(x;4;3,1)=\pi(x;4;3)-\pi(x;4;1)$ and $\Delta_2(x;4;3,1)=\pi_2(x;4;3)-\pi_2(x;4;1)$. } \label{tab1} \end{table} \begin{figure} \caption{(Type-I bias) Plots of the functions $\delta_2(x;4;3)$ and $\bar{\Delta}_2(x;4;3,1)$. The corresponding plots for the prime numbers are also shown. The broken lines show expected results if there were no bias.} \label{fig1} \end{figure} Although the overall bias is seen to be towards the residue class $a=1$ for the prime pairs, there is an interval in between where the class $a=3$ is preferred. In fact we numerically find that up to about $x\approx50,000$, the residue class $a=1$ is preferred, then up to about $x\approx10^7$, the residue class $a=3$ is preferred.
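The first row of Table \ref{tab1} (and the identity $\pi_2(x;4;1)+\pi_2(x;4;3)=\pi_2(x)$, since every twin first member is odd) can be reproduced with a few lines of sieving. A sketch, classifying each pair by the residue of its first member modulo 4:

```python
def twin_firsts_upto(N):
    # plain Eratosthenes sieve on [0, N + 2], then read off the twin pairs
    sieve = bytearray([1]) * (N + 3)
    sieve[0] = sieve[1] = 0
    for p in range(2, int((N + 2) ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [n for n in range(3, N + 1) if sieve[n] and sieve[n + 2]]

firsts = twin_firsts_upto(10 ** 7)
pi2_4_1 = sum(1 for p in firsts if p % 4 == 1)   # pi_2(10^7; 4; 1)
pi2_4_3 = sum(1 for p in firsts if p % 4 == 3)   # pi_2(10^7; 4; 3)
```

At $x=10^7$ this yields $\pi_2(x;4;3)=29498$ and $\Delta_2(x;4;3,1)=16$, in agreement with Table \ref{tab1}.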
After this for a very long interval (we check up to about $x\approx 1.1\times 10^{10}$), the residue class $a=1$ is again preferred for the twin primes. We also calculate Brun's constant separately for the two residue classes. Let $B_2(x;4;1) = (\frac15 +\frac17)+(\frac{1}{17} +\frac{1}{19}) + \cdots$ and $B_2(x;4;3) = (\frac13 + \frac15) + (\frac{1}{11}+\frac{1}{13})+\cdots$. Here the first series involves the twin pairs whose first members are congruent to 1 (mod 4); similarly, the second series involves the twin primes whose first members are congruent to 3 (mod 4). In both cases we only consider the prime pairs in $[1,x]$. We find that $B_2(x_0;4;1)\approx0.802233$ and $B_2(x_0;4;3)\approx0.985735$, where $x_0$ is given by $\pi_2(x_0)=29,981,546$. We note here that $B_2(x_0) = B_2(x_0;4;1) + B_2(x_0;4;3) \approx 1.787967$. The value of $B_2(x_0)$ is still somewhat far from its known value of $B_2\approx1.902161$; this is due to the extremely slow convergence of the series of reciprocals of the twin primes. We find that $B_2(x;4;3)$ keeps a lead over $B_2(x;4;1)$ from the beginning (the first twin pair belongs to the class $a=3$). Since $B_2-B_2(x_0)\approx0.114194$ and $B_2(x_0;4;3)-B_2(x_0;4;1)\approx0.183502$, i.e., the total remaining contribution of all twin pairs beyond $x_0$ is smaller than the present gap between the two classes, it can be concluded that, irrespective of how large $x$ is, $B_2(x;4;3)$ will always be larger than $B_2(x;4;1)$. In future study, it will be interesting to find out whether $\lim_{x\rightarrow \infty} B_2(x;4;1)$ and $\lim_{x\rightarrow \infty} B_2(x;4;3)$ exist and, if so, what the corresponding limits are. \begin{figure} \caption{(Type-II bias) Plots of the functions $\delta_2(x;4;1|1)$ and $\delta_2(x;4;3|1)$ along with their counterparts for the prime numbers are shown here. Plots of the complementary functions, like $\delta_2(x;4;3|3) = 1 - \delta_2(x;4;3|1)$, are not provided.
The broken line shows the expected result if there were no bias.} \label{fig2} \end{figure} \subsection{Type-II bias} A second type of bias can be found if we consider the consecutive twin prime pairs. Our numerical results show that after a prime pair of a certain residue class, it is more probable to find the next prime pair to be from the different residue class. To quantify this bias, we define the following functions: $\delta_2(x;4;1|1):= \frac{\pi_2(x;4;1|1)}{\pi_2(x;4;1)}$ and $\delta_2(x;4;1|3):= \frac{\pi_2(x;4;1|3)}{\pi_2(x;4;1)}$. Here $\pi_2(x;4;a_i|a_j)$ denotes the number of twin prime pairs ($\le x$) belonging to the residue class $a_i$ whose next pair belongs to the class $a_j$. It may be noted that $\delta_2(x;4;1|1)+\delta_2(x;4;1|3)=1$. Functions $\delta_2(x;4;3|1)$ and $\delta_2(x;4;3|3)$ are defined similarly. The plots of the functions $\delta_2(x;4;1|1)$ and $\delta_2(x;4;3|1)$ can be found in Figure \ref{fig2}. The corresponding plots for the prime numbers are also shown in the figure. If we consider the twin prime pairs to be completely uncorrelated, the values of these functions should be 0.5. But we see that, for example, $\delta_2(x;4;1|3)>0.5>\delta_2(x;4;1|1)$ for all values of $x$ that we investigated. Exactly the same behavior is seen in the case of the prime numbers, although the bias is stronger there. The bias in consecutive primes was investigated earlier in Ref. \cite{oliver16}. We would like to point out here that functions like $\delta_2(x;4;1|1)$ and $\delta_2(x;4;3|1)$ are quite regular and smooth compared to a function like $\delta_2(x;4;3)$ or $\bar{\Delta}_2(x;4;3,1)$. The values of the functions $\delta_2(x;4;1|1)$ and $\delta_2(x;4;3|1)$ for some values of $x$ can be found in Table \ref{tab2}.
\begin{table}[ht] \renewcommand \arraystretch{1.5} \noindent\[ \begin{array}{c|c c c c} x & \delta(x;4;1|1) & \delta(x;4;3|1) & \delta_2(x;4;1|1) & \delta_2(x;4;3|1)\\ \hline 10^{7} & 0.4350 & 0.5647 & 0.4769 & 0.5228 \\ 10^{8} & 0.4434 & 0.5565 & 0.4815 & 0.5198 \\ 10^{9} & 0.4489 & 0.5511 & 0.4836 & 0.5166 \\ 10^{10} & 0.4534 & 0.5466 & 0.4864 & 0.5136 \\ \end{array} \] \caption{(Type-II bias) The variation of the functions $\delta(x;4;a_i|a_j)$ and $\delta_2(x;4;a_i|a_j)$ is very slow and smooth. Values of the functions are rounded off to four decimal places.} \label{tab2} \end{table} Although $\delta_2(x;4;a_i|a_j)$ varies very slowly, we expect all these functions to approach 0.5 as $x$ goes to infinity. Assuming that there are infinitely many twin prime pairs, we propose the following conjecture. \begin{conjecture} For any two residue classes $a_i$ and $a_j$, we have $\delta_2(x;4;a_i|a_j)\rightarrow 0.5$ as $x\rightarrow\infty$. \label{cj10.1} \end{conjecture} \begin{figure} \caption{(Type-III bias) Plots of the functions $\delta^{+}_2(x)$ and $\delta^{-}_2(x)$. The corresponding results for the prime numbers are also provided.} \label{fig3} \end{figure} \subsection{Type-III bias} A third type of bias can be found if we study the gap between the first members of consecutive twin prime pairs. To quantify the bias, we define the following functions: $\delta^{+}_2(x):=\frac{\pi^{+}_2(x)}{\pi_2(x)}$, where $\pi^{+}_2(x)$ is the number of prime pairs ($\le x$) for each of which the following twin prime gap plus 1 is a prime number, i.e., $\pi^{+}_2(x) = \#\{\hat{p}_n \le x~| (\hat{p}_{n+1}-\hat{p}_n + 1) ~\text{is a prime} \}$. Similarly, the function $\delta^{-}_2(x)$ is defined to quantify the bias in the twin prime gap minus 1. If the prime pairs were distributed in a completely random manner, both $\delta^{\pm}_2(x)$ would have been less than 0.5 since we have more odd composites than primes.
But what we find is that $\delta^{-}_2(x) > \delta^{+}_2(x) > 0.5$ for all the values of $x$ that we investigated. A very similar bias is seen for the prime numbers, but in this case, interestingly, $\delta^{+}(x) > \delta^{-}(x) > 0.5$. The results related to this particular type of bias can be seen in Figure \ref{fig3}. The values of the functions $\delta^{\pm}_2(x)$ for some values of $x$ can be found in Table \ref{tab3}. \begin{table}[ht] \renewcommand \arraystretch{1.5} \noindent\[ \begin{array}{c| c c c c} x & \delta^{+}(x) & \delta^{-}(x) & \delta^{+}_2(x) & \delta^{-}_2(x) \\ \hline 10^{7} & 0.7418 & 0.6739 & 0.6673 & 0.7103 \\ 10^{8} & 0.7203 & 0.6664 & 0.6401 & 0.6803 \\ 10^{9} & 0.7026 & 0.6585 & 0.6186 & 0.6560 \\ 10^{10} & 0.6875 & 0.6506 & 0.6001 & 0.6349 \\ \end{array} \] \caption{(Type-III bias) The variation of the functions $\delta^{\pm}(x)$ and $\delta^{\pm}_2(x)$ is very slow and smooth. Values of the functions are rounded off to four decimal places.} \label{tab3} \end{table} Although $\delta^{\pm}(x)$ and $\delta^{\pm}_2(x)$ change very slowly, we expect all these functions to eventually go to 0 since the number of odd composite numbers grows faster than the number of primes. Assuming the infinitude of twin primes, we propose the following conjecture for this type of bias. \begin{conjecture} As $x \rightarrow \infty$, $\delta^{+}(x), \delta^{-}(x) \rightarrow 0$ and $\delta^{+}_2(x), \delta^{-}_2(x) \rightarrow 0$. \label{cj10.2} \end{conjecture} \end{document}
\begin{document} \title{Generalized Euler classes, differential forms and commutative DGAs} \begin{abstract} In the context of commutative differential graded algebras over $\mathbb Q$, we show that an iteration of ``odd spherical fibration" creates a ``total space" commutative differential graded algebra with only odd degree cohomology. Then we show for such a commutative differential graded algebra that, for any of its ``fibrations" with ``fiber" of finite cohomological dimension, the induced map on cohomology is injective. \end{abstract} \section{Introduction} In geometry, one would like to know which rational cohomology classes in a base space can be annihilated by pulling up to a fibration over the base with finite dimensional fiber. One knows that if $[x]$ is a $2n$-dimensional rational cohomology class on a finite dimensional CW complex $X$, there is a $(2n-1)$-sphere fibration over $X$ so that $[x]$ pulls up to zero in the cohomology groups of the total space. In fact there is a complex vector bundle $V$ over $X$ of rank $n$ whose Euler class is a multiple of $[x]$. Thus this multiple is the obstruction to a nonzero section of $V$, and vanishes when pulled up to the part of $V$ away from the zero section, which deformation retracts to the unit sphere bundle. Rational homotopy theory provides a natural framework to study questions of this type, where topological spaces are replaced by commutative differential graded algebras (commutative DGAs) and topological fibrations are replaced by algebraic fibrations. This will be the context in which we work throughout the paper. The reader can read more in \cite{MR1802847, MR2355774, MR0646078} about the topological meaning of the results of this paper from the perspective of rational homotopy theory of manifolds and general spaces. The first theorem (Theorem $\ref{thm:itoddsphere}$) of the paper states that the above construction, when iterated, creates a ``total space" commutative DGA with only odd degree cohomology.
\begin{thma} For each commutative DGA $(A, d)$, there exists an iterated odd algebraic spherical fibration $(TA, d) $ over $(A, d)$ so that all even cohomology (except in dimension zero) vanishes. \end{thma} Our next theorem (Theorem $\ref{thm:oddsphere}$) then limits the odd degree classes that can be annihilated by fibrations whose fiber has finite cohomological dimension. \begin{thmb} Let $(B, d)$ be a connected commutative DGA such that $H^{2k}(B)=0$ for all $0 < 2k \leq 2N$. If $\iota \colon (B, d) \to (B\otimes \Lambda V, d) $ is an algebraic fibration whose algebraic fiber has finite cohomological dimension, then the induced map \[ \iota_\ast \colon \bigoplus_{i\leq 2N} H^i(B)\to \bigoplus_{i\leq 2N} H^i(B\otimes \Lambda V)\] is injective. \end{thmb} It follows from the two theorems above that the iterated odd spherical fibration construction is universal for cohomology classes that pull back to zero by any fibrations whose fiber has finite cohomological dimension. The paper is organized as follows. In Section $\ref{sec:pre}$, we recall some definitions from rational homotopy theory. In Section $\ref{sec:gysin}$, we use iterated algebraic spherical fibrations to prove Theorem A. In Section $\ref{sec:algsphere}$, we define bouquets of algebraic spheres and analyze their minimal models. In Section $\ref{sec:main}$, we prove Theorem B. The work in this paper started during a visit of the authors to IHES. The authors would like to thank IHES for its hospitality and for providing a great environment. \section{Preliminaries} \label{sec:pre} We recall some definitions related to commutative differential graded algebras. For more details, see \cite{MR1802847, MR2355774, MR0646078}.
\begin{definition} A commutative differential graded algebra (commutative DGA) is a graded algebra $B = \oplus_{i\geq 0} B^{i}$ over $\mathbb Q$ together with a differential $d\colon B^i \to B^{i+1}$ such that $d^2 = 0$, $xy = (-1)^{ij} yx,$ and $d(xy) = (d x) y + (-1)^{i} x (d y)$, for all $x\in B^{i}$ and $y\in B^{j}$. \end{definition} \begin{definition} \begin{enumerate}[(1)] \item A commutative DGA $(B, d)$ is called connected if $B^0 = \mathbb Q$. \item A commutative DGA $(B, d) $ is called simply connected if $(B, d)$ is connected and $H^1(B) = 0$. \item A commutative DGA $(B, d)$ is of finite type if $H^k(B)$ is finite dimensional for all $k\geq 0$. \item A commutative DGA $(B,d )$ has finite cohomological dimension $n$ if $n$ is the smallest integer such that $H^k(B) = 0 $ for all $k> n$. \end{enumerate} \end{definition} \begin{definition} A connected commutative DGA $(B, d)$ is called a model algebra if as a commutative graded algebra it is free on a set of generators $\{x_\alpha\}_{\alpha\in \Lambda}$ in positive degrees, and these generators can be partially ordered so that $d x_\alpha$ is an element in the algebra generated by $x_\beta$ with $\beta < \alpha$. \end{definition} \begin{definition} A model algebra $(B, d)$ is called minimal if for each generator $x_\alpha$, $d x_\alpha$ has no linear term, that is, \[ d(B) \subset B^+ \cdot B^+, \textup{ where } B^+ = \oplus_{k>0} B^k.\] \end{definition} \begin{remark} For every connected commutative DGA $(A, d_A)$, there exists a minimal model algebra $(\mathcal M(A), d)$ and a morphism $\varphi\colon (\mathcal M(A), d)\to (A, d_A)$ such that $\varphi$ induces an isomorphism on cohomology. $(\mathcal M(A), d)$ is called a minimal model of $(A, d_A)$, and is unique up to isomorphism. See page $288$ of \cite{MR0646078} for more details, cf. \cite{MR1802847, MR2355774}.
\end{remark} \begin{definition}\label{def:algfib} \begin{enumerate}[(i)] \item An algebraic fibration (also called \textit{relative model algebra}) is an inclusion of commutative DGAs $(B, d) \hookrightarrow (B\otimes \Lambda V, d)$ with $V = \oplus_{k\geq 1} V^k$ a graded vector space; moreover, $V = \bigcup_{n=0}^{\infty}V(n)$, where $V(0) \subseteq V(1) \subseteq V(2) \subseteq \cdots $ is an increasing sequence of graded subspaces of $V$ such that \[ d : V(0) \to B \quad \textup{ and } \quad d: V(n) \to B\otimes \Lambda V(n-1), \quad n\geq 1,\] where $\Lambda V$ is the free commutative DGA generated by $V$. \item An algebraic fibration is called minimal if \[ \textup{Im}(d) \subset B^+\otimes \Lambda V + B\otimes \Lambda^{\geq 2} V.\] \end{enumerate} \end{definition} Let $\iota\colon (B, d) \hookrightarrow (B\otimes \Lambda V, d)$ be an algebraic fibration. Suppose $B$ is connected. Consider the canonical augmentation morphism $\varepsilon: (B, d) \to (\mathbb Q, 0)$ defined by $\varepsilon (B^+) = 0$. It naturally induces a commutative DGA: \[ (\Lambda V, \bar d) := \mathbb Q\otimes_B (B\otimes \Lambda V, d).\] We call $(\Lambda V, \bar d) $ the algebraic fiber of the given algebraic fibration. \section{Iterated odd spherical algebraic fibrations}\label{sec:gysin} In this section, we show that for each commutative DGA, there exists an iterated odd algebraic spherical fibration over it such that the total commutative DGA has only odd degree cohomology. Let $(B, d)$ be a connected commutative DGA. An \emph{odd algebraic spherical fibration} over $(B, d)$ is an inclusion of commutative DGAs of the form \[ \varphi: (B, d) \to (B\otimes \Lambda (x), d), \] such that $d x \in B$, where $x$ has degree $2k-1$ and $\Lambda(x)$ is the free commutative graded algebra generated by $x$. The element $e = d x \in B$ is called the Euler class of this algebraic spherical fibration. \begin{proposition}\label{prop:gysin} Let $(B, d)$ be a commutative DGA.
For every even dimensional class $\beta\in H^{2k}(B)$ with $k>0$, there exists an odd algebraic spherical fibration $\varphi\colon (B, d) \to (B\otimes \Lambda(x), d)$ such that its Euler class is equal to $\beta$ and the kernel of the map $\varphi_\ast\colon H^{i+2k}(B) \to H^{i+2k}(B\otimes \Lambda (x) )$ is $H^i(B)\cdot\beta = \{a\cdot \beta \mid a\in H^i(B)\}$. \end{proposition} \begin{proof} Let $(B\otimes \Lambda (x), d)$ be the commutative DGA obtained from $(B, d)$ by adding a generator $x$ of degree $2k-1$ and defining its differential to be $d x = \beta$. We have the following short exact sequence \[ 0 \to (B, d) \to (B\otimes \Lambda (x), d) \to ( B \otimes (\mathbb Q\cdot x), d\otimes \textup{Id}) \to 0,\] which induces a long exact sequence \[ \cdots \to H^{i-1}(B\otimes (\mathbb Q \cdot x)) \to H^i(B) \to H^i(B\otimes \Lambda (x)) \to H^i(B\otimes (\mathbb Q \cdot x)) \to \cdots. \] Applying the identification $H^{i + (2k-1)}(B\otimes (\mathbb Q \cdot x)) \cong H^i(B)$, we obtain the following Gysin sequence \[ \cdots \to H^{i}(B) \xrightarrow{\cup e} H^{i+2k}(B) \xrightarrow{\varphi_\ast} H^{i+2k}(B\otimes \Lambda (x)) \xrightarrow{\partial_{i+1}} H^{i+1}(B) \to \cdots. \] This finishes the proof. \end{proof} \begin{definition} An \emph{iterated odd algebraic spherical fibration} over $(B, d)$ is an algebraic fibration $(B, d) \hookrightarrow (B\otimes \Lambda V, d)$ such that $V^k=0$ for $k$ even. This fibration is called a \emph{finitely iterated odd algebraic spherical fibration} if $\dim V < \infty$. \end{definition} Now let us prove the main result of this section. \begin{theorem}\label{thm:itoddsphere} For each commutative DGA $(A, d)$, there exists an iterated odd algebraic spherical fibration $(TA, d) $ over $(A, d)$ such that all even cohomology (except in dimension zero) vanishes. \end{theorem} \begin{proof} We will construct $TA$ by induction. In the following, for notational simplicity, we shall omit the differential $d$ from our notation.
Let $\mathcal A_0 = A$. Suppose we have defined the iterated odd algebraic spherical fibration $\mathcal A_{m-1}$ over $A$. Fix a basis of $H^{2k}(\mathcal A_{m-1})$ for each $k>0$. Denote the union of all these bases by $\{a_i\}_{i\in I}$. Define $W_{m-1}$ to be a $\mathbb Q$ vector space with basis $\{x_i\}_{i\in I}$, where $|x_i|= |a_i| -1$. We define $\mathcal A_{m}$ to be the iterated odd algebraic spherical fibration $ \mathcal A_{m-1} \otimes \Lambda (W_{m-1})$ over $\mathcal A_{m-1}$ with $d x_i = a_i$ for all $i\in I$. The inclusion map $\iota\colon \mathcal A_{m-1} \hookrightarrow \mathcal A_{m}$ induces the zero map $\iota_\ast = 0 \colon H^{2k}(\mathcal A_{m-1}) \to H^{2k}(\mathcal A_{m})$ for all $k>0$. By construction, $\mathcal A_m$ is also an iterated odd algebraic spherical fibration. Finally, we define $TA$ to be the direct limit of $\mathcal A_m$ under the inclusions $\mathcal A_m \hookrightarrow \mathcal A_{m+1}$. Clearly, $TA$ is an iterated odd algebraic spherical fibration over $A$. More precisely, let $V = \bigoplus_{i=0}^\infty W_{i}$. We have $TA = A\otimes \Lambda V$ with the filtration of $V$ given by $V(n) = \bigoplus_{i=0}^n W_{i}$. Moreover, we have $H^{2k}(TA) = 0$ for all $2k>0$: cohomology commutes with direct limits, and every positive even-degree class of $\mathcal A_m$ maps to zero in $\mathcal A_{m+1}$ by construction. This completes the proof. \end{proof} \begin{remark}\label{rm:finsphere} If an element $\alpha \in H^\bullet(A) $ maps to zero in $H^{\bullet}(TA)$, then there exists a subalgebra $S_\alpha$ of $TA$ such that $S_{\alpha}$ is a \emph{finitely} iterated odd algebraic spherical fibration over $A$ and $\alpha$ maps to zero in $H^\bullet(S_{\alpha})$. \end{remark} \section{Bouquets of algebraic spheres} \label{sec:algsphere} In this section, we introduce a notion of bouquets of algebraic spheres. It is an algebraic analogue of usual bouquets of spheres in topology.
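Before formalizing this analogy, let us illustrate Theorem $\ref{thm:itoddsphere}$ in the simplest nontrivial case; the following computation is standard and is sketched here only for the reader's convenience. Let $(A, d)$ be the minimal model of the $2$-sphere: $A = \Lambda(t, y)$ with $|t| = 2$, $|y| = 3$, $d t = 0$ and $d y = t^2$, so that $H^\bullet(A) = \mathbb Q\cdot 1 \oplus \mathbb Q\cdot [t]$. The only nonzero even cohomology in positive degree is $H^2(A) = \mathbb Q\cdot [t]$, so the first step of the construction adjoins a single generator $x$ of degree $1$ with $d x = t$. In $\mathcal A_1 = A\otimes \Lambda(x)$, the element $y' = y - tx$ satisfies
\[ d y' = t^2 - d(tx) = t^2 - t^2 = 0, \]
and $\mathcal A_1 = \Lambda(t, x)\otimes \Lambda(y')$ with the first factor acyclic. Hence $H^\bullet(\mathcal A_1) = \mathbb Q\cdot 1 \oplus \mathbb Q\cdot [y']$, all positive even cohomology already vanishes, and $TA = \mathcal A_1$ has the cohomology of a single odd sphere; topologically, this is the rational Hopf fibration $S^1 \to S^3 \to S^2$.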
\begin{definition}\label{def:algsphere} For a given set of generators $X= \{x_i\}$ with $x_i$ having odd degree $|x_i|$, we define the bouquet of odd algebraic spheres labeled by $X$ to be the following commutative DGA \[\mathcal S(X) = \left( \bigwedge_{x_i\in X} \mathbb Q[x_i]\right)\Big/\langle x_ix_j \mid \textup{all } i, j \rangle, \] that is, the quotient by the ideal generated by all products $x_ix_j$, with the differential $d = 0$. \end{definition} \begin{proposition}\label{prop:algsphere} Let $\mathcal S(X)$ be a bouquet of odd algebraic spheres, and $\mathcal M(X) = (\Lambda V, d)$ be its minimal model. Then $\mathcal M(X)$ satisfies the following properties: \begin{enumerate}[(i)] \item $\mathcal M(X)$ has no even degree generators, that is, $V$ does not contain even degree elements; \item each element in $H^{\geq 1}(\mathcal M(X))$ is represented by a generator, that is, an element in $V$. \end{enumerate} \end{proposition} \begin{proof} This is a special case of Koszul duality theory, cf. \cite[Chapter 3, 7 \& 13]{MR2954392}. Since $\mathcal S = \mathcal S(X)$ has zero differential, we may forget its differential and view it as a graded commutative algebra. An explicit construction of a minimal model of $\mathcal S$ is given as follows: first take the Koszul dual coalgebra $\mathcal S^{\text{\textexclamdown}}$ of $\mathcal S$; then apply the cobar construction to $\mathcal S^{\text{\textexclamdown}}$, and denote the resulting commutative DGA by $\Omega \mathcal S^{\text{\textexclamdown}}$. By Koszul duality, $\mathcal M(X) \coloneqq \Omega \mathcal S^{\text{\textexclamdown}}$ is a minimal model of $\mathcal S$. More precisely, set $W = \bigoplus_{i\geq 0} W_{i}$ to be the graded vector space spanned by $X$. Let $sW$ (resp. $s^{-1}W$) be the suspension (resp. desuspension) of $W$, that is, $(sW)_{i-1} = W_i$ (resp. $(s^{-1}W)_{i} = W_{i-1}$). Let $\mathcal L^c = \mathcal L^c(sW)$ be the cofree Lie coalgebra generated by $sW$.
More explicitly, let $T^c(sW) = \bigoplus_{n\geq 0}(sW)^{\otimes n}$ be the tensor coalgebra, and $T^c(sW)^+ = \bigoplus_{n\geq 1} (sW)^{\otimes n}$. The coproduct on $T^c(sW)$ naturally induces a Lie cobracket on $T^c(sW)$. Then we have $\mathcal L^c(sW) = T^c(sW)^+/T^c(sW)^+\ast T^c(sW)^+$, where $\ast$ denotes the shuffle multiplication. With the above notation, we have $\mathcal S^{\text{\textexclamdown}} \cong \mathcal L^c$. The cobar construction of $\mathcal L^c$ is given explicitly by \[ \mathbb Q\to s^{-1}\mathcal L^c \xrightarrow{\ d \ } \Lambda^2 (s^{-1}\mathcal L^c) \to \cdots \to \Lambda^n(s^{-1}\mathcal L^c) \to \cdots \] with the differential $d$ determined by the Lie cobracket of $\mathcal L^c$. Now the desired properties of $\mathcal M(X)$ follow from this explicit construction. \end{proof} \begin{remark} The special case of a bouquet of odd algebraic spheres where the cohomology of a commutative DGA model is that of a circle, or where the first Betti number is zero, was discussed by Baues \cite[Corollary 1.2]{MR0442922} and by Halperin and Stasheff \cite[Theorem 1.5]{MR539532}. \end{remark} \section{Main theorem} \label{sec:main} In this section, we show that if a commutative DGA has cohomology, up to a certain degree, isomorphic to that of a bouquet of odd algebraic spheres, then its minimal model is isomorphic to that of the bouquet of odd algebraic spheres, up to that given degree. Then we apply it to prove that if a commutative DGA has only odd degree cohomology up to a certain degree, then all nonzero cohomology classes up to that degree will never pull back to zero by any algebraic fibration whose fiber has finite cohomological dimension. Suppose $B$ is a connected commutative DGA of finite type such that $H^{2k}(B)=0$ for all $ 0 < 2k \le 2N$. Let $X_i$ be a basis of $H^{i}(B)$ and $X = \bigcup_{i=1}^{2N+1}X_i$. Let $M = \mathcal M(X)$ be the bouquet of odd algebraic spheres labeled by $X$ from Definition $\ref{def:algsphere}$.
Then we have $H^{i}(M) \cong H^i(B)$ for all $0\leq i \leq 2N$. Let $M_k \subset M$ be the subalgebra generated by the generators of degree $\leq k$. \begin{lemma}Let $k$ be an odd integer. Then $H^{k+2}(M_k)=H^{k+1}(M_k) =0$. \end{lemma} \begin{proof} $H^{k+1}(M_k)=0$ as $H^{k+1}(M_k) \to H^{k+1}(M)=0$ is injective. By Proposition $\ref{prop:algsphere}$ above, $M$ has no even-degree generators. In particular, we have $M_k=M_{k+1}$. Moreover, $H^{\geq 1}(M)$ is spanned by odd-degree generators. From the first observation it follows that the map $H^{k+2}(M_k) \to H^{k+2}(M)$ is injective, and from the second that its range is $0$. \end{proof} It follows that for an odd $k$, we have $M_{k+2} =M_k \otimes \Lambda (V[k+2])$ as an algebra, where the vector space $V = V_1 \oplus V_2 $ is placed at degree $(k+2)$, with $V_1 \cong H^{k+2}(M)$ and $V_2 = H^{k+3}(M_{k})$. The differential can be described as follows. It suffices to define $d \colon V \to M_{k}$. We define $d=0$ on $V_1$. To define $d$ on $V_2$, let us choose a basis $\{a_i\}_{i\in I}$ of $H^{k+3}(M_k)$. Let $\{\tilde a_i\}_{i\in I}$ be the corresponding basis of $V_2$. Then we define $d \tilde a_i =a_i$. \begin{proposition} \label{prop:constr} For each odd integer $k\le 2N$, there exists a morphism $\varphi_k \colon M_k \to B$ such that the induced map on cohomology $H^i(M_k)\cong H^i(M) \to H^i(B)$ is an isomorphism for $i \le k$. \end{proposition} \begin{proof}We construct the maps $\varphi_k$ by induction. By the previous lemma and the fact that $M$ has no even degree generators, it suffices to define $\varphi_k$ for odd integers $k$ . The case where $k=1$ is clear. Now assume that we have constructed $\varphi_n$, with $n$ an odd integer $\leq 2N-3$. We shall extend $\varphi_n$ to a morphism $\varphi_{n+2}$ on $M_{n+2} = M_n\otimes \Lambda (V[n+2])$, where the vector space $V = V_1 \oplus V_2 $ is placed at degree $(n+2)$, with $V_1 \cong H^{n+2}(M)$ and $V_2 = H^{n+3}(M_{n})$. 
It suffices to define $\varphi_{n+2}$ on $V$. Let $\{b_j\}_{j\in J}$ be a basis of $H^{n+2}(B)$. Since $H^{n+2}(M)\cong H^{n+2}(B)$, let $\{\tilde b_j\}_{j\in J}$ be the corresponding basis of $V_1$. We define $\varphi_{n+2}$ on $V_1$ by setting $\varphi_{n+2}(\tilde b_j) = b_j$. Similarly, choose a basis $\{c_\lambda\}_{\lambda \in K}$ of $H^{n+3}(M_n)$, and let $\{\tilde c_\lambda\}_{\lambda \in K}$ be the corresponding basis of $V_2$. Since $H^{n+3}(B) = 0$, for each $c_\lambda \in M_n$, there exists $\theta_\lambda\in B$ such that $\varphi_n(c_\lambda) = d \theta_\lambda$. We define $\varphi_{n+2}$ on $V_2$ by setting $\varphi_{n+2}(\tilde c_\lambda) =\theta_\lambda$. By construction, the induced map $(\varphi_{n+2})_\ast$ on $H^i$ agrees with $(\varphi_n)_\ast$ for $i\leq n+1$ and $(\varphi_{n+2})_\ast$ is an isomorphism on $H^{n+2}$. This finishes the proof. \end{proof} Now let $\mathcal{M}_B$ be a minimal model of $B$ and $(\mathcal{M}_B)_k$ be the subalgebra generated by the generators of degree $\le k$. Combining the above results, we have proved the following proposition. \begin{proposition} \label{prop:sw} The commutative DGAs $(\mathcal{M}_B)_{2N-1}$ and $M_{2N-1}$ are isomorphic. \end{proposition} Moreover, we have the following result, which is an immediate consequence of the construction in Proposition $\ref{prop:constr}$. \begin{corollary}\label{cor:tosphere} Let $B$ be a connected commutative DGA such that $H^{2i}(B)=0$ for all $ 0 < 2i \le 2N$. Let $\alpha$ be a nonzero class in $H^{2k+1}(\mathcal{M}_B)$ with $2k+1<2N$. Then there exists a morphism $\psi \colon \mathcal{M}_B \to (\Lambda(\eta), 0)$ such that $\psi_*(\alpha)=[\eta]$, where $\eta$ has degree $2k+1$ and $\Lambda(\eta)$ is the free commutative graded algebra generated by $\eta$.
\end{corollary} \begin{proof} From the description of the minimal model $\mathcal M_B$ of $B$, it follows that $\mathcal M_B$ has a set of generators such that all the cohomology groups up to degree $(2N-1)$ are generated by the cohomology classes of these generators; moreover we can choose these generators so that the given class $\alpha$ is represented by a generator, say, $a$. Then we define $\psi$ by mapping $a$ to $\eta$ and the other generators to $0$. \end{proof} An inductive application of the same argument above proves the following. \begin{proposition} Suppose $(C, d)$ is a connected commutative DGA with $H^{2k}(C) = 0 $ for all $2k>0$. Let $X_i$ be a basis of $H^i(C)$ and $X_C = \bigcup_{i=1}^\infty X_i$. Then the bouquet of odd algebraic spheres $\mathcal M(X_C)$ is a minimal model of $(C, d)$. \end{proposition} Applying the above proposition to the commutative DGA $(TA, d)$ from Theorem $\ref{thm:itoddsphere}$ immediately gives us the following corollary. \begin{corollary}\label{cor:ta} With the same notation as above, the minimal model of $(TA, d)$ is isomorphic to a bouquet of odd algebraic spheres. \end{corollary} Before proving the main theorem of this section, we shall prove the following special case first. \begin{theorem}\label{thm:oddsphere} Let $(\Lambda (x), d)$ be the commutative DGA generated by $x$ of degree $2k+1\geq 1$ such that $d x=0$. For any algebraic fibration $\varphi: (\Lambda(x), d) \to (\Lambda(x)\otimes \Lambda V, d)$ whose algebraic fiber $(\Lambda V, \bar d )$ has finite cohomological dimension, the map $\varphi_\ast: H^j(\Lambda (x)) \to H^j(\Lambda(x)\otimes \Lambda V)$ is injective for all $j$. \end{theorem} \begin{proof} The case where $2k+1 = 1$ is trivial. Let us assume $2k+1>1$ in the rest of the proof. Let $\varphi\colon (\Lambda (x), d) \hookrightarrow (\Lambda(x)\otimes\Lambda V, d)$ be any algebraic fibration whose algebraic fiber has finite cohomological dimension.
It suffices to show that $\varphi_\ast\colon H^{2k+1}(\Lambda(x)) \to H^{2k+1}(\Lambda(x)\otimes \Lambda V)$ is injective, since the induced map $\varphi_\ast$ on $H^i$ is automatically injective for $i\neq 2k+1$. Now suppose to the contrary that \[\varphi_\ast(x) = 0 \textup{ in } H^{2k+1}(\Lambda(x)\otimes \Lambda V). \] Then we have $ x = d (w\cdot x + v)$ for some $w, v\in \Lambda V$. By inspecting the degrees of the two sides, one sees that $w = 0$. Therefore, we have $x = d v$ for some $v\in \Lambda V$. It follows that $\bar d v = 0$. Now let $n\in \mathbb N$ be the smallest integer such that $[v^n] = 0$ in $H^{\bullet}(\Lambda V, \bar d)$. Such an integer exists since $(\Lambda V, \bar d)$ has finite cohomological dimension. Then there exists $u\in \Lambda V$ such that $v^n = \bar d u$. Equivalently, we have \[ v^n = u_0\cdot x + d u,\] for some $u_0\in \Lambda V$. It follows that \[ 0 = d^2 u = d( v^n - u_0\cdot x) = nv^{n-1}\cdot x - (du_0)\cdot x. \] Therefore, $v^{n-1} = \frac{1}{n} d u_0$, which implies that $[v^{n-1}] = 0$ in $H^{\bullet}(\Lambda V, \bar d)$. We arrive at a contradiction. This completes the proof. \end{proof} Now let us prove the main result of this section. \begin{theorem} \label{thm:inj} Let $(B, d)$ be a connected commutative DGA such that $H^{2k}(B)=0$ for all $0 < 2k \leq 2N$. If $\iota \colon (B, d) \to (B\otimes \Lambda V, d) $ is an algebraic fibration whose algebraic fiber has finite cohomological dimension, then the induced map \[ \iota_\ast \colon \bigoplus_{i< 2N} H^i(B)\to \bigoplus_{i< 2N} H^i(B\otimes \Lambda V)\] is injective. \end{theorem} \begin{proof} Let $f\colon (\mathcal M_B, d)\to (B, d)$ be a minimal model algebra of $B$.
\begin{claim*} For any algebraic fibration $\iota \colon (B, d) \to (B\otimes \Lambda V, d) $, there exist an algebraic fibration $\varphi\colon (\mathcal M_B, d) \to (\mathcal M_B\otimes \Lambda V, d)$ and a quasi-isomorphism $g\colon (\mathcal M_B\otimes \Lambda V, d) \to (B\otimes \Lambda V, d)$ such that the following diagram commutes: \[ \xymatrix{ \mathcal M_B \ar@{^{(}->}[d]_{\varphi} \ar[r]^f & B \ar@{^{(}->}[d]^\iota \\ \mathcal M_B\otimes \Lambda V \ar[r]^-{g} & B\otimes \Lambda V.} \] \end{claim*} We construct $\varphi$ and $g$ inductively. Consider the filtration $V = \cup_{k=0}^\infty V(k)$ from Definition $\ref{def:algfib}$. Choose a basis $\{x_i\}_{i\in I_0}$ of $V(0)$. Let $x = x_i$ be a basis element. If $d x = a \in B$, then $d a = d^2 x = 0$. It follows that there exists $\tilde a \in \mathcal M_B$ such that $f(\tilde a) = a + d c$ for some $c\in B$. We define an algebraic fibration $\varphi_0\colon (\mathcal M_B, d) \hookrightarrow (\mathcal M_B\otimes \Lambda(x), d)$ by setting $d x = \tilde a$. Moreover, we extend $f\colon (\mathcal M_B, d) \to (B, d)$ to a morphism (of commutative DGAs) $g_0\colon (\mathcal M_B\otimes \Lambda (x), d) \to (B\otimes \Lambda (x), d)$ by setting $g_0(x) = x + c$. By the Gysin sequence from Proposition $\ref{prop:gysin}$, we see that $g_0$ is a quasi-isomorphism. Now apply the same construction to all basis elements $\{x_i\}_{i\in I_0}$. We still denote the resulting morphisms by $\varphi_0 \colon (\mathcal M_B, d) \to (\mathcal M_B\otimes \Lambda (V(0)), d)$ and $g_0\colon (\mathcal M_B\otimes \Lambda (V(0)), d) \to (B\otimes \Lambda (V(0)), d)$.
Now suppose we have constructed an algebraic fibration \[ \varphi_{k} \colon (\mathcal M_B\otimes \Lambda (V(k-1)), d) \to (\mathcal M_B\otimes \Lambda (V(k)), d)\] and a quasi-isomorphism $g_k\colon (\mathcal M_B\otimes \Lambda (V(k)), d) \to (B\otimes \Lambda (V(k)), d)$ such that the following diagram commutes: \[ \xymatrixcolsep{4pc}\xymatrix{ \mathcal M_B\otimes \Lambda (V(k-1)) \ar@{^{(}->}[d]_{\varphi_k} \ar[r]^{g_{k-1}} & B\otimes \Lambda(V(k-1)) \ar@{^{(}->}[d]^\iota \\ \mathcal M_B\otimes \Lambda (V(k)) \ar[r]^-{g_k} & B\otimes \Lambda (V(k)).} \] Let $\{y_i\}_{i\in I_{k+1}}$ be a basis of $V(k+1)$ that extends the basis $\{x_i\}_{i\in I_{k}}$ of $V(k)\subseteq V(k+1)$. Apply the same construction above to elements in $\{y_i\}_{i\in I_{k+1}} \backslash \{x_i\}_{i\in I_{k}}$, but with $B\otimes \Lambda (V(k))$ in place of $B$, and $\mathcal M_B\otimes \Lambda (V(k))$ in place of $\mathcal M_B$. We define $(\mathcal M_B\otimes \Lambda V, d)$ to be the direct limit of $(\mathcal M_B\otimes \Lambda (V(k)), d)$ with respect to the inclusions $\varphi_{k} \colon (\mathcal M_B\otimes \Lambda (V(k-1)), d) \hookrightarrow (\mathcal M_B\otimes \Lambda (V(k)), d)$. We define $\varphi$ to be the natural inclusion morphism $ (\mathcal M_B, d) \hookrightarrow (\mathcal M_B\otimes \Lambda V, d)$. The morphisms $g_k$ together also induce a quasi-isomorphism $g\colon (\mathcal M_B\otimes \Lambda V, d) \to (B\otimes \Lambda V, d)$, which makes the diagram in the claim commutative. This finishes the proof of the claim. Now assume to the contrary that there exists $0\neq \alpha\in H^{2k+1}(B)$ with $2k+1 < 2N$ such that $\iota_\ast(\alpha) = 0$. Let $\tilde \alpha \in H^{2k+1}(\mathcal M_B)$ be the class such that $f_\ast(\tilde \alpha) = \alpha$. In particular, we have $\varphi_{\ast}(\tilde \alpha) = 0$. By Corollary $\ref{cor:tosphere}$, there exists a morphism $\psi\colon (\mathcal M_B, d) \to (\Lambda (\eta), 0)$ such that $\psi_{\ast}(\tilde \alpha) = \eta$.
Now let \[ \tau\colon (\Lambda(\eta), 0) \to (\Lambda(\eta) \otimes \Lambda V, d) = (\Lambda(\eta) \otimes_{\mathcal M_B} (\mathcal M_B\otimes \Lambda V), d)\] be the push-forward algebraic fibration of $\varphi\colon (\mathcal M_B, d) \to (\mathcal M_B\otimes \Lambda V, d)$. It follows that \[ \tau_\ast(\eta) = \tau_\ast \psi_{\ast}(\tilde \alpha) = (\psi\otimes 1)_\ast\varphi_{\ast}(\tilde \alpha) = 0 \] which contradicts Theorem $\ref{thm:oddsphere}$. This completes the proof. \end{proof} \end{document}
\begin{document} \title[The shapes of pure cubic fields]{T\MakeLowercase{he shapes of pure cubic fields}} \subjclass[2010]{11R16, 11R45, 11E12} \keywords{Pure cubic fields, lattices, equidistribution, carefree couples} \author{R\MakeLowercase{obert} H\MakeLowercase{arron}} \address{ Department of Mathematics\\ Keller Hall\\ University of Hawai`i at M\={a}noa\\ Honolulu, HI 96822\\ USA } \email{[email protected]} \date{\today} \begin{abstract} We determine the shapes of pure cubic fields and show that they fall into two families based on whether the field is wildly or tamely ramified (of Type I or Type II in the sense of Dedekind). We show that the shapes of Type I fields are rectangular and that they are equidistributed, in a regularized sense, when ordered by discriminant, in the one-dimensional space of all rectangular lattices. We do the same for Type II fields, which are however no longer rectangular. We obtain as a corollary of the determination of these shapes that the shape of a pure cubic field is a complete invariant determining the field within the family of all cubic fields. \end{abstract} \maketitle \tableofcontents \section{Introduction} The shape of a number field is an invariant coming from the geometry of numbers that can be seen as a refinement of the discriminant, or as a positive definite relative of the trace-zero form. Specifically, a degree $n$ number field $K$ can be embedded into its Minkowski space as $j_\mathbf{R}:K\hookrightarrow K\otimes_\mathbf{Q}\mathbf{R}\cong\mathbf{R}^n$ yielding a rank $n$ lattice $j_\mathbf{R}(\mc{O}_K)\subseteq\mathbf{R}^n$, where $\mc{O}_K$ denotes the ring of integers of $K$. The \emph{shape} of $K$ is defined to be the equivalence class of the rank $n-1$ lattice given by the image $\mc{O}_K^\perp$ of $j_\mathbf{R}(\mc{O}_K)$ in $\mathbf{R}^n/\mathbf{R} j_\mathbf{R}(1)$ up to scaling, rotations, and reflections. 
Explicitly, one obtains a quadratic form on $K$ via \begin{equation}\label{eqn:quadform} \alpha\longmapsto\!\!\!\sum_{\sigma\in\mathrm{Hom}(K,\mathbf{C})}|\sigma(\alpha)|^2 \end{equation} and the shape can be defined as the $\mathrm{GL}_{n-1}(\mathbf{Z})$-equivalence class, up to scaling by positive real numbers, of the restriction of this form to the image of $\mc{O}_K$ under the projection map $\alpha\mapsto\alpha-\tr_{K/\mathbf{Q}}(\alpha)/n$. The shape of $K$ may equivalently be considered as an element of the double-coset space $\mathrm{GL}_{n-1}(\mathbf{Z})\backslash\mathrm{GL}_{n-1}(\mathbf{R})/\mathrm{GO}_{n-1}(\mathbf{R})$, where $\mathrm{GO}$ denotes the group of orthogonal similitudes. This space, which we denote $\mc{L}_{n-1}$ and call the \emph{space of rank $(n-1)$ lattices}, carries a natural $\mathrm{GL}_{n-1}(\mathbf{R})$-invariant measure. Little is known about the shapes of number fields. Their study was first taken up in the PhD thesis of David Terr \cite{Terr}, a student of Hendrik Lenstra's. In it, Terr shows that the shapes of both real and complex cubic fields are equidistributed in $\mc{L}_2$ when the fields are ordered by the absolute value of their discriminants. This result has been generalized to $S_4$-quartic fields and to quintic fields by Manjul Bhargava and Piper Harron in \cite{Manjul-Piper, PiperThesis} (where the cubic case is treated, as well). Aside from this, Bhargava and Ari Shnidman \cite{Manjul-Ari} have studied which real cubic fields have a given shape, and in particular which shapes are possible for fields of given quadratic resolvent. Finally, Guillermo Mantilla-Soler and Marina Monsurr\`{o} \cite{M-SM} have determined the shapes of cyclic Galois extensions of $\mathbf{Q}$ of prime degree. Note however that \cite{M-SM} uses a slightly different notion of shape: they instead restrict the quadratic form in \eqref{eqn:quadform} to the space of elements of trace zero in $\mc{O}_K$. 
It is possible to carry out their work with our definition and our work with theirs; the answers are slightly different as described below.\footnote{Also, when \cite{Manjul-Ari} refers to `shape' in the complex cubic case, they use the trace form instead of \eqref{eqn:quadform}, yielding an indefinite quadratic form, hence the need for the present article.} This article grew out of the author's desire to explore an observation he made that, given a fixed quadratic resolvent in which $3$ ramifies, the shapes of real $S_3$-cubic fields sort themselves into two sets depending on whether $3$ is tamely or wildly ramified; a \emph{tame versus wild dichotomy}, if you will. Considering complex cubics, the pure cubics---that is, those of the form $\mathbf{Q}(m^{1/3})$---are in many ways the simplest. These were partitioned into two sets by Dedekind, who, in the typical flourish of those days, called them Type I and Type II. In our context, \emph{pure} cubics are exactly those whose quadratic resolvent is $\mathbf{Q}(\omega)$, $\omega$ being a primitive cube root of unity, and Type I (resp.\ Type~II) corresponds to $3$ being wildly (resp.\ tamely) ramified. The first theorem we prove (Theorem~\ref{thm:A}) computes the shape of a given pure cubic field and shows that, just as in the real case, pure cubic shapes exhibit a tame versus wild dichotomy.\footnote{We remark that this dichotomy also appears in \cite{M-SM} and in ongoing work of the author with Melanie Matchett Wood. In the former, it is shown that, for a given prime $\ell\geq5$, only two `shapes' (in the sense of \cite{M-SM}) are possible for cyclic extensions of degree $\ell$, and the `shape' is exactly determined by whether $\ell$ is tame or wild (for $\ell=3$, the two possibilities collapse to one, a result already appearing in \cite{Terr}, and for $\ell=2$, well, there is only one rank one lattice). 
We however remark that, with our definition of shape, the dichotomy in \cite{M-SM} disappears, while with their definition, \textit{all} pure cubics have rectangular `shape'. It would be very interesting to better understand the subtle difference between the two definitions. In the ongoing project with Wood, the tame versus wild dichotomy is shown to hold for totally real Galois quartic fields.} In the real cubic case, \cite{Manjul-Ari} shows that for fields of a fixed quadratic resolvent, there are only finitely many options for the shape. However, there are infinitely many possibilities for the shape of a pure cubic: indeed, Theorem~\ref{thm:B} shows that there is a bijection between pure cubic fields and their shapes! In fact, this theorem goes further and says that the shape of a pure cubic field uniquely determines it (up to isomorphism) within the collection of all cubic fields. Contrast this with Mantilla-Soler's result that says that complex cubic fields of the same discriminant have isometric integral trace forms \cite[Theorem~3.3]{M-S}. A natural question to ask is: how are the shapes distributed? The results of \cite{Manjul-Ari}, together with the aforementioned observation, imply that, after splitting the shapes of real $S_3$-cubic fields as needed according to tame versus wild, the shapes are equidistributed amongst the finitely many options. In Theorem~\ref{thm:C}, we show that the shapes of Type~I and Type~II pure cubic fields are equidistributed within their respective one-dimensional family of shapes (viewing $\mc{L}_2$ as a subset of the upper-half plane, and noting that the natural measure on $\mc{L}_2$ is given by the hyperbolic metric on the upper-half plane). Unlike the other equidistribution results described above, the spaces of lattices under consideration have infinite measure. 
We are thus led to a ``regularized'' notion of equidistribution taking into account the different growth rates that occur for fields whose shape lies in bounded versus unbounded subsets of the one-dimensional spaces of lattices. \subsection{Statement of the main theorems} Let $K=\mathbf{Q}(m^{1/3})$ be a pure cubic field and write $m=ab^2$, where $a$ and $b$ are relatively prime, squarefree, positive integers. Note that the cube root of $m^\prime:=a^2b$ also generates $K$ (as $mm^\prime$ is a cube), so we may, and do, assume that $a>b$. We will refer to $r_K:=a/b$ as the \emph{ratio of $K$}. Then, $K$ is Type II if and only if $3\nmid m$ and $r_K\equiv\pm1\mod{9}$. As described above, the shape of $K$ is a rank 2 lattice, up to orthogonal similitudes. For a lattice of rank $2$, it is common to consider it as living in $\mathbf{C}$ and to take a representative of its equivalence class that has basis $\{1,z\}$ with $z\in\mf{H}$, the upper-half plane. Changing the basis by an element of $\SL(2,\mathbf{Z})$ and rewriting the basis as $\{1,z^\prime\}$ corresponds to the action of $\SL(2,\mathbf{Z})$ by fractional linear transformations on $\mf{H}$. In fact, we allow changing the basis by $\mathrm{GL}(2,\mathbf{Z})$: defining \[ \vect{0&1\\1&0}\cdot z:=1/\ol{z} \] allows us to extend the action of $\SL(2,\mathbf{Z})$ to all of $\mathrm{GL}(2,\mathbf{Z})=\SL(2,\mathbf{Z})\sqcup\vect{0&1\\1&0}\SL(2,\mathbf{Z})$. Thus, the space of rank 2 lattices may be identified (up to some identification on the boundary) with a fundamental domain for the action of $\mathrm{GL}(2,\mathbf{Z})$ on $\mf{H}$ given by \[ \mc{F}:=\{z=x+iy\in\mf{H}:0\leq x\leq1/2, x^2+y^2\geq1\}. \] Here is our first main theorem. \begin{theoremA}\label{thm:A}\mbox{} \begin{itemize} \item Pure cubic fields of Type I have shapes lying on the imaginary axis inside $\mc{F}$; specifically, the shape of $K$ is $ir_K^{1/3}\in\mc{F}$. Thus, their shapes are rectangular.
\item Pure cubic fields of Type II have shapes lying on the line segment $\mathrm{Re}(z)=1/3$, $\mathrm{Im}(z)>1/3$ in $\mf{H}$; specifically, the shape of $K$ is $(1+ir_K^{1/3})/3\in\mf{H}$. The shapes of Type II pure cubics are thus parallelograms with no extra symmetry. \end{itemize} \end{theoremA} This implies that pure cubic fields do indeed exhibit a tame versus wild dichotomy. The proof of this theorem will be accomplished by explicitly writing down a basis for a lattice representing the shape. \begin{remark} For Type II fields, the given line segment does not lie in $\mc{F}$; instead, it lies in the union of $\mc{F}$ with its translates by $\gamma_1=W$, $\gamma_2=SU$, and $\gamma_3=SUSW$, where \[ W=\vect{0&1\\1&0},\quad S=\vect{0&-1\\1&0},\quad\text{and}\quad U=\vect{1&-1\\0&1}. \] These 4 pieces correspond to $r_K^{1/3}$ in the intervals $I_0=(\sqrt{8},\infty)$, $I_1=(\sqrt{5},\sqrt{8})$, $I_2=(\sqrt{2},\sqrt{5})$, and $I_3=(1,\sqrt{2})$, respectively. To describe the location of the shapes within $\mc{F}$, one need simply act on the appropriate subsegments of the line segment by $\gamma_i^{-1}$. One then obtains the union of a line segment and three circular arcs: (the intersection of $\mc{F}$ with) the line $\mathrm{Re}(z)=1/3$ and the circles centred at $3/2$, $-1/2$, and $1/2$, all of radius $3/2$. \end{remark} An important corollary to this theorem is that the shape provides a complete invariant for pure cubic fields. \begin{theoremA}\label{thm:B} If the shape of a cubic field is one of the shapes occurring in Theorem~\ref{thm:A}, then the field is the uniquely determined pure cubic field of that shape. \end{theoremA} Once we know Theorem~\ref{thm:A}, the proof of this result is a simple argument involving the field of rationality of the coordinates of the shape in $\mf{H}$. Note that there are plenty of pure cubic fields sharing the same discriminant. 
For instance, $\mathbf{Q}(6^{1/3})$ and $\mathbf{Q}(12^{1/3})$ both have discriminant $-3^3\cdot 6^2$, but their shapes are $i6^{1/3}$ and $i(\frac{3}{2})^{1/3}$, respectively. This result illustrates how different the shape and the trace form can be. Indeed, Mantilla-Soler shows in \cite[Theorem~3.3]{M-S} that two complex cubic fields have isometric integral trace form if and only if they have the same discriminant. It is natural to ask whether the shape is a complete invariant for complex cubic fields in general.\footnote{The shape fails miserably to distinguish (real) Galois cubic fields: they all have hexagonal shape!} We will present a positive answer to this question in upcoming work using a rationality argument as in the proof of Theorem~\ref{thm:B} below. As for equidistribution, we must introduce measures on $\mathscr{S}_{\mathrm{I}}:=\{iy:y\geq1\}$ and $\mathscr{S}_{\mathrm{II}}:={\{\frac{1+iy}{3}:y\geq1\}}$. As a natural choice, we pick those induced by the hyperbolic metric on $\mf{H}$, equivalently those invariant under a subgroup of $\SL_2(\mathbf{R})$. Recall the hyperbolic line element on $\mf{H}$ is \[ ds=\frac{\sqrt{dx^2+dy^2}}{y} \] and thus induces the measure $\frac{dy}{y}$ on both $\mathscr{S}_{\mathrm{I}}$ and $\mathscr{S}_{\mathrm{II}}$. Alternatively, note that the diagonal torus $\{\operatorname{diag}(y,y^{-1})\}\subseteq\SL_2(\mathbf{R})$ acting on $i$ yields a homeomorphism from itself to the upper imaginary axis mapping the elements with $y>1$ onto $\mathscr{S}_{\mathrm{I}}$. As such, it induces an invariant measure on $\mathscr{S}_{\mathrm{I}}$, namely $\frac{dy}{y}$.\footnote{To be precise, $\operatorname{diag}(y,y^{-1})$ sends $i$ to $y^2i$, so in terms of the coordinate $iy$, the induced measure is $\frac{dy}{2y}$. We will ignore this $1/2$.} By conjugating this torus, we obtain the same for $\mathscr{S}_{\mathrm{II}}$. 
We denote the induced measures on $\mathscr{S}_{\textrm{I}}$ and $\mathscr{S}_{\textrm{II}}$ by $\mu_{\textrm{I}}$ and $\mu_{\textrm{II}}$, respectively. A slight complication arises as equidistribution is typically considered for finite measures, whereas on both $\mathscr{S}_{\mathrm{I}}$ and $\mathscr{S}_{\mathrm{II}}$ the measure $\frac{dy}{y}$ is merely $\sigma$-finite. This is in fact reflected in the asymptotics for pure cubic fields! Indeed, the number of pure cubic fields of discriminant bounded by $X$ is on the order of $\sqrt{X}\log(X)$ (see \cite[Theorem~1.1]{Cohen-Morra} or \cite[Theorem~8]{Manjul-Ari}), whereas those of bounded discriminant with shape lying in a finite interval (i.e.\ with $r_K$ in a finite interval) only grow like $\sqrt{X}$ (see Theorem~\ref{thm:fieldasymptotics} below). We thus ``regularize'' our notion of equidistribution and obtain the following result. \begin{theoremA}\label{thm:C} Define \[ C_{\mathrm{I}}=\frac{2C\sqrt{3}}{15}\quad\text{and}\quad C_{\mathrm{II}}=\frac{C\sqrt{3}}{10}, \] where \[ C=\prod_p\left(1-\frac{3}{p^2}+\frac{2}{p^3}\right), \] the product being over all primes $p$. For $?=$ I, resp.~II, and real numbers $1\leq R_1<R_2$, let $[R_1,R_2)_?$ denote the ``interval'' $i[R_1,R_2)$, resp.~$(1+i[R_1,R_2))/3$, in $\mathscr{S}_?$. Then, for all $R_1, R_2$, \[ \lim_{X\rightarrow\infty}\frac{\#\left\{K\text{ of type ?}:|\Delta(K)|\leq X,\operatorname{sh}(K)\in[R_1,R_2)_?\right\}}{C_?\sqrt{X}}=\int_{[R_1,R_2)_?}d\mu_?, \] where $\Delta(K)$ is the discriminant of $K$ and $\operatorname{sh}(K)$ is the shape of $K$ (taken in $\mathscr{S}_?$). \end{theoremA} \begin{remark}\mbox{} \begin{enumerate} \item For the usual, unregularized, notion of equidistribution, the left-hand side above would be \[ \lim_{X\rightarrow\infty}\frac{\#\left\{K:|\Delta(K)|\leq X,\operatorname{sh}(K)\in[R_1,R_2)_?\right\}}{\#\left\{K:|\Delta(K)|\leq X\right\}}, \] where the denominator is $C_?\sqrt{X}\log(X)+o(\sqrt{X})$. 
Given the different growth rates of the numerator and the denominator, this limit is always $0$. \item We phrase this theorem in a more measure-theoretic way below (see Theorem~\ref{thm:C'}), defining a sequence of measures $\mu_{?,X}$ that converges weakly to $\mu_?$. \end{enumerate} \end{remark} \section{Determining the shape}\label{sec:proofofthmA} In this section, we provide integral bases for pure cubic fields (\S\ref{sec:int_bases}) and go on to use these bases to explicitly determine bases of $\mc{O}_K^\perp$ (\S\ref{sec:shape}). These calculations directly prove Theorem~\ref{thm:A}. In \S\ref{sec:thumb}, we prove Theorem~\ref{thm:B}. \subsection{Some basic facts about pure cubic fields}\label{sec:int_bases} We briefly determine an integral basis and the discriminant of a pure cubic field, and explain how ramification at $3$ is what distinguishes Type~I from Type~II. Every pure cubic field $K$ corresponds to exactly one pair $(m,m^\prime)$, where $m=ab^2$, $m^\prime=a^2b$, and $a$ and $b$ are two positive, relatively prime, squarefree integers whose product is at least $2$. Let $\alpha$ and $\beta$ be the roots in $K$ of $x^3-m$ and $x^3-m^\prime$, respectively, so that $\beta=\alpha^2/b$, $\alpha=\beta^2/a$, and $K=\mathbf{Q}(\alpha)=\mathbf{Q}(\beta)$. The discriminants of $x^3-m$ and $x^3-m^\prime$ are $-3^3a^2b^4$ and $-3^3a^4b^2$, respectively, so that $\Delta(K)$ divides $\gcd(3^3a^2b^4,3^3a^4b^2)=3^3a^2b^2$, up to sign. The fact that $x^3-m$ (resp.\ $x^3-m^\prime$) is Eisenstein at all primes dividing $a$ (resp.\ $b$) implies that the index $[\mathcal{O}_K:\mathbf{Z}[\alpha,\beta]]$ is relatively prime to $ab$, and hence divides $3$. Thus, if $3|m$, then the index is $1$ and $\{1,\alpha,\beta\}$ is an integral basis of $\mc{O}_K$. Otherwise, since $[\mathcal{O}_K:\mathbf{Z}[\alpha,\beta]]=[\mathcal{O}_K:\mathbf{Z}[\alpha-m,\beta]]$, we may consider the minimal polynomial $(x+m)^3-m$ of $\alpha-m$.
It is Eisenstein at $3$ if and only if $m\not\equiv\pm1\mod{9}$,\footnote{In fact, it is Eisenstein if and only if $m^3\not\equiv m\mod{9}$, but the only cubes modulo $9$ are $0,\pm1$, and $3\nmid m$, by assumption. Note also that the condition $m\equiv\pm1\mod{9}$ is equivalent to $r_K\equiv\pm1\mod{9}$; indeed, $r_K=m/b^3$.} in which case, again, $\{1,\alpha,\beta\}$ is an integral basis of $\mc{O}_K$. For $m\equiv\pm1\mod{9}$ (the ``Type II'' fields), one may check by hand that the element $\nu=(1\pm\alpha+\alpha^2)/3$ is integral. Clearly, $[\mathbf{Z}[\nu,\beta]:\mathbf{Z}[\alpha,\beta]]=3$, so $\{1,\nu,\beta\}$ is then an integral basis of $\mc{O}_K$. This shows that the discriminant of $K$ is $-3^3a^2b^2$ (resp.\ $-3a^2b^2$) for Type I (resp.\ Type II). We summarize these results in the following lemma. \begin{lemma}\label{lem:intbasis} With the above notation, \begin{itemize} \item the discriminant of a pure cubic of Type I is $\Delta(K)=-3^3a^2b^2$ and $\{1,\alpha,\beta\}$ is an integral basis; \item the discriminant of a pure cubic of Type II is $\Delta(K)=-3a^2b^2$ and $\{1,\nu,\beta\}$ is an integral basis. \end{itemize} \end{lemma} The two possible factorizations of a ramified prime in a cubic field are $\mf{p}_1\mf{p}_2^2$ and $\mf{p}^3$. The prime $3$ being tamely ramified means that $3=\mf{p}_1\mf{p}_2^2$, where $\mf{p}_1$ is unramified and $\mf{p}_2$ is tamely ramified. Consequently, $\mf{p}_2=\mf{p}_2^{2-1}$ exactly divides the different of $K/\mathbf{Q}$. Since $\mf{p}_2$ has norm $3$, this implies $3$ exactly divides $\Delta(K)$. On the other hand, if $3$ is wild, then $3=\mf{p}^3$, in which case $\mf{p}^3$ divides the different. Again, $\mf{p}$ has norm $3$ so $3^3$ divides $\Delta(K)$. \subsection{The shape}\label{sec:shape} Let $\omega:=e^{2\pi i/3}\in\mathbf{C}$ and let $m^{1/3}$ denote the real cube root of $m$.
Let $j_\mathbf{R}:K\rightarrow\mathbf{R}^3$ denote the embedding into the ``Minkowski space'' of $K$ (following Neukirch's normalizations, see \cite[\S{I.5}]{Neukirch}): if $\sigma$ denotes the unique real embedding and $\tau$ denotes the complex embedding sending $\alpha$ to $\omega m^{1/3}$, then \[ j_\mathbf{R}(a):=\big(\sigma(a),\mathrm{Re}(\tau(a)),\mathrm{Im}(\tau(a))\big). \] We also let $j:K\rightarrow\mathbf{C}^3$ be given by $j(a)=(\sigma(a),\tau(a),\ol{\tau}(a))$. It is understood that the space $\mathbf{R}^3$ is equipped with the inner product $\langle\cdot,\!\cdot\rangle$ given by the diagonal matrix $\operatorname{diag}(1,2,2)$. This is the inner product obtained by restricting the standard Hermitian inner product on $\mathbf{C}^3$ to the image of $j$, identified with $\mathbf{R}^3$ as in \cite[\S{I.5}]{Neukirch}. As it is unlikely to cause confusion, we shall abuse notation and write $\alpha$ and $\beta$ to sometimes mean $\sigma(\alpha)=m^{1/3}$ and $\sigma(\beta)=m^{\prime1/3}$ depending on context. Given a rank 2 lattice $\Lambda$ with Gram matrix\footnote{Recall that a Gram matrix for a lattice $\Lambda$ is the symmetric matrix whose $(i,j)$-entry is the inner product of the $i$th and $j$th basis vectors, for some choice of basis.} $B=(B_{ij})$ the associated point in $\mf{H}$ is $z=x+iy$, where \begin{equation}\label{eqn:xycoords} x=\frac{B_{0,1}}{B_{0,0}},\quad y=\sqrt{\frac{B_{1,1}}{B_{0,0}}-x^2}. \end{equation} The starting point is the following proposition which immediately proves Theorem~\ref{thm:A} for Type I fields. \begin{proposition}\label{prop:shape_of_OK} For any pure cubic field $K$, the image of the set $\{1,\alpha,\beta\}$ under $j_\mathbf{R}$ is an orthogonal set whose Gram matrix is \[ \vect{3\\& 3\alpha^2\\&&3\beta^2}. \] \end{proposition} \begin{proof} First off, \[ j(1)=(1,1,1),\quad j(\alpha)=(\alpha,\omega\alpha,\omega^2\alpha),\quad\text{and}\quad j(\beta)=(\beta,\omega^2\beta,\omega\beta). 
\] Thus, the proposition comes down to computing the Hermitian dot products of these vectors. \end{proof} This yields Theorem~\ref{thm:A} for Type I fields as follows. The set $\{j_\mathbf{R}(\alpha),j_\mathbf{R}(\beta)\}$ is a basis of $\mc{O}_K^\perp$ whose Gram matrix is \[ \vect{3\alpha^2\\&3\beta^2}. \] Using the conversion to $z=x+iy\in\mf{H}$ in Equation~\eqref{eqn:xycoords} gives $x=0$ and $y=(\beta/\alpha)=(a/b)^{1/3}=r_K^{1/3}$. To study Type II fields, we will need to first know the image of $\nu$ in $\mc{O}_K^\perp$. \begin{lemma} For $a\in K$, the image $a^\perp$ of $a$ in $\mc{O}_K^\perp$ is \[ a^\perp=j_\mathbf{R}(a)-\frac{\langle j_\mathbf{R}(a),j_\mathbf{R}(1)\rangle}{3}j_\mathbf{R}(1). \] \end{lemma} This follows from linear algebra and the simple fact that $\langle j_\mathbf{R}(1),j_\mathbf{R}(1)\rangle=3$. By Proposition~\ref{prop:shape_of_OK}, $\alpha^\perp=j_\mathbf{R}(\alpha)$ and $\beta^\perp=j_\mathbf{R}(\beta)$. Since $j_\mathbf{R}(\nu)=(j_\mathbf{R}(1)\pm j_\mathbf{R}(\alpha)+bj_\mathbf{R}(\beta))/3$, we get that \begin{equation} \nu^\perp=j_\mathbf{R}(\nu)-\frac{1}{3}j_\mathbf{R}(1)=\frac{1}{3}(\pm\alpha^\perp+b\beta^\perp). \end{equation} \begin{lemma} The Gram matrix of the basis $\{\nu^\perp,\beta^\perp\}$ of $\mc{O}_K^\perp$ is \[ \vect{\displaystyle\frac{\alpha^2(1+\alpha^2)}{3} & \displaystyle\frac{\alpha^4}{b} \\ \displaystyle\frac{\alpha^4}{b} & \displaystyle\frac{3\alpha^4}{b^2}}. \] \end{lemma} \begin{proof} Since $\langle\alpha^\perp,\beta^\perp\rangle=0$, we have \[ \langle\nu^\perp,\nu^\perp\rangle=\frac{1}{9}(\langle\alpha^\perp,\alpha^\perp\rangle+b^2\langle\beta^\perp,\beta^\perp\rangle)=\frac{1}{9}(3\alpha^2+3b^2\beta^2). \] This is the claimed value since $b\beta=\alpha^2$. Furthermore, \[ \langle\nu^\perp,\beta^\perp\rangle=\frac{b}{3}\langle\beta^\perp,\beta^\perp\rangle=b\beta^2=\alpha^4/b. \] Finally, we already know the inner product of $\beta^\perp$ with itself. 
\end{proof} This basis is not terribly pleasing from the point of view of shapes. Making the following change of basis yields a nicer Gram matrix and proves Theorem~\ref{thm:A} for Type II fields. \begin{lemma} Let $\epsilon\in\{\pm1\}$ and $k\in\mathbf{Z}$ be such that $b=3k+\epsilon$. Define \[ \gamma=\vect{3&-b\\1&-k}. \] The basis $\{v_1,v_2\}$ of $\mc{O}_K^\perp$ given by \[ \vect{v_1\\v_2}=\gamma\vect{\nu^\perp\\\beta^\perp} \] has the Gram matrix \[ \vect{3\alpha^2 & \alpha^2 \\ \alpha^2 & \displaystyle\frac{\alpha^2+\beta^2}{3}}. \] The associated point in $\mf{H}$ is $\frac{1}{3}+i\frac{r_K^{1/3}}{3}$. \end{lemma} \begin{proof} We have that \[ v_1=3\nu^\perp-b\beta^\perp=\pm\alpha^\perp+b\beta^\perp-b\beta^\perp=\pm\alpha^\perp \] and \[ v_2=\nu^\perp-k\beta^\perp=\frac{\pm\alpha^\perp+(3k+\epsilon)\beta^\perp-3k\beta^\perp}{3}=\frac{1}{3}(\pm\alpha^\perp+\epsilon\beta^\perp). \] Thus, \begin{align*} \langle v_1,v_1\rangle=3\alpha^2,\quad\langle v_1,v_2\rangle=\frac{1}{3}\langle\alpha^\perp,\alpha^\perp\rangle=\alpha^2,\\ \langle v_2,v_2\rangle=\frac{1}{9}(\langle\alpha^\perp,\alpha^\perp\rangle+\langle\beta^\perp,\beta^\perp\rangle)=\frac{\alpha^2+\beta^2}{3}. \end{align*} The associated point in $\mf{H}$ has $x=\alpha^2/(3\alpha^2)=1/3$ and \[ y=\sqrt{\frac{\alpha^2+\beta^2}{9\alpha^2}-\frac{1}{9}}=\frac{\beta}{3\alpha}=\frac{r_K^{1/3}}{3}. \] \end{proof} This concludes the proof of Theorem~\ref{thm:A}. \subsection{Proof of Theorem~\ref{thm:B}}\label{sec:thumb} Let $L$ be any `non-pure' cubic field and let $z_L=x_L+iy_L$ be any point in the upper-half plane corresponding to the lattice $\mc{O}_L^\perp$. By studying the fields of definition of $x_L$ and $y_L$, we will briefly show that $z_L$ cannot be the shape of a pure cubic field. Since Theorem~\ref{thm:A} already shows that non-isomorphic pure cubic fields have different shapes, Theorem~\ref{thm:B} will follow.
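Before embarking on that argument, we remark that the explicit bases above make Theorem~\ref{thm:A} easy to check numerically. The following sketch (Python; purely illustrative, with helper names of our own choosing, and playing no role in the proofs) rebuilds the Gram matrices of \S\ref{sec:shape} in floating point and converts them to points of $\mf{H}$ via \eqref{eqn:xycoords}; the signs occurring in $\nu$ do not affect the Gram matrix, so the Type II basis is taken with $+$ signs.

```python
import math

# Floating-point sanity check of the shape computations of Section 2
# (an illustrative sketch only; it plays no role in the proofs).

OMEGA = complex(-0.5, math.sqrt(3) / 2)  # primitive cube root of unity

def herm(u, v):
    # Standard Hermitian inner product on C^3; real-valued on j(K) x j(K).
    return sum(ui * vi.conjugate() for ui, vi in zip(u, v)).real

def shape_point(a, b, type_II=False):
    """Point (x, y) of the upper-half plane attached to O_K-perp for
    K = Q(m^(1/3)) with m = a*b^2, using the bases of Section 2.2."""
    alpha = (a * b * b) ** (1 / 3)   # real root of x^3 - m
    beta = (a * a * b) ** (1 / 3)    # real root of x^3 - m'
    j_alpha = (alpha, OMEGA * alpha, OMEGA ** 2 * alpha)
    j_beta = (beta, OMEGA ** 2 * beta, OMEGA * beta)
    if type_II:
        # {v1, v2} = {alpha-perp, (alpha-perp + beta-perp)/3}
        v1 = j_alpha
        v2 = tuple((p + q) / 3 for p, q in zip(j_alpha, j_beta))
    else:
        # Type I: the orthogonal basis {alpha-perp, beta-perp}
        v1, v2 = j_alpha, j_beta
    g00, g01, g11 = herm(v1, v1), herm(v1, v2), herm(v2, v2)
    x = g01 / g00
    y = math.sqrt(g11 / g00 - x * x)
    return x, y
```

For $K=\mathbf{Q}(6^{1/3})$ (Type I) this returns, up to rounding, the point $i6^{1/3}$, while for $K=\mathbf{Q}(10^{1/3})$ (Type II, since $10\equiv1\bmod9$) it returns $(1+i10^{1/3})/3$, matching Theorem~\ref{thm:A}.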
First, we note that the key result from Theorem~\ref{thm:A} needed here is that the $y$-coordinate of the shape of a pure cubic field $K$ is \textit{not} rational; rather, it lies `properly' in the image of $K$ in $\mathbf{R}$. For this paragraph, we will call such a number `purely cubic'. Now, if $L$ is a real cubic field, then the inner product on $L\otimes_\mathbf{Q}\mathbf{R}$ is simply the trace form and so the Gram matrix of $\mc{O}_L^\perp$ has coefficients in $\mathbf{Q}$. The equations in \eqref{eqn:xycoords} giving $z_L$ in terms of the Gram matrix show that $x_L\in\mathbf{Q}$ and $y_L$ is in some quadratic extension of $\mathbf{Q}$. Thus, $y_L$ is not purely cubic. For a complex cubic field $L$, let $\sigma_L$ denote the unique real embedding of $L$. By definition of the inner product on $L\otimes_\mathbf{Q}\mathbf{R}$ as recalled above, the entries of the Gram matrix of any basis of $\mc{O}_L^\perp$ lie in the Galois closure $N_L$ of $\sigma_L(L)$. The equations in \eqref{eqn:xycoords} then show that $x_L\in N_L$ and that $y_L$ is in a quadratic extension $\wt{N}_L$ of $N_L$. Since cubic extensions have no non-trivial intermediate extensions, for any pure cubic field $K$, we have that $\wt{N}_L\cap K=\mathbf{Q}$. Thus, again, $y_L$ is not purely cubic. \section{Equidistribution of shapes}\label{sec:equid} We have organized this section beginning with the most conceptual and ending with the most detailed. Specifically, \S\ref{sec:carefree} contains the real mathematical content: counting ``strongly carefree couples'' in a cone below a hyperbola, subject to congruence conditions. In \S\ref{sec:fieldcounting}, these results are translated into counts for pure cubic fields of bounded discriminant with ratio in a given interval. Finally, \S\ref{sec:proofThmC}, with which the section opens, consists mostly of some basic measure theory used to introduce and prove Theorem~\ref{thm:C} in a conceptually nicer way.
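As a concrete illustration of the count at the heart of this section, the following brute-force sketch (Python; our own illustration, with hypothetical helper names, not used in any proof) compares the number of strongly carefree couples in the hyperbolic pie slice $\{(a,b):ab\leq N,\ 1/R\leq a/b\leq R\}$ with the main term $CN\log R$ appearing in Theorem~\ref{prop:TypeIIabcount} below; since the error is only $o(N)$, the agreement at small $N$ is approximate.

```python
from math import gcd

# Brute-force check (illustrative only) of S(N, R) ~ C * N * log(R):
# strongly carefree couples (a, b) -- both squarefree, gcd(a, b) = 1 --
# with ab <= N and 1/R <= a/b <= R.

def squarefree_table(limit):
    """flags[k] is True iff k is squarefree (flags[0] is unused)."""
    flags = [True] * (limit + 1)
    d = 2
    while d * d <= limit:
        for k in range(d * d, limit + 1, d * d):
            flags[k] = False
        d += 1
    return flags

def carefree_slice(N, R):
    """Count strongly carefree couples in the hyperbolic pie slice.
    R is assumed to be a positive integer, so the bounds are exact."""
    sqfree = squarefree_table(N)
    total = 0
    a = 1
    while a * a <= R * N:          # a <= R*b and ab <= N force a^2 <= RN
        if sqfree[a]:
            b_lo = -(-a // R)      # ceil(a/R), i.e. b >= a/R
            b_hi = min(N // a, R * a)
            for b in range(b_lo, b_hi + 1):
                if sqfree[b] and gcd(a, b) == 1:
                    total += 1
        a += 1
    return total

def moree_constant(prime_limit=10 ** 4):
    """C = prod_p (1 - 3/p^2 + 2/p^3), truncated at prime_limit."""
    is_prime = [True] * (prime_limit + 1)
    C = 1.0
    for p in range(2, prime_limit + 1):
        if is_prime[p]:
            for k in range(p * p, prime_limit + 1, p):
                is_prime[k] = False
            C *= 1 - 3 / p ** 2 + 2 / p ** 3
    return C
```

With $N=10^5$ and $R=8$, for instance, the ratio of `carefree_slice(N, R)` to $CN\log 8$ is within a few percent of $1$.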
\subsection{Proof of Theorem~\ref{thm:C}}\label{sec:proofThmC} For $?=$ I or II, let $C_c(\mathscr{S}_?)$ denote the space of continuous functions on $\mathscr{S}_?$ with compact support, and for a positive integer $X$, define a positive linear functional $\varphi_{?,X}$ on $C_c(\mathscr{S}_?)$ by \[ \varphi_{?,X}(f)=\frac{1}{C_?\sqrt{X}}\sum_{\substack{K\text{ of type ?}\\ |\Delta(K)|\leq X}}f(\operatorname{sh}(K)), \] where the shape of $K$ is taken in $\mathscr{S}_?$. By the Riesz representation theorem, each $\varphi_{?,X}$ corresponds to a (regular Radon) measure $\mu_{?,X}$ on $\mathscr{S}_?$. Note that since these measures are finite sums of point measures, we have that \[ \mu_{?,X}([R_1,R_2)_?)=\frac{\#\left\{K\text{ of type ?}:|\Delta(K)|\leq X,\operatorname{sh}(K)\in[R_1,R_2)_?\right\}}{C_?\sqrt{X}}, \] for all $1\leq R_1<R_2$. \begin{theorem}\label{thm:C'} For $?=$ I or II, the sequence $\mu_{?,X}$ converges weakly to $\mu_?$. That is, for all $f\in C_c(\mathscr{S}_?)$, \[ \lim_{X\rightarrow\infty}\int fd\mu_{?,X}=\int fd\mu_?. \] \end{theorem} \begin{proof} The first step will be to prove the statement given in Theorem~\ref{thm:C} in the introduction, i.e.\ we must obtain asymptotics for the number of pure cubic fields of bounded discriminant and ratio in a given interval. This is the subject of Theorem~\ref{thm:fieldasymptotics} below. We obtain, in the notation of that theorem, \[ \mu_{?,X}([R_1,R_2)_?)=\frac{\mc{N}_?(X,R_1^3,R_2^3)}{C_?\sqrt{X}}=\log\left(\frac{R_2}{R_1}\right)+o(1). \] Thus, $\mu_{?,X}([R_1,R_2)_?)\rightarrow\mu_?([R_1,R_2)_?)$ as desired. Passing from this result to the full result is a standard argument. Indeed, let $f\in C_c(\mathscr{S}_?)$ be given and let $\epsilon>0$ be arbitrary. Then, there exists a pair $(f_1,f_2)$ of step functions (i.e.\ finite linear combinations of characteristic functions of intervals of the form $[R_1,R_2)_?$ above) with $f_1\leq f\leq f_2$ and $\int(f_2-f_1)d\mu_?<\epsilon$. 
Since we already know the theorem for such step functions, we obtain that \begin{align*} \int fd\mu_?-\epsilon&\leq\int f_1d\mu_?=\lim_{X\rightarrow\infty}\int f_1d\mu_{?,X}\\ &\leq\liminf_{X\rightarrow\infty}\int fd\mu_{?,X}\\ &\leq\limsup_{X\rightarrow\infty}\int fd\mu_{?,X}\\ &\leq\lim_{X\rightarrow\infty}\int f_2d\mu_{?,X}=\int f_2d\mu_?\\ &\leq\int fd\mu_?+\epsilon. \end{align*} As $\epsilon$ is arbitrary, the result follows. \end{proof} \subsection{Counting pure cubic fields of bounded discriminant and ratio}\label{sec:fieldcounting} For real numbers $X, R_1,R_2\geq1$ with $R_1<R_2$, let $\mc{N}_?(X,R_1,R_2)$ denote the number of pure cubic fields (up to isomorphism) of type $?=$~I or II, of discriminant less than $X$ (in absolute value), and ratio in the interval $(R_1,R_2)$. When we drop the subscript, we count Type I and II together. \begin{theorem}\label{thm:fieldasymptotics} For fixed $R_1$ and $R_2$, we have the following asymptotics: \begin{align} \mc{N}_{\mathrm{I}}(X,R_1,R_2)&=\frac{2C}{15\sqrt{3}}\sqrt{X}\log\frac{R_2}{R_1}+o(\sqrt{X}),\\ \mc{N}_{\mathrm{II}}(X,R_1,R_2)&=\frac{C}{10\sqrt{3}}\sqrt{X}\log\frac{R_2}{R_1}+o(\sqrt{X}),\\ \mc{N}(X,R_1,R_2)&=\frac{7C}{30\sqrt{3}}\sqrt{X}\log\frac{R_2}{R_1}+o(\sqrt{X}), \end{align} where $C$ is as in Theorem~\ref{thm:C}. \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm:fieldasymptotics}] For Type~I fields, the discriminant is $-3^3a^2b^2$, so, in the notation of Theorem~\ref{prop:TypeIIabcount} below, we take $N=\frac{1}{3}\sqrt{X/3}$ and get that \[ \mc{N}_{\mathrm{I}}(X,R_1,R_2)=\frac{1}{2}\left(\mc{S}_{\mathrm{I}}\left(\frac{1}{3}\sqrt{X/3}, R_2\right)-\mc{S}_{\mathrm{I}}\left(\frac{1}{3}\sqrt{X/3}, R_1\right)\right). \] The desired result follows from Theorem~\ref{prop:TypeIIabcount}.
Similarly, for Type~II fields, the discriminant is $-3a^2b^2$, so $N=\sqrt{X/3}$, and we have that \[ \mc{N}_{\mathrm{II}}(X,R_1,R_2)=\frac{1}{2}\left(\mc{S}_{\mathrm{II}}(\sqrt{X/3}, R_2)-\mc{S}_{\mathrm{II}}(\sqrt{X/3}, R_1)\right), \] which gives the desired result by Theorem~\ref{prop:TypeIIabcount}. \end{proof} The next section contains all the counting results used in the above proof. \subsection{Counting strongly carefree couples in a hyperbolic pie slice, with congruence conditions}\label{sec:carefree} A pair $(a,b)$ of positive integers is called a \emph{strongly carefree couple} if $a$ and $b$ are relatively prime and squarefree. In \cite{Carefree}, Pieter Moree counts the number of strongly carefree couples in a box of side $N$ obtaining the asymptotic $CN^2+O(N^{3/2})$, where $C$ is as in Theorem~\ref{thm:C}. Counting pure cubic fields of bounded discriminant and ratio amounts to counting strongly carefree couples below the hyperbola $ab=N$ and within the cone $R^{-1}\leq a/b\leq R$, while counting only those of Type~II imposes a congruence condition modulo 9. In this section, we determine asymptotics for these counts following the methods of \cite{Carefree} and \cite[\S6.2]{Manjul-Ari}. For $N,R\geq1$, let \[ \mc{S}(N,R)=\#\left\{(a,b)\in\mathbf{Z}_{\geq1}^2:(a,b)\text{ strongly carefree}, ab\leq N, \frac{1}{R}\leq\frac{a}{b}\leq R\right\}, \] \[ \mc{S}_{\mathrm{II}}(N,R)=\#\left\{(a,b)\in\mc{S}(N,R):3\nmid ab, a^2\equiv b^2\mod{9}\right\}, \] and \[ \mc{S}_{\mathrm{I}}(N,R)=\mc{S}(N,R)-\mc{S}_{\mathrm{II}}(N,R). \] \begin{theorem}\label{prop:TypeIIabcount} For fixed $R$, we have \begin{align} \mc{S}_{\mathrm{I}}(N,R)&=\frac{4C}{5}N\log R+o(N),\\ \mc{S}_{\mathrm{II}}(N,R)&=\frac{C}{5}N\log R+o(N),\\ \mc{S}(N,R)&=CN\log R+o(N). 
\end{align} \end{theorem} \begin{proof} For the latter two counts, we cover the hyperbolic pie slice with the following four pieces: \begin{itemize} \item[(i)] $1\leq a\leq\sqrt{RN}, 1\leq b\leq\frac{N}{a}$, \item[(ii)] $1\leq a\leq\sqrt{\frac{N}{R}}, 1\leq b\leq\frac{N}{a}$, \item[(iii)] $1\leq a\leq\sqrt{\frac{N}{R}}, 1\leq b\leq Ra$, \item[(iv)] $1\leq a\leq\sqrt{RN}, 1\leq b\leq\frac{1}{R}a$. \end{itemize} Letting $\mc{S}^?(N,R)$ denote the number of points with condition $?\in\{\text{(i), (ii), (iii), (iv)}\}$ imposed, and similarly for $\mc{S}^?_{\mathrm{II}}(N,R)$, we have that \[ \mc{S}_?(N,R)=\mc{S}_?^{(i)}(N,R)-\mc{S}_?^{(ii)}(N,R)+\mc{S}_?^{(iii)}(N,R)-\mc{S}_?^{(iv)}(N,R), \] for $?=\mathrm{II}$ or nothing. For $\mc{S}(N,R)$, we will apply Lemmas~\ref{lem:regionsi-iii} and \ref{lem:regionsii-iv} with $n=1$, whereas for $\mc{S}_{\mathrm{II}}(N,R)$ we take $n=9$ and sum over two $\psi$s, one sending $a$ to $a\mod{9}$, the other to $-a\mod{9}$. Note that all but one of the terms in Lemma~\ref{lem:regionsi-iii} cancel in $\mc{S}_?^{(i)}(N,R)-\mc{S}_?^{(ii)}(N,R)$ since they are independent of $R$. Applying Lemma~\ref{lem:regionsi-iii} with $\rho=R$, then $\rho=1/R$, and noting that Lemma~\ref{lem:regionsii-iv} implies the contribution of $\mc{S}_?^{(iii)}(N,R)-\mc{S}_?^{(iv)}(N,R)$ is negligible, we obtain the stated results. \end{proof} The point counts in regions of the form (i)--(iv) above are contained in the following two lemmas. \begin{lemma}\label{lem:regionsi-iii} Let $n$ be a positive integer, let $\psi$ be any function from the positive integers prime to $n$ to $(\mathbf{Z}/n\mathbf{Z})^\times$, and let $\rho$ be a positive real number.
Then, \begin{align*} \sum_{\substack{a\leq\sqrt{\rho N}\\ (a,n)=1}}\mu^2(a)\!\!\!\!\!\sum_{\substack{b\leq N/a\\ (a,b)=1\\ b\equiv\psi(a)\mod{n}}}\!\!\!\!\!\mu^2(b)&=\frac{CN}{n}\prod_{p\mid n}\left(\frac{p^2}{(p-1)(p+2)}\right)\\ &\cdot\left(\frac{\log N}{2}+\frac{\log \rho}{2}+\gamma+3\kappa+\sum_{p\mid n}\frac{\log p}{p+2}\right)+O(N^{3/4+\epsilon}) \end{align*} for all $\epsilon>0$, where $\gamma$ is the Euler--Mascheroni constant, $C$ is as in Theorem~\ref{thm:C}, and \[ \kappa=\sum_p\frac{\log(p)}{p^2+p-2}. \] \end{lemma} \begin{proof} Applying Lemma~\ref{lem:bsum} below to the inner sum with $a^\prime=\psi(a)$ yields \[ \sum_{\substack{b\leq N/a\\ (a,b)=1\\ b\equiv\psi(a)\mod{n}}}\mu^2(b)=\frac{\varphi(a)N}{a^2n}\cdot\frac{1}{\zeta(2)}\cdot\prod_{p\mid an}\frac{p^2}{p^2-1}+O(2^{\omega(a)}\sqrt{N/a}). \] Estimating the error using that\footnote{Indeed, $2^{\omega(m)}\leq d(m)$ and $d(m)=o(m^{\epsilon})$ for all $\epsilon>0$ (see e.g. \cite[Exercise~13.13]{Apostol}).} \[ \sum_{m\leq x}\mu^2(m)\frac{2^{\omega(m)}}{\sqrt{m}}=O(x^{1/2+\epsilon}),\text{ for all }\epsilon>0, \] we obtain, for all $\epsilon>0$, and up to $O(N^{3/4+\epsilon})$, \begin{align*} \sum_{\substack{a\leq\sqrt{\rho N}\\ (a,n)=1}}\mu^2(a)\!\!\!\!\!\sum_{\substack{b\leq N/a\\ (a,b)=1\\ b\equiv\psi(a)\mod{n}}}\!\!\!\!\!\mu^2(b)&=\frac{N}{n\zeta(2)}\prod_{p\mid n}\frac{p^2}{p^2-1}\sum_{\substack{a\leq\sqrt{\rho N}\\ (a,n)=1}}\mu^2(a)\frac{\varphi(a)}{a^2}\prod_{p\mid a}\frac{p^2}{p^2-1}\\ &=\frac{N}{n\zeta(2)}\prod_{p\mid n}\frac{p^2}{p^2-1}\sum_{\substack{a\leq\sqrt{\rho N}\\ (a,n)=1}}\mu^2(a)\prod_{p\mid a}\frac{1}{p+1}. \end{align*} Applying Perron's formula as in Lemma~\ref{lem:perron} below with $k=0$ and $x=\sqrt{\rho N}$ yields the desired result. \end{proof} \begin{lemma}\label{lem:regionsii-iv} Let $n,\psi$, and $\rho$ be as in Lemma~\ref{lem:regionsi-iii}. Then,
\[ \sum_{\substack{a\leq\sqrt{\rho N}\\ (a,n)=1}}\mu^2(a)\sum_{\substack{b\leq a/\rho\\ (a,b)=1\\ b\equiv\psi(a)\mod{n}}}\mu^2(b)=\frac{CN}{2n}\prod_{p\mid n}\left(\frac{p^2}{(p-1)(p+2)}\right)+O(N^{3/4+\epsilon}),\text{ for all }\epsilon>0. \] In particular, the main term is independent of $\rho$, so that the main terms cancel in the difference $\mc{S}_?^{(iii)}(N,R)-\mc{S}_?^{(iv)}(N,R)$. \end{lemma} \begin{proof} This proof proceeds along the same lines as the previous one. Lemma~\ref{lem:bsum} gives \[ \sum_{\substack{b\leq a/\rho\\ (a,b)=1\\ b\equiv\psi(a)\mod{n}}}\mu^2(b)=\frac{\varphi(a)}{\rho n}\cdot\frac{1}{\zeta(2)}\cdot\prod_{p\mid an}\frac{p^2}{p^2-1}+O(2^{\omega(a)}\sqrt{a/\rho}). \] Again using that $d(m)=o(m^\epsilon)$, for all $\epsilon>0$, we see that\footnote{Here, we simply replace the $\sqrt{a}$ in the error term above with $N^{1/4}$.} \[ \sum_{m\leq x}2^{\omega(m)}=O(x^{1+\epsilon}),\text{ for all }\epsilon>0. \] This gives, for all $\epsilon>0$, and up to $O(N^{3/4+\epsilon})$, \begin{align*} \sum_{\substack{a\leq\sqrt{\rho N}\\ (a,n)=1}}\mu^2(a)\!\!\!\!\!\sum_{\substack{b\leq a/\rho\\ (a,b)=1\\ b\equiv\psi(a)\mod{n}}}\!\!\!\!\!\mu^2(b)&=\frac{1}{\rho n\zeta(2)}\prod_{p\mid n}\frac{p^2}{p^2-1}\sum_{\substack{a\leq\sqrt{\rho N}\\ (a,n)=1}}\mu^2(a)\varphi(a)\prod_{p\mid a}\frac{p^2}{p^2-1}\\ &=\frac{1}{\rho n\zeta(2)}\prod_{p\mid n}\frac{p^2}{p^2-1}\sum_{\substack{a\leq\sqrt{\rho N}\\ (a,n)=1}}\mu^2(a)\prod_{p\mid a}\frac{p^2}{p+1}. \end{align*} Applying Perron's formula as in Lemma~\ref{lem:perron} below with $k=2$ and $x=\sqrt{\rho N}$ evaluates the remaining sum as $\zeta(2)C\prod_{p\mid n}\left(1+\frac{1}{p+1}\right)^{-1}\cdot\frac{\rho N}{2}$, and simplifying the resulting product over $p\mid n$ yields the stated main term. \end{proof} In order to obtain the counts above with congruence conditions, we need the following improvements to \cite[\S2]{Carefree}. \begin{lemma} Fix $n\in\mathbf{Z}_{\geq1}$. For $a$ and $a^\prime$ relatively prime to $n$, let \[ T_{a,a^\prime,n}(x)=\#\{b\leq x:(a,b)=1, b\equiv a^\prime\mod{n}\}. \] Then \[ T_{a,a^\prime,n}(x)=\frac{\varphi(a)x}{an}+O(2^{\omega(a)}), \] where $\varphi$ is the Euler totient function and $\omega(a)$ is the number of distinct prime divisors of $a$. \end{lemma} Note that the right hand side is independent of $a^\prime$.
\begin{proof} Using the identity \[ \sum_{d|m}\mu(d)=\begin{cases} 1&\text{if }m=1\\ 0&\text{otherwise} \end{cases} \] and the orthogonality of Dirichlet characters in the form \[ \frac{1}{\varphi(n)}\sum_{\chi\text{ mod }n}\chi(m)\ol{\chi}(m^\prime)=\begin{cases} 1&\text{if }m\equiv m^\prime\mod{n}\\ 0&\text{otherwise} \end{cases} \] (where the sum is over all Dirichlet characters modulo $n$, and $m$ and $m^\prime$ are prime to $n$), we obtain \begin{align*} T_{a,a^\prime,n}(x)&=\sum_{\substack{b\leq x\\(a,b)=1\\b\equiv a^\prime\mod{n}}}1\\ &=\sum_{\substack{b\leq x\\b\equiv a^\prime\mod{n}}}\sum_{d\mid\gcd(a,b)}\mu(d)\\ &=\sum_{b\leq x}\left(\sum_{d\mid\gcd(a,b)}\mu(d)\frac{1}{\varphi(n)}\sum_{\chi\text{ mod }n}\chi(a^\prime)\ol{\chi}(b)\right)\\ &=\frac{1}{\varphi(n)}\sum_{d\mid a}\mu(d)\sum_{c\text{ mod }n}\left(\sum_{\chi\text{ mod }n}\chi(a^\prime)\ol{\chi}(c)\right)\#\{\text{multiples }b\text{ of }d:b\leq x, b\equiv c\mod{n}\}\\ &=\sum_{d\mid a}\mu(d)\#\{\text{multiples }b\text{ of }d:b\leq x, b\equiv a^\prime\mod{n}\}\\ &=\sum_{d\mid a}\mu(d)\left(\frac{x}{dn}+O(1)\right)\\ &=\frac{\varphi(a)x}{an}+O(2^{\omega(a)}), \end{align*} where, in the penultimate equality, we use that $d\mid a$ is invertible modulo $n$, and, in the last, the identity \[ \sum_{d\mid m}\frac{\mu(d)}{d}=\frac{\varphi(m)}{m}. \] \end{proof} \begin{lemma}\label{lem:bsum} Fix $n\in\mathbf{Z}_{\geq1}$. For $a$ and $a^\prime$ relatively prime to $n$, let \[ S_{a,a^\prime,n}(x)=\#\{b\leq x:(a,b)=1, b\equiv a^\prime\mod{n},b\text{ squarefree}\}. \] Then \[ S_{a,a^\prime,n}(x)=\frac{\varphi(a)x}{an}\cdot\frac{1}{\zeta(2)}\cdot\prod_{p\mid an}\frac{p^2}{p^2-1}+O(2^{\omega(a)}\sqrt{x}), \] where $\zeta(2)=\pi^2/6$. \end{lemma} Once again, note that the right hand side is independent of $a^\prime$.
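Before giving the proof, we note that the main term of Lemma~\ref{lem:bsum} is simple to compare against a direct count; the following sketch (Python; an illustration of ours with hypothetical helper names, not part of the argument) does so for small parameters.

```python
from math import gcd, pi

# Brute-force check (illustrative only) of the main term of
# S_{a,a',n}(x): squarefree b <= x with (a, b) = 1 and b = a' mod n.

def is_squarefree(b):
    d = 2
    while d * d <= b:
        if b % (d * d) == 0:
            return False
        d += 1
    return True

def S_direct(a, ap, n, x):
    # Direct count of the b counted by S_{a, a', n}(x).
    return sum(1 for b in range(1, x + 1)
               if b % n == ap % n and gcd(a, b) == 1 and is_squarefree(b))

def S_main(a, ap, n, x):
    # phi(a)*x/(a*n) * (1/zeta(2)) * prod over primes p | an of p^2/(p^2-1)
    phi_a = sum(1 for k in range(1, a + 1) if gcd(k, a) == 1)
    prod, rest, p = 1.0, a * n, 2
    while rest > 1:                 # run over the primes dividing a*n
        if rest % p == 0:
            prod *= p * p / (p * p - 1)
            while rest % p == 0:
                rest //= p
        p += 1
    return phi_a * x / (a * n) * (6 / pi ** 2) * prod
```

For instance, with $a=6$, $a^\prime=1$, $n=5$, and $x=10^5$, the direct count agrees with the main term to within a few percent, consistent with the $O(2^{\omega(a)}\sqrt{x})$ error.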
\begin{proof} By the inclusion-exclusion principle, \begin{align*} S_{a,a^\prime,n}(x)&=\sum_{\substack{m\leq\sqrt{x}\\(m,an)=1}}\mu(m)T_{a,a^\prime m^{-2},n}(x/m^2)\\ &=\frac{\varphi(a)x}{an}\left(\sum_{\substack{m\leq\sqrt{x}\\(m,an)=1}}\frac{\mu(m)}{m^2}\right)+O(2^{\omega(a)}\sqrt{x})\\ &=\frac{\varphi(a)x}{an}\left(\sum^\infty_{\substack{m=1\\(m,an)=1}}\frac{\mu(m)}{m^2}\right)+O(2^{\omega(a)}\sqrt{x})\\ &=\frac{\varphi(a)x}{an}\cdot\frac{1}{\zeta^{(an)}(2)}+O(2^{\omega(a)}\sqrt{x})\\ &=\frac{\varphi(a)x}{an}\cdot\frac{1}{\zeta(2)}\cdot\prod_{p\mid an}\frac{p^2}{p^2-1}+O(2^{\omega(a)}\sqrt{x}). \end{align*} \end{proof} Finally, our application of Perron's formula is dealt with in the following lemma. This approach appears in \cite[\S6.2]{Manjul-Ari}, though we include further details and a slightly more general result. \begin{lemma}\label{lem:perron} Let $k$ be a nonnegative integer and let $n$ be a positive integer. Let \[ A_k(a)=\mu^2(a)\prod_{p\mid a}\frac{p^k}{p+1}\quad\text{and}\quad f_k(s)=\sum_{\substack{a\geq1\\(a,n)=1}}\frac{A_k(a)}{a^s}. \] Then, for some $\delta>0$, \[ \sum_{\substack{a\leq x\\(a,n)=1}}A_k(a)=\zeta(2)C\prod_{p\mid n}\left(1+\frac{1}{p+1}\right)^{\!\!-1}\cdot\begin{cases} \displaystyle\log(x)+\gamma+3\kappa+\sum_{p\mid n}\frac{\log p}{p+2}&\text{if }k=0,\\ \displaystyle\frac{x^k}{k}&\text{if }k>0, \end{cases}\;+O(x^{k-\delta}), \] where $C$ is as in Theorem~\ref{thm:C} and $\kappa$ is as in Lemma~\ref{lem:regionsi-iii}. \end{lemma} \begin{proof} Note that $A_k(a)=a^kA_0(a)$, so that $f_k(s)=f_0(s-k)$. Since $A_0(a)\leq 1/a$, $f_0(s)$ converges absolutely for $\mathrm{Re}(s)>0$.
Perron's formula (see e.g.\ \cite[Theorem~11.18]{Apostol}) then states that for any $\epsilon>0$ and any $k\geq0$ \[ \sum_{a\leq x}\!{\vphantom{\sum}}^\ast A_k(a)=\frac{1}{2\pi i}\int_{k+\epsilon-i\infty}^{k+\epsilon+i\infty}f_k(s)\frac{x^s}{s}ds=\frac{1}{2\pi i}\int_{k+\epsilon-i\infty}^{k+\epsilon+i\infty}f_0(s-k)\frac{x^s}{s}ds, \] where the asterisk on the sum means that if $x$ is an integer, then the last term must be divided by 2. Following \cite[\S6.2]{Manjul-Ari}, we let $h(s)=f_0(s)/\zeta(s+1)$ and we compute \[ \frac{1}{2\pi i}\int_{k+\epsilon-i\infty}^{k+\epsilon+i\infty}h(s-k)\zeta(s-k+1)\frac{x^s}{s}ds \] by shifting the contour just to the left of $\mathrm{Re}(s)=k$, picking up the pole of $\zeta(s-k+1)$ at $s=k$ and, when $k=0$, also the pole of $1/s$ at $s=0$. Since $h(s)$ converges for $\mathrm{Re}(s)>-1/2$, we may use Laurent series to determine the residues that occur. For $k>0$, near $s=k$ \[ h(s-k)\zeta(s-k+1)\frac{x^s}{s}=(h(0)+\cdots)\cdot\left(\frac{1}{s-k}+\cdots\right)\cdot(x^k+\cdots)\cdot(1/k+\cdots), \] so the residue is simply $h(0)x^k/k$. For $k=0$, near $s=0$ \[ h(s)\zeta(s+1)\frac{x^s}{s}=(h(0)+h^\prime(0)s+\cdots)\cdot\left(\frac{1}{s}+\gamma+\cdots\right)\cdot(1+s\log x+\cdots)\cdot1/s, \] so the residue is $h(0)\gamma+h(0)\log(x)+h^\prime(0)$. In order to evaluate $h(0)$, we consider $h(s)/\zeta(s+2)=f_0(s)/(\zeta(s+1)\zeta(s+2))$ at $s=0$, where its Euler product converges. Note that \[ \left(1+\frac{1}{p+1}\right)(1-p^{-1})(1-p^{-2})=1-\frac{3}{p^2}+\frac{2}{p^3} \] so \[ \frac{h(0)}{\zeta(2)}=C\prod_{p\mid n}\left(1+\frac{1}{p+1}\right)^{-1}. \] In order to evaluate $h^\prime(0)$, we instead evaluate $h^\prime(0)/h(0)$. One may verify that \[ h(s)=\prod_{p\mid n}\left(1+\frac{1}{p+1}p^{-s}\right)^{-1}\prod_p\left(1-\frac{1}{p(p+1)}(p^{-s}+p^{-2s})\right). \] Taking the logarithm, then the derivative, and evaluating at $s=0$ yields \[ \frac{h^\prime(0)}{h(0)}=3\kappa+\sum_{p\mid n}\frac{\log(p)}{p+2}. 
\] \end{proof} \subsection*{Acknowledgments} The author would like to thank Piper Harron, Ari Shnidman, and Rufus Willett for some helpful conversations, as well as the referee for suggesting some points that should be clarified and asking some good questions. \end{document}
\begin{document} \newcommand{\RN}[1]{ \textup{\uppercase\expandafter{\romannumeral#1}} } \title{Cascaded High Dimensional Histograms: \\A Generative Approach to Density Estimation} \author{\name Siong Thye Goh \email [email protected] \\ \AND \name Cynthia Rudin \email [email protected] \\ \addr Massachusetts Institute of Technology \\ Cambridge, MA 02139, USA.} \editor{} \maketitle \begin{abstract} We present tree- and list-structured density estimation methods for high dimensional binary/categorical data. Our density estimation models are high dimensional analogues of variable bin width histograms. In each leaf of the tree (or list), the density is constant, similar to the flat density within the bin of a histogram. Histograms, however, cannot easily be visualized in higher dimensions, whereas our models can. The accuracy of histograms fades as dimensions increase, whereas our models have priors that help with generalization. Our models are sparse, unlike high-dimensional histograms. We present three generative models: the first allows the user to specify the number of desired leaves in the tree within a Bayesian prior, the second allows the user to specify the desired number of branches within the prior, and the third returns lists (rather than trees) and allows the user to specify the desired number of rules and the length of rules within the prior. Our results indicate that the new approaches yield a better balance between sparsity and accuracy of density estimates than other methods for this task. \end{abstract} \begin{keywords} Density Estimation, Decision Trees, Histogram, Interpretable Modeling. \end{keywords} \section{Introduction} A histogram is a piecewise constant density estimation model. 
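In one dimension, the piecewise constant estimator is just a few lines of code (an illustrative aside; the function name is ours): the estimated density in a bin is the bin count divided by $n$ times the bin width.

```python
import numpy as np

def histogram_density(data, bins=10):
    """Piecewise-constant density estimate: f = n_bin / (n * bin_width)."""
    counts, edges = np.histogram(data, bins=bins)
    widths = np.diff(edges)
    densities = counts / (counts.sum() * widths)
    return densities, edges

rng = np.random.default_rng(0)
densities, edges = histogram_density(rng.normal(size=10_000), bins=20)
# A valid density: the piecewise-constant pieces integrate to 1.
assert abs(float(np.sum(densities * np.diff(edges))) - 1.0) < 1e-9
```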
There are good reasons that the histogram is among the first techniques taught to any student dealing with data \citep{chakrabarti2006data}: (i) histograms are easy to visualize, (ii) they are accurate as long as there are enough data in each bin, and (iii) they have a logical structure that most people find interpretable. A downside of the conventional histogram is that all of these properties fail in high dimensions, particularly for binary or categorical data. One cannot easily visualize a conventional high dimensional histogram. For binary data this would require us to visualize a high dimensional hypercube. In terms of accuracy, there may not be enough data in each bin, so the estimates would cease to be accurate. In terms of interpretability, for a high dimensional histogram, a large set of logical conditions ceases to be an interpretable representation of the data, and can easily obscure important relationships between variables. Considering marginals is often useless for binary variables, since there are only two bins (0 and 1). The question is how to construct a piecewise constant density estimation model (like a histogram) that has the three properties mentioned above: (i) it can be visualized, (ii) it is accurate, (iii) it is interpretable. In this paper we present three cascaded (tree- or list-structured) density estimation models. These are similar to variable bin-width histograms (e.g., see \cite{wand1997data,scott1979optimal}), though our approaches use only a subset of the variables. A leaf (that is, a histogram bin) is defined by conditions on a subset of variables (e.g., ``the second component of $x$ is 0'' and ``the first component of $x$ is 1''), and the density is estimated to be constant within each leaf. Let us give an example to illustrate how each bin is modeled to be of constant density. Let us say we are modeling the population of burglaries (housebreaks) in a city. 
We might want to create a density model to understand how common or unusual the particular details of a crime might be (e.g., do we see crimes like this every month, or is this relatively uncommon?). A leaf (histogram bin) in our model might be the following: if \textit{premise} is \texttt{residence}, \textit{owner present} is \texttt{false}, \textit{location of entry} is \texttt{window}, then $p(\textrm{state})$ is 0.20. This means that the total density in the bin where these conditions hold is 0.20, that is, for 20\% of burglaries, the three conditions are met. Let us say we have an additional variable \textit{means of entry} with outcomes \texttt{pried}, \texttt{forced}, and \texttt{unlocked}, indicating how the criminal entered the premise. Each of these outcomes would be equally probable in the leaf, each with probability $0.20/3 \approx 0.067$. We described just one bin above, whereas a full tree could be that of Figure \ref{FigFarmSlide1}. \begin{figure} \caption{A sparse tree to represent the grain data set. Probability of belonging to the leaf, the densities ($f$) and volume (Vol) are specified in the sparse tree.} \label{FigFarmSlide1} \end{figure} Bayesian priors control the shape of the tree. This helps with both generalization and interpretability. For the first method, the prior parameter controls the number of leaves. For the second method, the prior controls the desired number of branches for nodes of the tree. For the third method, which creates lists (one-sided trees), the prior controls the desired number of leaves and also the length of the leaf descriptions. This generative structure aims to fix the three issues with conventional histograms: (i) visualization: we need only write down the conditions we used in the tree- or list-shaped cascade to visualize the model. (ii) accuracy: the prior encourages the cascade to be smaller, which means the bins are larger, and generalize better. 
(iii) interpretability: the prior encourages sparsity, and encourages the cascade to obey a user-defined notion of interpretability. Density estimation is a classic topic in statistics and machine learning. Without using domain-specific generative assumptions, the most useful techniques have been nonparametric, mainly variants of kernel density estimation (KDE) \citep{akaike1954approximation,rosenblatt1956remarks,parzen1962estimation,cacoullos1966estimation,mahapatruni2011cake,nadaraya1970remarks,rejto1973density,wasserman2006all,silverman1986density,devroye1991exponential}. KDE is highly tunable, not domain dependent, and can generalize well, but does not have the interpretable logical structure of histograms. Similar alternatives include mixtures of Gaussians \citep{Li99mixturedensity,zhuang1996gaussian,ormoneit1995improved,ormoneit1998averaging,chen2006probability,seidl2009indexing}, forest density estimation \citep{Liu:2011:FDE:1953048.2021032}, RODEO \citep{AISTATS07_LiuLW} and other nonparametric Bayesian methods \citep{muller2004nonparametric} which have been proposed for general purpose (not interpretable per se) density estimation. \citet{jebara2012bayesian} provides a Bayesian treatment of latent directed graph structure for non-iid data, but does not focus on sparsity. P\'olya trees are generated probabilistically for real valued features and could be used as priors \citep{wong2010optional}. The most similar paper to ours is on density estimation trees (DET) \citep{Ram:2011:DET:2020408.2020507}. DETs are constructed in a top-down greedy way. This gives them a disadvantage in optimization, often leading to lower quality trees. They also do not have a generative interpretation, and their parameters do not have a physical meaning in terms of the shape of the trees (unlike the methods defined in this work). \section{Models} For all three models, we will need the following notation. There are $p$ features. 
We express the path to a leaf as the set of conditions on each feature along the path. For instance, for a particular leaf (leaf $t$ in Figure \ref{MyTree}), we might see conditions that require the first feature $x_{.1} \in \left\{ 4,5,6\right\}$ and the second feature $x_{.2} \in \left\{ 100,101\right\}$. Thus the leaf is defined by the set of all outcomes that obey these conditions, that is, the leaf could be $$x \in\left\{ x_{.1} \in \left\{4,5,6 \right\}, x_{.2} \in \left\{ 100, 101\right\}, x_{.3}, x_{.4}, \ldots, x_{.p} \textrm{ are any allowed values}\right\}.$$ This implies there is no restriction on $ x_{.3}, x_{.4}, \ldots, x_{.p}$ for observations within the leaf. Notationally, a condition on the $j^{th}$ feature is denoted $x_{.j} \in \sigma_j(l)$ where $\sigma_j(l)$ is the set of allowed values for feature $j$ along the path to leaf $l$. If there are no conditions on feature $j$ along the path to $l$, then $\sigma_j(l)$ includes all possible outcomes for feature $j$. Thus, leaf $l$ includes outcomes $x$ obeying: $$x \in \left\{ x_{.1} \in \sigma_1(l),x_{.2} \in \sigma_2(l), \ldots, x_{.p} \in \sigma_p(l)\right\}.$$ For categorical data, the volume of a leaf $l$ is defined to be $\mathbb{V}_l=\prod_{j=1}^p |\sigma_j(l)|$. We give an example of this computation next. \subsubsection*{Volume Computation Example} The data are categorical. Possible outcomes for $x_{.1}$ are $\left\{ 1,2,3,4,5,6,7\right\}$. Possible outcomes for $x_{.2}$ are $\left\{ 100,101,102,103\right\}$. Possible outcomes for $x_{.3}$ are $\left\{ 10,11,12,13,14,15\right\}$. Possible outcomes for $x_{.4}$ are $\left\{ 8,9,10\right\}$. Consider the tree in Figure \ref{MyTree}. \begin{figure} \caption{Example of computation of volume.} \label{MyTree} \end{figure} We compute the volume for leaf $l$. Here, $\sigma_1(l)=\left\{ 4,5 \right\}$ since $l$ requires both $x_{.1} \in \left\{ 4,5,6\right\}$ and $x_{.1} \in \left\{ 4,5\right\}$. 
$\sigma_2(l)=\left\{ 100,101\right\},$ $\sigma_3(l)=\left\{ 10,15\right\},$ and $\sigma_4(l)=\left\{ 8,9,10\right\}$ because there is no restriction on $x_{.4}$. So $$\mathbb{V}_l=\prod_j |\sigma_j(l)|=2\cdot 2\cdot 2\cdot 3=24.$$ Our notation handles only categorical data for ease of exposition but can be extended to handle ordinal and continuous data. For ordinal data, the definition is the same as for categorical but $\sigma_j$ can (optionally) include only contiguous values (e.g., $\left\{ 3,4,5\right\}$ but not $\left\{ 3,4,6\right\}$). For continuous variables, $|\sigma_j|$ is the ``volume'' of the interval; for example, for node condition $x_{.j} \in (0,0.5)$, $|\sigma_j|=0.5$. In the next three subsections, we present the leaf-based modeling approach, the branch-based modeling approach, and an approach to construct density rule lists. \subsection{Model \RN{1}: Leaf-based Cascade Model} We define the prior and likelihood for the tree-based model. To create the tree we will optimize the posterior over possible trees.\\ \noindent \textbf{Prior:}\\ For this model, the main prior on tree $T$ is on the number of leaves $K_T$. This prior we choose to be Poisson with a particular scaling (which will make sense later on), where the Poisson is centered at a user-defined parameter $\lambda$. Notation $N_{K_T}$ is the number of trees with $K_T$ leaves. The prior is: \begin{eqnarray*}P(\textrm{Number of leaves in }T= K_T| \lambda) &\propto& N_{K_T}\cdot \textrm{ Poisson}(K_T,\lambda)\\ &=& N_{K_T} e^{-\lambda}\frac{\lambda^{K_T}}{K_T!}.\end{eqnarray*} Thus $\lambda$ allows the user to control the number of leaves in the tree. The number of possible trees is finite, thus the distribution can be trivially normalized. Among trees with $K_T$ leaves, tree $T$ is chosen uniformly, with probability $1/N_{K_T}$. 
This means the probability to choose a particular tree $T$ is Poisson: \begin{eqnarray*} P(T|\lambda)\propto P(T|K_T)P(K_T|\lambda)&\propto &\frac{1}{N_{K_T}}N_{K_T} e^{-\lambda}\frac{\lambda^{K_T}}{K_T!} = e^{-\lambda}\frac{\lambda^{K_T}}{K_T!} \\ &\propto& \textrm{Poisson}(K_T,\lambda). \end{eqnarray*} We place a uniform prior over the probabilities for a data point to land in each of the leaves. To do this, we start from a Dirichlet distribution with equal parameters $\alpha_1=\ldots=\alpha_{K_T}=\alpha \in \mathbb{Z}^+$ where hyperparameter $\alpha>1$. We denote the vector with $K_T$ equal entries $[\alpha,...,\alpha]$ as $\bm{\alpha}_{K_T}$. We draw multinomial parameters $\bm\theta=[{\theta}_1, \ldots, {\theta}_{K_T}]$ from Dir$(\bm{\alpha}_{K_T})$. Thus, the first part of our model is as follows, given hyperparameters $\lambda$ and $\alpha$: \begin{eqnarray*} \textrm{Number of leaves in T: } K_T&\propto & \textrm{scaled Poisson}(\lambda), \textrm{ i.e., } N_{K_T}\cdot \textrm{ Poisson}(K_T,\lambda)\\ \textrm{Tree shape }\;:T& \propto & \textrm{Uniform over trees with }K_T\textrm{ leaves}\\ \textrm{Prior distribution over leaves: }\bm\theta &\propto & \textrm{Dir}(\bm{\alpha}_{K_T}). \end{eqnarray*} As usual, the prior can be overwhelmed given enough data.\\ \noindent \textbf{Likelihood:}\\ Let $n_l$ denote the number of points captured by the $l$-th leaf, and denote $\mathbb{V}_l$ to be the volume of that leaf, defined above. The probability to land at any specific value within leaf $l$ is $\frac{\theta_l}{\mathbb{V}_l}$. The likelihood for the full data set is thus $$P(X|\bm{\theta},T)=\prod_{l=1}^{K_T} \left( \frac{\theta_l}{\mathbb{V}_l} \right)^{n_l}.$$\\ \textbf{Posterior:} \\ The posterior can be written as follows, where we have substituted the distributions from the prior into the formula. 
Here, $B(\bm{\alpha}_{K_T})=\frac{\prod_{l=1}^{K_T} \Gamma(\alpha_l)}{\Gamma(\sum_{l=1}^{K_T} \alpha_l)}=\frac{(\Gamma(\alpha))^{K_T}}{\Gamma(K_T\alpha)}$ is the multinomial beta function, which is also the normalizing constant for the Dirichlet distribution. \begin{align*} &P(T|\lambda, \bm{\alpha},X) \\ & \propto \int_{\bm{\theta}: \textrm{simplex}} P(K_T|\lambda)\cdot P(T|K_T)\cdot P(\bm{\theta}|\bm{\alpha}_{K_T})\cdot P(X|\bm{\theta},T) d \bm{\theta}\\ &\propto\int_{\bm{\theta}: \textrm{simplex}} P(T|\lambda) \left[\frac{1}{B(\bm{\alpha}_{K_T})} \left(\prod_{l=1}^{K_T} {\theta}_l^{\alpha-1} \right)\right]\left[\prod_{l=1}^{K_T} \left(\frac{{\theta}_l}{\mathbb{V}_l} \right)^{n_l} \right] d \bm{\theta}\\ &\propto P(T|\lambda) \frac{1}{B(\bm{\alpha}_{K_T})} \left( \prod_{l=1}^{K_T}\left(\frac{1}{\mathbb{V}_l} \right)^{n_l}\right)\int_{\bm{\theta}: \textrm{simplex}} \prod_{l=1}^{K_T} {\theta}_l^{n_l+\alpha-1} d \bm{\theta} \\ &\propto P(T|\lambda)\frac{B(n_1+\alpha, \ldots, n_{K_T}+\alpha)}{B(\bm{\alpha}_{K_T})}\prod_{l=1}^{K_T}\frac{1}{\mathbb{V}_l^{n_l}} \\ &\propto P(T|\lambda) \frac{\Gamma (K_T \alpha)}{\Gamma (n+K_T\alpha)}\prod_{l=1}^{K_T} \frac{(n_l+\alpha-1)!}{(\alpha-1)!} {\mathbb{V}_l^{-n_l}}, \end{align*} where $P(T|\lambda)$ is simply Poisson($K_T,\lambda$) as discussed earlier. For numerical stability, we maximize the log-posterior, which is equivalent to maximizing the posterior. For the purposes of prediction, we need to estimate the density assigned to leaf $l$. This is calibrated to the data, simply as: $$\hat{f}_l=\frac{n_l}{n \mathbb{V}_l},$$ where $n$ is the total number of training data points and $n_l$ is the number of training data points that reside in leaf $l$. The formula implicitly states that the density in the leaf is uniformly distributed over the features whose values are undetermined within the leaf (features for which $\sigma_j$ contains all outcomes for feature $j$). 
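The closed-form posterior and the calibrated density estimate above are straightforward to evaluate for a fixed tree. A minimal sketch (our own function names; we assume the leaf counts and volumes of a candidate partition are given, and drop constants shared by all trees):

```python
from math import lgamma, log

def model1_log_posterior(leaf_counts, leaf_volumes, lam, alpha):
    """Log of the (unnormalized) Model I posterior for a fixed tree,
    using the closed form obtained after integrating out theta."""
    K = len(leaf_counts)
    n = sum(leaf_counts)
    # log(lambda^K / K!); the e^{-lambda} factor is the same for every tree.
    lp = K * log(lam) - lgamma(K + 1)
    lp += lgamma(K * alpha) - lgamma(n + K * alpha)
    for n_l, V_l in zip(leaf_counts, leaf_volumes):
        # (n_l + alpha - 1)! / (alpha - 1)! = Gamma(n_l + alpha) / Gamma(alpha)
        lp += lgamma(n_l + alpha) - lgamma(alpha) - n_l * log(V_l)
    return lp

def leaf_density_estimates(leaf_counts, leaf_volumes):
    """Calibrated density per leaf: f_hat_l = n_l / (n * V_l)."""
    n = sum(leaf_counts)
    return [n_l / (n * V_l) for n_l, V_l in zip(leaf_counts, leaf_volumes)]

# Toy comparison over a total volume of 24: a 2-leaf split that concentrates
# the data in a small-volume leaf scores higher than the trivial 1-leaf tree.
lp_split = model1_log_posterior([90, 10], [4, 20], lam=2.0, alpha=2)
lp_flat = model1_log_posterior([100], [24], lam=2.0, alpha=2)
assert lp_split > lp_flat
```

Note that the leaf volumes must partition the total volume (here $4 + 20 = 24$), so the two candidates are comparable.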
\subsection{Model \RN{2}: Branch-based Cascade Model} In the previous model, a Dirichlet distribution is drawn only over the leaves. In this model, a Dirichlet distribution is drawn at every internal node to determine branching. Similar to the previous model, we choose the tree that optimizes the posterior. \\ \noindent \textbf{Prior:}\\ The prior is comprised of two pieces: the part that creates the tree structure, and the part that determines how data propagates through it.\\ \textbf{Tree Structure Prior}: For tree $T$, we let $B_T= \left\{ b_i |i \in I \right\}$ be a multiset, where each element is the count of branches from a node of the tree. For instance, if in tree $T$, the three nodes have 3 branches, 2 branches, and 2 branches respectively, then $B_T=\left\{3,2,2\right\}$. We let $\NBT$ denote the number of trees with the same multiset $B_T$. Note that $B_T$ is unordered, so $\{3,2,2\}$ is the same multiset as $\{2,3,2\}$ or $\{2,2,3\}$. Let $I$ denote the set of internal nodes of tree $T$ and let $L$ denote the set of leaves. We let $\mathbb{V}_l$ denote the volume at leaf $l$. In the generative model, a Poisson distribution with parameter $\lambda$ is used at each internal node in a top down fashion to determine the number of branches. Iteratively, for node $i$, the number of branches, $b_i$, obeys $b_i \sim \textrm{Poisson}(\lambda).$ Hence, at any node $i$, with probability $\exp(-\lambda) \frac{\lambda^{b_i}}{b_i!}$, there are $b_i$ branches from node $i$. This implies that with probability $\exp(-\lambda)$, the node is a leaf. 
In summary, \begin{equation*} P(\text{Multiset of branches }= B|\lambda) \propto \NB \left[\prod_{i \in I}e^{-\lambda} \frac{\lambda^{b_i}}{b_i!}\right]\left[\prod_{l \in L}e^{-\lambda}\right].\end{equation*} Among trees with multiset $B$, tree $T$ is chosen uniformly, with probability $\frac{1}{\NB}.$ This means the probability to choose a particular tree is: \begin{equation} P(T|\lambda)\propto P(T|B_T)P(B_T|\lambda) \propto \frac{1}{\NBT} \NBT \left[\prod_{i \in I}e^{-\lambda} \frac{\lambda^{b_i}}{b_i!}\right]\left[\prod_{l \in L}e^{-\lambda}\right]=\left[\prod_{i \in I}e^{-\lambda} \frac{\lambda^{b_i}}{b_i!}\right]\left[\prod_{l \in L}e^{-\lambda}\right]. \label{branchtreeprior} \end{equation} \noindent\textbf{Tree Propagation Prior:} After the tree structure is determined, we need a generative process for how the data propagate through each internal node. We denote $\theta_{l}$ as the probability to land in leaf $l$. We denote $\widetilde{\theta}_{ij}$ as the probability to traverse to node $j$ from internal node $i$. Notation $\bm{\theta}$ is the vector of leaf probabilities (the $\theta_l$'s), $\widetilde{\bm{\theta}}$ is the set of all $\widetilde\theta_{ij}$'s, and $\widetilde{\bm{\theta}}_i$ is the set of all internal node transition probabilities from node $i$ (the $\widetilde{\theta}_{ij}$'s). We compute $P(\widetilde{\bm{\theta}_i}|\alpha, T)$ for all internal nodes $i$ of tree $T$. At each internal node, we draw a sample from a Dirichlet distribution with parameter $[\alpha,\ldots, \alpha]$ (of size equal to the number of branches $b_i$ of $i$) to determine the proportion of data, $\widetilde{\theta}_{i,j}$, that should go along the branch leading to each child node $j$ from the internal parent node $i$. 
Thus, $\widetilde{\bm\theta_i} \sim \textrm{Dir}(\bm\alpha)$ for each internal node $i$, that is: $$P(\widetilde{\bm{\theta}}_i|\bm\alpha,T)=\frac{1}{B_{b_i}(\bm{\alpha})}\prod_{j \in C_i} \widetilde{\theta}_{ij}^{\alpha-1},$$ where $B_k(\bm{\alpha})$ is the normalizing constant for the Dirichlet distribution with parameter ${\alpha}$ and $k$ categories, and $C_i$ are the indices of the children of $i$. Thus, \begin{equation}\label{model2priorsecondterm} P(\widetilde{\bm{\theta}}|\bm\alpha,T)=\prod_i P(\widetilde{\bm{\theta}}_i|\bm\alpha,T) = \prod_i \frac{1}{B_{b_i}(\bm{\alpha})}\prod_{j \in C_i} \widetilde{\theta}_{ij}^{\alpha-1}.\end{equation} Thus, the prior is $P(T|\lambda) \cdot P(\widetilde{\bm{\theta}}|\alpha,T)$, where $P(T|\lambda)$ is in (\ref{branchtreeprior}) and $P({\widetilde{\bm\theta}}|\alpha,T)$ is in (\ref{model2priorsecondterm}). In summary, the prior of our model is as follows, given hyperparameters $\lambda$ and $\alpha$: \begin{eqnarray*}\label{priormodel} \textrm{Multiset of branches: } B_T &\propto & \NBT \left[\prod_{i \in I}e^{-\lambda} \frac{\lambda^{b_i}}{b_i!}\right]\left[\prod_{l \in L}e^{-\lambda}\right].\\ \textrm{Tree shape }\;:T& \sim & \textrm{Uniform over trees with branches } B_T.\\ \textrm{Prior distribution over each branch: }\widetilde{\bm\theta}_i &\sim & \textrm{Dir}(\alpha). \end{eqnarray*} \noindent \textbf{Likelihood:}\\ The density within leaf $l$ is uniform and equal to $$P(X=x|X \in \text{leaf } l)= \left\{ \begin{array}{cc} \frac{1}{\mathbb{V}_l}, & x \in \text{\text{leaf } l,} \\ 0, & \text{otherwise.} \end{array}\right.$$ We denote the set $P_l$ as the set of branches in the path from the root of the tree to the leaf $l$. 
The probability of $X$ taking on value $x$ (permitting that $x$ is an allowed outcome in leaf $l$) is thus: \begin{align*} P(X=x , X \in \text{leaf } l) &=P(X=x|X \in \text{leaf } l)\cdot P(X \in \text{leaf } l) \\ &= \frac{1}{\mathbb{V}_l} \cdot \theta_l = \frac{\prod_{(\hat{i},\hat{c}) \in P_l} \widetilde{\theta}_{\hat{i},\hat{c}}}{\mathbb{V}_l}. \end{align*} Denote the set of children of node $i$ by $C_i$ and the number of data points in node $c$ as $n_c$. It is true that: $$\prod_{l \in L} \left(\prod_{(\hat{i},\hat{c})\in P_l} \widetilde{\theta}_{\hat{i},\hat{c}}\right)^{n_l}=\prod_{i \in I}\prod_{c \in C_i} \widetilde{\theta}_{i,c}^{n_{c}}.$$ The equality stems from two distinct ways of counting the branches that a particular data point passes through from the root to the leaf. The first way of counting is to start from the leaf and count backwards from the leaf to the root (depth first). The second way of counting is by examining each internal node (breadth first). Hence the likelihood of a particular data set can be written: $$P(X|{\widetilde{\bm{\theta}}},T)=\prod_{l \in L} \left(\frac{\prod_{(\hat{i},\hat{c})\in P_l} \widetilde{\theta}_{\hat{i},\hat{c}}}{\mathbb{V}_l}\right)^{n_l}=\frac{\prod_{i \in I}\prod_{c \in C_i} \widetilde{\theta}_{i,c}^{n_{c}}}{\prod_{l \in L} \mathbb{V}_l^{n_l}}.$$ \noindent \textbf{Posterior:}\\ The posterior is proportional to the prior times the likelihood terms. Here we are integrating over the $\widetilde{\bm{\theta}}_i$ terms for each of the internal nodes $i$. 
\begin{align*} &P(T|\lambda, \alpha, X) \\ &\propto \int P(B_T|\lambda) \cdot P(T|B_T) \cdot P(\widetilde{\bm{\theta}}| \alpha,T) \cdot P(X| \widetilde{\bm{\theta}},T) d\widetilde{\bm{\theta}} \\ &\propto \left[\prod_{l \in L} \left(\frac{ e^{-\lambda}}{\mathbb{V}_l^{n_l}}\right)\right]\left(\prod_{i \in I} e^{-\lambda} \frac{\lambda^{b_i}}{b_i!}\frac{1}{B_{b_i}(\alpha, \ldots, \alpha)} \int_{\widetilde{\bm{\theta}}_i \in \text{simplex}} \prod_{c \in C_i} \widetilde{\theta}_{i,c}^{\alpha-1} \widetilde{\theta}_{i,c}^{n_{c}}d \widetilde{\bm{\theta}}_i \right) \\ &= e^{-\lambda(|I|+|L|)} \left(\prod_{i \in I} \frac{\lambda^{b_i}}{b_i!}\frac{1}{B_{b_i}(\alpha, \ldots, \alpha)} \int_{\widetilde{\bm{\theta}}_i \in \text{simplex}} \prod_{c \in C_i} \widetilde{\theta}_{i,c}^{n_{c}+\alpha-1} d \widetilde{\bm{\theta}}_i \right) \prod_{l \in L} \left(\frac{ 1}{\mathbb{V}_l^{n_l}}\right)\\ &= e^{-\lambda(|I|+|L|)} \lambda^{\sum_{i \in I} b_i} \left(\prod_{i \in I} \frac{1}{b_i!}\frac{B_{b_i}(\alpha+n_{c_1}, \ldots, \alpha+n_{c_{b_i}})}{B_{b_i}(\alpha, \ldots, \alpha)} \right) \prod_{l \in L} \left(\frac{ 1}{\mathbb{V}_l^{n_l}}\right)\\ &= e^{-\lambda(|I|+|L|)} \lambda^{ |L|+|I|-1 } \left(\prod_{i \in I} \frac{1}{b_i!}\frac{B_{b_i}(\alpha+n_{c_1}, \ldots, \alpha+n_{c_{b_i}})}{B_{b_i}(\alpha, \ldots, \alpha)} \right) \prod_{l \in L} \left(\frac{ 1}{\mathbb{V}_l^{n_l}}\right) \end{align*} where $c_1,\ldots,c_{b_i} \in C_i$ in the second last expression. We used the equation $\sum_{i \in I} b_i=|L|+|I|-1$ for a tree in the last line. We use a specialized simulated annealing method to search for the maximum a posteriori tree. Our algorithm moves among neighboring trees and records the best tree that has been found so far. 
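Abstractly, the search has the shape of a standard simulated-annealing loop: propose a neighbor, accept by the Metropolis rule under a cooling temperature, and track the best state seen so far. A generic sketch (the cooling schedule and toy objective below are ours, not the paper's specialized moves):

```python
import math
import random

def anneal(initial_state, neighbors, log_posterior, steps=10000, t0=1.0, seed=0):
    """Generic simulated-annealing skeleton: maximize log_posterior over states."""
    rng = random.Random(seed)
    state = initial_state
    score = log_posterior(state)
    best_state, best_score = state, score
    for step in range(1, steps + 1):
        temperature = t0 / math.log(step + 1)   # a common cooling schedule
        proposal = rng.choice(neighbors(state))
        prop_score = log_posterior(proposal)
        # Always accept uphill moves; accept downhill moves with
        # probability exp((prop - cur) / T).
        if prop_score >= score or rng.random() < math.exp((prop_score - score) / temperature):
            state, score = proposal, prop_score
            if score > best_score:
                best_state, best_score = state, score
    return best_state, best_score

# Toy check on integers with a peaked objective: the chain finds the mode.
best, _ = anneal(0, lambda s: [s - 1, s + 1], lambda s: -abs(s - 7), steps=5000)
assert best == 7
```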
The description of this method is in the appendix.\\ \noindent \textbf{Possible Extension:} We can include an upper layer of the hierarchical Bayesian model to control (regularize) the number of features $d$ that are used in the cascade out of a total of $p$ dimensions. This would introduce an extra multiplicative factor within the posterior of $\left( \begin{array}{c} p \\ d \end{array} \right)\gamma^d (1-\gamma)^{p-d}$, where $\gamma$ is a parameter between 0 and 1; a smaller value favors a simpler model. \begin{eqnarray*} \lefteqn{\left( \begin{array}{c} p \\ d \end{array} \right)\gamma^d (1-\gamma)^{p-d} e^{-\lambda(|I|+|L|)} \lambda^{|I|+|L|-1}} \\ &&\left(\prod_{i \in I} \frac{1}{b_i!}\frac{B_{b_i}(\alpha+n_{c_1}, \ldots, \alpha+n_{c_{b_i}})}{B_{b_i}(\alpha, \ldots, \alpha)} \right) \prod_{l \in L} \left(\frac{ 1}{\mathbb{V}_l^{n_l}}\right). \end{eqnarray*} \subsection{Model \RN{3}: Leaf-based Density Rule List} Rather than producing a general tree, an alternative approach is to produce a rule list. A rule list is a one-sided tree. Rule lists are easier to optimize than trees. Each tree can be expressed as a rule list; however, some trees may be more complicated to express as a rule list. By using lists, we implicitly hypothesize that the full space of trees may not be necessary and that simpler rule lists may suffice. An example of a density rule list is as follows: \textbf{if} $x$ obeys $a_1$ \textbf{then} density$(x)=f_1$ \textbf{else if} $x$ obeys $a_2$ \textbf{then} density$(x)=f_2$ $\vdots$ \textbf{else if} $x$ obeys $a_m$ \textbf{then} density$(x)=f_m$ \textbf{else} density$(x)=f_0$. The antecedents $a_1$,...,$a_m$ are chosen from a large pre-mined collection of possible antecedents, called $A$. We define $A$ to be the set of all possible antecedents of size at most $H$, where the user chooses $H$. 
The size of $A$ is: $$|A|=\sum_{j=0}^H A_j,$$ where $A_j$ is the number of antecedents of size $j$, $$A_j=\sum_{ \left[ \begin{array}{c} t_1, t_2, \ldots, t_j \in \left\{ 1,\ldots, p \right\} \\ \text{s.t. } t_1 > t_2 > \cdots >t_j \end{array}\right]} \prod_{i=1}^j q_{t_i},$$ where feature $i$ consists of $q_i$ categories. \noindent \textbf{Generative Process:} We now sketch the generative model for the rule list from the observations $x$ and antecedents $A$. Prior parameters $\lambda$ and $\eta$ are used to indicate preferences over the length of the density list and the number of conjunctions in each sub-rule $a_i$. Define $a_{<j}$ as the antecedents before $j$ in the rule list if there are any. For example $a_{<3}=\left\{ a_1, a_2 \right\}$. Similarly, let $c_{<j}$ be the cardinalities of the antecedents before $j$ in the rule list. Let $d$ denote the rule list. The generative model is as follows, following the exposition of \cite{LethamRuMcMa15}: \begin{enumerate} \item Sample a decision list length $m \sim P(m | A, \lambda)$. \item For decision list rule $j=1,\ldots, m:$ \\ Sample the cardinality of antecedent $a_j$ in $d$ as $c_j \sim P(c_j| c_{<j}, A, \eta)$. \\ Sample $a_j$ of cardinality $c_j$ from $P(a_j|a_{<j},c_j,A)$. \item For observation $i=1, \ldots, n$: Find the antecedent $a_j$ in $d$ that is the first that applies to $x_i$. If no antecedent in $d$ applies, set $j=0$. \item Sample parameter ${\bm{\theta}} \sim$ Dirichlet ($\bm{\alpha}$) for the probability to be in each of the leaves, where $\bm{\alpha}$ is a user-chosen vector of size $m+1$, usually where all elements are the same. $f_i=\frac{\theta_i}{\mathbb{V}_i}$, where $\mathbb{V}_i$ is the volume. 
\end{enumerate} \noindent\textbf{Prior:}\\ The distribution of $m$ is the Poisson distribution, truncated at the total number of preselected antecedents: $$P(m|A,\lambda)=\frac{\lambda^m/m!}{\sum_{j=0}^{|A|} (\lambda^j/j!)}, m=0, \ldots, |A|.$$ When $|A|$ is huge, we can use the approximation $P(m|A, \lambda) \approx \lambda^m/m!$, as the denominator of the previous term would be close to 1. We let $R_j(c_1,\ldots, c_j,A)$ be the set of antecedent cardinalities that are available after drawing antecedent $j$, and we let $P(c_j|c_{<j},A, \eta)$ be a Poisson truncated to remove values for which no rules are available with that cardinality: $$P(c_j|c_{<j},A,\eta)=\frac{(\eta^{c_j}/c_j!)}{\sum_{k \in R_{j-1}(c_{<j},A)}(\eta^k/k!)}, \hspace{5pt} c_j \in R_{j-1}(c_{<j},A).$$ We use a uniform distribution over antecedents in $A$ of size $c_j$ excluding those in $a_{<j}$, $$P(a_j|a_{<j},c_j,A) \propto 1, \hspace{20pt} a_j \in \left\{ a \in A \setminus a_{<j}: |a|=c_j\right\}.$$ The cascaded prior for the antecedent lists is thus: $$P(d|A, \lambda, \eta)=P(m|A,\lambda) \cdot \prod_{j=1}^m P(c_j|c_{<j},A, \eta) \cdot P(a_j|a_{<j},c_j,A).$$ The prior distribution over the leaves $\bm{\theta}=[\theta_1,\ldots,\theta_m,\theta_0]$ is drawn from Dir($\bm{\alpha}_{m+1}$): $$P(\bm{\theta}|\alpha)=\frac{1}{B_{m+1}(\alpha, \cdots,\alpha)}\prod_{l=0}^m \theta_l^{\alpha-1}.$$ It is straightforward to sample an ordered antecedent list $d$ from the prior by following the generative model that we just specified, generating rules from the top down. \noindent \textbf{Likelihood:}\\ \noindent Similar to the first model, the probability to land at any specific value within leaf $l$ is $\frac{\theta_l}{\mathbb{V}_l}$. 
Hence, the likelihood for the full data set is: $$P(X|\bm{\theta},d)=\prod_{l=0}^{m}\left( \frac{\theta_l}{\mathbb{V}_l} \right)^{n_l}.$$ \noindent \textbf{Posterior:} \\ \noindent The posterior can be written as \begin{eqnarray*} \lefteqn{ P(d|A,\lambda, \eta, \alpha, X) } \\ &\propto &\int_{\bm{\theta} \in \textrm{simplex}} P(d|A,\lambda, \eta) \cdot P(\bm{\theta} | \alpha) \cdot P(X|\bm{\theta}, d) d\bm{\theta} \\ & = & P(d|A, \lambda, \eta) \int_{\bm{\theta} \in \textrm{ simplex}} \frac{1}{B_{m+1}(\alpha, \cdots,\alpha)} \prod_{l=0}^m \theta_l^{\alpha-1}\left(\frac{\theta_l}{\mathbb{V}_l}\right)^{n_l} d \bm{\theta}\\ &=& P(d|A, \lambda, \eta)\frac{\prod_{l=0}^m\Gamma{(n_l+\alpha)} \mathbb{V}_l^{-n_l}}{\Gamma(\sum_{l=0}^m (n_l+\alpha))}, \end{eqnarray*} where the last equality uses the standard Dirichlet-multinomial distribution derivation. To search for optimal rule lists that fit the data, we use local moves (adding rules, removing rules, and swapping rules) and use the Gelman-Rubin convergence diagnostic applied to the log posterior function. A technical challenge that we need to address in our problem is the computation of the volume of a leaf. Volume computation is not needed in the construction of a decision list classifier like that of \citet{LethamRuMcMa15}, but it is needed in the computation of a density list. There are multiple ways to compute the volume of a leaf of a rule list. The first set of approaches does not require the overhead of creating a complicated data structure, and thus might be better for smaller problems.\\ \textit{Approach 1}: create uniform data over the whole domain, and count the number of points that satisfy the antecedents. This approach would be expensive when the domain is huge but easy to implement for smaller problems. \\ \textit{Approach 2}: use an MCMC sampling approach to sample uniformly from the whole domain space. 
This approach is again not practical when the domain size is huge, as the number of samples required will increase exponentially due to the curse of dimensionality.\\ \textit{Approach 3}: use the inclusion-exclusion principle to directly compute the volume of each leaf. Consider computing the volume of the $i$-th leaf. Let $V_{a_i}$ denote the volume induced by the rule $a_i$, that is, the number of points in the domain that satisfy $a_i$. To belong to that leaf, a data point has to satisfy $a_i$ and none of $a_{<i}$. Hence the volume of the $i$-th leaf is equal to the volume obeying $a_i$ alone, minus the volume that has been used by earlier rules. Hence, we have the following: \begin{align*} \mathbb{V}_i&=V_{a_i \wedge \bigwedge_{k=1}^{i-1} a_k^c}\\&=V_{a_i}-V_{a_i \wedge (\bigvee_{k=1}^{i-1} a_k)} \\&=V_{a_i}-V_{ (\bigvee_{k=1}^{i-1}a_i \wedge a_k)} \\&=V_{a_i}-\sum_{k=1}^{i-1}(-1)^{k+1}\sum_{1\leq j_1 < \ldots < j_k \leq i-1} V_{a_i\wedge a_{j_1} \wedge \ldots \wedge a_{j_k}}, \end{align*} where the last expression follows from the inclusion-exclusion principle and involves only volumes of conjunctions. The volume of a conjunction can be easily computed from data. Without loss of generality, suppose we want to compute the volume $V_{a_1 \wedge \ldots \wedge a_k}$. For each feature that appears, we examine whether there is any contradiction. For example, if feature 1 is present in both $a_1$ and $a_2$ and they specify feature 1 to take different values, then we have found a contradiction and the volume is 0. If there is no contradiction, then the volume is equal to the product of the numbers of distinct categories of all the features that are not used. By using the inclusion-exclusion principle, we reduce the problem to just computing volumes of conjunctions; however, computing these volumes efficiently requires a clever data structure. This would be suitable for larger problems but might slow down computations for smaller problems.
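To make Approach 3 concrete, here is a minimal sketch (not the paper's code). It assumes a hypothetical representation in which each antecedent is a dict mapping a feature name to its required value, and `n_categories` maps each feature to its number of distinct categories:

```python
from itertools import combinations

def conjunction_volume(antecedents, n_categories):
    """Number of domain configurations satisfying every antecedent in the list.

    `antecedents`: iterable of dicts mapping feature -> required value.
    `n_categories`: dict mapping feature -> number of distinct categories.
    """
    constrained = {}
    for a in antecedents:
        for feature, value in a.items():
            if constrained.get(feature, value) != value:
                return 0  # two rules pin the same feature to different values
            constrained[feature] = value
    volume = 1
    for feature, k in n_categories.items():
        if feature not in constrained:
            volume *= k  # unconstrained features contribute all their categories
    return volume

def leaf_volume(i, rules, n_categories):
    """Volume of the i-th leaf via inclusion-exclusion over rules a_1..a_{i-1}."""
    earlier = rules[:i]
    v = conjunction_volume([rules[i]], n_categories)
    for k in range(1, len(earlier) + 1):
        sign = (-1) ** (k + 1)
        for subset in combinations(earlier, k):
            v -= sign * conjunction_volume([rules[i]] + list(subset), n_categories)
    return v
```

Note that `leaf_volume` enumerates all subsets of the earlier rules, so this sketch is only practical for short lists; as noted above, a cleverer data structure is needed for larger problems.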
\section{Experiments} Our experimental setup is as follows. We considered five models: the leaf-based cascaded histograms, the branch-based cascaded histograms, the leaf-based density list, regular histograms and density estimation trees (DET) \citep{Ram:2011:DET:2020408.2020507}. To our knowledge, this essentially represents the full set of logical, high-dimensional density estimation methods. To assess uncertainty, we split the data in half 5 times randomly and assessed test log-likelihood and sparsity of the trees for each method. A model with fewer bins and higher test likelihood is a better model. For the histogram, we treated each possible configuration as a separate bin. DET was designed for continuous data, which meant that the computation of volume needed to be adapted -- it is the number of configurations in the bin (rather than the lengths of each bin multiplied together). The DET method has two parameters, the minimum allowable support in a leaf, and the maximum allowable support. We originally planned to use a minimum of 0 and a maximum of the size of the full dataset, but the algorithm often produced trivial models when we did this. We also tried values $\left\{ 0,3,5\right\}$ for the minimum and $\left\{ 10,n,\lfloor\frac{n}{2} \rfloor \right\}$ for the maximum, where $n$ is the number of training data points, and reported results for the best of these. For the leaf-based cascade model, the mean of the Poisson prior was chosen from the set $\left\{ 5,8\right\}$ using nested cross validation. For the branch-based cascade model, the parameter to control the number of branches was chosen from the set $\left\{ 2,3\right\}$. $\gamma$ was fixed to be 0.5, and $\alpha$ was set to be 2 for the experiment. For the leaf-based density list model, the parameters $\lambda$, $\eta$ and $\alpha$ were chosen to be 3, 1 and 1, respectively. \subsection{An Experiment on the Titanic Data Set} The Titanic dataset has an observation for each of the 2201 people aboard the Titanic.
There are 3 features: gender, whether someone is an adult, and the class of the passenger (first class, second class, third class, or crew member). A cascade would help us understand the set of people on board the Titanic. \begin{figure} \caption{The scatter plot for the Titanic data set. } \label{Figtitanic} \end{figure} Figure \ref{Figtitanic} shows the results, both out-of-sample likelihood and sparsity, for each model, for each of the 5 folds. The histogram method had high likelihood, but also the most leaves (by design). The other methods performed similarly; arguably, the leaf-based density list method performed slightly better in the likelihood-sparsity tradeoff. DET produced a trivial tree for one of the splits. In general, we will see similar results on other datasets: the histogram produces too many bins, the leaf-based density list model and leaf-based cascade perform well, and DET has inconsistent performance (possibly due to its top-down greedy nature, or the fact that DET approximately optimizes Hellinger distance rather than likelihood). Figure \ref{Figtitanictree} shows one of the density cascades generated by the leaf-based method. The reason for the top split is clear: the distributions of the males and females were different, mainly due to the fact that the crew was mostly male. There were fewer children than adults, and the volume of crew members was very different from the volume of $1^{\textrm{st}}, 2^{\textrm{nd}}$, and $3^{\textrm{rd}}$ class passengers. Figure \ref{Figtitaniclist} shows one of the density lists generated by our model. It shows that male crew and third class male adults have higher density. \begin{figure} \caption{Tree representing the Titanic data set.} \label{Figtitanictree} \end{figure} \begin{figure} \caption{List representing the Titanic data set. Each arrow represents an ``else if" statement. This can be directly compared to the cascade in Figure \ref{Figtitanictree}.
Slight differences in estimates between the two models occurred because we used different splits of data for the two figures. The estimates were robust to the change in data.} \label{Figtitaniclist} \end{figure} \subsection{Crime Dataset} The housebreak data used for this experiment were obtained from the Cambridge Police Department, Cambridge, Massachusetts. The motivation is to understand the common types of modus operandi (M.O.) characterizing housebreaks, which is important in crime analysis. The data consist of 3739 separate housebreaks occurring in Cambridge between 1997 and 2012 inclusive. We used 6 categorical features. \begin{enumerate} \item Location of entry: ``window," ``door," ``wall," and ``basement." \item Means of entry: ``forceful" (cut, broke, cut screen, etc.), ``open area," ``picked lock," ``unlocked," and ``other." \item Whether the resident is inside. \item Whether the premise is judged to be ransacked by the reporting officer. \item ``Weekday" or ``Weekend." \item Type of premise. The first category is ``Residence" (including apartment, residence/unk., dormitory, single-family house, two-family house, garage (personal), porch, apartment hallway, residence unknown, apartment basement, condominium). The second category is ``non-medical, non-religious work place" (commercial unknown, accounting firm, research, school). The third group consists of halfway houses, nursing homes, medical buildings, and assisted living. The fourth group consists of parking lots and parking garages, and the fifth group consists of YWCAs, YMCAs, and social clubs. The remaining groups are ``storage," ``construction site," ``street," and ``church," respectively. \end{enumerate} \begin{figure} \caption{The scatter plot for the Cambridge Police Department data set.} \end{figure} The experiments show that DET and our approaches are competitive for the crime data set. (The histogram's results involve too many bins to fit on the figure.)
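A back-of-the-envelope count suggests why the full histogram baseline produces so many bins on these data. The sketch below (ours, not the paper's code) reads the category counts off the feature enumeration above; the feature names are hypothetical labels:

```python
from math import prod  # Python 3.8+

# Category counts per feature, read off the enumeration above:
# 4 entry locations, 5 means of entry, 2 resident-inside values,
# 2 ransacked values, 2 day types, and 9 premise-type groups.
n_categories = {"location": 4, "means": 5, "resident": 2,
                "ransacked": 2, "day": 2, "premise": 9}

# The histogram baseline uses one bin per configuration of the domain.
n_bins = prod(n_categories.values())
print(n_bins)  # 1440 bins for only 3739 observed housebreaks
```

With 1440 bins and 3739 observations, many bins receive few or no points, which is consistent with the histogram's poor sparsity in the figure.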
Let us discuss one of the trees obtained from the leaf-based cascade method, where we set the mean of the Poisson distribution to be chosen from the set $\left\{ 20,30\right\}$. The tree is in Figure \ref{fig:crimetree}. It states that most burglaries happen at residences -- the non-residential density has values less than $1 \times 10^{-4}.$ Given that a crime scene is a residence, most crimes happened when the resident was not present. If the premise is a residence and the resident was present for the housebreak, the burglary is more likely to happen on a weekday, in which case most burglaries involve forceful means of entry (density = $9.16 \times 10^{-3}$). When the premise is a residence and the resident was not present, the location of entry is usually either a window or a door. Given this setting: \begin{enumerate} \item If the means of entry is forceful, most crimes happen on weekdays, and in that case it is almost twice as likely that the location of entry is a door (density=0.15) compared to a window (density=0.078). If the crime happened on a weekend, it is more likely for the crime scene not to be ransacked (density=$5.50 \times 10^{-2}$) as compared to being ransacked (density=$2.41 \times 10^{-3}$). \item If the means of entry is either an open area or a lock is picked, it is more likely to be on a weekday (density=$2.48 \times 10^{-3}$) compared to a weekend (density=$7.30 \times 10^{-3}$). \item If the means of entry is none of the above, it is almost three times more likely to happen on a weekday (density=$3.60 \times 10^{-3}$) compared to a weekend (density=$1.07 \times 10^{-3}$). \end{enumerate} These types of results can be useful for crime analysts to assess whether a particular modus operandi is unusual. A density list for these data is presented in Figure \ref{fig:crimelist}. \section{Empirical Analysis} Each subsection below is designed to provide insight into how the models operate.
\subsection{Sparse Tree Dataset} We generated a dataset that arises from a tree with 6 leaves, involving 3 features. The data consist of 1000 data points, where 100 points are tied at the value (1,2,1), 100 points are at (1,2,2), 100 points are at (2,1,1), 400 points are at (2,1,2), and 300 points are at (2,2,2). The correct tree is in Figure \ref{Fig:simdata}. \begin{figure} \caption{Performance vs sparsity on the sparse tree data set.} \label{Fig:sparsetreescatter} \end{figure} \begin{figure} \caption{Tree diagram for the sparse tree data set.} \label{Fig:simdata} \end{figure} \begin{figure} \caption{Output for the leaf-based model that recovers the data structure.} \label{Fig:recoversimdata} \end{figure} \begin{figure} \caption{List output for the sparse tree data set.} \label{Fig:listsimdata} \end{figure} We trained the models on half of the dataset and tested on the other half. Figure \ref{Fig:sparsetreescatter} shows the scatter plot of out-of-sample performance and sparsity. This is a case where DET failed badly to recover the true model. It produced a model that was too sparse, with only 4 leaves. The leaf-based cascade method recovered the full tree from Figure \ref{Fig:simdata}, and we present the tree in Figure \ref{Fig:recoversimdata}. The output for the corresponding density list is presented in Figure \ref{Fig:listsimdata}. \subsection{Extreme Uniform Dataset} We generated a 1-dimensional data set that consists of 100 data points. The data are simply all unique integers from 1 to 100. This is a case where the histogram badly fails to generalize. Figure \ref{Fig:extreme} shows the result. \begin{figure} \caption{Performance vs sparsity on uniform data. } \label{Fig:extreme} \end{figure} The leaf-based and branch-based models both return the solution that consists of a single root node, implying that the data are in fact uniformly distributed, or at least that we do not have evidence to further split on the single node.
The density list output is close to uniform as well. DET is also competitive, though it does not return the trivial tree. The histogram totally fails, since the test data and training data do not overlap at all. \section{Consistency} A consistent model has estimates that converge to the real densities as the size of the training set grows. Consistency of conventional histograms is well studied; see, for example, \cite{abou1976conditions,devroye1983distribution}. More generally, consistency for general rectangular partitions has been studied by \cite{zhao1991almost,lugosi1996consistency}. Typical consistency proofs (e.g., \cite{citeulike:3463149,Ram:2011:DET:2020408.2020507}) require the leaf diameters to become asymptotically smaller as the size of the data grows. In our case, if the ground truth density is a tree, we do not want our models to asymptotically produce smaller and smaller bin sizes; we would rather they reproduce the ground truth tree. This means we require a new type of consistency proof. \\ \noindent\textbf{Definition 1}: \textit{Density trees} have a single root and there are conditions on each branch. A density value $f_l$ is associated with each leaf $l$ of the tree. \\ \noindent\textbf{Definition 2}: Two trees, $T_1$ and $T_2$, are \textit{equivalent} with respect to density $f$ if they assign the same density values to every data point on the domain, $f_{T_1}(x)=f_{T_2}(x)$, for all $x$. We denote the class of trees that are equivalent to $T$ as $[T]_f$.\\ \noindent\textbf{Theorem 1}: Let $\Theta$ be the set of all density trees. Consider these conditions: \begin{enumerate} \item $T_n \in \argmax_T \textrm{Obj}(T)$. The objective function can be decomposed into $\textrm{Obj}(T)=\ln q_n(T|X)+\ln g_n(T|X)$ where \\ $\argmax_T \left[ \ln q_n(T|X)+ \ln g_n(T|X) \right]\equiv \argmax_T \ln g_n(T|X)$ as $n \rightarrow \infty$.
\item $\ln g_n(T|X)$ converges in probability, for any tree $T$, to the empirical log-likelihood that is obtained by the maximum likelihood principle, $\hat{l}_n(T|X)=\frac{1}{n}\sum_{i=1}^n \ln \hat{f}_n(x_i|T)$. \item $\sup_{T \in \Theta}|\hat{l}_n(T|X)-l(T)| \xrightarrow{P} 0$ where $l(T)=\mathbb{E}_x(\ln(f(x|T)))$. \item $T^*_{\textrm{MLE}}\in \argmax_Tl(T)$ is unique up to equivalence among elements of $[T^*_{\textrm{MLE}}]_f$. \end{enumerate} If these conditions hold, then the trees $T_n$ that we learned, $T_n\in \argmax_T \textrm{Obj}(T)$, obey $T_n\in [T^*_{\textrm{MLE}}]_f$ for all $n >M$, for some $M$. The first and second conditions hold any time we use a Bayesian model. They also hold any time we use regularized empirical likelihood where the regularization term's effect fades with the number of observations. Note that the third condition is a uniform law of large numbers and holds under standard regularity conditions. The last condition is not automatically true, and requires regularity conditions for identifiability. The result states that our learned trees are equivalent to maximum likelihood trees when there are enough data. The proof is presented in Appendix C. \section{Conclusion} We have presented a Bayesian approach to density estimation using cascaded piecewise constant estimators. These estimators have nice properties: their prior encourages them to be sparse, which permits interpretability. They do not have the pitfalls of other nonparametric density estimation methods like density estimation trees, which are top-down greedy. They are consistent, without needing to asymptotically produce infinitesimally small leaves. Practically, the approaches presented here have given us insight into a real data set (the housebreak dataset from the Cambridge Police) that we could not have obtained reliably in any other way. \acks{We would like to acknowledge support for this project from Accenture, Siemens, and Wistron.
} \begin{figure*} \caption{Tree representing the crime data set.} \label{fig:crimetree} \end{figure*} \begin{figure} \caption{List representing the crime data set. Each arrow represents an ``else if" statement.} \label{fig:crimelist} \end{figure} \appendix \section*{Appendix A. Simulated Annealing Algorithm} \label{app:theorem} At each iteration we need to determine which neighboring tree to move to. To decide, we fix a parameter $\epsilon>0$ beforehand, where $\epsilon$ is small. At each step, we generate a number from the uniform distribution on $(0,1)$; then:\\ 1. If the number is smaller than $\frac{1-\epsilon}{4}$, we select uniformly at random a parent which has leaves as its children, and remove its children. This is always possible unless the tree is the root node itself, in which case we cannot remove anything and this step is skipped. \\ 2. If the random number is between $\frac{1-\epsilon}{4}$ and $\frac{1-\epsilon}{2}$, we pick a leaf randomly and a feature randomly. If it is possible to split on that feature, then we create children for that leaf. (If the feature has been used up by the leaf's ancestors, we cannot split.) \\ 3. If the random number is between $\frac{1-\epsilon}{2}$ and $\frac{3(1-\epsilon)}{4}$, we pick a node randomly, delete its descendants and split the node into two nodes containing subsets of the node's outcomes. Sometimes this is not possible, for example if we pick a node where all the features have been used up by the node's ancestors, or if the node has only one outcome. In that case we skip this step. \\ 4. If the random number is between $\frac{3(1-\epsilon)}{4}$ and $(1-\epsilon)$, we choose two nodes that share a common parent, delete all their descendants and merge the two nodes.\\ 5. If the random number is more than $1-\epsilon$, we perform a structural change operation where we remove all the children of a randomly chosen node of the tree. The last three actions avoid problems with local minima.
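The move-selection schedule above can be sketched as follows (a minimal illustration, not the paper's implementation; the move names are hypothetical labels for the five steps):

```python
def propose_move(u, eps):
    """Map a uniform draw u in (0,1) to one of the five proposal moves.

    Thresholds follow the schedule above for a fixed small eps.
    """
    if u < (1 - eps) / 4:
        return "remove_children"    # step 1: prune a parent of leaves
    if u < (1 - eps) / 2:
        return "split_leaf"         # step 2: split a random leaf on a random feature
    if u < 3 * (1 - eps) / 4:
        return "split_node"         # step 3: split a node's outcomes into two nodes
    if u < 1 - eps:
        return "merge_siblings"     # step 4: merge two nodes sharing a parent
    return "remove_subtree"         # step 5: remove all children of a random node
```

With $\epsilon$ small, the first four moves are chosen almost uniformly, and the structural-change move in step 5 fires with probability $\epsilon$.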
The algorithms can be warm started using solutions from other algorithms, e.g., DET trees. We found it useful to occasionally reset to the best tree encountered so far or to the trivial root node tree. Ordinal data are treated differently from binary categorical data in that merges and splits in Steps 3 and 4 are done on neighboring ordinal values. \section*{Appendix B. Evaluation Metric} We discuss evaluation metrics for use in out-of-sample testing and parameter tuning with nested cross-validation. The natural evaluation metric is the likelihood of the trained model calculated on the test data. Hellinger distance is an alternative \citep{hellinger1909neue}; however, we prefer likelihood for two reasons. (i) To compute the Hellinger distance, it is assumed that the real distribution is known, when in reality it is not. (ii) The Hellinger distance would be approximated by $2-\frac{2}{n}\sum_{i=1}^n \sqrt{\frac{\hat{f}(x_i)}{f(x_i)}}$ \citep{AISTATS07_LiuLW} where $f$ is not known in practice. The estimate of the Hellinger distance often comes out negative, which is nonsensical. An alternative is to use a least squares criterion \citep[see][]{Load:1999}. We consider the risk for tree $h$: \begin{align*} L(h) &= \int (\hat{f}_n(x)-f(x))^2 dx \\ &= \int (\hat{f}_n(x))^2 dx -2 \int \hat{f}_n(x)f(x)dx+ \int f^2(x)dx. \end{align*} Since the true density is not known, the third term cannot be evaluated; however, it is a constant and thus can be ignored. The first two terms can be estimated using a leave-one-out estimator: $$\hat{J} (h) = \int \left( \hat{f}_n(x)\right)^2 dx-\frac{2}{n} \sum_{i=1}^{n} \hat{f}_{(-i)}(x_i),$$ where the second term estimates $\int \hat{f}_n(x)f(x)dx$ without knowing $f$, because the $x_i$ are drawn from the density $f$. We desire this value to be as negative as possible. We know that $\hat{f}(x_i)= \frac{n_l}{nV_l}$ if $i \in l$ where $l$ is a leaf.
We approximate $\hat{f}_{(-i)}$ using the assumption that the tree does not change structurally when one point is removed: $$\hat{f}_{(-i)}(x_i) \approx \frac{n_l-1}{(n-1)V_l}=\frac{n (n_l-1)}{(n-1)n_l}\hat{f}(x_i)=\frac{n (n_l-1)}{(n-1)n_l}\hat{f}_l.$$ Hence, we simplify $\hat{J}(h)$ as follows: \begin{align*} \hat{J} (h) &= \int\left( \hat{f}_n(x)\right)^2 dx-\frac{2}{n} \sum_{l \in L} n_l \hat{f}_{(-i)}(x_i) \\ &\approx \int\left( \hat{f}_n(x)\right)^2 dx-2 \sum_{l \in L} \frac{n_l-1}{(n-1)}\hat{f}_l \\ &= \sum_{l \in L} \hat{f}_l^2V_l-2 \sum_{l \in L} \frac{n_l-1}{(n-1)}\hat{f}_l \\ &= \sum_{l \in L} \left(\hat{f}_lV_l- \frac{2(n_l-1)}{(n-1)}\right) \hat{f}_l \\ &=\sum_{l \in L} \left(\frac{n_l}{n}- \frac{2(n_l-1)}{(n-1)}\right) \hat{f}_l. \end{align*} The formula can be used as an alternative evaluation metric: the more negative it is, the better the model fits the data. It can be viewed as a weighted sum of likelihood over the leaves. \section*{Appendix C. Proof of Theorem 1} By definition, $T_n$ maximizes the log-objective function, and hence, by the first condition, it is also a maximizer of $\ln g_n$ when $n$ is sufficiently large. We have that \[ \ln \textrm{Obj}(T_n|X)-\ln \textrm{Obj}(T^*_{\textrm{MLE}}|X) \geq 0 \] by definition of $T_n$ as the maximizer of \textrm{Obj}. Because \textrm{Obj} becomes close to $g_n$, we have that \begin{equation}\label{thmeqn1} \ln g_n(T_n|X) - \ln g_n(T^*_{\textrm{MLE}}|X) \geq 0 \end{equation} when $n$ is sufficiently large. From Condition 2, we know that $\ln g_n(T|X)-\hat{l}_n(T|X) \xrightarrow{P} 0$ and from Condition 3, we have $\hat{l}_n(T|X)-l(T) \xrightarrow{P} 0$. Adding these, and using the fact that convergence in probability is preserved under addition, we know that $\ln g_n(T|X) \xrightarrow{P} l(T)$. Hence, by taking the limit of (\ref{thmeqn1}) as $n$ grows, we have that $\lim_{n \rightarrow \infty} l(T_n) \geq l(T^*_{\textrm{MLE}})$.
Since $T^*_{\textrm{MLE}}$ is optimal for $l(T)$ by definition, and by Condition 4, we conclude that $T_n$ stays in $[T^*_{\textrm{MLE}}]_f$ when $n$ is sufficiently large. \section*{Appendix D. Optimal Density for the Likelihood Function} Denote the pointwise density estimate at $x$ by $\hat{f}_{n,x}=\frac{n_{x}}{n}$. Denote the density estimate for all points within leaf $l$ similarly by $\hat{f}_{n,l}=\frac{\sum_{j\in \textrm{leaf }l} n_j}{n V_{l}}$. The true density from which the data are assumed to be generated is denoted $D$. We assume that $D$ arises from a tree over the input space (otherwise we would be back to standard analysis, where the proof is well-known). \noindent \textbf{Lemma 1}: Any tree achieving the maximum likelihood on the training data has pointwise density equal to $\hat{f_n}(x)$. This means that for any leaf $l$ in the tree and for any $x$ in $l$, $\hat{f}_{n,l}=\hat{f}_{n,x}$. \noindent\textbf{Proof:} We will show that the pointwise histogram is at least as good as the tree, so that the tree attains the maximum likelihood only when the two agree. This is in some sense a version of the well-known result that the maximum likelihood is attained by the pointwise maximum likelihood. We will show: \begin{equation*} \prod_{j\in \textrm{leaf } l} \hat{f}_{n,j}^{n_j} \geq \hat{f}_{n,l}^{\sum_{j\in \textrm{leaf } l} n_j}.
\end{equation*} By taking logarithms, this reduces to \begin{align} &\sum_{j\in \textrm{leaf } l} n_j \log \hat{f}_{n,j} \geq \sum_{j\in \textrm{leaf } l} n_j \log \hat{f}_{n,l} \label{aftertakinglog}\\ &\sum_{j\in \textrm{leaf } l} n_j \log \left( \frac{n_j}{n} \right) \geq \sum_{j\in \textrm{leaf } l} n_j \log \left( \frac{\sum_{m\in \textrm{leaf } l } n_m}{n V_l}\right) \nonumber\\ &\sum_{j\in \textrm{leaf } l} n_j \log n_j \geq \sum_{j\in \textrm{leaf } l} n_j \log \left( \frac{\sum_{m\in \textrm{leaf } l} n_m}{ V_l}\right) \nonumber \\ &\sum_ {j\in \textrm{leaf } l} n_j\log \left( \frac{n_j}{\sum_{m\in \textrm{leaf } l } n_m} \right) \nonumber \\ &\geq \sum_{j\in \textrm{leaf } l} n_j \log \left( \frac{1}{ \sum_{m\in \textrm{leaf } l} V_m} \right) \nonumber \\ &\sum_{j\in \textrm{leaf } l} \frac{n_j}{\sum_{m\in \textrm{leaf } l } n_m} \log \left( \frac{n_j}{\sum_{m\in \textrm{leaf } l } n_m}\right) \nonumber \\ &\geq \sum_{j\in \textrm{leaf } l } \frac{n_j}{\sum_{m\in \textrm{leaf } l} n_m}\log \left( \frac{1}{ \sum_{m\in \textrm{leaf } l } V_m}\right).\nonumber \end{align} The last inequality is Gibbs' inequality, and hence the statement is true. To avoid a singularity, we separately consider the case when one of the $\hat{f}_{n,j}=0$. For a particular value of $q$, if $\hat{f}_{n,q}=0$, then $n_q=0$ by definition. Hence, if we include a new $x$ within the leaf that has no training examples, we will find that the left-hand side of (\ref{aftertakinglog}) remains the same, but since the volume increases when we add the new point, the quantity on the right decreases. Hence the inequality still holds. \end{document}
\begin{document} \title{An unconditionally stable pressure correction scheme for compressible barotropic {N}avier-{S}tokes equations} \author{T. Gallou\"et} \address{Universit\'e de Provence, France ([email protected])} \author{L. Gastaldo} \address{Institut de Radioprotection et de S\^{u}ret\'{e} Nucl\'{e}aire (IRSN) ([email protected])} \author{R. Herbin} \address{Universit\'e de Provence, France ([email protected])} \author{J.C. Latch\'e} \address{Institut de Radioprotection et de S\^{u}ret\'{e} Nucl\'{e}aire (IRSN) ([email protected])} \begin{abstract} We present in this paper a pressure correction scheme for the barotropic compressible Navier-Stokes equations, which enjoys an unconditional stability property, in the sense that the energy and maximum-principle-based a priori estimates of the continuous problem also hold for the discrete solution. The stability proof is based on two independent results for general finite volume discretizations, both of interest in their own right: the $L^2$-stability of the discrete advection operator, provided it is consistent, in some sense, with the mass balance, and the estimate of the pressure work by means of the time derivative of the elastic potential. The proposed scheme is built in order to match these theoretical results, and combines a fractional-step time discretization of pressure-correction type with a space discretization associating low order non-conforming mixed finite elements and finite volumes. Numerical tests with an exact smooth solution show the convergence of the scheme.
\end{abstract} \subjclass{35Q30,65N12,65N30,76M125} \keywords{Compressible Navier-Stokes equations, pressure correction schemes} \maketitle \section{Introduction} The problem addressed in this paper is the system of the so-called barotropic compressible Navier-Stokes equations, which reads: \begin{equation} \left| \begin{array}{l} \displaystyle \frac{\partial\,\rho}{\partial t}+\nabla\cdot(\rho\, u)=0 \\[2ex] \displaystyle \frac{\partial}{\partial t}(\rho\, u)+\nabla\cdot(\rho\, u\otimes u)+\nabla p -\nabla\cdot\tau(u)=f \\[2ex] \displaystyle \rho=\varrho(p) \end{array} \right. \label{nsb}\end{equation} where $t$ stands for the time, $\rho$, $u$ and $p$ are the density, velocity and pressure in the flow, $f$ is a forcing term and $\tau(u)$ stands for the shear stress tensor. The function $\varrho(\cdot)$ is the equation of state used for the modelling of the particular flow at hand, which may be the actual equation of state of the fluid or may result from assumptions concerning the flow; typically, laws such as $\rho=p^{1/\gamma}$, where $\gamma$ is a coefficient specific to the fluid considered, are obtained by making the assumption that the flow is isentropic. This system of equations is posed over $\Omega\times (0,T)$, where $\Omega$ is a domain of $\xR^d$, $d\leq 3$, supposed to be polygonal ($d=2$) or polyhedral ($d=3$), and the final time $T$ is finite. It must be supplemented by boundary conditions and by an initial condition for $\rho$ and $u$. The development of pressure correction techniques for compressible Navier-Stokes equations may be traced back to the seminal work of Harlow and Amsden \cite{har-68-num, har-71-num} in the late sixties, who developed an iterative algorithm (the so-called ICE method) including an elliptic corrector step for the pressure.
Later on, pressure correction equations appeared in numerical schemes proposed by several researchers, essentially in the finite-volume framework, using either a collocated \cite{pat-87-bar,dem-93-col,kob-96-cha,pol-97-pre,iss-98-pre,mou-01-hig} or a staggered arrangement \cite{cas-84-pre, iss-85-sol, iss-86-com, kar-89-pre, bij-98-uni, col-99-pro, van-01-sta, wal-02-sem, wen-02-mac, van-03-con, vid-06-sup} of unknowns; in the first case, some corrective actions are to be foreseen to avoid the usual odd-even decoupling of the pressure in the low Mach number regime. Some of these algorithms are essentially implicit, since the final stage of a time step involves the unknown at the end-of-step time level; the end-of-step solution is then obtained by SIMPLE-like iterative processes \cite{van-87-seg, kar-89-pre, dem-93-col, kob-96-cha, pol-97-pre, iss-98-pre, mou-01-hig}. The other schemes \cite{iss-85-sol, iss-86-com, pat-87-bar, bij-98-uni, col-99-pro, wes-01-pri, van-01-sta, wen-02-mac, van-03-con, vid-06-sup} are predictor-corrector methods, where basically two steps are performed in sequence: first, a semi-explicit decoupled prediction of the momentum or velocity (and possibly energy, for non-barotropic flows) and, second, a correction step where the end-of-step pressure is evaluated and the momentum and velocity are corrected, as in projection methods for incompressible flows (see \cite{cho-68-num, tem-69-sur} for the original papers, \cite{mar-98-nav} for a comprehensive introduction and \cite{gue-06-ove} for a review of most variants). The Characteristic-Based Split (CBS) scheme (see \cite{nit-06-cha} for a recent review or \cite{zie-95-gen} for the seminal paper), developed in the finite-element context, belongs to this latter class of methods. Our aim in this paper is to propose and study a simple non-iterative pressure correction scheme for the solution of \eqref{nsb}.
In addition, this method is designed so as to be stable in the low Mach number limit, since our final goal is to apply it to simulate by a drift-flux approach a class of bubbly flows encountered in nuclear safety studies, where pure liquid (incompressible) and pure gaseous (compressible) zones may coexist. To this purpose, we use a low order mixed finite element approximation, which meets the two following requirements: to allow a natural discretization of the viscous terms and to provide a spatial discretization that is intrinsically stable (\ie \ without the adjunction of stabilization terms to circumvent the so-called {\it inf-sup} or BB condition) in the incompressible limit. In this work, special attention is paid to stability issues. To be more specific, let us recall the {\it a priori} estimates associated with problem \eqref{nsb} with a zero forcing term, {\it i.e.} estimates which should be satisfied by any possible regular solution \cite{pll-98-mat, fei-04-dyn, nov-04-int}: \begin{equation} \left| \quad \begin{array}{llll} (i) & \displaystyle \rho(x,t) > 0, && \displaystyle \forall x \in \Omega,\ \forall t \in (0,T) \\[2ex] (ii) & \displaystyle \int_\Omega \rho(x,t)\ {\rm d} x = \int_\Omega \rho(x,0)\ {\rm d} x, && \displaystyle \forall t \in (0,T) \\[3ex] (iii) \quad & \displaystyle \frac{1}{2}\ \frac{d}{dt}\int_{\Omega}\rho(x,t)\, u(x,t)^{2} \, {\rm d}x +\frac{d}{dt}\int_{\Omega} \rho(x,t) \,P(\rho(x,t)) \, {\rm d}x \\ & \displaystyle +\int_{\Omega} \tau (u(x,t)):\nabla u(x,t) \, {\rm d}x = 0, & \qquad & \forall t \in (0,T) \end{array} \right. \label{apriori-e}\end{equation} In the latter relation, $P$, the ``elastic potential,'' is a function derived from the equation of state, which satisfies: \begin{equation} P'(z)=\frac{\wp(z)}{z^2} \label{elasticpot_gene} \end{equation} where $\wp (\cdot)$ is the inverse function of $\varrho(\cdot)$, \ie\ the function giving the pressure as a function of the density.
The usual choice is, provided that this expression makes sense: \begin{equation} P(z)=\int_0^z \frac{\wp(s)}{s^2}\ {\rm d}s \label{elasticpot} \end{equation} For these estimates to hold, the condition \eqref{apriori-e}-$(i)$ must be satisfied by the initial condition; note that a non-zero forcing term $f$ in the momentum balance would make an additional term appear at the right hand side of relation \eqref{apriori-e}-$(iii)$. This latter estimate is obtained from the second relation of \eqref{nsb} (\ie\ the momentum balance) by taking the scalar product with $u$ and integrating over $\Omega$. This computation then involves two main arguments which read: \begin{equation} \begin{array}{llll} (i) & \mbox{Stability of the advection operator:} \quad & \displaystyle \int_\Omega \left[ \frac{\partial}{\partial t}(\rho\, u)+\nabla\cdot(\rho\, u\otimes u) \right] \cdot u \, {\rm d}x= \frac{1}{2}\ \frac{d}{dt}\int_{\Omega}\rho\, u^2 \, {\rm d}x \\[3ex] (ii) & \mbox{Stability due to the pressure work:} \quad & \displaystyle \int_\Omega -p \, \nabla \cdot u \, {\rm d}x=\frac{d}{dt}\int_{\Omega} \rho(x,t) \,P(\rho(x,t)) \, {\rm d}x \end{array} \label{stab-arg}\end{equation} Note that the derivations of both relations make crucial use of the fact that the mass balance equation holds. This paper is organized as follows. As a first step, we derive a bound similar to \eqref{stab-arg}-$(ii)$ for a given class of spatial discretizations; the latter are introduced in section \ref{subsec:disc} and the desired stability estimate (theorem \ref{VF2}) is stated and proven in section \ref{subsec:pwork}. We then show that this result allows to prove the existence of a solution for a fairly general class of discrete compressible flow problems. Section \ref{sec:darcy} gathers this whole study, and constitutes the first part of this paper.
In a second part (section \ref{sec:scheme}), we turn to the derivation of a pressure correction scheme the solution of which satisfies the whole set of {\it a priori} estimates \eqref{apriori-e}. To this end, besides theorem \ref{VF2}, we need as a second key ingredient a discrete version of the bound \eqref{stab-arg}-$(i)$ relative to the stability of the advection operator, which is stated and proven in section \ref{subsec:conv} (theorem \ref{VF1}). We then derive a fully discrete algorithm which is designed to meet the assumptions of these theoretical results, and establish its stability. Some numerical experiments show that, in addition, when the solution is smooth, this scheme seems to exhibit the expected convergence properties, namely first-order convergence in time for all the variables and second-order convergence in space, in the $\xLtwo$ and discrete $\xLtwo$ norms for the velocity and the pressure, respectively. \section{Analysis of a class of discrete problems} \label{sec:darcy} The class of problems addressed in this section can be seen as the class of discrete systems obtained by space discretization by low-order non-conforming finite elements of continuous problems of the following form: \begin{equation} \left| \begin{array}{ll} \displaystyle A \, u + \nabla p = f & \mbox{ in } \Omega \\[2ex] \displaystyle \frac{\varrho(p)-\rho^\ast}{\delta t}+ \nabla \cdot \left( \varrho(p)\, u \right)=0 & \mbox{ in } \Omega \\[2ex] \displaystyle u=0 & \mbox{ on } \partial \Omega \end{array} \right . \label{pbdisc-cont}\end{equation} where the forcing term $f$ and the density field $\rho^\ast$ are known quantities, and $A$ stands for an abstract elliptic operator. The unknowns of the problem are the velocity $u$ and the pressure $p$; the function $\varrho(\cdot)$ stands for the equation of state. The domain $\Omega$ is a polygonal ($d=2$) or polyhedral ($d=3$) open, bounded and connected subset of $\xR^d$.
Of course, at the continuous level, this statement of the problem should be completed by a precise definition of the functional spaces in which the velocity and the pressure are sought, together with regularity assumptions on the data. This is outside the scope of this section, as system \eqref{pbdisc-cont} is given only to fix ideas; we restrict ourselves here to proving some mathematical properties of the discrete problem, namely to establishing some {\it a priori} estimates for its solution and to proving that this nonlinear problem admits a solution for fairly general equations of state. This section is organized as follows. We begin by describing the considered discretization and precisely stating the discrete problem at hand. Then we prove, for this particular discretization, a fundamental result which is a discrete analogue of the elastic potential identity. The next section is devoted to the proof of the existence of a solution, and we finally conclude by giving some practical examples of application of the abstract theory developed here. \subsection{The discrete problem}\label{subsec:disc} Let ${\cal M}$ be a decomposition of the domain $\Omega$ either into convex quadrilaterals ($d=2$) or hexahedra ($d=3$), or into simplices. By ${\cal E}$ and ${\cal E}(K)$ we denote the set of all $(d-1)$-edges $\sigma$ of the mesh and of the element $K \in {\cal M}$ respectively. The set of edges included in the boundary of $\Omega$ is denoted by ${\cal E}_{{\rm ext}}$ and the set of internal ones (\ie\ ${\cal E} \setminus {\cal E}_{{\rm ext}}$) is denoted by ${\cal E}_{{\rm int}}$. The decomposition ${\cal M}$ is supposed to be regular in the usual sense of the finite element literature (e.g.
\cite{cia-91-bas}), and, in particular, ${\cal M}$ satisfies the following properties: $ \bar\Omega=\bigcup_{K\in {\cal M}} \bar K$; if $K,\,L \in {\cal M},$ then $\bar K \cap \bar L=\emptyset$ or $\bar K\cap \bar L$ is a common face of $K$ and $L$, which is denoted by $K|L$. For each internal edge of the mesh $\sigma=K|L$, $n_{KL}$ stands for the normal vector of $\sigma$, oriented from $K$ to $L$. By $|K|$ and $|\sigma|$ we denote the measure, respectively, of $K$ and of the edge $\sigma$. The spatial discretization relies either on the so-called "rotated bilinear element"/$P_0$ introduced by Rannacher and Turek \cite{ran-92-sim} for quadrilateral or hexahedral meshes, or on the Crouzeix-Raviart element (see \cite{cro-73-con} for the seminal paper and, for instance, \cite[p. 83--85]{ern-05-aid} for a synthetic presentation) for simplicial meshes. The reference element $\widehat K$ for the rotated bilinear element is the unit $d$-cube (with edges parallel to the coordinate axes); the discrete functional space on $\widehat K$ is $\tilde{Q}_{1}(\widehat K)^d$, where $\tilde{Q}_{1}(\widehat K)$ is defined as follows: \[ \tilde{Q}_{1}(\widehat K)= {\rm span}\left\{1,\,(x_{i})_{i=1,\ldots,d},\,(x_{i}^{2}-x_{i+1}^{2})_{i=1,\ldots,d-1}\right\} \] The reference element for the Crouzeix-Raviart element is the unit $d$-simplex and the discrete functional space is the space $P_1$ of affine polynomials. For both velocity elements used here, the degrees of freedom are determined by the following set of nodal functionals: \begin{equation} \displaystyle \left\{F_{\sigma,i},\ \sigma \in {\cal E}(K),\, i=1,\ldots,d\right\}, \qquad F_{\sigma,i}(v)=|\sigma|^{-1}\int_{\sigma} v_{i} \, {\rm d} \gamma \label{vdof}\end{equation} The mapping from the reference element to the actual one is, for the Rannacher-Turek element, the standard $Q_1$ mapping and, for the Crouzeix-Raviart element, the standard affine mapping.
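To make the degrees of freedom \eqref{vdof} concrete, the following sketch checks unisolvence for $d=2$ on the reference square taken here as $[0,1]^2$ (a normalization chosen for this example): the four edge-average functionals uniquely determine a function of $\tilde{Q}_{1}(\widehat K)={\rm span}\{1,x,y,x^2-y^2\}$, so shape functions $\varphi_\sigma$ with $F_\tau(\varphi_\sigma)=\delta_{\sigma\tau}$ exist.

```python
import math
from fractions import Fraction as F

# Exact edge averages of the monomials {1, x, y, x^2 - y^2} over the four edges
# of [0,1]^2 (left x=0, right x=1, bottom y=0, top y=1), computed by hand.
M = [[F(1), F(0),    F(1, 2), F(-1, 3)],   # left
     [F(1), F(1),    F(1, 2), F(2, 3)],    # right
     [F(1), F(1, 2), F(0),    F(1, 3)],    # bottom
     [F(1), F(1, 2), F(1),    F(-2, 3)]]   # top

def solve(A, B):
    """Gauss-Jordan elimination in exact rational arithmetic: returns A^{-1} B."""
    n = len(A)
    A = [row[:] + Brow[:] for row, Brow in zip(A, B)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                A[r] = [v - A[r][col] * w for v, w in zip(A[r], A[col])]
    return [row[n:] for row in A]

I4 = [[F(int(i == j)) for j in range(4)] for i in range(4)]
C = solve(M, I4)   # column j of C: monomial coefficients of the shape function phi_j

# Nodal property: the edge average of phi_j over edge i is exactly delta_ij.
for i in range(4):
    for j in range(4):
        assert sum(M[i][k] * C[k][j] for k in range(4)) == F(int(i == j))

# Independent check of one average by 2-point Gauss quadrature (exact here).
g = 1.0 / (2.0 * math.sqrt(3.0))
c = [float(C[k][1]) for k in range(4)]               # shape function of the right edge
phi = lambda x, y: c[0] + c[1] * x + c[2] * y + c[3] * (x * x - y * y)
avg = 0.5 * (phi(1.0, 0.5 - g) + phi(1.0, 0.5 + g))  # mean over the edge x = 1
assert abs(avg - 1.0) < 1e-12
```

The invertibility of `M` is exactly the unisolvence of the Rannacher-Turek element for $d=2$; for $d=3$, and for the Crouzeix-Raviart element, the same check goes through with the corresponding monomials and faces.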
Finally, in both cases, the continuity of the average value of discrete velocities (\ie, for a discrete velocity field $v$, $F_{\sigma,i}(v)$) across each face of the mesh is required, thus the discrete space $W_{h}$ is defined as follows: \[ \begin{array}{ll} \displaystyle W_h = & \displaystyle \lbrace \ v_h\in L^{2}(\Omega)\,:\, v_h|_K \in\tilde{Q}_{1}(K)^d,\,\forall K\in {\cal M};\ F_{\sigma,i}(v_h) \mbox{ continuous across each edge } \sigma \in {{\cal E}_{{\rm int}}},\ 1\leq i \leq d \,; \\[1ex] & \displaystyle \ \ F_{\sigma,i}(v_h)=0,\ \forall \sigma \in {\cal E}_{{\rm ext}},\ 1\leq i \leq d\ \rbrace \end{array} \] For both Rannacher-Turek and Crouzeix-Raviart discretizations, the pressure is approximated by the space $L_{h}$ of piecewise constant functions: \[ L_h=\left\{q_h\in L^{2}(\Omega)\,:\, q_h|_K=\mbox{ constant},\,\forall K\in {\cal M}\right\} \] Since only the continuity of the integral over each edge of the mesh is imposed, the velocities are discontinuous through each edge; the discretization is thus nonconforming in $H^1(\Omega)^d$. These pairs of approximation spaces for the velocity and the pressure are \textit{inf-sup} stable, in the usual sense for "piecewise $\xHone$" discrete velocities, \ie\ there exists $c_{\rm i}>0$ independent of the mesh such that: \[ \forall p \in L_h, \qquad \sup_{v \in W_h} \frac{\displaystyle \int_{\Omega,h} p \, \nabla \cdot v \, {\rm d}x}{\normHb{v}} \geq c_{\rm i} \normLd{p-\bar p} \] where $\bar p$ is the mean value of $p$ over $\Omega$, the symbol $\displaystyle \int_{\Omega,h}$ stands for $\displaystyle \sum_{K\in{\cal M}} \int_K$ and $\normHb{\cdot}$ stands for the broken Sobolev $\xHone$ semi-norm: \[ \normHbd{v}=\sum_{K\in {\cal M}} \int_K |\nabla v |^2 \, {\rm d}x=\int_{\Omega,h}| \nabla v |^2 \, {\rm d}x \] From the definition \eqref{vdof}, each velocity degree of freedom can be uniquely associated with an element edge.
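A useful consequence of this association is that the mean normal trace on each edge, hence the flux $|\sigma|\,v_\sigma\cdot n$ through that edge, is known exactly from the edge degrees of freedom; on each cell, the divergence theorem then gives $\int_K \nabla\cdot v\,{\rm d}x=\sum_{\sigma\in{\cal E}(K)}|\sigma|\,v_\sigma\cdot n_{K,\sigma}$ for affine fields. The small check below (triangle and affine field chosen arbitrarily for illustration) verifies this identity:

```python
# Triangle K (vertices listed counterclockwise); an affine vector field v plays
# the role of a Crouzeix-Raviart velocity restricted to K.
V = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]

def v(x, y):
    # div v = 2 + 4 = 6, constant on K
    return (1.0 + 2.0 * x - 0.5 * y, -3.0 + 0.25 * x + 4.0 * y)

areaK = 0.5 * abs((V[1][0] - V[0][0]) * (V[2][1] - V[0][1])
                  - (V[2][0] - V[0][0]) * (V[1][1] - V[0][1]))

# Sum of edge fluxes |sigma| * (edge mean of v) . n, with outward normals.
flux = 0.0
for i in range(3):
    P, Q = V[i], V[(i + 1) % 3]
    mx, my = 0.5 * (P[0] + Q[0]), 0.5 * (P[1] + Q[1])  # midpoint = edge mean for affine v
    tx, ty = Q[0] - P[0], Q[1] - P[1]                  # edge tangent, length |sigma|
    nx, ny = ty, -tx                                   # outward normal (CCW ordering), |n| = |sigma|
    vx, vy = v(mx, my)
    flux += vx * nx + vy * ny

# Divergence theorem: sum of edge fluxes equals int_K div v dx = 6 * |K|.
assert abs(flux - 6.0 * areaK) < 1e-12
```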
We take advantage of this correspondence by using hereafter, somewhat improperly, the expression "velocity on the edge $\sigma$" to name the velocity vector defined by the degrees of freedom of the velocity components associated to $\sigma$. In addition, the velocity degrees of freedom are indexed by the number of the component and the associated edge, thus the set of velocity degrees of freedom reads: \[ \lbrace v_{\sigma, i},\ \sigma \in {\cal E}_{{\rm int}},\ 1 \leq i \leq d \rbrace \] We define $v_\sigma=\sum_{i=1}^d v_{\sigma, i} e_i $ where $e_i$ is the $i^{th}$ vector of the canonical basis of $\xR^d$. We denote by $\varphi_\sigma^{(i)}$ the vector shape function associated to $v_{\sigma, i}$, which, by the definition of the considered finite elements, reads: \[ \varphi_\sigma^{(i)}=\varphi_\sigma \, e_i \] where $\varphi_\sigma$ is a scalar function. Similarly, each degree of freedom for the pressure is associated with a cell $K \in {\cal M}$, and the set of pressure degrees of freedom is denoted by $\lbrace p_K,\ K \in {\cal M} \rbrace$. For any $K \in {\cal M}$, let $\rho^\ast_K$ be a quantity approximating a known density $\rho^\ast$ on $K$. The family of real numbers $(\rho^\ast_K)_{K\in{\cal M}}$ is supposed to be positive. The discrete problem considered in this section reads: \begin{equation} \left| \begin{array}{ll} \displaystyle a(u,\varphi_\sigma^{(i)}) - \int_{\Omega,h} p\ \nabla \cdot \varphi_\sigma^{(i)} \, {\rm d}x = \int_\Omega f \cdot \varphi_\sigma^{(i)} \, {\rm d}x \qquad & \displaystyle \forall \sigma\in{\cal E}_{{\rm int}} \ (\sigma=K|L),\ 1 \leq i \leq d \\[2ex] \displaystyle |K|\ \frac{\varrho(p_K)-\rho^\ast_K}{\delta t} + \sum_{\sigma=K|L} \mathrm{v}_{\sigma,K}^+\, \varrho(p_K) - \mathrm{v}_{\sigma,K}^-\ \varrho(p_L)=0 \qquad & \displaystyle \forall K \in {\cal M} \end{array}\right .
\label{pbdisc-disc}\end{equation} where $\mathrm{v}_{\sigma,K}^+$ and $\mathrm{v}_{\sigma,K}^-$ stand respectively for $\mathrm{v}_{\sigma,K}^+ = \max (\mathrm{v}_{\sigma,K},\ 0)$ and $\mathrm{v}_{\sigma,K}^- = \max ( -\mathrm{v}_{\sigma,K},\ 0)$ with $\mathrm{v}_{\sigma,K}=|\sigma|\, u_\sigma \cdot n_{KL}=\mathrm{v}_{\sigma,K}^+- \mathrm{v}_{\sigma,K}^-$. The first equation is the standard finite element discretization of the first relation of \eqref{pbdisc-cont}, provided the following identity holds: \[ \forall v\in W_h,\ \forall w \in W_h,\qquad a(v,w)=\int_\Omega Av \cdot w \, {\rm d}x \] As the pressure is piecewise constant, the finite element discretization of the second relation of \eqref{pbdisc-cont}, \ie\ the mass balance, is similar to a finite volume formulation, in which we introduce the standard first-order upwinding. The bilinear form $a(\cdot,\cdot)$ is supposed to be elliptic on $W_h$, \ie\ to be such that the following property holds: \[ \exists c_{\rm a} >0 \mbox{ such that, } \forall v \in W_h,\qquad a(v,v) \geq c_{\rm a} \normsd{v} \] where $\norms{\cdot}$ is a norm over $W_h$. We denote by $\normS{\cdot}$ its dual norm with respect to the $\xLtwo(\Omega)^d$ inner product, defined by: \[ \forall v \in W_h, \qquad \normS{v}=\sup_{w \in W_h} \frac{\displaystyle \int_\Omega v \cdot w \, {\rm d}x}{\norms{w}} \] \subsection{On the pressure control induced by the pressure forces work}\label{subsec:pwork} The aim of this subsection is to prove that the discretization at hand satisfies a stability bound which can be seen as the discrete analogue of equation \eqref{stab-arg}-$(ii)$, which we recall here: \[ \int_\Omega -p \, \nabla \cdot u \, {\rm d}x=\frac{d}{dt}\int_{\Omega} \rho \,P(\rho) \, {\rm d}x, \qquad \mbox{where } P(\cdot) \mbox{ is such that } P'(z)=\frac{\wp(z)}{z^2} \] The formal computation which allows us to derive this estimate in the continuous setting is the following.
The starting point is the mass balance, which is multiplied by $[\rho P(\rho)]'$: \[ [\rho P(\rho)]' \left[ \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho u) \right]=0 \] This relation yields: \begin{equation} \frac{\partial [\rho P(\rho)]}{\partial t} + [\rho P(\rho)]' \left[ u \cdot \nabla \rho + \rho \nabla \cdot u \right]=0 \label{elP-step1}\end{equation} And thus: \[ \frac{\partial [\rho P(\rho)]}{\partial t} + u \cdot \nabla [\rho P(\rho)] + [\rho P(\rho)]' \rho \nabla \cdot u =0 \] Developing the derivative, we get: \begin{equation} \frac{\partial [\rho P(\rho)]}{\partial t} + \underbrace{u \cdot \nabla [\rho P(\rho)] + \rho P(\rho) \nabla \cdot u}_{\displaystyle \nabla \cdot (\rho P(\rho)\, u)} + \underbrace{\rho^2 [P(\rho)]' \nabla \cdot u}_{\displaystyle p \nabla \cdot u} =0 \label{elP-step2}\end{equation} and the result follows by integration in space. We now reproduce this computation at the discrete level. \begin{thrm}[Stability due to the pressure work]\label{VF2} Let the elastic potential $P$, defined by \eqref{elasticpot_gene}, be such that the function $z \mapsto z\, P(z)$ is once continuously differentiable and strictly convex. Let $(\rho_K)_{K\in{\cal M}}$ and $(p_K)_{K\in{\cal M}}$ satisfy the second relation of \eqref{pbdisc-disc}. Then the following estimate holds: \begin{equation} - \int_\Omega p \, \nabla \cdot u \, {\rm d}x=\sum_{K\in{\cal M}} - p_K \sum_{\sigma=K|L} \mathrm{v}_{\sigma,K} \geq \frac{1}{\delta t}\ \sum_{K\in{\cal M}} |K| \left[ \rho_K \, P(\rho_K) - \rho_K^\ast \, P(\rho_K^\ast)\right] \label{pot-el}\end{equation} \end{thrm} \begin{proof} Let us write the divergence term in the discrete mass balance over $K$ (\ie\ the second relation of \eqref{pbdisc-disc}) under the following form: \[ \sum_{\sigma=K|L} \rho_\sigma \, \mathrm{v}_{\sigma,K} \] where $\rho_\sigma$ is either $\rho_K=\varrho(p_K)$ if $\mathrm{v}_{\sigma,K} \geq 0$ or $\rho_L=\varrho(p_L)$ if $\mathrm{v}_{\sigma,K} \leq 0$.
Multiplying this term by the derivative with respect to $\rho$ of $\rho\, P(\rho)$ computed at $\rho_K$, denoted by $[\rho\, P(\rho)]'_{\rho_K}$, we obtain: \[ T_{{\rm div},K}= [\rho\, P(\rho)]'_{\rho_K} \sum_{\sigma=K|L} \rho_\sigma \, \mathrm{v}_{\sigma,K} = [\rho\, P(\rho)]'_{\rho_K} \left[ \sum_{\sigma=K|L} \ (\rho_\sigma-\rho_K) \, \mathrm{v}_{\sigma,K}+ \rho_K \sum_{\sigma=K|L} \mathrm{v}_{\sigma,K} \right] \] This relation is a discrete equivalent to equation \eqref{elP-step1}: up to the multiplication by $1/|K|$, the first summation in the right hand side is the analogue of $u \cdot \nabla \rho$ and the second one of $\rho\,\nabla \cdot u$. Developing the derivative, we obtain the equivalent of relation \eqref{elP-step2}: \begin{equation} T_{{\rm div},K}= [\rho\, P(\rho)]'_{\rho_K} \sum_{\sigma=K|L} (\rho_\sigma-\rho_K) \, \mathrm{v}_{\sigma,K} + \rho_K\, P(\rho_K) \sum_{\sigma=K|L} \mathrm{v}_{\sigma,K} + \rho_K^2\, P'(\rho_K) \sum_{\sigma=K|L} \mathrm{v}_{\sigma,K} \label{elPd1}\end{equation} By definition \eqref{elasticpot_gene} of $P$, the last term reads $p_K \sum_{\sigma=K|L} \mathrm{v}_{\sigma,K}$. The computation will be complete once the first two terms are put in divergence form. To this end, let us sum up the $T_{{\rm div},K}$ and reorder the summation: \begin{equation} \sum_{K\in{\cal M}} T_{{\rm div},K}= \sum_{K\in{\cal M}} p_K \sum_{\sigma=K|L} \mathrm{v}_{\sigma,K} + \sum_{\sigma\in{\cal E}_{{\rm int}}} T_{{\rm div},\sigma} \label{elPd2}\end{equation} where, if $\sigma=K|L$: \[ T_{{\rm div},\sigma}= \mathrm{v}_{\sigma,K}\ \left[ \rho_K\, P(\rho_K) + [\rho\, P(\rho)]'_{\rho_K} (\rho_\sigma-\rho_K) - \rho_L\, P(\rho_L) - [\rho\, P(\rho)]'_{\rho_L} (\rho_\sigma-\rho_L) \right] \] In this relation, there are two possible choices for the orientation of $\sigma$, \ie\ $K|L$ or $L|K$, where $K$ and $L$ are two cells such that $\sigma=\bar K\cap \bar L$; we choose this orientation to have $\mathrm{v}_{\sigma,K} \geq 0$.
Let $\bar \rho_\sigma$ be defined by: \begin{equation} \left| \begin{array}{ll} \mbox{if }\rho_K \neq \rho_L \ :\qquad & \displaystyle \rho_K\, P(\rho_K) + [\rho\, P(\rho)]'_{\rho_K} (\bar \rho_\sigma-\rho_K) = \rho_L\, P(\rho_L) + [\rho\, P(\rho)]'_{\rho_L} (\bar \rho_\sigma-\rho_L) \\[2ex] \mbox{otherwise}: & \bar \rho_\sigma=\rho_K=\rho_L \end{array} \right. \label{defrhob}\end{equation} As the function $z \mapsto z\, P(z)$ is supposed to be once continuously differentiable and strictly convex, the technical lemma \ref{rhosigma} proven hereafter applies and $\bar \rho_\sigma$ is uniquely defined and satisfies $\bar\rho_{\sigma} \in [\min(\rho_{K},\rho_{L}), \, \max(\rho_{K},\rho_{L})]$. By definition, the choice $\rho_\sigma=\bar \rho_\sigma$ is such that the term $T_{{\rm div},\sigma}$ vanishes, which means that, by this definition, we would indeed have obtained that the first two terms of equation \eqref{elPd1} are a conservative approximation of the quantity $\nabla \cdot \rho P(\rho) u$ appearing in equation \eqref{elP-step2}, with the following expression for the flux: \[ \begin{array}{l} \displaystyle F_{\sigma,K}= [\rho P(\rho)]_\sigma \, \mathrm{v}_{\sigma,K}, \quad \mbox{with:} \\[2ex] \displaystyle \hspace{10ex} [\rho P(\rho)]_\sigma= \rho_K\, P(\rho_K) + [\rho\, P(\rho)]'_{\rho_K} (\bar \rho_\sigma-\rho_K) = \rho_L\, P(\rho_L) + [\rho\, P(\rho)]'_{\rho_L} (\bar \rho_\sigma-\rho_L) \end{array} \] Whatever the choice for $\rho_\sigma$ may be, we have: \[ T_{{\rm div},\sigma}= \mathrm{v}_{\sigma,K}\ (\rho_\sigma - \bar \rho_\sigma)\ ([\rho\, P(\rho)]'_{\rho_K}-[\rho\, P(\rho)]'_{\rho_L} ) \] With the orientation taken for $\sigma$, an upwind choice yields: \[ T_{{\rm div},\sigma}= \mathrm{v}_{\sigma,K}\ (\rho_K - \bar \rho_\sigma)\ ([\rho\, P(\rho)]'_{\rho_K}-[\rho\, P(\rho)]'_{\rho_L} ) \] and, using the fact that $z \mapsto [\rho\, P(\rho)]'_{z}$ is an increasing function since $z \mapsto z\, P(z)$ is convex and that $\bar\rho_{\sigma} \in
[\min(\rho_{K},\rho_{L}), \, \max(\rho_{K},\rho_{L})]$, it is easily seen that $T_{{\rm div},\sigma}$ is non-negative. Multiplying by $[\rho\, P(\rho)]'_{\rho_K}$ the mass balance over each cell $K$ and summing for $K\in{\cal M}$ thus yields, by equation \eqref{elPd2}: \begin{equation} - \sum_{K\in{\cal M}} p_K \sum_{\sigma=K|L} \mathrm{v}_{\sigma,K} = R + \sum_{K\in{\cal M}} |K|\ [\rho\, P(\rho)]'_{\rho_K}\ \frac{\rho_K-\rho^\ast_K}{\delta t} \label{elPd3}\end{equation} where $R$ is non-negative, and the result follows by invoking once again the convexity of $z \mapsto z\, P(z)$. \end{proof} \begin{rmrk}[On a non-dissipative scheme] The preceding proof shows that, for a scheme to conserve the energy (\ie\ to obtain a discrete equivalent of \eqref{apriori-e}-$(iii)$), besides other arguments, the choice of $\bar \rho_\sigma$ given by \eqref{defrhob} for the density at the face of the control volume in the discretization of the flux in the mass balance seems to be mandatory; any other choice leads to an artificial dissipation in the work of the pressure forces. Note however that, this discretization being essentially of centered type, the positivity of the density is not guaranteed in this case. \end{rmrk} In the course of the preceding proof, we used the following technical lemma. \begin{lmm}\label{rhosigma} Let $g(\cdot)$ be a strictly convex and once continuously differentiable real function. Let $\sigma$ be an internal edge of the mesh separating the cells $K$ and $L$. Then the following relations: \begin{equation} \left| \begin{array}{ll} \mbox{if }\rho_K \neq \rho_L \ :\qquad & \displaystyle g(\rho_{K}) + g'(\rho_{K})(\bar\rho_\sigma-\rho_{K}) = g(\rho_{L}) + g'(\rho_{L})(\bar\rho_\sigma-\rho_{L}) \\[2ex] \mbox{otherwise}: & \bar \rho_\sigma=\rho_K=\rho_L \end{array} \right. \label{defrhob2}\end{equation} uniquely define the real number $\bar\rho_\sigma$. In addition, we have $\bar\rho_{\sigma} \in [\min(\rho_{K},\rho_{L}),\,\max(\rho_{K},\rho_{L})]$.
\end{lmm} \begin{proof} If $\rho_K=\rho_L$, there is nothing to prove. Otherwise, without loss of generality, let us choose $K$ and $L$ in such a way that $\rho_K < \rho_L$. By reordering equation \eqref{defrhob2}, we get: \[ g(\rho_K) + g'(\rho_K)\,(\rho_L-\rho_K) -g(\rho_L)= (\bar \rho_\sigma-\rho_L)\,\left[ g'(\rho_L)-g'(\rho_K)\right] \] As, because $g(\cdot)$ is strictly convex, $g'(\rho_L)-g'(\rho_K)$ does not vanish, this equation proves that $\bar\rho_\sigma$ is uniquely defined. In addition, for the same reason, the left hand side of this relation is negative and $g'(\rho_L)-g'(\rho_K)$ is positive, thus we have $\bar \rho_\sigma < \rho_L$. Reordering equation \eqref{defrhob2} in another way yields: \[ g(\rho_L) + g'(\rho_L)\,(\rho_K-\rho_L) -g(\rho_K)= (\bar \rho_\sigma-\rho_K)\,\left[ g'(\rho_K)-g'(\rho_L) \right] \] which, considering the signs of the left hand side and of $g'(\rho_K)-g'(\rho_L)$, in turn implies $\bar \rho_\sigma > \rho_K$. \end{proof} \subsection{Existence of a solution} The aim of this section is to prove the existence of a solution to the discrete problem under study. It follows from a topological degree argument, which links the problem at hand to a linear system through a homotopy. This section begins with a lemma which is used later to prove the strict positivity of the pressure. \begin{lmm}\label{max} Let us consider the following problem: \begin{equation} \forall K \in {\cal M}, \qquad |K|\,\frac{\varphi_1(p_K)-\varphi_{1}(p^\ast_K)}{\delta t}+\sum_{\sigma=K|L}\,\mathrm{v}_{\sigma,K}^+\,\varphi_2(p_K)-\mathrm{v}_{\sigma,K}^-\,\varphi_2(p_L)=0 \label{eqk} \end{equation} where $\varphi_1$ is an increasing function and $\varphi_2$ is a non-decreasing and non-negative function.
Suppose that there exists $\bar p$ such that: \begin{equation} \varphi_{1}(\bar p) + \delta t \, \varphi_2(\bar p) \,\max_{K\in{\cal M}} \left[ 0,\frac{1}{|K|} \sum_{\sigma=K|L} \mathrm{v}_{\sigma,K} \right] =\min_{K\in{\cal M}}[\varphi_1(p^\ast_{K})] \label{eqbarp} \end{equation} Then, $\forall K \in {\cal M}$, $p_K$ satisfies $ p_K \geq \bar{p}$. \end{lmm} \begin{proof} Let us assume that there exists a cell $\bar K$ such that $p_{\bar K}=\min_{K \in {\cal M}} p_K <\bar p$. Multiplying by $\delta t/|\bar K|$ the relation \eqref{eqk} written for $K=\bar K$, we get: \begin{equation} \varphi_1(p_{\bar K }) +\frac{\delta t}{|\bar K|}\,\sum_{\sigma=\bar K |L} \,(\mathrm{v}_{\sigma,K}^+\,\varphi_2(p_{\bar K})-\mathrm{v}_{\sigma,K}^-\,\varphi_2(p_L)) = \varphi_1(p^\ast_{\bar K}) \label{eqkbar} \end{equation} Then, subtracting \eqref{eqbarp} from \eqref{eqkbar}, we have: \[ \begin{array}{ll} \displaystyle \varphi_1(p_{\bar K})-\varphi_1(\bar p) +\frac{\delta t}{|\bar K|} \, \sum_{\sigma=\bar K |L} \mathrm{v}_{\sigma,K}^+ \,\varphi_2 (p_{\bar K})-\mathrm{v}_{\sigma,K}^-\,\varphi_2(p_L) & \\ \displaystyle - \delta t\,\varphi_2 (\bar p)\,\max_{K \in {\cal M}}\left[0,\frac{1}{|K|} \sum_{\sigma=K|L} \mathrm{v}_{\sigma,K} \right] & \displaystyle =\varphi_1(p^\ast_{\bar K})-\min_{K \in {\cal M}}[\varphi_1(p^\ast_K)]\geq 0 \end{array} \] The previous relation can be written as $T_1 + T_2 + T_3 \geq 0$ with: \[ \begin{array}{l} T_1= \varphi_1(p_{\bar K})-\varphi_1(\bar p) \\[2ex] \displaystyle T_2= \delta t \, \varphi_2(p_{\bar K})\left[ \sum_{\sigma=\bar K|L} \frac{1}{|\bar K|} \mathrm{v}_{\sigma,K} \right] - \delta t \,\varphi_{2}(\bar p)\,\max_{K \in {\cal M}}\left[0,\sum_{\sigma=K|L}\frac{1}{|K|} \mathrm{v}_{\sigma,K} \right] \\[4ex] \displaystyle T_3= \frac{\delta t}{|\bar K|}\,\sum_{\sigma=\bar K |L} \mathrm{v}_{\sigma,K}^- \,(\varphi_2(p_{\bar K})-\varphi_{2}(p_L)) \end{array} \] As $\varphi_1(\cdot)$ is an increasing function and, by assumption, $p_{\bar K}<\bar p$, we have $T_1 <0$.
Similarly, $0 \leq \varphi_2(p_{\bar K}) \leq \varphi_2(\bar p)$ and the discrete divergence over $\bar K$ (\ie\ $1/|\bar K|\ \sum_{\sigma=\bar K|L} \mathrm{v}_{\sigma,K}$) is necessarily smaller than the maximum of this quantity over the cells of the mesh, thus $T_2 \leq 0$. Finally, as, by assumption, $p_{\bar K} \leq p_L$ for any neighbouring cell $L$ of $\bar K$, $\varphi_2(\cdot)$ is a non-decreasing function and $\mathrm{v}_{\sigma,K}^- \geq 0$, $T_3 \leq 0$. We thus obtain a contradiction with the initial hypothesis, which proves $p_K \geq \bar p,\ \forall K \in {\cal M}$. \end{proof} We now state the abstract theorem which will be used hereafter; this result follows from standard arguments of the topological degree theory (see \cite{deimling} for an exposition of the theory and \cite{eym-98-an} for another use with the same objective as here, namely proving the existence of a solution to a numerical scheme). \begin{thrm}[A result from the topological degree theory]\label{degre} Let $N$ and $M$ be two positive integers and $V$ be defined as follows: \[ V=\{ (x,y) \in \xR^N \times \xR^M \mbox{ such that } y>0\} \] where, for any real number $c$, the notation $y>c$ means that each component of $y$ is greater than $c$. Let $b\in \xR^N \times \xR^M$ and $f$ and $F$ be two continuous functions respectively from $V$ and $V\times[0,1]$ to $\xR^N \times \xR^M$ satisfying: \begin{enumerate} \item[(i)]$F(\cdot,\,1)=f(\cdot)$; \item[(ii)] $\forall \alpha \in [0,1]$, if $v$ is such that $F(v,\alpha)=b$ then $v \in W$, where $W$ is defined as follows: \[ W=\{ (x,y) \in \xR^N \times \xR^M \mbox{ s.t. } \Vert x \Vert < C_1 \mbox{ and } \epsilon< y < C_2\} \] with $C_1$, $C_2$ and $\epsilon$ three positive constants and $\Vert \cdot \Vert$ is a norm defined over $\xR^N$; \item[(iii)] the topological degree of $F(\cdot,0)$ with respect to $b$ and $W$ is equal to $d_0 \neq 0$.
\end{enumerate} Then the topological degree of $F(\cdot,1)$ with respect to $b$ and $W$ is also equal to $d_0 \neq 0$; consequently, there exists at least one solution $v\in W$ such that $f(v)=b$. \end{thrm} We are now in position to prove the existence of a solution to the considered discrete Darcy system, for fairly general equations of state. \begin{thrm}[Existence of a solution] Let us suppose that the equation of state $\varrho(\cdot)$ is such that: \begin{enumerate} \item $\varrho(\cdot)$ is increasing, $\varrho(0)=0$ and $\displaystyle \lim_{z \rightarrow +\infty} \varrho(z) = +\infty$, \item there exists an elastic potential $P$ such that (\ref{elasticpot_gene}) holds, the function $z \mapsto z\, P(z)$ is once continuously differentiable and strictly convex and $z\,P(z)\geq -C_P,\ \forall z \in (0,+\infty)$, where $C_P$ is a non-negative constant. \end{enumerate} Then, there exists a solution $(u_{\sigma},p_{K})_{\sigma\in{\cal E}_{{\rm int}},\, K \in {\cal M}}$ to the discrete problem at hand \eqref{pbdisc-disc}. \label{existence_gene}\end{thrm} \begin{proof} This proof is obtained by applying theorem \ref{degre}. Let $N=d\ {\rm card}({\cal E}_{{\rm int}})$, $M={\rm card}({\cal M})$ and the mapping $F:\ V \times [0,1] \rightarrow \xR^N \times \xR^M$ be given by: \[ F(u,p,\alpha)= \left| \begin{array}{ll} \displaystyle v_{\sigma,i},\ & \sigma \in {\cal E}_{{\rm int}},\ 1 \leq i \leq d \\[1ex] q_K,\ & K \in {\cal M} \end{array} \right. \] with: \begin{equation} \left| \begin{array}{l} \displaystyle v_{\sigma,i}= a(u,\varphi_\sigma^{(i)}) - \alpha \int_{\Omega,h} p\ \nabla \cdot \varphi_\sigma^{(i)} \, {\rm d}x - \int_\Omega f \cdot \varphi_\sigma^{(i)} \, {\rm d}x \\[2ex] \displaystyle q_K=\frac{|K|}{\delta t} \left[\varrho(p_K)-\varrho(p_K^\ast) \right] + \alpha \sum_{\sigma=K|L} \mathrm{v}_{\sigma,K}^+\, \varrho(p_K) - \mathrm{v}_{\sigma,K}^-\, \varrho(p_L) \end{array} \right .
\label{eqalpha} \end{equation} where, $\forall K \in {\cal M}$, $p_K^\ast$ is chosen such that $\rho_K^\ast=\varrho(p_K^\ast)$; note that, by assumption, $\varrho(\cdot)$ is one to one from $(0,+\infty)$ to $(0,+\infty)$, so the preceding definition makes sense. The problem $F(u,p,1)=0$ is exactly the system \eqref{pbdisc-disc}. The present proof is built as follows: by applying theorem \ref{VF2}, we obtain a control on $u$ in the discrete norm, uniform with respect to $\alpha$. Since we work on a finite dimensional space, we then obtain a control on $p$ in $\xLinfty$ by using the conservativity of the system of equations. For the same reason, the control on $u$ yields a bound in $\xLinfty$ of the value of the discrete divergence, which, by lemma \ref{max}, allows us to bound $p$ away from zero independently of $\alpha$. The proof finally ends by examining the properties of the system $F(u,p,0)=0$. \noindent \underline{\textit{Step 1}}: $\alpha\in (0,1]$, $\norms{\cdot}$ estimate for the velocity. \\[1ex] Setting $v_{\sigma,i}=0$ in \eqref{eqalpha}, multiplying the corresponding equation by $u_{\sigma,i}$ and summing over $\sigma \in {\cal E}_{{\rm int}}$ and $1 \leq i\leq d$ yields the following equation: \[ a(u,u) - \alpha \int_{\Omega,h} p \, \nabla \cdot u \, {\rm d}x = \int_\Omega f \cdot u \, {\rm d}x \] By a computation very similar to the proof of theorem \ref{VF2}, we see that, from the second relation of \eqref{eqalpha} with $q_K=0$: \[ -\alpha \int_{\Omega,h} p \, \nabla \cdot u \, {\rm d}x \geq \frac 1 {\delta t} \sum_{K\in{\cal M}} |K|\ \left[\rho_K\,P(\rho_K) - \rho^\ast_K\,P(\rho^\ast_K)\right] \] where $\rho_K=\varrho(p_K)$.
By the stability of the bilinear form $a(\cdot,\cdot)$ and Young's inequality, we thus get: \begin{equation} \underbrace{\frac{c_{\rm a}} 2 \normsd{u}}_{\displaystyle T_1} + \underbrace{\frac 1 {\delta t} \sum_{K\in{\cal M}} |K|\ \rho_K\,P(\rho_K)}_{\displaystyle T_2} \leq \frac 1 {2 c_{\rm a}}\normSd{f} + \frac 1 {\delta t} \sum_{K\in{\cal M}} |K|\ \rho^\ast_K\,P(\rho^\ast_K) \label{e_est}\end{equation} By assumption, $T_2 \geq -C_P\,|\Omega|/\delta t$ and we thus get the following estimate on the discrete norm of the velocity: \begin{equation} \norms{u}\leq C_{1} \label{est_u}\end{equation} where $C_1$ only depends on the data of the problem, \ie\ $\delta t$, $f$ and $\rho^\ast$, and not on $\alpha$. \noindent \underline{\textit{Step 2}}: $\alpha\in (0,1]$, $\xLinfty$ estimate for the pressure. \\[1ex] Let us now turn to the estimate of the pressure. By conservativity of the discrete mass balance, it is easily seen that: \[ \sum_{K\in {\cal M}} |K|\ \varrho(p_K)=\sum_{K\in {\cal M}} |K|\ \rho^\ast_K \] As each term in the sum on the left hand side is non-negative, we thus have: \[ \forall K \in {\cal M}, \qquad \varrho(p_K) \leq \frac 1 {\min_{K\in{\cal M}} |K|}\ \sum_{K\in {\cal M}} |K|\ \rho^\ast_K \] which, as, by assumption, $\displaystyle \lim_{z \rightarrow +\infty} \varrho(z) = +\infty$, yields: \begin{equation} \forall K \in {\cal M}, \qquad p_K \leq C_2 \label{est_p_max}\end{equation} where $C_2$ only depends on the data of the problem. \noindent \underline{\textit{Step 3}}: $\alpha\in (0,1]$, $p$ bounded away from zero.
\\[1ex] Applying lemma \ref{max} with $\varphi_1(\cdot)=\varphi_2(\cdot)=\varrho(\cdot)$, we get: \[ \forall K \in {\cal M}, \qquad p_K \geq \bar p_\alpha \] where $\bar p_\alpha$ is given by: \[ \varrho(\bar p_\alpha)=\frac{\displaystyle \min_{K\in{\cal M}} \varrho(p^\ast_K)} {\displaystyle 1+\delta t \max_{K \in {\cal M}}\left[0,\alpha \sum_{\sigma=K|L}\frac{1}{|K|}\,\mathrm{v}_{\sigma,K} \right]} \] Note that $\bar p_\alpha$ is well defined since, by assumption, $\varrho(\cdot)$ is one to one from $(0,+\infty)$ to $(0,+\infty)$. As $\alpha \leq 1$, we get: \[ \varrho(\bar p_\alpha) \geq \frac{\displaystyle \min_{K\in{\cal M}} \varrho(p^\ast_K)} {\displaystyle 1+\delta t \max_{K \in {\cal M}}\left[0,\sum_{\sigma=K|L}\frac{1}{|K|}\,\mathrm{v}_{\sigma,K} \right]} \] and, by equivalence of norms in a finite dimensional space, the bound \eqref{est_u} also yields a bound in the $\xLinfty$ norm and, finally, a positive lower bound for the right hand side of this relation. Since $\varrho(\cdot)$ is increasing on $(0,+\infty)$, we thus get that, $\forall \alpha \in (0,1]$, $\bar p_\alpha \geq \epsilon_1$, and, finally: \begin{equation} \forall K \in {\cal M}, \qquad p_K \geq \epsilon_1 \label{est_p_min}\end{equation} where $\epsilon_1$ only depends on the data. \noindent \underline{\textit{Step 4}}: conclusion. \\[1ex] For $\alpha=0$, the system $F(u,p,0)=0$ reads: \[ \left| \begin{array}{ll} \displaystyle a(u,\varphi_\sigma^{(i)}) = \int_\Omega f \cdot \varphi_\sigma^{(i)} \, {\rm d}x \hspace{10ex} & \forall \sigma \in {\cal E}_{{\rm int}},\ 1\leq i \leq d \\[3ex] \displaystyle \varrho(p_K)=\varrho(p_K^\ast) & \forall K \in {\cal M} \end{array} \right .
\] Since $\varrho(\cdot)$ is one to one from $(0,+\infty)$ to $(0,+\infty)$ and thanks to the stability of the bilinear form $a(\cdot,\cdot)$, this system has one and only one solution (which, for the pressure, reads of course $p_K=p_K^\ast,\ \forall K \in {\cal M}$), which satisfies: \begin{equation} \norms{u}\leq C_{3},\qquad \epsilon_2=\min_{K\in{\cal M}} p_K^\ast \leq p \leq \max_{K\in{\cal M}} p_K^\ast=C_4 \label{est_up}\end{equation} In addition, the Jacobian of this system is block diagonal: the first block, associated to the first relation, is constant (this part of the system is linear) and non-singular; the second one, associated to the second relation, is diagonal, and each diagonal entry is equal to the derivative of $\varrho(\cdot)$, taken at the considered point. As the function $\varrho(\cdot)$ is increasing, this Jacobian matrix is non-singular at the solution of the system. Let $W$ be defined by: \[ W=\{(u,p) \in \xR^N \times \xR^M \mbox{ such that } \norms{u} < 2 \max(C_1,C_3) \mbox{ and } \frac 1 2\,\min(\epsilon_1,\epsilon_2)< p < 2 \max(C_2,C_4) \} \] The topological degree of $F(\cdot,\cdot,0)$ with respect to $0$ and $W$ does not vanish, and by inequalities \eqref{est_u}, \eqref{est_p_min}, \eqref{est_p_max} and \eqref{est_up}, theorem \ref{degre} applies, which concludes the proof. \end{proof} \subsection{Some cases of application} First of all, let us give some examples for the bilinear form $a(\cdot,\cdot)$, for which the theory developed in this work holds. The first of them is: \[ a(u,v)=\int_\Omega u \cdot v \, {\rm d}x, \qquad \norms{u}=\normLdv{u}, \qquad \normS{f}=\normLdv{f} \] This choice for $a(\cdot,\cdot)$ yields a discrete Darcy-like problem which is, up to numerical integration technicalities, the projection step arising in the pressure correction scheme which is considered in the present paper (see section \ref{sec:scheme}).
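To fix ideas, the following one-dimensional sketch mimics this Darcy-like problem on a uniform staggered mesh, with a mass-lumped version of $a(u,v)=\int_\Omega u\cdot v\,{\rm d}x$ and the model equation of state $\varrho(p)=p$ (so that $\wp(z)=z$); all of these choices are illustrative simplifications, not the discretization of the paper. With $\varrho(p)=p$, the upwinded mass balance is linear in $p$ for a given velocity, so the coupled system can be solved by a simple Picard iteration, and every iterate exhibits the two properties established above: strict positivity of the pressure (lemma \ref{max}) and conservation of the total mass.

```python
import math

# One-dimensional sketch: N cells of size h, velocities at the N-1 internal
# faces, zero velocity at the two boundary faces (illustrative data below).
N, h, dt = 8, 1.0 / 8, 0.1
f = [math.sin(2.0 * math.pi * (i + 1) * h) for i in range(N - 1)]             # face forcing
rho_star = [1.0 + 0.5 * math.cos(2.0 * math.pi * (i + 0.5) * h) for i in range(N)]

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system (Thomas algorithm, no pivoting)."""
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        den = diag[i] - sub[i - 1] * c[i - 1]
        c[i] = sup[i] / den if i < n - 1 else 0.0
        d[i] = (rhs[i] - sub[i - 1] * d[i - 1]) / den
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

p = rho_star[:]                       # initial guess; with varrho(p)=p, rho = p
for _ in range(50):
    # mass-lumped momentum balance: h*u_sigma + (p_{K+1} - p_K) = h*f_sigma
    u = [f[i] - (p[i + 1] - p[i]) / h for i in range(N - 1)]
    uL = [0.0] + u                    # left-face velocity of each cell
    uR = u + [0.0]                    # right-face velocity of each cell
    # upwinded mass balance: (h/dt)(p_K - rho*_K) + sum_sigma v+ p_K - v- p_L = 0;
    # the resulting matrix is an M-matrix (column diagonal dominance h/dt > 0).
    diag = [h / dt + max(uR[k], 0.0) + max(-uL[k], 0.0) for k in range(N)]
    sub = [-max(uL[k + 1], 0.0) for k in range(N - 1)]   # couples cell k+1 to cell k
    sup = [-max(-uR[k], 0.0) for k in range(N - 1)]      # couples cell k to cell k+1
    rhs = [h / dt * r for r in rho_star]
    p = thomas(sub, diag, sup, rhs)
    assert all(pk > 0.0 for pk in p)                     # positivity (as in lemma max)
    assert abs(sum(p) - sum(rho_star)) * h < 1e-10       # total mass is conserved
```

In the sketch, the momentum equation is explicit only because the lumped $a$ is diagonal; with a general elliptic $a(\cdot,\cdot)$ as in the paper, each Picard step would require a velocity solve as well.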
Note that, in this case, the boundary condition $u\in\xHone_0(\Omega)^d$ does not make sense at the continuous level; in addition, the considered discretization is known not to be consistent enough to yield convergence for the Darcy problem (see remark \ref{non-consistent!} hereafter). The bilinear form associated to the Stokes problem provides another example of application. It may read in this case: \[ a(u,v)=\int_{\Omega,h} \nabla u \cdot \nabla v \, {\rm d}x, \qquad \norms{u}=|u|_{\xHone(\Omega)^d} \] or, without additional theoretical difficulties: \[ a(u,v)= \mu \int_{\Omega,h} \nabla u \cdot \nabla v \, {\rm d}x + \frac \mu 3 \int_{\Omega,h} (\nabla \cdot u) \ (\nabla \cdot v) \, {\rm d}x \] This latter form, where the real number $\mu >0$ is the viscosity, corresponds to the physical shear stress tensor expression for a compressible flow of a constant viscosity Newtonian fluid. In addition, consider the steady Navier-Stokes equations, or, more generally, a time step of a (semi-)implicit time discretization of the unsteady Navier-Stokes equations, in which case $a(\cdot,\cdot)$ and $f$ read: \[ \begin{array}{l} \displaystyle a(u,v) = \frac 1 {\delta t} \int_\Omega \rho u \cdot v \, {\rm d}x + \int_{\Omega,h} (\nabla \cdot \rho w \otimes u) \cdot v \, {\rm d}x + \mu \int_{\Omega,h} \nabla u \cdot \nabla v \, {\rm d}x + \frac \mu 3 \int_{\Omega,h} (\nabla \cdot u) \ (\nabla \cdot v) \, {\rm d}x \\[3ex] \displaystyle f= \frac 1 {\delta t}\, \rho^\ast u^\ast + f_0 \end{array} \] where the steady case is obtained with $\delta t=+\infty$, $f_0$ is the physical forcing term, $\rho^\ast$ and $u^\ast$ stand for known density and velocity fields and $w$ is an advection field, which may be $u$ itself (and must be $u$ in the steady case) or be derived from the velocities obtained at the previous time steps.
Let us suppose that the following identity holds: \[ \frac 1 {\delta t} \int_\Omega (\rho u - \rho^\ast u^\ast) \cdot u \, {\rm d}x + \int_{\Omega,h} (\nabla \cdot \rho w \otimes u) \cdot u \, {\rm d}x \geq \frac 1 {2 \delta t} \left[ \int_\Omega \rho |u|^2 - \int_\Omega \rho^\ast |u^\ast|^2 \right] \] which is the discrete counterpart of equation \eqref{stab-arg}-$(i)$. The algorithm considered in this paper provides an example where this condition is verified (see section \ref{sec:scheme}). Then the present theory applies with minor modifications: in the proof of existence theorem \ref{existence_gene}, the right hand side of the preceding equation must be multiplied by the homotopy parameter $\alpha$ (and thus this term vanishes at $\alpha=0$, which yields the problem considered in step $4$ above); the (uniform with respect to $\alpha$) stability in step 1 stems from the diffusion term, and steps $2$ and $3$ remain unchanged. Note that, in the steady state case, an additional constraint is needed for the problem to be well posed, namely to set the total mass $M$ of fluid in the computational domain to a given value. This constraint can be simply enforced by solving an approximate mass balance which reads: \[ c(h) \, \left[ \rho - \frac M {|\Omega|} \right] + \nabla \cdot \rho u =0 \] where $|\Omega|$ stands for the measure of $\Omega$, $h$ is the spatial discretization step and $c(h)>0$ must tend to zero with $h$, fast enough to avoid any loss of consistency. With this form of the mass balance, the theory developed here directly applies.
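To see why this relaxed mass balance does prescribe the total mass, one may (formally, and assuming an impermeable boundary, so that the integral of $\nabla \cdot \rho u$ over $\Omega$ vanishes) integrate it over the computational domain: \[ c(h)\, \int_\Omega \left[ \rho - \frac M {|\Omega|} \right] {\rm d}x = 0 \qquad \mbox{and thus, since } c(h)>0, \qquad \int_\Omega \rho \, {\rm d}x = M \]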
Examining now the assumptions for the equation of state in theorem \ref{existence_gene}, we see that our results hold with equations of state of the form: \[ \varrho(p)= p^{1/\gamma} \qquad \mbox{or, equivalently,} \qquad p=\rho^\gamma, \qquad \mbox{where } \gamma > 1 \] In this case, the elastic potential is given by equation \eqref{elasticpot}, which yields: \[ P(\rho)=\frac 1 {\gamma -1}\ \rho^{\gamma -1},\qquad \rho P(\rho)= \frac 1 {\gamma -1}\ \rho^\gamma \quad (=\frac 1 {\gamma -1}\ p) \] The same conclusion still holds with $\gamma=1$ (\ie \ $p=\rho$), with $P(\rho)=\log(\rho)$ satisfying equation \eqref{elasticpot_gene}. The case $\gamma > 1$ is for instance encountered for isentropic perfect gas flows, whereas $\gamma=1$ corresponds to the isothermal case. It is worth noting that this range of application is larger than for the continuous case, where the existence of a solution is known only for $\gamma > d/2$ \cite{pll-98-mat, fei-04-dyn, nov-04-int}. \section{A pressure correction scheme} \label{sec:scheme} In this section, we build a pressure correction numerical scheme for the solution of the compressible barotropic Navier-Stokes equations \eqref{nsb}, based on the low order non-conforming finite element spaces used in the previous section, namely the Crouzeix-Raviart or Rannacher-Turek elements. The presentation is organized as follows. First, we write the scheme in the time semi-discrete setting (section \ref{subsec:sd}). Then we prove a general stability estimate which applies to finite volume discretizations of the convection operator (section \ref{subsec:conv}).
The proposed scheme is built in such a way that the assumptions of this stability result hold (section \ref{subsec:mom}); this requires, first, performing a prediction of the density, as a non-standard first step of the algorithm and, second, discretizing the convection terms in the momentum balance equation by a finite volume technique specially designed for this purpose. The discretization of the projection step (section \ref{subsec:proj}) also combines the finite element and finite volume methods, in such a way that the theory developed in section \ref{sec:darcy} applies; in particular, the proposed discretization makes it possible to take advantage of the pressure or density control induced by the pressure work, \ie\ to apply theorem \ref{VF2}. The remaining steps of the algorithm are described in section \ref{subsec:renorm} and an overview of the scheme is given in section \ref{subsec:overview}. The following section (section \ref{subsec:stab}) is devoted to the proof of the stability of the algorithm. Finally, we briefly address some implementation difficulties (section \ref{subsec:impl}), then provide some numerical tests (section \ref{subsec:num-exp}) performed to assess the time and space convergence of the scheme. \subsection{Time semi-discrete formulation}\label{subsec:sd} Let us consider a partition $0=t_0 < t_1 <\ldots < t_n=T$ of the time interval $(0,T)$, which, for the sake of simplicity, we suppose uniform. Let $\delta t$ be the constant time step $\delta t=t_{k+1}-t_k$ for $k=0,1,\ldots,n-1$.
In a time semi-discrete setting, the scheme considered in this paper reads: \vskip-.6cm \begin{eqnarray} && \mbox{1 -- Solve for } \tilde \rho^{n+1}\mbox{\ : \quad} \frac{\tilde \rho^{n+1}-\rho^{n}}{\delta t}+\nabla \cdot (\tilde \rho^{n+1}\,u^{n})=0 \label{stab-scheme-1}\\ && \mbox{2 -- Solve for } \tilde p^{n+1}\mbox{\ : \quad} -\nabla \cdot \left(\frac{1}{\tilde \rho ^{n+1}} \nabla \tilde p^{n+1}\right) = -\nabla \cdot \left(\frac{1}{\sqrt{\tilde \rho^{n+1}}\sqrt{\tilde \rho^{n}}}\nabla p^n \right) \label{stab-scheme-2}\\ && \mbox{3 -- Solve for } \tilde u^{n+1}\mbox{\ : \quad} \nonumber \\ && \quad \frac{\tilde \rho^{n+1}\, \tilde u^{n+1}-\rho^{n}\,u^{n}}{\delta t} +\nabla \cdot (\tilde \rho^{n+1}\,u^n\otimes\tilde u^{n+1}) +\nabla \tilde p^{n+1} -\nabla \cdot \tau(\tilde u^{n+1}) = f^{n+1} \label{stab-scheme-3}\\ && \mbox{4 -- Solve for } \bar u^{n+1},\, p^{n+1},\, \rho^{n+1}\mbox{\ : \quad} \nonumber \\ && \qquad \left| \begin{array}{l} \displaystyle \tilde \rho^{n+1}\,\frac{\bar u^{n+1}- \tilde u^{n+1}}{\delta t} + \nabla (p^{n+1}-\tilde p^{n+1})=0 \\ \displaystyle \frac{\varrho(p^{n+1})-\rho^n}{\delta t} + \nabla \cdot \left( \varrho(p^{n+1})\, \bar u^{n+1} \right)=0 \\ \displaystyle \rho^{n+1}=\varrho(p^{n+1}) \end{array} \right . \label{stab-scheme-4}\\ && \mbox{5 -- Compute } u^{n+1} \mbox{ given by: \quad} \sqrt{\rho^{n+1}}\,u^{n+1}=\sqrt{\tilde \rho^{n+1}}\bar u^{n+1} \label{stab-scheme-5} \end{eqnarray} The first step is a prediction of the density, used for the discretization of the time derivative of the momentum. As remarked by Wesseling \textit{et al} \cite{bij-98-uni, wes-01-pri}, this step can be avoided when solving the Euler equations: in this case, the mass flowrate may be chosen as an unknown, using the explicit velocity as an advective field in the discretization of the convection term in the momentum balance; the velocity is then updated by dividing by the density at the end of the time step. 
For viscous flows, if the discretization of the diffusion term is chosen to be implicit, both the mass flowrate and the velocity appear as unknowns in the momentum balance; this seems to impede the use of this trick. Let us emphasize that the way step 1 is carried out (\textit{i.e.} solving a discretization of the mass balance instead of, for instance, performing a Richardson extrapolation) is crucial for the stability. Likewise, the second step is a renormalization of the pressure, the interest of which is clarified only by the stability analysis. A similar technique has already been introduced by Guermond and Quartapelle for variable density incompressible flows \cite{gue-00-proj}. Step 3 consists in a classical semi-implicit solution of the momentum equation to obtain a predicted velocity. Step 4 is a nonlinear pressure correction step, which degenerates into the usual projection step used in incompressible flow solvers when the density is constant (e.g. \cite{mar-98-nav}). Taking the divergence of the first relation of (\ref{stab-scheme-4}) and using the second one to eliminate the unknown velocity $\bar u^{n+1}$ yields a non-linear elliptic problem for the pressure. This computation is formal in the semi-discrete formulation, but is, of course, made precise at the algebraic level, as described in section \ref{subsec:impl}. Once the pressure is computed, the first relation yields the updated velocity and the third one gives the end-of-step density.
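For the reader's convenience, let us write this formal computation down: substituting $\bar u^{n+1}= \tilde u^{n+1} - \frac{\delta t}{\tilde \rho^{n+1}}\, \nabla (p^{n+1}-\tilde p^{n+1})$, given by the first relation of \eqref{stab-scheme-4}, into the second one yields the (formal, boundary conditions being left aside) non-linear elliptic problem: \[ \frac{\varrho(p^{n+1})-\rho^n}{\delta t} + \nabla \cdot \left( \varrho(p^{n+1})\, \tilde u^{n+1} \right) - \delta t\ \nabla \cdot \left( \frac{\varrho(p^{n+1})}{\tilde \rho^{n+1}}\, \nabla (p^{n+1}-\tilde p^{n+1}) \right) = 0 \]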
\subsection{Stability of the advection operator: a finite-volume result} \label{subsec:conv} The aim of this section is to state and prove a discrete analogue to the stability identity \eqref{stab-arg}-$(i)$, which may be written for any sufficiently regular functions $\rho$, $z$ and $u$ as follows: \[ \int_\Omega \left[ \frac{\partial \rho z}{\partial t} + \nabla \cdot (\rho z u)\right ] z \, {\rm d}x = \frac{1}{2} \, \frac{d}{dt} \int_\Omega \rho z^2 \, {\rm d}x \label{VF1_continu} \] and holds provided that the following balance is satisfied by $\rho$ and $u$: \[ \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho u) =0 \] As stated in the introduction, applying this identity to each component of the velocity yields the central argument of the proof of the kinetic energy theorem. The discrete analogue to this identity is the following. \begin{thrm}[Stability of the advection operator]\label{VF1}Let $(\rho_K^\ast)_{K\in {\cal M}}$ and $(\rho_K)_{K\in {\cal M}}$ be two families of positive real numbers satisfying the following set of equations: \begin{equation} \forall K \in {\cal M},\qquad \frac{|K|}{\delta t} \ (\rho_K - \rho^\ast_K) + \sum_{\sigma=K|L} F_{\edge,K}=0 \label{mass_bal}\end{equation} where $F_{\edge,K}$ is a quantity associated to the edge $\sigma$ and to the control volume $K$; we suppose that, for any internal edge $\sigma=K|L$, $F_{\edge,K}=-F_{\edge,L}$. Let $(z_K^\ast)_{K\in {\cal M}}$ and $(z_K)_{K\in {\cal M}}$ be two families of real numbers. For any internal edge $\sigma=K|L$, we define $z_\sigma$ either by $z_\sigma=\frac 1 2 (z_K +z_L)$, or by $z_\sigma=z_K$ if $F_{\edge,K} \geq 0$ and $z_\sigma=z_L$ otherwise. The first choice is referred to as the "centered choice", the second one as the "upwind choice".
In both cases, the following stability property holds: \begin{equation} \sum_{K \in {\cal M}} z_K \left[ \frac{|K|}{\delta t}\ (\rho_K\, z_K -\rho_K^\ast\, z_K^\ast)+ \sum_{\sigma=K|L} F_{\edge,K} \, z_\sigma \right] \geq \frac{1}{2}\ \sum_{K\in{\cal M}} \frac{|K|}{\delta t}\ \left[ \rho_K\, z_K^2 -\rho_K^\ast\, {z_K^\ast}^2\right] \label{ecinetique} \end{equation} \end{thrm} \begin{proof} We write: \[ \sum_{K \in {\cal M}} z_K \left[ \frac{|K|}{\delta t}\ (\rho_K\, z_K -\rho_K^\ast\, z_K^\ast)+ \sum_{\sigma=K|L} F_{\edge,K} \, z_\sigma \right] =T_1 + T_2 \] where $T_1$ and $T_2$ read: \[ T_1= \sum_{K \in {\cal M}} \frac{|K|}{\delta t}\ z_K\ (\rho_K\, z_K -\rho_K^\ast\, z_K^\ast), \hspace{10ex} T_2=\sum_{K \in {\cal M}} z_K \left[ \sum_{\sigma=K|L} F_{\edge,K} \, z_\sigma \right] \] The first term reads: \[ T_1= \sum_{K \in {\cal M}} \frac{|K|}{\delta t}\ \left[z_K^2\, (\rho_K-\rho_K^\ast) + \rho_K^\ast \, z_K\, (z_K -z_K^\ast) \right] \] Developing the last term by the identity $a(a-b)=\frac 1 2 ( a^2 + (a-b)^2 -b^2)$, we get: \[ T_1= \underbrace{\sum_{K \in {\cal M}} \frac{|K|}{\delta t}\ z_K^2\, (\rho_K-\rho_K^\ast)}_{\displaystyle T_{1,1}} + \underbrace{\frac 1 2 \sum_{K \in {\cal M}} \frac{|K|}{\delta t}\ \rho_K^\ast\, (z_K^2 - {z_K^\ast}^2)}_{\displaystyle T_{1,2}} + \underbrace{\frac 1 2 \sum_{K \in {\cal M}} \frac{|K|}{\delta t}\ \rho_K^\ast\, (z_K - z_K^\ast)^2}_{\displaystyle T_{1,3}} \] The last term, namely $T_{1,3}$, is always non-negative and can be seen as a dissipation associated to the backward time discretization of equation \eqref{ecinetique}. We now turn to $T_2$: \[ T_2=\underbrace{\sum_{K \in {\cal M}} z_K^2 \left[ \sum_{\sigma=K|L} F_{\edge,K}\right]}_{\displaystyle T_{2,1}}+ \underbrace{\sum_{K \in {\cal M}} z_K \left[ \sum_{\sigma=K|L} F_{\edge,K} \, (z_\sigma-z_K) \right]}_{\displaystyle T_{2,2}} \] The first term, namely $T_{2,1}$, will cancel with $T_{1,1}$ by equation \eqref{mass_bal}.
The second term reads, developing as previously the quantity $z_K\, (z_\sigma-z_K)$: \[ T_{2,2}= -\frac 1 2 \sum_{K \in {\cal M}} z_K^2 \left[ \sum_{\sigma=K|L} F_{\edge,K} \right] - \underbrace{\frac 1 2 \sum_{K \in {\cal M}} \left[ \sum_{\sigma=K|L} F_{\edge,K}\ [(z_\sigma-z_K)^2 - z_\sigma^2] \right]}_{\displaystyle T_{2,3}} \] Reordering the sum in the last term, we have, as $F_{\edge,K}=-F_{\edge,L}$: \[ T_{2,3}=\frac 1 2 \sum_{\sigma \in {\cal E}_{{\rm int}}\ (\sigma=K|L)} F_{\edge,K}\ [(z_\sigma-z_K)^2-(z_\sigma-z_L)^2] \] This expression can easily be seen to vanish with the centered choice. With the upwind choice, supposing without loss of generality that we have chosen for the edge $\sigma=K|L$ the orientation such that $F_{\edge,K} \geq 0$, we get, as $z_\sigma=z_K$: \[ T_{2,3}=- \frac 1 2 \sum_{\sigma \in {\cal E}_{{\rm int}}\ (\sigma=K|L)} F_{\edge,K}\ (z_K-z_L)^2 \leq 0 \] We thus have, by equation \eqref{mass_bal}: \[ T_{2,2} \geq -\frac 1 2 \sum_{K \in {\cal M}} z_K^2 \left[ \sum_{\sigma=K|L} F_{\edge,K} \right] =\frac 1 2 \sum_{K \in {\cal M}} \frac{|K|}{\delta t}\ z_K^2\, (\rho_K - \rho_K^\ast) \] and thus: \[ T_1 + T_2 \geq \frac 1 2 \sum_{K \in {\cal M}} \frac{|K|}{\delta t} \left[z_K^2\, (\rho_K - \rho_K^\ast) + \rho_K^\ast\, (z_K^2 - {z_K^\ast}^2)\right] \] which concludes the proof.
\end{proof} \begin{rmrk} Equation \eqref{mass_bal} can be seen as a discrete mass balance, with $F_{\edge,K}$ standing for the mass flux across the edge $\sigma$, and the left hand side of \eqref{ecinetique} may be derived by the multiplication by $z_K$ and summation over the control volumes of the transport terms in a discrete balance equation for the quantity $\rho z$, reading: \[ \forall K \in {\cal M}, \qquad \frac{|K|}{\delta t}\ (\rho_K\, z_K -\rho_K^\ast\, z_K^\ast)+ \sum_{\sigma=K|L} F_{\edge,K} \, z_\sigma +\cdots \mbox{[possible diffusion terms]} \cdots =0 \] In this context, the relation \eqref{mass_bal} is known to be exactly the compatibility condition which ensures a discrete maximum principle for the solution $z$ of this transport equation, provided that the upwind choice (or any monotone choice) is made for the expression of $z_\sigma$ \cite{lar-91-how}. We proved here that the same compatibility condition ensures a $\xLtwo$ stability for $\rho^{1/2} z$. \end{rmrk} \subsection{Spatial discretization of the density prediction and the momentum balance equation}\label{subsec:mom} The main difficulty in the discretization of the momentum balance equation is to build a discrete convection operator which enjoys the stability property \eqref{stab-arg}-$(i)$. For this purpose, we derive for this term a finite volume discretization which satisfies the assumptions of theorem \ref{VF1}. The natural space discretization for the density is the same as for the pressure, \ie \ piecewise constant functions over each element. For the Rannacher-Turek element, this legitimates a standard mass lumping of the time derivative term, since no additional accuracy is to be expected from a more precise integration. For the Crouzeix-Raviart element, the mass matrix is already diagonal.
Let the quantity $|D_\sigma|$ be defined as follows: \begin{equation} |D_\sigma| \eqdef \int_\Omega \varphi_\sigma \, {\rm d}x >0 \label{int-phis}\end{equation} For the Crouzeix-Raviart element, the contribution to $|D_\sigma|$ of each element $K$ adjacent to $\sigma$ can be identified with the measure of the cone with basis $\sigma$ and with the mass center of $K$ as opposite vertex. The same property holds for the Rannacher-Turek element in the case of quadrangles ($d=2$) or cuboids ($d=3$), which are the only cases considered here, even though extensions to non-perpendicular grids are probably possible. For each internal edge $\sigma=K|L$, this conic volume is denoted by $D_{K,\sigma}$; the volume $D_\sigma= D_{K,\sigma}\cup D_{L,\sigma}$ is referred to as the "diamond cell" associated to $\sigma$, and $D_{K,\sigma}$ is the half-diamond cell associated to $\sigma$ and $K$ (see figure \ref{diamonds}). The measure of $D_{K,\sigma}$ is denoted by $|D_{K,\sigma}|$. The discretization of the term $\rho^n \, u^n$ thus leads, in the equations associated to the velocity on $\sigma$, to an expression of the form $\rho^n_\sigma \, u^n_\sigma$, where $\rho^n_\sigma$ results from an average of the values taken by the density in the two elements adjacent to $\sigma$, weighted by the measures of the half-diamonds: \begin{equation} \forall \sigma \in {\cal E}_{{\rm int}},\qquad |D_\sigma|\ \rho^n_\sigma= |D_{K,\sigma}|\ \rho^n_K + |D_{L,\sigma}|\ \rho^n_L \label{rho_faces}\end{equation} \begin{figure} \caption{Dual finite volume mesh: the so-called "diamond cells".} \label{diamonds} \end{figure} In order to satisfy the compatibility condition introduced in the previous section, a prediction of the density is first performed, by a finite volume discretization of the mass balance equation, taking the diamond cells as control volumes: \begin{equation} \forall \sigma \in {\cal E}_{{\rm int}},\qquad \frac{|D_\sigma|}{\delta t}\ (\tilde \rho_\sigma^{n+1}- \rho^n_\sigma) + \sum_{\varepsilon \in {\cal E}(D_\sigma)} F_{\edged,\edge}^{n+1}=0
\label{mass-diamond}\end{equation} where ${\cal E}(D_\sigma)$ is the set of the edges of $D_\sigma$ and $F_{\edged,\edge}$ stands for the mass flux across $\varepsilon$ outward $D_\sigma$. This latter quantity is expressed as follows: \[ F_{\edged,\edge}^{n+1}=|\varepsilon|\ u_{\varepsilon}^n\cdot n_{\varepsilon,\sigma}\ \tilde \rho_{\varepsilon}^{n+1} \] where $|\varepsilon|$ is the measure of $\varepsilon$, $n_{\varepsilon,\sigma}$ is the normal to $\varepsilon$ outward $D_\sigma$, the velocity $u_{\varepsilon}^n$ is obtained by interpolation of $u^n$ at the center of $\varepsilon$ (using the standard finite element expansion) and $\tilde \rho_{\varepsilon}^{n+1}$ is the density at the edge, calculated by the standard upwinding technique (\ie\ either $\tilde \rho_\sigma^{n+1}$ if $u_{\varepsilon}^n\cdot n_{\varepsilon,\sigma} \geq 0$ or $\tilde \rho_{\sigma'}^{n+1}$ otherwise, with $\sigma'$ such that $\varepsilon$ separates $D_\sigma$ and $D_{\sigma'}$, which we denote by $\varepsilon=D_\sigma | D_{\sigma'}$). The discretization of the convection terms of the momentum balance equation is built from relation \eqref{mass-diamond}, according to the structure which is necessary to apply theorem \ref{VF1}.
This yields the following discrete momentum balance equation: \begin{equation} \begin{array}{l} \displaystyle \frac{|D_\sigma|}{\delta t}\ (\tilde \rho_\sigma^{n+1} \tilde u_{\sigma,i}^{n+1}- \rho^n_\sigma u_{\sigma,i}^n) + \sum_{\stackrel{\scriptstyle \varepsilon \in {\cal E}(D_\sigma),}{\scriptstyle \varepsilon=D_\sigma | D_{\sigma'}}}\frac 1 2\ F_{\edged,\edge}^{n+1}\ (\tilde u_{\sigma,i}^{n+1} + \tilde u_{\sigma',i}^{n+1}) \\ \hspace{8ex}\displaystyle + \int_{\Omega,h} \tau(\tilde u^{n+1}) : \nabla \varphi_\sigma^{(i)} \, {\rm d}x -\int_{\Omega,h} \tilde p^{n+1}\ \nabla \cdot \varphi_\sigma^{(i)} \, {\rm d}x = \int_\Omega f^{n+1} \cdot \varphi_\sigma^{(i)} \, {\rm d}x \hspace{10ex} \sigma \in {\cal E}_{{\rm int}},\ 1 \leq i \leq d \end{array} \label{momentum}\end{equation} where we recall that $\varphi_\sigma^{(i)}$ is the vector shape function associated to $v_{\sigma, i}$, which reads $\varphi_\sigma^{(i)}=\varphi_\sigma \, e_i$, where $e_i$ is the $i^{th}$ vector of the canonical basis of $\xR^d$ and $\varphi_\sigma$ the scalar shape function, and the notation $\int_{\Omega,h}$ means $\sum_{K\in{\cal M}} \int_K$. Note that, for Crouzeix-Raviart elements, a combined finite volume/finite element method, similar to the technique employed here for the discretization of the momentum balance, has already been analysed for a transient non-linear convection-diffusion equation by Feistauer and co-workers \cite{ang-98-ana, dol-02-err, fei-03-mat}.
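The convection discretization above is tailored precisely so that theorem \ref{VF1} applies with the diamond cells as control volumes. As a sanity check, inequality \eqref{ecinetique} can also be verified numerically; the following sketch (a toy one-dimensional periodic mesh with random data of our own, in no way part of the scheme) builds a density update satisfying the discrete mass balance and checks the upwind case:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 20                        # number of cells of a 1D periodic mesh
vol = np.full(M, 1.0 / M)     # cell measures |K|
dt = 1.0e-3                   # time step

rho_star = rng.uniform(0.5, 2.0, M)    # densities rho*_K > 0
F = rng.uniform(-1.0, 1.0, M)          # F[i]: flux through the face between cells i and i+1

# enforce the discrete mass balance: |K|/dt (rho_K - rho*_K) + sum_sigma F_{sigma,K} = 0;
# for cell i the outgoing fluxes are F[i] (right face) and -F[i-1] (left face)
rho = rho_star - dt / vol * (F - np.roll(F, 1))
assert (rho > 0.0).all()               # dt is small enough to keep rho positive

z_star = rng.uniform(-1.0, 1.0, M)     # advected quantity at the two time levels
z = rng.uniform(-1.0, 1.0, M)
z_face = np.where(F >= 0.0, z, np.roll(z, -1))   # upwind choice for z_sigma

# left and right hand sides of the stability inequality
conv = F * z_face - np.roll(F * z_face, 1)       # sum_sigma F_{sigma,K} z_sigma, cell by cell
lhs = np.sum(z * (vol / dt * (rho * z - rho_star * z_star) + conv))
rhs = 0.5 * np.sum(vol / dt * (rho * z ** 2 - rho_star * z_star ** 2))
assert lhs >= rhs - 1.0e-10
```

With the centered choice for `z_face`, the same inequality holds, the dissipation then coming only from the backward time discretization.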
\subsection{Spatial discretization of the projection step}\label{subsec:proj} The fully discrete projection step used in the described algorithm reads: \begin{equation} \left| \begin{array}{ll} \displaystyle |D_\sigma|\,\frac{\tilde \rho_\sigma^{n+1}}{\delta t} \, (\bar u_{\sigma,i}^{n+1} - \tilde u_{\sigma,i}^{n+1}) + \int_{\Omega,h} (p^{n+1}-\tilde p^{n+1})\ \nabla \cdot \varphi_\sigma^{(i)} \, {\rm d}x = 0 \hspace{12ex} & \displaystyle \sigma \in {\cal E}_{{\rm int}},\ 1 \leq i \leq d \\[2ex] \displaystyle |K|\ \frac{\varrho(p^{n+1}_K)-\rho^n_K}{\delta t} + \sum_{\sigma=K|L} (\mathrm{v}_{\sigma,K}^+)^{n+1}\, \varrho(p_K^{n+1}) - (\mathrm{v}_{\sigma,K}^-)^{n+1}\, \varrho(p_L^{n+1})=0 & \displaystyle K \in {\cal M} \end{array}\right . \label{darcy-disc-un}\end{equation} where $(\mathrm{v}_{\sigma,K}^+)^{n+1}$ and $(\mathrm{v}_{\sigma,K}^-)^{n+1}$ stand respectively for $\max (\mathrm{v}_{\sigma,K}^{n+1},\ 0)$ and $\max ( -\mathrm{v}_{\sigma,K}^{n+1},\ 0)$ with $\mathrm{v}_{\sigma,K}^{n+1}=|\sigma|\, \bar u_\sigma^{n+1} \cdot n_{KL}$. The first (vector) equation may be seen as the finite element discretization of the first relation of the projection step \eqref{stab-scheme-4}, with the same lumping of the mass matrix for the Rannacher-Turek element as in the prediction step. As the pressure is piecewise constant, the finite element discretization of the second relation of \eqref{stab-scheme-4}, \ie\ the mass balance, is equivalent to a finite volume formulation, in which we introduce the standard first-order upwinding.
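One elementary consequence of this upwind structure is local conservativity: for any internal edge $\sigma=K|L$, the mass flux leaving $K$ is exactly the flux entering $L$, so that summing the discrete mass balance over the control volumes leaves only the time-derivative terms. A minimal numerical illustration (again a toy one-dimensional periodic mesh with data of our own; `rho_new` plays the role of $\varrho(p^{n+1})$):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 16                         # number of cells of a 1D periodic mesh
vol = np.full(M, 1.0 / M)      # |K|
dt = 1.0e-2
rho_old = rng.uniform(0.5, 2.0, M)   # rho^n_K
rho_new = rng.uniform(0.5, 2.0, M)   # stands for varrho(p^{n+1}_K)

# v[i]: volumetric flowrate |sigma| u_sigma . n through the face between cells i and i+1
v = rng.uniform(-1.0, 1.0, M)
vp, vm = np.maximum(v, 0.0), np.maximum(-v, 0.0)

# residual of the upwind mass balance, cell by cell:
# outgoing flux through the right face of cell i:  vp[i]*rho_new[i] - vm[i]*rho_new[i+1]
# outgoing flux through the left  face of cell i:  vm[i-1]*rho_new[i] - vp[i-1]*rho_new[i-1]
flux_right = vp * rho_new - vm * np.roll(rho_new, -1)
flux_left = np.roll(vm, 1) * rho_new - np.roll(vp, 1) * np.roll(rho_new, 1)
residual = vol / dt * (rho_new - rho_old) + flux_right + flux_left

# the fluxes cancel two by two over the faces: globally, only the time term remains
assert abs(np.sum(residual) - np.sum(vol / dt * (rho_new - rho_old))) < 1.0e-10
```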
Exploiting the expression of the velocity and pressure shape functions, the first set of relations of this system can be alternatively written as follows: \begin{equation} |D_\sigma|\,\frac{\tilde \rho_\sigma^{n+1}}{\delta t} \, (\bar u_\sigma^{n+1}-\tilde u_\sigma^{n+1}) + |\sigma| \, \left[ (p_K^{n+1}-\tilde p_K^{n+1})-(p_L^{n+1}-\tilde p_L^{n+1}) \right]\, n_{KL} = 0 \qquad \forall \sigma\in{\cal E}_{{\rm int}}, \ \sigma=K|L \label{darcy-disc-deux}\end{equation} or, in an algebraic setting: \begin{equation} \frac{1}{\delta t} \, {\rm M}_{\tilde \rho^{n+1}} \, (\bar u^{n+1} - \tilde u^{n+1}) + {\rm B}^t \, (p^{n+1}-\tilde p^{n+1}) = 0 \label{darcy-alg}\end{equation} In this relation, ${\rm M}_{w}$ stands for the diagonal mass matrix weighted by $(w_\sigma)_{\sigma \in {\cal E}_{{\rm int}}}$ (so, for $1\leq i \leq d$ and $\sigma \in {\cal E}_{{\rm int}}$, the corresponding entry on the diagonal of ${\rm M}_{\tilde \rho^{n+1}}$ reads $({\rm M}_{\tilde \rho^{n+1}})_{\sigma,i}= |D_\sigma|\, \tilde \rho_\sigma^{n+1}$), ${\rm B}^t$ is the matrix of $\xR^{(d\,N) \times M}$, where $N$ is the number of internal edges (\ie\ $N={\rm card}\,({\cal E}_{{\rm int}})$) and $M$ is the number of control volumes in the mesh (\ie\ $M={\rm card}\,({\cal M})$), associated to the gradient operator; consequently, the matrix ${\rm B}$ is associated to the opposite of the divergence operator. Throughout this section, we use the same notation for the discrete function (defined as usual in the finite element context by its expansion using the shape functions) and for the vector gathering the degrees of freedom; so, in relation \eqref{darcy-alg}, $\bar u$ stands for the vector of $\xR^{d\,N}$ with components $\bar u_{\sigma,i},\ 1\leq i \leq d,\ \sigma \in {\cal E}_{{\rm int}}$, and $p$ stands for the vector of $\xR^M$ with components $p_K,\ K \in {\cal M}$. Both forms \eqref{darcy-disc-deux} and \eqref{darcy-alg} are used hereafter. We have the following existence result.
\begin{prpstn} Let the equation of state $\varrho(\cdot)$ be such that $\varrho(0)=0$, $\lim_{z \rightarrow + \infty} \varrho(z)= + \infty$ and there exists an elastic potential function $P(\cdot)$ such that the function $z \mapsto z\,P(z)$ is bounded from below in $(0, + \infty)$, once continuously differentiable and strictly convex. Then the nonlinear system \eqref{darcy-disc-un} admits at least one solution and any possible solution is such that $p_K >0$, $\forall K \in {\cal M}$ (or, equivalently, $\rho_K >0$, $\forall K \in {\cal M}$). \end{prpstn} \begin{proof} Let us suppose that $\tilde \rho_\sigma^{n+1} >0$, $\forall \sigma \in {\cal E}_{{\rm int}}$. Then the theory of section \ref{sec:darcy} applies, with: \[ \normsd{\bar u^{n+1}}=\sum_{\sigma \in {\cal E}_{{\rm int}}} |D_\sigma|\,\frac{\tilde \rho_\sigma^{n+1}}{\delta t} \, (\bar u_\sigma^{n+1})^2 \] This yields both the existence of a solution and the positivity of the pressure. In view of the form of the discrete density prediction \eqref{mass-diamond}, this latter property extends by induction to any time step of the computation (provided, of course, that the initial density is positive everywhere). \end{proof} We finish this section with some remarks concerning the projection step at hand. \begin{lmm}\label{bmbt}The following identity holds for each discrete pressure $q \in L_h$: \[ \forall K \in {\cal M}, \qquad ({\rm B}\,{\rm M}_{\tilde \rho^{n+1}}^{-1}\, {{\rm B}}^t\, q)_K = \sum_{\sigma=K|L} \frac{1}{\tilde \rho_\sigma^{n+1}}\ \frac{|\sigma|^2}{|D_\sigma|}\, (q_K-q_L) \] \end{lmm} \begin{proof} Let $q \in L_h$ be given. By relation \eqref{darcy-disc-deux}, we have: \[ ({\rm B}^t\,q)_{\sigma,i}=|\sigma|\ (q_K-q_L)\,n_{KL} \cdot e_i \] Let $1_K \in L_h$ be the characteristic function of $K$.
Denoting by $(\cdot,\cdot )$ the standard Euclidean scalar product, by the previous relation and the definition of the lumped velocity mass matrix, we obtain: \[ \begin{array}{ll} \displaystyle ({\rm B}\,{\rm M}_{\tilde \rho}^{-1}\, {\rm B}^t\, q,\ 1_K) & \displaystyle =({\rm M}_{\tilde \rho}^{-1}\, {\rm B}^t\, q, \ {\rm B}^{t}\,1_K) \\ & \displaystyle =\sum_{\sigma \in {\cal E}_{{\rm int}}} \quad \sum_{i=1}^d \quad \frac{1}{\tilde\rho_{\sigma}^{n+1} |D_\sigma|} ({\rm B}^{t}\,q)_{\sigma,i} \ ({\rm B}^{t}\,1_K)_{\sigma,i} \\ & \displaystyle = \sum_{\sigma \in {\cal E}_{{\rm int}},\ \sigma=K|L} \quad \sum_{i=1}^d \quad \frac{1}{\tilde\rho_{\sigma}^{n+1} |D_\sigma|} \left[ |\sigma|\, (q_K-q_L)\ n_{KL}\cdot e_i \right] \left[ |\sigma|\, n_{KL}\cdot e_i\right] \end{array} \] which, remarking that $\sum_{i=1}^d (n_{KL}\cdot e_i)^2=1$, yields the result. \end{proof} \begin{rmrk}[On spurious pressure boundary conditions] In the context of projection methods for incompressible flow, it is known that spurious boundary conditions are to be imposed on the pressure in the projection step, in order to make the definition of this step of the algorithm complete. These boundary conditions are explicit when the process to derive the projection step is first to pose the problem at the time semi-discrete level and then to discretize it in space; for instance, with a constant density equal to one and prescribed velocity boundary conditions on $\partial \Omega$, the semi-discrete projection step would take the form: \[ \left| \begin{array}{ll} \displaystyle - \Delta\, (p^{n+1}-\tilde p^{n+1})=- \frac 1 {\delta t}\, \nabla \cdot \tilde u^{n+1} \hspace{5ex} & \displaystyle \mbox{in } \Omega \\[2ex] \displaystyle \nabla \, (p^{n+1}-\tilde p^{n+1}) \cdot n =0 & \displaystyle \mbox{on } \partial \Omega \end{array} \right .
\] When the elliptic problem for the pressure is built at the algebraic level, the boundary conditions for the pressure are somehow hidden in the discrete operator ${\rm B}\,{\rm M}^{-1}\, {{\rm B}}^t$. Lemma \ref{bmbt} shows that this matrix takes the form of a discrete finite volume Laplace operator, with homogeneous Neumann boundary conditions, \ie \ the same boundary conditions as in the time semi-discrete problem stated above. \label{spurious-BC}\end{rmrk} \begin{rmrk}[On the non-consistency of the discretization at hand for the Darcy problem] Considering the semi-discrete problem \eqref{stab-scheme-4}, taking $\tilde \rho_\sigma^{n+1}=1,\ \forall \sigma \in {\cal E}_{{\rm int}}$, one could expect to recover a consistent discretization of a Poisson problem with homogeneous Neumann boundary conditions. The following example shows that this route is misleading. Let us take for the mesh a uniform square grid of step $h$. The coefficient ${|\sigma|^2}/{|D_\sigma|}$ can be easily evaluated, and we obtain: \[ ({\rm B}\,{\rm M}^{-1}\, {{\rm B}}^t\, q)_K = d \sum_{\sigma=K|L} \frac{|\sigma|}{h}\, (q_K-q_L) \] that is the usual finite volume Laplace operator, but multiplied by the space dimension $d$. This result is of course consistent with the well-known non-consistency of the Rannacher-Turek element for the Darcy problem, and similar examples could also be given for simplicial grids, with the Crouzeix-Raviart element. \label{non-consistent!}\end{rmrk} \subsection{Renormalization steps}\label{subsec:renorm} The pressure renormalization (step 2 of the algorithm) is introduced for stability reasons, and its form can only be clarified by the stability study.
It reads: \begin{equation} {\rm B}\,{\rm M}^{-1}_{\tilde \rho^{n+1}}\, {{\rm B}}^t \ \tilde p^{n+1} = {\rm B}\,{\rm M}^{-1}_{\sqrt{\tilde \rho^{n+1}}\,\sqrt{\rho^n}}\, {{\rm B}}^t \ p^n \label{p-renorm-mat}\end{equation} where the density at the face at time level $n$, needed for the definition of ${\rm M}^{-1}_{\sqrt{\tilde \rho^{n+1}}\,\sqrt{\rho^n}}$, is obtained from the density field $\rho^n \in L_h$ by equation \eqref{rho_faces}. In view of the expression of these operators provided by lemma \ref{bmbt}, this relation equivalently reads: \begin{equation} \forall K \in {\cal M}, \qquad \qquad \sum_{\sigma=K|L} \frac{1}{\tilde \rho_\sigma^{n+1}}\ \frac{|\sigma|^2}{|D_\sigma|}\, (\tilde p_K^{n+1}- \tilde p_L^{n+1})= \sum_{\sigma=K|L} \frac{1}{\sqrt{\tilde \rho_\sigma^{n+1}}\sqrt{\rho_\sigma^n}}\ \frac{|\sigma|^2}{|D_\sigma|}\, (p_K^n- p_L^n) \label{p-renorm}\end{equation} As ${\rm B}^t$ and ${\rm B}$ stand, respectively, for the discrete gradient and the (opposite of the) divergence operator, this system can be seen as a discretization of the semi-discrete expression of step 2; note however, as detailed in remark \ref{non-consistent!}, that this discretization is non-consistent.
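By lemma \ref{bmbt}, computing $\tilde p^{n+1}$ thus amounts to applying one weighted finite volume Laplace operator to $p^n$ and solving a linear system governed by another one. The following sketch illustrates this on a toy one-dimensional mesh (data and names are ours, not part of the scheme); since both operators vanish on constant pressures, the singular but compatible system is solved here in the least-squares sense, the renormalized pressure being defined up to an additive constant:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 12            # number of cells of a 1D mesh of step h
h = 1.0 / M
trans = 1.0 / h   # transmissibility |sigma|^2 / |D_sigma| of an internal face (1D)

def weighted_laplacian(w_face):
    """Matrix of B M_w^{-1} B^t as given by the lemma:
    (L q)_K = sum over internal faces of K of (1/w_sigma) trans (q_K - q_L)."""
    L = np.zeros((M, M))
    for i in range(M - 1):          # face i separates cells i and i+1
        a = trans / w_face[i]
        L[i, i] += a
        L[i, i + 1] -= a
        L[i + 1, i + 1] += a
        L[i + 1, i] -= a
    return L

rho_pred = rng.uniform(0.5, 2.0, M - 1)   # predicted face densities (tilde rho^{n+1})
rho_old = rng.uniform(0.5, 2.0, M - 1)    # face densities at t^n
p_old = rng.uniform(1.0, 2.0, M)          # pressure p^n

L1 = weighted_laplacian(rho_pred)
L2 = weighted_laplacian(np.sqrt(rho_pred * rho_old))
rhs = L2 @ p_old
# both operators vanish on constant pressures, so the system is singular but
# compatible (the right hand side has zero sum); lstsq returns one solution
p_renorm = np.linalg.lstsq(L1, rhs, rcond=None)[0]
assert abs(np.sum(rhs)) < 1.0e-10
assert np.allclose(L1 @ p_renorm, rhs)
```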
The velocity renormalization (step 5 of the algorithm) simply reads: \begin{equation} \forall \sigma \in {\cal E}_{{\rm int}}, \quad \sqrt{\rho^{n+1}_\sigma} \, u^{n+1}_\sigma = \sqrt{\tilde \rho^{n+1}_\sigma} \, \bar u^{n+1}_\sigma \hspace{10ex} \mbox{or} \qquad {\rm M}_{\sqrt{\rho^{n+1}}}\ u^{n+1} = {\rm M}_{\sqrt{\tilde \rho^{n+1}}}\ \bar u^{n+1} \label{v-renorm}\end{equation} \subsection{An overview of the algorithm}\label{subsec:overview} To sum up, the algorithm considered in this section is the following one: \begin{enumerate} \item Prediction of the density -- The density on the edges at $t^n$, $(\rho_\sigma^n)_{\sigma \in {\cal E}_{{\rm int}}}$, being given by \eqref{rho_faces}, compute $(\tilde \rho^{n+1}_\sigma)_{\sigma \in {\cal E}_{{\rm int}}}$ by the upwind finite volume discretization of the mass balance over the diamond cells \eqref{mass-diamond}. \item Renormalization of the pressure -- Compute a renormalized pressure $(\tilde p_K^{n+1})_{K\in{\cal M}}$ by equation \eqref{p-renorm}. \item Prediction of the velocity -- Compute $(\tilde u^{n+1}_\sigma)_{\sigma \in {\cal E}_{{\rm int}}}$ by equation \eqref{momentum}, obtained by a finite volume discretization of the transport terms over the diamond cells and a finite element discretization of the other terms. \item Projection step -- Compute $(\bar u^{n+1}_\sigma)_{\sigma \in {\cal E}_{{\rm int}}}$ and $(p_K^{n+1})_{K\in{\cal M}}$ from equation \eqref{darcy-disc-un}, obtained by a finite element discretization of the velocity correction equation and an upwind finite volume discretization of the mass balance (over the elements $K \in {\cal M}$). \item Renormalization of the velocity -- Compute $(u^{n+1}_\sigma)_{\sigma \in {\cal E}_{{\rm int}}}$ from equation \eqref{v-renorm}.
\end{enumerate} The existence of a solution to step 4 is proven above; the other problems are linear, and their well-posedness follows from standard coercivity arguments, using the fact that the discrete densities (\ie\ $\rho^n$ and $\tilde \rho^{n+1}$) are positive, provided that this property is satisfied by the initial condition. \subsection{Stability analysis} \label{subsec:stab} In this section, we use the following discrete norm and semi-norm: \begin{equation} \begin{array}{ll} \displaystyle \forall v \in W_h, \qquad & \displaystyle \normLdiscd{\tilde \rho}{v} = \sum_{\sigma \in {\cal E}_{{\rm int}}} |D_\sigma|\, \tilde \rho_\sigma \, v_\sigma^2 \\[2ex] \forall q \in L_h, \qquad & \displaystyle \snormundiscd{\tilde \rho}{q}= \sum_{\sigma \in {\cal E}_{{\rm int}},\ \sigma=K|L} \frac{1}{\tilde \rho_\sigma}\ \frac{|\sigma|^2}{|D_\sigma|}\, (q_K-q_L)^2 \end{array} \end{equation} where $\tilde \rho=(\tilde \rho_\sigma)_{\sigma \in {\cal E}_{{\rm int}}}$ is a family of positive real numbers. The function $\normLdiscd{\tilde \rho}{\cdot}$ defines a norm over $W_h$, and $\snormundiscd{\tilde \rho}{\cdot}$ can be seen as a weighted version of the $H^1$ semi-norm classical in the finite volume context \cite{eym-00-fin}. The links between this latter semi-norm and the problem at hand are clarified in the following lemma, which is a straightforward consequence of lemma \ref{bmbt}. \begin{lmm}\label{bmbt_norm}The following identity holds for each discrete pressure $q \in L_h$: \[ ({\rm B}\,{\rm M}_{\tilde \rho}^{-1}\, {{\rm B}}^t\, q,q)=\snormundiscd{\tilde \rho}{q} \] \end{lmm} We are now in a position to state the stability of the scheme under consideration.
\begin{thrm}[Stability of the scheme] Let the equation of state $\varrho(\cdot)$ be such that $\varrho(0)=0$, $\lim_{z \rightarrow + \infty} \varrho(z)= + \infty$ and there exists an elastic potential function $P(\cdot)$ satisfying \eqref{elasticpot_gene} such that the function $z \mapsto z\,P(z)$ is bounded from below in $(0, + \infty)$, once continuously differentiable and strictly convex. Let $(\tilde u^n)_{0\leq n \leq N}$, $(u^n)_{0\leq n \leq N}$, $(p^n)_{0\leq n \leq N}$ and $(\rho^n)_{0\leq n \leq N}$ be the solution to the considered scheme, with a zero forcing term. Then the following bound holds for $n < N$: \begin{equation} \begin{array}{l} \displaystyle \frac 1 2 \normLdiscd{\rho^{n+1}}{u^{n+1}} + \int_\Omega \rho^{n+1} P(\rho^{n+1}) \, {\rm d}x + \delta t \sum_{k=1}^{n+1} \int_{\Omega,h} \nabla \tilde u^k:\tau(\tilde u^k)\, {\rm d}x + \frac{\delta t^2}{2} \snormundiscd{\tilde \rho^{n+1}}{p^{n+1}} \\ \displaystyle \hspace{50ex} \leq \frac 1 2 \normLdiscd{\rho^0}{u^0} + \int_\Omega \rho^0 P(\rho^0) \, {\rm d}x + \frac{\delta t^2}{2} \snormundiscd{\tilde \rho^0}{p^0}\qquad \label{stabres}\end{array} \end{equation} \label{stab}\end{thrm} \begin{proof} Multiplying each equation of step 3 of the scheme \eqref{momentum} by the corresponding unknown (\textit{i.e.} the corresponding component of the velocity $\tilde u^{n+1}$ on the corresponding edge $\sigma$) and summing over the edges and the components yields, by virtue of the stability of the discrete advection operator (theorem \ref{VF1}): \begin{equation} \frac{1}{2\, \delta t} \normLdiscd{\tilde \rho^{n+1}}{\tilde u^{n+1}} - \frac{1}{2\, \delta t} \normLdiscd{\rho^n}{u^n} + \int_{\Omega,h} \tau(\tilde u^{n+1}): \nabla \tilde u^{n+1} \, {\rm d}x - \int_{\Omega,h} \tilde p^{n+1} \nabla \cdot \tilde u^{n+1} \, {\rm d}x \leq 0 \label{stab1} \end{equation} On the other hand, reordering equation \eqref{darcy-alg} and multiplying by ${\rm M}_{\tilde \rho^{n+1}}^{-1/2}$ (recall that ${\rm M}_{\tilde
\rho^{n+1}}$ is diagonal), we obtain: \[ \frac{1}{\delta t}\, {\rm M}_{\tilde \rho^{n+1}}^{1/2} \bar u^{n+1} + {\rm M}_{\tilde \rho^{n+1}}^{-1/2}\,{\rm B}^t\, p^{n+1} = \frac{1}{\delta t}\, {\rm M}_{\tilde \rho^{n+1}}^{1/2} \tilde u^{n+1} + {\rm M}_{\tilde \rho^{n+1}}^{-1/2}\,{\rm B}^t\, \tilde p^{n+1} \] Squaring this relation gives: \[ \begin{array}{l}\displaystyle \left(\frac{1}{\delta t}\, {\rm M}_{\tilde \rho^{n+1}}^{1/2} \bar u^{n+1} + {\rm M}_{\tilde \rho^{n+1}}^{-1/2}\,{\rm B}^t\, p^{n+1}, \ \frac{1}{\delta t} \, {\rm M}_{\tilde \rho^{n+1}}^{1/2} \bar u^{n+1} + {\rm M}_{\tilde \rho^{n+1}}^{-1/2}\,{\rm B}^t\, p^{n+1}\right) = \\ \displaystyle \hspace{20ex} \left(\frac{1}{\delta t}\, {\rm M}_{\tilde \rho^{n+1}}^{1/2} \tilde u^{n+1} + {\rm M}_{\tilde \rho^{n+1}}^{-1/2}\,{\rm B}^t\, \tilde p^{n+1}, \ \frac{1}{\delta t}\, {\rm M}_{\tilde \rho^{n+1}}^{1/2} \tilde u^{n+1} + {\rm M}_{\tilde \rho^{n+1}}^{-1/2}\,{\rm B}^t\, \tilde p^{n+1}\right) \end{array} \] which reads: \[ \begin{array}{l} \displaystyle \frac{1}{\delta t^{2}}\, \left( {\rm M}_{\tilde \rho^{n+1}} \bar u^{n+1}, \ \bar u^{n+1}\right) + \left({\rm M}_{\tilde \rho^{n+1}}^{-1}\,{\rm B}^t\, p^{n+1}, \ {\rm B}^t\, p^{n+1}\right) +\frac{2}{\delta t} \left(\bar u^{n+1},\ {\rm B}^t\, p^{n+1}\right)= \\ \displaystyle \hspace{20ex} \frac{1}{\delta t^{2}}\, \left({\rm M}_{\tilde \rho^{n+1}} \tilde u^{n+1}, \ \tilde u^{n+1}\right) +\left({\rm M}_{\tilde \rho^{n+1}}^{-1}\,{\rm B}^t\, \tilde p^{n+1}, \ {\rm B}^t\, \tilde p^{n+1}\right) +\frac{2}{\delta t}\, \left(\tilde u^{n+1}, \ {\rm B}^t\, \tilde p^{n+1}\right) \end{array} \] Multiplying by $\delta t/2$, remarking that, $\forall v \in W_h,\ ({\rm M}_{\tilde \rho^{n+1}} v, \ v)= \normLdiscd{\tilde \rho^{n+1}}{v}$ and that, thanks to lemma \ref{bmbt_norm}, $\forall q \in L_h,\ ({\rm M}_{\tilde \rho^{n+1}}^{-1}\,{\rm B}^t\, q, \ {\rm B}^t\, q)= ({\rm B} \, {\rm M}_{\tilde \rho^{n+1}}^{-1}\,{\rm B}^t\, q,q)=\snormundiscd{\tilde \rho^{n+1}}{q}$, we get: \begin{equation}
\begin{array}{l} \displaystyle \frac{1}{2 \delta t} \normLdiscd{\tilde \rho^{n+1}}{\bar u^{n+1}} +\frac{\delta t}{2} \snormundiscd{\tilde \rho^{n+1}}{p^{n+1}} +(\bar u^{n+1}, {\rm B}^t\, p^{n+1}) \\ \hspace{20ex} \displaystyle -\frac{1}{2 \delta t} \normLdiscd{\tilde \rho^{n+1}}{\tilde u^{n+1}} -\frac{\delta t}{2}\,\snormundiscd{\tilde \rho^{n+1}}{\tilde p^{n+1}} - (\tilde u^{n+1}, {\rm B}^t\, \tilde p^{n+1}) =0 \end{array} \label{stab2}\end{equation} The quantity $-(\tilde u^{n+1}, {\rm B}^t\, \tilde p^{n+1})$ is nothing more than the opposite of the term $\displaystyle \int_{\Omega,h} \tilde p^{n+1} \nabla \cdot \tilde u^{n+1} \, {\rm d}x$ appearing in (\ref{stab1}), so summing (\ref{stab1}) and (\ref{stab2}) makes these terms disappear, leading to: \[ \begin{array}{l} \displaystyle \frac{1}{2 \delta t} \normLdiscd{\tilde \rho^{n+1}}{\bar u^{n+1}} - \frac{1}{2\, \delta t} \normLdiscd{\rho^n}{u^n} + \int_{\Omega,h} \tau(\tilde u^{n+1}) : \nabla \tilde u^{n+1} \, {\rm d}x \\[2ex] \hspace{20ex} \displaystyle +\frac{\delta t}{2} \, \snormundiscd{\tilde \rho^{n+1}}{p^{n+1}} -\frac{\delta t}{2} \, \snormundiscd{\tilde \rho^{n+1}}{\tilde p^{n+1}} + (\bar u^{n+1}, {\rm B}^t\, p^{n+1})\leq 0 \end{array} \] Finally, $(\bar u^{n+1}, {\rm B}^t\, p^{n+1})$ is precisely the pressure work, which can be bounded by the time derivative of the elastic potential, as stated in theorem \ref{VF2}: \begin{equation} \begin{array}{l} \displaystyle \frac{1}{2 \delta t} \normLdiscd{\tilde \rho^{n+1}}{\bar u^{n+1}} + \int_{\Omega,h} \tau(\tilde u^{n+1}) : \nabla \tilde u^{n+1} \, {\rm d}x +\frac{\delta t}{2} \,\snormundiscd{\tilde \rho^{n+1}}{p^{n+1}} -\frac{\delta t}{2} \,\snormundiscd{\tilde \rho^{n+1}}{\tilde p^{n+1}} \\ \hspace{30ex} \displaystyle +\frac{1}{\delta t} \int_\Omega \rho^{n+1} P(\rho^{n+1}) \, {\rm d}x \leq \frac{1}{2\, \delta t} \normLdiscd{\rho^n}{u^n} +\frac{1}{\delta t} \int_\Omega \rho^{n} P(\rho^{n}) \, {\rm d}x \end{array}\label{eqstep2}\end{equation} The proof then ends by using
the renormalization steps (steps 2 and 5 of the algorithm). Step 2 reads in an algebraic setting: \[ {\rm B}\, {\rm M}_{\tilde \rho^{n+1}}^{-1}\,{\rm B}^t\, \tilde p^{n+1} ={\rm B}\, {\rm M}_{\tilde \rho^{n+1}}^{-1/2}\, {\rm M}_{\tilde \rho^{n}}^{-1/2}\,{\rm B}^t\, p^{n} \] Multiplying by $\tilde p^{n+1}$, we obtain: \[ \left({\rm M}_{\tilde \rho^{n+1}}^{-1/2}\,{\rm B}^t\, \tilde p^{n+1},\ {\rm M}_{\tilde \rho^{n+1}}^{-1/2}\,{\rm B}^t\, \tilde p^{n+1}\right) = \left({\rm M}_{\tilde \rho^{n}}^{-1/2}\,{\rm B}^t\, p^{n}, \ {\rm M}_{\tilde \rho^{n+1}}^{-1/2}\, {\rm B}^t\, \tilde p^{n+1}\right) \] and thus, by the Cauchy-Schwarz inequality: \[ \left({\rm M}_{\tilde \rho^{n+1}}^{-1/2}\,{\rm B}^t\, \tilde p^{n+1},\ {\rm M}_{\tilde \rho^{n+1}}^{-1/2}\,{\rm B}^t\, \tilde p^{n+1}\right) \leq \left({\rm M}_{\tilde \rho^{n}}^{-1/2}\,{\rm B}^t\, p^{n}, \ {\rm M}_{\tilde \rho^{n}}^{-1/2}\,{\rm B}^t\, p^{n}\right)^{1/2}\, \left({\rm M}_{\tilde \rho^{n+1}}^{-1/2}\,{\rm B}^t\, \tilde p^{n+1},\ {\rm M}_{\tilde \rho^{n+1}}^{-1/2}\,{\rm B}^t\, \tilde p^{n+1} \right)^{1/2} \] This relation yields $\snormundiscd{\tilde \rho^{n+1}}{\tilde p^{n+1}} \leq \snormundiscd{\tilde \rho^{n}}{p^{n}}$. In addition, step 5 of the algorithm gives $\normLdiscd{\rho^{n+1}}{u^{n+1}}=\normLdiscd{\tilde \rho^{n+1}}{\bar u^{n+1}}$. Using these two relations in \eqref{eqstep2}, we get: \[ \begin{array}{l} \displaystyle \frac{1}{2 \delta t} \normLdiscd{\rho^{n+1}}{u^{n+1}} + \int_{\Omega,h} \tau(\tilde u^{n+1}): \nabla \tilde u^{n+1} \, {\rm d}x +\frac{\delta t}{2} \,\snormundiscd{\tilde \rho^{n+1}}{p^{n+1}} + \frac{1}{ \delta t} \int_\Omega \rho^{n+1} P(\rho^{n+1}) \, {\rm d}x \\[2ex] \hspace{50ex} \displaystyle \leq \frac{1}{2\, \delta t} \normLdiscd{\rho^n}{u^n} + \frac{1}{\delta t} \int_\Omega \rho^{n} P(\rho^{n}) \, {\rm d}x + \frac{\delta t}{2}\,\snormundiscd{\tilde \rho^{n}}{p^n} \end{array} \] and the estimate of theorem \ref{stab} follows by summing over the time steps.
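The seminorm decrease obtained above from the Cauchy-Schwarz inequality can be observed on a small example. The sketch below is illustrative only (a one-dimensional toy mesh with two arbitrary families of positive densities, mimicking the algebraic renormalization relation): it solves the renormalization system by least squares, since the operator is singular on constant pressures like a Neumann Laplacian, and checks that the weighted semi-norm does not increase.

```python
import numpy as np

rng = np.random.default_rng(2)
n, h = 8, 0.1                 # cells and |D_sigma| (|sigma| = 1 in this 1D toy)
m = n - 1

Bt = np.zeros((m, n))         # discrete gradient
for i in range(m):
    Bt[i, i], Bt[i, i + 1] = -1.0, 1.0
B = Bt.T                      # (opposite of the) discrete divergence

d1 = 1.0 / (rng.uniform(0.5, 2.0, m) * h)   # diagonal of M^{-1} at the new density
d0 = 1.0 / (rng.uniform(0.5, 2.0, m) * h)   # diagonal of M^{-1} at the old density
p_old = rng.normal(size=n)

# renormalization system: B M1^{-1} B^t p_new = B M1^{-1/2} M0^{-1/2} B^t p_old
A = B @ np.diag(d1) @ Bt
rhs = B @ np.diag(np.sqrt(d1 * d0)) @ Bt @ p_old
p_new = np.linalg.lstsq(A, rhs, rcond=None)[0]   # defined up to an additive constant

# weighted H^1 semi-norms before and after renormalization
seminorm_new = np.sum(d1 * (Bt @ p_new) ** 2)
seminorm_old = np.sum(d0 * (Bt @ p_old) ** 2)
assert seminorm_new <= seminorm_old + 1e-10      # Cauchy-Schwarz decrease
```

The right-hand side lies in the range of the singular operator, so the least-squares solve is exact and the inequality is observed up to rounding errors.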
\end{proof} \begin{rmrk}[On the upwinding of the mass balance discretization, the \textit{inf-sup} stability of the discretization and the appearance of spurious pressure wiggles.] In the scheme considered in this section, the upwinding in the discretization of the mass balance controls the onset of density oscillations. As long as the pressure and the density are linked by an increasing function, that is as long as the flow remains compressible with a reasonable equation of state, it will probably be sufficient to prevent pressure oscillations. Besides, the fourth term of the left-hand side of (\ref{stabres}), \textit{i.e.} the term involving $\snormundiscd{\tilde \rho^{n+1}}{p^{n+1}}$, provides a control on the discrete $H^1$ semi-norm of the pressure, at least for large time steps, and therefore also produces an additional pressure smearing. However, it comes up in the analysis as the composition of the discrete divergence with the discrete gradient; consequently, one will obtain such a smoothing effect only for \textit{inf-sup} stable discretizations. Note also that, even for steady state problems, some authors recommend the use of stable approximation space pairs to avoid pressure wiggles \cite{bris-90-com, for-93-fin}. \end{rmrk} \begin{rmrk}[On a different projection step] Some authors propose a different projection step \cite{bij-98-uni,wes-01-pri}, which reads, in the time semi-discrete setting: \[ \left| \begin{array}{l} \displaystyle \frac{\varrho(p^{n+1})\, u^{n+1}- \tilde \rho^{n+1}\, \tilde u^{n+1}}{\delta t} + \nabla (p^{n+1}-\tilde p^{n+1})=0 \\[2ex] \displaystyle \frac{\varrho(p^{n+1})-\rho^n}{\delta t} + \nabla \cdot \left( \varrho(p^{n+1})\, u^{n+1} \right)=0 \end{array} \right .
\] Considering this system, one may be tempted by the following line of thought: choosing $q^{n+1}=\varrho(p^{n+1})\, u^{n+1}$ as the variable, taking the discrete divergence of the first equation and using the second one will cause the convection term of the mass balance to disappear from the discrete elliptic problem for the pressure, whatever the discretization of this term may be. Consequently, the equation for the pressure will be free of the non-linearities induced by the upwinding and the dependency of the convected density on the pressure, while one still may hope to obtain a positive upwind (with respect to the density) scheme. In fact, this last point is incorrect. To be valid, it would necessitate that, from any solution $(q^{n+1}, p^{n+1})$, one be able to compute a velocity field $u^{n+1}$ by dividing $q^{n+1}$ by the density of the control volume located upstream with respect to $u^{n+1}$. Unfortunately, it is not always possible to obtain this upstream value; for instance, if for two neighbouring control volumes $K$ and $L$, $\rho_K<0$, $\rho_L>0$ and $q^{n+1} \cdot n_{K|L}>0$, neither the choice of $K$ nor $L$ for the upstream control volume is valid. Consequently, with this discretization, we are no longer able to guarantee either the positivity of $\rho$ or the absence of oscillations. However, as explained below, if the density remains positive, we will have a smearing of pressure or density wiggles due to the fact that the discretization is \textit{inf-sup} stable. \end{rmrk} \subsection{Implementation}\label{subsec:impl} The implementation of the first three steps (\ref{stab-scheme-1})-(\ref{stab-scheme-3}) of the numerical scheme is standard, and we therefore only describe in detail the fourth step, that is, the projection step.
The precise algebraic formulation of the system (\ref{stab-scheme-4}) reads: \begin{equation} \begin{array}{l} \left| \begin{array}{l} \displaystyle \frac{1}{\delta t} {\rm M}_{\tilde \rho^{n+1}} \, (\bar u^{n+1} - \tilde u^{n+1}) + {\rm B}^t\, (p^{n+1} -\tilde p^{n+1})=0 \\[2ex] \displaystyle \frac{1}{\delta t} {\rm R}\, (\varrho(p^{n+1})-\rho^n) - {\rm B} {\rm Q}_{\rho^{n+1}_{\rm up}} \bar u^{n+1}=0 \end{array} \right . \end{array} \label{proj-alg}\end{equation} where ${\rm M}_{\tilde \rho^{n+1}}$ and ${\rm Q}_{\rho^{n+1}_{\rm up}}$ are two diagonal matrices; for the first one, we recall that the entry corresponding to an edge $\sigma\in {\cal E}_{{\rm int}},\ \sigma=K|L$ is computed by multiplying the measure of the diamond associated to $\sigma$ by the predicted density (at the edge center) $\tilde \rho^{n+1}_\sigma$; in the second one, the same entry is obtained by just taking the density at $t^{n+1}$ in the element located upstream of $\sigma$ with respect to $\bar u^{n+1}$, {\it i.e.} either $\varrho\,(p_K^{n+1})$ or $\varrho\,(p_L^{n+1})$. Note that these definitions can be extended in a straightforward way for the boundary edges, if the velocity is not prescribed to zero on the boundary of the computational domain. The matrix ${{\rm R}}$ is diagonal and, for any $K\in{\cal M}$, its entry ${\rm R}_K$ is the measure of the element $K$. For the sake of simplicity, we suppose for the moment that the equation of state is linear: \[ \varrho(p^{n+1})=\frac{\partial \varrho}{\partial p}\ p^{n+1} \] The elliptic problem for the pressure is obtained by multiplying the first relation of (\ref{proj-alg}) by ${\rm B}\ {\rm Q}_{\rho^{n+1}_{\rm up}}\ ({\rm M}_{\tilde \rho^{n+1}})^{-1}$ and using the second one. 
This equation reads: \begin{equation} {\rm L}\, p^{n+1}+\frac{\partial \varrho}{\partial p}\ \frac{1}{\delta t^{2}}{\rm R}\, p^{n+1} = {\rm L}\,\tilde p^{n+1}+\frac{1}{\delta t^{2}}{\rm R} \rho^n +\frac{1}{\delta t}{\rm B}\,{\rm Q}_{\rho^{n+1}_{\rm up}}\,\tilde u^{n+1} \label{step41}\end{equation} where ${\rm L}={\rm B}\,{\rm Q}_{\rho^{n+1}_{\rm up}}\,({\rm M}_{\tilde \rho^{n+1}})^{-1}\,{\rm B}^t$ can be viewed, for the discretization at hand, as a finite volume discrete approximation of the Laplace operator with Neumann boundary conditions (when the velocity is prescribed at the boundary), multiplied by the space dimension $d$ and the density ratio (see remarks \ref{spurious-BC} and \ref{non-consistent!}). We recall that, by a calculation similar to the proof of lemma \ref{bmbt}, this matrix can be evaluated directly in the ``finite volume way'', by the following relation, valid for each element $K$: \[ ({\rm L}\, p^{n+1})_K=\sum_{\sigma=K|L} \ d\ \frac{\displaystyle \rho_{{\rm up},\sigma}}{\tilde \rho^{n+1}_\sigma}\ \frac{|\sigma|^2}{|D_\sigma|}\, (p_K-p_L) \] where $\rho_{{\rm up},\sigma}$ stands for the upwind density associated with the edge $\sigma$. Once $p^{n+1}$ is known, the first relation of (\ref{proj-alg}) gives us the updated value of the velocity: \[ \bar u^{n+1}=\tilde u^{n+1}-\delta t\ ({\rm M}_{\tilde \rho^{n+1}})^{-1}\,{\rm B}^t\,(p^{n+1}-\tilde p^{n+1}) \label{step42} \] As, to preserve the positivity of the density, we want to use in the mass balance the value of the density upwinded with respect to $\bar u^{n+1}$, equations (\ref{step41}) and (\ref{step42}) are not decoupled, by contrast with what happens in usual projection methods.
We thus implement the following iterative algorithm: \[ \begin{array}{ll} \displaystyle \mbox{Initialization: }\quad p_{0}^{n+1}= \tilde p^{n+1} \mbox{ and } \bar u_{0}^{n+1}=\tilde u^{n+1} \\[3ex] \qquad \left| \begin{array}{l} \displaystyle \mbox{Step 4.1 -- Solve for } p_{k+1/2}^{n+1}\ : \\[3ex] \displaystyle \hspace{5ex} \left[{\rm L}+\frac{\partial \varrho}{\partial p}\ \frac{1}{\delta t^{2}} {\rm R} \right]\, p_{k+1/2}^{n+1}= {\rm L}\,\tilde p^{n+1} + \frac{1}{\delta t^{2}} {\rm R} \, \rho^n +\frac{1}{\delta t} {\rm B}\, {\rm Q}_{\rho^{n+1}_{\rm up}}\, \tilde u^{n+1} \\[3ex] \displaystyle \quad \mbox{where the density in ${\rm L}$ and ${\rm Q}_{\rho^{n+1}_{\rm up}}$ is evaluated at $p^{n+1}_k$ and the upwinding} \\ \displaystyle \quad \mbox{in ${\rm Q}_{\rho^{n+1}_{\rm up}}$ is performed with respect to $\bar u^{n+1}_k$} \\[3ex] \displaystyle \mbox{Step 4.2 -- Compute } p_{k+1}^{n+1}\mbox{ as } p_{k+1}^{n+1}=\alpha \, p_{k+1/2}^{n+1} + (1-\alpha)\, p_{k}^{n+1} \\[3ex] \displaystyle \mbox{Step 4.3 -- Compute } \bar u_{k+1}^{n+1}\mbox{ as }: \\[2ex] \displaystyle \hspace{5ex} \bar u_{k+1}^{n+1}=\tilde u^{n+1}-\delta t \ ({\rm M}_{\tilde \rho^{n+1}})^{-1}\,{\rm B}^t\,(p_{k+1}^{n+1}-\tilde p^{n+1}) \end{array} \right. \\[20ex] \displaystyle \mbox{Convergence criterion: }\quad \max \left[ \normz{p_{k+1}^{n+1}-p_{k}^{n+1}},\normz{u^{n+1}_{k+1}-u_{k}^{n+1}}\right] < \varepsilon \end{array} \] The second step of the previous algorithm is a relaxation step which can be performed to ensure convergence; however, in the tests presented hereafter, we use $\alpha=1$ and obtain convergence in a few iterations (typically fewer than 5). When the equation of state is nonlinear, step 4.1 is replaced by one iteration of Newton's algorithm.
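The structure of steps 4.1-4.3 is that of an under-relaxed fixed-point iteration. The scalar toy analogue below is illustrative only (the map $\cos$ is a stand-in contraction, unrelated to the actual pressure equation): it shows the role of the relaxation parameter $\alpha$ and of the stopping criterion.

```python
import math

def solve_relaxed(F, p0, alpha=1.0, eps=1e-12, max_it=200):
    """Under-relaxed fixed-point iteration: p_{k+1} = alpha*F(p_k) + (1-alpha)*p_k."""
    p = p0
    for k in range(1, max_it + 1):
        p_half = F(p)                                # analogue of step 4.1
        p_new = alpha * p_half + (1.0 - alpha) * p   # step 4.2 (relaxation)
        if abs(p_new - p) < eps:                     # convergence criterion
            return p_new, k
        p = p_new
    return p, max_it

# F(p) = cos(p) is a contraction near its fixed point; alpha = 1 already converges,
# as observed for the scheme in the tests reported above
p_star, iters = solve_relaxed(math.cos, 1.0, alpha=1.0)
assert abs(p_star - math.cos(p_star)) < 1e-10
```

For a contraction, any $\alpha \in (0,1]$ converges; under-relaxation ($\alpha < 1$) trades speed for robustness when the underlying map is only weakly contracting.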
\subsection{Numerical experiments}\label{subsec:num-exp} In this section, we describe numerical experiments performed to assess the behaviour of the pressure correction scheme presented in this paper, in particular the convergence rate with respect to the space and time discretizations. With $\Omega=(0,1)\times (-\frac 1 2,\frac 1 2)$, we choose for the momentum and density the following expressions: \[ \begin{array}{l} \displaystyle \rho\,u= -\frac{1}{4} \cos(\pi t)\left[\begin{array}{l} \sin(\pi x^{(1)}) \\ \cos(\pi x^{(2)}) \end{array}\right] \\[3ex] \displaystyle \rho=1+\frac{1}{4}\,\sin(\pi t)\,\left[\cos(\pi x^{(1)})-\sin(\pi x^{(2)})\right] \end{array} \] These functions satisfy the mass balance equation; for the momentum balance, we add the corresponding right hand side. In this latter equation, the divergence of the stress tensor is given by: \[ \nabla \cdot \tau(u)= \mu \Delta u + \frac \mu 3 \nabla\, \nabla \cdot u, \qquad \mu=10^{-2} \] The pressure is linked to the density by the following equation of state: \[ p=\varrho(\rho)=\frac{\rho - 1}{\gamma\ {\rm Ma}^2}, \qquad \gamma = 1.4, \ {\rm Ma}=0.5 \] where the parameter $\rm Ma$ corresponds to the characteristic Mach number. \begin{figure} \caption{velocity error as a function of the time step. } \label{err_v} \end{figure} We use in these tests a special numerical integration of the forcing term of the momentum balance, which is designed to ensure that the discretization of a gradient is indeed a discrete gradient ({\it i.e.} if the forcing term $f$ can be recast under the form $f=\nabla g$, the discrete right hand side of the momentum balance belongs to the range of ${\rm B^t}$). \begin{figure} \caption{pressure error as a function of the time step. 
} \label{err_p} \end{figure} Velocity and pressure errors obtained at $t=0.5$, in the $L^2$ and discrete $L^2$ norms respectively, and as a function of the time step, are plotted in figures \ref{err_v} and \ref{err_p}, respectively, for $20 \times 20$, $40 \times 40$ and $80 \times 80$ uniform meshes. For large time steps, these curves show a decrease corresponding to approximately first-order convergence in time for the velocity and the pressure, until a plateau is reached, due to the fact that errors are bounded from below by the residual spatial discretization error. For both velocity and pressure, the values of the errors on this plateau show a spatial convergence order (in $L^2$ norm) close to 2. \section{Conclusion} We presented in this paper a numerical scheme for the barotropic compressible Navier-Stokes equations, based on a pressure-correction time stepping algorithm. For the spatial discretization, it combines low-order non-conforming mixed finite elements with finite volumes; in the incompressible limit, one recovers a classical projection scheme based on an \textit{inf-sup} stable pair of approximation spaces for the velocity and the pressure. This scheme is proven to enjoy an unconditional stability property: irrespective of the time step, the discrete solution obeys the \textit{a priori} estimates associated with the continuous problem, \textit{i.e.} strict positivity of the density, bounds in $L^\infty$-in-time norm of the quantities $\int_\Omega \rho\, u^2\, {\rm d}x$ and $\int_\Omega \rho \, P(\rho)\, {\rm d}x$ and in $L^2$-in-time norm of the viscous dissipation $ \int_\Omega \tau(u): \nabla u \, {\rm d}x$. To our knowledge, this result is the first one of this type for barotropic compressible flows. However, the scheme presented here is by no means ``the ultimate scheme'' for the solution to the compressible Navier-Stokes equations.
It should rather be seen as an example of application (and probably one of the least sophisticated ones) of the mathematical arguments developed to obtain stability, namely theorems \ref{VF2} (discrete elastic potential identity) and \ref{VF1} (stability of the advection operator), and our hope is that these two ingredients could be used as such or adapted in the future to study other algorithms. For instance, a computation close to the proof of theorem \ref{stab} (and even simpler) would yield the stability of the fully implicit scheme; adding to this latter algorithm a prediction step for the density (as performed here) would also make it possible to linearize (once again, as performed here) the convection operator without loss of stability. A stable pressure-correction scheme avoiding this prediction step can also be obtained, and is currently being tested at IRSN for the computation of compressible bubbly flows. Besides these variants, less diffusive schemes should certainly be sought. Finally, the proposed scheme is currently the object of more in-depth numerical studies including, in particular, problems admitting less smooth solutions than the test presented here. \end{document}
\begin{document} \title{Boundary layer for a non-Newtonian flow \\ over a rough surface} \author{David G\'erard-Varet \thanks{Institut de Math\'{e}matiques de Jussieu et Universit\'{e} Paris 7, 175 rue du Chevaleret, 75013 Paris France ({\tt [email protected]})} , Aneta Wr\'oblewska-Kami\'nska \thanks{ Institute of Mathematics, Polish Academy of Sciences, ul. \'Sniadeckich 8, 00-956 Warszawa, Poland ({\tt [email protected]})} } \maketitle \section{Introduction} The general concern of this paper is the effect of rough walls on fluids. This effect is important at various scales. For instance, in the area of microfluidics, recent experimental works have emphasized the role of hydrophobic rough walls in the improvement of slipping properties of microchannels. Also, in geophysics, as far as large scale motions are concerned, topography or shore variations can be assimilated to roughness. For high Reynolds number flows, an important issue is to understand how localized roughness triggers instabilities, and transition to turbulence. For laminar flows, the point is rather to understand how distributed roughness may have a macroscopic impact on the dynamics. More precisely, the hope is to be able to encode an averaged effect through an effective boundary condition at a smoothened wall. Such a boundary condition, called {\em a wall law}, avoids simulating the small-scale dynamics that takes place in a boundary layer in the vicinity of the rough surface. The derivation of wall laws for laminar Newtonian flows has been much studied, since the pioneering works of Achdou, Pironneau and Valentin \cite{Achdou:1995, Achdou:1998}, or J\"ager and Mikeli\'c \cite{Mikelic2001,Jager:2003}. See also \cite{Luchini:1995,Amirat:2001a,GV2003,Bresch,Mikelic2013}. A natural mathematical approach to this problem is by homogenization techniques, the roughness being modeled by a small amplitude/high frequency oscillation.
Typically, one considers a Navier-Stokes flow in a channel $\Omega^\eps$ with a rough bottom: $$\Omega^\eps = \Omega \cup \Sigma_0 \cup R^\eps. $$ Precisely: \begin{itemize} \item $\Omega = (0,1)^2$ is the flat portion of the channel. \item $R^\eps$ is the rough portion of the channel: it reads $$ R^\eps = \{ x = (x_1,x_2), x_1 \in (0,1), 0 > x_2 > \eps \gamma(x_1/\eps) \}$$ with a bottom surface $\Gamma^\eps := \{x_2 = \eps \gamma(x_1/\eps) \}$ parametrized by $\eps \ll 1$. The function $\gamma = \gamma(y_1)$ is the {\em roughness pattern}. \item Finally, $\Sigma_0 := (0,1) \times \{0\}$ is the interface between the rough and flat parts. It is the artificial boundary at which the wall law is set. \end{itemize} Of course, within such a model, the goal is to understand the asymptotic behavior of the Navier-Stokes solution $u^\eps$ as $\eps \rightarrow 0$. Therefore, the starting point is a formal approximation of $u^\eps$ under the form \begin{equation} \label{blexpansion} u^\eps_{app}(x) = u^0(x) + \eps u^1(x) + \dots + u^0_{bl}(x/\eps) + \eps u^1_{bl}(x/\eps) + \dots . \end{equation} In this expansion, the $u^i = u^i(x)$ describe the large-scale part of the flow, whereas the $u^i_{bl} = u^i_{bl}(y)$ describe the boundary layer. The typical variable $y=x/\eps$ matches the small-scale variations induced by the roughness. In the case of homogeneous Dirichlet conditions at $\Gamma^\eps$, one can check formally that: \begin{itemize} \item $u^0$ is the solution of the Navier-Stokes equation in $\Omega$, with a Dirichlet condition at $\Sigma_0$. \item $u^0_{bl} = 0$, whereas $u^1_{bl}$ satisfies a Stokes equation in the variable $y$ in the boundary layer domain $$\Omega_{bl} := \{y = (y_1, y_2), y_1 \in \R, y_2 > \gamma(y_1)\}.$$ \end{itemize} The next step is to solve this boundary layer system, and show convergence of $u^1_{bl}$ as $y_2 \rightarrow +\infty$ to a constant field $u^\infty = (U^\infty, 0)$.
This in turn determines the appropriate boundary condition for the large scale correction $u^1$. From there, considering the large scale part $u^0 + \eps u^1$, one can show that: \begin{itemize} \item The limit wall law is a homogeneous Dirichlet condition. Let us point out that this feature persists even starting from a microscopic pure slip condition, under some non-degeneracy of the roughness: \cite{Bucur:2008,Bucur:2012,BoDaGe}. \item The $O(\eps)$ correction to this wall law is a slip condition of Navier type, with $O(\eps)$ slip length. \end{itemize} All these steps were completed in the aforementioned articles, in the case of a periodic roughness pattern $\gamma$: $\gamma(y_1 + 1) = \gamma(y_1)$. Over recent years, the first author has extended this analysis to general patterns of roughness, with ergodicity properties (random stationary distribution of roughness, {\it etc.}). We refer to \cite{BaGeVa2008, DGV:2008,GeVaMa2010}. See also \cite{DaGeVa2011} for some recent work on the same topic. The purpose of the present paper is to extend the former analysis to non-Newtonian flows. This may have various sources of interest. One can think of engineering applications, for instance lubricants to which polymeric additives confer a shear thinning behavior. One can also think of glaciology: as a detailed description of the interaction of glaciers with the underlying rocks is unavailable, wall laws can help. From a mathematical point of view, such examples may be described by a power-law model. Hence, we consider a system of the following form: \begin{equation} \label{EQ1} \left\{ \begin{aligned} -\dive S(Du) + \nabla p = e_1 & \quad \mbox{in} \: \Omega^\eps, \\ \dive u = 0 & \quad \mbox{in} \: \Omega^\eps, \\ u\vert_{\Gamma^\eps} = 0, \quad u\vert_{x_2 = 1} = 0&, \quad u \: \mbox{$1$-periodic in $x_1$}. \end{aligned} \right. \end{equation} As usual, $u = u(x) \in \R^2$ is the velocity field, $p = p(x) \in \R$ is the pressure.
The source term $e_1$ on the right-hand side of the first equation corresponds to a constant pressure gradient $e_1 = (1,0)^t$ throughout the channel. Finally, the left-hand side involves the stress tensor of the fluid. As mentioned above, it is of power-law type: $S : \R^{2\times 2}_{\rm sym} \to \R^{2\times 2}_{\rm sym}$ is given by \begin{equation} \label{defstress} S : \R^{2\times 2}_{\rm sym} \to \R^{2\times 2}_{\rm sym}, \quad S(A) = \nu |A|^{p-2}A, \quad \nu > 0, \quad 1 < p < +\infty , \end{equation} where $|A| = (\sum_{i,j} a_{i,j}^2)^{1/2}$ is the usual Euclidean norm of the matrix $A$. {\em For simplicity, we shall take $\nu = 1$}. Hence, $S(Du) = |Du|^{p-2} Du$, where we recall that $Du = \frac{1}{2} (\nabla u + (\nabla u)^t)$ is the symmetric part of the Jacobian. Following classical terminology, the case $p < 2$, resp. $p > 2$, corresponds to {\em shear thinning}, resp. {\em shear thickening}, fluids. The limit case $p=2$ describes a Newtonian flow. Note that we complete the equations in system \eqref{EQ1} with a standard no-slip condition at the top and bottom boundary of the channel. For the sake of simplicity, we assume periodicity in the large scale horizontal variable $x_1$. Finally, we also make a simplifying periodicity assumption on the roughness pattern $\gamma$: \begin{equation} \mbox{$\gamma$ is $C^{2,\alpha}$ for some $\alpha >0$, has values in $(-1,0)$, and is $1$-periodic in $y_1$}. \end{equation} For every $\eps > 0$ and any value of $p$, the generalized Stokes system \eqref{EQ1} has a unique solution $$ (u^\eps, p^\eps) \in W^{1,p}(\Omega^\eps) \times L^{p'}(\Omega^\eps)/\R .$$ The main point is to understand the asymptotic behavior of $u^\eps$, and more precisely to build a good approximate solution. By analogy with the Newtonian case, we anticipate that this approximation will take a form close to \eqref{blexpansion}.
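The shear thinning/thickening terminology can be made concrete: for the power law $S(A) = |A|^{p-2}A$ (with $\nu = 1$), the effective viscosity is $|A|^{p-2}$, decreasing in the shear rate for $p<2$ and increasing for $p>2$. A minimal illustrative sketch (the sample shear rates are arbitrary):

```python
def eff_visc(shear, p):
    # effective viscosity of the power law S(A) = |A|^{p-2} A, evaluated at |A| = shear
    return shear ** (p - 2.0)

# shear thinning (p < 2): viscosity decreases as the shear rate grows
assert eff_visc(2.0, 1.5) < eff_visc(1.0, 1.5)
# shear thickening (p > 2): viscosity increases with the shear rate
assert eff_visc(2.0, 3.0) > eff_visc(1.0, 3.0)
# Newtonian limit p = 2: constant viscosity
assert eff_visc(2.0, 2.0) == eff_visc(1.0, 2.0) == 1.0
```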
Our plan is then: \begin{itemize} \item to derive the equations satisfied by the first terms of expansion \eqref{blexpansion}. \item to solve these equations, and show convergence of the boundary layer term to a constant field away from the boundary. \item to obtain error estimates for the difference between $u^\eps$ and $u^\eps_{app}$. \item to derive from there appropriate wall laws. \end{itemize} This program will be more difficult to achieve for non-Newtonian fluids, in particular for the shear thinning case $p < 2$, notably as regards the study of the boundary layer equations on $u_{bl} \: := u^1_{bl}$. In short, these equations will be seen to read $$ - \dive(S(A + D u_{bl})) + \nabla p = 0, \quad \dive u_{bl} = 0, \quad y \in \Omega_{bl} $$ for some explicit matrix $A$, together with a periodicity condition in $y_1$ and a homogeneous Dirichlet condition at the bottom of $\Omega_{bl}$. Due to the nonlinearity of these equations and the fact that $A \neq 0$, the analysis will be much more difficult than in the Newtonian case, notably the proof of the so-called Saint-Venant estimates. We refer to section \ref{subsecstvenant} for all details. Let us conclude this introduction by giving some references on related problems. In \cite{MarusicPaloka2000}, E. Maru\v{s}i\'c-Paloka considers power-law fluids with convective terms in infinite channels and pipes (the non-Newtonian analogue of Leray's celebrated problem). After an appropriate change of unknown, the system studied in \cite{MarusicPaloka2000} bears a strong similarity to our boundary layer system. However, it is different at two levels: first, the analysis is restricted to the case $p > 2$. Second, our lateral periodicity condition in $y_1$ is replaced by a no-slip condition. This makes it possible to use Poincar\'e's inequality in the transverse variable, and to control zero-order terms (in the velocity $u$) by $\nabla u$, and then by $D u$ through the Korn inequality.
It simplifies in this way the derivation of exponential convergence of the boundary layer solution (Saint-Venant estimates). The same simplification holds in the context of paper \cite{BoGiMa-Pa}, where the behaviour of a Carreau flow through a thin filter is analysed. The corrector describing the behaviour of the fluid near the filter is governed by a boundary layer type system, in a slab that is infinite vertically in both directions. In this setting, one has $A = 0$, and the authors refer to \cite{MarusicPaloka2000} for well-posedness and qualitative behaviour. We also refer to the recent article \cite{Suarez-Grau_2015} dedicated to power-law fluids in thin domains, with Navier condition and anisotropic roughness (with a wavelength that is larger than the amplitude). In this setting, no boundary layer analysis is needed, and the author succeeds in describing the limit asymptotics by the unfolding method. Finally, we point out the very recent paper \cite{ChupinMartin2015}, where an Oldroyd fluid is considered in a rough channel. In this setting, no nonlinearity is associated with the boundary layer, which satisfies a Stokes problem. \section{Boundary layer analysis} From the Newtonian case, we expect the solution $(u^\eps, p^\eps)$ of \eqref{EQ1} to be approximated by $$ u^\eps \approx u^0(x) + \eps u_{bl}(x/\eps), \quad p^\eps \approx p^0(x) + p_{bl}(x/\eps) ,$$ where \begin{itemize} \item $(u^0, p^0)$ describes the flow away from the boundary layer. We shall take $u^0 = 0$ and $p^0 = 0$ in the rough part $R^\eps$ of the channel. \item $(u_{bl}, p_{bl}) = (u_{bl}, p_{bl})(y)$ is a boundary layer corrector defined on the slab $$\Omega_{bl} := \{y = (y_1, y_2), y_1 \in \T, y_2 > \gamma(y_1)\},$$ where $\T$ is the torus $\R/\Z$. This torus corresponds implicitly to a periodic boundary condition in $y_1$, which is inherited from the periodicity of the roughness pattern $\gamma$.
We denote $$ \Omega_{bl}^{\pm} \: := \: \Omega_{bl} \cap \{ \pm y_2 > 0 \} $$ its upper and lower parts, and $$ \Gamma_{bl} := \{ y = (y_1, y_2), y_1 \in \T, y_2 = \gamma(y_1)\} $$ its bottom boundary. As the boundary layer corrector is supposed to be localized, we expect that $$ \nabla u_{bl} \rightarrow 0 \quad \mbox{as $y_2 \rightarrow +\infty$}. $$ \end{itemize} With this constraint in mind, we take $(u^0, p^0)$ to be the solution of \begin{equation} \label{EQ2} \left\{ \begin{aligned} -\dive S(Du^0) + \nabla p^0 = e_1 & \quad \mbox{in} \: \Omega, \\ \dive u^0 = 0 & \quad \mbox{in} \: \Omega, \\ u^0\vert_{\Sigma_0} = 0, \quad u^0\vert_{x_2 = 1} = 0&, \quad u^0 \mbox{ $1$-periodic in $x_1$}. \end{aligned} \right. \end{equation} The solution is explicit and generalizes the Poiseuille flow. A simple calculation yields: for all $x \in \Omega$, \begin{equation} \label{Poiseuille} p^0(x) = 0, \quad u^0(x) = (U(x_2), 0), \quad U(x_2) = \frac{p-1}{p} \left(\sqrt{2}^{-\frac{p}{(p-1)}} - \sqrt{2}^{\frac{p}{(p-1)}} \left|x_2 - \frac{1}{2}\right|^{\frac{p}{p-1}} \right). \end{equation} We extend this solution to the whole rough channel by taking: $u^0 = 0, p^0 = 0$ in $R^\eps$. This zero order approximation is clearly continuous across the interface $\Sigma_0$, but the associated stress is not: denoting \begin{equation} \label{defA} A \: := D(u^0)\vert_{x_2 = 0^+}\: = \frac{1}{2}\begin{pmatrix} 0 & U'(0) \\ U'(0) & 0 \end{pmatrix}, \quad \mbox{with $U'(0) = \sqrt{2}^{\frac{p-2}{p-1}}$} \end{equation} we obtain $$ [ S(Du^0) n - p^0 n ]\vert_{\Sigma_0} = |A|^{p-2}A n = \left( \begin{smallmatrix} - \sqrt{2}^{-p} U'(0)^{p-1} \\ 0 \end{smallmatrix}\right) = \left( \begin{smallmatrix} - \frac{1}{2} \\ 0 \end{smallmatrix}\right) $$ with $n = - e_2 = -(0,1)^t$ and $[f] := f\vert_{x_2 = 0^+} - f\vert_{x_2 = 0^-}$. This jump should be corrected by $u_{bl}$, so that the total approximation $u^0(x) + \eps u_{bl}(x/\eps)$ has no jump.
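For the reader's convenience, let us detail the simple calculation behind \eqref{Poiseuille} (a routine verification). Writing $u^0 = (U(x_2),0)$, one has $Du^0 = \frac{1}{2}\begin{pmatrix} 0 & U' \\ U' & 0 \end{pmatrix}$ and $|Du^0| = |U'|/\sqrt{2}$, so that the first equation of \eqref{EQ2} reduces to a scalar ODE:

```latex
% First component of -\dive S(Du^0) + \nabla p^0 = e_1, with p^0 = 0:
-\frac{d}{dx_2}\left[ \sqrt{2}^{-p}\, |U'|^{p-2}\, U' \right] = 1,
\qquad \mbox{hence} \qquad
|U'|^{p-2}\, U' = \sqrt{2}^{\,p}\left(\frac{1}{2} - x_2\right),
```

the integration constant being fixed by the symmetry condition $U'(\frac12) = 0$. Thus $U'(x_2) = \sqrt{2}^{\frac{p}{p-1}}\, \mathrm{sign}(\frac12 - x_2)\, |\frac12 - x_2|^{\frac{1}{p-1}}$, and a further integration with $U(0) = U(1) = 0$ gives \eqref{Poiseuille}; in particular $U'(0) = \sqrt{2}^{\frac{p}{p-1}} (\frac12)^{\frac{1}{p-1}} = \sqrt{2}^{\frac{p-2}{p-1}}$, in accordance with \eqref{defA}.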
This explains the amplitude $\eps$ of the boundary layer term, as its gradient will then be $O(1)$. By Taylor expansion $U(x_2) = U(\eps y_2) = U(0) + \eps U'(0) y_2 + \dots$ we get formally $D(u^0 + \eps u_{bl}(\cdot/\eps)) \approx A + D u_{bl}$, where the last symmetric gradient is with respect to variable $y$. We then derive the following boundary layer system: \begin{equation} \label{BL1} \left\{ \begin{aligned} - \dive S(A + D u_{bl}) + \nabla p_{bl} & = 0 \quad \mbox{in} \: \Omega_{bl}^+, \\ - \dive S(D u_{bl}) + \nabla p_{bl} & = 0 \quad \mbox{in} \: \Omega_{bl}^-, \\ \dive u_{bl} & = 0 \quad \mbox{in} \: \Omega_{bl}^+ \cup \Omega_{bl}^-, \\ u_{bl}\vert_{\Gamma_{bl}} & = 0, \\ u_{bl}\vert_{y_2 = 0^+} - u_{bl}\vert_{y_2 = 0^-} & = 0, \end{aligned} \right. \end{equation} together with the jump condition \begin{equation} \label{BL2} \left( S( A + D u_{bl}) n - p_{bl} n \right) |_{y_2 = 0^+} - \left(S(D u_{bl}) n - p_{bl} n\right) |_{y_2= 0^-} = 0, \quad n = (0,-1)^t. \end{equation} Let us recall that the periodic boundary condition in $y_1$ is encoded in the definition of the boundary layer domain. The rest of this section will be devoted to the well-posedness and qualitative properties of \eqref{BL1}-\eqref{BL2}. We shall give detailed proofs only for the more difficult case $p < 2$, and comment briefly on the case $p \ge 2$ at the end of the section. Our main results will be the following: \begin{Theorem} {\bf (Well-posedness)} \label{thWP} \noindent For all $1 < p < 2$, \eqref{BL1}-\eqref{BL2} has a unique solution $ (u_{bl},p_{bl}) \in W^{1,p}_{loc}(\Omega_{bl}) \times L^{p'}_{loc}(\Omega_{bl})/\R $ satisfying for any $M > |A|$: $$D u_{bl} \, 1_{\{|D u_{bl}| \le M\}} \in L^2(\Omega_{bl}), \quad D u_{bl} \, 1_{\{|D u_{bl}| \ge M\}} \in L^p(\Omega_{bl}).$$ \noindent For all $p \ge 2$, \eqref{BL1}-\eqref{BL2} has a unique solution $ (u_{bl},p_{bl}) \in W^{1,p}_{loc}(\Omega_{bl}) \times L^{p'}_{loc}(\Omega_{bl})/\R $ s.t. 
$D u_{bl} \in L^p(\Omega_{bl}) \cap L^2(\Omega_{bl})$. \end{Theorem} \begin{Theorem} {\bf (Exponential convergence)} \label{thEC} \noindent For any $1 < p < +\infty$, the solution given by the previous theorem converges exponentially, in the sense that for some $C, \delta > 0$ $$ | u_{bl}(y) - u^\infty | \le C e^{-\delta y_2} \quad \forall \: y \in \Omega_{bl}^+,$$ where $u^\infty= (U^\infty, 0)$ is some constant horizontal vector field. \end{Theorem} \subsection{Well-posedness} \label{sectionWP} \subsubsection*{A priori estimates} We focus on the case $1 < p < 2$, and provide the {\it a priori} estimates on which the well-posedness is based. The easier case $p \ge 2$ is discussed at the end of the paragraph. As $A$ is a constant matrix, we have from \eqref{BL1}: $$ - \dive S(A + D u_{bl}) + \dive S(A) + \nabla p_{bl} = 0 \quad \mbox{in} \: \Omega_{bl}^+, \quad - \dive S(D u_{bl}) + \nabla p_{bl} = 0 \quad \mbox{in} \: \Omega_{bl}^-. $$ We multiply the two equations by $u_{bl}$ and integrate over $\Omega_{bl}^+$ and $\Omega_{bl}^-$ respectively. After integrations by parts, accounting for the jump conditions at $y_2 = 0$, we get \begin{equation} \label{variationalBL} \int_{\Omega_{bl}^+} (S(A + Du_{bl}) - S(A)) : D u_{bl} \dy + \int_{\Omega_{bl}^-} S(D u_{bl}) : D u_{bl} \dy = -\int_{y_2 = 0} S(A)n \cdot u_{bl} {\rm\,d}S. \end{equation} The right-hand side is controlled using successively the Poincar\'e and Korn inequalities (for the Korn inequality, see the appendix): \begin{equation} |\int_{y_2 = 0} S(A)n \cdot u_{bl} {\rm\,d}S | \le C \| u_{bl} \|_{L^p(\{ y_2 = 0\})} \le C' \| \nabla u_{bl} \|_{L^p(\Omega_{bl}^-)} \le C'' \| D u_{bl} \|_{L^p(\Omega_{bl}^-)} . \end{equation} As regards the left-hand side, we rely on the following vector inequality, established in \cite[p74, eq. (VII)]{Lind}: for all $1 < p \le 2$, for all vectors $a,b$ \begin{equation} \label{vectorinequality} ( |b|^{p-2}b - |a|^{p-2} a \: | \: b-a) \: \ge \: (p-1) |b-a|^2 \int_0^1 |a + t(b-a)|^{p-2} dt.
\end{equation} In particular, for any $M > 0$, if $|b-a| \le M$, one has \begin{equation} \label{ineq1} ( |b|^{p-2}b - |a|^{p-2} a \: | \: b-a) \: \ge \: \frac{p-1}{(|a| + M)^{2-p}} |b-a|^2 , \end{equation} whereas if $|b-a| > M > |a|$, we get \begin{equation} \label{ineq2} ( |b|^{p-2}b - |a|^{p-2} a \: | \: b-a) \: \ge (p-1) |b-a|^2 \int_{\frac{|a|}{|b-a|}}^1 \left(2 t |b-a|\right)^{p-2} dt \ge 2^{p-3} \left(1 - (|a|/M)^{p-1}\right) |b-a|^p. \end{equation} We then apply such inequalities to \eqref{variationalBL}, taking $a = A$, $b = A + Du_{bl}$. For $M > |A|$, there exists $c$ dependent on $M$ such that $$ \int_{\Omega_{bl}^+} (S(A + Du_{bl}) - S(A)) : D u_{bl} \dy \ge c \int_{\Omega_{bl}^+} \bbbone_{\{ |Du_{bl}| \le M \}} |Du_{bl}|^2 \dy \: + \: \int_{\Omega_{bl}^+} \bbbone_{\{ |Du_{bl}| > M \}} |D u_{bl}|^p \dy , $$ so that for some $C$ dependent on $M$ \begin{equation*} \int_{\Omega_{bl}^+}| \bbbone_{\{ |Du_{bl}| \le M \}} Du_{bl}|^2 \dy \: + \: \int_{\Omega_{bl}^+} |\bbbone_{\{ |Du_{bl}| > M \}} D u_{bl}|^p \dy + \int_{\Omega_{bl}^-} | D u_{bl}|^p \dy \: \le \: C \, \| D u_{bl} \|_{L^p(\Omega_{bl}^-)} . \end{equation*} Hence, still for some $C$ dependent on $M$: \begin{equation} \label{aprioriestimate} \int_{\Omega_{bl}^+} 1_{\{ |Du_{bl}| \le M \}} |Du_{bl}|^2 \dy \: + \: \int_{\Omega_{bl}^+} 1_{\{ |Du_{bl}| > M \}} |D u_{bl}|^p \dy + \int_{\Omega_{bl}^-} | D u_{bl}|^p \dy \: \le \: C . \end{equation} This is the {\it a priori} estimate on which Theorem \ref{thWP} can be established (for $p \in ]1,2]$). Note that this inequality implies that for any height $h$, $$ \| Du_{bl} \|_{L^p(\Omega_{bl} \cap \{y_2 \le h \})} \le C_h $$ (bounding the $L^p$ norm by the $L^2$ norm on a bounded set). Combining with Poincar\'e and Korn inequalities, we obtain that $u_{bl}$ belongs to $W^{1,p}_{loc}(\Omega_{bl})$. 
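Let us also make explicit the elementary absorption step leading from the previous bound to \eqref{aprioriestimate}. Denoting $Z := \| D u_{bl} \|_{L^p(\Omega_{bl}^-)}$, the left-hand side dominates $Z^p$, so that

```latex
Z^p \: \le \: C Z
\qquad \Longrightarrow \qquad
Z \: \le \: C^{\frac{1}{p-1}},
```

and reinserting this bound for $Z$ in the right-hand side yields \eqref{aprioriestimate}, with a constant of the form $C^{p'} = C^{\frac{p}{p-1}}$.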
In the case $p\geq 2$, we can directly use the following inequality, which holds for all $a,\ b \in \R^n$: \begin{equation}\label{abp_1} | a - b |^p 2^{2-p} \leq 2^{-1} \left( |b|^{p-2} + |a|^{p-2}\right) |b-a|^{2} \leq\left\langle |a|^{p-2} a - |b|^{p-2} b, a-b \right\rangle . \end{equation} It provides both an $L^2$ and an $L^p$ control of the symmetric gradient of the solution. Indeed, taking $a = \tA + \tD_y \ub$, $b= \tA$ and using \eqref{variationalBL} we get the following {\it a priori} estimates for $p\geq 2$: \begin{equation}\label{apesDup} \begin{split} & 2^{2-p} \int_{\Omega_{bl}^{+}} | \tD \ub |^p \dy + \int_{\Omega_{bl}^{-}} | \tD \ub |^p \dy + 2^{-1} |A|^{p-2} \int_{\Omega_{bl}^+} | \tD \ub |^2 \dy \: \\ & {\leq} \: \int_{\Omega_{bl}^{+}} \left( S(A+ Du_{bl}) - S(A) \right) : D \ub \dy + \int_{\Omega_{bl}^{-}} S(D u_{bl} ) : D \ub \dy \\ & = \: - \int_{\Sigma_0} S(A) n \cdot u_{bl} {\rm \, d}S \\ & \leq \: c(\alpha) \| S(A) \|^{p'}_{L^{p'}(\Sigma_0)} + \alpha \| \ub \|^{p}_{L^p(\Sigma_0)} \leq \: c(\alpha) \| S(A) \|^{p'}_{L^{p'}(\Sigma_0)} + \alpha C_\Gamma \| \nabla \ub \|^{p}_{L^p(\Omega_{bl}^{-})} \\ & \leq c(\alpha) \| S(A) \|^{p'}_{L^{p'}(\Sigma_0)} + \alpha C_\Gamma C_K\| \tD \ub \|^{p}_{L^p(\Omega_{bl}^{-})} , \end{split} \end{equation} where the trace theorem, the Poincar\'e inequality and the Korn inequality were employed. By choosing the coefficient $\alpha$ small enough, and by the imbedding of $L^p(\Omega_{bl}^-)$ in $L^2(\Omega_{bl}^-)$, \eqref{apesDup} provides \begin{equation}\label{es:Dup} \int_{\Omega_{bl}} |\tD \ub|^{p} + |\tD \ub|^{2} \dy \leq C \| S(A) \|^{p'}_{L^{p'}(\Sigma_0)} < \infty . \end{equation} Finally, by the Korn and Poincar\'e inequalities: $\ub \in W^{1,p}(\Omega_{bl})$ for $2\leq p < \infty$. \subsubsection*{Construction scheme for the solution} We briefly explain how to construct a solution satisfying the above estimates. We restrict to the most difficult case $p \in ]1,2]$.
There are two steps: {\em Step 1}: we solve the same equations, but {\em in the bounded domain $\Omega_{bl,n} = \Omega_{bl} \cap \{ y_2 < n \}$}, with a Dirichlet boundary condition at the top. As $\Omega_{bl,n}$ is bounded, the imbedding of $W^{1,p}$ in $L^p$ is compact, so that a solution $u_{bl,n}$ can be built in a standard way. Namely, one can construct a sequence of Galerkin approximations $u_{bl,n,m}$ by Schauder's fixed point theorem. Then, as the estimate \eqref{aprioriestimate} holds for $u_{bl,n,m}$ uniformly in $m$ and $n$, the sequence $D u_{bl,n,m}$ is bounded in $L^p(\Omega_{bl,n})$ uniformly in $m$. Sending $m$ to infinity yields a solution $u_{bl,n}$; the convergence of the nonlinear stress tensor follows from Minty's trick. Note that one can then perform on $u_{bl,n}$ the manipulations of the previous paragraph, so that it satisfies \eqref{aprioriestimate} uniformly in $n$. {\em Step 2}: we let $n$ go to infinity. We first extend $u_{bl,n}$ by $0$ for $y_2 > n$, and fix $M > |A|$. From the uniform estimate \eqref{aprioriestimate}, we easily get the following convergences (up to a subsequence): \begin{equation} \begin{aligned} & u_{bl,n} \rightarrow u_{bl} \mbox{ weakly in } W^{1,p}_{loc}(\Omega_{bl}), \\ & D u_{bl,n} \rightarrow D u_{bl} \mbox{ weakly in } L^p(\Omega^-_{bl}), \\ & D u_{bl,n} {\bbbone}_{|Du_{bl,n}| < M}\rightarrow V_1 \mbox{ weakly in } L^2(\Omega^+_{bl}), \quad \mbox{ weakly-* in } L^\infty(\Omega_{bl}^+), \\ & D u_{bl,n} {\bbbone}_{|D u_{bl,n}| \ge M}\rightarrow V_2 \mbox{ weakly in } L^p(\Omega^+_{bl}). \end{aligned} \end{equation} Of course, $D u_{bl} = V_1 + V_2$ in $\Omega_{bl}^+$. A key point is that $$\mbox{$S(A + D u_{bl,n}) - S(A)$ is bounded uniformly in $n$ in $(L^{p}(\Omega^+_{bl}))' = L^{p'}(\Omega^+_{bl})$ and in $\left(L^2(\Omega^+_{bl}) \cap L^\infty(\Omega_{bl}^+)\right)'$,}$$ and converges weakly-* to some $S^+$ in that space.
To establish this uniform bound, we treat separately $$ S_{n,1} \: := \: (S(A + D u_{bl,n}) - S(A)) {\bbbone}_{|Du_{bl,n}| < M}, \quad S_{n,2} \: := \: (S(A + D u_{bl,n}) - S(A)) {\bbbone}_{|Du_{bl,n}| \ge M}. $$ \begin{itemize} \item For $S_{n,1}$, we use the inequality \eqref{ineq3}. It gives $|S_{n,1}| \le C |D u_{bl,n}| {\bbbone}_{|Du_{bl,n}| < M}$, which provides a uniform bound in $L^{2} \cap L^\infty$, and so in particular in $L^{p'}$ and in $L^2$. \item For $S_{n,2}$, we first use that $|S_{n,2}| \le C |D u_{bl,n}|^{p-1} {\bbbone}_{|D u_{bl,n}| \ge M}$, so that it is uniformly bounded in $L^{p'}$. We then use \eqref{ineq3}, so that $ |S_{n,2}| \le C |D u_{bl,n}| {\bbbone}_{|D u_{bl,n}| \ge M}$, which yields a uniform bound in $L^p$, in particular in $(L^2 \cap L^\infty)'$ ($p \in ]1,2]$). \end{itemize} From there, standard manipulations give $$ \int_{\Omega_{bl}^+} (S(A+ D u_{bl,n}) - S(A)) : D u_{bl,n} \rightarrow \int_{\Omega_{bl}^+} S^+ : (V_1 + V_2) = \int_{\Omega_{bl}^+} S^+ : D u_{bl} . $$ One has even more directly $$ \int_{\Omega_{bl}^-} S(D u_{bl,n}) : D u_{bl,n} \rightarrow \int_{\Omega_{bl}^-} S^- : D u_{bl} , $$ and one concludes by Minty's trick that $S^+ = S(A+D u_{bl}) - S(A)$, $\: S^- = S(D u_{bl})$. It follows that $u_{bl}$ satisfies \eqref{BL1}-\eqref{BL2} in a weak sense. Finally, one can perform on $u_{bl}$ the manipulations of the previous paragraph, so that it satisfies \eqref{aprioriestimate}. \subsubsection*{Uniqueness} Let $u_{bl}^1$ and $u_{bl}^2$ be weak solutions of \eqref{BL1}-\eqref{BL2}, that is, satisfying the variational formulation \begin{equation} \label{VF} \int_{\Omega_{bl}^+} S(A+ D u_{bl}^i) : D \varphi \: + \: \int_{\Omega_{bl}^-} S(D u_{bl}^i) : D \varphi = -\int_{y_2 = 0} S(A)n \cdot \varphi {\rm\,d}S, \quad i=1,2 \end{equation} for all smooth divergence-free fields $\varphi \in C^\infty_c(\Omega_{bl})$.
The point is then to replace $\varphi$ by $u_{bl}^1 - u_{bl}^2$, to obtain \begin{equation} \label{zeroidentity} \int_{\Omega_{bl}^+} \left( S(A+ D u_{bl}^1) - S(A + D u_{bl}^2) \right) : D (u_{bl}^1 - u_{bl}^2) \: + \: \int_{\Omega_{bl}^-} (S(D u_{bl}^1) - S(D u_{bl}^2)) : D (u_{bl}^1 - u_{bl}^2) = 0 . \end{equation} Rigorously, one constructs by convolution a sequence $\varphi_n$ such that $D \varphi_n$ converges appropriately to $D u_{bl}^1 - D u_{bl}^2$. In the case $p < 2$, the convergence holds in $(L^2(\Omega_{bl}^+) \cap L^\infty(\Omega_{bl}^+)) + L^p(\Omega_{bl}^+)$, respectively in $L^p(\Omega_{bl}^-)$. One can pass to the limit as $n$ goes to infinity because $$ S(A+D u_{bl}^1) - S(A+D u_{bl}^2) = \left( S(A+D u_{bl}^1) - S(A) \right) + \left( S(A) - S(A+D u_{bl}^2) \right), $$ respectively $S(D u_{bl}^1) - S(D u_{bl}^2)$, belongs to the dual space: see the arguments of the construction scheme of section \ref{sectionWP}. Eventually, by strict convexity of $M \mapsto |M|^p$ ($p > 1$), \eqref{zeroidentity} implies that $D u_{bl}^1 = D u_{bl}^{2}$. Hence $u_{bl}^1 - u_{bl}^2$ is a rigid motion; in dimension $2$, the periodicity in $y_1$ rules out the rotation part, so that $u_{bl}^1 - u_{bl}^2$ is a constant, and due to the zero boundary condition at $\Gamma_{bl}$, we get $u_{bl}^1 = u_{bl}^2$. \subsection{Saint-Venant estimate} \label{subsecstvenant} We focus in this paragraph on the asymptotic behaviour of $u_{bl}$ as $y_2$ goes to infinity. The point is to show exponential convergence of $u_{bl}$ to a constant field. At first, we can use interior regularity results for the generalized Stokes equation in two dimensions. In particular, relying on the results of \cite{Wolf} for $p < 2$, and \cite{Kaplicky2002} for $p \ge 2$, we have: \begin{Lemma} \label{lemma_unifbound} The solution built in Theorem \ref{thWP} satisfies: $u_{bl}$ has $C^{1,\alpha}$ regularity over $\Omega_{bl} \cap \{ y_2 > 1\}$ for some $0 < \alpha < 1$. In particular, $\nabla u_{bl}$ is bounded uniformly over $\Omega_{bl} \cap \{ y_2 > 1\}$. \end{Lemma} {\em Proof}.
Let $0 \le t < s$. We define $\Omega_{bl}^{t,s} \: := \: \Omega_{bl} \cap \{t < y_2 \le s\}$. Note that $\Omega_{bl} \cap \{y_2 > 1 \} = \cup_{t \in \N_*} \Omega_{bl}^{t,t+1}$. Moreover, from the {\it a priori} estimate \eqref{aprioriestimate} or \eqref{es:Dup}, we deduce easily that \begin{equation} \label{uniformLp} \| D u_{bl} \|_{L^p(\Omega_{bl}^{t,t+2})} \le C \end{equation} for all $t > 0$, for a constant $C$ that does not depend on $t$. We then introduce: \begin{equation*} v_t \: := \: 2 A \left(0,y_2 - (t+\frac{1}{2})\right) + u_{bl} \: - \frac{1}{2} \: \int_{\Omega_{bl}^{t-\frac{1}{2},t+\frac{3}{2}}} u_{bl} \dy, \quad y \in \Omega_{bl}^{t-\frac{1}{2},t+\frac{3}{2}}, \quad \forall t \in \N_*. \end{equation*} From \eqref{BL1}: $$ -\dive(S(D v_t)) + \nabla p_{bl} = 0, \quad \dive v_t= 0 \quad \mbox{ in } \: \Omega_{bl}^{t-1/2,t+3/2}, \quad \forall t \in \N_*. $$ Moreover, we get for some $C$ independent of $t$: \begin{equation} \label{uniformLpv} \| v_t \|_{W^{1,p}(\Omega_{bl}^{t-1/2,t+3/2})} \le C \quad \forall t \in \N_*. \end{equation} Note that this $W^{1,p}$ control follows from \eqref{uniformLp}: indeed, one can apply the Poincar\'e inequality for functions with zero mean, and then the Korn inequality. One can then rely on the interior regularity results of articles \cite{Wolf} and \cite{Kaplicky2002}, depending on the value of $p$: $v_{t}$ has $C^{1,\alpha}$ regularity over $\Omega_{bl}^{t,t+1}$ for some $\alpha \in (0,1)$ (independent of $t$): for some $C'$, $$\| v_t \|_{C^{1,\alpha}(\Omega_{bl}^{t,t+1})} \le C', \quad \mbox{and in particular} \quad \| \nabla v_t \|_{L^\infty(\Omega_{bl}^{t,t+1})} \le C' \quad \forall t \in \N_*.$$ Going back to $u_{bl}$ concludes the proof of the lemma. We are now ready to establish a key point in the proof of Theorem \ref{thEC}, called a {\it Saint-Venant estimate}: namely, we show that the energy of the solution located above $y_2 = t$ decays exponentially with $t$.
In our context, a good energy is $$ E(t) \: := \: \int_{\{y_2 > t\}} | \nabla u_{bl} |^2 \dy $$ for $t > 1$. Indeed, from Lemma \ref{lemma_unifbound}, there exists $M$ such that $|D u_{bl}| \le M$ for all $y$ with $y_2 > 1$. In particular, in the case $p < 2$, when localized above $y_2 =1$, the energy functional that appears on the left-hand side of \eqref{aprioriestimate} only involves the $L^2$ norm of the symmetric gradient (or of the gradient by the homogeneous Korn inequality, {\it cf} the appendix). Hence, $\nabla u_{bl} \in L^2(\Omega_{bl} \cap \{ y_2 > 1 \})$. The same holds for $p \ge 2$, thanks to \eqref{es:Dup}. \begin{Proposition} \label{expdecay} There exist $C,\delta > 0$, such that $E(t) \le C \exp(-\delta t)$. \end{Proposition} {\em Proof}. Let $t > 1$, $\Omega_{bl}^t := \Omega_{bl} \cap \{ y_2 > t \}$. Let $M$ be such that $|D u_{bl}|$ is bounded by $M$ over $\Omega_{bl}^1$, which exists due to Lemma~\ref{lemma_unifbound}. As explained just above, one has $ \int_{\Omega_{bl}^1 } |D u_{bl}|^2 < +\infty$, and by the Korn inequality $E(1)$ is finite. In particular, $E(t)$ goes to zero as $t \rightarrow +\infty$, and the point is to quantify the speed of convergence. Using inequality \eqref{ineq1} (with $a = A$, $b = A + D u_{bl}$), we find \begin{equation} \label{boundE} \begin{aligned} E(t) \le C \int_{\Omega_{bl}^t} |D u_{bl}|^2 \dy & \le C' \int_{\Omega_{bl}^t} \left( |A + D u_{bl}|^{p-2}(A + D u_{bl}) - | A |^{p-2} A \right): D u_{bl} \dy \\ & \le C' \lim\limits_{n \to \infty } \int_{\Omega_{bl}} \left( |A + D u_{bl}|^{p-2}( A + D u_{bl}) - |A |^{p-2} A \right): D u_{bl} \, \chi_n (y_2) \dy \end{aligned} \end{equation} for a smooth $\chi_n$ with values in $[0,1]$ such that $\chi_n = 1$ over $[t,t+n]$, $\chi_n=0$ outside $[t-1, t+n+1]$, and $|\chi'_n| \le 2$. Then, we integrate by parts the right-hand side, taking into account the first equation in \eqref{BL1}.
We write \begin{align} \nonumber & \int_{\Omega_{bl}} \left( |A + D u_{bl}|^{p-2}( A + D u_{bl}) - |A |^{p-2} A \right): D u_{bl} \, \chi_n (y_2) \dy \\ \nonumber = & - \int_{\Omega_{bl}} \nabla p_{bl} \cdot u_{bl} \chi_n(y_2) \dy - \int_{\Omega_{bl}} \left( (S(A + D u_{bl}) - S(A)) \left( \begin{smallmatrix} 0 \\ \chi'_n \end{smallmatrix} \right) \right) \cdot u_{bl} \dy \\ \label{I1I2} = &\int_{\Omega_{bl}} \left(S(A) - S(A+D u_{bl})\right) \left( \begin{smallmatrix} 0 \\ \chi'_n \end{smallmatrix} \right) \cdot u_{bl} \dy + \int_{\Omega_{bl}} p_{bl} \chi'_n u_{bl,2} \dy \: := \: I_1 + I_2. \end{align} To estimate $I_1$ and $I_2$, we shall make use of simple vector inequalities. Namely: \begin{equation} \label{ineq3} \mbox{for all $p \in ]1,2]$, for all vectors $a,b$, $a\neq 0$}, \quad | |b|^{p-2} b - |a|^{p-2} a | \: \le \: C_{p,a} \, |b-a| , \end{equation} whereas \begin{equation} \label{ineq3duo} \mbox{for all $p > 2$, for all vectors $a,b$, $|b| \le M$}, \quad | |b|^{p-2} b - |a|^{p-2} a | \: \le \: C_{p,a,M} \, |b-a| . \end{equation} The latter is a simple application of the finite increments inequality. As regards the former, we distinguish between two cases: \begin{itemize} \item If $|b-a| < \frac{|a|}{2}$, it follows from the finite increments inequality. \item If $|b-a| \ge \frac{|a|}{2}$, we simply write \begin{align*} | |b|^{p-2} b - |a|^{p-2} a | \: & \le \: |b|^{p-1} + |a|^{p-1} \: \le \: (3^{p-1} + 2^{p-1}) |b-a|^{p-1} \le \: (3^{p-1} + 2^{p-1}) (\frac{|a|}{2})^{p-2} |b-a| \end{align*} using that $\left(\frac{2 |b-a|}{|a|}\right)^{p-1} \le \left(\frac{2 |b-a|}{|a|}\right)$ for $1 < p \le 2$. \end{itemize} We shall also make use of the following: \begin{Lemma} \label{lem_averages} For any height $t > 0$ \begin{description} \item[i)] $\int_{\{y_2 =t\}} u_{bl,2} = 0$. \item[ii)] $\int_{\{y_2=t\}} (S(A+D u_{bl}) - S(A)) \left( \begin{smallmatrix} 0 \\ 1 \end{smallmatrix} \right) \cdot \left( \begin{smallmatrix} 1 \\ 0 \end{smallmatrix} \right) = 0$.
\end{description} \end{Lemma} {\em Proof of the lemma}. \noindent i) The integration of the divergence-free condition over $\Omega_{bl}^{0,t}$ leads to \begin{align*} 0 = \int_{\Omega_{bl}^{0,t}} \dive u_{bl} & = \int_{\{y_2 = t \}} u_{bl,2} - \int_{\{y_2 = 0^+ \}}u_{bl,2} = \int_{\{y_2 = t \}} u_{bl,2} - \int_{\{y_2 = 0^- \}} u_{bl,2} \\ & = \int_{\{y_2 = t \}} u_{bl,2} - \int_{\Omega_{bl}^-} \dive u_{bl} + \int_{\Gamma_{bl}} u_{bl} \cdot n = \int_{\{y_2 = t \}} u_{bl,2} , \end{align*} where the second and fourth equalities follow respectively from the no-jump condition on $u_{bl}$ at $y_2 = 0$ and the Dirichlet condition at $\Gamma_{bl}$. \noindent ii) By integration of the first equation in \eqref{BL1} over $\Omega_{bl}^{0,t}$ we get: $$ \int_{y_2 = t} (S(A+D u_{bl}) - S(A) - p_{bl} Id) \left( \begin{smallmatrix} 0 \\ 1 \end{smallmatrix} \right) = \int_{y_2 = 0^+} (S(A+D u_{bl}) - S(A) - p_{bl} Id) \left( \begin{smallmatrix} 0 \\ 1 \end{smallmatrix} \right). $$ In particular, the quantity $$ I := \int_{y_2 = t} (S(A+D u_{bl}) - S(A) - p_{bl} Id) \left( \begin{smallmatrix} 0 \\ 1 \end{smallmatrix} \right) \cdot \left( \begin{smallmatrix} 1 \\ 0 \end{smallmatrix} \right) = \int_{y_2 = t} ( S(A+D u_{bl}) - S(A)) \left( \begin{smallmatrix} 0 \\ 1 \end{smallmatrix} \right) \cdot \left( \begin{smallmatrix} 1 \\ 0 \end{smallmatrix} \right) $$ is independent of the variable $t$. To show that it is zero, we apply inequality \eqref{ineq3} or \eqref{ineq3duo} with $a = A$ and $b = A + D u_{bl}$, so that $$I^2 \: \le \: C \left( \int_{\{y_2=t\}} |D u_{bl}| \right)^2 \: \le \: C' \int_{\{y_2=t\}} |D u_{bl}|^2 $$ (the constant $C$ is finite thanks to Lemma~\ref{lemma_unifbound}). As $D u_{bl}$ belongs to $L^2(\Omega^1_{bl})$, there exists a sequence $t_n$ such that $ \int_{\{y_2=t_n\}} |D u_{bl}|^2 \rightarrow 0$ as $n \rightarrow +\infty$. It follows that $I = 0$. This concludes the proof of the Lemma. We can now come back to the treatment of $I_1$ and $I_2$.
\begin{itemize} \item Treatment of $I_1$. \end{itemize} We note that $\chi'_n$ is supported in $[t-1,t] \cup [t+n,t+n+1]$. By Lemma \ref{lem_averages} we can write \begin{align}\label{I1a} I_1 & = \int_{\Omega_{bl}^{t-1,t}} \left(S(A) - S(A+D u_{bl})\right) \left( \begin{smallmatrix} 0 \\ \chi'_n \end{smallmatrix} \right) \cdot (u_{bl} - \overline{c}) \\ & + \: \int_{\Omega_{bl}^{t+n,t+n+1}} \left(S(A) - S(A+D u_{bl})\right) \left( \begin{smallmatrix} 0 \\ \chi'_n \end{smallmatrix} \right) \cdot (u_{bl} - \overline{c}_n) \: := I_{1,1} + I_{1,2} , \end{align} where \begin{equation}\label{mean} \overline{c} := \Xint-_{\Omega_{bl}^{t-1,t}} u_{bl} = \left( \Xint-_{\Omega_{bl}^{t-1,t}} u_{bl,1} , 0 \right) \quad \mbox{ and } \quad \overline{c}_n := \Xint-_{\Omega_{bl}^{t+n,t+n+1}} u_{bl} = \left( \Xint-_{\Omega_{bl}^{t+n,t+n+1}} u_{bl,1} , 0 \right). \end{equation} Again, we apply inequality \eqref{ineq3} or \eqref{ineq3duo} to find $$ I_{1,1} \le C \int_{\Omega_{bl}^{t-1,t}} | D u_{bl} | \, |u_{bl} - \overline{c}| $$ and by the Poincar\'e inequality for functions with zero mean, we easily deduce that $$ I_{1,1} \le C' \int_{\Omega_{bl}^{t-1,t}} | \nabla u_{bl} |^2 = C' \left( E(t-1) - E(t) \right). $$ An upper bound on $I_{1,2}$ can be derived in the same way: $$ I_{1,2} \le C' (E(t+n) - E(t+n+1)) ,$$ where the right-hand side goes to zero as $n \rightarrow +\infty$, since $E(t') \to 0$ as $t'\to \infty$. Eventually: \begin{equation} \label{estimI1} \limsup_{n \rightarrow +\infty} I_{1} \: \le \: C \left(E(t-1) - E(t) \right). \end{equation} \begin{itemize} \item Treatment of $I_2$. \end{itemize} We can again use the decomposition \begin{equation}\label{I2a} I_2 = \int_{\Omega_{bl}^{t-1,t}} p_{bl} \chi'_n u_{bl,2} \: + \: \int_{\Omega_{bl}^{t+n,t+n+1}} p_{bl} \chi'_n u_{bl,2} \: := \: I_{2,1} + I_{2,2}. \end{equation} From Lemma \ref{lem_averages} i), we infer that $$ \int_{\Omega_{bl}^{t-1,t}} \chi'_n(y_2) u_{bl,2}(y) \, \dy = 0.
$$ By standard results, there exists $w \in H^1_0(\Omega_{bl}^{t-1,t})$ satisfying $ \dive w(y) = \chi'_n(y_2) u_{bl,2}(y), \quad y \in \Omega_{bl}^{t-1,t}$, and the estimate $$\| w \|_{H^1(\Omega_{bl}^{t-1,t})} \le C \| \chi'_n(y_2) u_{bl,2}(y) \|_{L^2(\Omega_{bl}^{t-1,t})} \le C' \| u_{bl,2} \|_{L^2(\Omega_{bl}^{t-1,t})}, $$ for constants $C,C'$ that do not depend on $t$. As $w$ is zero at the boundary: $$ I_{2,1} = \int_{\Omega_{bl}^{t-1,t}} p_{bl} \dive w = - \int_{\Omega_{bl}^{t-1,t}} \nabla p_{bl} \cdot w = \int_{\Omega_{bl}^{t-1,t}} (S(A+D u_{bl}) - S(A)) \cdot \nabla w $$ where the last equality comes from \eqref{BL1}. We find as before ({\it cf} \eqref{ineq3} or \eqref{ineq3duo}): \begin{align*} |I_{2,1}| & \le C \int_{\Omega_{bl}^{t-1,t}} | D u_{bl} | |\nabla w| \le C \| D u_{bl} \|_{L^2(\Omega_{bl}^{t-1,t})} \| \nabla w \|_{L^2(\Omega_{bl}^{t-1,t})} \\ & \le C' \| D u_{bl} \|_{L^2(\Omega_{bl}^{t-1,t})} \, \|u_{bl,2} \|_{L^2(\Omega_{bl}^{t-1,t})} \le C'' \| \nabla u_{bl} \|_{L^2(\Omega_{bl}^{t-1,t})}^2 \end{align*} where we have controlled the $L^2$ norm of $u_{bl,2}$ by the $L^2$ norm of its gradient (we recall that $u_{bl,2}$ has zero mean). A similar treatment can be performed with $I_{2,2}$, so that $I_{2,1} \le C (E(t-1) - E(t))$, $\: I_{2,2} \le C ( E(t+n) - E(t+n+1))$ and \begin{equation} \label{estimI2} \limsup_{n \rightarrow +\infty} I_{2} \: \le \: C \left(E(t-1) - E(t) \right). \end{equation} Finally, combining \eqref{boundE}, \eqref{I1I2}, \eqref{estimI1} and \eqref{estimI2}, we get $$ E(t) \le C (E(t-1) - E(t)) $$ for some $C > 0$. It is well-known that this kind of differential inequality implies the exponential decay of Proposition \ref{expdecay} (see the appendix): indeed, setting $\lambda := \frac{C}{1+C} < 1$, it can be rewritten as $E(t) \le \lambda E(t-1)$, and since $E$ is nonincreasing, iterating yields $E(t) \le \lambda^{\lfloor t-1 \rfloor} E(1) \le C' e^{-\delta t}$ with $\delta = \ln(1/\lambda)$. The proof of the Proposition is therefore complete. We have now all the ingredients to show Theorem \ref{thEC}. {\em Proof of Theorem \ref{thEC}}.
Thanks to the regularity Lemma \ref{lemma_unifbound}, we know that $\nabla u_{bl}$ is uniformly bounded over $\Omega_{bl}^1$, and belongs to $L^2(\Omega_{bl}^1)$. Of course, this implies that $\nabla u_{bl}$ belongs to $L^q(\Omega_{bl}^1)$ for all $q \in [2,+\infty]$. More precisely, combining the $L^\infty$ bound with the $L^2$ exponential decay of Proposition \ref{expdecay}, we have that \begin{equation} \label{expdecayLq} \| \nabla u_{bl} \|_{L^q(\Omega_{bl}^t)} \le C \exp(-\delta t) \end{equation} (for some $C$ and $\delta$ depending on $q$). This exponential decay extends straightforwardly to all $1 \le q < +\infty$. Let us now fix $q > 2$. To understand the behavior of $u_{bl}$ itself, we write the Sobolev (Morrey) inequality: for all $y$ and $y' \in B(y,r)$, \begin{equation} \label{Sobolevineq} |u(y') - u(y)| \: \le \: C r^{1-\frac{2}{q}} \left( \int_{B(y,2r)} |\nabla u(z)|^q dz \right)^{1/q}. \end{equation} We deduce from there that, for all $y_2 > 2$, for all $s \ge 0$, \begin{align*} & |u_{bl}(y_1,y_2+s) - u_{bl}(y_1,y_2)| \\ & \le |u_{bl}(y_1,y_2+s) - u_{bl}(y_1,y_2+ \lfloor s \rfloor) | + \sum_{k=0}^{\lfloor s \rfloor-1} |u_{bl}(y_1,y_2+k+1) - u_{bl}(y_1,y_2+k)| \\ & \le C \left( \| \nabla u_{bl} \|_{L^q(B((y_1,y_2+s)^t,1))} + \sum_{k=0}^{\lfloor s \rfloor-1} \| \nabla u_{bl} \|_{L^q(B((y_1,y_2+k)^t,1))} \right) \\ & \le C' \left( e^{-\delta(y_2+s)} + \sum_{k=0}^{\lfloor s \rfloor-1} e^{-\delta (y_2 + k)} \right) \end{align*} where the second inequality comes from \eqref{Sobolevineq} and the last one from \eqref{expdecayLq}. This implies that $u_{bl}$ satisfies the Cauchy criterion uniformly in $y_1$, and thus converges uniformly in $y_1$ to some $u^\infty = u^\infty(y_1)$ as $y_2 \rightarrow +\infty$. To show that $u^\infty$ is a constant field, we rely again on \eqref{Sobolevineq}, which yields for all $|y_1 - y'_1| \le 1$: $$ |u_{bl}(y_1,y_2) - u_{bl}(y'_1, y_2) | \le C |y_1 - y'_1|^{1 - \frac{2}{q}} \| \nabla u_{bl} \|_{L^q(B((y_1,y_2)^t,1))} \le C' e^{-\delta y_2}.
$$ Sending $y_2$ to infinity gives: $u^\infty(y_1) = u^\infty(y'_1)$. Finally, the fact that $u^\infty$ is a horizontal vector field follows from Lemma \ref{lem_averages}, point i). This concludes the proof of Theorem~\ref{thEC}. For later purposes, let us also state \begin{Corollary} {\bf (higher order exponential decay)} \label{higherorder} \begin{itemize} \item There exists $\alpha \in (0,1)$, such that for all $s \in [0,\alpha)$, for all $1 \leq q < +\infty$, one can find $C$ and $\delta > 0$ with $$ \| u_{bl} - u^\infty \|_{W^{s+1,q}(\Omega_{bl}^t)} \le C \exp(-\delta t), \quad \forall t \ge 1. $$ \item There exists $\alpha \in (0,1)$, such that for all $s \in [0,\alpha)$, for all $1 \leq q < +\infty$, one can find $C$ and $\delta > 0$ with $$ \| p_{bl} - p^t \|_{W^{s,q}(\Omega_{bl}^{t,t+1})} \: \le \: C \exp(-\delta t), \quad \forall t \ge 1, \quad \mbox{for some constant $p^t$}.$$ \end{itemize} \end{Corollary} {\em Proof of the corollary}. It was established above that $$ |u(y_1,y_2+s) - u(y_1,y_2)| \le C' \left( e^{-\delta'(y_2+s)} + \sum_{k=0}^{\lfloor s \rfloor -1} e^{-\delta' (y_2 + k)}\right) $$ for some $C'$ and $\delta' > 0$. From there, after sending $s$ to infinity, it is easily deduced that $$ \| u_{bl} - u^\infty \|_{L^q(\Omega_{bl}^t)} \le C \exp(-\delta t) .$$ It then remains to control the $W^{s,q}$ norm of $\nabla u_{bl}$. This control comes from the $C^{0,\alpha}$ uniform bound on $\nabla u_{bl}$ over $\Omega_{bl}^1$, see Lemma \ref{lemma_unifbound}. By Sobolev imbedding, it follows that $$ \| \nabla u_{bl} \|_{W^{s,q}(\Omega_{bl}^{t,t+1})} \le C, \quad \forall s \in [0,\alpha), \forall 1\leq q < +\infty $$ uniformly in $t$.
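The interpolation performed next rests on the following standard inequality, stated here somewhat loosely (up to the usual identifications between fractional Sobolev and Besov spaces): for $0 \le s < \alpha$ and $f \in W^{\alpha,q}$,

```latex
\| f \|_{W^{s,q}(\Omega_{bl}^{t,t+1})}
\: \le \: C \, \| f \|_{W^{\alpha,q}(\Omega_{bl}^{t,t+1})}^{\frac{s}{\alpha}}
\, \| f \|_{L^{q}(\Omega_{bl}^{t,t+1})}^{1-\frac{s}{\alpha}} .
```

Applied to $f = \nabla u_{bl}$, the first factor is bounded uniformly in $t$, while the second decays exponentially, which produces an exponential rate $\delta'' = (1-\frac{s}{\alpha})\, \delta'$.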
Interpolating this bound with the bound $\| \nabla u_{bl} \|_{L^q(\Omega_{bl}^{t,t+1})} \le C' \exp(-\delta' t)$ previously seen, we get $$ \| \nabla u_{bl} \|_{W^{s,q}(\Omega_{bl}^{t,t+1})} \le C'' \exp(-\delta'' t), \quad \forall s \in [0,\alpha), \forall 1\leq q < +\infty .$$ The first inequality of the Corollary follows. The second inequality, on the pressure $p_{bl}$, is derived from the one on $u_{bl}$. This derivation is somewhat standard, and we do not detail it for the sake of brevity. \section{Error estimates, wall laws} \subsection{Approximation by the Poiseuille flow.} We now go back to our primitive system \eqref{EQ1}. A standard energy estimate on $u^\eps$ leads to $$ \int_{\Omega^\eps} |D u^\eps|^p \: \le \: \int_{\Omega^\eps} e_1 \cdot u^\eps. $$ The Korn inequality implies that $$ \int_{\Omega^\eps} |\nabla u^\eps|^p \: \le \: C \int_{\Omega^\eps} |D u^\eps|^p $$ for a constant $C$ independent of $\eps$: indeed, one can extend $u^\eps$ by $0$ for $x_2 < \eps \gamma(x_1/\eps)$ and apply the inequality on the square $\T \times [-1,1]$, {\it cf.} the appendix. Also, by the Poincar\'e inequality: $$ | \int_{\Omega^\eps} e_1 \cdot u^\eps | \le C \| u^\eps \|_{L^p(\Omega^\eps)} \le C' \| \nabla u^\eps \|_{L^p(\Omega^\eps)}. $$ We find that \begin{equation} \label{basic_estimate} \| u^\eps \|_{W^{1,p}(\Omega^\eps)} \le C. \end{equation} In particular, it provides strong convergence of $u^\eps$ in $L^p$ by the Rellich theorem (up to a subsequence). As can be easily guessed, the limit of $u^\eps$ in $\Omega$ is the generalized Poiseuille flow $u^0$. One can even obtain an error estimate by a direct energy estimate of the difference (extending $u^0$ and $p^0$ by zero in $R^\eps$). We focus on the case $1 < p \le 2$, and comment briefly on the easier case $p \ge 2$ afterwards. We write $\ue = \uz + \we$ and $p^\ep = p^0 + q^\ep$. 
We find, taking into account \eqref{EQ2}: \begin{equation}\label{EQ4} \begin{split} - \Div \bS(\tD \uz + \tD \we) + \Div \bS(\tD \uz) + \nabla q^\ep & = {\bbbone}_{R^\eps} e_1 \quad \mbox{ in } \Omega^\ep \setminus \Sigma_0 ,\\ \Div \we & = 0 \quad \mbox{ in } \Omega^\ep , \\ \we & = 0 \quad \mbox{ on } \Gamma^\ep \cup \Sigma_1 , \\ \we & \mbox{ is periodic in } x_1 \mbox{ with period } 1 , \\ [\we]|_{\Sigma_0} = 0, \quad [\tS ( \tD \uz + \tD \we ) {n} - \tS ( \tD \uz ) {n} - q^\ep n ]|_{\Sigma_0} & = -\tS ( \tD \uz) {n}|_{x_2=0^+} . \end{split} \end{equation} In particular, performing an energy estimate and distinguishing between $\Omega$ and $R^\eps$, we find \begin{equation}\label{weakEQ4} \int_{\Omega} \left( \tS( \tD \uz + \tD \we) - \tS( \tD \uz) \right) : \tD \we + \int_{R^\ep} \tS( \tD \we ) : \tD \we = - \int_{\Sigma_0} \tS (\tD \uz ){n}\vert_{x_2 = 0^+} \cdot \we {\rm d}S + \int_{R^\eps} e_1 \cdot \we . \end{equation} Relying on inequalities \eqref{ineq1}-\eqref{ineq2}, we get for any $M > \| Du^0 \|_{L^\infty}$: \begin{multline}\label{zeroap1} \| D \we \|_{L^p(\Omega \cap \{ |D \we| \ge M\})}^{p} + \| D \we \|_{L^2(\Omega \cap \{ |D \we| \le M\})}^{2} + \| D \we \|_{L^p(R^\eps)}^{p} \\ \le C \left( \left| \int_{\Sigma_0} \bS(\tD \uz) {n} \cdot \we {\rm\,d}S \right| + \left| \int_{R^\eps} e_1 \cdot \we \right| \right) \end{multline} Then, by the H\"older inequality and by Proposition~\ref{rescaledTracePoincare} in the appendix, we have that \begin{equation}\label{es1} | \int_{R^\ep} e_1 \cdot \we | \leq \eps^{\frac{p-1}{p}} \| \we \|_{L^p(R^\eps)} \leq C \eps^{1 + \frac{p-1}{p}} \| \nabla \we \|_{L^p(R^\ep)} . \end{equation} Next, since $D \uz$ is given explicitly and uniformly bounded, Proposition~\ref{rescaledTracePoincare} provides \begin{equation}\label{es2} \begin{split} & | \int_{\Sigma_0} \tS(\tD \uz) {n}\vert_{x_2 = 0^+} \cdot \we {\rm\,d}S | \leq C \| \we \|_{L^{p}(\Sigma_0)} \leq C' \ep^{\frac{p-1}{p}} \| \nabla \we \|_{L^{p}(R^\ep)} . 
\end{split} \end{equation} Note that, as $\we$ is zero at the lower boundary of the channel, we can extend it by $0$ below $R^\eps$ and apply Korn inequalities in a strip (see the appendix). We find $$ \| \nabla \we \|_{L^p(R^\ep)} \le C \| D \we \|_{L^p(R^\ep)} $$ for some constant $C > 0$ independent of $\eps$. Summarising, we get \begin{equation*} \| D \we \|_{L^p(\Omega \cap \{ |D \we| \ge M\})}^{p} + \| D \we \|_{L^2(\Omega \cap \{ |D \we| \le M\})}^{2} + \| D \we \|_{L^p(R^\eps)}^{p} \le C \ep^{\frac{p-1}{p}} \| D \we \|_{L^p(R^\ep)} \end{equation*} and consequently \begin{equation}\label{zeroap2} \| D \we \|_{L^p(\Omega \cap \{ |D \we| \ge M\})}^{p} + \| D \we \|_{L^2(\Omega \cap \{ |D \we| \le M\})}^{2} + \| D \we \|_{L^p(R^\eps)}^{p} \le C \ep . \end{equation} In the case $p \ge 2$, one needs to use \eqref{abp_1} instead of \eqref{ineq1}-\eqref{ineq2}, which yields \begin{equation} \label{zeroap3}  \| D \we \|_{L^p(\Omega^\eps)} \le C \eps^{\frac{1}{p}}, \quad p \in [2, \infty). \end{equation} \subsection{Construction of a refined approximation} The aim of this section is to design a better approximation of the exact solution $u^\eps$ of \eqref{EQ1}. This approximation will of course involve the boundary layer profile $u_{bl}$ studied in the previous section. Consequences of this approximation in terms of wall laws will be discussed in paragraph \ref{parag_wall_laws}. From the previous paragraph, we know that the Poiseuille flow $u^0$ is the limit of $u^\eps$ in $W^{1,p}(\Omega)$. However, the extension of $u^0$ by $0$ in the rough part of the channel was responsible for a jump of the stress tensor at $\Sigma_0$. This jump was the main limitation of the error estimates \eqref{zeroap2}-\eqref{zeroap3}, and the reason for the introduction of the boundary layer term $u_{bl}$. Hence, we hope to obtain a better approximation by replacing $u^0$ with $u^0 + \eps u_{bl}(\cdot/\eps)$. 
Actually, one can still improve the approximation, accounting for the so-called boundary layer tail $u^\infty$. More precisely, {\em in the Newtonian case}, a good idea is to replace $u^0$ by the solution $u^{0,\eps}$ of the Couette problem: $$ -\Delta u^{0,\eps} + \nabla p^{0,\eps} = 0, \quad \dive u^{0,\eps} = 0, \quad u^{0,\eps}\vert_{\Sigma_0} = \eps u^\infty, \quad u^{0,\eps}\vert_{x_2 = 1} = 0. $$ One then defines: $$ u^{\eps} = u^{0,\eps} + \eps (u_{bl}(\cdot/\eps) - u^\infty) + r^\eps \: \mbox{ in $\Omega$}, \quad u^\eps = \eps u_{bl}(\cdot/\eps) \: \mbox{ in $R^\eps$},$$ where $r^\eps$ is a small divergence-free remainder correcting the $O(\exp(-\delta/\eps))$ trace of $u_{bl} - u^\infty$ at $\{x_2 = 1 \}$. However, for technical reasons, the above approximation is not so successful in our context, so that we need to modify it a little. We proceed as follows. Let $N$ be a large constant to be fixed later. We introduce: $$ \Omega^{\eps}_N := \Omega^\eps \cap \{x_2 > N \eps |\ln \eps| \}, \quad   \Omega^{\eps}_{0,N} := \Omega^\eps \cap \{ 0 < x_2 < N \eps |\ln \eps| \}, \quad \mbox{and} \quad \Sigma_N := \Pi \times \{ x_2 = N \eps |\ln \eps| \}. $$ First, we consider the solution $u^{0,\eps}$ of \begin{equation} \label{u0eps} \left\{ \begin{aligned} - \dive S(D u^{0,\eps}) + \nabla p^{0,\eps} & = e_1, \quad x \in \Omega^\eps_N, \\ \dive u^{0,\eps} & = 0, \quad x \in \Omega^\eps_N, \\ u^{0,\eps}\vert_{\Sigma_N} & = \left( x \rightarrow \left( \begin{smallmatrix} U'(0) x_2 \\ 0 \end{smallmatrix} \right) + \eps u^\infty \right)\vert_{\Sigma_N}, \\ u^{0,\eps}\vert_{\{ x_2 = 1 \}} & = 0. \end{aligned} \right. \end{equation} As for the generalized Poiseuille flow, the pressure $p^{0,\eps}$ is zero, and one has an explicit expression for $u^{0,\eps} = (U^\eps(x_2),0)$. 
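For the reader's convenience, let us sketch where this expression comes from, with the model law $S(A) = |A|^{p-2} A$ and the convention $D u = \frac12 (\nabla u + \nabla u^T)$, so that $|D u^{0,\eps}| = |U^{\eps}{}'|/\sqrt{2}$ for such a shear flow. The first component of the momentum equation reduces to the ODE
$$ - \frac{d}{dx_2} \left[ \left( \frac{|U^{\eps}{}'|}{\sqrt{2}} \right)^{p-2} \frac{U^{\eps}{}'}{2} \right] = 1, $$
which integrates into $\frac{1}{\sqrt{2}} \left( \frac{|U^{\eps}{}'|}{\sqrt{2}} \right)^{p-1} \mathrm{sgn}\, U^{\eps}{}' = c - x_2$ for some constant $c$. Inverting this relation (recall that $\frac{1}{p-1} = p'-1$) gives
$$ U^{\eps}{}'(x_2) = (\sqrt{2})^{p'} \, |c - x_2|^{p'-1} \, \mathrm{sgn}(c - x_2), $$
and one more integration yields the expression \eqref{explicit1}, with $c = \frac12 + \alpha(\eps)$; the constants $\alpha(\eps)$ and $\beta(\eps)$ are then fixed by the two boundary conditions in \eqref{u0eps}.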
In particular, one can check that \begin{equation} \label{explicit1} U^\eps(x_2) = \beta(\eps) - \frac{(\sqrt{2})^{p'}}{p'} \left| \frac{1}{2} + \alpha(\eps) -x_2 \right|^{p'}, \end{equation} where $\alpha(\eps)$ satisfies the equation ($x_{2,N} := N \eps |\ln \eps|$) \begin{equation} \label{explicit2} -\frac{1}{p'}(\sqrt{2})^{p'} \left( \left| \frac{1}{2} + \alpha(\eps) - x_{2,N} \right|^{p'} - \left| \frac{1}{2} - \alpha(\eps) \right|^{p'} \right) = U'(0) x_{2,N} + \eps U^\infty \end{equation} and $$ \beta(\eps) = \frac{(\sqrt{2})^{p'}}{p'} \left| \frac{1}{2} - \alpha(\eps)\right|^{p'} . $$ By a Taylor expansion, we find that \begin{equation} \label{explicit3} \alpha(\eps) = - \sqrt{2}^{p'-4} \eps U^\infty + O(\eps^2 |\ln \eps|^2). \end{equation} This will be used later. Then, we consider the Bogovskii problem \begin{equation} \label{Bogov} \left\{ \begin{aligned} \dive r^\eps & = 0 \quad \mbox{in} \: \Omega^\eps_{N}, \\ r^\eps\vert_{\Sigma_N} & = \eps (u_{bl}(\cdot/\eps) - u^\infty)\vert_{\Sigma_N}, \\ r^\eps\vert_{\{x_2 = 1\}} & = 0. \end{aligned} \right. \end{equation} Since $u^\infty = (U^\infty,0)$, note that $$ \int_{\Sigma_N} \eps (u_{bl}(\cdot/\eps) - u^\infty) \cdot e_2 = \int_{\Omega^\eps_{0,N} \cup \overline{R^\eps}} {\rm div}_y u_{bl}(\cdot/\eps) = 0. $$ Hence, the compatibility condition for solvability of \eqref{Bogov} is fulfilled: there exists a solution $r^\eps$ satisfying $$ \| r^\eps \|_{W^{1,p}(\Omega^\eps_N)} \le C \eps \| u_{bl}(\cdot/\eps) - u^\infty \|_{W^{1-\frac{1}{p},p}(\Sigma_N)}. $$ Using the first estimate of Corollary \ref{higherorder}, we find \begin{equation} \label{estimreps} \| r^\eps \|_{W^{1,p}(\Omega^\eps_N)} \le C \eps^{\frac{1}{p}} \exp(-\delta N |\ln \eps|). 
\end{equation} Finally, we define the approximation $(u^\eps_{app}, p^\eps_{app})$ by the formula \begin{equation} \label{uepsapp} u^\eps_{app}(x) = \left\{ \begin{aligned} & u^{0,\eps}(x) + r^\eps(x), \quad x \in \Omega^\eps_N ,\\ & \left( \begin{smallmatrix} U'(0)x_2 \\ 0 \end{smallmatrix} \right) + \eps u_{bl}(x/\eps), \quad x \in \Omega^\eps_{0,N}, \\ & \eps u_{bl}(x/\eps), \quad x \in R^\eps , \end{aligned} \right. \end{equation} while \begin{equation} p^\eps_{app}(x) = \left\{ \begin{aligned} & 0, \quad x \in \Omega^\eps_N ,\\ & p_{bl}(x/\eps), \quad x \in \Omega^\eps_{0,N} \cup R^\eps. \end{aligned} \right. \end{equation} With such a choice: $$u^\eps_{app}\vert_{\partial \Omega^\eps} = 0, \quad \dive u^\eps_{app} = 0 \quad \mbox{over $\Omega^\eps_N \cup \Omega^\eps_{0,N} \cup R^\eps$}.$$ Moreover, $u^\eps_{app}$ has zero jump at the interfaces $\Sigma_0$ and $\Sigma_N$: $$ [u^\eps_{app}]\vert_{\Sigma_0} = 0, \quad [u^\eps_{app}]\vert_{\Sigma_N} = 0. $$ Still, the stress tensor has a jump. More precisely, we find \begin{equation} \label{stressjump} \begin{aligned} \left[S(D u^\eps_{app})n - p^\eps_{app}n\right]\vert_{\Sigma_0} & = 0, \\ \left[S(D u^\eps_{app})n - p^\eps_{app}n\right]\vert_{\Sigma_N} & = \left( S(D u^{0,\eps} + D r^\eps)\vert_{\{x_2 = (N \eps |\ln \eps|)^+\}} - S(A + Du_{bl}(\cdot/\eps))\vert_{\{x_2 = (N \eps |\ln \eps|)^-\}}\right) e_2 \\ & - p_{bl}(\cdot/\eps) \vert_{\{x_2 = (N \eps |\ln \eps|)^-\}} e_2 . \end{aligned} \end{equation} The next step is to obtain error estimates on $u^\eps - u^\eps_{app}$. \subsection{Error estimates} We prove here: \begin{Theorem} {\bf (Error estimates)} \label{thmerror} \begin{itemize} \item For $1 < p \le 2$, there exists $C$ such that $$ \| u^\eps - u^\eps_{app} \|_{W^{1,p}(\Omega^\eps)} \le C (\eps |\ln \eps|)^{1+\frac{1}{p'}} .$$ \item For $p \ge 2$, there exists $C$ such that $$ \| u^\eps - u^\eps_{app} \|_{W^{1,p}(\Omega^\eps)} \le C (\eps |\ln \eps|)^{\frac{1}{p-1}+\frac{1}{p}} . 
$$ \end{itemize} \end{Theorem} \begin{Remark} A more careful treatment would allow one to get rid of the $\ln$ factor in the last estimate ($p \ge 2$). We do not detail this point here, as we prefer to provide a unified treatment. Also, we recall that the shear thinning case ($1 < p \le 2$) has a much broader range of applications. More comments will be made on the estimates in the last paragraph \ref{parag_wall_laws}. \end{Remark} {\em Proof of the theorem.} We write $v^\eps = u^\eps - u^\eps_{app}$, $q^\eps = p^\eps - p^\eps_{app}$. We start from the equation \begin{equation} \label{eq_error} -\dive S(D u^\eps) + \dive S(D u^\eps_{app}) + \nabla q^\eps = e_1 + \dive S(D u^\eps_{app}) - \nabla p^\eps_{app} := F^\eps \end{equation} satisfied in $\Omega^\eps \setminus (\Sigma_0 \cup \Sigma_N)$. A quick computation shows that \begin{equation*} F^\eps = \left\{ \begin{aligned} & \dive \left( S(D u^{0,\eps} + D r^\eps) - S(D u^{0,\eps}) \right), \quad x \in \Omega^\eps_N, \\ & e_1, \quad x \in \Omega^\eps_{0,N} \cup R^\eps. \end{aligned} \right. \end{equation*} Defining $$ \langle F^\eps, v^\eps \rangle := \int_{\Omega^\eps_N} F^\eps \cdot v^\eps + \int_{\Omega^\eps_{0,N}} F^\eps \cdot v^\eps + \int_{R^\eps} F^\eps \cdot v^\eps $$ we get: $$ | \langle F^\eps, v^\eps \rangle | \le \alpha_\eps \| \nabla v^\eps \|_{L^p(\Omega^\eps_N)} + \beta_\eps \| v^\eps \|_{L^p(\Sigma_N)} + \| v^\eps \|_{L^1(\Omega^\eps \setminus \Omega^\eps_N)} $$ where $$ \alpha_\eps := \| S(D u^{0,\eps} + D r^\eps) - S(D u^{0,\eps}) \|_{L^{p'}(\Omega^\eps_N)}, \quad \beta_\eps := \| \left(S(D u^{0,\eps} + D r^\eps) - S(D u^{0,\eps})\right)e_2 \|_{L^{p'}(\Sigma_N)}. 
$$ We then use the inequalities \begin{equation} \label{poincarelike} \begin{aligned} \| v^\eps \|_{L^p(\Sigma_N)} \: \le \: C (\eps |\ln \eps|)^{1/p'} \| \nabla v^\eps \|_{L^p(\Omega^\eps)}, \\ \| v^\eps \|_{L^1(\Omega^\eps \setminus \Omega^\eps_N)} \le C \eps^{\frac{1}{p'}} \| v^\eps \|_{L^p(\Omega^\eps \setminus \Omega^\eps_N)} \: \le \: C \eps^{\frac{1}{p'}} (\eps |\ln \eps|) \| \nabla v^\eps \|_{L^p(\Omega^\eps)} \end{aligned} \end{equation} (see the appendix for similar ones). We end up with \begin{equation} | \langle F^\eps, v^\eps \rangle | \le C \left( \alpha_\eps + \beta_\eps (\eps |\ln \eps|)^{1/p'} + \eps^{\frac{1}{p'}} (\eps |\ln \eps|) \right) \| \nabla v^\eps \|_{L^p(\Omega^\eps)} . \end{equation} Back to \eqref{eq_error}, after multiplication by $v^\eps$ and integration over $\Omega^\eps$, we find: \begin{equation*} \begin{aligned} & \int_{\Omega^\eps} \left( S(D u^\eps) - S(Du^\eps_{app}) \right) :\nabla v^\eps \\ & \le C \left( \alpha_\eps + \beta_\eps (\eps |\ln \eps|)^{1/p'} + \eps^{\frac{1}{p'}} (\eps |\ln \eps|) \right) \| \nabla v^\eps \|_{L^p(\Omega^\eps)} + \int_{\Sigma_N} \left( [S(D u^\eps_{app}) e_2]\vert_{\Sigma_N} \cdot v^\eps - [p^\eps_{app}]\vert_{\Sigma_N} v^\eps_2 \right). \end{aligned} \end{equation*} Let $p^{\eps,N}$ be a constant to be fixed later. As $v^\eps$ is divergence-free and zero at $\Gamma^\eps$, its flux through $\Sigma_N$ is zero: $\int_{\Sigma_N} v^\eps_2 = 0$. Hence, we can add $p^{\eps,N}$ to the pressure jump $[p^\eps_{app}]\vert_{\Sigma_N}$ without changing the surface integral. 
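For clarity, the vanishing flux invoked here is just the divergence theorem applied to the part of the channel below $\Sigma_N$: since $v^\eps$ is divergence-free, vanishes on $\Gamma^\eps$, and is periodic in $x_1$,
$$ \int_{\Sigma_N} v^\eps_2 \, {\rm d}S \: = \: \int_{\Omega^\eps \setminus \Omega^\eps_N} \dive v^\eps \: = \: 0. $$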
We get: \begin{equation} \label{final_estimate} \begin{aligned} & \int_{\Omega^\eps} \left( S(D u^\eps) - S(Du^\eps_{app}) \right) : \nabla v^\eps \\ & \le C \left( \alpha_\eps + \beta_\eps (\eps |\ln \eps|)^{1/p'} + \eps^{\frac{1}{p'}} (\eps |\ln \eps|) \right) \| \nabla v^\eps \|_{L^p(\Omega^\eps)} + \int_{\Sigma_N} \left( [S(D u^\eps_{app}) e_2]\vert_{\Sigma_N} \cdot v^\eps - ([p^\eps_{app}]\vert_{\Sigma_N} - p^{\eps,N}) v^\eps_2 \right) \\ & \le C \left( \alpha_\eps + \beta_\eps (\eps |\ln \eps|)^{1/p'} + \eps^{\frac{1}{p'}} (\eps |\ln \eps|) \right) \| \nabla v^\eps \|_{L^p(\Omega^\eps)} + \gamma_\eps \| v^\eps \|_{L^p(\Sigma_N)} \\ & \le C \left( \alpha_\eps + (\beta_\eps + \gamma_\eps) (\eps |\ln \eps|)^{1/p'} + \eps^{\frac{1}{p'}} (\eps |\ln \eps|) \right) \| \nabla v^\eps \|_{L^p(\Omega^\eps)} , \end{aligned} \end{equation} where $$ \gamma_\eps := \| [S(D u^\eps_{app}) e_2]\vert_{\Sigma_N} - ([p^\eps_{app}]\vert_{\Sigma_N} - p^{\eps,N}) e_2 \|_{L^{p'}(\Sigma_N)}. $$ Note that we used again the first bound in \eqref{poincarelike} to go from the third to the fourth inequality. \begin{Lemma} \label{bounds} For $N$ large enough, and a good choice of $p^{\eps,N}$, there exists $C = C(N)$ such that $$ \alpha_\eps \le C \eps^{10}, \quad \beta_\eps \le C \eps^{10}, \quad \gamma_\eps \le C \eps |\ln \eps|. $$ \end{Lemma} Let us temporarily admit this lemma. Then, we can conclude the proof of the error estimates: \begin{itemize} \item In the case $1 < p \le 2$, we rely on the inequality established in \cite[Proposition~5.2]{GM1975}: for all $p \in (1,2]$, there exists $c$ such that for all $u,u' \in W_0^{1,p}(\Omega^\eps)$ $$ \int_{\Omega^\eps} \left( S(D u) - S(D u') \right) \cdot \nabla (u - u') \ge c \frac{\| Du - Du' \|^2_{L^p(\Omega^\eps)}}{(\| D u \|_{L^p(\Omega^\eps)} + \| D u' \|_{L^p(\Omega^\eps)})^{2-p}} $$ We use this inequality with $u = u^\eps$, $u' = u^\eps_{app}$. 
With the estimate \eqref{basic_estimate} and the Korn inequality in mind, we obtain $$ \int_{\Omega^\eps} \left( S(D u^\eps) - S(Du^\eps_{app}) \right) \cdot \nabla v^\eps \ge c \| \nabla v^\eps \|_{L^p}^2. $$ Combining this lower bound with the upper bounds on $\alpha_\eps, \beta_\eps, \gamma_\eps$ given by the lemma, we deduce from \eqref{final_estimate} the first error estimate in Theorem \ref{thmerror}. \item In the case $2 \le p$, we use the easier inequality $$ \int_{\Omega^\eps} \left( S(D u) - S(D u') \right) \cdot \nabla (u - u') \ge c \| D u - D u' \|_{L^p(\Omega^\eps)}^p, $$ so that $$ \int_{\Omega^\eps} \left( S(D u^\eps) - S(Du^\eps_{app}) \right) \cdot \nabla v^\eps \ge c \| \nabla v^\eps \|_{L^p(\Omega^\eps)}^p. $$ The second error estimate from Theorem \ref{thmerror} follows. \end{itemize} The final step is to establish the bounds of Lemma \ref{bounds}. {\em Bound on $\alpha_\eps$ and $\beta_\eps$}. From Corollary \ref{higherorder} and the trace theorem, we deduce that \begin{equation} \label{traceubl} \| u_{bl}(\cdot/\eps) - u^\infty \|_{W^{1+s-\frac{1}{q},q}(\{ x_2 = t \})} \le C \eps^{\frac{1}{q}-s-1}\exp(-\delta t/\eps) \end{equation} for some $s < \alpha$ (where $\alpha \in (0,1)$) and any $q > \frac{1}{s}$. Let $q > \max(p', \frac{2}{s})$. The solution $r^\eps$ of \eqref{Bogov} satisfies: $r^\eps \in W^{1+s,q}(\Omega^\eps_N)$ with \begin{equation*} \| r^\eps \|_{W^{1+s,q}(\Omega^\eps_N)} \le C \eps^{\frac{1}{q}-s} \exp(-N\delta|\ln \eps|) \end{equation*} so that by Sobolev imbedding \begin{equation} \label{estimreps2} \| D r^\eps \|_{L^\infty(\Sigma_N)} + \| D r^\eps \|_{L^q(\Sigma_N)} + \| D r^\eps \|_{L^\infty(\Omega^\eps_N)} \le C \| D r^\eps \|_{W^{s,q}(\Omega^\eps_N)} \le C \eps^{\frac{1}{q}-s} \exp(-N\delta|\ln \eps|) . \end{equation} This last inequality allows us to evaluate $\beta_\eps$. Indeed, for $x \in \Sigma_N$, $C \ge |Du^{0,\eps}(x)| \ge c > 0$ uniformly in $x$. 
We can then use the upper bound \eqref{ineq3} for $p < 2$, or \eqref{ineq3duo} for $p \ge 2$, to obtain \begin{equation}\label{betaep} \beta_\eps \le C \| D r^\eps \|_{L^{p'}(\Sigma_N)} \le C \| D r^\eps \|_{L^q(\Sigma_N)} \le C' \eps^{\frac{1}{q}-s} \exp(-N\delta|\ln \eps|) \le C' \eps^{10}\end{equation} for $N$ large enough. To treat $\alpha_\eps$, we still have to pay attention to the vanishing of $D u^{0,\eps}$. Indeed, from the explicit expression of $u^{0,\eps}$, we know that there is some $x_2(\eps) \sim \frac{1}{2}$ at which $D u^{0,\eps}\vert_{x_2 = x_2(\eps)} = 0$. Namely, we write \begin{align*} & \int_{\Omega^\eps_N} |S(D u^{0,\eps} + D r^\eps) - S(D u^{0,\eps})|^{p'} \\ & = \int_{\{x \in \Omega^\eps_N,\, | x_2 - x_2(\eps) | \le \eps^{10 p'}\}} |S(D u^{0,\eps} + D r^\eps) - S(D u^{0,\eps})|^{p'} + \int_{\{x\in \Omega^\eps_N,\, | x_2 - x_2(\eps) | \ge \eps^{10 p'}\}} |S(D u^{0,\eps} + D r^\eps) - S(D u^{0,\eps})|^{p'} \\ & := I_1 + I_2. \end{align*} The first integral is bounded by $$ I_1 \le C \int_{\{x\in \Omega^\eps_N,\, | x_2 - x_2(\eps) | \le \eps^{10 p'}\}} | D u^{0,\eps} |^p + | D r^\eps |^p \le C \eps^{10 p'}, $$ where we have used the uniform bound satisfied by $D u^{0,\eps}$ and $D r^\eps$ over $\Omega^\eps_N$, see \eqref{estimreps2}. For the second integral, we can distinguish between $p < 2$ and $p \ge 2$. For $p < 2$, see \eqref{ineq3} and its proof, we get \begin{equation}\label{I_2} I_2 \le C \int_{\{x\in \Omega^\eps_N,\, | x_2 - x_2(\eps) | \ge \eps^{10 p'}\}} |D u^{0,\eps}|^{(p-2)p'} |D r^\eps|^{p'} \le C' \eps^{-M} \exp(-\delta' N|\ln \eps|) \end{equation} for some $M, C', \delta' > 0$, see \eqref{estimreps2}. In the case $p \ge 2$, as $D u^{0,\eps}$ and $D r^\eps$ are uniformly bounded, we derive a similar inequality by \eqref{ineq3duo}. In both cases, taking $N$ large enough, we obtain $I_2 \le C'' \eps^{10 p'}$, to end up with $\alpha_\eps \le C \eps^{10}$. {\em Bound on $\gamma_\eps$}. 
We have \begin{align*} \gamma_\eps & \le \| \left(S(D u^{0,\eps} + D r^\eps) - S(D u^{0,\eps})\right) e_2 \|_{L^{p'}(\Sigma_N)} + \| \left(S(D u^{0,\eps}) - S(A)\right) e_2 \|_{L^{p'}(\Sigma_N)} \\ & + \| (S(A) - S(A + D u_{bl}(\cdot/\eps))) e_2 \|_{L^{p'}(\Sigma_N)} + \| p_{bl}(\cdot/\eps) - p^{\eps,N} \|_{L^{p'}(\Sigma_N)} . \end{align*} The first term is $\beta_\eps$, so $O(\eps^{10})$ by previous calculations. The third term can be treated similarly to $\beta_\eps$. As $A \neq 0$, \eqref{ineq3} implies that \begin{equation}\label{I_3bis} \| (S(A) - S(A + D u_{bl}(\cdot/\eps))) e_2 \|_{L^{p'}(\Sigma_N)} \le C \| D u_{bl}(\cdot/\eps) \|_{L^{p'}(\Sigma_N)} \le C' \exp(-\delta' N |\ln \eps|) , \end{equation} where the last inequality can be deduced from \eqref{traceubl}. It is again $O(\eps^{10})$ for $N$ large enough. For the second term of the right-hand side, we rely on the explicit expression of $u^{0,\eps}$. On the basis of \eqref{explicit1}-\eqref{explicit3}, we find that $$ D(u^{0,\eps})\vert_{\Sigma_N} = A + O(\eps |\ln \eps|) $$ resulting in $$ \| \left(S(D u^{0,\eps}) - S(A)\right) e_2 \|_{L^{p'}(\Sigma_N)} \le C \eps |\ln \eps|. $$ Finally, to handle the pressure term, we use the second estimate of Corollary \ref{higherorder}, which implies $$ \| p_{bl} - p^t \|_{L^q(\{ y_2 = t\})} \: \le \: C \exp(-\delta t) \quad \mbox{for some constant $p^t$}.$$ We take $t = N |\ln \eps|$ and $p^{\eps,N} = p^t$ to get $$ \| p_{bl}(\cdot/\eps) - p^{\eps,N} \|_{L^{p'}(\Sigma_N)} \le C' \exp(-\delta' N |\ln \eps|). $$ Taking $N$ large enough, we can make this term negligible, say $O(\eps^{10})$. Gathering all contributions, we obtain $ \gamma_\eps \le C \eps |\ln \eps|$ as stated. \subsection{Comment on possible wall laws} \label{parag_wall_laws} On the basis of the previous error estimates, we can now discuss the appropriate wall laws for a non-Newtonian flow above a rough wall. We focus here again on the shear thinning case ($1 < p \le 2$). 
We first notice that the field $u^\eps_{app}$ (see \eqref{uepsapp}) involves in a crucial way the solution $u^{0,\eps}$ of \eqref{u0eps}. Indeed, we know from \eqref{estimreps} that the contribution of $r^\eps$ in $W^{1,p}(\Omega^\eps_N)$ is very small for $N$ large enough. Hence, the error estimate of Theorem \ref{thmerror} implies that $$ \| u^\eps - u^{0,\eps} \|_{W^{1,p}(\Omega^\eps_N)} = O((\eps |\ln \eps|)^{1+\frac{1}{p'}}) . $$ In other words, away from the boundary layer, $u^\eps$ is well approximated by $u^{0,\eps}$, with a power of $\eps$ strictly bigger than $1$. Although such an estimate is unlikely to be optimal, it is enough to emphasize the role of the boundary layer tail $u^\infty$. Namely, the addition of the term $\eps u^\infty$ in the Dirichlet condition for $u^{0,\eps}$ (see the third line of \eqref{u0eps}) allows one to go beyond an $O(\eps)$ error estimate. {\it A contrario}, the generalized Poiseuille flow $u^0$ leads to an $O(\eps)$ error only (away from the boundary layer). Notably, \begin{equation} \label{estimu0} \| u^\eps - u^0 \|_{W^{1,p}(\Omega^\eps_N)} \ge \| u^{0,\eps} - u^0 \|_{W^{1,p}(\Omega^\eps_N)} - \| u^\eps - u^{0,\eps} \|_{W^{1,p}(\Omega^\eps_N)} \ge c \eps - o(\eps) \ge c' \eps , \end{equation} where the lower bound for $u^{0,\eps} - u^0$ is obtained using the explicit expressions. Let us further notice that instead of considering $u^{0,\eps}$, we could consider the solution $u^0_\eps$ of \begin{equation} \label{u0epsbis} \left\{ \begin{aligned} - \dive S(D u^0_\eps) + \nabla p^0_\eps & = e_1, \quad x \in \Omega^\eps_N, \\ \dive u^0_\eps & = 0, \quad x \in \Omega^\eps_N, \\ u^0_\eps \vert_{\Sigma_0} & = \eps u^\infty, \\ u^0_\eps\vert_{\{ x_2 = 1 \}} & = 0. \end{aligned} \right. 
\end{equation} It reads $u^0_\eps = (U_\eps, 0)$ with $$ U_\eps(x_2) = \beta'(\eps) - \frac{(\sqrt{2})^{p'}}{p'} \left| \frac{1}{2} + \alpha'(\eps) -x_2 \right|^{p'} $$ for $\alpha'$ and $\beta'$ satisfying $$ -\frac{1}{p'}(\sqrt{2})^{p'} \left( \left| \frac{1}{2} + \alpha'(\eps) \right|^{p'} - \left| \frac{1}{2} - \alpha'(\eps) \right|^{p'} \right) = \eps u^{\infty}_1 \quad \mbox{ and } \quad \beta'(\eps) = \frac{(\sqrt{2})^{p'}}{p'} \left| \frac{1}{2} - \alpha'(\eps)\right|^{p'}. $$ We can directly compare these expressions with \eqref{explicit1}-\eqref{explicit2} and deduce that $$ \| u^{0,\eps} - u^0_\eps \|_{W^{1,p}(\Omega^\eps_N)} = O(\eps |\ln \eps|), $$ which in turn implies that \begin{equation} \label{estimu0eps} \| u^\eps - u^0_\eps \|_{W^{1,p}(\Omega^\eps_N)} = O(\eps |\ln \eps|). \end{equation} Hence, in view of \eqref{estimu0} and \eqref{estimu0eps}, we distinguish between two approximations (outside the boundary layer): \begin{itemize} \item A crude approximation, involving the generalized Poiseuille flow $u^0$. \item A refined approximation, involving $u^0_\eps$. \end{itemize} The first choice corresponds to the Dirichlet wall law $u\vert_{\Sigma_0} = 0$, and neglects the role of the roughness. The second choice takes it into account through the inhomogeneous Dirichlet condition: $u\vert_{\Sigma_0} = \eps u^\infty = \eps (U^\infty,0)$. Note that this last boundary condition can be expressed as a wall law, although a slightly abstract one. Indeed, $U^\infty$ can be seen as a function of the tangential shear $(D(u^0)n)_\tau\vert_{\Sigma_0} = \pa_2 u^0_1\vert_{\Sigma_0} = U'(0)$, through the mapping $$ U'(0) \: \rightarrow \: A := \left( \begin{smallmatrix} 0 & U'(0) \\ U'(0) & 0 \end{smallmatrix} \right) \: \rightarrow \: u_{bl} \:\: \mbox{solution of \eqref{BL1}-\eqref{BL2}} \: \rightarrow \: U^\infty = \lim_{y_2 \rightarrow +\infty} u_{bl,1}. 
$$ Denoting by ${\cal F}$ this map, we write $$(u^0_\eps)_\tau \vert_{\Sigma_0} = \eps {\cal F}((D(u^0)n)_\tau\vert_{\Sigma_0}) \approx \eps {\cal F}((D(u^0_\eps)n)_\tau\vert_{\Sigma_0})$$ whereas $ (u^0_\eps)_n = 0$. This provides the following refined wall law: $$ u_n\vert_{\Sigma_0} = 0, \quad u_\tau\vert_{\Sigma_0} = \eps {\cal F}\bigl((D(u) n)_\tau\vert_{\Sigma_0}\bigr). $$ This wall law generalizes the Navier wall law derived in the Newtonian case, where ${\cal F}$ is simply linear. Of course, it is not very explicit as it involves the nonlinear system \eqref{BL1}-\eqref{BL2}. Further studies will be necessary to obtain qualitative properties of the function ${\cal F}$, leading to a more effective boundary condition. {\bf Acknowledgements: } The work of AWK is partially supported by Grant of National Science Center Sonata, No 2013/09/D/ST1/03692. \section{Appendix : A few functional inequalities} \begin{Proposition} {\bf (Korn inequality)} Let $S_a := \T \times (a, a+1)$, $a \in \R$. For all $1 < p < +\infty$, there exists $C > 0$ such that: for all $a \in \R$, for all $u \in W^{1,p}(S_a)$, \begin{equation} \label{Korn1} \| \nabla u \|_{L^p(S_a)} \: \le \: C \| D u \|_{L^p(S_a)}. \end{equation} \end{Proposition} {\em Proof.} Without loss of generality, we can show the inequality for $a = 0$: the independence of the constant $C$ with respect to $a$ follows from translation invariance. Let us point out that the key point of the proposition is that the inequality is homogeneous. Indeed, it is well known that the inhomogeneous Korn inequality \begin{equation} \label{Korn2} \| \nabla u \|_{L^p(S_0)} \: \le \: C' \left( \| D u \|_{L^p(S_0)} + \| u \|_{L^p(S_0)}\right) \end{equation} holds. To prove the homogeneous one, we argue by reductio ad absurdum: if \eqref{Korn1} fails, there exists a sequence $u_n$ in $W^{1,p}(S_0)$ such that \begin{equation} \label{absurd} \| \nabla u_n \|_{L^p(S_0)} \: \ge \: n \| D u_n \|_{L^p(S_0)}. 
\end{equation} Replacing $u_n$ by $u'_n := (u_n - \int_{S_0} u_n)/\| u_n - \int_{S_0} u_n \|_{L^p}$ if necessary, we can further assume that $$ \| u_n \|_{L^p} = 1, \quad \int_{S_0} u_n = 0.$$ Combining \eqref{Korn2} and \eqref{absurd}, we deduce that $1 \ge \frac{n - C'}{C'} \| D(u_n) \|_{L^p}$ which shows that $D(u_n)$ converges to zero in $L^p$. Using again \eqref{Korn2}, we infer that $(u_n)$ is bounded in $W^{1,p}$, so that up to a subsequence it converges weakly to some $u \in W^{1,p}$, with strong convergence in $L^p$ by the Rellich theorem. We have in particular \begin{equation} \label{contradiction} \| u \|_{L^p} = \lim_n \| u_n \|_{L^p} = 1, \quad \int_{S_0} u = \lim_n \int_{S_0} u_n = 0. \end{equation} Moreover, as $D(u_n)$ goes to zero, we get $D(u) = 0$. This implies that $u$ is a rigid motion (we are in dimension 2), hence a constant by periodicity in $y_1$, which makes the two statements of \eqref{contradiction} contradictory. \begin{Corollary} Let $H_a := \T \times (a, + \infty)$. For all $1 < p < +\infty$, there exists $C > 0$ such that: for all $a \in \R$, for all $u \in W^{1,p}(H_a)$, \begin{equation*} \| \nabla u \|_{L^p(H_a)} \: \le \: C \| D u \|_{L^p(H_a)}. \end{equation*} \end{Corollary} {\em Proof.} From the previous inequality, we get for all $n \in \N$: $$ \int_{S_{a+n}} | \nabla u |^p \: \le C \: \int_{S_{a+n}} | D u |^p. $$ The result follows by summing over $n$. \begin{Corollary} Let $1 < p < +\infty$. There exists $C > 0$, such that for all $u \in W^{1,p}(\Omega_{bl}^-)$, resp. $u \in W^{1,p}(\Omega_{bl})$, satisfying $u\vert_{\Gamma_{bl}} = 0$, one has $$ \| \nabla u \|_{L^p(\Omega_{bl}^-)} \le C \|  D u \|_{L^p(\Omega_{bl}^-)}, \quad \mbox{resp.} \: \| \nabla u \|_{L^p(\Omega_{bl})} \le C \|  D u \|_{L^p(\Omega_{bl})}. $$ \end{Corollary} {\em Proof}. One can extend $u$ by $0$ for all $y$ with $-1 < y_2 < \gamma(y_1)$, and apply the previous inequality on $S_{-1}$, resp. $H_{-1}$. \begin{Proposition}[Rescaled trace and Poincar\'e inequalities]\label{rescaledTracePoincare} Let $\varphi \in W^{1,p}(R^\ep)$. 
We have \begin{equation}\label{IQ1} \| \varphi \|_{L^{p}(\Sigma)} \leq C \ep^{\frac{1}{p'}} \| \nabla_x \varphi \|_{L^p(R^\ep)} , \end{equation} \begin{equation}\label{IQ2} \| \varphi \|_{L^p(R^\ep)} \leq C \ep \| \nabla_x \varphi \|_{L^p(R^\ep)} . \end{equation} \end{Proposition} {\em Proof.} Let $\tilde\varphi(y) = \varphi(\ep y)$, where $y\in S_k = S + (k,-1)$ (a single rescaled cell of the rough layer). Then $\tilde \varphi \in W^{1,p}(S_k)$ for all $k\in \N$, and $\tilde\varphi = 0 $ on $\Gamma$. By the trace theorem and the Poincar\'e inequality: for all $p \in [1,\infty )$ $$\int_{S_k \cap \{y_2 = 0 \}} | \tilde\varphi(\bar{y},0)|^p {\rm\,d}\bar{y} \leq C \int_{S_k} |\nabla_y \tilde \varphi |^p {\rm\,d}y .$$ A change of variables provides $$\int_{\ep S_k \cap \{x_2 = 0 \}} | \varphi(\bar{x},0)|^p \ep^{-1} {\rm\,d}\bar{x} \leq C \int_{\ep S_k} \ep^p |\nabla_x \varphi(x) |^p \ep^{-2} {\rm\,d}x .$$ Summing over $k$, we obtain $$\left( \int_{\Sigma} | \varphi(\bar{x},0) |^p {\rm\, d} \bar{x} \right)^{\frac{1}{p}} \leq C \ep^{\frac{p-1}{p}} \left( \int_{R^\ep} | \nabla_{x} \varphi (x) |^p \dx \right)^{\frac{1}{p}} $$ and \eqref{IQ1} is proved. The inequality \eqref{IQ2} is proved in the same way, as a consequence of the (one-dimensional) Poincar\'e inequality. $\Box $ \end{document}
\begin{document} \title{ Extreme violation of local realism in quantum hypergraph states} \date{\today} \author{Mariami Gachechiladze} \author{Costantino Budroni} \author{Otfried G\"uhne} \affiliation{Naturwissenschaftlich-Technische Fakult\"at, Universit\"at Siegen, Walter-Flex-Str. 3, 57068 Siegen, Germany} \begin{abstract} Hypergraph states form a family of multiparticle quantum states that generalizes the well-known concept of Greenberger-Horne-Zeilinger states, cluster states, and more broadly graph states. We study the nonlocal properties of quantum hypergraph states. We demonstrate that the correlations in hypergraph states can be used to derive various types of nonlocality proofs, including Hardy-type arguments and Bell inequalities for genuine multiparticle nonlocality. Moreover, we show that hypergraph states allow for an exponentially increasing violation of local realism which is robust against loss of particles. Our results suggest that certain classes of hypergraph states are novel resources for quantum metrology and measurement-based quantum computation. \end{abstract} \pacs{03.65.Ta, 03.65.Ud} \maketitle {\it Introduction.---} Multiparticle entanglement is central for discussions about the foundations of quantum mechanics, protocols in quantum information processing, and experiments in quantum optics. Its characterization has, however, turned out to be difficult. One problem hindering the exploration of multiparticle entanglement is the exponentially increasing dimension of the Hilbert space. This implies that making statements about general quantum states is difficult. So, one has to concentrate on families of multiparticle states with an easier-to-handle description. In fact, symmetries and other kinds of simplifications seem to be essential for a state to be a useful resource. Random states can often be shown to be highly entangled, but useless for quantum information processing \cite{entangleduseful}. 
An outstanding class of useful multiparticle quantum states is given by the family of graph states \cite{hein}, which includes the Greenberger-Horne-Zeilinger (GHZ) states and the cluster states as prominent examples. Physically, these states have turned out to be relevant resources for quantum metrology, quantum error correction, or measurement-based quantum computation \cite{hein}. Mathematically, these states are elegantly given by graphs, which describe the correlations and also a possible interaction structure leading to the graph state. In addition, graph states can be defined via a so-called stabilizer formalism: A graph state is the unique eigenstate of a set of commuting observables, which are local in the sense that they are tensor products of Pauli measurements. These stabilizer observables are important for easily computing correlations leading to violations of Bell inequalities \cite{Mermin, gthb}, as well as designing simple schemes to characterize graph states experimentally \cite{gtreview}. Recently, this family of states has been generalized to hypergraph states \cite{Kruszynska2009, Qu2013_encoding, Rossi2013, Otfried, chenlei, lyons}. These states have been recognized as special cases of the so-called locally maximally entangleable (LME) states \cite{Kruszynska2009}. Mathematically, they are described by hypergraphs, a generalization of graphs, where a single hyperedge can connect more than two vertices. They can also be described by a stabilizer formalism, but this time, the stabilizing operators are not local. So far, hypergraph states have turned out to play a role for search algorithms in quantum computing \cite{scripta}, quantum fingerprinting protocols \cite{mora}, and they have been shown to be complex enough to serve as witnesses in all QMA problems \cite{qma}. They have recently been investigated in condensed matter physics as ground states of spin models with interesting topological properties \cite{Yoshida, Akimasa}. 
In addition, equivalence classes and further entanglement properties of hypergraph states have been studied \cite{Otfried}. \begin{figure}\caption{Examples of hypergraphs; in panel (a), three vertices are joined by a single hyperedge.}\label{fig-hgbild} \end{figure} In this paper we show that hypergraph states violate local realism in an extreme manner, but in a way that is robust against loss of particles. We demonstrate that this leads to applications of these states in quantum metrology and quantum computation. We see that the stabilizer formalism describing hypergraph states, despite being nonlocal, can be used to derive Hardy-type nonlocality arguments \cite{Hardy92}, Bell inequalities for genuine multiparticle entanglement \cite{Svetlichny87}, or a violation of local realism with a strength exponentially increasing with the number of particles. Our approach starts precisely with the properties of the stabilizer, in order to identify the useful correlations provided by quantum mechanics. This is in contrast to previous approaches that were either too general, e.g., Bell inequalities for general multiparticle states \cite{popescurohrlich,wwzb}, or too restricted, considering only a few specific examples of hypergraph states and leading to non-robust criteria \cite{Otfried}. The violation of local realism is the key to further applications in information processing: Indeed, it is well known that the violation of a Bell inequality leads to advantages in distributed computation scenarios \cite{brunnerreview, brukner}. In addition, we will explicitly show that certain classes of hypergraph states lead to Heisenberg scaling in quantum metrology and advantages in measurement-based quantum computation. {\it Hypergraph states.---} A hypergraph $H=(V,E)$ consists of a set of vertices $V=\{1,...,N\}$ and a set of hyperedges $E\subset 2^V$, with $2^V$ the power set of $V$. While for graphs edges connect only two vertices, hyperedges can connect more than two vertices; examples of hypergraphs are depicted in Fig.~\ref{fig-hgbild}.
For any hypergraph we define the corresponding hypergraph state $\ket{H}$ as the $N$-qubit state \begin{equation} \ket{H}=\prod_{e\in E} C_e\ket{+}^{\otimes N}, \label{eq-hg-creation} \end{equation} where $\ket{+}=(\ket{0}+\ket{1})/\sqrt{2}$, $e$ is a hyperedge and $C_e$ is a multi-qubit phase gate acting on the Hilbert space associated with the vertices $v\in e$, given by the matrix $C_e =\mathbbm {1} - 2\ket{1\dots 1}\bra{1\dots 1}$. The first nontrivial hypergraph state consists of $N=3$ qubits connected by a single hyperedge [see Fig.~\ref{fig-hgbild}(a)]. Hypergraph states have been recognized as special cases of LME states, generated via a fixed interaction phase of $\phi=\pi$ \cite{Kruszynska2009}. Alternatively, we can define the hypergraph states using a stabilizer formalism \cite{Otfried}. For each qubit $i$ we define the operator \begin{equation} g_i=X_i\bigotimes_{e\in E} C_{e\backslash \{i\}}. \label{eq-hg-stabilizer} \end{equation} Here and in what follows, we denote by $X_i$ and $Z_i$ the Pauli matrices acting on the $i$-th qubit. The hypergraph state can be defined as the unique common eigenstate of all of them, $g_i \ket{H}=\ket{H}$ with the eigenvalue $+1$. Consequently, the hypergraph state is an eigenstate of the entire stabilizer, i.e., the commutative group formed by all the products of the $g_i$. It should be noted that the $g_i$ are, in general, nonlocal operators, as they are not tensor products of operators acting on single parties. We say that a hyperedge has \textit{cardinality} $k$ if it contains $k$ vertices, and a hypergraph is \textit{$k$-uniform} if all of its hyperedges have cardinality $k$. Finally, note that different hypergraphs may lead to equivalent hypergraph states, in the sense that the two states can be converted into one another by a local basis change. For small numbers of qubits, the resulting equivalence classes have been identified \cite{Otfried}.
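As an illustration (our addition, not part of the paper), the defining property $g_i\ket{H}=\ket{H}$ can be checked numerically for the three-qubit state with the single hyperedge $e=\{1,2,3\}$. The tensor encoding and the zero-based qubit labels below are implementation choices.

```python
import itertools
import numpy as np

# |H3> = C_{123}|+++> as a rank-3 tensor; qubit i <-> tensor axis i (0-based)
n = 3
psi = np.full((2,) * n, 2 ** (-n / 2))
psi[1, 1, 1] *= -1                     # C_e flips the sign of |111>

def apply_C(state, edge):
    """Multi-qubit phase gate C_e on the given set of axes."""
    out = state.copy()
    for idx in itertools.product([0, 1], repeat=state.ndim):
        if all(idx[v] == 1 for v in edge):
            out[idx] *= -1
    return out

def apply_X(state, i):
    """Pauli X on qubit i (bit flip = axis reversal)."""
    return np.flip(state, axis=i)

# g_i = X_i C_{e\{i}}: every stabilizing operator fixes |H3>
for i in range(3):
    rest = tuple(j for j in range(3) if j != i)
    assert np.allclose(apply_X(apply_C(psi, rest), i), psi)
```

The check confirms that $\ket{H_3}$ is a $+1$ eigenstate of all three (nonlocal) stabilizing operators.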
{\it Local correlations from the nonlocal stabilizer.---} The key observation for the construction of our nonlocality arguments is that the stabilizer of hypergraph states, despite being nonlocal, predicts perfect correlations for some local measurements. In the following, we explain this for the three-qubit hypergraph state $\ket{H_3}$, but the method is general. The stabilizing operators for the three-qubit hypergraph state are \begin{equation}\label{1} g_1=X_1\otimes C_{23}, \quad g_2=X_2\otimes C_{13}, \quad g_3=X_3\otimes C_{12}. \end{equation} We can expand the controlled phase gate $C_{ij}$ on two qubits, leading to \begin{equation} \label{phasegate} g_1=X_1\otimes (\ketbra{00}+\ketbra{01}+\ketbra{10}-\ketbra{11}) \end{equation} and similar expressions for the other $g_i$. Since $g_1 \ket{H_3}=+\ket{H_3}$, the outcomes for $X$ measurements on the first qubit and $Z$ measurements on the second and third qubits are correlated: if one measures ``$+$'' on the first qubit, then the other two parties cannot both measure ``$-$'' in the $Z$ direction, as this would produce $-1$ as the overall eigenvalue. So, we extract the first correlation from the stabilizer formalism: \begin{equation} \label{A1} P(+--|XZZ)=0. \end{equation} The l.h.s.\ of Eq.~(\ref{A1}) denotes the probability of measuring $+--$ in $XZZ$ on qubits $1$, $2$, and $3$, respectively. Similarly, it follows that if one measures ``$-$'' in the $X$ direction on the first qubit, then both other parties have to measure ``$-$'' in the $Z$ direction. So we have: \begin{equation} \label{A} P(-++|XZZ)+ P(-+-|XZZ)+P(--+|XZZ)=0, \end{equation} which implies, of course, that each of the probabilities is zero. Since the three-qubit hypergraph state is symmetric, the same correlations for measuring $X$ on the other qubits can be obtained by considering $g_2$ and $g_3$, leading to the permuted versions of the correlations in Eqs.~(\ref{A1}) and (\ref{A}).
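These perfect correlations can be verified by direct computation (a sketch we add here; the basis-vector dictionary and outcome conventions, with ``$+$'' $\leftrightarrow \ket{0}$ and ``$-$'' $\leftrightarrow \ket{1}$ for $Z$, are our own encoding):

```python
import numpy as np

# |H3> amplitudes in the computational basis
psi = np.full((2, 2, 2), 2 ** -1.5)
psi[1, 1, 1] *= -1

# single-qubit measurement eigenvectors: X outcomes |+>,|->; Z outcomes |0>,|1>
VEC = {('X', +1): np.array([1.0, 1.0]) / np.sqrt(2),
       ('X', -1): np.array([1.0, -1.0]) / np.sqrt(2),
       ('Z', +1): np.array([1.0, 0.0]),
       ('Z', -1): np.array([0.0, 1.0])}

def prob(settings, results):
    """P(results | settings) for |H3>, contracting one qubit at a time."""
    amp = psi
    for s, r in zip(settings, results):
        amp = np.tensordot(VEC[(s, r)], amp, axes=1)
    return float(amp ** 2)

assert prob('XZZ', (+1, -1, -1)) < 1e-12             # Eq. (A1)
for r in [(-1, +1, +1), (-1, +1, -1), (-1, -1, +1)]: # Eq. (A)
    assert prob('XZZ', r) < 1e-12
```

All four stabilizer-predicted probabilities vanish exactly, as Eqs.~(\ref{A1}) and (\ref{A}) state.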
{\it The three-qubit hypergraph state $\ket{H_3}$.---} We start with the discussion of fully local hidden variable (HV) models. Such models assign, for any value of the HV $\lambda$, results to all measurements of the parties in a local manner, meaning that the probabilities for a given HV factorize. If we denote by $r_i$ the result and by $s_i$ the measurement setting on the $i$-th particle, respectively, then the probabilities coming from local models are of the form \begin{align} P & (r_1,r_2,r_3 |s_1,s_2,s_3) = \\ = & \int d\lambda p(\lambda) \chi^A(r_1|s_1,\lambda) \chi^B(r_2|s_2,\lambda) \chi^C(r_3|s_3,\lambda) \nonumber. \end{align} For probabilities of this form, it is well known that it suffices to consider models which are, for a given $\lambda$, deterministic. This means that $\chi^i$ takes only the values $0$ or $1$, and there is only a finite set of $\chi^i$ to consider. \noindent {\bf Observation 1.} {\it If a fully local hidden variable model satisfies the conditions from Eqs.~(\ref{A1}) and (\ref{A}) and their symmetric counterparts coming from the permutations, then it must fulfill} \begin{equation} P(+--|XXX)+P(-+-|XXX)+P(--+|XXX) = 0. \end{equation} The proof of this statement is done by exhausting all possible local deterministic assignments. In contrast, for $\ket{H_3}$ we have \begin{align} P(+--|XXX)=\frac{1}{16} \label{eq-h3corr} \end{align} and the same holds for the permutations of the qubits. The above is a so-called Hardy argument \cite{Hardy92}, namely, a set of joint probabilities equal to $0$ (or, equivalently, logical implications) that together imply that some other probability is equal to zero. Our method shows how the correlations of the nonlocal stabilizer can be used for Hardy-type arguments. We recall that Hardy-type arguments have been obtained for all permutation-symmetric states \cite{Wang, Abramsky}.
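Observation 1 can be reproduced by the exhaustion over deterministic assignments mentioned above, and the quantum value in Eq.~(\ref{eq-h3corr}) by direct computation. The sketch below is our addition; a deterministic local model is encoded as a tuple of preassigned outcomes $(x_i, z_i)$ for each qubit.

```python
import itertools
import numpy as np

def lhv_violates():
    """Does any deterministic local model obeying Eqs. (A1), (A) and their
    permutations produce an XXX outcome with exactly one '+' and two '-'?"""
    for a in itertools.product([+1, -1], repeat=6):
        x, z = a[:3], a[3:]
        ok = True
        for i, j, k in [(0, 1, 2), (1, 0, 2), (2, 0, 1)]:
            if x[i] == +1 and z[j] == -1 and z[k] == -1:
                ok = False            # would violate P(+--|XZZ) = 0
            if x[i] == -1 and (z[j], z[k]) != (-1, -1):
                ok = False            # would violate Eq. (A)
        if ok and sorted(x) == [-1, -1, 1]:
            return True
    return False

assert not lhv_violates()             # Observation 1

# quantum value P(+--|XXX) for |H3>
psi = np.full((2, 2, 2), 2 ** -1.5)
psi[1, 1, 1] *= -1
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
amp = np.einsum('a,b,c,abc->', plus, minus, minus, psi)
assert abs(amp ** 2 - 1 / 16) < 1e-12     # Eq. (eq-h3corr)
```

No compliant local model reaches the event that the quantum state produces with probability $1/16$, which is exactly the Hardy-type contradiction.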
However, they involved different settings and have no direct connection with the stabilizer formalism, making a generalization complicated. In contrast, we will see that our measurements can even be used to prove genuine multiparticle nonlocality of the hypergraph state. First, we translate the Hardy-type argument into a Bell inequality: \noindent {\bf Remark 2.} {\it Putting together all the null terms derived from the stabilizer formalism and subtracting the terms causing a Hardy-type argument, we obtain the Bell inequality \begin{align} \label{3correlations} \langle \ensuremath{\mathcal{B}}_3^{(1)} \rangle&= \big[ P(+--|XZZ)+ P(-++|XZZ) \nonumber \\ +&P(-+-|XZZ)+P(--+|XZZ)+ \mbox{ permutat.} \big] \nonumber \\ -&\big[ P(+--|XXX)+ \mbox{ permutations }\big] \geq 0, \end{align} where the permutations include all distinct terms that are obtained by permuting the qubits. The three-uniform hypergraph state violates the inequality (\ref{3correlations}) with the value $\langle \ensuremath{\mathcal{B}}_3^{(1)} \rangle=-3/16$.} This Bell inequality follows from the Hardy argument: If a deterministic local model predicts one of the results with a minus sign, it also has to predict at least one of the results corresponding to the terms with a plus sign, since otherwise it contradicts the Hardy argument. In addition, all the terms with a minus sign are exclusive, so a deterministic LHV model can predict only one of them. The Hardy-type argument and the Bell inequality can be generalized to a higher number of qubits, if we consider $N$-qubit hypergraphs with a single hyperedge of cardinality $N$: \noindent {\bf Observation 3.} {\it Consider the $N$-qubit hypergraph state with a single hyperedge of cardinality $N$.
Then, all the correlations coming from the stabilizer [as generalizations of Eqs.~(\ref{A1}) and (\ref{A})] imply that for any possible set of results $\{r_i\}$ where one result $r_{i_1}=+1$ and two results $r_{i_2}= r_{i_3} = -1$ one has \begin{equation} P(r_1, r_2, ..., r_N|X_1 X_2 ... X_N) = 0. \end{equation} For the hypergraph state, however, this probability equals $1/2^{(2N-2)}.$ This Hardy-type argument leads to a Bell inequality as in Eq.~(\ref{3correlations}) which is violated by the state with a value of $-(2^N-N-2)/2^{(2N-2)}.$} Clearly, the violation of the Bell inequality is not strong, as it does not increase with the number of particles. Nevertheless, Observation~3 shows that the nonlocal stabilizer formalism allows one to easily obtain nonlocality proofs. In fact, one can directly derive similar arguments for other hypergraph states (e.g., states with one hyperedge of cardinality $N$ and one further arbitrary hyperedge); these results will be presented elsewhere. Note that these states are not symmetric, so the results of Refs.~\cite{Wang, Abramsky} do not apply. So far, we only considered fully local models, where for a given HV all the probabilities factorize. Now we go beyond this restricted type of models to the so-called hybrid models \cite{Svetlichny87}. We consider a bipartition of the three particles, say $A|BC$, and consider a model of the type $ P (r_1,r_2,r_3 |s_1,s_2,s_3) = \int d\lambda p(\lambda) \chi^A(r_1|s_1,\lambda) \chi^{BC}(r_2, r_3|s_2, s_3,\lambda). $ Here, Alice is separated from the rest, but $\chi^{BC}$ may contain correlations, e.g., coming from an entangled state between $B$ and $C$. In order to be physically reasonable, however, we still require that $\chi^{BC}$ does not allow instantaneous signaling. Models of this kind, even if different bipartitions are mixed, cannot explain the correlations of the hypergraph state, meaning that the hypergraph state is genuine multiparticle nonlocal.
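The quantum values entering Remark 2 and Observation 3 can be computed directly. In the sketch below (our addition), the positive stabilizer terms vanish for the state, so the Bell value reduces to minus the sum of the Hardy-type probabilities; the interpretation of the forbidden outcomes as those containing at least one $+1$ and at least two $-1$ reproduces the stated count $2^N-N-2$.

```python
import itertools
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)

def bell_value(N):
    """Quantum value of the Hardy-type Bell expression for the N-qubit state
    with a single hyperedge of cardinality N (all X measurements)."""
    psi = np.full((2,) * N, 2 ** (-N / 2))
    psi[(1,) * N] *= -1

    def P(results):
        amp = psi
        for r in results:
            amp = np.tensordot(plus if r == +1 else minus, amp, axes=1)
        return float(amp ** 2)

    val, forbidden = 0.0, 0
    for r in itertools.product([+1, -1], repeat=N):
        if r.count(+1) >= 1 and r.count(-1) >= 2:
            assert abs(P(r) - 1 / 2 ** (2 * N - 2)) < 1e-12
            forbidden += 1
            val -= P(r)
    assert forbidden == 2 ** N - N - 2
    return val

assert abs(bell_value(3) + 3 / 16) < 1e-12    # Remark 2
assert abs(bell_value(4) + 10 / 64) < 1e-12   # Observation 3, N = 4
```

Every classically forbidden outcome carries the same quantum probability $1/2^{2N-2}$, giving the violation $-(2^N-N-2)/2^{2N-2}$.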
First, one can see by direct inspection that the stabilizer conditions from Eqs.~(\ref{A1}) and (\ref{A}) are not compatible with the hypergraph correlations $P(---|XXX) = 1/16$ and $P(---|ZZZ) = 1/8$. Contrary to the correlations in Eq.~(\ref{eq-h3corr}), these are symmetric, and allow the construction of a Bell-Svetlichny inequality \cite{Svetlichny87} valid for all the different bipartitions: \noindent {\bf Observation 4.} {\it Putting all the terms from the hypergraph stabilizer formalism and the correlations $P(---|XXX)$ and $P(---|ZZZ)$ together, we obtain the following Bell-Svetlichny inequality for genuine multiparticle nonlocality, \begin{align} \label{3gen} \langle\ensuremath{\mathcal{B}}_3^{(2)}\rangle&= \big[P(+--|XZZ)+ P(-++|XZZ) \nonumber \\ +&P(-+-|XZZ)+P(--+|XZZ)+ \mbox{ permutat.} \big] \nonumber \\ +&P(---|XXX) - P(---|ZZZ) \geq 0, \end{align} which is violated by the state $\ket{H_3}$ with $\langle\ensuremath{\mathcal{B}}_3^{(2)}\rangle=-1/16$.} The proof is done by an exhaustive enumeration of local and nonsignaling assignments. To investigate the noise tolerance of Ineq.~\eqref{3gen}, we consider states of the type $ \varrho= (1-\varepsilon)\ket{H}\bra{H}+\varepsilon \mathbbm{1}/8 $ and ask how much noise can be added while the inequality is still violated. The white noise tolerance of Ineq.~(\ref{3gen}) is $\varepsilon = {1}/{13} \approx 7.69\%$ and is optimal in the sense that for larger values of $\varepsilon$ a hybrid model can be found which explains all possible measurements of $X$ and $Z$ (within numerical precision). The existence of such a model can be shown by linear programming (see Appendix A \cite{appremark}). With the same method we can also prove that the state becomes fully local with respect to $X$ and $Z$ measurements for $\varepsilon \geq 2/3 \approx 66.7\% $. {\it Three-uniform hypergraph states.---} Let us extend our analysis to hypergraph states with a larger number of particles.
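Before moving on, the numbers entering Observation 4 can be checked directly (a sketch we add; the XZZ stabilizer terms vanish for the state, so only the last two probabilities contribute):

```python
import numpy as np

psi = np.full((2, 2, 2), 2 ** -1.5)
psi[1, 1, 1] *= -1                       # |H3>
minus = np.array([1.0, -1.0]) / np.sqrt(2)   # X outcome "-"
one = np.array([0.0, 1.0])                    # Z outcome "-", i.e. |1>

def P(vecs):
    amp = psi
    for v in vecs:
        amp = np.tensordot(v, amp, axes=1)
    return float(amp ** 2)

p_xxx = P([minus] * 3)                   # P(---|XXX)
p_zzz = P([one] * 3)                     # P(---|ZZZ)
assert abs(p_xxx - 1 / 16) < 1e-12
assert abs(p_zzz - 1 / 8) < 1e-12
# all positive XZZ terms are zero, hence <B_3^(2)> = 1/16 - 1/8 = -1/16
assert abs(p_xxx - p_zzz + 1 / 16) < 1e-12
```

This reproduces the violation $\langle\ensuremath{\mathcal{B}}_3^{(2)}\rangle=-1/16$ stated in Observation 4.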
Here, it is interesting to ask whether the violation of Bell inequalities increases exponentially with the number of parties. Such a behaviour has previously been observed only for GHZ states \cite{Mermin} and some cluster states \cite{gthb}. GHZ states are described by fully connected graphs (see Fig.~\ref{fig-hgbild}), i.e., fully connected two-uniform hypergraph states. It is thus natural to start with fully connected three-uniform hypergraph states. First, we observe that for such states on $N$ qubits and for even $m$ with $1<m<N$ \begin{equation}\label{lemma3} \langle \underset{m}{\underbrace{X\dots X}}Z\dots Z\rangle =\begin{cases} +\frac{1}{2} & \mbox{if $m=2$ mod $4$},\\ -\frac{1}{2} & \mbox{if $m=0$ mod $4$}. \end{cases} \end{equation} Moreover, if $m=N$, then the correlations are given by \begin{equation}\label{lemma33} \langle \underset{N}{\underbrace{XX\dots XX}}\rangle =\begin{cases} 0 & \mbox{if $N=0$ mod $4$},\\ 1 & \mbox{if $N=2$ mod $4$}. \end{cases} \end{equation} Finally, we always have $\langle{{ZZ\dots ZZ}}\rangle =0$ (see Appendix B for details \cite{appremark}). We then consider the following Bell operator \begin{align} \label{bell2} \ensuremath{\mathcal{B}}_N & = - \big[AAA\dots AA\big] + \big[BBA\dots A + \;\mbox{permutat.}\big] - \nonumber\\ - & \big[BBBBA\dots A + \;\mbox{permutat.}\big] + \big[\dots\big] - \dots \end{align} Note that this Bell operator is similar to the Bell operator $\ensuremath{\mathcal{B}}_N^M$ of the original Mermin inequality \cite{Mermin}, but it differs in the numbers of $B$ settings (always even) that are considered.
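The correlations in Eqs.~(\ref{lemma3}) and (\ref{lemma33}) can be verified numerically for a small instance (our addition; we take $N=6$, where $\binom{6}{3}=20$ hyperedges connect all triples of qubits):

```python
import itertools
import numpy as np

N = 6
psi = np.full((2,) * N, 2 ** (-N / 2))
for e in itertools.combinations(range(N), 3):   # fully connected, 3-uniform
    for idx in itertools.product([0, 1], repeat=N):
        if all(idx[v] == 1 for v in e):
            psi[idx] *= -1

def expect(ops):
    """<psi| O |psi> for O a tensor product of X and Z operators."""
    phi = psi
    for i, op in enumerate(ops):
        if op == 'X':
            phi = np.flip(phi, axis=i)          # X = bit flip
        else:
            sign = np.array([1.0, -1.0])        # Z = phase on |1>
            shape = [1] * N
            shape[i] = 2
            phi = phi * sign.reshape(shape)
    return float(np.sum(psi * phi))

assert abs(expect('XXZZZZ') - 0.5) < 1e-12   # m = 2 mod 4  -> +1/2
assert abs(expect('XXXXZZ') + 0.5) < 1e-12   # m = 0 mod 4  -> -1/2
assert abs(expect('XXXXXX') - 1.0) < 1e-12   # N = 2 mod 4  -> +1
assert abs(expect('ZZZZZZ')) < 1e-12         # <Z...Z> = 0
```

All four checks match the values quoted above.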
Using the correlations computed above, we can state: \noindent {\bf Observation 5.} {\it If we fix in the Bell operator $\ensuremath{\mathcal{B}}_N$ in Eq.~(\ref{bell2}) the measurements to be $A=Z$ and $B=X$, then the $N$-qubit fully-connected three-uniform hypergraph state violates the classical bound by an amount that grows exponentially with the number of qubits, namely \begin{align} \left\langle \ensuremath{\mathcal{B}}_N \right\rangle_C &\leq 2^{\left\lfloor N/2\right\rfloor} \quad \mbox{ for local HV models, and} \nonumber \\ \left\langle \ensuremath{\mathcal{B}}_N \right\rangle_Q &\geq 2^{N-2}-\frac{1}{2} \quad \mbox{ for the hypergraph state.} \end{align} } The proof is given in Appendix C \cite{appremark}. {\it Four-uniform hypergraph states.---} Finally, let us consider four-uniform complete hypergraph states. For them, the correlations of measurements as in Eq.~(\ref{lemma3}) are not so simple: they are not constant, but depend on $m$ as well as on $N$. Nevertheless, they can be explicitly computed, and detailed formulas are given in Appendix D \cite{appremark}. {From} these correlations, we can state: \noindent {\bf Observation 6.} {\it The $N$-qubit fully-connected four-uniform hypergraph state violates local realism by an amount that grows exponentially with the number of qubits. More precisely, one can find a Mermin-like Bell operator $\ensuremath{\mathcal{B}}_N$ such that \begin{align} \frac{\left\langle \ensuremath{\mathcal{B}}_N \right\rangle_Q} {\left\langle \ensuremath{\mathcal{B}}_N \right\rangle_C} \stackrel{N\rightarrow \infty}{\sim} \frac{\Big(1+\frac{1}{\sqrt{2}}\Big)^{N-1} }{\sqrt{2}^{N+3}} \approx \frac{1.20711^{N}}{2\sqrt{2}+2}. \end{align}} A detailed discussion is provided in Appendix E \cite{appremark}. {\it Robustness.---} So far, we have shown that three- and four-uniform hypergraph states violate local realism in a manner comparable to GHZ states.
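The quantum side of Observation 5 can be checked numerically for $N=6$ (our addition; the Bell operator sums all terms with an even number of $X$ settings with alternating signs, as in Eq.~(\ref{bell2})):

```python
import itertools
import numpy as np

N = 6
psi = np.full((2,) * N, 2 ** (-N / 2))
for e in itertools.combinations(range(N), 3):   # fully connected, 3-uniform
    for idx in itertools.product([0, 1], repeat=N):
        if all(idx[v] == 1 for v in e):
            psi[idx] *= -1

def expect(ops):
    phi = psi
    for i, op in enumerate(ops):
        if op == 'X':
            phi = np.flip(phi, axis=i)
        else:
            sign = np.array([1.0, -1.0])
            shape = [1] * N
            shape[i] = 2
            phi = phi * sign.reshape(shape)
    return float(np.sum(psi * phi))

bell = 0.0
for m in range(0, N + 1, 2):                    # even number of B = X settings
    sign = -1 if (m // 2) % 2 == 0 else +1      # signs -, +, -, + for m = 0,2,4,6
    for xs in itertools.combinations(range(N), m):
        ops = ['X' if i in xs else 'Z' for i in range(N)]
        bell += sign * expect(ops)

assert bell >= 2 ** (N - 2) - 0.5               # Observation 5 (quantum side)
assert abs(bell - 16.0) < 1e-9                  # here <B_6>_Q = 2^{N-2} = 16
```

The value $16$ exceeds the classical bound $2^{\lfloor N/2 \rfloor}=8$ quoted in Observation 5.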
A striking difference is, however, that the entanglement and Bell inequality violation of hypergraph states are robust under particle loss. This is in stark contrast to GHZ states, which become fully separable if a particle is lost. We can state: \noindent {\bf Observation 7.} {\it The $N$-qubit fully-connected four-uniform hypergraph state preserves the violation of local realism even after the loss of one particle. More precisely, for $N=8k+4$ we have ${\left\langle \ensuremath{\mathcal{B}}_{N-1}\right\rangle_Q}/ {\left\langle \ensuremath{\mathcal{B}}_N \right\rangle_Q} \stackrel{N\rightarrow \infty}{\sim} {1}/({\sqrt{2}+1}).$ This means that the reduced state shows the same exponential scaling of the Bell inequality violation as the original state. } For a detailed discussion see Appendix F \cite{appremark}. For three-uniform complete hypergraph states we can prove that the reduced states are highly entangled, as they violate inequalities testing for separability \cite{Roy} by an exponentially large amount. This violation decreases with the number of traced-out qubits, but persists even if several qubits are lost. This suggests that this class of hypergraph states is also more robust than GHZ states; details can be found in Appendix G \cite{appremark}. Despite the structural differences, this property is reminiscent of the W state, which is itself less entangled but more robust than the GHZ state \cite{Buzek}. In addition, this may allow for lower detection efficiencies in experiments. {\it Discussion and conclusion.---} A first application of our results is quantum metrology. In the standard scheme of quantum metrology one measures an observable $M_\theta$ and tries to determine the parameter $\theta$ which describes a phase in some basis \cite{giovanetti, weibo}.
If one takes product states, one obtains a signal $\mean{M_\theta} \sim \cos{(\theta)}$ on a single particle; repeating this on $N$ particles allows one to determine $\theta$ with an accuracy $\delta \theta \sim 1/\sqrt{N}$, the so-called standard quantum limit. Using an $N$-qubit GHZ state, however, one observes $\mean{(M_\theta)^{\otimes N}} \sim \cos{(N\theta)}$ and this phase super-resolution allows one to reach the Heisenberg limit $\delta \theta \sim 1/{N}$. For a general state $\ensuremath{\varrho}$, it has been shown that the visibility of the phase super-resolution is given by the expectation value of the Mermin-type inequality, $V={\rm Tr}(\ensuremath{\mathcal{B}}_N \ensuremath{\varrho})/2^{N-1}$ \cite{weibo}. So, since the three-uniform hypergraph states violate this inequality with a value $\mean{\ensuremath{\mathcal{B}}_N}_Q \sim 2^{(N-2)}$, the visibility is $V \sim 1/2$, independently of the number of particles. This means that these states can be used for Heisenberg-limited metrology, and from our results they can be expected to have the advantage of being more robust to noise and particle losses. The second application of the exponential violation of Bell inequalities is \textit{nonadaptive measurement based quantum computation with linear side-processing} ($NMQC_{\oplus}$) \cite{Hoban11}. $NMQC_{\oplus}$ is a non-universal model of quantum computation where linear classical side-processing is combined with quantum measurements in a nonadaptive way, i.e., the choice of settings is independent of previous outcomes. In Ref.~\cite{Hoban11} the authors connect the expectation value of a full-correlation Bell expression \cite{wwzb} with the success probability of computing a Boolean function, specified as a function of the inequality coefficients, via $NMQC_{\oplus}$.
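Returning to the metrology scheme above, the GHZ phase super-resolution $\mean{(M_\theta)^{\otimes N}}=\cos(N\theta)$ can be checked directly (a sketch we add; we take the standard equatorial observable $M_\theta=\cos\theta\, X+\sin\theta\, Y$):

```python
import numpy as np

def M_theta(theta):
    # equatorial spin observable cos(t) X + sin(t) Y
    return np.array([[0.0, np.exp(-1j * theta)],
                     [np.exp(1j * theta), 0.0]])

def ghz_signal(N, theta):
    """<GHZ_N| M_theta^{otimes N} |GHZ_N>."""
    op = M_theta(theta)
    for _ in range(N - 1):
        op = np.kron(op, M_theta(theta))
    ghz = np.zeros(2 ** N)
    ghz[0] = ghz[-1] = 1 / np.sqrt(2)        # (|0...0> + |1...1>)/sqrt(2)
    return float(np.real(ghz @ op @ ghz))

for theta in [0.0, 0.3, 1.1]:
    assert abs(ghz_signal(4, theta) - np.cos(4 * theta)) < 1e-10
```

The signal oscillates $N$ times faster than for a single qubit, which is the phase super-resolution underlying the Heisenberg scaling.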
In particular, the exponential violation of generalized Svetlichny inequalities \cite{Collins02} (equal to Mermin inequalities for even $N$) corresponds to a constant success probability $P_{\rm succ}$ of computing the pairwise $\mathrm{AND}$ on $N$ bits extracted from a uniform distribution, whereas in the classical case $P_{\rm succ}-1/2$ decreases exponentially with $N$. As a consequence, the exponential violation of the full-correlation Bell expression $\mathcal{B}_N$ can be directly related to an exponential advantage for computation tasks in the $NMQC_{\oplus}$ framework. Moreover, in several cases, e.g., $4$-uniform hypergraph states of $N = 6\ {\rm mod}\ 8$ qubits, also the Svetlichny inequality is violated exponentially, providing an advantage for the computation of the pairwise $\mathrm{AND}$ discussed in Ref.~\cite{Hoban11}. In summary, we have shown that hypergraph states violate local realism in many ways. This suggests that they are interesting resources for quantum information processing; moreover, this makes the observation of hypergraph states a promising task for experimentalists. In our work, we focused only on some classes of hypergraph states, but for future research, it would be desirable to identify classes of hypergraph states which allow for an all-versus-nothing violation of local realism or which are strongly genuine multiparticle nonlocal. We thank D.~Nagaj, F.~Steinhoff, M.~Wie\'sniak, and B.~Yoshida for discussions. This work has been supported by the EU (Marie Curie CIG 293993/ENFOQI), the FQXi Fund (Silicon Valley Community Foundation), the DFG and the ERC. \onecolumngrid \section{Appendix A.
Genuine multiparticle nonlocality with linear programming}\label{App:LP} For a fixed number of measurement settings and outcomes, the probabilities arising from a hybrid local-nonsignalling model, such as the one described in the main text for the splitting $A|BC$, form a polytope whose extremal points are given by combinations of deterministic local assignments for the party $A$ and extremal nonsignalling assignments, i.e., local deterministic assignments and PR boxes \cite{brunnerreview}, for the parties $BC$. In order to detect genuine multiparticle nonlocality, one has to consider all combinations of probabilities arising from the other local-nonsignalling splittings, namely $C|AB$ and $B|AC$. Geometrically, this corresponds to taking the convex hull of the three polytopes associated with the three different splittings. Let us denote these polytopes as $\mathcal{P}_{A|BC}, \mathcal{P}_{C|AB} ,\mathcal{P}_{B|AC}$ and their convex hull as $\mathcal{P}_{L-NS}$. By the definition of the convex hull, every vector $\mathbf{p}\in \mathcal{P}_{L-NS}$ can be written as a convex combination of three vectors $\mathbf{p}_{ A|BC}, \mathbf{p}_{C|AB}$ and $\mathbf{p}_{B|AC}$, which, in turn, can be written as convex combinations of the vertices of the corresponding polytopes. To check whether a given point $\mathbf{p}$ belongs to $\mathcal{P}_{L-NS}$ it is therefore sufficient to use the description in terms of the extremal points of $\mathcal{P}_{A|BC}, \mathcal{P}_{C|AB} ,\mathcal{P}_{B|AC}$. Let us denote them as $\{\mathbf{v}_i\}$. The membership problem can then be formulated as a linear program (LP) \cite{brunnerreview} \begin{equation}\begin{split}\label{e:lp} \text{maximize: } &\mathbf{\lambda} \cdot \mathbf{p} - C \\ \text{subject to: }& \mathbf{\lambda} \cdot \mathbf{v}_i - C \leq 0\text{ , for all }\mathbf{v}_i \\ & \mathbf{\lambda} \cdot \mathbf{p} - C \leq 1.
\end{split}\end{equation} The variables of the LP are $\{\mathbf{\lambda},C\}$, where $\mathbf{\lambda}$ represents the coefficients of a Bell-Svetlichny inequality detecting genuine multiparticle nonlocality, and $C$ the corresponding local-nonsignalling bound. The LP optimizes the coefficients $\mathbf{\lambda}$ to obtain the maximal value (at most $C+1$) for the quantum probabilities, while keeping the local-nonsignalling bound $C$. As a consequence, the vector $\mathbf{p}$ can be written as a convex combination of $\{\mathbf{v}_i\}$ if and only if the optimal value of the LP is 0. The noise tolerance for $\ket{H_3}$ can then be computed by mixing it with white noise, i.e., $\ket{H_3}\bra{H_3}\mapsto \varrho= (1-\varepsilon)\ket{H_3}\bra{H_3}+\varepsilon \mathbbm{1}/8$, and computing for which values of $\varepsilon$ the LP \eqref{e:lp} gives the optimal value $0$. Standard numerical techniques for LP give that up to $\varepsilon\approx {1}/{13} \approx 7.69\%$ the probabilities for $X$ and $Z$ measurements cannot be explained by a hybrid local-nonsignalling model. \section{APPENDIX B: Correlations for three-uniform hypergraph states} \subsection{B.1. Preliminary calculations} Before starting with the actual calculations, we need to establish a couple of identities and a look-up table, which we will refer to throughout the main proofs. The first and probably most important identity is a commutation relation between multi-qubit phase gates and Pauli $X$ matrices \cite{Otfried}, \begin{equation}{\label{identity}} C_e\big(\bigotimes_{k \in K} X_k \big)=\big(\bigotimes_{k \in K} X_k \big)\big(\prod_{f\in \mathcal{P}(K)}C_{e\backslash f}\big). \end{equation} Here, $\mathcal{P}(K)$ denotes the power set of the index set $K$. Note that the product of the $C_{e\backslash f}$ may include the term $C_{\emptyset}$, which is defined to be $-\openone$ and leads to a global sign.
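The commutation relation (\ref{identity}) can be checked numerically for a small instance (our addition; we take the three-qubit hyperedge $e=\{1,2,3\}$ with $X$'s on the first two qubits, using zero-based labels and full $8\times 8$ matrices):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron_all(ops):
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def C(edge, n=3):
    """Phase gate C_e: -1 on basis states whose qubits in `edge` are all 1."""
    d = np.ones(2 ** n)
    for b in range(2 ** n):
        bits = [(b >> (n - 1 - q)) & 1 for q in range(n)]
        if edge and all(bits[q] == 1 for q in edge):
            d[b] = -1.0
    return np.diag(d)

e, K = (0, 1, 2), (0, 1)                   # hyperedge {1,2,3}, X on qubits 1,2
XK = kron_all([X, X, I2])
lhs = C(e) @ XK
rhs = XK.copy()
for f in [(), (0,), (1,), (0, 1)]:         # power set of K
    rhs = rhs @ C(tuple(set(e) - set(f)))
assert np.allclose(lhs, rhs)
```

Here none of the reduced edges $e\backslash f$ is empty, so the global sign from $C_{\emptyset}$ does not appear.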
Furthermore, it turns out to be useful to recall some basic facts about binomial coefficients, as they appear frequently in the following calculations. \begin{lemma} The following equalities hold: \begin{equation}\label{identity1} Re\Big[(1+i)^n\Big]=\sum_{k=0,4,\dots}^n\binom{n}{k}-\binom{n}{k+2}, \end{equation} \begin{equation}\label{identity2} Im\Big[(1+i)^n\Big]=\sum_{k=0,4,\dots}^n\binom{n}{k+1}-\binom{n}{k+3}. \end{equation} \end{lemma} \begin{proof} Here we derive (\ref{identity1}) and (\ref{identity2}) together: \begin{equation} s:=(1+i)^n=\sum^n_{k=0} \binom{n}{k}i^k =\sum^n_{k=0,4,\dots} \binom{n}{k}+ i\binom{n}{k+1} -\binom{n}{k+2}-i\binom{n}{k+3}. \end{equation} It is easy to see that $Re[s]$ and $Im[s]$ indeed lead to the identities (\ref{identity1}) and (\ref{identity2}), respectively. \end{proof} The following look-up table lists the values of (\ref{identity1}) and (\ref{identity2}) for different $n$. These values can be derived from the basic properties of complex numbers: \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \# & $n$ & $Re\Big[(1+i)^{n}\Big]$ & $Im\Big[(1+i)^{n}\Big]$ & $Re\Big[(1+i)^{n}\Big]$$+Im\Big[(1+i)^{n}\Big]$ & $Re\Big[(1+i)^{n}\Big]$$-Im\Big[(1+i)^{n}\Big]$\tabularnewline \hline \hline 1. & $n=0$ mod 8 & $+2^{\frac{n}{2}}$ & $0$ & $+2^{\frac{n}{2}}$ & $+2^{\frac{n}{2}}$\tabularnewline \hline 2. & $n=1$ mod 8 & $+2^{\frac{n-1}{2}}$ & $+2^{\frac{n-1}{2}}$ & $+2^{\frac{n+1}{2}}$ & $0$\tabularnewline \hline 3. & $n=2$ mod 8 & $0$ & $+2^{\frac{n}{2}}$ & $+2^{\frac{n}{2}}$ & $-2^{\frac{n}{2}}$\tabularnewline \hline 4. & $n=3$ mod 8 & $-2^{\frac{n-1}{2}}$ & $+2^{\frac{n-1}{2}}$ & $0$ & $-2^{\frac{n+1}{2}}$\tabularnewline \hline 5. & $n=4$ mod 8 & $-2^{\frac{n}{2}}$ & $0$ & $-2^{\frac{n}{2}}$ & $-2^{\frac{n}{2}}$\tabularnewline \hline 6. & $n=5$ mod 8 & $-2^{\frac{n-1}{2}}$ & $-2^{\frac{n-1}{2}}$ & $-2^{\frac{n+1}{2}}$ & $0$\tabularnewline \hline 7.
& $n=6$ mod 8 & $0$ & $-2^{\frac{n}{2}}$ & $-2^{\frac{n}{2}}$ & $+2^{\frac{n}{2}}$\tabularnewline \hline 8. & $n=7$ mod 8 & $+2^{\frac{n-1}{2}}$ & $-2^{\frac{n-1}{2}}$ & $0$ & $+2^{\frac{n+1}{2}}$\tabularnewline \hline \end{tabular}\\ $\quad$\\ \textbf{Table 0.} Look-up table for the values of (\ref{identity1}) and (\ref{identity2}). \end{center} \subsection{B.2. Correlations for $X$ and $Z$ measurements on fully-connected three-uniform hypergraph states} \begin{lemma} \label{lemma3full} Consider an arbitrary number of qubits $N$ and the three-uniform fully-connected hypergraph (HG) state. Then, if $m$ is even with $1<m<N$, the following equality holds: \begin{equation} \langle \underset{m}{\underbrace{X\dots X}}Z\dots Z\rangle =\begin{cases} +\frac{1}{2} & \mbox{if $m=2$ mod $4$},\\ -\frac{1}{2} & \mbox{if $m=0$ mod $4$}. \end{cases} \end{equation} \end{lemma} \begin{proof} We can write: \begin{equation}\label{appendixAeq1} K:=\bra{HG} X\dots XZ\dots Z\ket{HG}=\bra{+}^{\otimes N}\bigg(\prod_{e\in E} C_e\bigg) X\dots XZ\dots Z\bigg( \prod_{e\in E} C_e\bigg) \ket{+}^{\otimes N}. \end{equation} We can group all the controlled phase gates on the right-hand side of the expression (\ref{appendixAeq1}). Note that the operators $C_e$ and $X_i$ do not commute, but we can use the identity (\ref{identity}). While regrouping, we count the multiplicity of each phase gate. If a phase gate appears an even number of times, it gives an identity, since $C^2=\mathbbm{1}$; if not, we keep this phase gate with multiplicity one for the further calculations. For purposes which will become apparent shortly, we denote the parties which measure in the $X$ direction by $\circledast$ and the ones measuring in the $Z$ direction by $\bigtriangleup$, so that, for example, if a phase gate acts on $XXXZZ$, it is represented as $\circledast\circledast\circledast\bigtriangleup\bigtriangleup$.
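As an aside (our addition, not part of the appendix), the identities (\ref{identity1}), (\ref{identity2}) and the entries of Table 0 can be spot-checked numerically:

```python
from math import comb

def re_im(n):
    """Real and imaginary parts of (1+i)^n, computed directly."""
    z = (1 + 1j) ** n
    return round(z.real), round(z.imag)

# identities (identity1) and (identity2); comb(n, k) = 0 for k > n
for n in range(0, 25):
    re = sum(comb(n, k) - comb(n, k + 2) for k in range(0, n + 1, 4))
    im = sum(comb(n, k + 1) - comb(n, k + 3) for k in range(0, n + 1, 4))
    assert (re, im) == re_im(n)

# spot-check rows of Table 0
assert re_im(8) == (2 ** 4, 0)     # n = 0 mod 8: (+2^{n/2}, 0)
assert re_im(3) == (-2, 2)         # n = 3 mod 8: (-2^{(n-1)/2}, +2^{(n-1)/2})
assert re_im(6) == (0, -2 ** 3)    # n = 6 mod 8: (0, -2^{n/2})
```

Both the summation identities and the mod-8 pattern of Table 0 check out.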
Without loss of generality, we fix one phase gate $C_e$ and consider all the possible configurations of $\circledast$ and $\bigtriangleup$ it can act on. Since we work with three-uniform HG states, every phase gate acts on a triple of parties. These parties can be either of type $\circledast$ or $\bigtriangleup$, and we have to consider all possible scenarios. Since we are working with symmetric states, we can sort the parties such that we have $m$ $\circledast$'s followed by $(N-m)$ $\bigtriangleup$'s: \begin{equation} \label{appendixAeq2} \underset{m}{\underbrace{\circledast\dots \circledast}} \underset{N-m}{\underbrace{\bigtriangleup\dots\bigtriangleup}} \end{equation} and then we can enumerate all the scenarios of one phase gate acting on (\ref{appendixAeq2}): \begin{enumerate} \item $C_eZZZ$, corresponding to $\bigtriangleup\bigtriangleup\bigtriangleup$; \item $C_eXZZ$, corresponding to $\circledast\bigtriangleup\bigtriangleup$; \item $C_eXXZ$, corresponding to $\circledast\circledast\bigtriangleup$; \item $C_eXXX$, corresponding to $\circledast\circledast\circledast$. \end{enumerate} We consider each case separately:\\ 1. $C_eZZZ=ZZZC_e$, as $C_e$ and $Z$ commute. $C_e$ moves to the right side with multiplicity one. To save us writing in the future, we will denote the multiplicity of the phase gate moving to the right side by $\# e$. In this particular case it is $ \# \bigtriangleup\bigtriangleup\bigtriangleup =1$. However, on the right side of equation (\ref{appendixAeq1}) we have a product of all three-party phase gates. Therefore, we get $C_e$ with multiplicity two and $C_e^2=\mathbbm{1}$. Note that, as we have chosen an arbitrary three-qubit phase gate, the same result holds for every such phase gate. So, all three-qubit phase gates coming from case 1 cancel out. We will see that, by the same reasoning, all three-qubit phase gates cancel out (give an identity). \\ 2.
For $C_eXZZ$, we use the identity ($\ref{identity}$):
\begin{equation} \label{appendixAeq3}
C_eXZZ=XC_eC_{\{e\backslash X\}}ZZ.
\end{equation}
The three-qubit phase gate $C_e$, from (\ref{appendixAeq3}), appears with multiplicity one $(\# \circledast\bigtriangleup\bigtriangleup=1)$ and, like in case 1, it gives an identity when multiplied by the same three-qubit phase gate on the right side of the expression in (\ref{appendixAeq1}). It is trickier to calculate the multiplicity of $C_{\{e\backslash X\}}$, denoted $\# \bigtriangleup\bigtriangleup$, since the $\circledast$ part (or equivalently, the $X$ part) is removed from the set of vertices $e$. For this we need to fix $\bigtriangleup\bigtriangleup$ and count all the scenarios in which an arbitrary $C_{e}$ is reduced to $\bigtriangleup\bigtriangleup$. As we are working with the symmetric case, such a scenario repeats $\binom{m}{1}=m$ times, where $m$ is the number of parties measuring in the $X$ direction. We denote this shortly as $\# \bigtriangleup\bigtriangleup =\binom{m}{1}=m$. So, as $m$ is an even number, $(C_{\{e\backslash X\}})^m=(C_{\bigtriangleup\bigtriangleup})^m=\mathbbm{1}$.\\
Note that the gate $C_{\bigtriangleup\bigtriangleup}$ can only be generated in case 2.\\
3. For $C_eXXZ$, we use the identity ($\ref{identity}$):
\begin{equation} \label{appendixAeq4}
C_eXXZ=XXC_eC_{\{e\backslash XX\}} \Big[\prod_{\forall X} C_{\{e\backslash X\}}\Big] Z.
\end{equation}
The three-qubit phase gate $C_e$, from (\ref{appendixAeq4}), appears with multiplicity one $(\# \circledast\circledast\bigtriangleup=1)$; therefore, like in the two previous cases, it cancels out on the right side of the expression. The multiplicity of $C_{\{e\backslash XX\}}$ is calculated by fixing a concrete $\bigtriangleup$ and counting all possible appearances of an arbitrary $\circledast\circledast$.
As the number of parties measuring in the $X$ direction is $m$, this is the number of combinations of two parties with $X$ measurements out of the total $m$ parties. So,
\begin{equation}
\# \bigtriangleup=\binom{m}{2}=\frac{m(m-1)}{2}=\begin{cases}
\begin{array}[t]{ccc}
\mbox{even, if} & m=0 \; mod\; 4 & \Rightarrow C_{\triangle}\mbox{ cancels out.}\\
\mbox{odd, if } & m=2 \; mod \; 4 & \Rightarrow C_{\triangle}\mbox{ remains.}
\end{array}\end{cases}
\end{equation}
The last quantity from this case is the multiplicity of $C_{\{e\backslash X\}}$, or $\# \circledast\bigtriangleup$. Here we fix one qubit from $m$ ($X$ direction) and one from $N-m$ ($Z$ direction) and count the number of occurrences where the third qubit is an arbitrary one of type $\circledast$, which is exactly $\binom{m-1}{1}$. Therefore,
\begin{equation}
\#\circledast\bigtriangleup=\binom{m-1}{1}=m-1,\mbox{ which is odd}\;\Rightarrow\; C_{\circledast\bigtriangleup}\mbox{ remains}.
\end{equation}\\
4. For $C_eXXX$, we use the identity ($\ref{identity}$):
\begin{equation} \label{appendixAeq5}
C_eXXX=XXXC_e \Big[\prod_{\forall X} C_{\{e\backslash X\}}\Big] \Big[\prod_{\forall X} C_{\{e\backslash XX\}}\Big]C_{\{\}}.
\end{equation}
$C_e$ occurs once and cancels against the same gate from the right side, as in the previous cases. The multiplicity of $C_{\{e\backslash X\}}$ is $\# \circledast\circledast$. Here we fix two parties in the $X$ direction and count the occurrences of this scenario by choosing the third party, in the $X$ direction, from the remaining $m-2$. Therefore,
\begin{equation}
\#\circledast\circledast=\binom{m-2}{1}=m-2,\mbox{ which is even}\;\Rightarrow\; C_{\circledast\circledast}\mbox{ cancels out}.
\end{equation}
Similarly, for $C_{\{e\backslash XX\}}$ we fix one party in the $X$ direction and count all possibilities of choosing two parties out of the remaining $m-1$.
Therefore,
\begin{equation}
\# \circledast=\binom{m-1}{2}=\frac{(m-1)(m-2)}{2}=\begin{cases}
\begin{array}[t]{ccc}
\mbox{odd, if} & m=0 \; mod\; 4 & \Rightarrow\; C_\circledast \mbox{ remains.}\\
\mbox{even, if} & m=2\; mod\; 4 & \Rightarrow\; C_\circledast \mbox{ cancels out.}
\end{array}\end{cases}
\end{equation}
Finally, we consider $C_{\{\}}$. This gate determines the global sign of the expectation value and it appears only when $C_e$ acts on systems which are all measured in the $X$ direction. Therefore,
\begin{equation}{\label{3unifGL}}
\#\{\}=\binom{m}{3}=\frac{m(m-1)(m-2)}{2 \cdot 3},\mbox{ which is even}\Rightarrow\; \mbox{ the global sign is positive}.
\end{equation}\\
To proceed, we need to consider two cases, \#1: $m=0$ mod 4 and \#2: $m=2$ mod 4, and calculate the expectation value separately for both: \\
\textbf{Case \#1:} When $m=0 \; mod\; 4$, we write out all remaining phase gates and continue the derivation from equation (\ref{appendixAeq1}):
\begin{equation}
\langle K \rangle =\bra{+}^{\otimes N} X\dots X Z\dots Z \prod_{\forall \circledast, \forall\bigtriangleup } C_{\circledast\bigtriangleup} C_{\circledast}\ket{+}^{\otimes N}.
\end{equation}\\
Using the fact that $\bra{+}$ is an eigenvector of $X$ with eigenvalue $+1$, we can get rid of all the $X$'s, and then we can write $C_{\bigtriangleup}$ instead of $Z$:
\begin{align}\label{appendixAeq6}
\begin{split}
\langle K \rangle= & \bra{+}^{\otimes N}\prod_{\forall \circledast, \forall\bigtriangleup } C_{\circledast\bigtriangleup} C_{\circledast}C_{\bigtriangleup}\ket{+}^{\otimes N}=\frac{1}{\sqrt{2}^N}\sum_{i=00\dots 00}^{11\dots 11}\bra{i}\prod_{\forall \circledast, \forall\bigtriangleup } C_{\circledast\bigtriangleup} C_{\circledast}C_{\bigtriangleup} \frac{1}{\sqrt{2}^N}\sum_{j=00\dots 00}^{11\dots 11}\ket{j}\\
= & \frac{1}{2^N}\bigg[\bra{00\dots 00}\prod_{\forall \circledast, \forall\bigtriangleup } C_{\circledast\bigtriangleup} C_{\circledast}C_{\bigtriangleup}\ket{00\dots 00}+\dots +\bra{11\dots 11}\prod_{\forall \circledast, \forall\bigtriangleup } C_{\circledast\bigtriangleup} C_{\circledast}C_{\bigtriangleup}\ket{11\dots 11}\bigg]\\
= &\frac{1}{2^N}Tr\bigg[\prod_{\forall \circledast, \forall\bigtriangleup } C_{\circledast\bigtriangleup} C_{\circledast}C_{\bigtriangleup}\bigg].
\end{split}
\end{align}
In (\ref{appendixAeq6}), to pass from the first line to the second, note that $\prod_{\forall \circledast, \forall\bigtriangleup } C_{\circledast\bigtriangleup} C_{\circledast}C_{\bigtriangleup}$ is a diagonal matrix. \\
To evaluate the trace of the given diagonal matrix, we need to find the difference between the number of $+1$'s and $-1$'s on the diagonal. We write every row in the computational basis by enumerating it with the binary notation. For each row, we denote by $\alpha$ the number of 1's in binary notation appearing in the first $m$ columns and by $\beta$, the same on the rest. For example, for $N=7$ and $m=4$, the basis element $\ket{1101110}$ leads to $\alpha=3$ and $\beta=2$.
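The $\alpha$/$\beta$ bookkeeping is elementary; a two-line sketch (ours, with a hypothetical helper name) reproduces the quoted example:

```python
# alpha counts the 1's among the first m positions of a basis label,
# beta counts the 1's among the remaining positions (helper name is ours).
def alpha_beta(bits: str, m: int):
    return bits[:m].count('1'), bits[m:].count('1')

# The N = 7, m = 4 example from the text: |1101110> gives alpha = 3, beta = 2.
assert alpha_beta('1101110', 4) == (3, 2)
```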
Considering the phase gates in the equation (\ref{appendixAeq6}), the expression $(-1)^s$ defines whether in the given row the diagonal element is $+1$ or $-1$, where:
\begin{equation}
s:=\binom{\alpha}{1}\binom{\beta}{1}+\binom{\alpha}{1}+\binom{\beta}{1}=\alpha \beta +\alpha+\beta .
\end{equation}
In $s$, $\binom{\alpha}{1}\binom{\beta}{1}$ denotes how many $C_{\circledast\bigtriangleup}$ act on the row; likewise, $\binom{\alpha}{1}$ gives the number of $C_{\circledast}$ and $\binom{\beta}{1}$ the number of $C_{\bigtriangleup}$. Every time a phase gate acts, it changes the sign of the diagonal element in the row. Therefore, we need to determine the parity of $s$.\\
To see whether $s$ is even or odd, we consider the following cases exhaustively:\\
$\quad$\\
\begin{tabular}{lllll}
\textbf{1. } & $\alpha$ is even \& $\beta$ is even & $(-1)^{s}=+1$ & & \multirow{2}{*}{$\Rightarrow$ These two cases sum up to zero.}\tabularnewline
\textbf{2. } & $\alpha$ is even \& $\beta$ is odd & $(-1)^{s}=-1$ & & \tabularnewline
\end{tabular}\\
$\quad$\\
\begin{tabular}{lllll}
\textbf{3. } & $\alpha$ is odd \& $\beta$ is even & $(-1)^{s}=-1$ & & \multirow{2}{*}{$\Rightarrow$ These two cases contribute with the negative sign.}\tabularnewline
\textbf{4.} & $\alpha$ is odd \& $\beta$ is odd & $(-1)^{s}=-1$ & & \tabularnewline
\end{tabular}\\
From cases 3 and 4, one can directly calculate the trace:
\begin{equation}\label{appendixAeq7}
\langle K \rangle=\frac{1}{2^N}\bigg[-\sum_{\alpha=1,3,\dots }^m\binom{m}{\alpha}\sum_{\beta=0}^{N-m}\binom{N-m}{\beta}\bigg]=-\frac{2^{m-1}2^{N-m}}{2^N}=-\frac{1}{2}.
\end{equation}
So, we get that if $m$ is divisible by 4,
\begin{equation}
\langle \underset{m}{\underbrace{X\dots X}}Z\dots Z\rangle =-\frac{1}{2}.
\end{equation}
\textbf{Case \#2:} We use the same approach: when $m=2 \; mod \; 4$, we write out all remaining phase gates and continue the derivation from equation (\ref{appendixAeq1}):
\begin{equation}\label{appendixAeq8}
\langle K \rangle=\bra{+}^{\otimes N} X\dots X Z\dots Z \prod_{\forall \circledast, \forall\bigtriangleup } C_{\circledast\bigtriangleup} C_{\bigtriangleup}\ket{+}^{\otimes N}.
\end{equation}\\
Again we use the fact that $\bra{+}$ is an eigenvector of $X$, and write $Z=C_{\bigtriangleup}$. As (\ref{appendixAeq8}) already contains one $C_{\bigtriangleup}$, the two cancel. Therefore, we are left with:
\begin{equation}
\langle K \rangle=\bra{+}^{\otimes N} \prod_{\forall \circledast, \forall\bigtriangleup } C_{\circledast\bigtriangleup} \ket{+}^{\otimes N}=\frac{1}{2^N}Tr\bigg[\prod_{\forall \circledast, \forall\bigtriangleup } C_{\circledast\bigtriangleup} \bigg].
\end{equation}\\
We need to define the sign of the diagonal element by $(-1)^s$, where
\begin{equation}
s=\binom{\alpha}{1}\binom{\beta}{1}=\alpha\beta.
\end{equation}
\begin{tabular}{lllll}
\textbf{1. } & $\alpha$ is even & $(-1)^{s}=+1$ & & $\Rightarrow$ This case contributes with the positive sign in the trace. \tabularnewline
\end{tabular}
$ $
\begin{tabular}{lllll}
\textbf{2. } & $\alpha$ is odd \& $\beta$ is even & $(-1)^{s}=+1$ & & \multirow{2}{*}{$\Rightarrow$ These two give zero contribution together.}\tabularnewline
\textbf{3.} & $\alpha$ is odd \& $\beta$ is odd & $(-1)^{s}=-1$ & & \tabularnewline
\end{tabular}
As cases 2 and 3 add up to zero, we only consider case 1:
\begin{equation}
\langle K \rangle=\frac{1}{2^N}\sum_{\alpha=0,2,\dots }^m\binom{m}{\alpha}\sum_{\beta=0}^{N-m}\binom{N-m}{\beta}=\frac{2^{m-1}2^{N-m}}{2^N}=\frac{1}{2}.
\end{equation}
So, we get that if $m$ is even but not divisible by 4,
\begin{equation}
\langle \underset{m}{\underbrace{X\dots X}}Z\dots Z\rangle =\frac{1}{2}.
\end{equation}
This completes the proof.
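As an independent sanity check (ours, not part of the argument), the statement can be verified by brute force for small $N$: the sketch below builds $\ket{HG}$ by applying a $CCZ$ sign flip for every three-element hyperedge and evaluates $\langle X^{\otimes m}\otimes Z^{\otimes(N-m)}\rangle$ directly.

```python
# Brute-force cross-check (illustration only): for the fully connected
# three-uniform hypergraph state, <X...X Z...Z> with m X's (m even,
# 1 < m < N) equals +1/2 if m = 2 mod 4 and -1/2 if m = 0 mod 4.
from itertools import combinations

def hypergraph_state(n, k=3):
    """Amplitudes of |+>^n after a C_e gate for every k-element hyperedge."""
    psi = [2 ** (-n / 2)] * (2 ** n)
    for e in combinations(range(n), k):
        for idx in range(2 ** n):
            if all((idx >> (n - 1 - q)) & 1 for q in e):
                psi[idx] = -psi[idx]
    return psi

def xz_expectation(psi, n, m):
    """<psi| X on the first m qubits, Z on the rest |psi> (real amplitudes)."""
    xmask = ((1 << m) - 1) << (n - m)   # X flips the first m bits
    zmask = (1 << (n - m)) - 1          # Z's act on the last n-m bits
    return sum(psi[idx ^ xmask] * (-1) ** bin(idx & zmask).count('1') * psi[idx]
               for idx in range(2 ** n))

for n in (5, 6, 7):
    for m in (2, 4):
        expected = 0.5 if m % 4 == 2 else -0.5
        assert abs(xz_expectation(hypergraph_state(n), n, m) - expected) < 1e-9
```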
\end{proof}
\subsection{B.3. Correlations for X measurements on fully-connected three-uniform hypergraph states}
\begin{lemma}
If every party makes a measurement in the $X$ direction, then the expectation value is
\begin{equation}
\langle \underset{N}{\underbrace{XX\dots XX}}\rangle =\begin{cases}
\begin{array}[t]{cc}
0 & \mbox{\ensuremath{\mbox{if \ensuremath{N=0} mod \ensuremath{4}}}},\\
1 & \mbox{\ensuremath{\mbox{if \ensuremath{N=2} mod \ensuremath{4}}}}.
\end{array}\end{cases}
\end{equation}
\end{lemma}
\begin{proof}
In this proof we employ the notation introduced in detail in the proof of Lemma \ref{lemma3full}.
\begin{equation} \label{allX3unif}
\langle K \rangle:=\bra{H_3^N} XX \dots XX\ket{H_3^N}=\bra{+}^{\otimes N} \Big[ \prod_{e\in E} C_e \Big] XX\dots XX \Big[ \prod_{e\in E} C_e\Big] \ket{+}^{\otimes N}.
\end{equation}
We use the identity (\ref{identity}) to regroup the phase gates on the right hand side of the expression (\ref{allX3unif}). Therefore, we count the multiplicity of the remaining phase gates:\\
\begin{tabular}{crlcc}
 & \#$\circledast\circledast\circledast$ & Each $C_{e},$ where $|e|=3$, occurs once and cancels with the one on the right hand side. & & \tabularnewline
 & \#$\circledast\circledast$ & $=\binom{N-2}{1}$ is even $\Rightarrow C_{\circledast\circledast}$ cancels. & & \tabularnewline
 & \#$\circledast$ & $=\binom{N-1}{2}=\frac{(N-1)(N-2)}{2}$ is $\begin{cases}
\begin{array}[t]{cc}
\mbox{odd, if }N=0\ mod\ 4 & \Rightarrow C_{\circledast}\mbox{ remains.}\\
\mbox{even, if \ensuremath{N=2\ }\ensuremath{mod\ 4}} &\Rightarrow C_{\circledast}\mbox{ cancels.}
\end{array}\end{cases}$ & & \tabularnewline
 & \#$\{\}$ & $=\binom{N}{3}=\frac{N(N-1)(N-2)}{2\cdot3}\ $ is even $\Rightarrow$ the global sign is positive.
& & \tabularnewline
\end{tabular}\\
Therefore, we need to consider two cases to continue the derivation of the expression (\ref{allX3unif}): \\
\textbf{Case \#1:} If $N=0\ mod \; 4$, then
\begin{equation}
\langle K \rangle =\bra{+}^{\otimes N}XX\dots XX \prod_{\forall \circledast} C_{\circledast} \ket{+}^{\otimes N}= \bra{+}^{\otimes N}\prod_{\forall \circledast} C_{\circledast} \ket{+}^{\otimes N} =0,
\end{equation}
since $\prod_{\forall \circledast} C_{\circledast}=Z^{\otimes N}$ and $\bra{+}Z\ket{+}=0$ on every site.
\textbf{Case \#2:} If $N=2\ mod \; 4$, then
\begin{equation}
\langle K \rangle = \bra{+}^{\otimes N}XX\dots XX \ket{+}^{\otimes N}=\Big[\braket{+}{+}\Big] ^N=1.
\end{equation}
\end{proof}
\subsection{B.4. Correlations for Z measurements on fully-connected three-uniform hypergraph states}
\begin{lemma}
If every party makes a measurement in the $Z$ direction, the expectation value is zero, that is,
\begin{equation}
\langle K \rangle :=\langle ZZ\dots ZZ\rangle=0.
\end{equation}
\end{lemma}
\begin{proof}
\begin{equation}
\langle K \rangle = \bra{+}^{\otimes N}\Big[\prod_{e \in E}C_e\Big]ZZ\dots ZZ\Big[\prod_{e \in E}C_e\Big] \ket{+}^{\otimes N}=\bra{+}^{\otimes N}ZZ\dots ZZ \ket{+}^{\otimes N}=\frac{1}{2^N} Tr(Z\dots ZZ)=0.
\end{equation}
\end{proof}
\section{Appendix C: Proof of Observation 5}
\noindent {\bf Observation 5.} {\it If we fix in the Bell operator $\ensuremath{\mathcal{B}}_N$ in Eq.~(\ref{bell2}) the measurements to be $A=Z$ and $B=X$, then the $N$-qubit fully-connected three-uniform hypergraph state violates the classical bound by an amount that grows exponentially with the number of qubits, namely
\begin{align}
\left\langle \ensuremath{\mathcal{B}}_N \right\rangle_C &\leq 2^{\left\lfloor N/2\right\rfloor} \quad \mbox{ for local HV models, and} \nonumber \\
\left\langle \ensuremath{\mathcal{B}}_N \right\rangle_Q &\geq 2^{N-2}-\frac{1}{2} \quad \mbox{ for the hypergraph state.}
\end{align}
}
The classical bound can be computed using the bound for the Mermin inequality, i.e., $\mean{\ensuremath{\mathcal{B}}^M_N}_C\leq 2^{\lfloor N/2 \rfloor}$. If $N$ is odd, our Bell inequality is exactly the Mermin inequality. If $N$ is even, $\mean{\ensuremath{\mathcal{B}}_N}_C = \mean{A \cdot \ensuremath{\mathcal{B}}^M_{N-1} +B \cdot \tilde\ensuremath{\mathcal{B}}^M_{N-1}}_C \leq 2 \mean{\ensuremath{\mathcal{B}}^M_{N-1}}_C $, where $\tilde\ensuremath{\mathcal{B}}$ denotes the inequality where $A$ and $B$ are exchanged, and the claim follows. The quantum value is computed directly. In particular, for odd $N$, we have $ \left\langle \ensuremath{\mathcal{B}}_N \right\rangle_Q=\sum_{k \;even}^N\binom{N}{k}\frac{1}{2}=2^{N-2}. $ If $N=0\ {\rm mod}\ 4$, then $ \left\langle \ensuremath{\mathcal{B}}_N \right\rangle_Q=\sum_{k\ even}^{N-1}\binom{N}{k}\frac{1}{2}-\left\langle X\dots X \right\rangle=2^{N-2}-{1}/{2} $ and for $N=2\ {\rm mod}\ 4$, we have $ \left\langle \ensuremath{\mathcal{B}}_N \right\rangle_Q=\sum_{k\ even}^{N-1}\binom{N}{k}\frac{1}{2}-\left\langle X\dots X \right\rangle=2^{N-2}+{1}/{2}. $ Notice that, since Eq.~\eqref{bell2} in the main text is a full-correlation Bell inequality, it can be written as a sum of probabilities minus a constant equal to the number of terms.
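The exponential gap just computed is easy to tabulate; the sketch below (our illustration, using the closed-form values derived above, which give a violation for $N\geq 5$) compares the two bounds:

```python
# Illustration of Observation 5: quantum value 2^(N-2) -/+ 1/2 (from the
# correlations above) versus the classical bound 2^floor(N/2), for N >= 5.
def classical_bound(N):
    return 2 ** (N // 2)

def quantum_value(N):
    if N % 2 == 1:                  # odd N: Mermin-type value
        return float(2 ** (N - 2))
    if N % 4 == 0:                  # N = 0 mod 4
        return 2 ** (N - 2) - 0.5
    return 2 ** (N - 2) + 0.5       # N = 2 mod 4

assert all(quantum_value(N) > classical_bound(N) for N in range(5, 21))
```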
With the proper normalization, such probabilities exactly correspond to the success probability of computing a Boolean function (cf. conclusive discussion and Ref.~\cite{Hoban11}). \section{Appendix D: Correlations for four-uniform hypergraph states} \begin{lemma}\label{lemma4full} The following statements hold for N-qubit, four-uniform hypergraph states: \begin{enumerate} \item For the case $N=8k-2 \mbox{ or } \; N=8k-1,\; \mbox{ or } 8k $, we have:\\ $(i)$ \begin{equation}\label{third} \langle \underset{m}{\underbrace{X\dots X}}Z\dots Z\rangle =\begin{cases} \begin{array}[t]{cc} +\frac{2^{\left\lfloor N/ 2\right\rfloor -m}+1}{2^{\left\lfloor N/ 2\right\rfloor - \left\lfloor m/ 2\right\rfloor }} & \mbox{\ensuremath{\mbox{if \ensuremath{(m-1)=0} mod \ensuremath{4}}}},\\ -\frac{2^{\left\lfloor N/ 2\right\rfloor -m}+1}{2^{\left\lfloor N/ 2\right\rfloor - \left\lfloor m/ 2\right\rfloor }} & \mbox{\ensuremath{\mbox{if \ensuremath{(m-1)=2} mod \ensuremath{4}}}}. \end{array}\end{cases} \end{equation} $(ii)$ For $N=8k-1$, we have: \begin{equation} \langle \underset{N}{\underbrace{XX\dots XX}}\rangle =-1. \end{equation} For $N=8k-2$ or $N=8k$, these correlations will not be needed.\\ \item For $N=4k+1$, we have: \begin{equation} \langle \underset{m}{\underbrace{X\dots X}}Z\dots Z\rangle =\begin{cases} \begin{array}[t]{cc} +\frac{1}{2^{\left\lceil m/2\right\rceil }}, & \mbox{if \ensuremath{(m-1)=0 \ mod\ 4}},\\ -\frac{1}{2^{\left\lceil m/2\right\rceil }}, & \ \mbox{if \ensuremath{(m-1)=2 \ mod\ 4}},\\ \frac{1}{2^{\left\lfloor N/2\right\rfloor }}, & \mbox{if }\mbox{\ensuremath{m=N}}. \end{array}\end{cases} \end{equation}\\ \item For $N=8k+2,\; \mbox{ or } 8k+4 $, we have for even $m$:\\ (i) \begin{equation} \langle \underset{m}{\underbrace{X\dots X}}Z\dots Z\rangle =\begin{cases} \begin{array}[t]{cc} +\frac{2^{m/2-1}}{2^{N/2 }} & \mbox{if \ensuremath{(N-m)=0\ mod\ 4 }},\\ -\frac{2^{m/2-1}}{2^{N/2 }} & \mbox{if \ensuremath{(N-m)=2 \ mod\ 4 }}. 
\end{array}\end{cases}
\end{equation}\\
(ii)
\begin{equation}
\langle XX\dots XX\rangle =\frac{2^{\frac{N}{2}-1}+1}{2^{\frac{N}{2}}}.
\end{equation} \\
\item For $N=8k+3$, $\langle \underset{m}{\underbrace{X\dots X}}\underset{N-m-1}{\underbrace{Z\dots Z}}\mathbbm1\rangle$ for even $m$ gives exactly the same result as part 3, so we have:
\begin{equation}
\langle \underset{m}{\underbrace{X\dots X}}\underset{N-m-1}{\underbrace{Z\dots Z}}\mathbbm1\rangle=\begin{cases}
\begin{array}[t]{cc}
+\frac{2^{m/2-1}}{2^{M/2 }} & \mbox{if \ensuremath{(M-m)=0\ mod\ 4 }},\\
-\frac{2^{m/2-1}}{2^{M/2 }} & \mbox{if \ensuremath{(M-m)=2 \ mod\ 4 }},
\end{array}\end{cases}
\end{equation}
where $M=N-1.$
\end{enumerate}
\end{lemma}
\begin{proof}
Each part of the lemma needs a separate consideration. Note that the notation and machinery employed in this proof are based on the proof of Lemma \ref{lemma3full}; we therefore advise the reader to become familiar with that one first. \\
\textbf{ Part 1. } We consider the cases $N=8k-2$, $N=8k-1$ or $8k$, with odd $m$, together. \\
We prove $(i)$ first:\\
\begin{equation}\label{part1_1}
\langle G_1 \rangle = \bra{H_4^N} \underset{m}{\underbrace{X\dots X}}Z\dots Z\ket{H_4^N} =\bra{+}^{\otimes N}\Big(\prod_{e\in E}C_e \Big)XX\dots XZ\dots Z \Big(\prod_{e\in E}C_e \Big) \ket{+}^{\otimes N}.
\end{equation}
We need to use the identity (\ref{identity}) to regroup all the phase gates on the right hand side of the expression (\ref{part1_1}). If a phase gate occurs an even number of times, it gives an identity; otherwise, it is used in the further calculations. We consider each case separately in the following table:
\begin{tabular}{rrllll}
 & & \multicolumn{4}{l}{}\tabularnewline
\textbf{1.} & \#$\bigtriangleup\bigtriangleup\bigtriangleup$ & \multicolumn{4}{l}{$=\binom{m}{1}$ is odd $\Rightarrow C_{\bigtriangleup\bigtriangleup\bigtriangleup}$ remains.
}\tabularnewline & & & & & \tabularnewline \textbf{2.} & \#$\bigtriangleup\bigtriangleup$ & \multicolumn{4}{l}{$=\binom{m}{2}=\frac{m(m-1)}{2}$ is $\begin{cases} \begin{array}[t]{cc} \mbox{even, if (\ensuremath{m-1)=0} mod 4 } & \Rightarrow C_{\bigtriangleup\bigtriangleup}\mbox{ cancels.}\\ \mbox{odd, if (\ensuremath{m-1)=2} mod 4 } & \Rightarrow C_{\bigtriangleup\bigtriangleup}\mbox{ remains.} \end{array}\end{cases}$ }\tabularnewline & \#$\circledast\bigtriangleup\bigtriangleup$ & \multicolumn{4}{l}{$=\binom{m-1}{1}$ is even $\Rightarrow\; C_{\circledast\bigtriangleup\bigtriangleup}$ cancels.}\tabularnewline & & & & & \tabularnewline \textbf{3.} & \#$\bigtriangleup$ & \multicolumn{4}{l}{$=\binom{m}{3}=\frac{m(m-1)(m-2)}{2\cdot3}$ is $\begin{cases} \begin{array}[t]{cc} \mbox{even, if (\ensuremath{m-1)=0} mod 4 } &\Rightarrow C_{\bigtriangleup}\mbox{ cancels.}\\ \mbox{odd, if (\ensuremath{m-1)=2} mod 4 } & \Rightarrow C_{\bigtriangleup}\mbox{ remains.} \end{array}\end{cases}$}\tabularnewline & \#$\circledast\circledast\bigtriangleup$ & \multicolumn{4}{l}{$=\binom{m-2}{1}$ is odd $\Rightarrow$ $C_{\circledast\circledast\bigtriangleup}$ remains.}\tabularnewline & \#$\circledast\bigtriangleup$ & \multicolumn{4}{l}{$=\binom{m-1}{2}=\frac{(m-1)(m-2)}{2}$ is $\begin{cases} \begin{array}[t]{cc} \mbox{even, if (\ensuremath{m-1)=0} mod 4 } & \Rightarrow C_{\circledast\bigtriangleup}\mbox{ cancels.}\\ \mbox{odd, if (\ensuremath{m-1)=2} mod 4 } & \Rightarrow C_{\circledast\bigtriangleup}\mbox{ remains.} \end{array}\end{cases}$}\tabularnewline & & & & & \tabularnewline \textbf{4.} & \#$\circledast\circledast\circledast$ & \multicolumn{4}{l}{$=\binom{m-3}{1}$ is even $\Rightarrow\; C_{\circledast\circledast\circledast}$ cancels.}\tabularnewline & \#$\circledast\circledast$ & \multicolumn{4}{l}{$=\binom{m-2}{2}=\frac{(m-2)(m-3)}{2}$ is $\begin{cases} \begin{array}[t]{cc} \mbox{odd, if (\ensuremath{m-1)=0} mod 4 } & \Rightarrow C_{\circledast\circledast}\mbox{ remains.}\\ 
\mbox{even, if (\ensuremath{m-1)=2} mod 4 } & \Rightarrow C_{\circledast\circledast}\mbox{ cancels.}
\end{array}\end{cases}$}\tabularnewline
 & \#$\circledast$ & \multicolumn{4}{l}{$=\binom{m-1}{3}=\frac{(m-1)(m-2)(m-3)}{2\cdot3}$ is even $\Rightarrow\; C_{\circledast}$ cancels. }\tabularnewline
 & \#$\{\}$ & \multicolumn{4}{l}{$=\binom{m}{4}=\frac{m(m-1)(m-2)(m-3)}{2\cdot3\cdot4}\ $ affects the global sign ($GL$) and will be discussed separately.}\tabularnewline
\end{tabular}
\begin{center}
\textbf{Table 1.} Counting phase gates for a four-uniform case. $m$ is odd.
\end{center}
$\quad $\\
\textbf{Remark:} All four-qubit phase gates move with multiplicity one to the right side and therefore cancel out with the same phase gate on the right. The detailed reasoning was discussed in the proof of Lemma \ref{lemma3full} in Appendix B, so we skip such scenarios in Table 1. Now we consider the two cases of $(m-1)$ separately and for each case we fix the global sign ($GL$) defined in Table 1.\\
\textbf{Case \# 1:} $(m-1)=0$ mod 4:
\begin{equation}\label{part1_2}
\langle G_1 \rangle =\pm\bra{+}^{\otimes N} XX\dots XZ\dots Z \prod_{\forall \circledast, \forall\bigtriangleup } C_{\bigtriangleup \bigtriangleup \bigtriangleup} C_{\circledast \circledast \bigtriangleup}C_{\circledast\circledast} \ket{+}^{\otimes N}=\pm \frac{1}{2^N} Tr( \prod_{\forall \circledast, \forall\bigtriangleup } C_{\bigtriangleup \bigtriangleup \bigtriangleup} C_{\circledast \circledast \bigtriangleup}C_{\circledast\circledast}C_{\bigtriangleup}).
\end{equation}
\textbf{Remark:} We write the '$\pm$' sign as we have not fixed the global sign yet.\\
To evaluate the trace of the given diagonal matrix, we need to find the difference between the number of $+1$'s and $-1$'s on the diagonal. We write every row in the computational basis by enumerating it with the binary notation.
Due to the symmetry of the problem, we assign the first $m$ columns to the $X$ measurements ($\circledast$) and the rest to $Z$ ($\bigtriangleup$). For each row, we denote by $\alpha$ the number of 1's in binary notation appearing in the first $m$ columns and by $\beta$, the same on the rest; see the proof of Lemma \ref{lemma3full} for a more detailed explanation.\\
Considering the phase gates in (\ref{part1_2}), the expression $(-1)^s$ defines whether in the given row the diagonal element is $+1$ or $-1$, where:
\begin{equation}
s:=\binom{\beta}{ 3}+\binom{\alpha}{2}\binom{\beta}{1}+\binom{\alpha}{2}+\binom{\beta}{1}=\frac{\beta (\beta-1)(\beta-2)}{2\cdot 3}+\frac{\alpha(\alpha -1)}{2}(\beta+1)+\beta.
\end{equation}
The sign of the diagonal element is determined as follows:
\begin{tabular}{lllc}
\textbf{1. } & $\alpha$ is even \& $\beta$ is even: & \multicolumn{2}{l}{if $\alpha=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline
 & & \multicolumn{2}{l}{if $\alpha=2 $ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline
 & & & \tabularnewline
\textbf{2. } & $\alpha$ is odd \& $\beta$ is even: & \multicolumn{2}{l}{if $(\alpha-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline
 & & \multicolumn{2}{l}{if $(\alpha-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline
 & & & \tabularnewline
\textbf{3.} & (Any $\alpha$) \& $\beta$ is odd: & \multicolumn{2}{l}{if $(\beta-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline
 & & \multicolumn{2}{l}{if $(\beta-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline
\end{tabular}\\
Having established the $\pm 1$ values for each row, we can sum them up to find the trace in (\ref{part1_2}).
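This case analysis can be cross-checked mechanically; the sketch below (ours, not part of the proof) evaluates $(-1)^s$ directly from the formula for $s$ and reproduces the table:

```python
# Cross-check (illustration only) of the sign table: s is the formula from
# the preceding equation, s = C(beta,3) + C(alpha,2)*(beta+1) + beta.
from math import comb

def sign(alpha, beta):
    return (-1) ** (comb(beta, 3) + comb(alpha, 2) * (beta + 1) + beta)

assert sign(4, 2) == +1   # alpha = 0 mod 4, beta even
assert sign(2, 2) == -1   # alpha = 2 mod 4, beta even
assert sign(1, 2) == +1   # (alpha-1) = 0 mod 4, beta even
assert sign(3, 2) == -1   # (alpha-1) = 2 mod 4, beta even
assert sign(2, 1) == -1   # beta odd, (beta-1) = 0 mod 4
assert sign(2, 3) == +1   # beta odd, (beta-1) = 2 mod 4
```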
Here we use the identities (\ref{identity1}) and (\ref{identity2}) and afterwards the look-up Table 0 to insert the numerical values where necessary:
\begin{align} \label{part1_3}
\begin{split}
\langle G_1 \rangle = &\pm\frac{1}{2^N}\Bigg[\sum^{N-m}_{\beta=0,2,4\dots } \binom{N-m}{\beta}\bigg[\sum_{\alpha =0,4,\dots}^{m}\binom{m}{\alpha}+\binom{m}{\alpha +1}-\binom{m}{\alpha +2}-\binom{m}{\alpha +3}\bigg] \\
+&\sum_{\beta =1,5,\dots} \bigg[-\binom{N-m}{\beta}\sum_\alpha \binom{m}{\alpha}+\binom{N-m}{\beta+2}\sum_\alpha \binom{m}{\alpha}\bigg]\Bigg]\\
= &\pm\frac{1}{2^N}\Bigg[\bigg[Re\Big[(1+i)^m \Big] +Im\Big[(1+i)^m\Big]\bigg]2^{N-m-1} + 2^m \sum^{N-m}_{\beta =1,5,\dots} \bigg[-\binom{N-m}{\beta}+\binom{N-m}{\beta+2}\bigg]\Bigg]\\
=&\pm\frac{1}{2^N}\Bigg[\bigg[Re \Big[(1+i)^m \Big] +Im\Big[(1+i)^m\Big]\bigg]2^{N-m-1} - 2^m Im\Big[(1+i)^{N-m}\Big]\Bigg] \equiv\pm\frac{1}{2^N} E.
\end{split}
\end{align}
We have to consider $N=8k-1$ and $N=8k$ or $N=8k-2$ separately to continue the derivation of (\ref{part1_3}):\\
1. For $N=8k-1$, using the values from Table 0:
\begin{equation}
\langle G_1 \rangle=\pm \frac{1}{2^N} \Bigg[2^{\frac{m+1}{2}}2^{N-m-1}- 2^m Im\Big[(1+i)^{N-m}\Big]\Bigg] =\pm\frac{2^{\frac{m+1}{2}}2^{N-m-1}+2^m2^{\frac{N-m}{2}}}{2^N}= \pm \frac{2^{\left\lfloor N/ 2\right\rfloor -m}+1}{2^{\left\lfloor N/ 2\right\rfloor - \left\lfloor m/ 2\right\rfloor }}.
\end{equation} \\
2. For $N=8k$ or $N=8k-2$, using the values from Table 0:
\begin{equation}
\langle G_1 \rangle=\pm \frac{1}{2^N} \Bigg[2^{\frac{m+1}{2}}2^{N-m-1}- 2^m Im\Big[(1+i)^{N-m}\Big]\Bigg] =\pm\frac{2^{\frac{m+1}{2}}2^{N-m-1}+2^m2^{\frac{N-m-1}{2}}}{2^N}= \pm \frac{2^{\left\lfloor N/ 2\right\rfloor -m}+1}{2^{\left\lfloor N/ 2\right\rfloor - \left\lfloor m/ 2\right\rfloor }}.
\end{equation}
Therefore,
\begin{equation}\label{part1_4}
\langle G_1 \rangle = \pm \frac{2^{\left\lfloor N/ 2\right\rfloor -m}+1}{2^{\left\lfloor N/ 2\right\rfloor - \left\lfloor m/ 2\right\rfloor }}.
\end{equation}
The sign in (\ref{part1_4}) is determined by the product of two components: the global sign $GL$ from case \textbf{4} of Table 1, and the sign of $E$ from (\ref{part1_3}). If $(m-1)=0$ mod 8, $E$ has a positive sign and also $GL=+1$, and if $(m-1)=4$ mod 8, $E$ has a negative sign and $GL=-1$. Therefore, in both cases, or equivalently for $(m-1) =0\; mod\; 4$,
\begin{equation}
\langle G_1\rangle =+\frac{2^{\left\lfloor N/ 2\right\rfloor -m}+1}{2^{\left\lfloor N/ 2\right\rfloor - \left\lfloor m/ 2\right\rfloor }}.
\end{equation}
\textbf{Case \# 2:} $(m-1)=2$ mod 4:
\begin{align}\label{par2_1}
\begin{split}
\langle G_1 \rangle & =\pm\bra{+}^{\otimes N} XX\dots XZ\dots Z \prod_{\forall \circledast, \forall\bigtriangleup } C_{\bigtriangleup \bigtriangleup \bigtriangleup} C_{\bigtriangleup \bigtriangleup } C_{\circledast \circledast \bigtriangleup}C_{\circledast\bigtriangleup}C_{\bigtriangleup } \ket{+}^{\otimes N}\\
 & =\pm \frac{1}{2^N} Tr( \prod_{\forall \circledast, \forall\bigtriangleup } C_{\bigtriangleup \bigtriangleup \bigtriangleup} C_{\bigtriangleup \bigtriangleup } C_{\circledast \circledast \bigtriangleup}C_{\circledast\bigtriangleup}).
\end{split}
\end{align}
In this case, we apply the same technique to determine the sign:
\begin{equation}
s:=\binom{\beta}{ 3}+\binom{\beta}{ 2}+\binom{\alpha}{2}\binom{\beta}{1}+\binom{\alpha}{1}\binom{\beta}{1}=\frac{\beta (\beta-1)(\beta-2)}{2\cdot 3}+\frac{\beta(\beta-1)}{2}+\frac{\alpha(\alpha -1)}{2}\beta+\alpha\beta.
\end{equation}
The sign $(-1)^s$ is determined as follows: \\
\begin{tabular}{lllc}
\textbf{1. } & $\beta$ is even \& any $\alpha$ & \multicolumn{2}{l}{if $\beta=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline
 & & \multicolumn{2}{l}{if $\beta=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline
 & & & \tabularnewline
\textbf{2.
} & $\beta$ is odd \& $\alpha$ is even: & \multicolumn{2}{l}{if $\alpha=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline
 & & \multicolumn{2}{l}{if $\alpha=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline
 & & & \tabularnewline
\textbf{3.} & $\beta$ is odd \& $\alpha$ is odd: & \multicolumn{2}{l}{if $(\alpha-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline
 & & \multicolumn{2}{l}{if $(\alpha-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline
\end{tabular}\\
\begin{align} \label{part2_2}
\begin{split}
\langle G_1\rangle =& \pm\frac{1}{2^N} \Bigg[\sum_{\beta=0,4\dots } \bigg[\binom{N-m}{\beta}\sum_{\alpha =0}^{m}\binom{m}{\alpha}-\binom{N-m}{\beta +2}\sum_{\alpha =0}^{m}\binom{m}{\alpha}\bigg] \\
+ &\sum_{\beta=1,3,\dots } \binom{N-m}{\beta}\bigg[\sum_{\alpha =0,4,\dots}^{m}\binom{m}{\alpha}-\binom{m}{\alpha +1}-\binom{m}{\alpha +2}+\binom{m}{\alpha +3}\bigg]\Bigg]\\
= & \pm \frac{1}{2^N}\Bigg[2^m \sum_{\beta =0,4,\dots} \bigg[\binom{N-m}{\beta}-\binom{N-m}{\beta+2}\bigg]+2^{N-m-1} \bigg[Re\Big[(1+i)^m \Big] -Im\Big[(1+i)^m\Big]\bigg]\Bigg]\\
= & \pm \frac{1}{2^N}\Bigg[2^m Re\Big[(1 +i)^{N-m}\Big] +2^{N-m-1} \bigg[Re\Big[(1+i)^m \Big] -Im\Big[(1+i)^m\Big]\bigg]\Bigg]\equiv \pm\frac{1}{2^N}E.
\end{split}
\end{align}
We have to consider $N=8k-1$ and $N=8k$ or $N=8k-2$ separately to continue the derivation of (\ref{part2_2}):\\
1. For $N=8k-1$, using the values from Table 0:
\begin{equation}
\langle G_1 \rangle=\pm \frac{1}{2^N} \Bigg[ 2^m Re\Big[(1+i)^{N-m}\Big]+2^{\frac{m+1}{2}}2^{N-m-1}\Bigg] =\pm\frac{2^m2^{\frac{N-m}{2}}+2^{\frac{m+1}{2}}2^{N-m-1}}{2^N}= \pm \frac{2^{\left\lfloor N/ 2\right\rfloor -m}+1}{2^{\left\lfloor N/ 2\right\rfloor - \left\lfloor m/ 2\right\rfloor }}.
\end{equation}
2.
For $N=8k$ or $N=8k-2$, using the values from Table 0:
\begin{equation}
\langle G_1 \rangle=\pm \frac{1}{2^N} \Bigg[ 2^m Re\Big[(1+i)^{N-m}\Big]+2^{\frac{m+1}{2}}2^{N-m-1}\Bigg] =\pm\frac{2^m2^{\frac{N-m-1}{2}}+2^{\frac{m+1}{2}}2^{N-m-1}}{2^N}= \pm \frac{2^{\left\lfloor N/ 2\right\rfloor -m}+1}{2^{\left\lfloor N/ 2\right\rfloor - \left\lfloor m/ 2\right\rfloor }}.
\end{equation}
Therefore,
\begin{equation}\label{part2_3}
\langle G_1 \rangle = \pm \frac{2^{\left\lfloor N/ 2\right\rfloor -m}+1}{2^{\left\lfloor N/ 2\right\rfloor - \left\lfloor m/ 2\right\rfloor }}.
\end{equation}
The sign in (\ref{part2_3}) is determined by the product of two components: the global sign $GL$ from case \textbf{4} of Table 1, and the sign of $E$ in (\ref{part2_2}).\\
Hence, if $(m-3)=0$ mod 8, $E$ has a negative sign and $GL=+1$, and if $(m-3)=4$ mod 8, $E$ has a positive sign and $GL=-1$. Therefore, in both cases, or equivalently for $(m-1) =2\; mod\; 4$,
\begin{equation}\label{part2_4}
\langle G_1 \rangle = - \frac{2^{\left\lfloor N/ 2\right\rfloor -m}+1}{2^{\left\lfloor N/ 2\right\rfloor - \left\lfloor m/ 2\right\rfloor }}.
\end{equation}
This completes the proof of part 1 (i).\\
\textbf{Part 1} $(ii)$: For $N=8k-1$, we show that
\begin{equation}
\langle G_2\rangle=\langle \underset{N}{\underbrace{XX\dots XX}}\rangle =-1.
\end{equation}
Here as well we use the identity (\ref{identity}) to count the multiplicity of the remaining phase gates.
Since all the measurements are in the $X$ direction, we need to make a new table with the same notation as in the previous case:\\
\begin{tabular}{lrl}
 & & \tabularnewline
$1.$ & $\#\circledast\circledast\circledast\circledast$ & every gate occurs only once $\Rightarrow$ every $C_{e}$ cancels with the $C_{e}$ on the right hand side.\tabularnewline
 & & \tabularnewline
$2.$ & $\#\circledast\circledast\circledast$ & $\binom{8k-4}{1}$ is even $\Rightarrow \; C_{\circledast\circledast\circledast}$ cancels.\tabularnewline
 & & \tabularnewline
$3.$ & $\#\circledast\circledast$ & $\binom{8k-3}{2}$ is even $\Rightarrow \; C_{\circledast\circledast}$ cancels.\tabularnewline
 & & \tabularnewline
$4.$ & $\#\circledast$ & $\binom{8k-2}{3}$ is even $\Rightarrow C_{\circledast}$ cancels.\tabularnewline
 & & \tabularnewline
$5.$ & $\#\{\}$ & $\binom{8k-1}{4}$ is odd $\Rightarrow$ we get a global negative sign. \tabularnewline
\end{tabular}
\begin{center}
\textbf{Table 2.} Counting phase gates for a four-uniform HG when each system is measured in the $X$ direction.
\end{center}
$\quad $\\
Therefore,
\begin{equation}
\langle G_2 \rangle=-\frac{1}{2^N}Tr\big(\mathbbm{1}\big)=-1.
\end{equation}
This finishes the proof of part 1.\\
\textbf{Part 2:} We show that, for $N=4k+1$:
\begin{equation}
\langle G_3\rangle=\langle \underset{m}{\underbrace{X\dots X}}Z\dots Z\rangle =\begin{cases}
\begin{array}[t]{cc}
+\frac{1}{2^{\left\lceil m/2\right\rceil }}, & \mbox{if \ensuremath{(m-1)=0 \ mod\ 4}},\\
-\frac{1}{2^{\left\lceil m/2\right\rceil }}, & \ \mbox{if \ensuremath{(m-1)=2 \ mod\ 4}},\\
\frac{1}{2^{\left\lfloor N/2\right\rfloor }}, & \mbox{if }\mbox{\ensuremath{m=N}}.
\end{array}\end{cases}
\end{equation}
Since the number of systems measured in the $X$ direction is the same in this part as it was in part 1, we can use the results demonstrated in Table 1. Therefore, we use equation (\ref{part1_3}) when $m-1=0$ mod 4 and (\ref{part2_2}) when $m-1=2$ mod 4.
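The reduction of the Case \#1 expression below can also be checked numerically; the sketch (our illustration, with a hypothetical helper name, evaluating the bracket of equation (\ref{part1_3})) confirms that for $N=4k+1$ and $(m-1)=0$ mod 4 the magnitude is $2^{-\lceil m/2\rceil}$:

```python
# Illustration of Part 2, Case 1: evaluate the bracket of Eq. (part1_3),
# ((Re + Im)[(1+i)^m] * 2^(N-m-1) - 2^m * Im[(1+i)^(N-m)]) / 2^N,
# for N = 4k+1 and (m-1) = 0 mod 4, where Im[(1+i)^(N-m)] vanishes.
from math import ceil

def case1_value(N, m):
    z = (1 + 1j) ** m
    w = (1 + 1j) ** (N - m)
    return ((z.real + z.imag) * 2 ** (N - m - 1) - 2 ** m * w.imag) / 2 ** N

for N, m in [(5, 1), (9, 1), (9, 5), (13, 5), (13, 9)]:
    assert abs(abs(case1_value(N, m)) - 2 ** -ceil(m / 2)) < 1e-9
```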
\\ \textbf{Case \# 1:} $(m-1)=0$ mod 4: \begin{equation}\label{part3_1} \langle G_3 \rangle=\pm\frac{1}{2^N}\Bigg[\bigg[Re \Big[(1+i)^m \Big] +Im\Big[(1+i)^m\Big]\bigg]2^{N-m-1} - 2^m Im\Big[(1+i)^{N-m}\Big]\Bigg]=\pm\frac{1}{2^N} E. \end{equation} As $N=4k+1$, we have that $\; N-m=4k+1-m=4k-(m-1)$, which is divisible by $4$. Therefore $Im\Big[(1+i)^{N-m}\Big]=0$, and equation (\ref{part3_1}) reduces to: \begin{equation}\label{part3_2} \langle G_3 \rangle=\pm\frac{2^{N-m-1}}{2^N}\bigg[Re \Big[(1+i)^m \Big] +Im\Big[(1+i)^m\Big]\bigg]=\pm \frac{1}{2^{\left\lceil m/2\right\rceil }}. \end{equation} We need to fix the global sign $GL$ from Table 1. For this, we consider two cases. First, if $(m-1)=0$ mod 8, then $GL=+$ and so is the sign of $E$ in equation (\ref{part3_1}): \begin{equation} \langle G_3 \rangle=+\frac{1}{2^{\left\lceil m/2\right\rceil }}. \end{equation} Second, if $(m-1)=4$ mod 8, then $GL=-$ and so is the sign of $E$ in equation (\ref{part3_1}): \begin{equation} \langle G_3 \rangle=+\frac{1}{2^{\left\lceil m/2\right\rceil }}. \end{equation}\\ \textbf{Case \# 2:} For $(m-1)=2$ mod 4: \begin{equation}\label{part3_3} \langle G_3 \rangle= \pm \frac{1}{2^N}\Bigg[2^m Re\Big[(1 +i)^{N-m}\Big] +2^{N-m-1} \bigg[Re\Big[(1+i)^m \Big] -Im\Big[(1+i)^m\Big]\bigg]\Bigg]. \end{equation} As $N=4k+1$, we have $ N-m=4k+1-m=4k-(m-1)$, which is not divisible by $4$ but is an even number. Therefore, $Re\Big[(1+i)^{N-m}\Big]=0$. So, equation (\ref{part3_3}) reduces to: \begin{equation}\label{part3_4} \langle G_3 \rangle= \pm \frac{2^{N-m-1}}{2^N} \bigg[Re\Big[(1+i)^m \Big] -Im\Big[(1+i)^m\Big]\bigg]\equiv \pm \frac{2^{N-m-1}}{2^N} E. \end{equation} We need to fix the global sign $GL$ from Table 1. For this, we consider two cases. First, if $(m-3)=0$ mod 8, then the global sign is positive but the sign of $E$ in (\ref{part3_4}) is negative. Therefore, \begin{equation} \langle G_3 \rangle=-\frac{1}{2^{\left\lceil m/2\right\rceil }}.
\end{equation} Second, if $(m-3)=4$ mod 8, then the global sign is negative but the sign of $E$ in (\ref{part3_4}) is positive. Therefore, \begin{equation} \langle G_3 \rangle=-\frac{1}{2^{\left\lceil m/2\right\rceil }}. \end{equation} \textbf{Case \# 3:} $m=N$ resembles part 1 $(ii)$. The only difference is the number of qubits we are currently working with: \begin{tabular}{lrl} & & \tabularnewline $1.$ & $\#\circledast\circledast\circledast\circledast$ & every gate occurs only once $\Rightarrow$ every $C_{e}$ cancels with the $C_{e}$ on the right hand side.\tabularnewline & & \tabularnewline $2.$ & $\#\circledast\circledast\circledast$ & $\binom{4k-2}{1}$ is even $\Rightarrow \; C_{\circledast\circledast\circledast}$ cancels.\tabularnewline & & \tabularnewline $3.$ & $\#\circledast\circledast$ & $\binom{4k-1}{2}$ is odd $\Rightarrow C_{\circledast\circledast}$ remains. \tabularnewline & & \tabularnewline $4.$ & $\#\circledast$ & $\binom{4k}{3}$ is even $\Rightarrow C_{\circledast}$ cancels.\tabularnewline & & \tabularnewline $5.$ & $\#\{\}$ & the global sign depends on $k$, as $\binom{4k+1}{4}=\frac{(4k+1)4k(4k-1)(4k-2)}{2\cdot 3\cdot 4}$ \tabularnewline \end{tabular}\\ \begin{center} \textbf{Table 3.} Counting phase gates for a four-uniform HG when each system is measured in $X$ direction. \end{center} $\quad $\\ Back to the expectation value, \begin{equation} \langle G_3 \rangle= \pm \frac{1}{2^N}Tr\bigg[\prod_{\forall \circledast}C_{\circledast\circledast}\bigg]. \end{equation} So, we have to count the difference between the numbers of $+1$'s and $-1$'s on the diagonal. As we use exactly the same techniques as before, we will skip the detailed explanation. The sign on the diagonal is: \begin{equation} (-1)^{\binom{\alpha}{2}}=(-1)^{\frac{\alpha(\alpha-1)}{2}}, \end{equation} and it is straightforward to evaluate it for each value of $\alpha$.
\\ \begin{equation}\label{aaa} \langle G_3 \rangle=\pm\frac{1}{2^N}\Bigg [\sum_{\alpha =0,4,\dots}^{N}\binom{N}{\alpha}+\binom{N}{\alpha +1}-\binom{N}{\alpha +2}-\binom{N}{\alpha +3}\Bigg ] =\pm\frac{1}{2^N}\bigg[Re \Big[(1+i)^N \Big] +Im\Big[(1+i)^N\Big]\bigg]\equiv \pm\frac{1}{2^N}E. \end{equation}\\ Keeping in mind that $N=4k+1$, the global sign from Table 3 is positive for even $k$ and negative for odd $k$. The sign of $E$ in (\ref{aaa}) is likewise positive if $k$ is even and negative otherwise. Therefore, \begin{equation} \langle G_3 \rangle=\frac{1}{2^{\left\lfloor N/2\right\rfloor }}. \end{equation} This completes the proof of part 2.\\ \textbf{Part 3:} We start with $(ii)$. We show that, for $N=8k+2 \mbox{ or } 8k+4$:\\ (ii) \begin{equation} \langle G_4 \rangle= \langle XX\dots XX\rangle =\frac{2^{\frac{N}{2}-1}+1}{2^{\frac{N}{2}}}. \end{equation} Although the results look identical, each case needs a separate treatment. The technique is similar to the previous proofs, though. We just mind the number of qubits we are working with:\\ For $N=8k+2$ we find the remaining phase gates as follows: \\ \begin{tabular}{lrl} & & \tabularnewline $1.$ & $\#\circledast\circledast\circledast\circledast$ & each gate occurs only once; thus, every $C_{e}$ cancels with the $C_{e}$ on the right hand side.\tabularnewline & & \tabularnewline $2.$ & $\#\circledast\circledast\circledast$ & $\binom{8k-1}{1}$ is odd $\Rightarrow \; C_{\circledast\circledast\circledast}$ remains.\tabularnewline & & \tabularnewline $3.$ & $\#\circledast\circledast$ & $\binom{8k}{2}$ is even $\Rightarrow \; C_{\circledast\circledast}$ cancels.\tabularnewline & & \tabularnewline $4.$ & $\#\circledast$ & $\binom{8k+1}{3}$ is even $\Rightarrow C_{\circledast }$ cancels.\tabularnewline & & \tabularnewline $5.$ & $\#\{\}$ & $\binom{8k+2}{4}$ is even $\Rightarrow$ we get a global positive sign.
\tabularnewline & & \tabularnewline \end{tabular} \begin{center} \textbf{Table 4.} Counting phase gates for a four-uniform HG when each system is measured in $X$ direction. \end{center} $\quad $\\ Therefore, \begin{equation} \langle G_4 \rangle=\frac{1}{2^N}Tr\Big[ \prod_{\forall \circledast}C_{\circledast\circledast\circledast}\Big]. \end{equation} We use $(-1)^s$ to define the sign of the diagonal element and $s=\binom{\alpha}{3}$. So, after considering all possible values of $\alpha$, it is directly obtained that \begin{align} \begin{split} \langle G_4 \rangle= \frac{1}{2^N} Tr(C_{\circledast\circledast\circledast}) & =\frac{1}{2^N}\bigg[\sum_{\alpha=0,2,\dots}^N\binom{N}{\alpha} + \sum_{\alpha=1,5,\dots}\Big[ \binom{N}{\alpha}-\binom{N}{\alpha+2}\Big] \bigg] \\ & =\frac{1}{2^N}\bigg[2^{N-1} +Im\big[(1+i)^N \big]\bigg] =\frac{2^{\frac{N}{2}-1}+1}{2^{\frac{N}{2}}}. \end{split} \end{align}\\ For $N=8k+4$ we find the remaining phase gates as follows: \\ \begin{tabular}{lrl} & & \tabularnewline $1.$ & $\#\circledast\circledast\circledast\circledast$ & each gate occurs only once $\Rightarrow$ every $C_{e}$ cancels with the $C_{e}$ on the right hand side.\tabularnewline & & \tabularnewline $2.$ & $\#\circledast\circledast\circledast$ & $ \binom{8k+1}{1}$ is odd $\Rightarrow \; C_{\circledast\circledast\circledast}$ remains.\tabularnewline & & \tabularnewline $3.$ & $\#\circledast\circledast$ & $\binom{8k+2}{2}$ is odd $\Rightarrow\; C_{\circledast\circledast}$ remains.\tabularnewline & & \tabularnewline $4. $ & $\#\circledast$ & $\binom{8k+3}{3}$ is odd $\Rightarrow \; C_{\circledast}$ remains.\tabularnewline & & \tabularnewline $5. $ & $\#\{\}$ & $\binom{8k+4}{4}$ is odd $\Rightarrow$ we get a global negative sign, $GL=-1$. \tabularnewline & & \tabularnewline \end{tabular} \nopagebreak[9] \begin{center} \nopagebreak[9] \textbf{Table 5.} Counting phase gates for a four-uniform HG when each system is measured in $X$ direction.
\end{center} Therefore, \begin{equation} \langle G_4\rangle =-\frac{1}{2^N} Tr\Big[\prod_{\forall \circledast}C_{\circledast\circledast\circledast}C_{\circledast\circledast}C_{\circledast}\Big]. \end{equation} We use $(-1)^s$ to define the sign of the diagonal element and $s=\binom{\alpha}{3}+\binom{\alpha}{2}+\binom{\alpha}{1}$. So, after considering all possible values of $\alpha$, it is directly obtained that \begin{align} \begin{split} \langle G_4\rangle =-\frac{1}{2^N} Tr(C_{\circledast\circledast\circledast}C_{\circledast\circledast}C_{\circledast})& =-\frac{1}{2^N}\bigg[\sum_{\alpha=0,4,\dots}^N\Big[\binom{N}{\alpha}-\binom{N}{\alpha+2}\Big] -\sum_{\alpha=1,3,\dots}^N \binom{N}{\alpha}\bigg]\\ & =-\frac{1}{2^N}\bigg[-2^{N-1} +Re\big[(1+i)^N \big]\bigg] =\frac{2^{N-1}+2^{N/2}}{2^N} =\frac{2^{\frac{N}{2}-1}+1}{2^{\frac{N}{2}}}. \end{split} \end{align} This finishes the proof of part $(ii)$. $(i)$ We need to show that \begin{equation} \langle G_4\rangle =\langle \underset{m}{\underbrace{X\dots X}}Z\dots Z\rangle =\begin{cases} \begin{array}[t]{cc} +\frac{2^{m/2-1}}{2^{N/2 }} & \mbox{if \ensuremath{(N-m)=0\ mod\ 4 }},\\ -\frac{2^{m/2-1}}{2^{N/2 }} & \mbox{if \ensuremath{(N-m)=2 \ mod\ 4 }}. \end{array}\end{cases} \end{equation} Note that in this case $m$ is an even number. Therefore, we have to derive again from scratch how the phase gates can be moved to the right hand side of the expression, and for this we use the identity ($\ref{identity}$).
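Before deriving the table, the claims of part 3, both the all-$X$ values and the mixed even-$m$ pattern, can be cross-checked by brute force for $N=10$ and $N=12$ (a numerical sketch, not part of the proof; it assumes the amplitude sign $(-1)^{\binom{|x|}{4}}$ for basis states of the complete four-uniform HG state):

```python
from itertools import product
from math import comb

def expval(N, m):
    # Brute-force <X..X (m) Z..Z (N-m)> on the complete four-uniform HG state
    # (amplitude sign (-1)^C(|x|,4)).
    sgn = lambda w: (-1) ** comb(w, 4)
    total = 0
    for x in product((0, 1), repeat=N):
        y = tuple(1 - b for b in x[:m]) + x[m:]   # X flips the first m bits
        z = (-1) ** sum(x[m:])                    # Z eigenvalues on the rest
        total += sgn(sum(y)) * sgn(sum(x)) * z
    return total / 2 ** N

for N in (10, 12):                                # N = 8k+2 and N = 8k+4
    # all-X case: (2^(N/2-1) + 1) / 2^(N/2)
    assert abs(expval(N, N) - (2 ** (N // 2 - 1) + 1) / 2 ** (N // 2)) < 1e-12
    for m in range(2, N, 2):                      # mixed X/Z pattern, even m < N
        s = 1 if (N - m) % 4 == 0 else -1
        assert abs(expval(N, m) - s * 2 ** (m // 2 - 1) / 2 ** (N // 2)) < 1e-12
print("part 3 verified for N = 10, 12")
```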
\begin{tabular}{rrllll} & & \multicolumn{4}{l}{}\tabularnewline \textbf{1.} & \#$\bigtriangleup\bigtriangleup\bigtriangleup$ & \multicolumn{4}{l}{$=\binom{m}{1}$ is even $\Rightarrow\; C_{\bigtriangleup\bigtriangleup\bigtriangleup}$ cancels.}\tabularnewline & & & & & \tabularnewline \textbf{2.} & \#$\bigtriangleup\bigtriangleup$ & \multicolumn{4}{l}{$=\binom{m}{2}=\frac{m(m-1)}{2}$ is $\begin{cases} \begin{array}[t]{cc} \mbox{even, if \ensuremath{m=0} mod 4} & \Rightarrow C_{\bigtriangleup\bigtriangleup}\mbox{ cancels.}\\ \mbox{odd, if \ensuremath{m=2} mod 4} & \Rightarrow C_{\bigtriangleup\bigtriangleup}\mbox{ remains.} \end{array}\end{cases}$ }\tabularnewline & \#$\circledast\bigtriangleup\bigtriangleup$ & \multicolumn{4}{l}{$=\binom{m-1}{1}$ is odd $\Rightarrow C_{\circledast\bigtriangleup\bigtriangleup}$ remains.}\tabularnewline & & & & & \tabularnewline \textbf{3.} & \#$\bigtriangleup$ & \multicolumn{4}{l}{$=\binom{m}{3}=\frac{m(m-1)(m-2)}{2\cdot3}$ is even $\Rightarrow\; C_{\bigtriangleup}$ cancels.}\tabularnewline & \#$\circledast\circledast\bigtriangleup$ & \multicolumn{4}{l}{$=\binom{m-2}{1}$ is even $\Rightarrow\; C_{\circledast\circledast\bigtriangleup} $ cancels.}\tabularnewline & \#$\circledast\bigtriangleup$ & \multicolumn{4}{l}{$=\binom{m-1}{2}=\frac{(m-1)(m-2)}{2}$ is $\begin{cases} \begin{array}[t]{cc} \mbox{odd, if \ensuremath{m=0} mod 4} & \Rightarrow C_{\circledast\bigtriangleup}\mbox{ remains.}\\ \mbox{even, if \ensuremath{m=2} mod 4} & \Rightarrow C_{\circledast\bigtriangleup}\mbox{ cancels.} \end{array}\end{cases}$}\tabularnewline & & & & & \tabularnewline \textbf{4.} & \#$\circledast\circledast\circledast$ & \multicolumn{4}{l}{$=\binom{m-3}{1}$ is odd $\Rightarrow \; C_{\circledast\circledast\circledast}$ remains. 
}\tabularnewline & \#$\circledast\circledast$ & \multicolumn{4}{l}{$=\binom{m-2}{2}=\frac{(m-2)(m-3)}{2}$ is $\begin{cases} \begin{array}[t]{cc} \mbox{odd, if \ensuremath{m=0} mod 4} & \Rightarrow C_{\circledast\circledast}\mbox{ remains.}\\ \mbox{even, if \ensuremath{m=2} mod 4} & \Rightarrow C_{\circledast\circledast}\mbox{ cancels.} \end{array}\end{cases}$}\tabularnewline & \#$\circledast$ & \multicolumn{4}{l}{$=\binom{m-1}{3}=\frac{(m-1)(m-2)(m-3)}{2\cdot3}$ is $\begin{cases} \begin{array}[t]{cc} \mbox{odd, if \ensuremath{m=0} mod 4} & \Rightarrow C_{\circledast}\mbox{ remains.}\\ \mbox{even, if \ensuremath{m=2} mod 4} & \Rightarrow C_{\circledast}\mbox{ cancels.}\end{array}\end{cases}$ }\tabularnewline & \#$\{\}$ & \multicolumn{4}{l}{$=\binom{m}{4}=\frac{m(m-1)(m-2)(m-3)}{2\cdot3\cdot4}\ $ affects the global sign ($GL$) and will be discussed separately.}\tabularnewline \end{tabular} \nopagebreak[9] \begin{center} \textbf{Table 6.} Counting phase gates for a four-uniform HG state, for even $m$. \end{center} $\;$\\ \textbf{Remark:} Similarly to previous proofs the four-qubit phase gates cancel out. Therefore, we directly skip the discussion about them. We need to consider two cases, when $m=0$ mod 4 and $m=2$ mod 4 for each $N=8k+2$ and $8k+4$ separately:\\ \textbf{Case \# 1:} If $m=0$ mod 4:\\ \begin{align}\label{part4_1} \begin{split} \langle G_4 \rangle & =\pm\bra{+}^{\otimes N} XX\dots XZ\dots Z \prod_{\forall \circledast, \forall\bigtriangleup } C_{\circledast \bigtriangleup \bigtriangleup}C_{\circledast\bigtriangleup} C_{\circledast \circledast \circledast } C_{\circledast \circledast }C_{\circledast} \ket{+}^{\otimes N}\\ & =\pm \frac{1}{2^N} Tr\Big[ \prod_{\forall \circledast, \forall\bigtriangleup } C_{\circledast \bigtriangleup \bigtriangleup}C_{\circledast\bigtriangleup} C_{\circledast \circledast \circledast } C_{\circledast \circledast }C_{\circledast}C_{\bigtriangleup} \Big]. 
\end{split} \end{align} We use $(-1)^s$ to define the sign of the diagonal element and $s=\binom{\alpha}{1}\binom{\beta}{2}+\binom{\alpha}{1}\binom{\beta}{1}+\binom{\alpha}{3}+\binom{\alpha}{2}+\binom{\alpha}{1}+\binom{\beta}{1}$. If $s$ is even, the value on the diagonal is $+1$, and $-1$ otherwise. We consider all possible values of $\alpha$ and $\beta$:\\ \begin{tabular}{lllc} \textbf{1. } & $\alpha$ is even \& $\beta$ is even & \multicolumn{2}{l}{if $\alpha=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $\alpha=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & & \tabularnewline \textbf{2. } & $\alpha$ is even \& $\beta$ is odd: & \multicolumn{2}{l}{if $\alpha=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $\alpha=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline \end{tabular}\\ From here one can easily spot that for even $\alpha$, there is an equal number of $+1$'s and $-1$'s on the diagonal. So, they do not contribute to the calculations. We now consider the odd $\alpha$:\\ \begin{tabular}{lllc} \textbf{3. } & $\alpha$ is odd \& $\beta$ is even & \multicolumn{2}{l}{if $\beta=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $\beta=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & & \tabularnewline \textbf{4.
} & $\alpha$ is odd \& $\beta$ is odd: & \multicolumn{2}{l}{if $(\beta-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $(\beta-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline \end{tabular}\\ We now continue the calculation of the trace from ($\ref{part4_1}$): \begin{align}\label{part4_2} \begin{split} \langle G_4 \rangle & =\pm \frac{1}{2^N} \sum_{\alpha=1,3,5\dots } \binom{m}{\alpha} \bigg[\sum_{\beta =0,4,\dots}^{N-m}-\binom{N-m}{\beta}-\binom{N-m}{\beta +1}+\binom{N-m}{\beta +2}+\binom{N-m}{\beta +3}\bigg] \\ & = \pm \frac{2^{m-1}}{2^N} \bigg[-Re \Big[(1+i)^{N-m} \Big] - Im\Big[(1+i)^{N-m}\Big]\bigg]=\pm \frac{2^{m-1}}{2^N} \bigg( \mp2^{\frac{N-m}{2}} \bigg). \end{split} \end{align} We have to take care of the sign, which is the product of the sign of the bracket of real and imaginary parts in ($\ref{part4_2}$) and the global sign $GL$ defined while deriving the remaining phase gates. If $m$ is divisible by $8$, $GL=+1$, and since we are in the $N=8k+2$ case, $N-m=8k+2-m=2$ mod $8$, so that $Im\big[(1+i)^{N-m}\big]=+2^{\frac{N-m}{2}}$; therefore: \begin{equation} \langle G_4 \rangle= \frac{2^{m-1}}{2^N} \bigg( -2^{\frac{N-m}{2}} \bigg) = -\frac{2^{m/2-1}}{2^{N/2 }}. \end{equation} If $m$ is not divisible by $8$ (that is, $m=4$ mod $8$), the global sign is $GL=-1$, and now $N-m=6$ mod $8$, so $Im\big[(1+i)^{N-m}\big]=-2^{\frac{N-m}{2}}$. Thus, \begin{equation} \langle G_4 \rangle= -\frac{2^{m-1}}{2^N} \bigg( -(-2^{\frac{N-m}{2}}) \bigg) = -\frac{2^{m/2-1}}{2^{N/2 }}. \end{equation}\\ Since the $N=8k+4$ case is analogous, we only have to mind the sign coming from the real part (here $N-m$ is divisible by $4$, so the imaginary part vanishes). Again we consider two cases: if $m$ is divisible by $8$, then the global sign is $GL=+1$ and $Re\big[(1+i)^{N-m}\big]=-2^{\frac{N-m}{2}}$. Therefore, \begin{equation} \langle G_4 \rangle=\frac{2^{m-1}}{2^N} \bigg( -(-2^{\frac{N-m}{2}}) \bigg) = +\frac{2^{m/2-1}}{2^{N/2 }}. \end{equation} And if $m$ is not divisible by $8$, $GL=-1$ and $Re\big[(1+i)^{N-m}\big]=+2^{\frac{N-m}{2}}$.
Therefore, \begin{equation} \langle G_4 \rangle= -\frac{2^{m-1}}{2^N} \bigg( -(+2^{\frac{N-m}{2}}) \bigg) = +\frac{2^{m/2-1}}{2^{N/2 }}. \end{equation}\\ \textbf{Case \# 2:} If $m=2$ mod 4: \begin{align}\label{part2ii} \begin{split} \langle G_4\rangle & =\pm\bra{+}^{\otimes N} XX\dots XZ\dots Z \prod_{\forall \circledast, \forall\bigtriangleup } C_{\circledast \bigtriangleup \bigtriangleup} C_{\circledast \circledast \circledast } C_{\bigtriangleup \bigtriangleup } \ket{+}^{\otimes N}\\ & =\pm \frac{1}{2^N} Tr( \prod_{\forall \circledast, \forall\bigtriangleup } C_{\circledast \bigtriangleup \bigtriangleup} C_{\circledast \circledast \circledast } C_{\bigtriangleup \bigtriangleup } C_{ \bigtriangleup } ). \end{split} \end{align}\\ We use $(-1)^s$ to define the sign of the diagonal element and $s=\binom{\alpha}{1}\binom{\beta}{2}+\binom{\alpha}{3}+\binom{\beta}{2}+\binom{\beta}{1}$. If $s$ is even, the value on the diagonal is $+1$, and $-1$ otherwise. We consider all possible values of $\alpha$ and $\beta$:\\ We first consider the terms with odd $\alpha$:\\ \begin{tabular}{lllc} \textbf{1. } & $\alpha$ is odd \& $\beta$ is even & \multicolumn{2}{l}{if $(\alpha-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $(\alpha-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & & \tabularnewline \textbf{2. } & $\alpha$ is odd \& $\beta$ is odd: & \multicolumn{2}{l}{if $(\alpha-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $(\alpha-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline \end{tabular} It is easy to see that these cases add up to $0$. \\ \begin{tabular}{lllc} \textbf{3. } & $\alpha$ is even \& $\beta$ is even & \multicolumn{2}{l}{if $\beta=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $\beta=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & & \tabularnewline \textbf{4.
} & $\alpha$ is even \& $\beta$ is odd: & \multicolumn{2}{l}{if $(\beta-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $(\beta-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline \end{tabular}\\ Therefore, \begin{align} \begin{split} \langle G_4\rangle=\pm\frac{1}{2^N}\sum_{\alpha=0,2,4\dots } \binom{m}{\alpha} \bigg[\sum_{\beta =0,4,\dots}^{N-m}\binom{N-m}{\beta} & -\binom{N-m}{\beta +1}-\binom{N-m}{\beta +2}+\binom{N-m}{\beta +3}\bigg] \\ =\pm \frac{2^{m-1}}{2^N} \bigg[Re \Big[(1+i)^{N-m} & \Big] - Im\Big[(1+i)^{N-m}\Big]\bigg]=\pm\frac{2^{m/2-1}}{2^{N/2 }}. \end{split} \end{align} To fix the sign, we first consider $N=8k+2$ and $(m-2)=0$ mod $8$. Then the global sign is $GL=+1$, and the value of $N-m$ also yields a positive sign. But if $(m-2)=4$ mod $8$, the global sign is negative, and $N-m$ also yields a negative sign. So, \\ \begin{equation} \langle G_4 \rangle=\frac{2^{m/2-1}}{2^{N/2 }}. \end{equation} The $N=8k+4$ case is treated identically, so we just state the result: for $N=8k+4$, \begin{equation} \langle G_4 \rangle=-\frac{2^{m/2-1}}{2^{N/2 }}. \end{equation} To sum up, \begin{equation} \langle G_4\rangle =\begin{cases} \begin{array}[t]{cc} +\frac{2^{m/2-1}}{2^{N/2 }} & \mbox{if \ensuremath{(N-m)=0\ mod\ 4 }},\\ -\frac{2^{m/2-1}}{2^{N/2 }} & \mbox{if \ensuremath{(N-m)=2 \ mod\ 4 }}. \end{array}\end{cases} \end{equation}\\ This finishes the proof of part 3.\\ \textbf{Part 4: } We show that for $N=8k+3$, $\langle \underset{m}{\underbrace{X\dots X}}\underset{N-m-1}{\underbrace{Z\dots Z}}\mathbbm1\rangle$ for even $m$ gives exactly the same result as part 3 (i).\\ We tackle the problem as follows: we make a measurement on one of the qubits in the $Z$ direction and, depending on the measurement outcome, obtain a new state $\ket{H_{4_{new}}^{M}}$, where $M:=N-1$. Then we consider the expectation values for all possible measurement outcomes and states $\ket{H_{4_{new}}^{M}}$.
From that we conclude the statement in part 4. \\ The initial HG state $\ket{H_4^N}$ can also be written in the following form: \begin{equation} \ket{H_4^N}=\prod_e C_e\ket{+}^{\otimes N}=\prod_{e',e''}C_{e'}C_{e''}\ket{+}^{\otimes N}=\prod_{e',e''}\big[\mathbbm{1} _{e'\backslash N}\ket{0}_N\bra{0}_N + C_{e'\backslash N}\ket{1}_N\bra{1}_N \big]C_{e''}\ket{+}^{\otimes N}, \end{equation} where $e'$ runs over the edges containing the last qubit $N$, and $e''$ over those which do not contain the $N^{th}$ qubit; $e=e'+e''$. Then, if one makes a measurement in the $Z$ basis on the last qubit and obtains the outcome $+$, \begin{align} \begin{split} \ket{H_{4_{new}}^{M+}}=\braket{0_N}{H_4^N}&=\prod_{e',e''}\big[\bra{0}_N\mathbbm{1} _{e'\backslash N}\ket{0}_N\bra{0}_N+\bra{0}_NC_{e'\backslash N}\ket{1}_N\bra{1}_N \big]C_{e''}\ket{+}^{\otimes N}\\ &=\prod_{e',e''}\mathbbm{1} _{e'\backslash N}\bra{0}_NC_{e''}\ket{+}^{\otimes N}=\bra{0}_N\prod_{e',e''}\mathbbm{1} _{e'\backslash N}C_{e''}\ket{+}^{\otimes M}\big(\ket{0}_N+\ket{1}_N\big)\\ &=\prod_{e',e''}\mathbbm{1} _{e'\backslash N}C_{e''}\ket{+}^{\otimes M}=\prod_{e''}C_{e''}\ket{+}^{\otimes M}. \end{split} \end{align} So, the $+$ outcome after measuring in the $Z$ direction leaves us with $\ket{H_{4_{new}}^{M+}}$, which is precisely the four-uniform $M$-qubit HG state. Now let us see what the remaining state is if the outcome is $-$: \begin{align} \begin{split} \ket{H_{4_{new}}^{M-}}=\braket{1_N}{H_4^N}&=\prod_{e',e''}\big[\bra{1}_N\mathbbm{1} _{e'\backslash N}\ket{0}_N\bra{0}_N+\bra{1}_NC_{e'\backslash N}\ket{1}_N\bra{1}_N \big]C_{e''}\ket{+}^{\otimes N}\\ &=\prod_{e',e''}C_{e'\backslash N}\bra{1}_N C_{e''}\ket{+}^{\otimes N}= \bra{1}_N \prod_{e',e''}C_{e'\backslash N}C_{e''}\ket{+}^{\otimes M}\big(\ket{0}_N+\ket{1}_N\big)\\ &=\prod_{e',e''}C_{e'\backslash N}C_{e''}\ket{+}^{\otimes M}.
\end{split} \end{align} So, the $-$ outcome after measuring in the $Z$ direction leaves us with $\ket{H_{4_{new}}^{M-}}$, which is precisely a symmetric $M$-qubit HG state with all possible edges of cardinality four and three. We will call such an HG state a three- and four-uniform HG state.\\ Therefore, the problem boils down to showing that:\\ $(i)$ if the measurement outcome is $+$, we get the $M=8k+2$ four-uniform HG state, and the correlations are given in part 3;\\ $(ii)$ if the measurement outcome is $-$, we get the $M=8k+2$ three- and four-uniform HG state, and the following holds: \begin{equation} \langle G_5^-\rangle=\bra{H_{4_{new}}^{M-}} \underset{m}{\underbrace{X\dots X}}Z\dots Z\ket{H_{4_{new}}^{M-}}=\begin{cases} \begin{array}[t]{cc} -\frac{2^{m/2-1}}{2^{M/2 }} & \mbox{if \ensuremath{(M-m)=0\ mod\ 4 }},\\ +\frac{2^{m/2-1}}{2^{M/2 }} & \mbox{if \ensuremath{(M-m)=2 \ mod\ 4 }}. \end{array}\end{cases} \end{equation} $(i)$ $\ket{H_{4_{new}}^{M+}}$, where $M=8k+2$, was already considered in part 3.\\ $(ii)$ For $\ket{H_{4_{new}}^{M-}}$, \begin{equation} \langle G_5^-\rangle=\bra{+}^{\otimes M}\Big[\prod_{e',e''\in E} C_{e' }C_{e''}\Big]\underset{m}{\underbrace{X\dots X}}Z\dots Z\Big[\prod_{e',e''\in E} C_{e'}C_{e''}\Big]\ket{+}^{\otimes M}. \end{equation} Before, we treated the three- and four-uniform cases separately. Now we just need to put them together. \\ \textbf{Case \# 1:} If $m=0$ mod 4:\\ Then from equations (\ref{appendixAeq6}) and (\ref{part4_1}), we can directly write down that \begin{equation} \langle G_5^-\rangle=\frac{1}{2^M}Tr\bigg[\prod_{\forall \circledast, \forall\bigtriangleup } C_{\circledast \bigtriangleup \bigtriangleup} C_{\circledast \circledast \circledast } C_{\circledast \circledast } C_{\bigtriangleup}\bigg]. \end{equation} We check the sign of each term on the diagonal by $(-1)^s$, where $s=\binom{\alpha}{1}\binom{\beta}{2}+\binom{\alpha}{3}+\binom{\alpha}{2}+\binom{\beta}{1}$.
For this we need to consider each value of $\alpha$ and $\beta$ separately.\\ \begin{tabular}{lllll} \multirow{2}{*}{\textbf{1. }} & \multirow{2}{*}{$\alpha$ is even \& $\beta$ is even} & $\alpha=0$ mod 4 $\Rightarrow$ $+1$ & & \multirow{5}{*}{$\Rightarrow$ These two give zero contribution together.}\tabularnewline & & $\alpha=2$ mod 4 $\Rightarrow$ $-1$ & & \tabularnewline & & & & \tabularnewline \multirow{2}{*}{\textbf{2.}} & \multirow{2}{*}{$\alpha$ is even \& $\beta$ is odd} & $\alpha=0$ mod 4 $\Rightarrow$ $-1$ & & \tabularnewline & & $\alpha=2$ mod 4 $\Rightarrow$ $+1$ & & \tabularnewline \end{tabular}\\ \\ \begin{tabular}{lllll} \multirow{2}{*}{\textbf{3. }} & \multirow{2}{*}{$\alpha$ is odd \& $\beta$ is even} & $\beta=0$ mod 4 $\Rightarrow$ $+1$ & & \multirow{5}{*}{}\tabularnewline & & $\beta=2$ mod 4 $\Rightarrow$ $-1$ & & \tabularnewline & & & & \tabularnewline \multirow{2}{*}{\textbf{4.}} & \multirow{2}{*}{$\alpha$ is odd \& $\beta$ is odd} & $\beta-1=0$ mod 4 $\Rightarrow$ $-1$& & \tabularnewline & & $\beta-1=2$ mod 4 $\Rightarrow$ $+1$ & & \tabularnewline \end{tabular}\\ \begin{align}\label{part5_1} \begin{split} \langle G_5^- \rangle & =\pm\frac{1}{2^{M}}\sum_{\alpha\ odd}^{m}\binom{m}{\alpha}\bigg[\sum_{\beta=0,4}^{M-m}\binom{M-m}{\beta}-\binom{M-m}{\beta+1}-\binom{M-m}{\beta+2}+\binom{M-m}{\beta+3}\bigg]\\ &=\pm\frac{2^{m-1}}{2^{M}}\bigg[Re(1+i)^{M-m}-Im(1+i)^{M-m}\bigg]=\pm\frac{2^{\frac{m}{2}-1}}{2^{M/2}}. \end{split} \end{align} If $m=0$ mod 8, the combination of real and imaginary parts in (\ref{part5_1}) has a negative sign, and the global sign $GL$ coming from Table 6 is positive. Note from equation (\ref{3unifGL}) that moving the three-uniform gates does not introduce any global sign. And if $m=4$ mod 8, the combination of real and imaginary parts in (\ref{part5_1}) has a positive sign, and the global sign $GL$ coming from Table 6 is negative. Therefore, \begin{equation} \langle G_5^+ \rangle =-\frac{2^{\frac{m}{2}-1}}{2^{M/2}}.
\end{equation} \textbf{Case \# 2:} If $m=2$ mod 4:\\ Then from equations (\ref{appendixAeq8}) and (\ref{part2ii}), we can directly write down that \begin{equation} \langle G_5^-\rangle=\pm\frac{1}{2^{M}}Tr\bigg[\prod_{\forall \circledast, \forall\bigtriangleup } C_{\circledast\triangle\triangle}C_{\circledast\triangle}C_{\circledast\circledast\circledast}C_{\triangle\triangle}\bigg]. \end{equation} We check the sign of each term on the diagonal by $(-1)^s$, where $s=\binom{\alpha}{1}\binom{\beta}{2}+\binom{\alpha}{3}+\binom{\beta}{2}+\binom{\alpha}{1}\binom{\beta}{1}$. For this we need to consider each value of $\alpha$ and $\beta$ separately.\\ \begin{tabular}{lllll} \multirow{2}{*}{\textbf{1. }} & \multirow{2}{*}{$\alpha$ is even \& $\beta$ is even} & $\beta=0$ mod 4 $\Rightarrow$ $+1$ & & \multirow{5}{*}{}\tabularnewline & & $\beta=2$ mod 4 $\Rightarrow$ $-1$ & & \tabularnewline & & & & \tabularnewline \multirow{2}{*}{\textbf{2.}} & \multirow{2}{*}{$\alpha$ is even \& $\beta$ is odd} & $\beta-1=0$ mod 4 $\Rightarrow$ $+1$ & & \tabularnewline & & $\beta-1=2$ mod 4 $\Rightarrow$ $-1$ & & \tabularnewline \end{tabular}\\ $\quad$\\ \begin{tabular}{lllll} \multirow{2}{*}{\textbf{3. 
}} & \multirow{2}{*}{$\alpha$ is odd \& $\beta$ is even} & & $\alpha-1=0$ mod 4 $\Rightarrow$ $+1$ & \multirow{4}{*}{$\Rightarrow$ These two give zero contribution together.}\tabularnewline & & & $\alpha-1=2$ mod 4 $\Rightarrow$ $-1$ & \tabularnewline \multirow{2}{*}{\textbf{4.}} & \multirow{2}{*}{$\alpha$ is odd \& $\beta$ is odd} & & $\alpha-1=0$ mod 4 $\Rightarrow$ $-1$ & \tabularnewline & & & $\alpha-1=2$ mod 4 $\Rightarrow$ $+1$ & \tabularnewline \end{tabular} \begin{align}\label{part5_2} \begin{split} \langle G_{5}^-\rangle & = \pm\frac{1}{2^{M}}\sum_{\alpha\ even}^{m}\binom{m}{\alpha}\bigg[\sum_{\beta=0,4}^{M-m}\binom{M-m}{\beta}+\binom{M-m}{\beta+1}-\binom{M-m}{\beta+2}-\binom{M-m}{\beta+3}\bigg]\\ & = \pm\frac{1}{2^{M}} 2^{m-1}\bigg[Re(1+i)^{M-m}+Im(1+i)^{M-m}\bigg]=\pm\frac{2^{\frac{m}{2}-1}}{2^{M/2}}. \end{split} \end{align} If $m-2=0$ mod 8, the combination of real and imaginary parts in (\ref{part5_2}) has a positive sign, and the global sign $GL$ coming from Table 6 is positive. Note from equation (\ref{3unifGL}) that moving the three-uniform gates does not introduce any global sign. And if $m-2=4$ mod 8, the combination of real and imaginary parts in (\ref{part5_2}) has a negative sign, and the global sign $GL$ coming from Table 6 is negative. Therefore, \begin{equation} \langle G_5^- \rangle =+\frac{2^{\frac{m}{2}-1}}{2^{M/2}}. \end{equation} Finally, we can put everything together. Since one can observe that $\langle G_5^- \rangle=-\langle G_5^+ \rangle$, \begin{equation} \ket{0}\bra{0}\langle G_5^+ \rangle-\ket{1}\bra{1}\langle G_5^- \rangle=\ket{0}\bra{0}\langle G_5^+ \rangle+\ket{1}\bra{1}\langle G_5^+\rangle =\mathbbm{1}\langle G_5^+\rangle= \langle \underset{m}{\underbrace{X\dots X}}\underset{N-m-1}{\underbrace{Z\dots Z}}\mathbbm1\rangle. \end{equation} This completes the proof of part 4 and of the entire lemma.
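The result of part 4 can be cross-checked by brute force for the smallest admissible case $N=11$ (a numerical sketch, not part of the argument; it assumes the amplitude sign $(-1)^{\binom{|x|}{4}}$ for basis states of the complete four-uniform HG state):

```python
from itertools import product
from math import comb

def expval_id(N, m):
    # Brute-force <X..X (m) Z..Z (N-m-1) 1> on the complete four-uniform
    # HG state, whose |x> amplitude sign is (-1)^C(|x|,4).
    sgn = lambda w: (-1) ** comb(w, 4)
    total = 0
    for x in product((0, 1), repeat=N):
        y = tuple(1 - b for b in x[:m]) + x[m:]   # X flips the first m bits
        z = (-1) ** sum(x[m:N - 1])               # Z on the middle, identity on the last qubit
        total += sgn(sum(y)) * sgn(sum(x)) * z
    return total / 2 ** N

N = 11                                            # N = 8k+3, so M = N-1 = 10
M = N - 1
for m in range(2, M, 2):
    s = 1 if (M - m) % 4 == 0 else -1             # same sign rule as part 3 (i), with N -> M
    assert abs(expval_id(N, m) - s * 2 ** (m // 2 - 1) / 2 ** (M // 2)) < 1e-12
print("part 4 verified for N = 11")
```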
\end{proof} \section{Appendix E: Bell inequality violations for fully-connected four-uniform hypergraph states} \begin{lemma}\label{theorem4unifviol} An arbitrary four-uniform HG state violates the classical bound by an amount that grows exponentially with the number of qubits. \end{lemma} \begin{proof} To show this, either in Eq.~(\ref{bell2}) or in the original Mermin operator, $\langle \ensuremath{\mathcal{B}}_N^M \rangle $, we fix $A=Z$ and $B=X$. The choice of the Bell operator depends on the number of qubits: Lemma~\ref{lemma4full} gives, for a given $N$, the correlations either for even $m$ or for odd $m$. If $m$ is even, we choose Eq.~(\ref{bell2}), and $\langle \ensuremath{\mathcal{B}}_N^M \rangle$ otherwise. From Lemma \ref{lemma4full}, it is clear that we need to consider separate cases. However, we choose the case with the smallest growth of the violation, which is the $N=8k+3$ case; the other cases differ only by constant factors or grow faster. For $N=8k+3$, the strategy consists of measuring the Bell operator from Eq.~(\ref{bell2}) on $M=N-1$ qubits. Then we have: \begin{align} \begin{split} \langle \ensuremath{\mathcal{B}}_N \rangle _Q & \geq\sum_{m=2,4\dots}^{M}\binom{M}{m}\Big(\frac{1}{\sqrt{2}}\Big)^{M-m+2} = \frac{1}{2}\bigg[\sum_{m\ even}^{M} \binom{M}{m} \Big(\frac{1}{\sqrt{2}}\Big)^{M-m}\bigg]-\Big(\frac{1}{\sqrt{2}}\Big)^{M+2}\\ & =\frac{1}{4}\bigg[ \Big(1+\frac{1}{\sqrt{2}} \Big)^M +\Big(1-\frac{1}{\sqrt{2}} \Big)^M\bigg]-\Big(\frac{1}{\sqrt{2}}\Big)^{M+2}. \end{split} \end{align} Checking the ratio of the quantum and classical values, we have that \begin{align} \begin{split} \frac{\langle \ensuremath{\mathcal{B}}_N \rangle _Q}{\langle \ensuremath{\mathcal{B}}_N \rangle _C} \stackrel{N\rightarrow \infty}{\sim} \frac{\frac{1}{4}\Big(1+\frac{1}{\sqrt{2}} \Big)^{N-1} }{2^{\frac{N-1}{2}}} = \frac{\Big(1+\frac{1}{\sqrt{2}}\Big)^{N-1} }{\sqrt{2}^{N+3}} \approx \frac{1.20711^{N}}{2\sqrt{2} +2} .
\end{split} \end{align} Looking at the expectation values from Lemma \ref{lemma4full}, it is straightforward to see that in all other cases of $N$, the correlations are stronger than in the $N=8k+3$ case, so the quantum violation increases. \end{proof} \section{Appendix F: Bell and separability inequality violations for fully-connected four-uniform hypergraph states after losing one qubit} \begin{lemma} The following statement holds for $N=8k+4$ qubit, four-uniform complete hypergraph states: \begin{equation}\label{reducedlemma} \langle G_6 \rangle = \langle \underset{m}{\underbrace{X\dots X}}\underset{N-m-1}{\underbrace{Z\dots Z}}\mathbbm1\rangle=\begin{cases} \begin{array}[t]{cc} -\Big(\frac{1}{\sqrt{2}}\Big)^{N-m+2} & \mbox{if \ensuremath{m=0\ mod\ 4 }},\\ +\Big(\frac{1}{\sqrt{2}}\Big)^{N-m+2} & \mbox{if \ensuremath{m=2 \ mod\ 4 }}. \end{array}\end{cases} \end{equation} \end{lemma} \begin{proof} The derivation of this result is very similar to the combinatorial calculations in Appendices D and E.
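Before the case analysis, the claim (\ref{reducedlemma}) can be cross-checked by brute force for the smallest instance $N=12$ (a numerical sketch, not part of the proof; it assumes the amplitude sign $(-1)^{\binom{|x|}{4}}$ for basis states of the complete four-uniform HG state):

```python
from itertools import product
from math import comb

def expval_id(N, m):
    # Brute-force <X..X (m) Z..Z (N-m-1) 1> on the complete four-uniform
    # HG state, whose |x> amplitude sign is (-1)^C(|x|,4).
    sgn = lambda w: (-1) ** comb(w, 4)
    total = 0
    for x in product((0, 1), repeat=N):
        y = tuple(1 - b for b in x[:m]) + x[m:]   # X flips the first m bits
        z = (-1) ** sum(x[m:N - 1])               # Z on the middle, identity on the last qubit
        total += sgn(sum(y)) * sgn(sum(x)) * z
    return total / 2 ** N

N = 12                                            # N = 8k+4 with k = 1
for m in range(2, N - 1, 2):
    s = -1 if m % 4 == 0 else 1                   # sign rule of the lemma
    assert abs(expval_id(N, m) - s * (1 / 2) ** ((N - m + 2) / 2)) < 1e-12
print("lemma verified for N = 12")
```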
Since $m$ is even, we refer to Table 6 to see what gates remain after regrouping hyperedges on the right hand side of the expression (\ref{reducedlemma}): \textbf{Case \# 1:} If $m=0$ mod 4:\\ \begin{align}\label{part6_1} \begin{split} \langle G_6 \rangle & =\pm\bra{+}^{\otimes N} \underset{m}{\underbrace{X\dots X}}\underset{N-m-1}{\underbrace{Z\dots Z}}\mathbbm1 \prod_{\forall \circledast,\bigtriangleup,\diamondsuit } C_{\circledast \bigtriangleup \bigtriangleup}C_{\circledast\bigtriangleup} C_{\circledast \bigtriangleup \diamondsuit}C_{\circledast\diamondsuit} C_{\circledast \circledast \circledast } C_{\circledast \circledast }C_{\circledast} \ket{+}^{\otimes N}\\ & =\pm \frac{1}{2^N} Tr\Big[\prod_{\forall \circledast,\bigtriangleup,\diamondsuit } C_{\circledast \bigtriangleup \bigtriangleup}C_{\circledast\bigtriangleup} C_{\circledast \bigtriangleup \diamondsuit}C_{\circledast\diamondsuit} C_{\circledast \circledast \circledast }C_{\circledast \circledast }C_{\circledast}C_{\bigtriangleup} \Big]. \end{split} \end{align} Here $\circledast$ again refers to the $X$ operator, $\bigtriangleup$ to $Z$, and $\diamondsuit$ to $\mathbbm1$; the number of $\diamondsuit$'s is denoted by $\gamma$. The strategy now is similar to the previous proofs: count the numbers of $+1$'s and $-1$'s on the diagonal; their difference gives the trace, which divided by $2^N$ yields the expectation value. We use $(-1)^s$ to define the sign of the diagonal element and $s=\binom{\alpha}{1}\binom{\beta}{2}+\binom{\alpha}{1}\binom{\beta}{1}+\binom{\alpha}{1}\binom{\beta}{1}\binom{\gamma}{1}+\binom{\alpha}{1}\binom{\gamma}{1}+\binom{\alpha}{3}+\binom{\alpha}{2}+\binom{\alpha}{1}+\binom{\beta}{1}$. If $s$ is even, the value on the diagonal is $+1$, and $-1$ otherwise. We consider all possible values of $\alpha$, $\beta$, and $\gamma$:\\ a) If $\gamma$ is even (that is $\gamma=0$):\\ \begin{tabular}{lllc} \textbf{1.
} & $\alpha$ is even \& $\beta$ is even & \multicolumn{2}{l}{if $\alpha=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $\alpha=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & & \tabularnewline \textbf{2. } & $\alpha$ is even \& $\beta$ is odd: & \multicolumn{2}{l}{if $\alpha=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $\alpha=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline \end{tabular}\\ From here one can easily spot that for even $\alpha$, there are equal numbers of $+1$ and $-1$ on the diagonal. So, they do not contribute to the calculations. We now consider the odd $\alpha$:\\ \begin{tabular}{lllc} \textbf{3. } & $\alpha$ is odd \& $\beta$ is even & \multicolumn{2}{l}{if $\beta=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $\beta=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & & \tabularnewline \textbf{4. } & $\alpha$ is odd \& $\beta$ is odd: & \multicolumn{2}{l}{if $(\beta-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $(\beta-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline \end{tabular}\\ b) If $\gamma$ is odd (that is $\gamma=1$):\\ The cases \textbf{1} and \textbf{2} do not change. Therefore, they sum up to $0$. \\ \begin{tabular}{lllc} \textbf{3. } & $\alpha$ is odd \& $\beta$ is even & \multicolumn{2}{l}{if $\beta=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $\beta=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & & \tabularnewline \end{tabular}\\ \textbf{4. } Stays the same as in the previous case.
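Tallies of this kind are assembled below into alternating binomial sums of period four. The identities that convert such sums into $Re[(1+i)^n]$ and $Im[(1+i)^n]$ follow from the binomial theorem applied to $(1+i)^n=\sum_k \binom{n}{k}i^k$, and can be verified numerically (a standalone sketch, not part of the proof):

```python
from math import comb

def mod4_sums(n):
    """Binomial coefficients C(n,k) summed over the residue classes k mod 4."""
    return [sum(comb(n, k) for k in range(r, n + 1, 4)) for r in range(4)]

# From (1+i)^n = sum_k C(n,k) i^k:
#   s0 - s2 = Re[(1+i)^n]   and   s1 - s3 = Im[(1+i)^n],
# which is what turns the alternating sums in the proofs into powers of sqrt(2).
for n in range(1, 25):
    s0, s1, s2, s3 = mod4_sums(n)
    z = (1 + 1j) ** n        # components stay exact integers in this range
    assert s0 - s2 == round(z.real)
    assert s1 - s3 == round(z.imag)
```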
Therefore the result is: \begin{align}\label{F_1} \begin{split} \langle G_{6}\rangle & = \pm\frac{1}{2^{N}}\Big(\sum_{\gamma\; even}\binom{\gamma}{0}\Bigg[\sum_{\alpha\ odd}^{m}\binom{m}{\alpha}\bigg[\sum_{\beta=0,4\dots}^{N-m-1}-\binom{N-m-1}{\beta}-\binom{N-m-1}{\beta+1}+\binom{N-m-1}{\beta+2}+\binom{N-m-1}{\beta+3}\bigg]\Bigg]\\ &+\sum_{\gamma\; odd}\binom{\gamma}{1}\Bigg[\sum_{\alpha\ odd}^{m}\binom{m}{\alpha}\bigg[\sum_{\beta=0,4\dots}^{N-m-1}\binom{N-m-1}{\beta}-\binom{N-m-1}{\beta+1}-\binom{N-m-1}{\beta+2}+\binom{N-m-1}{\beta+3}\bigg]\Bigg]\Bigg)\\ &=\pm \frac{2^m}{2^N}\cdot (-Im[(1+i)^{N-m-1}])= \mp \bigg(\frac{1}{\sqrt{2}}\bigg)^{N-m+2} \end{split} \end{align} It remains to fix the sign. One needs to keep in mind that the overall sign of Eq. (\ref{F_1}) is negative in both subcases: if $m=4$ mod $8$, the global sign from Table 6 is negative, and $Im[(1+i)^{N-m-1}]=-2^\frac{N-m-2}{2}$. Therefore, the overall sign is negative. If $m=0$ mod $8$, the global sign is positive and $Im[(1+i)^{N-m-1}]=2^\frac{N-m-2}{2}$. Therefore, the overall sign is again negative. \pagebreak \textbf{Case \# 2:} If $m=2$ mod 4:\\ \begin{align}\label{part2ii} \begin{split} \langle G_6\rangle & =\pm\bra{+}^{\otimes N} \underset{m}{\underbrace{X\dots X}}\underset{N-m-1}{\underbrace{Z\dots Z}}\mathbbm1 \prod_{\forall \circledast,\bigtriangleup,\diamondsuit } C_{\circledast \bigtriangleup \bigtriangleup} C_{\circledast \circledast \circledast } C_{\bigtriangleup \bigtriangleup } C_{\circledast \bigtriangleup \diamondsuit} C_{\bigtriangleup \diamondsuit }\ket{+}^{\otimes N}\\ & =\pm \frac{1}{2^N} Tr( \prod_{\forall \circledast,\bigtriangleup,\diamondsuit } C_{\circledast \bigtriangleup \bigtriangleup} C_{\circledast \circledast \circledast } C_{\bigtriangleup \bigtriangleup } C_{\circledast \bigtriangleup \diamondsuit} C_{\bigtriangleup \diamondsuit } C_{ \bigtriangleup } ).
\end{split} \end{align}\\ We use $(-1)^s$ to define the sign of the diagonal element, with $s=\binom{\alpha}{1}\binom{\beta}{2}+\binom{\alpha}{1}\binom{\beta}{1}\binom{\gamma}{1}+\binom{\alpha}{3}+\binom{\beta}{2}+\binom{\beta}{1}\binom{\gamma}{1}+\binom{\beta}{1}$. If $s$ is even, the value on the diagonal is $+1$, and $-1$ otherwise. We consider all possible values of $\alpha$, $\beta$, and $\gamma$:\\ a) If $\gamma$ is even (that is $\gamma=0$): Considering the terms from even $\alpha$:\\ \begin{tabular}{lllc} \textbf{1. } & $\alpha$ is even \& $\beta$ is even & \multicolumn{2}{l}{if $\beta=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $\beta=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & & \tabularnewline \textbf{2. } & $\alpha$ is even \& $\beta$ is odd: & \multicolumn{2}{l}{if $(\beta-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $(\beta-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline \end{tabular} Considering odd $\alpha$: \begin{tabular}{lllc} \textbf{3. } & $\alpha$ is odd \& $\beta$ is even & \multicolumn{2}{l}{if $(\alpha-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $(\alpha-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & & \tabularnewline \textbf{4. } & $\alpha$ is odd \& $\beta$ is odd: & \multicolumn{2}{l}{if $(\alpha-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $(\alpha-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline \end{tabular}\\ It is easy to see that cases \textbf{3} and \textbf{4} add up to $0$. \\ b) If $\gamma$ is odd (that is $\gamma=1$):\\ \textbf{1.} Stays the same as in the previous case. \\ \textbf{2.} Gets the opposite sign and therefore cancels with case \textbf{2} of a) in the sum. \\ \textbf{3. } Stays the same as in the previous case.\\ \textbf{4. } Stays the same as in the previous case.
Therefore the result is: \begin{align}\label{part5_2} \begin{split} \langle G_{6}\rangle & = \pm\frac{1}{2^{N}} 2^m\bigg[\sum_{\beta=0,4..}^{N-m-1}\binom{N-m-1}{\beta}-\binom{N-m-1}{\beta+2}\bigg]=\pm\frac{2^m}{2^N}Re[(1+i)^{N-m-1}]. \end{split} \end{align} It remains to fix the sign: if $m=2$ mod $8$, the global sign from Table 6 is positive, and $Re[(1+i)^{N-m-1}]=2^\frac{N-m-2}{2}$. Therefore, the overall sign is positive. If $m=6$ mod $8$, the global sign is negative and $Re[(1+i)^{N-m-1}]=-2^\frac{N-m-2}{2}$. Therefore, the overall sign is again positive. \end{proof} \begin{lemma}\label{theorem4unifviol} An $N$-qubit ($N=8k+4$) four-uniform HG state, even after tracing out one party, violates the classical bound by an amount that grows exponentially with the number of qubits. Moreover, the violation decreases only by a constant factor. \end{lemma} \begin{proof} Denote $M\equiv N-1$. Then \begin{align} \begin{split} \langle \ensuremath{\mathcal{B}}_N \rangle _Q & =\sum_{m=2,4\dots}^{M}\binom{M}{m}\Big(\frac{1}{\sqrt{2}}\Big)^{M-m+3}= \frac{1}{2\sqrt{2}}\bigg[\sum_{m\ even}^{M} \binom{M}{m} \Big(\frac{1}{\sqrt{2}}\Big)^{M-m}\bigg]-\Big(\frac{1}{\sqrt{2}}\Big)^{M+3}\\ & =\frac{1}{4\sqrt{2}}\bigg[ \Big(1+\frac{1}{\sqrt{2}} \Big)^M -\Big(1-\frac{1}{\sqrt{2}} \Big)^M\bigg]-\Big(\frac{1}{\sqrt{2}}\Big)^{M+3} \end{split} \end{align} Checking the ratio of the quantum and classical values, we have that \begin{align} \begin{split} \frac{\langle \ensuremath{\mathcal{B}}_{N-1} \rangle _Q}{\langle \ensuremath{\mathcal{B}}_{N-1} \rangle _C} \stackrel{N\rightarrow \infty}{\sim} \frac{\frac{1}{4\sqrt{2}}\Big(1+\frac{1}{\sqrt{2}} \Big)^{N-1} }{2^{\frac{N-2}{2}}} = \frac{\Big(1+\frac{1}{\sqrt{2}}\Big)^{N-1} }{\sqrt{2}^{N+3}} \approx \frac{1.20711^{N}}{2\sqrt{2} +2} .
\end{split} \end{align} For comparison, the same ratio for the $N=8k+4$ qubit four-uniform complete HG state is $\frac{\langle \ensuremath{\mathcal{B}}_{N} \rangle _Q}{\langle \ensuremath{\mathcal{B}}_{N} \rangle _C} \stackrel{N\rightarrow \infty}{\sim}\frac{1.20711^{N}}{4}.$ Therefore, after tracing out a single qubit, the local realism violation decreases only by a small constant factor. \end{proof} It is important to note that a similar violation is maintained after tracing out more than one qubit. For example, numerical evidence for $N=12$ confirms that, if one takes a Bell inequality with an odd number of $X$ measurements, instead of the even number chosen in the proof, an exponential violation is maintained after tracing out 2 qubits. But even if 5 qubits are traced out, the state is still entangled, and this can be verified using the separability inequality \cite{Roy}. The exact violations are given in the following table: \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \#k & Quantum Value & Classical Bound & Separability Bound & $\approx$Ratio\tabularnewline \hline \hline \textbf{0} & \textbf{153.141} & \textbf{64} & & \textbf{2.39283}\tabularnewline \hline \textcolor{red}{$1$} & \textcolor{red}{89.7188} & \textcolor{red}{32} & & \textcolor{red}{2.78125}\tabularnewline \hline $2$ & 37.1563 & 32 & & 1.16113\tabularnewline \hline \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{}\tabularnewline \hline $3$ & 15.4219 & - & $\sqrt{2}$ & 10.9049\tabularnewline \hline $4$ & 6.375 & - & $\sqrt{2}$ & 4.50781\tabularnewline \hline $5$ & 2.70313 & - & $\sqrt{2}$ & 1.9114\tabularnewline \hline \end{tabular}\\ $\quad$\\ \textbf{Table 7.} Violation of Bell (for odd $m$) and separability inequalities in the $N=12$ qubit 4-uniform HG state. Here $k$ is the number of traced-out qubits. The red row shows that when $k=1$, i.e., one qubit is traced out, the violation of the Bell inequality increases.
This is caused by a decrease in the classical bound \cite{Mermin}. \end{center} \section{Appendix G: Separability inequality violations for fully-connected three-uniform hypergraph states after losing one qubit} \begin{lemma} The following statements hold for three-uniform complete hypergraph states:\\ (i) For $N=8k+5$, $N=8k+6$, or $N=8k+7$: \begin{equation}\label{G_1} \langle G_7 \rangle =\langle\underset{m}{\underbrace{X\dots X}}\underset{N-m-1}{\underbrace{Z\dots Z}}\mathbbm1\rangle=\begin{cases} \begin{array}[t]{cc} -\frac{1}{2^{\left\lfloor\frac{N-1}{2}\right\rfloor}} & \mbox{if \ensuremath{(m-1)=0\ mod\ 4 }},\\ +\frac{1}{2^{\left\lfloor\frac{N-1}{2}\right\rfloor}} & \mbox{if \ensuremath{(m-1)=2 \ mod\ 4 }}. \end{array}\end{cases} \end{equation} (ii) For $N=8k+1$, $N=8k+2$, or $N=8k+3$: \begin{equation}\label{G_2} \langle G_7 \rangle =\langle\underset{m}{\underbrace{X\dots X}}\underset{N-m-1}{\underbrace{Z\dots Z}}\mathbbm1\rangle=\begin{cases} \begin{array}[t]{cc} +\frac{1}{2^{\left\lfloor\frac{N-1}{2}\right\rfloor}} & \mbox{if \ensuremath{(m-1)=0\ mod\ 4 }},\\ -\frac{1}{2^{\left\lfloor\frac{N-1}{2}\right\rfloor}} & \mbox{if \ensuremath{(m-1)=2 \ mod\ 4 }}. \end{array}\end{cases} \end{equation} (iii) For $N=4k$: \begin{equation}\label{G_3} \langle G_7 \rangle =\langle\underset{m}{\underbrace{X\dots X}}\underset{N-m-1}{\underbrace{Z\dots Z}}\mathbbm1\rangle=0 \end{equation} \end{lemma} \begin{proof} As a starting point, we derive the remaining gates on the right-hand side of Eq. (\ref{G_1}); the same derivation also works for Eqs. (\ref{G_2}) and (\ref{G_3}). This approach is analogous to Appendix B, but now $m$ is odd.
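Before the combinatorics, the claimed values can be cross-checked by direct summation on small instances: the three-uniform complete hypergraph state has amplitudes $2^{-N/2}(-1)^{\binom{w}{3}}$ in the computational basis, $w$ being the Hamming weight. A standalone numerical sketch (not part of the proof; the function name is ours):

```python
from math import comb

def g7(N, m):
    """<X^m Z^(N-m-1) 1> on the N-qubit three-uniform complete hypergraph state."""
    sgn = lambda w: -1 if comb(w, 3) % 2 else 1   # amplitude sign (-1)^C(w,3)
    xmask = (1 << m) - 1                          # X acts on the first m qubits
    zmask = ((1 << (N - 1)) - 1) ^ xmask          # Z on the next N-m-1, identity on the last
    tot = 0
    for x in range(1 << N):
        tot += (sgn(bin(x ^ xmask).count("1")) * sgn(bin(x).count("1"))
                * (-1) ** bin(x & zmask).count("1"))
    return tot / 2 ** N

# N = 7 (= 8k+7) is an instance of case (i), N = 3 (= 8k+3) of case (ii);
# both are small enough to evaluate exhaustively.
```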
\begin{tabular}{lrl} & & \tabularnewline $1.$ & $\# \circledast\circledast=$ & $\binom{m-2}{1}$ is odd $\Rightarrow \;C_{\circledast\circledast}$ remains.\tabularnewline & & \tabularnewline $2.$ & $\#\circledast=$ & $\binom{m-1}{2}=\frac{(m-1)(m-2)}{2}$ is $\begin{cases} \begin{array}[t]{cc} \mbox{even, if (\ensuremath{m-1)=0} mod 4 } &\Rightarrow C_{\circledast}\mbox{ cancels.}\\ \mbox{odd, if (\ensuremath{m-1)=2} mod 4 } & \Rightarrow C_{\circledast}\mbox{ remains.} \end{array}\end{cases}$ \tabularnewline & & \tabularnewline $3.$ & $ \#\{\}=$ & $\binom{m}{3}=\frac{m(m-1)(m-2)}{2\cdot3}$ is $\begin{cases} \begin{array}[t]{cc} \mbox{even, if (\ensuremath{m-1)=0} mod 4 } &\Rightarrow\mbox{ gives a positive sign.}\\ \mbox{odd, if (\ensuremath{m-1)=2} mod 4 } & \Rightarrow\mbox{ gives a negative sign.} \end{array}\end{cases}$ \tabularnewline & & \tabularnewline $4.$ & $\#\bigtriangleup\bigtriangleup=$ & $\binom{m}{1}$ is odd $\Rightarrow \; C_{\bigtriangleup\bigtriangleup}$ remains. \tabularnewline $5.$ & $\#\circledast\bigtriangleup=$ & $\binom{m-1}{1}$ is even $\Rightarrow \; C_{\circledast\bigtriangleup}$ cancels. \tabularnewline $6.$ & $ \# \bigtriangleup=$ & $\binom{m}{2}=\frac{m(m-1)}{2}$ is $\begin{cases} \begin{array}[t]{cc} \mbox{even, if (\ensuremath{m-1)=0} mod 4 } &\Rightarrow C_{\bigtriangleup}\mbox{ cancels.}\\ \mbox{odd, if (\ensuremath{m-1)=2} mod 4 } & \Rightarrow C_{\bigtriangleup}\mbox{ remains.} \end{array}\end{cases}$ \tabularnewline & & \tabularnewline \end{tabular} \begin{center} \textbf{Table 8.} Counting phase gates for a three-uniform HG when $m$ (odd) systems are measured in the $X$ direction.
\end{center} Consider two cases:\\ \textbf{1.} If $(m-1)=0$ mod $4$: \begin{align}\label{part4_1} \begin{split} \langle G_7 \rangle & =\pm\bra{+}^{\otimes N} \underset{m}{\underbrace{X\dots X}}\underset{N-m-1}{\underbrace{Z\dots Z}}\mathbbm1 \prod_{\forall \circledast,\bigtriangleup,\diamondsuit } C_{\circledast\circledast}C_{\bigtriangleup\bigtriangleup} C_{\bigtriangleup\diamondsuit} \ket{+}^{\otimes N}\\ & =\pm \frac{1}{2^N} Tr\Big[C_{\circledast\circledast}C_{\bigtriangleup\bigtriangleup} C_{\bigtriangleup\diamondsuit} C_{\bigtriangleup} \Big]. \end{split} \end{align} Here $\circledast$ again refers to the $X$ operator, $\bigtriangleup$ to $Z$, and $\diamondsuit$ to $\mathbbm1$; the corresponding counting variable is denoted by $\gamma$. The strategy is similar to the previous case: count the number of +1's and -1's on the diagonal. Their difference, divided by $2^N$, gives the trace. \\ We use $(-1)^s$ to define the sign of the diagonal element, with $s=\binom{\alpha}{2}+\binom{\beta}{2}+\binom{\beta}{1}\binom{\gamma}{1}+\binom{\beta}{1}$. If $s$ is even, the value on the diagonal is $+1$, and $-1$ otherwise. We consider all possible values of $\alpha$, $\beta$ and $\gamma$:\\ a) If $\gamma$ is even (that is $\gamma=0$):\\ Considering the terms from even $\beta$:\\ \begin{tabular}{lllc} \textbf{1. } & $\beta$ is even \& $\alpha$ is even & \multicolumn{2}{l}{if $\alpha=0$ mod 4 and $\beta=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $\alpha=0$ mod 4 and $\beta=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $\alpha=2$ mod 4 and $\beta=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $\alpha=2$ mod 4 and $\beta=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & & \tabularnewline \textbf{2.
} & $\beta$ is even \& $\alpha$ is odd & \multicolumn{2}{l}{if $(\alpha-1)=0$ mod 4 and $\beta=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $(\alpha-1)=0$ mod 4 and $\beta=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $(\alpha-1)=2$ mod 4 and $\beta=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $(\alpha-1)=2$ mod 4 and $\beta=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & & \tabularnewline \end{tabular}\\ Considering odd $\beta$:\\ \begin{tabular}{lllc} \textbf{3. } & $\beta$ is odd \& $\alpha$ is even & \multicolumn{2}{l}{if $\alpha=0$ mod 4 and $(\beta-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $\alpha=0$ mod 4 and $(\beta-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $\alpha=2$ mod 4 and $(\beta-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $\alpha=2$ mod 4 and $(\beta-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & & \tabularnewline \textbf{4. 
} & $\beta$ is odd \& $\alpha$ is odd & \multicolumn{2}{l}{if $(\alpha-1)=0$ mod 4 and $(\beta-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $(\alpha-1)=0$ mod 4 and $(\beta-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $(\alpha-1)=2$ mod 4 and $(\beta-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $(\alpha-1)=2$ mod 4 and $(\beta-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & & \tabularnewline \end{tabular} b) If $\gamma$ is odd (that is $\gamma=1$):\\ Considering the terms from even $\beta$:\\ \textbf{1.} and \textbf{2.} Nothing changes in comparison to a) \textbf{1.} and \textbf{2.}\\ \textbf{3.} and \textbf{4.} These two terms have the opposite sign from a) \textbf{3.} and \textbf{4.} Therefore, in the sum they cancel ($\gamma$ is always $1$).\\ Therefore, \begin{align} \begin{split} \langle G_{7}\rangle & = \frac{1}{2^{N-1}}\bigg[\sum_{\beta=0,4..}^{N-m-1}\binom{N-m-1}{\beta}-\binom{N-m-1}{\beta+2}\bigg]\bigg[\sum_{\alpha=0,4..}^{m}\binom{m}{\alpha}+\binom{m}{\alpha+1}-\binom{m}{\alpha+2}-\binom{m}{\alpha+3}\bigg]\\ &= \frac{1}{2^{N-1}}Re[(1+i)^{N-m-1}]\cdot\Big(Re[(1+i)^m]+Im[(1+i)^m]\Big). \end{split} \end{align} Now we can consider each case separately; for this we use the lookup \textbf{Table 0}:\\ $(i)$ If $N=8k+5$ and if $(m-1)=4$ mod $8$, $Re[(1+i)^{N-m-1}]=2^\frac{N-m-2}{2}$ and $Re[(1+i)^m]+Im[(1+i)^m]=-2^\frac{m+1}{2}$. Therefore, \begin{equation} \langle G_{7}\rangle =-\frac{2^\frac{N-m-2}{2}\cdot 2^\frac{m+1}{2} }{2^{N-1}}=-\Big(\frac{1}{2}\Big)^{\frac{N-1}{2}}. \end{equation} And, if $(m-1)=0$ mod $8$, $Re[(1+i)^{N-m-1}]=-2^\frac{N-m-2}{2}$ and $Re[(1+i)^m]+Im[(1+i)^m]=+2^\frac{m+1}{2}$. Therefore, \begin{equation} \langle G_{7}\rangle =-\frac{2^\frac{N-m-2}{2}\cdot 2^\frac{m+1}{2} }{2^{N-1}}=-\Big(\frac{1}{2}\Big)^{\frac{N-1}{2}}. \end{equation} Exactly the same is true for $N=8k+7$.
But for $N=8k+6$ and if $(m-1)=4$ mod $8$, $Re[(1+i)^{N-m-1}]=2^\frac{N-m-1}{2}$ and $Re[(1+i)^m]+Im[(1+i)^m]=-2^\frac{m+1}{2}$. Therefore, \begin{equation} \langle G_{7}\rangle =-\frac{2^\frac{N-m-1}{2}\cdot 2^\frac{m+1}{2} }{2^{N-1}}=-\Big(\frac{1}{2}\Big)^{\left\lfloor\frac{N-1}{2}\right\rfloor}. \end{equation} And, if $(m-1)=0$ mod $8$, $Re[(1+i)^{N-m-1}]=-2^\frac{N-m-1}{2}$ and $Re[(1+i)^m]+Im[(1+i)^m]=2^\frac{m+1}{2}$. Therefore, \begin{equation} \langle G_{7}\rangle =-\frac{2^\frac{N-m-1}{2}\cdot 2^\frac{m+1}{2} }{2^{N-1}}=-\Big(\frac{1}{2}\Big)^{\left\lfloor\frac{N-1}{2}\right\rfloor}. \end{equation}\\ $(ii)$ If $N=8k+1$ and if $(m-1)=4$ mod $8$, $Re[(1+i)^{N-m-1}]=-2^\frac{N-m-2}{2}$ and $Re[(1+i)^m]+Im[(1+i)^m]=-2^\frac{m+1}{2}$. Therefore, \begin{equation} \langle G_{7}\rangle =\frac{2^\frac{N-m-2}{2}\cdot 2^\frac{m+1}{2} }{2^{N-1}}=\Big(\frac{1}{2}\Big)^{\frac{N-1}{2}}. \end{equation} And, if $(m-1)=0$ mod $8$, $Re[(1+i)^{N-m-1}]=2^\frac{N-m-2}{2}$ and $Re[(1+i)^m]+Im[(1+i)^m]=2^\frac{m+1}{2}$. Therefore, \begin{equation} \langle G_{7}\rangle =\frac{2^\frac{N-m-2}{2}\cdot 2^\frac{m+1}{2} }{2^{N-1}}=\Big(\frac{1}{2}\Big)^{\frac{N-1}{2}}. \end{equation} It is analogous for the other two cases as well.\\ $(iii)$ For $N=4k$, $Re[(1+i)^{N-m-1}]=0$. Therefore, $\langle G_{7}\rangle =0.$ \\ \textbf{2.} If $(m-1)=2$ mod $4$: \begin{align}\label{part4_1} \begin{split} \langle G_7 \rangle & =\bra{+}^{\otimes N} \underset{m}{\underbrace{X\dots X}}\underset{N-m-1}{\underbrace{Z\dots Z}}\mathbbm1 \prod_{\forall \circledast,\bigtriangleup,\diamondsuit } C_{\circledast\circledast}C_{\circledast}C_{\bigtriangleup\bigtriangleup} C_{\bigtriangleup\diamondsuit}C_{\bigtriangleup}C_{\diamondsuit} \ket{+}^{\otimes N}\\ & = \frac{1}{2^N} Tr\Big[C_{\circledast\circledast}C_{\circledast}C_{\bigtriangleup\bigtriangleup} C_{\bigtriangleup\diamondsuit}C_{\diamondsuit} \Big].
\end{split} \end{align} Here $\circledast$ again refers to X operator, $\bigtriangleup$ to Z and $\diamondsuit$ to $\mathbbm1$ and is denoted by $\gamma$. The strategy is similar to the previous case: count the number of +1's and -1's on the diagonal and their difference divided by $2^N$, gives the trace. \\ We use $(-1)^s$ to define the sign of the diagonal element and $s=\binom{\alpha}{2}+\binom{\alpha}{1}+\binom{\beta}{2}+\binom{\beta}{1}\binom{\gamma}{1}+\binom{\gamma}{1}$. If $s$ is even, the value on the diagonal is $+1$ and $-1$, otherwise. We consider all possible values of $\alpha$, $\beta$ and $\gamma$:\\ a) If $\gamma$ is even (that is $\gamma=0$):\\ Considering the terms from even $\beta$:\\ \begin{tabular}{lllc} \textbf{1. } & $\beta$ is even \& $\alpha$ is even & \multicolumn{2}{l}{if $\alpha=0$ mod 4 and $\beta=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $\alpha=0$ mod 4 and $\beta=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $\alpha=2$ mod 4 and $\beta=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $\alpha=2$ mod 4 and $\beta=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & & \tabularnewline \textbf{2. } & $\beta$ is even \& $\alpha$ is odd & \multicolumn{2}{l}{if $(\alpha-1)=0$ mod 4 and $\beta=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $(\alpha-1)=0$ mod 4 and $\beta=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $(\alpha-1)=2$ mod 4 and $\beta=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $(\alpha-1)=2$ mod 4 and $\beta=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & & \tabularnewline \end{tabular}\\ Considering odd $\beta$:\\ \begin{tabular}{lllc} \textbf{3. 
} & $\beta$ is odd \& $\alpha$ is even & \multicolumn{2}{l}{if $\alpha=0$ mod 4 and $(\beta-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $\alpha=0$ mod 4 and $(\beta-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $\alpha=2$ mod 4 and $(\beta-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $\alpha=2$ mod 4 and $(\beta-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & & \tabularnewline \textbf{4. } & $\beta$ is odd \& $\alpha$ is odd & \multicolumn{2}{l}{if $(\alpha-1)=0$ mod 4 and $(\beta-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & \multicolumn{2}{l}{if $(\alpha-1)=0$ mod 4 and $(\beta-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $(\alpha-1)=2$ mod 4 and $(\beta-1)=0$ mod 4 $\Rightarrow$ $(-1)^{s}=+1$}\tabularnewline & & \multicolumn{2}{l}{if $(\alpha-1)=2$ mod 4 and $(\beta-1)=2$ mod 4 $\Rightarrow$ $(-1)^{s}=-1$}\tabularnewline & & & \tabularnewline \end{tabular} b) If $\gamma$ is odd (that is $\gamma=1$):\\ \textbf{1.} and \textbf{2.} These two terms have the opposite sign from a) \textbf{1.} and \textbf{2.} Therefore, in the sum they cancel ($\gamma$ is always $1$).\\ \textbf{3.} and \textbf{4.} Nothing changes in comparison to a) \textbf{3.} and \textbf{4.}\\ Therefore, \begin{align} \begin{split} \langle G_{7}\rangle & = \frac{1}{2^{N-1}}\bigg[\sum_{\beta=0,4..}^{N-m-1}\binom{N-m-1}{\beta+1}-\binom{N-m-1}{\beta+3}\bigg]\bigg[\sum_{\alpha=0,4..}^{m}\binom{m}{\alpha}-\binom{m}{\alpha+1}-\binom{m}{\alpha+2}+\binom{m}{\alpha+3}\bigg]\\ &= \frac{1}{2^{N-1}}Im[(1+i)^{N-m-1}]\cdot\Big(Re[(1+i)^m]-Im[(1+i)^m]\Big). \end{split} \end{align} Now we can consider each case separately; for this we use the lookup \textbf{Table 0}:\\ $(i)$ If $N=8k+5$ and if $(m-1)=2$ mod $8$, $Im[(1+i)^{N-m-1}]=2^\frac{N-m-2}{2}$ and $Re[(1+i)^m]-Im[(1+i)^m]=-2^\frac{m+1}{2}$. And the overall sign from Table 8 is negative.
Therefore, \begin{equation} \langle G_{7}\rangle =-\frac{2^\frac{N-m-2}{2}\cdot (-2^\frac{m+1}{2}) }{2^{N-1}}=\Big(\frac{1}{2}\Big)^{\frac{N-1}{2}}. \end{equation} And, if $(m-1)=6$ mod $8$, $Im[(1+i)^{N-m-1}]=-2^\frac{N-m-2}{2}$ and $Re[(1+i)^m]-Im[(1+i)^m]=2^\frac{m+1}{2}$. The overall sign is negative. Therefore, \begin{equation} \langle G_{7}\rangle =-\frac{-2^\frac{N-m-2}{2}\cdot 2^\frac{m+1}{2} }{2^{N-1}}=\Big(\frac{1}{2}\Big)^{\frac{N-1}{2}}. \end{equation} The same holds for $N=8k+6$ and $N=8k+7$. Case $(ii)$ differs only by a sign flip and is straightforward to check. It is also straightforward to prove $(iii)$, since for $N=4k$, $Im[(1+i)^{N-m-1}]=0$. \end{proof} \begin{lemma}\label{theorem3unifviol} An $N$-qubit (except when $N=4k$) three-uniform HG state violates the separability inequality exponentially after tracing out a single qubit. \end{lemma} \begin{proof} We consider only one case, $N=8k+5$, as the others are analogous. Here $M\equiv N-1$:\\ \begin{align} \begin{split} \langle \ensuremath{\mathcal{B}}_N \rangle _Q & =\sum_{m \; odd}^{M}\binom{M}{m}\Big(\frac{1}{\sqrt{2}}\Big)^{M}= 2^{M-1}\cdot2^{-M/2}=\sqrt{2}^{N-3}. \end{split} \end{align} The separability bound is $\sqrt{2}$ \cite{Roy} and does not depend on the number of qubits. \end{proof} It is important to note that a similar violation is maintained after tracing out more than one qubit. Numerical evidence for $N=11$ is presented below.\\ \begin{center} \begin{tabular}{|c|c|c|c|} \hline \#k & Quantum Value & Separability Bound & $\approx$Ratio\tabularnewline \hline \hline \textbf{0} & 511.5 & $\sqrt{2}$ & 361.69\tabularnewline \hline {$1$} & 16 & $\sqrt{2}$ & 11.31\tabularnewline \hline $2$ & 8 & $\sqrt{2}$ & 5.66\tabularnewline \hline $3$ & 4 & $\sqrt{2}$ & 2.83\tabularnewline \hline $4$ & 2 & $\sqrt{2}$ & 1.414\tabularnewline \hline \end{tabular}\\ $\quad$\\ \textbf{Table 9.} Violation of the separability inequalities \cite{Roy} in $N=11$-qubit 3-uniform HG states. Here $k$ is the number of traced-out qubits.
When $k=0$, the Mermin inequality is violated, as expected. \end{center} \twocolumngrid \end{document}
\begin{document} \baselineskip=20pt \hoffset=-3cm \voffset=0cm \oddsidemargin=3.2cm \evensidemargin=3.2cm \thispagestyle{empty} \hbadness=10000 \tolerance=10000 \hfuzz=150pt \title{\textbf{Relative Morse index theory and applications in wave equations}} \author{\Large Qi Wang $^{{\rm a,b}}$,$\quad$ Li Wu $^{{\rm b}}$} \date{} \maketitle \begin{center} \it\scriptsize ${}^{\rm a}$ School of Mathematics and Statistics, Henan University, Kaifeng 475000, PR China\\ ${}^{\rm b}$Department of Mathematics, Shandong University, Jinan, Shandong, 250100, PR China\\ \end{center} \footnotetext[0]{$^a${\bf Corresponding author.} Supported by NNSF of China (11301148) and PSF of China (188576).} \footnotetext[0]{\footnotesize\;{\it E-mail address}: [email protected] (Qi Wang), [email protected] (Li Wu).} \noindent {\bf Abstract:} { We develop a relative Morse index theory for linear self-adjoint operator equations without compactness assumptions and give the relationship between this index and the one defined in \cite{Wang-Liu-2016} and \cite{Wang-Liu-2017}. We then generalize the method of saddle point reduction and obtain some critical point theorems via the index, topological degree, and critical point theory. As applications, we consider the existence and multiplicity of periodic solutions of wave equations.} \noindent{\bf Keywords:} {\small Relative Morse index; Periodic solutions; Wave equations} \section{Introduction}\label{section-introduction} Many problems can be formulated as a self-adjoint operator equation \[ Au=F'(u),\;u\in D(A)\subset \mathbf H,\eqno{(O.E.)} \] where $\mathbf H$ is an infinite-dimensional separable Hilbert space, $A$ is a self-adjoint operator on $\mathbf H$ with domain $D(A)$, and $F$ is a nonlinear functional on $\mathbf H$.
Examples include the boundary value problem for Laplace's equation on a bounded domain, periodic solutions of Hamiltonian systems, the Schr\"{o}dinger equation, periodic solutions of wave equations, and so on. By variational methods, we know that the solutions of (O.E.) correspond to the critical points of a functional. So we can transform the problem of finding the solutions of (O.E.) into the problem of finding the critical points of this functional. Since the 1980s, beginning with Ambrosetti and Rabinowitz's famous Mountain Pass Theorem \cite{Ambrosetti-Rabinowitz-1973}, many crucial variational methods have been developed, such as minimax methods, Lusternik-Schnirelman theory, Galerkin approximation methods, saddle point reduction methods, dual variational methods, convex analysis theory, Morse theory, and so on (see \cite{Amann-1976},\cite{Amann-Zehnder-1980},\cite{Aubin-Ekeland-1984},\cite{Chang-1993},\cite{Ekeland-1990},\cite{Ekeland-Temam-1976} and the references therein). We classify all of these variational problems into three kinds according to the spectrum of $A$. For simplicity, denote by $\sigma(A)$, $\sigma_e(A)$ and $\sigma_d(A)$ the spectrum, the essential spectrum and the discrete finite dimensional point spectrum of $A$, respectively. The first kind is $\sigma(A)=\sigma_d(A)$ with $\sigma(A)$ bounded from below (or above), such as the boundary value problem for Laplace's equation on a bounded domain and the periodic problem for second order Hamiltonian systems. Morse theory can be used directly for this kind, and this is the simplest situation. The second kind is $\sigma(A)=\sigma_d(A)$ with $\sigma(A)$ unbounded from above and below, such as the periodic problem for first order Hamiltonian systems. For this kind, Morse theory cannot be used directly because the functionals are strongly indefinite and the Morse indices at the critical points of the functional are infinite. In order to overcome this difficulty, index theory is worth noting here.
By the work \cite{Ekeland-1984} of Ekeland, an index theory for convex linear Hamiltonian systems was established. By the works \cite{Conley-Zehnder-1984,Long-1990,Long-1997,Long-Zehnder-1990} of Conley, Zehnder and Long, an index theory for symplectic paths was introduced. These index theories have important and extensive applications, e.g., \cite{Dong-Long-1997,Ekeland-Hofer-1985,Ekeland-Hofer-1987,Liu-Long-Zhu-2002,Long-Zhu-2000}. In \cite{Zhu-Long-1999, Long-Zhu-2000-2} Long and Zhu defined spectral flows for paths of linear operators and redefined the Maslov index for symplectic paths. Additionally, Abbondandolo defined the concept of relative Morse index for Fredholm operators with compact perturbations (see \cite{Abb-2001} and the references therein). In the study of the $L$-solutions (the solutions starting and ending at the same Lagrangian subspace $L$) of Hamiltonian systems, Liu in \cite{Liu-2007} introduced an index theory for symplectic paths using algebraic methods and gave some applications in \cite{Liu-2007, Liu-2007-2}. This index was generalized by Liu, Wang and Lin in \cite{Liu-Wang-Lin-2011}. In addition to the above index theories defined for specific forms, Dong in \cite{Dong-2010} developed an index theory for abstract operator equations (O.E.). The third kind is $\sigma_e(A)\neq \emptyset$, the most complex situation. Owing to the lack of compactness, many classical methods cannot be used here. In particular, if $\sigma_e(A)\cap(-\infty,0)\neq \emptyset$ and $\sigma_e(A)\cap(0,\infty)\neq \emptyset$, Ding established a series of critical point theories with applications to homoclinic orbits of Hamiltonian systems, the Dirac equation, the Schr\"{o}dinger equation and so on; he named these problems ``very strongly indefinite'' problems (see \cite{Ding-2007},\cite{Ding-2017}).
Wang and Liu defined the index theory ($i_A(B),\nu_A(B)$) for this kind and gave some applications to wave equations, homoclinic orbits of Hamiltonian systems and the Dirac equation; the methods include dual variation and saddle point reduction (see \cite{Wang-Liu-2016} and \cite{Wang-Liu-2017}). Additionally, Chen and Hu in \cite{Chen-Hu-2007} defined the index for homoclinic orbits of Hamiltonian systems. Recently, Hu and Portaluri in \cite{Hu-Portaluri-2017} defined the index theory for heteroclinic orbits of Hamiltonian systems.\\ In this paper, we consider the case $\sigma_e(A)\neq \emptyset$. Firstly, we develop the relative Morse index theory. Compared with Abbondandolo's work \cite{Abb-2001}, we generalize the concept of the relative Morse index $i^*_A(B)$ to Fredholm operators without the compactness assumption on the perturbation term (see Section \ref{section-relative Morse index}). We also give the relationship between the relative Morse index $i^*_A(B)$ and the index $i_A(B)$ defined in \cite{Wang-Liu-2016} and \cite{Wang-Liu-2017}. The bridge between them is the concept of spectral flow. As far as we know, the spectral flow was introduced by Atiyah, Patodi and Singer (see \cite{Atiyah-Patodi-Singer-1976}). Since then, many interesting properties and applications of spectral flow have been subsequently established (see \cite{Cappell-Lee-Miller-1994},\cite{Floer-1988},\cite{Robbin-Salamon-1993},\cite{Robbin-Salamon-1995} and \cite{Zhu-Long-1999}). Secondly, we generalize the method of saddle point reduction and obtain some critical point theories. With the relative Morse index defined above, we will establish some new abstract critical point theorems by saddle point reduction, topological degree and Morse theory, where we do not need the nonlinear term to be $C^2$ continuous (see Section \ref{section-saddle point reduction}).
Lastly, as applications, we consider the existence and multiplicity of periodic solutions of the wave equation and give some new results (see Section \ref{section-applications}). To the best of the authors' knowledge, the problem of finding periodic solutions of nonlinear wave equations has attracted much attention since the 1960s. Recently, with critical point theory, many results have been obtained on this problem. For example, Kryszewski and Szulkin \cite{Kryszewski-Szulkin-1997} developed an infinite dimensional cohomology theory and the corresponding Morse theory; with these theories, they obtained the existence of nontrivial periodic solutions of the one dimensional wave equation. Zeng, Liu and Guo \cite{Zeng-Liu-Guo-2004} and Guo and Liu \cite{Gou-Liu-2007} obtained the existence and multiplicity of nontrivial periodic solutions of the one dimensional wave equation and beam equation by the Morse index theory developed in \cite{Guo-Liu-Zeng-2004}. Tanaka \cite{Tanaka-2006} obtained the existence of nontrivial periodic solutions of the one dimensional wave equation by linking methods. Ji and Li \cite{Ji-Li-2006} considered periodic solutions of the one dimensional wave equation with $x$-dependent coefficients. By the minimax principle, Chen and Zhang \cite{Chen-Zhang-2014,Chen-Zhang-2016} obtained infinitely many symmetric periodic solutions of the $n$-dimensional wave equation. Ji \cite{Ji-2018} considered periodic solutions of the one dimensional wave equation with bounded nonlinearity and $x$-dependent coefficients. \section{Relative Morse Index $i^*_A(B)$ and the relationship with $i_A(B)$}\label{section-relative Morse index} Let $\mathbf H$ be an infinite dimensional separable Hilbert space with inner product $(\cdot,\cdot)_\mathbf H$ and norm $\|\cdot\|_\mathbf H$. Denote by $\mathcal O(\mathbf H)$ the set of all linear self-adjoint operators on $\mathbf H$.
For $A\in \mathcal O(\mathbf H)$, we denote by $\sigma(A)$ the spectrum of $A$ and by $\sigma_e(A)$ the essential spectrum of $A$. We define a subset of $\mathcal O( \mathbf H)$ as follows: \[ \mathcal O^0_e(a,b)=\{A\in \mathcal O(\mathbf H)|\;\sigma_e(A)\cap(a,b)=\emptyset \;{\rm and}\;\sigma(A)\cap (a, b)\ne \emptyset\}. \] Denote by $\mathcal{L}_s(\mathbf H)$ the set of all linear bounded self-adjoint operators on $\mathbf H$, and define a subset of $\mathcal{L}_s(\mathbf H)$ by \begin{equation}\label{eq-L0} \mathcal{L}_s(\mathbf H,a,b)=\{B\in \mathcal{L}_s(\mathbf H)\;|\; a\cdot I<B< b\cdot I\}, \end{equation} where $I$ is the identity map on $\mathbf H$, and $B< b\cdot I$ means that there exists $\delta>0$ such that $(b-\delta)\cdot I-B$ is positive definite; $B> a\cdot I$ has a similar meaning. For any $B\in\mathcal{L}_s(\mathbf H,a,b)$, we have the index pair $(i_A(B),\nu_A(B))$ (see \cite{Wang-Liu-2016,Wang-Liu-2017} for details). In this section, we will define the relative Morse index $i^*_A(B)$ and give its relationship with $i_A(B)$. \subsection{Relative Morse Index $i^*_A(B)$} We begin this subsection with a brief introduction to the relative Morse index. The relative Morse index can be derived in different ways (see \cite{Abb-2001,Chang-Liu-Liu-1997,Fei-1995,Zhu-Long-1999}). Such indices have been extensively used in dealing with periodic orbits of first order Hamiltonian systems. As far as the authors know, the existing relative Morse index theories can be regarded as treating compact perturbations of a Fredholm operator. Assume $A$ is a self-adjoint Fredholm operator on a Hilbert space $\mathbf H$, with the orthogonal splitting \begin{equation}\label{eq-decomposition of space H} \mathbf H=\mathbf H^-_A\oplus \mathbf H^0_A\oplus \mathbf H^+_A, \end{equation} where $A$ is negative, zero and positive definite on $\mathbf H^-_A,\;\mathbf H^0_A$ and $\mathbf H^+_A$ respectively. Let $P_A$ denote the orthogonal projection from $\mathbf H$ onto $\mathbf H^-_A$.
If the perturbation term $F$ is a compact self-adjoint operator on $\mathbf H$, then $P_A-P_{A-F}$ is compact, $P_A:\mathbf H^-_{A-F}\to \mathbf H^-_A$ is a Fredholm operator, and we can define the so-called relative Morse index as the Fredholm index of $P_A:\mathbf H^-_{A-F}\to \mathbf H^-_A$. In general, if the operator $A$ is not a Fredholm operator or the perturbation $F$ is not compact, $P_A:\mathbf H^-_{A-F}\to \mathbf H^-_A$ will not be a Fredholm operator and the concept of relative Morse index becomes meaningless. However, if the perturbation lies in a gap of $\sigma_{e}(A)$, that is, $A\in \mathcal O^0_e(\lambda_a,\lambda_b)$ for some $\lambda_a,\lambda_b\in\mathbb{R}$ and the perturbation $B\in \mathcal{L}_s(\mathbf H,\lambda_a,\lambda_b)$, we can still define the relative Morse index $i^*_A(B)$ and give its relationship with the index $i_A(B)$ defined in \cite{Wang-Liu-2017}. Firstly, we need two abstract lemmas. \begin{lem} \label{Fredholm projection} Let $A:\mathbf H\rightarrow\mathbf H$ be a bounded self-adjoint operator. Let $W,V$ be closed subspaces of $\mathbf H$. Denote the orthogonal projection $\mathbf H\rightarrow Y$ by $P_Y$ for any closed linear subspace $Y$ of $\mathbf H$. Assume that\\ \noindent (1). $(Ax,x)_\mathbf H<-\epsilon_1 \|x\|^2_\mathbf H,\;\forall x\in W\backslash\{0\}$, with some constant $\epsilon_1>0$,\\ \noindent (2). $(Ax,x)_\mathbf H> 0,\; \forall x\in V^{\bot}\backslash\{0\} $,\\ \noindent (3). $ (Ax,y)_\mathbf H=0, \forall x\in V,y\in V^\bot$.\\ Then $P_V|_{W}$ is an injection and $P_V(W)$ is a closed subspace of $\mathbf H$. Furthermore, if we assume \\ \noindent (4). $(Ax,x)_\mathbf H\leq 0,\; \forall x\in V\backslash\{0\}$,\\ and there is a closed subspace $U$ of $W^\bot$ such that\\ \noindent (5). $W^\bot/U$ is finite dimensional,\\ \noindent (6). $(Ax,x)_\mathbf H>0,\;\forall x\in U\setminus \{0\}$.
\\ Then $P_V:W\rightarrow V$ and $P_W:V\rightarrow W$ are both Fredholm operators and \[ {\rm ind}(P_W:V\rightarrow W)=-{\rm ind}(P_V:W\rightarrow V). \] \end{lem} \begin{proof} Note that $\ker P_V|_{W}= \ker P_V\cap W =V^\bot \cap W$. From conditions (1) and (2), we have $V^\bot \cap W=\{0\}$, so $P_V|_{W}$ is an injection. For $x\in W$, from conditions (2) and (3), we have \begin{align*} -\|A\|\|P_Vx\|^2_\mathbf H&\leq(AP_V x,P_V x)_\mathbf H\\ &=(Ax,x)_\mathbf H-(A(I-P_V)x,(I-P_V)x)_\mathbf H\\ &\leq (Ax,x)_\mathbf H\\ &< -\epsilon_1 \|x\|^2_\mathbf H. \end{align*} It follows that \begin{equation}\label{eq-relative Morse index-3} \|P_V x\|_\mathbf H\ge \sqrt{\frac{\epsilon_1}{\|A\|}}\|x\|_\mathbf H,\;\forall x\in W, \end{equation} so $P_V(W)$ is a closed subspace of $\mathbf H$. For any $x\in (P_V(W))^\bot \cap V $, we have $x\bot P_V(W)$ and, since $x\in V$, also $x\bot (I-P_V)(W)$; hence $x\bot W$ and \begin{equation}\label{eq-relative Morse index-1} (P_V(W))^\bot \cap V \subset W^\bot. \end{equation} From conditions (4) and (6), \begin{align}\label{eq-relative Morse index-2} ((P_V(W))^\bot \cap V) \cap U&\subset V\cap U\nonumber\\ & =\{0\}. \end{align} From \eqref{eq-relative Morse index-1}, \eqref{eq-relative Morse index-2} and condition (5), $(P_V(W))^\bot \cap V$ is finite dimensional. It follows that $P_V:W\rightarrow V$ is a Fredholm operator. From \eqref{eq-relative Morse index-3}, we have \begin{align*} \|(I-P_V)x\|^2&=\|x\|^2-\|P_V x\|^2\\ &\leq (1-\epsilon_1/\|A\|)\|x\|^2,\;\forall x\in W. \end{align*} It follows that $\|I-P_V|_W\|<1$. So the operator $P_WP_V=P_W-P_W(I-P_V):W\rightarrow W$ is invertible. It follows that $P_W:V\rightarrow W$ is surjective, and \begin{equation}\label{eq-relative Morse index-4} \ker P_W\cap P_V(W)=\{0\}. \end{equation} Note that $V$ has the decomposition \[ V=P_V(W)\bigoplus ((P_V(W))^\bot \cap V); \] from \eqref{eq-relative Morse index-4} and $\dim((P_V(W))^\bot \cap V)<\infty$, we conclude that $\ker P_W \cap V$ is finite dimensional.
So the operator $P_W:V\rightarrow W$ is a Fredholm operator. Since $P_WP_V:W\rightarrow W$ is invertible, we have \begin{align*} 0&={\rm ind}(P_WP_V:W\rightarrow W)\\ &={\rm ind}(P_W:V\rightarrow W)+{\rm ind}(P_V:W\rightarrow V). \end{align*} Thus we have proved the lemma. \end{proof} \begin{lem}\label{finite_pertubation} Let $V_1\subset V_2$, $W_1\subset W_2$ be closed linear subspaces of $\mathbf H$ such that $V_2/V_1$ and $W_2/W_1$ are finite dimensional linear spaces. Let $P_{V_i}$ and $P_{W_j}$ be the orthogonal projections onto $V_i$ and $W_j$ respectively, $i,j=1,2$. Assume that $P_{W_{j^*}}:V_{i^*}\rightarrow W_{j^*}$ is a Fredholm operator for some fixed $i^*,j^*\in \{1,2\}$. Then $P_{W_j}:V_i \rightarrow W_j$, $i,j=1,2$, are all Fredholm operators. Furthermore, we have \begin{align*} {\rm ind}(P_{W_j}:V_i\rightarrow W_j)=&{\rm ind}(P_{V_{i^*}}:V_i\rightarrow V_{i^*})+{\rm ind}(P_{W_{j^*}}:V_{i^*}\rightarrow W_{j^*})\\ &+{{\rm ind}}(P_{W_j}:W_{j^*}\rightarrow W_j). \end{align*} \end{lem} \begin{proof} Since $V_2/V_1$ and $W_2/W_1$ are finite dimensional linear spaces, $P_{W_j}-P_{W_{j^*}}$ and $P_{V_i}-P_{V_{i^*}}$ are both compact operators. So $P_{W_j}P_{V_i}-P_{W_{j^*}}P_{V_{i^*}}$ is also a compact operator. Note that on $V_i$, \[ (P_{W_j}-P_{W_j}P_{W_{j^*}}P_{V_{i^*}})|_{V_i}=P_{W_j}(P_{W_j}P_{V_i}-P_{W_{j^*}}P_{V_{i^*}})|_{V_i}. \] It follows that $P_{W_j}-P_{W_j}P_{W_{j^*}}P_{V_{i^*}}:V_i\rightarrow W_j$ is compact. Then we can conclude that \begin{align*} {{\rm ind}}(P_{W_j}:V_i\rightarrow W_j)=&{\rm ind}(P_{W_j}P_{W_{j^*}}P_{V_{i^*}}:V_i\rightarrow W_j)\\ =&{\rm ind}(P_{V_{i^*}}:V_i\rightarrow V_{i^*})+{\rm ind}(P_{W_{j^*}}:V_{i^*}\rightarrow W_{j^*})\\ &+{{\rm ind}}(P_{W_j}:W_{j^*}\rightarrow W_j). \end{align*} We have proved the lemma. \end{proof} With these two lemmas, we can define the relative Morse index. For simplicity, we consider the normalized case $A\in \mathcal O^0_e(-1,1)$.
Let $B\in \mathcal{L}_s(\mathbf H,-1,1)$ with norm $\|B\|=c_B$, so that $0\leq c_B<1$. Then $A-tB$ is a self-adjoint Fredholm operator for $t\in [0,1]$, and $\sigma_{e}(A-tB)\cap (-1+tc_B,1-tc_B)=\emptyset$. Let $E_{A-tB}(z)$ be the spectral measure of $A-tB$. Denote \begin{equation}\label{eq-projection splitting} P(A-tB,U)= \int_{U} dE_{A-tB}(z), \end{equation} with $U\subset \mathbb{R}$, and rewrite it as $P(t,U)$ for simplicity. Let \[ V(A-tB,U)={\rm im}\, P(t,U) \] and rewrite it as $V(t,U)$ for simplicity. For any $c_0\in\mathbb{R}$ satisfying $c_B<c_0<1$, we have \[ ((A-B)x,x)_\mathbf H>(c_0-c_B)\|x\|^2_\mathbf H,\;x\in V(0,(c_0,+\infty))\cap D(A). \] So there is $\epsilon >0$ such that \begin{equation}\label{eq-relative Morse index-5} ((A-B)x,x)_\mathbf H>\epsilon ((|A-B|+I)x,x)_\mathbf H,\;\forall x\in V(0,(c_0,+\infty))\cap D(A). \end{equation} Similarly, we have \begin{equation}\label{eq-relative Morse index-6} ((A-B)x,x)_\mathbf H<-\epsilon ((|A-B|+I)x,x)_\mathbf H,\;\forall x\in V(0,(-\infty,-c_0))\cap D(A). \end{equation} Denote \[ P_{s,a}^{t,b}:=P(t,(-\infty,b))|_{V(s,(-\infty,a))},\;\forall t,s\in [0,1]\;{\rm and}\;a,b\in \mathbb{R}. \] Clearly, we have $P(t,(-\infty,b))=P_{s,+\infty}^{t,b}$, $\forall s\in [0,1]$. \begin{lem}\label{inverse_relation} For any $a\in[-c_0,c_0]$, the maps $P_{0,a}^{1,0}$ and $P_{1,0}^{0,a}$ are both Fredholm operators. Furthermore, we have ${\rm ind}(P_{0,a}^{1,0})=-{\rm ind}(P_{1,0}^{0,a})$. \end{lem} \begin{proof} From \eqref{eq-relative Morse index-5} and \eqref{eq-relative Morse index-6}, there is $\epsilon>0$ such that \[ ((A-B)(|A-B|+I)^{-1}x,x)>\epsilon \|x\|^2,\;\forall x\in V(0,(c_0,+\infty)), \] and \[ ((A-B)(|A-B|+I)^{-1}x,x)<-\epsilon \|x\|^2,\;\forall x\in V(0,(-\infty,-c_0)). \] Now, let the operator $(A-B)(|A-B|+I)^{-1}$ and the spaces $V(0,(-\infty,-c_0))$, $V(1,(-\infty,0])$ and $V(0,(c_0,+\infty))$ play the roles of the operator $A$ and the spaces $W$, $V$ and $U$ in Lemma \ref{Fredholm projection}, respectively.
It is easy to verify that conditions (1), (2), (3), (4) and (6) are satisfied; moreover, since $A\in \mathcal O^0_e(-1,1)$, $V(0,[-c_0,c_0])$ is finite dimensional, so condition (5) is satisfied. Then $P_{0,-c_0}^{1,0} $ and $P_{1,0}^{0,-c_0} $ are both Fredholm operators, and \[ {\rm ind}(P_{0,-c_0}^{1,0})=-{\rm ind}(P_{1,0}^{0,-c_0}). \] By Lemma \ref{finite_pertubation}, $ P_{0,a}^{1,0}$ and $P_{1,0}^{0,a}$ are both Fredholm operators for $a\in [-c_0,c_0]$, and we have \[ {\rm ind}(P_{0,a}^{1,0})=-{\rm ind}(P_{1,0}^{0,a}),\; a\in [-c_0,c_0]. \] \end{proof} \begin{rem}\label{inverse_relation(general result)} In general, $P_{s,a}^{t,b}$ and $P_{t,b}^{s,a}$ are both Fredholm operators for $a\in (-1+sc_B,1-sc_B)$ and $b\in (-1+tc_B,1-tc_B)$, and we have \[ {{\rm ind}}(P_{s,a}^{t,b})=-{{\rm ind}}(P_{t,b}^{s,a}). \] \end{rem} Indeed, replacing $A$ and $B$ by $A'=A-sB$ and $B'=(t-s)B$ respectively in Lemma \ref{inverse_relation}, the proof is the same, so we omit it here. \begin{defi}\label{defi-index defined by relative Morse index} Define the relative Morse index by \[ i^*_A(B):={\rm ind}(P^{0,0}_{1,0}),\;\forall B\in\mathcal{L}_s(\mathbf H,-1,1). \] \end{defi} \subsection{The relationship between $i^*_A(B)$ and $i_A(B)$} Now, we will prove that $i^*_A(B)=i_A(B)$ via the concept of spectral flow. We need some preparations. There are several equivalent definitions of spectral flow; we use Definitions 2.1, 2.2 and 2.6 in \cite{Zhu-Long-1999}. Let $A_s$ be a path of self-adjoint Fredholm operators. The APS projection of $A_s$ is defined by $Q_{A_s}=P(A_s,[0,+\infty))$. Recall that, locally, the spectral flow of $A_s$ is the s-flow of $Q_{A_s}$. Choose $\epsilon>0$ such that $V(A_{s_0},[0,+\infty))=V(A_{s_0},[-\epsilon,+\infty))$. Then $-\epsilon \notin \sigma (A_{s_0})$. Let $P_{A_s}=P(A_s,[-\epsilon,+\infty))$.
Then there is $\delta>0$ such that $P_{A_s}$ is continuous on $(s_0-\delta,s_0+\delta)$ and $P_{A_s}-Q_{A_s}$ is compact for $s\in(s_0-\delta,s_0+\delta)$. The s-flow of $Q_{A_s}$ on $[s_0,b]\subset (s_0-\delta,s_0+\delta)$ can be calculated as \begin{align*} sfl(Q_{A_s},[s_0,b])=& -{\rm ind}(P_{A_{s_0}}:V(A_{s_0},[0,\infty))\to V(A_{s_0},[0,\infty)))\\ &+{\rm ind}(P_{A_{s_b}}:V(A_{s_b},[0,\infty))\to V(A_{s_b},[-\epsilon,\infty)))\\ =&-{\rm dim}(V(A_{s_b},[-\epsilon,0)))\\ =&{\rm ind}({\rm Id}-P_{A_{s_b}}:V(A_{s_b},(-\infty,-\epsilon))\to V(A_{s_b},(-\infty,0))). \end{align*} If $A_s=A-sB$, with $\epsilon$ and $\delta$ chosen as above, we have $sf\{A-sB,[s_0,s_1]\}={\rm ind}\,P_{s_1,-\epsilon}^{s_1,0}$ for $[s_0,s_1]\subset [s_0,s_0+\delta]$. \begin{lem}\label{continuity} Let $t_0\in [0,1]$ and let $a\in (-1+t_0c_B,1-t_0c_B)\backslash\sigma(A-t_0B) $. Then we have \[ \lim_{s\to t_0}\|P_{t_0,a}^{t,0}-P_{s,a}^{t,0}P_{t_0,a}^{s,a}\|=0,\;\forall t\in[0,1], \] and \begin{align*} \lim_{s\to t_0}{{\rm ind}}(P_{s,a}^{t,0})&={{\rm ind}}(P_{t_0,a}^{t,0}),\\ \lim_{s\to t_0}{{\rm ind}}(P_{t,0}^{s,a})&={{\rm ind}}(P_{t,0}^{t_0,a}). \end{align*} \end{lem} \begin{proof} Since $a\notin\sigma(A-t_0B) $, there is $\delta_1>0$ such that $P(\cdot,(-\infty,a))$ is a continuous path of operators on $(t_0-\delta_1,t_0+\delta_1)$, and \[\|P(s,(-\infty,a))-P(t_0,(-\infty,a))\|<1\] for $s\in (t_0-\delta_1,t_0+\delta_1)$. Then $P_{t_0,a}^{s,a}$ and $ P_{s,a}^{t_0,a}$ are both homeomorphisms. Note that on $V(t_0,(-\infty,a))$, we have \[ P_{t_0,a}^{t,0}-P_{s,a}^{t,0}P_{t_0,a}^{s,a} =P(t,(-\infty,0))(P(t_0,(-\infty,a))-P(s,(-\infty,a)))|_{V(t_0,(-\infty,a))}. \] By the continuity of $P(s,(-\infty,a))$, it follows that \[ \lim_{s\to t_0}\|P_{t_0,a}^{t,0}-P_{s,a}^{t,0}P_{t_0,a}^{s,a}\|=0.
\] Then we have \begin{align*} {\rm ind}(P_{t_0,a}^{t,0})&=\lim_{s\to t_0}{\rm ind}(P_{s,a}^{t,0}P_{t_0,a}^{s,a})\\ &=\lim_{s\to t_0}({\rm ind}(P_{s,a}^{t,0})+{\rm ind}(P_{t_0,a}^{s,a}))\\ &=\lim_{s\to t_0}{{\rm ind}}(P_{s,a}^{t,0}). \end{align*} By Remark \ref{inverse_relation(general result)}, we get \begin{align*} {\rm ind}(P_{t,0}^{t_0,a})&=-{\rm ind} (P_{t_0,a}^{t,0})\\ &=-\lim_{s\to t_0}{\rm ind}(P_{s,a}^{t,0})\\ &=\lim_{s\to t_0}{\rm ind}(P_{t,0}^{s,a}). \end{align*} \end{proof} \begin{lem}\label{local_flow} For each $t_1\in [0,1]$, there is $\delta >0$ such that \[{\rm ind} (P_{t_1,0}^{t_2,0})=sf\{A-t_1 B-s(t_2-t_1)B,[0,1]\}\] for $|t_2-t_1|<\delta$. \end{lem} \begin{proof} Since $A-t_1B$ is a Fredholm operator, there is $\epsilon>0$ such that $P(t_1,(-\infty,0))=P(t_1,(-\infty,-\epsilon))$. It follows that $-\epsilon \notin \sigma (A-t_1B)$, and we have \[ P_{t_1,-\epsilon}^{t_2,0}=P_{t_1,0}^{t_2,0}. \] By Lemma \ref{continuity} we have \[ \lim_{t_2\to t_1}{\rm ind}(P_{t_1,-\epsilon}^{t_2,-\epsilon})={\rm ind}(P_{t_1,-\epsilon}^{t_1,-\epsilon})=0. \] It follows that \begin{align*} \lim_{t_2\to t_1}{\rm ind}(P_{t_1,-\epsilon}^{t_2,0})&=\lim_{t_2\to t_1}{\rm ind}(P_{t_2,-\epsilon}^{t_2,0}P_{t_1,-\epsilon}^{t_2,-\epsilon})\\ &=\lim_{t_2\to t_1}{\rm ind}(P_{t_2,-\epsilon}^{t_2,0})+\lim_{t_2\to t_1}{\rm ind}(P_{t_1,-\epsilon}^{t_2,-\epsilon})\\ &=\lim_{t_2\to t_1}{\rm ind}(P_{t_2,-\epsilon}^{t_2,0}). \end{align*} So there is $\delta>0$ such that ${{\rm ind}}(P_{t_1,-\epsilon}^{t_2,0})={\rm ind}(P_{t_2,-\epsilon}^{t_2,0}) $ for $|t_2-t_1|<\delta$, and $P(t,(-\infty,-\epsilon))$ is continuous on $(t_1-\delta,t_1+\delta)$. Note that $sf\{A-t_1 B-s(t_2-t_1)B,[0,1]\}={{\rm ind}}\, P_{t_2,-\epsilon}^{t_2,0}$ by the continuity of $P(t,(-\infty,-\epsilon))$. Then the lemma follows. \end{proof} \begin{lem}\label{additional} ${{\rm ind}}(P_{0,0}^{1,0})={{\rm ind}}(P_{t,0}^{1,0})+{{\rm ind}}( P_{0,0}^{t,0})$ for all $t\in [0,1]$.
\end{lem} \begin{proof} By Lemma \ref{finite_pertubation} and Remark \ref{inverse_relation(general result)}, for any $t_0\in [0,1]$, \begin{align*} {\rm ind}(P_{t_0,0}^{1,0})+{\rm ind}( P_{0,0}^{t_0,0})&={\rm ind}(P_{t_0,a}^{1,0})+{\rm ind}( P_{t_0,0}^{t_0,a})+{\rm ind}( P_{t_0,a}^{t_0,0})+{\rm ind}( P_{0,0}^{t_0,a})\\ &={\rm ind}(P_{t_0,a}^{1,0})+{{\rm ind}}( P_{0,0}^{t_0,a}),\; \forall a\in (-1+t_0c_B,1-t_0c_B). \end{align*} Choose $a_{t_0}\in (-1+t_0c_B,1-t_0c_B)$ with $a_{t_0}\notin \sigma(A-t_0B)$. By Lemma \ref{continuity}, \[ f:t\to {\rm ind}(P_{t,a_{t_0}}^{1,0})+{\rm ind}( P_{0,0}^{t,a_{t_0}}) \] is continuous at $t_0$. So the function $f:t\to {{\rm ind}}(P_{t,0}^{1,0})+{{\rm ind}}( P_{0,0}^{t,0}) $ is continuous on $[0,1]$. Since it is integer valued, it must be a constant function. It follows that \[ {{\rm ind}}(P_{0,0}^{1,0})=f(1)=f(t)={{\rm ind}}(P_{t,0}^{1,0})+{{\rm ind}}( P_{0,0}^{t,0}). \] \end{proof} \begin{rem} In fact, we have \[{{\rm ind}}(P_{a,0}^{b,0})={{\rm ind}}(P_{s,0}^{b,0})+{{\rm ind}}( P_{a,0}^{s,0})\] for $s\in [0,1]$. \end{rem} \begin{thm}\label{thm-relative morse index and spectral flow} We have \[ sf\{A-tB,[a,b]\}={{\rm ind}}(P_{a,0}^{b,0})=-{{\rm ind}}(P_{b,0}^{a,0}) \] for $[a,b]\subset [0,1]$. \end{thm} \begin{proof} It is a direct consequence of Lemma \ref{local_flow} and Lemma \ref{additional}. \end{proof} Now, by the properties of $i_A(B)$ (see \cite[Lemma 2.9]{Wang-Liu-2016} and \cite[Lemma 2.3]{Wang-Liu-2017}) and Theorem \ref{thm-relative morse index and spectral flow}, we have the following result. \begin{prop}\label{prop-relations between indexes} $ i^*_A(B)=i_A(B),\;\forall A\in\mathcal{O}^0_e(-1,1),\;B\in\mathcal{L}_s(\mathbf H,-1,1). $ \end{prop} In general, with the same method we can define the relative Morse index $i^*_A(B)$ for $A\in\mathcal{O}^0_e(\lambda_a,\lambda_b)$ and $B\in\mathcal{L}_s(\mathbf H,\lambda_a,\lambda_b)$, and we can prove that the index $i^*_A(B)$ coincides with $i_A(B)$ via the concept of spectral flow; we omit the details here.
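The following elementary example is included only for illustration; the diagonal operator below is ad hoc and is not used elsewhere in the paper. It shows how the index of Definition \ref{defi-index defined by relative Morse index} can be computed directly in a diagonalizable case. \begin{rem} Let $\mathbf H=\ell^2(\mathbb N)$ with orthonormal basis $\{e_k\}_{k\in\mathbb N}$, and let $Ae_k=\lambda_k e_k$ with $\lambda_k\in\mathbb R$ such that $\sigma_e(A)\cap(-1,1)=\emptyset$ and $\sigma(A)\cap(-1,1)\neq\emptyset$, so that $A\in\mathcal O^0_e(-1,1)$ and only finitely many $\lambda_k$ lie in any compact subset of $(-1,1)$. Take $B=c\cdot I$ with $0<c<1$, so $B\in\mathcal L_s(\mathbf H,-1,1)$. Then \[ V(1,(-\infty,0))=\overline{{\rm span}}\{e_k\;|\;\lambda_k<c\},\qquad V(0,(-\infty,0))=\overline{{\rm span}}\{e_k\;|\;\lambda_k<0\}, \] and $P^{0,0}_{1,0}$ maps $e_k$ to $e_k$ if $\lambda_k<0$ and to $0$ if $0\le\lambda_k<c$. Hence $P^{0,0}_{1,0}$ is surjective with finite dimensional kernel, and \[ i^*_A(c\cdot I)={\rm ind}(P^{0,0}_{1,0})=\#\{k\;|\;\lambda_k\in[0,c)\}, \] which is exactly the number of eigenvalues of $A$ crossing $0$ along the path $A-tcI$, $t\in[0,1]$, in accordance with Theorem \ref{thm-relative morse index and spectral flow}. \end{rem}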
\section{Saddle point reduction of (O.E.) and some abstract critical point theorems}\label{section-saddle point reduction} For simplicity, let $b>0$ and $a=-b$. For $A\in\mathcal O^0_e(-b,b)$, we consider the following operator equation \[ Az=F'(z),\;z\in D(A)\subset \mathbf H, \eqno(O.E.) \] where $F\in C^1(\mathbf H,\mathbb{R})$. Assume\\ \noindent($F_1$) $F\in C^1(\mathbf H,\mathbb{R})$ and $F':\mathbf H\to\mathbf H$ is Lipschitz continuous, \begin{equation}\label{eq-the Lipschitz continuity of F'} \|F'(z+h)-F'(z)\|_{\mathbf H}\leq l_F\|h\|_{\mathbf H},\;\forall z,h\in\mathbf H, \end{equation} with Lipschitz constant $l_F<b$. \subsection{Saddle point reduction of (O.E.)} In this part, assuming $A\in\mathcal{O}^0_e(-b,b)$ and that $F$ satisfies condition ($F_1$), we carry out the saddle point reduction without assuming $F\in C^2(D(|A|^{1/2}))$ for the nonlinear term; we then give some abstract critical point theorems. Let $E_A(z)$ be the spectral measure of $A$. Since $\sigma_e(A)\cap(-b,b)=\emptyset$, we can choose $l\in (l_F,b)$ such that \[ -l, l\notin\sigma(A). \] Differently from the previous section, in this section we consider the projection maps $P(A,U)$ defined in \eqref{eq-projection splitting} on $\mathbf H$ and, for simplicity, rewrite them as \begin{equation}\label{eq-projections} P^-_A:=P(A,(-\infty,-l)),\;P^+_A:=P(A,(l,\infty)),\;P^0_A:=P(A,(-l,l)). \end{equation} Then we have the following decomposition, which is different from \eqref{eq-decomposition of space H}: \[ \mathbf H=\widehat{\mathbf H}^-_A\oplus\widehat{\mathbf H}^+_A \oplus\widehat{\mathbf H}^0_A, \] where $\widehat{\mathbf H}^*_A:=P^*_A\mathbf H$ ($*=\pm, 0$) and $\widehat{\mathbf H}^0_A$ is a finite dimensional subspace of $\mathbf H$; for simplicity we rewrite $\mathbf H^*:=\widehat{\mathbf H}^*_A$.
Denote by $A^*$ the restriction of $A$ to $\mathbf H^*$ ($*=\pm, 0$); then $(A^\pm)^{-1}$ are bounded self-adjoint linear operators on $\mathbf H^\pm$ respectively, satisfying \begin{equation}\label{eq-the norm of the inverse of Apm} \|(A^\pm)^{-1}\|\leq \frac{1}{l}. \end{equation} Then (O.E.) can be rewritten as \begin{equation}\label{eq-decomposition 1 of OE} z^\pm =(A^\pm)^{-1}P^\pm_A F'(z^++z^-+z^0), \end{equation} and \begin{equation}\label{eq-decomposition 2 of OE} A^0z^0=P^0_AF'(z^++z^-+z^0), \end{equation} where $z^*=P^*_Az$ ($*=\pm, 0$); for simplicity, we rewrite $x:=z^0$. From \eqref{eq-the Lipschitz continuity of F'} and \eqref{eq-the norm of the inverse of Apm}, $(A^\pm)^{-1}P^\pm_A F'$ is a contraction on $\mathbf H^+\oplus \mathbf H^-$ for any $x\in\mathbf H^0$. So there is a map $z^\pm(x):\mathbf H^0\to\mathbf H^\pm$ satisfying \begin{equation}\label{eq-zpm(x)} z^\pm(x)=(A^\pm)^{-1}P^\pm F'(z^\pm(x)+x),\;\forall x\in\mathbf H^0, \end{equation} and the following properties. \begin{prop}\label{prop-continuous and property of saddle point reduction} (1) The map $z^\pm(x):\mathbf H^0\to \mathbf H^\pm$ is continuous; in fact we have \[ \|(z^++z^-)(x+h)-(z^++z^-)(x)\|_\mathbf H\leq\frac{l_F}{l-l_F}\|h\|_{\mathbf H},\;\;\forall x,h\in\mathbf H^0. \] (2) $\|(z^++z^-)(x)\|_\mathbf H\displaystyle\leq \frac{l_F}{l-l_F}\|x\|_{\mathbf H}+\frac{1}{l-l_F}\|F'(0)\|_{\mathbf H}$. \end{prop} \noindent\textbf{Proof.} (1) For any $x,\;h\in \mathbf H^0$, writing $z^\pm(x):=z^+(x)+z^-(x)$ and $(A^\pm)^{-1}P^\pm_A:=(A^+)^{-1}P^+_A+(A^-)^{-1}P^-_A$ for simplicity, we have \begin{align*} \|z^\pm(x+h)-z^\pm(x)\|_{\mathbf H} &=\|(A^\pm)^{-1}P^\pm_A F'(z^\pm(x+h)+x+h)-(A^\pm)^{-1}P^\pm_A F'(z^\pm(x)+x)\|_{\mathbf H}\\ &\leq\frac{1}{l}\|F'(z^\pm(x+h)+x+h)-F'(z^\pm(x)+x)\|_{\mathbf H}\\ &\leq\frac{l_F}{l}\|z^\pm(x+h)-z^\pm(x)+h\|_{\mathbf H}\\ &\leq\frac{l_F}{l}\|z^\pm(x+h)-z^\pm(x)\|_{\mathbf H}+\frac{l_F}{l}\|h\|_{\mathbf H}.
\end{align*} So we have $\|z^\pm(x+h)-z^\pm(x)\|_\mathbf H\leq\frac{l_F}{l-l_F}\|h\|_{\mathbf H}$, and the map $z^\pm(x):{\mathbf H}^0\to {\mathbf H}^\pm$ is continuous.\\ (2) Similarly, \begin{align*} \|z^\pm(x)\|_{\mathbf H}&=\|(A^\pm)^{-1}P^\pm_A F'(z^\pm(x)+x)\|_{\mathbf H}\\ &\leq\frac{1}{l}\|F'(z^\pm(x)+x)\|_{\mathbf H}\\ &\leq\frac{1}{l}\|F'(z^\pm(x)+x)-F'(0)\|_{\mathbf H}+\frac{1}{l}\|F'(0)\|_{\mathbf H}\\ &\leq\frac{l_F}{l}(\|z^\pm(x)\|_{\mathbf H}+\|x\|_{\mathbf H})+\frac{1}{l}\|F'(0)\|_{\mathbf H}. \end{align*} So we have $\|z^\pm(x)\|_\mathbf H\leq \frac{l_F}{l-l_F}\|x\|_{\mathbf H}+\frac{1}{l-l_F}\|F'(0)\|_{\mathbf H}$.$ \Box$ \begin{rem}Denote $\mathbf E=D(|A|^{\frac{1}{2}})$, with the norm \[ \|z\|^2_\mathbf E:=\||A|^{\frac{1}{2}}(z^++z^-)\|^2_\mathbf H+\|x\|^2_\mathbf H,\;z\in \mathbf E. \] From \eqref{eq-zpm(x)}, we have $z^\pm(x)\in D(A)\subset\mathbf E$, and\\ (1) the map $z^\pm(x):\mathbf H^0\to \mathbf E$ is continuous, with \begin{equation}\label{eq-uniform continuous of z in E} \|(z^++z^-)(x+h)-(z^++z^-)(x)\|_\mathbf E\leq\frac{l_F \cdot l^\frac{1}{2}}{l-l_F}\|h\|_{\mathbf H},\;\;\forall x,h\in\mathbf H^0; \end{equation} (2) $\|(z^++z^-)(x)\|_\mathbf E\displaystyle\leq \frac{l^\frac{1}{2}}{l-l_F}(l_F\cdot\|x\|_{\mathbf H}+\|F'(0)\|_{\mathbf H})$. \end{rem} \noindent{\bf Proof.} The proof is similar to that of Proposition \ref{prop-continuous and property of saddle point reduction}; we only prove (1).
\begin{align*} \|z^\pm(x+h)-z^\pm(x)\|_{\mathbf E} &=\||A|^{\frac{1}{2}}[z^\pm(x+h)-z^\pm(x)]\|_{\mathbf H}\\ &=\||A|^{\frac{1}{2}}(A^\pm)^{-1}[P^\pm_A F'(z^\pm(x+h)+x+h)-P^\pm_A F'(z^\pm(x)+x)]\|_{\mathbf H}\\ &\leq\frac{1}{l^\frac{1}{2}}\|F'(z^\pm(x+h)+x+h)-F'(z^\pm(x)+x)\|_{\mathbf H}\\ &\leq\frac{l_F}{l^\frac{1}{2}}\|z^\pm(x+h)-z^\pm(x)+h\|_{\mathbf H}\\ &\leq\frac{l_F}{l^\frac{1}{2}}\|z^\pm(x+h)-z^\pm(x)\|_{\mathbf H}+\frac{l_F}{l^\frac{1}{2}}\|h\|_{\mathbf H}\\ &\leq\frac{l_F}{l}\|z^\pm(x+h)-z^\pm(x)\|_{\mathbf E}+\frac{l_F}{l^\frac{1}{2}}\|h\|_{\mathbf H}, \end{align*} where the last inequality uses the fact that $\|z^\pm\|_\mathbf E\geq l^\frac{1}{2}\|z^\pm\|_\mathbf H$; so we have \eqref{eq-uniform continuous of z in E}. Now, define the map $z:\mathbf H^0\to\mathbf H$ by \[ z(x)=x+z^+(x)+z^-(x), \] and define the functional $a:\mathbf H^0\to \mathbb{R}$ by \begin{equation}\label{eq-saddle point reduction} a(x)=\frac{1}{2}(Az(x),z(x))_\mathbf H-F(z(x)),\;x\in \mathbf H^0. \end{equation} By a standard argument, the critical points of $a$ correspond to the solutions of (O.E.), and we have the following lemma. \begin{lem}\label{lem-the smoothness of a} Assume $F$ satisfies ($F_1$); then $a\in C^1(\mathbf H^0,\mathbb{R})$ and \begin{equation}\label{eq-saddle point reduction-the derivative of a} a'(x)=Az(x)-F'(z(x)),\;\;\forall x\in \mathbf H^0. \end{equation} Furthermore, if $F\in C^2(\mathbf H,\mathbb R)$, then $a\in C^2(\mathbf H^0,\mathbb{R})$, for any critical point $x$ of $a$ we have $F''(z(x))\in \mathcal{L}_s(\mathbf H,-b,b)$, and the Morse index $m^-_a(x)$ satisfies \begin{equation}\label{eq-relation between morse index and our index} m^-_a(x_2)-m^-_a(x_1)=i^*_A(F''(z(x_2)))-i^*_A(F''(z(x_1))),\;\;\forall x_1,x_2\in \mathbf H^0.
\end{equation} \end{lem} \noindent{\bf Proof.} For any $x,h\in\mathbf H^0$, write \[ \eta(x,h):=z^+(x+h)+z^-(x+h)-z^+(x)-z^-(x)+h \] for simplicity; that is to say, \[ z(x+h)=z(x)+\eta(x,h),\;\;\forall x,h\in\mathbf H^0, \] and from \eqref{eq-uniform continuous of z in E}, we have \begin{equation}\label{eq-uniform astimate of eta} \|\eta(x,h)\|_\mathbf H\leq C\|h\|_{\mathbf H},\;\;\forall x,h\in\mathbf H^0, \end{equation} where $C=\displaystyle \frac{l+l_F}{l-l_F}$. For $\|h\|_\mathbf H\to 0$ and any $x\in\mathbf H^0$, we have \begin{align*} a(x+h)-a(x)=&\frac{1}{2}[(Az(x+h),z(x+h))_\mathbf H-(Az(x),z(x))_\mathbf H]-[F(z(x+h))-F(z(x))]\\ =&(Az(x),\eta(x,h))_\mathbf H+\frac{1}{2}(A\eta(x,h),\eta(x,h))_\mathbf H\\ &-(F'(z(x)),\eta(x,h))_\mathbf H+o(\|\eta(x,h)\|_{\mathbf H}). \end{align*} From \eqref{eq-uniform astimate of eta} we have \[ a(x+h)-a(x)=(Az(x)-F'(z(x)),\eta(x,h))_\mathbf H+o(\|h\|_\mathbf H),\;\;\forall x\in \mathbf H^0,\; {\rm as}\;\|h\|_\mathbf H\to 0. \] Since $z^\pm(x)$ solves \eqref{eq-zpm(x)}, from the definition of $\eta(x,h)$ we have \[ (Az(x)-F'(z(x)),\eta(x,h))_\mathbf H=(Az(x)-F'(z(x)),h)_\mathbf H,\;\;\forall x, h\in \mathbf H^0, \] so \[ a(x+h)-a(x)=(Az(x)-F'(z(x)),h)_\mathbf H+o(\|h\|_\mathbf H),\;\;\forall x\in \mathbf H^0,\; {\rm as}\;\|h\|_\mathbf H\to 0, \] and we have proved \eqref{eq-saddle point reduction-the derivative of a}. If $F\in C^2(\mathbf H,\mathbb R)$, from \eqref{eq-zpm(x)} and the implicit function theorem, we have $z^\pm \in C^1(\mathbf H^0,\mathbf H^\pm)$. From \eqref{eq-zpm(x)} and \eqref{eq-saddle point reduction-the derivative of a}, we have \[ a'(x)=Ax-P^0_AF'(z(x)) \] and \[ a''(x)=A|_{\mathbf H^0}-P^0_AF''(z(x))z'(x), \] that is to say, $a\in C^2(\mathbf H^0,\mathbb R)$. Finally, from Theorem \ref{thm-relative morse index and spectral flow} above, together with Definition 2.8 and Lemma 2.9 in \cite{Wang-Liu-2016}, we have \eqref{eq-relation between morse index and our index}.
$ \Box$ \subsection{Some abstract critical point theorems} In this part, we will give some abstract critical point theorems for (O.E.) by the method of saddle point reduction introduced above. Since we have Proposition \ref{prop-relations between indexes}, we will not distinguish $i^*_A(B)$ from $i_A(B)$. Besides condition ($F_1$), assume $F$ satisfies the following condition.\\ \noindent ($F_2$) There exist $B_1,B_2\in \mathcal{L}_s(\mathbf H,-b,b)$ and $B:\mathbf H\to \mathcal{L}_s(\mathbf H,-b,b)$ satisfying \[ B_1\leq B_2,\;i_A(B_1)=i_A(B_2),\; {\rm and}\; \nu_A(B_2)=0, \] \[ B_1\leq B(z) \leq B_2,\;\forall z\in\mathbf H, \] such that \[ F'(z)-B(z)z=o(\|z\|_\mathbf H),\;\|z\|_\mathbf H\to\infty. \] Before stating the theorem, we need a lemma. \begin{lem}\label{lem-0 has a positive distance from sigma(A-B)} Let $B_1,B_2\in \mathcal{L}_s(\mathbf H,-b,b)$ with $B_1\leq B_2,\;i_A(B_1)=i_A(B_2)$ and $\nu_A(B_2)=0$. Then there exists $\varepsilon>0$ such that for all $B\in\mathcal{L}_s(\mathbf H)$ with \[ B_1\leq B \leq B_2, \] we have \[ \sigma(A-B)\cap (-\varepsilon,\varepsilon)=\emptyset. \] \end{lem} {\noindent}{\bf Proof.} By the properties of $i_A(B)$, we have $\nu_A(B_1)=0$. So there is $\varepsilon>0$ such that \[ i_A(B_{1,\varepsilon})=i_A(B_1)=i_A(B_2)=i_A(B_{2,\varepsilon}), \] with $B_{1,\varepsilon}=B_1-\varepsilon\cdot I$ and $B_{2,\varepsilon}=B_2+\varepsilon\cdot I$. Since $B_{1,\varepsilon}\leq B-\varepsilon I<B+\varepsilon I\leq B_{2,\varepsilon}$, it follows that $i_A(B-\varepsilon I)=i_A(B+\varepsilon I)$. Note that \[ i_A(B+\varepsilon I)-i_A(B-\varepsilon I)=\sum_{-\varepsilon < t \le \varepsilon } \nu_A(B-t \cdot I). \] We have $0\notin \sigma(A-B-\eta\cdot I),\;\forall \eta\in(-\varepsilon,\varepsilon)$; thus the proof is complete.$ \Box$ \begin{thm}\label{thm-abstract thm 1 for the existence of solution} Assume $A\in\mathcal O^0_e(-b,b)$. If $F$ satisfies conditions ($F_1$) and ($F_2$), then (O.E.) has at least one solution.
\end{thm} \noindent{\bf Proof.} Firstly, for $\lambda\in[0,1]$, consider the following equation \[ Az=(1-\lambda) B_1z+\lambda F'(z).\eqno(O.E.)_\lambda \] We claim that the set of all solutions $(z,\lambda)$ of (O.E.)$_\lambda$ is a priori bounded. If not, assume there exist $\{(z_n,\lambda_n)\}$ satisfying (O.E.)$_{\lambda_n}$ with $\|z_n\|_{\mathbf H}\to\infty$. Without loss of generality, assume $\lambda_n\to\lambda_0\in[0,1]$. Denote \[ F_\lambda(z)=\frac{1-\lambda}{2}(B_1z,z)_{\mathbf H}+\lambda F(z),\;\forall z\in \mathbf H. \] Since $F$ satisfies condition ($F_1$) and $B_1\in \mathcal{L}_s(\mathbf H,-b,b)$, $F'_\lambda:\mathbf H\to\mathbf H$ is Lipschitz continuous with Lipschitz constant less than $b$; that is to say, there exists $\hat{l}\in [l_F,b)$ such that \[ \|F'_\lambda(z+h)-F'_\lambda(z)\|_{\mathbf H}\leq \hat{l}\|h\|_{\mathbf H},\;\forall z,h\in\mathbf H,\;\lambda\in[0,1]. \] Now, consider the projections defined in \eqref{eq-projections} and choose $l\in (\hat{l},b)$ satisfying $-l,l\notin \sigma(A)$. From \eqref{eq-decomposition 1 of OE} and \eqref{eq-decomposition 2 of OE}, we decompose $z_n$ as \[ z_n=z^+_n+z^-_n+x_n, \] with $z^*_n\in\mathbf H^*$ ($*=\pm,0$), where $z^{\pm}_n$ satisfies Proposition \ref{prop-continuous and property of saddle point reduction} with $l_F$ replaced by $\hat{l}$. So we have $\|x_n\|_\mathbf H\to\infty$. Denote \[ y_n=\frac{z_n}{\|z_n\|_\mathbf H} \] and $\bar{B}_n:=(1-\lambda_n)B_1+\lambda_nB(z_n)$; then \begin{equation}\label{eq-the equation of yn} Ay_n=\bar{B}_ny_n+\frac{o(\|z_n\|_\mathbf H)}{\|z_n\|_\mathbf H}. \end{equation} Decompose $y_n=y^{\pm}_n+y^0_n$ with $y^*_n=z^*_n/\|z_n\|_\mathbf H$; we have \begin{align*} \|y^0_n\|_\mathbf H&=\frac{\|x_n\|_\mathbf H}{\|z_n\|_\mathbf H}\\ &\geq\frac{\|x_n\|_\mathbf H}{\|x_n\|_\mathbf H+\|z^+_n+z^-_n\|_\mathbf H}\\ &\geq\frac{(l-\hat{l})\|x_n\|_\mathbf H}{l\|x_n\|_\mathbf H+\|F'_{\lambda_n}(0)\|_\mathbf H}.
\end{align*} That is to say, \begin{equation}\label{eq-y0n not to 0} \|y^0_n\|_\mathbf H\geq c>0 \end{equation} for some constant $c$ and all $n$ large enough. Since $B_1\leq B(z)\leq B_2$, we have $B_1\leq \bar{B}_n\leq B_2$. Let $\mathbf H=\mathbf H^+_{A-\bar B_n}\bigoplus \mathbf H^-_{A-\bar B_n}$, where $A-\bar B_n$ is positive definite on $\mathbf H^+_{A-\bar B_n}$ and negative definite on $\mathbf H^-_{A-\bar B_n}$, respectively. Re-decompose $y_n=\bar{y}^+_n+\bar{y}^-_n$ with respect to $\mathbf H^+_{A-\bar B_n}$ and $\mathbf H^-_{A-\bar B_n}$. From Lemma \ref{lem-0 has a positive distance from sigma(A-B)} and \eqref{eq-the equation of yn}, we have \begin{align}\label{eq-y0n to 0} \| y^0_n\|^2_\mathbf H&\leq \| y_n\|^2_\mathbf H\nonumber\\ &\leq \frac{1}{\varepsilon} ((A-\bar{B}_n)y_n,\bar{y}^+_n-\bar{y}^-_n)_\mathbf H\nonumber\\ &\leq \frac{1}{\varepsilon} \frac{o(\|z_n\|_\mathbf H)}{\|z_n\|_\mathbf H}\|y_n\|_\mathbf H. \end{align} Since $\|z_n\|_\mathbf H\to \infty$ and $\|y_n\|_\mathbf H=1$, we get $\|y^0_n\|_\mathbf H\to 0$, which contradicts \eqref{eq-y0n not to 0}; hence $\{z_n\}$ is bounded. Secondly, we apply topological degree theory to complete the proof. Since the solutions of (O.E.)$_\lambda$ are bounded, there is a number $R>0$ large enough such that all solutions $z_\lambda$ of (O.E.)$_\lambda$ lie in the ball $B(0,R):=\{z\in \mathbf H\,|\, \|z\|_\mathbf H<R\}$. So we have the Brouwer degree \[ \deg (a'_1,B(0,R)\cap \mathbf H^0,0)= \deg (a'_0,B(0,R)\cap \mathbf H^0,0)\neq 0, \] where $a_\lambda(x)=\frac{1}{2}(Az_\lambda(x),z_\lambda(x))_\mathbf H-F_\lambda(z_\lambda(x))$, $\lambda\in[0,1]$. That is to say, (O.E.) has at least one solution. $ \Box$ In Theorem \ref{thm-abstract thm 1 for the existence of solution}, the non-degeneracy condition on $B(z)$ is important for the boundedness of the solutions. The following theorem does not need this non-degeneracy condition; the idea is from \cite{Ji-2018}.
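\begin{rem} For the reader's convenience, we sketch why the degree at $\lambda=0$ in the proof above is non-zero; this is the standard argument, recorded here for orientation. For $\lambda=0$, equation (O.E.)$_0$ is the linear equation $Az=B_1z$; since $\nu_A(B_1)=0$, its only solution is $z=0$, and $a_0$ is a nondegenerate quadratic form on the finite-dimensional space $\mathbf H^0$. Hence \[ \deg (a'_0,B(0,R)\cap \mathbf H^0,0)=(-1)^{m_0}\neq 0, \] where $m_0$ denotes the number of negative eigenvalues of the quadratic form $a_0$ on $\mathbf H^0$. \end{rem}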
\begin{thm}\label{thm-abstract thm 3} Assume $A\in\mathcal O^0_e(-b,b)$ and $F$ satisfies condition ($F_1$) and the following condition.\\ ($F^\pm_2$) There exist $M>0$ and $B_\infty\in\mathcal{L}_s(\mathbf H, -b,b)$ such that \[ F'(z)=B_\infty z+r(z), \] with \[ \|r(z)\|_\mathbf H\leq M,\;\;\forall z\in\mathbf H, \] and \begin{equation}\label{eq-condition of r in ab-thm 3} (r(z),z)_\mathbf H\to\pm\infty,\;\;\|z\|_\mathbf H\to\infty. \end{equation} Then (O.E.) has at least one solution. \end{thm} \noindent{\bf Proof.} If $0\not\in \sigma(A-B_\infty)$, then the result follows by the same method as in Theorem \ref{thm-abstract thm 1 for the existence of solution}. So we assume $0\in \sigma(A-B_\infty)$, and we only consider the case of ($F^-_2$). Since $0$ is an isolated eigenvalue of $A-B_\infty$ with finite-dimensional eigenspace (see \cite{Wang-Liu-2016} for details), there exists $\eta>0$ such that \[ (-\eta,0)\cap\sigma(A-B_\infty)=\emptyset. \] For any $\varepsilon\in (0,\eta)$, we have $0\not\in \sigma(\varepsilon+A-B_\infty)$. Thus, by the same method as in Theorem \ref{thm-abstract thm 1 for the existence of solution}, one can show that there exists $z_\varepsilon\in \mathbf H$ satisfying the following equation \begin{equation}\label{eq-equation of z-varepsilon} \varepsilon z_\varepsilon+(A-B_\infty)z_\varepsilon=r(z_\varepsilon). \end{equation} We divide the rest of the proof into two steps; $C$ denotes various constants independent of $\varepsilon$. {\bf Step 1. We claim that $\|z_\varepsilon\|_\mathbf H\leq C$.
} Since $z_\varepsilon$ satisfies the above equation, we have \begin{align*} \varepsilon (z_\varepsilon,z_\varepsilon)_\mathbf H&=-((A-B_\infty)z_\varepsilon,z_\varepsilon)_\mathbf H+(r(z_\varepsilon),z_\varepsilon)_\mathbf H\\ &\leq \frac{1}{\eta}\|(A-B_\infty)z_\varepsilon\|^2_\mathbf H+M\|z_\varepsilon\|_\mathbf H\\ &=\frac{1}{\eta}\|\varepsilon z_\varepsilon-r(z_\varepsilon)\|^2_\mathbf H+M\|z_\varepsilon\|_\mathbf H\\ &\leq \frac{\varepsilon^2}{\eta}\|z_\varepsilon\|^2_\mathbf H+C\|z_\varepsilon\|_\mathbf H +C. \end{align*} So we have \[ \varepsilon\|z_\varepsilon\|_\mathbf H\leq C. \] Therefore \begin{equation}\label{eq-boundedness of z-1} \|(A-B_\infty)z_\varepsilon\|_\mathbf H=\|\varepsilon z_\varepsilon-r(z_\varepsilon)\|_\mathbf H\leq C. \end{equation} Now, consider the orthogonal splitting as defined in \eqref{eq-decomposition of space H}, \[ \mathbf H=\mathbf H^0_{A-B_\infty}\oplus\mathbf H^*_{A-B_\infty}, \] where $\mathbf H^0_{A-B_\infty}=\ker(A-B_\infty)$ and $\mathbf H^*_{A-B_\infty}$ is the orthogonal complement of $\mathbf H^0_{A-B_\infty}$. Let $z_\varepsilon=u_\varepsilon+v_\varepsilon$ with $u_\varepsilon\in \mathbf H^0_{A-B_\infty}$ and $v_\varepsilon\in \mathbf H^*_{A-B_\infty}$. Since $0$ is an isolated point in $\sigma(A-B_\infty)$, from \eqref{eq-boundedness of z-1} we have \begin{equation}\label{eq-boundedness of z-2} \|v_\varepsilon\|_\mathbf H\leq C. \end{equation} Additionally, since $r(z)$ and $v_\varepsilon$ are bounded, we have \begin{align}\label{eq-boundedness of z-3} (r(z_\varepsilon),z_\varepsilon)_\mathbf H&=(r(z_\varepsilon),v_\varepsilon)_\mathbf H+(r(z_\varepsilon),u_\varepsilon)_\mathbf H\nonumber\\ &=(r(z_\varepsilon),v_\varepsilon)_\mathbf H+(\varepsilon z_\varepsilon+(A-B_\infty)z_\varepsilon,u_\varepsilon)_\mathbf H\nonumber\\ &=(r(z_\varepsilon),v_\varepsilon)_\mathbf H+\varepsilon(u_\varepsilon,u_\varepsilon)_\mathbf H\nonumber\\ &\geq -C.
\end{align} Therefore, from \eqref{eq-condition of r in ab-thm 3}, $\|u_\varepsilon\|_\mathbf H$ is bounded in $\mathbf H$, and we have proved the boundedness of $\|z_\varepsilon\|_\mathbf H$. {\bf Step 2. Passing to a sequence $\varepsilon_n\to 0$, there exists $z\in\mathbf H$ such that \[ \displaystyle\lim_{\varepsilon_n\to 0}\|z_{\varepsilon_n}-z\|_\mathbf H=0. \] } In contrast to the splitting above, we now recall the projections $P^-_A,\;P^0_A$ and $P^+_A$ defined in \eqref{eq-projections} and the splitting $\mathbf H=\mathbf H^-\oplus\mathbf H^0\oplus\mathbf H^+$ with $\mathbf H^*=P^*_A\mathbf H$ ($*=\pm,0$). So $z_\varepsilon$ has the corresponding splitting \[ z_\varepsilon=z_\varepsilon^++z_\varepsilon^-+z_\varepsilon^0, \] with $z_\varepsilon^*\in \mathbf H^*$, respectively. Since $\mathbf H^0$ is a finite-dimensional space and $\|z_\varepsilon\|_\mathbf H\leq C$, there exist a sequence $\varepsilon_n\to 0$ and $z^0\in \mathbf H^0$ such that \[ \displaystyle\lim_{n\to\infty}z^0_{\varepsilon_n}=z^0. \] For simplicity, we write $z^*_n:=z^*_{\varepsilon_n}$, $A_n:=\varepsilon_n+A$ and $A^\pm_n:=A_n|_{\mathbf H^\pm} $. Since $z_\varepsilon$ satisfies \eqref{eq-equation of z-varepsilon}, we have \[ z^\pm_n=(A_n^\pm)^{-1}P^\pm_A F'(z^+_n+z^-_n +z^0_n). \] Since $F$ satisfies ($F_1$), arguing as in Proposition \ref{prop-continuous and property of saddle point reduction}, for $n$ and $m$ large enough we have \begin{align*} \|z^\pm_n-z^\pm_m\|_\mathbf H=&\|(A_n^\pm)^{-1}P^\pm_A F'(z_n)-(A_m^\pm)^{-1}P^\pm_A F'(z_m)\|_\mathbf H \\ \leq&\|(A_n^\pm)^{-1}P^\pm_A (F'(z_n)-F'(z_m))\|_\mathbf H+\|((A_n^\pm)^{-1}-(A_m^\pm)^{-1})P^\pm_A F'(z_m)\|_\mathbf H \\ \leq&\frac{l_F}{l}\|z_n-z_m\|_\mathbf H+\|((A_n^\pm)^{-1}-(A_m^\pm)^{-1})P^\pm_A F'(z_m)\|_\mathbf H.
\end{align*} Since $(A_n^\pm)^{-1}-(A_m^\pm)^{-1}=(\varepsilon_m-\varepsilon_n)(A_n^\pm)^{-1}(A_m^\pm)^{-1}$ and the $z_n$ are bounded in $\mathbf H$, we have \[ \|((A_n^\pm)^{-1}-(A_m^\pm)^{-1})P^\pm_A F'(z_m)\|_\mathbf H=o(1),\;\;n,m\to\infty. \] So we have \[ \|z^\pm_n-z^\pm_m\|_\mathbf H\leq \frac{l_F}{l-l_F}\|z^0_n-z^0_m\|_\mathbf H+o(1),\;\;n,m\to\infty; \] therefore, there exists $z^\pm\in\mathbf H^\pm$ such that $\displaystyle\lim_{n\to\infty}\|z^\pm_n- z^\pm\|_\mathbf H=0$. Thus, we have \[ \displaystyle\lim_{n\to\infty}\|z_{\varepsilon_n}-z\|_\mathbf H=0, \] with $z=z^-+z^++z^0$. Finally, letting $n\to\infty$ in \eqref{eq-equation of z-varepsilon}, we see that $z$ is a solution of (O.E.).$ \Box$ \begin{thm}\label{thm-abstract thm 2 for the multiplicity of solutions} Assume $A\in\mathcal O^0_e(-b,b)$, $F$ satisfies ($F_1$) with $\pm l_F\not\in\sigma(A)$ and one of the following conditions:\\ ($F^+_3$) There exist $B_3\in\mathcal{L}_s(\mathbf{H},-b,b)$ and $C\in\mathbb{R}$ such that \[ B_3>\beta:=\max\{\lambda\,|\,\lambda\in \sigma(A)\cap(-\infty,l_F)\}, \] with \[ F(z)\geq\frac{1}{2}(B_3z,z)_\mathbf{H}-C,\;\;\forall z\in \mathbf H. \] Or ($F^-_3$) There exist $B_3\in\mathcal{L}_s(\mathbf{H},-b,b)$ and $C\in\mathbb{R}$ such that \[ B_3<\alpha:=\min\{\lambda\,|\,\lambda\in \sigma(A)\cap(-l_F,\infty)\}, \] with \[ F(z)\leq\frac{1}{2}(B_3z,z)_\mathbf{H}+C,\;\;\forall z\in \mathbf H. \] Then (O.E.) has at least one solution. Furthermore, assume $F$ satisfies \\ ($F^\pm_4$) $F\in C^2(\mathbf H,\mathbb{R})$, $F'(0)=0$ and there exists $B_0\in\mathcal{L}_s(\mathbf{H},-b,b)$ with \begin{equation}\label{eq-twisted condition 1 in abstract thm 2} \pm(i_A(B_0)+\nu_A(B_0))<\pm i_A(B_3), \end{equation} such that \[ F'(z)=B_0z+o(\|z\|_\mathbf H),\;\;\|z\|_\mathbf H\to 0. \] Then (O.E.) has at least one nontrivial solution. Additionally, if \begin{equation}\label{eq-twisted condition 2 in abstract thm 2} \nu_A(B_0)=0, \end{equation} then (O.E.)
has at least two nontrivial solutions.\end{thm} \noindent{\bf Proof.} We only consider the case of ($F^+_3$). According to the saddle point reduction, since $\pm l_F\not\in\sigma(A)$, we can choose $l\in(l_F,b)$ in \eqref{eq-projections} satisfying \[ [-l,-l_F]\cap\sigma(A)=\emptyset=[l_F,l]\cap\sigma(A). \] We turn to the function \[ a(x)=\frac{1}{2}(Az(x),z(x))-F(z(x)), \] where $z(x)=x+z^+(x)+z^-(x)$, $x\in \mathbf H^0$ and $z^\pm\in \mathbf H^\pm$. Denote $w(x)=x+z^-(x)$ and write $z=z(x)$, $w=w(x)$ for simplicity. We have \begin{equation}\label{eq-eq 1 in abstract thm 2} a(x)=\left\{\frac{1}{2}(Aw,w)-F(w)\right\}+\left\{\frac{1}{2}[(Az,z)-(Aw,w)]-[F(z)-F(w)]\right\}. \end{equation} By condition ($F^+_3$), we obtain \begin{equation}\label{eq-eq 2 in abstract thm 2} \frac{1}{2}(Aw,w)-F(w)\leq \frac{1}{2}((\beta-B_3)w,w)_\mathbf{H}+C, \end{equation} and the terms in the second bracket are equal to \begin{align}\label{eq-eq 3 in abstract thm 2} &\frac{1}{2}(Az^+,z^+)-\int^{1}_{0}(F'(sz^++w),z^+)ds\nonumber\\ =&\frac{1}{2}(Az^+,z^+)-(F'(z^++w),z^+)+\int^{1}_{0}(F'(z^++w)-F'(sz^++w),z^+)ds\nonumber\\ =&-\frac{1}{2}(Az^+,z^+)+\int^{1}_{0}(F'(z^++w)-F'(sz^++w),z^+)ds\nonumber\\ \leq&-\frac{1}{2}(Az^+,z^+)+\int^{1}_{0}(1-s)ds\cdot l_F\cdot\|z^+\|^2_\mathbf H\nonumber\\ \leq& -\frac{l-l_F}{2}\|z^+\|^2_\mathbf H, \end{align} where the second equality follows from the fact that $Az^+=P^+F'(z^++w)$. From \eqref{eq-eq 1 in abstract thm 2}, \eqref{eq-eq 2 in abstract thm 2} and \eqref{eq-eq 3 in abstract thm 2} we have \begin{align*} a(x)&\leq \frac{1}{2}((\beta-B_3)w,w)_\mathbf{H} -\frac{l-l_F}{2}\|z^+\|^2_\mathbf H+C\\ &\to-\infty,\;{\rm as}\;\|x\|_\mathbf H\to\infty. \end{align*} Thus the function $-a(x)$ is bounded from below and satisfies the (PS) condition. So the maximum of $a$ exists, and the maximum points are critical points of $a$. In order to prove the second part, similarly, we only consider the case of ($F^+_3$) and ($F^+_4$).
We only need to observe that, by \eqref{eq-twisted condition 1 in abstract thm 2}, $0$ is not a maximum point, so the maximum points found above are not $0$. Finally, if \eqref{eq-twisted condition 2 in abstract thm 2} is satisfied, we can use the classical three critical point theorem, since $0$ is neither a maximum point nor degenerate, and the proof is complete. $ \Box$ \begin{rem} (A). Theorem \ref{thm-abstract thm 2 for the multiplicity of solutions} generalizes \cite[IV, Theorem 2.3]{Chang-1993}. In the first part of our theorem, we do not need $F$ to be $C^2$.\\ (B). Theorem \ref{thm-abstract thm 2 for the multiplicity of solutions} is different from our former result in \cite[Theorem 3.6]{Wang-Liu-2016}. Here, we need the Lipschitz condition to make the method of saddle point reduction valid, whereas in \cite[Theorem 3.6]{Wang-Liu-2016} we needed convexity in order to use the method of dual variation. \end{rem} \section{Applications to the one-dimensional wave equation}\label{section-applications} In this section, we consider the following one-dimensional wave equation \[ \left\{\begin{array}{ll} \Box u\equiv u_{tt}-u_{xx}=f(x, t, u),\\ u(0,t)=u(\pi, t)= 0, \\ u(x, t+T)=u(x, t),\\ \end{array} \right.\forall (x, t)\in[0,\pi]\times S^1, \eqno(W.E.) \] where $T>0$, $S^1:=\mathbb{R}/T\mathbb{Z}$ and $f:[0,\pi]\times S^1\times\mathbb{R}\to\mathbb{R}$. In what follows we assume systematically that $T$ is a rational multiple of $\pi$; thus there exist coprime integers $(p,q)$ such that \[ T=\frac{2\pi q}{p}. \] Let \[ L^2:=\left\{u\,\Big|\,u=\sum_{j\in\mathbb{N}^+,k\in\mathbb{Z}} u_{j,k}\sin jx\exp ik\frac{p}{q}t\right\}, \] where $i=\sqrt{-1}$ and $u_{j,k}\in \mathbb{C}$ with $u_{j,k}=\bar{u}_{j,-k}$. Its inner product is \[ (u,v)_2=\sum_{j\in\mathbb{N}^+,k\in\mathbb{Z}}(u_{j,k},\bar{v}_{j,k}),\;u,v\in L^2, \] and the corresponding norm is \[ \|u\|^2_2=\sum_{j\in\mathbb{N}^+,k\in\mathbb{Z}}|u_{j,k}|^2,\;u\in L^2.
\] Consider $\Box$ as an unbounded self-adjoint operator on $L^2$. Its spectrum is \[ \sigma(\Box)=\{(q^2j^2-p^2k^2)/q^2\,|\,j\in\mathbb{N}^+,k\in\mathbb{Z}\}. \] It is easy to see that $\Box$ has exactly one essential spectrum point, $\lambda_0=0$. Let $\Omega:=[0,\pi]\times S^1$ and assume that $f$ satisfies the following conditions. \noindent($f_1$) $f\in C(\Omega\times\mathbb{R},\mathbb{R})$, and there exist $ b\neq 0$ and $l_F\in(0,|b|)$ such that \[ |f_{ b}(x,t,u+v)-f_{ b}(x,t,u)|\leq l_F|v|,\;\;\forall (x,t)\in \Omega,\;u,v\in\mathbb{R}, \] where \[ f_{ b}(x,t,u):=f(x,t,u)-bu,\;\;\forall (x,t,u)\in\Omega\times \mathbb{R}. \] Let the working space be $\mathbf H:=L^2$ and the operator $A:=\Box-b\cdot I$, with $I$ the identity map on $\mathbf H$. Thus we have $A\in\mathcal O^0_e(-|b|,|b|)$. Denote by $L^\infty:=L^\infty(\Omega, \mathbb{R})$ the set of all essentially bounded functions. For any $g\in L^\infty$, it is easy to see that $g$ determines a bounded self-adjoint operator on $L^2$ by \[ u(x,t)\mapsto g(x,t)u(x,t),\;\;\forall u\in L^2; \] without confusion, we still denote this operator by $g$, that is to say, we have the continuous embedding $L^\infty\hookrightarrow \mathcal L_s(\mathbf H)$. Thus for any $g\in L^\infty\cap \mathcal L_s(\mathbf H,-|b|,|b|)$, we have the index pair ($i_A(g),\nu_A(g)$). Besides, for any $g_1,g_2\in L^\infty$, $g_1\leq g_2$ means that \[ g_1(x,t)\leq g_2(x,t),\;{\rm a.e.}\; (x,t)\in\Omega. \] \noindent ($f_2$) There exist $g_1,g_2\in L^\infty\cap \mathcal L_s(\mathbf H,-|b|,|b|)$ and $g\in L^\infty(\Omega\times \mathbb{R},\mathbb{R})$, with \[ g_1\leq g_2,\;i_A(g_1)=i_A(g_2),\;\nu_A(g_2)=0, \] \[ g_1(x,t)\leq g(x,t,u)\leq g_2(x,t),\;\;\;{\rm a.e.}\; (x,t,u)\in\Omega\times\mathbb{R}, \] such that \[ f_{ b}(x,t,u)-g(x,t,u)u=o(|u|),\;|u|\to\infty,\;{\rm uniformly\ for\ }(x,t)\in \Omega. \] We have the following results. \begin{thm}\label{thm-application-1} Assume that $T$ is a rational multiple of $\pi$ and $f$ satisfies ($f_1$) and ($f_2$). Then (W.E.) has a weak solution.
\end{thm} \noindent{\bf Proof of Theorem \ref{thm-application-1}.} Let \[ \mathcal{F}_b(x,t,u):=\int^u_0f_b(x,t,s)ds,\;\;\forall(x,t,u)\in\Omega\times\mathbb{R}, \] and \begin{equation}\label{eq-definition of F} F(u):=\int_\Omega \mathcal F_b(x,t,u(x,t))dxdt,\;\;\forall u\in \mathbf H. \end{equation} It is easy to verify that $F$ satisfies conditions ($F_1$) and ($F_2$) if $f$ satisfies conditions ($f_1$) and ($f_2$). Thus, by Theorem \ref{thm-abstract thm 1 for the existence of solution}, the proof is complete.$ \Box$ Here, we give an example of Theorem \ref{thm-application-1}. \begin{eg} For any $b\neq 0$, assume $\alpha,\beta\in (-|b|,|b|)$ and $ [\alpha,\beta]\cap \sigma(\Box-b)=\emptyset. $ Let \[ g(x,t,u):=\displaystyle\frac{\beta-\alpha}{2}\sin[\varepsilon_1\ln (|x|+|t|+|u|+1)]+\frac{\alpha+\beta}{2}, \] and let $h\in C(\mathbb{R},\mathbb{R})$ be Lipschitz continuous with \[ h(u)=o(|u|),\;\;|u|\to\infty. \] Then \[ f(x,t,u):=bu+g(x,t,u)u+\varepsilon_2h(u) \] satisfies conditions ($f_1$) and ($f_2$) for $\varepsilon_1,\varepsilon_2>0$ small enough. \end{eg} \begin{thm}\label{thm-application-3} Assume that $T$ is a rational multiple of $\pi$, and $f$ satisfies ($f_1$) and the following condition.\\ ($f^\pm_2$) There exists $g_\infty(x,t)\in L^\infty\cap\mathcal{L}_s(\mathbf H, -|b|,|b|)$ with \[ |f_b(x,t,u)-g_{\infty}(x,t)u|\leq M_{1},\;\;\forall (x,t,u)\in\Omega\times\mathbb R, \] and \begin{equation}\label{eq-condition of r} \pm(f_b(x,t,u)-g_{\infty}(x,t)u)u\geq c|u|,\;\;\forall (x,t,u)\in\Omega\times(\mathbb R\setminus[-M_{2},M_{2}]), \end{equation} where $M_1,\;M_2,\;c>0$ are constants. Then (W.E.) has a weak solution. \end{thm} \noindent{\bf Proof.} We only consider the case of ($f^{-}_{2}$). Let $r(x,t,u):=f_b(x,t,u)-g_{\infty}(x,t)u$; then the corresponding operator $r$ is bounded in $\mathbf H$. Generally speaking, from \eqref{eq-condition of r} we cannot deduce \eqref{eq-condition of r in ab-thm 3}, so we cannot use Theorem \ref{thm-abstract thm 3} directly.
Checking the proof of Theorem \ref{thm-abstract thm 3}, in Step 1 condition \eqref{eq-condition of r in ab-thm 3} was used, once \eqref{eq-boundedness of z-2} had been obtained, only to get the boundedness of $z^0_{\varepsilon}$. Now, with \eqref{eq-condition of r}, we can also get the boundedness of $z^0_{\varepsilon}$ from \eqref{eq-boundedness of z-2}. Recall that $\mathbf H=L^{2}(\Omega)$ in this section; from the boundedness of $z^{\pm}_{\varepsilon}$ in $\mathbf H$, we have the boundedness of $z^{\pm}_{\varepsilon}$ in $L^{1}(\Omega)$. On the other hand, since $\ker (A-g_{\infty})$ is a finite-dimensional space, if $\|z^{0}_{\varepsilon}\|_{\mathbf H}\to\infty$, then $\|z^{0}_{\varepsilon}\|_{L^{1}}\to\infty$, and thus $\|z_{\varepsilon}\|_{L^{1}}\to\infty$. Therefore, we obtain a contradiction from \eqref{eq-boundedness of z-2} and \eqref{eq-condition of r}, which proves the boundedness of $z^0_{\varepsilon}$. The rest of the proof is similar to that of Theorem \ref{thm-abstract thm 3}, and we omit it here.$ \Box$ \begin{eg} Here we give an example of Theorem \ref{thm-application-3}. For any $b\neq 0$ and $g_\infty\in C(\Omega)$ with \[ \|g_\infty\|_{C(\Omega)}<|b|, \] let $r(u)=\varepsilon\arctan u$. Then \[ f(x,t,u):=bu+g_\infty(x,t)u\pm r(u) \] satisfies the conditions of Theorem \ref{thm-application-3} for $\varepsilon>0$ small enough. \end{eg} Now, in order to use Theorem \ref{thm-abstract thm 2 for the multiplicity of solutions}, we assume that $f$ satisfies the following conditions. \noindent ($f^{\pm}_3$) There exists $g_3(x,t)\in L^\infty\cap\mathcal{L}_s(\mathbf H, -|b|,|b|)$, with \[ \pm g_3(x,t)>\max\{\lambda\,|\,\lambda\in \sigma(\pm A)\cap (-\infty,l_F)\}, \] such that \[ \pm\mathcal{F}_b(x,t,u)\geq \frac{1}{2}(g_3(x,t)u,u)+c,\;\;\forall (x,t,u)\in\Omega\times\mathbb{R}, \] for some $c\in\mathbb{R}$.
\noindent ($f^\pm_4$) $f\in C^1(\Omega\times \mathbb{R},\mathbb{R})$, $f(x,t,0)\equiv 0,\;\forall(x,t)\in\Omega$, and \[ g_0(x,t):=f'_b(x,t,0),\;\;\forall (x,t)\in\Omega, \] with \[ \pm (i_A(g_0)+\nu_A(g_0))<\pm i_A(g_3). \] We have the following result. \begin{thm}\label{thm-application-2} Assume that $T$ is a rational multiple of $\pi$. \\ (A.) If $f$ satisfies conditions ($f_1$) and ($f^+_3$) (or ($f^-_3$)), then (W.E.) has at least one solution.\\ (B.) Furthermore, if $f$ satisfies condition ($f^+_4$) (or ($f^-_4$)), then (W.E.) has at least one nontrivial solution. Additionally, if $\nu_A(g_0)=0$, then (W.E.) has at least two nontrivial solutions. \end{thm} The proof consists in verifying the conditions of Theorem \ref{thm-abstract thm 2 for the multiplicity of solutions}; we only verify the smoothness of $F(u)$ defined in \eqref{eq-definition of F}. From condition ($f_1$) and $f\in C^1(\Omega\times\mathbb{R})$, the derivative $f'_b(x,t,u)$ of $f_b$ with respect to $u$ satisfies \begin{equation}\label{eq-the boundedness of f'_b} |f'_b(x,t,u)|\leq l_F,\;\;\forall (x,t,u)\in\Omega\times\mathbb{R}. \end{equation} For any $u,v\in\mathbf{H}$, \begin{align*} F'(u+v)-F'(u)&=f_b(x,t,u+v)-f_b(x,t,u)\\ &=f'_b(x,t,u)v+(f'_b(u+\xi v)-f'_b(u))v. \end{align*} From \eqref{eq-the boundedness of f'_b}, we have $f'_b(u+\xi v)-f'_b(u)\in \mathbf{H}$ and \[ \displaystyle\lim_{\|v\|_\mathbf{H}\to 0}\|f'_b(u+\xi v)-f'_b(u)\|_\mathbf{H}=0,\;\;\forall u\in\mathbf{H}. \] That is to say, $F''(u)=f'_b(x,t,u)$ and $F\in C^2(\mathbf{H},\mathbb{R})$. \begin{eg} In order to give an example for Theorem \ref{thm-application-2}, write \begin{equation}\label{eq-spectrum of Box} \sigma(\Box)=\mathop{\cup}\limits_{n\in\mathbb{Z}}\{\lambda_n\}, \end{equation} with $\lambda_0=0$ and $\lambda_n<\lambda_{n+1}$ for all $n\in\mathbb{Z}$. Choose any $k\in\{2,3,\cdots\}$. Let \[ g_0(x,t)\in C(\Omega,[\alpha,\beta]),\;{\rm with}\;[\alpha,\beta]\subset(0,\lambda_k), \] and let $h\in C(\mathbb{R},\mathbb{R})$ be as defined above.
Define \[ g(x,t,u):=g_0(x,t)+\displaystyle(\lambda_k-g_0(x,t)-\varepsilon_1)\frac{2}{\pi}\arctan(\varepsilon_1u^2); \] then \[ f(x,t,u):=g(x,t,u)u+\varepsilon_2 h(u) \] satisfies conditions ($f_1$) and ($f^+_3$) with $b=\frac{\lambda_{k}}{2}$ and $\varepsilon_1,\varepsilon_2>0$ small enough. Furthermore, if $g_0,h$ are $ C^1$ and $\beta<\lambda_{k-1}$, then condition ($f^+_4$) is satisfied. Additionally, if $[\alpha,\beta]\cap\sigma(\Box)=\emptyset$, then $\nu_A(g_0)=0$. \end{eg} \begin{rem} We can also use Theorems \ref{thm-abstract thm 1 for the existence of solution}, \ref{thm-abstract thm 3} and \ref{thm-abstract thm 2 for the multiplicity of solutions} to consider the radially symmetric solutions of the $n$-dimensional wave equation: \[ \left\{\begin{array}{ll} \Box u\equiv u_{tt}-\vartriangle_x u=h(x,t,u), &t\in\mathbb{R},\;x\in B_R,\\ u(x,t)= 0, &t\in\mathbb{R},\;x\in \partial B_R,\\ u(x,t+T)=u(x,t), &t\in\mathbb{R},\;x\in B_R,\\ \end{array} \right. \eqno(n \textendash W.E.) \] where $B_R=\{x\in\mathbb{R}^n,|x|<R\}$, $\partial B_R=\{x\in\mathbb{R}^n,|x|=R\}$, $n>1$ { and the nonlinear term $h$ is $T$-periodic in the variable $t$}. The restriction to radial symmetry allows us to know the nature of the spectrum of the wave operator. Let $r=|x|$ and {$S^1:=\mathbb{R}/T\mathbb{Z}$; if $h(x,t,u)=h(r,t,u)$,} then the $n$-dimensional wave equation ($n$\textendash W.E.) can be transformed into: \[ \left\{\begin{array}{ll} A_0u:=u_{tt}-u_{rr}-\frac{n-1}{r}u_r=h(r,t,u),\\ u(R,t)=0,\; \\ u(r,0)=u(r,T),\;u_t(r,0)=u_t(r,T), \end{array} \right. \;\;\;(r,t)\in{\Omega:=[0,R]\times S^1}.\eqno(RS\textendash W.E.) \] The operator $A_0$ is symmetric on $L^2(\Omega,\rho)$, where $\rho=r^{n-1}$ and \[ L^2(\Omega,\rho):=\left\{u\,\Big|\,\|u\|^2_{L^2(\Omega,\rho)}:=\int_\Omega|u(t,r)|^2r^{n-1}dtdr<\infty\right\}.
\] By the asymptotic properties of the Bessel functions (see \cite{Watson-1952}), the spectrum of the wave operator can be characterized (see \cite[Theorem 2.1]{Schechter-1998}). Under some additional assumptions, the self-adjoint extension of $A_0$ has no essential spectrum, and we can get more solutions of (RS\textendash W.E.). \end{rem} \end{document}
\begin{document} \begin{center} \large \textbf{On knot groups acting on trees} \normalsize \textsc{F.~A.~Dudkin$^1$, A.~S.~Mamontov$^1$} \footnotetext[1]{The work was supported by the program of fundamental scientific researches of the SB RAS No I.1.1., project No 0314-2016-0001} \end{center} {\bf Abstract:} A finitely generated group $G$ acting on a tree with infinite cyclic edge and vertex stabilizers is called a generalized Baumslag--Solitar group {\it (GBS group)}. We prove that a 1-knot group $G$ is a GBS group iff $G$ is a torus-knot group, and we describe all $n$-knot GBS groups for $n\geqslant 3$. {\bf Keywords:} Knot group, GBS group, group acting on a tree, torus-knot group. \section{Introduction} An {\it $n$-knot group} is the fundamental group $\pi_1 (S^{n+2}-K^n,a)$ for an $n$-knot $K^n$ in the $(n+2)$-dimensional sphere $S^{n+2}$. Starting from a knot, it is possible to construct a Wirtinger presentation for its group with relations of the form $x_i^w=x_j$, where $x_i, x_j$ are letters and $w$ is a word \cite{Kaw}. Now, let $G$ be a group given by a set of generators and relations. When is it a knot group? Treating this just as a question of how to transform relations to the desired form is unfruitful. So, to approach the question, we involve some known properties of knot groups and restrict the class of groups. A finitely generated group $G$ acting on a tree with infinite cyclic edge and vertex stabilizers is called a generalized Baumslag--Solitar group {\it (GBS group)} \cite{forester2002}. By the Bass--Serre theorem, $G$ is representable as $\pi_1(\mathbb{B})$, the fundamental group of a graph of groups $\mathbb{B}$ (see \cite{Serr}) with infinite cyclic edge and vertex groups. GBS groups are important examples of JSJ decompositions. JSJ decompositions first appeared in 3-dimensional topology with the theory of the characteristic submanifold by Jaco--Shalen and Johannson.
These topological ideas were carried over to group theory by Kropholler for some Poincar\'{e} duality groups of dimension at least 3, and by Sela for torsion-free hyperbolic groups. In this group-theoretical context, one has a finitely generated group $G$ and a class of subgroups $\mathcal{A}$ (such as cyclic groups, abelian groups, etc.), and one tries to understand splittings (i.e. graph of groups decompositions) of $G$ over groups in $\mathcal{A}$ (see \cite{GuLe}). Given a $GBS$ group $G$, we can present the corresponding graph of groups by a labeled graph $\mathbb{A}=(A,\lambda)$, where $A$ is a finite connected graph (with endpoint functions $\partial_0,\partial_1\colon E(A)\to V(A)$) and $\lambda\colon E(A)\to \mathbb{Z}\setminus\{0\}$ labels the edges of $A$. The label $\lambda_e$ of an edge $e$ with source vertex $v$ defines an embedding $\alpha_e\colon e\to v^{\lambda_e}$ of the cyclic edge group $\langle e \rangle$ into the cyclic vertex group $\langle v \rangle$ (for more details see \cite{ClayForII}). {\it The fundamental group} $\pi_1(\mathbb{A})$ of a {\it labeled graph} $\mathbb{A}=(A, \lambda)$ is given by generators and defining relations. Denote by $\overline{A}$ the graph obtained from $A$ by identifying $e$ with $\overline{e}$. A maximal subtree $T$ of the graph $\overline{A}$ defines the following presentation of the group $\pi_1(\mathbb{A})$: $$\left\langle \begin{array}{lcl} g_v, v\in V(\overline{A}), &\|& g_{\partial_0(e)}^{\lambda(e)}=g_{\partial_1(e)}^{\lambda(\overline{e})}, e\in E(T),\\ t_e, e\in E(\overline{A})\setminus E(T) &\|&t_e^{-1}g_{\partial_0(e)}^{\lambda(e)}t_e=g_{\partial_1(e)}^{\lambda(\overline{e})}, e\in E(\overline{A})\setminus E(T) \end{array} \right\rangle$$ Generators of the first (second) type are called vertex (edge) elements. For different maximal subtrees, the corresponding presentations define isomorphic groups.
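To illustrate the presentation, consider the two simplest labeled graphs (these examples reappear below). If $\overline{A}$ is a single edge $e$ with distinct endpoints $u,v$ and labels $\lambda(e)=p$, $\lambda(\overline{e})=q$, then $T=\overline{A}$ and $$\pi_1(\mathbb{A})=\langle g_u,g_v \mid g_u^{p}=g_v^{q} \rangle,$$ while if $\overline{A}$ is a single loop at a vertex $v$ with labels $p$ and $q$, then $T$ consists of the vertex $v$ alone and $$\pi_1(\mathbb{A})=\langle g_v,t_e \mid t_e^{-1}g_v^{p}t_e=g_v^{q} \rangle.$$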
It is sometimes useful to regard a GBS group as a group obtained as follows: start with the group $\mathbb{Z}$ and perform consecutive amalgamated products in accordance with the labels on the maximal subtree; finally, apply the construction of the HNN-extension several times (the number of times is equal to the number of edges outside the maximal tree). In this approach, the standard theory of amalgamated products and HNN-extensions is applicable to the full extent. In particular, GBS groups admit a normal form for elements and are torsion-free. If two labeled graphs $\mathbb{A}$ and $\mathbb{B}$ define isomorphic $GBS$ groups $\pi_1(\mathbb{A})\cong\pi_1(\mathbb{B})$ and $\pi_1(\mathbb{A})$ is not isomorphic to $\mathbb{Z}$, $\mathbb{Z}^2$ or the Klein bottle group, then there exists a finite sequence of {\it expansion} and {\it collapse} moves (see fig.1) connecting $\mathbb{A}$ and $\mathbb{B}$ \cite{forester2002}. A labeled graph is called {\it reduced} if it admits no collapse move (equivalently, the labeled graph contains no edge with distinct endpoints and a label $\pm 1$). \begin{figure} \caption{Expansion and collapse moves} \end{figure} An element $g$ of a GBS group $G$ is called elliptic if $g$ is conjugate to $a^k$ for some vertex generator $a$ and some $k$. The set consisting of all non-trivial elliptic elements is stable under conjugation, its elements have infinite order, and any two such elements are commensurable. These properties yield a homomorphism $\Delta$ from $G$ to the multiplicative group of non-zero rationals $\mathbb{Q}^\ast$, defined as follows. Given $g\in G$, choose any non-trivial elliptic element $a$. There is a relation $g\cdot a^p\cdot g^{-1}=a^q$ with $p, q$ non-zero, and we define $\Delta(g)=\frac{p}{q}$. As pointed out in \cite{KropII}, this definition is independent of the choices made ($a$ and the relation) and defines the {\it modular homomorphism}.
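For example, in the group $\langle x,y \mid xy^{p}x^{-1}=y^{q} \rangle$ the element $y$ is elliptic, and with the convention that a relation $g a^{p} g^{-1}=a^{q}$ yields $\Delta(g)=\frac{p}{q}$, the defining relation gives $$\Delta(x)=\frac{p}{q},\qquad \Delta(y)=1,$$ where $\Delta(y)$ is computed from the trivial relation $y\, y\, y^{-1}=y$.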
For different primes $p$ and $q$, let $T(p,q) = \langle x,y | x^p=y^q \rangle$ be a torus-knot group. It is easy to see that $T(p,q)$ is a $GBS$ group for any non-zero integers $p$ and $q$ (the corresponding labeled graph has one edge with two different endpoints and labels $p$ and $q$). A group is said to be {\it Hopfian} if any homomorphism of the group onto itself has trivial kernel, i.e. is an automorphism. Baumslag and Solitar \cite{BaSo} came up with a series of examples of two-generator one-relator non-Hopfian groups. In particular, such are the Baumslag--Solitar groups $$BS(p,q)=\langle x,y |xy^{p}x^{-1}=y^{q} \rangle,$$ where $p$ and $q$ are coprime integers, $p,q\neq 1$. If a labeled graph $\mathbb{B}$ consists of one vertex and two inverse loops with labels $p$ and $q$, then $\pi_1(\mathbb{B})\cong BS(p,q)$. Therefore, every Baumslag--Solitar group is a generalized Baumslag--Solitar group. The main results of our work are listed below. {\bf Theorem 1.} {\it Let $G$ be a GBS group. Then $G$ is a 1-knot group if and only if $G \simeq T(p,q)$.} {\bf Theorem 2.} {\it Let $G$ be a GBS group, $G\not\cong\mathbb{Z}$. Then $G$ is an $n$-knot group for $n \geq 3$ if and only if $G$ is a homomorphic image of either $BS(m,m+1)$, where $m\geq 1$, or $T(p,q)$.} All homomorphic images of $BS(m,m+1)$, where $m\geq 1$, and of $T(p,q)$ are described in terms of labeled graphs in Lemmas 4 and 5 (see the proof of Theorem 2). M.~Kervaire obtained the following necessary conditions for a group to be a knot group. {\bf Statement 1 \cite[14.1.1]{Kaw}}. {\it Let $\pi$ be an $n$-knot group for $n \geq 1$. Then 1. $\pi$ is finitely generated; 2. $\pi / \pi ' \simeq \mathbb{Z}$; 3. $H_2(\pi)=0$; 4. $\pi$ is the normal closure of a single element. } For $n \geq 3$ these conditions are also sufficient. {\bf Statement 2 \cite[14.1.2]{Kaw}}.
{\it If a group $G$ satisfies conditions 1--4 of Statement 1, then $G$ is an $n$-knot group for $n\geq 3$.} So our goal is to determine when these conditions are fulfilled in a GBS group. \section{Preliminary results} Given a graph $A$, denote by $\beta_1(A)$ the number of edges outside a maximal subtree. The normal closure of an element $a$ in a group $G$ is denoted by $\langle \langle a \rangle \rangle _G$. {\bf Lemma 1.} {\it Let $G$ be a GBS group such that $G/G' \simeq \mathbb{Z}$; then $\beta_1(A) \leqslant 1$.} Proof. From the presentation above it is clear that $G/G' \simeq \mathbb{Z}^{\beta_1(A)}\times H$, where $H$ is the subgroup of $G/G'$ generated by the images of the vertex elements. Hence, $\beta_1(A) \leqslant 1$. The lemma is proved. {\bf Lemma 2.} {\it Let $G=\pi_1(\mathbb{A})$ be a $GBS$ group such that $G= \langle \langle a \rangle \rangle _G$, and let $\mathbb{A}$ be a labeled tree. Then $\mathbb{A}$ is a labeled segment.} \begin{figure} \caption{Labeled segment.} \end{figure} Proof. Assume the contrary. Then $\mathbb{A}$ has a trident subgraph. \begin{figure} \caption{Labeled trident.} \end{figure} Let $N$ be the normal closure of all vertex elements except the end points $u,v,w$; then $G/N \simeq \mathbb{Z}_p * \mathbb{Z}_q * \mathbb{Z}_r$. If $p,q,r$ are pairwise coprime, then for some normal subgroup $N_1$ containing $N$ and certain powers of $u,v,w$ we have $G/N_1 \simeq \mathbb{Z}_{p_1} * \mathbb{Z}_{q_1} * \mathbb{Z}_{r_1}$, where $p_1,q_1,r_1$ are different primes. By \cite{How}, $G/N_1$ is not the normal closure of a single element, so neither is $G$, a contradiction. Now let $(p,q)=d \neq 1$. The GBS group $G= \langle \langle a \rangle \rangle _G$ has no torsion, therefore $G/G' \simeq \mathbb{Z}$. Thus we have $(G/N)/(G/N)' \simeq \mathbb{Z}_p \times \mathbb{Z}_q \times \mathbb{Z}_r$. With $d \neq 1$ dividing $p$ and $q$, the latter group cannot be a homomorphic image of $\mathbb{Z}$, a contradiction. The lemma is proved. Using the idea of plateaus from \cite{Levitt2015}, we prove the following. {\bf Lemma 3.} {\it Keep the notation of Lemma 2.
Then $l_i \perp k_j$ for all $i,j$.} Proof. 1. Suppose first that $j \leqslant i$. Choose a pair $k_j, l_i$ so that $j \leqslant i$, $(k_j,l_i)=d>1$, $j$ is minimal and $i$ is maximal with these conditions. Let $p$ be a prime divisor of $d$. Then $k_x \perp p$ for $x < j$ and $l_y \perp p$ for $y > i$. Let $r$ be a maximal index such that $l_{r-1} \not \perp p$, $2 \leq r \leq j$. If there is no such index, then let $r=1$. In a similar way, choose minimal $f$, $f \geq i+1$, such that $p | k_f$. \begin{figure} \caption{Labeled segment, $j \leqslant i$.} \end{figure} Let $N$ be the normal closure of the following elements $$N = \langle \langle a_1, \ldots a_{r-1}, a_{j+1}, \ldots a_i, a_{f+1}, \ldots a_{s+1},a_r^p,\ldots a_j^p,a_{i+1}^p,\ldots , a_f^p \rangle \rangle$$ Then the corresponding factor group $G_1=G / N$ is isomorphic to $\mathbb{Z}_p * \mathbb{Z}_p$, since the relations $$a_m^{k_{m+1}} = a_{m+1}^{l_{m+1}},$$ $$a_m^p = a_{m+1} ^p=1$$ and $p \perp k_{m+1}, p \perp l_{m+1}$ imply $a_m=a_{m+1}$, $m=r,\ldots j,i+1,\ldots , f$. Hence $G_1 / G_1' \simeq \mathbb{Z}_p \times \mathbb{Z}_p$, which contradicts $G/G' \simeq \mathbb{Z}$. 2. Now assume $i<j$ and $(k_j,l_i)=d>1$. \begin{figure} \caption{Labeled segment, $j>i$.} \end{figure} We may also assume that $j-i$ is minimal with this property. Then using case 1 we obtain $k_r \perp l_f$, $r=i+1,\ldots j-1, f=i,\ldots , j-1$ and $k_j \perp l_{i+1}, \ldots l_{j-1}$. In particular, for a prime divisor $p$ of $d$ we have $p \perp k_{i+1}, \ldots k_{j-1}$ and $p \perp l_{i+1}, \ldots , l_{j-1}$. Let $$N= \langle \langle a_2,\ldots , a_i, a_{j+1}, \ldots , a_s, a_{i+1}^p,\ldots , a_j^p \rangle \rangle.$$ Then as above we obtain $$G/N \simeq \mathbb{Z}_{k_1} * \mathbb{Z}_p * \mathbb{Z}_{l_{s+1}}$$ Since $k_1 \perp l_i$ and $k_j \perp l_{s+1}$, we may assume that $k_1,p,l_{s+1}$ are different primes.
By our assumption $G$, and hence $G/N$, is the normal closure of a single element, while $ \mathbb{Z}_{k_1} * \mathbb{Z}_p * \mathbb{Z}_{l_{s+1}}$ for different primes $k_1,p,l_{s+1}$ is not \cite{How}. This contradiction proves the lemma. {\bf Lemma 4.} {\it Assume $G = \pi _1 (\mathbb{A})$, $\mathbb{A}$ is a labeled segment, and $l_i \perp k_j$ for all $i,j$. Then $G$ is a homomorphic image of some torus-knot group. } Proof. We prove by induction that $a_2, \ldots, a_{s-1}$ may be excluded from the generators. To check the base case, consider a group with generators $\{ a_1,a_2,a_3 \}$ and relations $a_1^{k_1}=a_2^{l_1}$, $a_2^{k_2}=a_3^{l_2}$. Choose $\alpha$ and $\beta$ such that $\alpha l_1+\beta k_2 =1$. Then $a_2=a_1^{\alpha k_1} a_3^{\beta l_2 }$. For the induction step consider a group with generators $\{ a_1,a_2, \ldots, a_{s+1} \}$. By induction, $a_i = w_i (a_1,a_s)$, where $w_i$ is some word in the letters $a_1,a_s$, and $i=2,\ldots s-1$. Choose $\alpha$ and $\beta$ so that $\alpha l_1\ldots l_{s-1}+\beta k_s =1$. Then $a_s=a_s^{\alpha l_1 \ldots l_{s-1}} a_s^{\beta k_s} = a_1^{\alpha k_1 \ldots k_{s-1}} a_{s+1}^{\beta l_s}$. So the group is generated by $a_1$ and $a_{s+1}$, which satisfy the relation $a_1^k=a_{s+1}^l$, where $k=k_1\ldots k_s$ and $l=l_1\ldots l_s$ with $k \perp l$. Therefore $G$ is a homomorphic image of the torus-knot group $T(k,l)$. The lemma is proved. {\bf Lemma 5.} {\it If $G=\pi_1(\mathbb{A})$, $G=\langle \langle a \rangle \rangle_G$, and $\beta_1(A)=1$, then $\mathbb{A}$ is a labeled cycle with coprime labels $k_i \perp l_j$ for all $i,j$, $|{\displaystyle \prod_{i=1}^{s} k_i} - {\displaystyle \prod_{i=1}^{s} l_i}|=1$ (see fig.6) and $G= \langle \langle t \rangle \rangle_G$.} \begin{figure} \caption{Labeled cycle} \end{figure} Proof. If $\mathbb{A}$ is not a cycle, then there is a pendant vertex $u$. The graph $\mathbb{A}$ is reduced, and so the label $\lambda$ near $u$ is not $\pm 1$. Let $N$ be the normal closure of all vertex elements except $u$.
Then $G / G'N \simeq \langle t \rangle \times \langle u \rangle \simeq \mathbb{Z} \times \mathbb{Z}_{\lambda}$, a contradiction. Let $d=(k_i,l_j)$ and let $p$ be a prime divisor of $d$. In case $j <i$ let $N$ be the normal closure of $a_1,\ldots a_j,a_{j+1}^p,\ldots ,a_i^p,a_{i+1},\ldots , a_s$. Then $G / G'N \simeq \langle t \rangle \times \langle a_{j+1} \rangle \simeq \mathbb{Z} \times \mathbb{Z}_p$, as in Lemma 3, a contradiction. The case $j \geq i$ is considered in a similar way. Let $k= {\displaystyle \prod_{i=1}^{s} k_i}$, $l= {\displaystyle \prod_{i=1}^{s} l_i}$. If $k=1$, then $s=1$, because $\mathbb{A}$ is reduced. So $G \simeq \pi_1(\mathbb{A}) \simeq BS(1,l)$, $G /G' \simeq \mathbb{Z} \times \mathbb{Z}_{l-1}$. It follows that $l=2$, and $BS(1,2) = \langle \langle t \rangle \rangle$ satisfies the conclusion of the lemma. Now let $k\not =1$ and $l\not =1$. Note that in $G$ we have the equality $t^{-1}a_i^kt=a_i^l$ for $1 \leq i \leq s$, and $l \perp k$. The map $\phi _{[a_1,t]}: H=BS(k,l)=\langle a,r| r^{-1}a^kr=a^l \rangle \rightarrow G$ $$ \phi _{[a_1,t]}: \begin{cases} a \rightarrow a_1\\ r \rightarrow t[a_1,t] \end{cases}$$ is an embedding $BS(k,l) \hookrightarrow G$ \cite{DBS}, hence $|a_1|_{G/G'}=|a|_{H/H'}=|k-l|$. Moreover, there is a subgroup $\langle t \rangle \times \langle a_1 \rangle$ in $G/G'$ isomorphic to $\mathbb{Z} \times \mathbb{Z}_{|k-l|}$. Since $G/G' \simeq \mathbb{Z}$, it follows that $|k-l|=1$. Note that $t^{-1}a_i^kta_i^{-k} \in \langle \langle t \rangle \rangle$, hence $a_i^{l-k} = a_i^{\pm 1} \in \langle \langle t \rangle \rangle$ for all $i$, and so $G = \langle \langle t \rangle \rangle$. The lemma is proved. \section{Proof of theorems} {\bf Proof of theorem 1} Assume that $G$ is both a GBS-group and a 1-knot group. 1. By Statement 1 $G/G' \simeq \mathbb{Z}$. 2. By Lemma 1 $G\simeq \pi_1(\mathbb{A})$ and either $A$ is a tree or $\beta_1(A)=1$. 3. If $A$ is a tree, then $\Delta(G)=\{1\}$ and by \cite[Statement 2.5]{Levitt2007} $Z(G)\simeq\mathbb{Z}$. 4.
By \cite[Corollary 6.3.6]{Kaw} $G\simeq T(p,q)$. 5. If $\beta_1(A)=1$, then by \cite[Theorem 6.3.9]{Kaw} $G=\pi_1 (\mathbb{A})$ is residually finite. By \cite[Corollary 7.7]{Levitt2015q} a GBS-group is residually finite iff either $G=BS(1,n)$ or $\Delta(G)=\{\pm 1\}$. 6. If $G=BS(1,n)$, then by Lemma 5 $n=2$ and $G=BS(1,2)$, while by \cite{Shalen} $|m|=|n|$, a contradiction. 7. If $\Delta(G)=\{\pm 1\}$, then there exist $a,g \in G$ such that $gag^{-1}=a^{-1}$, and hence $G/G'$ has torsion, a contradiction with $G/G' \simeq \mathbb{Z}$. 8. If $\Delta(G)=\{1\}$, then $Z(G)\simeq\mathbb{Z}$ \cite[Statement 2.5]{Levitt2007}, and $G$ is isomorphic to $T(p,q)$ by \cite[Corollary 6.3.6]{Kaw}. Theorem 1 is proved. {\bf Proof of theorem 2} I. Assume that $G$ is a GBS-group, $G \not \simeq \mathbb{Z}$, and $G$ is an $n$-knot group, $n\geq 3$. 1. By Statement 1 $G/G' \simeq \mathbb{Z}$, $G=\langle \langle a \rangle \rangle$. 2. By Lemmas 1--5 $G$ is the fundamental group of either some labeled segment (see fig.2) with $k_i \perp l_j$ for all $i,j$, or some labeled cycle (see fig.6) with $k_i \perp l_j$ for all $i,j$. 3. If $G$ is the fundamental group of a segment as in fig.2, then by Lemma 4 $G$ is a homomorphic image of $T(k,l)$. 4. If $G$ is the fundamental group of a cycle as in fig.6, then by \cite[Theorem 1.1]{Levitt2015q} $G$ is a homomorphic image of a Baumslag--Solitar group $BS(k,k+1)$. II. Assume that $G$ is a homomorphic image of rank 2 of either $BS(m,m+1)$ or $T(p,q)$. 5. If $G$ is a homomorphic image of rank 2 of $BS(m,m+1)$, then by \cite[Theorem 1.1]{Levitt2015q}, using that $m \perp m+1$, we get $G=\pi_1(\mathbb{A})$, where $\mathbb{A}$ is a cycle, and ${\displaystyle \prod_{i=1}^{s} k_i} \perp {\displaystyle \prod_{i=1}^{s} l_i}$, i.e. $\mathbb{A}$ is as in the conclusion of Lemma 5. By \cite{Krop} we have $H_2(G)=0$. Then by Lemma 5 we obtain that properties 1--4 of Statement 1 hold, and hence by Statement 2 $G$ is an $n$-knot group for $n\geq 3$. 6.
If $G$ is a homomorphic image of rank 2 of $T(p,q)$, then by \cite{Levitt2015} and \cite{Levitt2015q} $G=\pi_1(\mathbb{A})$, where $A$ is either a segment, a cycle, or a lollipop. Moreover, from \cite{Levitt2015} it follows that $A$ is a segment as in Theorem 1.1 of \cite{Levitt2015}. Then $A$ satisfies the conclusion of Lemma 4. Hence $G$ satisfies conditions 1--4 of Statement 1, and by Statement 2 $G$ is an $n$-knot group for $n\geq 3$. Theorem 2 is proved. Acknowledgments. The authors are grateful to V.~G.~Bardakov and V.~A.~Churkin for valuable discussions. \end{document}
\begin{document} \title[Upper central series for the group of unitriangular automorphisms...] {Upper central series for the group of unitriangular automorphisms of a free associative algebra} \author{Valeriy G. Bardakov} \address{Sobolev Institute of Mathematics, Novosibirsk State University, Novosibirsk 630090, Russia} \address{and} \address{Laboratory of Quantum Topology, Chelyabinsk State University, Brat'ev Kashirinykh street 129, Chelyabinsk 454001, Russia} \email{[email protected]} \thanks{The first author thanks the organizers of the Conference ``Groups, Geometry and Dynamics'' (Almora, India, 2012) for this beautiful and interesting Conference.} \author{Mikhail V. Neshchadim} \address{Sobolev Institute of Mathematics, Novosibirsk State University, Novosibirsk 630090, Russia} \email{[email protected]} \thanks{The authors were supported by Federal Target Grant ``Scientific and educational personnel of innovation Russia'' for 2009-2013 (government contract No. 02.740.11.0429). Also, this research was supported by the Indo-Russian DST-RFBR project grant DST/INT/RFBR/P-137 (No.~13-01-92697)} \subjclass[2010]{Primary 16W20; Secondary 20E15, 20F14.} \keywords{Free associative algebra, group of unitriangular automorphisms, upper central series} \begin{abstract} We study some subgroups of the group of unitriangular automorphisms $U_n$ of a free associative algebra over a field of characteristic zero. We find the center of $U_n$ and describe the hypercenters of $U_2$ and $U_3$. In particular, we prove that the upper central series for $U_2$ has infinite length. As a consequence, we prove that the groups $U_n$ are non-linear for all $n \geq 2$. \end{abstract} \maketitle \section{Introduction} In this paper we consider a free associative algebra $A_n = K \langle x_1, x_2,$ $\ldots,$ $x_n \rangle$ over a field $K$ of characteristic zero. We assume that $A_n$ has unity. The group of $K$-automorphisms of this algebra, i.e.
automorphisms that fix the elements of $K$, is denoted by $\mathrm{Aut} \, A_n$. The group of tame automorphisms $\mathrm{TAut} \, A_n$ of $A_n$ is generated by the group of affine automorphisms $\mathrm{Aff} \, A_n$ and the group of unitriangular automorphisms $U_n=U(A_n)$. From the result of Umirbaev \cite{U} it follows that $\mathrm{Aut} \, A_3 \not= \mathrm{TAut} \, A_3$. A question about linearity (i.e. about a faithful representation by finite-dimensional matrices over some field) of $\mathrm{TAut} \, A_n$ was studied in the paper of Roman'kov, Chirkov, Shevelin \cite{R}, where it was proved that for $n \geq 4$ these groups are not linear. Sosnovskii \cite{S} proved that for $n \geq 3$ the group $\mathrm{Aut} \, P_n$ is not linear, where $P_n = K [x_1, x_2,$ $\ldots,$ $x_n ]$ is the polynomial algebra over a field $K$. His result follows from a description of the upper central series for the group of unitriangular automorphisms $U(P_n)$ of $P_n$. The structure of the present paper is the following. In Section 2 we introduce some notation and recall some facts on the automorphism group of a free associative algebra and its subgroups. In the previous article \cite{B} we found the lower central series and the series of the commutator subgroups for $U_n$. In Section 3 we study the upper central series for $U_2$, prove that the length of this series is infinite, and prove that $U_2$ is non-linear. In Section 4 we study the upper central series for $U_3$ and describe the hypercentral subgroups in terms of some algebras. In Section 5 we find the center of $U_n$ for $n \geq 4$. Also, in Sections 4 and 5 we formulate some hypotheses and questions that are connected with the theory of non-commutative invariants of a free associative algebra under the action of some subgroups of $U_n$. \section{Some previous results and remark} Let us recall the definitions of some automorphisms and subgroups of $\mathrm{Aut} \, A_n$.
For any index $i\in \left\{1, \ldots, n \right\}$, a constant $\alpha \in K^* = K\backslash \{0\}$ and a polynomial $f = f(x_1, \ldots , \widehat{x_i}, \ldots ,x_n) \in A_n$ (where the symbol $\widehat{x_i}$ denotes that $x_i$ is not included in $f$) {\it the elementary automorphism } $\sigma (i, \alpha, f)$ is an automorphism in $\mathrm{Aut} \, A_n$ that acts on the variables $x_1, \ldots ,x_n$ by the rule: $$ \sigma (i, \alpha, f) : \left\{ \begin{array}{lcl} x_i \longmapsto \alpha \, x_i + f, \,\, & \\ x_j \longmapsto x_j, \,\, & \mbox{if} & \,\, j \neq i. \\ \end{array} \right. $$ The group of tame automorphisms $\mathrm{TAut} \, A_n$ is generated by all elementary automorphisms. The group of affine automorphisms $\mathrm{Aff} \, A_n$ is a subgroup of $\mathrm{TAut} \, A_n$ that consists of the automorphisms $$ x_i\longmapsto a_{i1} x_1+ \ldots + a_{in} x_n + b_i,\,\, i=1, \ldots , n, $$ where $a_{ij}$, $b_i\in K$, $i,j=1, \ldots ,n$, and the matrix $(a_{ij})$ is non-degenerate. The group of affine automorphisms is the semidirect product $K^n \leftthreetimes \mathrm{GL}_n (K)$ and, in particular, embeds in the group of matrices $\mathrm{GL}_{n+1} (K)$. The group of triangular automorphisms $T_n = T(A_n)$ of the algebra $A_n$ is generated by the automorphisms $$ x_i\longmapsto \alpha_i x_i+f_i(x_{i+1}, \ldots , x_n),\,\, i = 1,\ldots , n, $$ where $\alpha_i \in K^*$, $f_i\in A_n$ and $f_n\in K$. If all $\alpha_i =1$, then this automorphism is called {\it a unitriangular automorphism}. The group of unitriangular automorphisms is denoted by $U_n = U(A_n)$. In the group $U_n$ let us define subgroups $G_i$, $i = 1, 2, \ldots, n$, where $G_i$ is generated by the automorphisms $$ \sigma (i, 1, f),~~ \, \mbox{where} \, f = f(x_{i+1}, \ldots , x_n) \in A_n.
$$ Note that the subgroup $G_i$ is abelian and isomorphic to the additive group of the subalgebra of $A_n$ generated by $x_{i+1}, \ldots , x_n$, $i = 1, \ldots, n-1$, and the subgroup $G_n $ is isomorphic to the additive group of the field $K$. {\it The lower central series} of a group $G$ is the series $$ G = \gamma_1 G \geq \gamma_2 G \geq \ldots, $$ where $\gamma_{i+1} G = [\gamma_i G, G],$ $i = 1, 2, \ldots$. {\it The series of the commutator subgroups} of a group $G$ is the series $$ G = G^{(0)} \geq G^{(1)} \geq G^{(2)} \geq \ldots, $$ where $G^{(i+1)} = [G^{(i)}, G^{(i)}],$ $i = 0, 1, \ldots$. Here for subsets $H$, $K$ of $G$, $[H, K]$ denotes the subgroup of $G$ generated by the commutators $[h, k] = h^{-1} k^{-1} h k$ for $h \in H$ and $k \in K$. Recall that the $k$-th hypercenter $Z_k = Z_k(G)$ of the upper central series of $G$ for a non-limit ordinal $k$ is defined by the rule $$ Z_k / Z_{k-1} = Z(G / Z_{k-1}) $$ or equivalently, $$ Z_k = \{ g \in G ~|~[g, h] \in Z_{k-1}~ \mbox{for all} ~h \in G \}, $$ and $Z_1(G) = Z(G)$ is the center of $G$. If $\alpha$ is a limit ordinal, then define $$ Z_{\alpha} = \bigcup_{\beta < \alpha} Z_{\beta}. $$ It was proved in \cite{B} that $U_n$ is a semidirect product of abelian groups: $$ U_n = (\ldots(G_1 \leftthreetimes G_2)\leftthreetimes \ldots ) \leftthreetimes G_n, $$ and the lower central series and the series of commutator subgroups of $U_n$ satisfy the following two properties respectively: 1) For $n \geq 2$ $$ \gamma_2 U_n = \gamma_3 U_n = \ldots. $$ In particular, for $n \geq 2$ the group $U_n$ is not nilpotent. 2) The group $U_n$ is solvable of degree $n$ and the corresponding commutator subgroups have the form: $$ \begin{array}{l} U_n= (\ldots(G_1 \leftthreetimes G_2)\leftthreetimes \ldots ) \leftthreetimes G_{n},\\ U_n^{(1)}= (\ldots(G_1 \leftthreetimes G_2)\leftthreetimes \ldots ) \leftthreetimes G_{n-1},\\ .........................................\\ U_n^{(n-1)}= G_1,\\ U_n^{(n)}= 1. \end{array} $$ Yu.
V. Sosnovskiy \cite{S} found the upper central series for the unitriangular group $U(P_n)$ of the polynomial algebra $P_n$. (Note that he considered polynomials without free terms.) He proved that for $n \geq 3$ the group $U(P_n)$ has the upper central series of length $((n-1)(n-2)/2) \omega + 1$ for any field $K$, where $\omega$ is the first limit ordinal. If $\mathrm{char} \, K = 0$ then the hypercenters of $U(P_4)$ have the form\\ $Z_{k} = \{ (x_1 + f_1(x_3, x_4), x_2, x_3) ~|~ \mathrm{deg}_{x_3} f_1(x_3, x_4) \leq k -1 \}$,\\ $Z_{\omega} = \{ (x_1 + f_1(x_3, x_4), x_2, x_3) \}$,\\ $Z_{\omega + k} = \{ (x_1 + f_1(x_2, x_3, x_4), x_2, x_3) ~|~ \mathrm{deg}_{x_2} f_1(x_2, x_3, x_4) \leq k \}$,\\ $Z_{2 \omega} = \{ (x_1 + f_1(x_2, x_3, x_4), x_2, x_3) \}$,\\ $Z_{2\omega + k} = \{ (x_1 + f_1(x_2, x_3, x_4), x_2 + f_2(x_3, x_4), x_3) ~|~ \mathrm{deg}_{x_3} f_2(x_3, x_4) \leq k - 1 \}$,\\ $Z_{3 \omega} = \{ (x_1 + f_1(x_2, x_3, x_4), x_2 + f_2(x_3, x_4), x_3) \}$,\\ $Z_{3 \omega + 1} = U(P_4)$,\\ where $k = 1, 2, \ldots$ runs over the set of natural numbers and $f_1, f_2$ are arbitrary polynomials in $P_4$ which depend on the corresponding variables. \section{Unitriangular group $U_2$} Let $A_2 = K \langle x, y \rangle$ be the free associative algebra over a field $K$ of characteristic zero with the variables $x$ and $y$. Then $$ U_2= \left\{ \varphi= \left( x + f(y), y + b \right) ~|~ f(y) \in K\langle y \rangle,\,\,b \in K \right\} $$ is the group of unitriangular automorphisms of $A_2$.
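The group structure of $U_2$ can also be checked computationally. The following sketch (in Python with SymPy; the encoding of an automorphism $\left( x + f(y), y + b \right)$ as a pair $(f, b)$ and the helper names are ours, not part of the paper) verifies that $\left( x - f(y-b), y-b \right)$ inverts $\left( x + f(y), y + b \right)$ and reproduces the formula of commutation.

```python
# Sketch (not from the paper): encode an automorphism (x + f(y), y + b) of U_2
# as a pair (f, b) with f in K[y] and b in K, and check the group structure.
import sympy as sp

y = sp.symbols('y')

def compose(phi, psi):
    # Product phi*psi: apply phi first, then psi;
    # (f, b)*(h, c) = (h(y) + f(y + c), b + c).
    f, b = phi
    h, c = psi
    return (sp.expand(h + f.subs(y, y + c)), sp.expand(b + c))

def inverse(phi):
    # Inverse of (x + f(y), y + b) is (x - f(y - b), y - b).
    f, b = phi
    return (sp.expand(-f.subs(y, y - b)), -b)

phi = (y**3 + 2*y, sp.Integer(5))
psi = (y**2, sp.Integer(3))
identity = (sp.Integer(0), sp.Integer(0))

# (x - f(y - b), y - b) is indeed the inverse
assert compose(phi, inverse(phi)) == identity
assert compose(inverse(phi), phi) == identity

# commutator phi^{-1} psi^{-1} phi psi = (x + h(y) - h(y+b) + f(y+c) - f(y), y)
comm = compose(compose(inverse(phi), inverse(psi)), compose(phi, psi))
f, b = phi
h, c = psi
assert comm == (sp.expand(h - h.subs(y, y + b) + f.subs(y, y + c) - f), sp.Integer(0))
```

The same pair encoding works for any concrete $f, h \in K\langle y \rangle$ and $b, c \in K$, since $K\langle y \rangle$ in one variable is just the (commutative) polynomial ring.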
It is not difficult to check the following lemma. \begin{lemma}\label{l:form} 1) If $\varphi = \left( x + f(y), y + b \right) \in U_2$, then its inverse is equal to $$ \varphi^{-1}= \left( x - f(y - b), y - b \right); $$ 2) if $ \varphi= \left( x + f(y), y + b \right)$ and $\psi= \left( x + h(y), y + c \right)\in U_2$, then the following formulas hold: -- the formula of conjugation $$ \psi^{-1}\varphi \psi = \left( x + h(y) - h(y + b) + f(y + c), y + b \right), $$ -- the formula of commutation $$ \varphi^{-1}\psi^{-1}\varphi \psi = \left( x + h(y) - h(y + b) + f(y + c) - f(y), y \right). $$ \end{lemma} Using this lemma we can describe the center of $U_2$. \begin{lemma} \label{l:c} The center of $U_2$ has the form $Z(U_2) = \left\{ \varphi = \left( x + a, y \right) ~|~ a \in K \right\}$. \end{lemma} \begin{proof} If $\varphi = \left( x + a, y \right)$, then from the formula of conjugation (see Lemma~\ref{l:form}) it follows that $\varphi \in Z(U_2)$. To prove the reverse inclusion, suppose $ \varphi= \left( x+f(y), y+b \right) \in Z(U_2)$. Using the formula of conjugation we get $$ \varphi= \left( x + f(y), y + b \right) = \psi^{-1} \varphi \psi = \left( x + h(y) - h(y + b) + f(y + c), y + b \right), $$ for any automorphism $\psi = \left( x + h(y), y + c \right) \in U_2$, i.e. $ f(y) = h(y) - h(y + b) + f(y + c)$. Taking $h = 0$ we get $f(y) = f(y + c)$ for every $c \in K$. Hence, $f(y) = a \in K$. We are left with the relation $0 = h(y) - h(y + b)$. Since $h(y)$ is arbitrary, it follows that $b=0$. \end{proof} \begin{lemma}\label{l:com} The following properties hold true in $U_2$. 1) $[U_2, U_2] = \left\{\varphi = \left( x + f(y), y \right) ~|~ f(y) \in K \langle y \rangle \right\}$. 2) If $\varphi = \left( x + f(y), y \right)$, where $f(y) \in K \langle y \rangle \setminus K$, then $$ C_{U_2}(\varphi) = \left\{\left( x + h(y), y \right) ~|~ h(y)\in K \langle y \rangle \right\}, $$ where $C_{U_2}(\varphi)$ is the centralizer of $\varphi$ in $U_2$, i.e.
$C_{U_2}(\varphi) = \{ \psi \in U_2 ~|~ \psi \varphi = \varphi \psi\}$. 3) If $\varphi= \left( x,y+b \right)$, $b \in K$, then $C_{U_2}(\varphi) = \left\{ \left( x + a, y + c \right) ~|~ a, c \in K \right\}$. \end{lemma} \begin{proof} 1) Let $\varphi = \left( x + f(y), y + b \right)$, $\psi = \left( x + h(y), y + c \right) \in U_2$. By the formula of commutation $$ \varphi^{-1} \psi^{-1} \varphi \psi = \left( x + h(y) - h(y + b) + f(y + c) - f(y), y \right). $$ It is easy to see that any element of $K \langle y \rangle$ can be represented in the form $r(y + d) - r(y)$ for some $r(y) \in K \langle y \rangle$ and $d\in K$. Hence, $$ K \langle y \rangle = \{ h(y) - h(y + b) + f(y + c) - f(y)~|~h, f \in K \langle y \rangle,~~b, c \in K\} $$ and 1) is true. 2) Let $ \varphi = \left( x+f(y), y \right)$, $f(y) \in K \langle y \rangle \setminus K$ and $ \psi = \left( x + h(y), y + c \right)$ be an arbitrary element of $C_{U_2}(\varphi)$. Using the formula of conjugation we get $$ \varphi= \left( x + f(y), y \right) = \psi^{-1} \varphi \psi = \left( x + f(y + c), y \right). $$ Hence $c = 0$. Conversely, any two automorphisms of the form $\left( x + h(y), y \right)$ commute. 3) Let $ \varphi = \left( x, y + b \right)$ and $ \psi = \left( x + h(y), y + c \right)$ be an arbitrary element of $C_{U_2}(\varphi)$. Using the formula of conjugation we get $$ \varphi = \left( x, y + b \right) = \psi^{-1} \varphi \psi = \left( x + h(y) - h(y + b), y + b \right). $$ Hence $h(y)=a \in K$. \end{proof} \begin{lemma}\label{l:co} If $s$ is a non-negative integer, then the $(s+1)$-th hypercenter of $U_2$ has the form $$ Z_{s+1}(U_2) = \left\{\varphi = \left( x+f(y),y \right) ~|~ f(y) \in K \langle y \rangle,\,\, \mathrm{deg} f(y) \leq s \right\}. $$ \end{lemma} \begin{proof} If $s = 0$, then the assertion follows from Lemma \ref{l:c}. Suppose that for $s + 1$ the assertion holds true. We now prove it for $s + 2$. Let $$ \varphi = \left( x + f(y), y + b \right)\in Z_{s+2}(U_2).
$$ Using the formula of commutation (see Lemma \ref{l:form}) for $\varphi$ and $\psi= \left( x + h(y), y + c \right)$ we get $$ \varphi^{-1}\psi^{-1}\varphi \psi = \left( x + h(y) - h(y + b) + f(y + c) - f(y), y \right). $$ If $b \neq 0$, then, since $h(y)$ is an arbitrary polynomial in $K \langle y \rangle$, the expression $h(y) - h(y + b) + f(y + c) - f(y)$ represents an arbitrary element of $K \langle y \rangle$. But this is impossible, since for any automorphism $\varphi = (x + f(y), y) \in Z_{s+1}$ the degree of $f(y)$ is not bigger than $s$. Hence $b = 0$. Since $\mathrm{deg} (f(y+c)-f(y)) \leq s$ and $c$ is an arbitrary element of $K$, we have $\mathrm{deg} f(y) \leq s + 1$. So the inclusion from left to right is proved. The reverse inclusion is evident. \end{proof} Hence, from Lemma \ref{l:co} $$ Z_{\omega}(U_2)=\left\{\varphi= \left( x + f(y), y \right) ~|~ f(y) \in K\langle y \rangle \right\}, $$ and using Lemma \ref{l:com} we have $Z_{\omega}(U_2)=[U_2, U_2]$. Therefore $$ Z_{\omega+1}(U_2)=U_2. $$ \begin{corollary} The group $U_2$ is not linear. \end{corollary} \begin{proof} We know (see, for example, \cite{G}) that if a linear group does not contain torsion, then the length of the upper central series of this group is finite. But we have proved that the length of the upper central series for $U_2$ is equal to $\omega + 1$. Hence, the group $U_2$ is not linear. \end{proof} Since $U_2$ is a subgroup of $\mathrm{Aut} \, A_2$, this group is not linear either. Using the fact that if $P_2$ is the polynomial algebra with unit over $K$, then $\mathrm{Aut} \, A_2 = \mathrm{Aut} \, P_2$ (see, for example, \cite{C}), we obtain the following. \begin{corollary} Let $n \geq 2$. Then the groups $\mathrm{Aut} \, A_n$ and $\mathrm{Aut} \, P_n$ are not linear. \end{corollary} This follows from the fact that $\mathrm{Aut} \, A_2 \leq \mathrm{Aut} \, A_n$ and $\mathrm{Aut} \, P_2 \leq \mathrm{Aut} \, P_n$ for all $n \geq 2$.
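Lemma~\ref{l:co} rests on the fact that, in characteristic zero, commutation of $\varphi = \left( x + f(y), y \right)$ with the shift $\psi = \left( x, y + c \right)$ replaces $f(y)$ by $f(y+c) - f(y)$ and so lowers its degree by exactly one. A minimal computational illustration (Python with SymPy; the encoding of automorphisms is ours):

```python
# Sketch (our encoding): for phi = (x + f(y), y), the commutator with
# psi = (x, y + 1) is (x + f(y+1) - f(y), y); each such commutation lowers
# deg f by one, which is the mechanism behind Lemma l:co.
import sympy as sp

y = sp.symbols('y')
f = y**3          # phi = (x + y^3, y), an element of Z_4(U_2) by Lemma l:co

degrees = []
while f != 0:
    degrees.append(sp.degree(f, y))
    f = sp.expand(f.subs(y, y + 1) - f)   # one commutation with (x, y + 1)

# the degree drops 3 -> 2 -> 1 -> 0, and then the commutator is trivial
assert degrees == [3, 2, 1, 0]
```

Four commutations are needed to reach the identity, in agreement with $\left( x + y^3, y \right) \in Z_4(U_2) \setminus Z_3(U_2)$.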
\begin{remark} In \cite{S} the author considered polynomials without free terms and proved that $\mathrm{Aut} \, P_3$ is not linear. Using his method it is not difficult to prove that if $P_2$ contains free terms, then $\mathrm{Aut} \, P_2$ is not linear over any field of arbitrary characteristic. \end{remark} \section{Unitriangular group $U_3$} The group $U_3$ is equal to $$ U_3 = \{ (x_1 + f_1, x_2 + f_2, x_3 + f_3) ~|~f_1 = f_1(x_2, x_3) \in K \langle x_2, x_3 \rangle, $$ $$ f_2 = f_2(x_3) \in K \langle x_3 \rangle, f_3 \in K \}. $$ Define the algebra $S$ as the subalgebra of $K \langle x_2, x_3 \rangle$ $$ S = \{ f(x_2, x_3)\in K \langle x_2, x_3 \rangle ~|~ f(x_2 + g(x_3), x_3 + h) = f (x_2, x_3)~ $$ $$ \mbox{for any}~g(x_3) \in K \langle x_3 \rangle,~~h \in K \} $$ Thus, $S$ is the subalgebra of elements fixed under the action of the group $$ \{ (x_2 + g, x_3 + h) ~|~g = g(x_3) \in K \langle x_3 \rangle, h \in K \} $$ which is isomorphic to $U_2$. The set $S$ is a subalgebra of $A_3$. Define a set of commutators $$ c_1 = [x_2, x_3],~~c_{k+1} = [c_k, x_3],~~k = 1, 2, \ldots, $$ where $[a, b] = a b - b a$ is the ring commutator. Using induction on $k$, it is not difficult to check the following result. \begin{lemma} The commutators $c_k$, $k = 1, 2, \ldots,$ lie in $S$. \end{lemma} {\bf Hypothesis 1.} The algebra $K \langle c_1, c_2, \ldots \rangle$ is equal to $S$. Note that the elements $c_1, c_2, \ldots$ are free generators of $K \langle c_1, c_2, \ldots \rangle$ (see \cite[p.~62]{Co}). \begin{theorem} The center $Z(U_3)$ of the group $U_3$ is equal to $$ Z(U_3) = \{ (x_1 + f_1, x_2, x_3) ~|~f_1 = f_1(x_2, x_3) \in S \}. $$ \end{theorem} \begin{proof} The inclusion $\supseteq$ is evident. Let $$ \varphi = (x_1 + f_1(x_2, x_3), x_2 + f_2(x_3), x_3 + f_3) $$ be some element in $Z(U_3)$ and $$ \psi = (x_1 + g_1(x_2, x_3), x_2 + g_2(x_3), x_3 + g_3) $$ be an arbitrary element of $U_3$.
Since $\varphi \psi = \psi \varphi$, we have the equalities\\ $x_1 + g_1(x_2, x_3) + f_1(x_2 + g_2(x_3), x_3 + g_3) = x_1 + f_1(x_2, x_3) + g_1(x_2 + f_2(x_3), x_3 + f_3),$\\ $x_2 + g_2(x_3) + f_2(x_3 + g_3) = x_2 + f_2(x_3) + g_2(x_3 + f_3),$\\ $x_3 + g_3 + f_3 = x_3 + f_3 + g_3.$ \\ \noindent The third equality holds for all $x_3$, $g_3$ and $f_3$. Rewrite the first and the second equalities in the form\\ $g_1(x_2, x_3) + f_1(x_2 + g_2(x_3), x_3 + g_3) = f_1(x_2, x_3) + g_1(x_2 + f_2(x_3), x_3 + f_3),$\\ $g_2(x_3) + f_2(x_3 + g_3) = f_2(x_3) + g_2(x_3 + f_3).$\\ \noindent Let $g_1 = x_2$, $g_2 = g_3 = 0$. Then $$ x_2 + f_1(x_2, x_3) = f_1(x_2, x_3) + x_2 + f_2(x_3). $$ Hence $f_2(x_3) = 0$. Let $g_1 = x_3$, $g_2 = g_3 = 0$. Then $$ x_3 + f_1(x_2, x_3) = f_1(x_2, x_3) + x_3 + f_3. $$ Hence $f_3 = 0.$ We are left with the single condition $$ f_1(x_2 + g_2(x_3), x_3 + g_3) = f_1(x_2, x_3), $$ i.e. $f_1 \in S$. \end{proof} Let us define the following subsets in the algebra $A_2 = K\langle x_2, x_3 \rangle$:\\ $S_1 = S$,\\ $S_{m+1} = \{ f \in A_2 ~|~f^{\varphi} - f \in S_m~ \mbox{for all}~ \varphi \in U_2 \}$, $m = 1, 2, \ldots$,\\ $S_{\omega} = \bigcup\limits_{m = 1}^{\infty} S_{m}$, \\ $S_{\omega+1} = \{ f \in A_2 ~|~f^{\varphi} - f \in S_\omega~ \mbox{for all}~ \varphi \in U_2 \}$,\\ $S_{\omega+m+1} = \{ f \in A_2 ~|~f^{\varphi} - f \in S_{\omega+m}~ \mbox{for all}~ \varphi \in U_2 \}$, $m = 1, 2, \ldots$,\\ $S_{2\omega} = \bigcup\limits_{m = 1}^{\infty} S_{\omega+m}$, \\ $R_{m} = \{ f = f(x_3) \in K \langle x_3 \rangle ~|~\mathrm{deg} \, f \leq m \}$, $m = 0, 1, \ldots ,$\\ $R_{\omega} = \bigcup\limits_{m = 0}^{\infty} R_{m}$. \\ It is not difficult to see that all $S_k$ are modules over $S$.
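The membership $c_1 = [x_2, x_3] \in S$ can be checked directly with noncommutative symbols. The following sketch (Python with SymPy; the sample polynomial $g$ is our choice) verifies the defining invariance for one substitution:

```python
# Sketch (not from the paper): verify that c_1 = [x_2, x_3] is invariant under
# x_2 -> x_2 + g(x_3), x_3 -> x_3 + h (h a scalar), i.e. that c_1 lies in S.
import sympy as sp

x2, x3 = sp.symbols('x2 x3', commutative=False)
h = sp.symbols('h')                      # scalar from K: commutes with x2, x3

def bracket(a, b):                       # ring commutator [a, b] = ab - ba
    return sp.expand(a*b - b*a)

c1 = bracket(x2, x3)
g = x3**2 + 3*x3                         # a sample polynomial g(x_3)

# apply the automorphism (x_2 + g(x_3), x_3 + h) to c_1 (simultaneous subs,
# so that g keeps its original argument x_3)
image = c1.subs([(x2, x2 + g), (x3, x3 + h)], simultaneous=True)
assert sp.expand(image - c1) == 0        # c_1 is fixed by this automorphism
```

The cross terms cancel because $h$ is central and $g(x_3)$ commutes with $x_3$; the same cancellation underlies the inductive proof that every $c_k$ lies in $S$.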
\begin{remark} If we consider the homomorphism $$ \pi : K \langle x_1, x_2, x_3 \rangle \longrightarrow K [x_1, x_2, x_3], $$ then\\ $S^{\pi} = K$,\\ $S_{m+1}^{\pi} = \{ f \in K[x_3] ~|~\mathrm{deg} \, f \leq m \}$, $m = 1, 2, \ldots$,\\ $S_{\omega}^{\pi} = K[x_3]$, \\ $S_{\omega+m+1}^{\pi} = \{ f \in K[x_2, x_3] ~|~\mathrm{deg}_{x_2} \, f \leq m \}$, $m = 0, 1, \ldots$,\\ $S_{2\omega}^{\pi} = K[x_2, x_3]$, \\ $R_{m}^{\pi} = \{ f \in K[x_3] ~|~\mathrm{deg} \, f \leq m \}$, $m = 1, 2, \ldots ,$\\ $R_{\omega}^{\pi} = K[x_3]$. \\ \end{remark} \begin{theorem} The following equalities hold \begin{equation}\label{eq:m} Z_{m} = \{ (x_1 + f_1(x_2, x_3), x_2, x_3)~|~ f_1 \in S_m \},~~ m = 1, 2, \ldots, 2\omega, \end{equation} \begin{multline}\label{eq:2m} Z_{2\omega+m} = \{ (x_1 + f_1(x_2, x_3), x_2 + f_2(x_3), x_3)~|~ f_1 \in K\langle x_2, x_3 \rangle, f_2(x_3) \in R_m \},\\ m = 1, 2, \ldots, \omega, \end{multline} \begin{equation}\label{eq:3m} Z_{3\omega + 1} = U_3. \end{equation} \end{theorem} \begin{proof} We use induction on $m$. To prove (\ref{eq:m}) for $m+1$, we assume that for all $m$ such that $1 \leq m < \omega$ equality (\ref{eq:m}) holds. If $$ \varphi = (x_1 + f_1(x_2, x_3), x_2 + f_2(x_3), x_3 + f_3) \in Z_{m+1} $$ and $$ \psi = (x_1 + g_1(x_2, x_3), x_2 + g_2(x_3), x_3 + g_3) \in U_{3}, $$ then for some $$ \theta = (x_1 + h_1(x_2, x_3), x_2, x_3) \in Z_{m} $$ we have $\varphi \, \psi = \psi \, \varphi \, \theta$. Acting on the generators $x_1$, $x_2$, $x_3$ by $\varphi \, \psi$ and $\psi \, \varphi \, \theta$ we have two relations \begin{multline}\label{eq:1} f_1(x_2 + g_2(x_3), x_3 + g_3) - f_1(x_2, x_3) = h_1(x_2, x_3) + \\ + g_1(x_2 + f_2(x_3), x_3 + f_3) - g_1(x_2, x_3), \end{multline} \begin{equation}\label{eq:2} g_2(x_3) + f_2(x_3 + g_3) = f_2(x_3) + g_2(x_3 + f_3). \end{equation} If $g_2 = 0$, then the relation (\ref{eq:2}) has the form $$ f_2(x_3 + g_3) = f_2(x_3). $$ Since $g_3$ is an arbitrary element of $K$, we get $f_2 \in K$.
But in this case (\ref{eq:2}) has the form $$ g_2(x_3 + f_3) = g_2(x_3). $$ Since $g_2(x_3)$ is an arbitrary element of $K\langle x_3 \rangle$, we get $f_3 = 0$ and (\ref{eq:1}) has the form\\ \begin{equation}\label{eq:3} f_1(x_2 + g_2(x_3), x_3 + g_3) - f_1(x_2, x_3) = h_1(x_2, x_3) + g_1(x_2 + f_2, x_3) - g_1(x_2, x_3). \end{equation}\\ Let $g_1 = x_2^{N}$ for some natural number $N$. Using the homomorphism $$ \pi : K \langle x_1, x_2, x_3 \rangle \longrightarrow K[x_1, x_2, x_3] $$ and the equality $\mathrm{deg}_{x_2}\, h_1^{\pi} = 0$ we see that if $f_2 \not= 0$, then $$ \mathrm{deg}_{x_2}\, \left( f_1(x_2 + g_2(x_3), x_3 + g_3) - f_1(x_2, x_3) \right)^{\pi} = N - 1. $$ Since $N$ is an arbitrary natural number, $f_2 = 0$ and $$ f_1(x_2 + g_2(x_3), x_3 + g_3) - f_1(x_2, x_3) \in S_m, $$ i.e. $f_1(x_2, x_3) \in S_{m+1}$ and we have proved the equality (\ref{eq:m}) for $m+1$: $$ Z_{m+1} = \{ (x_1 + f_1(x_2, x_3), x_2, x_3)~|~ f_1 \in S_{m+1} \}. $$ To prove (\ref{eq:m}) for $\omega+m+1$ assume that for all $\omega+m$ such that $1 \leq m < \omega$ equality (\ref{eq:m}) holds. If $\varphi \in Z_{\omega+m+1}$ and $\psi \in U_3$ then for some $\theta \in Z_{\omega+m}$ we have $\varphi \, \psi = \psi\, \varphi \, \theta$, which gives the relations (\ref{eq:1}) and (\ref{eq:2}). As in the previous case we can check that $f_2 \in K$, $f_3 = 0$ and (\ref{eq:1})--(\ref{eq:2}) are equivalent to (\ref{eq:3}). Let $g_1 = x_2^{N}$ for some natural number $N$. Using the homomorphism $$ \pi : K \langle x_1, x_2, x_3 \rangle \longrightarrow K [x_1, x_2, x_3] $$ and the inequality $\mathrm{deg}_{x_2}\, h_1^{\pi} \leq m - 1$ we see that if $f_2 \not= 0$, then $$ \mathrm{deg}_{x_2}\, \left( f_1(x_2 + g_2(x_3), x_3 + g_3) - f_1(x_2, x_3) \right)^{\pi} = N - 1 $$ for $N \geq m + 1$. But the degree of the left-hand side is bounded.
Hence $f_2 = 0$ and we have $$ f_1(x_2 + g_2(x_3), x_3 + g_3) - f_1(x_2, x_3) \in S_{\omega+m}, $$ i.e., $f_1(x_2, x_3) \in S_{\omega+m+1}$ and we have proved the equality (\ref{eq:m}) for $\omega+m+1$: $$ Z_{\omega+m+1} = \{ (x_1 + f_1(x_2, x_3), x_2, x_3)~|~ f_1 \in S_{\omega+m+1} \}. $$ To prove (\ref{eq:2m}) for $m+1$ assume that for all $m$ such that $1 \leq m < \omega$ equality (\ref{eq:2m}) holds. If $\varphi \in Z_{2\omega+m+1}$, $\psi \in U_3$, then for some $\theta \in Z_{2\omega+m}$ we have $\varphi \, \psi = \psi\, \varphi \, \theta$. If $$ \varphi = (x_1 + f_1(x_2, x_3), x_2 + f_2(x_3), x_3 + f_3), $$ $$ \psi = (x_1 + g_1(x_2, x_3), x_2 + g_2(x_3), x_3 + g_3) $$ and $$ \theta = (x_1 + h_1(x_2, x_3), x_2 + h_2(x_3), x_3), $$ then we have the relations \begin{equation*} \begin{split} x_1 + g_1(x_2, x_3) + f_1(x_2 + g_2(x_3), x_3 + g_3) & = x_1 + h_1(x_2, x_3) + f_1(x_2 + h_2(x_3), x_3) +\\ & + g_1(x_2 + h_2(x_3) + f_2(x_3), x_3 + f_3), \end{split} \end{equation*} $$ x_2 + g_2(x_3) + f_2(x_3 + g_3) = x_2 + h_2(x_3) + f_2(x_3) + g_2(x_3 + f_3). $$ Since $h_1$ is an arbitrary element of $K \langle x_2, x_3 \rangle$, we need only consider the second relation, which is equivalent to $$ f_2(x_3 + g_3) - f_2(x_3) = h_2(x_3) + g_2(x_3 + f_3) - g_2(x_3). $$ Since $\mathrm{deg} \, h_2 \leq m$ and $g_2(x_3)$ is any element of $K \langle x_3 \rangle$, we get $f_3 = 0$. Hence, $$ f_2(x_3 + g_3) - f_2(x_3) = h_2(x_3). $$ From this equality it follows that $\mathrm{deg} \, f_2 \leq m + 1$. We have proved the equality (\ref{eq:2m}) for $m+1$: $$ Z_{2\omega+m+1} = \{ (x_1 + f_1(x_2, x_3), x_2 + f_2(x_3), x_3)~|~ f_1 \in K \langle x_2, x_3 \rangle, f_2(x_3) \in R_{m+1} \}. $$ To prove (\ref{eq:3m}) we note that $$ [U_3, U_3] \subseteq \{ (x_1 + f_1(x_2, x_3), x_2 + f_2(x_3), x_3) \}. $$ \end{proof} We described the hypercenters of $U_3$ in terms of the algebras $S_m$ and $S_{\omega+m}$. It is interesting to find sets of generators for these algebras.
To do this we must answer the following questions. {\bf Question 1} (see Hypothesis 1). Is it true that $$ S = K\langle c_1, c_2, \ldots \rangle? $$ {\bf Question 2.} Is it true that for all $m \geq 1$ the following equalities hold: $$ S_{m+1} = \{ f \in K\langle S, x_3 \rangle ~|~\mathrm{deg}_{x_3} \, f \leq m \}? $$ {\bf Question 3.} Is it true that $$ \bigcup_{m=1}^{\infty} S_m = K\langle S, x_3 \rangle? $$ {\bf Question 4.} Is it true that for all $m \geq 1$ the following equalities hold: $$ S_{\omega+m} = \{ f \in K \langle S, x_3, x_2 \rangle ~|~\mathrm{deg}_{x_2} \, f \leq m \}? $$ If $R$ is the Specht algebra of $A_2$, i.e. the subalgebra of $A_2$ that is generated by all commutators $$ [x_2, x_3],~~[[x_2, x_3], x_3],~~[[x_2, x_3], x_2], \ldots $$ then the following inclusions hold \begin{equation}\label{eq:4} S_{m+1} \subset \{ f \in K \langle R, x_3 \rangle ~|~\mathrm{deg}_{x_3} \, f \leq m \}, \end{equation} \begin{equation}\label{eq:5} S_{\omega+m} \subset \{ f \in K \langle R, x_3, x_2 \rangle ~|~\mathrm{deg}_{x_2} \, f \leq m \}, \end{equation} for all $m \geq 1$. This follows from the fact that $K \langle x_2, x_3 \rangle$ is a free left $R$-module with the set of free generators $$ x_2^{\alpha} x_3^{\beta}, ~~~\alpha, \beta \geq 0. $$ Note that the inclusion (\ref{eq:4}) is strict. This follows from the next proposition. \begin{proposition} The commutators $$ [x_2, \underbrace{x_3, \ldots, x_3}_k, x_2] = [c_k, x_2],~~k \geq 1, $$ do not lie in $S_{m}$, $m \geq 1$. \end{proposition} \begin{proof} Indeed, for the automorphism $$ \varphi = (x_2 + g_2(x_3), x_3 + g_3) $$ of the algebra $K \langle x_2, x_3 \rangle$ we have $$ [c_k, x_2]^{\varphi} = [c_k^{\varphi}, x_2^{\varphi}] = [c_k, x_2 + g_2(x_3)] = [c_k, x_2] + [c_k, g_2(x_3)].
$$ If $g_2(x_3) = x_3^{N}$, then \begin{align*} [c_k, x_3^{N}] & = c_k \, x_3^{N} - x_3^{N} c_k = (c_k x_3 - x_3 c_k) x_3^{N-1} + x_3 c_k x_3^{N-1} - x_3^{N} c_k \\ & = c_{k+1} x_3^{N-1} + x_3 (c_k x_3^{N-1} - x_3^{N-1} c_k) \\ & =c_{k+1} x_3^{N-1} + x_3 \left(( c_k x_3 - x_3 c_k) x_3^{N-2} + x_3 c_k x_3^{N-2} - x_3^{N-1} c_k \right)\\ & =c_{k+1} x_3^{N-1} + x_3 c_{k+1} x_3^{N-2} + x_3^{2} ( c_k x_3^{N-2} - x_3^{N-2} c_k )\\ & = \ldots \\ & = \sum_{p+q = N-1} x_3^p c_{k+1} x_3^q. \end{align*} Hence, for $\varphi = (x_2 + g_2(x_3), x_3 + g_3)$ we have $$ [c_k, x_2]^{\varphi} - [c_k, x_2] = \sum_{p+q = N-1} x_3^p c_{k+1} x_3^q. $$ If $$ g_2(x_3) = \sum_{n=0}^{N} a_n x_3^n, $$ then $$ [c_k, x_2]^{\varphi} - [c_k, x_2] = \sum_{n=1}^{N} a_n \sum_{p+q=n-1} x_3^p c_{k+1} x_3^q. $$ If $[c_k, x_2] \in S_m$ for some $m$, then $$ [[c_k, x_2], \varphi ] \equiv [c_k, x_2]^{\varphi} - [c_k, x_2] \in S_{m-1}. $$ Let $$ \psi = (x_2 + h_2(x_3), x_3 + h),~~~\varphi = (x_2 + x_3^N, x_3). $$ Then, since $h \in K$, \begin{align*} [[[c_k, x_2], \varphi ], \psi ] & = \left[\sum_{p+q=N-1} x_3^p c_{k+1} x_3^q, \psi \right] \\ & = \sum_{p+q=N-1} (x_3 + h)^p c_{k+1} (x_3 + h)^q - \sum_{p+q=N-1} x_3^p c_{k+1} x_3^q \\ & = \sum_{p+q=N-1}\ \sum_{\substack{0\leq l\leq p,\ 0\leq r\leq q\\ (l,r)\neq(p,q)}} C_p^l\, C_q^r\, h^{p+q-l-r}\, x_3^l\, c_{k+1}\, x_3^r \end{align*} has degree $N-2$ in the variable $x_3$. Continuing this process, we obtain that if $\mathrm{deg} \, g_2(x_3) = N$, then $$ [c_k, x_2, \varphi, \psi_1, \ldots, \psi_{N-1}] \in S $$ for every $\psi_1, \ldots, \psi_{N-1} \in U_2$. Hence, $$ [c_k, x_2, \varphi, \psi_1, \ldots, \psi_{N-1}] \in S = S_1,\quad \ldots,\quad [c_k, x_2, \varphi ] \in S_N,\quad [c_k, x_2] \in S_{N+1}. $$ Using similar ideas we can prove that $$ [c_k, x_2] \not\in S_{N}.
$$ Since $N > m$ can be chosen arbitrarily, $$ [c_k, x_2] \not\in S_{m}, ~~m = 1, 2, \ldots $$ \end{proof} \section{Center of the unitriangular group $U_n$, $n \geq 4$} In this section we prove the following assertion. \begin{theorem} Any automorphism $\varphi$ in the center $Z(U_n)$ of $U_n$ has the form $$ \varphi= \left( x_1 + f(x_2, \ldots, x_n), x_2, \ldots, x_n \right), $$ where the polynomial $f$ is such that $$ f(x_2 + g_2, \ldots, x_n + g_n) = f(x_2, \ldots, x_n) $$ for every $ g_2 \in K \langle x_3, \ldots, x_n \rangle$, $g_3\in K \langle x_4, \ldots, x_n \rangle, \ldots, g_n \in K$. \end{theorem} We regard $U_{n-1}$ as a subgroup of $U_n$ via $$ U_{n-1} = \left\{ \varphi = \left( x_1, x_2 + g_2, \ldots, x_n + g_n \right) \in U_n ~|~ g_2 \in K \langle x_3, \ldots, x_n \rangle,\,\, \ldots, g_n \in K \right\}. $$ Hence we have the following sequence of inclusions for the subgroups $U_k$, $k=3,\ldots,n$: $$ U_n \geq U_{n-1} \geq \ldots \geq U_{3}. $$ Under this convention the theorem can be restated as follows: $$ Z(U_n) = \left\{ \varphi= \left( x_1 + f(x_2, \ldots, x_n), x_2, \ldots, x_n \right) ~|~ f^{U_{n-1}}=f \right\}, $$ where $$ f^{U_{n-1}} = \{ f^{\psi} ~|~ \psi \in U_{n-1} \}. $$ \begin{proof} Let $$ \varphi= \left( x_1 + f_1, x_2 + f_2, \ldots, x_n + f_n \right) \in Z(U_n) $$ and let $$ \psi= \left( x_1 + g_1, x_2 + g_2, \ldots, x_n + g_n \right) $$ be an arbitrary element of $U_n$. Then $x_k^{\varphi\psi} = x_k^{\psi\varphi}$ for all $k = 1, 2, \ldots, n$. In particular, if $k = 1$, then \begin{equation}\label{eq:11} ( x_1 + f_1)^\psi = ( x_1 + g_1)^\varphi. \end{equation} Put $g_1 = x_2$, $g_2 = g_3 = \ldots = 0$. Then this relation has the form $$ x_1 + x_2 + f_1 = x_1 + f_1 + x_2 + f_2. $$ Hence, $f_2 = 0$. Analogously, putting $g_1 = x_3$, $g_2 = g_3 = \ldots = 0$, we get $f_3 = 0$. Hence, $f_2 = f_3 = \ldots = f_n = 0$.
The relation (\ref{eq:11}) for arbitrary $\psi$ has the form $$ x_1 + g_1(x_2, \ldots, x_n) + f_1( x_2 + g_2, \ldots, x_n + g_n) = x_1 + f_1( x_2, \ldots, x_n) + g_1(x_2, \ldots, x_n). $$ Hence, $$ f_1( x_2+g_2, \ldots, x_n + g_n) = f_1( x_2, \ldots, x_n). $$ \end{proof} Let us introduce the notation $$ \zeta U_n = \left\{ f(x_2, \ldots, x_n) \in K \langle x_2, \ldots, x_n \rangle ~|~ f^{U_{n-1}} = f \right\}, $$ $$ \zeta U_{n-1} = \left\{ f(x_3, \ldots, x_n) \in K \langle x_3, \ldots, x_n \rangle ~|~ f^{U_{n-2}} = f \right\}, $$ $$ \vdots $$ $$ \zeta U_{3} = \left\{ f(x_{n-2}, x_{n-1}, x_n) \in K \langle x_{n-2},x_{n-1},x_n \rangle ~|~ f^{U_{2}} = f \right\}. $$ Note that $\zeta U_{3} = S$. We formulate the next hypothesis on the structure of the algebras $\zeta U_k$, $k=3, \ldots, n$. {\bf Hypothesis 2.} The following inclusions hold $$ \zeta U_4 \subseteq K \langle \zeta U_3, x_{n-2} \rangle, $$ $$ \zeta U_5\subseteq K \langle \zeta U_4, x_{n-3} \rangle, $$ $$ \vdots $$ $$ \zeta U_n\subseteq K \langle \zeta U_{n-1}, x_{2} \rangle. $$ Recall that by Hypothesis 1 we have $$ \zeta U_3 = K \langle c_1, c_2, \ldots \rangle, $$ where $c_1 = [x_{n-1}, x_n]$, $c_{k+1} = [c_k, x_n]$, $k = 1, 2, \ldots $ \begin{proposition} If Hypotheses 1 and 2 are true, then the following equality holds $$ \zeta U_k = K \langle c_1, c_2, \ldots \rangle,\,\,\,k = 3, 4, \ldots, n. $$ \end{proposition} \begin{proof} For $k = 4$ Hypothesis 2 has the form $$ \zeta U_4 \subseteq K \langle \zeta U_3, x_{n-2} \rangle, $$ i.e., every polynomial $f\in \zeta U_4$ can be represented in the form $$ f = F(x_{n-2}, c_1, c_2, \ldots, c_N) $$ for some non-negative integer $N$. Applying the automorphism $$ \psi = \left( x_1, x_2, \ldots, x_{n-2} + g_{n-2} , x_{n-1}, x_n \right), $$ we get $$ F(x_{n-2} + g_{n-2}, c_1, c_2, \ldots, c_N) = F(x_{n-2}, c_1, c_2, \ldots, c_N).
$$ Here $g_{n-2} = g_{n-2}(x_{n-1}, x_n)$ is an arbitrary element of $K \langle x_{n-1}, x_n \rangle$. Putting $g_{n-2} = c_{N+1}$ and $x_{n-2} = 0$ in this equality, we get $$ F(c_{N+1}, c_1, c_2, \ldots, c_N) = F(0, c_1, c_2, \ldots, c_N). $$ Since $c_1, c_2, \ldots $ are free generators, $F$ does not contain the variable $x_{n-2}$. Hence $$ \zeta U_4 = K \langle c_1, c_2, \ldots \rangle. $$ Analogously, we can prove the equality $$ \zeta U_k = K \langle c_1, c_2, \ldots \rangle,\,\,\,k=3, 4, \ldots, n. $$ \end{proof} We see that the description of the hypercenters of $U_n$ (see Hypotheses 1 and 2) is connected with the theory of non-commutative invariants of the free associative algebra under the action of certain subgroups of $U_n$. We will study these invariants in forthcoming papers. \end{document}
\begin{document} \begin{frontmatter} \title{Rational $D(q)$-quadruples} \author{Goran Dra\v zi\' c\fnref{fnote1}} \ead{[email protected]} \author{Matija Kazalicki\fnref{fnote2}} \ead{[email protected]} \fntext[fnote1]{Faculty of Food Technology and Biotechnology, University of Zagreb, Pierottijeva 6, 10000 Zagreb, Croatia.} \fntext[fnote2]{Department of Mathematics, University of Zagreb, Bijeni\v cka cesta 30, 10000 Zagreb, Croatia.} \begin{abstract} For a rational number $q$, a \emph{rational $D(q)$-$n$-tuple} is a set of $n$ distinct nonzero rationals $\{a_1, a_2, \dots, a_n\}$ such that $a_ia_j+q$ is a square for all $1 \leqslant i < j \leqslant n$. For every $q$ we find all rational $m$ such that there exists a $D(q)$-quadruple with product $a_1a_2a_3a_4=m$. We describe all such quadruples using points on a specific elliptic curve depending on $(q,m).$ \end{abstract} \begin{keyword} Diophantine $n$-tuples\sep Diophantine quadruples\sep Elliptic curves \sep Rational Diophantine $n$-tuples \end{keyword} \end{frontmatter} \linenumbers \section{Introduction} Let $q\in \mathbb{Q}$ be a nonzero rational number. A set of $n$ distinct nonzero rationals $\lbrace a_1, a_2, \dots, a_n\rbrace$ is called a rational $D(q)$-$n$-tuple if $a_ia_j+q$ is a square for all $1 \leqslant i<j\leqslant n.$ If $\lbrace a_1,a_2,\dots,a_n\rbrace$ is a rational $D(q)$-$n$-tuple, then for every nonzero $r\in \mathbb{Q}$ the set $\lbrace ra_1,ra_2,\dots,ra_n\rbrace$ is a $D(qr^2)$-$n$-tuple, since $(ra_1)(ra_2)+qr^2=(a_1a_2+q)r^2$. With this in mind, we restrict to square-free integers $q.$ If we set $q=1$, then such sets are called rational Diophantine $n$-tuples. The first example of a rational Diophantine quadruple was the set $$\left\lbrace\frac{1}{16}, \frac{33}{16}, \frac{17}{4}, \frac{105}{16}\right\rbrace$$ found by Diophantus, while the first example of an integer Diophantine quadruple, the set \[ \lbrace 1,3,8,120 \rbrace \] is due to Fermat.
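As a quick sanity check (our addition, not part of the original text), both classical quadruples can be verified with exact rational arithmetic; the helper names below are ours:

```python
import math
from fractions import Fraction
from itertools import combinations

def is_rational_square(r):
    """True iff the rational r is the square of a rational number."""
    r = Fraction(r)
    if r < 0:
        return False
    n, d = r.numerator, r.denominator
    return math.isqrt(n) ** 2 == n and math.isqrt(d) ** 2 == d

def is_Dq_tuple(elems, q):
    """Check the D(q) condition: a_i * a_j + q is a rational square for all i < j."""
    return all(is_rational_square(a * b + q) for a, b in combinations(elems, 2))

# Diophantus' rational D(1)-quadruple ...
diophantus = [Fraction(1, 16), Fraction(33, 16), Fraction(17, 4), Fraction(105, 16)]
# ... and Fermat's integer D(1)-quadruple
fermat = [Fraction(1), Fraction(3), Fraction(8), Fraction(120)]

assert is_Dq_tuple(diophantus, 1)
assert is_Dq_tuple(fermat, 1)
```

The scaling observation above can be confirmed with the same checker: scaling Fermat's quadruple by $r=2$ yields a $D(4)$-quadruple.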
In the case of integer Diophantine $n$-tuples, it is known that there are infinitely many Diophantine quadruples (e.g. $\{k-1, k+1, 4k, 16k^3-4k\},$ for $k\geq 2$). Dujella \cite{dujella2004there} showed there are no Diophantine sextuples and only finitely many Diophantine quintuples, while recently He, Togb\' e and Ziegler \cite{he2019there} proved there are no integer Diophantine quintuples, which had been a long-standing conjecture. Gibbs \cite{gibbs2006some} found the first example of a rational Diophantine sextuple using a computer, and Dujella, Kazalicki, Miki\' c and Szikszai \cite{dujella2017there} constructed infinite families of rational Diophantine sextuples. Dujella and Kazalicki parametrized Diophantine quadruples with a fixed product of elements using triples of points on a specific elliptic curve, and used that parametrization for counting Diophantine quadruples over finite fields \cite{dujella2016diophantine} and for constructing rational sextuples \cite{dujella2017more}. There is no known rational Diophantine septuple. Regarding rational $D(q)$-$n$-tuples, Dujella \cite{dujella2000note} has shown that there are infinitely many rational $D(q)$-quadruples for any $q\in \mathbb{Q}.$ Dujella and Fuchs \cite{dujella2012problem} have shown that, assuming the Parity Conjecture, for infinitely many square-free integers $q\neq 1$ there exist infinitely many rational $D(q)$-quintuples. There is no known rational $D(q)$-sextuple for $q\neq a^2, a\in \mathbb{Q}.$ Our approach is similar to the one Dujella and Kazalicki used in \cite{dujella2016diophantine} and \cite{dujella2017more}.
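The one-parameter family mentioned above is easy to confirm symbolically: $(k-1)(k+1)+1=k^2$, $4k(k\pm1)+1=(2k\pm1)^2$, $4k(16k^3-4k)+1=(8k^2-1)^2$ and $(k\pm1)(16k^3-4k)+1=(4k^2\pm2k-1)^2$. A short exhaustive check over a range of $k$ (a sketch we add for illustration):

```python
import math
from itertools import combinations

def is_square(n):
    """True iff the integer n is a perfect square."""
    return n >= 0 and math.isqrt(n) ** 2 == n

# The classical family {k-1, k+1, 4k, 16k^3-4k} of integer Diophantine quadruples
for k in range(2, 200):
    quad = [k - 1, k + 1, 4 * k, 16 * k**3 - 4 * k]
    assert all(is_square(a * b + 1) for a, b in combinations(quad, 2))
```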
Let $\{a,b,c,d\}$ be a rational $D(q)$-quadruple, for a fixed nonzero rational $q,$ such that $$ ab+q=t_{12}^2,\quad ac+q=t_{13}^2,\quad ad+q=t_{14}^2,$$ $$bc+q=t_{23}^2,\quad bd+q=t_{24}^2,\quad cd+q=t_{34}^2.$$ Then $(t_{12},t_{13},t_{14},t_{23},t_{24},t_{34},m=abcd)\in \mathbb{Q}^7$ defines a rational point on the algebraic variety $\mathcal{C}$ defined by the equations $$(t_{12}^2-q)(t_{34}^2-q)=m,$$ $$(t_{13}^2-q)(t_{24}^2-q)=m,$$ $$(t_{14}^2-q)(t_{23}^2-q)=m.$$ The rational points $(\pm t_{12}, \pm t_{13}, \pm t_{14}, \pm t_{23}, \pm t_{24}, \pm t_{34}, m)$ on $\mathcal{C}$ determine two rational $D(q)$-quadruples $\pm (a,b,c,d)$ $\Big($specifically, $a^2=\frac{(t_{12}^2-q)(t_{13}^2-q)}{t_{23}^2-q}\Big)$ provided $a,b,c,d$ are rational, distinct and nonzero. Any point $(t_{12},t_{13},t_{14},t_{23},t_{24},t_{34},m) \in\mathcal{C}$ corresponds to three points $Q'_1=(t_{12},t_{34}), Q'_2=(t_{13},t_{24})$ and $Q'_3=(t_{14},t_{23})$ on the curve\[ \mathcal{D}_m\colon (X^2-q)(Y^2-q)=m. \] If $\mathcal{D}_m(\mathbb{Q})=\emptyset,$ there are no rational $D(q)$-quadruples with product of elements equal to $m,$ so we assume there exists a point $P_1=(x_1,y_1) \in \mathcal{D}_m(\mathbb{Q}).$ The curve $\mathcal{D}_m$ is a curve of genus $1$ unless $m=0$ or $m=q^2;$ from now on we assume $m\neq 0$ and $m\neq q^2.$ Since we also assumed a point $P_1 \in \mathcal{D}_m(\mathbb{Q}),$ the curve $\mathcal{D}_m$ is birationally equivalent to the elliptic curve \[ E_m\colon W^2=T^3+(4q^2-2m)T^2+m^2T \] via a rational map $f\colon \mathcal{D}_m \to E_m$ given by \begin{align*} T&=(y_1^2-q)\cdot\frac{2x_1(y^2-q)x+(x_1^2+q)y^2+x_1^2y_1^2-2x_1^2q-y_1^2q}{(y-y_1)^2}, \\ W&=T\cdot\frac{2y_1x(q-y^2)+2x_1y(q-y_1^2)}{y^2-y_1^2}.
\end{align*} Note that $f$ maps $(x_1,y_1)$ to the point at infinity $\mathcal{O}\in E_m(\mathbb{Q})$, it maps $(-y_1,x_1)$ to a point of order four, $R=(m,2mq) \in E_m(\mathbb{Q})$, and maps $(-x_1,y_1)$ to \[ S=\left(\frac{y_1^2(x_1^2-q)^2}{x_1^2},\frac{qy_1(x_1^2+y_1^2)(x_1^2-q)^2}{x_1^3}\right)\in E_m(\mathbb{Q}), \] which is generically a point of infinite order. We have the following associations \[ (a,b,c,d)\dashleftarrow\dashrightarrow \text{a point on }\mathcal{C}(\mathbb{Q})\longleftrightarrow (Q'_1,Q'_2,Q'_3) \in \mathcal{D}_m(\mathbb{Q})^3. \] In order to obtain a rational $D(q)$-quadruple from a triple of points on $\mathcal{D}_m(\mathbb{Q}),$ we must satisfy the previously mentioned conditions: $a,b,c,d$ must be rational, pairwise distinct and nonzero. It is easy to see that if one of them is rational, then so are the other three (i.e. $b=\frac{t_{12}^2-q}{a}$), and that they will be nonzero when $m\neq 0,$ since $m=abcd.$ The elements of the quadruple $(a,b,c,d)$ corresponding to the triple of points $(Q'_1, Q'_2, Q'_3)$ are distinct if no two of the points $Q'_1, Q'_2, Q'_3$ can be transformed into one another by changing signs and/or switching coordinates. For example, the triple $(t_{12},t_{34}),(-t_{34},t_{12}),(t_{14},t_{23})$ would lead to $a=d.$ This condition on points of $\mathcal{D}_m$ is more easily expressed in terms of points on $E_m.$ Assume $P\in E_m \leftrightarrow (x,y)\in \mathcal{D}_m,$ that is, $f(x,y)=P.$ Then \begin{equation}\label{ness} S-P\leftrightarrow (-x,y),\quad P+R\leftrightarrow (-y,x). \end{equation} The maps $P\mapsto S-P$ and $P\mapsto P+R$ generate a group $G$ of translations on $E_m$, isomorphic to $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/4\mathbb{Z},$ and $G$ induces a group action on $E_m({\overline{\Q}}).$ In order to obtain a quadruple from the triple $(Q_1,Q_2,Q_3)\in E_m(\mathbb{Q})^3$, such that the elements of the quadruple are distinct, the orbits $G\cdot Q_1, G\cdot Q_2, G\cdot Q_3$ must be disjoint.
This is because the set of points in $\mathcal{D}_m$ corresponding to $G\cdot P$ is $\{(\pm x,\pm y),(\pm y,\pm x)\}.$ We say that such a triple of points satisfies the non-degeneracy criteria. Let $\overline{\mathcal{D}}_m$ denote the projective closure of the curve $\mathcal{D}_m$ defined by \[ \overline{\mathcal{D}}_m \colon (X^2-qZ^2)(Y^2-qZ^2)=mZ^4. \] The map $f^{-1} \colon E_m \to \overline{\mathcal{D}}_m$ is a rational map, and since the curve $E_m$ is smooth, the map is a morphism \cite[II.2.1]{silverman2009arithmetic}. The map $x\circ f^{-1}\colon E_m \to \mathbb{A}^1$ given by \[ x\circ f^{-1}(P)=\frac{X\circ f^{-1}(P)}{Z\circ f^{-1}(P)} \] has poles at the points $P_0$ such that $f^{-1}(P_0)=[1:0:0],$ and is regular elsewhere. The map $y\circ f^{-1}\colon E_m \to \mathbb{A}^1$ given by \[ y\circ f^{-1}(P)=\frac{Y\circ f^{-1}(P)}{Z\circ f^{-1}(P)} \] has poles at the points $P_2$ such that $f^{-1}(P_2)=[0:1:0],$ and is regular elsewhere. We define the rational map $g\colon E_m \to \mathbb{A}^1$ by \[ g(P)=(x_1^2-q)\cdot \left(\left(x\circ f^{-1}(P)\right)^2-q\right). \] The map $g$ has poles at the same points as the map $x\circ f^{-1},$ and is regular elsewhere. The maps $f$ and $g$ depend on a fixed point $P_1\in\mathcal{D}_m.$ We suppress this dependence and simply denote these maps by $f$ and $g$. The motivation for the map $g$ is \cite[2.4, Proposition 4]{dujella2017more}. Dujella and Kazalicki use the $2$-descent homomorphism in the proof of Proposition 4; we will use $g$ for similar purposes.
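The formulas for $f$ and the distinguished points $\mathcal{O}$, $R$, $S$ can be sanity-checked with exact rational arithmetic. The sketch below (our addition) uses the hypothetical sample values $q=1$ and $P_1=(x_1,y_1)=(2,3)$, so $m=24$; it confirms that $f(-y_1,x_1)=R=(m,2mq)$, that the image lies on $E_m$, and that $R$ has order four (its double is the $2$-torsion point $(0,0)$):

```python
from fractions import Fraction as F

q, x1, y1 = F(1), F(2), F(3)          # hypothetical sample point P1 = (x1, y1) on D_m
m = (x1**2 - q) * (y1**2 - q)          # m = 24
A, B = 4*q**2 - 2*m, m**2              # E_m : W^2 = T^3 + A*T^2 + B*T

def f(x, y):
    """The birational map D_m -> E_m from the introduction (valid when y != +-y1)."""
    T = (y1**2 - q) * (2*x1*(y**2 - q)*x + (x1**2 + q)*y**2
                       + x1**2*y1**2 - 2*x1**2*q - y1**2*q) / (y - y1)**2
    W = T * (2*y1*x*(q - y**2) + 2*x1*y*(q - y1**2)) / (y**2 - y1**2)
    return T, W

def on_curve(P):
    T, W = P
    return W**2 == T**3 + A*T**2 + B*T

R = f(-y1, x1)
assert R == (m, 2*m*q) and on_curve(R)   # (-y1, x1) maps to R = (m, 2mq)
assert f(y1, -x1) == (m, -2*m*q)         # and (y1, -x1) maps to -R

# R has order four: doubling it gives the 2-torsion point (0, 0)
lam = (3*R[0]**2 + 2*A*R[0] + B) / (2*R[1])   # tangent slope at R
T2 = lam**2 - A - 2*R[0]
W2 = lam*(R[0] - T2) - R[1]
assert (T2, W2) == (0, 0)
```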
\begin{theorem}\label{thm:1} Let $(x_1,y_1)\in \mathcal{D}_m(\mathbb{Q})$ be the point used to define the map $f\colon \mathcal{D}_m \to E_m.$ If $(Q_1,Q_2,Q_3)\in E_m(\mathbb{Q})^3$ is a triple satisfying the non-degeneracy criteria such that $(y_1^2-q)\cdot g(Q_1+Q_2+Q_3)$ is a square, then the numbers $$a=\pm\left(\frac{1}{m}\frac{g(Q_1)}{(x_1^2-q)}\frac{g(Q_2)}{(x_1^2-q)}\frac{g(Q_3)}{(x_1^2-q)}\right)^{1/2},$$ $$b=\frac{g(Q_1)}{a(x_1^2-q)}, c=\frac{g(Q_2)}{a(x_1^2-q)}, d=\frac{g(Q_3)}{a(x_1^2-q)}$$ are rational and form a \emph{rational $D(q)$-quadruple} such that $abcd=m$. Conversely, assume $(a,b,c,d)$ is a \emph{rational $D(q)$-quadruple} such that $m=abcd$. If the triple $(Q_1,Q_2,Q_3)\in E_m(\mathbb{Q})^3$ corresponds to $(a,b,c,d)$, then $(y_1^2-q)g(Q_1+Q_2+Q_3)$ is a square. \end{theorem} It is not true that the existence of a rational point on $\mathcal{D}_m$ implies the existence of a rational $D(q)$-quadruple with product $m.$ Examples with further clarification are given in Section \ref{sec:4}. The following classification theorem holds: \begin{theorem}\label{thm:2} There exists a rational $D(q)$-quadruple with product $m$ if and only if \[m=(t^2-q)\left(\frac{u^2-q}{2u}\right)^2 \] for some rational parameters $(t,u).$ \end{theorem} In Section \ref{sec:2} we study properties of the function $g$ which we then use in Section \ref{sec:3} to prove Theorems \ref{thm:1} and \ref{thm:2}. In Section \ref{sec:4}, we give an algorithm for deciding whether a specific $m$ with $\mathcal{D}_m(\mathbb{Q})\neq \emptyset$ admits a rational $D(q)$-quadruple with product $m.$ We conclude the section with an example of an infinite family. \section{Properties of the function $g$}\label{sec:2} In this section, we investigate the properties of the function $g$ which we will use to prove the main theorems.
The following proposition describes the divisor of $g.$ \begin{proposition}\label{prop:3} The divisor of $g$ is $${\textrm{div}\:} g=2(S_1)+2(S_2)-2(R_1)-2(R_2),$$ where $S_1,R_1,S_2,R_2\in E_m(\mathbb{Q}(\sqrt{q}))$ with coordinates \begin{align*} S_1&=(~(y_1^2-q)(x_1-\sqrt{q})^2, \hspace{5.8pt}\quad 2y_1\sqrt{q}~(y_1^2-q)(x_1-\sqrt{q})^2~),\\ R_1&=(~(x_1^2-q)(y_1+\sqrt{q})^2, \hspace{5.8pt}\quad 2x_1\sqrt{q}~(x_1^2-q)(y_1+\sqrt{q})^2~),\\ S_2&=(~(y_1^2-q)(x_1+\sqrt{q})^2,~ -2y_1\sqrt{q}~(y_1^2-q)(x_1+\sqrt{q})^2~),\\ R_2&=(~(x_1^2-q)(y_1-\sqrt{q})^2,~ -2x_1\sqrt{q}~(x_1^2-q)(y_1-\sqrt{q})^2~). \end{align*} The points $S_1, S_2, R_1$ and $R_2$ satisfy the following identities: \begin{align*} 2S_1&=2S_2=f(x_1,-y_1)=S+2R,\\ 2R_1&=2R_2=f(-x_1,y_1)=S,\\ S_1+R&=R_1,\quad R_1+R=S_2,\quad S_2+R=R_2,\quad R_2+R=S_1.\end{align*} \end{proposition} \begin{proof} We seek the zeros and poles of $g.$ The poles of $g$ are the same as the poles of $x\circ f^{-1}.$ To find the zeros of $g$, notice that \[ (x\circ f^{-1}(P))^2-q=\frac{m}{(y\circ f^{-1}(P))^2-q}, \] so all we need to find are the poles of $y \circ f^{-1}.$ The zeros of $x\circ f^{-1}$ are points on $E_m$ which map to affine points on $\overline{\mathcal{D}}_m$ that have zero $x$-coordinate. We can easily calculate such points. If $x=0,$ then $y^2=\frac{q^2-m}{q}.$ Denote $K=\sqrt{\frac{q^2-m}{q}}.$ We know $K\neq 0,$ since $m\neq q^2.$ The zeros of $x\circ f^{-1}$ are the points $f(0,K), f(0,-K)\in E_m({\overline{\Q}}),$ which are different since $K\neq 0.$ Since $x\circ f^{-1}$ is of degree two, both zeros are of order one. We conclude $x\circ f^{-1}$ has either one double pole or two poles of order one. Similarly, the zeros of $y\circ f^{-1}$ are the points $f(K,0), f(-K,0)\in E_m({\overline{\Q}}),$ both of order one. The map $y\circ f^{-1}$ also has either a double pole or two poles of order one.
Assume the point $P_0 \in E_m$ maps to a non-affine point in $\overline{\mathcal{D}}_m.$ This means that $Z\circ f^{-1}(P_0)=0,$ and at least one of the projective coordinate functions $X\circ f^{-1}, Y\circ f^{-1}$ is nonzero at $P_0.$ It follows that $P_0$ is a pole of at least one of the maps $x\circ f^{-1}, y\circ f^{-1}.$ Let $P_0 \in E_m$ be a pole of one of the maps $x\circ f^{-1}, y\circ f^{-1}.$ None of the points $ f^{-1}(P_0),$ $f^{-1}(P_0+R), f^{-1}(P_0+2R), f^{-1}(P_0+3R)$ are affine points on $\overline{\mathcal{D}}_m$ because if one of them is an affine point, then they all are, since the map $P\mapsto P+R$ viewed on $\overline{\mathcal{D}}_m$ maps affine points to affine points. We conclude that each of the points $P_0, P_0+R, P_0+2R, P_0+3R$ is a pole of one of the maps $x\circ f^{-1}, y\circ f^{-1}$ and with the previous claims we have that $x\circ f^{-1}, y\circ f^{-1}$ both have two poles of order one. The map $P\mapsto S-P,$ viewed on $\overline{\mathcal{D}}_m$, also maps affine points to affine points. Similarly as above, the points $f^{-1}(S-P_0), f^{-1}(S-P_0+R), f^{-1}(S-P_0+2R), f^{-1}(S-P_0+3R)$ are not affine in $\overline{\mathcal{D}}_m,$ because the point $f^{-1}(P_0)$ would be affine as well. The sets $\lbrace P_0, P_0+R, P_0+2R, P_0+3R \rbrace$ and $\lbrace S-P_0, S-P_0+R, S-P_0+2R, S-P_0+3R \rbrace$ must be equal, otherwise the maps $x\circ f^{-1}, y\circ f^{-1}$ would have more than four different poles in total. This means that every pole satisfies the equality $2P_0=S+kR$ for some $k\in \lbrace 0, 1, 2, 3\rbrace.$ Equivalently, every pole $P_0$ is a fixed point of some involution $i_k$ of the form $P\mapsto S-P+kR.$ Each involution $i_k$ has four fixed points on $E_m({\overline{\Q}}),$ because any two fixed points differ by an element from the $[2]$-torsion. 
The involution $i_0,$ viewed on $\overline{\mathcal{D}}_m,$ maps an affine point $(x,y)=f^{-1}(P)$ to $(-x,y)=f^{-1}(S-P).$ It has two affine fixed points which have $x$-coordinate equal to zero on $\overline{\mathcal{D}}_m$, as well as two fixed points which are not affine on $\overline{\mathcal{D}}_m.$ Such points are either poles of $x\circ f^{-1}$ or poles of $y \circ f^{-1}.$ Using Magma \cite{bosma1997magma} we calculate the coordinates explicitly to obtain $R_1$ and $R_2.$ Computationally, we confirm $R_1$ and $R_2$ are poles of $x \circ f^{-1},$ that is, poles of $g.$ The involution $i_2,$ viewed on $\overline{\mathcal{D}}_m,$ maps an affine point $(x,y)=f^{-1}(P)$ to $(x,-y)=f^{-1}(S-P+2R).$ It has two affine fixed points which have $y$-coordinate equal to zero on $\overline{\mathcal{D}}_m,$ as well as two fixed points which are not affine on $\overline{\mathcal{D}}_m.$ These points must be poles of the map $y\circ f^{-1},$ that is, zeros of $g.$ Again, using Magma, we calculate the coordinates to obtain $S_1$ and $S_2.$ Since the poles of $x\circ f^{-1}$ are of order one, the poles of $g$ are of order two. The same is true for the poles of $y\circ f^{-1},$ that is, for the zeros of $g.$ The last row of identities in the statement of the proposition is checked by Magma.
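When $q$ happens to be a rational square, $\sqrt{q}\in\mathbb{Q}$ and the coordinates above, together with the identity $2R_1=S$, can also be verified with exact rational arithmetic. A sketch (our addition) with the hypothetical sample values $q=4$, $x_1=3$, $y_1=5$, so $m=105$:

```python
from fractions import Fraction as F

q, x1, y1 = F(4), F(3), F(5)      # q = 4 is a rational square, s = sqrt(q) = 2
s = F(2)
m = (x1**2 - q) * (y1**2 - q)      # m = 105
A, B = 4*q**2 - 2*m, m**2          # E_m : W^2 = T^3 + A*T^2 + B*T

def on_curve(P):
    T, W = P
    return W**2 == T**3 + A*T**2 + B*T

# Coordinates of S1 and R1 from the proposition
S1 = ((y1**2 - q) * (x1 - s)**2,  2*y1*s*(y1**2 - q)*(x1 - s)**2)
R1 = ((x1**2 - q) * (y1 + s)**2,  2*x1*s*(x1**2 - q)*(y1 + s)**2)
assert on_curve(S1) and on_curve(R1)

def double(P):
    """Duplication on E_m (requires W != 0)."""
    T, W = P
    lam = (3*T**2 + 2*A*T + B) / (2*W)
    T2 = lam**2 - A - 2*T
    return T2, lam*(T - T2) - W

# 2*R1 = S = f(-x1, y1), with S as given in the introduction
S = (y1**2 * (x1**2 - q)**2 / x1**2,
     q*y1*(x1**2 + y1**2)*(x1**2 - q)**2 / x1**3)
assert double(R1) == S and on_curve(S)
```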
\end{proof} \proposition \label{prop:4} There exists $h\in \mathbb{Q}(E_m)$ such that $g\circ[2]=h^2.$ \proof Let $\tilde{h}\in {\overline{\Q}}(E_m)$ be such that \begin{align*} {\textrm{div}\:} \tilde{h}&=[2]^{\ast}((S_1)+(S_2)-(R_1)-(R_2))\\ &=\sum\limits_{T\in E_m[2]} (S'_1+T)+\sum\limits_{T\in E_m[2]} (S'_2+T)-\sum\limits_{T\in E_m[2]} (R'_1+T)-\sum\limits_{T\in E_m[2]} (R'_2+T), \end{align*} where $2S'_i=S_i,2R'_i=R_i$ and $[2]^{\ast}$ is the pullback of the doubling map on $E_m.$ Such a $\tilde{h}$ exists by Corollary 3.5 in Silverman \cite[III.3]{silverman2009arithmetic}, which states that if $E$ is an elliptic curve and $D=\sum n_P(P)\in \text{Div}(E),$ then $D$ is principal if and only if $$\sum_{P\in E} n_P=0 \text{ \hspace{5pt}and } \sum_{P\in E} [n_P]P=0,$$ where the second sum is addition on $E.$ The first sum being equal to zero is immediate, and for the second one we have \[ \sum\limits_{T\in E_m[2]} (S'_1+T)+\sum\limits_{T\in E_m[2]} (S'_2+T)-\sum\limits_{T\in E_m[2]} (R'_1+T)-\sum\limits_{T\in E_m[2]} (R'_2+T)= \] \[ =[4](S'_1+S'_2-R'_1-R'_2)=[2](S_1+S_2-R_1-R_2)= \] \[ =[2](S_1-R_2+S_2-R_1)\stackrel{(\ast)}{=}[2](R+R)=\mathcal{O}, \] where $(\ast)$ follows from the last row of identities in Proposition \ref{prop:3}. An easy calculation gives ${\textrm{div}\:} g\circ [2]={\textrm{div}\:} \tilde{h}^2$, which implies $C\tilde{h}^2=g\circ [2]$ for some $C\in {\overline{\Q}}.$ Let $h:=\tilde{h}\sqrt{C}\in {\overline{\Q}}(E_m)$, so that $h^2=g\circ [2].$ We will prove that $h\in \mathbb{Q}(E_m).$ First, we show that every $\sigma\in \textrm{Gal}(\Qbar/\Q)$ permutes the zeros and poles of $\tilde{h}$.
Let us check what $\sigma$ does to $S_1$ and $S_2.$ Since $S_1$ and $S_2$ are conjugate over $\mathbb{Q}$, the only possibilities for $S_1^\sigma$ are $S_1$ or $S_2.$ If $S_1^\sigma=S_1,$ then we must have $(S'_1)^\sigma=S'_1+T,$ where $T\in E_m[2],$ because $2((S'_1)^\sigma-S'_1)=(2S'_1)^\sigma-2S'_1=S_1^\sigma-S_1=\mathcal{O}.$ Thus $\sigma$ fixes $\sum\limits_{T\in E_m[2]} (S'_1+T)$. Since in this case we also know that $S_2^\sigma=S_2,$ we get that $\sigma$ fixes $\sum\limits_{T\in E_m[2]} (S'_2+T)$ as well. If $S_1^\sigma=S_2$ it is easy to see that $$\left(\sum\limits_{T\in E_m[2]} (S'_1+T)\right)^\sigma=\sum\limits_{T\in E_m[2]} (S'_2+T) \text{ and } \left(\sum\limits_{T\in E_m[2]} (S'_2+T)\right)^\sigma=\sum\limits_{T\in E_m[2]} (S'_1+T).$$ Similar statements hold for $R_1$ and $R_2,$ so we conclude that ${\textrm{div}\:} \tilde{h}$ is defined over $\mathbb{Q}.$ Since $h$ and $\tilde{h}$ have the same divisor, ${\textrm{div}\:} h$ is defined over $\mathbb{Q}$ as well. Now we use the second statement of Theorem 7.8.3 in \cite{galbraith2012mathematics}: \begin{theorem} Let $C$ be a curve over a perfect field $k$ and let $f\in \overline{k}(C).$ \begin{enumerate} \item If $\sigma(f)=f$ for each $\sigma \in \textrm{Gal}(\overline{k}/k)$, then $f\in k(C).$ \item If ${\textrm{div}\:}(f)$ is defined over $k$, then $f=ch$ for some $c\in \overline{k}$ and $h\in k(C).$ \end{enumerate} \end{theorem} From the second statement of the previous theorem we conclude that $h=c\cdot h'$ where $c\in {\overline{\Q}}$ and $h'\in \mathbb{Q}(E_m).$ We know that $c^2(h')^2=h^2=g\circ [2]$, and that $g\circ [2] (\mathcal{O})=(x_1^2-q)^2$ is a rational square. It follows that $\displaystyle c^2=\frac{(x_1^2-q)^2}{h'(\mathcal{O})^2}$ is a rational square as well, hence $c$ is rational. Finally, we have $h\in \mathbb{Q}(E_m).$ \endproof We end this section with a theorem which will handle rationality issues in Theorem \ref{thm:1}.
\begin{theorem} \label{thm:6} For all $P,Q\in E_m(\mathbb{Q})$ we have $g(P+Q)\equiv g(P)g(Q) \mod (\mathbb{Q}^*)^2.$\\ In particular, if $P\equiv Q \mod 2E_m(\mathbb{Q}) $ then $g(P)\equiv g(Q) \mod (\mathbb{Q}^*)^2.$ \end{theorem} \proof Let $P',Q'\in E_m({\overline{\Q}})$ such that $2P'=P$ and $2Q'=Q.$ We prove that $$\frac{\sigma(h(P'+Q'))}{h(P'+Q')}=\frac{\sigma(h(P'))}{h(P')}\frac{\sigma(h(Q'))}{h(Q')}.$$ Following Silverman \cite[III.8]{silverman2009arithmetic}, assume $T\in E_m[2].$ From Proposition \ref{prop:4} it follows that $\displaystyle h^2(X+T)=g\circ [2] (X+T)=g\circ [2] (X)=h^2(X),$ for every $X\in E_m.$ This means that $\frac{h(X+T)}{h(X)}\in\{\pm 1\}.$ The morphism $$ E_m \rightarrow \mathbb{P}^1, \qquad X\mapsto \frac{h(X+T)}{h(X)}$$ is not surjective, so by \cite[II.2.3]{silverman2009arithmetic} it must be constant. For $ \sigma \in \textrm{Gal}(\Qbar/\Q)$ we have $\sigma(P')-P' \in E_m[2],\sigma(Q')-Q' \in E_m[2]$ and $\sigma(P'+Q')-(P'+Q') \in E_m[2].$ This holds since $2P'=P\in E_m(\mathbb{Q})$ and $2Q'=Q\in E_m(\mathbb{Q}).$ Now we get $$\frac{\sigma(h(P'))}{h(P')}=\frac{h(\sigma(P'))}{h(P')}=\frac{h(P'+(\sigma(P')-P'))}{h(P')}=\frac{h(X+(\sigma(P')-P'))}{h(X)}.$$ Similarly $$\frac{\sigma(h(Q'))}{h(Q')}=\frac{h(X+(\sigma(Q')-Q'))}{h(X)},\quad \frac{\sigma(h(P'+Q'))}{h(P'+Q')}=\frac{h(X+(\sigma(P'+Q')-(P'+Q')))}{h(X)}.$$ Now \begin{align*} \frac{\sigma(h(P'+Q'))}{h(P'+Q')}&=\frac{h(X+(\sigma(P'+Q')-(P'+Q')))}{h(X)}\\ &=\frac{h(X+(\sigma(P'+Q')-(P'+Q')))}{h(X+\sigma(P')-P')}\frac{h(X+\sigma(P')-P')}{h(X)}\\ &=\frac{\sigma(h(Q'))}{h(Q')}\frac{\sigma(h(P'))}{h(P')} \end{align*} by plugging in $X=P'+Q'-\sigma(P')$ for the first $X$ and $X=P'$ for the second one. 
This leads to $$\frac{h(P'+Q')}{h(P')h(Q')}=\frac{\sigma(h(P'+Q'))}{\sigma(h(Q'))\sigma(h(P'))}=\sigma\left(\frac{h(P'+Q')}{h(P')h(Q')}\right)$$ for every $\sigma\in \textrm{Gal}(\Qbar/\Q).$ Now we conclude $$\frac{h(P'+Q')}{h(P')h(Q')}\in \mathbb{Q}\implies h^2(P'+Q')\equiv h^2(P')h^2(Q') \mod (\mathbb{Q}^*)^2.$$ Finally $$g(P+Q)=g\circ[2](P'+Q')= h^2(P'+Q')\equiv h^2(P')h^2(Q')=g(P)g(Q) \mod (\mathbb{Q}^*)^2.$$ The second statement of the theorem follows easily from the first. If $P=Q+2S_3,$ with $S_3\in E_m(\mathbb{Q}),$ then $$g(P)=g(Q+2S_3)\equiv g(Q)g(S_3)^2\equiv g(Q) \mod (\mathbb{Q}^*)^2. $$ \endproof Theorem \ref{thm:6} was more difficult to prove than the analogous statement in \cite[2.4.]{dujella2017more}. Their version of the function $g$ had a very simple factorization modulo $(\mathbb{Q}^{\ast})^2,$ allowing them to use the $2$-descent homomorphism. \section{Proofs of main theorems}\label{sec:3} The main difficulty in the following proof is the issue of rationality of the quadruple. As we have mentioned, Theorem \ref{thm:6} will deal with this. \emph{Proof of Theorem \ref{thm:1}:} From the assumptions on $(Q_1,Q_2,Q_3)$ we know that $(y_1^2-q)g(Q_1+Q_2+Q_3)$ is a square. We have \begin{align*} a^2&=\frac{g(Q_1)g(Q_2)g(Q_3)}{(x_1^2-q)^3m}=\frac{g(Q_1)g(Q_2)g(Q_3)(y_1^2-q)}{(x_1^2-q)^4(y_1^2-q)^2} \\ &\equiv g(Q_1+Q_2+Q_3)(y_1^2-q) \mod (\mathbb{Q}^*)^2. \end{align*} The equivalence is a direct application of Theorem \ref{thm:6}. This implies $a^2$ is a rational square, so $a$ is rational, which in turn implies $b,c$ and $d$ are rational numbers, as noted in the introduction. Since $abcd=m\neq 0,$ none of the numbers $ a,b,c,d$ are zero, and the non-degeneracy criteria of $(Q_1,Q_2,Q_3)$ ensure that $a,b,c,d$ are pairwise different. Lastly, $ab+q=(x\circ f^{-1}(Q_1))^2$ (with similar equalities holding for other pairs of the quadruple). The previous statements prove the quadruple $(a,b,c,d)$ is a rational $D(q)$-quadruple.
On the other hand, if $(a,b,c,d)$ is a rational $D(q)$-quadruple, then we can define the points $(Q_1,Q_2,Q_3)\in E_m(\mathbb{Q})^3$ corresponding to $(a,b,c,d).$ Using the same identities modulo $(\mathbb{Q}^*)^2$ as above, we get that $$(y_1^2-q)g(Q_1+Q_2+Q_3)\equiv a^2~~ \mod (\mathbb{Q}^*)^2. $$ $ \square$ To prove Theorem \ref{thm:2} we use the following lemma: \begin{lemma}\label{lemma:7} Let $(a,b,c,d)$ be a rational $D(q)$-quadruple such that $abcd=m.$ There exists a point $(x_0,y_0)\in \mathcal{D}_m(\mathbb{Q}),$ such that $x_0^2-q$ is a rational square. \end{lemma} \begin{proof} From Theorem \ref{thm:1} we know that $(y_1^2-q)g(Q_1+Q_2+Q_3)$ is a square, where $(Q_1,Q_2,Q_3)\in E_m(\mathbb{Q})^3$ is the triple that corresponds to the quadruple $(a,b,c,d)$. Let $Q=Q_1+Q_2+Q_3.$ We have \begin{align*} (y_1^2-q)g(Q)&=(y_1^2-q)(x_1^2-q)((x\circ f^{-1}(Q))^2-q)=m\cdot \left((x\circ f^{-1}(Q))^2-q\right)\\ &=m\cdot \frac{m}{(y\circ f^{-1}(Q))^2-q}=m^2\frac{1}{(y\circ f^{-1}(Q))^2-q}. \end{align*} Since the left-hand side is a square, we conclude $(y\circ f^{-1}(Q))^2-q$ is a square as well. Now define $(x_0,y_0):=f^{-1}(Q+R).$ We know that \[ (y\circ f^{-1}(Q))^2-q\stackrel{(\ref{ness})}{=}(x\circ f^{-1}(Q+R))^2-q=x_0^2-q \] so the claim follows. \end{proof} \emph{Proof of Theorem \ref{thm:2}:} Assume we have a rational $D(q)$-quadruple. By Lemma \ref{lemma:7}, there exists a point $(x_0,y_0)\in \mathcal{D}_m(\mathbb{Q})$ such that $x_0^2-q$ is a rational square.
Since $x_0^2-q=k^2,$ we have $q=x_0^2-k^2=(x_0-k)(x_0+k).$ Denote $u=x_0-k;$ then $x_0+k=q/u$ and, adding the previous two equalities to eliminate $k,$ we get $x_0=\frac{q+u^2}{2u}.$ Denoting $t=y_0$ we get $$m=(x_0^2-q)(y_0^2-q)=\left(\left(\frac{q+u^2}{2u}\right)^2-q\right)(t^2-q)=\left(\frac{q-u^2}{2u}\right)^2(t^2-q).$$ Now, let $m=\left(\frac{q-u^2}{2u}\right)^2(t^2-q)$ for some rational $(t,u).$ Denote $y_1=t, x_1=\frac{q+u^2}{2u}.$ It is easy to check that $(x_1^2-q)(y_1^2-q)=m,$ so there is a rational point $(x_1,y_1)\in\mathcal{D}_m(\mathbb{Q})$ such that $x_1^2-q=\left(\frac{u^2-q}{2u}\right)^2$ is a square. We use this point $(x_1,y_1)=:P_1$ to define the map $f:\mathcal{D}_m\to E_m.$ Let $Q_1=R+S, Q_2=2S$ and $Q_3=3S.$ The sets $G\cdot Q_i$ are disjoint and $g(Q_1+Q_2+Q_3)(y_1^2-q)=g(R+6S)(y_1^2-q)\equiv g(R)(y_1^2-q)=((x_1^2-q)(y_1^2-q))(y_1^2-q)$ mod $(\mathbb{Q}^*)^2$ is a rational square. The points $(Q_1,Q_2,Q_3)$ satisfy the conditions of Theorem \ref{thm:1}, giving us a rational $D(q)$-quadruple.$ \square$ \begin{remark}\label{rem:8} The condition $m=\left(\frac{q-u^2}{2u}\right)^2(t^2-q)$ is equivalent to the existence of a point $(x_0,y_0)\in\mathcal{D}_m(\mathbb{Q})$ such that $x_0^2-q$ is a square. This was proven in the preceding theorems. \end{remark} \section{Examples}\label{sec:4} There are plenty of examples where $m=(x_1^2-q)(y_1^2-q)$ for some rational $x_1$ and $y_1,$ such that there does not exist a rational $D(q)$-quadruple with product $m.$ Equivalently, $m$ cannot be written as $(x_0^2-q)(y_0^2-q)$ with $x_0^2-q$ a square. According to Theorem \ref{thm:1}, to find out whether there is a rational $D(q)$-quadruple with product $m,$ one needs to check whether there is a point $T'\in E_m(\mathbb{Q})$ such that $g(T')(y_1^2-q)$ is a square. Theorem \ref{thm:6} tells us that we only need to check the points $T \in E_m(\mathbb{Q})/2E_m(\mathbb{Q}),$ which is a finite set.
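The parametrization just derived is a purely algebraic identity, so it is easy to exercise with exact rational arithmetic; the values of $q$, $u$, $t$ below are illustrative choices, not taken from the paper:

```python
from fractions import Fraction

# Illustrative rational choices (hypothetical values; any nonzero u works).
q, u, t = Fraction(3), Fraction(5, 2), Fraction(7)

# The point constructed in the proof of Theorem 2.
x1 = (q + u**2) / (2 * u)
y1 = t
m = (x1**2 - q) * (y1**2 - q)

# (x1, y1) lies on D_m : (x^2 - q)(y^2 - q) = m by construction,
# and x1^2 - q is the square of a rational, namely ((u^2 - q)/(2u))^2.
assert m == ((q - u**2) / (2 * u))**2 * (t**2 - q)
assert x1**2 - q == ((u**2 - q) / (2 * u))**2
```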
If for some explicit $q,m$ we know the generators of the group $E_m(\mathbb{Q}) / 2E_m(\mathbb{Q}),$ we can determine whether there exist rational $D(q)$-quadruples with product $m,$ and parametrize them using points on $E_m(\mathbb{Q}).$ For such computations we used Magma. Let $q=3, x_1=5$ and $y_1=7,$ making $m=(5^2-3)(7^2-3)=1012.$ The rank of $E_m$ is two and $E_m$ has a torsion point of order four, giving us eight points in total to check. None of the points $T \in E_m(\mathbb{Q})/2E_m(\mathbb{Q})$ satisfies that $g(T)(y_1^2-q)$ is a square, so there are no $D(3)$-quadruples with product $1012.$ On the other hand, take $q=-3, x_1=1$ so that $x_1^2-q=4$ and let $y_1=t,$ which makes $m=4\cdot(t^2+3).$ The point $S$ is a point of infinite order on $E_m(\mathbb{Q}(t)),$ and the triple $(Q_1,Q_2,Q_3)=(S+R,2S,3S)$ satisfies the conditions of Theorem \ref{thm:1}. We obtain the following family:\\ \begin{align*} a&=\frac{2\cdot (3 + 6 t^2 + 7 t^4)\cdot (27+162t^2+801t^4+1548t^6+1069t^8+306t^{10}+183t^{12})}{(3 + t^2)\cdot (1 + 3 t^2)\cdot (9+9t^2+19t^4+27t^6)\cdot(3+27t^2+33t^4+t^6)}, \\ b&=\frac{(3 + t^2)^2\cdot (1 + 3 t^2)\cdot (9+9t^2+19t^4+27t^6)\cdot(3+27t^2+33t^4+t^6)}{2\cdot (3 + 6 t^2 + 7 t^4)\cdot (27+162t^2+801t^4+1548t^6+1069t^8+306t^{10}+183t^{12})},\\ c&=\frac{2\cdot (3 + 6 t^2 + 7 t^4)\cdot(3 + 27 t^2 + 33 t^4 + t^6)\cdot (9 + 9 t^2 + 19 t^4 + 27 t^6)}{(3 + t^2)\cdot (1 + 3 t^2)\cdot(27 + 162 t^2 + 801 t^4 + 1548 t^6 + 1069 t^8 + 306 t^{10} + 183 t^{12})},\\ d&=\frac{2\cdot(3 + t^2)\cdot (1 + 3 t^2)\cdot(27 + 162 t^2 + 801 t^4 + 1548 t^6 + 1069 t^8 + 306 t^{10} + 183 t^{12})}{(3 + 6 t^2 + 7 t^4)\cdot(3 + 27 t^2 + 33 t^4 + t^6)\cdot (9 + 9 t^2 + 19 t^4 + 27 t^6)}.
\end{align*} We can generalize the example above by keeping $q$ arbitrary and setting $y_1=t, x_1=\frac{q+u^2}{2u}.$ The triple of points $(S+R,2S,3S)$ satisfies the conditions of Theorem \ref{thm:1} and we can calculate an explicit family of rational $D(q)$-quadruples with product $m,$ but it is too large to print here (the numerator of $a$ is a polynomial in the variables $(q,t,u)$ of degree forty). All the computations in this paper were done in Magma \cite{bosma1997magma}. \end{document}
\begin{document} \newtheorem{thm}{Theorem} \newtheorem{cor}{Corollary} \newproof{pf}{Proof} \begin{frontmatter} \title{Two stage design for estimating the product of means with cost in the case of the exponential family} \author[1]{Zohra BENKAMRA} \author[2]{Mekki TERBECHE} \author[1]{Mounir TLEMCANI\corref{*}} \address[1]{Department of Mathematics, University of Oran, Algeria} \address[2]{Department of Physics, L.A.A.R University Mohamed Boudiaf, Oran, Algeria} \cortext[*]{Corresponding author : [email protected] (M.Tlemcani)} \begin{abstract} We investigate the problem of estimating the product of means of independent populations from the one parameter exponential family in a Bayesian framework. We give a random design which allocates the number $m_{i}$ of observations from population $P_{i}$ such that the Bayes risk associated with squared error loss and cost per unit observation is as small as possible. The design is shown to be asymptotically optimal. \end{abstract} \begin{keyword} Two stage design, product of means, exponential family, Bayes risk, cost, asymptotic optimality. \end{keyword} \end{frontmatter} \section{Introduction} Assume that for $i=1,...,n$, a random variable $X_{i}$ whose distribution belongs to the one parameter exponential family is observable from population $P_{i}$ with cost $c_{i}$ per unit observation. The problem of estimating several means in the case of exponential family distributions with a linear combination of losses was addressed by \cite{cohen}. The problem of interest in this paper is to estimate the product of means using a Bayesian approach associated with squared error loss and cost.
Since a Bayesian framework is considered (see, e.g., \cite{page,shapiro}), optimal estimators are typically Bayes estimators, and the problem turns into designing a sequential allocation scheme (see, e.g., \cite{woodroofe}) to select the number $m_{i}$ of observations from population $P_{i}$ such that the Bayes risk plus the corresponding budget $B=\sum_{i=1}^{n}c_{i}m_{i}$ is as small as possible. \cite{terbeche aas} defined a sequential design to estimate the difference between means of two populations from the exponential family with associated cost. The random allocation was shown to perform best on the basis of numerical considerations; see, e.g., \cite{terbeche phd}. Similarly, the problem of estimating the product of several means of independent populations, subject to the constraint of a fixed total number of observations $M$, was addressed by \cite{rekab} using a two stage approach. The allocation of $m_{i}$ was nonrandom and first order optimality was shown for large $M$. Suppose that $X_{i}$ has a distribution of the form \[ f_{\theta _{i}}(x_{i})\varpropto e^{\theta _{i}x_{i}-\psi (\theta _{i})},~x_{i}\in \mathbb{R} ,~\theta _{i}\in \Omega \] where $\Omega $ is a bounded open interval in $\mathbb{R}$. It follows that $E_{\theta _{i}}\left[ x_{i}\right] =\psi ^{\prime }\left( \theta _{i}\right) $ and $Var_{\theta _{i}}\left[ x_{i}\right] =\psi ^{\prime \prime }\left( \theta _{i}\right)$. One assumes that the prior distribution for each $\theta _{i}$ is given by \[ \pi _{i}\left( \theta _{i}\right)\varpropto e^{r_{i}\left( \mu _{i}\theta _{i}-\psi \left( \theta _{i}\right)\right) } \] where $r_{i}$ and $\mu _{i}$ are reals and $r_{i}>0$, $i=1,...,n$. Here we treat $\theta _{i}$ as a realization of a random variable and assume that for each population, $x_{i1},...,x_{im_{i}}$ are conditionally independent and that $\theta _{1},...,\theta _{n}$ are a priori independent.
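As a concrete instance of this setup (an illustration with hypothetical numbers, not part of the paper's argument), consider Bernoulli populations: here $\psi(\theta)=\log(1+e^{\theta})$, so $\psi'(\theta)=p$ is the success probability, and the prior above corresponds, after the change of variables $\theta=\log(p/(1-p))$, to a Beta$(r\mu,\,r(1-\mu))$ law on $p$. The posterior mean of $p$ then has exactly the form $(\mu r+\sum_{j}x_{j})/(m+r)$ quoted in the next section:

```python
from fractions import Fraction

# Hypothetical prior hyperparameters and data for one Bernoulli population.
r, mu = Fraction(4), Fraction(1, 2)      # prior Beta(r*mu, r*(1-mu)) on p
xs = [1, 0, 1, 1, 0, 1]                  # observed x_{i1},...,x_{im}
m, k = len(xs), sum(xs)

# Beta(alpha, beta) posterior after m Bernoulli trials with k successes.
alpha, beta = r * mu + k, r * (1 - mu) + m - k
posterior_mean = alpha / (alpha + beta)

# This agrees with the general conjugate-update formula (mu*r + sum x)/(m + r).
assert posterior_mean == (mu * r + k) / (m + r)
```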
Our aim is to estimate the product $\theta =\prod_{i=1}^{n}\psi ^{\prime }\left( \theta _{i}\right)$, subject to squared error loss and linear cost. \section{The Bayes risk} Let $\mathcal{F}_{m_{1},...,m_{n}}$ be the $\sigma $-field generated by $\left( X_{1},...,X_{n}\right) $ where $X_{i}=\left( x_{i1},...,x_{im_{i}}\right) $ and let $\mathcal{F}_{m_{i}}=\sigma \left( X_{i}\right) =\sigma \left( x_{i1},...,x_{im_{i}}\right) $. It was shown that \begin{eqnarray} E\left[ \psi ^{\prime }\left( \theta _{i}\right) /\mathcal{F}_{m_{i}} \right] &=&\frac{\mu_{i}r_{i}+\sum_{j=1}^{m_{i}}x_{ij}}{m_{i}+r_{i}} \label{th 2.2.1-1} \\ Var\left[ \psi ^{\prime }\left( \theta _{i}\right) /\mathcal{F}_{m_{i}} \right] &=&E\left[ \frac{\psi ^{\prime \prime }\left( \theta _{i}\right) }{ m_{i}+r_{i}}/\mathcal{F}_{m_{i}}\right], \label{th 2.2.1-2} \end{eqnarray} (see \cite{terbeche sort}). Using independence across populations, the Bayes estimator of $\theta $ is \[ \hat{\theta}=E\left[ \theta /\mathcal{F}_{m_{1},...,m_{n}}\right] =\prod\limits_{i=1}^{n}E\left[ \psi ^{\prime }\left( \theta _{i}\right) / \mathcal{F}_{m_{i}}\right] \] Assume that there exists $p\geq 1$ such that \begin{equation} E\left[ \left( \psi ^{\prime \prime }\left( \theta _{i}\right) \right) ^{p} \right] <+\infty ~ \textrm{and} ~ E\left[ \left( \psi ^{\prime }\left( \theta _{i}\right) \right) ^{2p}\right] <+\infty, \label{cond} \end{equation} for all $i=1,...,n$; then the corresponding Bayes risk associated with quadratic loss and cost can be written as follows, \begin{equation} \label{r} R\left( m_{1},...,m_{n}\right) =E\left[ \sum\limits_{i=1}^{n}\frac{U_{im_{i}} }{m_{i}+r_{i}}+\sum\limits_{i=1}^{n}c_{i}m_{i}\right] +\sum \limits_{i=1}^{n}o\left( \frac{1}{m_{i}}\right) \end{equation} and consequently it can be approximated for large samples by \begin{equation} \label{rtilde} \tilde{R}\left( P\right) =\tilde{R}\left( m_{1},...,m_{n}\right) =E\left[ \sum\limits_{i=1}^{n}\frac{U_{im_{i}}}{m_{i}+r_{i}}+\sum
\limits_{i=1}^{n}c_{i}m_{i}\right] \end{equation} where $U_{im_{i}}=E\left[ V_{i}/\mathcal{F}_{m_{1},...,m_{n}}\right] $ and $V_{i}=\psi ^{\prime \prime }\left( \theta _{i}\right) \prod_{j\neq i}\psi ^{\prime 2}\left( \theta _{j}\right)$. \section{Lower bound for the scaled Bayes risk} From now on, the notation $c\rightarrow 0$ means that $c_{j}\rightarrow 0$, for all $j=1,...,n$. Assume that for all $i$, \begin{equation} \label{ci} \frac{c_{i}}{\sum\limits_{j=1}^{n}c_{j}}\rightarrow \lambda _{i}\in \left] 0,1\right[ ,\ as \ c\rightarrow 0. \end{equation} \begin{thm} \label{th 233}For any random design $P$ satisfying \begin{equation} \label{ai} m_{i}\sqrt{c_{i}}\rightarrow a_{i}\neq 0~,~a.s., \ as\ c\rightarrow 0, \end{equation} we have \begin{equation} \label{res} \liminf_{c\rightarrow 0}\frac{R(P)}{\sqrt{ \sum\limits_{j=1}^{n}c_{j}}}\geq 2E \left[ \sum\limits_{i=1}^{n}\sqrt{\lambda _{i}}\sqrt{V_{i}}\right] \end{equation} \end{thm} \begin{pf} Expressions (\ref{r}) and (\ref{rtilde}), with the help of (\ref{ci}) and (\ref{ai}), give \begin{equation} \label{rrtilde} \liminf_{c\rightarrow 0 }\frac{R(P)}{\sqrt{ \sum\limits_{j=1}^{n}c_{j}}}=\liminf_{c\rightarrow 0}\frac{\tilde{R}(P)}{\sqrt{\sum\limits_{j=1}^{n}c_{j}}} \end{equation} and the scaled approximated Bayes risk satisfies the following inequality: \[ \frac{\tilde{R}(P)}{\sqrt{\sum\limits_{j=1}^{n}c_{j}}}\geq 2E\left[ \sum\limits_{i=1}^{n}\sqrt{\frac{c_{i}}{\sum\limits_{j=1}^{n}c_{j}}}\sqrt{ U_{im_{i}}}\right] -\sum\limits_{i=1}^{n}\sqrt{\frac{c_{i}}{ \sum\limits_{j=1}^{n}c_{j}}}\sqrt{c_{i}}r_{i}, \] since for all $i$, \begin{eqnarray*} \frac{U_{im_{i}}}{m_{i}+r_{i}}+c_{i}m_{i} &=&\left( \frac{\sqrt{U_{im_{i}}}}{\sqrt{m_{i}+r_{i}}}-\sqrt{c_{i}}\sqrt{ m_{i}+r_{i}}\right) ^{2}+2\sqrt{c_{i}}\sqrt{U_{im_{i}}}-c_{i}r_{i}\\ &\geq &2\sqrt{c_{i}}\sqrt{U_{im_{i}}}-c_{i}r_{i}.
\end{eqnarray*} Finally, Fatou's lemma and condition (\ref{ci}) give \[ \liminf_{c\rightarrow 0}\frac{\tilde{R}(P)}{ \sqrt{\sum\limits_{j=1}^{n}c_{j}}}\geq 2E\left[ \liminf_{c\rightarrow 0}\sum\limits_{i=1}^{n}\sqrt{\frac{c_{i}}{ \sum\limits_{j=1}^{n}c_{j}}}\sqrt{U_{im_{i}}}\right] =2E\left[ \sum\limits_{i=1}^{n}\sqrt{\lambda _{i}}\sqrt{V_{i}}\right] \] and the proof follows. \end{pf} \section{First order optimal design} According to condition (\ref{ai}) and identity (\ref{rrtilde}), a first order optimal design must satisfy \begin{equation} \label{conver} \frac{\tilde{R}(P)}{\sqrt{\sum\limits_{j=1}^{n}c_{j}}}-2E\left[ \sum\limits_{i=1}^{n}\sqrt{\lambda _{i}}\sqrt{V_{i}}\right] \rightarrow 0, \ as\ c\rightarrow 0. \end{equation} It should be pointed out that condition (\ref{conver}) is actually similar to the first order efficiency property for A.P.O. rules in Bayes sequential estimation (see, e.g., \cite{leng,hwang}, for one-parameter exponential families), which involves a sequential allocation procedure and a stopping time. In our approach, condition (\ref{conver}) is handled by the following expansion: \begin{eqnarray*} \frac{\tilde{R}(P)}{\sqrt{\sum\limits_{j=1}^{n}c_{j}}} &=& \frac{2E\left[ \sum\limits_{i=1}^{n}\sqrt{c_{i}}\sqrt{U_{im_{i}}}\right] }{ \sqrt{\sum\limits_{j=1}^{n}c_{j}}} +\frac{E\left[ \sum\limits_{i=1}^{n}\frac{\left( \sqrt{U_{im_{i}}}-\left( m_{i}+r_{i}\right) \sqrt{c_{i}}\right) ^{2}}{m_{i}+r_{i}}\right] }{\sqrt{ \sum\limits_{j=1}^{n}c_{j}}} \\ &-&\sum\limits_{i=1}^{n}\sqrt{c_{i}}\sqrt{\frac{ c_{i}}{\sum\limits_{j=1}^{n}c_{j}}}r_{i} \end{eqnarray*} The last term goes to zero as $c\rightarrow 0$, thanks to condition (\ref{ci}).
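The expansion above rests on a one-dimensional fact: for fixed $u,c,r>0$, the map $m\mapsto u/(m+r)+cm$ is minimized at $m+r=\sqrt{u/c}$, with minimum value $2\sqrt{cu}-cr$; this is precisely the allocation rule used in the two stage procedure below. A quick numerical sketch (the values of $u$, $c$, $r$ are illustrative, not from the paper):

```python
import math

def scaled_risk(u, c, r, m):
    """Per-population contribution u/(m+r) + c*m from the expansion above."""
    return u / (m + r) + c * m

# Illustrative values (hypothetical, not from the paper).
u, c, r = 2.5, 1e-4, 3.0

# Continuous minimizer: m + r = sqrt(u/c), i.e. the stage-two style rule.
m_star = round(math.sqrt(u / c) - r)

# Brute-force check over integers: m_star is (up to rounding) the best m,
# and every m obeys the lower bound u/(m+r) + c*m >= 2*sqrt(c*u) - c*r.
best = min(range(2000), key=lambda m: scaled_risk(u, c, r, m))
assert abs(best - m_star) <= 1
bound = 2 * math.sqrt(c * u) - c * r
assert all(scaled_risk(u, c, r, m) >= bound - 1e-12 for m in range(2000))
```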
Hence, sufficient conditions for a design to satisfy (\ref{conver}) are \begin{eqnarray} E\left[ \sum\limits_{i=1}^{n}\sqrt{\frac{c_{i}}{\sum\limits_{j=1}^{n}c_{j}}} \sqrt{U_{im_{i}}}\right] -E\left[ \sum\limits_{i=1}^{n}\sqrt{\lambda _{i}} \sqrt{V_{i}}\right] &\rightarrow &0 \label{cs1} \\ E\left[ \frac{\left( \sqrt{U_{im_{i}}}-\left( m_{i}+r_{i}\right) \sqrt{c_{i}} \right) ^{2}}{\left( m_{i}+r_{i}\right) \sqrt{\sum\limits_{j=1}^{n}c_{j}}} \right] &\rightarrow &0,~\forall i \label{cs2} \end{eqnarray} as $c\rightarrow 0$. \begin{thm} \label{th 241}Let $P$ be a random policy satisfying $m_{i}\rightarrow +\infty ,~a.s.$, and suppose that condition (\ref{cond}) holds; then \[ E\left[ \sum\limits_{i=1}^{n}\sqrt{\frac{c_{i}}{\sum\limits_{j=1}^{n}c_{j}}} \sqrt{U_{im_{i}}}\right] -E\left[ \sum\limits_{i=1}^{n}\sqrt{\lambda _{i}} \sqrt{V_{i}}\right] \rightarrow 0, \ as\ c\rightarrow 0. \] \end{thm} \begin{pf} Note that \begin{equation} \lim_{m_{1},...,m_{n}\rightarrow +\infty }\sqrt{U_{im_{i}}}=\sqrt{V_{i}} ,~a.s. \label{conv} \end{equation} Now \begin{eqnarray*} \sup_{m_{1},...,m_{n}}E\left[ \left( \sqrt{U_{im_{i}}}\right) ^{2}\right] &=&\sup_{m_{1},...,m_{n}}E\left[ U_{im_{i}}\right] \\ &=&E\left[ \psi ^{^{\prime \prime }}\left( \theta _{i}\right) \prod\limits_{j\neq i}\psi ^{\prime ^{2}}\left( \theta _{j}\right) \right] \\ &=&E\left[ \psi ^{^{\prime \prime }}\left( \theta _{i}\right) \right] \prod\limits_{j\neq i}E\left[ \psi ^{\prime ^{2}}\left( \theta _{j}\right) \right] <+\infty; \end{eqnarray*} hence, the uniform integrability of $\sqrt{U_{im_{i}}}$ follows from condition (\ref{cond}) and martingale properties. Therefore, the convergence in (\ref{conv}) holds in $L^{1}$ and consequently \[ \sqrt{\frac{c_{i}}{\sum\limits_{j=1}^{n}c_{j}}}\sqrt{U_{im_{i}}}\rightarrow \sqrt{\lambda _{i}}\sqrt{V_{i}} ~ \textrm{in} ~ L^{1}, \ as\ c\rightarrow 0, \] which completes the proof.
\end{pf} \section{The two stage procedure} Following the previous section, our strategy now is to satisfy condition (\ref{cs2}). We define the two stage sequential scheme as follows. \begin{description} \item[Stage one] proceed for $k_{i}$ observations from population $P_{i}$ for $i=1,...,n$; such that $k_{i}\sqrt{c_{i}}\rightarrow 0$ and $ k_{i}\rightarrow +\infty $ as $c_{i}\rightarrow 0$. \item[Stage two] for $i=1,...,n$; select the integer $m_{i}$ as follows: \[ m_{i}=\max \left\{ k_{i},\left[ \frac{\sqrt{U_{ik_{i}}}}{\sqrt{c_{i}}}-r_{i}\right]\right\} \] where $\left[ x\right]$ denotes the integer part of $x$ and \[ U_{ik_{i}}=E\left[ \psi ^{^{\prime \prime }}\left( \theta _{i}\right) \prod\limits_{j\neq i}\psi ^{\prime ^{2}}\left( \theta _{j}\right) /\mathcal{ F}_{k_{1},...,k_{n}}\right] \] \end{description} We now give the main result of the paper. \begin{thm} \label{th 251}Assume condition (\ref{cond}) is satisfied for some $p\geq 1$; then the two stage design is first order optimal. \end{thm} \begin{pf} The $m_{i}$, as defined by the two stage procedure, satisfies \[ \lim_{c_{i}\rightarrow 0}\left( m_{i}+r_{i}\right) \sqrt{c_{i}}=\sqrt{V_{i}} \] and since \[ \sqrt{\sum\limits_{j=1}^{n}c_{j}}\left( m_{i}+r_{i}\right) =\frac{\sqrt{c_{i} }\left( m_{i}+r_{i}\right) }{\sqrt{\frac{c_{i}}{\sum\limits_{j=1}^{n}c_{j}}}} \rightarrow \sqrt{\frac{V_{i}}{\lambda _{i}}}, \ as\ c\rightarrow 0, \] then \begin{equation} \frac{\left( \sqrt{U_{im_{i}}}-\left( m_{i}+r_{i}\right) \sqrt{c_{i}}\right) ^{2}}{\sqrt{\sum\limits_{j=1}^{n}c_{j}}\left( m_{i}+r_{i}\right) } \rightarrow 0,~a.s.,\ as\ c\rightarrow 0. \label{ae} \end{equation} To show the convergence in $L^{1}$, it will be sufficient to show the uniform integrability of the left hand side of (\ref{ae}).
So, observe that \begin{eqnarray*} \frac{\left( \sqrt{U_{im_{i}}}-\left( m_{i}+r_{i}\right) \sqrt{c_{i}}\right) ^{2}}{\sqrt{\sum\limits_{j=1}^{n}c_{j}}\left( m_{i}+r_{i}\right) } &\leq & \frac{U_{im_{i}}+\left( m_{i}+r_{i}\right) ^{2}c_{i}}{\sqrt{ \sum\limits_{j=1}^{n}c_{j}}\left( m_{i}+r_{i}\right) } \\ &\leq &\frac{U_{im_{i}}}{\sqrt{U_{ik_{i}}}}+\sqrt{U_{ik_{i}}}. \end{eqnarray*} The sequence $\sqrt{U_{ik_{i}}}$ is uniformly integrable, as a result of martingale $L^{p}$ convergence properties with $p=2$. Now, remark that \[ \frac{U_{im_{i}}}{\sqrt{U_{ik_{i}}}}\leq \max_{k^{\prime }}\sqrt{ U_{ik^{\prime }}} \] and for the remainder of the proof, we use Doob's inequality to show that $E\left[ \max_{k^{\prime }}\sqrt{U_{ik^{\prime }}}\right] <+\infty$. We have \[ E\left[ \max_{k^{\prime }}\left( \sqrt{U_{ik^{\prime }}}\right) ^{2p}\right] \leq \left( \frac{2p}{2p-1}\right) ^{2p}E\left[ \left( \sqrt{V_{i}}\right) ^{2p}\right] <+\infty; \] hence, since $p\geq 1,$ $\max_{k^{\prime }}\sqrt{U_{ik^{\prime }}}$ is integrable and the proof follows. \end{pf} \section{Conclusion} The proof of first order asymptotic optimality for the two stage design has been obtained mainly through an adequate scaling of the approximated Bayes risk associated with squared error loss and cost, a lower bound for the scaled Bayes risk, martingale properties and Doob's inequality. \section*{References} \end{document}
\begin{document} \title{On some conjectures concerning Stern's sequence and its twist} \author{Michael Coons} \address{University of Waterloo, Dept.~of Pure Math., Waterloo, ON, N2L 3G1, Canada} \email{[email protected]} \thanks{Research supported by a Fields--Ontario Fellowship.} \subjclass[2010]{Primary 11B37; Secondary 11B83} \keywords{Stern sequence, functional equations, binary expansion} \date{\today} \begin{abstract} In a recent paper, Roland Bacher conjectured three identities concerning Stern's sequence and its twist. In this paper we prove Bacher's conjectures. Possibly of independent interest, we also give a way to compute the Stern value (or twisted Stern value) of a number based solely on its binary expansion. \end{abstract} \maketitle \section{Introduction} We define the {\em Stern sequence} (also known as {\em Stern's diatomic sequence}) $\{s(n)\}_{n\geqslant 0}$ by $s(0)=0$, $s(1)=1$, and for all $n\geqslant 1$ by $$s(2n)=s(n),\qquad s(2n+1)=s(n)+s(n+1);$$ this is sequence A002487 in Sloane's list. We denote by $S(z)$ the generating function of the Stern sequence; that is, $$S(z):=\sum_{n\geqslant 0} s(n)z^n.$$ Stern's sequence has been well studied and has many interesting properties (see e.g., \cite{Dil1, Dil2, Leh1, Lin1} for details). One of the most interesting properties is that the sequence $\{s(n+1)/s(n)\}_{n\geqslant 1}$ is an enumeration of the positive reduced rationals without repeats. Similarly Bacher \cite{B} introduced the {\em twisted Stern sequence} $\{t(n)\}_{n\geqslant 0}$ given by the recurrences $t(0)=0$, $t(1)=1$, and for $n\geqslant 1$ by $$t(2n)=-t(n),\qquad t(2n+1)=-t(n)-t(n+1).$$ We denote by $T(z)$ the generating function of the twisted Stern sequence; that is, $$T(z):=\sum_{n\geqslant 0} t(n)z^n.$$ Towards describing the relationship between the Stern sequence and its twist, Bacher \cite{B} gave many results, and two conjectures. 
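Both recurrences are straightforward to tabulate; the short Python sketch below (illustrative only, not part of the paper) checks the initial values of the two sequences and the no-repeats property of $s(n+1)/s(n)$:

```python
from fractions import Fraction

N = 512
s = [0, 1] + [0] * (2 * N)   # Stern's diatomic sequence
t = [0, 1] + [0] * (2 * N)   # Bacher's twisted Stern sequence
for n in range(1, N):
    s[2 * n] = s[n]
    s[2 * n + 1] = s[n] + s[n + 1]
    t[2 * n] = -t[n]
    t[2 * n + 1] = -t[n] - t[n + 1]

assert s[:9] == [0, 1, 1, 2, 1, 3, 2, 3, 1]
assert t[:9] == [0, 1, -1, 0, 1, 1, 0, -1, -1]

# s(n+1)/s(n) enumerates the positive reduced rationals without repeats:
ratios = [Fraction(s[n + 1], s[n]) for n in range(1, N)]
assert len(set(ratios)) == len(ratios)
```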
As the main theorems of this article, we prove these conjectures, so we will state them as theorems (note that we have modified some of the notation). \begin{theorem}\label{Bacherconj1} There exists an integral sequence $\{u(n)\}_{n\geqslant 0}$ such that for all $e\geqslant 0$ we have $$\sum_{n\geqslant 0} t(3\cdot 2^e+n)z^n=(-1)^eS(z)\sum_{n\geqslant 0}u(n)z^{n\cdot 2^e}.$$ \end{theorem} Note that in this theorem (as in the original conjecture), it is implicit that the sequence $\{u(n)\}_{n\geqslant 0}$ is defined by the relationship $$U(z):=\sum_{n\geqslant 0}u(n)z^n =\frac{\sum_{n\geqslant 0}t(3+n)z^n}{S(z)}.$$ \begin{theorem}\label{Bacherconj2} (i) The series $$G(z):=\frac{\sum_{n\geqslant 0}(s(2+n)-s(1+n))z^n}{S(z)}$$ satisfies $$\sum_{n\geqslant 0}(s(2^{e+1}+n)-s(2^e+n))z^n=G(z^{2^e})S(z)$$ for all $e\in\mathbb{N}$. Similarly, (ii) the series $$H(z):=-\frac{\sum_{n\geqslant 0}(t(2+n)+t(1+n))z^n}{S(z)}$$ satisfies $$(-1)^{e+1}\sum_{n\geqslant 0}(t(2^{e+1}+n)+t(2^e+n))z^n=H(z^{2^e})S(z)$$ for all $e\in\mathbb{N}$. \end{theorem} These theorems were originally stated as Conjectures 1.3 and 3.2 in \cite{B}. \section{Untwisting Bacher's First Conjecture} In this section, we will prove Theorem \ref{Bacherconj1}, but first we note the following lemma, which is a direct consequence of the definitions of the Stern sequence and its twist. \begin{lemma}\label{AT} The generating series $S(z)=\sum_{n\geqslant 0}s(n)z^n$ and $T(z)=\sum_{n\geqslant 0}t(n)z^n$ satisfy the functional equations $$S(z^2)=\left(\frac{z}{1+z+z^2}\right)S(z)$$ and $$T(z^2)=\left(T(z)-2z\right)\left(\frac{-z}{1+z+z^2}\right),$$ respectively. \end{lemma} We prove here only the functional equation for $T(z)$. The functional equation for the generating series of the Stern sequence is well--known; for details see, e.g., \cite{CoonsS1, Dil1}. \begin{proof}[Proof of Lemma \ref{AT}] This is a straightforward calculation using the definition of $t(n)$.
Note that \begin{align*} T(z)&= \sum_{n\geq 0}t(2n)z^{2n}+\sum_{n\geq 0}t(2n+1)z^{2n+1}\\ &=-\sum_{n\geq 0}t(n)z^{2n}+t(1)z+\sum_{n\geq 1}t(2n+1)z^{2n+1}\\ &=-T(z^2)+z-\sum_{n\geq 1}t(n)z^{2n+1}-\sum_{n\geq 1}t(n+1)z^{2n+1}\\ &=-T(z^2)+z-zT(z^2)-z^{-1}\sum_{n\geq 1}t(n+1)z^{2(n+1)}\\ &=-T(z^2)+2z-zT(z^2)-z^{-1}\sum_{n\geq 0}t(n+1)z^{2(n+1)}\\ &=-T(z^2)+2z-zT(z^2)-z^{-1}T(z^2). \end{align*} Solving for $T(z^2)$ gives $$T(z^2)=\left(T(z)-2z\right)\left(\frac{-z}{1+z+z^2}\right),$$ which is the desired result. \end{proof} Since the proof of Theorem \ref{Bacherconj1} is easiest for the case $e=1$, and this case is indicative of the proof for the general case, we present it here separately. \begin{proof}[Proof of Theorem \ref{Bacherconj1} for $e=1$] Recall that the sequence $\{u(n)\}_{n\geqslant 0}$ is defined by the relationship $$U(z):=\sum_{n\geqslant 0}u(n)z^n =\frac{\sum_{n\geqslant 0}t(3+n)z^n}{S(z)}.$$ Since $$\sum_{n\geqslant 0}t(3+n)z^n=\frac{1}{z^3}\left(T(z)+z^2-z\right),$$ we have that \begin{equation}\label{bUA}U(z)=\frac{T(z)+z^2-z}{z^3S(z)}=\frac{1}{z^3}\cdot\frac{T(z)}{S(z)}+\frac{z^2-z}{z^3}\cdot\frac{1}{S(z)}.\end{equation} Note that we are interested in a statement about the function $U(z^{2})$. We will use the functional equations for $S(z)$ and $T(z)$ to examine this quantity via \eqref{bUA}. Note that equation \eqref{bUA} gives, sending $z\mapsto z^{2}$ and applying Lemma \ref{AT}, that $$U(z^2)=\frac{1}{z^6}\cdot\frac{T(z^2)}{S(z^2)}+\frac{z^4-z^2}{z^6}\cdot\frac{1}{S(z^2)}=\frac{1}{z^6S(z)}\left(2z-T(z)+(z^3-z)(1+z+z^2)\right).$$ Thus we have that \begin{align*} (-1)^1 S(z)U(z^2)&=\frac{-1}{z^6}\left(2z-T(z)-z-z^2-z^3+z^3+z^4+z^5\right)\\ &=\frac{1}{z^6}\left(T(z)-z+z^2-z^4-z^5\right)\\ &=\frac{1}{z^6}\sum_{n\geq 6}t(n)z^n\\ &=\sum_{n\geq 0}t(3\cdot 2+n)z^n, \end{align*} which is exactly what we wanted to show. \end{proof} For the general case, complications arise in a few different places. The first is concerning $T(z^{2^e})$.
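Before turning to those complications, the full statement of Theorem \ref{Bacherconj1} can be checked numerically on truncated power series; the Python sketch below (an informal sanity check, independent of the proofs) computes $u(n)$ by formal division by $S(z)$ and compares coefficients for $e=1,2,3$:

```python
N = 200                      # number of coefficients to compare
M = 8 * N                    # how far to tabulate the sequences

s = [0, 1] + [0] * (2 * M)   # Stern sequence
t = [0, 1] + [0] * (2 * M)   # twisted Stern sequence
for n in range(1, M):
    s[2 * n] = s[n]
    s[2 * n + 1] = s[n] + s[n + 1]
    t[2 * n] = -t[n]
    t[2 * n + 1] = -t[n] - t[n + 1]

def series_div(a, b, n):
    """First n coefficients of the formal power series a(z)/b(z), b[0] = 1."""
    assert b[0] == 1
    q = []
    for k in range(n):
        q.append(a[k] - sum(q[j] * b[k - j] for j in range(k)))
    return q

# U(z) = (sum_n t(3+n) z^n) / S(z); both series vanish at z = 0,
# so divide out one factor of z from each before the formal division.
num = [t[3 + n] for n in range(N + 1)]
u = series_div(num[1:], s[1:], N)

# Check: sum_n t(3*2^e + n) z^n = (-1)^e S(z) sum_n u(n) z^(n*2^e), mod z^N.
for e in (1, 2, 3):
    sign = (-1) ** e
    for k in range(N):
        rhs = sign * sum(u[n] * s[k - n * 2 ** e]
                         for n in range(k // 2 ** e + 1))
        assert rhs == t[3 * 2 ** e + k]
```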
We will build up the result with a sequence of lemmas to avoid a long and calculation--heavy proof of Theorem \ref{Bacherconj1}. \begin{lemma}\label{T2e} For all $e\geq 1$ we have $$T(z^{2^e})=T(z)\prod_{i=0}^{e-1}\left(\frac{-z^{2^i}}{1+z^{2^i}+z^{2^{i+1}}}\right)-2\sum_{j=0}^{e-1}z^{2^j}\prod_{i=j}^{e-1}\left(\frac{-z^{2^i}}{1+z^{2^i}+z^{2^{i+1}}}\right).$$ \end{lemma} \begin{proof} We give a proof by induction. Note that for $e=1$, the right--hand side of the desired equality is $$T(z)\left(\frac{-z}{1+z+z^{2}}\right)-2z\left(\frac{-z}{1+z+z^{2}}\right)=\left(T(z)-2z\right)\left(\frac{-z}{1+z+z^{2}}\right)=T(z^2)$$ where the last equality follows from Lemma \ref{AT}. Now suppose the identity holds for $e-1$. Then, again using Lemma \ref{AT}, we have \begin{align*} T(z^{2^e}) = T((z^2)^{2^{e-1}})&=T(z^2)\prod_{i=0}^{e-2}\left(\frac{-z^{2^{i+1}}}{1+z^{2^{i+1}}+z^{2^{i+2}}}\right)-2\sum_{j=0}^{e-2}z^{2^{j+1}}\prod_{i=j}^{e-2}\left(\frac{-z^{2^{i+1}}}{1+z^{2^{i+1}}+z^{2^{i+2}}}\right)\\ &=\left(T(z)-2z\right)\left(\frac{-z}{1+z+z^{2}}\right)\prod_{i=1}^{e-1}\left(\frac{-z^{2^{i}}}{1+z^{2^{i}}+z^{2^{i+1}}}\right)\\ &\qquad\qquad-2\sum_{j=1}^{e-1}z^{2^{j}}\prod_{i=j}^{e-1}\left(\frac{-z^{2^{i}}}{1+z^{2^{i}}+z^{2^{i+1}}}\right)\\ &=\left(T(z)-2z\right)\prod_{i=0}^{e-1}\left(\frac{-z^{2^{i}}}{1+z^{2^{i}}+z^{2^{i+1}}}\right)-2\sum_{j=1}^{e-1}z^{2^{j}}\prod_{i=j}^{e-1}\left(\frac{-z^{2^{i}}}{1+z^{2^{i}}+z^{2^{i+1}}}\right)\\ &=T(z)\prod_{i=0}^{e-1}\left(\frac{-z^{2^{i}}}{1+z^{2^{i}}+z^{2^{i+1}}}\right)-2\sum_{j=0}^{e-1}z^{2^{j}}\prod_{i=j}^{e-1}\left(\frac{-z^{2^{i}}}{1+z^{2^{i}}+z^{2^{i+1}}}\right). \end{align*} Hence, by induction, the identity is true for all $e\geq 1$. \end{proof} We will need the following result for our next lemma. 
\begin{theorem}[Bacher \cite{B}]\label{B1.4} For all $e\geqslant 1$, we have $$\prod_{i=0}^{e-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)=\frac{(-1)^e}{z(1+z^{2^e})}\sum_{n=0}^{3\cdot 2^e}t(3\cdot 2^e+n)z^n.$$ \end{theorem} The following lemma is similar to the comment made in Remark 1.5 of \cite{B}. \begin{lemma} For all $e\geq 1$, we have that $$\sum_{n=0}^{3\cdot 2^e}t(n)z^n=z-z^2+\sum_{k=0}^{e-1}(-1)^kz^{3\cdot 2^k+1}(z^{2^k}+1)\prod_{i=0}^{k-1}(1+z^{2^i}+z^{2^{i+1}}).$$ \end{lemma} \begin{proof} If $e\geq 1$, then we have \begin{align*} \sum_{n=0}^{3\cdot 2^e}t(n)z^n &= z-z^2+\sum_{k=0}^{e-1}\sum_{n=3\cdot 2^k}^{3\cdot 2^{k+1}}t(n)z^n\\ &= z-z^2+\sum_{k=0}^{e-1}\sum_{n=0}^{3\cdot 2^{k}}t(3\cdot 2^{k}+n)z^{n+3\cdot 2^{k}}\\ &= z-z^2+\sum_{k=0}^{e-1}z^{3\cdot 2^k}\sum_{n=0}^{3\cdot 2^{k}}t(3\cdot 2^{k}+n)z^{n}. \end{align*} (The overlapping endpoints of consecutive inner sums contribute nothing, since $t(3\cdot 2^{k})=0$ for all $k\geqslant 0$: indeed $t(3)=0$ and $t(2n)=-t(n)$.) Applying Theorem \ref{B1.4}, we have that $$\sum_{n=0}^{3\cdot 2^e}t(n)z^n=z-z^2+\sum_{k=0}^{e-1}z^{3\cdot 2^k}(-1)^kz(z^{2^k}+1)\prod_{i=0}^{k-1}(1+z^{2^i}+z^{2^{i+1}}),$$ which after some trivial rearrangement of terms gives the result. \end{proof} \begin{lemma}\label{tspp} For all $e\geq 1$, we have $$\sum_{n=0}^{3\cdot 2^e} t(n)z^n=2z\sum_{j=0}^{e-1}(-1)^j\prod_{i=0}^{j-1}(1+z^{2^i}+z^{2^{i+1}})-(-1)^ez(z^{2^e}-1)\prod_{i=0}^{e-1} (1+z^{2^i}+z^{2^{i+1}}).$$ \end{lemma} \begin{proof} This lemma is again proved by induction, using the result of the previous lemma.
Note that in view of the previous lemma, by subtracting the first term on the right--hand side of the desired equality, it is enough to show that for all $e\geq 1$, we have \begin{multline}\label{lrhs}z-z^2+\sum_{k=0}^{e-1}(-1)^k\left(z^{4\cdot 2^k}+z^{3\cdot 2^k}-2\right)z\prod_{i=0}^{k-1}(1+z^{2^i}+z^{2^{i+1}})\\ =-(-1)^ez(z^{2^e}-1)\prod_{i=0}^{e-1} (1+z^{2^i}+z^{2^{i+1}}).\end{multline} If $e=1$, then the left--hand side of \eqref{lrhs} is $$z-z^2+(z^4+z^3-2)z=-z-z^2+z^4+z^5,$$ and the right--hand side of \eqref{lrhs} is $$-(-1)z(z^2-1)(1+z+z^2)=-z-z^2+z^4+z^5,$$ so that \eqref{lrhs} holds for $e=1$. Now suppose that \eqref{lrhs} holds for $e-1$. Then \begin{align*} z-z^2+\sum_{k=0}^{e-1}(-1)^k&\left(z^{4\cdot 2^k}+z^{3\cdot 2^k}-2\right)z\prod_{i=0}^{k-1}(1+z^{2^i}+z^{2^{i+1}})\\ &= (-1)^{e-1}\left(z^{4\cdot 2^{e-1}}+z^{3\cdot 2^{e-1}}-2\right)z\prod_{i=0}^{e-2}(1+z^{2^i}+z^{2^{i+1}})\\ &\qquad\qquad+z-z^2+\sum_{k=0}^{e-2}(-1)^k\left(z^{4\cdot 2^k}+z^{3\cdot 2^k}-2\right)z\prod_{i=0}^{k-1}(1+z^{2^i}+z^{2^{i+1}})\\ &= (-1)^{e-1}\left(z^{4\cdot 2^{e-1}}+z^{3\cdot 2^{e-1}}-2\right)z\prod_{i=0}^{e-2}(1+z^{2^i}+z^{2^{i+1}})\\ &\qquad\qquad-(-1)^{e-1}z(z^{2^{e-1}}-1)\prod_{i=0}^{e-2} (1+z^{2^i}+z^{2^{i+1}}). \end{align*} Factoring out the product we thus have that \begin{align*} z-z^2+\sum_{k=0}^{e-1}(-1)^k&\left(z^{4\cdot 2^k}+z^{3\cdot 2^k}-2\right)z\prod_{i=0}^{k-1}(1+z^{2^i}+z^{2^{i+1}})\\ &=(-1)^e\prod_{i=0}^{e-2} (1+z^{2^i}+z^{2^{i+1}})\cdot \left(-\left(z^{4\cdot 2^{e-1}}+z^{3\cdot 2^{e-1}}-2\right)z+z(z^{2^{e-1}}-1)\right)\\ &=-(-1)^ez\prod_{i=0}^{e-2} (1+z^{2^i}+z^{2^{i+1}})\cdot \left(z^{4\cdot 2^{e-1}}+z^{3\cdot 2^{e-1}}-z^{2\cdot 2^{e-1}}-1\right)\\ &=-(-1)^ez\prod_{i=0}^{e-2} (1+z^{2^i}+z^{2^{i+1}})\cdot (z^{2^e}-1)(1+z^{2^{e-1}}+z^{2^{e}})\\ &=-(-1)^ez(z^{2^e}-1)\prod_{i=0}^{e-1} (1+z^{2^i}+z^{2^{i+1}}), \end{align*} so that by induction, \eqref{lrhs} holds for all $e\geq 1$.
\end{proof} With these lemmas in place we are in position to prove Theorem \ref{Bacherconj1}. \begin{proof}[Proof of Theorem \ref{Bacherconj1}] We start by restating \eqref{bUA}; that is $$U(z)=\frac{T(z)+z^2-z}{z^3S(z)}=\frac{1}{z^3}\cdot\frac{T(z)}{S(z)}+\frac{z^2-z}{z^3}\cdot\frac{1}{S(z)}.$$ Sending $z\mapsto z^{2^e},$ we have that $$U(z^{2^e})=\frac{1}{z^{3\cdot 2^e}}\cdot\frac{T(z^{2^e})}{S(z^{2^e})}+\frac{z^{2^{e+1}}-z^{2^e}}{z^{3\cdot 2^e}}\cdot\frac{1}{S(z^{2^e})}=\frac{1}{z^{3\cdot 2^e}}\cdot\frac{T(z^{2^e})}{S(z^{2^e})}+\frac{z^{2^{e+1}}-z^{2^e}}{z^{3\cdot 2^e}z^{2^e-1}S(z)}\cdot\prod_{i=0}^{e-1} (1+z^{2^i}+z^{2^{i+1}}),$$ where we have used the functional equation for $S(z)$ to give the last equality. Using Lemma \ref{T2e} and the functional equation for $S(z)$, we have that \begin{align}\nonumber\frac{T(z^{2^e})}{S(z^{2^e})}&=\frac{T(z)\prod_{i=0}^{e-1}\left(\frac{-z^{2^i}}{1+z^{2^i}+z^{2^{i+1}}}\right)-2\sum_{j=0}^{e-1}z^{2^j}\prod_{i=j}^{e-1}\left(\frac{-z^{2^i}}{1+z^{2^i}+z^{2^{i+1}}}\right)}{S(z)}\cdot\prod_{i=0}^{e-1} \left(\frac{1+z^{2^i}+z^{2^{i+1}}}{z^{2^i}}\right)\\ \label{ToverS}&=(-1)^e\frac{T(z)}{S(z)}-(-1)^e\frac{2z}{S(z)}\sum_{j=0}^{e-1}(-1)^{j}\prod_{i=0}^{j-1}\left({1+z^{2^i}+z^{2^{i+1}}}\right).\end{align} Applying this to the expression for $U(z^{2^e})$ we have, multiplying by $(-1)^eS(z)$, that \begin{multline*}(-1)^eS(z)U(z^{2^e})=\frac{1}{z^{3\cdot 2^e}}\left(T(z)-2z\sum_{j=0}^{e-1}(-1)^{j}\prod_{i=0}^{j-1}\left({1+z^{2^i}+z^{2^{i+1}}}\right)\right.\\ \left.+(-1)^ez(z^{2^e}-1)\prod_{i=0}^{e-1}\left({1+z^{2^i}+z^{2^{i+1}}}\right)\right).\end{multline*} Now by Lemma \ref{tspp}, this reduces to $$(-1)^eS(z)U(z^{2^e})=\frac{1}{z^{3\cdot 2^e}}\left(T(z)-\sum_{n=0}^{3\cdot 2^e} t(n)z^n\right)=\sum_{n\geq 0}t(3\cdot 2^e+n)z^n,$$ which proves the theorem. \end{proof} \section{Untwisting Bacher's Second Conjecture} In this section, we will prove Theorem \ref{Bacherconj2}.
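Before giving the proofs, both identities can be confirmed on truncated power series, in the same spirit as the check for Theorem \ref{Bacherconj1}; the Python sketch below (illustrative only) verifies parts (i) and (ii) for $e=1,2,3$:

```python
N = 200
M = 8 * N
s = [0, 1] + [0] * (2 * M)   # Stern sequence
t = [0, 1] + [0] * (2 * M)   # twisted Stern sequence
for n in range(1, M):
    s[2 * n] = s[n]
    s[2 * n + 1] = s[n] + s[n + 1]
    t[2 * n] = -t[n]
    t[2 * n + 1] = -t[n] - t[n + 1]

def series_div(a, b, n):
    """First n coefficients of the formal power series a(z)/b(z), b[0] = 1."""
    assert b[0] == 1
    q = []
    for k in range(n):
        q.append(a[k] - sum(q[j] * b[k - j] for j in range(k)))
    return q

# G and H from the statement of the theorem; the numerators and S(z) all
# vanish at z = 0, so one factor of z is divided out of each.
num_g = [s[2 + n] - s[1 + n] for n in range(N + 1)]
num_h = [-(t[2 + n] + t[1 + n]) for n in range(N + 1)]
g = series_div(num_g[1:], s[1:], N)
h = series_div(num_h[1:], s[1:], N)

for e in (1, 2, 3):
    p = 2 ** e
    for k in range(N):
        conv_g = sum(g[n] * s[k - n * p] for n in range(k // p + 1))
        conv_h = sum(h[n] * s[k - n * p] for n in range(k // p + 1))
        assert s[2 * p + k] - s[p + k] == conv_g                       # (i)
        assert (-1) ** (e + 1) * (t[2 * p + k] + t[p + k]) == conv_h   # (ii)
```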
For ease of reading we have separated the proofs of the two parts of Theorem \ref{Bacherconj2}. To prove Theorem \ref{Bacherconj2}(i) we will need the following lemma. \begin{lemma}\label{pss} For all $k\geq 0$ we have that $$z\prod_{i=0}^{k-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)=\sum_{n=1}^{2^k}s(n)z^n+\sum_{n=1}^{2^k-1}s(2^k-n)z^{n+2^k}.$$ \end{lemma} \begin{proof} Again, we prove by induction. Note that for $k=0$, the product and the right--most sum are both empty, thus they are equal to $1$ and $0$, respectively. Since $$z=s(1)z=\sum_{n=1}^{2^0}s(n)z^n$$ the lemma is true for $k=0$. To use some nonempty terms, we consider the case $k=1$. Then we have $$z\prod_{i=0}^{1-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)=z+z^2+z^3=\sum_{n=1}^{2^1}s(n)z^n+\sum_{n=1}^{2^1-1}s(2^1-n)z^{n+2^1},$$ so the lemma holds for $k=1$. Now suppose the lemma holds for $k-1$. Then \begin{align*} z\prod_{i=0}^{k-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)&=\left(1+z^{2^{k-1}}+z^{2^{k}}\right)\cdot z\prod_{i=0}^{k-2}\left(1+z^{2^i}+z^{2^{i+1}}\right)\\ &=\left(1+z^{2^{k-1}}+z^{2^{k}}\right)\left(\sum_{n=1}^{2^{k-1}}s(n)z^n+\sum_{n=1}^{2^{k-1}-1}s(2^{k-1}-n)z^{n+2^{k-1}}\right)\\ &=\left(\sum_{n=1}^{2^{k-1}}s(n)z^n+\sum_{n=1}^{2^{k-1}-1}s(2^{k-1}-n)z^{n+2^{k-1}}+\sum_{n=1}^{2^{k-1}}s(n)z^{n+2^{k-1}}\right)\\ &\qquad +\left(\sum_{n=1}^{2^{k-1}-1}s(2^{k-1}-n)z^{n+2^{k}}+\sum_{n=1}^{2^{k-1}}s(n)z^{n+2^{k}}\right.\\ &\qquad\qquad\left.+\sum_{n=1}^{2^{k-1}-1}s(2^{k-1}-n)z^{n+3\cdot 2^{k-1}}\right)\\ &=\Sigma_1+\Sigma_2, \end{align*} where $\Sigma_1$ and $\Sigma_2$ represent the triplets of sums from the previous line (we have grouped the last sums in triplets since we will deal with them that way).
Note that we have \begin{multline*}\Sigma_1=\sum_{n=1}^{2^{k-1}}s(n)z^n+s(2^{k-1})z^{2^k}+\sum_{n=1}^{2^{k-1}-1}\left(s(n)+s(2^{k-1}-n)\right)z^{n+2^{k-1}}\\ =\sum_{n=1}^{2^{k-1}}s(n)z^n+s(2^{k})z^{2^k}+\sum_{n=1}^{2^{k-1}-1}s(2^{k-1}+n)z^{n+2^{k-1}}=\sum_{n=1}^{2^{k}}s(n)z^n,\end{multline*} where we have used the fact that $s(2n)=s(n)$ and for $n\in[0,2^j]$ the identity $s(2^j+n)=s(2^j-n)+s(n)$ holds (see, e.g., \cite[Theorem 1.2(i)]{B} for details). Similarly, since $2^{k-1}-n=2^k-(n+2^{k-1})$ and $$s(2^{k-1}-n)+s(n)=s(2^{k-1}+n)=s(2^k+n)-s(n)=s(2^k-n)$$ (see Proposition 3.1(i) and Theorem 1.2(i) of \cite{B}), we have that \begin{align*}\Sigma_2&=\sum_{n=1}^{2^{k-1}-1}\left(s(2^{k-1}-n)+s(n)\right)z^{n+2^k}+s(2^{k-1})z^{3\cdot 2^{k-1}}+\sum_{n=1}^{2^{k-1}-1}s(2^{k-1}-n)z^{n+2^{k-1}+2^k}\\ &= \sum_{n=1}^{2^{k-1}-1}s(2^{k}-n)z^{n+2^k}+s(2^{k-1})z^{3\cdot 2^{k-1}}+\sum_{n=2^{k-1}+1}^{2^{k}-1}s(2^{k}-n)z^{n+2^k}\\ &=\sum_{n=1}^{2^k-1}s(2^k-n)z^{n+2^k}.\end{align*} Thus $$\Sigma_1+\Sigma_2=\sum_{n=1}^{2^{k}}s(n)z^n+\sum_{n=1}^{2^k-1}s(2^k-n)z^{n+2^k},$$ and by induction the lemma is proved. \end{proof} \begin{proof}[Proof of Theorem \ref{Bacherconj2}(i)] We denote as before the generating series of the Stern sequence by $S(z)$. Splitting up the sum in the definition of $G(z)$ we see that $$G(z)=\frac{1}{z^{2}}(1-z)-\frac{1}{z S(z)},$$ so that using the functional equation for $S(z)$ we have \begin{align*}G(z^{2^e}) = \frac{1}{z^{2^{e+1}}}(1-z^{2^e})-\frac{1}{z^{2^e} S(z^{2^e})}= \frac{1}{z^{2^{e+1}}}(1-z^{2^e})-\frac{1}{z^{2^{e}}}\cdot\frac{\prod_{i=0}^{e-1}(1+z^{2^i}+z^{2^{i+1}})}{z^{2^{e}-1}S(z)}.
\end{align*} This gives \begin{equation}\label{32rhsi} G(z^{2^e})S(z)=\frac{1}{z^{2^{e+1}}}\left((1-z^{2^e})S(z)-z\prod_{i=0}^{e-1}(1+z^{2^i}+z^{2^{i+1}})\right).\end{equation} We use the previous lemma to deal with the right--hand side of \eqref{32rhsi}; that is, the previous lemma gives that \begin{align*} (1-z^{2^e})S(z)-z\prod_{i=0}^{e-1}(1+z^{2^i}+z^{2^{i+1}})&= \sum_{n\geq 1}s(n)z^n-\sum_{n=1}^{2^e}s(n)z^n\\ &\qquad\qquad-\sum_{n\geq 1}s(n)z^{n+2^e}-\sum_{n=1}^{2^e-1}s(2^e-n)z^{n+2^e}\\ &=\sum_{n\geq 2^e+1} s(n)z^n-\sum_{n\geq 2^e}s(n)z^{n+2^e}\\ &\qquad\qquad-\sum_{n=1}^{2^e-1}(s(n)+s(2^e-n))z^{n+2^e}\\ &=\sum_{n\geq 1} s(2^e+n)z^{n+2^e}-\sum_{n\geq 2^e}s(n)z^{n+2^e}\\ &\qquad\qquad-\sum_{n=1}^{2^e-1}s(2^e+n)z^{n+2^e}\\ &=\sum_{n\geq 0} s(2^{e+1}+n)z^{n+2^{e+1}}-\sum_{n\geq 0}s(2^e+n)z^{n+2^{e+1}}. \end{align*} Dividing the last line by $z^{2^{e+1}}$ gives the desired result. This proves the theorem. \end{proof} The proof of the second part of the theorem follows similarly. We will use the following lemma. \begin{lemma}[Bacher \cite{B}] For $n$ satisfying $1\leq n\leq 2^e$ we have that \begin{enumerate} \item[(i)] $t(2^{e+1}+n)+t(2^{e}+n)=(-1)^{e+1}s(n),$ \item[(ii)] $t(2^{e}+n)=(-1)^e(s(2^e-n)-s(n))$, \item[(iii)] $t(2^{e+1}+n)=(-1)^{e+1}s(2^{e}-n)$. \end{enumerate} \end{lemma} \begin{proof} Parts (i) and (ii) are given in Proposition 3.1 and Theorem 1.2 of \cite{B}, respectively. Part (iii) follows easily from (i) and (ii). Note that (i) gives that $$t(2^{e+1}+n)=(-1)^{e+1}s(n)-t(2^{e}+n),$$ which by (ii) becomes \begin{equation*}t(2^{e+1}+n)=(-1)^{e+1}s(n)+(-1)^{e+1}s(2^e-n)-(-1)^{e+1}s(n)=(-1)^{e+1}s(2^e-n).\qedhere\end{equation*} \end{proof} \begin{proof}[Proof of Theorem \ref{Bacherconj2} (ii)] We denote as before the generating series of the Stern sequence by $S(z)$. 
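The three identities of the lemma just quoted can also be checked numerically; the sketch below assumes, as before, the recurrences $s(2n)=s(n)$, $s(2n+1)=s(n)+s(n+1)$ and (for the twisted sequence, our assumption) $t(2n)=-t(n)$, $t(2n+1)=-t(n)-t(n+1)$, with $s(0)=t(0)=0$ and $s(1)=t(1)=1$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def s(n):
    # Stern's diatomic sequence
    if n < 2:
        return n
    return s(n // 2) if n % 2 == 0 else s(n // 2) + s(n // 2 + 1)

@lru_cache(maxsize=None)
def t(n):
    # assumed recurrence for the twisted Stern sequence
    if n < 2:
        return n
    return -t(n // 2) if n % 2 == 0 else -(t(n // 2) + t(n // 2 + 1))

for e in range(9):
    sign = (-1) ** e
    for n in range(1, 2**e + 1):
        assert t(2**(e + 1) + n) + t(2**e + n) == -sign * s(n)        # (i)
        assert t(2**e + n) == sign * (s(2**e - n) - s(n))             # (ii)
        assert t(2**(e + 1) + n) == -sign * s(2**e - n)               # (iii)
print("Bacher's lemma (i)-(iii) checked for e = 0..8 and 1 <= n <= 2^e")
```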
Splitting up the sum in the definition of $H(z)$ we see that $$H(z)=\frac{1}{z S(z)}-\frac{1+z}{z^{2}}\cdot\frac{T(z)}{S(z)}.$$ Since we will consider $H(z^{2^e})$, we need to compute $\frac{T(z^{2^e})}{S(z^{2^e})}.$ Fortunately we have done this in the proof of Theorem \ref{Bacherconj1}, in \eqref{ToverS}, and so we use this expression here. Thus, applying the functional equation for $S(z)$, we have that \begin{multline*} H(z^{2^e})=\frac{\prod_{i=0}^{e-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)}{z^{2^{e+1}-1}S(z)}-(-1)^e\left(\frac{1+z^{2^e}}{z^{2^{e+1}}}\right)\frac{T(z)}{S(z)}\\ +(-1)^e\left(\frac{1+z^{2^e}}{z^{2^{e+1}}}\right)\frac{2z}{S(z)}\sum_{j=0}^{e-1}(-1)^j\prod_{i=0}^{j-1}\left(1+z^{2^i}+z^{2^{i+1}}\right),\end{multline*} so that \begin{multline*} (-1)^{e+1}H(z^{2^e})S(z)=\frac{1}{z^{2^{e+1}}}\left((1+z^{2^e})T(z)-(-1)^ez\prod_{i=0}^{e-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)\right.\\ \left.-2z(1+z^{2^e})\sum_{j=0}^{e-1}(-1)^j\prod_{i=0}^{j-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)\right).\end{multline*} An application of Lemma \ref{tspp} gives \begin{align*} (-1)^{e+1}H(z^{2^e})S(z)&=\frac{1}{z^{2^{e+1}}}\left((1+z^{2^e})T(z)-(-1)^ez\prod_{i=0}^{e-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)\right.\\ &\ \ \left.-(1+z^{2^e})\sum_{n=0}^{3\cdot 2^e}t(n)z^n-(-1)^ez(z^{2^e}-1)(1+z^{2^e})\prod_{i=0}^{e-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)\right)\\ &=\frac{1}{z^{2^{e+1}}}\left(\mathfrak{S}_1+\mathfrak{S}_2+\mathfrak{S}_3+\mathfrak{S}_4\right),\end{align*} where the $\mathfrak{S}_i$ denote the four terms in the previous line, in order.
Note that \begin{align*}\mathfrak{S}_1+\mathfrak{S}_3&=(1+z^{2^e})\left(T(z)-\sum_{n=0}^{3\cdot 2^e}t(n)z^n\right)\\ &=(1+z^{2^e})\sum_{n\geq 1}t(3\cdot 2^e+n)z^{n+3\cdot2^e}\\ &=\sum_{n\geq 2^e+1}t(2^{e+1}+n)z^{n+2^{e+1}}+\sum_{n\geq 2^{e+1}+1}t(2^{e}+n)z^{n+2^{e+1}},\end{align*} so that $$\frac{\mathfrak{S}_1+\mathfrak{S}_3}{z^{2^{e+1}}}=\sum_{n\geq 2^e+1}t(2^{e+1}+n)z^{n}+\sum_{n\geq 2^{e+1}+1}t(2^{e}+n)z^{n}.$$ Using Lemma \ref{pss}, we have \begin{align*} \frac{\mathfrak{S}_2+\mathfrak{S}_4}{z^{2^{e+1}}}&=\frac{1}{z^{2^{e+1}}}\left(-(-1)^e-(-1)^e(z^{2^{e+1}}-1)\right)z\prod_{i=0}^{e-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)\\ &=\frac{(-1)^{e+1}}{z^{2^{e+1}}}\cdot z^{2^{e+1}}\left(\sum_{n=1}^{2^e}s(n)z^n+\sum_{n=1}^{2^e-1}s(2^e-n)z^{n+2^e}\right)\\ &=(-1)^{e+1}\left(\sum_{n=1}^{2^e}s(n)z^n+\sum_{n=1}^{2^e-1}s(2^e-n)z^{n+2^e}\right). \end{align*} Using the preceding lemma and the fact that $t(2^{e+1})+t(2^e)=0$, so that we can add in a zero term, we have that \begin{align*} \frac{\mathfrak{S}_2+\mathfrak{S}_4}{z^{2^{e+1}}}&=\sum_{n=1}^{2^e}(t(2^{e+1}+n)+t(2^e+n))z^n+\sum_{n=1}^{2^e-1}t(2^{e+1}+n)z^{n+2^e}\\ &=\sum_{n=1}^{2^e}(t(2^{e+1}+n)+t(2^e+n))z^n+\sum_{n=2^e+1}^{2^{e+1}-1}t(2^{e}+n)z^{n}\\ &=\sum_{n=0}^{2^e}(t(2^{e+1}+n)+t(2^e+n))z^n+\sum_{n=2^e+1}^{2^{e+1}-1}t(2^{e}+n)z^{n}\\ &=\sum_{n=0}^{2^e}t(2^{e+1}+n)z^n+\sum_{n=0}^{2^{e+1}-1}t(2^{e}+n)z^{n}. \end{align*} Putting together these results gives \begin{align*}(-1)^{e+1}H(z^{2^e})S(z)&=\frac{1}{z^{2^{e+1}}}\left(\mathfrak{S}_1+\mathfrak{S}_2+\mathfrak{S}_3+\mathfrak{S}_4\right)\\ &=\sum_{n\geq 0}t(2^{e+1}+n)z^n+\sum_{n\geq 0}t(2^{e}+n)z^{n},\end{align*} which proves the theorem. \end{proof} \section{Computing with binary expansions} To gain intuition regarding Bacher's conjectures mentioned in the first section, we found it very useful to understand what happens to the Stern sequence and its twist at sums of powers of $2$.
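The recurrences for $s$ and $t$ are easy to compute with directly, and they give a mechanical check of the matrix product formulas proved at the end of this section. One caveat in the sketch below: we take the product with $i$ decreasing from $m-1$ down to $1$ (the factor for $b_{m-1}$ acting first on the row vector), which is the order that the inductive proofs produce; the recurrence used for $t$ is again our assumption.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def s(n):
    # Stern's diatomic sequence
    if n < 2:
        return n
    return s(n // 2) if n % 2 == 0 else s(n // 2) + s(n // 2 + 1)

@lru_cache(maxsize=None)
def t(n):
    # assumed recurrence for the twisted Stern sequence
    if n < 2:
        return n
    return -t(n // 2) if n % 2 == 0 else -(t(n // 2) + t(n // 2 + 1))

def via_matrices(n, row):
    # row * M(b_{m-1}) * ... * M(b_1) * (1, b_0)^T, where M(b) = [[1, 1-b], [b, 1]]
    bits = [(n >> i) & 1 for i in range(n.bit_length())]  # b_0, ..., b_m
    x, y = row
    for i in range(len(bits) - 2, 0, -1):  # i = m-1 down to 1
        b = bits[i]
        x, y = x + y * b, x * (1 - b) + y  # (x, y) <- (x, y) M(b)
    return x + y * bits[0]

for n in range(4, 1025):
    m = n.bit_length() - 1
    assert via_matrices(n, (1, 1)) == s(n), n
    assert (-1) ** m * via_matrices(n, (1, -1)) == t(n), n
print("matrix formulas checked for 4 <= n <= 1024")
```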
Thus, in this section we prove the following theorem which removes the need to use the recurrences to give the values of the Stern sequence. \begin{theorem} Let $n\geqslant 4$ and write $n=\sum_{i=0}^m 2^ib_i$, the binary expansion of $n$. Then $$s(n)=\left[\begin{matrix}1 & 1\end{matrix}\right]\left(\prod_{i=1}^{m-1}\left[\begin{matrix}1 & 1-b_i\\ b_i & 1 \end{matrix}\right]\right)\left[\begin{matrix}1\\ b_0 \end{matrix}\right].$$ \end{theorem} \begin{proof} Note that from the definition of the Stern sequence we easily have that $$s(2a+x)=s(a)+x\cdot s(a+1)\qquad (x\in\{0,1\}).$$ It follows that for each $k\in\{0,1,\ldots,m-1\}$ we have that both $$s\left(\sum_{i=k}^m 2^{i-k}b_i\right)=s\left(\sum_{i=k+1}^m 2^{i-(k+1)}b_i\right)+b_k\cdot s\left(1+\sum_{i=k+1}^m 2^{i-(k+1)}b_i\right)$$ and $$s\left(1+\sum_{i=k}^m 2^{i-k}b_i\right)=(1-b_k)\cdot s\left(\sum_{i=k+1}^m 2^{i-(k+1)}b_i\right)+s\left(1+\sum_{i=k+1}^m 2^{i-(k+1)}b_i\right).$$ Starting with $k=0$, applying the above equalities repeatedly, and using the fact that $b_m=1$, so that $s(b_m)=s(b_m+1)=1$, gives the result. \end{proof} We have a similar result for the twisted Stern sequence, whose proof is only trivially different from the above, and so we have omitted it. \begin{theorem} Let $n\geqslant 4$ and write $n=\sum_{i=0}^m 2^ib_i$, the binary expansion of $n$. Then $$t(n)=(-1)^m\left[\begin{matrix}1 & -1\end{matrix}\right]\left(\prod_{i=1}^{m-1}\left[\begin{matrix}1 & 1-b_i\\ b_i & 1 \end{matrix}\right]\right)\left[\begin{matrix}1\\ b_0 \end{matrix}\right].$$ \end{theorem} Indeed, both $s(n)$ and $t(n)$ are $2$--regular, and so the fact that $s(n)$ and $t(n)$ satisfy theorems like the two above is provided by Lemma 4.1 of \cite{AS} (note that while the existence is proven, the matrices are not explicitly given there). \end{document}
\begin{document} \title{\centerline{\large \bf INVARIANTS OF LINKS OF CONWAY TYPE}} \thispagestyle{firststyle} \section{Introduction}The purpose of this paper is to present a certain combinatorial method of constructing invariants of isotopy classes of oriented tame links. This arises as a generalization of the known polynomial invariants of Conway and Jones. These invariants have one striking common feature. If $L_+, L_-$ and $L_0$ are diagrams of oriented links which are identical, except near one crossing point, where they look as in Fig. 1.1\footnote{Added for e-print: we follow here Conway's old convention \cite{C}; in modern literature the role of $L_+$ and $L_-$ is usually inverted. In \cite{Prz} the new convention is already used.}, then $w_{L_+}$ is uniquely determined by $w_{L_-}$ and $w_{L_0}$, and also $w_{L_-}$ is uniquely determined by $w_{L_+}$ and $w_{L_0}$. Here $w$ denotes the considered invariants (we will often take the liberty of speaking about the value of an invariant for a specific link or diagram of a link rather than for an isotopy class of links). In the above context, we agree to write $L_+^p, L_-^p$ and $L_0^p$ if we need the crossing point to be explicitly specified. \ \\ \ \\ \centerline{{\psfig{figure=SkeinTriplePT1.eps,height=3.5cm}}}\ \\ \centerline{\footnotesize{Fig. 1.1}} \indent In this paper we will consider the following general situation. Assume we are given an algebra $\A$ with a countable number of $0$-argument operations $a_1, a_2,..., a_n,...$ and two 2-argument operations $|$ and $*$.
We would like to construct invariants satisfying the conditions \begin{align*} w_{L_+}= &\ w_{L_-} | w_{L_0} \text{ and}\\ w_{L_-}= &\ w_{L_+} * w_{L_0} \text{ and} \\ w_{T_n}= &\ a_n \text{ for $T_n$ being a trivial link of $n$ components.} \end{align*} \indent We say that $(\A, a_1, a_2,..., |, *)$ is a Conway algebra if the following conditions are satisfied\footnote{Added for e-print: We were unaware, when writing this paper, that the condition 1.5, $(a\!*\!b)\!*\!(c\!*\!d) = (a\!*\!c)\!*\!(b\!*\!d)$, had already been in use for over 50 years, appearing first in \cite{Bu-Ma}, under various names, e.g. the entropic condition (see for example \cite{N-P}).}: \\ ${1.1} \quad a_n | a_{n+1} = a_n \\ {1.2} \quad a_n * a_{n+1} = a_n $\\ $\left. \begin{aligned} {1.3} && &(a|b)|(c|d) = (a|c)|(b|d) \\ {1.4} && &(a|b)\!*\!(c|d) = (a\!*\!c)|(b\!*\!d) \\ {1.5} && &(a\!*\!b)\!*\!(c\!*\!d) = (a\!*\!c)\!*\!(b\!*\!d) \end{aligned} \right\} \quad \text{transposition properties}$ \\ ${1.6} \quad (a|b)*b = a \\ {1.7} \quad (a*b)|b = a.$\\ \\ We will prove the following theorem: \\ \begin{theorem}\textbf{1.8.} For a given Conway algebra $\A$ there exists a uniquely determined invariant $w$ which attaches an element $w_L$ from $\A$ to every isotopy class of oriented links and satisfies the conditions\\ ${(1)} \quad w_{T_n} = a_n \hspace{2.5cm} \text{ initial conditions}$\\ $\left. \begin{aligned} {(2)}&& &w_{L_+} = &\ w_{L_-} | w_{L_0}\\ {(3)}&& &w_{L_-} = &\ w_{L_+} * w_{L_0} \end{aligned} \right.\bigg\} \ Conway\ relations$ \end{theorem} It will be proved in \S{2}. \\ \indent Let us write here a few words about the geometrical meaning of the axioms 1.1-1.7 of Conway algebra. Relations 1.1 and 1.2 are introduced to reflect the following geometrical relations between the diagrams of trivial links of $n$ and $n+1$ components: \centerline{\epsfig{figure=Figure12.eps,height=10cm}} \centerline{\footnotesize{Fig.
1.2}} Relations 1.3, 1.4, and 1.5 arise from rearranging a link at two crossings of the diagram in two different orders. This will become clear in \S{2}. Relations 1.6 and 1.7 reflect the fact that we need the operations $|$ and $*$ to be in some sense opposite one to another. \\ \begin{example}\textbf{1.9.} (Number of components). Set $\A=N$, the set of natural numbers; $a_i=i$ and $i|j=i*j=i$. This algebra yields the number of components of a link.\end{example} \begin{example}\textbf{1.10.} Set $\A=$ \big\{0, 1, 2\big\}; the operation $*$ is equal to $|$ and $0|0\!=\!1,\ 1|0\!=\!0,\ 2|0\!=\!2,\ 0|1\!=\!0,\ 1|1\!=\!2,\ 2|1\!=\!1,\ 0|2\!=\!2,\ 1|2\!=\!1,\ 2|2\!=\!0$. Furthermore $a_i \equiv i \mod3.$ The invariant defined by this algebra distinguishes, for example, the trefoil knot from the trivial knot. \end{example} \begin{example}\textbf{1.11.}{(a)} $\A=Z [x^{\mp 1} , y^{\mp 1}, z]$; $a_1 = 1, a_2=x+y+z,\dots, a_i = (x+y)^{i-1} + z(x+y)^{i-2} + \dots+z(x+y)+z=(x+y)^{i-1}+z \big( \frac{(x+y)^{i-1}-1}{x+y-1} \big)$,{\small ...} . We define $|$ and $*$ as follows: $w_2 | w_0 = w_1$ and $w_1 * w_0 = w_2$ where \begin{flalign*} {1.12}&& xw_1+yw_2 &=w_0-z, \ \ w_1,w_2,w_0 \in \A.& \end{flalign*} \end{example} \indent (b) $\A = Z$[ $x^{\mp1}$ , $y^{\mp1}$] is obtained from the algebra described in (a) by substitution $z=0$. In particular $a_i=(x+y)^{i-1}$ and 1.12 reduces to: \begin{flalign*} {1.13}&& xw_1+yw_2 &= w_0.& \end{flalign*} We describe this algebra for two reasons: \\ \indent $-$first, the invariant of links defined by this algebra is the simplest generalization of the Conway polynomial \big(substitute $x=- \frac{1}{z}$, $y=\frac{1}{z}$, {[$K$-1]}\big) and the Jones polynomial \big(substitute $x= \frac{1}{t} \frac{1}{\sqrt{t}-\frac{1}{\sqrt{t}}}$, $y= \frac{-t}{\sqrt{t}-\frac{1}{\sqrt{t}}}$ {[$J$]}\big); \\ \indent$-$second, this invariant behaves well under disjoint and connected sums of links: \begin{center} $\qquad P_{L_1 \sqcup L_2} (x,y)\!=\!
(x+y)P_{L_1{\sharp} L_2}(x,y)\!=\!(x+y)P_{L_1}(x,y)\!\cdot\!P_{L_2}(x,y)$ \\ where $P_L\!(x,y)$ is a polynomial invariant of links yielded by $\A$. \end{center} \begin{example} \textbf{1.14.} (Linking number). Set $\A\!=\!N\!\times\!Z$, $a_i\!=\!(i,0)$ and \begin{equation*} (a,b)|(c,d)=\begin{cases} (a,b-1) \ \textup{if} \ a > c \\ (a,b) \qquad \textup{if} \ a \leqslant c \end{cases} \end{equation*} \begin{equation*} (a,b)*(c,d)=\begin{cases} (a,b+1)\ \textup{if} \ a > c \\ (a,b)\ \qquad \textup{if}\ a \leqslant c \end{cases} \end{equation*} \end{example} \indent The invariant associated to a link is a pair (number of components, linking number). \begin{remark}\textbf{1.15.} It may happen that for each pair $u,v\!\in\!\A$ there exists exactly one $w\!\in\!\A$ such that $v|w\!=\!u$ and $u\!*\!w\!=\!v$. Then we can introduce a new operation $\circ$: $\A \times \A \rightarrow \A$ putting $u \circ v=w$ (we have such a situation in Examples $1.10$ and $1.11$ but not in $1.9$ where $2|1\!=\!2\!*\!1\!=\!2\!=\!2|3\!=\!2\!*\!3$). Then $a_n\!=\!a_{n-1}\circ a_{n-1}$. If the operation $\circ$ is well defined we can find an easy formula for invariants of connected and disjoint sums of links. We can interpret $\circ$ as follows: if $w_1$ is the invariant of $L_+$ (Fig. 1.1) and $w_2$ of $L_-$ then $w_1 \circ w_2$ is the invariant associated to $L_0$. \end{remark} \begin{remark}\textbf{1.16.} Our invariants often allow us to distinguish between a link and its mirror image. If $P_L (x, y, z)$ is an invariant of $L$ from Example 1.11 (a) and $\overline{L}$ is the mirror image of $L$ then \[P_{\overline{L}}(x,y,z) = P_L(y,x,z).\] \end{remark} \indent We will call a crossing of the type \parbox{1.1cm}{\psfig{figure=PT-Lplus.eps,height=0.9cm}} positive and a crossing of the type \parbox{1.1cm}{\psfig{figure=PT-Lmin.eps,height=0.9cm}} negative. This will be denoted by sgn $p= +$ or $-$. Let us consider now the following example.
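Since the algebra of Example 1.10 is finite, its axioms can be verified exhaustively. The sketch below does so, and then evaluates the resolving-tree expression $a_1*(a_2|a_1)$, which is the value computed for the figure eight knot in Example 1.17 below; the value $2$ differs from $a_1=1$, the value of the trivial knot.

```python
from itertools import product

# Operation table of Example 1.10; here the operation * coincides with |
bar = {(0, 0): 1, (1, 0): 0, (2, 0): 2,
       (0, 1): 0, (1, 1): 2, (2, 1): 1,
       (0, 2): 2, (1, 2): 1, (2, 2): 0}
star = bar
a = lambda n: n % 3  # a_i = i mod 3

# 1.1 and 1.2: a_n | a_{n+1} = a_n = a_n * a_{n+1} (three residues suffice)
for n in range(1, 4):
    assert bar[a(n), a(n + 1)] == a(n) == star[a(n), a(n + 1)]

# 1.3-1.5 collapse to a single transposition property since * = |
for x1, x2, x3, x4 in product(range(3), repeat=4):
    assert bar[bar[x1, x2], bar[x3, x4]] == bar[bar[x1, x3], bar[x2, x4]]

# 1.6 and 1.7: | and * undo one another
for x, y in product(range(3), repeat=2):
    assert star[bar[x, y], y] == x and bar[star[x, y], y] == x

# figure eight knot, cf. Example 1.17: w_L = a_1 * (a_2 | a_1)
w = star[a(1), bar[a(2), a(1)]]
print("axioms 1.1-1.7 hold; figure eight value:", w)  # w == 2
```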
\begin{example}\textbf{1.17.} Let $L$ be the figure eight knot represented by the diagram \end{example} \centerline{\epsfig{figure=Figure13.eps,height=7.5cm}} \ \\ \centerline{\footnotesize{Fig. 1.3}} \ \\ To determine $w_L$ let us consider the following binary tree: \\ \centerline{\epsfig{figure=Fig14.eps,height=12cm}} \centerline{\footnotesize{Fig. 1.4}} As is easily seen, the leaves of the tree are trivial links and every branching reflects a certain operation on the diagram at the marked crossing point. To compute $w_L$ it is enough to have the following tree: \\ \ \\ \centerline{\psfig{figure=Skein-tree2.eps,height=3cm}} Here the sign indicates the sign of the crossing point at which the operation was performed, and the leaf entries are the values of $w$ for the resulting trivial links. Now we may conclude that \[w_L\!=\!a_1\!*\!(a_2|a_1).\] Such a binary tree of operations on the diagram resulting in trivial links at the leaves will be called the resolving tree of the diagram. \newline \indent There exists a standard procedure to obtain such a tree for every diagram. It will be described in the next paragraph and it will play an essential role in the proof of Theorem 1.8. It should be admitted that the idea is due to Ball and Mehta {[$B$-$M$]} and we learned this from the Kauffman lecture notes {[$K$-3]}. \\ \section{Proof of the Main Theorem}\label{Section 2} \begin{definition}\textbf{2.1.} Let $L$ be an oriented diagram of $n$ components and let $b\!=$($b_1,\dots,b_n$) be base points of $L$, one point from each component of $L$, but not the crossing points.
Then we say that $L$ is untangled with respect to $b$ if the following holds: if one travels along $L$ (according to the orientation of $L$) starting from $b_1$, then, after having returned to $b_1$, from $b_2,\dots,$ and finally from $b_n$, each crossing which is met for the first time is crossed by a bridge.\end{definition} \indent It is easily seen that for every diagram $L$ of an oriented link there exists a resolving tree such that the leaf diagrams are untangled (with respect to appropriately chosen base points). This is obvious for diagrams with no crossings at all, and once it is known for diagrams with less than $n$ crossings we can use the following procedure for any diagram with $n$ crossings: choose base points arbitrarily and start walking along the diagram until the first ``bad'' crossing $p$ is met, i.e. the first crossing which is crossed by a tunnel when first met. Then begin to construct the tree changing the diagram at this point. If, for example, sgn $p\!=\!+$ we get \\ \centerline{{\psfig{figure=Skein-tree.eps,height=2.1cm}}}\ \\ Then we can apply the inductive hypothesis to $L_0^p$ and we can continue the procedure with $L_-^p$ (walking further along the diagram and looking for the next bad point). \\ \\ \indent To prove Theorem 1.8 we will construct the function $w$ as defined on diagrams. In order to show that $w$ is an invariant of isotopy classes of oriented links we will verify that $w$ is preserved by the Reidemeister moves. \\ \\ \indent We use induction on the number $cr(L)$ of crossing points in the diagram. For each $k \geqslant 0$ we define a function $w_k$ assigning an element of $\A$ to each diagram of an oriented link with no more than $k$ crossings. Then $w$ will be defined for every diagram by $w_L = w_k(L)$ where $k \geqslant cr(L)$. Of course the functions $w_k$ must satisfy certain coherence conditions for this to work. Finally we will obtain the required properties of $w$ from the properties of the $w_k$'s.
\\ \indent We begin with the definition of $w_0$. For a diagram $L$ of $n$ components with $cr(L)=0$ we put \\ \begin{flalign*} {2.2}&& w_0(L) &= a_n.& \end{flalign*} To define $w_{k+1}$ and prove its properties we will use induction several times. To avoid misunderstandings the following will be called the ``Main Inductive Hypothesis'' (M.I.H.). We assume that we have already defined a function $w_k$ attaching an element of $\A$ to each diagram $L$ for which $cr(L) \leqslant k$. We assume that $w_k$ has the following properties: \begin{flalign*} {2.3}&& w_k(U_n) &= a_n & \end{flalign*} for $U_n$ being an untangled diagram of $n$ components (with respect to some choice of base points). \begin{flalign*} {2.4} && w_k(L_+) &= w_k(L_-)|w_k(L_0)&\\ {2.5} && w_k(L_-) &= w_k(L_+)*w_k(L_0) & \end{flalign*} for $L_+$, $L_-$ and $L_0$ being related as usual. \begin{flalign*} {2.6}&& w_k(L) &= w_k(R(L))& \end{flalign*} where $R$ is a Reidemeister move on $L$ such that $cr(R(L))$ is still at most $k$. \\ \indent Then, as the reader may expect, we want to make the Main Inductive Step to obtain the existence of a function $w_{k+1}$ with analogous properties defined on diagrams with at most $k+1$ crossings. \\ \indent Before dealing with the task of making the M.I.S. let us explain why it will complete the proof of the theorem. It is clear that the function $w_k$ satisfying M.I.H. is uniquely determined by properties 2.3, 2.4, 2.5 and the fact that for every diagram there exists a resolving tree with untangled leaf diagrams. Thus the compatibility of the functions $w_k$ is obvious and they define a function $w$ on diagrams. \\ \indent The function $w$ satisfies conditions (2) and (3) of the theorem because the functions $w_k$ satisfy such conditions.
\\ \indent If $R$ is a Reidemeister move on a diagram $L$, then $cr(R(L))$ equals at most $k=cr(L)+2$, whence \\ \indent $w_{R(L)}$=$w_k(R(L))$, $w_L$=$w_k(L)$ and by the properties of $w_k$, $w_k(L)$=$w_k(R(L))$, which implies $w_{R(L)}$=$w_L$. It follows that $w$ is an invariant of the isotopy class of oriented links. \\ \indent Now it is clear that $w$ has the required property (1) too, since there is an untangled diagram $U_n$ in the same isotopy class as $T_n$ and we have $w_k(U_n)=a_n$. \\ \indent The rest of this section will be occupied by the M.I.S. For a given diagram $D$ with $cr(D)\!\leqslant\!k+1$ we will denote by $\mathcal{D}$ the set of all diagrams which are obtained from $D$ by operations of the kind \parbox{3.1cm}{\psfig{figure=PTplustominus.eps,height=0.9cm}} or \parbox{3.1cm}{\psfig{figure=PTplustozero.eps,height=0.9cm}}. Of course, once base points $b$=($b_1,\dots, b_n$) are chosen on $D$, then the same points can be chosen as base points for any $L\!\in\!\mathcal{D}$, provided $L$ is obtained from $D$ by the operations of the first type only. \\ \indent Let us define a function $w_b$, for a given $D$ and $b$, assigning an element of $\A$ to each $L\!\in\!\mathcal{D}$. If $cr(L)\!<\!k+1$ we put \begin{flalign*} {2.7}&& w_b(L) &= w_k(L) & \end{flalign*} If $U_n$ is an untangled diagram with respect to $b$ we put \begin{flalign*} {2.8}&& w_b(U_n) &= a_{n} & \end{flalign*} ($n$ denotes the number of components). \\ Now we can proceed by induction on the number $b(L)$ of bad crossings in $L$ (in the symbol $b(L)$, $b$ works simultaneously for ``bad'' and for $b$=($b_1,\dots,b_n$); for a different choice of base points $b'$=($b'_1,\dots,b'_n$) we will write $b'(L)$). Assume that $w_b$ is defined for all $L\!\in\!\mathcal{D}$ such that $b(L)\!<\!t$. Then for $L$, $b(L)$=$t$, let $p$ be the first bad crossing for $L$ (starting from $b_1$ and walking along the diagram). Depending on $p$ being positive or negative we have $L$=$L_+^p$ or $L$=$L_-^p$.
We put \begin{flalign*} {2.9} & \ \ \ \ \ \ \ \ \ w_b(L)= \begin{cases} w_b(L_-^p) | w_b(L_0^p), & \text{if sgn } p = + \\ w_b(L_+^p)*w_b(L_0^p), & \text{if sgn } p = -. \\ \end{cases}& \end{flalign*} We will show that $w_b$ is in fact independent of the choice of $b$ and that it has the properties required from $w_{k+1}$. \\ \\ \textbf{Conway Relations for $\boldsymbol{w_b}$} \\ \\ \indent Let us begin with the proof that $w_b$ has properties $2.4$ and $2.5$. We will denote by $p$ the considered crossing point. We restrict our attention to the case: $b(L_+^p)>b(L_-^p)$. The opposite situation is quite analogous. \\ \indent Now, we use induction on $b(L_-^p)$. If $b(L_-^p)$=$0$, then $b(L_+^p)$=$1$, $p$ is the only bad point of $L_+^p$, and by defining equalities $2.9$ we have \[w_b(L_+^p)\!=\!w_b(L_-^p)|w_b(L_0^p)\] and using 1.6 we obtain \[w_b(L_-^p)\!=\!w_b(L_+^p)\!*\!w_b(L_0^p).\] Assume now that the formulae $2.4$ and $2.5$ for $w_b$ are satisfied for every diagram $L$ such that $b(L_-^p)\!<\!t$, $t\!\geqslant\!1$. Let us consider the case $b(L_-^p)$=$t$. \\ \indent By the assumption $b(L_+^p)\!\geqslant\!2$. Let $q$ be the first bad point on $L_+^p$. Assume that $q$=$p$. Then by $2.9$ we have \[w_b(L_+^p)\!=\!w_b(L_-^p)| w_b(L_0^p).\] Assume that $q \neq p$. Let sgn $q=+$, for example. Then by 2.9 we have \[w_b(L_+^p) = w_b(L{_+^p}{_+^q}) = w_b(L{_+^p}{_-^q}) | w_b(L{_+^p}{_ 0^q}).\] But $b(L{_-^p}{_-^q})\!<\!t$ and $cr(L{_+^p}{_ 0^q})\!\leqslant\!k$, whence by the inductive hypothesis and M.I.H. 
we have \[w_b(L{_+^p}{_-^q})\!=\!w_b(L{_-^p}{_-^q})|w_b(L{_0^p}{_ -^q})\] and \[w_b(L{_+^p}{_0^q})\!=\!w_b(L{_-^p}{_0^q})|w_b(L{_0^p}{_ 0^q})\] whence \[w_b(L_+^p) = (w_b(L{_-^p}{_-^q}) | w_b(L{_0^p}{_-^q}))|( w_b(L{_-^p}{_ 0^q})| w_b(L{_0^p}{_0^q)})\] and by the transposition property 1.3 \begin{flalign*} {2.10} && w_b(L_+^p) &=(w_b(L{_-^p}{_-^q})|w_b(L{_-^p}{_0^q})) |(w_b(L{_0^p}{_ -^q})|w_b(L{_0^p}{_0^q})).& \end{flalign*} On the other hand $b(L{_-^p}{_-^q})\!<\!t$ and $cr(L_ 0^p)\!\leqslant\!k$, so using once more the inductive hypothesis and M.I.H. we obtain \[ {2.11} \begin{split} w_b(L_-^p) &= w_b(L{_-^p}{_+^q}) = w_b(L{_-^p}{_-^q}) | w_b(L{_-^p}{_ 0^q}) \\ w_b(L_0^p) &= w_b(L{_0^p}{_+^q}) = w_b(L{_0^p}{_-^q}) | w_b(L{_0^p}{_ 0^q}) \end{split} \] Putting 2.10 and 2.11 together we obtain \[w_b(L_+^p) = w_b(L_-^p) | w_b(L_0^p)\] as required. If sgn $q=-$ we use $1.4$ instead of $1.3$. This completes the proof of Conway Relations for $w_b$. \\ \\ \textbf{Changing Base Points} \\ \\ \indent We will show now that $w_b$ does not depend on the choice of $b$, provided the order of components is not changed. It amounts to the verification that we may replace $b_i$ by $b'_i$ taken from the same component in such a way that $b'_i$ lies after $b_i$ and there is exactly one crossing point, say $p$, between $b_i$ and $b'_i$. Let $b'$=($b_1, \dots,b'_i,\dots,b_n$). We want to show that $w_b(L)\!=\!w_{b'}(L)$ for every diagram with $k+1$ crossings belonging to $\mathcal{D}$. We will only consider the case sgn $p=+$; the case sgn $p=-$ is quite analogous. \\ \indent We use induction on $B(L)=$max$(b(L),b'(L))$. We consider three cases. \\ \\ \indent \textsc{Cbp 1.} Assume $B(L)\!=\!0$. Then $L$ is untangled with respect to both choices of base points and by 2.8 \[w_b(L) = a_n = w_{b'}(L).\] \indent \textsc{Cbp 2.} Assume that $B(L)\!=\!1$ and $b(L)\!\neq\!b'(L)$. This is possible only when $p$ is a self-crossing point of the $i$-th component of $L$. 
There are two subcases to be considered. \\ \\ \indent \textsc{Cbp 2} (a): $b(L)\!=\!1$ and $b'(L)\!=\!0$. Then $L$ is untangled with respect to $b'$ and by 2.8 $$ w_{b'}(L)=a_n $$ $$ \ \ \ \ w_b(L)=w_b(L_+^p)= w_b(L_-^p)|w_b(L_0^p)$$ \indent Again we have restricted our attention to the case sgn $p\!=\!+$. Now, $w_b(L_-^p)\!=\!a_n$ since $b(L_-^p)\!=\!0$, and $L_0^p$ is untangled with respect to a proper choice of base points. Of course $L_0^p$ has $n+1$ components, so $w_b(L_0^p)\!=\!a_{n+1}$ by 2.8. It follows that $w_b(L)\!=\!a_n|a_{n+1}$ and $a_n|a_{n+1}=a_n$ by $1.1$. \\ \\ \indent \textsc{Cbp 2}(b): $b(L)\!=\!0$ and $b'(L)\!=\!1$. This case can be dealt with like \textsc{Cbp 2}(a).\\ \\ \indent \textsc{Cbp 3.} $B(L)\!=\!t\!>\!1$ or $B(L)\!=\!1\!=\!b(L)\!=\!b'(L)$. We assume by induction $w_b(K)\!=\!w_{b'}(K)$ for $B(K)\!<\!B(L)$. Let $q$ be a crossing point which is bad with respect to $b$ and $b'$ as well. We will consider this time the case sgn $q=-$. The case sgn $q=+$ is analogous. \\ \indent Using the already proven Conway relations for $w_b$ and $w_{b'}$ we obtain \[w_b(L)= w_b(L_-^q)=w_b(L_+^q)\!*\!w_b(L_0^q) \] \[w_{b'}(L)= w_{b'}(L_-^q)= w_{b'}(L_+^q)\!*\!w_{b'}(L_0^q)\] But $B(L_+^q)\!<\!B(L)$ and $cr(L_0^q)\!\leqslant\!k$, whence by the inductive hypothesis and M.I.H. we have \[w_b(L_+^q) = w_{b'}(L_+^q)\] \[w_b(L_0^q) = w_{b'}(L_0^q)\] which imply $w_b(L)\!=\!w_{b'}(L)$. This completes the proof of this step (C.B.P.). \\ \indent Since $w_b$ has turned out to be independent of base point changes which preserve the order of components, we can now define a function $w^0$ which attaches an element of $\A$ to every diagram $L$, $cr(L) \leqslant k+1$, with a fixed ordering of components. \\ \\ \textbf{Independence of $\boldsymbol{w^0}$ of Reidemeister Moves} (I.R.M) \\ \\ \indent When $L$ is a diagram with fixed order of components and $R$ is a Reidemeister move on $L$, then we have a natural ordering of components on $R(L)$.
We will show now that $w^0(L)=w^0(R(L))$. Of course we assume that $cr(L)$, $cr(R(L)) \leqslant k+1$. \\ \indent We use induction on $b(L)$ with respect to properly chosen base points $b=(b_1, \dots , b_n)$. Of course the choice must be compatible with the given ordering of components. We choose the base points to lie outside the part of the diagram involved in the considered Reidemeister move $R$, so that the same points may work for the diagram $R(L)$ as well. We have to consider the three standard types of Reidemeister moves (Fig. 2.1). \\ \ \\ \centerline{\psfig{figure=R123-PT.eps,height=2.5cm}} \centerline{\footnotesize{Fig. 2.1}} \indent Assume that $b(L)=0$. Then it is easily seen that also $b(R(L))=0$, and the number of components is not changed. Thus \[w^0(L)=w^0(R(L))\ \textup{ by\ 2.8.}\] \indent We assume now by induction that $w^0(L)=w^0(R(L))$ for $b(L)<t$. Let us consider the case $b(L)=t$. Assume that there is a bad crossing $p$ in $L$ which is different from all the crossings involved in the considered Reidemeister move. Assume, for example, that sgn $p=+$. Then, by the inductive hypothesis, we have \begin{flalign*} {2.12} && w^0(L_-^p) &= w^0(R(L_-^p))& \end{flalign*} and by M.I.H. \begin{flalign*} {2.13} && w^0(L_0^p) &= w^0(R(L_0^p))& \end{flalign*} Now, by the Conway relation 2.4, which was already verified for $w^0$ we have \[w^0(L)=w^0(L_+^p)=w^0(L_-^p)|w^0(L_0^p) \] \[w^0(R(L))=w^0(R(L)_+^p)=w^0(R(L)_-^p)|w^0(R(L)_0^p) \] whence by 2.12 and 2.13 \[w^0(L)=w^0(R(L))\] Obviously $R(L_-^p)=R(L)_-^p$ and $R(L_0^p)=R(L)_0^p$. \\ \indent It remains to consider the case when $L$ has no bad points, except those involved in the considered Reidemeister move. We will consider the three types of moves separately. The most complicated is the case of a Reidemeister move of the third type. 
To deal with it let us formulate the following observation: \\ \indent Whatever the choice of base points is, the crossing point of the top arc and the bottom arc cannot be the only bad point of the diagram. \\ \centerline{\psfig{figure=TriangleB-PT.eps,height=4.1cm}} \centerline{\footnotesize{Fig. 2.2}} The proof of the above observation amounts to an easy case by case checking and we omit it. The observation makes possible the following induction: we can assume that we have a bad point at the crossing between the middle arc and the lower or the upper arc. Let us consider for example the first possibility; thus $p$ from Fig. 2.2 is assumed to be a bad point. We consider two subcases, according to sgn $p$ being $+$ or $-$. \\ \indent Assume sgn $p=+$. Then by Conway relations \[w^0(L)=w^0(L_+^p)=w^0(L_-^p)|w^0(L_0^p)\] \[w^0(R(L))=w^0(R(L)_+^p)=w^0(R(L)_-^p)|w^0(R(L)_0^p)\] But $R(L)_-^p=R(L_-^p)$ and by the inductive hypothesis \[w^0(L_-^p)=w^0(R(L_-^p))\] Also $R(L)_0^p$ is obtained from $L_0^p$ by two subsequent Reidemeister moves of type two (see Fig. 2.3), whence by M.I.H. \[w^0(R(L)_0^p)=w^0(L_0^p)\] and the equality \[w^0(L)=w^0(R(L))\ \textup{follows.}\] \centerline{\epsfig{figure=Figure23.eps,height=3.5cm}} \centerline{\footnotesize{Fig.2.3}} Assume now that sgn $p=-$. Then by Conway relations \[w^0(L)=w^0(L_-^p)=w^0(L_+^p)*w^0(L_0^p) \] \[w^0(R(L))=w^0(R(L)_-^p)=w^0(R(L)_+^p)*w^0(R(L)_0^p)\] But $R(L)_+^p=R(L_+^p)$ and by the inductive hypothesis \[w^0(L_+^p)=w^0(R(L_+^p))\] Now, $L_0^p$ and $R(L)_0^p$ are essentially the same diagrams (see Fig. 2.4), whence $w^0(L_0^p)=w^0(R(L)_0^p)$ and the equality \[w^0(L)=w^0(R(L))\ \textup{follows.}\] \centerline{\epsfig{figure=Fig24.eps, height=3.5cm}} \centerline{\footnotesize{Fig. 2.4}} \ \\ \ \\ Reidemeister moves of the first type. The base points can always be chosen so that the crossing point involved in the move is good. \\ Reidemeister moves of the second type. 
There is only one case in which we cannot choose base points so as to guarantee that the points involved in the move are good. It happens when the involved arcs are parts of different components and the lower arc is a part of the earlier component. In this case both of the crossing points involved are bad and they are of different signs, of course. Let us consider the situation shown in Fig. 2.5. \\ \centerline{\epsfig{figure=Fig25.eps, height=3.5cm}} \centerline{\footnotesize{Fig. 2.5}} We want to show that $w^0(R(L))=w^0(L)$. But by the inductive hypothesis we have \[w^0(L')=w^0(R'(L'))=w^0(R(L)).\] Using the already proven Conway relations, formulae 1.6 and 1.7, and M.I.H. if necessary, it can be proved that $w^0(L)=w^0(L')$. Let us discuss in detail the case involving M.I.H. It occurs when sgn $p=-$. Then we have \[w^0(L)=w^0(L_+^q)=w^0(L_-^q)|w^0(L_0^q)=(w^0(L{_-^q}{_+^p})*w^0(L{_-^q}{_0^p}))|w^0(L_0^q)\] But $L{_-^q}{_+^p}= L'$ and by M.I.H. $w^0(L{_-^q}{_0^p})=w^0(L_0^q)$ (see Fig. 2.6, where $L{_-^q}{_0^p}$ and $L_0^q$ are both obtained from $K$ by a Reidemeister move of the first type). \\ \ \\ \centerline{\epsfig{figure=Fig26.eps, height=3.2cm}} \centerline{\footnotesize{Fig. 2.6}} Thus by 1.7: \[w^0(L) = w^0(L')\ \textup{whence}\] \[w^0(L)=w^0(R(L)). \] The case sgn $p=+$ is even simpler and we omit it. This completes the proof of the independence of $w^0$ from Reidemeister moves. To complete the Main Inductive Step it is enough to prove the independence of $w^0$ from the order of components. Then we set $w_k = w^0$. The required properties have already been checked. \\ \\ \textbf {Independence of the Order of Components} (I.O.C.) \\ \\ \indent It is enough to verify that for a given diagram $L\ (cr(L) \leqslant k+1)$ and fixed base points $b=(b_1, \dots , b_i, b_{i+1}, \dots , b_n)$ we have \[w_b(L)=w_{b'}(L)\] where $b'=(b_1, \dots , b_{i+1}, b_{i}, \dots , b_n)$. This is easily reduced by the usual induction on $b(L)$ to the case of an untangled diagram.
To deal with this case we will choose $b$ in an appropriate way. \\ \indent Before we do it, let us formulate the following observation: If $L_i$ is a trivial component of $L$, i.e. $L_i$ has no crossing points, either with itself or with other components, then the specific position of $L_i$ in the plane has no effect on $w^0(L)$; in particular we may assume that $L_i$ lies separately from the rest of the diagram: \\ \ \\ \centerline{\epsfig{figure=Fig27.eps, height=2.2cm}} \centerline{\footnotesize{Fig. 2.7}} This can be easily achieved by induction on $b(L)$, or, better, by simply noting that it is obvious. \\ \indent For an untangled diagram we will be done if we show that it can be transformed into another one with fewer crossings by a series of Reidemeister moves which do not increase the crossing number. We can then use I.R.M. and M.I.H. This is guaranteed by the following lemma. \begin{lemma}\textbf{2.14.} Let $L$ be a diagram with $k$ crossings and a given ordering of components $L_1, L_2, \dots , L_n$. Then either $L$ has a trivial circle as a component or there is a choice of base points $b=(b_1, \dots , b_n)$, $b_i \in L_i$, such that an untangled diagram $L^{u}$ associated with $L$ and $b$ (that is, one in which all the bad crossings of $L$ are changed to good ones) can be changed into a diagram with fewer than $k$ crossings by a sequence of Reidemeister moves not increasing the number of crossings. \end{lemma} This was probably known to Reidemeister already; however, we prove it in the Appendix for the sake of completeness.\\ \indent With I.O.C. proven we have completed M.I.S. and the proof of Theorem 1.8.\\ \section{Quasi-algebras}\label{Section 3} \indent We shall now describe a certain generalization of Theorem 1.8. This is based on the observation that it was not necessary to have the operations $|$ and $*$ defined on the whole product $\A \times \A$. Let us begin with the following definition.
\begin{definition}\textbf{3.0.} A quasi Conway algebra is a triple $(\A, B_1, B_2)$, where $B_1, B_2$ are subsets of $\A \times \A$, together with 0-argument operations $a_1,a_2, \dots ,a_n, \dots$ and two 2-argument operations $|$ and $*$ defined on $B_1$ and $B_2$ respectively, satisfying the conditions: \end{definition} \begin{align*} \left. \begin{aligned} 3.1 && \qquad &a_n | a_{n+1} = a_n \\ 3.2 && \qquad &a_n * a_{n+1} = a_n \\ 3.3&& \qquad &(a|b)|(c|d) = (a|c)|(b|d) \\ 3.4 && \qquad &(a|b)*(c|d) = (a*c)|(b*d) \\ 3.5 && \qquad &(a*b)*(c*d) = (a*c)*(b*d) \\ 3.6 && \qquad &(a|b)*b = a \\ 3.7 && \qquad &(a*b)|b = a. \\ \end{aligned} \right\} \indent \text{whenever both sides are defined} \end{align*} \indent We would like to construct invariants of Conway type using such quasi-algebras. As before, $a_n$ will be the value of the invariant for the trivial link of $n$ components. \\ \indent We say that $\A$ is geometrically sufficient if and only if for every resolving tree of each diagram of an oriented link all the operations that are necessary to compute the root value are defined. \begin{theorem}\textbf{3.8.} Let $\A$ be a geometrically sufficient quasi Conway algebra. There exists a unique invariant $w$ attaching to each isotopy class of links an element of $\A$ and satisfying the conditions \end{theorem} \begin{enumerate} \item $w_{T_n}=a_n$ for $T_n$ being a trivial link of $n$ components,\\ \item if $L_+$, $L_-$ and $L_0$ are diagrams from Fig. 1.1, then \[w_{L_+}=w_{L_-}|w_{L_0} \text{ and } \] \[w_{L_-}=w_{L_+}*w_{L_0}.\] \end{enumerate} \noindent The proof is identical with the proof of Theorem 1.8.\\ \indent As an example we will now describe an invariant whose values are polynomials in an infinite number of variables.
\begin{example}\textbf{3.9.} $\A=N \times Z[ y_1^{\mp1},{x'}_2^{\mp 1},{z'}_2,\ x_1^{\mp1},z_1,\ x_2^{\mp1},z_2,x_3^{\mp1},z_3,\dots], B_1=B_2=B=\{ ((n_1,\ w_1),(n_2,w_2))\in\A\times\A:|n_1-n_2|=1\}, a_1=(1,1),a_2=(2, x_1+y_1+z_1),\dots, a_n=(n,\Pi_{i=1}^{n-1}(x_i+y_i)+z_1\Pi_{i=2}^{n-1} (x_i+y_i)+\dots+z_{n-2}(x_{n-1}+y_{n-1})+z_{n-1}),\dots$ where $y_i=x_i\frac{y_1}{x_1}$. To define the operations $|$ and $*$ consider the following system of equations:\end{example} \begin{flalign*} {(1)} && &x_1w_1+y_1w_2=w_0-z_1& \\ {(2)} && &x_2w_1+y_2w_2=w_0-z_2& \\ {(2^\prime)}&& &x'_2w_1+y'_2w_2=w_0-z'_2& \\ {(3)} && &x_3w_1+y_3w_2=w_0-z_3& \\ {(3^\prime)}&& &x'_3w_1+y'_3w_2=w_0-z'_3& \\ && &\dots \\ {(i)}&& &x_iw_1+y_iw_2=w_0-z_i& \\ {(i^\prime)}&& &x'_iw_1+y'_iw_2=w_0-z'_i& \\ && &\ldots \end{flalign*} where $y'_i=\frac{x'_iy_1}{x_1},x'_i=\frac{x'_2x_1}{x_{i-1}}$ and $z'_i$ are defined inductively to satisfy \[\frac{z'_{i+1}-z_{i-1}}{x_1x'_2}= \Big(1+\frac{y_1}{x_1}\Big)\Big(\frac{z'_i}{x'_i}-\frac{z_i}{x_i}\Big).\] We define $(n,w)=(n_1,w_1)|(n_2,w_2)$ (resp. $(n,w)=(n_1,w_1)*(n_2,w_2)$) as follows: $n=n_1$ and if $n_1=n_2-1$ then we use the equation $(n)$ to get $w$; namely $x_nw+y_nw_1=w_2-z_n$ (resp. $x_nw_1+y_nw=w_2-z_n$). If $n_1=n_2+1$ then we use the equation $(n')$ to get $w$; namely $x'_nw+y'_nw_1=w_2-z'_n$ (resp. $x'_nw_1+y'_nw=w_2-z'_n$). We can think of Example 1.11 as being a special case of Example 3.9.
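The two operations of Example 3.9 amount to solving the linear equation $(n)$ or $(n')$ for the single remaining unknown. The following Python sketch is not part of the paper: the numerical coefficient values are arbitrary illustrative choices (they are not required to satisfy the extra conditions of Example 3.9), which is enough here because the transposition relations 3.6 and 3.7 hold identically for any nonzero coefficients:

```python
from fractions import Fraction as F

# Hypothetical illustrative coefficients: x_n, y_n, z_n are used when
# n_1 = n_2 - 1 (equation (n)), and x'_n, y'_n, z'_n when n_1 = n_2 + 1
# (equation (n')).  Any nonzero rational choices work for checking 3.6-3.7.
x  = {n: F(n + 2)     for n in range(1, 6)}
y  = {n: F(2*n + 1, 3) for n in range(1, 6)}
z  = {n: F(n, 5)      for n in range(1, 6)}
xp = {n: F(n + 3, 2)  for n in range(1, 6)}
yp = {n: F(n + 1, 4)  for n in range(1, 6)}
zp = {n: F(1, n + 1)  for n in range(1, 6)}

def bar(a, b):           # (n, w) = a | b
    (n1, w1), (n2, w2) = a, b
    if n1 == n2 - 1:     # equation (n):  x_n w + y_n w1 = w2 - z_n
        return (n1, (w2 - z[n1] - y[n1] * w1) / x[n1])
    if n1 == n2 + 1:     # equation (n'): x'_n w + y'_n w1 = w2 - z'_n
        return (n1, (w2 - zp[n1] - yp[n1] * w1) / xp[n1])
    raise ValueError("| is only defined when |n1 - n2| = 1")

def star(a, b):          # (n, w) = a * b
    (n1, w1), (n2, w2) = a, b
    if n1 == n2 - 1:     # equation (n):  x_n w1 + y_n w = w2 - z_n
        return (n1, (w2 - z[n1] - x[n1] * w1) / y[n1])
    if n1 == n2 + 1:     # equation (n'): x'_n w1 + y'_n w = w2 - z'_n
        return (n1, (w2 - zp[n1] - xp[n1] * w1) / yp[n1])
    raise ValueError("* is only defined when |n1 - n2| = 1")

# transposition axioms 3.6 and 3.7, in both the unprimed and primed cases
a, b = (2, F(7, 3)), (3, F(-1, 2))
assert star(bar(a, b), b) == a        # (a|b)*b = a
assert bar(star(a, b), b) == a        # (a*b)|b = a
c, d = (3, F(5, 4)), (2, F(1, 6))
assert star(bar(c, d), d) == c
assert bar(star(c, d), d) == c
```

Relations 3.6 and 3.7 hold identically because $*$ solves, for the other unknown, the same linear equation that $|$ used; the structural conditions (i)--(v) of Section 3 are only needed for relations 3.3--3.5.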
\\ \indent Now we will show that the quasi-algebra $\A$ for Example 3.9 satisfies the relations 1.1-1.7.\\ \indent It is an easy task to check that the first coordinate of elements from $\A$ satisfies the relations 1.1-1.7 (compare with Example 1.9) and to check the relations 1.1, 1.2, 1.6 and 1.7, so we will concentrate our attention on relations 1.3, 1.4, and 1.5.\\ \indent It is convenient to use the following notation: if $w\in\A$ then $w=(\lvert{w}\rvert,F)$ and for \[w_1|w_2=(\lvert{w_1}\rvert,F_1)|(\lvert{w_2}\rvert, F_2)=(\lvert{w}\rvert,F)=w \] to use the notation \begin{equation*} F=\begin{cases} F_1|_nF_2 \ \textup{if} \ n=\lvert{w_1}\rvert=\lvert{w_2}\rvert-1 \\ F_1|_{n'}F_2 \ \textup{if} \ n=\lvert{w_1}\rvert=\lvert{w_2}\rvert+1. \end{cases} \end{equation*} We use similar notation for the operation $*$. \\ \indent In order to verify relations 1.3-1.5 we have to consider three main cases:\\ $1. \quad \lvert{a}\rvert=\lvert{c}\rvert-1=\lvert{b}\rvert+1=n$ \\ Relations 1.3-1.5 make sense iff $\lvert{d}\rvert=n$. The relation 1.3 has the form: \[(F_a|_{n'}F_b)|_n(F_c|_{(n+1)'}F_d)=(F_a|_{n}F_c)|_{n'}(F_b|_{(n-1)}F_d).\] From this we get: \[\frac{1}{x_nx'_{n+1}}F_d-\frac{y'_{n+1}}{x_nx'_{n+1}}F_c-\frac{y_n}{x_nx'_n}F_b + \frac{y_ny'_n}{x_nx'_n} F_a-\frac{z'_{n+1}}{x_nx'_{n+1}}-\frac{z_n}{x_n}+ \frac{y_nz'_n}{x_nx'_n}= \] \[= \frac{1}{x'_nx_{n-1}} F_d-\frac{y_{n-1}}{x'_nx_{n-1}}F_b-\frac{y'_n}{x_nx'_n}F_c + \frac{y_ny'_n}{x_nx'_n}F_a-\frac{z_{n-1}}{x'_nx_{n-1}}-\frac{z'_n}{x'_n}+ \frac{y'_nz_n}{x_nx'_n}\] Therefore: \begin{align*} \text{(i)}&& &x_{n-1}x'_n=x_nx'_{n+1}\\ \text{(ii)}&& &\frac{y'_{n+1}}{x'_{n+1}}=\frac{y'_n}{x'_n} \\ \text{(iii)}&& &\frac{y_n}{x_n}=\frac{y_{n-1}}{x_{n-1}} \\ \text{(iv)}&& &\frac{z'_{n+1}}{x_nx'_{n+1}}+\frac{z_n}{x_n}-\frac{y_nz'_n}{x_nx'_n}=\frac{z_{n-1}}{x'_nx_{n-1}}+\frac{z'_n}{x'_n}-\frac{y'_nz_n}{x_nx'_n} \end{align*} When checking the relations 1.4 and 1.5 we get exactly the same conditions (i)-(iv).\\ $2. 
\quad \lvert a \rvert =\lvert b \rvert -1 = \lvert{c} \rvert -1 = n$.\\ \indent (I) $\lvert{d}\rvert=n.$\\ The relation 1.3 has the following form: \[(F_a|_{n}F_b)|_n(F_c|_{(n+1)'}F_d)=(F_a|_{n}F_c)|_{n}(F_b|_{(n+1)'}F_d).\] We get after some calculations that it is equivalent to \begin{align*} \text{(v)}&& &\frac{y_n}{x_n}=\frac{y'_{n+1}}{x'_{n+1}}& \end{align*} The relations 1.4 and 1.5 reduce to the same condition (v). \\ \indent (II) $\lvert{d}\rvert=n+2.$ \\ Then the relations 1.3-1.5 reduce to the condition (iii). \\ $3. \quad \lvert{a}\rvert=\lvert{b}\rvert+1=\lvert{c}\rvert+1=n$ \\ \indent (I) $\lvert{d}\rvert = n-2$ \\ \indent (II) $\lvert{d}\rvert = n.$ \\ We get, after some computations, that relations 3 (I) and 3 (II) follow from the conditions (iii) and (v). \\ \indent Conditions (i)-(v) are equivalent to the conditions on $x'_i, y_i, y'_i$ and $z'_i$ described in Example 3.9. Therefore the quasi-algebra $\A$ from Example 3.9 satisfies the relations 1.1-1.7. Furthermore, if $L$ is a diagram and $p$ is a crossing of $L$, then the number of components of $L_0^p$ is always equal to the number of components of $L$ plus or minus one, so the set $B\subset\A\times\A$ is sufficient to define the link invariant associated with $\A$.\\ \section{Final remarks and problems}\label{Section 4} \begin{remark}\textbf{4.1.} Each invariant of links can be used to build a better invariant which will be called the weighted simplex of the invariant. Namely, if $w$ is an invariant and $L$ is a link of $n$ components $L_1,\dots, L_n$ then we consider an $(n-1)$-dimensional simplex $\Delta^{n-1}=(q_1,\dots,q_n)$. We associate with each face ($q_{i_1},\dots,q_{i_k}$) of $\Delta^{n-1}$ the value $w_{L'}$ where $L' = L_{i_1}\cup \cdots \cup L_{i_k}$. \end{remark} \indent We say that two weighted simplices are equivalent if there exists a bijection of their vertices which preserves weights of faces.
Of course, the weighted simplex of an invariant of isotopy classes of oriented links is also an invariant of isotopy classes of oriented links. \\ \indent Before we present some examples, we will introduce an equivalence relation $\thicksim_c$ (the Conway equivalence relation) on isotopy classes of oriented links ($\mathcal{L}$). \begin{definition}\textbf{4.2.} $\thicksim_c$ is the smallest equivalence relation on $\mathcal{L}$ which satisfies the following condition: let $L'_1$ (resp. $L'_2$) be a diagram of a link $L_1$ (resp. $L_2$) with a given crossing $p_1$ (resp. $p_2$) such that $p_1$ and $p_2$ are crossings of the same sign and \[ \begin{split} (L'_1)_-^{p_1} \thicksim_c (L'_2)_-^{p_2}\ &\text{and} \\ (L'_1)_0^{p_1} \thicksim_c (L'_2)_0^{p_2} \end{split} \] then $L_1\thicksim_c L_2$.\\ \end{definition} \indent It is obvious that an invariant given by a quasi Conway algebra is a Conway equivalence invariant. \begin{example}\textbf{4.3.}{(a)} The two links shown in Fig. 4.1 are Conway equivalent but they can be distinguished by weighted simplices of the linking numbers. \end{example} \centerline{\epsfig{figure= Fig41.eps,height=3.5cm}} \centerline{\footnotesize{Fig. 4.1}} {(b)} J. Birman has found 3-braids (we use the notation of [M]): \[ \begin{split} \gamma_1=\sigma_1^{-2}\sigma_2^3\sigma_1^{-1}\sigma_2^4\sigma_1^{-2}\sigma_2^{4}\sigma_1^{-1}\sigma_2 \\ \gamma_2=\sigma_1^{-2}\sigma_2^3\sigma_1^{-1}\sigma_2^4\sigma_1^{-1}\sigma_2\sigma_1^{-2}\sigma_2^4 \end{split} \] whose closures have the same values of all the invariants described in our examples and the same signature, but which can be distinguished by weighted simplices of the linking numbers [B]. \\ \indent As the referee has kindly pointed out, the polynomial invariants described in 1.11 (a) and 1.11 (b) are equivalent.
Namely, if we denote them by $w_L$ and $w'_L$ respectively, then we have \[w_L(x,\ y,\ z)=\Big(1-\frac{z}{1-x-y}\Big)w'_L(x,\ y)+\frac{z}{1-x-y}.\] \begin{problem}\textbf{4.4.} \begin{enumerate} \item[(a)] Is the invariant described in Example 3.9 better than the polynomial invariant from Example 1.11?\footnote{Added for e-print: Adam Sikora proved in his Warsaw master degree thesis, written under the direction of P. Traczyk, that the answer to Problem 4.4 (a) is negative, \cite{Si-1}.} \item[(b)] Find an example of links which have the same polynomial invariant of Example 3.9 but which can be distinguished by some invariant given by a Conway algebra.\footnote{Added for e-print: Adam Sikora proved that no invariant coming from a Conway algebra can distinguish links with the same polynomial invariant of Example 1.11, \cite{Si-2}.} \item[(c)] Do there exist two links $L_1$ and $L_2$ which are not Conway equivalent but which cannot be distinguished using any Conway algebra? \item[(d)] Birman [$B$] described two closed 3-braid knots given by \[ \begin{split} y_1=\sigma_1^{-3}\sigma_2^4\sigma_1^{-1}\sigma_2^5\sigma_1^{-3}\sigma_2^{5}\sigma_1^{-2}\sigma_2 \\ y_2=\sigma_1^{-3}\sigma_2^4\sigma_1^{-1}\sigma_2^5\sigma_1^{-2}\sigma_2\sigma_1^{-3}\sigma_2^5 \end{split} \] which are not distinguished by the invariants described in our examples or by the signature. Are they Conway equivalent? (They are not isotopic because their incompressible surfaces are different.) \end{enumerate} \end{problem} \begin{problem}\textbf{4.5.} Given two Conway equivalent links, do they necessarily have the same signature? \end{problem} \indent The examples of Birman and Lozano [$B$; Prop. 1 and 2] have different signatures but the same polynomial invariant of Example 1.11 (b) (see [$B$]). \begin{problem}\textbf{4.6.} Let ($V_1$, $V_2$) be a Heegaard splitting of a closed 3-manifold $M$. Is it possible to modify the above approach using projections of links onto the Heegaard surface $\partial V_1$?
\end{problem} \indent We obtained the results of this paper in early December 1984 and we were not aware at the time that an important part of our results (the invariant described in Example 1.11 (b)) had been obtained three months before us by four groups of researchers: R. Lickorish and K. Millett, J. Hoste, A. Ocneanu, and P. Freyd and D. Yetter, and that the first two groups used arguments similar to ours. We were informed about this by J. Birman (letter received on January 28, '85) and by J. Montesinos (letter received on February 11, '85; it included the paper by R. Lickorish and K. Millett and also a small joint paper by all the above-mentioned mathematicians). \\ \section{Appendix} Here we prove Lemma 2.14.\\ \indent A closed part cut out of the plane by arcs of $L$ is called an $i$-gon if it has $i$ vertices (see Fig. 5.1). \\ \centerline{{\psfig{figure=PT-2-2-9.eps,height=4.0cm}}}\ \\ \centerline{\footnotesize{Fig. 5.1}} Every $i$-gon with $i\leqslant2$ will be called an $f$-gon ($f$ stands for ``few''). Now let $X$ be an innermost $f$-gon, that is, an $f$-gon which does not contain any other $f$-gon inside. \\ \indent If $X$ is a 0-gon we are done because $\partial X$ is a trivial circle. If $X$ is a 1-gon then we are done because int $X \cap L = \emptyset$, so we can perform on $L^u$ a Reidemeister move which decreases the number of crossings of $L^u$ (Fig. 5.2). \\ \centerline{{\psfig{figure=PT-2-2-10.eps,height=3.5cm}}}\ \\ \centerline{\footnotesize{Fig. 5.2}}\\ Therefore we assume that $X$ is a 2-gon. Each arc which cuts int $X$ goes from one edge to another. Furthermore, no component of $L$ lies fully in $X$, so we can choose base points $b=(b_1,\dots,b_n)$ lying outside $X$. This has important consequences: if $L^u$ is an untangled diagram associated with $L$ and $b$ then each 3-gon in $X$ supports a Reidemeister move of the third type (i.e. the situation of Fig.
5.3 is impossible).\\ \centerline{{\psfig{figure=PT-2-2-11.eps,height=4.5cm}}}\ \\ \centerline{\footnotesize{Fig. 5.3}} \indent Now we will prove Lemma 2.14 by induction on the number of crossings of $L$ contained in the 2-gon $X$ (we denote this number by $c$). \\ \indent If $c=2$ then int $X \cap L=\emptyset$ and we are done by the previous remark (the 2-gon $X$ can be used to make a Reidemeister move of the second type on $L^u$ and to reduce the number of crossings in $L^u$). \\ \indent Assume that $L$ has $c>2$ crossings in $X$ and that Lemma 2.14 is proved for fewer than $c$ crossings in $X$. In order to make the inductive step we need the following fact. \begin{proposition}\textbf{5.1.} If $X$ is an innermost 2-gon with int $X\cap L \neq \emptyset$ then there is a 3-gon $\Delta \subset X$ such that $\Delta \cap \partial X \neq \emptyset$, int $\Delta \cap L = \emptyset$. \end{proposition} Before we prove Proposition 5.1 we will show how Lemma 2.14 follows from it. \\ \indent We can perform the Reidemeister move of the third type using the 3-gon $\Delta$ and reduce the number of crossings of $L^u$ in $X$ (compare Fig. 5.4).\\ \centerline{\epsfig{figure= Fig54.eps, height=3.5cm}} \centerline{\footnotesize{Fig. 5.4}}\\ Now either $X$ is an innermost $f$-gon with fewer than $c$ crossings in $X$ or it contains an innermost $f$-gon with fewer than $c$ crossings in it. In both cases we can use the inductive hypothesis. \\ \indent Instead of proving Proposition 5.1 we will show a more general fact, which has Proposition 5.1 as a special case. \begin{proposition}\textbf{5.2.} Consider a 3-gon $Y=(a, b, c)$ such that each arc which cuts it goes from the edge $\overline{ab}$ to the edge $\overline{ac}$ without self-intersections (we allow $Y$ to be a 2-gon considered as a degenerate 3-gon with $\overline{bc}$ collapsed to a point). Furthermore let int $Y$ be cut by some arc.
Then there is a 3-gon $\Delta \subset Y$ such that $\Delta \cap \overline{ab} \neq\emptyset$ and int $\Delta$ is not cut by any arc. \end{proposition} \indent \textsc{Proof of Proposition 5.2:} We proceed by induction on the number of arcs in int $Y\cap L$ (each such arc cuts $\overline{ab}$ and $\overline{ac}$). For one arc it is obvious (Fig. 5.5). Assume it is true for $k$ arcs ($k\geqslant 1$) and consider the $(k+1)$-th arc $\gamma$. Let $\Delta_0=(a_1, b_1, c_1)$ be a $3$-gon from the inductive hypothesis with an edge $\overline{a_1b_1}\subset \overline{ab}$ (Fig. 5.6). \\ \centerline{{\psfig{figure=PT-2-2-13.eps,height=5.2cm}}}\ \\ \centerline{\footnotesize{Fig. 5.5}}\\ If $\gamma$ does not cut $\Delta_0$ or if it cuts $\overline{a_1b_1}$ we are done (Fig. 5.6). Therefore let us assume that $\gamma$ cuts $\overline{a_1c_1}$ (in $u_1$) and $\overline{b_1c_1}$ (in $w_1$). Let $\gamma$ cut $\overline{ab}$ in $u$ and $\overline{ac}$ in $w$ (Fig. 5.7). We have to consider two cases: \\ \indent (a) $\quad \overline{uu_1}\ \cap$ int $\Delta_0=\emptyset$ (so $\overline{ww_1}\ \cap$ int $\Delta_0=\emptyset$); Fig. 5.7.\\ \centerline{\epsfig{figure=Fig56.eps, height=3.3in}}\\ \centerline{\footnotesize{Fig. 5.6}}\\ \ \\ \centerline{{\psfig{figure=PT-2-2-15.eps,height=5.9cm}}}\ \\ \centerline{\footnotesize{Fig. 5.7}} Consider the 3-gon $ua_1u_1$. No arc can cut the edge $\overline{a_1u_1}$, so each arc which cuts the 3-gon $ua_1u_1$ cuts the edges $\overline{ua_1}$ and $\overline{uu_1}$. Furthermore this 3-gon is cut by fewer than $k+1$ arcs, so by the inductive hypothesis there is a 3-gon $\Delta$ in $ua_1u_1$ with an edge on $\overline{ua_1}$ whose interior is not cut by any arc. Then $\Delta$ satisfies the conclusion of Proposition 5.2.\\ \indent (b) $\quad \overline{uw_1} \ \cap$ int$\Delta_0=\emptyset$ (so $\overline{wu_1}\ \cap$ int$\Delta_0 =\emptyset$). In this case we proceed as in case (a).
\\ This completes the proof of Proposition 5.2 and hence the proof of Lemma 2.14. \ \\ Department of Mathematics\\ Warsaw University\\ 00-901 Warszawa, Poland\\ \ \\ \\ Added for e-print:\\ New address of J.~H.~Przytycki:\\ Department of Mathematics\\ The George Washington University\\ Washington, DC\\ {\tt [email protected]}\\ and University of Gda\'nsk \end{document}
\begin{document} \title{Some Identities Involving Three Kinds of Counting Numbers} \maketitle \begin{center} L. C. Hsu\\ Mathematics Institute, Dalian University of Technology, Dalian 116024, China\\[5pt] \end{center}\vskip0.5cm \subsection*{Abstract} In this note, we present several identities involving binomial coefficients and the two kinds of Stirling numbers. \vskip0.5cm {\large\bf 1. Introduction} Adopting Knuth's notation, let us denote by $\left[\begin{array}{c}n\\k\end{array}\right]$ and $\left\{\begin{array}{c}n\\k\end{array}\right\}$ the unsigned (absolute) Stirling number of the first kind and the ordinary Stirling number of the second kind, respectively. In particular, $\left[\begin{array}{c}0\\0\end{array}\right]=\left\{\begin{array}{c}0\\0\end{array}\right\}=1$, $\left[\begin{array}{c}n\\0\end{array}\right]=\left\{\begin{array}{c}n\\0\end{array}\right\}=0~(n>0)$, and $\left[\begin{array}{c}n\\k\end{array}\right]=\left\{\begin{array}{c}n\\k\end{array}\right\}=0~(0\le n<k)$. Generally, $\left[\begin{array}{c}n\\k\end{array}\right]$, $\left\{\begin{array}{c}n\\k\end{array}\right\}$ and the binomial coefficients $\left(\begin{array}{c}n\\k\end{array}\right)$ may be regarded as the most important counting numbers in combinatorics.
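For small parameters the conventions above are easy to check by machine. The following Python sketch (not part of the note itself) generates both kinds of Stirling numbers from their standard recurrences and verifies the stated boundary values:

```python
from functools import lru_cache

# Standard recurrences for the two kinds of Stirling numbers,
# with the boundary conventions stated in the Introduction.
@lru_cache(maxsize=None)
def stirling1(n, k):
    """Unsigned Stirling number of the first kind [n, k]."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind {n, k}."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

# the boundary conventions listed above
assert stirling1(0, 0) == stirling2(0, 0) == 1
assert all(stirling1(n, 0) == stirling2(n, 0) == 0 for n in range(1, 8))
assert all(stirling1(n, k) == stirling2(n, k) == 0
           for n in range(8) for k in range(n + 1, 9))
# two classical sanity checks: row sums give n! and the Bell numbers
assert sum(stirling1(5, k) for k in range(6)) == 120
assert sum(stirling2(5, k) for k in range(6)) == 52
```

The last two checks use the classical facts $\sum_k\left[\begin{array}{c}n\\k\end{array}\right]=n!$ and $\sum_k\left\{\begin{array}{c}n\\k\end{array}\right\}=B_n$, the $n$-th Bell number.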
The object of this short note is to propose some combinatorial identities, each involving these three kinds of counting numbers, namely the following: $$ \sum_k\left[\begin{array}{c}k\\p\end{array}\right]\left\{\begin{array}{c}n+1\\k+1\end{array}\right\}(-1)^k= \left(\begin{array}{c}n\\p\end{array}\right)(-1)^p \eqno(1) $$ $$ \sum_k\left[\begin{array}{c}k+1\\p+1\end{array}\right]\left\{\begin{array}{c}n\\k\end{array}\right\}(-1)^k= \left(\begin{array}{c}n\\p\end{array}\right)(-1)^n \eqno(2) $$ $$ \sum_{j,k}\left[\begin{array}{c}n\\k\end{array}\right]\left\{\begin{array}{c}k\\j\end{array}\right\} \left(\begin{array}{c}n\\j\end{array}\right)(-1)^k=(-1)^n \eqno(3) $$ $$ \sum_{j,k}\left\{\begin{array}{c}n\\k\end{array}\right\}\left[\begin{array}{c}k\\j\end{array}\right] \left(\begin{array}{c}n\\j\end{array}\right)(-1)^k=(-1)^n \eqno(4) $$ $$ \sum_{j,k}\left(\begin{array}{c}n\\k\end{array}\right)\left\{\begin{array}{c}k\\j\end{array}\right\} \left[\begin{array}{c}j+1\\p\end{array}\right](-1)^j=\left\{\begin{array}{cl}0, &(n+1>p)\\[6pt] (-1)^n, &(n+1=p)\end{array}\right. \eqno(5) $$ $$ \sum_{j,k}\left[\begin{array}{c}n\\k\end{array}\right]\left(\begin{array}{c}k\\j\end{array}\right) \left\{\begin{array}{c}j+1\\p\end{array}\right\}(-1)^j=\left\{\begin{array}{cl}0, &(n+1>p)\\[6pt] (-1)^n, &(n+1=p)\end{array}\right. \eqno(6) $$ Here each of the summations in (1) and (2) extends over all $k$ such that $0\le k\le n$ or $p\le k\le n$, and all the double summations within (3)--(6) are taken over all possible integers $j$ and $k$ such that $0\le j\le k\le n$. Note that (1) is a well-known identity that appears in Table 6.4 of Graham-Knuth-Patashnik's book [1] (cf. formula (6.24)). It is quite believable that (1) and (2) may be the simplest identities each connecting the three kinds of counting numbers. \\[10pt] {\large\bf 2. 
Proof of the identities} In order to verify (2)--(6), let us recall that the orthogonality relations $$ \sum_{k}\left[\begin{array}{c}n\\k\end{array}\right]\left\{\begin{array}{c}k\\p\end{array}\right\} (-1)^{n-k}=\sum_k\left\{\begin{array}{c}n\\k\end{array}\right\}\left[\begin{array}{c}k\\p\end{array}\right] (-1)^{n-k}=\delta_{np}=\left\{\begin{array}{ll}0, &(n\neq p)\\[8pt] 1, &(n=p)\end{array}\right. \eqno(7) $$ are equivalent to the inverse relations $$ a_n=\sum_{k}\left[\begin{array}{c}n\\k\end{array}\right](-1)^{n-k}b_k \Leftrightarrow b_n=\sum_k\left\{\begin{array}{c}n\\k\end{array}\right\}a_k. \eqno(8) $$ Also, we shall make use of two known identities displayed in Table 6.4 of [1], viz. $$ \left\{\begin{array}{c}n+1\\p+1\end{array}\right\}=\sum_k\left(\begin{array}{c}n\\k\end{array}\right) \left\{\begin{array}{c}k\\p\end{array}\right\} \eqno(9) $$ $$ \left[\begin{array}{c}n+1\\p+1\end{array}\right]=\sum_k\left[\begin{array}{c}n\\k\end{array}\right] \left(\begin{array}{c}k\\p\end{array}\right). \eqno(10) $$ Now, take $a_n=(-1)^n\left[\begin{array}{c}n+1\\p+1\end{array}\right]$ and $b_k=(-1)^k\left(\begin{array}{c}k\\p\end{array}\right)$, so that (10) can be embedded in the first equation of (8). Thus it is seen that (10) can be inverted via (8) to get the identity (2). (3) and (4) are trivial consequences of (7). Indeed, rewriting (7) in the form $$ \sum_k\left[\begin{array}{c}n\\k\end{array}\right]\left\{\begin{array}{c}k\\j\end{array}\right\}(-1)^k= \sum_k\left\{\begin{array}{c}n\\k\end{array}\right\}\left[\begin{array}{c}k\\j\end{array}\right](-1)^k= (-1)^n\delta_{nj} \eqno{(7)'} $$ and noticing that $\sum_j\left(\begin{array}{c}n\\j\end{array}\right)\delta_{nj}= \left(\begin{array}{c}n\\n\end{array}\right)=1$, we see that (3)--(4) follow at once from $(7)'$. For proving (5), let us make use of (9) and (7) with $p$ being replaced by $j$. 
We find $$ \sum_j\left\{\begin{array}{c}n+1\\j+1\end{array}\right\}\left[\begin{array}{c}j+1\\p\end{array}\right](-1)^j= \sum_j\sum_k\left(\begin{array}{c}n\\k\end{array}\right)\left\{\begin{array}{c}k\\j\end{array}\right\} \left[\begin{array}{c}j+1\\p\end{array}\right](-1)^j=(-1)^n\delta_{n+1,p}. $$ Hence (5) is obtained. Similarly, (6) is easily derived from (10) and (7). \\[10pt] {\large\bf 3. Questions} It may be a question of certain interest to ask whether some of the identities (1)--(6) could be given some combinatorial interpretations with the aid of the inclusion-exclusion principle or the method of bijections. Also, we have not yet decided whether (1)--(6) could be proved by the method of generating functions (cf. [2]). {\bf AMS Classification Numbers}: 05A10, 05A15, 05A19. \end{document}
\begin{document} \title{Nonradial solutions of weighted elliptic superlinear problems in bounded symmetric domains} \author{Hugo Adu\'en\footnote{Departamento de Matem\'aticas y Estad\'\i stica, Universidad de C\'ordoba, Monter\'\i a, Colombia. E-mail address: [email protected]}, Sigifredo Herr\'on\footnote{ Escuela de Matem\'aticas, Universidad Nacional de Colombia Sede Medell\'\i n, Medell\'\i n, Colombia. E-mail address: [email protected]}} \maketitle \begin{abstract} The present work has two objectives. First, we prove that a weight\-ed superlinear elliptic problem has infinitely many nonradial solutions in the unit ball. Second, we obtain the same conclusion in annuli for a more general nonlinearity which also involves a weight. We use a lower estimate of the energy level of radial solutions with $k-1$ zeros in the interior of the domain and a simple counting argument. Uniqueness results due to Tanaka \cite[2008]{Tanaka1} and \cite[2007]{Tanaka2} are very useful in our approach. \end{abstract} \textbf{Keywords:} Nonradial solutions, critical level, uniqueness, nodal solution \\ \textbf{ MSC2010:} 35A02, 35A24, 35J60, 35J61 \section{Introduction and statement of results} We consider \begin{equation}\label{theproblem} \begin{cases} \Delta u +K(\Vert x\Vert)\vert u\vert^{p-1}u=0, \ \text{ for } \ x\in\Omega,\\ u=0, \text{ for } x\in\partial\Omega, \end{cases} \end{equation} and \begin{equation}\label{theproblem2} \begin{cases} \Delta u +K(\Vert x\Vert)g(u)=0, \ \text{ for } \ x\in\Omega,\\ u=0, \text{ for } x\in\partial\Omega. \end{cases} \end{equation} We are interested in nonradial solutions, assuming that $K\in C^2(\overline{\Omega})$ is positive, that $\Omega$ is the unit ball in the case of \eqref{theproblem} and an annulus $\Omega=\{x\in {\ensuremath{\mathbb{R}}}^N\colon a\leq \Vert x\Vert \leq b\}$ in the case of \eqref{theproblem2}, and that $p$ is subcritical, namely $1<p<(N+2)/(N-2)$ with $N\geq 3$.
It is well known that some solutions of problems \eqref{theproblem} and \eqref{theproblem2} can be obtained as critical points of the functional $J\colon H^1_0\to\mathbb{R}$ defined by \begin{equation}\label{J} J(u)=\int_\Omega \left( \frac{1}{2}\Vert \nabla u\Vert^2-\frac{1}{p+1}K(\Vert x\Vert)\vert u\vert ^{p+1}\right){\,\mathrm{d}x}, \end{equation} and \begin{equation}\label{J_g} J(u)=\int_\Omega \left( \frac{1}{2}\Vert \nabla u\Vert^2-K(\Vert x\Vert)G(u)\right){\,\mathrm{d}x}, \end{equation} respectively, where $G(s)=\int_0^s g(t)\,dt.$ For simplicity, we are using the same letter $J$ in both cases. When we are looking for radial solutions to \eqref{theproblem} and \eqref{theproblem2}, the corresponding problem to be considered takes the form \begin{equation}\label{rbvp0} \begin{cases} u''(r)+\frac{N-1}{r}u'(r)+K(r)\vert u(r)\vert ^{p-1}u(r)=0, \quad \text{for } r\in (0,1)\\ \hspace{4.34cm} u'(0)=u(1)=0, \end{cases} \end{equation} and \begin{equation}\label{rbvp0:g} \begin{cases} u''(r)+\frac{N-1}{r}u'(r)+K(r)g(u(r))=0, \quad \text{for } r\in (a,b)\\ \hspace{3.46cm} u(a)= u(b)=0, \end{cases} \end{equation} respectively. From \eqref{J}, a radial solution $u$ for \eqref{theproblem} satisfies \begin{equation}\label{J-radial} J(u)=\left( \frac{1}{2}-\frac{1}{p+1}\right) \omega_N\int_{0}^{1}r^{N-1}K(r)\vert v(r)\vert^{p+1}{\,\mathrm{d}r}, \end{equation} where $\omega_N$ is the measure of the unit sphere in $\mathbb{R}^N$ and $v(r) = u(x)$ with $\|x\|= r$. In a similar fashion, if $u$ is a radial solution for \eqref{theproblem2} in the annulus $\Omega=\{x\in {\ensuremath{\mathbb{R}}}^N\colon a\leq \Vert x\Vert \leq b\}$ then, \begin{equation}\label{J-radial:con:g} J(u)= \omega_N\int_{a}^{b}r^{N-1}K(r)\left( \frac{g(v)v(r)}{2}-G(v)\right) {\,\mathrm{d}r}. \end{equation} From now on, all throughout the paper, $c, c_1,C, C_0,\ C_1,\ C_2,\overline{C},\ldots$ will denote generic positive constants, independent from $ u $, which may change from line to line. 
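Although the argument of the paper is purely variational, the radial profiles solving \eqref{rbvp0} are easy to visualize numerically by shooting from the center. The following Python sketch is only an illustration and is not part of the proof; the choices $K\equiv 1$, $N=3$, $p=3$ and the shooting heights are hypothetical illustration values:

```python
# Illustrative shooting sketch for the radial problem (rbvp0):
#     u'' + ((N-1)/r) u' + K(r) |u|^{p-1} u = 0,   u'(0) = 0,
# integrated with classical RK4.  K = 1, N = 3, p = 3 are assumed here
# purely for illustration, not taken from the paper.
N, p = 3, 3

def K(r):
    return 1.0

def count_zeros(alpha, r_end=1.0, h=5e-5):
    """Shoot from u(0) = alpha, u'(0) = 0; count sign changes on (0, r_end)."""
    # start slightly off the r = 0 singularity using the local expansion
    #     u(r) = alpha - K(0) |alpha|^{p-1} alpha r^2 / (2N) + O(r^4)
    r = 1e-6
    u = alpha - K(0.0) * abs(alpha) ** (p - 1) * alpha * r * r / (2 * N)
    v = -K(0.0) * abs(alpha) ** (p - 1) * alpha * r / N   # v = u'
    zeros = 0

    def f(r, u, v):
        return v, -(N - 1) / r * v - K(r) * abs(u) ** (p - 1) * u

    while r < r_end:
        k1 = f(r, u, v)
        k2 = f(r + h / 2, u + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(r + h / 2, u + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(r + h, u + h * k3[0], v + h * k3[1])
        u_new = u + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v = v + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        if u_new * u < 0:
            zeros += 1
        u, r = u_new, r + h
    return zeros

# larger shooting heights produce radial profiles with more interior zeros,
# in line with the rescaling u_lambda(r) = lambda * u(lambda r) valid for p = 3
assert count_zeros(1.0) < count_zeros(60.0)
```

Increasing the shooting height $u(0)=\alpha$ compresses the oscillations of the profile, which is how radial solutions with prescribed numbers of zeros, as in \eqref{rbvp}, arise.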
\\ In this work we prove that problems \eqref{theproblem} and \eqref{theproblem2} have infinitely many nonradial solutions in the unit ball of $\mathbb{R}^N$ and in annuli, respectively. For problems \eqref{theproblem} and \eqref{theproblem2}, Ramos \emph{et al} \cite{Ramos} proved the existence of a sequence $u_k$ of sign-changing solutions whose energy levels are of order $k^\sigma$, where $\sigma=2(p+1)/(N(p-1))$, namely $J(u_k)\sim k^\sigma$. By using radial techniques, we are able to prove a lower estimate for the critical levels of radial solutions $u_k$ with $k-1$ zeros, establishing that $J(u_k)\geq C(k-1)^{N\sigma}$. Then, taking into account the uniqueness results due to S. Tanaka (\cite{Tanaka1, Tanaka2}) and the fact that the critical levels of radial solutions are more widely spaced, we get, by a counting argument, that most of the sign-changing solutions obtained by Ramos \emph{et al} are nonradial. Very little is known about producing infinitely many nonradial solutions by radial techniques and, we emphasize, an upper estimate of the critical levels is not necessary. We take advantage of one result in \cite[theorem 1]{Ramos} and we complement a couple of Tanaka's theorems by proving that \eqref{theproblem} and \eqref{theproblem2} have infinitely many nonradial solutions. Additionally, we prove that there is an infinite number of nonradial solutions for the nonlinearities $ g(x, s) =K(\|x\|)\vert s\vert^{p-1}s$ and $ g(x, s) = K(\|x\|)g(s)$, taken from the list of sign-changing solutions obtained by Ramos \emph{et al} in \cite[theorem 1]{Ramos}. \\ In \cite{ACC}, an important ingredient for obtaining nonradial solutions was a uniqueness result in a superlinear context. Papers where uniqueness results have been obtained for other kinds of problems are, for example, \cite{Tanaka3, AH}. For those, our approach does not apply. To the best of our knowledge, an estimate of critical levels as in \cite{Ramos} for sublinear problems is not known.
That is why we will use the uniqueness results due to S. Tanaka \cite{Tanaka1, Tanaka2}. Precisely, for the problem \begin{equation}\label{rbvp} \left\{ \begin{aligned} u''(r) + \frac{N-1}{r}u' (r) + K(r)|u(r)|^{p-1}u(r)&=0, \quad 0<r<1,\\ u'(0)=u(1) = 0, \ u(0) & >0,\\ &\hspace*{-6.7cm} u\ \mbox{ has exactly}\quad k-1 \ \mbox{ zeros in } (0, 1), \end{aligned} \right. \end{equation} he obtained the following result. \begin{theorem} Under the conditions $K\in C^2[0,1]$, $K>0$ and \begin{equation}\label{conditionmain} [V(r)-p(N-2)-N+4][V(r)-p(N-2)+N]-2rV^\prime(r)<0, \end{equation} where $V(r)=rK'(r)/K(r)$, a solution of problem \eqref{rbvp} exists and is unique. \end{theorem} In \cite[Corollary 2.2]{Tanaka2}, S. Tanaka proved the following consequence. \begin{theorem} Suppose $K\in C^2[a,b]$ and $K>0$. Assume that: \begin{enumerate}[\rm(H1)] \item $-2(N-1)\leq V(r)\leq -2$ and $V'(r)\geq 0$. \item The function $g$ is odd, $g\in C^1({\ensuremath{\mathbb{R}}})$ and $g(s)>0$ for $s>0$. \item $\left( g(s)/s\right)'>0$ for $s>0$. \end{enumerate} Then, problem \eqref{theproblem2} has at most one radial solution $u$ with exactly $k-1$ zeros in $(a,b)$ and $u'(a)>0.$ \end{theorem} We complement these results by proving the existence of infinitely many nonradial solutions. Our main theorems read as follows. \begin{theorem}\label{main} Assuming that $K\in C^2[0,1]$, $K>0$ and \eqref{conditionmain}, problem \eqref{theproblem} has infinitely many nonradial solutions. \end{theorem} \begin{theorem}\label{main2} If $1<p<(N+2)/(N-2)$, $K\in C^2[a,b]$, $K>0$, \textup{(H1)-(H3)} hold and \begin{enumerate} \item[\rm(H4)] there exists $C>0$ such that, for every $s>0$, $g(s) \leq C\,s^p$; \item [\rm(H5)]\footnote{This is the well-known Ambrosetti--Rabinowitz superlinear condition.} there exists $\theta >2$ such that, for every $s>0$, $sg(s)\geq \theta\, G(s)$; \end{enumerate} then problem \eqref{theproblem2} has infinitely many nonradial solutions.
\end{theorem} \begin{rem} As an example, the function $g(s)=\vert s\vert ^{p-1}s$ satisfies \textup{(H4)} and \textup{(H5)}. \end{rem} In section 2 we present some preliminaries, and in section 3 we prove lower estimates for the critical levels of radial solutions, which will be essential for the proofs of our theorems in section 4. \section{Some preliminaries} From \eqref{J-radial:con:g} and \textup{(H5)}, we observe that \begin{equation}\label{J-radial:con:g2} J(u)\geq C\int_{a}^{b}r^{N-1}K(r) g(v)v(r) {\,\mathrm{d}r}, \end{equation} for every radial solution $u$ of \eqref{theproblem2}. \begin{rem}\label{remark!} Conditions \textup{(H2)-(H5)} imply: \begin{enumerate}[\rm (a)] \item Due to \textup{(H2)}, the function $g$ satisfies $sg(s)>0$ for $s\neq 0$. In addition, by using \textup{(H4)} it follows that $g(s)/s \leq C \left( sg(s)\right)^{(p-1)/(p+1)}$ for some positive constant $C$: let us denote $\delta=(p-1)/(p+1)$. Assumption \textup{(H4)} implies \[ \left( \frac{g(s)}{s^p}\right) ^{1-\delta}=\left( \frac{g(s)}{s^p}\right) ^{2/(p+1)}\leq C, \] and thus, multiplying by $\left( \frac{g(s)}{s^p}\right) ^{\delta}$, we get \[ \frac{g(s)}{s^p}\leq C\left( \frac{g(s)}{s^p}\right) ^{\delta}, \] from which the assertion follows. \item From \textup{(H3)} and \textup{(H2)} it follows that $g'(s)>g(s)/s>0$ for $s\neq 0$. \item If $g(x,s):=K(\Vert x\Vert)g(s)$, assumption \textup{(H4)} implies $g(x,s)/s\to 0$ as $s\to 0$, uniformly in $x$. \item Again, \textup{(H4)} implies \[ 0\leq g(x,s)s=K(\Vert x\Vert)g(s)s\leq C\Vert K\Vert\vert s\vert^{p+1}\leq C_1(\vert s\vert^{p+1}+1). \] \item Because of \textup{(H5)}, \[ g(x,s)s\geq \theta\, G(x,s)\geq \theta\, G(x,s)-C, \] where $G(x,s)=K(\Vert x\Vert)G(s)$ and $C>0$ is a constant. \item Hypothesis \textup{(H5)} implies that $g$ is superlinear. More precisely, we have $\Big(s^{-\theta}G(s)\Big )'\geq 0$ for $s>0$ and thus, for $s>1$, we obtain $G(s) \geq G(1)s^\theta=G(1)\vert s\vert^\theta$.
From this, \textup{(H2)} and \textup{(H5)} we get \[ \lim\limits_{\vert s\vert \to\infty}\frac{g(s)}{s}=+\infty. \] \end{enumerate} \end{rem} In order to prove theorems \ref{main} and \ref{main2}, we shall apply theorem 1 of Ramos \emph{et al} \cite{Ramos} in special cases. In that theorem, for problems \eqref{theproblem} and \eqref{theproblem2}, the authors proved the existence of a sequence $u_k$ of sign-changing solutions whose energy levels are of order $k^\sigma$, where $\sigma=2(p+1)/(N(p-1))$. To prove our first theorem, $\Omega$ will be the unit ball, $g(x,s)=K(\|x\|)\,|s|^{p-1}s$, $f(x,s)\equiv 0$, $\mu=p+1$, and we choose any number \[ \nu\in\left( 0,\frac{N+2-p(N-2)}{2}\right), \] in order to obtain condition (1.4) in \cite{Ramos}. In this context, such a theorem reads as follows. \begin{theorem}\label{ramos} Assuming that $N\geq3$ and $1 <p< (N+2)/(N-2)$, the problem \[ \Delta u +K(\Vert x\Vert)\vert u\vert^{p-1}u=0; \, u\in H_0^1(\Omega), \] admits a sequence of sign-changing solutions $(u_k)_{k\in\mathbb{N}}$ whose energy levels $J(u_k)$ satisfy \begin{equation}\label{ineq1ramos} c_1k^{\sigma}\leq J(u_k)\leq c_2k^{\sigma}, \end{equation} for some $c_1, c_2>0$ with $\sigma=\frac{2(p+1)}{N(p-1)}$. \end{theorem} To prove our second theorem, we will take $\Omega$ to be an annulus, $g(x,s)=K(\|x\|)\,g(s)$, $f(x,s)\equiv 0$, $\mu=\theta$, and we choose \[ \nu\in\left( 0,\frac{\theta(N+2-p(N-2))}{2(p+1)}\right), \] so that condition (1.4) in \cite{Ramos} holds; further, the above remarks imply that all conditions in \cite[Theorem 1]{Ramos} are satisfied and hence its conclusion gives us a sequence of sign-changing solutions $(u_k)_{k\in\mathbb{N}}$ whose energy levels $J(u_k)$ satisfy \eqref{ineq1ramos}. \section{Lower estimates of critical levels} In this section we obtain estimates of the critical levels corresponding to a radial solution $u_k$ with $k-1$ zeros for the problems \eqref{theproblem} and \eqref{theproblem2}.
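The counting argument behind our proofs can be illustrated numerically: below any energy level $E$, the number of sign-changing levels allowed by \eqref{ineq1ramos} grows like $E^{1/\sigma}$, while the number of radial levels compatible with the lower bound $J(u_k)\geq C(k-1)^{N\sigma}$ grows only like $E^{1/(N\sigma)}$. A small Python sketch (the constants $c_2=C=1$ and the choice $N=3$, $p=2$ are illustrative, not values from the paper):

```python
# Illustrative constants (not from the paper).
N, p = 3, 2.0
sigma = 2 * (p + 1) / (N * (p - 1))   # = 2 for these values
c2 = C = 1.0

def counts(E):
    """Largest index k with c2*k^sigma <= E, versus the largest index
    compatible with the radial lower bound C*(k-1)^(N*sigma) <= E."""
    n_all = int((E / c2) ** (1 / sigma))
    n_radial = int((E / C) ** (1 / (N * sigma))) + 1
    return n_all, n_radial

for E in (1e3, 1e6, 1e9):
    n_all, n_radial = counts(E)
    print(E, n_all, n_radial, n_radial / n_all)
```

The fraction of indices that radial solutions can occupy below level $E$ shrinks as $E$ grows, which is the heart of the counting argument.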
More exactly, in order to prove our first main result we establish an estimate from below for $J(u_k)$, where $u_k$ is a radial solution of \eqref{rbvp} with $k-1$ zeros in $(0,1)$. Then, the same estimate will be obtained for a radial solution $u_k$ with $k-1$ zeros of problem \eqref{theproblem2}. \begin{theorem}\label{boundbelow} Let $\delta\colon=(p-1)/(p+1)$. There exists a constant $C>0$ such that, for every solution $u_k\equiv u$ of \eqref{rbvp}, we have \begin{equation}\label{ineq2Kaji} J(u)\ge C(k-1)^{N\sigma}. \end{equation} \end{theorem} \begin{theorem}\label{boundbelow:g} There exists a constant $C>0$ such that, for every solution $u_k\equiv u$ with $k-1$ zeros of \eqref{rbvp0:g}, we have \begin{equation}\label{ineq2Kaji:g} J(u)\ge C(k-1)^{N\sigma}. \end{equation} \end{theorem} \section{Proof of theorems \ref{main} and \ref{main2}} By using the theorems of section 3 and a counting argument, we can show our main results. \end{document}
\begin{document} \title{Understanding Trainable Sparse Coding via Matrix Factorization} \begin{abstract} Sparse coding is a core building block in many data analysis and machine learning pipelines. Typically it is solved by relying on generic optimization techniques, such as the Iterative Soft Thresholding Algorithm and its accelerated version (ISTA, FISTA). These methods are optimal in the class of first-order methods for non-smooth, convex functions. However, they do not exploit the particular structure of the problem at hand nor the input data distribution. An acceleration using neural networks, coined LISTA, was proposed in \cite{Gregor10}, which showed empirically that one could achieve high quality estimates with few iterations by modifying the parameters of the proximal splitting appropriately. In this paper we study the reasons for such acceleration. Our mathematical analysis reveals that it is related to a specific matrix factorization of the Gram kernel of the dictionary, which attempts to nearly diagonalise the kernel with a basis that produces a small perturbation of the $\ell_1$ ball. When this factorization succeeds, we prove that the resulting splitting algorithm enjoys an improved convergence bound with respect to the non-adaptive version. Moreover, our analysis also shows that conditions for acceleration occur mostly at the beginning of the iterative process, consistent with numerical experiments. We further validate our analysis by showing that on dictionaries where this factorization does not exist, adaptive acceleration fails. \end{abstract} \section{Introduction} Feature selection is a crucial point in high dimensional data analysis. Different techniques have been developed to tackle this problem efficiently, and amongst them sparsity has emerged as a leading paradigm.
In statistics, the LASSO estimator \citep{Tibshirani1996} provides a reliable way to select features and has been extensively studied in the last two decades (\cite{Hastie2015} and references therein). In machine learning and signal processing, sparse coding has made its way into several modern architectures, including large scale computer vision \citep{coates2011importance} and biologically inspired models \citep{cadieu2012learning}. Dictionary learning, moreover, is a generic unsupervised learning method to perform nonlinear dimensionality reduction with efficient computational complexity \citep{Mairal2009a}. All these techniques heavily rely on the resolution of $\ell_1$-regularized least squares. The $\ell_1$-sparse coding problem is defined as solving, for a given input $x \in {\mathbb{R}}^n$ and dictionary $D \in {\mathbb{R}}^{n \times m}$, the following problem: \begin{equation} \label{eq:sparsecoding} z^*(x) = \arg\min_z F_x(z) \overset{\Delta}{=}\frac{1}{2} \| x - Dz \|^2 + \lambda \|z \|_1~. \end{equation} This problem is convex and can therefore be solved using convex optimization machinery. Proximal splitting methods \citep{Beck2009} alternate between the minimization of the smooth and differentiable part using the gradient information and the minimization of the non-differentiable part using a proximal operator \citep{Combettes2011}. These methods can also be accelerated by considering a momentum term, as is done in FISTA \citep{Beck2009, Nesterov2005}. Coordinate descent \citep{Friedman2007,Osher2009} leverages the closed-form formula for optimizing problem \autoref{eq:sparsecoding} over one coordinate $z_i$ when all the others are fixed. At each step of the algorithm, one coordinate is updated to its optimal value, which yields an inexpensive scheme to perform each step. The choice of the coordinate to update at each step is critical for the performance of the optimization procedure.
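The proximal splitting (ISTA) iteration just described can be sketched in a few lines of NumPy; the problem sizes, seed, and $\lambda$ below are arbitrary test values, not taken from any experiment in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, lam = 20, 40, 0.1
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms
x = rng.standard_normal(n)

def soft(u, t):
    """Proximal operator of t*||.||_1 (soft thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

L = np.linalg.norm(D, 2) ** 2         # largest singular value of B = D^T D

def F(z):
    """LASSO objective F_x(z) = 0.5 ||x - Dz||^2 + lam ||z||_1."""
    return 0.5 * np.sum((x - D @ z) ** 2) + lam * np.sum(np.abs(z))

def ista(n_iter):
    z = np.zeros(m)
    for _ in range(n_iter):
        # gradient step on the smooth part, proximal step on the l1 part
        z = soft(z - D.T @ (D @ z - x) / L, lam / L)
    return z
```

Each iteration costs two matrix-vector products with $D$, and the objective decreases monotonically by the majorization argument.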
Least Angle Regression (LARS) \citep{Hesterberg2008} is another method that computes the whole LASSO regularization path. These algorithms all provide an optimization procedure that iteratively leverages the local properties of the cost function. They can be shown to be optimal among the class of first-order methods for generic convex, non-smooth functions \citep{bubeck2014theory}. But all these results are worst-case and do not use the distribution of the considered problem. One can thus wonder whether a more efficient algorithm to solve \autoref{eq:sparsecoding} exists for a fixed dictionary $D$ and generic input $x$ drawn from a certain input data distribution. In \cite{Gregor10}, the authors introduced LISTA, a trained version of ISTA that adapts the parameters of the proximal splitting algorithm to approximate the solution of the LASSO using a finite number of steps. This method exploits the common structure of the problem to learn a better transform than the generic ISTA step. As ISTA is composed of a succession of linear operations and pointwise nonlinearities, the authors use the neural network framework and backpropagation to derive an efficient procedure for solving the LASSO problem. In \cite{Sprechmann2012}, the authors extended LISTA to more generic sparse coding scenarios and showed that adaptive acceleration is possible under general input distributions and sparsity conditions. In this paper, we are interested in the following question: given a finite computational budget, what is the optimal estimator of the sparse code? This question belongs to the general topic of computational tradeoffs in statistical inference. Randomized sketches \citep{alaoui2015fast, yang2015randomized} reduce the size of convex problems by projecting expensive kernel operators into random subspaces, and reveal a tradeoff between computational efficiency and statistical accuracy.
\cite{agarwal2012computational} provides several theoretical results on performing inference under various computational constraints, and \cite{chandrasekaran2013computational} considers a hierarchy of convex relaxations that provide practical tradeoffs between accuracy and computational cost. More recently, \cite{oymak2015sharp} provides sharp time-data tradeoffs in the context of linear inverse problems, showing the existence of a phase transition between the number of measurements and the convergence rate of the resulting recovery optimization algorithm. \cite{giryes2016tradeoffs} builds on this result to produce an analysis of LISTA that describes acceleration in conditions where the iterative procedure has linear convergence rate. Finally, \cite{xin2016maximal} also studies the capabilities of deep neural networks at approximating sparse inference. The authors show that unrolled iterations lead to better approximation if one allows the weights to vary at each layer, contrary to standard splitting algorithms. Whereas their focus is on relaxing the convergence hypothesis of iterative thresholding algorithms, we study a complementary question, namely when is a speedup possible, without assuming strongly convex optimization. Their results are consistent with ours, since our analysis also shows that learning shared layer weights is less effective. Inspired by the LISTA architecture, our mathematical analysis reveals that adaptive acceleration is related to a specific matrix factorization of the Gram matrix of the dictionary $B = D^\mathsf{T} D$ as $B = A^\mathsf{T} S A - R$, where $A$ is unitary, $S$ is diagonal and the residual is positive semidefinite: $R \succeq 0$. Our factorization balances near diagonalization, by asking that $\|R \|$ be small, against a small perturbation of the $\ell_1$ norm, \emph{i.e.} that $\| A z \|_1 - \|z \|_1$ be small.
When this factorization succeeds, we prove that the resulting splitting algorithm enjoys a convergence rate with improved constants with respect to the non-adaptive version. Moreover, our analysis also shows that acceleration is mostly possible at the beginning of the iterative process, when the current estimate is far from the optimal solution, which is consistent with numerical experiments. We also show that the existence of this factorization is not only sufficient for acceleration, but also necessary. This is shown by constructing dictionaries whose Gram matrix diagonalizes in a basis that is incoherent with the canonical basis, and verifying that LISTA then fails to accelerate with respect to ISTA. In our numerical experiments, we design a specialized version of LISTA called FacNet, with more constrained parameters, which is then used as a tool to show that our theoretical analysis captures the acceleration mechanism of LISTA. Our theoretical results apply to FacNet and, as LISTA is a generalization of this model, it always performs at least as well, showing that the existence of the factorization is a sufficient certificate for acceleration by LISTA. Conversely, we show that in cases where no acceleration is possible with FacNet, the LISTA model also fails to provide acceleration, linking the two speedup mechanisms. This numerical evidence suggests that the existence of our proposed factorization is sufficient and somewhat necessary for LISTA to show good results. The rest of the paper is structured as follows. \autoref{sec:math} presents our mathematical analysis and proves the convergence of the adaptive algorithm as a function of the quality of the matrix factorization. Finally, \autoref{sec:exp} presents the generic architectures that enable the usage of such schemes, together with the numerical experiments, which validate our analysis over a range of different scenarios.
\section{Accelerating Sparse Coding with Sparse Matrix Factorizations} \label{sec:math} \subsection{Unitary Proximal Splitting} In this section we describe our setup for accelerating sparse coding based on the proximal splitting method. Let $\Omega \subset {\mathbb{R}}^n$ be the set describing our input data, and $D \in {\mathbb{R}}^{n \times m}$ be a dictionary, with $m > n$. We wish to find fast and accurate approximations of the sparse code $z^*(x)$ of any $x \in \Omega$, defined in \autoref{eq:sparsecoding}. For simplicity, we denote $B = D^\mathsf{T} D$ and $y = D^\dagger x$ to rewrite \autoref{eq:sparsecoding} as \begin{equation} \label{sc2} z^*(x) = \arg \min_z F_x(z) = \underbrace{\frac{1}{2}(y - z)^\mathsf{T} B (y-z)}_{E(z)} + \underbrace{\vphantom{\frac{1}{2}}\lambda \| z \|_1}_{G(z)} ~. \end{equation} For clarity, we will refer to $F_x$ as $F$ and to $z^*(x)$ as $z^*$. The classic proximal splitting technique finds $z^*$ as the limit of the sequence $(z_k)_k$, obtained by successively constructing a surrogate loss $F_k(z)$ of the form \begin{equation} F_k(z) = E(z_k) + (z_k-y)^\mathsf{T} B(z-z_k) + L_k \| z - z_k \|_2^2 + \lambda \| z \|_1~, \end{equation} satisfying $F_k(z) \geq F(z)$ for all $z \in {\mathbb{R}}^m$. Since $F_k$ is separable in each coordinate of $z$, $z_{k+1} = \arg\min_z F_k(z)$ can be computed efficiently. This scheme is based on a majorization of the quadratic form $(y-z)^\mathsf{T} B (y-z)$ by an isotropic quadratic form $L_k \| z_k - z \|_2^2$. The convergence rate of the splitting algorithm is optimized by choosing $L_k$ as the smallest constant satisfying $F_k(z) \geq F(z)$, which corresponds to the largest singular value of $B$. The computation of $z_{k+1}$ remains separable when the quadratic form $L_k \textbf{I}$ is replaced by any diagonal form. However, the Gram matrix $B = D^\mathsf{T} D$ might be poorly approximated via diagonal forms for general dictionaries.
Our objective is to accelerate the convergence of this algorithm by finding appropriate factorizations of the matrix $B$ such that \[B \approx A^\mathsf{T} S A~,~\text{ and } ~\| A z \|_1 \approx \|z \|_1~,\] where $A$ is unitary and $S$ is diagonal positive definite. Given a point $z_k$ at iteration $k$, we can rewrite $F(z)$ as \begin{equation} F(z) = E(z_k) + ( z_k - y )^\mathsf{T} B(z - z_k) + Q_B(z, z_k)~, \end{equation} with $\displaystyle Q_B(v, w ) := \frac{1}{2}(v - w)^\mathsf{T} B (v - w) + \lambda \|v \|_1 ~$. For any diagonal positive definite matrix $S$ and unitary matrix $A$, the surrogate loss \mbox{$\widetilde{F}(z,z_k) := E(z_k) + ( z_k - y)^\mathsf{T} B(z - z_k) + Q_S(A z, Az_k)$} can be explicitly minimized, since \begin{eqnarray} \label{eq:prox} \arg\min_z \widetilde{F}(z,z_k) & = & A^\mathsf{T} \arg\min_u \left( (z_k - y )^\mathsf{T} B A^\mathsf{T} (u - Az_k) + Q_S(u, Az_k)\right) \nonumber \\ &=& A^\mathsf{T} \arg\min_u Q_S\left(u, Az_k - S^{-1}AB(z_k - y)\right) \end{eqnarray} where we use the change of variables $u = Az$. As $S$ is diagonal positive definite, \autoref{eq:prox} is separable and can be computed easily, using a linear operation followed by a pointwise nonlinear soft-thresholding. Thus, any couple $(A, S)$ yields a computationally cheap scheme. The question is then how to factorize $B$ using $S$ and $A$ in an optimal manner, that is, such that the resulting proximal splitting sequence converges as fast as possible to the sparse coding solution. \subsection{Non-asymptotic Analysis} We will now establish convergence results based on the previous factorization. These bounds will inform us on how to best choose the factors $A_k$ and $S_k$ in each iteration. For that purpose, let us define \begin{equation} \label{eq:eqdelta} \delta_A(z) = \lambda \left(\| A z \|_1 - \| z\|_1 \right)~,~\text{and }~R = A^\mathsf{T} S A - B~.
\end{equation} The quantity $\delta_A(z)$ thus measures how invariant the $\ell_1$ norm is to the unitary operator $A$, whereas $R$ corresponds to the residual of approximating the original Gram matrix $B$ by our factorization $A^\mathsf{T} SA$. Given a current estimate $z_k$, we can rewrite \begin{equation} \label{eq:surro1} \widetilde{F}(z,z_k) = F(z) + \frac{1}{2}(z - z_k)^\mathsf{T} R (z - z_k) + \delta_A(z)~. \end{equation} By imposing that $R$ is a positive semidefinite residual one immediately obtains the following bound. \begin{proposition} \label{eq:bi1} Suppose that $R = A^\mathsf{T} S A - B$ is positive definite, and define \begin{flalign} && z_{k+1} = \arg\min_z \widetilde{F}(z, z_k)~&. \label{eq:algo}\\ &\makebox[0pt][l]{Then } &F(z_{k+1}) - F(z^*) \leq \frac{1}{2} \| R \| \| z_k - z^* \|_2^2 +& \delta_A(z^*) - \delta_A(z_{k+1})~.& \label{eq:bo1} \end{flalign} \end{proposition} \begin{proof} By definition of $z_{k+1}$ and using the fact that $R \succ 0$ we have \begin{eqnarray*} F(z_{k+1}) - F(z^*) & \leq & F(z_{k+1}) - \widetilde{F}(z_{k+1},z_k) + \widetilde{F}(z^*,z_k) - F(z^*) \\ &=& - \frac{1}{2}(z_{k+1} - z_k)^\mathsf{T} R (z_{k+1} - z_k) - \delta_A(z_{k+1}) + \frac{1}{2}(z^* - z_k)^\mathsf{T} R (z^* - z_k) + \delta_A(z^*) \\ &\leq& \frac{1}{2}(z^* - z_k)^\mathsf{T} R (z^* - z_k) + \left(\delta_A(z^*) - \delta_A(z_{k+1})\right) ~. \end{eqnarray*} where the first inequality results from the definition of $z_{k+1}$ and the last one uses the positivity of $R$. \end{proof} This simple bound reveals that to obtain fast approximations to the sparse code it is sufficient to find $S$ and $A$ such that $\| R \|$ is small and the $\ell_1$ commutation term $\delta_A$ is small. These two conditions will often be in tension: one can always obtain $R\equiv 0$ by using the Singular Value Decomposition of $B=A_0^\mathsf{T} S_0 A_0$ and setting $A = A_0$ and $S = S_0$. However, the resulting $A_0$ might introduce a large commutation error $\delta_{A_0}$.
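The modified proximal step \eqref{eq:prox} is as cheap as an ISTA step. The sketch below (random test data, not from the paper's experiments) implements it with the diagonal of $S$ stored as a vector, and checks that the choice $A={\bf I}$, $S=\|B\|{\bf I}$ reduces it to a plain ISTA step:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, lam = 10, 15, 0.1
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)
x = rng.standard_normal(n)
B = D.T @ D
y = np.linalg.pinv(D) @ x             # y = D^+ x, so B(z - y) = D^T(Dz - x)

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def unitary_prox_step(z, A, s):
    """One step of eq. (eq:prox): z+ = A^T soft(A z - S^{-1} A B (z - y), lam/s),
    where s holds the diagonal of S (elementwise division implements S^{-1})."""
    u = A @ z - (A @ (B @ (z - y))) / s
    return A.T @ soft(u, lam / s)

L = np.linalg.norm(D, 2) ** 2
z = rng.standard_normal(m)
z_fac = unitary_prox_step(z, np.eye(m), L * np.ones(m))
z_ista = soft(z - D.T @ (D @ z - x) / L, lam / L)   # standard ISTA step
```

The two updates agree because $B(z-y) = D^\mathsf{T}(Dz - x)$ whenever $D$ has full row rank, which holds almost surely for this random $D$.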
Similarly, as the absolute value is non-expansive, \emph{i.e.} $\left| |a| - |b| \right| \leq \left| a - b \right| $, we have \begin{eqnarray} \label{bo2} |\delta_A(z)| = \lambda \left| \| A z \|_1 - \| z \|_1 \right| &\leq & \lambda \| (A - {\bf I}) z \|_1 \\ &\leq & \lambda \sqrt{2 \max( \| Az\|_0, \| z \|_0)}~\cdot\, \| A - {\bf I} \| ~\cdot\,\|z \|_2~, \nonumber \end{eqnarray} where we have used the Cauchy--Schwarz inequality $\| x \|_1 \leq \sqrt{ \| x \|_0} \|x \|_2$ in the last inequality. In particular, \autoref{bo2} shows that unitary matrices in the neighborhood of ${\bf I}$ with $\| A - {\bf I} \|$ small have a small $\ell_1$ commutation error $\delta_A$ but can be inappropriate for approximating a general matrix $B$. The commutation error also depends upon the sparsity of $z$ and $Az$~. If both $z$ and $Az$ are sparse then the commutation error is reduced, which can be achieved if $A$ is itself a sparse unitary matrix. Moreover, since \[|\delta_A(z) - \delta_A(z')| \leq \lambda| \|z \|_1 - \| z' \|_1| + \lambda| \| A z \|_1 - \| A z'\|_1 |~\] \[ \text{and }~ | \|z \|_1 - \| z'\|_1 | \leq \| z - z' \|_1 \leq \sqrt{\| z- z' \|_0 } \| z - z'\|_2\] it follows that $\delta_A$ is Lipschitz with respect to the Euclidean norm; let us denote by $L_A(z)$ its local Lipschitz constant at $z$, which can be computed using the norm of the subgradient at $z$\footnote{ This quantity exists as $\delta_A$ is a difference of convex functions. See the proof of \autoref{lemma1} in the appendices for details. }. A uniform upper bound for this constant is $(1+\|A\|_1)\lambda\sqrt{m}$, but it is typically much smaller when $z$ and $Az$ are both sparse.\\ Equation \autoref{eq:algo} defines an iterative procedure determined by the pairs $\{(A_k, S_k)\}_k$. The following theorem uses the previous results to compute an upper bound for the resulting sparse coding estimator.
\begin{theorem} \label{thm1} Let $A_k, S_k$ be the pair of unitary and diagonal matrices corresponding to iteration $k$, chosen such that $R_k = A_k^\mathsf{T} S_k A_k - B \succ 0$. It results that {\small \begin{equation} \label{eq:zo1} F(z_{k}) - F(z^*) \leq \frac{(z^*- z_0)^\mathsf{T} R_0 (z^* - z_0) + 2 L_{A_0}(z_1) \| z^*-z_{1} \|_2 }{2k} + \frac{\alpha - \beta}{2k}~, \end{equation} \begin{eqnarray*} \text{with } & \alpha = &\sum_{i=1}^{k-1} \left( 2L_{A_i}(z_{i+1})\|z^*-z_{i+1} \|_2 + (z^* - z_i)^\mathsf{T} ( R_{i-1} - R_{i}) (z^* - z_i) \right)~,\\ & \beta = &\sum_{i=0}^{k-1} (i+1)\left((z_{i+1}-z_i)^\mathsf{T} R_i (z_{i+1}-z_i) + 2\delta_{A_i}(z_{i+1}) - 2\delta_{A_i}(z_i) \right)~, \end{eqnarray*} } where $L_A(z)$ denotes the local Lipschitz constant of $\delta_A$ at $z$. \end{theorem} \textbf{Remarks:} If one sets $A_k={\bf I}$ and $S_k = \| B \| {\bf I}$ for all $k \ge 0$, \autoref{eq:zo1} corresponds to the bound of the ISTA algorithm \citep{Beck2009}. We can specialize the theorem to the case when $A_0, S_0$ are chosen to minimize the bound \autoref{eq:bo1} and $A_k = {\bf I}$, $S_k = \| B \| {\bf I}$ for $k \ge 1$. \begin{corollary} If $A_k = {\bf I}$, $S_k = \| B \| {\bf I}$ for $k \ge 1$ then {\small \begin{equation} \label{zo22} F(z_{k}) - F(z^*) \leq \frac{(z^*- z_0)^\mathsf{T} R_0 (z^* - z_0) + 2 L_{A_0}(z_1) (\| z^*-z_1\| + \| z_1 - z_0\|) + (z^* - z_1)^\mathsf{T} R_0 (z^* - z_1)}{2k}~. \end{equation}} \end{corollary} This corollary shows that by simply replacing the first step of ISTA by the modified proximal step detailed in \autoref{eq:prox}, one can obtain an improved bound at fixed $k$ as soon as \[2 \| R_0 \| \max(\| z^* - z_0 \|_2^2,\| z^* - z_1 \|_2^2) + 4 L_{A_0}(z_1) \max( \|z^* - z_0\|_2, \|z^* - z_1 \|_2) \leq \|B \| \| z^* - z_0\|_2^2~, \] which, assuming $\| z^* - z_0 \|_2 \geq \| z^* - z_1 \|_2 $, translates into \begin{equation} \| R_0 \| + 2 \frac{L_{A_0}(z_1)}{\| z^* - z_0\|_2} \leq \frac{\|B \|}{2}~.
\end{equation} More generally, given a current estimate $z_k$, searching for a factorization $(A_k, S_k)$ will improve the upper bound when \begin{equation} \label{zo23} \| R_k \| + 2 \frac{L_{A_k}(z_{k+1})}{\| z^* - z_k\|_2} \leq \frac{\|B \|}{2}~. \end{equation} We emphasize that this is not a guarantee of acceleration, since it is based on improving an upper bound. However, it provides a simple picture of the mechanism that makes non-asymptotic acceleration possible. \subsection{Interpretation} \label{sub:interp} In this section we analyze the consequences of \autoref{thm1} for the design of fast sparse coding approximations, and provide a possible explanation for the behavior observed numerically. \subsubsection{``Phase Transition'' and Law of Diminishing Returns} \label{sub:phase} \autoref{zo23} reveals that the optimal matrix factorization, in terms of minimizing the upper bound, depends upon the current scale of the problem, that is, on the distance $\| z^* - z_k\|$. At the beginning of the optimization, when $\| z^* - z_k\|$ is large, the bound \autoref{zo23} makes it easier to explore the space of factorizations $(A, S)$ with $A$ further away from the identity. Indeed, the bound tolerates larger increases in $L_{A}(z_{k+1})$, which is dominated by \[ L_{A}(z_{k+1}) \leq \lambda (\sqrt{\|z_{k+1}\|_0} + \sqrt{\|Az_{k+1}\|_0})~, \] \emph{i.e.} the sparsity of both $z_{k+1}$ and $A z_{k+1}$. On the other hand, when we reach intermediate solutions $z_k$ such that $\| z^* - z_k\|$ is small with respect to $L_A(z_{k+1})$, the upper bound is minimized by choosing factorizations where $A$ is closer and closer to the identity, leading to the non-adaptive regime of standard ISTA ($A={\bf I}$). This is consistent with the numerical experiments, which show that the gains provided by learned sparse coding methods are mostly concentrated in the first iterations.
Once the estimates reach a certain energy level, section \ref{sec:exp} shows that LISTA enters a steady state in which the convergence rate matches that of standard ISTA. The natural follow-up question is to determine how many layers of adaptive splitting are sufficient before entering the steady regime of convergence. A conservative estimate of this quantity would require an upper bound on $\| z^* - z_k\|$ in terms of the energy gap $F(z_k) - F(z^*)$. Since in general $F$ is convex but not strongly convex, such a bound does not exist unless one can assume that $F$ is locally strongly convex (for instance for sufficiently small values of $F$). \subsubsection{Adapting the factorization to particular input distributions} \label{sub:distrib} Given an input dataset $\mathcal{D}=\{(x_i, z^{(0)}_{i}, z^*_i)\}_{i \leq N}$, containing examples $x_i \in {\mathbb{R}}^n$, initial estimates $z^{(0)}_{i}$ and sparse coding solutions $z^*_i$, the factorization adapted to $\mathcal{D}$ is defined as \begin{equation} \label{bty} \min_{A,S;~ A^\mathsf{T} A = {\bf I}, A^\mathsf{T} S A - B \succ 0} \frac{1}{N} \sum_{i \leq N} \frac{1}{2} ( z^{(0)}_{i} - z^*_i)^\mathsf{T} ( A^\mathsf{T} S A - B ) ( z^{(0)}_{i} - z^*_i) + \delta_A(z^*_i) - \delta_A(z_{1,i})~. \end{equation} Therefore, adapting the factorization to a particular dataset, as opposed to enforcing it uniformly over a given ball $B(z^*; R)$ (where the radius $R$ ensures that the initial value $z_0 \in B(z^*;R)$), will always improve the upper bound \autoref{eq:bo1}. Studying the gains resulting from the adaptation to the input distribution is left for future work. \section{Numerical Experiments} \label{sec:exp} This section provides numerical arguments to analyze adaptive optimization algorithms and their performance, and relates them to the theoretical properties developed in the previous section. All the experiments were run using Python and TensorFlow.
For all the experiments, the training is performed using Adagrad \citep{duchi2011adaptive}. The code to reproduce the figures is available online\footnote{The code can be found at ~\url{https://github.com/tomMoral/AdaptiveOptim}}. \subsection{Adaptive Optimization Network Architectures} \begin{figure} \caption{\textbf{ISTA} - Recurrent Neural Network } \caption{\textbf{LISTA} - Unfolded network} \caption{ Network architecture for ISTA/LISTA. The unfolded version (b) is trainable through backpropagation and permits to approximate the sparse coding solution efficiently.} \label{fig:ista} \label{fig:lista} \end{figure} \paragraph{LISTA/LFISTA} \label{par:lista} In \cite{Gregor10}, the authors introduced LISTA, a neural network constructed by considering ISTA as a recurrent neural net. At each step, ISTA performs the following 2-step procedure: \begin{equation} \left.\begin{aligned} 1.\hspace{1em} u_{k+1} &= z_k - \displaystyle\frac{1}{L} D^\mathsf{T}(Dz_k-x) = \underbrace{({\bf I} - \frac{1}{L}D^\mathsf{T} D)}_{W_g} z_k + \underbrace{\frac{1}{L}D^\mathsf{T}}_{W_e}x~,\\ 2.\hspace{1em} z_{k+1} &= h_{\frac{\lambda}{L}}(u_{k+1}) \text{ where } h_\theta(u) = \text{sign}(u)(\lvert u\rvert-\theta)_+~,\\ \end{aligned} \right\} \quad \text{step $k$ of ISTA} \label{eq:ista} \end{equation} This procedure combines a linear operation to compute $u_{k+1}$ with an element-wise nonlinearity. It can be summarized as a recurrent neural network with tied weights, presented in \autoref{fig:ista}. The authors in \cite{Gregor10} considered the architecture $\Phi_\Theta^K$ with parameters $\Theta = (W_g^{(k)}, W_e^{(k)}, \theta^{(k)})_{k=1,\dots K}$ obtained by unfolding $K$ times the recurrent network, as presented in \autoref{fig:lista}. The layers $\phi_\Theta^k$ are defined as \begin{equation} z_{k+1} = \phi_\Theta^k(z_k) := h_{\theta}( W_g z_k + W_e x)~.
\end{equation} If $ W_g^{(k)} = {\bf I} - \frac{D^\mathsf{T} D}{L}$, $ W_e^{(k)} = \frac{D^\mathsf{T}}{L}$ and $ \theta^{(k)} = \frac{\lambda}{L}$ are fixed for all $K$ layers, the output of this neural net is exactly the vector $z_K$ resulting from $K$ steps of ISTA. With LISTA, the parameters $\Theta$ are learned using back propagation to minimize the cost function: \mbox{$f(\Theta) = \mathbb E_x\left[F_x(\Phi^K_{\Theta}(x))\right]~.$} A similar algorithm can be derived from FISTA, the accelerated version of ISTA, to obtain LFISTA (see \autoref{fig:lfista} in \autoref{sec:lfista}~). The architecture is very similar to LISTA, now with two memory tapes: \[z_{k+1} = h_{\theta}( W_g z_k + W_m z_{k-1} + W_e x)~.\] \paragraph{Factorization network} \label{par:facnet} Our analysis in \autoref{sec:math} suggests a re-parametrization of LISTA in a more structured class of parameters. Following the same basic architecture, and using \autoref{eq:prox}, the network FacNet, $\Psi_\Theta^K$, is formed using layers such that: \begin{equation} z_{k+1} = \psi_\Theta^k(z_k) := A^\mathsf{T} h_{\lambda S^{-1}} (A z_{k} - S^{-1}A(D^\mathsf{T} D z_{k} - D^\mathsf{T} x) )~, \label{eq:facnet} \end{equation} with $S$ diagonal and $A$ unitary, the parameters of the $k$-th layer. The parameters obtained after training such a network with back-propagation can be used with the theory developed in \autoref{sec:math}. Up to the last linear operation $A^\mathsf{T}$ of the network, this network is a re-parametrization of LISTA in a more constrained parameter space. Thus, LISTA is a generalization of this proposed network and should have performance at least as good as FacNet, for a fixed number of layers. The optimization can also be performed using backpropagation.
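The claim that the unfolded network with tied ISTA weights reproduces $z_K$ can be checked directly; the problem below is a random test instance (sizes, seed, and $\lambda$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, lam, K_layers = 12, 24, 0.05, 30
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)
x = rng.standard_normal(n)
L = np.linalg.norm(D, 2) ** 2

def h(u, theta):
    """Soft thresholding h_theta(u) = sign(u)(|u| - theta)_+."""
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

# Tied weights that make each layer an exact ISTA step.
W_g = np.eye(m) - D.T @ D / L
W_e = D.T / L
theta = lam / L

z_net = np.zeros(m)
for _ in range(K_layers):             # K unfolded layers
    z_net = h(W_g @ z_net + W_e @ x, theta)

z_ista = np.zeros(m)
for _ in range(K_layers):             # K ISTA iterations
    z_ista = h(z_ista - D.T @ (D @ z_ista - x) / L, lam / L)
```

Training then perturbs $(W_g^{(k)}, W_e^{(k)}, \theta^{(k)})$ away from these tied values, which is where the learned acceleration comes from.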
To enforce the unitary constraints on $A^{(k)}$, the cost function is modified with a penalty: \begin{equation} \label{eq:facnet} f(\Theta) = \mathbb E_x\left[F_x(\Psi^K_{\Theta}(x))\right] + \frac{\mu}{K}\sum_{k=1}^K\left\|{\bf I} - \left(A^{(k)}\right)^\mathsf{T} A^{(k)}\right\|^2_2~, \end{equation} with $\Theta = (A^{(k)}, S^{(k)})_{k=1\dots K}$ the parameters of the $K$ layers and $\mu$ a scaling factor for the regularization. The resulting matrix $A^{(k)}$ is then projected on the Stiefel manifold using an SVD to obtain final parameters consistent with the network structure. \paragraph{Linear model} \label{par:linear} Finally, it is important to distinguish the performance gain resulting from choosing a suitable starting point from the acceleration provided by our model. To highlight the gain obtained by changing the starting point, we considered a linear model with one layer such that $z_{out} = A^{(0)} x$. This model is learned using SGD with the convex cost function \mbox{$ f(A^{(0)}) = \|({\bf I} - DA^{(0)})x\|_2^2+\lambda \|A^{(0)}x\|_1~$}. It computes a tradeoff between starting from the sparsest point $\bf 0$ and a point with minimal reconstruction error $y~$. Then, we observe the performance of the classical iterations of ISTA using $z_{out}$ as a starting point instead of $\bf 0~$. \subsection{Synthetic problems with known distributions} \label{sub:gen} \begin{figure} \caption{ Evolution of the cost function $F(z_k)-F(z^*)$ with the number of layers or the number of iterations $k$ for different sparsity levels. \emph{(left)} $\rho=\nicefrac{1}{20}$ and \emph{(right)} $\rho=\nicefrac{1}{4}~.$ } \label{fig:layers1} \label{fig:layers2} \label{fig:layers} \end{figure} \textbf{Gaussian dictionary} In order to disentangle the role of the dictionary structure from the role of the data distribution structure, the minimization problem is tested using a synthetic generative model with no structure in the weights distribution.
First, $m$ atoms $d_i \in {\mathbb{R}}^n$ are drawn \emph{iid} from a multivariate Gaussian with mean $\mathbf{0}$ and covariance $\mathbf{I}_n$, and the dictionary $D$ is defined as $\left(\nicefrac{d_i}{\|d_i\|_2}\right)_{i=1\dots m}~.$ The data points are generated from their sparse codes following a Bernoulli-Gaussian model. The coefficients $z=(z_1, \dots, z_m)$ are constructed with $z_i = b_i a_i$, where $b_i\sim \mathcal B(\rho)$ and $a_i \sim \mathcal N(0, \sigma)$, and where $\rho$ controls the sparsity of the data. The values are set to $m=100$, $n = 64$ for the dictionary dimensions, $\rho = \nicefrac{5}{m}$ for the sparsity level and $\sigma = 10$ for the activation coefficient generation parameter. The sparsity regularization is set to $\lambda = 0.01$. The batches used for the training are generated with the model at each step and the cost function is evaluated over a fixed test set, not used in the training. \autoref{fig:layers} displays the cost performance for the methods ISTA/FISTA/Linear relatively to their iterations and for the methods LISTA/LFISTA/FacNet relatively to the number of layers used to solve our generated problem. The linear model has performances comparable to those of the learned methods at the first iteration, but a gap appears as the number of layers increases, up to a point where it achieves the same performances as the non-adaptive methods. This highlights that adaptation is possible in the subsequent layers of the networks, going beyond the choice of a suitable starting point for iterative methods. The first layers achieve a large gain over the classical optimization strategy, by leveraging the structure of the problem. This appears even with no structure in the sparsity patterns of the input data, in accordance with the results in the previous section.
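The generative model above can be summarized in a short NumPy sketch. This is our own illustrative code, with the paper's parameter values as defaults; in particular we treat $\sigma$ as the standard deviation of the activations, an assumption on the paper's notation.

```python
import numpy as np

def make_problem(n=64, m=100, rho=5 / 100, sigma=10.0, n_samples=1000, seed=0):
    # Gaussian dictionary with unit-norm atoms d_i / ||d_i||_2 as columns.
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((n, m))
    D /= np.linalg.norm(D, axis=0)
    # Bernoulli-Gaussian sparse codes: z_i = b_i * a_i,
    # b_i ~ Bernoulli(rho), a_i ~ N(0, sigma).
    b = rng.random((n_samples, m)) < rho
    a = sigma * rng.standard_normal((n_samples, m))
    Z = b * a
    X = Z @ D.T                          # observed data points x = D z
    return D, Z, X
```

The resulting pairs $(x, z)$ can then be fed to the iterative solvers and to the training of the unfolded networks.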
We also observe diminishing returns as the number of layers increases. This results from the phase transition described in \autoref{sub:phase}, as the last layers behave as ISTA steps and do not speed up the convergence. The three learned algorithms always perform at least as well as their classical counterparts, as stated in \autoref{thm1}. We also explored the effect of the sparsity level on the training and performance of the adaptive networks. In the denser setting, the trade-off between the $\ell_1$-norm and the squared error is easier to resolve, as the solution has many non-zero coefficients. Thus, in this setting, the approximate method is more precise than in the very sparse setting, where the approximation must perform a fine selection of the coefficients. But it also yields a lower gain at the beginning, as the sparser solution can move faster. There is a small gap between LISTA and FacNet in this setup. This can be explained by the extra constraints on the weights that we impose in FacNet, which effectively reduce the parameter space by half. Also, we implement the unitary constraints on the matrix $A$ by a soft regularization (see \autoref{eq:facnet}), involving an extra hyper-parameter $\mu$ that also contributes to the small performance gap. In any case, these experiments show that our analysis accounts for most of the acceleration provided by LISTA, as the performances of both methods are similar, up to optimization errors. \begin{figure} \caption{Evolution of the cost function $F(z_k)-F(z^*)$ with the number of layers or the number of iterations $k$ for a problem generated with an adversarial dictionary.} \label{fig:adverse} \end{figure} \textbf{Adversarial dictionary} The results from \autoref{sec:math} show that problems whose Gram matrix has large eigenvalues associated with non-sparse eigenvectors are harder to accelerate.
Indeed, it is not possible in this case to find a quasi-diagonalization of the matrix $B$ that does not distort the $\ell_1$ norm. It is possible to generate such a dictionary using harmonic analysis. The Discrete Fourier Transform (DFT) strongly distorts the $\ell_1$ ball, since a very sparse vector in the temporal domain is transformed into a widely spread spectrum in the Fourier domain. We can thus design a dictionary for which the performances of LISTA and FacNet should be degraded. $ D = \left(\nicefrac{d_i}{\|d_i\|_2}\right)_{i=1\dots m}~$ is constructed such that $ d_{j,k} = e^{-2\pi i j \zeta_k}$, with $\left(\zeta_k\right)_{k \le n}$ randomly selected from $\left\{\nicefrac{1}{m},\dots, \nicefrac{\nicefrac{m}{2}}{m}\right\}$ without replacement. The resulting performances are reported in \autoref{fig:adverse}. The first layer provides a large gain by changing the starting point of the iterative methods. It optimizes the trade-off between starting from $\bf 0$ and starting from $y~.$ But the next layers do not yield any extra gain compared to the original ISTA algorithm. After $4$ layers, the cost performances of the adaptive methods and of ISTA are equivalent. It is clear that in this case, FacNet does not efficiently accelerate the sparse coding, in accordance with our result from \autoref{sec:math}. LISTA also displays poor performances in this setting. This provides further evidence that FacNet and LISTA share the same acceleration mechanism, as adversarial dictionaries for FacNet are also adversarial for LISTA. \subsection{Sparse coding with an overcomplete dictionary on images} \label{sub:img} \textbf{Wavelet encoding for natural images} \label{par:wavelet} A highly structured dictionary composed of translation-invariant Haar wavelets is used to encode $8\times8$ patches of images from the PASCAL VOC 2008 dataset. The network is used to learn an efficient sparse coder for natural images over this family. $500$ images are sampled from the dataset to train the encoder.
Training batches are obtained by uniformly sampling patches from the training image set to feed the stochastic optimization of the network. The encoder is then tested with $10000$ patches sampled from $100$ new images from the same dataset. \textbf{Learned dictionary for MNIST} To evaluate the performance of LISTA for dictionary learning, LISTA was used to encode MNIST images over an unconstrained dictionary, learned {\it a priori} using classical dictionary learning techniques. The dictionary of $100$ atoms was learned from $10000$ MNIST images in grayscale rescaled to $17\times17$, using the implementation of \cite{Mairal2009a} provided in scikit-learn, with $\lambda = 0.05$. Then, the networks were trained through backpropagation using all the $60000$ images from the training set of MNIST. Finally, the performance of these encoders was evaluated with the $10000$ images of the test set of MNIST. \begin{figure} \caption{Pascal VOC 2008} \label{fig:pascal} \caption{MNIST} \label{fig:mnist} \caption{ Evolution of the cost function $F(z_k)-F(z^*)$ with the number of layers or the number of iterations $k$ for two image datasets.} \label{fig:images} \end{figure} \autoref{fig:images} displays the cost performance of the adaptive procedures compared to the non-adaptive algorithms. In both scenarios, FacNet has performances comparable to those of LISTA, and their behaviors are in accordance with the theory developed in \autoref{sec:math}. The gains become smaller with each added layer, and the initial gain is achieved for both structured and unstructured dictionaries. The MNIST case presents a much larger gain compared to the experiment with natural images. This results from the difference in structure of the input distributions, as the MNIST digits are much more constrained than patches from natural images, and the network is able to leverage this structure to find a better encoder.
In the MNIST case, a network composed of $12$ layers is sufficient to achieve performance comparable to ISTA with more than $1000$ iterations. \section{Conclusions} \label{sec:conclusions} In this paper, we studied the problem of approximating sparse coding under a finite computational budget. Inspired by the ability of neural networks to accelerate over splitting methods on the first few iterations, we have studied which properties of the dictionary matrix and the data distribution lead to such acceleration. Our analysis reveals that one can obtain acceleration by finding approximate matrix factorizations of the dictionary which nearly diagonalize its Gram matrix, but whose orthogonal transformations leave the $\ell_1$ ball approximately invariant. By appropriately balancing these two conditions, we show that the resulting rotated proximal splitting scheme has an upper bound which improves over the ISTA upper bound under appropriate sparsity. In order to relate this specific factorization property to the actual LISTA algorithm, we have introduced a reparametrization of the neural network that specifically computes the factorization, and incidentally provides reduced learning complexity (fewer parameters) than the original LISTA. Numerical experiments in \autoref{sec:exp} show that such reparametrization recovers the same gains as the original neural network, providing evidence that our theoretical analysis partially explains the behavior of the LISTA neural network. Our acceleration scheme is inherently transient, in the sense that once the iterates are sufficiently close to the optimum, the factorization is not effective anymore. This transient effect is also consistent with the performance observed numerically, although the possibility remains open to find alternative models that further exploit the particular structure of the sparse coding problem.
Finally, we provide evidence that successful matrix factorization is not only sufficient but also necessary for acceleration, by showing that Fourier dictionaries are not accelerated. Despite these initial results, a lot remains to be understood on the general question of optimal tradeoffs between computational budget and statistical accuracy. Our analysis so far did not take into account any probabilistic considerations (e.g.\ obtaining approximations that hold with high probability or in expectation). Another area of further study is the extension of our analysis to the FISTA case, and more generally to other inference tasks that are currently solved via iterative procedures compatible with neural network parametrizations, such as inference in graphical models using Belief Propagation or other ill-posed inverse problems. \appendix \section{Learned FISTA} \label{sec:lfista} A similar algorithm can be derived from FISTA, the accelerated version of ISTA, to obtain LFISTA (see \autoref{fig:lfista}~). The architecture is very similar to LISTA, now with two memory taps. FISTA introduces a momentum term to improve the convergence rate of ISTA as follows: \begin{enumerate} \item $\displaystyle y_k = z_{k} + \frac{t_{k-1} - 1}{t_k}(z_k - z_{k-1})$~, \item $\displaystyle z_{k+1} = h_{\frac{\lambda}{L}}\left(y_k - \frac{1}{L}\nabla E(y_k)\right)= h_{\frac{\lambda}{L}}\left(({\bf I} - \frac{1}{L}B)y_k + \frac{1}{L}D^\mathsf{T} x\right)$~, \item $ \displaystyle t_{k+1} = \frac{1 + \sqrt{1 + 4t_k^2}}{2}$~.
\end{enumerate} By substituting the expression for $y_k$ into the update of $z_{k+1}$, we obtain a generic recurrent architecture very similar to LISTA, now with two memory taps, that we denote by LFISTA: \[z_{k+1} = h_{\theta}( W_g^{(k)} z_k + W_m^{(k)} z_{k-1} + W_e^{(k)} x)~.\] This model is equivalent to running $K$ steps of FISTA when its parameters are initialized with \begin{eqnarray} W_g^{(k)} & = & \left(1 + \frac{t_{k-1} - 1}{t_{k}}\right)\left({\bf I} - \frac{1}{L}B\right)~, \nonumber \\ W_m^{(k)} & = & \left(\frac{1 - t_{k-1}}{t_{k}}\right)\left({\bf I} - \frac{1}{L}B\right)~, \nonumber \\ W_e^{(k)} & = & \frac{1}{L}D^\mathsf{T}~. \nonumber \end{eqnarray} The parameters of this new architecture, presented in \autoref{fig:lfista}~, are trained analogously to the LISTA case. \begin{figure} \caption{ Network architecture for LFISTA. This network is trainable through backpropagation and makes it possible to efficiently approximate the sparse coding solution.} \label{fig:lfista} \end{figure} \section{Proofs} \begin{lemma} \label{lemma1_a} Suppose that $R = A^\mathsf{T} S A - B$ is positive definite, and define \begin{equation} \label{eq:algo_a} z_{k+1} = \arg\min_z \widetilde{F}(z, z_k)~, \end{equation} and $\delta_A(z) = \|A z \|_1 - \|z \|_1$. Then we have \begin{equation} \label{eq:bo2_a} F(z_{k+1}) - F(z^*) \leq \frac{1}{2} \left( (z^* -z_{k})^\mathsf{T} R(z^* -z_{k}) - (z^* -z_{k+1})^\mathsf{T} R(z^* -z_{k+1}) \right) + \langle \partial \delta_A(z_{k+1}) , z_{k+1} - z^* \rangle ~. \end{equation} \end{lemma} \begin{proof} We define $$f(t) = F\left( t z_{k+1} + (1-t) z^*\right)~,~t \in [0,1]~.$$ Since $F$ is convex, $f$ is also convex on $[0,1]$. Since $f(0) = F(z^*)$ is the global minimum, it results that $f'(t)$ is increasing on $(0,1]$, and hence \begin{equation*} F(z_{k+1}) - F(z^*) = f(1) - f(0) = \int_0^1 f'(t)\, dt \leq f'(1)~, \end{equation*} where $f'(1)$ is any element of $\partial f(1)$.
Since $\delta_A(z)$ is a difference of convex functions, its subgradient can be defined as a limit of infimal convolutions \cite{Hiriart-Urruty1991}. We have $$\partial f(1) = \langle \partial F(z_{k+1}), z_{k+1} - z^* \rangle~,$$ and since $$\partial F(z) = \partial \widetilde{F}(z,z_k) - R (z - z_k) - \partial \delta_A(z)~\text{ and}~0 \in \partial \widetilde{F}(z_{k+1}, z_k)$$ it results that \begin{equation*} \partial F(z_{k+1}) = -R (z_{k+1} - z_k) - \partial \delta_A(z_{k+1})~, \end{equation*} and thus \begin{equation} F(z_{k+1}) - F(z^*) \leq (z^*-z_{k+1})^\mathsf{T} R ( z_{k+1} - z_k) + \langle \partial \delta_A(z_{k+1}) , (z^*-z_{k+1}) \rangle~. \end{equation} \autoref{eq:bo2_a} is obtained by observing that \begin{equation} \label{eq:bt1_a} (z^*-z_{k+1})^\mathsf{T} R ( z_{k+1} - z_k) \leq \frac{1}{2} \left( (z^* -z_{k})^\mathsf{T} R(z^* -z_{k}) - (z^* -z_{k+1})^\mathsf{T} R(z^* -z_{k+1}) \right) ~, \end{equation} thanks to the fact that $R \succ 0$. \end{proof} \begin{theorem} \label{eq:co1a_a} Let $A_k, S_k$ be the pair of unitary and diagonal matrices corresponding to iteration $k$, chosen such that $R_k = A_k^\mathsf{T} S_k A_k - B \succ 0$. It results that \begin{equation} \label{eq:zo1a_a} F(z_{k}) - F(z^*) \leq \frac{(z^*- z_0)^\mathsf{T} R_0 (z^* - z_0) + 2\langle \nabla \delta_{A_0}(z_{1}) , (z^*-z_{1}) \rangle }{2k} + \frac{\alpha - \beta}{2k}~,\text{ with} \end{equation} $$ \alpha = \sum_{n=1}^{k-1} \left( 2\langle \nabla \delta_{A_n}(z_{n+1}) , (z^*-z_{n+1}) \rangle + (z^* - z_n)^\mathsf{T} ( R_{n-1} - R_{n}) (z^* - z_n) \right)~,$$ $$\beta = \sum_{n=0}^{k-1} (n+1)\left((z_{n+1}-z_n)^\mathsf{T} R_n (z_{n+1}-z_n) + 2\delta_{A_n}(z_{n+1}) - 2\delta_{A_n}(z_n) \right)~.$$ \end{theorem} {\it Proof:} The proof is adapted from \citep{Beck2009}, Theorem 3.1. 
From \autoref{lemma1_a}, we start by using \autoref{eq:bo2_a} to bound terms of the form $F(z_n) - F(z^*)$: {\small \begin{equation*} F(z_n) - F(z^*) \leq \langle \nabla \delta_{A_n}(z_{n+1}) , (z^*-z_{n+1}) \rangle + \frac{1}{2} \left( (z^* -z_{n})^\mathsf{T} R_n (z^* -z_{n}) - (z^* -z_{n+1})^\mathsf{T} R_n(z^* -z_{n+1}) \right)~. \end{equation*} } Adding these inequalities for $n = 0 \dots k-1$ we obtain \begin{eqnarray} \label{eq:vg1_a} \left(\sum_{n=0}^{k-1} F(z_n) \right) - k F(z^*) &\leq& \sum_{n=0}^{k-1} \langle \nabla \delta_{A_n}(z_{n+1}) , (z^*-z_{n+1}) \rangle + \\ \nonumber && + \frac{1}{2} \left( (z^*- z_0)^\mathsf{T} R_0 (z^* - z_0) - (z^*- z_k)^\mathsf{T} R_{k-1} (z^* - z_k) \right) + \\ \nonumber && + \frac{1}{2} \sum_{n=1}^{k-1} (z^* - z_n)^\mathsf{T} ( R_{n-1} - R_{n}) (z^* - z_n)~. \end{eqnarray} On the other hand, we also have \begin{eqnarray*} \label{eq:vg2_a} F(z_n) - F(z_{n+1}) &\geq& F(z_n) - \tilde{F}(z_n,z_n) + \tilde{F}(z_{n+1},z_n) - F(z_{n+1}) \\ &=& - \delta_{A_n}(z_n) + \delta_{A_n}(z_{n+1}) + \frac{1}{2}(z_{n+1}-z_n)^\mathsf{T} R_n (z_{n+1}-z_n)~, \end{eqnarray*} which results in \begin{eqnarray} \label{eq:vg3_a} \sum_{n=0}^{k-1} (n+1)(F(z_n) - F(z_{n+1})) &\geq& \frac{1}{2}\sum_{n=0}^{k-1}(n+1) (z_{n+1}-z_n)^\mathsf{T} R_n (z_{n+1}-z_n) + \\ \nonumber && + \sum_{n=0}^{k-1}(n+1) \left( \delta_{A_n}(z_{n+1}) - \delta_{A_n}(z_n) \right) \\ \nonumber \left(\sum_{n=0}^{k-1} F(z_n) \right) - k F(z_k) &\geq& \sum_{n=0}^{k-1} (n+1)\left(\frac{1}{2}(z_{n+1}-z_n)^\mathsf{T} R_n (z_{n+1}-z_n) + \delta_{A_n}(z_{n+1}) - \delta_{A_n}(z_n) \right) ~. 
\end{eqnarray} Combining \autoref{eq:vg1_a} and \autoref{eq:vg3_a} we obtain \begin{eqnarray} F(z_k) - F(z^*) &\leq& \frac{(z^*- z_0)^\mathsf{T} R_0 (z^* - z_0) + 2\langle \nabla \delta_{A_0}(z_{1}) , (z^*-z_{1}) \rangle }{2k} + \frac{\alpha - \beta}{2k}\\ \nonumber \end{eqnarray} with $$ \alpha = \sum_{n=1}^{k-1} \left( 2\langle \nabla \delta_{A_n}(z_{n+1}) , (z^*-z_{n+1}) \rangle + (z^* - z_n)^\mathsf{T} ( R_{n-1} - R_{n}) (z^* - z_n) \right)~,$$ $$\beta = \sum_{n=0}^{k-1} (n+1)\left((z_{n+1}-z_n)^\mathsf{T} R_n (z_{n+1}-z_n) + 2\delta_{A_n}(z_{n+1}) - 2\delta_{A_n}(z_n) \right)~.$$ $\square$ \begin{corollary} \label{eq:coro2_a} If $A_k = {\bf I}$, $S_k = \| B \| {\bf I}$ for $k>0$ then \begin{equation} \label{eq:zo22_a} F(z_{k}) - F(z^*) \leq \frac{(z^*- z_0)^\mathsf{T} R_0 (z^* - z_0) + 2 L_{A_0}(z_1) (\| z^*-z_1\| + \| z_1 - z_0\|) + (z^* - z_1)^\mathsf{T} R_0 (z^* - z_1)}{2k}~. \end{equation} \end{corollary} {\it Proof: } We verify that, in this case, $R_{n-1} - R_n \equiv 0$ for $n>1$ and $\delta_{A_n} \equiv 0$ for $n > 0$. $\square$ \end{document}
\begin{document} \providecommand{\keywords}[1] { \small \textit{Keywords: } #1 } \providecommand{\msccode}[1] { \small \textit{2020 MSC: } #1 } \title{Representation Theorem for modules over a commutative ring} \author {Colin Tan\footnote{Division of Mathematical Sciences, School of Physical and Mathematical Sciences, College of Science, Nanyang Technological University. Email: \texttt{[email protected]}} } \date{04 May 2023} \maketitle \noindent \keywords{archimedean semiring, module over commutative rings, order unit, polynomial, polytope, positive definite matrix, Positivstellensatz, pure state, simplex} \\ \noindent \msccode{Primary 12D99, 26C99; secondary 14P99, 15B48, 52A05} \begin{abstract} Positivstellens\"atze for polynomials, such as those of P\'olya and Handelman, are known to be concrete instances of the abstract Representation Theorem for (commutative unital) rings. We generalise the Representation Theorem to modules over rings. When this module consists of all symmetric matrices with entries in the polynomial ring, our generalisation of the Representation Theorem becomes the corresponding generalisations of P\'olya's and Handelman's Positivstellens\"atze to symmetric matrices with polynomial entries. These generalisations were previously obtained by Scherer and Hol, and L\^{e}\ and Du'\ respectively, using the method of effective estimates from analysis. \end{abstract} Since Berr and W\"ormann \cite{BerrWoermann2001}, and even earlier from W\"ormann's thesis \cite{Woermann1998}, it is known that several Positivstellens\"atze follow from the Representation Theorem of real algebra. In real algebraic geometry, a \emph{Positivstellensatz} is the sufficiency of the positivity of a polynomial on a space, usually compact, for it to be representable in terms of a certificate. A \emph{certificate} is an algebraic expression that immediately witnesses the strict positivity of the polynomial on that space.
The Positivstellens\"atze that follow from the Representation Theorem are characterised by their certificates forming a so-called \lq\lq\emph{archimedean}\rq\rq\! subsemiring of the polynomial ring. Given a ring $A$ (always commutative with multiplicative unit), a subset $S \subseteq A$ is a \emph{subsemiring} if it contains $0$ and $1$ and is closed under addition and multiplication. We say that $S \subseteq A$ is \emph{archimedean} if $S + {\mathbb{Z}} = A$, where ${\mathbb{Z}}$ is the abelian group of integers. These archimedean Positivstellens\"atze include those of P\'olya \cite{Polya1928} (reproduced in \cite[pp.\ 57--60]{HLP1952}), Handelman \cite{Handelman1988}, Schm\"udgen \cite{Schmuedgen1991}, and Putinar and Scheiderer \cite{PutinarScheiderer2010}. The Representation Theorem is a criterion for an element of a ring $A$ to lie in a module $M \subseteq A$ over an archimedean subsemiring $S$ of $A$. Here, by an \emph{$S$-module}, we mean a subset $M \subseteq A$ that contains $0$, is closed under multiplication and satisfies $S M \subseteq M$. The fundamental Representation Theorem was proven and rediscovered in various versions by Stone, Krivine, Kadison and Dubois, among others. Krivine's version is definitive \cite{Krivine64a, Krivine64b}. Prestel and Delzell gave an account of its history \cite[Section 5.6]{PrestelDelzell2001}. When $A = {\mathbb{R}}[x_1, \dots, x_n]$ is the real polynomial algebra, and by choosing $S$ appropriately, the Representation Theorem specialises to each of the afore-mentioned Positivstellens\"atze. The abstract criterion in the Representation Theorem for a polynomial $f \in A$ to be representable as a certificate then reduces to the strict postivity of $f$ on the relevant compact subset of real euclidean space ${\mathbb{R}}^n$. 
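To make the archimedean mechanism concrete, consider the smallest instance of P\'olya's Positivstellensatz (our illustrative example, not taken from the works cited above). The homogeneous polynomial $f(x, y) = x^2 - xy + y^2$ is strictly positive on the simplex $\{(x, y) :\, x, y \ge 0,\ x + y = 1\}$, and multiplying by powers of $x + y$ clears its negative coefficient:
\begin{align*}
(x + y)\, f &= x^3 + y^3, \\
(x + y)^3 f &= x^5 + 2 x^4 y + x^3 y^2 + x^2 y^3 + 2 x y^4 + y^5.
\end{align*}
The right-hand side of the second identity has a positive coefficient on every monomial of degree $5$: an explicit certificate, lying in the archimedean subsemiring generated by $x$, $y$ and the positive rationals, that $f > 0$ on the simplex.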
We refer the reader to a survey of Scheiderer that details why these Positivstellens\"atze are instances of the Representation Theorem \cite{Scheiderer2009} (except the Positivstellensatz of Putinar and Scheiderer, which was discovered after the survey was written). The purpose of this note, then, is to generalise the Representation Theorem to a criterion for an element of a module $G$ over a ring $A$ to lie in a subsemimodule $M \subseteq G$ over an archimedean subsemiring $S \subseteq A$. Here, we ask the reader to take note, the term \lq\lq\emph{$A$-module}\rq\rq\ is in the usual sense of an additive group equipped with an $A$-action $(f, s) \mapsto f \cdot s: A \times G \to G$ satisfying the usual axioms $f \cdot (s + t) = f \cdot s + f \cdot t$, $(f + g) \cdot s = f \cdot s + g \cdot s$, $(f g) \cdot s = f \cdot (g \cdot s)$, and $1 \cdot s = s$, for all $f, g \in A$, $s, t \in G$. Then $M \subseteq G$ is an \emph{$S$-subsemimodule} if it contains $0$, is closed under addition and satisfies $S \cdot M \subseteq M$. So an $S$-module, in the above sense as used in real algebra, is an $S$-subsemimodule of $A$ in our terminology, where $A$ is regarded as a module over itself. We state our main results. Let $G$ be a module over a ring $A$, let $M \subseteq G$, and let $u \in M$. Define $\mathcal{X}_A(G, M, u)$ as the set of all group homomorphisms $\Phi : G \to {\mathbb{R}}$ to the additive group of real numbers such that $\Phi|_M \ge 0$, $\Phi(u) = 1$ and \begin{align} \label{eq: multiplicativeLaw} \Phi(f \cdot s) = \Phi(f \cdot u) \Phi(s) & & (\forall f\in A, s \in G). \end{align} Given $s \in G$, we write $s > 0$ (resp.\ $s \ge 0$) on $\mathcal{X}_A(G, M, u)$ if $\Phi(s) > 0$ (resp.\ $\Phi(s) \ge 0$) for all $\Phi \in \mathcal{X}_A(G, M, u)$. \begin{theorem} \label{thm: RepresentationTheoremForModulesOverARing} Let $G$ be a module over a ring $A$, let $M \subseteq G$ be a subsemimodule over an archimedean subsemiring $S$ of $A$, and let $u \in M$.
Then, for each $s \in G$ with $s > 0$ on $\mathcal{X}_A(G, M, u)$, there is some positive integer $n$ such that $n s \in M$. \end{theorem} The property that $n s \in M$ for some positive integer $n$ witnesses that $s \ge 0$ on $\mathcal{X}_A(G, M, u)$, since then $0 \le \Phi(n s) = n \Phi(s)$ would imply that $\Phi(s) \ge 0$, for all $\Phi \in \mathcal{X}_A(G, M, u)$. The special case of $(G, u) = (A, 1)$ is the standard Representation Theorem in real algebra. This theorem was first proven algebraically by Becker and Schwartz \cite{BeckerSchwartz1983} (refer also to \cite[Theorem 1.5.9]{Scheiderer2009} and \cite[Theorem 6.1]{BSS12}). However, it is desirable that the conclusion of Theorem \ref{thm: RepresentationTheoremForModulesOverARing} be strengthened to witness the strict positivity of $s$ on $\mathcal{X}_A(G, M, u)$. Let ${\mathbb{Q}}$ denote the field of rational numbers, and let ${\mathbb{Q}}_+$ denote the subsemifield of positive rational numbers. Given a field $\mathbb{K}$ with ${\mathbb{Q}} \subseteq \mathbb{K} \subseteq {\mathbb{R}}$, let $\mathbb{K}_+ := \{a \in \mathbb{K} :\, a > 0\}$ denote the set of positive elements of $\mathbb{K}$. Given a $\mathbb{K}$-algebra $A$ (always associative and commutative with multiplicative unit), a \emph{$\mathbb{K}_+$-subsemialgebra} is a subsemiring $S$ of the underlying ring of $A$ that contains $\mathbb{K}_+$. Given an abelian group $G$, written additively, and a submonoid $M \subseteq G$, an element $u \in G$ is said to be an \emph{order unit} of $(G, M)$ if $M + {\mathbb{Z}} u = G$. Order units were first named by Goodearl and Handelman as \lq\lq\emph{strong units}\rq\rq\! in the case where $M \cap (-M) = 0$ is a partial ordering \cite{GoodearlHandelman1976}. By 1980, they had settled on the name \lq\lq order unit\rq\rq\ \cite{GoodearlHandelman1980}, \cite{HHL1980}. For example, a subsemiring $S$ of a ring $A$ is archimedean if and only if $1$ is an order unit of $(A, S)$.
An equivalent definition of an order unit can be given in terms of a preordering $\le_M$ on $G$ associated to $M$, defined by $s \le_M t$ if and only if $t - s \in M$ (for $s, t \in G$). Then $u \in M$ is an order unit of $(G, M)$ if, for each $s \in G$, there is a positive integer $n$ such that $s \le_M nu$. \begin{theorem} \label{thm: RepresentationTheoremForModulesOverAnAlgebra} Let ${\mathbb{Q}} \subseteq \mathbb{K} \subseteq {\mathbb{R}}$ be a field. Let $G$ be a module over a $\mathbb{K}$-algebra $A$, let $M \subseteq G$ be a subsemimodule over an archimedean $\mathbb{K}_+$-subsemialgebra $S$ of $A$, and let $u \in M$ be an order unit of $(G, M)$. Then every $s \in G$ with $s > 0$ on $\mathcal{X}_A(G, M, u)$ lies in $M$ and is, furthermore, an order unit of $(G, M)$. \end{theorem} As promised, the conclusion that $s \in M$ is an order unit of $(G, M)$ witnesses that $s > 0$ on $\mathcal{X}_A(G, M, u)$. Indeed, for any order unit $s$ of $(G, M)$, there is some positive integer $n$ with $u \le_M n s$, hence $1 = \Phi(u) \le \Phi(n s) = n \Phi(s)$ by the monotonicity of $\Phi$, therefore $\Phi(s) \ge 1/n > 0$. The main contribution of this paper is the following observation --- the argument of Burgdorf, Scheiderer and Schweighofer in \cite{BSS12} for Theorem \ref{thm: RepresentationTheoremForModulesOverARing} in the case where $G$ is an $S$-module (in the sense of real algebra) already suffices to prove Theorem \ref{thm: RepresentationTheoremForModulesOverARing} in its full generality. We recall their argument. \begin{enumerate}[Step 1.] \item \label{item: EHSCriterion} First, they recall a result of Effros, Handelman and Shen from convex geometry \cite[Theorem 1.4]{EffrosHandelmanShen1980}. \end{enumerate} The result requires some terminology to state. Let $G$ be an abelian group written additively, $M \subseteq G$ a submonoid, and let $u \in M$ be an order unit of $(G, M)$.
A \emph{state} of $(G, M, u)$ is a group homomorphism $\Phi : G \to {\mathbb{R}}$ to the additive group of real numbers such that $\Phi|_M \ge 0$ and $\Phi(u) = 1$. Regard the set of states, denoted by $\mathcal{S}(G, M, u)$, as a subset of ${\mathbb{R}}^G := \prod_G {\mathbb{R}}$ via the injection $\Phi \mapsto (\Phi(g))_{g \in G} : \mathcal{S}(G, M, u) \hookrightarrow {\mathbb{R}}^G$. Then $\mathcal{S}(G, M, u)$ is convex, so that we may define a \emph{pure} state as an extremal point $\Phi \in \mathcal{S}(G, M, u)$. Explicitly, $\Phi \in \mathcal{S}(G, M, u)$ is pure if, whenever $2\Phi = \Phi_1 + \Phi_2$ for any two $\Phi_1, \Phi_2 \in \mathcal{S}(G, M, u)$, then $\Phi = \Phi_1 = \Phi_2$. \begin{lemma} \label{lem: EHSCriterionNonStrict} Let $G$ be an abelian group, $M \subseteq G$ a submonoid, and let $u \in M$ be an order unit of $(G, M)$. Then, for every $s \in G$ with $\Phi(s) > 0$ for all pure states $\Phi : G \to {\mathbb{R}}$ of $(G, M, u)$, there is some positive integer $n$ such that $n s \in M$. \end{lemma} In their original version of Lemma \ref{lem: EHSCriterionNonStrict}, Effros, Handelman and Shen assumed that the preorder $\le_M$ is anti-symmetric, or equivalently that $M \cap (-M) = 0$. Burgdorf, Scheiderer and Schweighofer observed that this assumption is not necessary. They outlined a proof of Lemma \ref{lem: EHSCriterionNonStrict} using two theorems from convex geometry, namely the Krein-Milman Theorem and a hyperplane separation theorem of Eidelheit \cite{Eidelheit1936} and Kakutani \cite{Kakutani1937}. (The reader may also refer to Barvinok's textbook \cite{Barvinok2002}, where these two theorems are stated as III.4.1 and III.1.7 respectively). \begin{enumerate}[Step 1.] \addtocounter{enumi}{1} \item \label{item: formOfPureStatesOfRings} Burgdorf, Scheiderer and Schweighofer showed that every pure state of $(A, S, 1)$ is multiplicative, or equivalently, lies in $\mathcal{X}_A(A, S, 1)$ \cite[Corollary 4.4]{BSS12}.
\item \label{item: substitution} Therefore, the special case of Theorem \ref{thm: RepresentationTheoremForModulesOverARing} when $(G, u) = (A, 1)$ follows by applying Step \ref{item: formOfPureStatesOfRings} to Lemma \ref{lem: EHSCriterionNonStrict}. \end{enumerate} We observe that their verbatim argument proves the following generalisation of Step \ref{item: formOfPureStatesOfRings} to modules over a ring. \begin{proposition} \label{prop: multiplicativeAlongFibres} Let $G$ be a module over a ring $A$, let $M \subseteq G$ be a subsemimodule over an archimedean subsemiring $S$ of $A$, and let $u \in M$ be an order unit of $(G, M)$. Then each pure state $\Phi$ of $(G, M, u)$ satisfies \eqref{eq: multiplicativeLaw}. \end{proposition} Burgdorf, Scheiderer and Schweighofer stated Proposition \ref{prop: multiplicativeAlongFibres} in the special case where $G \subseteq A$ is an ideal \cite[Proposition 4.1]{BSS12}. We reproduce their proof in the Appendix, where the reader may check that their argument is valid without the assumption that $G$ is contained in $A$. Thus we are ready to prove Theorem \ref{thm: RepresentationTheoremForModulesOverARing}. \begin{proof}[Proof of Theorem \ref{thm: RepresentationTheoremForModulesOverARing}] Apply Lemma \ref{lem: EHSCriterionNonStrict} to Proposition \ref{prop: multiplicativeAlongFibres}. \end{proof} For Theorem \ref{thm: RepresentationTheoremForModulesOverAnAlgebra}, we will use the following version of Lemma \ref{lem: EHSCriterionNonStrict}, which is a special case of \cite[Corollary 2.7]{BSS12}. \begin{lemma} \label{lem: EHSCriterionStrict} Let ${\mathbb{Q}} \subseteq \mathbb{K} \subseteq {\mathbb{R}}$ be a field. Let $G$ be a $\mathbb{K}$-vector space, let $M \subseteq G$ be a $\mathbb{K}_+$-subsemimodule, and let $u \in M$ be an order unit of $(G, M)$. Then every $s \in G$ with $\Phi(s) > 0$ for all pure $\Phi \in \mathcal{S}(G, M, u)$ lies in $M$ and is, furthermore, an order unit of $(G, M)$.
\end{lemma} \begin{proof}[Proof of Theorem \ref{thm: RepresentationTheoremForModulesOverAnAlgebra}] Apply Lemma \ref{lem: EHSCriterionStrict} to Proposition \ref{prop: multiplicativeAlongFibres}. \end{proof} We end with some initial consequences of Theorem \ref{thm: RepresentationTheoremForModulesOverAnAlgebra}. Recall from the beginning of this note that both the Positivstellens\"atze of Handelman \cite{Handelman1988} and P\'olya \cite{Polya1928} for polynomials are concrete instances of the regular Representation Theorem. These two Positivstellens\"atze were generalised by L\^{e}\ and Du'\ \cite[Theorem 3]{LeDu2018}, and Scherer and Hol \cite[Theorem 3]{SchererHol2006} respectively to symmetric matrices whose entries are polynomials. We show that these matrix Positivstellens\"atze are instances of Theorem \ref{thm: RepresentationTheoremForModulesOverAnAlgebra}. A \emph{multi-index} is an $m$-tuple $k = (k_1, \dots, k_m)$ of nonnegative integers. We use the notation $|k| := k_1 + \cdots + k_m$ and $\ell^k := {\ell_1}^{k_1} \cdots {\ell_m}^{k_m}$, for any $m$-tuple of polynomials $(\ell_1, \dots, \ell_m)$ in ${\mathbb{R}}[x] := {\mathbb{R}}[x_1, \dots, x_d]$. For any $f \in {\mathbb{R}}[x]$ and any matrix $\matrix{P} = (p_{ij})$ with real entries, the entrywise product $f \cdot \matrix{P} = (f p_{ij})$ is a matrix with entries in ${\mathbb{R}}[x]$. Given a symmetric real matrix $\matrix{A}$, let $\matrix{A} \succ 0$ (resp.\ $\matrix{A} \succeq 0$) denote that $\matrix{A}$ is positive definite (resp.\ positive semidefinite); whenever this notation is used, $\matrix{A}$ is understood to have real entries. \begin{example}[L\^{e}\ and Du'] \label{eg: MatrixHandelman} Let $P \subseteq {\mathbb{R}}^d$ be a polytope (i.e.\ the convex hull of a finite set of points) assumed to be full-dimensional (i.e. the affine span of $P$ is the entire ${\mathbb{R}}^d$).
Fix a presentation of this bounded set $P$ as the intersection of halfspaces, say \begin{equation} \label{eq: HPolytopePresentation} P = \{ x \in {\mathbb{R}}^d :\, \ell_1(x), \dots, \ell_m(x) \ge 0\}, \end{equation} where $\ell_1, \dots, \ell_m$ are polynomials in ${\mathbb{R}}[x] := {\mathbb{R}}[x_1, \dots, x_d]$ of degree $1$. Then any symmetric matrix $\matrix{M} = (f_{ij})$ having entries $f_{ij}$ in ${\mathbb{R}}[x]$ with $\matrix{M}(x) = (f_{ij}(x)) \succ 0$ for all $x \in P$ can be written as \begin{equation} \label{eq: HandelmanMatrixCertificate} \matrix{M} = \sum_{|k| = \kappa} \ell^k \cdot \matrix{P_k} \end{equation} for some integer $\kappa \ge 0$ and some family $\{\matrix{P_k}\}_{|k| = \kappa}$ of positive definite symmetric real matrices. \end{example} \begin{proof} Let $A = {\mathbb{R}}[x]$, let $S = \{\sum_k a_k \ell^k \in {\mathbb{R}}[x] : \, \forall k, \, a_k \in {\mathbb{R}}_+\} \subseteq A$ be the ${\mathbb{R}}_+$-subsemialgebra generated by $\ell_1, \dots, \ell_m$, let $G = \mathrm{Sym}_q(A)$ be the $A$-module of symmetric $q \times q$ matrices with entries in $A$, let $M = \{ \sum_i g_i \cdot \matrix{S_i} : \, \forall i, \, g_i \in S, \matrix{S_i} \succeq 0\}$ be the $S$-subsemimodule generated by the positive semidefinite $q \times q$ symmetric real matrices, and let $u = \matrix{I_q}$ be the $q \times q$ identity matrix. We shall apply Theorem \ref{thm: RepresentationTheoremForModulesOverAnAlgebra}. To this end, recall that $S \subseteq A$ is archimedean by a classical argument of Minkowski. Then $u = \matrix{I_q}$ is an order unit of $(G, M)$ due to the following identity: \begin{equation} mn \matrix{I_q} - f \cdot \matrix{A} = \frac{1}{2}\big( (m - f) \cdot (n \matrix{I_q}+ \matrix{A}) + (m + f) \cdot (n \matrix{I_q} - \matrix{A}) \big), \end{equation} which holds for all positive integers $m, n$, all $f \in A$, and all $q \times q$ symmetric real matrices $\matrix{A}$. This argument is standard; see \cite[Lemma 1]{BerrWoermann2001}.
Every $\Phi \in \mathcal{X}_A(G, M, u)$ takes the form \begin{align} \Phi(\matrix{M}) = \tr (\matrix{M}(x) \matrix{S}) & & (\forall \matrix{M} \in G) \end{align} for some $x \in P$ and some $\matrix{S} \succeq 0$ with $\tr(\matrix{S}^2) = 1$. Here $\tr$ denotes the trace. Therefore, if $\matrix{M} \in G$ has $\matrix{M}(x) \succ 0$ for all $x \in P$, then, since $\matrix{S} \succeq 0$ and $\tr(\matrix{S}^2) = 1$ forces $\matrix{S} \neq 0$, we get $\Phi(\matrix{M}) = \tr (\matrix{M}(x) \matrix{S}) > 0$ for all $\Phi \in \mathcal{X}_A(G, M, u)$. Therefore $\matrix{M}$ is an order unit of $(G, M)$ by Theorem \ref{thm: RepresentationTheoremForModulesOverAnAlgebra}. Hence $\matrix{I_q} \le_M n \matrix{M}$ for some positive integer $n$, so that there are $a_{i, k} \in {\mathbb{R}}_+$, $\matrix{S_i} \succeq 0$ such that $\matrix{M} = (1/n) (\matrix{I_q} + \sum_i \sum_{k} a_{i,k} \ell^k \cdot \matrix{S_i})$. Therefore $\matrix{M}$ has the desired form in \eqref{eq: HandelmanMatrixCertificate} by choosing $\kappa$ sufficiently large and using the identity $\ell_1 + \cdots + \ell_m = 1$. \end{proof} The case of $1 \times 1$ matrices in Example \ref{eg: MatrixHandelman} is the classical Positivstellensatz of Handelman. L\^{e}\ and Du'\ also gave an effective upper bound of the least $\kappa$ required in \eqref{eq: HandelmanMatrixCertificate} using the corresponding effective estimates of Powers and Reznick for Handelman's Positivstellensatz \cite{PowersReznick2001}. Choosing $P \subseteq {\mathbb{R}}^d$ as a $d$-dimensional simplex, and expressing $\matrix{M}$ in terms of the barycentric coordinates of $P$ gives Scherer and Hol's generalisation of P\'olya's Positivstellensatz. In the $1 \times 1$ case, the observation that P\'olya's Positivstellensatz is Handelman's Positivstellensatz for a full-dimensional simplex in barycentric coordinates is due to Powers and Reznick \cite{PowersReznick2001}.
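As a concrete sanity check of the scalar ($1 \times 1$) case of P\'olya's Positivstellensatz, one can compute the smallest power of $X_0 + X_1$ needed to clear all coefficients; a minimal Python sketch (the test polynomial $X_0^2 - X_0X_1 + X_1^2$, which is strictly positive on the simplex since there it equals $1 - 3X_0X_1 \ge 1/4$, is my own illustrative choice, not an example from the text):

```python
# Homogeneous polynomials in X0, X1 represented as dicts {(i, j): coefficient}.

def mul(f, g):
    """Multiply two polynomials represented as exponent-dicts."""
    h = {}
    for (i, j), a in f.items():
        for (k, l), b in g.items():
            h[(i + k, j + l)] = h.get((i + k, j + l), 0) + a * b
    return h

def polya_exponent(f, max_n=100):
    """Smallest N with (X0 + X1)^N * f having all positive coefficients.

    For f homogeneous of degree e, the product is homogeneous of degree
    e + N, so every monomial of that degree must appear with a positive
    coefficient.
    """
    s = {(1, 0): 1, (0, 1): 1}          # X0 + X1
    e = max(i + j for (i, j) in f)      # degree of f
    g = dict(f)
    for n in range(max_n + 1):
        d = e + n
        if all(g.get((i, d - i), 0) > 0 for i in range(d + 1)):
            return n
        g = mul(g, s)
    raise ValueError("no certificate found up to max_n")

# X0^2 - X0*X1 + X1^2: positive on the simplex, yet with a negative coefficient.
f = {(2, 0): 1, (1, 1): -1, (0, 2): 1}
N = polya_exponent(f)
```

For this $f$ the sketch reports $N = 3$, i.e.\ $(X_0+X_1)^3(X_0^2-X_0X_1+X_1^2) = X_0^5 + 2X_0^4X_1 + X_0^3X_1^2 + X_0^2X_1^3 + 2X_0X_1^4 + X_1^5$, while the powers $N = 0, 1, 2$ all leave a zero or negative coefficient.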
Let $\Delta^d \subseteq {\mathbb{R}}^{d + 1}$ denote the \emph{standard $d$-dimensional simplex}, whose vertices are $(1, 0, \dots, 0)$, $(0, 1, 0, \dots, 0)$, \dots, $(0, \dots, 0, 1)$. A point $X = (X_0, \dots, X_d)\in {\mathbb{R}}^{d + 1}$ lies in $\Delta^d$ if and only if $X_0, \dots, X_d \ge 0$ and $X_0 + \cdots + X_d = 1$. \begin{example}[Scherer and Hol] Fix an integer $e \ge 0$, and let $\matrix{A}$ be a symmetric matrix whose entries are forms (i.e. homogeneous polynomials) in ${\mathbb{R}}[X_0, \dots, X_d]$ of degree $e$. If $\matrix{A}(X) \succ 0$ for all $X \in \Delta^d$, then there is some integer $\kappa \ge e$, such that for every integer $m \ge \kappa$, \begin{equation} \label{eq: matrixPolyaCertificate} (X_0 + \cdots + X_d)^{m - e} \cdot \matrix{A} = \sum_{|k| = m} {X_0}^{k_0} \cdots {X_d}^{k_d} \cdot \matrix{P_k} \end{equation} where $\matrix{P_k} \succ 0$ for all $|k| = m$. Scherer and Hol similarly gave an effective estimate for $\kappa$ deduced from that of Powers and Reznick for P\'olya's Positivstellensatz \cite{PowersReznick2001}. \end{example} Other applications of Theorem \ref{thm: RepresentationTheoremForModulesOverAnAlgebra} to Positivstellens\"atze should be possible. \paragraph{Acknowledgements.} I am grateful for CheeWhye Chin's encouragement to pursue this line of research. Thank you, Tim Netzer, for giving me a chance to speak at a conference at Universit\"at Innsbruck on some ideas in this paper. I would also like to thank Tobias Fritz \cite{Fritz2023}, Xiangyu Liu, Mihai Putinar, Claus Scheiderer, Markus Schweighofer, and Wing-Keung To for discussions and input. Finally, without help from my better half, I would not have had the peace of mind to complete this note.
\section*{Appendix: Proof of Proposition \ref{prop: multiplicativeAlongFibres}} There is nothing essentially original due to the author below --- the following proof of Proposition \ref{prop: multiplicativeAlongFibres} is that of Burgdorf, Scheiderer and Schweighofer \cite[p.123]{BSS12}, and is reproduced verbatim below only for the convenience of the reader. The author's only contribution is the observation that $G$ being contained in $A$ (and hence an ideal) is never used in the proof. As mentioned in \textit{loc.\ cit.}, precedents of Proposition \ref{prop: multiplicativeAlongFibres} can be found in the work of Bonsall, Lindenstrauss and Phelps \cite[Theorem 10]{BLP66}, Krivine \cite[Theorem 15]{Krivine64b} and Handelman \cite[Proposition 1.2]{Handelman85}. Let $A, S, G, M, u$ be as in Proposition \ref{prop: multiplicativeAlongFibres}. Given a map $\Phi : G \to {\mathbb{R}}$, we associate to each $f \in A$ satisfying $\Phi(f \cdot u) \neq 0$ a map $\Phi_f : G \to {\mathbb{R}}$ given by \begin{align} \Phi_f(s) := \frac{\Phi(f \cdot s)}{\Phi(f \cdot u)} & & (\forall s\in G). \end{align} The reader can verify that if $\Phi$ is a state of $(G, M, u)$ and $p \in S$ satisfies $\Phi(p \cdot u) > 0$, then $\Phi_p$ is also a state of $(G, M, u)$. Furthermore, if $\Phi$ is a state and $p_1, p_2 \in S$ satisfy $\Phi(p_1 \cdot u), \Phi(p_2 \cdot u) > 0$, so that $p_1 + p_2 \in S$ and $\Phi((p_1 + p_2) \cdot u )> 0$, then $\Phi_{p_1 + p_2}$ is a proper convex combination of the states $\Phi_{p_1}$ and $\Phi_{p_2}$: \begin{equation} \label{eq: convexCombination} \Phi(p_1 \cdot u) \Phi_{p_1} + \Phi(p_2 \cdot u) \Phi_{p_2} = \Phi((p_1 + p_2) \cdot u) \Phi_{p_1 + p_2}. \end{equation} \begin{proof}[Proof of Proposition \ref{prop: multiplicativeAlongFibres}] Since $S \subseteq A$ is archimedean and $u$ is an order unit of $(G, M)$, it suffices to show that \eqref{eq: multiplicativeLaw} holds whenever $f \in S$ and $s \in M$. Let $f \in S$ and $s \in M$ be given.
Then $f \cdot u\in S \cdot M \subseteq M$. Hence $\Phi(f \cdot u) \ge 0$. There are two cases: either $\Phi(f \cdot u) = 0$ or $\Phi(f \cdot u) > 0$. \paragraph{Case 1: $\Phi(f \cdot u) = 0$.} Then $u$ being an order unit of $(G, M)$ gives a positive integer $n$ such that $0 \le_M s \le_M n u$. Since $M$ is closed under the $S$-action, $0 \le_M f \cdot s \le_M n f \cdot u$. Now $\Phi$ is monotone with respect to $\le_M$, so \begin{equation} 0 \le \Phi(f \cdot s) \le n \Phi(f \cdot u) = 0, \end{equation} forcing $\Phi(f \cdot s) = 0$, so that both sides of \eqref{eq: multiplicativeLaw} equal zero in this case. \paragraph{Case 2: $\Phi(f \cdot u) > 0$.} Since $S \subseteq A$ is archimedean, there is some positive integer $n$ such that $n - f \in S$. By increasing $n$ if necessary, we may suppose that $n > \Phi(f \cdot u)$. Then \begin{equation} \Phi((n - f) \cdot u) = n\Phi(u) - \Phi(f \cdot u) = n - \Phi(f \cdot u) > 0. \end{equation} Since $f, n - f\in S$ and $\Phi(f \cdot u), \Phi((n - f) \cdot u ) >0$, we may apply \eqref{eq: convexCombination} to conclude that $\Phi_n$ is a proper convex combination of $\Phi_f$ and $\Phi_{n - f}$. But $\Phi_n = \Phi$ (by direct calculation, using the fact that $n$ is a scalar), so the purity of $\Phi$ implies that $\Phi_f = \Phi$, which is just \eqref{eq: multiplicativeLaw}. \end{proof} \end{document}
\begin{document} \title{Study of the transcendence of a family of generalized continued fractions} \begin{abstract} We study a family of generalized continued fractions, which are defined by a pair of substitution sequences in a finite alphabet. We prove that they are {\em stammering} sequences, in the sense of Adamczewski and Bugeaud. We also prove that this family consists of transcendental numbers which are not Liouvillian. We explore the partial quotients of their regular continued fraction expansions, arriving at no conclusion concerning their boundedness. \end{abstract} \keywords{Continued fractions; Transcendence; Stammering Sequences.} \section{Introduction} The problem of characterizing the continued fractions of numbers beyond the rationals and quadratics has received consistent attention over the years. One direction points to an attempt to understand properties of algebraic numbers of degree at least three, but at times even this line ends up in the realm of transcendental numbers. Some investigations \cite{bruno,garrity,schwe} on algebraic numbers depart from generalizations of continued fractions. This line of investigation has been tried since Euler, see \cite{schwe,bruno} and references therein, with a view to generalizing Lagrange's theorem on quadratic numbers, in search of proving a relationship between algebraic numbers and periodicity of multidimensional maps yielding a sequence of approximations to an irrational number. This theory has been further developed, for instance, in the study of the ergodicity of the triangle map \cite{meno}. In fact, a considerable variety of algorithms may be called generalizations of continued fractions: for instance \cite{keane}, Jacobi-Perron's and Poincaré's algorithms in \cite{schwe}.
We report on a study of generalized continued fractions of the form: \begin{equation} \label{thab} \theta(a,b)\stackrel{{\rm def}}{=} a_0+\frac{b_0}{a_1+\frac{b_1}{a_2+\frac{b_2}{a_3+\frac{b_3}{ a_4+\frac { b_4 } { \ddots}}}}} \ , \end{equation} (where $a_0\geq 0$, $a_n\in \mathbb N$, for $n\in \mathbb N$ and $b_n \in \mathbb N$, for $n\geq 0$), investigating a class with a regularity close, in a sense, to periodicity. This family of generalized continued fractions converges when $(a_n)$ and $(b_n)$ are finite-valued sequences. They were considered {\em formally} in \cite{list}, in the context exemplified in Section \ref{exam1}. The {\em stammering} sequences \cite{acta}, and sequences generated by morphisms \cite{abd,aldavquef,allsha}, constitute a natural step away from periodic ones. Similarly to the results in \cite{abd,aldavquef}, on regular continued fractions, we prove that the family of numbers considered is transcendental. \begin{teo} \label{teo1} Suppose $(a_n)$ and $(b_n)$, $n\geq 0$, are fixed points of primitive substitutions. Then the number $\theta(a,b)$ is transcendental. \end{teo} We also consider Mahler's classification within this family of transcendental numbers, proving that they cannot be Liouvillian. Let us call the numbers $\theta(a,b)$ of this family, with ${\mathbf a}$ and ${\mathbf b}$ stammering sequences coming from a primitive substitution, numbers of {\em type} $\mathsf S_3$. \begin{teo} Type $\mathsf S_3$ numbers are either $S$-numbers or $T$-numbers in Mahler's classification. \label{teo2} \end{teo} The paper is organized as follows. In Section \ref{conv}, we prove the convergence of the generalized continued fraction expansions for type $\mathsf S_3$ numbers. In Section \ref{transc}, we prove the transcendence of type $\mathsf S_3$ numbers. In Section \ref{liou}, we use Baker's Theorem to prove that they are either $S$-numbers or $T$-numbers.
In Section \ref{exam1}, we show some inconclusive calculations on the partial quotients of the regular continued fraction of a specific type $\mathsf S_3$ number. \section{Convergence} \label{conv} We start from the analytic theory of continued fractions \cite{wall} to prove the convergence of \eqref{thab} when $(a_n)$ and $(b_n)$, $n\geq 0$, are sequences in a finite alphabet. Let $\overline{\R}^+=[0,\infty]$ denote the extended positive real axis, with the understanding that $a+\infty=\infty$, for any $a\in \overline{\R}^+$; $a\cdot \infty=\infty$, if $a>0$, $0\cdot \infty = 0$ and $a/\infty=0$, if $a\in \mathbb R$. We do not need to define $\infty/\infty$. Given the sequences $(a_k)$ and $(b_k)$ of non-negative (positive) integers, consider the Möbius transforms $t_k:\overline{\R}^+\to \overline{\R}^+$ \[ t_k(w)= a_k+\frac{b_k}{w} \ ,\; k\in \mathbb N \ ,\] and their compositions \[ t_it_j(w)= t_i(a_j+b_j/w)=a_i+b_i/(a_j+b_j/w) \ .\] This set of Möbius transforms is closed under composition and forms a semigroup. It is useful to consider the natural correspondence between Möbius transformations and $2\times 2$ matrices: \[ M_k=\begin{pmatrix} a_k & b_k \\ 1 & 0 \end{pmatrix} \ .\] Taking the positive real cone ${\cal C}_2=\{(x,y) \ |\ x\geq 0\ , \ y\geq 0\ , x+y>0\}$ with the equivalence $(x,y) \sim \lambda (x,y)$ for every $\lambda>0$, we have a homomorphism between the semigroup of Möbius transforms, under composition, acting on $\overline{\R}^+$ and the algebra of matrices above (which are all invertible) acting on ${\cal C}_2/\sim$. Assume the limit \[ \lim_{n\to \infty} t_0t_1t_2\cdots t_n(0) \] exists as a positive real number; it is then determined by the sequences $(a_n)$ and $(b_n)$, $n\geq 0$. In this case, it is equal to $\lim_{n\to \infty} t_0t_1t_2\cdots t_{n-1}(\infty)$ as well, so that the initial point may be taken as $0$ or $\infty$ in the extended positive real axis.
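Numerically, the truncated compositions $t_0t_1\cdots t_n(0)$ are easy to evaluate backwards; a minimal Python sketch (the constant choice $a_n = b_n = 1$, which makes \eqref{thab} the regular continued fraction of the golden ratio, is my own illustrative assumption, not an example from the text):

```python
# Evaluate a_0 + b_0/(a_1 + b_1/(a_2 + ...)) truncated at depth n,
# working backwards through the Moebius transforms t_k(w) = a_k + b_k/w.

def gcf_value(a, b, n):
    """Truncated value of the generalized continued fraction, ending at a_n."""
    x = a[n]
    for k in range(n - 1, -1, -1):
        x = a[k] + b[k] / x
    return x

# Constant sequences a_n = b_n = 1 give 1 + 1/(1 + 1/(1 + ...)),
# whose value is the golden ratio (1 + sqrt(5))/2.
a = [1] * 64
b = [1] * 64
theta = gcf_value(a, b, 63)
```

At depth $63$ the truncation error is far below double precision, so `theta` agrees with $(1+\sqrt5)/2$ to machine accuracy.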
In terms of matrix multiplication, we have \[ M_0M_1M_2\cdots M_n \begin{pmatrix} 0 \\ 1 \end{pmatrix} \sim M_0M_1M_2\cdots M_{n-1} \begin{pmatrix} 1 \\ 0 \end{pmatrix} \] in ${\cal C}_2$. Define $p_{-1}=1$, $q_{-1}=0$, $p_0=a_0$, $q_0=1$ and \[ \begin{pmatrix} p_n & b_np_{n-1} \\ q_n & b_nq_{n-1} \end{pmatrix} \stackrel{{\rm def}}{=} M_0M_1M_2 \cdots M_n \ ,\ n\geq 0 \ . \] We have the following second order recursive formulas for $(p_n,q_n)$: \begin{align} \label{recur} & p_{n+1}=a_{n+1}p_n+b_np_{n-1} \\ & q_{n+1}=a_{n+1}q_n+b_nq_{n-1} \nonumber \end{align} and the determinant formula \begin{equation} \label{detAn} p_nq_{n-1}-p_{n-1}q_n = (-1)^{n-1} b_0\cdots b_{n-1} \ . \end{equation} We recall the series associated with a continued fraction \cite{wall}: \begin{lema} Let $(q_n)$ denote the sequence of denominators given in \eqref{recur} for the continued fraction $\theta(a,b)$. \if 0 \begin{equation} \label{basecf} \frac{1}{1+\frac{b_1}{a_2+\frac{b_2}{a_3+\frac{b_3}{\ddots}}}} \ . \end{equation} \fi Let \begin{equation} \rho_k=-\frac{b_k q_{k-1}}{q_{k+1}} \ ,\; k \in \mathbb N \ .\label{assure_conv} \end{equation} Then \[ a_0+\frac{b_0}{a_1}\left(1+\sum_{k=1}^{n-1} \rho_1\rho_2\ldots \rho_k \right) = \frac{p_n}{q_n} \ , n\geq 1 \ . \] \end{lema} \begin{proof} For $n=1$, the sum is empty, $p_1=a_1a_0+b_0$ and $q_1=a_1$; the equality holds. Consider the telescopic sum, for $n\geq 1$, \[ \frac{p_1}{q_1}+\sum_{k=1}^{n-1} \left( \frac{p_{k+1}}{q_{k+1}} - \frac{p_k}{q_k}\right) = \frac{p_n}{q_n} \ . \] From \eqref{detAn}, \[ \left( \frac{p_{k+1}}{q_{k+1}} - \frac{p_k}{q_k}\right) = (-1)^k \frac{b_0b_1\cdots b_k}{q_{k+1}q_k}\ .\] Now $\frac{b_0}{a_1}\rho_1=-\frac{b_0}{q_1}\frac{b_1q_0}{q_2} =-\frac{b_0b_1}{q_1q_2}=\frac{p_2}{q_2}-\frac{p_1}{q_1}$. Moreover \[ \frac{p_3}{q_3}-\frac{p_2}{q_2}= \frac{b_0b_1b_2}{q_3q_2} = \frac{b_0}{q_1} \left(-\frac{b_1q_0}{q_2}\right)\left(-\frac{b_2q_1}{ q_3 } \right) = \frac{b_0}{a_1} \rho_1\rho_2 \ .
\] Multiplicative cancelling provides the argument to deduce the formula \[ \frac{p_{k+1}}{q_{k+1}}-\frac{p_k}{q_k}= \frac{b_0}{a_1} \rho_1\rho_2\ldots \rho_k \] by induction, finishing the proof. \end{proof} Even though $b_n>1$ may occur in \eqref{thab}, note that we still have $q_{n+1}\geq 2^{(n-1)/2}$. Indeed $q_0=1$, $q_1=a_1$ and $q_2=a_2q_1+b_1q_0>1$. Then, by induction, since $a_n\geq 1$ and $b_n\geq 1$ for all $n\in \mathbb N$, \[ q_{n+1}=a_{n+1}q_n+b_nq_{n-1} \geq q_n+q_{n-1}\geq 2^{n/2-1}+\frac{2^{(n-1)/2}}2> 2^{(n-1)/2} \ .\] \begin{lema} If $(a_n)$ and $(b_n)$ are sequences on a finite alphabet $\mathscr A\subset [\alpha,\beta] \subset [1,\infty)$, then the generalized continued fraction \eqref{thab} converges. \label{dois} \end{lema} \begin{proof} It follows from \eqref{recur} that $(q_n)$, $n\geq 1$, is increasing, and \[ |\rho_k| = \left| \frac{b_kq_{k-1}}{q_{k+1}}\right| = \left(1+ \frac{a_{k+1}q_k}{b_kq_{k-1}}\right)^{-1}< (1+\alpha/\beta)^{-1} \ . \] Thus the series with general term $\rho_1\cdots \rho_k$ is bounded by a convergent geometric series. \end{proof} \begin{lema} Let $(q_n)$ denote the sequence of denominators given in \eqref{recur} for a generalized continued fraction, with $(a_n)$ and $(b_n)$ sequences in a finite alphabet $\mathscr A \subset [\alpha,\beta]\subset [1,\infty)$. Then $q_n^{1/n}$ is bounded. \label{logq_nlim} \end{lema} \begin{proof} From \eqref{recur}, $q_1=a_1$, and $q_{n+1}<(a_{n+1}+b_n)q_n \leq (2\beta)q_n$. Hence $q_n^{1/n}\leq 2\beta \sqrt[n]{a_1}\leq 2\beta^{3/2}$, where the last inequality follows from $a_1\leq \beta$. \end{proof} \section{Transcendence} \label{transc} We now specialize the study of generalized continued fractions for sequences $(a_n)$ and $(b_n)$ which are generated by primitive substitutions. These sequences provide a wealth of examples of {\em stammering sequences}, defined below, following \cite{acta}. Let us introduce some notation. The set ${\cal A}$ is called an alphabet.
A word $w$ on ${\cal A}$ is a finite or infinite sequence of letters in ${\cal A}$. For finite $w$, $|w|$ denotes the number of letters composing $w$. Given a natural number $k$, $w^k$ is the word obtained by $k$ concatenated repetitions of $w$. Given a rational number $r>0$, which is not an integer, $w^r$ is the word $w^{\lfloor r\rfloor} w'$, where $\lfloor r\rfloor$ denotes the integer part of $r$ and $w'$ is a prefix of $w$ of length $\lceil (r-\lfloor r\rfloor)|w|\rceil$, where $\lceil q\rceil=\lfloor q\rfloor +1$ is the upper integer part of $q$. Note that if $(a_n)$ and $(b_n)$ are sequences on ${\cal A}$, then $(a_n,b_n)$ is a sequence in ${\cal A}\times {\cal A}$, which is also an alphabet. A sequence ${\bf a}=(a_n)$ has the {\em stammering property} if it is not a periodic sequence and, for some $r>1$, there exists a sequence of finite words $(w_n)_{n\in \mathbb N}$, such that \begin{itemize} \item[a)] for every $n\in \mathbb N$, $w_n^r$ is a prefix of ${\bf a}$; \item[b)] $(|w_n|)$ is increasing. \end{itemize} We say, more briefly, that $(a_n)$ is a stammering sequence with exponent $r$. It is clear that if $(a_n)$ and $(b_n)$ are both stammering with exponents $r$ and $s$ respectively, then $(a_n,b_n)$ is also stammering with exponent $\min\{r,s\}$. \begin{lema} If $u$ is a substitution sequence on a finite alphabet $\mathscr A$, then $u$ is stammering. \label{fact1} \end{lema} \begin{proof} Denote the substitution map by $\xi:\mathscr A\to \mathscr A^+$. Since $\mathscr A$ is a finite set, there are $k\geq 1$ and $\alpha \in \mathscr A$ such that $\alpha$ is a prefix of $\xi^k(\alpha)$, and $u=\lim_{n\to \infty} \xi^{kn}(\alpha)$. Moreover, there is a least finite $j$ such that $\alpha$ occurs a second time in $\xi^{jk}(\alpha)$. Therefore $u$ is stammering with exponent $r\geq 1+\frac{1}{|\xi^{jk}(\alpha)|-1}$. \end{proof} \begin{proof}[Proof of Theorem \ref{teo1}] From Lemma \ref{fact1}, $(a_n)$ and $(b_n)$ are stammering sequences with exponent $r>1$.
Hence $\theta(a,b)$ has infinitely many {\em good} quadratic approximations. Let $(w_n)\in ({\cal A}\times {\cal A})^*$ be a sequence of words of increasing length characterizing $(a_n,b_n)$ as a stammering sequence with exponent $r>1$. Consider $\psi_k(a,b)$ given by \[ \psi_k(a,b)= c_0+\frac{d_0}{c_1+\frac{d_1}{c_2+\frac{d_2}{c_3+\frac{d_3}{\ddots}}}} \ , \] where $c_j=a_j$, $d_j=b_j$, for $0\leq j<k$, and $c_{j}=c_{j\pmod{k}}$ and $d_{j}=d_{j\pmod{k}}$, for $j\geq k$. $\psi_k$ is a root of the quadratic equation \[ q_{k-1}x^2+(q_k-p_{k-1})x-p_k=0 \ , \] whose coefficients might have a common factor. Arguing as in Theorem 1 from \cite{acta}, we choose $k$ from the subsequence of natural numbers given by $|w_n^r|$. Lemma \ref{logq_nlim} allows us to conclude that the generalized continued fraction $\theta(a,b)$ is transcendental if both $(a_n)$ and $(b_n)$ are stammering sequences with exponent $r>1$. \end{proof} \section{Quest on Liouville numbers} \label{liou} We address the question of Mahler's classification for type $\mathsf S_3$ numbers. The statement of Baker's Theorem we quote uses a measure of transcendence introduced by Koksma, equivalent to Mahler's, which we explain briefly, following \cite{plms}, Section 2. Let $d\geq 1$ be an integer and let $\xi$ be a real number. Denote by $P(X)$ an arbitrary polynomial with integer coefficients, and $H(P)=\max_{0\leq k\leq j} \{|a_k|\ :\ P(X)=a_0+a_1X+\cdots+a_jX^j\}$ is the height of the polynomial $P$. Let $w_d(\xi)$ be the supremum of the real numbers $w$ such that the inequality \[ 0< |P(\xi)|\leq H(P)^{-w} \] is true for infinitely many polynomials $P(X)$ with integer coefficients and degree at most $d$. Koksma introduced $w_d^*(\xi)$ as the supremum of the real numbers $w^*$ such that \[ 0< |\xi-\alpha| \leq H(\alpha)^{-w^*-1} \] is true for infinitely many algebraic numbers $\alpha$ of degree at most $d$, where $H(\alpha)$ is the height of the minimal polynomial with integer coefficients which vanishes at $\alpha$.
Let $w(\xi) = \lim_{d\to \infty} \frac{w_d(\xi)}{d}$, then $\xi$ is called \begin{itemize} \item an $A$-number if $w(\xi)=0$; \item an $S$-number if $0<w(\xi)<\infty$; \item a $T$-number if $w(\xi)=\infty$, but $w_d(\xi)<\infty$ for every integer $d\geq 1$; \item a $U$-number if $w(\xi)=\infty$ and $w_d(\xi)=\infty$ for some $d\geq 1$. \end{itemize} It was shown by Koksma that $w^*_d$ and $w^*$ provide the same classification of numbers. Liouville numbers are precisely those for which $w_1(\xi)=\infty$; they are $U$-numbers of type 1. \begin{teob}[Baker] \label{baker} Let $\xi$ be a real number and $\epsilon >0$. Assume there is an infinite sequence of irreducible rational numbers $(p_n/q_n)_{n\in \mathbb N}$, $(p_n,q_n)=1$, ordered such that $2\leq q_1< q_2< \cdots $ satisfying \[ \left| \xi - \frac{p_n}{q_n} \right| < \frac 1{q_n^{2+\epsilon} } \ . \] Additionally, suppose that \[ \limsup_{n\to \infty} \frac{\log q_{n+1}}{\log q_n}<\infty \ , \] then there is a real number $c$, depending only on $\xi$ and $\epsilon$, such that \[ w_d^* (\xi) \leq \exp\exp ( cd^2) \ . \] for every $d\in \mathbb N$. Consequently, $\xi$ is either an $S$-number or a $T$-number. \end{teob} \begin{proof}[Proof of Theorem \ref{teo2}] We note that the convergents $p_n/q_n$ of type $\mathsf S_3$ numbers need not be irreducible. Let us write $d_n=(p_n,q_n)$. By eq. \eqref{detAn}, $d_n=b_0\ldots b_{n-1}$. Recall that, for primitive substitutions in ${\cal A}=\{\alpha,\beta\}\subset \mathbb N$, there is a frequency $\nu$, which is uniform in the sequence $({\mathbf b})$ \cite{queffe}, with which $b_k=\beta$. Thus, $d_n \approx \beta^{\nu n} \alpha^{(1-\nu)n}$ for large $n$.
If there is a number $\theta$ such that, for every $n\in \mathbb N$, $d_n < \left(\frac{q_n}{d_n}\right)^\theta$, then \[ 0< \left| \xi -\frac{p_n/d_n}{q_n/d_n}\right| < \frac 1{q_n^{2+\epsilon}}= \frac 1{d_n^{2+\epsilon} (q_n/d_n)^{2+\epsilon}} \ .\] In this case, from the estimates of $q_n$ and $d_n$, \[ \limsup_{n\to \infty} \frac{\log(q_{n+1}/d_{n+1})}{\log(q_n/d_n)} <\infty \ .\] We would then conclude from Theorem \ref{baker} that type $\mathsf S_3$ contains only $S$-numbers or $T$-numbers, and no Liouville numbers. From the analysis of Section \ref{conv}, keeping its notations, \[ (-1)^n \frac{d_n}{q_nq_{n-1}} = \frac{b_0}{a_1} \rho_1 \ldots \rho_{n-1} \ . \] Now $|\rho_k|=\left(1+\frac{a_{k+1}q_k}{b_kq_{k-1}}\right)^{-1}$, and since $q_k\leq 2\beta q_{k-1}$, we conclude that \[ |\rho_k| > \left( 1+ \frac{2\beta^2}{\alpha}\right)^{-1} \ . \] Therefore, recalling that $q_{n-1}\geq 2^{(n-3)/2}$, \[ \frac{d_n}{q_n} > q_{n-1} (1+2\beta^2/\alpha)^{-n+1} \frac{b_0}{a_1} \quad \Rightarrow\quad \frac{q_n}{d_n}< (1+2\beta^2/\alpha)^{n-1} 2^{-(n-3)/2}\frac{\beta}{\alpha} \ . \] Therefore, we want to determine whether the inequality \[ d_n < \left(\frac{q_n}{d_n}\right)^\theta \] has a solution $\theta$; considering that $d_n \approx \beta^{\nu n} \alpha^{(1-\nu)n}$, we obtain the inequality \[ \beta^{\nu n} \alpha^{(1-\nu)n} < (1+2\beta^2/\alpha)^{\theta(n-1)}2^{-\theta(n-3)/2} \frac{\beta}{\alpha} \ .\] For large $n$, it is sufficient to solve \[ \beta^{\nu}\alpha^{1-\nu} < \frac 1{2^{\theta/2}}\left(1+\frac{2\beta^2}{\alpha}\right)^\theta \ ,\] which clearly has the solution $\theta=1$, since $\alpha<\beta$ and $0<\nu<1$, implying $\beta^\nu \alpha^{1-\nu} < \beta$. We conclude that type $\mathsf S_3$ consists only of $S$-numbers or $T$-numbers.
\end{proof} \section{Example: partial quotients of a corresponding regular continued fraction} \label{exam1} We now examine one specific example: a generalized continued fraction associated with the period doubling sequence. The period doubling sequence, which we denote by $\omega$, is the fixed point of the substitution $\xi(\alpha)=\alpha\beta$ and $\xi(\beta)=\alpha\alpha$ on the two-letter alphabet $\{\alpha,\beta\}$. It is also the limit of a sequence of {\em foldings}, and called a {\em folded sequence} \cite{allsha}. We make some observations, and pose one question, about the partial quotients of the regular continued fraction of the real number represented by the generalized continued fraction \eqref{thab} when both sequences $({\mathbf a})$ and $({\mathbf b})$ are given by the period doubling sequence: $a_n=b_n=\omega_n$. We choose to view the period doubling sequence as the limit of folding operations. The algebra of matrices with fixed determinant will play a role. A folding is a mapping \begin{align*} {\cal F}_p: &{\cal A}^*\to {\cal A}^*\\ & w\mapsto wp\tilde{w} \end{align*} where $\tilde{w}$ equals the word $w$ reversed: if $w=a_1\ldots a_n$, $a_i\in {\cal A}$, then $\tilde{w}=a_n\ldots a_1$, and $p\in {\cal A}^*$. It is clear that \[ \omega= \lim_{n\to \infty} ({\cal F}_a\circ {\cal F}_b)^n (a) \ ,\] see also \cite{allsha}, where the limit is understood in the product topology (of the discrete topology) in ${\cal A}^\mathbb N\cup {\cal A}^*$. Let $\theta$ denote the number whose generalized continued fraction is obtained from the substitution of the letters $\alpha$ and $\beta$ by \[ A=\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \ , \quad B= \begin{pmatrix} 3 & 3 \\ 1 & 0 \end{pmatrix} \] respectively. It corresponds to the choice $\{1,3\}$ for the alphabet where the sequences $({\mathbf a})$ and $({\mathbf b})$ take values.
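The value of $\theta$ and the leading partial quotients of its regular continued fraction can also be approximated directly from the recursions \eqref{recur}; a minimal Python sketch (integer arithmetic throughout; the truncation depth $64$ is an arbitrary choice of mine):

```python
# Period doubling sequence: fixed point of a -> ab, b -> aa, with a = 1, b = 3.
w = "a"
while len(w) < 66:
    w = "".join({"a": "ab", "b": "aa"}[c] for c in w)
omega = [1 if c == "a" else 3 for c in w]

# Convergents p_n/q_n of the generalized continued fraction with
# a_n = b_n = omega_n, via p_{n+1} = a_{n+1} p_n + b_n p_{n-1} (same for q),
# starting from p_{-1} = 1, q_{-1} = 0, p_0 = a_0, q_0 = 1.
p_prev, q_prev = 1, 0
p, q = omega[0], 1
for n in range(64):
    p, p_prev = omega[n + 1] * p + omega[n] * p_prev, p
    q, q_prev = omega[n + 1] * q + omega[n] * q_prev, q

# Leading partial quotients of the regular continued fraction of p/q,
# extracted by the Euclidean algorithm; since p/q approximates theta very
# closely, these agree with the leading partial quotients of theta.
quotients = []
for _ in range(4):
    d, r = divmod(p, q)
    quotients.append(d)
    p, q = q, r
```

The exact convergents stabilise quickly, and the resulting leading partial quotients of $\theta$ can be compared with the computations below.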
Now we use Raney transducers \cite{raney} to describe the computation of some partial quotients of the regular continued fraction converging to $\theta$. A transducer ${\mathscr T}=(Q,\Sigma,\delta,\lambda)$, or two-tape machine, is defined by a set of states $Q$, an alphabet $\Sigma$, a transition function $\delta: Q\times \sigma \to Q$, and an output function $\lambda: Q\times \sigma \to \Sigma^*$, where $\sigma\subset \Sigma$ (a more general definition is possible \cite{allsha}, but this is sufficient for our purposes). The states of Raney's transducers are column and row (or doubly) balanced matrices over the non-negative integers with a fixed determinant. A matrix $\begin{pmatrix} a & b \\ c & d\end{pmatrix}$ is column balanced if $(a-b)(c-d)<0$ \cite{raney}. Figure 1 shows the Raney transducer for determinant 3 doubly balanced matrices. In the text, we use the abbreviations: $\beta_1=\begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix}$, $\beta_2=\begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}$ and $\beta_3=\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$. Then $Q=\{\beta_1,\beta_2,\beta_3\}$, $\Sigma=\{L,R\}$, where \[ R=\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \ , \quad L=\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \ ,\] the transition function $\delta$ and the output function are indicated in the graph. For instance, if $RL^2R$ is the input word on state $\beta_2$, the output word is $L^2R^4$ and the final state is $\beta_1$. Any infinite word in $\Sigma^\mathbb N$ can be read by ${\mathscr T}$, but not every finite word can be fully read by ${\mathscr T}$; for instance, $L^{11}$ in state $\beta_1$ will produce $L^3$, but $L^2$ will stay in the reading queue in state $\beta_1$.
Algebraically, these two examples are written as \begin{align*} \beta_2 RL^2R & = L \beta_3 LR = L LR \beta_1R = L^2 RR^3 \beta_1 = L^2R^4 \beta_1 \\ \beta_1 L^{11} &= L^3 \beta_1 L^2 \end{align*} As explained in \cite{vdP}, Theorem 1 in \cite{list} or even Theorem 5.1 in \cite{raney}, one may use the transducer ${\mathscr T}$ to commute the matrices $A$ and $B$ to get an approximation to the continued fraction of $\theta$. \begin{figure} \caption{Transducer $\mathscr T$.} \label{trans1} \end{figure} Introducing the matrix $J=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, we note the following relations: $A=RJ$, $B=\beta_1 RJ$, $\beta_2 J=J\beta_1$. The homomorphism between ${\cal A}^*$ and the semigroup of matrices generated by $\{A,B\}$, as Möbius transforms, is the basis for discovering some curious properties of the regular continued fraction of $\theta$. The sequence of matrices \[ ABA,\ ABAAABA,\ ABAAABABABAAABA \ ,\] which corresponds to ${\cal F}_b(a)$, ${\cal F}_a\circ{\cal F}_b(a)$, ${\cal F}_b\circ {\cal F}_a\circ {\cal F}_b(a)$, yields the beginning of the regular continued fraction expansion of $\theta$. A step by step calculation shows the basic features in the use of ${\mathscr T}$: \begin{align*} ABAAABA &= RJ(\beta_1 RJ) RJRJRJ(\beta_1 RJ)RJ \\ &= R \beta_2 (JRJ)R(JRJ)R\beta_2 (JRJ)RJ \\ &= R \beta_2 LRLR\beta_2 LRJ \\ & = R L^3 \beta_2 RLR L^3 \beta_2 RJ \\ & =RL^3 L \beta_3 RL^3 \beta_2 RJ \\ & = RL^4 RL \beta_2 L^3 \beta_2 RJ \\ &= RL^4 RL L^9 \beta_2^2 RJ = RL^4 RL^{10} \beta_2^2 RJ \ . \end{align*} This means that the continued fraction of $\theta$ begins as $[1;4,1,k,\cdots]$, with $k\geq 10$.
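The rewritings above can be mechanised; a minimal Python sketch (the greedy factoring criterion — peel $R$ or $L$ off the left as long as the remainder stays nonnegative — is my own paraphrase of the reduction used here, and the test matrices are the states $\beta_1$, $\beta_2$, $\beta_3$ defined above):

```python
# Raney-style reduction: states and letters are 2x2 nonnegative integer
# matrices of determinant 3 (states) and 1 (letters).  Feeding an input
# letter multiplies on the right; output letters are peeled off on the
# left while the remainder stays nonnegative.
B1 = ((3, 0), (0, 1))
B2 = ((1, 0), (0, 3))
B3 = ((2, 1), (1, 2))
L = ((1, 0), (1, 1))
R = ((1, 1), (0, 1))

def mul(m, n):
    """2x2 integer matrix product."""
    return tuple(
        tuple(sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def factor_left(m):
    """Return ("R", m') if m = R m', or ("L", m') if m = L m', else None.

    Both cases cannot hold at once: that would force equal rows and
    determinant 0, while every matrix here has determinant 3.
    """
    (a, b), (c, d) = m
    if a >= c and b >= d:
        return "R", ((a - c, b - d), (c, d))
    if c >= a and d >= b:
        return "L", ((a, b), (c - a, d - b))
    return None

def transduce(state, word):
    """Read `word` (string over {L, R}); return (output word, final matrix)."""
    out = []
    m = state
    for ch in word:
        m = mul(m, L if ch == "L" else R)
        while (step := factor_left(m)) is not None:
            out.append(step[0])
            m = step[1]
    return "".join(out), m
```

For instance, `transduce(B2, "RLLR")` returns the output $L^2R^4$ with final state $\beta_1$, matching the rewriting $\beta_2 RL^2R = L^2R^4\beta_1$, and reading $L^{11}$ from $\beta_1$ outputs $L^3$ while $\beta_1 L^2$ stays pending.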
Writing $T=ABAAABA$, upon the next folding \begin{align*} TBT &= RL^4RL^{10} \beta_2^2 RJ (\beta_1RJ) RL^4RL^{10} \beta_2^2 RJ \\ &= RL^4 RL^{10} \beta_2^2 R\beta_2 LRL^4RL^{10} \beta_2^2 RJ \\ &= RL^4RL^{10} \beta_2^2RL^3L\beta_3 L^3 RL^{10} \beta_2^2 RJ \\ & = RL^4R L^{10} \beta_2^2 RL^4LR \beta_1 L^2R L^{10} \beta_2^2 RJ \\ &= RL^4R L^{10} \beta_2^2 RL^5 RRL^2 \beta_2 L^{10} \beta_2^2 RJ \\ &=RL^4RL^{10} \beta_2 L \beta_3L^4 R^2 L^{32} \beta_2^3 RJ \\ &= RL^4 R L^{13} \beta_2 LR \beta_1 L^3 R^2L^{32} \beta_2^3 RJ \\ & = RL^4 RL^{16} \beta_2 R L \beta_1 R^2 L^{32} \beta_2^3 RJ \\ &= RL^4 RL^{17} \beta_3 R^6L^{10} \beta_1 L^2 \beta_2^3 RJ \\ &= RL^4 RL^{17} RL \beta_2 R^5L^{10} \beta_1 L^2\beta_2^3 RJ \\ &= RL^4 RL^{17} RL R LR^2 \beta_1 L^9 \beta_1 L^2 \beta_2^3 RJ \\ &= RL^4 RL^{17} RLRLR^2L^3 \beta_1^2 L^2 \beta_2^3 RJ \end{align*} This shows that the regular continued fraction expansion of $\theta$ begins as $[1;4,1,17,1,1,1,1,2,a,\cdots]$, with $a\geq 3$. Instructions for the transducer accumulate on the right of this factorization, to be read at the next folding. Note that only the states $\beta_1$ and $\beta_2$ will remain, since a transition from $\beta_3$ is always possible given any finite word in $\Sigma^*$. Predicting inductively whether the large power near the beginning, $L^{17}$, keeps increasing upon (sufficiently many) repetitions of the folding is out of reach. This observation raises the question: are the partial quotients of the regular continued fraction of $\theta$ bounded? Similar calculations have been done with a simpler choice of the alphabet, namely $\{1,2\}$, where the transducer has only two states (doubly balanced matrices with determinant 2). \end{document}
\begin{document} \begin{center} \textbf{\Large{Investigating spatial scan statistics for multivariate functional data}} \\ Camille Frévent$^1$, Mohamed-Salem Ahmed$^{1}$, Sophie Dabo-Niang$^2$ and Michaël Genin$^1$ \\ $^1$Univ. Lille, CHU Lille, ULR 2694 - METRICS: Évaluation des technologies de santé et des pratiques médicales, F-59000 Lille, France.\\ $^2$Laboratoire Paul Painlevé UMR CNRS 8524, INRIA-MODAL, University of Lille. \end{center} \begin{center} \rule{1\linewidth}{.9pt} \end{center} \noindent \textbf{\Large Abstract} \\ \noindent This paper introduces new scan statistics for multivariate functional data indexed in space. The new methods are derived from a MANOVA test statistic for functional data, an adaptation of the Hotelling $T^2$-test statistic, and a multivariate extension of the Wilcoxon rank-sum test statistic. In a simulation study, the latter two methods show very good performance, and the adaptation of the functional MANOVA also performs well for a normal distribution. Our methods detect spatial clusters more accurately than an existing nonparametric functional scan statistic. Lastly, we apply the methods to multivariate functional data to search for spatial clusters of abnormal daily concentrations of air pollutants in the north of France in May and June 2020.\\ \noindent \textbf{Keywords: } Cluster detection, multivariate functional data, spatial scan statistics \\ \begin{center} \rule{1\linewidth}{.9pt} \end{center} \noindent \section{Introduction} Spatial cluster detection has been studied for many years. The goal is usually to develop new tools capable of detecting aggregations of spatial sites that behave ``differently'' from the other sites. In particular, spatial scan statistics detect statistically significant spatial clusters with a scanning window and without any pre-selection bias. This approach was originally proposed by \cite{spatialdisease} and \cite{spatialscanstat} in the cases of Bernoulli and Poisson models.
They present a method based on the likelihood ratio and Monte-Carlo testing to detect significant clusters of various sizes and shapes. Following on from Kulldorff's initial work, several researchers have adapted spatial scan statistics to other spatial data distributions, such as ordinal \citep{jung2007spatial}, normal \citep{normalkulldorff}, exponential \citep{huang2007spatial}, and Weibull models \citep{bhatt2014spatial}. These methods have been applied in many different fields such as epidemiology \citep{epidemio1, epidemio2, genin2020fine}, environmental sciences \citep{social2, social1}, and geology \citep{geology}. \\ \noindent Thanks to progress in sensing and data storage capacity, data are increasingly being measured continuously over time. This led to the introduction of functional data analysis (FDA) by \cite{ramsaylivre}. A considerable amount of work has gone into adapting classical statistical methods to the univariate functional framework, such as principal component analysis \citep{fpc_boente, fpc_berrendero} or regression \citep{reg_cuevas, reg_ferraty, reg_chiou}, but also to the multivariate functional one \citep{local_functional, clustering_multi}. \\ \noindent In some research fields, such as environmental surveillance, pollution sensors are deployed in a geographical area. In a context where these sensors simultaneously measure the concentrations of many pollutants at regular intervals over a long period of time, environmental experts may search for environmental black-spots, which can be defined as geographical areas characterized by elevated concentrations of pollutants. For this purpose three different approaches can be considered.
The simplest one consists in summarizing the information by averaging each variable over time and applying a parametric multivariate spatial scan statistic \citep{a_multivariate_gaussian} or the nonparametric one proposed by \cite{nonparam_multi}, but this could lead to a huge loss of information when the data are measured over a long time period. Another solution could be to apply a spatial scan statistic for univariate functional data to each variable \citep{wilco_cucala, notre_fonctionnel}. However, this does not take into account the correlations between the variables. A relevant solution consists in using spatial scan statistics for multivariate functional data. According to the authors, the nonparametric spatial scan statistic for functional data proposed by \cite{wilco_cucala} could be extended to multivariate processes, although it has never been evaluated in this context. Moreover, to our knowledge, no parametric scan statistic for multivariate functional data has been proposed. Thus we will define new spatial scan statistics for multivariate functional data based on statistical tests for comparing multivariate functional samples. Recently, \cite{manovafonctional} and \cite{two_s} respectively developed a MANOVA test statistic and a functional Hotelling $T^2$-test statistic for multivariate functional data. We also propose to consider the multivariate extension of the Wilcoxon rank-sum test developed by \cite{oja} as a pointwise test statistic. Using these statistics, we will adapt to the multivariate functional framework the parametric and the distribution-free spatial scan statistics for functional data proposed by \cite{notre_fonctionnel}, and we will also investigate a new multivariate functional method based on the ranks of the observations at each time. \\ \noindent This paper develops three new spatial scan statistics for multivariate functional data.
Section \ref{sec:method} describes the parametric multivariate functional scan statistic, the multivariate version of the distribution-free functional spatial scan statistic proposed by \cite{notre_fonctionnel}, and a new rank-based spatial scan statistic for multivariate functional data. In Section \ref{sec:simulation} the behaviours of our methods are investigated through a simulation study and compared to the method proposed by \cite{wilco_cucala}. The methods are applied to a real dataset in Section \ref{sec:realdata}. Finally, the paper concludes with a discussion in Section \ref{sec:discussion}. \section{Methodology} \label{sec:method} \subsection{General principle} \noindent Let $\{ X(t), \ t \in \mathcal{T} \}$ be a $p$-dimensional vector-valued stochastic process where $\mathcal{T}$ is an interval of $\mathbb{R}$. Let $s_1, \dots, s_n$ be $n$ non-overlapping locations of an observation domain $S \subset \mathbb{R}^2$ and $X_1, \dots, X_n$ be the observations of $X$ at $s_1, \dots, s_n$. Hereafter all observations are considered to be independent, which is a classical assumption in scan statistics. \noindent Spatial scan statistics aim at detecting spatial clusters and testing their significance. Hence, one tests a null hypothesis $\mathcal{H}_0$ (the absence of a cluster) against a composite alternative hypothesis $\mathcal{H}_1$ (the presence of at least one cluster $w \subset S$ presenting abnormal values of $X$). \noindent \cite{notre_fonctionnel} defined the notion of cluster in the univariate functional framework.
Their definitions can be easily extended to the multivariate functional context by defining a \textit{multivariate magnitude cluster} $w$ as follows: \begin{equation} \forall t \in \mathcal{T}, \ \mathbb{E}[X_i(t)\mid s_i\in w] = \mathbb{E}[X_i(t)\mid s_i\notin w] + \Delta(t), \end{equation} where $\Delta(t) = (\Delta_1(t), \dots, \Delta_p(t))^\top$, all $\Delta_i$ are of constant and identical signs, and there exists $i \in \llbracket 1 ; p \rrbracket$ such that $\Delta_i$ is non-zero over at least one sub-interval of $\mathcal{T}$. In the same way a \textit{multivariate shape cluster} can be defined as follows: \begin{equation} \forall t \in \mathcal{T}, \ \mathbb{E}[X_i(t)\mid s_i\in w] = \mathbb{E}[X_i(t)\mid s_i\notin w] + \Delta(t) \end{equation} where $\Delta(t) = (\Delta_1(t), \dots, \Delta_p(t))^\top$ and there exists $i \in \llbracket 1 ; p \rrbracket$ such that $\Delta_i$ is not constant almost everywhere. \\ \noindent Since the article of \cite{cressie}, a scan statistic has been defined as the maximum of a concentration index over a set of potential clusters $\mathcal{W}$. In the following, and without loss of generality, we focus on variable-size circular clusters \citep[as introduced by][]{spatialscanstat}. The set of potential clusters $\mathcal{W}$ is the set of discs centered on a location and passing through another one, with $|w|$ the number of sites in $w$: \begin{equation} \mathcal{W} = \{ w_{i,j} \ / \ 1 \le |w_{i,j}| \le \frac{n}{2}, \ 1 \le i,j \le n \}, \end{equation} where $w_{i,j}$ is the disc centered on $s_i$ that passes through $s_j$. Thus, a cluster cannot cover more than 50\% of the studied region, as recommended by \cite{spatialdisease}. Remark that other possibilities have been proposed in the literature, such as elliptical clusters \citep{elliptic}, rectangular clusters \citep{rectangular} or graph-based clusters \citep{cucala_graph}.
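As an illustration (our own sketch, with made-up coordinates), the set $\mathcal{W}$ of variable-size circular potential clusters can be enumerated directly from the pairwise distances:

```python
# Enumerate the discs w_{i,j} centred on s_i and passing through s_j, keeping
# those that contain between 1 and n/2 sites (illustrative coordinates).
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(20, 2))   # n = 20 hypothetical site locations
n = len(coords)
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)

potential_clusters = set()
for i in range(n):
    for j in range(n):
        w = frozenset(np.flatnonzero(dist[i] <= dist[i, j]))   # sites in the disc
        if 1 <= len(w) <= n / 2:
            potential_clusters.add(w)

print(len(potential_clusters), "distinct potential clusters")
```

Duplicated discs (different $(i,j)$ pairs giving the same set of sites) are collapsed by the `set`, which is only a computational convenience.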
\\ \noindent A parametric scan statistic is proposed in subsection \ref{subsec:param}, a distribution-free one is detailed in subsection \ref{subsec:dfree}, and a new rank-based scan statistic for multivariate functional data is developed in subsection \ref{subsec:max_wmw}. \subsection{A parametric spatial scan statistic for multivariate functional data} \label{subsec:param} \noindent In this subsection the process $X$ is supposed to take values in the Hilbert space $L^2(\mathcal{T}, \mathbb{R}^p)$ of $p$-dimensional vector-valued square-integrable functions on $\mathcal{T}$, equipped with the inner product $\langle X, Y \rangle = \int_{\mathcal{T}} X(t)^\top Y(t) \ \text{d}t$. \noindent \cite{notre_fonctionnel} proposed a parametric scan statistic for univariate functional data based on a functional ANOVA. A multivariate version of the ANOVA is the classical MANOVA Lawley–Hotelling trace test \citep{oja}. It was adapted by \cite{manovafonctional} to $L^2(\mathcal{T}, \mathbb{R}^p)$ processes: considering two groups $g_1$ and $g_2$ of independent random observations of two $p$-dimensional stochastic processes $X_{g_1}$ and $X_{g_2}$ taking values in $L^2(\mathcal{T}, \mathbb{R}^p)$, it tests the equality of the two mean vector-valued functions $\mu_{g_1}$ and $\mu_{g_2}$, where $\mu_{g_i}(t) = \mathbb{E}[X_{g_i}(t)] \in \mathbb{R}^p$, $i=1,2, \ t \in \mathcal{T}$. \\ \noindent For the cluster detection problem, the null hypothesis $\mathcal{H}_0$ (the absence of a cluster) can be defined by $\mathcal{H}_0: \forall w \in \mathcal{W}, \ \mu_{w} = \mu_{w^\mathsf{c}} = \mu_S$, where $\mu_w$, $\mu_{w^\mathsf{c}}$ and $\mu_S$ stand for the mean functions in $w$, outside $w$ and over $S$, respectively. The alternative hypothesis $\mathcal{H}_1^{(w)}$ associated with a potential cluster $w$ can be defined as $\mathcal{H}_1^{(w)}: \mu_{w} \neq \mu_{w^\mathsf{c}}$. Thus we can use the functional MANOVA to compare the mean functions in $w$ and $w^\mathsf{c}$.
\\ In fact, \cite{manovafonctional} presented adaptations of several MANOVA tests to the functional framework. However, the Wilks lambda test statistic, the Lawley-Hotelling trace test statistic and the Pillai trace test statistic presented in that article showed similar performances. In addition, they often outperformed, in terms of power, the projection-based tests proposed in the same article. Thus we decided to study the Lawley-Hotelling trace test for the cluster detection problem by using the following statistic: \begin{equation} \mathrm{LH}^{(w)} = \text{Trace}(H_w E_w^{-1}) \end{equation} where $$H_w = |w| \int_{\mathcal{T}} [\bar{X}_w(t) - \bar{X}(t)][\bar{X}_w(t) - \bar{X}(t)]^\top \ \text{d}t + |w^\mathsf{c}| \int_{\mathcal{T}} [\bar{X}_{w^\mathsf{c}}(t) - \bar{X}(t)][\bar{X}_{w^\mathsf{c}}(t) - \bar{X}(t)]^\top \ \text{d}t$$ and $$E_w = \sum_{j, s_j \in w} \int_{\mathcal{T}} [X_j(t) - \bar{X}_w(t)][X_j(t) - \bar{X}_w(t)]^\top \text{d}t + \sum_{j, s_j \in w^\mathsf{c}} \int_{\mathcal{T}} [X_j(t) - \bar{X}_{w^\mathsf{c}}(t)][X_j(t) - \bar{X}_{w^\mathsf{c}}(t)]^\top \text{d}t$$ where $\bar{X}_{g}(t) = \frac{1}{|g|} \sum_{i, s_i \in g} X_i(t)$ are the empirical estimators of $\mu_g(t)$ ($g \in \{w, w^\mathsf{c}\}$) and $\bar{X}(t) = \frac{1}{n} \sum_{i = 1}^n X_i(t)$ is the empirical estimator of $\mu_S(t)$. \\ \noindent Now, $\mathrm{LH}^{(w)}$ can be considered as a concentration index and maximized over the set of potential clusters $\mathcal{W}$, which results in the following definition of the parametric multivariate functional spatial scan statistic (PMFSS): \begin{equation} \Lambda_{\text{PMFSS}} = \underset{w \in \mathcal{W}}{\max} \ \mathrm{LH}^{(w)}. \end{equation} The potential cluster for which this maximum is attained, namely the most likely cluster (MLC), is \begin{equation} \text{MLC} = \underset{w \in \mathcal{W}}{\arg \max} \ \mathrm{LH}^{(w)}.
\end{equation} \subsection{A distribution-free spatial scan statistic for multivariate functional data} \label{subsec:dfree} \cite{notre_fonctionnel} proposed a distribution-free spatial scan statistic for univariate functional data by combining the distribution-free scan statistic for non-functional data proposed by \cite{a_distribution_free}, which relies on a Student's t-test, with the globalization of a pointwise test over time \citep{hd_manova}. \\ Very recently, \cite{two_s} proposed a version of this pointwise test for $p$-dimensional functional data ($p \ge 2$) to compare the mean functions of $X$ in two groups. \\ \noindent We suppose that for each time $t$, $\mathbb{V}[X_i(t)] = \Sigma(t,t)$ for all $i \in \llbracket 1 ; n \rrbracket$, where $\Sigma$ is a $p \times p$ covariance matrix function. \\ \noindent Thus, as previously, in the context of cluster detection, the null hypothesis $\mathcal{H}_0$ can be defined as follows: $\mathcal{H}_0: \forall w \in \mathcal{W}, \ \mu_{w} = \mu_{w^\mathsf{c}} = \mu_S$, where $\mu_w$, $\mu_{w^\mathsf{c}}$ and $\mu_S$ stand for the mean functions in $w$, outside $w$ and over $S$, respectively. The alternative hypothesis $\mathcal{H}_1^{(w)}$ associated with a potential cluster $w$ can be defined as follows: $\mathcal{H}_1^{(w)}: \mu_{w} \neq \mu_{w^\mathsf{c}}$.
Next, \cite{two_s} proposed to compare the mean function $\mu_w$ in $w$ with the mean function $\mu_{w^\mathsf{c}}$ in $w^\mathsf{c}$ by using the following statistic: $$ T_{n,\text{max}}^{(w)} = \underset{t \in \mathcal{T}}{\sup} \ T_n(t)^{(w)}$$ where $T_n(t)^{(w)}$ is the pointwise Hotelling $T^2$-test statistic $$ T_n(t)^{(w)} = \frac{|w| |w^\mathsf{c}|}{n} (\bar{X}_w(t) - \bar{X}_{w^\mathsf{c}}(t))^\top \hat{\Sigma}(t,t)^{-1} (\bar{X}_w(t) - \bar{X}_{w^\mathsf{c}}(t)).$$ Here $\bar{X}_w(t)$ and $\bar{X}_{w^\mathsf{c}}(t)$ are the empirical estimators of the mean functions defined in subsection \ref{subsec:param}, and $$\hat{\Sigma}(s,t) = \frac{1}{n-2} \left[ \sum_{i, s_i \in w} (X_i(s) - \bar{X}_w(s)) (X_i(t) - \bar{X}_w(t))^\top + \sum_{i, s_i \in w^\mathsf{c}} (X_i(s) - \bar{X}_{w^\mathsf{c}}(s)) (X_i(t) - \bar{X}_{w^\mathsf{c}}(t))^\top \right]$$ is the pooled sample covariance matrix function. \\ \noindent Then $T_{n,\text{max}}^{(w)}$ is considered as a concentration index and maximized over the set of potential clusters $\mathcal{W}$, yielding the following multivariate distribution-free functional spatial scan statistic (MDFFSS): \begin{equation} \Lambda_{\text{MDFFSS}} = \underset{w \in \mathcal{W}}{\max} \ T_{n,\text{max}}^{(w)}. \end{equation} The most likely cluster is therefore \begin{equation} \text{MLC} = \underset{w \in \mathcal{W}}{\arg \max} \ T_{n,\text{max}}^{(w)}. \end{equation} \subsection{A new rank-based spatial scan statistic for multivariate functional data} \label{subsec:max_wmw} \cite{oja} developed a $p$-dimensional ($p \ge 2$) extension of the classical Wilcoxon rank-sum test using multivariate ranks.
Following \cite{oja}'s definitions, we can define the notion of ``pointwise multivariate ranks'' as follows: \\ For each time $t \in \mathcal{T}$, the pointwise multivariate ranks are defined by $$R_i(t) = \frac{1}{n} \sum_{j=1}^{n} \text{sgn}(A_X(t)(X_i(t) - X_j(t)))$$ where $\text{sgn}(\cdot)$ is the spatial sign function defined as $$\begin{array}{ccccl} \text{sgn} &: & \mathbb{R}^p & \to & \mathbb{R}^p \\ & & x & \mapsto & \left\{ \begin{array}{cl} ||x||_2^{-1} x & \text{ if } x \neq 0 \\ 0 & \text{ otherwise} \end{array} \right. \\ \end{array}$$ and $A_X(t)$ is a pointwise data-based transformation matrix that makes the pointwise multivariate ranks behave as though they were spherically distributed in the unit $p$-sphere: $$ \frac{p}{n}\sum_{i=1}^{n} R_i(t) R_i(t)^\top = \frac{1}{n} \sum_{i=1}^{n} R_i(t)^\top R_i(t) I_p.$$ Note that this matrix can easily be computed using an iterative procedure. \\ \noindent Without loss of generality, \cite{oja} compared the cumulative distribution functions of real multivariate observations in two groups.
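Before these ranks enter a test statistic, here is a minimal numerical sketch of their computation on a time grid (ours, with simulated curves); for simplicity the transformation matrix $A_X(t)$ is taken to be the identity, whereas the actual procedure computes it iteratively:

```python
import numpy as np

def pointwise_ranks(X):
    """Pointwise multivariate ranks R_i(t) for curves X of shape (n, T, p).
    Simplifying assumption (ours): A_X(t) = I_p instead of the iteratively
    computed data-based transformation matrix."""
    diff = X[:, None] - X[None, :]                         # X_i(t) - X_j(t)
    norm = np.linalg.norm(diff, axis=-1, keepdims=True)
    sgn = np.divide(diff, norm, out=np.zeros_like(diff), where=norm > 0)
    return sgn.mean(axis=1)                                # average over j: (n, T, p)

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 51, 2))          # 20 sites, 51 time points, p = 2
Rk = pointwise_ranks(X)
print(Rk.shape, float(np.linalg.norm(Rk, axis=-1).max()))  # ranks lie in the unit ball
```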
In the context of multivariate functional data, their statistic can be considered as a pointwise test statistic for each time $t$: the pointwise multivariate extension of the Wilcoxon rank-sum test statistic is defined as $$W(t)^{(w)} = \frac{pn}{\sum_{i=1}^{n} R_i(t)^\top R_i(t)} \left[ \ |w| \ ||\bar{R}_w(t) ||_2^2 + |w^\mathsf{c}| \ ||\bar{R}_{w^\mathsf{c}}(t) ||_2^2 \ \right]$$ where $\bar{R}_{g}(t) = \frac{1}{|g|} \sum_{i, s_i \in g} R_i(t) $ for $g \in \{ w, w^\mathsf{c} \}.$ \\ \noindent As previously, we propose to globalize the information over time with $$W^{(w)} = \underset{t \in \mathcal{T}}{\sup} \ W(t)^{(w)}.$$ \noindent Then in the context of cluster detection, the null hypothesis is defined as $\mathcal{H}_0$: $\forall w \in \mathcal{W}, \ \forall t, \ F_{w,t} = F_{w^\mathsf{c},t}$ where $F_{w,t}$ and $F_{w^\mathsf{c},t}$ correspond respectively to the cumulative distribution functions of $X(t)$ in $w$ and outside $w$. The alternative hypothesis $\mathcal{H}_1^{(w)}$ associated with a potential cluster $w$ is $\mathcal{H}_1^{(w)}$: $\exists t$, \ $F_{w,t}(x) = F_{w^\mathsf{c},t}(x-\Delta_t)$, $\Delta_t \neq 0$. \\ \noindent Then $W^{(w)}$ can be considered as a concentration index and maximized over the set of potential clusters $\mathcal{W}$, so that the multivariate rank-based functional spatial scan statistic (MRBFSS) is defined as follows: \begin{equation} \Lambda_{\text{MRBFSS}} = \underset{w \in \mathcal{W}}{\max} \ W^{(w)}. \end{equation} Thus the most likely cluster is \begin{equation} \text{MLC} = \underset{w \in \mathcal{W}}{\arg \max} \ W^{(w)}. \end{equation} \subsection{Computing the significance of the MLC} \label{subsec:significance} \noindent Once the most likely cluster has been detected, its significance must be evaluated.
The distribution of the scan statistic $\Lambda$ ($\Lambda_{\text{PMFSS}}$, $\Lambda_{\text{MDFFSS}}$ or $\Lambda_{\text{MRBFSS}}$) is intractable under $\mathcal{H}_0$ due to the dependence between $\mathcal{S}^{(w)}$ and $\mathcal{S}^{(w')}$ when $w \cap w' \neq \emptyset$ ($\mathcal{S} = \mathrm{LH}, T_{n,\text{max}}$ or $W$). Thus we chose to obtain a large set of simulated datasets by randomly permuting the observations $X_i$ among the spatial locations. This technique, called ``random labelling'', has already been used for spatial scan statistics \citep{normalkulldorff, a_multivariate_gaussian, notre_fonctionnel}. \noindent Let $M$ denote the number of random permutations of the original dataset and $\Lambda^{(1)},\dots,\Lambda^{(M)}$ be the scan statistics observed on the simulated datasets. According to \cite{dwass}, the p-value for $\Lambda$ observed in the real data is estimated by \begin{equation} \hat{p} = \frac{1 + \sum_{m=1}^M \mathds{1}_{\Lambda^{(m)} \ge \Lambda}}{M+1}. \end{equation} Finally, the MLC is considered to be statistically significant if the associated $\hat{p}$ is less than the type I error. \\ \section{A simulation study} \label{sec:simulation} \noindent A simulation study was conducted to compare the performances of the parametric multivariate functional spatial scan statistic (PMFSS) $\Lambda_{\text{PMFSS}}$, the multivariate distribution-free functional spatial scan statistic (MDFFSS) $\Lambda_{\text{MDFFSS}}$ and the new multivariate rank-based functional spatial scan statistic (MRBFSS) $\Lambda_{\text{MRBFSS}}$. \cite{wilco_cucala} proposed a nonparametric scan statistic for univariate functional data (NPFSS) $\Lambda_{\text{NPFSS}}$. However, according to the authors, it can be extended to the multivariate functional framework, although it has not been studied in this context. Thus we decided to include their approach in the simulation, using the computation improvement proposed by \cite{notre_fonctionnel}.
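Throughout the study, significance is assessed by the random-labelling procedure with the Dwass estimator described above; a minimal sketch (ours, with a toy scan function standing in for $\Lambda$):

```python
import numpy as np

def permutation_pvalue(X, scan, M=999, seed=0):
    """Dwass p-value: (1 + #{Lambda^(m) >= Lambda}) / (M + 1),
    where each Lambda^(m) comes from a random relabelling of the curves."""
    rng = np.random.default_rng(seed)
    observed = scan(X)
    exceed = sum(scan(X[rng.permutation(len(X))]) >= observed for _ in range(M))
    return (1 + exceed) / (M + 1)

# Toy example: the "scan statistic" is the mean over a fixed 8-site window;
# any of the concentration indices above could be plugged in instead.
X = np.random.default_rng(2).normal(size=(30, 101, 2))
p_hat = permutation_pvalue(X, scan=lambda Y: Y[:8].mean(), M=199)
print(p_hat)
```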
\subsection{Design of simulation study} \noindent Artificial datasets were generated by using the geographic locations of the 94 French \textit{départements} (county-type administrative areas) as shown in Figure \ref{fig:sitesgrid}. The location of each \textit{département} was defined by its administrative capital. For each artificial dataset, a spatial cluster $w$ (composed of eight \textit{départements} in the Paris region; the red area, see Figure \ref{fig:sitesgrid} in Supplementary materials) was simulated. \subsubsection{Generation of the artificial datasets} \noindent The $X_i$ were simulated according to the following model with $p = 2$ \citep[see][for more details]{two_s, article_milano}: \begin{center} $\text{for each }i \in \llbracket 1 ; 94 \rrbracket,\ X_i(t) = (\sin{[2\pi t^2]}^5 ; 1 + 2.3 t + 3.4 t^2 + 1.5 t^3)^\top + \Delta(t) \mathds{1}_{s_i \in w} + \varepsilon_{i}(t), \ t \in [0;1], $ \end{center} where $\varepsilon_{i}(t) = \sum_{k=1}^{100} Z_{i,k} \sqrt{1.5 \times 0.2^k} \theta_k(t)$, with $\theta_k(t) = \left\{ \begin{array}{ll} 1 & \text{ if } k = 1 \\ \sqrt{2} \sin{[k\pi t]} & \text{ if } k \text{ even} \\ \sqrt{2} \cos{[(k-1)\pi t]} & \text{ if } k \text{ odd and } k > 1 \end{array} \right.$. \\ \noindent The functions $X_i$ were measured at 101 equally spaced times on $[0;1]$.
\\ \noindent Letting $\Sigma = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}$ denote the covariance matrix of the $Z_{i,k}$, three distributions for the $Z_{i,k}$ were considered: (i) a normal distribution: $Z_{i,k} \sim \mathcal{N}(0,\Sigma)$, (ii) a standardized Student distribution: $Z_{i,k} = U_{i,k} \left( \frac{V_{i,4}}{4} \right)^{-0.5}$ where the $U_{i,k}$ are independent $\mathcal{N}(0, \Sigma/2)$ variables and the $V_{i,4}$ are independent $\chi^2(4)$ variables, and (iii) a standardized chi-square distribution: $Z_{i,k} = \left[ U_{i,k} - \begin{pmatrix} 4 \\ 4 \end{pmatrix} \right] / (2\sqrt{2})$ where the $U_{i,k}$ are independent and $U_{i,k} \sim \chi^2(4,\Sigma) = \Gamma(2,1/2, \Sigma)$ (rate parameterization). Remark that $\rho$ is also the correlation between the two components of $X(t)$ at each time. \\ \noindent Three values of the correlation $\rho$ were tested: $\rho = 0.2$, $0.5$ and $0.8$, and three types of clusters with intensity controlled by a parameter $\alpha > 0$ were studied: $\Delta_1(t) = \alpha (t ; t)^\top$, $\Delta_2(t) = \alpha ( t (1-t) ; t (1-t) )^\top$ and $\Delta_3(t) = \alpha (\exp{[ - 100 (t-0.5)^2 ]}/3 ; \exp{[ - 100 (t-0.5)^2 ]}/3)^\top$. Since they vary over time and are positive and non-zero on $\mathcal{T} = [0;1]$ (except possibly at $t = 0$ or $t = 1$), they correspond to both multivariate magnitude and multivariate shape clusters. \\ Different values of the parameter $\alpha$ were considered for each $\Delta$: $\alpha \in \{0 ; \ 0.375 ; \ 0.75 ; \ 1.125 ; \ 1.5 \}$ for $\Delta_1$, $\alpha \in \{0 ; \ 1 ; \ 2 ; \ 3 ; \ 4 \}$ for $\Delta_2$ and $\alpha \in \{ 0 ; \ 1.25 ; \ 2.5 ; \ 3.75 ; \ 5 \}$ for $\Delta_3$. Note that $\alpha = 0$ was also tested in order to evaluate the maintenance of the nominal type I error. An example of the data for $\rho = 0.2$ and the Gaussian distribution of the $Z_{i,k}$ is given in the Appendix (Figure \ref{fig:examplesimu}).
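For concreteness, here is a sketch (ours) of the Gaussian case of this data-generating model, with $\rho = 0.2$, shift $\Delta_1$, and an illustrative cluster membership; $\sin[2\pi t^2]^5$ is read as the fifth power of the sine:

```python
import numpy as np

def theta(k, t):
    """Fourier-type basis functions theta_k used in the noise expansion."""
    if k == 1:
        return np.ones_like(t)
    return np.sqrt(2) * (np.sin(k*np.pi*t) if k % 2 == 0 else np.cos((k-1)*np.pi*t))

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 101)                 # 101 equally spaced times on [0, 1]
rho, alpha = 0.2, 0.75
Sigma = np.array([[1, rho], [rho, 1]])
mean = np.stack([np.sin(2*np.pi*t**2)**5, 1 + 2.3*t + 3.4*t**2 + 1.5*t**3], axis=-1)

n = 94
inside = np.zeros(n, bool); inside[:8] = True        # hypothetical cluster w
X = np.empty((n, 101, 2))
for i in range(n):
    Z = rng.multivariate_normal(np.zeros(2), Sigma, size=100)   # Z_{i,k}, k = 1..100
    eps = sum(np.outer(theta(k, t), np.sqrt(1.5 * 0.2**k) * Z[k-1]) for k in range(1, 101))
    X[i] = mean + eps + (alpha * np.stack([t, t], axis=-1) if inside[i] else 0)
print(X.shape)
```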
\subsubsection{Comparison of the methods} \noindent For each distribution of the $Z_{i,k}$, each type of $\Delta$, each level of correlation $\rho$, and each value of $\alpha$, 1000 artificial datasets were simulated. The type I error was set to 5\% and 999 samples were generated by random permutations of the data to evaluate the p-value associated with each MLC. The performances of the methods were compared through four criteria: the power, the true positive rate, the false positive rate and the F-measure. \\ \noindent The power was estimated by the proportion of simulations leading to the rejection of $\mathcal{H}_0$ at the chosen type I error. Among the simulated datasets leading to the rejection of $\mathcal{H}_0$, the true positive rate is the average proportion of sites in $w$ that were correctly detected, the false positive rate is the average proportion of sites in $w^\mathsf{c}$ that were included in the detected cluster, and the F-measure corresponds to the average harmonic mean of the proportion of sites in $w$ within the detected cluster (positive predictive value) and the true positive rate. \subsubsection{Results of the simulation study} \noindent The results of the simulation are presented in Figures \ref{fig:resultssimu1}, \ref{fig:resultssimu2} and \ref{fig:resultssimu3}. \\ \noindent For $\alpha = 0$, all methods seem to maintain the correct type I error of 0.05 regardless of the type of process, the type of $\Delta$ and the level of correlation $\rho$ (see the power curves in Figures \ref{fig:resultssimu1}, \ref{fig:resultssimu2} and \ref{fig:resultssimu3}). \\ \noindent For all methods the performances slightly decrease when the correlation $\rho$ increases. \\ \noindent The NPFSS and the PMFSS show similar powers for the Gaussian distribution for the shifts $\Delta_1$ and $\Delta_2$. However, for non-Gaussian distributions of the $Z_{i,k}$ or the shift $\Delta_3$, the NPFSS presents higher powers than the PMFSS.
The MDFFSS presents the highest powers in the Gaussian case. However, its performances also decrease when the data are not normally distributed: in that case the MRBFSS shows the highest powers (except for $\Delta_2$, although even there its powers remain very high). In the Gaussian case the MRBFSS also presents higher powers than the NPFSS (except for $\Delta_2$) and the PMFSS. \\ \noindent The MRBFSS almost always shows the highest true positive rates (except sometimes for $\Delta_2$). The true positive rates of the MDFFSS are also very high for normal data but they decrease for non-normal data. The PMFSS presents the lowest true positive rates; however, it presents very low false positive rates. In terms of false positives, the MDFFSS always shows the best performance. The MRBFSS often shows higher false positive rates; however, they are lower than those of the NPFSS (except for $\Delta_2$, although both are very close). As a result, the MDFFSS shows the highest F-measures, followed by the MRBFSS. For $\Delta_1$, in the Gaussian case the PMFSS and the NPFSS present similar F-measures, whereas the F-measures of the PMFSS are lower for non-normal distributions. The NPFSS, the PMFSS and the MRBFSS show very close F-measures for the shift $\Delta_2$, and finally the F-measures of the NPFSS and the PMFSS are markedly lower than those of the MDFFSS and the MRBFSS for the local shift $\Delta_3$. \begin{figure} \caption{The simulation study: comparison of the NPFSS, MDFFSS, MRBFSS and PMFSS methods for the shift $\Delta_1(t) = (\alpha t ; \alpha t)^\top$. For each method and each level of correlation $\rho$, the power curves, the true positive and false positive rates, and the F-measure values for detection of the spatial cluster as the MLC are shown.
$\alpha$ is the parameter that controls the cluster intensity.} \label{fig:resultssimu1} \end{figure} \begin{figure} \caption{The simulation study: comparison of the NPFSS, MDFFSS, MRBFSS and PMFSS methods for the shift $\Delta_2(t) = (\alpha t (1-t) ; \alpha t (1-t))^\top$. For each method and each level of correlation $\rho$, the power curves, the true positive and false positive rates, and the F-measure values for detection of the spatial cluster as the MLC are shown. $\alpha$ is the parameter that controls the cluster intensity.} \label{fig:resultssimu2} \end{figure} \begin{figure} \caption{The simulation study: comparison of the NPFSS, MDFFSS, MRBFSS and PMFSS methods for the shift $\Delta_3(t) = (\alpha \exp{[ - 100 (t-0.5)^2 ]}/3 ; \alpha \exp{[ - 100 (t-0.5)^2 ]}/3)^\top$. For each method and each level of correlation $\rho$, the power curves, the true positive and false positive rates, and the F-measure values for detection of the spatial cluster as the MLC are shown. $\alpha$ is the parameter that controls the cluster intensity.} \label{fig:resultssimu3} \end{figure} \section{Application on real data} \label{sec:realdata} \subsection{Air pollution in \textit{Nord-Pas-de-Calais}} \noindent The data considered are the concentrations in $\mu g.m^{-3}$ of four pollutants: ozone ($\text{O}_3$), nitrogen dioxide ($\text{NO}_2$), and the fine particles $\text{PM}_{10}$ and $\text{PM}_{2.5}$, corresponding respectively to particles whose diameter is less than $10\,\mu m$ and $2.5\,\mu m$. Note that the $\text{PM}_{2.5}$ particles are included in the $\text{PM}_{10}$ particles.
The data, provided by the French national air quality forecasting platform PREV'AIR, consist in the daily averages of these variables from May 1, 2020 to June 25, 2020 (56 values for each variable) aggregated at the \textit{canton} (administrative subdivisions of \textit{départements}) level for each of the 169 \textit{cantons} of the \textit{Nord-Pas-de-Calais} (a region in northern France), located by their centers of gravity. The daily pollutant concentration curves in each \textit{canton} are presented in Figure \ref{fig:courbes_pollutions} (left panels) and the spatial distributions of the average concentrations of each pollutant over the studied time period are presented in Figure \ref{fig:courbes_pollutions} (right panels). \begin{figure} \caption{Daily concentration curves of $\text{NO}_2$, $\text{O}_3$, $\text{PM}_{10}$ and $\text{PM}_{2.5}$ (from May 1, 2020 to June 25, 2020) in each of the 169 \textit{cantons} of \textit{Nord-Pas-de-Calais} (a region in northern France) (left panels), and the spatial distributions of the average concentrations for each pollutant over the period from May 1, 2020 to June 25, 2020 (right panels).} \label{fig:courbes_pollutions} \end{figure} \noindent The maps in Figure \ref{fig:courbes_pollutions} show a spatial heterogeneity of the average concentration of each pollutant. High concentrations of $\text{O}_3$ tend to aggregate in the rural areas of Montreuil and Avesnes-sur-Helpe, and high concentrations of the other pollutants tend to aggregate in the urban areas of Calais, Dunkerque and Lille. Moreover, the daily concentration curves present a marked temporal variability during the period from May 1, 2020 to June 25, 2020. Thus functional spatial scan statistics seem relevant to highlight the presence of \textit{canton}-level spatial clusters of pollutant concentrations.
\subsection{Spatial cluster detection} \noindent For the sake of concision, we present here only one method, chosen on the basis of the simulation results: the MRBFSS, since it shows stable performance whatever the correlation and the distribution of the variables. \noindent We considered a round-shaped scanning window of maximum radius 10 km, since small clusters of pollution are more relevant for interpretation: the sources of the pollutants are very localized, the main source of $\text{NO}_2$ being road traffic, while $\text{PM}_{2.5}$ is mainly emitted in urban (heating, road traffic) or industrial areas. The statistical significance of the MLC was evaluated through 999 Monte-Carlo permutations, and the MLC is said to be statistically significant if its p-value is less than 0.05. Cluster detection was also performed for the other three methods; the results are presented in Figure \ref{fig:othermethods} and Table \ref{table:noconstraint} in the Supplementary materials. \subsection{Results} \noindent The MRBFSS detected a significant most likely cluster (15 \textit{cantons}, $308~\text{km}^2$, $\hat{p} = 0.001$) in the area of Lille. This cluster, corresponding to high values of $\text{NO}_2$, $\text{PM}_{10}$ and $\text{PM}_{2.5}$ concentrations, is presented in Figure \ref{fig:noconstraint}. For these three pollutants, all the curves in the MLC are above the average concentrations in the \textit{Nord-Pas-de-Calais}. In environmental science it is well known that those pollutants are more frequent in urban areas, so this is consistent with the cluster observed here. \begin{figure} \caption{Most likely cluster of pollutants ($\text{NO}_2$, $\text{O}_3$, $\text{PM}_{10}$ and $\text{PM}_{2.5}$) concentrations detected by the MRBFSS. The daily concentration curves of the pollutants (from May 1, 2020 to June 25, 2020) in each \textit{canton} are presented with colored lines.
The black curves are the daily average concentration curves in the \textit{Nord-Pas-de-Calais} (a region in northern France).} \label{fig:noconstraint} \end{figure} \section{Discussion} \label{sec:discussion} \noindent Here we developed a parametric multivariate functional scan statistic (PMFSS), a multivariate distribution-free functional spatial scan statistic (MDFFSS) and a multivariate rank-based functional spatial scan statistic (MRBFSS), which allow the detection of clusters of abnormal values in multivariate functional data indexed in space. The goal of such methods is to alert scientists when abnormal values are detected. Typically, in the environmental-surveillance context, they will generate an alarm if they detect areas where populations are multi-exposed to environmental pollutants. The new methods appear to be more relevant for multivariate functional data than a multivariate spatial scan statistic approach, since the latter would face huge losses of information by summarizing each variable of the data by its average over the time period. Furthermore, they also appear to be more relevant than using a univariate functional spatial scan statistic for each variable, since this cannot take into account the correlations between the variables. \\ \noindent Although they only studied their approach in the univariate functional framework, \cite{wilco_cucala} suggest that the NPFSS can be extended to the multivariate case. Thus the MDFFSS, the PMFSS and the MRBFSS were compared with the NPFSS in a simulation study. The simulation study highlighted that the performances of all methods decreased with increasing correlation between the variables. The MRBFSS and the MDFFSS presented higher powers than the PMFSS and the NPFSS whatever the distribution and the correlation between the variables. The PMFSS and the MDFFSS showed the lowest false positive rates.
However, the MRBFSS presented the highest true positive rates and the PMFSS the lowest, which results in very high F-measures for the MRBFSS and the MDFFSS and improves the confidence in the clusters detected by these approaches compared with those detected by the NPFSS and the PMFSS. Moreover, the clusters detected by the NPFSS tended to contain more false positives, which is less desirable in practice. Indeed, when scan statistics are applied to environmental surveillance, a low false positive rate is an advantage, since the detection of spatial clusters is the starting point for further investigation within the cluster by environmental experts. \\ When the data were far from normally distributed, the performances of the PMFSS and of the MDFFSS decreased; however, they still maintained very low false positive rates. \\ \noindent For the sake of brevity, and based on the results of the simulation, we chose to apply only the MRBFSS to detect clusters of abnormal values of pollutants concentrations in the \textit{cantons} of \textit{Nord-Pas-de-Calais}, since it shows stable performance whatever the distribution of the variables and the correlation. The method detected a significant most likely cluster in the area of Lille, which presents high values of $\text{NO}_2$, $\text{PM}_{10}$ and $\text{PM}_{2.5}$ concentrations. \\ \noindent Note that we only focused on circular clusters. In the application, the maps of pollutants present elongated shapes of high average concentrations, especially for $\text{PM}_{10}$ on the coastline, which suggests that other forms of clusters may be relevant to consider in the analysis. As an example, \cite{tango} proposed to consider irregularly shaped clusters by considering all sets of sites connected to each other, with a predetermined maximum size.
However, it should be noted that this approach generates many more potential clusters than the approach proposed by \cite{spatialscanstat}, which drastically increases the computing time. The same disadvantage can be found with the elliptic clusters approach of \cite{elliptic}. However, these problems can be overcome with the graph-based clusters proposed by \cite{cucala_graph}. Another possible approach was proposed by \cite{lin}, who suggested regrouping the estimated circular clusters to form clusters with arbitrary shapes. \\ \noindent It should also be noted that the application on real data only considered the MLC. It may also be interesting to detect secondary clusters, which can be done by following the approach of \cite{spatialscanstat}, which also considers clusters that have a high value of the concentration index ($\mathrm{LH}^{(w)}$ for the PMFSS, $T_{n, \text{max}}^{(w)}$ for the MDFFSS, $W^{(w)}$ for the MRBFSS, and $U(w)$ for the NPFSS \citep[see][for details]{wilco_cucala}) and do not cover the MLC. \\ \noindent Finally, in the context of spatial epidemiology, one could imagine case count data collected monthly on spatial units over a long period of time. In this context, to detect spatial clusters of diseases with the already existing methods, one often uses cumulative incidences, which yields only one value per spatial unit. This induces a huge loss of information, particularly when the incidence curves show high temporal variability. However, we should underline that the NPFSS, the PMFSS, the MDFFSS and the MRBFSS could be applied to count data, including the possibility of adjusting the analysis for the underlying population.
\appendix \section{Supplementary materials} \begin{figure} \caption{The 94 French \textit{départements} and the spatial cluster (in red) simulated for each artificial dataset.} \label{fig:sitesgrid} \end{figure} \noindent Figure \ref{fig:examplesimu} shows an example of the data generated when the $Z_{i,k}$ are Gaussian and $\rho = 0.2$. \begin{figure} \caption{The simulation study: an example of the two components of the data generated for the Gaussian process and $\rho = 0.2$, with $\Delta(t) = \Delta_1(t) = 1.5(t ; t)^\top$ (left panel), $\Delta(t) = \Delta_2(t) = 4(t(1-t) ; t(1-t))^\top$ (middle panel) and $\Delta(t) = \Delta_3(t) = 5( \exp{[ - 100 (t-0.5)^2 ]}/3 ; \exp{[ - 100 (t-0.5)^2 ]}/3)^\top$ (right panel). The red curves correspond to the observations in the cluster.} \label{fig:examplesimu} \end{figure} \begin{figure} \caption{Most likely clusters of pollutants ($\text{NO}_2$, $\text{O}_3$, $\text{PM}_{10}$ and $\text{PM}_{2.5}$) concentrations detected by the NPFSS, the PMFSS and the MDFFSS. The daily concentration curves of the pollutants (from May 1, 2020 to June 25, 2020) in each \textit{canton} are presented with colored lines. The black curves are the daily average concentration curves in the \textit{Nord-Pas-de-Calais} (a region in northern France).} \label{fig:othermethods} \end{figure} \begin{table}[htbp] \centering \caption{Description of the most likely cluster of pollutants concentrations detected for the NPFSS, the PMFSS and the MDFFSS.
} \begin{tabular}{lccc} \hline & \# \textit{cantons} & Surface & p-value \\ \hline $\Lambda_{\text{NPFSS}}$ & 15 & 308 km$^2$ & 0.001 \\ $\Lambda_{\text{PMFSS}}$ & 13 & 264 km$^2$ & 0.001 \\ $\Lambda_{\text{MDFFSS}}$ & 7 & 284 km$^2$ & 0.001 \\ \hline \end{tabular} \label{table:noconstraint} \end{table} \noindent Figure \ref{fig:othermethods} presents the most likely clusters obtained with the three other methods. The NPFSS detects exactly the same cluster as the MRBFSS, and the most likely cluster for the PMFSS is quite similar to it. The result obtained with the MDFFSS seems at first sight quite surprising. However, we only focused here on the most likely clusters, and it was found that this cluster is significant ($\hat{p} = 0.001$) for all methods and that the secondary cluster for the MDFFSS ($\hat{p} = 0.001$) is exactly the MLC of the NPFSS and the MRBFSS. Some characteristics of the detected MLCs are presented in Table \ref{table:noconstraint}. \end{document}
\begin{document} \title{On the local fluctuations of last-passage percolation models} \author{Eric Cator and Leandro P. R. Pimentel} \date{\today} \maketitle \begin{abstract} Using the fact that the Airy process describes the limiting fluctuations of the Hammersley last-passage percolation model, we prove that it behaves locally like a Brownian motion. Our method is quite straightforward, and it is based on a certain monotonicity and good control over the equilibrium measures of the Hammersley model (local comparison). \end{abstract} \section{Introduction and results} In recent years there has been a lot of research on the Airy process and related processes such as the Airy sheet \cite{CQ}. Most papers use analytic methods and exact formulas given by Fredholm determinants to prove properties of these processes, but some papers use the fact that these processes are limiting processes of last-passage percolation models or random polymer models, and they use properties of these well studied models to prove the corresponding property of the limiting process. A nice example of these different approaches can be found in two recent papers, one by H\"agg \cite{H} and one by Corwin and Hammond \cite{CH}. H\"agg proves in his paper that the Airy process behaves locally like a Brownian motion, at least in terms of convergence of finite dimensional distributions. He uses the Fredholm determinant description of the Airy process to obtain his result. Corwin and Hammond on the other hand, use the fact that the Airy line process, of which the top line corresponds to the Airy process, can be seen as a limit of Brownian bridges, conditioned to be non-intersecting. They show that a particular resampling procedure, which they call the Brownian Gibbs property, holds for the system of Brownian bridges and also in the limit for the Airy line process. As a consequence, it follows that the local Brownian behavior of the Airy process holds in a stronger functional limit sense. 
Our paper will prove the same theorem, also using the fact that the Airy process is a limiting process, but in a much more direct way: we will consider the Hammersley last-passage percolation model, and show that we can control local fluctuations of this model by precisely chosen equilibrium versions of this model, which are simply Poisson processes. Then we show that in the limit this control suffices to prove the local Brownian motion behavior of the Airy process, as well as tightness of the models approaching the Airy process. We also extend the control of local fluctuations of the Hammersley process to scales smaller than the typical cube-root scale. Our method is quite straightforward, yet rather powerful, mainly because we have a certain monotonicity and good control over the equilibrium measures. In fact, we think that we can extend our result to the more illustrious Airy sheet, the two-dimensional version of the Airy process. We refer the reader to \cite{CQ} for a description of this process in terms of the renormalization fixed point of the KPZ universality class. However, here we run into much harder technical problems, and this will still require a lot more work, beyond the scope of this paper. We will continue the introduction by developing notation, introducing all relevant processes and stating the three main theorems. In Section 2 we introduce the local comparison technique, and in each of the following three sections one theorem is proved.
The last-passage time $L([x]_s,[y]_t)$ between $[x]_s<[y]_t$ is the maximal number of Poisson points among all increasing sequences of Poisson points lying in the rectangle $(x, y ]\times(s, t]$. Denote $L[x]_t:=L([0]_0,[x]_t)$ and define ${\mathcal A}_n$ by $$u\in{\mathbb R}\,\mapsto\,{\mathcal A}_n(u):=\frac{L[n+2un^{2/3}]_n-(2n+2un^{2/3})+u^2n^{1/3}}{n^{1/3}}\,.$$ Pr\"ahofer and Spohn \cite{PS} proved that \begin{equation}\label{eq:Airy} \lim_{n\to\infty}{\mathcal A}_n(\cdot)\stackrel{dist.}{=}{\mathcal A}(\cdot)\,, \end{equation} in the sense of finite dimensional distributions, where ${\mathcal A}\equiv({\mathcal A}(u))_{u\in{\mathbb R}}$ is the so-called Airy process. This process is a one-dimensional stationary process with continuous paths and finite dimensional distributions given by a Fredholm determinant \cite{J}: $${\mathbb P}\left({\mathcal A}(u_1)\leq \xi_1,\dots,{\mathcal A}(u_m)\leq \xi_m\right):=\det\left(I-f^{1/2}Af^{1/2}\right)_{L^2\left(\{u_1,\dots,u_m\}\times{\mathbb R}\right)}\,.$$ The function $A$ denotes the extended Airy kernel, which is defined as $$A_{s,t}(x,y):=\left\{\begin{array}{ll}\int_0^\infty e^{-z(s-t)}{\rm Ai}(x+z){\rm Ai}(y+z)dz\,,& \mbox{ if } s\geq t\,,\\&\\ -\int_{-\infty}^0 e^{-z(t-s)}{\rm Ai}(x+z){\rm Ai}(y+z)dz\,,& \mbox{ if } s< t\,,\end{array}\right.$$ where ${\rm Ai}$ is the Airy function, and for $\xi_1,\dots,\xi_m\in{\mathbb R}$ and $u_1<\dots<u_m$ in ${\mathbb R}$, \begin{eqnarray*} f\,:\,\{u_1,\dots,u_m\}\times{\mathbb R}&\to&{\mathbb R}\\ (u_i,x)&\mapsto&1_{(\xi_i,\infty)}(x)\,. \end{eqnarray*} The main contribution of this paper is the development of a local comparison technique to study the local fluctuations of last-passage times and its scaling limit. The ideas parallel the work of Cator and Groeneboom \cite{CG1,CG2}, where they studied local convergence to equilibrium and the cube-root asymptotic behavior of $L$. 
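Concretely, $L[x]_t$ is the length of a longest increasing chain of Poisson points in $(0,x]\times(0,t]$, so it can be simulated directly by sampling the points and computing a longest strictly increasing subsequence via patience sorting. The sketch below is ours, not the paper's (the Knuth-style Poisson sampler is an implementation choice, adequate for moderate intensities):

```python
import bisect
import math
import random

def poisson_count(rng, lam):
    """Sample Poisson(lam) by Knuth's product-of-uniforms method (moderate lam)."""
    thresh, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= thresh:
            return k
        k += 1

def longest_chain(points):
    """Length of a longest chain (both coordinates strictly increasing).

    After sorting by the first coordinate, this is a longest strictly
    increasing subsequence in the second coordinate, computed by patience
    sorting in O(k log k); valid for points in general position.
    """
    tails = []  # tails[i] = least possible last height of a chain of length i + 1
    for _, t in sorted(points):
        i = bisect.bisect_left(tails, t)
        if i == len(tails):
            tails.append(t)
        else:
            tails[i] = t
    return len(tails)

def hammersley_L(x, t, seed=0):
    """L[x]_t: longest chain of a rate-1 planar Poisson process on (0,x] x (0,t]."""
    rng = random.Random(seed)
    k = poisson_count(rng, x * t)
    pts = [(rng.uniform(0.0, x), rng.uniform(0.0, t)) for _ in range(k)]
    return longest_chain(pts)
```

For $x=t=n$, samples concentrate around $2n$ with fluctuations of order $n^{1/3}$, which is exactly the centering and scaling used in ${\mathcal A}_n$.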
This technique consists of bounding from below and from above the local differences of $L$ by the local differences of the equilibrium regime (Lemma \ref{lem:LocalComparison}), with suitable parameters that will allow us to handle the local fluctuations in the right scale (Lemma \ref{lem:ExitControl}). For the Hammersley model the equilibrium regime is given by a Poisson process. We have strong indications that the technique can be applied to a broad class of models, as soon as one has Gaussian fluctuations for the equilibrium regime. Although this is a very natural assumption, one can only check that for a few models. As a first application, we will prove tightness of ${\mathcal A}_n$. \begin{thm}\label{thm:Tight} The collection $\{{\mathcal A}_n\}$ is tight in the space of cadlag functions on $[a,b]$. Furthermore, any weak limit of ${\mathcal A}_n$ lives on the space of continuous functions. \end{thm} The local comparison technique can be used to study local fluctuations of last-passage times for lengths of size $n^{\gamma}$, with $\gamma\in(0,2/3)$ (so smaller than the typical scale $n^{2/3}$). Let ${\mathcal B}\equiv({\mathcal B}(u))_{u\geq 0}$ denote the standard two-sided Brownian motion process. \begin{thm}\label{thm:LocalFluct} Fix $\gamma\in(0,2/3)$ and $s>0$ and define $\Delta_n$ by $$u\in{\mathbb R}\,\mapsto\, \Delta_n(u):=\frac{L[sn+un^{\gamma}]_{n}-L[sn]_n-\mu un^{\gamma}}{\sigma n^{\gamma/2}}\,,$$ where $\mu:=s^{-1/2}$ and $\sigma:=s^{-1/4}$. Then $$\lim_{n\to\infty}\Delta_n(\cdot)\stackrel{dist.}{=}{\mathcal B}(\cdot)\,,$$ in the sense of weak convergence of probability measures in the space of cadlag functions. \end{thm} As we mentioned in the previous section, the Airy process locally behaves like Brownian motion \cite{CH,H}. By applying the local comparison technique again, we will present an alternative proof of the functional limit theorem for this local behavior. 
\begin{thm}\label{thm:LocalAiry} Define ${\mathcal A}^\epsilon$ by $$u\in{\mathbb R}\,\mapsto\,{\mathcal A}^{\epsilon}(u):=\epsilon^{-1/2}\left({\mathcal A}(\epsilon u)-{\mathcal A}(0)\right)\,.$$ Then $$\lim_{\epsilon\to 0}{\mathcal A}^{\epsilon}(\cdot)\stackrel{dist.}{=}\sqrt{2}{\mathcal B}(\cdot)\,,$$ in the sense of weak convergence of probability measures in the space of continuous functions. \end{thm} \subsection{The lattice model with exponential weights} Consider a collection $\{\omega_{[x]_t}\,:\,[x]_t\in{\mathbf Z}^2\}$ of i.i.d. nonnegative random variables with an exponential distribution of parameter one. Let $\Pi([x]_t,[y]_u)$ denote the collection of all lattice paths $\varpi=([z]_{v_j})_{j=1,\dots,k}$ such that: \begin{itemize} \item $[z]_{v_1}\in\{[x]_t+[1]_0,[x]_t+[0]_1\}$ and $[z]_{v_k}=[y]_u$; \item $[z]_{v_{j+1}}-[z]_{v_j}\in\{[1]_0,[0]_1\}$ for $j=1,\dots,k-1$. \end{itemize} The (lattice) last-passage percolation time between $[x]_t <[y]_u$ is defined by $$L^l([x]_t,[y]_u):=\max_{\varpi\in\Pi([x]_t,[y]_u)}\big\{\sum_{[z]_v\in\varpi}\omega_{[z]_v}\big\}\,.$$ Denote $L^l[x]_t:=L^l([0]_0,[x]_t)$ and define ${\mathcal A}^l_n$ by $$u\in{\mathbb R}\,\mapsto\,{\mathcal A}^l_n(u):=\frac{L^l[n+2^{5/3}un^{2/3}]_n-(4n+2^{8/3}un^{2/3})+2^{4/3}u^2n^{1/3}}{2^{4/3}n^{1/3}}\,.$$ Corwin, Ferrari and P\'ech\'e \cite{CFP} proved that \begin{equation}\label{eq:AiryLattice} \lim_{n\to\infty}{\mathcal A}^l_n(\cdot)\stackrel{dist.}{=}{\mathcal A}(\cdot)\,, \end{equation} in the sense of finite dimensional distributions. The local comparison method can be used in this context as well. (The lattice version of Lemma \ref{lem:LocalComparison} is straightforward. For exponential weights, the analog to Lemma \ref{lem:ExitControl} was proved in \cite{BCS}.) \begin{thm}\label{thm:LTight} The collection $\{{\mathcal A}^l_n\}$ is tight in the space of cadlag functions on $[a,b]$. Furthermore, any weak limit of ${\mathcal A}^l_n$ lives on the space of continuous functions.
\end{thm} \begin{thm}\label{thm:LLocalFluct} Fix $\gamma\in(0,2/3)$ and $s>0$ and define $\Delta^l_n$ by $$u\in{\mathbb R}\,\mapsto\, \Delta^l_n(u):=\frac{L^l[sn+un^{\gamma}]_{n}-L^l[sn]_n-\mu un^{\gamma}}{\sigma n^{\gamma/2}}\,,$$ where $\mu=\sigma:=s^{-1/2}(1+s^{1/2})$. Then $$\lim_{n\to\infty}\Delta^l_n(\cdot)\stackrel{dist.}{=}{\mathcal B}(\cdot)\,,$$ in the sense of weak convergence of probability measures in the space of cadlag functions. \end{thm} To avoid repetition, we will not present a proof of the lattice results. We hope that the reader can convince himself (or herself) that the method that we will describe in detail for the Hammersley last-passage percolation model can be easily adapted to the lattice models with exponential weights. \begin{rem} We also expect that the local comparison method can be used in the log-gamma polymer model, introduced by Sepp\"al\"ainen \cite{T}. The polymer versions of Lemma \ref{lem:LocalComparison} and Lemma \ref{lem:ExitControl} were proved in \cite{T}. \end{rem} \section{Local comparison and exit points} The Hammersley last-passage percolation model has a representation as an interacting particle system, called the Hammersley process \cite{AD,CG1}. We will use the notation of \cite{CP}. In the positive time half plane we have the same planar Poisson point process as before. On the $x$-axis we take a Poisson process of intensity $\lambda>0$. The two Poisson processes are assumed to be independent of each other.
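As a concrete aside, the lattice times $L^l$ of the previous subsection obey the recursion $L^l[x]_t=\max\{L^l[x-1]_t,L^l[x]_{t-1}\}+\omega_{[x]_t}$ (with the weight at the origin not collected), so they are easy to simulate by dynamic programming; a minimal sketch under these conventions (names are ours):

```python
import random

def lattice_L(n, m, seed=0):
    """Lattice last-passage time L^l([0]_0,[n]_m) over up-right paths.

    Weights are i.i.d. Exp(1); the origin weight is not collected, matching
    paths that start at [0]_0+[1]_0 or [0]_0+[0]_1. Standard recursion:
    L[i][j] = max(L[i-1][j], L[i][j-1]) + w[i][j].
    """
    rng = random.Random(seed)
    L = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 and j == 0:
                continue  # origin weight not collected
            w = rng.expovariate(1.0)
            prev = 0.0
            if i > 0:
                prev = max(prev, L[i - 1][j])
            if j > 0:
                prev = max(prev, L[i][j - 1])
            L[i][j] = prev + w
    return L[n][m]
```

The first-order growth $L^l[n]_n\approx 4n$ matches the centering $4n$ in ${\mathcal A}^l_n$.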
For $x\in {\mathbb R}$ and $t>0$ we define $$L_{{\lambda}}[x]_t\equiv L_{\nu_\lambda}[x]_t:=\sup_{z\in (-\infty,x]} \left\{ \nu_{\lambda}(z) + L([z]_0,[x]_t)\right\}\,,$$ where, for $z\leq x$, $$\nu_{\lambda}(z)=\left\{\begin{array}{ll}\mbox{ the number of Poisson points in }(0,z]\times\{0\}& \mbox{ for } z> 0\\ \mbox{ minus the number of Poisson points in }(z,0]\times \{0\} & \mbox{ for } z\leq0\,.\end{array}\right.$$ The process $(M^t_\lambda)_{t\geq 0}$, given by $$M^t_{\nu_{\lambda}}(x,y]\equiv M^t_{\lambda}(x,y]:=L_{\lambda}[y]_t-L_{\lambda}[x]_t\,\,\,\mbox{ for }x<y\,,$$ is a Markov process on the space of locally finite counting measures on ${\mathbb R}$. The Poisson process is the invariant measure of this particle system in the sense that \begin{equation}\label{eq:equilibrium} M^t_{\lambda}\stackrel{dist.}{=}\mbox{ Poisson process of intensity $\lambda$ for all $t\geq 0$}\,. \end{equation} Notice that the last-passage time $L=L_{\nu_0}$ can be recovered in the positive quadrant by choosing a measure $\nu_0$ on the axis that has no points to the right of $0$, and an infinite amount of points in every interval $(-{\varepsilon },0)$, $\forall {\varepsilon }>0$ (this could be called a ``wall'' of points). Thus $$L_{\lambda}\equiv L_{\nu_\lambda}({\cal P})\,\mbox{ and }\, L\equiv L_{\nu_0}({\cal P})\,$$ are coupled by the same two-dimensional Poisson point process $\cal P$, which corresponds to the basic coupling between $M^t_{\nu_\lambda}$ and $M^t_{\nu_0}$. 
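For a concrete picture of the coupled equilibrium process, the variational formula for $L_{\lambda}$ can be simulated by truncating the negative axis at a finite cutoff and evaluating the supremum at the boundary points, where it is attained. The truncation and all names below are our own simplifications (harmless whenever the exit point lies to the right of the cutoff), not constructions from the paper:

```python
import bisect
import math
import random

def chain_len(points):
    """Longest chain with both coordinates strictly increasing (patience sorting)."""
    tails = []
    for _, t in sorted(points):
        i = bisect.bisect_left(tails, t)
        if i == len(tails):
            tails.append(t)
        else:
            tails[i] = t
    return len(tails)

def poisson_count(rng, lam):
    """Knuth's Poisson sampler (adequate for moderate lam)."""
    thresh, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= thresh:
            return k
        k += 1

def L_lam(x, t, lam, cutoff=50.0, seed=0):
    """L_lambda[x]_t = sup_{z <= x} { nu_lambda(z) + L([z]_0,[x]_t) },
    with the negative axis truncated at -cutoff."""
    rng = random.Random(seed)
    n_b = poisson_count(rng, lam * (x + cutoff))
    boundary = sorted(rng.uniform(-cutoff, x) for _ in range(n_b))
    n_p = poisson_count(rng, (x + cutoff) * t)
    bulk = [(rng.uniform(-cutoff, x), rng.uniform(0.0, t)) for _ in range(n_p)]

    def nu(z):  # signed count: #((0,z]) for z > 0, -#((z,0]) for z <= 0
        return bisect.bisect_right(boundary, z) - bisect.bisect_right(boundary, 0.0)

    # The supremum is attained at a boundary point (or at the truncation edge).
    return max(nu(z) + chain_len([p for p in bulk if p[0] > z])
               for z in [-cutoff] + boundary)
```

In equilibrium, $L_\lambda[x]_t-L_\lambda[0]_t$ is a Poisson process of intensity $\lambda$ in $x$, and $L_\lambda[0]_t$ one of intensity $1/\lambda$ in $t$, so samples should concentrate around $\lambda x + t/\lambda$.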
Define the exit points $$Z_{\lambda}[x]_t :=\sup\left\{z\in(-\infty,x]\,:\,L_{\lambda}[x]_t=\nu_\lambda(z)+L([z]_0,[x]_t)\right\}\,,$$ and $$Z'_{\lambda}[x]_t :=\inf\left\{z\in(-\infty,x]\,:\,L_{\lambda}[x]_t=\nu_\lambda(z)+L([z]_0,[x]_t)\right\}\,.$$ By using translation invariance and invariance under the map $(x,t)\mapsto(\lambda x,t/\lambda)$, we have that \begin{equation}\label{eq:sym} Z_\lambda[x+h]_t\stackrel{dist.}{=}Z_\lambda[x]_t+h\,\,\,\mbox{ and }\,\,\, Z_{\lambda}[x]_t\stackrel{dist.}{=}\lambda Z_1[\lambda x]_{t/\lambda}. \end{equation} We need to use one more symmetry. In \cite{CG1}, the Hammersley process was set up as a process in the first quadrant with sources on the $x$-axis and sinks on the $t$-axis. In our notation this means that the process $t\mapsto L_\lambda[0]_t$ is a Poisson process with intensity $1/\lambda$ which is independent of the Poisson process $\nu_\lambda$ restricted to the positive $x$-axis and independent of the Poisson process in the positive quadrant. We can now use reflection in the diagonal to see that the following equality holds: \begin{equation}\label{eq:diagsym} {\mathbb P}\left(Z'_\lambda[x]_t < 0\right) = {\mathbb P}\left(Z_{1/\lambda}[t]_x > 0\right). \end{equation} We use that $Z'_\lambda[x]_t<0$ is equivalent to the fact that the maximizing path to the left-most exit point crosses the positive $t$-axis, and not the positive $x$-axis. The local comparison technique consists of bounding from above and from below the local differences of $L$ by the local differences of $L_\lambda$. These bounds depend on the position of the exit points. It is precisely summarized by the following lemma. \begin{lem}\label{lem:LocalComparison} Let $0\leq x\leq y$ and $t\geq 0$. 
If $Z'_{\lambda}[x]_t\geq0$ then $$L[y]_t-L[x]_t\leq L_{\lambda}[y]_t-L_{\lambda}[x]_t\,,$$ and if $Z_{\lambda}[y]_t\leq 0$ then $$L[y]_t-L[x]_t\geq L_{\lambda}[y]_t-L_{\lambda}[x]_t\,.$$ \end{lem} \noindent{\bf Proof\,\,} When we consider a path $\varpi$ from $[x]_s$ to $[y]_t$ consisting of increasing points, we will view $\varpi$ as the lowest increasing continuous path connecting all the points, starting at $[x]_s$ and ending at $[y]_t$. In this way we can talk about crossings with other paths or with lines. The geodesic between $[x]_s$ and $[y]_t$ is given by the lowest path (in the sense we just described) that attains the maximum in the definition of $L([x]_s,[y]_t)$. We will denote this geodesic by $\varpi([x]_s,[y]_t)$. Notice that $$L([x]_s,[y]_t)=L([x]_s,[z]_r)+L([z]_r,[y]_t)\,,$$ for any $[z]_r\in\varpi([x]_s,[y]_t)$. Assume that $Z'_{\lambda}[x]_t\geq 0$ and let ${\mathbf c}$ be a crossing between the two geodesics $\varpi([0]_0,[y]_t)$ and $\varpi([z']_0,[x]_t)$, where $z':=Z'_{\lambda}[x]_t$. Such a crossing always exists because $x\leq y$ and $z'=Z'_{\lambda}[x]_t\geq 0$. We remark that, by superadditivity, $$L_{\lambda}[y]_t \geq \nu_{\lambda}(z') + L([z']_0,[y]_t) \geq \nu_{\lambda}(z') + L([z']_0,{\mathbf c}) + L({\mathbf c},[y]_t)\,.$$ We use this, and that (since ${\mathbf c}\in\varpi([z']_0,[x]_t)$) $$ \nu_{\lambda}(z') + L([z']_0,{\mathbf c})-L_{\lambda} [x]_t= -L({\mathbf c},[x]_t)\,,$$ in the following inequality: \begin{eqnarray*} L_{\lambda}[y]_t - L_{\lambda}[x]_t & \geq & \nu_{\lambda}\big(z'\big)+L([z']_0,{\mathbf c}) + L({\mathbf c},[y]_t) - L_{\lambda}[x]_t\\ & = & L({\mathbf c},[y]_t) - L({\mathbf c},[x]_t)\,.
\end{eqnarray*} By superadditivity, $$ - L({\mathbf c}\,,\,[x]_t)\geq L([0]_0,{\mathbf c})-L[x]_t\,,$$ and hence (since ${\mathbf c}\in\varpi([0]_0,[y]_t)$) \begin{eqnarray*} L_{\lambda}[y]_t - L_{\lambda}[x]_t & \geq & L({\mathbf c},[y]_t) - L({\mathbf c},[x]_t)\\ & \geq & L({\mathbf c},[y]_t) + L([0]_0,{\mathbf c})-L([0]_0,[x]_t)\\ & = & L[y]_t-L[x]_t\,. \end{eqnarray*} The proof of the second inequality is very similar. Indeed, denote $z:=Z_{\lambda}[y]_t$ and let ${\mathbf c}$ be a crossing between $\varpi([0]_0,[x]_t)$ and $\varpi([z]_0,[y]_t)$. By superadditivity, $$L_{\lambda}[x]_t \geq \nu_{\lambda}(z) + L([z]_0,[x]_t) \geq \nu_{\lambda}(z) + L([z]_0,{\mathbf c}) + L({\mathbf c},[x]_t)\,.$$ Since ${\mathbf c}\in\varpi([z]_0,[y]_t)$ we have that $$L_{\lambda}[y]_t-\nu_{\lambda}(z) - L([z]_0,{\mathbf c})=L({\mathbf c},[y]_t)\,,$$ which implies that \begin{eqnarray*} L_{\lambda}[y]_t - L_{\lambda}[x]_t & \leq &L_{\lambda}[y]_t- \nu_{\lambda}(z)-L([z]_0,{\mathbf c}) - L({\mathbf c},[x]_t)\\ & = & L({\mathbf c},[y]_t) - L({\mathbf c},[x]_t)\\ & \leq & L[y]_t-L([0]_0,{\mathbf c})- L({\mathbf c},[x]_t)\\ & = & L[y]_t-L[x]_t\,, \end{eqnarray*} where we have used that ${\mathbf c}\in\varpi([0]_0,[x]_t)$ in the last step. $\Box$\\ \begin{rem} In fact, the first statement of the lemma is also true when $Z_\lambda[x]_t\geq 0$ and the second statement is true when $Z'_\lambda[y]_t\leq 0$, both without any change to the given proof. This is a stronger statement, since $Z'_\lambda[x]_t\leq Z_\lambda[x]_t$, but we will only need the lemma as it is formulated. \end{rem} In order to apply Lemma \ref{lem:LocalComparison} and extract good bounds for the local differences one needs to control the position of exit points. This is given by the next lemma. \begin{lem}\label{lem:ExitControl} There exists a constant $C>0$ such that $${\mathbb P}\left(Z_1[n]_n > r n^{2/3}\right)\leq \frac{C}{r^3}\,,$$ for all $r\geq 1$ and all $n\geq 1$.
\end{lem} \noindent{\bf Proof\,\,} See Corollary 4.4 in \cite{CG2}. $\Box$\\ \section{Proof of Theorem \ref{thm:Tight}} For simple notation, and without loss of generality, we will restrict our proof to $[a,b]=[0,1]$. \begin{lem}\label{lem:tight} Fix $\beta\in(1/3,1)$ and for each $\delta\in(0,1)$ and $n\geq 1$ set $$\lambda_\pm=\lambda_\pm(n,\delta):=1\pm\frac{\delta^{-\beta}}{n^{1/3}}\,.$$ Define the event $$E_n(\delta):=\left\{Z'_{\lambda_{+}}[n]_n\geq 0\,\,\mbox{ and }\,\,Z_{\lambda_{-}}[n+2n^{2/3}]_n\leq 0\right\}\,.$$ Then there exists a constant $C>0$ such that, for sufficiently small $\delta>0$, $$\limsup_{n\to\infty}{\mathbb P}\left(E_n(\delta)^c\right)\leq C\delta^{3\beta}\,.$$ \end{lem} \noindent{\bf Proof\,\,} Denote $r:=\delta^{-\beta}$ and let $$n_+:=\lambda_+ n< 2n\,\,\mbox{ and }\,\,h_{+}:=\left(\lambda_+-\frac{1}{\lambda_+}\right) n> rn^{2/3}> r n_+^{2/3}/2 \,$$ (for all sufficiently large $n$). By \eqref{eq:sym} and \eqref{eq:diagsym}, \begin{eqnarray*} {\mathbb P}\left(Z'_{\lambda_+}[n]_n<0\right)&=&{\mathbb P}\left(Z_{1/\lambda_+}[n]_n>0\right)\\ &=&{\mathbb P}\left(Z_1[n/\lambda_+]_{\lambda_+ n}>0\right)\\ &=&{\mathbb P}\left(Z_1[\lambda_+n-h_+]_{\lambda_+ n}>0\right)\\ &=&{\mathbb P}\left(Z_1[\lambda_+n]_{\lambda_+ n}>h_+\right)\\ &\leq &{\mathbb P}\left(Z_1[n_+]_{n_+}>r n_+^{2/3}/2\right)\,. \end{eqnarray*} Analogously, for $$n_-:= \frac{n}{\lambda_-}<2n\,\,\mbox{ and }\,\,h_{-}:=\left(\frac{1}{\lambda_-}-\lambda_-\right) n> rn^{2/3}>r n_-^{2/3}/2 \,,$$ we have that \begin{eqnarray*} {\mathbb P}\left(Z_{\lambda_-}[n+2 n^{2/3}]_n>0\right)&=&{\mathbb P}\left(Z_{\lambda_-}[n]_n>-2 n^{2/3}\right)\\ &=&{\mathbb P}\left(\lambda_-Z_{1}[\lambda_-n]_{n/\lambda_-}>-2 n^{2/3}\right)\\ &\leq&{\mathbb P}\left(Z_{1}[n_- - h_-]_{n/\lambda_-}>-2 n^{2/3}\right)\\ &=&{\mathbb P}\left(Z_{1}[n_-]_{n_-}>h_- -2 n^{2/3}\right)\\ &\leq& {\mathbb P}\left(Z_1[n_-]_{n_-}>(r-4) n_-^{2/3}/2\right)\,. 
\end{eqnarray*} Now one can use Lemma \ref{lem:ExitControl} to finish the proof. $\Box$\\ \begin{lem}\label{lem:compa} Let $\delta\in(0,1)$ and $u\in[0,1-\delta)$. Then, on the event $E_n(\delta)$, for all $v\in[u,u+\delta]$ we have that $$ {\mathcal B}_{n,-}(v)-{\mathcal B}_{n,-}(u)-2\delta^{1-\beta}\leq{\mathcal A}_n(v)-{\mathcal A}_n(u)\,\leq\, {\mathcal B}_{n,+}(v)-{\mathcal B}_{n,+}(u)+4\delta^{1-\beta}\,,$$ where $${\mathcal B}_{n,\pm}(u):=\frac{L_{\lambda_\pm}[n+2un^{2/3}]_n-L_{\lambda_\pm}[n]_n-2\lambda_\pm un^{2/3}}{n^{1/3}}\,.$$ \end{lem} \noindent{\bf Proof\,\,} For fixed $t$, $Z_\lambda'[x]_t$ and $Z_\lambda[x]_t$ are non-decreasing functions of $x$. Thus, on the event $E_n(\delta)$, $$Z'_{\lambda_+}[n+2un^{2/3}]_n\geq 0\,\mbox{ and }\,Z_{\lambda_-}[n+2(u+\delta)n^{2/3}]_n\leq 0\,.$$ By Lemma \ref{lem:LocalComparison}, this implies that, for all $v\in[u,u+\delta]$, $$L[n+2vn^{2/3}]_{n}-L[n+2un^{2/3}]_n\leq L_{\lambda_+}[n+2vn^{2/3}]_{n}-L_{\lambda_+}[n+2un^{2/3}]_n\,,$$ and $$L[n+2vn^{2/3}]_{n}-L[n+2un^{2/3}]_n\geq L_{\lambda_-}[n+2vn^{2/3}]_{n}-L_{\lambda_-}[n+2un^{2/3}]_n\,.$$ Since $$(\lambda_+-1)(2v-2u)n^{1/3}+v^2-u^2\leq 2\delta^{1-\beta}+2\delta\leq 4\delta^{1-\beta}\,,$$ and $$(\lambda_- -1)(2v-2u)n^{1/3}+v^2-u^2\geq -2\delta^{1-\beta}\,,$$ we have that, on the event $E_n(\delta)$, $${\mathcal A}_n(v)-{\mathcal A}_n(u)\,\leq\, {\mathcal B}_{n,+}(v)-{\mathcal B}_{n,+}(u)+4\delta^{1-\beta}\,,$$ and $${\mathcal A}_n(v)-{\mathcal A}_n(u)\,\geq\, {\mathcal B}_{n,-}(v)-{\mathcal B}_{n,-}(u)-2\delta^{1-\beta}\,,$$ for all $v\in[u,u+\delta]$. $\Box$\\ \noindent{\bf Proof of Theorem \ref{thm:Tight}\,\,} For fixed $u\in[0,1)$ take $\delta>0$ such that $u+\delta\leq1$. By Lemma \ref{lem:compa}, $$\sup_{v\in[u,u+\delta]}|{\mathcal A}_n(v)-{\mathcal A}_n(u)|\leq \max\left\{\sup_{v\in[u,u+\delta]}|{\mathcal B}_{n,\pm}(v)-{\mathcal B}_{n,\pm}(u)| \right\}+4\delta^{1-\beta}\,,$$ on the event $E_n(\delta)$.
Hence, for any $\eta>0$, \begin{eqnarray*} {\mathbb P}\left( \sup_{v\in[u,u+\delta]}|{\mathcal A}_n(v)-{\mathcal A}_n(u)|>\eta\right)&\leq&{\mathbb P}\left(E_n(\delta)^c\right)\\ &+&{\mathbb P}\left( \sup_{v\in[u,u+\delta]}|{\mathcal B}_{n,+}(v)-{\mathcal B}_{n,+}(u)|>\eta-4\delta^{1-\beta}\right)\\ &+&{\mathbb P}\left( \sup_{v\in[u,u+\delta]}|{\mathcal B}_{n,-}(v)-{\mathcal B}_{n,-}(u)|>\eta-4\delta^{1-\beta}\right)\,. \end{eqnarray*} By \eqref{eq:equilibrium}, $$P_n(x):=L_\lambda[n+x]_n-L_\lambda[n]_n\,,\,\mbox{ for }\,\,\,x\geq 0\,,$$ is a Poisson process of intensity $\lambda$. Since $\lambda_\pm\to1$ as $n\to\infty$, the processes ${\mathcal B}_{n,-}$ and ${\mathcal B}_{n,+}$ converge in distribution to $\{{\mathcal B}(2u)\,,\,u\in[0,1]\}$, where ${\mathcal B}$ denotes a standard Brownian motion. Thus, by Lemma \ref{lem:tight}, for $\delta<(\eta/8)^{1/(1-\beta)}$, \begin{eqnarray*} \limsup_{n\to\infty}{\mathbb P}\left(\sup_{v\in[u,u+\delta]}|{\mathcal A}_n(v)-{\mathcal A}_n(u)|>\eta\right)&\leq& C\delta^{3\beta}+2{\mathbb P}\left( \sup_{v\in[u,u+\delta]}|{\mathcal B}(2v)-{\mathcal B}(2u)|>\eta-4\delta^{1-\beta}\right)\\ &\leq&C\delta^{3\beta}+2{\mathbb P}\left( \sup_{v\in[0,1]}|{\mathcal B}(v)|>\frac{\eta}{2\sqrt{2\delta}}\right)\,, \end{eqnarray*} which implies that (recall that $\beta\in(1/3,1)$) \begin{equation}\label{tight} \limsup_{\delta\to 0^+}\frac{1}{\delta}\left(\limsup_{n\to\infty}{\mathbb P}\left(\sup_{v\in[u,u+\delta]}|{\mathcal A}_n(v)-{\mathcal A}_n(u)|>\eta\right)\right)=0\,. \end{equation} Since, by \cite{CG2}, $$\limsup_{n\to\infty}\frac{{\mathbb E} |L[n]_n-2n|}{n^{1/3}}<\infty\,,$$ we have that $\{{\mathcal A}_n(0)\,,\,n\geq 1\}$ is tight. Together with \eqref{tight}, this shows tightness of the collection $\{{\mathcal A}_n\,,\,n\geq 1\}$ in the space of c\`adl\`ag functions on $[0,1]$, and also that every weak limit lives in the space of continuous functions \cite{Bi}. 
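As an aside, the Poisson-to-Brownian rescaling invoked in this step is easy to illustrate numerically. The following Python sketch (an illustration only; the intensity is set to $1$ since $\lambda_\pm\to1$, and all variable names are ours) checks that the number of Poisson points in a window of width $2un^{2/3}$, centered and divided by $n^{1/3}$, has mean close to $0$ and variance close to $2u$, in agreement with the limiting process ${\mathcal B}(2u)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, u = 10**6, 0.7                      # u is the time argument of B_{n,+-}(u)
width = 2 * u * n ** (2 / 3)           # window [n, n + 2u n^{2/3}]

# Increments of a unit-intensity Poisson process over the window,
# centered and rescaled as in the definition of B_{n,+-}
samples = rng.poisson(width, size=20_000)
B_n = (samples - width) / n ** (1 / 3)

# The limit B(2u) is a centered Gaussian with variance 2u = 1.4
print(round(float(B_n.mean()), 2), round(float(B_n.var()), 2))
```

The empirical variance matches the covariance structure of the limit used in the display above.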
$\Box$\\ \section{Proof of Theorem \ref{thm:LocalFluct}} To simplify notation, we will prove the statement for $s=1$ and restrict ourselves to $[0,1]$. The reader can then check that rescaling gives the result for general $s>0$, since \[ L[sn]_n \stackrel{dist.}{=} L[s^{1/2}n]_{s^{1/2}n}.\] \begin{lem}\label{lem:PoissonTight} Fix $\gamma'\in(\gamma,2/3)$ and let $${\lambda}_{\pm}={\lambda}_{\pm}(n):=1\pm\frac{1}{n^{\gamma'/2}}\,.$$ Define the event $$E_n:=\left\{Z'_{{\lambda}_+}[n]_n\geq0\,\,\mbox{ and }\,\,Z_{{\lambda}_-}[n+n^{\gamma}]_n\leq0\right\}\,.$$ There exists a constant $C>0$ such that $${\mathbb P}\left(E_n^c\right)\leq \frac{C}{n^{1-3\gamma'/2}}\,,$$ for all sufficiently large $n$. \end{lem} \noindent{\bf Proof\,\,} Denote $r:=n^{1/3-\gamma'/2}$ and let $$n_+:=\lambda_+ n< 2n\,\,\mbox{ and }\,\,h_{+}:=\left(\lambda_+-\frac{1}{\lambda_+}\right) n> rn^{2/3}> r n_+^{2/3}/2 \,,$$ (for all sufficiently large $n$). By \eqref{eq:sym} and \eqref{eq:diagsym}, \begin{eqnarray*} {\mathbb P}\left(Z'_{\lambda_+}[n]_n<0\right)&=&{\mathbb P}\left(Z_{1/\lambda_+}[n]_n>0\right)\\ &=&{\mathbb P}\left(Z_1[n/\lambda_+]_{\lambda_+ n}>0\right)\\ &=&{\mathbb P}\left(Z_1[\lambda_+n-h_+]_{\lambda_+ n}>0\right)\\ &=&{\mathbb P}\left(Z_1[\lambda_+n]_{\lambda_+ n}>h_+\right)\\ &\leq&{\mathbb P}\left(Z_1[n_+]_{n_+}>r n_+^{2/3}/2\right)\,. \end{eqnarray*} Analogously, for $$n_-:= \frac{n}{\lambda_-}<2n\,\,\mbox{ and }\,\,h_{-}:=\left(\frac{1}{\lambda_-}-\lambda_-\right) n> rn^{2/3}> r n_-^{2/3}/2 \,,$$ we have that \begin{eqnarray*} {\mathbb P}\left(Z_{\lambda_-}[n+n^{\gamma}]_n>0\right)&=&{\mathbb P}\left(Z_{\lambda_-}[n]_n>-n^{\gamma}\right)\\ &=&{\mathbb P}\left(\lambda_-Z_{1}[\lambda_-n]_{n/\lambda_-}>-n^{\gamma}\right)\\ &\leq&{\mathbb P}\left(Z_{1}[n_-]_{n_-}>h_- - n^{\gamma}\right)\\ &\leq& {\mathbb P}\left(Z_1[n_-]_{n_-}>(r-n^{\gamma-2/3}) n_-^{2/3}/2\right)\,. \end{eqnarray*} Now one can use Lemma \ref{lem:ExitControl} to finish the proof. 
$\Box$\\ \begin{lem}\label{lem:PoissonComparison} On the event $E_n$, for all $u<v$ in $[0,1]$, $$\Gamma_n^{-}(v)-\Gamma_n^{-}(u)-\frac{1}{n^{(\gamma'-\gamma)/2}}\leq \Delta_n(v)-\Delta_n(u)\leq \Gamma_n^+(v)-\Gamma_n^+(u)+\frac{1}{n^{(\gamma'-\gamma)/2}}\,,$$ where $$\Gamma_n^{\pm}(u):=\frac{L_{{\lambda}_\pm}[n+un^{\gamma}]_{n}-L_{{\lambda}_\pm}[n]_n-\lambda_\pm un^{\gamma}}{n^{\gamma/2}}\,.$$ \end{lem} \noindent{\bf Proof\,\,} By Lemma \ref{lem:LocalComparison}, if $Z'_{\lambda_+}[n]_n\geq 0$ then $$L[n+vn^{\gamma}]_{n}-L[n+un^{\gamma}]_n\leq L_{\lambda_+}[n+vn^{\gamma}]_{n}-L_{\lambda_+}[n+un^{\gamma}]_n\,,$$ and if $Z_{\lambda_-}[n+n^{\gamma}]_n\leq 0$ then $$L[n+vn^{\gamma}]_{n}-L[n+un^{\gamma}]_n\geq L_{\lambda_-}[n+vn^{\gamma}]_{n}-L_{\lambda_-}[n+un^{\gamma}]_n\,.$$ Using that $\lambda_{\pm}=1\pm n^{-\gamma'/2}$, one can finish the proof of the lemma. $\Box$\\ \noindent{\bf Proof of Theorem \ref{thm:LocalFluct}\,\,} By Lemma \ref{lem:PoissonComparison}, on the event $E_n$, $$| \Delta_n(v)-\Delta_n(u)|\leq\max \left\{|\Gamma_n^{\pm}(v)-\Gamma_n^{\pm}(u)|\right\}+\frac{1}{n^{(\gamma'-\gamma)/2}}\,.$$ Thus, by Lemma \ref{lem:PoissonTight}, \begin{eqnarray*} {\mathbb P}\left(\sup_{v\in[u,u+\delta]} | \Delta_n(v)-\Delta_n(u)|>\eta\right)&\leq&{\mathbb P}\left(\sup_{v\in[u,u+\delta]} |\Gamma^+_n(v)-\Gamma^+_n(u)|+\frac{1}{n^{(\gamma'-\gamma)/2}}>\eta\right)\\ &+&{\mathbb P}\left(\sup_{v\in[u,u+\delta]} |\Gamma^-_n(v)-\Gamma^-_n(u)|+\frac{1}{n^{(\gamma'-\gamma)/2}}>\eta\right)\\ &+&{\mathbb P}\left(E_n^c\right)\,. 
\end{eqnarray*} As before, $\lambda_\pm\to1$ as $n\to\infty$, which implies that $$\limsup_{n\to\infty}{\mathbb P}\left(\sup_{v\in[u,u+\delta]} | \Delta_n(v)-\Delta_n(u)|>\eta\right)\leq 2{\mathbb P}\left(\sup_{v\in[0,\delta]} |B(v)|>\eta\right)=2{\mathbb P}\left(\sup_{v\in[0,1]} |B(v)|>\frac{\eta}{\sqrt{\delta}}\right)\,,$$ and hence \begin{equation}\label{eq:tight} \limsup_{\delta\to0^+}\frac{1}{\delta}\left(\limsup_{n\to\infty}{\mathbb P}\left(\sup_{v\in[u,u+\delta]} | \Delta_n(v)-\Delta_n(u)|>\eta\right)\right)=0\,. \end{equation} Since $\Delta_n(0)=0$, \eqref{eq:tight} implies tightness of the collection $\{\Delta_n\,,\,n\geq 1\}$ in the space of c\`adl\`ag functions on $[0,1]$, and also that every weak limit lives in the space of continuous functions \cite{Bi}. The finite dimensional distributions of the limiting process can be obtained in the same way. Indeed, by Lemma \ref{lem:PoissonTight} and Lemma \ref{lem:PoissonComparison}, for $u_1,\dots,u_k\in[0,1]$ and $a_1,\dots,a_k\in{\mathbb R}$, $${\mathbb P}\left(\cap_{i=1}^k\left\{\Delta_n(u_i)\leq a_i\right\}\right)\geq {\mathbb P}\left(\cap_{i=1}^k\left\{\Gamma^+_n(u_i)\leq a_i -\frac{1}{n^{(\gamma'-\gamma)/2}}\right\}\right)-{\mathbb P}\left(E_n^c\right)\,,$$ and $${\mathbb P}\left(\cap_{i=1}^k\left\{\Delta_n(u_i)\leq a_i\right\}\right)\leq{\mathbb P}\left(\cap_{i=1}^k\left\{\Gamma^-_n(u_i)\leq a_i+\frac{1}{n^{(\gamma'-\gamma)/2}}\right\}\right)+{\mathbb P}\left(E_n^c\right)\,,$$ which shows that the finite dimensional distributions of $\Delta_n$ converge to the finite dimensional distributions of the standard Brownian motion process. 
$\Box$\\ \section{Proof of Theorem \ref{thm:LocalAiry}} \begin{lem}\label{lem:local} Fix $\beta\in(0,1/2)$ and for $\epsilon\in(0,1)$ let $$\lambda_\pm=\lambda_\pm(n,\epsilon):=1\pm\frac{\epsilon^{-\beta}}{n^{1/3}}\,.$$ Define the event $$E_n(\epsilon):=\left\{Z'_{\lambda_{+}}[n]_n\geq 0\,\,\mbox{ and }\,\,Z_{\lambda_{-}}[n+n^{2/3}]_n\leq 0\right\}\,.$$ There exists a constant $C>0$ such that, for all sufficiently small $\epsilon>0$, $$\limsup_{n\to\infty}{\mathbb P}\left(E_n(\epsilon)^c\right)\leq C\epsilon^{3\beta}\,.$$ \end{lem} \noindent{\bf Proof\,\,} The same proof as in Lemma \ref{lem:tight} applies. $\Box$\\ \begin{lem}\label{lem:localcompa} Let $\delta\in(0,1)$ and $u\in[0,1-\delta)$. Then, on the event $E_n(\epsilon)$, for all $v\in[u,u+\delta]$ we have that $$ {\mathcal B}_{n,-}(\epsilon v)-{\mathcal B}_{n,-}(\epsilon u)-2\delta\epsilon^{1-\beta}\leq{\mathcal A}_n(\epsilon v)-{\mathcal A}_n(\epsilon u)\,\leq\, {\mathcal B}_{n,+}(\epsilon v)-{\mathcal B}_{n,+}(\epsilon u)+4\delta\epsilon^{1-\beta}\,.$$ \end{lem} \noindent{\bf Proof\,\,} The same proof as in Lemma \ref{lem:compa} applies. 
Note that in this case we have \[ (\lambda_+ - 1)(2\epsilon v- 2\epsilon u)n^{1/3} \leq 2\epsilon^{1-\beta}\delta.\] $\Box$\\ \noindent{\bf Proof of Theorem \ref{thm:LocalAiry}\,\,} For $u\in[0,1]$, let $${\mathcal A}_n^\epsilon(u):=\epsilon^{-1/2}\left({\mathcal A}_n(\epsilon u)-{\mathcal A}_n(0)\right)\,\mbox{ and }\,{\mathcal B}^\epsilon_{n,\pm}(u):=\epsilon^{-1/2}{\mathcal B}_{n,\pm}(\epsilon u)\,.$$ By Lemma \ref{lem:localcompa}, on the event $E_n(\epsilon)$, for all $v\in[u,u+\delta]$, $$ {\mathcal B}^\epsilon_{n,-}(v)-{\mathcal B}^\epsilon_{n,-}(u)-2\delta\epsilon^{1/2-\beta}\leq{\mathcal A}^\epsilon_n(v)-{\mathcal A}^\epsilon_n(u)\,\leq\, {\mathcal B}^\epsilon_{n,+}(v)-{\mathcal B}^\epsilon_{n,+}(u)+4\delta\epsilon^{1/2-\beta}\,,$$ which shows that $$\sup_{v\in[u,u+\delta]}|{\mathcal A}^\epsilon_n(v)-{\mathcal A}^\epsilon_n(u)|\leq \max\left\{\sup_{v\in[u,u+\delta]}|{\mathcal B}_{n,\pm}^\epsilon(v)-{\mathcal B}_{n,\pm}^\epsilon(u)| \right\}+4\delta\epsilon^{1/2-\beta}\,.$$ Therefore, \begin{eqnarray*} {\mathbb P}\left( \sup_{v\in[u,u+\delta]}|{\mathcal A}^\epsilon_n(v)-{\mathcal A}^\epsilon_n(u)|>\eta\right)&\leq&{\mathbb P}\left(E_n(\epsilon)^c\right)\\ &+&{\mathbb P}\left( \sup_{v\in[u,u+\delta]}|{\mathcal B}_{n,+}^\epsilon(v)-{\mathcal B}_{n,+}^\epsilon(u)|>\eta-4\delta\epsilon^{1/2-\beta}\right)\\ &+&{\mathbb P}\left( \sup_{v\in[u,u+\delta]}|{\mathcal B}_{n,-}^\epsilon(v)-{\mathcal B}_{n,-}^\epsilon(u)|>\eta-4\delta\epsilon^{1/2-\beta}\right)\,. 
\end{eqnarray*} Since ${\mathcal A}_n^\epsilon$ converges to ${\mathcal A}^\epsilon$, and ${\mathcal B}_{n,\pm}^\epsilon$ converges to a Brownian motion ${\mathcal B}$, the preceding inequality implies that $${\mathbb P}\left(\sup_{v\in[u,u+\delta]}|{\mathcal A}^\epsilon(v)-{\mathcal A}^\epsilon(u)|>\eta\right)\leq C\epsilon^{3\beta}+2{\mathbb P}\left( \sup_{v\in[u,u+\delta]}|{\mathcal B}(2v)-{\mathcal B}(2u)|>\eta-4\delta\epsilon^{1/2-\beta}\right)\,.$$ Hence $$\limsup_{\epsilon\to 0^+}{\mathbb P}\left( \sup_{v\in[u,u+\delta]}|{\mathcal A}^\epsilon(v)-{\mathcal A}^\epsilon(u)|>\eta\right)\leq 2{\mathbb P}\left( \sup_{v\in[0,1]}|{\mathcal B}(v)|>\frac{\eta}{\sqrt{2\delta}}\right)\,,$$ which shows that \begin{equation}\label{eq:BlowTight} \limsup_{\delta\to 0^+}\frac{1}{\delta}\left(\limsup_{\epsilon\to 0^+}{\mathbb P}\left( \sup_{v\in[u,u+\delta]}|{\mathcal A}^\epsilon(v)-{\mathcal A}^\epsilon(u)|>\eta\right)\right)=0\,. \end{equation} Since ${\mathcal A}^{\epsilon}(0)=0$, by \eqref{eq:BlowTight} we have that $\left\{{\mathcal A}^\epsilon\,,\,\epsilon\in(0,1]\right\}$ is tight \cite{Bi}. The finite dimensional distributions of the limiting process can be obtained in the same way. 
Indeed, by Lemma \ref{lem:localcompa}, for $u_1,\dots,u_k\in[0,1]$ and $a_1,\dots,a_k\in{\mathbb R}$, $${\mathbb P}\left(\cap_{i=1}^k\left\{{\mathcal A}^\epsilon_n(u_i)\leq a_i\right\}\right)\leq{\mathbb P}\left(\cap_{i=1}^k\left\{{\mathcal B}^\epsilon_{n,-}(u_i)\leq a_i+4\epsilon^{1/2-\beta}\right\}\right)+{\mathbb P}\left(E_n(\epsilon)^c\right)\,,$$ and $${\mathbb P}\left(\cap_{i=1}^k\left\{{\mathcal A}^\epsilon_n(u_i)\leq a_i\right\}\right)\geq {\mathbb P}\left(\cap_{i=1}^k\left\{{\mathcal B}^\epsilon_{n,+}(u_i)\leq a_i-4\epsilon^{1/2-\beta}\right\}\right)-{\mathbb P}\left(E_n(\epsilon)^c\right)\,.$$ Thus, by Lemma \ref{lem:local}, $$ {\mathbb P}\left(\cap_{i=1}^k\left\{{\mathcal A}^\epsilon(u_i)\leq a_i\right\}\right)\leq {\mathbb P}\left(\cap_{i=1}^k\left\{{\mathcal B}(2u_i)\leq a_i+4\epsilon^{1/2-\beta}\right\}\right)+C\epsilon^{3\beta}\,,$$ and $$ {\mathbb P}\left(\cap_{i=1}^k\left\{{\mathcal A}^\epsilon(u_i)\leq a_i\right\}\right)\geq {\mathbb P}\left(\cap_{i=1}^k\left\{{\mathcal B}(2u_i)\leq a_i-4\epsilon^{1/2-\beta}\right\}\right)-C\epsilon^{3\beta}\,,$$ which proves that \begin{equation}\label{eq:BlowDist} \lim_{\epsilon\to 0^+}{\mathbb P}\left(\cap_{i=1}^k\left\{{\mathcal A}^\epsilon(u_i)\leq a_i\right\}\right)={\mathbb P}\left(\cap_{i=1}^k\left\{\sqrt{2}{\mathcal B}(u_i)\leq a_i\right\}\right)\,. \end{equation} $\Box$\\ \end{document}
\begin{document} \title{Attacking quantum key distribution with single-photon two-qubit quantum logic} \author{Jeffrey H. Shapiro} \email[Electronic address: ]{[email protected]} \author{Franco N. C. Wong} \affiliation{Massachusetts Institute of Technology, Research Laboratory of Electronics, Cambridge, Massachusetts 02139 USA} \date{\today} \begin{abstract} The Fuchs-Peres-Brandt (FPB) probe realizes the most powerful individual attack on Bennett-Brassard 1984 quantum key distribution (BB84 QKD) by means of a single controlled-NOT (CNOT) gate. This paper describes a complete physical simulation of the FPB-probe attack on polarization-based BB84 QKD using a deterministic CNOT constructed from single-photon two-qubit quantum logic. Adding polarization-preserving quantum nondemolition measurements of photon number to this configuration converts the physical simulation into a true deterministic realization of the FPB attack. \end{abstract} \pacs{03.67.Dd, 03.67.Lx, 42.50.Dv, 42.40.Lm} \maketitle \section{Introduction} Bennett-Brassard 1984 quantum key distribution (BB84 QKD) using single-photon polarization states works as follows \cite{BB84}. In each time interval allotted for a bit, Alice transmits a single photon in a randomly selected polarization, chosen from horizontal ($H$), vertical ($V$), $+$45$^\circ$, or $-$45$^\circ$, while Bob randomly chooses to detect photons in either the $H$/$V$ or $\pm$45$^\circ$ bases. Bob discloses to Alice the sequence of bit intervals and associated measurement bases for which he has detections. Alice then informs Bob which detections occurred in bases coincident with the ones that she used. These are the \em sift\/\rm\ events, i.e., bit intervals in which Bob has a detection \em and\/\rm\ his count has occurred in the same basis that Alice used. An \em error\/\rm\ event is a sift event in which Bob decodes the incorrect bit value. 
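The sifting statistics implied by this protocol are easy to simulate. The following Python sketch (a toy model with an ideal lossless channel, a perfect single-photon source, and no eavesdropper; all names are illustrative) shows that about half of Alice's bits survive sifting and that, under these ideal conditions, no error events occur.

```python
import random

random.seed(7)
n, sift, errors = 100_000, 0, 0
for _ in range(n):
    bit = random.randint(0, 1)             # Alice's random key bit
    a_basis = random.choice("RD")          # rectilinear (H/V) or diagonal (+/-45)
    b_basis = random.choice("RD")          # Bob's independent basis choice
    # Matching bases: Bob decodes Alice's bit; otherwise his outcome is random
    outcome = bit if b_basis == a_basis else random.randint(0, 1)
    if b_basis == a_basis:                 # sift event
        sift += 1
        errors += (outcome != bit)         # error event
print(round(sift / n, 2), errors)          # about 0.5, and 0 errors
```

With loss, dark counts, or an eavesdropper present, the error count becomes nonzero, which is what the error-correction and privacy-amplification steps discussed next must handle.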
Alice and Bob employ a prescribed set of operations to identify errors in their sifted bits, correct these errors, and apply sufficient privacy amplification to deny useful key information to any potential eavesdropper (Eve). At the end of the full QKD procedure, Alice and Bob have a shared one-time pad with which they can communicate in complete security. In long-distance QKD systems, most of Alice's photons will go undetected, owing to propagation loss and detector inefficiencies. Dark counts and, for atmospheric QKD systems, background counts can cause error events in these systems, as can intrusion by Eve. Employing an attenuated laser source, in lieu of a true single-photon source, further reduces QKD performance: such sources are typically run at less than one photon on average per bit interval, and the occurrence of multi-photon events, although rare at low average photon number, opens up additional vulnerability. Security proofs have been published for ideal BB84 \cite{security1}, as have security analyses that incorporate a variety of non-idealities \cite{security2}. Our attention, however, will be directed toward attacking BB84 QKD; to our knowledge no such attack experiments have been performed, although a variety of potentially practical approaches have been discussed \cite{attacks}. Our particular objective will be to show that current technology permits physical simulation of the Fuchs-Peres-Brandt (FPB) probe \cite{FPB}, i.e., the most powerful individual attack on single-photon BB84, and that developments underway in quantum nondemolition (QND) detection may soon turn this physical simulation into a full implementation of the attack. Thus we believe it is of interest to construct the physical simulation and put BB84's security to the test: how much information can Eve really derive about the key that Alice and Bob have distilled while keeping them oblivious to her presence? The remainder of this paper is organized as follows. 
In Sec.~II we review the FPB probe and its theoretical performance. In Sec.~III we describe a complete physical simulation of this probe constructed from single-photon two-qubit (SPTQ) quantum logic. We conclude, in Sec.~IV, by showing how the addition of polarization-preserving QND measurements of photon number can convert this physical simulation into a true deterministic realization of the FPB attack on polarization-based BB84. \section{The Fuchs-Peres-Brandt Probe} In an individual attack on single-photon BB84 QKD, Eve probes Alice's photons one at a time. In a collective attack, Eve's measurements probe groups of Alice's photons. Less is known about collective attacks \cite{collective}, so we will limit our consideration to individual attacks. Fuchs and Peres \cite{FP} described the most general way in which an individual attack could be performed. Eve supplies a probe photon and lets it interact with Alice's photon in a unitary manner. Eve then sends Alice's photon to Bob, and performs a probability operator-valued measurement (POVM) on the probe photon she has retained. Slutsky {\em et al.} \cite{Slutsky} demonstrated that the Fuchs-Peres construct---with the appropriate choice of probe state, interaction, and measurement---affords Eve the maximum amount of R\'{e}nyi information about the error-free sifted bits that Bob receives for a given level of disturbance, i.e., for a given probability that a sifted bit will be received in error. Brandt \cite{FPB} extended the Slutsky {\em et al.} treatment by showing that the optimal probe could be realized with a single CNOT gate. Figure~1 shows an abstract diagram of the resulting Fuchs-Peres-Brandt probe. 
In what follows we give a brief review of its structure and performance---see \cite{FPB} for a more detailed treatment---where, for simplicity, we assume ideal conditions in which Alice transmits a single photon per bit interval, there is no propagation loss and no extraneous (background) light collection, and both Eve and Bob have unity quantum efficiency photodetectors with no dark counts. These ideal conditions imply there will not be any errors on sifted bits in the absence of eavesdropping; the case of more realistic conditions will be discussed briefly in Sec.~IV. \begin{figure} \caption{(Color online) Block diagram of the Fuchs-Peres-Brandt probe for attacking BB84 QKD.} \end{figure} In each bit interval Alice transmits, at random, a single photon in one of the four BB84 polarization states. Eve uses this photon as the control-qubit input to a CNOT gate whose computational basis---relative to the BB84 polarization states---is shown in Fig.~2, namely \begin{eqnarray} |0\rangle &\equiv& \cos(\pi/8)|H\rangle + \sin(\pi/8)|V\rangle \\ |1\rangle &\equiv& -\sin(\pi/8)|H\rangle + \cos(\pi/8)|V\rangle, \end{eqnarray} in terms of the $H/V$ basis. Eve supplies her own probe photon, as the target-qubit input to this CNOT gate, in the state \begin{equation} |T_{\rm in}\rangle \equiv C|+\rangle + S|-\rangle, \label{probeinput} \end{equation} where $C = \sqrt{1-2P_E}$, $S = \sqrt{2P_E}$, $|\pm\rangle = (|0\rangle \pm |1\rangle)/\sqrt{2}$, and $0\le P_E\le 1/2$ will turn out to be the error probability that Eve's probe creates on Bob's sifted bits \cite{footnote1}. So, as $P_E$ increases from 0 to 1/2, $|T_{\rm in}\rangle$ goes from $|+\rangle$ to $|-\rangle$. The (unnormalized) output states that may occur for this target qubit are \begin{eqnarray} |T_\pm\rangle &\equiv& C|+\rangle \pm \frac{S}{\sqrt{2}}|-\rangle \\ |T_E\rangle &\equiv& \frac{S}{\sqrt{2}}|-\rangle. 
\end{eqnarray} \begin{figure} \caption{(Color online) Computational basis for Eve's CNOT gate referenced to the BB84 polarization states.} \end{figure} Here is how the FPB probe works. When Alice uses the $H/V$ basis for her photon transmission, Eve's CNOT gate effects the following transformation, \begin{eqnarray} |H\rangle|T_{\rm in}\rangle &\longrightarrow& |H\rangle|T_-\rangle + |V\rangle|T_E\rangle \label{Hin_out} \\ |V\rangle|T_{\rm in}\rangle &\longrightarrow& |V\rangle|T_+\rangle +|H\rangle|T_E\rangle,\label{Vin_out} \end{eqnarray} where the kets on the left-hand side denote the Alice\,$\otimes$\,Eve state of the control and target qubits at the CNOT's input and the kets on the right-hand side denote the Bob\,$\otimes$\,Eve state of the control and target qubits at the CNOT's output. Similarly, when Alice uses the $\pm 45^\circ$ basis, Eve's CNOT gate has the following behavior, \begin{eqnarray} |\mbox{$+$}45^\circ\rangle|T_{\rm in}\rangle &\longrightarrow& |\mbox{$+$}45^\circ\rangle|T_+\rangle + |\mbox{$-$}45^\circ\rangle|T_E\rangle \label{plus_in_out}\\ |\mbox{$-$}45^\circ\rangle|T_{\rm in}\rangle &\longrightarrow& |\mbox{$-$}45^\circ\rangle|T_-\rangle +|\mbox{$+$}45^\circ\rangle|T_E\rangle. \label{minus_in_out} \end{eqnarray} Suppose that Bob measures in the basis that Alice has employed \em and\/\rm\ his outcome matches what Alice sent. Then Eve can learn their shared bit value, once Bob discloses his measurement basis, by distinguishing between the $|T_+\rangle$ and $|T_-\rangle$ output states for her target qubit. Of course, this knowledge comes at a cost: Eve has caused an error event whenever Alice and Bob choose a common basis and her target qubit's output state is $|T_E\rangle$. To maximize the information she derives from this intrusion, Eve applies the minimum error probability receiver for distinguishing between the single-photon polarization states $|T_+\rangle$ and $|T_-\rangle$. 
This is a projective measurement onto the polarization basis $\{|d_+\rangle,|d_-\rangle\}$, shown in Fig.~3 and given by \begin{eqnarray} |d_+\rangle &=& \frac{|+\rangle + |-\rangle}{\sqrt{2}} = |0\rangle \\ |d_-\rangle &=& \frac{|+\rangle - |-\rangle}{\sqrt{2}} = |1\rangle. \end{eqnarray} \begin{figure} \caption{(Color online) Measurement basis for Eve's minimum-error-probability discrimination between $|T_+\rangle$ and $|T_-\rangle$.} \end{figure} Two straightforward calculations will now complete our review of the FPB probe. First, we find the error probability that is created by Eve's presence. Suppose Alice and Bob use the $H/V$ basis and Alice has sent $|H\rangle$. Alice and Bob will incur an error if the control\,$\otimes$\,target output from Eve's CNOT gate is $|V\rangle|T_E\rangle$. The probability that this occurs is $\langle T_E| T_E\rangle = S^2/2 = P_E$. The same conditional error probability ensues for the other three error events, e.g., when Alice and Bob use the $\pm 45^\circ$ basis, Alice sends $|$$+45^\circ\rangle$, and the CNOT output is $|$$-45^\circ\rangle|T_E\rangle$. It follows that the unconditional error probability incurred by Alice and Bob on their sift events is $P_E$. Now we shall determine the R\'{e}nyi information that Eve derives about the sift events for which Alice and Bob do not suffer errors. Let $B = \{0,1\}$ and $E = \{0,1\}$ denote the ensembles of possible bit values that Bob and Eve receive on a sift event in which Bob's bit value agrees with Alice's. The R\'{e}nyi information (in bits) that Eve learns about each Alice/Bob error-free sift event is \begin{eqnarray} I_R &\equiv& -\log_2\!\left(\sum_{b= 0}^1P^2(b)\right) \nonumber \\ &+&\sum_{e = 0}^1P(e)\log_2\!\left(\sum_{b = 0}^1 P^2(b\mid e)\right), \end{eqnarray} where $\{P(b), P(e)\}$ are the prior probabilities for Bob's and Eve's bit values, and $P(b\mid e)$ is the conditional probability for Bob's bit value to be $b$ given that Eve's is $e$. 
Alice's bits are equally likely to be 0 or 1, and Eve's conditional error probabilities satisfy \cite{Helstrom} \begin{eqnarray} \lefteqn{P(e = 1\mid b = 0) = P(e = 0\mid b = 1)} \\ &=& \frac{1}{2}\!\left(1 - \sqrt{1 - \frac{|\langle T_+|T_-\rangle|^2}{\langle T_+|T_+\rangle \langle T_-|T_-\rangle}}\right) \\ &=& \frac{1}{2}\!\left(1- \frac{\sqrt{4P_E(1-2P_E)}}{1-P_E}\right). \end{eqnarray} These results imply that $b$ is also equally likely to be 0 or 1, and that $P(b\mid e) = P(e\mid b)$, whence \begin{equation} I_R = \log_2\!\left(1 + \frac{4P_E(1-2P_E)}{(1-P_E)^2}\right), \end{equation} which we have plotted in Fig.~4. \begin{figure} \caption{(Color online) Eve's R\'{e}nyi information about Bob's error-free sifted bits as a function of the error probability that her eavesdropping creates.} \end{figure} Figure~4 reveals several noteworthy performance points for the FPB probe. The $I_R = 0, P_E = 0$ point in this figure corresponds to Eve's operating her CNOT gate with $|T_{\rm in}\rangle = |+\rangle$ for its target qubit input. It is well known that such an input is unaffected by and does not affect the control qubit. Thus Bob suffers no errors but Eve gets no R\'{e}nyi information. The $I_R = 1, P_E = 1/3$ point in this figure corresponds to Eve's operating her CNOT gate with $|T_{\rm in}\rangle = \sqrt{1/3}|+\rangle + \sqrt{2/3}|-\rangle$, which leads to $|T_\pm\rangle \propto |d_\pm\rangle$. In this case Eve's Fig.~3 receiver makes no errors, so she obtains the maximum (1 bit) R\'{e}nyi information about each of Bob's error-free bits. The $I_R = 0, P_E = 1/2$ point in this figure corresponds to Eve's operating her CNOT gate with $|T_{\rm in}\rangle = |-\rangle$, which gives $|T_+\rangle = |T_-\rangle = |T_E\rangle = \sqrt{1/2}|-\rangle$. Here it is clear that Eve gains no information about Bob's error-free bits, but his error probability is 1/2 because of the action of the $|-\rangle$ target qubit on the control qubit. 
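The numbers above are easy to check numerically. The following Python sketch (an illustration only; kets are written as coordinate vectors in the CNOT computational basis, and which of $|T_\pm\rangle$ pairs with $|H\rangle$ versus $|V\rangle$ is a sign convention, so only convention-independent quantities are tested) verifies that the flipped-polarization component has squared norm $\langle T_E|T_E\rangle = P_E$, that the two error-free target states have overlap $1-3P_E$, and that $I_R$ takes the values $0$, $1$, and $0$ bits at $P_E = 0$, $1/3$, and $1/2$.

```python
import numpy as np
from math import log2

c, s = np.cos(np.pi / 8), np.sin(np.pi / 8)
# BB84 kets written in the CNOT computational basis {|0>, |1>} of Fig. 2
H, V = np.array([c, -s]), np.array([s, c])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
# CNOT on control (x) target in the ordered basis {|00>, |01>, |10>, |11>}
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

P_E = 0.2                                   # any value in [0, 1/2]
C, S = np.sqrt(1 - 2 * P_E), np.sqrt(2 * P_E)
T_in = C * plus + S * minus                 # Eve's probe input state

def target_given(ctrl_in, ctrl_out):
    """Target ket left after projecting the CNOT output onto ctrl_out."""
    out = (CNOT @ np.kron(ctrl_in, T_in)).reshape(2, 2)
    return ctrl_out @ out                   # rows index the control qubit

T_E = target_given(H, V)                    # Alice sent H, Bob receives V
print(np.allclose(T_E @ T_E, P_E))          # True: error probability is P_E

T_H, T_V = target_given(H, H), target_given(V, V)
print(np.allclose(T_H @ T_V, 1 - 3 * P_E))  # True: <T_+|T_-> = 1 - 3 P_E

def renyi_info(p):
    """Eve's Renyi information (bits) per error-free sifted bit."""
    return log2(1 + 4 * p * (1 - 2 * p) / (1 - p) ** 2)

print(renyi_info(0.0), round(renyi_info(1 / 3), 9), renyi_info(0.5))  # 0.0 1.0 0.0
```

Sweeping `P_E` over $[0,1/2]$ in `renyi_info` reproduces the curve plotted in Fig.~4.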
\section{Physical Simulation in SPTQ Logic} In single-photon two-qubit quantum logic, each photon encodes two independently controllable qubits \cite{SPTQ1}. One of these is the familiar polarization qubit, with basis $\{|H\rangle,|V\rangle\}$. The other we shall term the momentum qubit---because our physical simulation of the FPB probe will rely on the polarization-momentum hyperentangled photon pairs produced by type-II phase matched spontaneous parametric downconversion (SPDC)---although in the collimated configuration in which SPTQ is implemented its basis states are single-photon kets for right and left beam positions (spatial modes), denoted $\{|R\rangle, |L\rangle\}$. Unlike the gates proposed for linear optics quantum computing \cite{KLM}, which are scalable but non-deterministic, SPTQ quantum logic is deterministic but not scalable. Nevertheless, SPTQ quantum logic suffices for a complete physical simulation of polarization-based BB84 being attacked with the FPB probe, as we shall show. Before doing so, however, we need to comment on the gates that have been demonstrated in SPTQ logic. It is well known that single qubit rotations and CNOT gates form a universal set for quantum computation. In SPTQ quantum logic, polarization-qubit rotations are easily accomplished with wave plates, just as is done in linear optics quantum computing. Momentum-qubit rotations are realized by first performing a SWAP operation, to exchange the polarization and momentum qubits, then rotating the polarization qubit, and finally performing another SWAP. The SWAP operation is a cascade of three CNOTs, as shown in Fig.~5. For its implementation in SPTQ quantum logic the left and right CNOTs in Fig.~5 are momentum-controlled NOT gates (M-CNOTs) and the middle CNOT is a polarization-controlled NOT gate (P-CNOT). 
(An M-CNOT uses the momentum qubit of a single photon to perform the controlled-NOT operation on the polarization qubit of that same photon, and vice versa for the P-CNOT gate.) Experimental demonstrations of deterministic M-CNOT, P-CNOT, and SWAP gates are reported in \cite{SPTQ1,SPTQ2}. \begin{figure} \caption{Quantum circuit diagram for a SWAP gate realized as a cascade of three CNOTs. In SPTQ quantum logic the upper rail is the momentum qubit and the lower rail is the polarization qubit of the same photon.} \end{figure} Figure~6 shows a physical simulation of polarization-based BB84 under FPB attack when Alice has a single-photon source and Bob employs active basis selection; Fig.~7 shows the modification needed to accommodate Bob's using passive basis selection. In either case, Alice uses a polarizing beam splitter and an electro-optic modulator, as a controllable half-wave plate (HWP), to set the randomly-selected BB84 polarization state for each photon she transmits. Moreover, she employs a single spatial mode, which we assume coincides with the $R$ beam position in Eve's apparatus. Eve then begins her attack by imposing the probe state $|T_{\rm in}\rangle$ on the momentum qubit. She does this by applying a SWAP gate, to exchange the momentum and polarization qubits of Alice's photon, rotating the resulting polarization qubit (with the HWP in Fig.~6) to the $|T_{\rm in}\rangle$ state, and then using another SWAP to switch this state into the momentum qubit. This procedure leaves Alice's BB84 polarization state unaffected, although her photon, which will ultimately propagate on to Bob, is no longer in a single spatial mode. Eve completes the first stage of her attack by sending Alice's photon through a P-CNOT gate, which will accomplish the state transformations given in Eqs.~(\ref{Hin_out})--(\ref{minus_in_out}), and then routing it to Bob. 
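Since the SWAP cascade of Fig.~5 underlies both the momentum-qubit rotations and Eve's state preparation, it is worth recording the matrix identity it relies on. A minimal numerical check (an illustration only; `C01` denotes a CNOT with the first qubit as control, `C10` one with the second, in the ordered two-qubit basis $\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}$):

```python
import numpy as np

# CNOT with qubit 0 as control (C01), CNOT with qubit 1 as control (C10),
# and SWAP, all in the ordered basis {|00>, |01>, |10>, |11>}
C01 = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
C10 = np.array([[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]])
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])

# The CNOT-CNOT-CNOT cascade of Fig. 5 equals SWAP
print(np.array_equal(C01 @ C10 @ C01, SWAP))  # True
print(np.array_equal(C10 @ C01 @ C10, SWAP))  # True (roles interchanged)
```

In the SPTQ realization the outer gates are M-CNOTs and the middle gate is a P-CNOT; as the second check shows, the identity holds equally with the roles of the two qubits interchanged.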
If Bob employs active basis selection (Fig.~6), then in each bit interval he will use an electro-optic modulator---as a controllable HWP---plus a polarizing beam splitter to set the randomly-selected polarization basis for his measurement. The functioning of this basis-selection setup is unaffected by Alice's photon no longer being in a single spatial mode. The reason that we call Fig.~6 a physical simulation, rather than a true attack, lies in the measurement box. Here, Eve has invaded Bob's turf, and inserted SWAP gates, half-wave plates, polarizing beam splitters, and additional photodetectors, so that she can forward to Bob measurement results corresponding to photon counting on the polarization basis that he has selected while she retains the photon counting results corresponding to her $\{|d_+\rangle, |d_-\rangle\}$ measurement. Clearly Bob would never knowingly permit Eve to intrude into his receiver box in this manner. Moreover, if Eve could do so, she would not bother with an FPB probe as she could directly observe Bob's bit values. \begin{figure} \caption{(Color online) Physical simulation of polarization-based BB84 QKD and the FPB-probe attack.} \end{figure} \begin{figure} \caption{(Color online) Modification of Fig.~6 to accommodate Bob's using passive basis selection.} \end{figure} If Bob employs passive basis selection (Fig.~7), then he uses a 50/50 beam splitter followed by static-HWP analysis in the $H$-$V$ and $\pm 45^\circ$ bases, with only the former being explicitly shown in Fig.~7. The rest of Eve's attack mimics what was seen in Fig.~6, i.e., she gets inside Bob's measurement boxes with SWAP gates, half-wave plates, and additional detectors so that she can perform her probe measurement while providing Bob with his BB84 polarization-measurement data. 
Because the Fig.~7 arrangement requires that twice as many SWAP gates, twice as many half-wave plates, and twice as many single-photon detectors be inserted into Bob's receiver system, as compared to what is needed in the Fig.~6 setup, we shall limit the rest of our discussion to the case of active basis selection as it leads to a more parsimonious physical simulation of the Fuchs-Peres-Brandt attack. We recognize, of course, that the decision to use active basis selection is Bob's to make, not Eve's. More importantly, however, in Sec.~IV we will show how the availability of polarization-preserving QND photon-number measurements can be used to turn Fig.~6 into a true, deterministic implementation of the FPB attack. The same conversion can be accomplished for passive basis selection. Before turning to the true-attack implementation, let us flesh out some details of the measurement box in Fig.~6 and show how SPDC can be used, in lieu of the single-photon source, to perform this physical simulation. Let $|\psi_{\rm out}\rangle$ denote the polarization\,$\otimes$\,momentum state at the output of Eve's P-CNOT gate in Fig.~6. Bob's polarization analysis box splits this state, according to the basis he has chosen, so that one basis state goes to the upper branch of the measurement box while the other goes to the lower branch of that box. This polarization sorting does nothing to the momentum qubit, so the SWAP gates, half-wave plates, and polarizing beam splitters that Eve has inserted into the measurement box accomplish her $\{|d_+\rangle, |d_-\rangle\}$ projective measurement, i.e., the horizontal paths into photodetectors in Fig.~6 are projecting the momentum qubit of $|\psi_{\rm out}\rangle$ onto $|d_-\rangle$ and the vertical paths into photodetectors in Fig.~6 are projecting this state onto $|d_+\rangle$. 
Eve records the combined results of the two $|d_+\rangle$ versus $|d_-\rangle$ detections, whereas Bob, who only sees the combined photodetections for the upper and lower branches entering the measurement box, gets his BB84 polarization data. Bob's data is impaired, of course, by the effect of Eve's P-CNOT. Single-photon on-demand sources are now under development at several institutions \cite{single}, and their use in BB84 QKD has been demonstrated \cite{singleBB84}. At present, however, it is much more practical to use SPDC as a heralded source of single photons \cite{herald}. In SPDC, signal and idler photons are emitted in pairs, thus detection of the signal photon heralds the presence of the idler photon. Moreover, with appropriate configurations \cite{bidirectional}, SPDC will produce photons that are simultaneously entangled in polarization and in momentum. This hyperentanglement leads us to propose the Fig.~8 configuration for physically simulating the FPB-probe attack on BB84. Here, a pump laser drives SPDC in a type-II phase matched $\chi^{(2)}$ crystal, such as periodically-poled potassium titanyl phosphate (PPKTP), producing pairs of orthogonally-polarized, frequency-degenerate photons that are entangled in both polarization and momentum. The first polarizing beam splitter transmits a horizontally-polarized photon and reflects a vertically-polarized photon while preserving their momentum entanglement. Eve uses a SWAP gate and (half-wave plate plus polarizing beam splitter) polarization rotation so that her photodetector's clicking will, by virtue of the momentum entanglement, herald the setting of the desired $|T_{\rm in}\rangle$ momentum-qubit state on the horizontally-polarized photon emerging from the first polarizing beam splitter. Alice's electronically controllable half-wave plate sets the BB84 polarization qubit on this photon, and the rest of the Fig.~8 configuration is identical to that shown and explained in Fig.~6. 
Inasmuch as the SPDC source and SPTQ gates needed to realize the Fig.~8 setup have been demonstrated, we propose that such an experiment be performed. Simultaneous recording of Alice's polarization choices, Bob's polarization measurements and Eve's $|d_+\rangle$ versus $|d_-\rangle$ results can then be processed through the BB84 protocol stack to study the degree to which the security proofs and eavesdropping analyses stand up to experimental scrutiny. \begin{figure} \caption{(Color online) Proposed configuration for a complete physical simulation of the FPB attack on BB84 that is based on hyperentangled photon pairs from type-II phase matched SPDC and gates built from SPTQ quantum logic.} \end{figure} \section{The Complete Attack} Although the FPB attack's physical simulation, as described in the preceding section, is both experimentally feasible and technically informative, any vulnerabilities it might reveal would only be of academic interest were there no practical means to turn it into a true deterministic implementation in which Eve did \em not\/\rm\ need to invade Bob's receiver. Quantum nondemolition measurement technology provides the key to creating this complete attack. As shown in the appendix, it is possible, in principle, to use cross-phase modulation between a strong coherent-state probe beam and an arbitrarily polarized signal beam to make a QND measurement of the signal beam's total photon number while preserving its polarization state. Cross-phase modulation QND measurement of photon number has long been a topic of interest in quantum optics \cite{Imoto}, and recent theory has shown that it provides an excellent new route to photonic quantum computation \cite{Nemoto}. Thus it is not unwarranted to presume that polarization-preserving QND measurement of total photon number may be developed. With such technology in hand, the FPB-probe attack shown in Fig.~9 becomes viable. 
Here, Eve imposes a momentum qubit on Alice's polarization-encoded photon and performs a P-CNOT operation exactly as discussed in conjunction with Figs.~6 and 8. Now, however, Eve uses a SWAP-gate half-wave plate combination so that the $|d_+\rangle$ and $|d_-\rangle$ momentum qubit states emerging from her P-CNOT become $|V\rangle$ and $|H\rangle$ states entering the polarizing beam splitter that follows the half-wave plate. This beam splitter routes these polarizations into its transmitted and reflected output ports, respectively, where, in each arm, Eve employs a SWAP gate, a polarization-preserving QND measurement of total photon number, and another SWAP gate. The first of these SWAPs returns Alice's BB84 qubit to polarization, so that a click on Eve's polarization-preserving QND apparatus completes her $\{|d_+\rangle, |d_-\rangle\}$ measurement without further scrambling Alice's BB84 qubit beyond what has already occurred in Eve's P-CNOT gate. The SWAP gates that follow the QND boxes then restore definite ($V$ and $H$) polarizations to the light in the upper and lower branches so that they may be recombined on a polarizing beam splitter. The SWAP gate that follows this recombination then returns the BB84 qubit riding on Alice's photon to polarization for transmission to and measurement by Bob. This photon is no longer in the single spatial mode emitted by Alice's transmitter, hence Bob could use spatial-mode discrimination to infer the presence of Eve, regardless of the $P_E$ value she had chosen to impose. Eve, however, can preclude that possibility. Because the result of her $\{|d_+\rangle,|d_-\rangle\}$ measurement tells her the value of the momentum qubit on the photon being sent to Bob, she can employ an additional stage of qubit rotation to restore this momentum qubit to the $|R\rangle$ state corresponding to Alice's transmission. 
Also, should Alice try to defeat Eve's FPB probe by augmenting her BB84 polarization qubit with a randomly-chosen momentum qubit, Eve can use a QND measurement setup like that shown in Fig.~9 to collapse the value of that momentum qubit to $|R\rangle$ or $|L\rangle$, and then rotate that momentum qubit into the $|R\rangle$-state spatial mode before applying the FPB-probe attack. At the conclusion of her attack, she can then randomize the momentum qubit on the photon that will be routed on to Bob without further impact---beyond that imposed by her P-CNOT gate---on that photon's polarization qubit. So, unless Alice and Bob generalize their polarization-based BB84 protocol to include cooperative examination of the momentum qubit, Alice's randomization of that qubit will neither affect Eve's FPB attack, nor provide Alice and Bob with any additional evidence, beyond that obtained from the occurrence of errors on sifted bits, of Eve's presence. \begin{figure} \caption{(Color online) Deterministic FPB-probe attack on polarization-based BB84 that is realized with polarization-preserving QND measurements and SPTQ quantum logic.} \end{figure} Some concluding remarks are now in order. We have shown that a physical simulation of the Fuchs-Peres-Brandt attack on polarization-based BB84 is feasible with currently available technology, and we have argued that the development of polarization-preserving QND technology for measuring total photon number will permit mounting of a true deterministic FPB-probe attack. Our analysis has presumed ideal conditions in which Alice employs a single-photon source, there is no propagation loss and no extraneous (background) light collection, and both Eve and Bob have unity quantum efficiency photodetectors with no dark counts.
Because current QKD systems typically employ attenuated laser sources, and suffer from propagation loss, photodetector inefficiencies, and extraneous counts, it behooves us to at least comment on how such non-idealities could impact the FPB probe we have described. The use of an attenuated laser source poses no problem for the configurations shown in Figs.~6--9. This is because the single-qubit rotations and the CNOT gates of SPTQ quantum logic effect the same transformations on coherent states as they do on single-photon states. For example, the same half-wave plate setting that rotates the single-photon $|H\rangle$ qubit into the single-photon $|V\rangle$ qubit will transform the horizontally-polarized coherent state $|\alpha\rangle_H$ into the vertically-polarized coherent state $|\alpha\rangle_V$. Likewise, the SPTQ P-CNOT gate that transforms a single photon carrying polarization ($|H\rangle = |0\rangle, |V\rangle = |1\rangle$) and momentum ($|R\rangle = |0\rangle, |L\rangle = |1\rangle$) qubits according to \begin{eqnarray} \lefteqn{c_{HR}|HR\rangle + c_{HL}|HL\rangle + c_{VR}|VR\rangle + c_{VL}|VL\rangle \longrightarrow } \nonumber \\ &&c_{HR}|HR\rangle + c_{HL}|HL\rangle + c_{VR}|VL\rangle + c_{VL}|VR\rangle, \end{eqnarray} will transform the four-mode coherent-state input with eigenvalues \begin{eqnarray} \lefteqn{\hspace*{-.5in} \left[\begin{array}{cccc} \langle\hat{a}_{HR}\rangle & \langle\hat{a}_{HL}\rangle & \langle\hat{a}_{VR}\rangle & \langle\hat{a}_{VL}\rangle \end{array}\right] = }\nonumber \\ &&\hspace*{.25in}\left[\begin{array}{cccc} \alpha_{HR} & \alpha_{HL} & \alpha_{VR} & \alpha_{VL}\end{array}\right], \end{eqnarray} into a four-mode coherent-state output with eigenvalues \begin{eqnarray} \lefteqn{\hspace*{-.5in}\left[\begin{array}{cccc} \langle\hat{a}_{HR}\rangle & \langle\hat{a}_{HL}\rangle & \langle\hat{a}_{VR}\rangle & \langle\hat{a}_{VL}\rangle \end{array}\right] = }\nonumber \\ &&\hspace*{.25in} \left[\begin{array}{cccc} \alpha_{HR} & 
\alpha_{HL} & \alpha_{VL} & \alpha_{VR}\end{array}\right], \end{eqnarray} where the $\hat{a}$'s are annihilation operators for modes labeled by their polarization and beam positions. It follows that the coherent-state $P_E$ and $I_B$ calculations mimic the qubit derivations that we presented in Sec.~III, with coherent-state inner products taking the place of qubit-state inner products. At low average photon number, these coherent-state results reduce to the qubit expressions for events which give rise to clicks in the photodetectors shown in Figs.~6--9. Finally, a word about propagation loss, detector inefficiencies, and extraneous counts from dark current or background light is in order. All of these non-idealities actually help our Eve, in that they lead to a non-zero quantum bit error rate between Alice and Bob in the absence of the FPB attack. If Eve's $P_E$ value is set below that baseline error rate, then her presence should be undetectable. \vspace*{.2in} \begin{acknowledgments} The authors acknowledge useful technical discussions with Howard Brandt, Jonathan Smith and Stewart Personick. This work was supported by the Department of Defense Multidisciplinary University Research Initiative program under Army Research Office grant DAAD-19-00-1-0177 and by MIT Lincoln Laboratory. \end{acknowledgments} \appendix* \section{QND Measurement} Here we show that it is possible, in principle, to use cross-phase modulation between a strong coherent-state probe beam and an arbitrarily-polarized signal beam to make a QND measurement of the signal beam's total photon number. Let $\{\hat{a}_H, \hat{a}_V, \hat{a}_P\}$ be the annihilation operators of the horizontal and vertical polarizations of the signal beam and the (single-polarization) probe beam, respectively at the input to a cross-phase modulation interaction. 
We shall take that interaction to transform these annihilation operators according to the following commutator-preserving unitary operation, \begin{eqnarray} \hat{a}_H &\longrightarrow& \hat{a}'_H \equiv \exp(i\kappa \hat{a}_P^\dagger\hat{a}_P)\hat{a}_H \\ \hat{a}_V &\longrightarrow& \hat{a}'_V \equiv \exp(i\kappa \hat{a}_P^\dagger\hat{a}_P)\hat{a}_V\\ \hat{a}_P &\longrightarrow&\hat{a}'_P \equiv \exp[i\kappa(\hat{a}_H^\dagger\hat{a}_H + \hat{a}_V^\dagger\hat{a}_V)]\hat{a}_P, \end{eqnarray} where $0 < \kappa \ll 1$ is the cross-phase modulation coupling coefficient. When the probe beam is in a strong coherent state, $|\sqrt{N}_P\rangle$ with $N_P\gg 1/\kappa^2$, the total photon number in the signal beam can be inferred from a homodyne-detection measurement of the appropriate probe quadrature. In particular, the state of $\hat{a}'_P$ will be $|\sqrt{N}_P\rangle$ when the signal beam's total photon number is zero, and its state will be $|(1+i\kappa)\sqrt{N}_P\rangle$ when the signal beam's total photon number is one, where $\kappa \ll 1$ has been employed. Homodyne detection of the $\hat{a}'_{P2} \equiv {\rm Im}(\hat{a}'_P)$ quadrature thus yields a classical random-variable outcome $\alpha'_{P2}$ that is Gaussian distributed with mean zero and variance 1/4, in the absence of a signal-beam photon, and Gaussian distributed with mean $\kappa\sqrt{N}_P$ and variance 1/4 in the presence of a signal-beam photon. Note that these conditional distributions are independent of the polarization state of the signal-beam photon when it is present. Using the decision rule, ``declare signal-beam photon present if and only if $\alpha'_{P2} > \kappa\sqrt{N}_P/2$,'' it is easily shown that the QND error probability is bounded above by $\exp(-\kappa^2 N_P/2)/2 \ll 1$. The preceding polarization independent, low error probability QND detection of the signal beam's total photon number does \em not\/\rm\ disturb the polarization state of that beam. 
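The error-probability bound just quoted is easy to check numerically. The following sketch (ours, not part of the paper; the function names and the parameter values are illustrative) compares the exact Gaussian decision-error probability of the stated threshold rule with the bound $\exp(-\kappa^2 N_P/2)/2$:

```python
import math

def qnd_error_probability(kappa: float, n_p: float) -> float:
    """Exact error probability of the threshold rule
    'declare a signal photon present iff alpha_P2 > kappa*sqrt(N_P)/2'.

    Both conditional distributions of alpha_P2 are Gaussian with
    variance 1/4 (standard deviation 1/2); the means are 0 (no photon)
    and kappa*sqrt(N_P) (one photon).  By symmetry, the miss and
    false-alarm probabilities both equal Q(kappa*sqrt(N_P)), with Q
    the standard normal tail function."""
    x = kappa * math.sqrt(n_p)      # threshold offset in units of sigma
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def qnd_error_bound(kappa: float, n_p: float) -> float:
    """The Chernoff-style upper bound exp(-kappa^2 N_P / 2) / 2."""
    return 0.5 * math.exp(-(kappa ** 2) * n_p / 2.0)

# Strong-probe regime N_P >> 1/kappa^2, e.g. kappa = 0.01, N_P = 10^6.
kappa, n_p = 0.01, 1.0e6
exact = qnd_error_probability(kappa, n_p)
bound = qnd_error_bound(kappa, n_p)
assert exact <= bound < 1e-20       # the bound holds and is << 1
```

We now return to the claim, made above, that this QND detection leaves the signal-beam polarization undisturbed.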
This is so because the probe imposes the same nonlinear phase shift on both the $H$ and $V$ polarizations of the signal beam. Hence, if the signal-beam input is in the arbitrarily-polarized single-photon state, \begin{equation} |\psi_S\rangle = c_H |1\rangle_{H}|0\rangle_{V} + c_V |0\rangle_{H}|1\rangle_{V}, \vspace*{.075in} \end{equation} where $|c_H|^2 + |c_V|^2 = 1$, then, except for a physically unimportant absolute phase, the signal-beam output will also be in the state $|\psi_S\rangle$. \end{document}
\begin{document} \title{Harbourne constants and arrangements of lines on smooth hypersurfaces in $\mathbb{P}^3_{\mathbb{C}}$} \author{Piotr Pokora} \date{\today} \maketitle \thispagestyle{empty} \begin{abstract} In this note we find a bound for the so-called linear Harbourne constants for smooth hypersurfaces in $\mathbb{P}^{3}_{\mathbb{C}}$. \keywords{line configurations, Miyaoka inequality, blow-ups, negative curves, the Bounded Negativity Conjecture} \subclass{14C20, 14J70} \end{abstract} \section{Introduction} In this short note we find a global estimate for Harbourne constants, which were introduced in \cite{BdRHHLPSz} in order to capture and measure the bounded negativity on various birational models of an algebraic surface. \begin{definition} Let $X$ be a smooth projective surface. We say that $X$ \emph{has bounded negativity} if there exists an integer $b(X)$ such that for every \emph{reduced} curve $C \subset X$ one has the bound $$C^{2} \geq -b(X).$$ \end{definition} The bounded negativity conjecture (BNC for short) is one of the most intriguing problems in the theory of projective surfaces and currently attracts a lot of attention, see \cite{Duke, BdRHHLPSz, Harbourne1, XR}. It can be formulated as follows. \begin{conjecture}[BNC] An arbitrary smooth \emph{complex} projective surface has bounded negativity. \end{conjecture} Some surfaces are known to have bounded negativity (see \cite{Duke, Harbourne1}). For example, surfaces with $\mathbb{Q}$-effective anticanonical divisor, such as Del Pezzo surfaces, K3 surfaces and Enriques surfaces, have bounded negativity. However, when we replace these surfaces by their blow-ups, we do not know whether bounded negativity is preserved. Specifically, it is not known whether the blow-up of $\mathbb{P}^{2}$ at ten general points has bounded negativity or not. Recently, the authors of \cite{BdRHHLPSz} showed the following theorem.
\begin{theorem}(\cite[Theorem~3.3]{BdRHHLPSz}) Let $\mathcal{L}$ be a line configuration on $\mathbb{P}^{2}_{\mathbb{C}}$. Let $f: X_{s} \rightarrow \mathbb{P}^{2}_{\mathbb{C}}$ be the blowing up at $s$ distinct points on $\mathbb{P}^{2}_{\mathbb{C}}$ and let $\widetilde{\call}$ be the strict transform of $\mathcal{L}$. Then we have $\widetilde{\call}^{2} \geq -4\cdot s$. \end{theorem} In this note, we generalize this result to the case of line configurations on smooth hypersurfaces $S_{n}$ of degree $n \geq 3$ in $\mathbb{P}^{3}_{\mathbb{C}}$. A classical result tells us that every smooth hypersurface of degree $n=3$ contains $27$ lines. For smooth hypersurfaces of degree $n=4$ we know that the upper bound of the number of lines on quartic surfaces is 64 (claimed by Segre \cite{Segre} and correctly proved by Sch\"utt and Rams \cite{SR}). In general, for degree $n \geq 3$ hypersurfaces $S_{n}$ Boissi\'ere and Sarti (see \cite[Proposition~6.2]{BS}) showed that the number of lines on $S_{n}$ is less than or equal to $n(7n-12)$. Using techniques similar to the one introduced in \cite{BdRHHLPSz} we prove the following result. \begin{theoremA} Let $S_{n}$ be a smooth hypersurface of degree $n \geq 4$ in $\mathbb{P}^{3}_{\mathbb{C}}$. Let $\mathcal{L} \subset S_{n}$ be a line configuration, with the singular locus ${\rm Sing}(\mathcal{L})$ consisting of $s$ distinct points. Let $f : X_{s} \rightarrow S_{n} $ be the blowing up at ${\rm Sing}(\mathcal{L})$ and denote by $\widetilde{\call}$ the strict transform of $\mathcal{L}$. Then we have $$\widetilde{\call}^2 > -4s -2n(n-1)^2.$$ \end{theoremA} In the last part we study some line configurations on smooth complex cubics and quartics in detail. Similar systematic studies on line configurations on the projective plane were initiated in \cite{Szpond}. \section{Bounded Negativity viewed by Harbourne Constants} We start with introducing the Harbourne constants \cite{BdRHHLPSz}. 
\begin{definition}\label{def:H-constants} Let $X$ be a smooth projective surface and let $\mathcal{P}=\{ P_{1},\ldots,P_{s} \}$ be a set of $s \geq 1$ mutually distinct points in $X$. Then the \emph{local Harbourne constant of $X$ at $\mathcal{P}$} is defined as \begin{equation}\label{eq:H-const for calp} H(X;\mathcal{P}):= \inf_{C} \frac{\left(f^*C-\sum_{i=1}^s \mult_{P_i}C\cdot E_i\right)^2}{s}, \end{equation} where $f: Y \to X$ is the blow-up of $X$ at the set $\mathcal{P}$ with exceptional divisors $E_{1},\ldots,E_s$ and the infimum is taken over all \emph{reduced} curves $C\subset X$.\\ Similarly, we define the \emph{$s$--tuple Harbourne constant of $X$} as $$H(X;s):=\inf_{\mathcal{P}}H(X;\mathcal{P}),$$ where the infimum now is taken over all $s$--tuples of mutually distinct points in $X$. \\ Finally, we define the \emph{global Harbourne constant of $X$} as $$H(X):=\inf_{s \geq 1}H(X;s).$$ \end{definition} The relation between Harbourne constants and the BNC can be expressed in the following way. Suppose that $H(X)$ is a finite real number. Then for any $s \geq 1$ and any reduced curve $D$ on the blow-up of $X$ at $s$ points, we have $$D^2 \geq sH(X).$$ Hence the BNC holds on all blow-ups of $X$ at $s$ mutually distinct points with the constant $b(X) = -sH(X)$. On the other hand, even if $H(X)=-\infty$, the BNC might still be true. It is very hard to compute Harbourne constants in general. Moreover, it is quite tricky to find these numbers even for the simplest types of reduced curves on a well-understood surface. \section{Proof of the main result} Given a configuration of lines on $S_{n}$ we denote by $t_{r}$ the number of its $r$-fold points, at which exactly $r$ lines of the configuration meet. In the sequel we will repeatedly use two elementary equalities, namely $\sum_{i} {\rm mult}_{P_{i}}(C) = \sum_{k \geq 2}kt_{k}$ and $\sum_{k\geq 2} t_{k} = s$. In this section we will study \emph{linear Harbourne constants} $H_{L}$.
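Before turning to line configurations on hypersurfaces, it may help to see Definition \ref{def:H-constants} in action on a toy example. The sketch below (ours, purely illustrative) evaluates the quotient in \eqref{eq:H-const for calp} for the union of $d$ general-position lines in $\mathbb{P}^{2}_{\mathbb{C}}$ and checks it against the bound $\widetilde{\call}^{2} \geq -4s$ of the theorem quoted in the Introduction:

```python
from fractions import Fraction

def strict_transform_sq(c_sq, mults):
    """Self-intersection of the strict transform after blowing up:
    (f*C - sum_i m_i E_i)^2 = C^2 - sum_i m_i^2."""
    return c_sq - sum(m * m for m in mults)

def harbourne_quotient(c_sq, mults):
    """The quotient in Definition 2.1 for a single curve C and a single
    point set; an upper bound for the local Harbourne constant."""
    s = len(mults)
    return Fraction(strict_transform_sq(c_sq, mults), s)

# d general-position lines in P^2: C = l_1 + ... + l_d has C^2 = d^2,
# and Sing(C) consists of s = d(d-1)/2 ordinary double points.
for d in (4, 10, 100):
    s = d * (d - 1) // 2
    mults = [2] * s
    assert strict_transform_sq(d * d, mults) >= -4 * s  # bound of [BdRHHLPSz]
    # the quotient equals 2*(2-d)/(d-1), which tends to -2 as d grows
    assert harbourne_quotient(d * d, mults) == Fraction(2 * (2 - d), d - 1)
```

This already shows the typical behaviour: for plane line configurations in general position the quotient stays well above the $-4$ of the theorem.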
We define only the local linear Harbourne constant for $S_{n}$ containing a line configuration $\mathcal{L}$, since this is the only difference compared to Definition \ref{def:H-constants}. \begin{definition} Let $S_{n}$ be a smooth hypersurface of degree $n \geq 2$ in $\mathbb{P}^{3}_{\mathbb{C}}$ containing at least one line and let $\mathcal{P}=\{ P_{1},\ldots,P_{s} \}$ be a set of $s$ mutually distinct points in $S_{n}$. Then the \emph{local linear Harbourne constant of $S_{n}$ at $\mathcal{P}$} is defined as \begin{equation} H_{L}(S_{n}; \mathcal{P}):= \inf_{\mathcal{L}} \frac{\widetilde{\call}^{2}}{s}, \end{equation} where $\widetilde{\call}$ is the strict transform of $\mathcal{L}$ with respect to the blow-up $f : X_{s} \rightarrow S_{n} $ at $\mathcal{P}$ and the infimum is taken over all \emph{reduced} line configurations $\mathcal{L} \subset S_{n}$. \end{definition} Our proof is based on the following result due to Miyaoka \cite[Section~2.4]{Miyaoka}. \begin{theorem} Let $S_{n}$ be a smooth hypersurface in $\mathbb{P}^{3}_{\mathbb{C}}$ of degree $n \geq 4$ containing a configuration of $d$ lines. Then one has $$nd -t_{2} + \sum_{k \geq 3}(k-4)t_{k} \leq 2n(n-1)^2.$$ \end{theorem} Now we are ready to give a proof of the Main Theorem. \begin{proof} Pick a number $n \geq 4$. Recall that using the adjunction formula one can compute the self-intersection number of a line $l$ on $S_{n}$, which is equal to $$l^{2} = -2 - K_{S_{n}}.l = -2 - \mathcal{O}(n-4).l = 2-n.$$ Observe that the local linear Harbourne constant at ${\rm Sing}(\mathcal{L})$ has the following form \begin{equation} \label{Hconst} H_{L}(S_{n}; {\rm Sing}(\mathcal{L})) = \frac{ (2-n)d + I_{d} - \sum_{k\geq 2}k^{2}t_{k}}{\sum_{k\geq 2}t_{k}}, \end{equation} where $I_{d} = 2 \sum_{i < j} l_{i}l_{j}$ denotes the number of incidences of the $d$ lines $l_{1}, \ldots, l_{d}$.
It is easy to see that we have the combinatorial equality $$I_{d} = \sum_{k \geq 2}(k^{2} - k)t_{k},$$ hence we obtain $$I_{d} -\sum_{k \geq 2}k^{2}t_{k} = -\sum_{k \geq 2}kt_{k}.$$ Applying this to (\ref{Hconst}) we get $$H_{L}(S_{n}; {\rm Sing}(\mathcal{L})) = \frac{ (2-n)d - \sum_{k \geq 2} kt_{k}}{\sum_{k \geq 2}t_{k}}.$$ Simple manipulations on the Miyaoka inequality lead to $$nd + t_{2} -4\sum_{k \geq 2}t_{k} - 2n(n-1)^2 \leq -\sum_{k\geq 2} kt_{k},$$ and finally we obtain $$H_{L}(S_{n}; {\rm Sing}(\mathcal{L})) \geq -4 + \frac{ 2d + t_{2} -2n(n-1)^2}{s},$$ which completes the proof. \end{proof} It is an interesting question how the linear Harbourne constant behaves when the degree $n$ of a hypersurface grows. We present two extreme examples. \begin{example} Let us consider the Fermat hypersurface of degree $n \geq 3$ in $\mathbb{P}^{3}_{\mathbb{C}}$, which is given by the equation $$F_{n} \,\, : \,\, x^{n} + y^{n} + z^{n} + w^{n} = 0.$$ It is a classical result that on $F_{n}$ there exists a line configuration $\mathcal{L}_{n}$ consisting of $3n^{2}$ lines, which delivers $3n^{3}$ double points and $6n$ points of multiplicity $n$. It is easy to check that $${\rm lim}_{n \rightarrow \infty} H_{L}(F_{n}; {\rm Sing}(\mathcal{L}_{n})) = {\rm lim}_{n \rightarrow \infty} \frac{ 3 n^{2} \cdot (2-n) + 12n^{3} - 6n^{2} - 4 \cdot 3n^3 - n^{2} \cdot 6n} {3 n^3 + 6n} = -3.$$ On the other hand, the Main Theorem gives $$ {\rm lim}_{n \rightarrow \infty} H_{L}(F_{n}; {\rm Sing}(\mathcal{L}_{n})) \geq -4 + {\rm lim}_{n \rightarrow \infty} \frac{ 6n^{2} + 3n^3 - 2n(n-1)^2}{3n^3 + 6n} = -3 \frac{2}{3},$$ which shows that the estimate given there is quite sharp. \end{example} \begin{example} \label{Rams} This construction comes from \cite{R}.
Let us consider the Rams hypersurface in $\mathbb{P}^{3}_{\mathbb{C}}$ of degree $n \geq 6$ given by the equation $$R_{n} \, : \, x^{n-1}\cdot y + y^{n-1} \cdot z + z^{n-1} \cdot w + w^{n-1}\cdot x = 0.$$ On $R_{n}$ there exists a configuration $\mathcal{L}_{n}$ of $n(n-2)+4$ lines, which delivers exactly $2n^2 - 4n + 4$ double points -- this configuration is the grid of $n(n-2)+2$ vertical disjoint lines intersected by two horizontal disjoint lines. The local linear Harbourne constant at ${\rm Sing}(\mathcal{L}_{n})$ is equal to $$H_{L}(R_{n}; {\rm Sing}(\mathcal{L}_{n})) = \frac{ (n^2-2n+4)\cdot(2-n) + 4n^2 - 8n + 8 - 4\cdot(2n^2 - 4n + 4)}{2n^2 - 4n + 4} = \frac{-n^3}{2n^2 -4n + 4}.$$ Then ${\rm lim}_{n \rightarrow \infty} H_{L}(R_{n}; {\rm Sing}(\mathcal{L}_{n})) = - \infty.$ \end{example} Example \ref{Rams} presents quite an interesting phenomenon, since we can obtain very low linear Harbourne constants with singularities of minimal order -- the whole effect is produced by the large number of (disjoint) lines. \section{Smooth cubics and quartics} We start with the case $n = 3$. As we mentioned in the first section, every smooth cubic surface contains $27$ lines, and the configuration of these lines has only double and triple points. These triple points are called \emph{Eckardt points}. Now we find a lower bound for the linear Harbourne constant for such hypersurfaces. \begin{proposition} Under the above notation one has $$H_{L}(S_{3}; {\rm Sing}(\mathcal{L})) \geq -2 \frac{5}{11}.$$ \end{proposition} \begin{proof} Recall that the combinatorial equality \cite[Example II.20.]{Urzua} for cubic surfaces has the form $$135 = t_{2} + 3t_{3}.$$ Moreover, another classical result asserts that the maximal number of \emph{Eckardt} points is equal to $18$ and this number is attained on the Fermat cubic. In order to get a sharp lower bound for $H_{L}$ we need to consider the case when the number of Eckardt points is the largest.
To see this, we show that the linear Harbourne constant for $t$ triple points is greater than for $t+1$ triple points. Simple computations show that $$H_{L}(S_{3};t) = \frac{-297+3t}{135-2t},$$ $$H_{L}(S_{3};t+1) = \frac{-294+3t}{133-2t},$$ and $H_{L}(S_{3};t+1) < H_{L}(S_{3};t)$ iff $(-297+3t)\cdot(133-2t) - (-294+3t)\cdot(135-2t) > 0$ for all $t \in \{0, ..., 18\}$; this is immediate, since the left-hand side is identically equal to $189$. Having this information in hand we can calculate that for $18$ triple points and $81$ double points the local linear Harbourne constant at ${\rm Sing}(\mathcal{L})$ is equal to $$H_{L}(F_{3};{\rm Sing}(\mathcal{L})) = \frac{27\cdot(-1) + 270 - 4\cdot81 - 9 \cdot 18}{99} = -2 \frac{5}{11},$$ which ends the proof. \end{proof} \begin{example} Now we consider the case $n=4$ and we start with the configuration of $64$ lines on the Schur quartic $Sch$. It is well-known that every line from this configuration intersects exactly $18$ other lines -- see for instance \cite[Proposition 7.1]{SR}. One can check that these $64$ lines deliver $8$ quadruple points, $64$ triple points and $336$ double points (in \cite{Urzua} one can find the claim that the number of double points is equal to $192$, which is false). Then the local linear Harbourne constant at ${\rm Sing}(\mathcal{L})$ is equal to $$H_{L}(Sch; {\rm Sing}(\mathcal{L})) = \frac{(-2)\cdot64 + 1152 -16\cdot8 - 9\cdot 64 - 4\cdot 336}{336 + 64 + 8} \approx -2.51.$$ \end{example} Now we present an example of a line configuration on a smooth quartic which delivers the most negative (to our knowledge) local linear Harbourne constant for this kind of surface. \begin{example}[Bauer configuration of lines] Let us consider the Fermat quartic $F_{4}$. It is well-known that on $F_{4}$ there exists a configuration of $48$ lines. From this configuration one can extract a subconfiguration of $16$ lines which has only $8$ quadruple points.
Then the local linear Harbourne constant at ${\rm Sing}(\mathcal{L})$ is equal to $$H_{L}(F_{4}; {\rm Sing}(\mathcal{L})) = \frac{ 16\cdot(-2) + 16\cdot6 - 16 \cdot8}{8} = -8.$$ Using the Main Theorem we get $H_{L}(F_{4}; {\rm Sing}(\mathcal{L})) \geq -9$, which again shows the sharpness of our result. \end{example} \paragraph*{\emph{Acknowledgement.}} The author would like to express his gratitude to Thomas Bauer for sharing Example 4.3, to S\l awomir Rams for pointing out his construction in \cite{R} and to Tomasz Szemberg and Halszka Tutaj-Gasi\'nska for useful remarks. Finally, the author would like to thank the anonymous referee for many useful comments which helped to improve the exposition of this note. The author is partially supported by National Science Centre Poland Grant 2014/15/N/ST1/02102. Piotr Pokora, Instytut Matematyki, Pedagogical University of Cracow, Podchor\c a\.zych 2, PL-30-084 Krak\'ow, Poland. \nopagebreak \textit{E-mail address:} \texttt{[email protected]} \end{document}
\begin{document} \begin{frontmatter} \title{Qualitative behaviour and numerical approximation of solutions to conservation laws with non-local point constraints on the flux and modeling of crowd dynamics at the bottlenecks} \author[Besancon]{Boris Andreianov} \ead{[email protected]} \author[Besancon]{Carlotta Donadello} \ead{[email protected]} \author[Besancon]{Ulrich Razafison\corref{mycorrespondingauthor}} \cortext[mycorrespondingauthor]{Corresponding author} \ead{[email protected]} \author[Warsaw]{Massimiliano D.~Rosini} \ead{[email protected]} \address[Besancon]{Laboratoire de Math\'ematiques, Universit\'e de Franche-Comt\'e,\\ 16 route de Gray, 25030 Besan\c{c}on Cedex, France} \address[Warsaw]{ ICM, University of Warsaw\\ ul.~Prosta 69, P.O. Box 00-838, Warsaw, Poland} \begin{abstract} In this paper we investigate numerically the model for pedestrian traffic proposed in [B.~Andreianov, C.~Donadello, M.D.~Rosini, Crowd dynamics and conservation laws with nonlocal constraints and capacity drop, Mathematical Models and Methods in Applied Sciences 24 (13) (2014) 2685-2722] . We prove the convergence of a scheme based on a constraint finite volume method and validate it with an explicit solution obtained in the above reference. We then perform \emph{ad hoc} simulations to qualitatively validate the model under consideration by proving its ability to reproduce typical phenomena at the bottlenecks, such as Faster Is Slower effect and the Braess' paradox. \end{abstract} \begin{keyword} finite volume scheme \sep scalar conservation law \sep non-local point constraint \sep crowd dynamics \sep capacity drop \sep Braess' paradox \sep Faster Is Slower \MSC 35L65 \sep 90B20 \sep 65M12 \sep 76M12 \end{keyword} \end{frontmatter} \section{Introduction}\label{sec:intro} \noindent Andreianov, Donadello and Rosini developed in~\cite{BorisCarlottaMax-M3AS} a macroscopic model, called here ADR, aiming at describing the behaviour of pedestrians at bottlenecks. 
The model is given by the Cauchy problem for a scalar hyperbolic conservation law in one space dimension with non-local point constraint of the form \begin{subequations}\label{eq:constrianed} \begin{align}\label{eq:constrianed1} &\partial_t\rho + \partial_xf(\rho) = 0 && (t,x) \in {\mathbb{R}}_+\times{\mathbb{R}},\\ \label{eq:constrianed3} &\rho(0,x) =\bar\rho(x) && x \in {\mathbb{R}},\\ \label{eq:constrianed2} &f\left(\rho(t,0\pm)\right) \le p\left( \int_{{\mathbb{R}}_-} w(x)~ \rho(t,x)~{{\rm{d}}} x\right) && t \in {\mathbb{R}}_+, \end{align} \end{subequations} where $\rho(t,x) \in \left[0,R\right]$ is the (mean) density of pedestrians in $x \in {\mathbb{R}}$ at time $t \in {\mathbb{R}}_+$ and $\bar\rho \colon {\mathbb{R}} \to [0,R]$ is the initial (mean) density, with $R >0$ being the maximal density. Then, $f \colon [0,R] \to {\mathbb{R}}_+$ is the flow considered to be bell-shaped, which is an assumption commonly used in crowd dynamics. A typical example of such flow is the so-called Lighthill-Whitham-Richards (LWR) flux~\cite{LWR1, LWR2, Greenshields_1934} defined by $$f(\rho)=\rho\,v_{\max}\left(1-\dfrac{\rho}{\rho_{\max}}\right),$$where $v_{\max}$ and $\rho_{\max}$ are the maximal velocity and the maximal density of pedestrians respectively. Throughout this paper the LWR flux will be used. Next $p \colon {\mathbb{R}}_+ \to {\mathbb{R}}_+$ prescribes the maximal flow allowed through a bottleneck located at $x=0$ as a function of the weighted average density in a left neighbourhood of the bottleneck and $w \colon {\mathbb{R}}_- \to {\mathbb{R}}_+$ is the weight function used to average the density. 
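To make the role of the non-local constraint \eqref{eq:constrianed2} concrete, here is a minimal numerical sketch (ours; the discretization choices and the particular $p$ and $w$ are illustrative, and this is not the scheme analysed in Section~\ref{sec:NumericalMethod}) of how the flux at the interface $x=0$ can be capped in a finite volume setting:

```python
import numpy as np

V_MAX, RHO_MAX = 1.0, 1.0
SIGMA = RHO_MAX / 2.0                    # maximum point of the LWR flux

def f(rho):
    """LWR flux f(rho) = rho * v_max * (1 - rho / rho_max)."""
    return rho * V_MAX * (1.0 - rho / RHO_MAX)

def godunov_flux(rho_l, rho_r):
    """Godunov numerical flux for a bell-shaped f, in demand/supply form."""
    demand = f(min(rho_l, SIGMA))        # what the left state can send
    supply = f(max(rho_r, SIGMA))        # what the right state can absorb
    return min(demand, supply)

def constrained_flux(rho_l, rho_r, rho_upstream, w_vals, dx, p):
    """Flux at the interface x = 0: the Godunov flux capped by
    p( int_{R_-} w(x) rho(t,x) dx ), cf. the constraint (1c).
    rho_upstream and w_vals sample rho and w on cells of width dx."""
    weighted_avg = dx * float(np.dot(w_vals, rho_upstream))
    return min(godunov_flux(rho_l, rho_r), p(weighted_avg))

# Illustrative data: congested upstream state, piecewise constant p.
p = lambda xi: 0.25 if xi < 0.4 else 0.10
dx = 0.1
rho_upstream = np.full(10, 0.8)          # rho = 0.8 on [-1, 0]
w_vals = np.full(10, 1.0)                # w = 1 on [-1, 0], unit L^1 norm
flux = constrained_flux(0.8, 0.2, rho_upstream, w_vals, dx, p)
assert godunov_flux(0.8, 0.2) == f(SIGMA) == 0.25   # unconstrained flux
assert abs(flux - 0.10) < 1e-12                     # constraint is active
```

In words: the congested upstream average triggers the lower efficiency level of $p$, and the interface flux drops from $f(\sigma)$ to that level, which is exactly the capacity-drop mechanism the constraint encodes.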
Finally in \eqref{eq:constrianed2}, $\rho(t,0-)$ denotes the left measure theoretic trace along the constraint, implicitly defined by \begin{align*} &\lim_{\varepsilon\downarrow0}\frac{1}{\varepsilon} \int_0^{+\infty} \int_{-\varepsilon}^0 \modulo{\rho(t,x) - \rho(t,0-)}~\phi(t,x)~{{\rm{d}}} x~{{\rm{d}}} t=0& &\text{for all }\phi \in \Cc\infty({\mathbb{R}}^2;{\mathbb{R}}). \end{align*} The right measure theoretic trace, $\rho(t,0+)$, is defined analogously.\\ \noindent In the last few decades, the study of pedestrian behaviour through bottlenecks, namely at locations with reduced capacity, such as doors, stairs or narrowings, has drawn considerable attention. The papers \cite{Schadschneider2008Evacuation, Cepolina2009532, Hoogendoorn01052005, Kopylow, Schreckenberg2006, Seyfried:59568, Zhang20132781} present results of empirical experiments. However, for safety reasons, experiments reproducing extreme conditions such as evacuation and stampede are not available. In fact, the only experimental study of a crowd disaster is the one proposed in~\cite{Helbing_disaster}. The available data show that the capacity of the bottleneck (i.e.~the maximum number of pedestrians that can flow through the bottleneck in a given time interval) can drop when high-density conditions occur upstream of the bottleneck. This phenomenon is called \emph{capacity drop} and can lead to extremely serious consequences in escape situations. In fact, the crowd pressure before an exit can reach very high values, the efficiency of the exit dramatically reduces and accidents become more probable due to the overcrowding and the increase of the evacuation time (i.e.~the temporal gap between the times at which the first and the last pedestrian pass through the bottleneck). A linked phenomenon is the so-called \emph{Faster Is Slower} (FIS) effect, first described in~\cite{Helbing2000Simulating}.
The FIS effect refers to the jamming and clogging at bottlenecks, which result in an increase of the evacuation time when the degree of hurry of the crowd is high. We recall that the capacity drop and the FIS effect are both experimentally reproduced in~\cite{Cepolina2009532,Soria20121584}. A further related (partly counter-intuitive) phenomenon is the so-called \emph{Braess' paradox} for pedestrian flows~\cite{hughes2003flow}. It is well known that placing a small obstacle before an exit door can mitigate the inter-pedestrian pressure and, under particular circumstances, reduce the evacuation time by improving the outflow of people. Note that, as for any first order model, see for instance~\cite[Part~III]{Rosinibook} and the references therein, ADR cannot explain the capacity drop and collective behaviours at bottlenecks. Therefore, one of the difficulties we have to face is that the constraint $p$ has to be deduced, together with the fundamental diagram, from empirical observations. The aim of this paper is to validate ADR by performing simulations that show the ability of the model to reproduce the main effects related to capacity drop described above, namely the FIS effect and Braess' paradox. To this end we propose a numerical scheme for the model and prove its convergence. The scheme is obtained by adapting the local constrained finite volume method introduced in~\cite{scontrainte} to the non-local case considered in ADR, using a splitting strategy. The paper is organized as follows. In Section~\ref{sec:model} we briefly recall the main theoretical results for ADR. In Section~\ref{sec:NumericalMethod} we introduce the numerical scheme, prove its convergence and validate it against an explicit solution obtained in \cite{BorisCarlottaMax-M3AS}. In Section~\ref{sec:simulations} we perform simulations to show that ADR is able to reproduce the Braess' paradox and the FIS effect.
In Subsection~\ref{sec:Braess+FIS} we combine local and non-local constraints to model a slow zone placed before the exit. Conclusions and perspectives are outlined in Section~\ref{sec:conclusions}. \section{Well-posedness for the ADR model}\label{sec:model} Existence, uniqueness and stability for the general Cauchy problem~\eqref{eq:constrianed} are established in~\cite{BorisCarlottaMax-M3AS} under the following assumptions: \begin{enumerate}[label=\bfseries{(W)}] \item[\textbf{(F)}] $f$ belongs to $\mathbf{Lip}\left( [0,R]; \left[0, +\infty\right[ \right)$ and is supposed to be bell-shaped, that is $f(0) = 0 = f(R)$ and there exists $\sigma \in \left]0,R\right[$ such that $f'(\rho)~(\sigma-\rho)>0$ for a.e.~$\rho \in [0,R]$. \item[\textbf{(W)}] $w$ belongs to $\L\infty({\mathbb{R}}_-;{\mathbb{R}}_+)$, is an increasing map, $\norma{w}_{\L1({\mathbb{R}}_-)} = 1$ and there exists $\textrm{i}_{w} >0$ such that $w(x) = 0$ for any $x \le -\textrm{i}_{w}$. \item[\textbf{(P)}] $p$ belongs to $\mathbf{Lip} \left( \left[0,R\right]; \left]0,f(\sigma)\right] \right)$ and is a non-increasing map. \end{enumerate} The regularity $w\in \L\infty({\mathbb{R}}_-;{\mathbb{R}}_+)$ is the minimal requirement needed to prove existence and uniqueness for~\eqref{eq:constrianed}. In this paper, we shall consider continuous $w$. The existence of solutions for the Riemann problem for~\eqref{eq:constrianed} is proved in~\cite{AndreianovDonadelloRosiniRazafisonProc} for piecewise constant $p$. However, such a hypothesis on $p$ is not sufficient to ensure uniqueness of solutions, unless the flux $f$ and the efficiency $p$ satisfy a simple geometric condition; see~\cite{AndreianovDonadelloRosiniRazafisonProc} for details. In the present paper, we consider either a continuous nonlinear $p$ or a piecewise constant $p$ that satisfies such a geometric condition.
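As an example of a weight satisfying \textbf{(W)}, later sections use $w(x)=2(1+x)\,\chi_{[-1,0]}(x)$; the three requirements can be checked numerically with the following sketch (a midpoint-rule quadrature; illustrative code, not part of the scheme):

```python
# Check that w(x) = 2 (1 + x) on [-1, 0] (zero elsewhere) satisfies (W):
# bounded, increasing, unit L1 norm, and w(x) = 0 for x <= -i_w with i_w = 1.
def w(x):
    return 2.0 * (1.0 + x) if -1.0 <= x <= 0.0 else 0.0

n = 10000
dx = 1.0 / n
xs = [-1.0 + (j + 0.5) * dx for j in range(n)]   # midpoints of [-1, 0]
mass = dx * sum(w(x) for x in xs)                # approximates the L1 norm, = 1
```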
The definition of entropy solution for a Cauchy problem~\eqref{eq:constrianed1}, \eqref{eq:constrianed3} with a fixed \emph{a priori} time dependent constraint condition \begin{align}\label{eq:constrianed2bis} &f\left(\rho(t,0\pm)\right) \le q(t) && t \in {\mathbb{R}}_+ \end{align} was introduced in~\cite[Definition~3.2]{ColomboGoatinConstraint} and then reformulated in~\cite[Definition~2.1]{scontrainte}, see also~\cite[Proposition~2.6]{scontrainte} and \cite[Definition 2.2]{chalonsgoatinseguin}. Such definitions are obtained by adding a term that accounts for the constraint in the classical definition of entropy solution given by Kruzkov in~\cite[Definition~1]{Kruzkov}. The definition of entropy solution given in~\cite[Definition~2.1]{BorisCarlottaMax-M3AS} is obtained by extending these definitions to the framework of non-local constraints. The following theorem on existence, uniqueness and stability of entropy solutions of the constrained Cauchy problem~\eqref{eq:constrianed} is achieved under the hypotheses~\textbf{(F)}, \textbf{(W)} and~\textbf{(P)}. \begin{thm}[Theorem~3.1 in~\cite{BorisCarlottaMax-M3AS}]\label{thm:1} Let~\textbf{(F)}, \textbf{(W)}, \textbf{(P)} hold. Then, for any initial datum $\bar\rho \in \L\infty({\mathbb{R}};[0,R])$, the Cauchy problem~\eqref{eq:constrianed} admits a unique entropy solution $\rho$. Moreover, if $\rho' = \rho'(t,x)$ is the entropy solution corresponding to the initial datum $\bar\rho' \in \L\infty({\mathbb{R}};[0,R])$, then for all~$T>0$ and $L>{\rm{i}_w}$, the following inequality holds \begin{equation}\label{eq:lipdepen-localized} \norma{\rho(T) - \rho'(T)}_{\L1([-L,L])} \le e^{CT}\norma{\bar\rho - \bar\rho'}_{\L1(\{\modulo{x}\le L+MT\})}, \end{equation} where $M=\mathrm{Lip}(f)$ and $C=2 \mathrm{Lip}(p) \norma{w}_{\L\infty({\mathbb{R}}_-)}$. \end{thm} The total variation of the solution may in general increase due to the presence of the constraint. 
In~\cite{BorisCarlottaMax-M3AS} the authors provide an invariant domain $\mathcal{D} \subset \L1 \left({\mathbb{R}} ; [0,R]\right)$ such that if $\bar\rho$ belongs to $\mathcal{D}$, then one obtains a Lipschitz estimate with respect to time of the $\L1$ norm and an a priori estimate of the total variation of \[ \Psi(\rho) = \mathrm{sign}(\rho-\sigma) [f(\sigma) - f(\rho)] = \int_\sigma^\rho \modulo{f'(r)} \, {\rm{d}} r. \] \section{Numerical method for approximation of ADR}\label{sec:NumericalMethod} In this section we describe the numerical scheme, based on a finite volume method, that we use to solve~\eqref{eq:constrianed}. Then we prove the convergence of our scheme and validate it by comparison with an explicit solution of~\eqref{eq:constrianed}. In what follows, we assume that \textbf{(F)}, \textbf{(W)} and~\textbf{(P)} hold. \subsection{Non-local constrained finite volume method} Let $\Delta x$ and $\Delta t$ be the constant space and time steps respectively. We define the points $x_{j+1/2}=j\Delta x$, the cells $K_j=[x_{j-1/2},x_{j+1/2}[$ and the cell centers $x_j=(j-1/2)\Delta x$ for $j\in{\mathbb{Z}}$. We define the time discretization $t^n=n\Delta t$. We introduce the index $j_c$ such that $x_{j_c+1/2}$ is the location of the constraint (a door or an obstacle). For $n\in{\mathbb{N}}$ and $j\in{\mathbb{Z}}$, we denote by $\rho_j^n$ the approximation of the average of $\rho(t^n,\cdot~)$ on the cell $K_j$, namely \begin{align*} &\rho_j^0=\frac{1}{\Delta x}\displaystyle\int_{x_{j-1/2}}^{x_{j+1/2}}\overline{\rho}(x)\,{\rm{d}} x& &\text{ and }& &\rho_j^n\simeq\frac{1}{\Delta x}\displaystyle\int_{x_{j-1/2}}^{x_{j+1/2}}\rho(t^n,x)\,{\rm{d}} x &\text{ if }n>0.
\end{align*} We recall that for the classical conservation law~\eqref{eq:constrianed1}-\eqref{eq:constrianed3}, a standard finite volume method can be written in the form \begin{equation} \label{schema} \rho_j^{n+1}=\rho_j^n-\frac{\Delta t}{\Delta x}\left(\mathcal{F}_{j+1/2}^n-\mathcal{F}_{j-1/2}^n\right), \end{equation} where $\mathcal{F}_{j+1/2}^n=F\left(\rho_j^n,\rho_{j+1}^n\right)$ is a monotone, consistent numerical flux, that is, $F$ satisfies the following assumptions: \begin{itemize} \item $F$ is Lipschitz continuous from $[0,R]^2$ to ${\mathbb{R}}$ with Lipschitz constant $\mathrm{Lip}(F)$, \item $F(a,a)=f(a)$ for any $a\in[0,R]$, \item $(a,b) \in [0,R]^2 \mapsto F(a,b) \in {\mathbb{R}}$ is non-decreasing with respect to $a$ and non-increasing with respect to $b$. \end{itemize} We also recall that in~\cite{scontrainte} the numerical flux for the time dependent constraint~\eqref{eq:constrianed2bis} is modified as follows in order to take into account the constraint condition \begin{align} \label{def.flux.num.contrainte} &\mathcal{F}_{j+1/2}^n=\left\{\begin{array}{l@{\quad\text{if }}l} F\left (\rho_j^n,\rho_{j+1}^n\right )&j\ne j_c,\\[6pt] \min\left \{F\left(\rho_j^n,\rho_{j+1}^n\right),q^n\right \}&j=j_c, \end{array}\right. \end{align} where $q^n$ is an approximation of $q(t^n)$. In the present paper, when dealing with a Cauchy problem subject to a non-local constraint of the form~\eqref{eq:constrianed2} we will use the approximation \begin{equation} \label{contrainte.approx} q^n=p\left(\Delta x\sum_{j\le j_c}w(x_j)\,\rho_j^n\right).
\end{equation} Roughly speaking: \begin{itemize} \item we apply the numerical scheme~\eqref{schema} for the problem~\eqref{eq:constrianed1}-\eqref{eq:constrianed3}, \item we apply the numerical scheme~\eqref{schema}-\eqref{def.flux.num.contrainte} for the problem~\eqref{eq:constrianed1}-\eqref{eq:constrianed3}-\eqref{eq:constrianed2bis}, \item we apply the numerical scheme~\eqref{schema}-\eqref{def.flux.num.contrainte}-\eqref{contrainte.approx} for the problem~\eqref{eq:constrianed}. \end{itemize} \subsection{Convergence of the scheme} Let us introduce the finite volume approximate solution $\rho_{\Delta}$ defined by \begin{align}\label{def.rho.delta} &\rho_\Delta(t,x)=\rho_j^n& &\mbox{for }x\in K_j\mbox{ and }t\in [t^n,t^{n+1} [, \end{align} where the sequence $(\rho_{j}^n)_{j\in{\mathbb{Z}},\,n\in{\mathbb{N}}}$ is obtained by the numerical scheme~\eqref{schema}-\eqref{def.flux.num.contrainte}. Analogously, we also define the approximate constraint function \begin{align} \label{def.q.delta} &q_\Delta(t)=q^n& &\mbox{for }t\in [t^n,t^{n+1} [. \end{align} First, we prove a discrete stability estimate valid on any domain $Q = [0,T] \times {\mathbb{R}}$ with $T>0$, for the scheme~\eqref{schema}-\eqref{def.flux.num.contrainte} applied to problem~\eqref{eq:constrianed1}-\eqref{eq:constrianed3}-\eqref{eq:constrianed2bis}. This estimate can be seen as the equivalent, in this framework, of the stability result established in~\cite[Proposition~2.10]{scontrainte}. \begin{pro} \label{pro.diff.solutions} Let $\overline{\rho}$ be in $\L\infty({\mathbb{R}};[0,R])$ and $q_\Delta$, $\hat{q}_\Delta$ be piecewise constant functions of the form~\eqref{def.q.delta}.
If $\rho_\Delta$ and $\hat{\rho}_\Delta$ are the approximate solutions of~\eqref{eq:constrianed1}-\eqref{eq:constrianed3}-\eqref{eq:constrianed2bis} corresponding, respectively, to $q_\Delta$ and $\hat{q}_\Delta$ and constructed by applying the scheme~\eqref{schema}-\eqref{def.flux.num.contrainte}, then we have $$\norma{\rho_\Delta-\hat{\rho}_\Delta}_{\L1(Q)}\le2 T \norma{q_\Delta-\hat{q}_\Delta}_{\L1([0,T])}.$$ \end{pro} \begin{proof} For notational simplicity, let $N = \lfloor T/\Delta t\rfloor$. Let us also introduce $(\tilde{\rho}_j^n)_{j\in{\mathbb{Z}},\,n\in{\mathbb{N}}}$ defined by $$\tilde{\rho}_j^{n+1}=\rho_j^n-\frac{\Delta t}{\Delta x}\left(\mathcal{\tilde{F}}_{j+1/2}^n-\mathcal{\tilde{F}}_{j-1/2}^n\right),\quad\mbox{for any } j\in{\mathbb{Z}},\,n\in{\mathbb{N}},$$ where $\mathcal{\tilde{F}}_{j+1/2}^n$ is defined by \begin{align*} &\mathcal{\tilde{F}}_{j+1/2}^n=\left\{\begin{array}{l@{\quad\text{if }}l} F\left (\rho_j^n,\rho_{j+1}^n\right )&j\ne j_c,\\[6pt] \min\left \{F\left(\rho_j^n,\rho_{j+1}^n\right),\hat{q}^n\right \}&j=j_c. \end{array}\right.
\end{align*} Then using the definitions of $(\rho_j^n)_{j\in{\mathbb{Z}},\,n\in{\mathbb{N}}}$ and $(\tilde{\rho}_j^n)_{j\in{\mathbb{Z}},\,n\in{\mathbb{N}}}$, we have for any $n=1,\dots,N$, $$\rho_j^n=\tilde{\rho}_j^n\quad\mbox{if}\quad j\notin\{j_c,j_c+1\}$$ and \begin{align*} &\rho_{j_c}^n-\tilde{\rho}_{j_c}^n= -\frac{\Delta t}{\Delta x}\left(\min\left \{F\left (\rho_{j_c}^{n-1},\rho_{j_c+1}^{n-1}\right ),q^{n-1}\right\} -\min\left \{F\left (\rho_{j_c}^{n-1},\rho_{j_c+1}^{n-1}\right ),\hat{q}^{n-1}\right \}\right),&\\ &\rho_{j_c+1}^n-\tilde{\rho}_{j_c+1}^n= \frac{\Delta t}{\Delta x}\left(\min\left \{F\left(\rho_{j_c}^{n-1},\rho_{j_c+1}^{n-1}\right),q^{n-1}\right \} - \min\left \{F\left(\rho_{j_c}^{n-1},\rho_{j_c+1}^{n-1}\right),\hat{q}^{n-1}\right \} \right), \end{align*} which implies that \begin{align*} &\modulo{\rho_{j_c}^n-\tilde{\rho}_{j_c}^n}\le\frac{\Delta t}{\Delta x} ~ \modulo{q^{n-1}-\hat{q}^{n-1}},& &\modulo{\rho_{j_c+1}^n-\tilde{\rho}_{j_c+1}^n}\le\frac{\Delta t}{\Delta x} ~ \modulo{q^{n-1}-\hat{q}^{n-1}}. \end{align*} Therefore we deduce that, for any $n=1,\dots,N$, \begin{equation} \label{pro.diff.solutions.eq1} \sum_{j\in{\mathbb{Z}}}\modulo{\rho_{j}^n-\tilde{\rho}_{j}^n}\le2\frac{\Delta t}{\Delta x} ~ \modulo{q^{n-1}-\hat{q}^{n-1}}. \end{equation} Besides, observe that the modification of the numerical flux at the interface $x_{j_c+1/2}$ introduced in~\eqref{def.flux.num.contrainte} does not affect the monotonicity of the scheme~\eqref{schema}-\eqref{def.flux.num.contrainte} (see~\cite[Proposition~4.2]{scontrainte}). Therefore, for any $n=1,\dots,N$, we have \begin{equation} \label{pro.diff.solutions.eq2} \sum_{j\in{\mathbb{Z}}}\modulo{\tilde{\rho}_j^n-\hat{\rho}_j^n}\le \sum_{j\in{\mathbb{Z}}}\modulo{\rho_j^{n-1}-\hat{\rho}_j^{n-1}}.
\end{equation} Hence, thanks to~\eqref{pro.diff.solutions.eq1} and~\eqref{pro.diff.solutions.eq2}, we can write \begin{align*} \sum_{j\in{\mathbb{Z}}}|\rho_j^1-\hat{\rho}_j^1|&\le\sum_{j\in{\mathbb{Z}}}|\rho_j^1-\tilde{\rho}_j^1|+\sum_{j\in{\mathbb{Z}}}|\tilde{\rho}_j^1-\hat{\rho}_j^1|\le2\frac{\Delta t}{\Delta x} ~ \modulo{q^0-\hat{q}^0}+\sum_{j\in{\mathbb{Z}}}|\rho_j^0-\hat{\rho}_j^0| =2\frac{\Delta t}{\Delta x} ~ \modulo{q^0-\hat{q}^0}. \end{align*} Then an induction argument shows that for any $n=1,\dots,N$, $$\sum_{j\in{\mathbb{Z}}}\modulo{\rho_j^n-\hat{\rho}_j^n}\le2\frac{\Delta t}{\Delta x}\sum_{k=0}^{n-1}|q^k-\hat{q}^k|\le\frac{2}{\Delta x}~\norma{q_\Delta-\hat{q}_\Delta}_{\L1([0,t^n])}.$$ In conclusion, we find that \begin{align*} \norma{\rho_\Delta-\hat{\rho}_\Delta}_{\L1(Q)}&=\Delta t\,\Delta x\sum_{n=1}^N\sum_{j\in{\mathbb{Z}}}|\rho_j^n-\hat{\rho}_j^n| \le 2 \norma{q_\Delta-\hat{q}_\Delta}_{\L1([0,T])}~\sum_{n=1}^N \Delta t \le 2 T \norma{q_\Delta-\hat{q}_\Delta}_{\L1([0,T])} \end{align*} and this ends the proof. \end{proof} \noindent Let us now notice that, as in~\cite[Proposition~4.2]{scontrainte}, under the CFL condition \begin{equation} \label{CFL} \mathrm{Lip}(F) ~ \frac{\Delta t}{\Delta x}\le\frac{1}{2}, \end{equation} we have the $\L\infty$ stability of the scheme~\eqref{schema}-\eqref{def.flux.num.contrainte}-\eqref{contrainte.approx}, that is \begin{align} \label{stability.Linfty} &0\le\rho_\Delta(t,x)\le R&&\mbox{for a.e.~}(t,x)\in Q. \end{align} \noindent This stability result allows us to prove the statement below. \begin{pro} \label{prop.estimation.bv.q} Let $q_\Delta$ be defined by~\eqref{contrainte.approx}-\eqref{def.q.delta}.
Then under the CFL condition~\eqref{CFL}, for any $T>0$, there exists $C>0$ depending only on $T$, $f$, $F$, $p$, $w$ and $R$ such that: \begin{equation} \label{estimation.bv.q} \modulo{q_\Delta}_{BV([0,T])}\le C. \end{equation} \end{pro} \begin{proof} Let $N=\lfloor{T/\Delta t}\rfloor$ and $j_w$ be an integer such that $\mathrm{supp}(w)\subset\bigcup_{j_w\le j\le j_c} K_j$. Then for any $n=0,\dots,N-1$, we have \begin{align*} \modulo{q^{n+1}-q^n}& = \modulo{p\left(\Delta x\sum_{j_w\le j\le j_c}w(x_j)\rho_j^{n+1}\right)-p\left(\Delta x\sum_{j_w\le j\le j_c}w(x_j)\rho_j^n\right)}\\ &\le\Delta x\,\mathrm{Lip}(p)\left|\sum_{j_{w}\le j\le j_c}w(x_j)(\rho_j^{n+1}-\rho_j^n)\right| =\Delta t\,\mathrm{Lip}(p)\left|\sum_{j_{w}\le j\le j_c}w(x_j)\left(\mathcal{F}_{j+1/2}^n-\mathcal{F}_{j-1/2}^n\right)\right|. \end{align*} Now, using a summation by parts, we have \begin{align*} \sum_{j_{w}\le j\le j_c}w(x_j)\left(\mathcal{F}_{j+1/2}^n-\mathcal{F}_{j-1/2}^n\right) =w(x_{j_c})\mathcal{F}_{j_c+1/2}^n-w(x_{j_w})\mathcal{F}_{j_w-1/2}^n - \!\!\!\sum_{j_w\le j\le j_c-1}\left(w(x_{j+1})-w(x_j)\right)\mathcal{F}_{j+1/2}^n. \end{align*} Then, it follows that $$|q^{n+1}-q^n|\le\Delta t\,\mathrm{Lip}(p)\,\|w\|_{\L\infty({\mathbb{R}}_-;{\mathbb{R}})}\sum_{j_w-1\le j\le j_c}|\mathcal{F}_{j+1/2}^n|.$$ Now, from~\eqref{def.flux.num.contrainte}, for any $j\in{\mathbb{Z}}$ we have the estimate \begin{align*} \modulo{\mathcal{F}_{j+1/2}^n} & \le \modulo{F(\rho_{j}^n,\rho_{j+1}^n)} \le \modulo{F(\rho_{j}^n,\rho_{j+1}^n)-F(\rho_{j}^n,\rho_{j}^n)} + \modulo{f(\rho_{j}^n)} \le \mathrm{Lip}(F) \, \modulo{\rho_{j+1}^n-\rho_{j}^n}+\mathrm{Lip}(f)\, \modulo{\rho_{j}^n} \le R\left (\mathrm{Lip}(F)+\mathrm{Lip}(f)\right ). \end{align*} Hence we deduce that $$\modulo{q_\Delta}_{BV([0,T])}=\sum_{n=0}^{N-1}\modulo{q^{n+1}-q^n}\le C,$$ where $C=(j_c-j_w+2)\,T\,R\,\mathrm{Lip}(p)\,\norma{w}_{\L\infty({\mathbb{R}}_-;{\mathbb{R}})}\,\left (\mathrm{Lip}(F)+\mathrm{Lip}(f)\right )$.
\end{proof} We are now in a position to prove a convergence result for the scheme~\eqref{schema}-\eqref{def.flux.num.contrainte}-\eqref{contrainte.approx}. \begin{thm} \label{th.convergence} Under the CFL condition~\eqref{CFL}, the constrained finite volume scheme~\eqref{schema}-\eqref{def.flux.num.contrainte}-\eqref{contrainte.approx} converges in $\L1(Q)$ to the unique entropy solution to~\eqref{eq:constrianed}. \end{thm} \begin{proof} Let $(\rho_\Delta,q_\Delta)$ be constructed by the scheme~\eqref{schema}-\eqref{def.flux.num.contrainte}-\eqref{contrainte.approx}. Proposition~\ref{prop.estimation.bv.q} and Helly's lemma give the existence of a subsequence, still denoted by $q_\Delta$, and of a constraint function $q\in \L\infty([0,T])$ such that $q_\Delta$ converges to $q$ strongly in $\L1([0,T])$ as $\Delta t\to0$. Let $\rho\in \L\infty({\mathbb{R}}_+\times{\mathbb{R}};[0,R])$ be the unique entropy solution to~\eqref{eq:constrianed1}-\eqref{eq:constrianed3}-\eqref{eq:constrianed2bis} associated to $q$. It remains to prove that the subsequence $\rho_\Delta$ converges to $\rho$ strongly in $\L1(Q)$ as $\Delta t,\,\Delta x\to0$. The uniqueness of the entropy solution to~\eqref{eq:constrianed1}-\eqref{eq:constrianed3}-\eqref{eq:constrianed2bis} will then imply that the full sequence $\rho_\Delta$ converges to $\rho$ and, as a consequence, the full sequence $q_\Delta$ converges to $q=p\left (\int_{{\mathbb{R}}_-}w(x)\,\rho(t,x)\,{\rm{d}} x\right )$.\\ Let $\hat{q}_\Delta$ be a piecewise constant approximation of $q$ such that $\hat{q}_\Delta$ converges to $q$ strongly in $\L1([0,T])$. Furthermore, we also introduce $\hat{\rho}_\Delta$ constructed by the scheme~\eqref{schema}-\eqref{def.flux.num.contrainte} and associated to $\hat{q}_\Delta$. Now we have $$\norma{\rho-\rho_\Delta}_{\L1(Q)}\le\norma{\rho-\hat{\rho}_\Delta}_{\L1(Q)}+\norma{\rho_\Delta-\hat{\rho}_\Delta}_{\L1(Q)}.
$$ But, thanks to~\cite[Theorem~4.9]{scontrainte}, under the CFL condition~\eqref{CFL}, $\norma{\rho-\hat{\rho}_\Delta}_{\L1(Q)}$ tends to $0$ as $\Delta t$, $\Delta x\to0$. Furthermore, thanks to Proposition~\ref{pro.diff.solutions}, we have $$\norma{\rho_\Delta-\hat{\rho}_\Delta}_{\L1(Q)}\le2 ~ T~ \norma{q_\Delta-\hat{q}_\Delta}_{\L1([0,T])},$$ which also shows that $\norma{\rho_\Delta-\hat{\rho}_\Delta}_{\L1(Q)}$ tends to $0$ as $\Delta t$, $\Delta x\to0$. \end{proof} \subsection{Validation of the numerical scheme}\label{sec:validation} \begin{figure} \caption{The functions ${[\rho \mapsto f(\rho)]}$ and ${[\xi \mapsto p(\xi)]}$ as in Section~\ref{sec:validation}.} \label{fig:validation1} \end{figure} \begin{figure} \caption{Representation of the solution constructed in~\cite[Section~6]{BorisCarlottaMax-M3AS} and described in Subsection \ref{sec:validation}.} \label{fig:gull1} \label{fig:gull2} \label{fig:gull3} \label{fig:gull4} \label{fig:validation2} \end{figure} We propose here to validate the numerical scheme~\eqref{schema}-\eqref{def.flux.num.contrainte}-\eqref{contrainte.approx} using the Godunov numerical flux (see e.g.~\cite{GodlewskiRaviartBook, LevequeBook}), which will be used in the remainder of this paper: \begin{eqnarray*} F(a,b) &= \left\{ \begin{array}{l@{\quad\text{ if }}l} \underset{[a,b]}\min f & a\le b,\\ \underset{[b,a]}\max f & a>b. \end{array} \right. \end{eqnarray*} \noindent We consider the explicit solution to~\eqref{eq:constrianed} constructed in~\cite[Section~6]{BorisCarlottaMax-M3AS} by applying the wave front tracking algorithm. The set-up for the simulation is as follows. Consider the domain of computation $[-6,1]$, take a normalized flux $f(\rho) = \rho(1-\rho)$ (namely the maximal velocity and the maximal density are assumed to be equal to one) and a linear weight function $w(x) = 2 (1 + x)\,\chi_{[-1, 0]}(x)$. Assume a uniform distribution of maximal density in $[x_A, x_B]$ at time $t=0$, namely $\bar\rho = \chi_{[x_A, x_B]}$.
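For the concave normalized flux $f(\rho)=\rho(1-\rho)$, the Godunov flux and one constrained time step of the scheme~\eqref{schema}-\eqref{def.flux.num.contrainte} can be sketched as follows (Python; an illustrative sketch, not the authors' implementation; boundary handling is simplified):

```python
# Godunov flux for the concave flux f(rho) = rho (1 - rho), sigma = 1/2,
# and one update rho^n -> rho^{n+1} with the flux at interface j_c + 1/2
# capped by the constraint value q^n.
def f(rho):
    return rho * (1.0 - rho)

SIGMA = 0.5  # vertex of the bell-shaped flux

def godunov_flux(a, b):
    if a <= b:
        return min(f(a), f(b))   # f concave: min over [a, b] at an endpoint
    if b <= SIGMA <= a:
        return f(SIGMA)          # max over [b, a] attained at the vertex
    return max(f(a), f(b))

def q_nonlocal(rho, xs, dx, jc, p, w):
    """Non-local constraint value q^n = p(dx * sum_{j <= jc} w(x_j) rho_j^n)."""
    return p(dx * sum(w(xs[j]) * rho[j] for j in range(jc + 1)))

def step(rho, dt, dx, jc, q):
    """One time step; flux[j] lives at the interface x_{j+1/2}."""
    flux = [godunov_flux(rho[j], rho[j + 1]) for j in range(len(rho) - 1)]
    flux[jc] = min(flux[jc], q)          # constrained interface
    new = list(rho)
    for j in range(1, len(rho) - 1):     # boundary cells kept fixed for brevity
        new[j] = rho[j] - dt / dx * (flux[j] - flux[j - 1])
    return new
```

Under the CFL condition~\eqref{CFL} ($\mathrm{Lip}(F)\,\Delta t/\Delta x\le1/2$), constant states are preserved and a closed constraint ($q=0$) makes the density pile up in the cell upstream of the interface.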
The efficiency of the exit, $p$, see Figure~\ref{fig:validation1}, is of the form \begin{eqnarray*} p(\xi) &= \left\{ \begin{array}{l@{\quad\text{ if }}l} p_0 & 0\le\xi<\xi_1,\\ p_1 & \xi_1\le\xi<\xi_2,\\ p_2 & \xi_2\le\xi\le 1. \end{array} \right. \end{eqnarray*} The explicit solution $\rho$ corresponding to the values \begin{align*} &p_0 =0.21, && p_1 =0.168, && p_2 =0.021, && \xi_1 \sim 0.566, && \xi_2 \sim 0.731,\\ &x_A = -5.75, && x_B =-2, \end{align*} is represented in Figure~\ref{fig:validation2}. The above choices for the flux $f$ and the efficiency $p$ ensure that the solution to each Riemann problem is unique, see~\cite{AndreianovDonadelloRosiniRazafisonProc}. We refer to~\cite[Section~6]{BorisCarlottaMax-M3AS} for the details of the construction of the solution $\rho$ and its physical interpretation. \begin{figure} \caption{With reference to Subsection \ref{sec:validation}: The numerically computed solution $x \mapsto \rho_\Delta(t,x)$ and the explicitly computed solution $x \mapsto \rho(t,x)$ at different fixed times $t$.} \label{fig:tiger1} \label{fig:tiger2} \label{fig:tiger3} \label{fig:tiger4} \label{fig:zebra1} \label{fig:zebra2} \label{fig:zebra3} \label{fig:zebra4} \label{fig:tiger5} \label{fig:tiger6} \label{fig:tiger7} \label{fig:tiger8} \label{fig:zebra5} \label{fig:zebra6} \label{fig:zebra7} \label{fig:zebra8} \label{fig:validation3} \end{figure} \noindent A qualitative comparison between the numerically computed solution $x \mapsto \rho_\Delta(t,x)$ and the explicitly computed solution $x \mapsto \rho(t,x)$ at different fixed times $t$ is given in Figure~\ref{fig:validation3}. We observe good agreement between $x \mapsto \rho(t,x)$ and $x \mapsto\rho_\Delta(t,x)$. The parameters for the numerically computed solution are $\Delta x=3.5\times10^{-4}$ and $\Delta t=7\times10^{-5}$.\\ A convergence analysis is also performed for this test.
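The experimental order of convergence reported in Table~\ref{erreurs} is (minus) the least-squares slope of the logarithm of the error against the logarithm of the number of cells; a short sketch of this computation, using the error values from the table:

```python
import math

# Relative L1-errors at t = 10 from Table "erreurs", per number of cells.
cells = [625, 1250, 2500, 5000, 10000, 20000]
errors = [9.6843e-3, 6.2514e-3, 3.4143e-3, 1.3172e-3, 1.03e-3, 4.2544e-4]

# Least-squares slope of log(error) vs log(cells); the experimental
# order of convergence is minus this slope.
x = [math.log(n) for n in cells]
y = [math.log(e) for e in errors]
mx, my = sum(x) / len(x), sum(y) / len(y)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
order = -slope  # approximately 0.906
```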
We introduce the relative $\L1$-error for the density $\rho$, at a given time $t^n$, defined by $$E_{\L1}^n=\left [\sum_{j}\left|\rho(t^n,x_j)-\rho_j^n\right|\right ]\,\Big/\left [\sum_{j}\left|\rho(t^n,x_j)\right|\right ].$$ In Table~\ref{erreurs}, we report the relative $\L1$-errors for different numbers of space cells at the fixed time $t=10$. We deduce that the order of convergence is approximately $0.906$. As in~\cite{scontrainte}, we observe that the modification~\eqref{def.flux.num.contrainte} of the numerical flux does not affect the accuracy of the scheme. \begin{table}[ht] \begin{center} \begin{tabular}{|c|c|} \hline Number of cells & $\L1$-error\\ \hline $625$ & $9.6843\times10^{-3}$\\ \hline $1250$ & $6.2514\times10^{-3}$\\ \hline $2500$ & $3.4143\times10^{-3}$\\ \hline $5000$ & $1.3172\times10^{-3}$\\ \hline $10000$ & $1.03\times10^{-3}$\\ \hline $20000$ & $4.2544\times10^{-4}$\\ \hline Order & $0.906$\\ \hline \end{tabular} \tiny\caption{Relative $\L1$-error at time $t=10$.} \label{erreurs} \end{center} \end{table} \section{Numerical simulations}\label{sec:simulations} This section is devoted to the phenomenological description of some collective effects in crowd dynamics related to capacity drop, namely the Braess' paradox and the Faster Is Slower (FIS) effect. \subsection{Faster is Slower effect}\label{sec:FIS} The FIS effect was first described in~\cite{Helbing2000Simulating, Parisi2005606} in the context of the room evacuation problem. The authors studied the evolution of the evacuation time as a function of the maximal velocity reached by the pedestrians, and they showed that there exists an optimal velocity for which the evacuation time attains a minimum. Therefore, any acceleration beyond the optimal velocity worsens the evacuation time. Following the studies above, the curve representing the evacuation time as a function of the average velocity takes a characteristic shape \cite[Figure 1c]{Parisi2005606}.
The first numerical tests we performed aim to verify whether such a shape is obtained starting from the ADR model. To this end, we consider the corridor modeled by the segment $[-6,1]$, with an exit at $x=0$. We consider the flux $f(\rho)=\rho \, v_{\max} \, (1-\rho)$ where $v_{\max}$ is the maximal velocity of the pedestrians and the maximal density is equal to one. We use the same weight function as for the validation of the scheme, $w(x)=2(1+x)\chi_{[-1,0]}(x)$, and the same initial density, $\bar{\rho}=\chi_{[-5.75,-2]}$. The efficiency of the exit $p$ is now given by the following continuous function \begin{eqnarray} \label{P_continuous} p(\xi) &= \left\{ \begin{array}{l@{\quad\text{ if }}l} p_0 & 0\le\xi<\xi_1,\\[6pt] \displaystyle\frac{(p_0-p_1)\xi+p_1\xi_1-p_0\xi_2}{\xi_1-\xi_2} & \xi_1\le\xi<\xi_2,\\[10pt] p_1 & \xi_2\le\xi\le1, \end{array} \right. \end{eqnarray} where \begin{align*} &p_0 =0.24, && p_1 =0.05, && \xi_1=0.5, && \xi_2=0.9. \end{align*} The space and time steps are fixed to $\Delta x=5\times10^{-3}$ and $\Delta t=5\times10^{-4}$. Figure~\ref{fis_f_p} shows the flux $f$ corresponding to the maximal velocity $v_{\max}=1$ and the above efficiency of the exit. \begin{figure}\label{fis_f_p} \end{figure} \begin{figure}\label{fis_exit_times_velocities} \end{figure} \begin{figure} \caption{With reference to Subsection \ref{sec:FIS}: Densities at the exit as a function of time for different velocities.} \label{fis_densities_doors} \end{figure} \begin{figure}\label{p_beta} \end{figure} \begin{figure}\label{fis_stabilities} \end{figure} Figure~\ref{fis_exit_times_velocities} represents the evacuation time as a function of the maximal velocity $v_{\max}$, as $v_{\max}$ varies in the interval $[0.1 , 5]$. As we can observe, the general shape described above is recovered. The numerical minimal evacuation time is $19.007$ and is obtained for $v_{\max}=1$.
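The continuous efficiency~\eqref{P_continuous} interpolates linearly between the two levels $p_0$ and $p_1$; a sketch with the parameter values above (including the modification $p_\beta(\xi)=p(\beta\xi)$ considered below; illustrative code):

```python
# Piecewise-linear continuous efficiency p(xi) with the FIS parameters
# p0 = 0.24, p1 = 0.05, xi_1 = 0.5, xi_2 = 0.9 (values from the text).
P0, P1, XI1, XI2 = 0.24, 0.05, 0.5, 0.9

def p(xi):
    if xi < XI1:
        return P0
    if xi < XI2:
        # linear interpolation between (xi_1, p0) and (xi_2, p1)
        return ((P0 - P1) * xi + P1 * XI1 - P0 * XI2) / (XI1 - XI2)
    return P1

def p_beta(xi, beta):
    """Modified efficiency p_beta(xi) = p(beta * xi)."""
    return p(beta * xi)
```

Note that the middle branch equals $p_0$ at $\xi=\xi_1$ and $p_1$ at $\xi=\xi_2$, so $p$ is continuous and non-increasing, as required by \textbf{(P)}.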
In addition, we report in Figure~\ref{fis_densities_doors} the density at the exit as a function of time for different values of the maximal velocity $v_{\max}$ around the optimal one. We notice that the maximal density at the exit and the time interval during which the density is maximal increase with the velocity. This expresses the jamming at the exit that leads to the FIS effect. \begin{figure} \caption{With reference to Subsection \ref{sec:Braess}: Evacuation time as a function of the position of the obstacle.} \label{braess_exit_times} \end{figure} \begin{figure} \caption{With reference to Subsection \ref{sec:Braess}: Braess paradox simulations: density profiles at times $t=1$ (first line), $t=7$ (second line), $t=15$ (third line), $t=19$ (fourth line) and $t=24.246$ (last line).} \label{Braess_snapshots} \end{figure} Then we performed a series of tests to see how the general shape obtained in Figure~\ref{fis_exit_times_velocities} changes with respect to variations of the parameters of the model. In Figure~\ref{fis_stabilities}~(a), we show this variation when we consider different initial densities, namely, $\bar\rho$, $\bar{\rho}_1$ and $\bar{\rho}_2$ with $\bar\rho_1(x)=0.8\chi_{[-5.75,-2]}(x)$ and $\bar\rho_2(x)=0.6\chi_{[-5.75,-2]}(x)$. The general shape of the curves is conserved. We observe that the evacuation time increases with the initial amount of pedestrians, while the optimal velocity decreases as the initial amount of pedestrians increases. The minimal evacuation time and the corresponding optimal maximal velocity are $12.259$ and $1.07$ for $\bar{\rho}_2$, and $15.691$ and $1.03$ for $\bar{\rho}_1$. Next we explore the case where the efficiency of the exit varies. We consider the function $p$ defined in~\eqref{P_continuous} and the modification $p_\beta$ such that $p_\beta(\xi)=p(\beta\xi)$. In Figure~\ref{p_beta}, we plot the functions $p$ and $p_\beta$ for $\beta=0.8$ and $\beta=0.9$.
Then, Figure~\ref{fis_stabilities}~(b) shows the evacuation time curves corresponding to these three efficiencies of the exit. As minimum evacuation times, we obtain $18.586$ and $18.827$ for $\beta=0.8$ and $0.9$ respectively. As expected, the minimal evacuation time increases with lower efficiency of the exit. The corresponding velocities are approximately $1.06$ and $1.02$, respectively. Finally, we change the location of the initial density. In addition to the corridor $[-6,1]$, we consider two other corridors modeled by the segments $[-12,1]$ and $[-20,1]$. In these two corridors we take as initial densities $\bar{\rho}_3(x)=\chi_{[-11.75,-8]}(x)$ and $\bar{\rho}_4(x)=\chi_{[-19.75,-16]}(x)$ respectively. We report the obtained evacuation time curves in Figure~\ref{fis_stabilities}~(c). As expected, the minimal evacuation time increases with the distance between the exit and the initial density location. \subsection{Braess' paradox}\label{sec:Braess} The presence of obstacles, such as columns upstream from the exit, may prevent the crowd density from reaching dangerous values and may actually help to minimize the evacuation time, since in a moderate density regime the full capacity of the exit can be exploited. From a microscopic point of view, the decrease of the evacuation time may seem unexpected, as some of the pedestrians are forced to choose a longer path to reach the exit. The ADR model is able to reproduce the Braess' paradox for pedestrians, as we show in the following simulations. We consider, as in the previous subsection, the corridor modeled by the segment $[-6,1]$ with an exit at $x=0$. We compute the solution corresponding to the flux $f(\rho)=\rho(1-\rho)$, the initial density $\bar\rho(x)=\chi_{[-5.75,-2]}(x)$, the efficiency of the exit $p$ of the form~\eqref{P_continuous} with the parameters \begin{align*} &p_0 =0.21, && p_1 =0.1, && \xi_1=0.566, && \xi_2=0.731 \end{align*} and the same weight function $w(x)=2(1+x)\chi_{[-1,0]}(x)$.
The space and time steps are fixed to $\Delta x=5\times10^{-3}$ and $\Delta t=5\times10^{-4}$. Without any obstacle, the numerical evacuation time is $29.496$. In the following simulations we place an obstacle at $x=d$, with $-2<d<0$. The obstacle reduces the capacity of the corridor and can be seen as a door, which we assume larger than the one at $x=0$. Following these ideas, we define an efficiency function $p_d(\xi)=1.15\, p(\xi)$ and a weight function $w_d(x)=2(x-d+1)\chi_{[d-1,d]}(x)$ associated to the obstacle. In Figure~\ref{braess_exit_times} we report the evolution of the evacuation time when the position of the obstacle varies in the interval $[-1.9,-0.01]$ with a step of $0.01$. We observe that for $-1.8\le d\le-1.72$, the evacuation time is lower than in the absence of the obstacle. The optimal position of the obstacle is obtained for $d=-1.72$ and the corresponding evacuation time is $24.246$. We compare in Figure~\ref{Braess_snapshots} five snapshots of the solution without an obstacle and of the solutions with an obstacle placed at $d=-1.72$ and at $d=-1.85$. The latter location corresponds to a case where the evacuation time is greater than the one without an obstacle. In these snapshots, we see that the region upstream of the obstacle placed at $d=-1.85$ becomes congested very soon. This is due to the fact that the obstacle is too close to the location of the initial density. When the obstacle is placed at $d=-1.72$, it delays the congestion at the exit. \subsection{Zone of low velocity}\label{sec:Braess+FIS} \begin{figure}\label{slowzone} \end{figure} \noindent In this subsection, we perform a series of simulations where the obstacle introduced in Subsection~\ref{sec:Braess} is replaced by a zone where the velocity of pedestrians is lower than elsewhere in the domain. The effect we want to observe here is similar to the one we see in Braess' paradox.
Namely, we prevent a high concentration of pedestrians in front of the exit by constraining their flow in an upstream portion of the corridor. In this case, however, the constraint is local, as the maximal value allowed for the flow only depends on the position in the corridor. \begin{figure} \caption{With reference to Subsection \ref{sec:Braess+FIS}: Braess' paradox and zone of low velocity simulations: density profiles at times $t=1$ (first line), $t=7$ (second line), $t=15$ (third line), $t=19$ (fourth line) and $t=20.945$ (last line).} \label{column_snapshots} \end{figure} We consider again the corridor modeled by the segment $[-6,1]$ with an exit at $x=0$. The efficiency of the exit and the initial density are the same as in the previous subsection. Assume that the slow zone is of size one and is centered at $x=d$, where $-1.9\le d\le0$. Define the function \begin{equation} \label{def_k} k(x) = \left\{ \begin{array}{l@{\quad\text{ if }}l} 1 & x\le d-0.5,\\[6pt] -2(x-d) & d-0.5\le x\le d,\\[10pt] 2(x-d) & d\le x\le d+0.5,\\[10pt] 1 & x\ge d+0.5, \end{array} \right. \end{equation} and the velocity $v(x,\rho)= \left[\lambda+(1-\lambda) \, k(x)\right] v_{\max} \, (1-\rho)$, where $\lambda\in[0,1]$ and $v_{\max}\ge1$ is the maximal velocity. With this velocity, the maximal velocity of pedestrians decreases in the interval $[d-0.5,d]$, reaching its minimal value $\lambda \, v_{\max}$ at $x=d$. Then the velocity increases in the interval $[d,d+0.5]$, reaching the maximum value $v_{\max}$, which corresponds to the maximal velocity away from the slow zone. Finally, we consider the flux $f(x,\rho)=\rho\,v(x,\rho)$; the space and time steps are fixed to $\Delta x=5\times10^{-3}$ and $\Delta t=5\times10^{-4}$. \noindent Figure~\ref{slowzone}~(a) shows the evolution of the evacuation time as a function of the parameter $\lambda$ varying in the interval $[0.1,1]$ when the center of the slow zone is fixed at $d=-1.5$.
We observe that the optimal minimal velocity in the slow zone is attained for $\lambda=0.88$ and the corresponding evacuation time is $20.945$. Recalling that without the slow zone the evacuation time is $29.496$, we see that the introduction of the slow zone allows us to reduce the evacuation time. In Figure~\ref{slowzone}~(b), we show the evolution of the evacuation time when varying the center of the slow zone $d$ in the interval $[-1.9,0]$ and when the minimal and the maximal velocities are fixed and correspond to $\lambda=0.88$ and $v_{\max}=1$. We observe here that, unlike in the Braess paradox tests, the evacuation time does not depend on the location of the slow zone, except when the latter is close enough to the exit. Indeed, when the slow zone gets too close to the exit, the evacuation time grows. This is due to the fact that pedestrians do not have time to speed up before reaching the exit. Fix now $d=-1.5$ and $\lambda=0.88$ and assume that $v_{\max}$ varies in the interval $[0.1,5]$. The evolution of the evacuation time as a function of $v_{\max}$ is reported in Figure~\ref{slowzone}~(c). We observe that we recover the characteristic shape already obtained for the FIS effect. Finally, we present in Figure~\ref{column_snapshots} five snapshots for three different solutions. The first two solutions are the ones computed in Subsection~\ref{sec:Braess}, without an obstacle and with an obstacle located at $d=-1.72$ respectively. The third solution is computed with a zone of low velocity centered at $d=-1.72$, $\lambda=0.88$ and $v_{\max}=1$. In order to have a good resolution of this third solution, the space and time steps were fixed to $\Delta x=3.5\times10^{-4}$ and $\Delta t=7\times10^{-5}$. We note that in the case where a zone of low velocity is placed in the domain, we do not see the capacity drop, as the density of pedestrians never attains very high values in the region next to the exit.
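The slow-zone velocity field and the corresponding flux are easy to reproduce numerically. The sketch below (Python with numpy; our own illustration, not the authors' code) implements $k(x)$ from \eqref{def_k}, the velocity $v(x,\rho)$, and a plain Godunov finite volume update on the corridor $[-6,0]$ with free outflow at the exit. The non-local exit constraint of the ADR model is deliberately omitted, so the scheme only illustrates the transport through the slow zone, not the full constrained dynamics.

```python
import numpy as np

def k(x, d):
    # Shape factor of Eq. (def_k): equals 1 away from the slow zone and
    # decreases linearly to 0 at its centre x = d, i.e. min(1, 2|x - d|).
    return np.clip(2.0 * np.abs(x - d), 0.0, 1.0)

def velocity(x, rho, d, lam=0.88, vmax=1.0):
    # v(x, rho) = [lambda + (1 - lambda) k(x)] * vmax * (1 - rho)
    return (lam + (1.0 - lam) * k(x, d)) * vmax * (1.0 - rho)

def evacuate(d=-1.5, lam=0.88, vmax=1.0, dx=0.01, t_end=5.0):
    # Godunov scheme in demand/supply form for the flux rho * v(x, rho),
    # with the speed factor frozen at each cell interface.  Left boundary:
    # wall (zero inflow); right boundary at x = 0: free outflow (the
    # non-local exit constraint is not modelled here).
    n = round(6.0 / dx)
    x = -6.0 + (np.arange(n) + 0.5) * dx            # cell centres
    xi = np.linspace(-6.0, 0.0, n + 1)              # cell interfaces
    rho = ((x >= -5.75) & (x <= -2.0)).astype(float)  # initial density
    c = (lam + (1.0 - lam) * k(xi, d)) * vmax       # interface speed factors
    dt = 0.4 * dx / vmax                            # CFL condition
    mass0 = rho.sum() * dx
    t = 0.0
    while t < t_end:
        rl = np.concatenate(([0.0], rho))           # left states at interfaces
        rr = np.concatenate((rho, [0.0]))           # right states at interfaces
        demand = c * np.where(rl < 0.5, rl * (1.0 - rl), 0.25)
        supply = c * np.where(rr < 0.5, 0.25, rr * (1.0 - rr))
        flux = np.minimum(demand, supply)           # Godunov interface flux
        rho = rho - dt / dx * (flux[1:] - flux[:-1])
        t += dt
    return rho, mass0
```

Running `evacuate()` gives a density that stays in $[0,1]$ and a total mass that decreases once the head of the crowd reaches the exit; tracking the time at which the remaining mass drops below a threshold would give the (unconstrained) evacuation time.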
\section{Conclusions}\label{sec:conclusions} Qualitative features that are characteristic of pedestrians' macroscopic behaviour at bottlenecks (Faster is Slower, Braess' paradox) are reproduced in the setting of the simple scalar model with non-local point constraint introduced in~\cite{BorisCarlottaMax-M3AS}. These effects are shown to be persistent over large intervals of parameter values. The validation is done by means of a simple and robust time-explicit splitting finite volume scheme which is proved to be convergent, with an experimental rate close to one. The results presented in this paper open the way to the study of more complex models. Indeed, as ADR is a first order model, it is not able to capture more complicated effects related to crowd dynamics. Typically, ADR fails to reproduce the amplification of small perturbations. This leads us to consider second order models such as the one proposed by Aw, Rascle and Zhang~\cite{Aw_SIAP_2000, Zhang_Method_2002} in the framework of vehicular traffic. Another extension of this work is to consider the ADR model with constraints that are non-local in time. Such constraints allow us to tackle optimal management problems in the spirit of \cite{ColomboFacchiMaterniniRosini, CGRESAIM}. Finally, this work can also be extended to two-dimensional models, where experimental validations may be possible. \section*{Acknowledgment} All the authors are supported by French ANR JCJC grant CoToCoLa and Polonium 2014 (French-Polish cooperation program) No.331460NC. The first author is grateful to IRMAR, Universit\'e de Rennes, for the hospitality during the preparation of this paper. The second author is also supported by the Universit\'e de Franche-Comt\'e, soutien aux EC 2014. The project was financed from the funds of the National Science Centre (Poland) granted on the basis of decision no.\ DEC-2011/01/B/ST1/03965. \section*{References} \end{document}
\begin{document} \begin{frontmatter} \title{Projection sparse principal component analysis: \\ An efficient least squares method} \if00{ \author[A1]{Giovanni Maria Merola\corref{cor1}} \author[A1,A2]{Gemai Chen} \address[A1]{Department of Mathematical Sciences, Xi'an Jiaotong-Liverpool University, 111 Ren'ai Road,\\Suzhou Industrial Park, Suzhou, Jiangsu Province, P.R. China 215123} \address[A2]{Department of Mathematics and Statistics, University of Calgary, Calgary, Alberta, Canada T2N 1N4} \cortext[cor1]{Corresponding author. Email address: \url{[email protected]}} }\fi \begin{abstract} We propose a new sparse principal component analysis (SPCA) method in which the solutions are obtained by projecting the full cardinality principal components onto subsets of variables. The resulting components are guaranteed to explain a given proportion of variance. The computation of these solutions is very efficient. The proposed method compares well with the optimal least squares sparse components. We show that other SPCA methods fail to identify the best sparse approximations of the principal components and explain less variance than our solutions. We illustrate and compare our method with others through extensive simulations and through the analysis of the computational results for nine datasets of increasing dimensions up to 16,000 variables. \end{abstract} \begin{keyword} Dimension reduction \sep Power method \sep SPCA \sep Variable selection. \end{keyword} \end{frontmatter} \section{Introduction} Principal components analysis (PCA) is the oldest and most popular data dimensionality reduction method used to approximate a set of variables in a lower dimensional space \cite{pea}. Effective use of the method can approximate a large number of variables by a few linear combinations of them, called principal components (PCs).
PCA has been extensively used over the past century and in recent times the interest in this method has surged, due to the availability of very large datasets. Applications of PCA include gene expression analysis, market segmentation, handwriting classification, image recognition, and other types of data analysis. PCs are usually difficult to interpret and not informative about important features of the dataset because they are combinations of all the observed variables, as already pointed out by \citet{jef}. A common approach used to increase their interpretability is to threshold the coefficients, called loadings, of the combinations defining the PCs. That is, variables corresponding to loadings that are lower than a given threshold are ignored. However, this practice can give misleading results \citep{jol} and the retained variables can be highly correlated among themselves. This means that the variables included in the interpretation actually carry similar information. In recent years a large number of methods for sparse principal components analysis (SPCA) have been proposed; see, e.g., \citep{jol00, mog, zou, sri, she, wan}. These methods compute solutions in which some of the coefficients to be estimated are equal to zero. In addition to increasing the interpretability of the results, sparse methods are recommended under the sparsity principle \citep{has}. Conventional SPCA methods replace the ordinary PCs with PCs of subsets (blocks) of variables. The resulting sparse PCs (SPCs) are combinations of only a few of the observed variables. That is, the SPCs are linear combinations of all the variables with only a few loadings not equal to zero, the number of which is called the cardinality. The difference among conventional SPCA methods lies in the optimization approach used to select the variables to be included in the blocks. In this context, the variable selection problem is non-convex and NP-hard \citep{mog}, hence computationally intractable.
Some methods use a genuine cardinality penalty (improperly called the $\ell_0$ norm), others an $\ell_1$ penalty. The most popular of these methods seems to be the Lasso-based SPCA \citep{zou}. SPCA methods are expressly recommended for large fat datasets \citep{has2}, i.e., samples with fewer observations than variables. By the nature of the objective function maximized, the components computed maximize the variance explained within each block, instead of that of the whole data matrix. As a consequence, the selected blocks contain highly correlated variables \citep{mer}. Furthermore, with this approach the exact sparse reduction of the PCs of fat matrices cannot be identified, as we will show later. A least squares SPCA method (LS SPCA) in which the sparse components are obtained by minimizing the $\ell_2$ norm of the approximation error was proposed in \cite{mer}. This approach produces sparse components that explain the largest possible proportion of variance for a given cardinality. LS SPCA can identify the equivalent sparse representation of the PCs of fat matrices. However, the variable selection approaches suggested are not scalable to large matrices because they are top-down and require the computation of (generalized) eigenvectors of large matrices. In this paper we suggest an efficient variable selection strategy for LS SPCA, based on projecting the full cardinality PCs onto blocks of variables. This approach is based on a property, which we prove below, stating that if the regression of a PC on a block of variables yields an $R^2$ statistic equal to $\alpha \in (0, 1)$, then the LS SPCA components computed on that block of variables will explain a proportion of the variance not smaller than $\alpha$. With this approach, the NP-hard SPCA variable selection problem is reduced to a more manageable univariate regression variable selection problem.
This procedure, to which we refer as Projection SPCA (PSPCA), also gives as by-products the projections of the PCs, which are sparse components in themselves. We show that algorithms using PSPCA variable selection are very efficient for computing LS SPCA components, having a growth order of about the number of variables squared. We also show that the performance of the projected PCs is comparable to that of the LS SPCA components. This is relevant because these projections are easier to understand by researchers and also easier and more economical to compute. In the next section we review PCA and LS SPCA, and give a novel interpretation of the latter. The methodological details of PSPCA are discussed in Section~\ref{sec:pspca}. In Section~\ref{sec:selectblocks} we discuss the use of PSPCA for variable selection. We compare its performance on fat matrices with that of conventional SPCA methods and explain the details of the PSPCA algorithm. The proposed method is illustrated by using simulated and real datasets in Section~\ref{sec:examples}. We give some final comments in Section~\ref{sec:conclusions}. The Appendix contains some of the proofs. \section{Full cardinality and sparse principal components}\label{sec:sparsecomp} We assume that ${\mathbf{X}}$ is an $n\times p$ matrix containing $n$ observations on $p$ mean centred variables which have been scaled so that ${\mathbf{S}} = {\mathbf{X}}\trasp{\mathbf{X}}$ is the sample covariance or correlation matrix. Because of this, we will use the terms uncorrelated (correlated) and orthogonal (nonorthogonal) interchangeably. The information contained in the dataset is summarized by its total variance, defined by the squared (Frobenius) norm $||{\mathbf{X}}||^2 = \mathrm{tr}({\mathbf{S}})$, where $\mathrm{tr}$ is the trace operator. In the following the term norm refers to this norm, unless otherwise specified.
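For concreteness, this standardization can be sketched in a few lines of Python (numpy; the synthetic data and all variable names are ours): each column is mean centred and scaled to unit norm, so that ${\mathbf{S}}={\mathbf{X}}\trasp{\mathbf{X}}$ is the correlation matrix and the total variance equals $\mathrm{tr}({\mathbf{S}})=p$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 6
X = rng.standard_normal((n, p)) @ rng.standard_normal((p, p))  # correlated columns

X = X - X.mean(axis=0)             # mean centre each variable
X = X / np.linalg.norm(X, axis=0)  # scale so that S = X'X is the correlation matrix
S = X.T @ X

total_variance = np.trace(S)       # equals ||X||_F^2; here it is p = 6
```

For the covariance case one would instead divide the centred columns by $\sqrt{n-1}$, leaving the variables on their original scales.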
A component is any linear combination of the columns of ${\mathbf{X}}$, generically denoted by ${\mathbf{t}} = {\mathbf{X}}\mathbf{a}$, where the vector $\mathbf{a}$ is the vector of loadings (or just the loadings, for short). A set of ordered components $({\mathbf{t}}_1, \ldots, {\mathbf{t}}_d) = {\mathbf{X}}(\mathbf{a}_1, \ldots, \mathbf{a}_d)$ is denoted as ${\mathbf{T}_{[d]}} = {\mathbf{X}}{\mathbf{A}_{[d]}}$, where the subscript ${[j]}$ denotes the first $j$ columns of a matrix and ${\mathbf{A}_{[d]}} = (\mathbf{a}_1, \ldots, \mathbf{a}_d)$ is called the matrix of loadings. The least squares estimates of $d$ components, with $d \le p$, are obtained by minimizing the squared norm of the difference of the data matrix from its projection onto the components, $\Pi_{{\mathbf{T}_{[d]}}}{\mathbf{X}}$, where $\Pi_{{\mathbf{M}}}={\mathbf{M}}({\mathbf{M}}\trasp{\mathbf{M}})\matinv{\mathbf{M}}\trasp$ denotes the projector onto the column space of the matrix ${\mathbf{M}}$. By Pythagoras' theorem, the solutions must satisfy \begin{equation}\label{eq:vexp} {\mathbf{T}_{[d]}} = \argmin\limits_{{\mathbf{M}}\in \mathbb{R}^{n\times d}} ||{\mathbf{X}} - \Pi_{{\mathbf{M}}}{\mathbf{X}}||^2 = \argmax\limits_{{\mathbf{M}}\in \mathbb{R}^{n\times d}} ||\Pi_{{\mathbf{M}}}{\mathbf{X}}||^2. \end{equation} The term on the right-hand side of Eq.~(\ref{eq:vexp}), called the variance explained by the components ${\mathbf{T}_{[d]}}$ and denoted as $\mathrm{vexp}({\mathbf{T}_{[d]}})$, is used to measure the performance of the components in approximating the data and is equal to \begin{align} \mathrm{vexp}({\mathbf{T}_{[d]}}) = \mathrm{tr} \Big \{ {\mathbf{X}}\trasp{\mathbf{T}_{[d]}} ({\mathbf{T}\traspd{[d]}}{\mathbf{T}_{[d]}} )^{-1}{\mathbf{T}\traspd{[d]}}{\mathbf{X}} \Big \} = \mathrm{tr} \Big \{ {\mathbf{S}}{\mathbf{A}_{[d]}} ({\mathbf{A}\traspd{[d]}}{\mathbf{S}}{\mathbf{A}_{[d]}} )^{-1}{\mathbf{A}\traspd{[d]}}{\mathbf{S}} \Big \}. 
\label{eq:vexp1} \end{align} \subsection{Principal components} The principal components, denoted as ${\mathbf{u}}_j = {\mathbf{X}}{\mathbf{v}}_j$ with $j \in \{ 1,\ldots, d\}$ and $d \leq \mathrm{rank}({\mathbf{S}})$, are obtained by maximizing $\mathrm{vexp}({\mathbf{T}_{[d]}})$ under the orthogonality requirements ${\mathbf{u}}\trasps{i}{\mathbf{u}}_j = 0$ if $i\neq j$. That is, the PCs' loadings are found from \begin{align}\label{eq:pcaprob} \forall_{j \in \{ 1, \ldots, d\}} \quad {\mathbf{v}_j} = &\argmax\limits_{{\mathbf{a}_j}\in\mathbb{R}^p} {{\mathbf{a}\trasps{j}}{\mathbf{X}}\trasp{\mathbf{X}}{\mathbf{X}}\trasp{\mathbf{X}}{\mathbf{a}_j}}/{{\mathbf{a}\trasps{j}}{\mathbf{X}\trasp}{\mathbf{X}}{\mathbf{a}_j}}, \\ &\text{subject to}\, {\mathbf{v}}\trasps{i}{\mathbf{S}}{\mathbf{a}_j}=\mathbf{0},\, i < j, \,\text{if } j >1.\notag \end{align} The solution loadings ${\mathbf{v}}_j$ are the eigenvectors of ${\mathbf{S}}$, such that ${\mathbf{S}}{\mathbf{v}}_j = {\mathbf{v}}_j\lambda_j$, corresponding to the eigenvalues in nonincreasing order, $\lambda_1 \geq \cdots \geq \lambda_d$. By the orthogonality of the components, the total variance explained can be broken down as the sum of the individual variances explained as $$ \mathrm{vexp}\left({\mathbf{U}}_{[d]}\right) = \sum_{j=1}^d\mathrm{vexp}({\mathbf{u}_j}) = \sum_{j=1}^d {{\mathbf{u}\trasps{j}}{\mathbf{X}}{\mathbf{X}\trasp}{\mathbf{u}_j}}/{{\mathbf{u}\trasps{j}}{\mathbf{u}_j}} = \sum_{j=1}^d \lambda_j. $$ The PCs can be regarded as solutions to a number of different problems, such as the singular value decomposition \citep{eck}. Most notably, \citet{hot} showed that when the loadings are scaled to unit norm, we have $\mathrm{vexp}({\mathbf{u}}_j) ={\mathbf{u}}\trasps{j}{\mathbf{u}}_j = \lambda_j$.
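These identities are easy to check numerically. The following sketch (numpy, synthetic data; our own illustration) computes the loadings as eigenvectors of ${\mathbf{S}}$ and verifies that, with unit-norm loadings, each PC's variance explained equals its eigenvalue and that the PCs are mutually orthogonal.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 5
X = rng.standard_normal((n, p)) @ rng.standard_normal((p, p))
X = X - X.mean(axis=0)
S = X.T @ X

lam, V = np.linalg.eigh(S)       # eigh returns eigenvalues in ascending order
lam, V = lam[::-1], V[:, ::-1]   # reorder so that lam[0] >= ... >= lam[p-1]
U = X @ V                        # principal components u_j = X v_j

vexp = (U * U).sum(axis=0)       # vexp(u_j) = u_j' u_j for unit-norm loadings
```

Here `vexp` matches `lam` to machine precision and `U.T @ U` is diagonal, so the total variance explained is $\sum_j \lambda_j = \mathrm{tr}({\mathbf{S}})$.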
Noting also that, by the properties of the spectral decomposition of a symmetric matrix, the loadings are orthogonal, the PCA problem can be formulated as finding \begin{align}\label{eq:pcahot} \forall_{j \in \{ 1, \ldots, d\}} \quad {\mathbf{v}}_j =& \argmax_{\mathbf{a}_j\in\mathbb{R}^p} \mathbf{a}\trasps{j}{\mathbf{S}}\mathbf{a}_j, \\ &\text{subject to } \mathbf{a}\trasps{j}\mathbf{a}_j = 1, \mathbf{a}\trasps{j}{\mathbf{v}}_i = 0,\, \text{if}\ j > i \geq 1.\nonumber \end{align} \subsection{Least squares sparse principal components} A sparse component is a linear combination of a subset, $\dX_j$, of columns of the ${\mathbf{X}}$ matrix, or block of variables, defined by the sparse loadings $\da_j$ as ${\mathbf{t}}_j = \dX_j\da_j$. The number of variables in the block is the cardinality of the loadings. The least squares sparse PCA (LS SPCA) problem is defined by adding sparsity constraints directly into the PCA objective \eqref{eq:pcaprob}, which gives \begin{align}\label{eq:spcaprob} \forall_{j \in \{ 1, \ldots, d\}} \quad \dbj =& \argmax\limits_{\daj\in\mathbb{R}^{c_j}} {\dajT\dXjT{\mathbf{X}}{\mathbf{X}}\trasp\dXj\daj}/{\dajT\dXjT\dXj\daj} \\ & \text{subject to}\, {\mathbf{b}}\traspd{i}{\mathbf{S}}{\mathbf{a}_j} = 0,\, i < j, \,\text{if } j >1,\nonumber \end{align} where $\dXj$ is a block of $c_j$ variables, $\mathbf{a}_j = {\mathbf{J}}_j\daj$ and ${\mathbf{b}_j} = {\mathbf{J}}_j\dbj$ are the full cardinality representations of the sparse loadings, defined by means of the matrix ${\mathbf{J}}_j$, which is formed by the columns of the order-$p$ identity matrix corresponding to the variables in $\dXj$. Note that the SPCA objective must be maximized sequentially. We will refer to the components in the order in which they are computed. The solutions are given in the following proposition \citep{mer}.
\begin{proposition}[Uncorrelated LS SPCA]\label{prop:uspca} Given a block of $c_j$ linearly independent variables $\dX_j$, the solutions to objective \eqref{eq:spcaprob} are the generalized eigenvectors satisfying \begin{equation}\label{eq:uspca} {\mathbf{C}}_j\dXjT{\mathbf{X}}{\mathbf{X}\trasp}\dXj\dbj = {\mathbf{C}}_j{\mathbf{J}}\trasps{j}{\mathbf{S}}{\mathbf{S}}{\mathbf{J}}_j\dbj = \dSj\dbj\xi_{j}, \end{equation} where $$ {\mathbf{C}}_j = \{ {\mathbf{I}}_{c_j} - {\mathbf{H}\trasps{j}}({\mathbf{H}_j}\bdS\matinvj{j}{\mathbf{H}\trasps{j}})\matinv{\mathbf{H}_j}\bdS\matinvj{j} \} , \quad {\mathbf{C}}_1 = {\mathbf{I}}_{c_1}, $$ $\dbj$ are the sparse loadings, the orthogonality constraints are encoded by ${\mathbf{H}_j} = {\mathbf{Y}}\trasps{[j-1]}\dXj$, $\bdS_j = \dXjT\dXj$, and $\xi_{j} = \mathrm{vexp}({\mathbf{y}}_j)$ is the largest eigenvalue. The SPCs ${\mathbf{y}}_j = \dXj\dbj$ are mutually orthogonal and maximize the variance explained. \end{proposition} The optimal orthogonal LS SPCA components are highly constrained; for example, their cardinality cannot be smaller than their order. Due to the greedy nature of the optimization carried out, these locally optimal orthogonal components often stray from the global optimum, while globally better solutions can be found by removing the orthogonality constraints \cite{mer}. If the orthogonality constraints are dropped, the net increment in total variance explained due to a new component is the variance explained by the residuals orthogonal to the components already in the model.
This extra variance explained, which we denote as $\mathrm{evexp}$, is equal to \begin{align} \mathrm{evexp}({\mathbf{t}_j}) &= ||\Pi_{[\bQTj{T}{\mathbf{a}_j}]}{\mathbf{X}}||^2 = {{\mathbf{a}\trasps{j}}\bQTjT{T}{\mathbf{X}}{\mathbf{X}}\trasp\bQTj{T}{\mathbf{a}_j}}/{{\mathbf{a}\trasps{j}}\bQTjT{T}\bQTj{T}{\mathbf{a}_j}}\nonumber\\ &= {{\mathbf{a}\trasps{j}}{\mathbf{X}}\trasp\bQTj{T}\bQTjT{T}{\mathbf{X}}{\mathbf{a}_j}}/{{\mathbf{a}\trasps{j}}\bQTjT{T}\bQTj{T}{\mathbf{a}_j}}, \label{eq:evexp} \end{align} where $\bQTj{T} = ({\mathbf{I}}_n - \Pi_{{\mathbf{T}}_{[j-1]}}){\mathbf{X}}$ is the orthogonal complement of the ${\mathbf{X}}$ matrix (${\mathbf{Q}}_{T_{[0]}} = {\mathbf{X}}$), to which we refer as deflated ${\mathbf{X}}$ matrix (with respect to ${\mathbf{T}}_{[j-1]}$), and $\bQTj{T}\mathbf{a}_j = {\mathbf{t}}_j - \Pi_{{\mathbf{T}}_{[j-1]}} {\mathbf{t}}_j$ is the orthogonal residual of ${\mathbf{t}}_j$. For the first component $\mathrm{evexp}({\mathbf{t}}_1) = \mathrm{vexp}({\mathbf{t}}_1)$. The total variance explained by a set of correlated components is equal to the sum of the extra variances explained, viz. $$ \mathrm{vexp}({\mathbf{T}_{[d]}}) = \sum_{j = 1}^d \mathrm{evexp}({\mathbf{t}}_j). $$ The sparse solutions cannot be determined from the maximization of objective \eqref{eq:evexp} because this is defined in terms of the deflated components $\bQTj{T}\mathbf{a}_j$, while the cardinality constraints must be imposed on the $x$-variables. For this reason, \citet{mer} derives nonorthogonal SPCs ${\mathbf{z}}_j = {\mathbf{X}}{\mathbf{d}}_j = \dXj\ddj$ which maximize the variance of $\bQTj{Z}$ explained by a component ${\mathbf{z}}_j$. This is defined in terms of the variables $\dXj$ as \begin{equation}\label{eq:vexpq} \mathrm{vexp}_Q({\mathbf{z}_j}) = ||\Pi_{{\mathbf{z}}_j}\bQTj{Z}||^2 = {{\mathbf{z}}\trasps{j}\bQTj{Z}\bQTjT{Z}{\mathbf{z}_j}}/{{\mathbf{z}}\trasps{j}{\mathbf{z}_j}} = {\ddjT\dXjT\bQTj{Z}\bQTjT{Z}\dXj\ddj}/{\ddjT\dXjT\dXj\ddj}. 
\end{equation} \begin{proposition}[Correlated LS SPCA] Given a block of linearly independent variables $\dX_j$, the correlated SPCs, ${\mathbf{z}}_j = \dXj\ddj = {\mathbf{X}}{\mathbf{d}}_j$, that successively maximize $\mathrm{vexp}_Q$ are the generalized eigenvectors satisfying \begin{equation}\label{eq:lsspcasol} \dXjT\bQTj{Z}\bQTjT{Z}\dXj\ddj = \dXjT\dXj\ddj\gamma_j = \dSj\ddj\gamma_j, \end{equation} where $\gamma_j$ is the largest generalized eigenvalue, which is equal to $\mathrm{vexp}_Q({\mathbf{z}_j})$. The full cardinality loadings are equal to ${\mathbf{d}}_j = {\mathbf{J}}_j\ddj$. The first component is identical to the first orthogonal component. \end{proposition} We will refer to the SPCs derived from the minimization of the least squares criterion generically as LS SPCA components and use USPCA and CSPCA to refer to the uncorrelated and correlated solutions, respectively. The variance of ${\mathbf{Q}}_{{\mathbf{T}}_{[j]}}$ that a component explains is a lower bound for the variance of ${\mathbf{X}}$ that this component can explain, as stated in the following proposition, whose proof is given in the Appendix. \begin{proposition}\label{prop:vqltev} Given an ordered set of $d$ components, ${\mathbf{t}}_j = {\mathbf{X}}\mathbf{a}_j, j = 1, \ldots, d$, the different types of variance explained as defined in Eqs.~\eqref{eq:vexp1}, \eqref{eq:evexp} and \eqref{eq:vexpq} satisfy \begin{equation}\label{eq:ineqvexp} \mathrm{vexp}_Q({\mathbf{t}_j}) \leq \mathrm{evexp}({\mathbf{t}_j}) \leq \mathrm{vexp}({\mathbf{t}_j}), \end{equation} where $\mathrm{vexp}({\mathbf{t}_j}) = {{\mathbf{t}\trasps{j}}{\mathbf{X}}{\mathbf{X}\trasp}{\mathbf{t}_j}}/{{\mathbf{t}\trasps{j}}{\mathbf{t}_j}}$. Equality is achieved for the first component or if a component is orthogonal to the preceding ones. \end{proposition} The difference between $\mathrm{evexp}$ and $\mathrm{vexp}_Q$ lies in the different spaces onto which the matrix $\bQTj{T}$ is projected. 
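The three kinds of variance explained are easy to experiment with numerically. The sketch below (numpy, synthetic data; all names are ours) draws two earlier components, deflates ${\mathbf{X}}$, and computes the quantities appearing in Proposition~\ref{prop:vqltev} for a random further component. The inequality $\mathrm{vexp}_Q \leq \mathrm{evexp}$ is immediate here, since the two expressions share the same numerator while $||{\mathbf{s}}|| \leq ||{\mathbf{t}}||$ for the residual ${\mathbf{s}}$.

```python
import numpy as np

def projector(M):
    # Orthogonal projector onto the column space of M (QR based)
    Q, _ = np.linalg.qr(M)
    return Q @ Q.T

rng = np.random.default_rng(2)
n, p = 50, 8
X = rng.standard_normal((n, p))
X = X - X.mean(axis=0)
X = X / np.linalg.norm(X, axis=0)          # standardize as in the text

T_prev = X @ rng.standard_normal((p, 2))   # two earlier components T_[j-1]
QT = (np.eye(n) - projector(T_prev)) @ X   # deflated X matrix
a = rng.standard_normal(p)                 # loadings of a new component t_j = X a
t = X @ a
s = QT @ a                                 # orthogonal residual of t_j

vexp = t @ X @ X.T @ t / (t @ t)       # variance explained by t_j
evexp = s @ X @ X.T @ s / (s @ s)      # extra variance explained by t_j
vexp_q = t @ QT @ QT.T @ t / (t @ t)   # variance of the deflated matrix explained
```

When `t` is orthogonal to `T_prev` the residual `s` coincides with `t` and the three quantities agree, matching the equality case of the proposition.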
The extra variance explained measures the norm of the projection of $\bQTj{T}$ onto a component in the span of $$ \mathscr{C}(\dQTj{T})\subseteq \mathscr{C}(\bQTj{T}). $$ Instead, $\mathrm{vexp}_Q$ measures the norm of the projection onto a component in the span of $$ \mathscr{C}(\dXj) = \mathscr{C}(\dQTj{T} + \Pi_{T_{[j-1]}}\dXj)\nsubseteq \mathscr{C}(\dQTj{T}), $$ where $\mathscr{C}({\mathbf{A}})$ denotes the column space of the matrix ${\mathbf{A}}$. This leads to the simple interpretation of the LS SPCA solutions as the first PCs of two different projections of the ${\mathbf{X}}$ matrix, as shown in the next theorem, the proof of which is in the Appendix. \begin{theorem}\label{th:spcaisproj} Let $\dXj$ be a block of linearly independent variables. Then, \begin{enumerate} \item[(i)] The orthogonal LS SPCA components, ${\mathbf{y}}_j = \dXj\dbj$, are the first PCs of the matrices $ ( \Pi_{\dXj} - \Pi_{\hat{Y}_\dXj} ) {\mathbf{X}}$, where $$ \hat{Y}_\dXj = \Pi_{\dXj}{\mathbf{Y}}_{[j-1]}. $$ \item[(ii)] The nonorthogonal LS SPCA components, ${\mathbf{z}_j} = \dXj\ddj$, are the first PCs of the matrices $$ \hQTj{Z} = \Pi_{\dXj}\bQTj{Z} = \Pi_{\dXj} ({\mathbf{I}} - \Pi_{{\mathbf{Z}}_{[j-1]}} ){\mathbf{X}}. $$ \end{enumerate} \end{theorem} \subsection{Conventional sparse principal components} Conventional SPCA methods compute the sparse components as the first PCs of blocks of variables deflated in different ways~\citep{mog}. These solutions are derived with different motivations, either from a constrained LS approach (see, among others, \citep{zou, she}) or by directly adding sparsifying penalties to Hotelling's formulation of PCA in Eq.~(\ref{eq:pcahot}); see \citep{mer} for a discussion.
Hence, the SPCs are the PCs of the (possibly deflated) blocks of variables and the loadings are the solution to \begin{align*} \max\limits_{{\mathbf{a}_j}} {\mathbf{a}\trasps{j}}\mathbf{\cal{Q}}\trasps{j}\mathbf{\cal{Q}}_j{\mathbf{a}_j} \quad \Leftrightarrow \quad \max\limits_{\daj} \dajT\dQcjT\dQcj\daj, \quad \mathrm{card}({\mathbf{a}_j}) = c_j, \quad {\mathbf{a}\trasps{j}}{\mathbf{a}_j}=1, \quad \dajT\daj=1, \end{align*} where $\mathbf{\cal{Q}}_j$ denotes the ${\mathbf{X}}$ matrix deflated using one of the different existing methods; for a review, see \citep{mac}. In conventional SPCA the norm of the components is considered equivalent to the variance of ${\mathbf{X}}$ that they explain. It is easy to show that this assumption is not true \citep{mer}, thus these components do not maximize the variance explained. Furthermore, the blocks selected will contain highly correlated variables because the more correlated the variables in the block are, the larger the first eigenvalue of their covariance (or correlation) matrix. Hence, conventional SPCs will have larger cardinality and explain less variance than LS SPCs. \section{Projection sparse principal components}\label{sec:pspca} The idea underpinning PSPCA is to iteratively project the (full cardinality) first principal components of the deflated matrices onto blocks of variables $\dXj$. These projections, to which we refer as projection sparse components, are SPCs in themselves and the variance of the PCs that they explain is a lower bound for the extra variance of ${\mathbf{X}}$ explained by an LS SPCA component, as we prove next.
Let $\bQTj{{\widehat{\mathbf{R}}}}$ denote the ${\mathbf{X}}$ matrix deflated of the first $j-1$ projection SPCs, $\widehat{\mathbf{r}}_{i_1}$ with $i \in \{ 1,\ldots, j-1\}$, and let $$ \bQTjT{{\widehat{\mathbf{R}}}}\bQTj{{\widehat{\mathbf{R}}}} = {\mathbf{W}}_j{\mathbf{M}_{j}}{\mathbf{W}}\trasps{j}, $$ with ${\mathbf{M}_{j}} = \mathrm{diag} (\mu_{j_1} \geq\cdots\geq \mu_{j_p})$, be the eigendecomposition of its covariance matrix. Then the PCs of $\bQTj{{\widehat{\mathbf{R}}}}$ are $$ {\mathbf{R}}_j = \bQTj{{\widehat{\mathbf{R}}}}{\mathbf{W}}_j. $$ Since the PCs, ${\mathbf{r}}_{j_i}$, are orthogonal to the previously computed components, it follows that $\mathrm{vexp}({\mathbf{r}_{j_1}}) = \mathrm{evexp}({\mathbf{r}_{j_1}}) = {\mathbf{r}\trasps{j_1}}{\mathbf{r}_{j_1}}= \mu_{j_1}$. Hence, $\mathrm{vexp}({\mathbf{r}_{j_1}})$ is an upper bound for the extra variance explained by any component, ${\mathbf{t}}_j = {\mathbf{X}}\mathbf{a}_j$, added to the model, i.e., $\mathrm{evexp}({\mathbf{t}}_j)\leq \mu_{j_1}$. Assume that the variables in a block $\dXj$ are linearly independent and explain a proportion of the variance of ${\mathbf{r}_{j_1}}$ not less than $\alpha \in (0,1)$, i.e., ${\widehat{\mathbf{r}}_{j_1}} = \Pi_{\dXj}{\mathbf{r}_{j_1}}$ is such that \begin{equation}\label{eq:rsqu} {{\widehat{\mathbf{r}}\trasps{{j_1}}}{\widehat{\mathbf{r}}_{j_1}}}/{{\mathbf{r}\trasps{j_1}}{\mathbf{r}_{j_1}}} \geq \alpha \; \text{ or, equivalently, } \; {\widehat{\mathbf{r}}\trasps{{j_1}}}{\widehat{\mathbf{r}}_{j_1}} \geq \alpha\mu_{j_1}. \end{equation} Since $\mathscr{C}(\bQTj{{\widehat{\mathbf{R}}}})\subseteq \mathscr{C}({\mathbf{X}})$, a subset of variables $\dXj$ satisfying (\ref{eq:rsqu}) can be found for any $\alpha\in[0,1]$. The projection ${\widehat{\mathbf{r}}_{j_1}}$ is an SPC defined by ${\widehat{\mathbf{r}}_{j_1}} = \dXj{\widehat{\mathbf{w}}_{j_1}}$, with loadings \begin{equation}\label{eq:pspcaloads} {\widehat{\mathbf{w}}_{j_1}} = (\dXjT\dXj)^{-1}\dXjT{\mathbf{r}_{j_1}}.
\end{equation} A lower bound for the extra variance explained by ${\widehat{\mathbf{r}}_{j_1}}$ is given in the following theorem. \begin{theorem}\label{th:prspca} Let ${\widehat{\mathbf{r}}_{j_1}}$ be the projection of the first PC of $\bQTj{{\widehat{\mathbf{R}}}}$ on a block of variables $\dXj$ such that ${\widehat{\mathbf{r}}\trasps{{j_1}}}{\widehat{\mathbf{r}}_{j_1}}\geq \alpha\mu_{j_1}$. Then, $\mathrm{evexp}({\widehat{\mathbf{r}}_{j_1}}) \geq \alpha\mu_{j_1}$. \end{theorem} \noindent \textbf{Proof}. By substituting the eigendecomposition $\bQTj{{\widehat{\mathbf{R}}}}\bQTjT{{\widehat{\mathbf{R}}}} = {\mathbf{R}_j}{\mathbf{R}}\trasps{j}$ into Eq.~(\ref{eq:vexpq}), it is easy to verify that \begin{equation*}\label{eq:evexprhat} \mathrm{evexp}({\widehat{\mathbf{r}}_{j_1}}) \geq \mathrm{vexp}_Q({\widehat{\mathbf{r}}_{j_1}}) = {\widehat{\mathbf{r}}\trasps{{j_1}}}{\widehat{\mathbf{r}}_{j_1}} + \sum_{i>1} {({\widehat{\mathbf{r}}\trasps{{j_1}}}{\mathbf{r}_{j_i}})^2}/{{\widehat{\mathbf{r}}\trasps{{j_1}}}{\widehat{\mathbf{r}}_{j_1}}} \geq {\widehat{\mathbf{r}}\trasps{{j_1}}}{\widehat{\mathbf{r}}_{j_1}} \geq \alpha \mu_{j_1}, \end{equation*} because of Eqs.~\eqref{eq:ineqvexp} and \eqref{eq:rsqu}. $\Box$ The intricacy of the iterated projections renders the comparison between SPCs computed with different methods difficult. In the following theorem we show that ${\widehat{\mathbf{r}}\trasps{{j_1}}}{\widehat{\mathbf{r}}_{j_1}}$ is a lower bound for the extra variances explained by the LS SPCs. \begin{theorem}\label{th:ineqvexp} Let $\dXj$ be a block of linearly independent variables and ${\widehat{\mathbf{r}}\trasps{{j_1}}}{\widehat{\mathbf{r}}_{j_1}}\geq \alpha\mu_{j_1}$. Also let ${\mathbf{z}}_j$ and ${\mathbf{y}}_j$ be the correlated and uncorrelated SPCA components, respectively, and assume that, when $j>1$, all components have been computed with respect to the same set of previous components ${\mathbf{T}}_{[j-1]}$. 
Then, the following properties hold: \begin{subequations}\label{eq:th3} \begin{equation}\label{th3:i} \alpha \mu_{j_1}\leq \mathrm{vexp}_Q({\widehat{\mathbf{r}}_{j_1}}) \leq \mathrm{evexp}({\widehat{\mathbf{r}}_{j_1}})\leq \mathrm{evexp}({\mathbf{y}_j})\leq \mu_{j_1}, \end{equation} \begin{equation}\label{th3:ii} \alpha \mu_{j_1}\leq \mathrm{vexp}_Q({\widehat{\mathbf{r}}_{j_1}}) \leq \mathrm{vexp}_Q({\mathbf{z}_j}) \leq \mathrm{evexp}({\mathbf{z}_j})\leq \mathrm{evexp}({\mathbf{y}_j})\leq \mu_{j_1}. \end{equation} \end{subequations} \end{theorem} \noindent \textbf{Proof}. By definition, ${\mathbf{y}_j}$ is the linear combination of the variables in $\dXj$ that explains the most possible extra variance of ${\mathbf{X}}$, and inequality (\ref{th3:i}) follows from Theorem~\ref{th:prspca}. By substituting the PCA decomposition into Eq.~\eqref{eq:vexpq}, it can be verified that \begin{equation}\label{eq:vlspr} \mathrm{vexp}_Q({\mathbf{z}_j}) = \max\limits_{{\mathbf{t}}_j=\dXj\daj}\sum_{i = 1}^p {({\mathbf{t}}\trasps{j}{\mathbf{r}}_{j_i})^2}/{{\mathbf{t}}\trasps{j}{\mathbf{t}}_j} \geq \sum_{i = 1}^p {({\widehat{\mathbf{r}}\trasps{{j_1}}}{\mathbf{r}}_{j_i})^2}/{{\widehat{\mathbf{r}}\trasps{{j_1}}}{\widehat{\mathbf{r}}_{j_1}}} = \mathrm{vexp}_Q({\widehat{\mathbf{r}}_{j_1}}) \geq \alpha\mu_{j_1}. \end{equation} This, together with the optimality of $\mathrm{evexp}({\mathbf{y}_j})$, proves inequality (\ref{th3:ii}). When considering the first components, the inequalities reduce to $\alpha \mu_{1}\leq \mathrm{evexp}(\widehat{\mathbf{r}}_{1_1}) \leq \mathrm{evexp}({\mathbf{z}}_1)= \mathrm{evexp}({\mathbf{y}}_1)\leq \mu_1$, because for the first components $\mathrm{vexp}({\mathbf{t}}_1) = \mathrm{vexp}_Q({\mathbf{t}}_1) = \mathrm{evexp}({\mathbf{t}}_1)$. $\Box$ In principle it cannot be excluded that $\mathrm{evexp}({\widehat{\mathbf{r}}_{j_1}}) > \mathrm{evexp}({\mathbf{z}_j})$.
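For the first component, where $\mathrm{vexp} = \mathrm{vexp}_Q = \mathrm{evexp}$, the inequality chain of Theorem~\ref{th:ineqvexp} can be checked numerically. The sketch below (numpy/scipy, with illustrative names, not the paper's implementation) projects the first PC on an arbitrary block and compares it with the LS SPCA component computed on the same block:

```python
# Numerical check of alpha*mu_1 <= vexp(r_hat) <= vexp(y_1) <= mu_1 for j = 1.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
X -= X.mean(axis=0)                      # column-centred data matrix

# First PC scores r1 and top eigenvalue mu1 of X'X (eigh returns ascending order)
mu, W = np.linalg.eigh(X.T @ X)
mu1, w1 = mu[-1], W[:, -1]
r1 = X @ w1                              # ||r1||^2 == mu1

# Project r1 on an arbitrary block Xd (here: the first 4 columns)
Xd = X[:, :4]
rhat = Xd @ np.linalg.lstsq(Xd, r1, rcond=None)[0]
alpha = (rhat @ rhat) / mu1              # share of the PC's variance captured

def vexp(t):                             # variance of X explained by component t
    v = X.T @ t
    return (v @ v) / (t @ t)

# LS SPCA component on the same block: maximises vexp over t = Xd a, i.e. the
# top generalized eigenvector of (Xd' X X' Xd) a = (Xd' Xd) a * lambda
lam, A = eigh(Xd.T @ X @ X.T @ Xd, Xd.T @ Xd)
y1 = Xd @ A[:, -1]

assert alpha * mu1 <= vexp(rhat) + 1e-6 * mu1
assert vexp(rhat) <= vexp(y1) + 1e-6 * mu1
assert vexp(y1) <= mu1 + 1e-6 * mu1
```

Here `alpha` is the proportion of the PC's variance actually captured by the block, so the assertions verify $\alpha\mu_1 \leq \mathrm{vexp}(\widehat{\mathbf{r}}_{1_1}) \leq \mathrm{vexp}({\mathbf{y}}_1) \leq \mu_1$ on simulated data.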
The question of how different ${\widehat{\mathbf{r}}_{j_1}}$ and ${\mathbf{z}_j}$ are when computed with respect to the same deflated matrix ${\mathbf{Q}_j}$ does not have a straightforward answer. We have that ${\mathbf{z}_j}$ is the linear combination of the variables in $\dXj$ which maximizes $\mathrm{vexp}_Q$, while ${\widehat{\mathbf{r}}_{j_1}}$ is the component most correlated with the first PC ${\mathbf{r}_{j_1}}$. Since ${\mathbf{t}}_j= \dXj\da_j$ and $ {({\mathbf{t}}\trasps{j}{\mathbf{r}}_{j_i})^2}/{{\mathbf{t}}\trasps{j}{\mathbf{t}}_j} = \mathrm{corr}^2({\mathbf{t}}_j, {\mathbf{r}}_{j_i})\mu_{j_i}$, it follows from Eq.~(\ref{eq:vlspr}) that \begin{align*} \mathrm{vexp}_Q({\mathbf{z}_j})-\mathrm{vexp}_Q({\widehat{\mathbf{r}}_{j_1}}) &= \sum_{i=1}^p (\beta_{j_i} - \alpha_{j_i})\mu_{j_i} = \sum_{i>1} (\beta_{j_i} - \alpha_{j_i})\mu_{j_i} - (\alpha_{j_1} - \beta_{j_1})\mu_{j_1}\geq 0, \end{align*} where $\beta_{j_i} = \mathrm{corr}^2({\mathbf{z}}_j, {\mathbf{r}}_{j_i})$ and $\alpha_{j_i} = \mathrm{corr}^2({\widehat{\mathbf{r}}_{j_1}}, {\mathbf{r}}_{j_i})$, and necessarily $\alpha_{j_1} \geq \beta_{j_1}$. The following lemma, which is proved in the Appendix, is useful to characterize the difference in the variance of ${\mathbf{Q}}_j$ explained by the two components. \setcounter{lemma}{0} \begin{lemma}\label{le:corr} Let $t$ and $x$ be two random variables and ${\mathbf{y}} = ( y_1, \ldots, y_d)\trasp$ a set of $d$ random variables uncorrelated with $x$. If $\mathrm{corr}^2(t, x) =\alpha$, then for all $i \in \{ 1, \ldots, d\}$, $\mathrm{corr}^2(t, y_i) \leq 1 - \alpha$. If the $y_i$ variables are mutually uncorrelated, it follows that \begin{equation*}\label{eq:lemma1ii} \sum_{i=1}^d \mathrm{corr}^2(t, y_i) \leq 1 - \alpha.
\end{equation*} \end{lemma} Since the PCs ${\mathbf{r}_{j_i}}$ are mutually uncorrelated, assuming that $\beta_{j_1} > \beta_{j_2}$, by the inequalities in Lemma \ref{le:corr}, $$ 0\leq \max \{\mathrm{vexp}_Q({\mathbf{z}_j}) - \mathrm{vexp}_Q({\widehat{\mathbf{r}}_{j_1}})\} \leq \beta_{j_1}\mu_{{j_1}} + (1 - \beta_{{j_1}})\mu_\jind{2} - \alpha_{{j_1}}\mu_{{j_1}} = (1-\beta_{{j_1}})\mu_\jind{2} -(\alpha_{{j_1}} - \beta_{{j_1}})\mu_{{j_1}}. $$ Therefore, the squared correlations $\alpha_{{j_1}}$ and $\beta_{{j_1}}$ are such that $$ \alpha_{{j_1}} - {\mu_\jind{2}(1 - \alpha_{{j_1}})}/{(\mu_{{j_1}} - \mu_\jind{2})} \leq \beta_{{j_1}} \leq \alpha_{{j_1}}. $$ Hence, when $\alpha_{{j_1}}$ is large and the eigenvalues $\mu_{{j_1}}$ and $\mu_\jind{2}$ are well separated, ${\widehat{\mathbf{r}}_{j_1}}$ and ${\mathbf{z}_j}$ will be very close because they have similar correlation with ${\mathbf{r}_{j_1}}$. In our studies we found that the extra variance explained by the PSPCA components and the LS SPCA components is very similar. This is to be expected because the PSPCA components have the largest possible correlation with the first PCs ${\mathbf{r}_{j_1}}$, which are the components that explain the most extra variance. It should be noted that the inequalities in Theorem~\ref{th:ineqvexp} apply when the different components are computed after the same set of previous components. This is hardly possible in practice because the solutions computed with the various methods are (slightly) different, and different optimality paths are determined by greedy algorithms. \section{Using projection SPCA to select the variables for the sparse components}\label{sec:selectblocks} Finding an efficient algorithm for the selection of the variables forming the SPCs is fundamental for the scalability of an SPCA algorithm. 
Greedy approaches are required because searching the $2^{(p-1)d}$ possible subsets of indices for $d$ SPCs of unknown cardinality is a non-convex NP-hard, hence computationally intractable, problem. A first simplification, adopted by most SPCA methods, is to select the blocks of variables sequentially for each component. Also in this case, each problem is NP-hard \citep{mog}. The several greedy solutions proposed for conventional SPCA cannot be used for the LS SPCA problem because they seek subsets of variables that are highly correlated. \citet{mer} suggested branch-and-bound and backward selection algorithms for LS SPCA. Neither of these is efficient, because they are top-down and require the computation of SPCs of large cardinality to evaluate the variance explained. PSPCA provides a simple yet effective supervisor for the selection of subsets of variables for the computation of LS SPCs. In fact, by Theorem~\ref{th:ineqvexp}, it is enough to select a block of variables that explains a given percentage of the variance of the current first PC to be guaranteed that an LS SPCA component computed on that block will explain at least that percentage of the extra variance explained by the PC itself. Hence, by using PSPCA the LS SPCA variable selection problem is transformed into a more economical regression variable selection problem, thus eliminating the need to compute costly SPCs in order to evaluate the objective function (the variance explained by the SPCs). Regression model selection has been extensively researched and several approaches for this task have been proposed. Any of these approaches can be used to select the blocks of variables, including Lasso and regularized lasso, if preferred. The regression approach also has the advantage of being familiar to most data analysts, and it provides the projection SPCs as a by-product.
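A minimal sketch of this supervised selection for a single component is given below (numpy; the simple correlation-based scoring rule and the function name are illustrative, not the pivoted-QR implementation described later). Variables are added greedily until the regression of the first PC on the block reaches the required $R^2$:

```python
# Greedy forward selection supervised by the first PC: stop when the block
# explains at least a share alpha of the PC's variance.  Illustrative sketch.
import numpy as np

def select_block(X, r, alpha=0.95):
    """Return indices ind and fitted values with ||fit||^2 >= alpha * ||r||^2."""
    ind = []
    target = alpha * (r @ r)
    fit = np.zeros_like(r)
    while fit @ fit < target:
        resid = r - fit
        # score each candidate by its squared covariance with the residual,
        # normalised by the variable's norm (simpler than exact stepwise R^2)
        scores = (X.T @ resid) ** 2 / (X * X).sum(axis=0)
        scores[ind] = -np.inf                      # exclude selected variables
        ind.append(int(np.argmax(scores)))
        Xd = X[:, ind]
        fit = Xd @ np.linalg.lstsq(Xd, r, rcond=None)[0]
    return ind, fit

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
X -= X.mean(axis=0)
r1 = X @ np.linalg.eigh(X.T @ X)[1][:, -1]         # first PC scores
ind, rhat = select_block(X, r1, alpha=0.95)
assert (rhat @ rhat) >= 0.95 * (r1 @ r1) - 1e-9
```

The fitted values `rhat` are the projection SPC; by Theorem~\ref{th:prspca}, any LS SPCA component computed on the selected block explains at least the same extra variance.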
\subsection{Comparison with conventional SPCA methods on ``fat'' matrices}\label{sec:convspca} A particular concern with conventional SPCA methods is that their objective function increases even when perfectly correlated variables are added to the model. Therefore, another important advantage of using a regression model selection method is that the blocks of variables can be chosen to have full column rank and not to contain highly correlated variables. This property is important because parsimonious approximations of the PCs should not contain redundant variables, and it is a further reason for preferring LS SPCA over conventional SPCA methods, which, conversely, generate solutions from blocks of highly correlated variables. Another drawback of conventional SPCA methods, connected with the concern mentioned above, is that they cannot identify the sparsest representation of the PCs when applied to column rank deficient matrices. ``Fat'' matrices, i.e., datasets made up of more features than objects, are very common in the analysis of gene expression microarrays or near-infrared spectroscopy data, for example. In this case, the features are linearly dependent and the PCs can be expressed as linear combinations of as many variables as the rank of the matrix, as stated in the following lemma; see the Appendix for a proof. \begin{lemma} When $\mathrm{rank}({\mathbf{X}}) = r < p$, the principal components can be expressed as sparse components of cardinality $r$ with loadings that have norm larger than $1$. \end{lemma} When applied to column rank deficient matrices, conventional SPCA methods compute components of cardinality larger than the rank of the matrix because the only components with unit norm loadings that are equal to the PCs are the full cardinality PCs themselves. This fact is well documented by several examples available in the SPCA literature; see, e.g., \citep{wan,zou}.
This means that the model is overfitted by the inclusion of redundant, perfectly correlated variables. When $\mathrm{rank}({\mathbf{X}}) = r$, the LS SPCA components computed on a block of $r$ independent variables will be equal to the full cardinality PC, because of Theorem~\ref{th:spcaisproj}. The same is true for PSPCA, when $r$ variables are enough to explain 100\% of the variance of the PCs. In fact, LS SPCA and PSPCA components of cardinality larger than the rank of the data matrix cannot be computed because in that case a matrix $\dXjT\dXj$ would be singular. The following example shows the overfitting resulting from applying conventional SPCA to a matrix with perfectly correlated variables. Consider a matrix with 100 observations on five perfectly collinear variables defined, for all $i \in \{ 1, \ldots, 100\}$ and $j \in \{ 1, \ldots, 5\}$, by \begin{equation*} x_{ij} = (-1)^{i}\sqrt{j}. \end{equation*} The covariance matrix of these variables, ${\mathbf{S}} = {\mathbf{X}}\trasp{\mathbf{X}}$, has rank $1$, and the only nonzero eigenvalue is equal to 1500. The first PC explains all the variance and can be written in terms of any of the variables as ${\mathbf{x}}_j\sqrt{1500/s_{jj}}$ with $j \in \{ 1, \ldots, 5\}$, i.e., as a cardinality $1$ component with loading larger than $1$. The loadings, norms and relative norms (norm of a component/total variance) of the conventional SPCA optimal solutions for the complete set of conventional SPCs of increasing cardinality are shown in Table~\ref{tab:expl}. We see that $x_5$ ``explains'' just 33\% of the norm of the PC, $x_4$ and $x_5$ together only 60\% (with a net increase equal to 27\%), and so on. These results suggest that the variables with larger variances explain more variance than other collinear variables and that only the full cardinality PC explains the maximum variance. This is true because the norm of a component with unit norm loadings is bounded by $\mathrm{tr}(\dXjT\dXj)$. \begin{table}[t!]
\centering \caption{Loadings and norms of the conventional SPCA solutions for the covariance matrix.} \begin{tabular}{lrrrrr} & \multicolumn{5}{c}{Cardinality} \\ \cmidrule{2-6} Variable & 1 & 2 & 3 & 4 & 5 \\ \midrule $x_1$ & 0 & 0 & 0 & 0 & 0.26 \\ $x_2$ & 0 & 0 & 0 & 0.38 & 0.37 \\ $x_3$ & 0 & 0 & 0.50 & 0.46 & 0.45 \\ $x_4$ & 0 & 0.67 & 0.58 & 0.53 & 0.52 \\ $x_5$ & 1 & 0.75 & 0.65 & 0.60 & 0.58 \\ \midrule Norm & 500 & 900 & 1200 & 1400 & 1500 \\ Rel norm & 0.33 & 0.60 & 0.80 & 0.93 & 1.0 \\ \bottomrule \end{tabular} \label{tab:expl} \end{table} The results of applying conventional SPCA to the correlation matrix of the variables in the above example are even more revealing. The correlation matrix is a matrix of $1$s with only one nonzero eigenvalue equal to 5. The conventional SPCA optimal solutions are shown in Table~\ref{tab:expl2}. The results are given irrespective of which variables are included, because the standardized variables are identical. The cardinality five component is the first PC, which explains all the variability. These results lead to the absurd conclusion that a linear combination of identical variables explains more variance than just one of them. \begin{table}[t!] \centering \caption{Loadings and norms of the conventional SPCA solutions for the correlation matrix.} \begin{tabular}{lrrrrr} & \multicolumn{5}{c}{Cardinality} \\ \cmidrule{2-6} & 1 & 2 & 3 & 4 & 5 \\ \midrule Norm & 1 & 2 & 3 & 4 & 5 \\ Rel norm & 0.2 & 0.4 & 0.6 & 0.8 & 1.0 \\ \bottomrule \end{tabular} \label{tab:expl2} \end{table} Applying LS SPCA to this dataset would yield a single cardinality one component both for the covariance and the correlation matrices, as expected.
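The numbers in this example are easy to reproduce; the following sketch (numpy, illustrative names) checks the rank, the nonzero eigenvalue, the cardinality-$k$ norms of Table~\ref{tab:expl} and the claim that a single variable already explains all the variance:

```python
# Reproducing the collinear example x_ij = (-1)^i sqrt(j), i <= 100, j <= 5.
import numpy as np

i = np.arange(1, 101)[:, None]           # 100 observations
j = np.arange(1, 6)[None, :]             # five perfectly collinear variables
X = (-1.0) ** i * np.sqrt(j)
S = X.T @ X                              # covariance matrix, s_jj = 100 j

# rank 1, single nonzero eigenvalue equal to tr(S) = 1500
evals = np.linalg.eigvalsh(S)
assert np.isclose(evals[-1], 1500) and np.allclose(evals[:-1], 0, atol=1e-6)

# conventional SPCA: the best unit-norm-loadings component of cardinality k
# uses the k largest-variance variables, and its norm is the block's top
# eigenvalue, i.e. the block trace (500, 900, 1200, 1400, 1500)
norms = [np.linalg.eigvalsh(S[-k:, -k:])[-1] for k in range(1, 6)]
assert np.allclose(norms, [500, 900, 1200, 1400, 1500])

# LS SPCA: any single variable explains all the variance, tr(S) = 1500
x5 = X[:, 4]
v = X.T @ x5
vexp5 = (v @ v) / (x5 @ x5)
assert np.isclose(vexp5, 1500)
```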
This can be seen by considering that the variance explained by any variable $x_j$ is equal to $$ \mathrm{vexp}({\mathbf{x}}_j) = {{\mathbf{x}}\trasps{j}{\mathbf{X}}{\mathbf{X}}\trasp{\mathbf{x}}_j}/{{\mathbf{x}}\trasps{j}{\mathbf{x}}_j} = {\sum_{i=1}^5 s_{ij}^2}/{s_{jj}} = \sum_{i=1}^5 s_{ii} = \mathrm{tr}({\mathbf{S}}), $$ because $\mathrm{corr}^2(x_i, x_j) = s_{ij}^2/(s_{ii}s_{jj}) = 1$. Which variable is chosen depends on the algorithm used. The same is true for PSPCA, because any variable explains 100\% of the variance of the first PC. An analogous example for blocks of collinear variables can be derived using the artificial data example introduced in \cite{zou}, as illustrated in \cite{mer}, where other unconvincing aspects of conventional SPCA are discussed. \subsection{Computational considerations}\label{sec:compdet} The basic algorithm for computing LS SPCA or PSPCA SPCs is outlined in Algorithm~\ref{algo:fspca}. The algorithm is straightforward and simple to implement. The computation of the PSPCA loadings (line~\ref{algo:projection}) can be simplified as follows. Let us assume, without loss of generality, that the first $c_j$ variables in ${\mathbf{X}}$ form the block $\dXj$ and that the block $\cX_j$ contains the remaining variables, so that ${\mathbf{X}} = (\dXj, \cX_j)$. Writing, correspondingly, ${\mathbf{w}}_{j_1} = (\dwjuT, \cwjuT)\trasp$, we have that $$ {\mathbf{X}\trasp}{\mathbf{r}_{j_1}} = \begin{pmatrix} \dXjT{\mathbf{r}_{j_1}}\\ \cXjT{\mathbf{r}_{j_1}} \end{pmatrix} = \begin{pmatrix} \dwju\mu_{j_1}\\ \cwju\mu_{j_1} \end{pmatrix}, $$ because ${\mathbf{X}\trasp}{\mathbf{r}_{j_1}} = \bQTjT{{\widehat{\mathbf{R}}}}\bQTj{{\widehat{\mathbf{R}}}}{\mathbf{w}}_{j_1} = {\mathbf{w}}_{j_1}\mu_{j_1}$. Substituting this expression into Eq.~(\ref{eq:pspcaloads}), we can write the PSPCA loadings ${\widehat{\mathbf{w}}_{j_1}}$ as $$ {\widehat{\mathbf{w}}_{j_1}} = (\dXjT\dXj)^{-1}\dXjT{\mathbf{r}_{j_1}} = (\dXjT\dXj)^{-1}\dwju\mu_{j_1}.
$$ The CSPCA loadings in Eq.~(\ref{eq:uspca}) can be computed as the generalized eigenvectors satisfying $$ {\mathbf{C}}_j{\mathbf{J}}\trasps{j}{\mathbf{S}}{\mathbf{S}}{\mathbf{J}}_j {\mathbf{C}}\trasps{j}\dbj = \dSj\dbj\xi_{j}, $$ as shown in Statement 1 in the Appendix. The algorithm requires careful implementation because, in its simplest form, it is highly computationally demanding. The most demanding operations are the computation of the PCs of the ${\mathbf{Q}}_j$ matrices, the extraction of submatrices (which is a costly operation when the number of variables is large), the deflation of the ${\mathbf{X}}$ matrix, the multiplication $\bQTj{Z}\bQTjT{Z}$ (for CSPCA) and the computation of generalized eigenvalues, when the cardinality is large. The algorithm can be sped up by computing the first eigenvector of the deflated covariance matrix ${\mathbf{Q}}\trasps{j}{\mathbf{Q}}_j$ and, if necessary, the generalized eigenvectors for the sparse loadings, with the iterative power method. The simple power method algorithm is not necessarily very efficient, especially when the first two eigenvalues are not well separated, and other more efficient but more complex algorithms could be used, e.g., Lanczos iterations \citep{bjo} or LOBPCG \citep{kny}.
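The power-method speed-up can be sketched as follows (numpy; a bare-bones version of the first-PC step of Algorithm~\ref{algo:fspca}, operating on the data matrix so that ${\mathbf{Q}}_j{\mathbf{Q}}\trasps{j}$ is never formed explicitly; names and tolerances are illustrative):

```python
# Power iterations for the leading PC of a (deflated) data matrix Q.
import numpy as np

def first_pc(Q, tol=1e-12, max_iter=10_000):
    """Leading eigenvalue mu of QQ' and the score vector r with ||r||^2 = mu."""
    rng = np.random.default_rng(1)
    u = rng.standard_normal(Q.shape[0])
    u /= np.linalg.norm(u)
    mu = 0.0
    for _ in range(max_iter):
        v = Q @ (Q.T @ u)                # multiply by QQ' without forming it
        mu_new = np.linalg.norm(v)       # converges to the top eigenvalue
        u = v / mu_new
        if abs(mu_new - mu) <= tol * mu_new:
            break
        mu = mu_new
    return mu_new, u * np.sqrt(mu_new)

rng = np.random.default_rng(2)
X = rng.standard_normal((60, 8))
mu1, r1 = first_pc(X)
assert np.isclose(mu1, np.linalg.eigvalsh(X.T @ X)[-1], rtol=1e-8)
assert np.isclose(r1 @ r1, mu1)
```

Each iteration costs two matrix-vector products, $O(np)$, which is what makes the method attractive for the large $p$ considered here.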
\begin{algorithm}[H]\caption{Projection LS SPCA} \begin{algorithmic}[1] \Procedure {plsspca}{${\mathbf{X}},\, \alpha,\, computePSPCA,\, computeCSPCA, stopRuleCompute$} \State \textbf{initialize} \State {\hspace{1em}${\mathbf{Q}}_1 \gets {\mathbf{X}}$} \State{\hspace{1em}$j \gets 0$} \State{\hspace{1em}stopCompute $\gets$ FALSE} \State \textbf{end initialize} \While{(stopCompute = FALSE)}\Comment{\textbf{start components computation}} \State{$j \gets j + 1$} \State{${\mathbf{r}_{j_1}} = {\mathbf{Q}_j}{\mathbf{w}_{j_1}} :\, {\mathbf{Q}}_j{\mathbf{Q}}\trasps{j}{\mathbf{r}_{j_1}} = {\mathbf{r}_{j_1}}\mu_j$} \Comment{compute first PC of ${\mathbf{Q}_j}$}\label{algo:pwm} \State{$ind_j \gets \{i_1,\ldots, i_{c_j}\}:\, ||{\widehat{\mathbf{r}}_j}||^2 \geq \alpha\mu_j$} \Comment{variable selection output}\label{algo:fwdselect} \State{$\dXj\leftarrow {\mathbf{X}}[,\, ind_j]$}\Comment{$\dX_j$ are columns of ${\mathbf{X}}$ in $ind_j$} \If{(computePSPCA)}\Comment{PSPCA} \State{$\daj \gets (\dX\traspd{j}\dX_j)\matinv\dwju$}\label{algo:projection} \Comment{$\dwju$ are the elements of ${\mathbf{w}_{j_1}}$ in $ind_j$} \ElsIf{(computeCSPCA)}\Comment{Correlated LS SPCA} \State{$\daj: \dXjT{\mathbf{Q}}_j{\mathbf{Q}\trasps{j}}\dXj\da_j = \dXjT\dXj\daj\gamma_j$} \Else\Comment{Uncorrelated LS SPCA} \State{$\daj: {\mathbf{C}}_j\dXjT{\mathbf{X}_j}{\mathbf{X}\trasps{j}}\dXj\daj = \dXjT\dXj\daj\xi_j$} \EndIf \State{$ {\mathbf{t}_j} \gets \dX_j\daj$} \Comment{$j$-th sparse component} \If{(stopRuleCompute = FALSE)}\quad\Comment{stop rule on total variance explained or number of components} \State{$ {\mathbf{Q}}_{j+1} \gets {\mathbf{Q}_j} - \frac{{\mathbf{t}_j}{\mathbf{t}\trasps{j}}}{{\mathbf{t}\trasps{j}}{\mathbf{t}_j}}{\mathbf{Q}_j}$} \Comment{deflate ${\mathbf{X}}$ of current component}\label{algo:deflX} \State{$\mathrm{cvexp}(j) \gets \mathrm{tr}({\mathbf{X}}\trasp{\mathbf{X}}) - \mathrm{tr}({\mathbf{Q}}\trasps{j+1}{\mathbf{Q}}_{j+1})$} \Comment{cumulative \rm{vexp}}\label{algo:cvexp} \Else 
\State stopCompute $\gets$ TRUE \Comment{\textbf{terminate components computation}} \EndIf \EndWhile \EndProcedure \end{algorithmic}\label{algo:fspca} \end{algorithm} In our implementation we used the simple version of the power method, which has complexity growth rate of about $O(p^2)$, while direct algorithms that compute the whole set of eigenvectors accurately are about $O(p^3)$. The power method is used in extremely high dimensional problems (for example, Google page ranking \citep{bry}) and in various algorithms for conventional SPCA, including \citep{jou, wan}. The computational complexity of the algorithm depends also on which variable selection algorithm is used (line \ref{algo:fwdselect}). For our implementation we chose the fast greedy forward selection in which the variables that explain the most extra variance conditionally on the variables already in the model are selected until a given percentage of variance is explained. This method can be seen as a QR decomposition with supervised pivoting and can be implemented efficiently using updating formulas; see, e.g., Section~2.4.7 in~\citep {bjo}. Since the QR decomposition can be stopped after $c_j$ iterations, identifying a block will be an operation of order about $O(2c_jnp)$. When applied to fat matrices, the solutions can be computed more economically by using a ``reverse svd'' approach (see Section~12.1.4 in~\citep{bish}), which means computing the eigenvectors of ${\mathbf{X}}\trasp{\mathbf{X}}$ starting from the ${\mathbf{X}}{\mathbf{X}}\trasp$ matrix. The theoretical time complexity of the PLSSPCA algorithms cannot be computed exactly because the time taken to compute the eigenvectors with power iterations (if used) and to select the variables and extract submatrices depends on the implementation and the structure of the data. 
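The ``reverse svd'' device mentioned above can be sketched in a few lines (numpy; the dimensions are illustrative). The eigenvectors of the $p \times p$ matrix ${\mathbf{X}}\trasp{\mathbf{X}}$ are recovered from the much smaller $n \times n$ matrix ${\mathbf{X}}{\mathbf{X}}\trasp$:

```python
# Reverse svd for a fat matrix (p >> n): if XX'u = lam*u with ||u|| = 1, then
# w = X'u / sqrt(lam) is a unit eigenvector of X'X for the same eigenvalue.
import numpy as np

rng = np.random.default_rng(3)
n, p = 30, 500
X = rng.standard_normal((n, p))

lam, U = np.linalg.eigh(X @ X.T)            # n x n eigenproblem, ascending order
lam1, u1 = lam[-1], U[:, -1]                # leading eigenpair
w1 = X.T @ u1 / np.sqrt(lam1)               # unit eigenvector of the p x p X'X

assert np.isclose(np.linalg.norm(w1), 1.0)
assert np.allclose((X.T @ X) @ w1, lam1 * w1, atol=1e-8 * lam1)
```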
However, we expect the order of growth of the whole algorithm to be not higher than the complexity of computing the first PC of ${\mathbf{Q}_j}$, as it does not contain operations of higher complexity. Therefore, the computation of each vector of loadings should be about $O(p^3)$ when direct eigendecomposition of ${\mathbf{Q}}_j$ is used and roughly $O(p^2)$ when the power method is used. In the next section we analyze the run time empirically. \section{Numerical results}\label{sec:examples} In this section we report the results of running repetitions of USPCA, CSPCA and PSPCA on simulated and real datasets with numbers of variables ranging from 100 to over 16,000. To compute the components we applied forward selection, requiring that each sparse component explained at least 95\% of the corresponding PC's variance. Since the number of factors considered is large and displaying the results would require very large tables, we present the results mostly graphically, highlighting the main features. The computational times reported were measured for an implementation of the algorithm written in \textsf{C++} and embedded in \textsf{R} via the packages \texttt{Rcpp} and \texttt{RcppEigen}. The execution times were measured on a hepta-core Intel$^\circledR$\ Core(TM) i7-4770S CPU @ 3.10GHz under the Windows 7 operating system. It is well known that \textsf{R} is an inefficient language \citep{mor}, so the run times are slower than they would be if the programs had been written in a lower level language. \subsection{Simulations} \begin{figure} \caption{Median computational times (in milliseconds) for computing sparse components with underlying latent dimension of 100.} \label{fig:doetime} \end{figure} In order to assess the behavior of the different methods, USPCA, CSPCA and PSPCA, we simulated datasets according to an experimental design.
We considered three levels of latent dimension (number of latent variables), $d \in \{5, 50, 100\}$; three numbers of variables, $p \in \{100, 300, 500\}$; and three signal-to-noise ratios (snr), $s \in \{0.2, 1, 3\}$. The model we considered is ${\mathbf{X}}(d, p, s) = {\mathbf{T}}{\mathbf{P}}\trasp + \sqrt{s}\,{\mathbf{E}}$, where ${\mathbf{T}}$ are $d$ independent $\mathcal{N}(0, 1)$ latent variables, ${\mathbf{P}}$ is a $p \times d$ matrix with unit-norm rows and ${\mathbf{E}}$ is a matrix of $p$ independent $\mathcal{N}(0, 1)$ errors. Therefore, the theoretical correlation matrix of ${\mathbf{X}}$ is equal to $$ {\mathbf{S}} = \mathrm{corr}({\mathbf{X}}) = ({\mathbf{P}}{\mathbf{P}}\trasp + s{\mathbf{I}} )/(1+s). $$ When the snr, $s$, is small this correlation matrix is almost of rank $d$, while as $s$ increases the correlation matrix becomes closer to the identity matrix. The ${\mathbf{P}}$ matrices were created by generating the entries as independent $\mathcal{U}(-1, 1)$ variables and rescaling the rows. The error correlation matrix was generated as the correlation matrix of a sample of $4p$ pseudo-random realizations of a $\mathcal{N}(0,1)$ variable, without removing possible random correlations. For each combination of levels we ran 1000 repetitions, computing five components when the latent dimension was equal to five and 25 when it was larger, for each method. Variables were selected using forward selection with stopping criterion $\alpha = 0.95$. The PCs of the ${\mathbf{Q}}_j$ matrices were computed with the power method, while the USPCA and CSPCA sparse loadings were computed with a direct generalized eigendecomposition algorithm. In the following we highlight the main findings from these simulations. More details can be found in the Online Supplement to the paper. The times taken to compute the USPCA and CSPCA components are very similar and are indistinguishable on the plot.
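The data-generating model above can be sketched as follows (numpy; for simplicity the errors here are iid, without the generated error correlations used in the actual design):

```python
# Sketch of the simulation model X(d, p, s) = T P' + sqrt(s) E.
# Unlike the actual design, the errors E are simply iid N(0, 1).
import numpy as np

def simulate(n, p, d, s, rng):
    T = rng.standard_normal((n, d))                 # latent variables
    P = rng.uniform(-1, 1, size=(p, d))             # raw U(-1, 1) loadings
    P /= np.linalg.norm(P, axis=1, keepdims=True)   # rescale rows to unit norm
    E = rng.standard_normal((n, p))                 # noise matrix
    return T @ P.T + np.sqrt(s) * E

rng = np.random.default_rng(3)
X = simulate(n=500, p=100, d=5, s=0.2, rng=rng)
# each column has theoretical variance ||p_k||^2 + s = 1 + s
assert abs(X.var(axis=0, ddof=1).mean() - 1.2) < 0.15
```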
The higher efficiency of PSPCA shows when the number of variables or the snr grows, as can be appreciated by observing the median computational (CPU) times, shown in Figure~\ref{fig:doetime}. \begin{table}[b!] \centering \caption{Log-log regression of computational times on experimental factors.} \begin{tabular}{Clllll} & & & & \multicolumn{2}{c}{95\% Confidence Interval} \\ \cmidrule{5-6} Term & \multicolumn{1}{l}{Estimate} & \multicolumn{1}{l}{Standard Error} & \multicolumn{1}{l}{$p$-value} & \multicolumn{1}{l}{Low} & \multicolumn{1}{l}{High} \\ \midrule Intercept & \multicolumn{1}{r}{1.58} & \multicolumn{1}{r}{0.0050} & \multicolumn{1}{r}{0.0000} & \multicolumn{1}{r}{1.57} & \multicolumn{1}{r}{1.59} \\ nvar (p) & \multicolumn{1}{r}{2.21} & \multicolumn{1}{r}{0.0008} & \multicolumn{1}{r}{0.0000} & \multicolumn{1}{r}{2.21} & \multicolumn{1}{r}{2.21} \\ latDim (d) & \multicolumn{1}{r}{0.28} & \multicolumn{1}{r}{0.0018} & \multicolumn{1}{r}{0.0000} & \multicolumn{1}{r}{0.27} & \multicolumn{1}{r}{0.28} \\ snr (s) & \multicolumn{1}{r}{0.28} & \multicolumn{1}{r}{0.0005} & \multicolumn{1}{r}{0.0000} & \multicolumn{1}{r}{0.28} & \multicolumn{1}{r}{0.28} \\ Comp. No. 
(j) & \multicolumn{1}{r}{1.17} & \multicolumn{1}{r}{0.0030} & \multicolumn{1}{r}{0.0000} & \multicolumn{1}{r}{1.16} & \multicolumn{1}{r}{1.17} \\ CSPCA & \multicolumn{1}{r}{$-0.04$} & \multicolumn{1}{r}{0.0012} & \multicolumn{1}{r}{0.0000} & \multicolumn{1}{r}{$-0.05$} & \multicolumn{1}{r}{$-0.04$} \\ PSPCA & \multicolumn{1}{r}{$-0.17$} & \multicolumn{1}{r}{0.0012} & \multicolumn{1}{r}{0.0000} & \multicolumn{1}{r}{$-0.17$} & \multicolumn{1}{r}{$-0.17$} \\ \bottomrule & & & & & \\ \multicolumn{6}{l}{Residual standard error: 0.1446 on 80,993 degrees of freedom} \\ \multicolumn{6}{l}{Multiple R$^2$: 0.9946, Adjusted R$^2$: 0.9946 } \\ \multicolumn{6}{l}{F-statistic: 2471982 on 6 and 80,993 DF, $p$-value: 0} \\ \end{tabular} \label{tab:doeregtime} \end{table} We assumed a polynomial dependence of time on the parameters $d, p, s$ and on the order of the component computed, $j$. Hence, we estimated the polynomial terms by regressing the logarithm of time on the logarithms of these parameters, adding indicator variables for the method used. The results of the regression, shown in Table~\ref{tab:doeregtime}, confirm the conclusions given above. The fit is excellent, as indicated by the coefficient of determination $R^2 >0.99$, and the final time equation is $$ t(d, p, s, j, M) = e^{1.58} p^{2.21} d^{0.28} s^{0.28} j^{1.17} (0.96)^{I_{CSPCA}} ({0.84})^{I_{PSPCA}} \epsilon, $$ where $I_M$ denotes the indicator variable equal to $1$ when method $M$ is used and $0$ otherwise. The coefficients of these indicator variables measure the ratio of each method's computational time to the corresponding time taken by USPCA. This result confirms that, when the power method is used to compute the PCs, the complexity growth rate is about $O(p^{2.2})$. The time increases almost linearly with the number of components computed. PSPCA is slightly faster than the other methods. The components computed to explain 95\% of the variance explained by the PCs have relatively low cardinality.
As expected, the cardinality of the components increases with the number of variables in the set, the latent dimension and the snr. The variability is low, and it increases when the number of variables and the snr increase. The log-log regression of cardinality on the experimental factors also gave an excellent fit, with coefficient of determination $R^2 \approx 0.97$. The final cardinality equation is $$ c(d, p, s, j, M) = e^{-0.73} p^{0.49} d^{0.48} s^{0.39} j^{1.01} \epsilon. $$ The cardinality increases less than linearly with the number of variables, the latent dimension and the snr, while it grows almost linearly with the components' order. The method used was not found to significantly change the cardinality of the solutions. The proportions of variance explained by the three methods are very similar in value and in ratio, and differences are only observable at the third or fourth decimal figure. The variance explained by the CSPCA components is always very close to that explained by the USPCA components or is slightly higher. In most cases, the PSPCA components explain the least proportion of variance. The USPCA components of higher order tend to explain less variance than the CSPCA components. This phenomenon has already been observed; it is due to the greediness of the approach, as the local optimality of the USPCA components can lead to globally inferior paths. To compare the variance explained by different methods we use the cumulative variance explained by the sparse components relative to the variance explained by the same number of PCs, $$ \mathrm{rCvexp} = {\sum_{i = 1}^j \mathrm{evexp}({\mathbf{t}}_i)}{\Big /} {\sum_{i = 1}^j\mathrm{vexp}({\mathbf{p}}_i)}. $$ Figure~\ref{fig:doecvexp} shows the median $\mathrm{rCvexp}$ for various numbers of variables and latent dimensions at a constant snr, $s = 0.2$. This shows how USPCA performs noticeably worse than the other methods when the latent dimension is small ($d = 5$) and the snr is low.
This is because, under this setting, the rank of the $\bQTj{Y}$ matrices is almost equal to $5 - j + 1$ and orthogonality forces a more severe departure from the optimal path. However, also in this case, the differences are very small. Another aspect that we investigated is the correlation among the components computed with CSPCA and PSPCA. Since each component is highly correlated with the corresponding first PC of ${\mathbf{Q}}_j$, and these PCs are mutually orthogonal, by Lemma \ref{le:corr} we expect the correlations between components to be small. This is confirmed by the distribution of the $n_c(n_c-1)/2$ correlations between each pair of the $n_c$ components computed for each experimental set up, whose summary statistics are shown in Table~\ref{tab:corcomp}. The correlations are extremely small and do not show a particular pattern with respect to any of the experimental factors, except that, in most cases, the variability is slightly larger for the PSPCA components. \begin{table}[b!] \centering \caption{Summary statistics of the correlations among sparse components computed with the same method on each simulated data set.} \begin{tabular}{cccccc} \toprule \multicolumn{1}{l}{Minimum} & \multicolumn{1}{l}{1st Quartile} & \multicolumn{1}{l}{Median} & \multicolumn{1}{l}{Mean} & \multicolumn{1}{l}{3rd Quartile} & \multicolumn{1}{l}{Maximum} \\ 0.004 & 0.005 & 0.006 & 0.008 & 0.009 & 0.016 \\ \bottomrule \end{tabular} \label{tab:corcomp} \end{table} \begin{figure} \caption{Median $\mathrm{rCvexp}$ for the sparse components computed with different methods for different simulated datasets, with constant snr = 0.2. The median values are computed over 1000 runs.} \label{fig:doecvexp} \end{figure} \subsection{Real datasets} The datasets that we consider in this section, listed in Table~\ref{tab:descdata}, have been taken from various sources, mostly from the data distributed with the book ``Elements of Statistical Learning'' (ESL) \citep{has}.
Other sets were taken from the UCI Machine Learning Repository \citep{uci}. The remaining sets were taken from different sources; see Table~\ref{tab:descdata} for details. Most of these are fat datasets, as they have a large number of features and fewer objects. The largest dataset, \cite{rama}, has been used to test other SPCA methods, including \citep{zou, sri, wan}. \begin{table}[b!] \caption{Description of the datasets used for numerical comparison.} \centering \begin{threeparttable} \begin{tabular}{lrrlll} Name & Samples & Features & Type & Description & Source \\ \midrule Crime & 1994 & 99 & regular & social data & UCI Repository\tnote{a} \\ Isolet & 6238 & 716 & regular & character recognition & UCI Repository\tnote{b} \\ Ross (NCI60) & 60 & 1375 & fat & gene expression & \textsf{R} package \texttt{made4}\tnote{c} \\ Khanh & 88 & 2308 & fat & gene expression & ESL\tnote{d} \\ Phoneme & 257 & 4509 & fat & speech recognition & ESL \\ NCI60 & 60 & 6830 & fat & gene expression & ESL \\ Protein & 11 & 7466 & fat & protein cytometry & ESL \\ Radiation & 58 & 12625 & fat & gene expression & ESL \\ Ramaswamy & 144 & 16063 & fat & gene expression & Broad Institute repository\tnote{e} \\ \bottomrule \end{tabular} \begin{tablenotes} {\footnotesize \item[a] \url{https://archive.ics.uci.edu/ml/datasets/Communities+and+Crime} \item[b] \url{https://archive.ics.uci.edu/ml/datasets/ISOLET} \item[c] \url{http://bioconductor.org/packages/release/bioc/html/made4.html} \item[d] \url{https://statweb.stanford.edu/~tibs/ElemStatLearn/} \item[e] \url{http://software.broadinstitute.org/cancer/software/genepattern/datasets}} \end{tablenotes} \end{threeparttable} \label{tab:descdata} \end{table} First, we compared the performance of USPCA, CSPCA and PSPCA on the fat datasets described in Table~\ref{tab:descdata}. We computed 10 components for each dataset using the reverse svd approach and requiring that each of them explained at least a proportion $\alpha = 0.95$ of the variance explained by the corresponding PC.
Both the PCs and the sparse loadings were computed using direct eigendecomposition. Figure~\ref{fig:fatty_times} shows the median computational times over 25 repetitions. The plots are shown in increasing order of the number of observations in the datasets. The computational times of the three methods are very close for all datasets, with the exception of Protein (Prot.). For this dataset the computation of the USPCA components takes longer because the orthogonality constraints require the cardinality to be larger than that of the other methods, as shown in Figure~\ref{fig:fatty_cards}. Even though the computation of the eigenvectors of the ${\mathbf{Q}_j}{\mathbf{Q}}\trasps{j}$ matrices is $O(n^3)$, in some cases the computational time (for example, on Radiation) is greater than that on datasets with fewer variables and more observations (note the different scales on the vertical axes). This is because the computation of the PC loadings is $O(n^3 + n^2p)$; hence there is a cross-over effect due to the number of variables. \begin{figure} \caption{Median computational times taken to compute the first 10 sparse components with $\alpha = 0.95$ on seven fat datasets. Time is expressed in milliseconds.} \label{fig:fatty_times} \end{figure} \begin{figure} \caption{Comparison of the performance of LS SPCA methods on two fat datasets, Protein (top) and Phoneme (bottom).} \label{fig:fatty_cards} \end{figure} \subsection{Comparison with conventional SPCA}\label{sec:compspca} In this section we compare the performance of the first sparse components computed with a conventional SPCA method and with PSPCA. As our conventional SPCA method we used SPCA-IE \citep{wan} with the amvl criterion. This method was shown to perform similarly to other SPCA methods. It does not require choosing arbitrary sparsity parameters and is simple to implement.
Since the results for fat matrices are quite similar, we present the results only for four datasets: Crime, Isolet, Khanh and Ramaswamy. For the last dataset, we computed the conventional sparse components using simple thresholding, stopping the computation at cardinality 200; details of the performance of components with larger cardinality computed with different conventional SPCA methods for this dataset can be found in the papers cited above. Figure~\ref{fig:compspca} compares the relative norm ($||{\mathbf{t}}||^2/||{\mathbf{X}}||^2$), $\mathrm{rCvexp}$ and their correlation with the full PCs for increasing cardinality of the first components computed with PSPCA and SPCA-IE on different datasets. The PSPCA values for the rank-deficient Khanh and Ramaswamy datasets are available only until the solutions reach full-rank cardinality (87 and 143, respectively), at which point the components explain the maximum possible variance. Clearly SPCA-IE outperforms PSPCA in the norm of the components. However, the latter method guarantees higher variance explained and closer convergence to the PC with much lower cardinality. \begin{figure} \caption{Norm, $\mathrm{rCvexp}$ and correlation with the PCs versus cardinality of the first sparse components computed with SPCA-IE and PSPCA.} \label{fig:compspca} \end{figure} The differences in performance of the two approaches are more evident for large rank-deficient datasets, for which conventional sparse components with cardinality in the hundreds explain less variance than PSPCA components of much lower cardinality. The plots also show clearly that the components' norms are not related to the variance that they explain or to their correlation with the PC. This confirms the theoretical conclusions given in Section \ref{sec:convspca}. \begin{table}[b!]
\centering \caption{Cardinality needed to reach 99.9\% $\mathrm{rCvexp}$ by the components computed with PSPCA and conventional SPCA.} \begin{threeparttable} \begin{tabular}{lrrr} Dataset& Rank & \multicolumn{2}{c}{Cardinality Needed for $99.9\%\, \mathrm{rCvexp}$}\\ \cmidrule{3-4} && PSPCA & SPCA\tnote{a}\\ \midrule Crime & 99& 38 & 74 \\ Isolet & 716& 123 & 439 \\ Khanh & 87& 28 & 1338 \\ Ramaswamy &143& 23 & $>$ 200\\ \bottomrule \end{tabular} \begin{tablenotes} \item[a] The values for the first three datasets were obtained using SPCA-IE and the values for the Ramaswamy dataset using simple thresholding. \end{tablenotes} \end{threeparttable} \label{tab:compspca} \end{table} Table~\ref{tab:compspca} shows the cardinality with which the components computed with the two methods reached 99.9\% $\mathrm{rCvexp}$. In all cases the cardinality of the PSPCA components is much lower than that of the SPCA-IE components. \section{Discussion}\label{sec:conclusions} Projection SPCA is a very efficient method for selecting variables when computing a sparse approximation to the PCs. The methodology is intuitive and can be understood by users who do not have a deep knowledge of numerical optimization. The only parameter to be set for computing the solutions is the proportion of variance explained, the meaning of which is also easily understandable. The algorithm is simple to implement and scalable to large datasets. Users can choose their preferred regression variable selection algorithm to select the variables. Most conventional SPCA methods, instead, are based on special numerical optimization methods and require setting values for parameters whose effect is difficult to understand. Future research could explore the use of $\ell_1$-norm selection methods, such as least angle or Lasso regression, in the computation of LS SPCA components. In this work we have developed a framework for computing LS SPCA components that closely approximate the full PCs with low cardinality.
We showed that sparse USPCA, CSPCA and PSPCA components can be efficiently computed for very large datasets. We also showed that conventional SPCA methods suffer from a number of drawbacks which yield less attractive solutions than the corresponding LS SPCA solutions. Conventional SPCA methods have been shown to give results similar to simple thresholding, if not worse; see, e.g., \citep{zou, wan}. Thresholding has been proven to give misleading results; see, e.g., \citep{cad}. Since the loadings are proportional to the covariances of the variables with the PCs, the largest loadings correspond to variables that are highly correlated with the current PC and among themselves. These sets of variables are not very informative because they contain different measures of the same features. \citet{zou} proposed three properties of a good SPCA method: \begin{itemize} \item [(i)] Without any sparsity constraint, the method should reduce to PCA. \item [(ii)] It should be computationally efficient for both small \emph{p} and big \emph{p} data. \item [(iii)] It should avoid misidentifying the important variables. \end{itemize} The first property is not enough for a good method. The second is not necessary for the most commonly analyzed datasets, and the third is vague because importance is not defined and variables known to be unimportant could be directly eliminated from the analysis. We suggest the following properties for a good SPCA method: \begin{itemize} \item [(i)] Without any sparsity constraint, the method should reduce to PCA. \item [(ii)] It should identify the sparsest expression of the principal components. \item [(iii)] The addition of a variable perfectly correlated with one or more variables already in the solution should not improve the objective function. \end{itemize} The last property eliminates redundant variables from the solution and should deter the inclusion of highly correlated ones.
Conventional SPCA methods do not have the last two properties while methods based on LS SPCA do. It is possible that other methods could have these properties. LS SPCA is implemented in the \textsf{R} package \texttt{spca} available on GitHub. PSPCA will be added to this package in the future. \section*{Acknowledgments} We thank the Editor-in-Chief, Christian Genest, an Associate Editor and the anonymous referees for their useful comments and suggestions that improved the paper. Gemai Chen's research is partially supported by a grant from the Natural Sciences and Engineering Research Council of Canada. \section*{Appendix} \setcounter{equation}{0} \setcounter{proposition}{0} \renewcommand{\theequation}{A.\arabic{equation}} \renewcommand{\theproposition}{\Alph{proposition}} \renewcommand{\thetheorem}{\Alph{theorem}} \renewcommand{\thelemma}{\Alph{lemma}} \begin{proposition} Given an ordered set of $d$ components, ${\mathbf{t}}_j = {\mathbf{X}}\mathbf{a}_j$ with $ j \in \{ 1, \ldots, d\}$, the different types of variance explained as defined in Eqs.~\eqref{eq:vexp1}, \eqref{eq:evexp} and \eqref{eq:vexpq} satisfy \begin{equation*} \mathrm{vexp}_Q({\mathbf{t}_j}) \leq \mathrm{evexp}({\mathbf{t}_j}) \leq \mathrm{vexp}({\mathbf{t}_j}), \end{equation*} where $\mathrm{vexp}({\mathbf{t}_j}) = {{\mathbf{t}\trasps{j}}{\mathbf{X}}{\mathbf{X}\trasp}{\mathbf{t}_j}}/{{\mathbf{t}\trasps{j}}{\mathbf{t}_j}}$. Equality is achieved for the first component or if a component is orthogonal to the preceding ones. \end{proposition} \noindent \textbf{Proof}.
The extra variance explained cannot be smaller than the variance of ${\mathbf{Q}}$ explained by the same components because \begin{equation*} \mathrm{vexp}_Q({\mathbf{t}_j}) = \mathrm{evexp}({\mathbf{t}}_j) \, \frac{{\mathbf{a}\trasps{j}}\bQTjT{T}\bQTj{T}{\mathbf{a}_j}}{{\mathbf{a}\trasps{j}}{\mathbf{X}\trasp}{\mathbf{X}}{\mathbf{a}_j}} = \mathrm{evexp}({\mathbf{t}}_j) \left(\frac{{\mathbf{a}\trasps{j}}{\mathbf{X}\trasp}{\mathbf{X}}{\mathbf{a}_j} - {\mathbf{a}\trasps{j}}{\mathbf{X}\trasp}\Pi_{{\mathbf{T}_{[j-1]}}}{\mathbf{X}}{\mathbf{a}_j}}{{\mathbf{a}\trasps{j}}{\mathbf{X}\trasp}{\mathbf{X}}{\mathbf{a}_j}} \right) \leq \mathrm{evexp}({\mathbf{t}}_j). \end{equation*} It is well known that the extra variance explained is not larger than the variance explained by a regressor. In fact, \begin{equation*} \mathrm{vexp}({\mathbf{t}_j}) = ||\Pi_{{\mathbf{t}_j}}{\mathbf{X}}||^2 = ||\Pi_{[\Pi_{{\mathbf{T}}_{[j-1]}}{\mathbf{t}_j}]}{\mathbf{X}} + \Pi_{[\bQTj{T}{\mathbf{a}_j}]}{\mathbf{X}}||^2 \geq ||\Pi_{[\bQTj{T}{\mathbf{a}_j}]}{\mathbf{X}}||^2 = \mathrm{evexp}({\mathbf{t}_j}) \end{equation*} because ${\mathbf{t}}_j = \Pi_{{\mathbf{T}}_{[j-1]}}{\mathbf{t}_j} + ({\mathbf{I}} - \Pi_{{\mathbf{T}}_{[j-1]}}){\mathbf{t}_j}$ and $\Pi_{[({\mathbf{I}} - \Pi_{{\mathbf{T}}_{[j-1]}}){\mathbf{t}_j}]} = \Pi_{[\bQTj{T}{\mathbf{a}_j}]}$. Therefore, $\Pi_{{\mathbf{t}_j}} = \Pi_{[\Pi_{{\mathbf{T}}_{[j-1]}}{\mathbf{t}_j}]} + \Pi_{[({\mathbf{I}} - \Pi_{{\mathbf{T}}_{[j-1]}}){\mathbf{t}_j}]}$; see, e.g., Theorem~8.8 in~\citep{pun}. The statement about the equality is true because if a component ${\mathbf{t}_j}$ is orthogonal to all preceding components, then ${\mathbf{t}_j} = {\mathbf{X}}\mathbf{a}_j = \bQTj{T}{\mathbf{a}_j}$, and ${\mathbf{Q}}_{{\mathbf{T}}_{[0]}}= {\mathbf{X}}$. $\Box$ \setcounter{theorem}{0} \begin{theorem} Let $\dXj$ be a block of linearly independent variables.
Then, \begin{enumerate} \item[(i)] The orthogonal LS SPCA components, ${\mathbf{y}}_j = \dXj\dbj$, are the first PCs of the matrices $(\Pi_{\dXj} - \Pi_{\hat{Y}_\dXj} ){\mathbf{X}}$, where $\hat{Y}_\dXj = \Pi_{\dXj}{\mathbf{Y}}_{[j-1]}$. \item[(ii)] The nonorthogonal LS SPCA components, ${\mathbf{z}_j} = \dXj\ddj$, are the first PCs of the matrices $\hQTj{Z} = \Pi_{\dXj}\bQTj{Z} = \Pi_{\dXj} ({\mathbf{I}} - \Pi_{{\mathbf{Z}}_{[j-1]}} ){\mathbf{X}}$. \end{enumerate} \end{theorem} \noindent \textbf{Proof}. Premultiplying Eq.~(\ref{eq:uspca}) by $\dXj\dSjinv$ gives \begin{equation*} \Big(\Pi_{\dXj} - \Pi_{\hat{Y}_\dXj}\Big){\mathbf{X}}{\mathbf{X}}\trasp {\mathbf{y}}_j = {\mathbf{y}}_j\xi_{j_\text{max}}, \end{equation*} where $\Pi_{\hat{Y}_\dXj} = \Pi_{\dXj} {\mathbf{Y}_{[j-1]}} ({\mathbf{Y}\trasps{[j-1]}}\Pi_{\dXj}{\mathbf{Y}_{[j-1]}} )\matinv{\mathbf{Y}\trasps{[j-1]}}\Pi_{\dXj}$. Since, $ (\Pi_{\dXj} - \Pi_{\hat{Y}_\dXj} ){\mathbf{y}}_j = {\mathbf{y}}_j$, we can write $$ \Big(\Pi_{\dXj} - \Pi_{\hat{Y}_\dXj}\Big) {\mathbf{X}}{\mathbf{X}}\trasp \Big(\Pi_{\dXj} - \Pi_{\hat{Y}_\dXj}\Big){\mathbf{y}}_j = {\mathbf{y}}_j\xi_{j_\text{max}}, $$ which proves part (i). Given that $ \mathscr{C}(\hat{Y}_\dXj)\subset \mathscr{C}(\dXj)$, $\Pi_{\dXj} - \Pi_{\hat{Y}_\dXj} $ is a projector onto $\mathscr{C}(\dXj)\cap \mathscr{C}(\hat{Y}_\dXj)^\bot$, where $\mathscr{C}({\mathbf{A}})^\bot$ denotes the orthocomplement of $\mathscr{C}({\mathbf{A}})$ with respect to ${\mathbf{I}}$; see Chapter~7 in~\citep{pun}. In a similar fashion, premultiplying Eq.~\eqref{eq:lsspcasol} by $\dXj\bdS\matinvj{j}$ we obtain $$ \Pi_{\dXj}\bQTj{Z}\bQTjT{Z}{\mathbf{z}_j} = \hQTj{Z}\hQTjT{Z}{\mathbf{z}_j} = {\mathbf{z}_j}\gamma_j, $$ which proves part~(ii). $\Box$ \setcounter{lemma}{0} \begin{lemma} Let $t$ and $x$ be two random variables and ${\mathbf{y}} = (y_1, \ldots, y_d) \trasp$ a set of $d$ random variables uncorrelated with $x$. 
If $\mathrm{corr}^2(t, x) =\alpha$, then for all $i \in \{ 1, \ldots, d\}$, $ \mathrm{corr}^2(t, y_i) \leq 1 - \alpha$. If the $y_i$ variables are mutually uncorrelated, it follows that \begin{equation*} \sum_{i=1}^d \mathrm{corr}^2(t, y_i) \leq 1 - \alpha. \end{equation*} \end{lemma} \noindent \textbf{Proof.} Let $\boldsymbol{\rho}\trasps{t{\mathbf{y}}} = ( \rho_{ty_1}, \ldots,\rho_{ty_d})$, where $\rho_{ty_i}= \mathrm{corr}(t,y_i)$. The squared multiple correlation coefficient of the regression of $t$ on $[x,\mathbf{y}\trasp]\trasp$ is such that \[ \rho_{t.x{\mathbf{y}}}^2 = [\sqrt{\alpha}, \boldsymbol{\rho}\trasps{t{\mathbf{y}}}] \begin{bmatrix} 1& \mathbf{0}\trasp\\ \mathbf{0} & {\mathbf{R}}\matinv \end{bmatrix} \begin{bmatrix} \sqrt{\alpha}\\ \boldsymbol{\rho}_{t{\mathbf{y}}} \end{bmatrix} = \alpha + \rho_{t.{\mathbf{y}}}^2\leq 1, \] where ${\mathbf{R}}$ is the correlation matrix of ${\mathbf{y}}$ and $\rho_{t.{\mathbf{y}}}^2 = \boldsymbol{\rho}\trasps{t{\mathbf{y}}} {\mathbf{R}}\matinv\boldsymbol{\rho}_{t{\mathbf{y}}}$ is the squared coefficient of multiple correlation between $t$ and ${\mathbf{y}}$. Since $\rho_{t.{\mathbf{y}}}^2$ can be written as the sum of the squared correlation of the response variable with one of the regressors, $y_i$, say, and the multiple correlation of the response variable with the orthogonal complement of the remaining variables, $\{y_j,\ j\neq i\}$, namely, $\rho_{t.{\mathbf{y}}}^2 = \mathrm{corr}^2(t,y_i) + \rho_{t.{\mathbf{y}}_{/i}.y_i}^2\geq \mathrm{corr}^2(t,y_i)$, it follows that \[ 0\leq \mathrm{corr}^2(t,y_i) \leq \boldsymbol{\rho}\trasps{t{\mathbf{y}}} {\mathbf{R}}\matinv\boldsymbol{\rho}_{t{\mathbf{y}}} \leq 1 - \alpha. \] When $\mathrm{corr}(y_i,y_j)=0$ for $i\neq j$, $$ \rho_{t.{\mathbf{y}}}^2 = \boldsymbol{\rho}\trasps{t{\mathbf{y}}} {\mathbf{R}}\matinv\boldsymbol{\rho}_{t{\mathbf{y}}} = \sum_{i=1}^d \mathrm{corr}^2(t, y_i), $$ from which the second statement follows.
For a more general proof, see \cite{pun05}. $\Box$ \begin{lemma} When $\mathrm{rank}({\mathbf{X}}) = r < p$ the principal components can be expressed as sparse components of cardinality $r$ with loadings whose norm is not smaller than $1$. \end{lemma} \noindent \textbf{Proof}. Given that $\mathrm{rank}({\mathbf{X}}) = r$, there are $p - r$ columns of ${\mathbf{X}}$ which are linearly dependent. Assume without loss of generality that the first $r$ columns of ${\mathbf{X}}$ are linearly independent and denote them as $\dX$. Also let $\cX$ be the remaining columns. Then, we can write \begin{equation*} {\mathbf{X}} = [\dX, \cX] = \dX \Big\{ {\mathbf{I}}_r, \dX\trasp(\dX\dX\trasp)\matginv\cX \Big\} = \dX \Big \{{\mathbf{I}}_r, (\dX\trasp\dX)\matinv\dX\trasp\cX\Big\} \nonumber =\Pi_{\dX}{\mathbf{X}} = \dX{\mathbf{G}}, \end{equation*} where the superscript `$^+$' denotes the Moore--Penrose generalized inverse and ${\mathbf{G}} = \big\{{\mathbf{I}}_r, (\dX\trasp\dX)\matinv \dX\trasp\cX\big\}$. Hence, the PCs can be written as ${\mathbf{u}}_j = {\mathbf{X}}{\mathbf{v}}_j = \dX({\mathbf{G}}{\mathbf{v}}_j) = \dX\dot{\mathbf{v}}_j$, where $\dot{\mathbf{v}}_j = {\mathbf{G}}{\mathbf{v}}_j$ has length $r$. Since the largest singular value of ${\mathbf{X}}_j$ ($\bQTj{U}$) is not smaller than the largest singular value of $\dXj$ ($\dQTj{U}$), it must follow that $\dot{\mathbf{v}}\trasps{j}\dot{\mathbf{v}}_j \geq 1$. Therefore, the PCs can be defined as sparse components of cardinality $r$ with loadings of norm not smaller than $1$. $\Box$ \begin{state} The CSPCA loadings can be computed as the generalized eigenvectors satisfying $${\mathbf{C}}_j {\mathbf{J}}\trasps{j}{\mathbf{S}}{\mathbf{S}}{\mathbf{J}}_j {\mathbf{C}}\trasps{j}\dbj = \dSj\dbj\xi_{j}, $$ where ${\mathbf{C}}_j = {\mathbf{I}}_{c_j} - {\mathbf{H}\trasps{j}}({\mathbf{H}_j}\bdS\matinvj{j}{\mathbf{H}\trasps{j}})\matinv{\mathbf{H}_j}\bdS\matinvj{j}$. \end{state} \noindent \textbf{Proof}.
From Proposition~\ref{prop:uspca}, we have that the USPCA loadings satisfy \begin{equation*} {\mathbf{C}}_j\dXjT{\mathbf{X}}{\mathbf{X}\trasp}\dXj\dbj = {\mathbf{C}}_j{\mathbf{J}}\trasps{j}{\mathbf{S}}{\mathbf{S}}{\mathbf{J}}_j\dbj = \dSj\dbj\xi_{j}. \end{equation*} Then $$ {\mathbf{C}}_j{\mathbf{J}}\trasps{j}{\mathbf{S}}{\mathbf{S}}{\mathbf{J}}_j\dbj = \dSj\dbj\xi_{j} \quad \Leftrightarrow \quad {\mathbf{C}}_j{\mathbf{J}}\trasps{j}{\mathbf{S}}{\mathbf{S}}{\mathbf{J}}_j \dSjinv (\dSj\dbj ) = \dSj\dbj\xi_{j}. $$ Since ${\mathbf{C}}_j$ is idempotent and $\mathscr{C}(\dSj\dbj)\subseteq\mathscr{C}({\mathbf{C}}_j)$, because ${\mathbf{C}}_j{\mathbf{J}}\trasps{j}{\mathbf{S}}{\mathbf{S}}{\mathbf{J}}_j \dSjinv (\dSj\dbj ) \propto \dSj\dbj$, $\dbj$ must satisfy $$ {\mathbf{C}}_j{\mathbf{J}}\trasps{j}{\mathbf{S}}{\mathbf{S}}{\mathbf{J}}_j \dSjinv{\mathbf{C}}_j (\dSj\dbj ) = {\mathbf{C}}_j{\mathbf{J}}\trasps{j}{\mathbf{S}}{\mathbf{S}}{\mathbf{J}}_j (\dSjinv{\mathbf{C}}_j\dSj )\dbj = \dSj\dbj\xi_{j}. $$ Now, \begin{equation*} \dSjinv{\mathbf{C}}_j\dSj = \dSjinv \Big \{ {\mathbf{I}} - {\mathbf{H}\trasps{j}} ({\mathbf{H}_j}\dSjinv{\mathbf{H}\trasps{j}} )\matinv{\mathbf{H}_j}\dSjinv \Big \} \dSj = {\mathbf{I}} - \dSjinv{\mathbf{H}\trasps{j}} ({\mathbf{H}_j}\dSjinv{\mathbf{H}\trasps{j}} )\matinv{\mathbf{H}_j} = {\mathbf{C}}\trasps{j}. \end{equation*} Therefore, $\dbj$ is the generalized eigenvector satisfying $ {\mathbf{C}}_j{\mathbf{J}}\trasps{j}{\mathbf{S}}{\mathbf{S}}{\mathbf{J}}_j {\mathbf{C}}\trasps{j}\dbj = \dSj\dbj\xi_{j}$. This completes the argument. $\Box$ \section*{References} \end{document}
\begin{document} \title{Cauchy-Davenport Theorem for linear maps: Simplification and Extension} \author{ John Kim\thanks{Department of Mathematics, Rutgers University. Research supported in part by NSF Grant Number DGE-1433187. {\tt [email protected]}.} \and Aditya Potukuchi\thanks{Department of Computer Science, Rutgers University. {\tt [email protected]}.}} \maketitle \begin{abstract} We give a new proof of the Cauchy-Davenport Theorem for linear maps given by Herdade et al.\ (2015) in~\cite{HKK}. This theorem gives a lower bound on the size of the image of a linear map on a grid. Our proof is purely combinatorial and offers a partial insight into the range of parameters not handled in~\cite{HKK}. \end{abstract} \section{Introduction} Let $\mathbb{F}_p$ be the field containing $p$ elements, where $p$ is a prime, and let $A,B \subseteq \mathbb{F}_p$. The Cauchy-Davenport Theorem gives a lower bound on the size of the sumset $A + B \defeq \{a + b \;\ifnum\currentgrouptype=16 \middle\fi|\; a \in A,b \in B\}$ (for more on sumsets, see, for example,~\cite{TV}). The size of the sumset can be thought of as the size of the image of the linear map $(x,y) \rightarrow x + y$, where $x \in A$ and $y \in B$. Thus the theorem can be restated as follows: \begin{theorem}[Cauchy-Davenport Theorem] Let $p$ be a prime, and let $L:\mathbb{F}_p \times \mathbb{F}_p \rightarrow \mathbb{F}_p$ be the linear map that takes $(a,b)$ to $a+b$. For $A,B \subseteq \mathbb{F}_p$, let $L(A,B)$ be the image of $L$ on $A \times B$. Then, $$ |L(A,B)| \geq \min(|A|+|B|-1,p). $$ \end{theorem} In~\cite{HKK}, this notion was extended to study the sizes of images of general linear maps on product sets. A lower bound was proved using the polynomial method (via a nonstandard application of the Combinatorial Nullstellensatz~\cite{Alon}). In this paper, we give a simpler, combinatorial proof of the same result using just the Cauchy-Davenport Theorem.
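The Cauchy-Davenport bound is easy to verify exhaustively for a small prime; the following brute-force sketch (prime and set sizes are arbitrary choices) checks it over all pairs of subsets of the stated sizes.

```python
from itertools import combinations

# Exhaustively verify |A + B| >= min(|A| + |B| - 1, p) over F_7
# for all A of size 3 and B of size 4.
p = 7
smallest = p
for A in combinations(range(p), 3):
    for B in combinations(range(p), 4):
        sumset = {(a + b) % p for a in A for b in B}
        assert len(sumset) >= min(len(A) + len(B) - 1, p)
        smallest = min(smallest, len(sumset))

# Arithmetic progressions attain the bound: A = {0,1,2}, B = {0,1,2,3}
# give |A + B| = |{0,...,5}| = 6 = |A| + |B| - 1.
```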
\\ Notation: For a linear map $L : \mathbb{F}_p^{n} \rightarrow \mathbb{F}_p^m$, and for $S_1, S_2, \ldots, S_n \subseteq \mathbb{F}_p$, we use $L(S_1, S_2, \ldots, S_n)$ to denote the image of $L$ on $S_1 \times S_2 \times \cdots \times S_n$. The \emph{support} of a vector is the set of coordinates with nonzero entries. A \emph{min-support vector} in a set $V$ of vectors is a nonzero vector of minimum support size in $V$. \begin{theorem}[Main Theorem] \label{the:main} Let $p$ be a prime, and $L:\mathbb{F}_p^{m+1} \rightarrow \mathbb{F}_p^m$ be a linear map of rank $m$. Let $A_1, A_2, \ldots, A_{m+1} \subseteq \mathbb{F}_p$ with $|A_i| = k_i$. Further, suppose that $\min_i(k_i) + \max_i(k_i) < p$. Let $S$ be the support of $\ker(L)$, and $S' = [m+1] \setminus S$. Then $$ |L(A_1,A_2, \ldots, A_{m+1})| \geq \left( \prod_{j \in S'}k_j \right) \cdot \left( \prod_{i\in S}k_i - \prod_{i \in S}(k_i - 1) \right). $$ \end{theorem} As noted in~\cite{HKK}, this bound is tight for every $m$ and $p$. We restrict our theorem to maps from $\mathbb{F}_p^{m+1}$ to $\mathbb{F}_p^m$ of rank $m$ mainly for two reasons: (1) this case is simpler to state and contains the tight case, and (2) we are unable to prove any better bounds if the rank is not $m$. It is not clear to us what the correct bound for the general case is. \\ We also show the following result on the size of the image of certain full-rank linear maps from $\mathbb{F}_p^n$ to $\mathbb{F}_p^{n-1}$ when the sets they are evaluated on are all large enough. \begin{theorem} \label{cover} Let $L:\mathbb{F}_p^n \rightarrow \mathbb{F}_p^{n-1}$ be the linear map given by $L(x_1,\ldots, x_n) = (x_1 + x_n, x_2 + x_n, \ldots, x_{n-1} + x_n)$. Let $S_1, \ldots, S_n \subseteq \mathbb{F}_p$ with $|S_i| = k$ for $i \in [n]$ such that $k > \frac{(n-1)p}{n}$. Then $|L(S_1, \ldots, S_n)| = p^{n-1}$ (i.e., $L(S_1, \ldots, S_n) = \mathbb{F}_p^{n-1}$).
\end{theorem} The theorems do not, however, give tight bounds for all set sizes, for example if $\min_i |A_i| > p/2$. It would be interesting to obtain a tight bound even for the simple linear map $(x,y,z) \rightarrow (x+z, y+z)$ on the product set $A_1 \times A_2 \times A_3 \subseteq \mathbb{F}_p^3$ which holds for all sizes of the $A_i$'s. \section{The Theorem} \subsection{The Main Lemma} The idea is that since the size of the image is invariant under row operations of $L$, we perform row operations to isolate a `hard' part, which gives the main part of the required lower bound. \\ Our proof proceeds by induction on the dimension of the linear map. The base case is given by the Cauchy-Davenport Theorem. \begin{lemma} \label{lem:main} Let $L:\mathbb{F}_p^n \rightarrow \mathbb{F}_p^{n-1}$ be the linear map such that $L(x_1,\ldots, x_n) = (x_1 + x_n, x_2 + x_n, \ldots, x_{n-1} + x_n)$. Let $S_1, \ldots, S_n \subseteq \mathbb{F}_p$ with $|S_i| = s_i$ such that $\min_i(s_i) + \max_i(s_i) \leq p+1$. Then $|L(S_1, \ldots, S_n)| \geq \prod_{i=1}^n s_i-\prod_{i=1}^n (s_i-1)$. \end{lemma} \begin{proof} We use the shorthand notation $|L| \defeq |L(S_1,S_2, \ldots, S_n)|$. Without loss of generality, let $S_1$ be such that $|S_1| = \min_{i \in [n-1]}(|S_i|)$. \\ A preliminary observation is that $|S_1| + |S_n| \leq p+1$, and therefore, by the Cauchy-Davenport Theorem, \begin{equation} \label{CD} |S_1 + S_n| \geq s_1 + s_n - 1. \end{equation} The proof proceeds by induction on $n$. If $n = 2$, the result $|L| \geq s_1 \cdot s_2 - (s_1-1) \cdot (s_2-1) = s_1 + s_2 - 1$ is given by the Cauchy-Davenport Theorem. \\ For every $a \in \mathbb{F}_p$, we define $T_a \defeq \{x_n \in S_n \;\ifnum\currentgrouptype=16 \middle\fi|\; \exists x_1 \in S_1, x_1 + x_n = a\}$, and $t_a \defeq |T_a|$. We now look at the restricted linear map $L|_{x_1 + x_n = a}$. In this case, the induction is on the sets $S_2 \times \cdots \times S_{n-1} \times T_a$.
This is equivalent to restricting $S_n$ to the set $T_a$, and dropping $S_1$, since for every $x_n \in T_a$, there is a unique $x_1 \in S_1$ such that $x_1 + x_n = a$. \\ We first observe that the conditions are satisfied, i.e., $\min_i(|S_i|) + \max_i(|S_i|) \leq p + 1$, since $t_a \leq \min(|S_1|,|S_n|)$. Also the resulting linear map is of the same form, i.e., $L|_{x_1 + x_n = a}(x_2,\ldots, x_n) = (x_2 + x_n, \ldots, x_{n-1} + x_n)$. (In reality, $L|_{x_1 + x_n = a}$ is a map from $\mathbb{F}_p^n$ to $\mathbb{F}_p^{n-1}$, given by $L|_{x_1 + x_n = a}(x_1,x_2,\ldots, x_n) = (a, x_2 + x_n, \ldots, x_{n-1} + x_n)$, but we drop the first coordinate because it is fixed at $a$.)\\ By the induction hypothesis, the number of points in the image of $L|_{x_1 + x_n = a}$ is at least: $$ \left( \prod_{i=2}^{n-1}s_i \right) t_a - \left( \prod_{i=2}^{n-1}(s_i - 1) \right)(t_a - 1). $$ Summing over all $a \in \mathbb{F}_p$, we get a bound on the number of points in the image: \begin{eqnarray*} |L| &\geq& \sum_{a \in \mathbb{F}_p, t_a \neq 0} \left( \left( \prod_{i=2}^{n-1}s_i \right) t_a - \left( \prod_{i=2}^{n-1}(s_i - 1) \right)(t_a - 1) \right) \\ &=& \left( \prod_{i=2}^{n-1}s_i \right)\sum_{a \in \mathbb{F}_p}t_a - \left( \prod_{i=2}^{n-1}(s_i - 1) \right)\sum_{a\in \mathbb{F}_p, t_a \neq 0}(t_a - 1) \\ &\geq& \prod_{i=1}^{n}s_i - \prod_{i=1}^n(s_i - 1). \end{eqnarray*} The last inequality comes from observing that $\sum_{a \in \mathbb{F}_p}t_a = s_1s_n$, and from an upper bound on $\sum_{a \in \mathbb{F}_p, t_a \neq 0}(t_a - 1)$ obtained using~\eqref{CD}. We have $\sum_{a \in \mathbb{F}_p, t_a \neq 0}(t_a - 1) = \sum_{a \in \mathbb{F}_p}t_a - \sum_{a \in \mathbb{F}_p}\mathbbm{1}_{t_a \neq 0} = \sum_{a \in \mathbb{F}_p}t_a - |S_1 + S_n| \leq s_1s_n - (s_1 + s_n - 1)$. \end{proof} \subsection{Arriving at the Main Theorem} The first step in arriving at the main theorem is exactly as in~\cite{HKK}. For completeness, we describe it here.
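As a sanity check, the bound of Lemma~\ref{lem:main} can be verified exhaustively for small parameters; a sketch for the case $n = 3$ (the prime and set sizes are arbitrary choices satisfying the hypothesis):

```python
from itertools import combinations

# Check |L(S1,S2,S3)| >= s1*s2*s3 - (s1-1)(s2-1)(s3-1) for the map
# L(x,y,z) = (x+z, y+z) over F_5, with sizes (2,2,3); note 2 + 3 <= p + 1.
p = 5
s1, s2, s3 = 2, 2, 3
bound = s1*s2*s3 - (s1 - 1)*(s2 - 1)*(s3 - 1)
worst = p * p
for S1 in combinations(range(p), s1):
    for S2 in combinations(range(p), s2):
        for S3 in combinations(range(p), s3):
            img = {((x + z) % p, (y + z) % p)
                   for x in S1 for y in S2 for z in S3}
            worst = min(worst, len(img))
assert worst >= bound   # the bound holds for every choice of sets
```

Intervals attain the bound here: $S_1 = S_2 = \{0,1\}$, $S_3 = \{0,1,2\}$ give an image of exactly $2\cdot2\cdot3 - 1\cdot1\cdot2 = 10$ points.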
The idea is to transform a general linear map into a specific form, without reducing the size of the image (in fact, here it remains the same). This step is very intuitive, but describing it requires some setup. \\ Let $L:\mathbb{F}_p^{m+1} \rightarrow \mathbb{F}_p^m$ be an $\mathbb{F}_p$-linear map of rank $m$. Let $v$ be a non-zero min-support vector of $\ker(L)$, so we have $Lv = 0$. The main observation is that under row operations, two quantities remain unchanged: the size of the image of $L$, and the size of the support of the min-support vector in the kernel. \\ Let $r_1, \ldots, r_m$ be the rows, and $c_1, c_2, \ldots, c_{m+1}$ the columns, of the matrix associated to $L$ with respect to the standard basis. We show that one can perform elementary row operations, and some column operations, on $L$ while preserving the size of the image. \begin{lemma} \label{lem:linop} The size of the image of $L$ does not change under \begin{enumerate} \item Elementary row operations. \item Scaling any column $c_i$ by some $d \in \mathbb{F}_p \setminus \{0\}$ and scaling every element of $A_i$ by $d^{-1}$. \item Swapping any two columns $c_i$ and $c_j$, and swapping sets $A_i$ and $A_j$. \end{enumerate} \end{lemma} \begin{proof} We prove this by considering each given operation separately. \begin{enumerate} \item Suppose $L'$ was obtained from $L$ by elementary row operations. There is an invertible linear map $M$ such that $M \cdot L = L'$. This gives the bijection from every vector $v$ in the image of $L$, to the vector $M \cdot v$ in the image of $L'$. \item Suppose $L'$ was obtained from $L$ by scaling column $c_i$ by $d \in \mathbb{F}_p \setminus \{0\}$, and scaling the set $A_i$ by $d^{-1}$. We map every vector $(u_1, \ldots, u_m) \in$ $L(A_1,\ldots ,A_i, \ldots, A_{m+1})$, to the vector $(u_1, \ldots, u_m) \in L'(A_1,\ldots , d^{-1} \cdot A_i, \ldots, A_{m+1})$. Here $d^{-1} \cdot A_i \defeq \{d^{-1}a_i \;\ifnum\currentgrouptype=16 \middle\fi|\; a_i \in A_i\}$. This map is invertible.
\item Suppose $L'$ was obtained from $L$ by switching columns $c_i$ and $c_j$, and swapping the sets $A_i$ and $A_j$. We map every vector $(u_1,\ldots, u_m) \in L(A_1,\ldots ,A_i, \ldots, A_j, \ldots, A_{m+1})$ to the identical vector $(u_1, \ldots, u_m) \in$ $ L'(A_1,\ldots ,A_j, \ldots, A_i, \ldots, A_{m+1})$. This map is invertible. \end{enumerate} For every given operation, we have a bijection between the images of $L$ before and after the operation. \end{proof} \begin{observation} After the operations stated in Lemma~\ref{lem:linop}, the size of the support of the min-support vector in $\ker(L)$ does not change. \end{observation} To see this, we first observe that the kernel has dimension $1$, and is orthogonal to the row span of $L$. Therefore, all nonzero vectors in $\ker(L)$ have the same support. Since row operations do not change the row span of $L$, the resulting kernel spans the same subspace of $\mathbb{F}_p^{m+1}$, and therefore the size of the support of the vectors in $\ker(L)$ does not change. \\ Next, we do the following operations, each of which preserves the size of the image. \begin{enumerate} \item Perform row operations so that the last $m$ columns form an identity matrix. \item Scale the rows so that every nonzero entry in the first column is $1$. \item Scale the last $m$ columns so that every nonzero entry in $L$ is $1$. \end{enumerate} After we perform these operations, we have a linear map where the first column consists of $1$'s and $0$'s and the remaining $m$ columns form an identity matrix. Let $S'$ be the set of indices of rows containing $1$'s in the first column. Consider the vector $v = -e_1 + \sum_{i \in S'}e_{i+1}$. This vector has support of size $|S'| + 1$, and lies in the kernel of $L$. Therefore, $|S| = |S'| + 1$. \begin{proof}[Proof of Theorem~\ref{the:main}] Apply the transformation from Lemma~\ref{lem:linop} to $L$ to reduce it to the simple form. Let $S'$ be the set of rows where the first column is nonzero.
Consider the restriction of $L$ to the coordinates given by $S$. By Lemma~\ref{lem:main}, the size of this image is at least $\left( \prod_{i\in S}k_i - \prod_{i \in S}(k_i - 1) \right)$. The linear map restricted to the coordinates $[m+1] \setminus S$ is nothing but the identity map, so the size of the image is $\prod_{i \not \in S} |A_i|$, and is independent of the linear map restricted to $S$. Putting them together, we have the desired result. \end{proof} \section{The case when $2k > p+1$} The proof of Lemma~\ref{lem:main} breaks down when $s_1+s_n > p + 1$ and, unfortunately, we do not know how to fix this issue. Consider, for example, the simplest nontrivial case where $m = 2$, i.e., $L(x,y,z) = (x+z, y+z)$, and we are interested in the size of the image of $L$ on $X \times Y \times Z$; further suppose, for simplicity, that $|X|=|Y|=|Z| = k$. If $k < \frac{p + 1}{2}$, then the above bound holds, and is tight. If $k>\frac{2p}{3}$, then $L$ covers $\mathbb{F}_p^2$, i.e., $|L(X,Y,Z)| = p^2$. This makes the case in between the interesting one. We conjecture that the correct lower bound is the size of the image of $L$ when $X = Y = Z = \{1,2,\ldots, k\}$. Towards this, we are able to prove a partial result (Theorem~\ref{2d}) using the above method. We will need the following lemma: \begin{lemma} \label{boundtx} Let $X,Y\subseteq \mathbb{F}_p$ and $t_a = |\{(x,y)\in X\times Y: x+y = a\}|$. Then for every $a \in \mathbb{F}_p$: $$|X|+|Y|-p\leq t_a \leq \min(|X|,|Y|).$$ \end{lemma} \begin{proof} The bounds follow from the fact that $t_a$ can be written as the size of the intersection of two sets of sizes $|X|$ and $|Y|$: $$t_a = |X \cap (a-Y)|.$$ \end{proof} Now we state the partial result: \begin{theorem} \label{2d} Let $L:\mathbb{F}_p^3\rightarrow \mathbb{F}_p^2$ be the linear map defined by $L(x,y,z) = (x+z,y+z)$. Let $X,Y,Z\subset \mathbb{F}_p$ be sets of size $k$, where $k \geq \frac{p+1}{2}$.
Then we have the following lower bound: $$|L(X,Y,Z)| \geq \min(p^2 + 3k^2 - (2p+1)k,p^2).$$ \end{theorem} \begin{proof} Let $T_a \defeq \{z \in Z \mid \exists x \in X,\ x+z = a\}$, with $t_a\defeq |T_a|$. Looking at the restriction $L|_{x+z = a}$, by the Cauchy-Davenport Theorem, there are at least $\min(t_a + k-1,p)$ points of $L(X,Y,Z)$ on $L_{x+z=a}(Y,T_a)$. By summing over all $a\in \mathbb{F}_p$, we get a lower bound on the size of $L(X,Y,Z)$: \begin{eqnarray*} |L(X,Y,Z)| &\geq& \displaystyle\sum_{a\in\mathbb{F}_p}{\min(t_a + k-1,p)} \\ &=& \displaystyle\sum_{a\in\mathbb{F}_p}{\min(t_a,p-k+1)} + p(k-1) \\ &=& \displaystyle\sum_{a:t_a \leq p-k+1}{t_a} + \displaystyle\sum_{a:t_a > p-k+1}{(p-k+1)} + p(k-1). \end{eqnarray*} We now want to remove the dependence of the lower bound on the $t_a$ by considering the worst-case scenario, where the $t_a$ take values that minimize the lower bound. First, we observe that $\sum_{a\in\mathbb{F}_p}{t_a} = k^2$, a fixed quantity. So to minimize the above lower bound for $|L(X,Y,Z)|$, we need $t_a$ to be maximal for as many $a\in\mathbb{F}_p$ as possible. By Lemma~\ref{boundtx}, we know that $2k-p\leq t_a \leq k$. We set $t_a = k$ for as many $a\in\mathbb{F}_p$ as possible, and set the remainder of the $t_a$ to $2k-p$. This gives: \begin{eqnarray*} |L(X,Y,Z)| &\geq& \displaystyle\sum_{a:t_a \leq p-k+1}{t_a} + \displaystyle\sum_{a:t_a > p-k+1}{(p-k+1)} + p(k-1) \\ &\geq& k(2k-p) + (p-k)(p-k+1) + p(k-1) \\ & = & 3k^2 + p^2 - (2p+1)k. \end{eqnarray*} \end{proof} Independently of Theorem~\ref{cover}, we get the following corollary: \begin{corollary} If the linear map $L$ and the sets $X$, $Y$, $Z$ are as above, with $|X|=|Y|=|Z|=k$ and $k > \frac{2p}{3}$, then $L(X,Y,Z) = \mathbb{F}_p^2$. \end{corollary} We would like to point out that at the two extremes, i.e., when $k = \frac{p+1}{2}$, and when $k =\lceil\frac{2p}{3} \rceil$, the above bound matches the `correct' lower bound.
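As an illustrative aside (a brute-force sketch on a toy instance of ours, not part of the analysis), the bound of Theorem~\ref{2d} can be checked by machine at the extreme point $k = \frac{p+1}{2}$, where it is tight; here $p = 7$, $k = 4$, and $X = Y = Z = \{1,\ldots,4\}$:

```python
# Brute-force check of Theorem 2d for L(x,y,z) = (x+z, y+z) over F_p,
# at the extreme point k = (p+1)/2 (here p = 7, k = 4).
p, k = 7, 4
X = Y = Z = range(1, k + 1)

image = {((x + z) % p, (y + z) % p) for x in X for y in Y for z in Z}
bound = min(p**2 + 3 * k**2 - (2 * p + 1) * k, p**2)

print(len(image), bound)  # the bound is attained: 37 37
```

The same loop, run over other choices of $X$, $Y$, $Z$ of size $k$, is a convenient way to probe the conjecture that $X = Y = Z = \{1,\ldots,k\}$ minimizes the image size.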
\subsection{Proof of Theorem~\ref{cover}} We prove Theorem~\ref{cover} via the following slightly stronger claim. \begin{claim} \label{lem:cover} Let $L:\mathbb{F}_p^n \rightarrow \mathbb{F}_p^{n-1}$ be a linear map given by $L(x_1,\ldots, x_n) = (x_1 + x_n, x_2 + x_n, \ldots, x_{n-1} + x_n)$. Let $S_1, \ldots, S_n \subseteq \mathbb{F}_p$ with $|S_i| = k$ for $i \in [n-1]$, and $|S_n| = k'$. Suppose further that $(n-1)k + k' \geq (n-1)p + 1$. Then $|L(S_1, \ldots, S_n)| = p^{n-1}$. \end{claim} \begin{proof} We prove this by induction on $n$, analogous to Lemma~\ref{lem:main}. The case $n = 2$ is, again, given by the Cauchy-Davenport Theorem. \\ For $a \in \mathbb{F}_p$, let $T_a \defeq \{x_n \in S_n \mid \exists x_1 \in S_1,\ x_1 + x_n = a\}$ with $t_a \defeq |T_a|$. Looking at this restriction of $L$ (i.e., $x_1 + x_n = a$), we have a linear map $L_{x_1 + x_n = a}$ on the sets $S_2 \times S_3 \times \cdots \times T_a$, given by $L_{x_1 + x_n = a}(x_2 ,\ldots, x_n) = (x_2 + x_n, \ldots, x_{n-1} + x_n)$ (as in Lemma~\ref{lem:main}, we drop the first coordinate). \\ Here, $|S_i| = k$ for $i = 2,\ldots, n-1$, and $|T_a| \geq k+k'-p$, by Lemma~\ref{boundtx}. Further, the required condition holds, i.e., $$(n-2)k + t_a \geq (n-2)k + k + k' - p = (n-1)k + k' - p \geq (n-2)p+1.$$ Therefore, by the induction hypothesis, $|L_{x_1 + x_n = a}(S_2, \ldots, S_{n-1}, T_a)| = p^{n-2}$. Since this holds for every $a \in \mathbb{F}_p$, we have $|L(S_1, \ldots, S_n)| = p^{n-1}$. \end{proof} In particular, Claim~\ref{lem:cover} tells us that for the linear map $L$ given by $L(x_1,\ldots, x_n) = (x_1 + x_n, x_2 + x_n, \ldots, x_{n-1} + x_n)$ on $S_1 \times S_2 \times \cdots \times S_n$, if $|S_i| > \frac{(n-1)p}{n}$ for every $i$, then $L(S_1, \ldots, S_n) = \mathbb{F}_p^{n-1}$. \end{document}
\begin{document} \title{A $(k + 3)/2$-approximation algorithm for monotone submodular $k$-set packing and general $k$-exchange systems} \begin{abstract} We consider the monotone submodular $k$-set packing problem in the context of the more general problem of maximizing a monotone submodular function in a $k$-exchange system. These systems, introduced by Feldman et al.\ \cite{Feldman-2011}, generalize the matroid $k$-parity problem in a wide class of matroids and capture many other combinatorial optimization problems. We give a deterministic, non-oblivious local search algorithm that attains an approximation ratio of $(k + 3)/2 + \epsilon$ for the problem of maximizing a monotone submodular function in a $k$-exchange system, improving on the best known result of $k + \epsilon$, and answering an open question posed in Feldman et al. \end{abstract} \section{Introduction} In the general $k$-set packing problem, we are given a collection $\mathcal{G}$ of sets, each with at most $k$ elements, and an objective function $f : 2^\mathcal{G} \to \ensuremath{\mathbb{R}}_+$ assigning each subset of $\mathcal{G}$ a value, and seek a collection of pairwise-disjoint sets $S \subseteq \mathcal{G}$ that maximizes $f$. In the special case that $f(A) = |A|$, we obtain the \emph{unweighted $k$-set packing problem}. Similarly, if $f$ is a linear function, so that $f(A) = \sum_{e \in A}w(e)$ for some weight function $w : \mathcal{G} \to \ensuremath{\mathbb{R}}_+$, we obtain the \emph{weighted $k$-set packing problem}. In this paper we consider the case in which $f$ may be any monotone submodular function. For unweighted $k$-set packing, Hurkens and Schrijver \cite{Hurkens-1989} and Halld\'orsson \cite{Halldorsson-1995} independently obtained a $k/2 + \epsilon$ approximation via a simple local search algorithm.
Using similar techniques, Arkin and Hassin \cite{Arkin-1997} obtained a $k - 1 + \epsilon$ approximation for weighted $k$-set packing, and showed that this result is tight for their simple local search algorithm. Chandra and Halld\'orsson \cite{Chandra-1999} showed that a more sophisticated local search algorithm, which starts with a greedy solution and always chooses the best possible local improvement at each stage, attains an approximation ratio of $2(k + 1)/3 + \epsilon$. This was improved further by Berman \cite{Berman-2000}, who gave a \emph{non-oblivious} local search algorithm yielding a $(k + 1)/2 + \epsilon$ approximation for weighted $k$-set packing. Non-oblivious local search \cite{Khanna-1994} is a variant of local search in which an auxiliary objective function is used to evaluate solutions, rather than the problem's given objective. In the case of Berman's algorithm, the local search procedure repeatedly seeks to improve the sum of the \emph{squares} of the weights in the current solution, rather than the sum of the weights. Many of the above local search algorithms for $k$-set packing yield the same approximations for the more general problem of finding maximum independent sets in $(k + 1)$-claw free graphs. Additionally, local search techniques have proved promising for other generalizations of $k$-set packing, including variants of the matroid $k$-parity problem \cite{Lee-2010,Soto-2011a}. Motivated by the similarities between these problems, Feldman et al.\ \cite{Feldman-2011} introduced the class of \emph{$k$-exchange} systems, which captures problems amenable to approximation by local search algorithms. These systems are formulated in the general language of independence systems, which we now briefly review. An independence system is specified by a ground set $\mathcal{G}$, and a hereditary (i.e.\ non-empty and downward-closed) family $\mathcal{I}$ of subsets of $\mathcal{G}$.
The subsets of $\mathcal{G}$ contained in $\mathcal{I}$ are called \emph{independent sets}, and the inclusion-wise maximal sets of $\mathcal{I}$ are called \emph{bases} of the independence system $(\mathcal{G}, \mathcal{I})$. Given an independence system $(\mathcal{G}, \mathcal{I})$ and a function $f : 2^\mathcal{G} \to \ensuremath{\mathbb{R}}_+$, we consider the problem of finding an independent set $S \in \mathcal{I}$ that maximizes $f$. The class of $k$-exchange systems satisfies the following additional property: \begin{definition}[$k$-exchange system \cite{Feldman-2011}] \label{def:k-exchange} A hereditary system $\mathcal{I}$ is a \emph{$k$-exchange system} if, for all $A$ and $B$ in $\mathcal{I}$, there exists a multiset $Y = \{ Y_e \subseteq B \setminus A\ |\ e \in A \setminus B \}$, containing a subset $Y_e$ of $B \setminus A$ for each element $e \in A \setminus B$, that satisfies: \begin{enumerate}[itemsep=0em,topsep=0.5em,leftmargin=0.65in,label={\rm (K\arabic{*})}] \item $|Y_e| \le k$ for each $e \in A \setminus B$. \label{N1} \item Every $x \in B \setminus A$ appears in at most $k$ sets of $Y$. \label{N2} \item For all $C \subseteq A \setminus B$, $(B \setminus \left(\bigcup_{e \in C}Y_e\right)) \cup C \in \mathcal{I}$. \label{N3} \end{enumerate} \end{definition} We call the set $Y_e$ in Definition \ref{def:k-exchange} the \emph{neighborhood} of $e$ in $B$. For convenience, we extend the collection $Y$ in Definition \ref{def:k-exchange} by including the set $Y_x = \{x\}$ for each element $x \in A \cap B$. It is easy to verify that the resulting collection still satisfies conditions \ref{N1}--\ref{N3}. The 1-exchange systems are precisely the class of strongly base orderable matroids described by Brualdi \cite{Brualdi-1971}. This class is quite large and includes all gammoids, and hence all transversal and partition matroids.
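To make Definition \ref{def:k-exchange} concrete, the following small sketch (a toy $3$-set packing instance of our own choosing, not taken from \cite{Feldman-2011}) builds the neighborhoods $Y_e$ for two independent families and mechanically checks \ref{N1}--\ref{N3}:

```python
from itertools import chain, combinations

# Toy instance of the k-exchange property for k-set packing (k = 3):
# ground-set elements are sets of size <= 3, and a family is independent
# exactly when its sets are pairwise disjoint.
k = 3
A = [frozenset({1, 2, 3}), frozenset({4, 5})]
B = [frozenset({1, 4}), frozenset({2, 6}), frozenset({5, 7})]

def independent(fam):              # sets pairwise disjoint
    return all(s.isdisjoint(t) for s, t in combinations(fam, 2))

assert independent(A) and independent(B)

# Neighborhood of e in B: the sets of B that are not disjoint from e.
Y = {e: [b for b in B if b & e] for e in A}

assert all(len(Y[e]) <= k for e in A)                        # (K1)
assert all(sum(b in Y[e] for e in A) <= k for b in B)        # (K2)
for C in chain.from_iterable(combinations(A, r) for r in range(len(A) + 1)):
    removed = set().union(*(Y[e] for e in C)) if C else set()
    assert independent([b for b in B if b not in removed] + list(C))  # (K3)
print("K1-K3 hold")
```

Here \ref{N1} and \ref{N2} hold because each set has at most $k$ elements, and \ref{N3} holds because deleting every set of $B$ meeting $C$ leaves $C$ insertable, mirroring the discussion of the $k$-set packing case below.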
For $k > 1$, the class of $k$-exchange systems may be viewed as a common generalization of the matroid $k$-parity problem in strongly base orderable matroids and the independent set problem in $(k + 1)$-claw free graphs. Feldman et al. showed that $k$-exchange systems encompass a wide variety of combinatorial optimization problems, including $k$-set packing, intersection of $k$ strongly base orderable matroids, hypergraph $b$-matching (here $k = 2$), as well as problems such as the asymmetric traveling salesperson problem (here $k = 3$). Our results hold for any $k$-exchange system, and so we present them in the general language of Definition \ref{def:k-exchange}. However, the reader may find it helpful to think in terms of a concrete problem, such as the $k$-set packing problem. In that case, the ground set $\mathcal{G}$ is the given collection of sets, and a sub-collection of sets $S \subseteq \mathcal{G}$ is independent if and only if all of the sets in $S$ are disjoint. Given $A$ and $B$ as in Definition \ref{def:k-exchange}, $Y_e$ is the set of all sets in $B$ that share an element with the set $e \in A$ (i.e.\ the set of all sets in $B$ that are not disjoint from $e$). Then, property \ref{N3} is immediate, and \ref{N1} and \ref{N2} follow directly from the fact that each set in $\mathcal{G}$ contains at most $k$ elements. \subsection{Related Work} Recently, the problem of maximizing submodular functions subject to various constraints has attracted much attention. We focus here primarily on results pertaining to matroid constraints and related independence systems. In the case of an arbitrary single matroid constraint, Calinescu et al.\ \cite{Calinescu-2007} have attained an $e/(e - 1)$ approximation for monotone submodular maximization, via the \emph{continuous greedy algorithm}. This result is tight, provided that $P \neq NP$ \cite{Feige-1998}.
In the case of $k \ge 2$ simultaneous matroid constraints, an early result of Fisher, Nemhauser, and Wolsey \cite{Fisher-1978} shows that the standard greedy algorithm attains a $k + 1$ approximation for monotone submodular maximization. Fisher et al.\ state further that the result can be generalized to $k$-systems (a full proof appears in Calinescu et al.\ \cite{Calinescu-2007}). More recently, Lee, Sviridenko, and Vondr\'ak\ \cite{Lee-2010a} have improved this result to give a $k + \epsilon$ approximation for monotone submodular maximization over $k \ge 2$ arbitrary matroid constraints, via a simple, oblivious local search algorithm. A similar analysis was used by Feldman et al.\ \cite{Feldman-2011} to show that oblivious local search attains a $k + \epsilon$ approximation for the class of $k$-exchange systems (here, again, $k \ge 2$). For the more general class of $k$-systems, Gupta et al.\ \cite{Gupta-2010} give a $(1 + \beta)(k + 2 + 1/k)$ approximation, where $\beta$ is the best known approximation ratio for unconstrained non-monotone submodular maximization. In the case of unconstrained non-monotone submodular maximization, Feige, Mirrokni, and Vondr\'ak \cite{Feige-2007} gave a randomized $2.5$ approximation, which was successively improved by Gharan and Vondr\'{a}k \cite{Gharan-2011} and then Feldman, Naor, and Shwartz \cite{Feldman-2011a} to $\approx 2.38$. For non-monotone maximization subject to $k$ matroid constraints, Lee, Sviridenko, and Vondr\'ak \cite{Lee-2009} gave a $k + 2 + 1/k + \epsilon$ approximation, and later improved \cite{Lee-2010a} this to a $k + 1 + 1/(k - 1) + \epsilon$ approximation. Again, the latter result is obtained by a standard local search algorithm. Feldman et al.\ \cite{Feldman-2011} apply similar techniques to yield a $k + 1 + 1/(k - 1) + \epsilon$ approximation for non-monotone submodular maximization in the general class of $k$-exchange systems.
\subsection{Our Contribution} In the restricted case of a linear objective function, Feldman et al.\ \cite{Feldman-2011} gave a non-oblivious local search algorithm inspired by Berman's algorithm \cite{Berman-2000} for $(k + 1)$-claw free graphs. They showed that the resulting algorithm is a $(k + 1)/2 + \epsilon$ approximation for linear maximization in any $k$-exchange system. Here we consider a question posed in \cite{Feldman-2011}: namely, whether a similar technique can be applied to the case of monotone submodular maximization in $k$-exchange systems. In this paper, we answer this question affirmatively, giving a successful application of non-oblivious local search techniques to monotone submodular maximization in a $k$-exchange system. As in \cite{Feldman-2011}, the $k$-exchange property is used only in the analysis of our algorithm. The resulting non-oblivious local search algorithm attains an approximation factor of $\frac{k + 3}{2} + \epsilon$. For $k > 3$, this improves upon the $k + \epsilon$ approximation obtained by the oblivious local search algorithm presented in \cite{Feldman-2011}. Additionally, we note that our algorithm runs in time polynomial in $\epsilon^{-1}$, while the $k + \epsilon$ approximation algorithm of \cite{Feldman-2011} requires time exponential in $\epsilon^{-1}$. As a consequence of our general result, we obtain an improved approximation guarantee of $\frac{k + 3}{2}$ for a variety of monotone submodular maximization problems (some of which are generalizations of one another) including: $k$-set packing, independent sets in $(k + 1)$-claw free graphs, $k$-dimensional matching, intersection of $k$ strongly base orderable matroids, and matroid $k$-parity in a strongly base orderable matroid. In all cases, the previously best known result was $k + \epsilon$.
\section{A First Attempt at the Submodular Case} Before presenting our algorithm, we describe some of the difficulties that arise when attempting to adapt the non-oblivious local search algorithm of \cite{Feldman-2011} and \cite{Berman-2000} to the submodular case. Our hope is that this will provide some intuition for our algorithm, which we present in the next section. We recall that a function $f : 2^\mathcal{G} \to \ensuremath{\mathbb{R}}_+$ is submodular if $f(A) + f(B) \ge f(A \cup B) + f(A \cap B)$ for all $A, B \subseteq \mathcal{G}$. Equivalently, $f$ is submodular if for all $S \subseteq T$ and all $x \not\in T$, $f(S + x) - f(S) \ge f(T + x) - f(T)$. In other words, submodular functions are characterized by decreasing marginal gains. We say that a submodular function $f$ is monotone if it additionally satisfies $f(S) \le f(T)$ for all $S \subseteq T$. The non-oblivious algorithm of \cite{Feldman-2011} for the linear case is shown in Algorithm \ref{alg:nols}. It proceeds from one solution to another by applying a \emph{$k$-replacement} $(A,B)$. Formally, we call the pair of sets $(A,B)$, where $A \subseteq \mathcal{G} \setminus (S \setminus B)$ and $B \subseteq S$, a $k$-replacement if $|A| \le k$, $|B| \le k^2 - k + 1$, and $(S \setminus B) \cup A \in \mathcal{I}$. Algorithm \ref{alg:nols} repeatedly searches for a $k$-replacement that improves an auxiliary potential function $w^2$. Because $f$ is linear, it can be represented as a sum of weights, one for each element in $S$. If $w(e) = f(\{e\})$ is the weight assigned to an element $e$ in this representation, then the non-oblivious potential function is given by $w^2(S) = \sum_{e \in S}w(e)^2$.
That is, our non-oblivious potential function $w^2(S)$ is simply the sum of the \emph{squared} weights of the elements of $S$.\footnote{To ensure polynomial-time convergence, Algorithm \ref{alg:nols} first rounds the weights down to integer multiples of a suitably small value $\alpha$, related to the approximation parameter $\epsilon$. The algorithm then converges in time polynomial in $\epsilon^{-1}$ and $n$, at a loss of only $(1 - \epsilon)^{-1}$ in the approximation factor.} \begin{algorithm} \KwIn{\parbox[t]{7in}{\begin{itemize}[parsep=0em,itemsep=0em,topsep=0em,leftmargin=1em] \item Ground set $\mathcal{G}$ \item Membership oracle for $\mathcal{I} \subseteq 2^\mathcal{G}$ \item Value oracle for monotone submodular function $f : 2^\mathcal{G} \to \ensuremath{\mathbb{R}}_+$ \item Approximation parameter $\epsilon \in (0,1)$ \end{itemize}}} Let $S_\mathit{init} = \{ \arg\max_{e \in \mathcal{G}} w(e) \}$\; Let $\alpha = w(S_\mathit{init})\epsilon/n$\; Round all weights $w(e)$ down to integer multiples of $\alpha$\; $S \gets S_\mathit{init}$\; $\old{S} \gets S$\; \Repeat{$\old{S} = S$}{ \ForEach{$k$-replacement $(A,B)$}{ \If{$w^2(A) > w^2(B)$}{ $\old{S} \gets S$\; $S \gets (S \setminus B) \cup A$\; \Break\; } } } \Return $S$\; \caption{Non-Oblivious Local Search for Linear Objective Functions} \label{alg:nols} \end{algorithm} In the monotone submodular case, we can no longer necessarily represent $f$ as a sum of weights. However, borrowing some intuition from the greedy algorithm, we might decide to replace each weight $w(e)$ in the potential function $w^2$ with the marginal gain/loss associated with $e$. That is, at the start of each iteration of the local search algorithm, we assign each element $e \in \mathcal{G}$ the weight $w(e) = f(S + e) - f(S - e)$, where $S$ is the algorithm's current solution, and then proceed as before.
Note that $w(e)$ is simply the marginal gain attained by adding $e$ to $S$, in the case that $e \not\in S$, or the marginal loss suffered by removing $e$ from $S$, in the case that $e \in S$. We define the non-oblivious potential function $w^2$ in terms of the (current) weight function $w$, as before. Unfortunately, the resulting algorithm may fail to terminate, as the following small example shows. We consider a simple, unweighted coverage function on the universe $U = \{a,b,c,x,y,z\}$, defined as follows. Let: \begin{align*} S_1 &= \{a, b \} & S_3 &= \{x, y \} \\ S_2 &= \{a, c \} & S_4 &= \{x, z \} \end{align*} We define $\mathcal{G} = \{1,2,3,4\}$ and $f(A) = \left| \bigcup_{i \in A} S_i \right|$ for all $A \subseteq \mathcal{G}$. Finally, we consider the 2-exchange system with exactly two bases: $P = \{1,2\}$ and $Q = \{3,4\}$. For the current solution $S = P$, we have $w(1) = w(2) = 1$ and $w(3) = w(4) = 2$. Since $w^2(\{1,2\}) = 2 < 8 = w^2(\{3,4\})$, the 2-replacement $(\{3,4\}, \{1,2\})$ is applied, and the current solution becomes $Q$. In the next iteration, we have $S = Q$, and $w(1) = w(2) = 2$ and $w(3) = w(4) = 1$, so the 2-replacement $(\{1,2\}, \{3,4\})$ is applied by the algorithm. This returns us to the solution $P$, and the process repeats indefinitely. \section{The New Algorithm} Intuitively, the problem with this initial approach stems from the fact that the weight function used at each step of the algorithm depends on the current solution $S$ (since all marginals are taken with respect to $S$). Each time the algorithm makes an improvement, it changes the current solution, thereby changing the weights assigned to all elements in the next iteration. Hence, we are effectively making use of an entire family of non-oblivious potential functions, indexed by the current solution $S$.
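The oscillation in the example above is easy to reproduce mechanically; the following sketch (ours, for illustration only) computes the solution-dependent weights at $P$ and at $Q$ and confirms that each solution prefers the other under the naive potential:

```python
# The coverage instance from the text: f(A) = |union of S_i for i in A|,
# with solution-dependent weights w(e) = f(S + e) - f(S - e).
sets = {1: {'a', 'b'}, 2: {'a', 'c'}, 3: {'x', 'y'}, 4: {'x', 'z'}}

def f(A):
    return len(set().union(*(sets[i] for i in A))) if A else 0

def w(e, sol):                  # marginal gain/loss of e w.r.t. solution sol
    return f(sol | {e}) - f(sol - {e})

def w2(elems, sol):             # naive potential: sum of squared weights
    return sum(w(e, sol) ** 2 for e in elems)

P, Q = {1, 2}, {3, 4}
print(w2(P, P), w2(Q, P))   # 2 8 : at S = P, the move to Q looks improving
print(w2(Q, Q), w2(P, Q))   # 2 8 : at S = Q, moving back to P looks improving
```

Each comparison reports an "improvement", so the naive rule cycles between $P$ and $Q$ forever.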
It may be the case that a $k$-replacement $(A,B)$ results in an improvement with respect to the current solution's potential function, but in fact results in a \emph{decreased} potential value in the next iteration, after the weights have been updated. Surprisingly, we can solve the problem by introducing \emph{even more} variation in the potential function. Specifically, we allow the algorithm to use a different weight function not only for each current solution $S$, but also for \emph{each $k$-replacement $(A,B)$ that is considered}. We give the full algorithm at the end of this section and a detailed analysis in the next. Our general approach is to consider the elements of a set $X$ in some order $\prec$, and assign to each $e \in X$ the marginal gain in $f$ obtained when $e$ is added to the set of all elements of $X$ preceding it. By carefully updating $\prec$ together with the current solution $S$ at each step, we ensure that the algorithm converges to a local optimum. We now give the details of how our weights are calculated, given the current ordering $\prec$. At each iteration of the algorithm, before searching for an improving $k$-replacement, we assign weights $w$ to all of the elements of the current solution $S$. The weights will necessarily depend on $S$, but remain fixed for all $k$-replacements considered in the current phase. Let $s_i$ be the $i$th element of $S$ in the ordering $\prec$ and let $S_i = \{s_j \in S : j \le i\}$ be the set containing the first $i$ elements of $S$ in the ordering $\prec$. Then, the weight function $w : S \to \ensuremath{\mathbb{R}}_+$ is given by \[ w(s_i) = f(S_{i - 1} + s_i) - f(S_{i - 1}) \] for all $s_i \in S$.
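For concreteness, a small sketch (a toy coverage function of ours) computing these prefix marginals; by telescoping, they always sum to $f(S)$:

```python
# Prefix-marginal weights w(s_i) = f(S_{i-1} + s_i) - f(S_{i-1}) for a toy
# coverage function f; the marginals telescope, so they sum to f(S).
sets = {1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'c', 'd', 'e'}}

def f(A):
    return len(set().union(*(sets[i] for i in A))) if A else 0

order = [1, 2, 3]          # the restriction of the ordering "prec" to S
prefix, w = set(), {}
for s in order:
    w[s] = f(prefix | {s}) - f(prefix)
    prefix |= {s}

print(w, sum(w.values()) == f(set(order)))   # {1: 2, 2: 1, 3: 2} True
```

Note that, because $f$ is monotone, every such weight is nonnegative regardless of the ordering chosen.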
Note that our weight function satisfies \begin{equation} \label{eq:15} \sum_{x \in S}w(x) = \sum_{i = 1}^{|S|} \left(f(S_{i - 1} + s_i) - f(S_{i - 1})\right) = f(S) \end{equation} In order to evaluate each $k$-replacement $(A,B)$, we need to assign weights to the elements in $A \subseteq \mathcal{G} \setminus (S \setminus B)$. We use a different weight function for each $k$-replacement. Suppose that we are considering the $k$-replacement $(A,B)$. Let $a_i$ be the $i$th element of $A$ in the ordering $\prec$ and let $A_i = \{a_j \in A : j \le i\}$ be the set containing the first $i$ elements of $A$ in the ordering $\prec$. Then, we define the weight function $w_{(A,B)} : A \to \ensuremath{\mathbb{R}}_+$ by \[ w_{(A,B)}(a_i) = f((S \setminus B) \cup A_{i - 1} + a_i) - f((S \setminus B) \cup A_{i - 1}) \] for all $a_i \in A$. Note that for every $k$-replacement $(A,B)$, \begin{multline} \label{eq:16} \sum_{x \in A}w_{(A,B)}(x) = \sum_{i = 1}^{|A|}\left(f((S \setminus B) \cup A_{i - 1} + a_i) - f((S \setminus B) \cup A_{i - 1})\right) \\ = f((S \setminus B) \cup A) - f(S \setminus B) \ge f(S \cup A) - f(S) \enspace, \end{multline} where the last inequality follows from the decreasing marginals characterization of submodularity. Note that since the function $f$ is \emph{monotone} submodular, all of the weights $w$ and $w_{(A,B)}$ that we consider will be nonnegative. This fact plays a crucial role in our analysis. Our final algorithm appears in Algorithm \ref{alg:1}. We start from an initial solution $S_\mathit{init} = \{\arg\max_{e \in \mathcal{G}} f(\{e\})\}$, consisting of the singleton set of largest value. Note that after applying a $k$-replacement $(A,B)$, the algorithm updates $\prec$ to ensure that all of the elements of $S \setminus B$ precede those of $A$. As we shall see in the next section, this ensures that the algorithm will converge to a local optimum.
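The inequality in (\ref{eq:16}) can likewise be checked numerically; the following sketch (an illustrative instance of ours, using exact weights with no rounding) telescopes the weights $w_{(A,B)}$ over $(S \setminus B) \cup A_{i-1}$ and compares the total against the marginal gain over $S$:

```python
# Numeric check of (16): the exact weights w_{(A,B)} telescope to
# f((S \ B) | A) - f(S \ B), which dominates f(S | A) - f(S).
sets = {1: {'a'}, 2: {'a', 'b'}, 3: {'b', 'c'}, 4: {'c', 'd'}}

def f(A):
    return len(set().union(*(sets[i] for i in A))) if A else 0

S, B, A = {1, 2}, {2}, [3, 4]      # a candidate 2-replacement (A, B)
base, total = S - B, 0
for a in A:                        # A listed in the order "prec"
    total += f(base | {a}) - f(base)   # w_{(A,B)}(a_i)
    base |= {a}

print(total, f(S | set(A)) - f(S))     # 3 2 : sum of weights >= marginal gain
```

The gap between the two quantities is exactly the marginal value that $B$ loses inside $S$, which is what the decreasing-marginals inequality discards.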
As in the linear case, we use the non-oblivious potentials $w^2(B) = \sum_{b \in B}w(b)^2$ and $w^2_{(A,B)}(A) = \sum_{a \in A}w_{(A,B)}(a)^2$. Again, we note that while \emph{all} of our weights implicitly depend on the current solution, the weights $w_{(A,B)}$ additionally depend on the $k$-replacement $(A,B)$ considered. Additionally, to ensure polynomial-time convergence, we round all of our weights down to the nearest integer multiple of a value $\alpha$ that depends on the approximation parameter $\epsilon$. This ensures that every improvement increases the non-oblivious potential by an additive term of at least $\alpha^2$. Because of this rounding factor, we must actually work with the following analogs of (\ref{eq:15}) and (\ref{eq:16}): \begin{equation} \label{eq:15r} \sum_{x \in S}w(x) \le \sum_{i = 1}^{|S|} \left(f(S_{i - 1} + s_i) - f(S_{i - 1})\right) = \sum_{i = 1}^{|S|} \left(f(S_{i}) - f(S_{i - 1})\right) = f(S) - f(\emptyset) \le f(S) \end{equation} \begin{multline} \label{eq:16r} \sum_{x \in A}w_{(A,B)}(x) \ge \sum_{i = 1}^{|A|}\left(f((S \setminus B) \cup A_{i - 1} + a_i) - f((S \setminus B) \cup A_{i - 1}) - \alpha\right) \\ = f((S \setminus B) \cup A) - f(S \setminus B) - |A|\alpha \ge f(S \cup A) - f(S) - |A|\alpha \end{multline} \begin{algorithm}[t] \caption{Non-Oblivious Local Search} \label{alg:1} \KwIn{\parbox[t]{7in}{\begin{itemize}[parsep=0em,itemsep=0em,topsep=0em,leftmargin=1em] \item Ground set $\mathcal{G}$ \item Membership oracle for $\mathcal{I} \subseteq 2^\mathcal{G}$ \item Value oracle for monotone submodular function $f : 2^\mathcal{G} \to \ensuremath{\mathbb{R}}_+$ \item Approximation parameter $\epsilon \in (0,1)$ \end{itemize}}} \DontPrintSemicolon Let $S_\mathit{init} = \{ \arg\max_{e \in \mathcal{G}} f(\{e\} )\}$\; Let $\delta = \left(1 + \frac{k + 3}{2\epsilon}\right)^{-1}$ and $\alpha = f(S_\mathit{init})\delta/n$\; $S \gets S_\mathit{init}$, $\prec\ \gets $ an arbitrary total ordering on $\mathcal{G}$, and $\old{S} \gets S$\; \Repeat{$\old{S} =
S$}{ Sort $S$ according to $\prec$ and let $s_i$ be the $i$th element in $S$\; $X \gets \emptyset$\; \For{$i = 1$ \KwTo $|S|$} { $w(s_i) \gets \displaystyle\left\lfloor (f(X + s_i) - f(X))/\alpha\right\rfloor\alpha$\; $X \gets X + s_i$\; } \ForEach{$k$-replacement $(A,B)$}{ Sort $A$ according to $\prec$ and let $a_i$ be the $i$th element in $A$\; $X \gets S \setminus B$\; \For{$i = 1$ \KwTo $|A|$}{ $w_{(A,B)}(a_i) \gets \displaystyle\left\lfloor (f(X + a_i) - f(X))/\alpha\right\rfloor\alpha$\; $X \gets X + a_i$\; } \If{$w_{(A,B)}^2(A) > w^2(B)$}{ $\prec\ \gets $ the ordering $\prec'$ defined by $\begin{cases} x \prec' y & \text{for all $x \in S \setminus B$, $y \in A$}\\ x \prec' y & \text{if $x \prec y$ for all other $x,y$} \end{cases}$\; $\old{S} \gets S$\; $S \gets (S \setminus B) \cup A$\; \Break\; } } } \Return $S$\; \end{algorithm} \section{Analysis of Algorithm \ref{alg:1}} We now analyze the approximation and runtime performance of Algorithm \ref{alg:1}, beginning with the worst-case ratio, or \emph{locality gap}, $f(O)/f(S)$, where $S$ is any locally optimal solution (with respect to Algorithm \ref{alg:1}'s potential function) and $O$ is a globally optimal solution. We shall need the following technical lemma, which is a direct consequence of Lemma 1.1 in \cite{Lee-2010a}. We give a proof here for the sake of completeness. \begin{lemma} \label{lem:submod} Let $f$ be a submodular function on $\mathcal{G}$, let $T,S \subseteq \mathcal{G}$, and let $\{T_i\}_{i=1}^t$ be a partition of $T \setminus S$. Then, \[ \sum_{i=1}^t\left(f(S \cup T_i) - f(S)\right) \ge f(S \cup T) - f(S) \] \end{lemma} \begin{proof} Define $A_0 = S$ and $A_i = T_i \cup A_{i - 1}$ for all $1 \le i \le t$. Suppose that $T_i = \{ t_j \}_{j = 1}^{|T_i|}$. Then, note that $S \subseteq A_{i - 1}$ and $T_i \cap A_{i - 1} = \emptyset$.
Submodularity of $f$ implies that: \begin{align*} f(T_i \cup S) - f(S) &= \sum_{j = 1}^{|T_i|}\left(f(\{t_l\}_{l = 1}^{j} \cup S) - f(\{t_{l}\}_{l = 1}^{j - 1} \cup S)\right) \\ &\ge \sum_{j = 1}^{|T_i|}\left(f(\{t_l\}_{l = 1}^{j} \cup A_{i - 1}) - f(\{t_{l}\}_{l = 1}^{j - 1} \cup A_{i - 1})\right) \\ &= f(T_i \cup A_{i - 1}) - f(A_{i - 1}) = f(A_i) - f(A_{i - 1}) \end{align*} Now, we have \begin{equation*} \sum_{i = 1}^t\left(f(S \cup T_i) - f(S)\right) \ge \sum_{i = 1}^t\left(f(A_i) - f(A_{i - 1})\right) = f(A_t) - f(A_0) = f(S \cup T) - f(S) \qedhere \end{equation*} \end{proof} We begin by considering the approximation ratio of Algorithm \ref{alg:1}. Suppose that $S$ is the locally optimal solution returned by the algorithm on some instance, while $O$ is a global optimum for this instance. Then, for every $k$-replacement $(A,B)$, we must have $w^2_{(A,B)}(A) \le w^2(B)$, where $w$ and each $w_{(A,B)}$ are the weight functions determined by the solution $S$. We consider only a particular subset of $k$-replacements in our analysis. We have $S, O \in \mathcal{I}$ for the $k$-exchange system $\mathcal{I}$. Thus, there must be a collection $Y$ assigning to each element $e$ of $O$ a neighborhood $Y_e \subseteq S$, satisfying the conditions of Definition \ref{def:k-exchange}. For each $x \in S$, let $P_x$ be the set of all elements $e \in O$ for which: (1) $x \in Y_e$ and (2) for all $z \in Y_e$, $w(z) \le w(x)$. That is, $P_x$ is the set of all elements of $O$ for which $x$ is the heaviest element of their neighborhood (with ties broken arbitrarily, so that each $e \in O$ is assigned to exactly one such $x$). Note that the construction of $P_x$ depends on the fact that the weights $w$ assigned to elements in $S$ are \emph{fixed} throughout each iteration, and do \emph{not} depend on the particular improvement under consideration. We define $N_x = \bigcup_{e \in P_x}Y_e$, and consider $(P_x, N_x)$. Property \ref{N2} of $Y$ ensures that $|P_x| \le k$.
Similarly, property \ref{N1}, together with the fact that all elements $e \in P_x$ have as a common neighbor $x \in Y_e$, ensures that $|N_x| \le 1 + k(k - 1) = k^2 - k + 1$. Finally, property \ref{N3} ensures that $(S \setminus N_x) \cup P_x \in \mathcal{I}$. Thus, $(P_x, N_x)$ is a valid $k$-replacement for each $x \in S$. Observe that $\{P_x\}_{x \in S}$ is a partition of $O$. Furthermore, by the definition of $P_x$, we have $w(x) \ge w(z)$ for all $z \in N_x$. Again, this depends on the fact that the weights of elements in $S$ are the same for all $k$-replacements considered by the algorithm during a given phase. The following extension of a theorem from \cite{Berman-2000} allows us to relate the non-oblivious potentials $w^2$ and $w^2_{(P_x,N_x)}$ to the weight functions $w$ and $w_{(P_x,N_x)}$ for each of our $k$-replacements $(P_x, N_x)$. \begin{lemma} \label{thm:axy-lemma} For all $x \in S$ and $e \in P_x$, $$w_{(P_x,N_x)}^2(e) - w^2(Y_e - x) \ge w(x) \cdot \left(2 w_{(P_x,N_x)}(e) - w(Y_e)\right) \enspace.$$ \end{lemma} \begin{proof} Let $a = \frac{1}{2}w(Y_e)$, and $b, c$ be such that $w(x) = a + b$ and $w_{(P_x,N_x)}(e) = a + c$ (note that $b$ and $c$ are not necessarily positive). Then, since $e \in P_x$, every element $z$ in $Y_e$ has weight at most $w(x) = a + b$. Furthermore, $w(Y_e - x) = w(Y_e) - w(x) = a - b$. Thus, \begin{equation} w^2(Y_e - x) = \sum_{z \in Y_e - x} w(z)^2 \le \sum_{z \in Y_e - x}(a + b) w(z) = (a + b)(a - b) \label{eq:6} \end{equation} Using \eqref{eq:6} and our definition of $a$, $b$, and $c$, we have \begin{multline*} w_{(P_x,N_x)}^2(e) - w^2(Y_e - x) - w(x) \cdot (2 w_{(P_x,N_x)}(e) - w(Y_e)) \\ \ge (a + c)^2 - (a + b)(a - b) - (a + b)(2a + 2c - 2a) = (b - c)^2 \ge 0 \enspace. \qedhere \end{multline*} \end{proof} Using Lemma \ref{thm:axy-lemma} we can prove the following lemma, which uses the local optimality of $S$ to obtain a lower bound on the weight $w(x)$ of each element $x \in S$.
\begin{lemma} For each $x \in S$, $w(x) \ge \sum\limits_{e \in P_x} \left(2w_{(P_x,N_x)}(e) - w(Y_e)\right)$. \label{thm:charge} \end{lemma} \begin{proof} Because $S$ is locally optimal with respect to $k$-replacements, including in particular $(P_x,N_x)$, we must have: \begin{equation} \label{eq:7} w^2_{(P_x,N_x)}(P_x) \le w^2(N_x) \end{equation} First, we consider the case $w(x) = 0$. Recall that all the weights produced by the algorithm are non-negative. Because $w(x)$ is the largest weight in $N_x$, we must have $w(e) = 0$ for all $e \in N_x$ and so $w^2(N_x) = 0$. Moreover, (\ref{eq:7}) implies that $w^2_{(P_x,N_x)}(P_x) = 0$ as well, and so, in particular, $w_{(P_x,N_x)}(e) = 0$ for every $e \in P_x$. The claim then follows. Now, suppose that $w(x) \neq 0$. From (\ref{eq:7}), together with the fact that $x \in Y_e$ for all $e \in P_x$, and the non-negativity of all the weights $w$, we have: \begin{equation} w_{(P_x,N_x)}^2(P_x) \le w^2(N_x) \le w^2(x) + \sum\limits_{e \in P_x} w^2(Y_e - x) \enspace.\label{eq:8} \end{equation} Rearranging \eqref{eq:8} using $w^2_{(P_x,N_x)}(P_x) = \sum_{e \in P_x} w^2_{(P_x,N_x)}(e)$ we obtain: \begin{equation} \sum\limits_{e \in P_x} \left(w^2_{(P_x,N_x)}(e) - w^2(Y_e - x)\right) \le w^2(x) \enspace. \label{eq:9} \end{equation} Applying Lemma \ref{thm:axy-lemma} to each term on the left of \eqref{eq:9} we have: \begin{equation} \sum\limits_{e \in P_x} w(x)\cdot(2w_{(P_x,N_x)}(e) - w(Y_e)) \le \sum\limits_{e \in P_x} \left(w^2_{(P_x,N_x)}(e) - w^2(Y_e - x)\right) \le w^2(x) = w(x)^2 \enspace. \label{eq:19} \end{equation} Dividing by $w(x)$ (recall that $w(x) \neq 0$) then yields \begin{equation*} \sum\limits_{e \in P_x} \left(2w_{(P_x,N_x)}(e) - w(Y_e)\right) \le w(x) \enspace.\qedhere \end{equation*} \end{proof} We now prove our main result, which gives an upper bound on the locality gap of Algorithm \ref{alg:1}.
\begin{theorem} \label{thm:locality-gap} $\left( \frac{k + 3}{2} + \epsilon \right) f(S) \ge f(O)$ \end{theorem} \begin{proof} Lemma \ref{thm:charge} gives us one inequality for each $x \in S$. We now add all $|S|$ inequalities to obtain \begin{equation} \sum_{x \in S}\sum_{e \in P_x}\left(2w_{(P_x,N_x)}(e) - w(Y_e)\right) \le \sum_{x \in S} w(x) \enspace. \label{eq:10} \end{equation} We have $\sum_{x \in S}w(x) \le f(S)$ by (\ref{eq:15r}). Additionally, from (\ref{eq:16r}), $ f(S \cup P_x) - f(S) - |P_x|\alpha \le \sum_{e \in P_x} w_{(P_x,N_x)}(e)$ for every $P_x$. Thus, (\ref{eq:10}) implies \begin{equation} 2\sum_{x \in S}\left(f(S \cup P_x) - f(S) - |P_x|\alpha \right) - \sum_{x \in S}\sum_{e \in P_x}w(Y_e) \le f(S) \enspace. \label{eq:11} \end{equation} Since $\{P_x\}_{x \in S}$ is a partition of $O$, (\ref{eq:11}) is equivalent to \begin{equation} 2\sum_{x \in S}\left(f(S \cup P_x) - f(S)\right) - 2|O|\alpha - \sum_{e \in O}w(Y_e) \le f(S) \enspace. \label{eq:12} \end{equation} We have $w(x) \ge 0$ for all $x \in S$, and there are at most $k$ distinct $e$ for which $x \in Y_e$, by property \ref{N2} of $Y$. Thus, we have \[\sum_{e \in O}w(Y_e) \le k\sum_{x \in S}w(x) \le kf(S)\enspace,\] by (\ref{eq:15r}). Combining this with (\ref{eq:12}), we obtain \begin{equation} 2\sum_{x \in S}\left(f(S \cup P_x) - f(S)\right) - 2|O|\alpha - kf(S) \le f(S) \label{eq:2} \end{equation} Using again the fact that $\{P_x\}_{x \in S}$ is a partition of $O$, we can apply Lemma \ref{lem:submod} to the remaining sum on the left of (\ref{eq:2}), yielding \begin{equation*} 2\left(f(S \cup O) - f(S)\right) - 2|O|\alpha - kf(S) \le f(S)\enspace \end{equation*} which simplifies to \begin{equation} \label{eq:17} f(S \cup O) - |O|\alpha \le \frac{k + 3}{2}f(S)\enspace. \end{equation} From the definition of $\alpha$ and the optimality of $O$, we have \begin{equation*} \label{eq:1} |O|\alpha \le n\alpha = \delta f(S_\mathit{init}) \le \delta f(O)\enspace.
\end{equation*} Finally, since $f$ is monotone, we have $f(S \cup O) \ge f(O)$. Thus, (\ref{eq:17}) implies: \begin{equation*} (1 - \delta)f(O) \le \frac{k + 3}{2}f(S)\enspace, \end{equation*} which, after expanding the definition of $\delta$ and simplifying, is equivalent to $f(O) \le \left(\frac{k + 3}{2} + \epsilon\right)f(S)$. \end{proof} Next, we consider the runtime of Algorithm \ref{alg:1}. Each iteration requires time $O(n)$ to compute the weights for $S$, plus time to evaluate all potential $k$-replacements. There are $O(n^{k + k(k - 1) + 1}) = O(n^{k^2 + 1})$ such $k$-replacements $(A,B)$, and each one can be evaluated in time $O(k^2)$, including the computation of the weights $w_{(A,B)}$. Thus, the total runtime of Algorithm \ref{alg:1} is $O(Ik^2n^{k^2 + 1})$, where $I$ is the number of improvements it makes. The main difficulty remaining in our analysis is showing that Algorithm \ref{alg:1} constantly improves some global quantity, and so $I$ is bounded. Here, we show that although the weights $w$ assigned to elements of $S$ change at each iteration, the non-oblivious potential $w^2(S)$ is monotonically increasing. While the preceding analysis of the locality gap is valid regardless of the particular ordering $\prec$ used to generate the weights, our analysis of the convergence of Algorithm \ref{alg:1} requires that $\prec$ be updated at each phase to maintain the relative ordering of all elements in the current solution. Finally, we consider what happens to the total squared weight $w^2(S)$ of the current solution after applying a $k$-replacement $(A,B)$. In order to show that our algorithm terminates, we would like to show that this value is strictly increasing. To show this, it is sufficient to show that each weight $w(x)$ for $x \in (S \setminus B) \cup A$ is at least as large after applying the $k$-replacement as the corresponding weight before; the strict increase of the total then follows from the improvement condition.
Unfortunately, the weight assigned to an element is highly sensitive to the ordering $\prec$ in which elements are considered. Let $w$ be the weight function for solution $S$, and $w'$ be the weight function for solution $(S \setminus B) \cup A$. If $x \prec y$ for some $x \in A$, $y \in S\setminus B$, then in the updated weight function, we could have $w'(y) < w(y)$, since $w'$ considers the marginal gain of $y$ with respect to a set containing $x$, while $w$ does not (since $x \not\in S$). We avoid this phenomenon by updating the ordering $\prec$ each time an improvement is made. In particular, we ensure that all the elements of $S$ and $A$ are considered in the same relative order, but that all of $A$ comes after all of $S$. As we shall show, this ensures that the weight assigned to each individual element of $(S \setminus B) \cup A$ does not decrease when the $k$-replacement $(A,B)$ is applied. \begin{lemma} Suppose that for some $k$-replacement $(A,B)$ and $\alpha > 0$, we have $w^2_{(A,B)}(A) \ge w^2(B) + \alpha$, and Algorithm \ref{alg:1} applies the replacement $(A,B)$ to $S$ to obtain solution $T = (S \setminus B) \cup A$. Let $w_S$ be the weight function for solution $S$ and $w_T$ be the weight function for solution $T$. Then, $w_T^2(T) \ge w^2_S(S) + \alpha$. \label{lem:monotonic} \end{lemma} \begin{proof} After applying the $k$-replacement $(A,B)$ to $S$, we obtain a new current solution $T = (S \setminus B) \cup A$ and a new ordering $\prec_T$. We now show that for any element $x \in S \setminus B$, we must have $w(x) = w_S(x) \le w_T(x)$ and for any element $y \in A$, we must have $w_{(A,B)}(y) \le w_T(y)$. In the first case, let $S_x$ be the set of all elements in $S$ that come before $x$ in the ordering $\prec$, and similarly let $T_x$ be the set of all elements in $T$ that come before $x$ in $\prec_T$. Suppose that for some element $z \in T$ we have $z \prec_T x$. Then, since $x \in S \setminus B$, we must have $z \prec x$.
Thus, $T_x \subseteq S_x$. It follows directly from the submodularity of $f$ that \begin{equation*} w(x) = w_S(x) = \left\lfloor\frac{f(S_x + x) - f(S_x)}{\alpha}\right\rfloor\alpha \le \left\lfloor\frac{f(T_x + x) - f(T_x)}{\alpha}\right\rfloor\alpha = w_T(x)\enspace. \label{eq:13} \end{equation*} In the second case, let $A_y$ be the set of all elements of $A$ that come before $y$ in the ordering $\prec$, and let $T_y$ be the set of all elements of $T$ that come before $y$ in the ordering $\prec_T$. Suppose that for some element $z \in T$ we have $z \prec_T y$. Then, since $y \in A$, we must have either $z \in S \setminus B$ or $z \in A$ and $z \prec y$. Thus, $T_y \subseteq (S \setminus B) \cup A_y$, and so \begin{equation*} w_{(A,B)}(y)\! =\! \left\lfloor\frac{f((S\! \setminus\! B) \cup A_y + y) - f((S\!\setminus\! B) \cup A_y)}{\alpha}\right\rfloor\!\alpha \le \left\lfloor\frac{f(T_y + y) - f(T_y)}{\alpha}\right\rfloor\!\alpha = w_T(y) \enspace . \label{eq:14} \end{equation*} From the above bounds on $w$ and $w_{(A,B)}$, together with the assumption of the lemma, we now have \begin{multline*} w^2_S(S) = \sum_{x \in S \setminus B}w_S(x)^2 + \sum_{x \in B}w_S(x)^2 \le \sum_{x \in S \setminus B}w_S(x)^2 + \sum_{y \in A}w_{(A,B)}(y)^2 - \alpha \\\le \sum_{x \in S \setminus B}w_T(x)^2 + \sum_{y \in A}w_{T}(y)^2 - \alpha = w_T^2(T) - \alpha \enspace , \end{multline*} which rearranges to $w_T^2(T) \ge w^2_S(S) + \alpha$. \end{proof} \begin{theorem} \label{thm:runtime} For any value $\epsilon \in (0,1)$, Algorithm \ref{alg:1} makes at most $O(n^3\epsilon^{-2})$ improvements. \end{theorem} \begin{proof} Note that submodularity implies that for any element $e$ and any set $T \subseteq \mathcal{G}$, we must have $f(T + e) - f(T) \le f(\{e\}) \le f(S_\mathit{init})$.
In particular, for any solution $S \subseteq \mathcal{G}$ with associated weight function $w$, we have \[w^2(S) = \sum_{e \in S}w(e)^2 \le |S|f(S_\mathit{init})^2 \le nf(S_\mathit{init})^2\enspace.\] Consider a given improvement $(A,B)$ applied by the algorithm. Because every weight used in the algorithm is a multiple of $\alpha$, we have $w_{(A,B)}^2(A) > w^2(B)$ only if $w_{(A,B)}^2(A) \ge w^2(B) + \alpha^2$. Let $T = (S \setminus B) \cup A$ be the solution resulting from the improvement, and, as in the proof of Lemma \ref{lem:monotonic}, let $w_S$ be the weight function associated with $S$ and $w_T$ be the weight function associated with $T$. For any $\epsilon > 0$, we have $\alpha > 0$, and hence $\alpha^2 > 0$. Thus, from Lemma \ref{lem:monotonic}, after applying the improvement we must have $w^2_T(T) \ge w^2_S(S) + \alpha^2$. Hence, the number of improvements we can make is at most \[\frac{nf(S_\mathit{init})^2 - f(S_\mathit{init})^2}{\alpha^2} = (n - 1)\left(\frac{f(S_\mathit{init})}{\alpha}\right)^2 = (n - 1)\frac{n^2}{\delta^2} = O(n^3\epsilon^{-2}) \enspace.\qedhere \] \end{proof} \begin{corollary} For any $\epsilon > 0$, Algorithm \ref{alg:1} is a $\frac{k + 3}{2} + \epsilon$ approximation algorithm, running in time $O(\epsilon^{-2}k^2n^{k^2 + 4})$. \end{corollary} \section{Open Questions} \label{sec:open-questions} We do not currently have an example for which the locality gap of Algorithm \ref{alg:1} can be as bad as stated, even for specific $k$-exchange systems such as $k$-set packing. In the particular case of weighted independent set in $(k + 1)$-claw free graphs Berman \cite{Berman-2000} gives a tight example that shows his algorithm can return a set $S$ with $\frac{k + 1}{2}w(S) = w(O)$. His example uses only unit weights, and so the non-oblivious potential function is identical to the oblivious one.
However, the algorithm of Feldman et al.\ (given here as Algorithm \ref{alg:nols}) considers a larger class of improvements than those considered by Berman, and so Berman's tight example no longer applies, even in the linear case. For the unweighted variant, Hurkens and Schrijver give a lower bound of $k/2 + \epsilon$, where $\epsilon$ depends on the size of the improvements considered. Because the non-oblivious local search routine performs the same as oblivious local search on instances with unit weights (since $1 = 1^2$), this lower bound applies to Algorithm \ref{alg:nols} in the linear case. From a hardness perspective, the best known bound is the $\Omega(k/\ln k)$ NP-hardness result of Hazan, Safra, and Schwartz \cite{Hazan-2006}, for the special case of unweighted $k$-set packing. In addition to providing a tight example for our analysis, it would be interesting to see if similar techniques could be adapted to apply to more general problems such as matroid $k$-parity in arbitrary matroids (here, even an improvement over $k$ for the general linear case would be interesting) or to non-monotone submodular functions. A major difficulty with the latter generalization is our proof's dependence on the weights' non-negativity, as this assumption no longer holds if our approach is applied directly to non-monotone submodular functions. \subparagraph*{Acknowledgment} The author thanks Allan Borodin for providing comments on a preliminary version of this paper. \end{document}
\begin{document} \title{Propagation by Selective Initialization\\ and Its Application to\\ Numerical Constraint Satisfaction Problems} \begin{abstract} Numerical analysis has no satisfactory method for the more realistic optimization models. However, with constraint programming one can compute a cover for the solution set to arbitrarily close approximation. Because the use of constraint propagation for composite arithmetic expressions is computationally expensive, consistency is computed with interval arithmetic. In this paper we present theorems that support \emph{selective initialization}, a simple modification of constraint propagation that allows composite arithmetic expressions to be handled efficiently. \end{abstract} \section{Introduction} The following attributes all make an optimization problem more difficult: having an objective function with an unknown and possibly large number of local minima, being constrained, having nonlinear constraints, having inequality constraints, having both discrete and continuous variables. Unfortunately, faithfully modeling an application tends to introduce many of these attributes. As a result, optimization problems are usually linearized, discretized, relaxed, or otherwise modified to make them feasible according to conventional methods. One of the most exciting prospects of constraint programming is that such difficult optimization problems can be solved without these possibly invalidating modifications. Moreover, constraint programming solutions are of known quality: they yield intervals guaranteed to contain all solutions. Equally important, constraint programming can prove the absence of solutions.
In this paper we only consider the core of the constraint programming approach to optimization, which is to solve a system of nonlinear inequalities: \begin{equation} \label{nonLinSys} \begin{array}{ccccccccc} g_1(x_1&,& x_2 &,& \ldots &,& x_m) & \leq & 0 \\ g_2(x_1 &,& x_2 &,& \ldots &,& x_m) & \leq & 0 \\ \multicolumn{9}{c}{\dotfill} \\ g_k(x_1 &,& x_2 &,& \ldots &,& x_m) & \leq & 0 \\ \end{array} \end{equation} It is understood that it may happen that $g_i= -g_j$ for some pairs $i$ and $j$, so that equalities are a special case. If this occurs, then certain obvious optimizations are possible in the methods described here. The ability to solve systems such as (\ref{nonLinSys}) supports optimization in more ways than one. In the first place, these systems occur as conditions in some constrained optimization problems. Moreover, one of \myVec{g}{k} could be defined as $f(\myVec{x}{m}) - c$, where $f$ is the objective function and where $c$ is a constant. By repeatedly solving such a system for suitably chosen $c$, one can find the greatest value of $c$ for which (\ref{nonLinSys}) is found to have no solution. That value is a lower bound for the global minimum \cite{vnmdn03}. This approach handles nonlinear inequalities with real variables. It also allows some or all variables to be integer by regarding integrality as a constraint on a real variable \cite{bnldr97}. All constraint programming work in this direction has been based on interval arithmetic. The earliest work \cite{BNR88} used a generic propagation algorithm based directly on domain reduction operators for primitive arithmetic constraints. These constraints included $sum(x,y,z)$ defined as $x+y=z$ for all reals $x$, $y$, and $z$. Also included was $prod(x,y,z)$ defined as $xy=z$ for all reals $x$, $y$, and $z$. This was criticized in \cite{bmcv94}, which advocated the use of composite arithmetic expressions directly rather than reducing them to primitive arithmetic constraints.
In \cite{bggp99,vnmdn01b} it was acknowledged that the generic propagation algorithm is not satisfactory for CSPs that derive from composite arithmetic expressions. These papers describe propagation algorithms that exploit the structure of such expressions and thereby improve on what is attainable by evaluating such expressions in interval arithmetic. Selective Initialization was first described in \cite{vnmdn03a}. This was done under the tacit assumption that all default domains are $[-\infty,+\infty]$. As a result some of the theorems in that paper are not as widely applicable as claimed. All of this research is motivated by the severe difficulties that conventional numerical analysis experiences in solving practical optimization problems. It can be regarded as an attempt to fully exploit the potential of interval arithmetic. In this paper we also take this point of view. We show that, though Equation (\ref{nonLinSys}) can contain arbitrarily large expressions, only a small modification of the generic propagation algorithm is needed to optimally exploit the structure of these expressions. This is made possible by a new canonical form for Equation (\ref{nonLinSys}) that we introduce in this paper. In addition to supporting our application of constraint processing to solving systems similar to Equation (\ref{nonLinSys}), this canonical form exploits the potential for parallelism in such systems. \section{A software architecture for optimization problems} \begin{figure} \caption{ A software architecture for optimization problems. } \label{softArch} \end{figure} In Figure~\ref{softArch} we propose a hierarchical software architecture for optimization problems. Each layer is implemented in terms of the layer below. In the introduction we briefly remarked on how layer 4 can be reduced to layer 3. More detail is given in \cite{vnmdn03}. For the transition between layers 0 and 1 there is much material in the interval arithmetic literature.
The part that is relevant to constraint processing can be found in \cite{hckvnmdn01}. In the present paper we present a new method for implementing layer 3 in terms of layer 2. But first we review the transition between layers 1 and 2. \section{Preliminaries} In this section we provide background by reviewing some basic concepts. These reviews also serve to establish the terminology and notation used in this paper. The first few sections apply to all constraint satisfaction problems, not only to numerical ones. \subsection{Constraint satisfaction problems} A {\em constraint satisfaction problem (CSP)} consists of a set of {\em constraints}. Each of the variables in the constraints is associated with a {\em domain}, which is the set of values that are possible for the variable concerned. Typically, not all sets of values can be domains. For example, sets of real values are restricted to intervals, as described later. A {\em valuation} is a tuple indexed by variables where the component indexed by $v$ is an element of the domain of $v$. A {\em solution} is a valuation such that each constraint is true if every variable in the constraint is substituted by the component of the valuation indexed by the variable. The set of solutions is a set of valuations; hence a set of tuples; hence a \emph{relation}. We regard this relation as the relation defined by the CSP. In this way the relation that is the meaning of a constraint in one CSP can be defined by another. This gives CSPs a hierarchical structure. With each constraint, there is an associated {\em domain reduction operator}\/; DRO for short. This operator may remove from the domains of each of the variables in the constraint certain values that do not satisfy the constraint, given that the other variables of the constraint are restricted to their associated domains. Any DRO is contracting, monotonic, and idempotent.
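These properties can be illustrated concretely. The sketch below implements a DRO for the primitive constraint $\hbox{sq}(x,y)$ (that is, $y = x^2$) over closed intervals; the pair-of-floats representation and all function names are illustrative assumptions, outward rounding is ignored, and the backward projection uses the interval hull (so it is sound, though not always optimal):

```python
# Illustrative DRO for the primitive constraint sq(x, y), meaning y = x^2,
# over closed real intervals represented as (lo, hi) pairs of floats.
import math

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None  # None signals an empty domain

def dro_sq(X, Y):
    """Shrink the box (X, Y) using y = x^2; return the reduced box or None."""
    # Forward: y must lie in the image of X under squaring.
    cands = [X[0] ** 2, X[1] ** 2]
    y_img = (0.0 if X[0] <= 0.0 <= X[1] else min(cands), max(cands))
    Y2 = intersect(Y, y_img)
    if Y2 is None:
        return None
    # Backward: x must lie in the hull of +/- sqrt(Y2), intersected with X.
    hi = math.sqrt(Y2[1])
    X2 = intersect(X, (-hi, hi))
    if X2 is None:
        return None
    return X2, Y2

box = ((-3.0, 2.0), (1.0, 16.0))
once = dro_sq(*box)
twice = dro_sq(*once)
print(once)           # → ((-3.0, 2.0), (1.0, 9.0))
print(twice == once)  # → True
```

Applying the operator once contracts the box (it is contracting); applying it a second time changes nothing (it is idempotent).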
When the DROs of the constraints are applied in a ``fair'' order, the domains converge to a limit or one of the domains becomes empty. A sequence of DRO activations is \emph{fair} if every DRO occurs in it an infinite number of times. The resulting Cartesian product of the domains is the greatest common fixpoint of the DROs \cite{vnmd97,aptEssence}. If one of the domains becomes empty, it follows that no solutions exist within the initial domains. In practice, we are restricted to the domains that are representable in a computer. As there are only a finite number of these, any fair sequence of DRO applications yields domains that remain constant from a certain point onwards. \subsection{Constraints} According to the usual terminology in constraint programming, a constraint states that a certain relation holds between its arguments. But in first-order predicate logic the same role is played by an \emph{atomic formula}. In this paper we adopt the terminology of first-order predicate logic for the meaning of ``atomic formula'' and we reserve ``constraint'' for a special case. Thus an atomic formula consists of a predicate symbol with terms as arguments. A \emph{term} is a variable, a constant, or a function symbol with terms as arguments. What makes an atomic formula first-order is that a predicate symbol can only occur as the outermost symbol in the formula. At first sight, the inequalities in Equation (\ref{nonLinSys}) are atomic formulas. In fact, they follow the related, but different, usage that prevails in informal mathematics. The inequality \begin{equation} \label{inEq} \begin{array}{ccccccccc} g_i(x_1&,& x_2 &,& \ldots &,& x_m) & \leq & 0 \end{array} \end{equation} is an atomic formula where $\leq$ is the predicate symbol with two arguments, which are the terms $g_i(x_1, x_2 , \ldots , x_m)$ and $0$.
A possible source of confusion is that in mathematics $g_i(x_1, x_2 , \ldots , x_m)$ is not necessarily interpreted as a syntactically concrete term, but as an abstractly conceived function $g_i$ applied to the arguments \myVc{x}{1}{m}. The function could be defined by means of a term that looks quite different; such a term could be nested and contain several function symbols. For example, Equation (\ref{inEq}) could be $g_i(x_1,x_2) \leq 0$ with $g_i$ defined as $g_i(x,y) = x^2 + xy - y^2 $ for all $x$ and $y$. Accordingly, the atomic formula corresponding to Equation (\ref{inEq}) is \begin{equation} \label{inEqEx} \begin{array}{ccccccccc} \leq(+(sq(x),-(\times(x,y),sq(y))),0). \end{array} \end{equation} Taking advantage of infix and postfix notation this becomes $x^2 + xy - y^2 \leq 0$. A \emph{constraint} is an atomic formula without function symbols. An example of such an atomic formula is $sum(x,y,z)$, which is a ternary constraint whose relation is defined by $x+y=z$ for all reals $x$, $y$, and $z$. In this paper we translate Equation~(\ref{inEqEx}) to a CSP with the set of constraints $$ \{t_1 \leq 0,sum(t_2,t_3,t_1),sq(x,t_2),sum(t_4,t_3,t_5), prod(x,y,t_5),sq(y,t_4) \} $$ Here $t_2$, $t_5$, and $t_4$ stand for $x^2$, $xy$, and $y^2$ respectively; the subtraction $t_3 = t_5 - t_4$ is expressed as $sum(t_4,t_3,t_5)$. Consider a constraint $c(\myVec{x}{n})$. The meaning of predicate symbol $c$ is a relation, say $r$. For all $i \in \{1,\ldots,n\}$, a value $a_i$ for variable $x_i$ is \emph{inconsistent} with respect to $r$ and domains $X_1,\ldots,X_{i-1},X_{i+1},\ldots,X_{n}$ iff it is not the case that \begin{eqnarray*} \exists a_1 \in X_1,\ldots, \exists a_{i-1} \in X_{i-1},\exists a_{i+1}\in X_{i+1},\ldots, \exists a_{n} \in X_{n} \hbox{ such that } \\ \langle \myVec{a}{n} \rangle \in r. \end{eqnarray*} A DRO for $c$ may replace one or more of the domains \myVec{X}{n} by a subset if the set difference between the old and the new domain contains inconsistent values only. A DRO is \emph{optimal} if every domain is replaced by the smallest domain containing all its consistent values.
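The translation just illustrated can be carried out mechanically by traversing the term tree. The sketch below is illustrative only: the nested-tuple term representation, the fresh-variable names $t_1, t_2, \ldots$, and the convention that $sum(a,b,c)$ means $a + b = c$ are assumptions made for the example.

```python
# Illustrative translation of the term x^2 + x*y - y^2 into primitive
# constraints sum(a, b, c) [a + b = c], prod(a, b, c) [a * b = c], and
# sq(a, b) [a^2 = b], with a fresh variable t_i for each internal node.
from itertools import count

def translate(term, fresh, out):
    """Return the variable naming `term`; emit constraints into `out`."""
    if isinstance(term, str):          # a leaf: a variable
        return term
    op, *args = term
    names = [translate(a, fresh, out) for a in args]
    t = f"t{next(fresh)}"              # label for this internal node
    if op == "+":
        out.append(("sum", names[0], names[1], t))   # a + b = t
    elif op == "-":
        out.append(("sum", t, names[1], names[0]))   # t + b = a, i.e. t = a - b
    elif op == "*":
        out.append(("prod", names[0], names[1], t))
    elif op == "sq":
        out.append(("sq", names[0], t))
    return t

constraints = []
term = ("+", ("sq", "x"), ("-", ("*", "x", "y"), ("sq", "y")))
root = translate(term, count(1), constraints)
constraints.append((root, "<=", "0"))
for c in constraints:
    print(c)
# Emitted in depth-first order: ('sq', 'x', 't1'), ('prod', 'x', 'y', 't2'),
# ('sq', 'y', 't3'), ('sum', 't4', 't3', 't2'), ('sum', 't1', 't4', 't5'),
# ('t5', '<=', '0')
```

The fresh-variable numbering differs from the $t_i$ used in the text, but the constraint set is the same up to renaming.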
We call a constraint \emph{primitive} if an optimal DRO is available for it that can be computed sufficiently efficiently. What is sufficient depends on the context. \subsection{Constraint propagation} To gain information about the solution set, inconsistent values are removed as much as possible with modest computational effort. For example, DROs can be applied as long as they remove inconsistent values. It is the task of a constraint propagation algorithm to reach as quickly as possible a set of domains that cannot be reduced by any DRO. Many versions of this algorithm exist \cite{aptEssence,rthpra01}. They can be regarded as refinements of the algorithm in Figure~\ref{lGPA}, which we refer to as the \emph{generic propagation algorithm}\/; GPA for short. GPA maintains a pool of DROs, called the {\em active set}. No order is specified for applying these operators. Notice that the active set $A$ is initialized to contain the DROs of all constraints. \begin{figure} \caption{ Pseudo-code for GPA. } \label{lGPA} \end{figure} \subsection{Intervals} A {\em floating-point number} is any element of $F \cup \{-\infty, +\infty\}$, where $F$ is a finite set of reals. A {\em floating-point interval} is a closed connected set of reals, where the bounds, in so far as they exist, are floating-point numbers. When we write ``interval'' without qualification, we mean a floating-point interval. For every bounded non-empty interval $X$, $lb(X)$ and $rb(X)$ denote the least and the greatest element of $X$ respectively. They are referred to as the left and the right bound of $X$. If $X$ is not bounded from below, then $lb(X) = -\infty$. Similarly, if $X$ is not bounded from above, then $rb(X) = +\infty$. Thus, $ X = [lb(X), rb(X)]$ is a convenient notation for all non-empty intervals, bounded or not. A {\em box} is a Cartesian product of intervals.
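The GPA of Figure~\ref{lGPA} can be sketched as a worklist algorithm over interval domains. The sketch below is illustrative, not the algorithm of any of the cited systems: domains are pairs of floats, only the DRO for the $sum$ constraint is implemented, and rounding is ignored. Applying a DRO re-activates every DRO that shares a variable whose domain just shrank.

```python
# Illustrative worklist sketch of the generic propagation algorithm (GPA).

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def sum_dro(x, y, z):
    """DRO for the primitive constraint sum(x, y, z), i.e. x + y = z."""
    def op(d):
        new = {z: intersect(d[z], (d[x][0] + d[y][0], d[x][1] + d[y][1])),
               x: intersect(d[x], (d[z][0] - d[y][1], d[z][1] - d[y][0])),
               y: intersect(d[y], (d[z][0] - d[x][1], d[z][1] - d[x][0]))}
        changed = set()
        for v, dom in new.items():
            if dom is None:
                return None          # an empty domain: no solutions
            if dom != d[v]:
                d[v] = dom
                changed.add(v)
        return changed
    return ((x, y, z), op)

def gpa(domains, dros):
    active = list(range(len(dros)))  # initially all DROs are active
    while active:
        i = active.pop()
        changed = dros[i][1](domains)
        if changed is None:
            return None
        for j, (vars_j, _) in enumerate(dros):
            if changed & set(vars_j) and j not in active:
                active.append(j)
    return domains

doms = {"x": (0.0, 10.0), "y": (0.0, 10.0), "z": (3.0, 3.0),
        "v": (0.0, 10.0), "w": (9.0, 9.0)}
print(gpa(doms, [sum_dro("x", "y", "z"), sum_dro("y", "v", "w")]))
# → {'x': (0.0, 3.0), 'y': (0.0, 3.0), 'z': (3.0, 3.0), 'v': (6.0, 9.0), 'w': (9.0, 9.0)}
```

Note how narrowing $y$ through the first constraint propagates to $v$ through the second; by the confluence result cited above, the fixpoint does not depend on the order in which active DROs are selected.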
\subsection{Solving inequalities in interval arithmetic} Moore's idea of solving inequalities such as those in Equation (\ref{nonLinSys}) by means of interval arithmetic is at least as important as the subsequent applications of interval constraints to this problem. Suppose we wish to investigate the presence of solutions of the inequalities in Equation (\ref{nonLinSys}) in a box $X_1 \times \cdots \times X_m$. Then one evaluates in interval arithmetic the expression on the left-hand side of each inequality. As values for the variables $x_1 , \ldots , x_m$ one uses the intervals $X_1 , \ldots , X_m$. Suppose the result for the $i$th inequality is the interval $[a_i,b_i]$. We have exactly one of the following three cases. If $0 < a_i$ for at least one $i$, then there are no solutions. If $b_i \leq 0$ for all $i$, then all tuples in $X_1 \times \cdots \times X_m$ are solutions. Otherwise, either of the above may be true; it may also be that some of the tuples in $X_1 \times \cdots \times X_m$ are solutions, while others are not. In this last case it may be possible to split $X_1 \times \cdots \times X_m$, so that a more informative outcome is obtained for one or both of the resulting boxes. Such splits can be repeated as long as possible and necessary. \subsection{Interval CSPs} Problems in a wide variety of application areas can be expressed as CSPs. Domains can be as different as booleans, integers, finite symbolic domains, and reals. In this paper we consider {\em Interval CSPs} (ICSPs), which are CSPs where all domains are intervals and all constraints are primitive. ICSPs are important because they encapsulate what can be efficiently computed; they represent Layer 2 in the software architecture of Figure~\ref{softArch}. The layer is distinct from Layer 3 because in Equation (\ref{nonLinSys}) there typically occur atomic formulas that contain function symbols.
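The three-outcome test described above can be sketched directly. The interval operations below are the textbook formulas with rounding ignored, the example reuses the term $x^2 + xy - y^2$ from the constraint example, and all names are illustrative:

```python
# Illustrative sketch of Moore's test: evaluate the left-hand side of an
# inequality g <= 0 in interval arithmetic over a box, then classify the box.

def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def isub(a, b): return (a[0] - b[1], a[1] - b[0])
def imul(a, b):
    ps = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(ps), max(ps))
def isq(a):
    lo, hi = a
    if lo <= 0.0 <= hi:
        return (0.0, max(lo*lo, hi*hi))
    return (min(lo*lo, hi*hi), max(lo*lo, hi*hi))

def classify(X, Y):
    """Test x^2 + x*y - y^2 <= 0 on the box X x Y."""
    lo, hi = isub(iadd(isq(X), imul(X, Y)), isq(Y))
    if lo > 0:
        return "no solutions in the box"
    if hi <= 0:
        return "every point of the box is a solution"
    return "undecided: split the box"

print(classify((2.0, 3.0), (0.0, 0.5)))   # lhs evaluates to [3.75, 10.5] > 0
print(classify((0.0, 0.5), (2.0, 3.0)))   # lhs evaluates to [-9, -2.25] <= 0
print(classify((-1.0, 1.0), (-1.0, 1.0))) # lhs straddles 0: split
```

In the undecided case a real solver would bisect one of the intervals and recurse, as described above.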
To emphasize the role of ICSPs as a layer of software architecture, we view them as a \emph{virtual machine}, with a function that is similar to those for Prolog or Java. Just as a program in Prolog or Java is translated to virtual machine instructions, a system such as Equation (\ref{nonLinSys}) can be translated to an ICSP, as described in a later section. The instructions of the ICSP level are DROs, one for each constraint. As an example of such an ICSP virtual machine instruction, consider the DRO for the product constraint. It reduces the box $[a,b]\times [c,d]\times [e,f]$ to the box that has the projections \begin{eqnarray} \varphi([a,b] & \cap & ([e,f]/[c,d])) \nonumber\\ \varphi([c,d] & \cap & ([e,f]/[a,b])) \nonumber\\ \varphi([e,f] & \cap & ([a,b]*[c,d])) \nonumber \end{eqnarray} Here $\varphi$ is the function that yields the smallest interval containing its argument. Of particular interest is the effect of the DRO when all variables have $[-\infty,+\infty]$ as domain. For each argument, the domain after application of the DRO is defined as the \emph{default domain} of that argument. Typically, default domains are $[-\infty,+\infty]$. Notable exceptions include the constraint $sin(x,y)$ (defined as $y = \sin(x)$), where the default domain of $y$ is $[-1,1]$. Another is $\hbox{sq}(x,y)$ (defined as $y = x^2$), where the default domain of $y$ is $[0,+\infty]$. A difference with other virtual machines is that a program for the ICSP virtual machine is an unordered collection of DROs. Programs for other virtual machines are ordered sequences of instructions. In those other virtual machines, the typical instruction does not specify the successor instruction. By default this is taken to be the next one in textual order. Execution of the successor is implemented by incrementing the instruction counter by one. The simplicity of the instruction sequencing in conventional virtual (and actual) machines is misleading.
Many instruction executions concern untypical instructions, where the next instruction is specified to be one other than the default next instruction. Examples of such untypical instructions are branches (conditional or unconditional) and subroutine jumps. In the ICSP virtual machine, the DROs are the instructions, and they form an unordered set. Instead of an instruction counter specifying the next instruction, there is the active set of GPA containing the set of possible next instructions. Instead of an instruction or a default rule determining the next instruction to be executed, GPA selects in an unspecified way which of the DROs in the active set to execute. In this way, programs can be declarative: instructions have only meaning in terms of \emph{what} is to be computed. \emph{How} it is done (instruction sequencing) is the exclusive task of the virtual machine. \subsection{A canonical form for nonlinear numerical inequalities} \label{arch} Equation (\ref{nonLinSys}) may have multiple occurrences of variables in the same formula. As there are certain advantages in avoiding such occurrences, we rewrite without loss of generality the system in Equation~(\ref{nonLinSys}) to the canonical form shown in Figure~\ref{singleSys}. \begin{figure} \caption{The canonical form of the system in Equation~(\ref{nonLinSys}).} \label{singleSys} \end{figure} In Figure~\ref{singleSys}, the expressions for the functions $g_1, \ldots, g_k$ have no multiple occurrences of variables. As a result, they have variables \myVec{y}{n} instead of \myVec{x}{m}, with $m \leq n$ as in Equation~(\ref{nonLinSys}). This canonical form is obtained by associating with each of the variables $x_j$ in Equation~(\ref{nonLinSys}) an equivalence class of the variables in Figure~\ref{singleSys}. This is done by replacing in Equation~(\ref{nonLinSys}) each occurrence of a variable by a different element of the corresponding equivalence class. This is possible by ensuring that each equivalence class is at least as large as the number of occurrences of the corresponding variable.
The predicate $\hbox{{\it allEq }}$ is true if and only if all its real-valued arguments are equal. An advantage of this translation is that evaluation in interval arithmetic of each expression gives the best possible result, namely the range of the function values. At the same time, the $\hbox{{\it allEq }}$ constraint is easy to enforce by making all intervals of the variables in the constraint equal to their common intersection. This takes information into account from all $k$ inequalities. If the system in its original form as in Equation~(\ref{nonLinSys}), with multiple occurrences, were translated to a CSP, then only multiple occurrences in a single expression could be exploited at one time. In the coming sections, without loss of generality, we only consider expressions without multiple occurrences of variables. \subsection{Translating nonlinear inequalities to ICSPs} \label{translation} ICSPs represent what we \emph{can} solve. They consist of atomic formulas without function symbols that, moreover, have efficient DROs. Equation (\ref{nonLinSys}) exemplifies what we \emph{want} to solve: it consists of atomic formulas typically containing deeply nested terms. \paragraph{The tree form of a formula} We regard a first-order atomic formula as a tree. The unique predicate symbol is the root. The terms that are the arguments of the formula are also trees and they are the subtrees of the root. If the term is a variable, then the tree only has a root, which is that variable. A term may also be a function symbol with one or more arguments, which are terms. In that case, the function symbol is the root with the argument terms as subtrees. In the tree form of a formula the leaves are variables. In addition, we label every node that is a function symbol with a unique variable. Any constants that may occur in the formula are replaced by unique variables. We ensure that the associated domains contain the constants and are as small as possible.
\paragraph{Translating a formula to an ICSP} The tree form of a formula thus labeled is readily translated to an ICSP. The translation has a set of constraints in which each element is obtained by translating an internal node of the tree. The root translates to $p(\myVc{x}{0}{n-1})$ where $p$ is the predicate symbol that is the root and \myVc{x}{0}{n-1} are the variables labeling the children of the root. A non-root internal node of the form $f(\myVc{t}{0}{n-1})$ translates to $F(\myVc{x}{0}{n-1}, y)$, where \begin{itemize} \item $y$ is the variable labeling the node \item \myVc{x}{0}{n-1} are the variables labeling the child nodes \item $F$ is the relation defined by $F(\myVc{a}{0}{n-1}, v)$ iff $v= f(\myVc{a}{0}{n-1})$ for all $\myVc{a}{0}{n-1},v$. \end{itemize} \subsection{Search} Propagation may terminate with small intervals for all variables of interest. This is rare. More likely, propagation leaves a large box as containing all solutions, if any. To obtain more information about possibly existing solutions, it is necessary to split an ICSP into two ICSPs and to apply propagation to both. An ICSP $S'$ is a result of splitting an ICSP $S$ if $S'$ has the same constraints as $S$ and differs only in the domain for one variable, say, $x$. The domain for $x$ in $S'$ is the left or right half of the domain for $x$ in $S$. A \emph{search strategy} for an ICSP is a binary tree representing the result of successive splits. Search strategies can differ greatly in the effort required to carry them to completion. The most obvious search strategy is the \emph{greedy strategy}\/: the one that ensures that all intervals become small enough by choosing a widest domain as the one to be split. This is a plausible strategy in the case where the ICSP has a few point solutions. In general, the set of solutions is a continuum: a line segment, a piece of a surface, or a variety in a higher dimensional space that has positive volume. 
In such cases we prefer the search to result in a single box containing all solutions. Of course we also prefer such a box to be as small as possible. The greedy search strategy splits the continuum of solutions into an unmanageably large number of small boxes. It is not clear that the greedy strategy is preferable even in the case of a few well-isolated point solutions. In general we need a search strategy other than the greedy one. A more promising search strategy was first described in the \verb+absolve+ predicate of the BNR Prolog system \cite{BNR88} and by \cite{bmcv94}, where it is called \emph{box consistency}. The box consistency search strategy selects a variable and a domain bound. Box consistency uses binary search to determine a boundary interval that can be shown to contain no solutions. This boundary interval can then be removed from the domain, thus shrinking the domain. This is repeated until a boundary interval with width less than a certain tolerance is found that cannot be shown to contain no solutions. When this is the case for both boundaries of all variables, the domains are said to be \emph{box consistent} with respect to the tolerance used and with respect to the method for showing inconsistency. When this method is interval arithmetic, we obtain \emph{functional box consistency}. When it is propagation, then it is called \emph{relational box consistency} \cite{vnmdn01b}. All we need to know about search in this paper is that greedy search and box consistency are both search strategies and that both can be based on propagation. Box consistency is the more promising search strategy. Thus we need to compare interval arithmetic and propagation as ways of showing that a nonlinear inequality has no solutions in a given box. This we do in section~\ref{PSI}. \section{Propagation with selective initialization} \label{PSI} Suppose we have a term that can be evaluated in interval arithmetic. 
Let us compare the interval that is the result of such an evaluation with the effect of GPA on the ICSP obtained by translating the term as described in section~\ref{translation}. To make the comparison possible we define evaluation of a term in interval arithmetic. The definition follows the recursive structure of the term: a term is either a variable or it is a function symbol with terms as arguments. If the term is a variable, then the result is the interval that is the domain of that variable. If the term is a function symbol $f$ applied to arguments, then the result is $f$ evaluated in interval arithmetic applied to the results of evaluating the arguments in interval arithmetic. This assumes that every function symbol denotes a function that is defined on reals as well as on intervals. The latter is called the \emph{interval extension} of the former. For a full treatment of interval extensions, see \cite{moore66,nmr90,hnsn92}.

The following lemma appears substantially as Theorem 2.6 in \cite{chn98}.
\begin{lemma}
\label{basic}
Let $t$ be a term that can be evaluated in interval arithmetic. Let the variables of $t$ be $x_1,\ldots,x_n$. Let $y$ be the variable associated with the root of the tree form of $t$. Let $S$ be the ICSP that results from translating $t$, where the domains of $x_1,\ldots,x_n$ are $X_1,\ldots,X_n$ and where the domains of the internal variables are $[-\infty,+\infty]$. After termination of GPA applied to $S$, the domain for $y$ is the interval that results from interval arithmetic evaluation of $t$.
\end{lemma}
\begin{proof}
Suppose that a variable of a constraint has domain $[-\infty,+\infty]$. After applying the DRO for that constraint, this domain has become the result of the interval arithmetic operation that obtains the domain for this variable from the domains of the other variables of the constraint.\\
According to \cite{vnmd97,aptEssence}, every fair sequence of DROs converges to the same domains for the variables. These are also the domains on termination of GPA.
Let us consider a fair sequence that begins with a sequence $s$ of DROs that mimics the evaluation of $t$ in interval arithmetic. At the end of this, $y$ has the value computed by interval arithmetic. This shows that GPA gives a result that is a subinterval of the result obtained by interval arithmetic.\\ GPA terminates after activating the DROs in $s$. This is because in the interval arithmetic evaluation of $t$ an operation is only performed when its arguments have been evaluated. This means that the corresponding DRO only changes one domain. This domain is the domain of a unique variable that occurs in only one constraint that is already in the active set. Therefore none of the DRO activations adds a constraint to the active set, which is empty after $s$. \end{proof} GPA yields the same result whatever the way constraints are selected in the active set. Therefore GPA always gives the result of interval arithmetic evaluation. However, GPA may obtain this result in an inefficient way by selecting constraints that have no effect. This suggests that the active set be structured in a way that reflects the structure of $t$. This approach has been taken in \cite{bggp99,vnmdn01b}. The proof shows that, if the active set had not contained any of the constraints only involving internal variables, these constraints would have been added to the active set by GPA. This is the main idea of selective initialization. By initializing and ordering the active set in a suitable way and leaving GPA otherwise unchanged, it will obtain the interval arithmetic result with no more operations than interval arithmetic. This assumes the optimization implied by the Totality Theorem in \cite{hckmdw98}. \begin{definition} A constraint is a \emph{seed constraint} iff at least one of its variables has a domain that differs from the default domain assigned to that variable. 
\end{definition} For example, the term $\sin(x_1)+\sin(x_2)$ translates to an ICSP with constraints $sum(u,v,y)$, $sin(x_1,u)$, and $sin(x_2,v)$. When the domains are $[-\infty,+\infty]$ for all variables, then the seed constraints are $sin(x_1,u)$ and $sin(x_2,v)$. When the domains are $[-\infty,+\infty]$ for $x_1$, $x_2$, and $y$; $[-1,1]$ for $u$ and $v$, then $sum(u,v,y)$ is the one seed constraint. When the domains are $[-\infty,+\infty]$ for $x_2$ and $y$; $[-1,1]$ for $x_1$, $u$ and $v$, then the seed constraints are $sum(u,v,y)$ and $sin(x_1,u)$. \begin{definition} Let PSI (Propagation with Selective Initialization) be GPA except for the following modifications.\\ (a) PSI only applies to ICSPs generated by translation from an atomic formula. \\ (b) The active set is a priority queue that is ordered according to the distance from the root of the node that generated the constraint. The greater that distance, the earlier the item is removed from the queue. \\ (c) The active set contains all seed constraints and no other ones. \end{definition} Lemma~\ref{basic} says that GPA simulates interval arithmetic as far as the result is concerned. It does not say anything about the efficiency with which the result is obtained. Theorem~\ref{simulate} says that PSI obtains the result as efficiently as it is done in interval arithmetic. This assumes the functionality optimization in the DROs \cite{hckmdw98}. \begin{theorem} \label{simulate} Let $S$ be the ICSP obtained by translating a term $t$ in variables \myVc{x}{1}{n}, where these variables have domains \myVc{X}{1}{n}. Applying PSI to S terminates after selecting no constraint more than once. Moreover, the root variable ends up with $Z$ as domain where $Z$ is the interval resulting from evaluating $t$ with \myVc{x}{1}{n} substituted by \myVc{X}{1}{n}. \end{theorem} \begin{proof} Suppose GPA is applied to $S$ in such a way that all non-seed constraints are selected first. 
The execution of the DROs corresponding to the non-seed constraints does not change any domains. Therefore these DRO executions do not add any constraints. As a result, the effect of applying GPA is the same as if the active set had been initialized with only the seed constraints.\\
Suppose the seed constraints are selected according to priority order. This ensures that no future constraint selection re-introduces a constraint previously selected. Thus GPA terminates after activating every seed constraint exactly once.\\
Such an execution of GPA coincides step by step with that of PSI. As GPA terminates with the correct result, so does PSI.
\end{proof}

\section{Using ICSPs to solve inequalities}

We briefly reviewed how interval arithmetic can solve systems of nonlinear inequalities. The fundamental capability turned out to be that of evaluating a term in interval arithmetic. We saw that this can also be done by applying propagation to ICSPs generated from arithmetic terms. We now investigate how to extend this to the use of ICSPs to solve nonlinear inequalities.

\subsection{Using ICSPs to solve a single inequality}

Suppose $S$ is the ICSP resulting from translating
$$
g_i(\myVc{x}{1}{m}) \leq 0
$$
Let $y$ be the variable labeling the left child of the root; that is, the variable representing the value of the left-hand side. Let \myVc{X}{1}{m} be the domains in $S$ of \myVc{x}{1}{m}, respectively. Now suppose that GPA is applied to $S$. One possible initial sequence of DRO activations is the equivalent of interval arithmetic evaluation of the left-hand side, leaving $y \leq 0$ as the only constraint in the active set, with the domain for $y$ equal to $[a_i,b_i]$, the value in interval arithmetic of $g_i(\myVc{X}{1}{m})$. At this stage the DRO for $y \leq 0$ is executed. If $0 < a_i$, then failure occurs. If $b_i \leq 0$, then the domain for $y$ is unchanged. Therefore, no constraint is added to the active set. Termination occurs with nonfailure.
There is no change in the domain of any of \myVc{x}{1}{m}. The third possibility is that $a_i \leq 0 < b_i$. In this case, the domain for $y$ shrinks: the upper bound decreases from $b_i$ to $0$. This causes the constraints to be brought into the active set that correspond to nodes at the next lower level in the tree. This propagation may continue all the way down to the lowest level in the tree, resulting in shrinking of the domain of one or more of \myVc{x}{1}{m}. Let us compare this behaviour with the use of interval arithmetic to solve the same inequality. In all three cases, GPA gives the same outcome as interval arithmetic: failure or nonfailure. In the first two cases, GPA gives no more information than interval arithmetic. It also does no more work. In the third case, GPA may give more information than interval arithmetic: in addition to the nonfailure outcome, it may shrink the domain of one or more of \myVc{x}{1}{m}. This is beyond the capabilities of interval arithmetic, which is restricted to transmit information about arguments of a function to information about the value of the function. It cannot transmit information in the reverse direction. To achieve this extra capability, GPA needs to do more work than the equivalent of interval arithmetic evaluation. In the above, we have assumed that GPA magically avoids selecting constraints in a way that is not optimal. In such an execution of GPA we can recognize two phases: an initial phase that corresponds to evaluating the left-hand side in interval arithmetic, followed by a second phase that starts with the active set containing only the constraint $y \leq 0$. When we consider the nodes in the tree that correspond to the constraints that are selected, then it is natural to call the first phase bottom-up (it starts at the leaves and ends at the root) and the second phase top-down (it starts at the root and may go down as far as to touch some of the leaves). 
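The two phases just described can be made concrete with a small sketch of our own (not the paper's implementation): intervals are plain `(lo, hi)` pairs, and only the sum constraint is implemented. For the inequality $x_1 + x_2 \leq 0$ the bottom-up phase is one interval addition, the DRO for $y \leq 0$ shrinks the domain for $y$, and one top-down sweep narrows the argument domains.

```python
# Hedged sketch of the bottom-up/top-down phases for x1 + x2 <= 0.
# Intervals are (lo, hi) pairs; names are illustrative.

def add(a, b):                         # interval addition (bottom-up)
    return (a[0] + b[0], a[1] + b[1])

def sub(a, b):                         # interval subtraction
    return (a[0] - b[1], a[1] - b[0])

def meet(a, b):                        # intersection; None signals failure
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def solve_sum_leq_zero(X1, X2):
    # Bottom-up phase: interval-arithmetic evaluation of the left-hand side.
    Y = add(X1, X2)
    # DRO for y <= 0: shrink the domain for y.
    Y = meet(Y, (float('-inf'), 0.0))
    if Y is None:
        return None                    # failure: no solutions in the box
    # Top-down phase: the sum constraint transmits the shrunken Y
    # back to its arguments.
    X1 = meet(X1, sub(Y, X2))
    X2 = meet(X2, sub(Y, X1))
    return X1, X2, Y
```

For example, starting from $X_1 = X_2 = [-1,2]$, the top-down phase narrows both argument domains to $[-1,1]$, which interval arithmetic evaluation alone cannot do.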
The bottom-up phase can be performed automatically by the PSI algorithm. The start of the top-down phase is similar to the situation that occurs in search. In both search and the top-down phase a different form of selective initialization can be used, shown in the next section. The bottom-up phase and the top-down phase are separated by a state in which the active set only contains $y \leq 0$. For reasons that become apparent in the next section, we prefer a separate treatment of this constraint: not to add it to the active set, but to execute the shrinking of the domain for $y$ as an extraneous event. This is then a special case of termination of GPA, or its equivalent PSI, followed by the extraneous event of shrinking one domain. The pseudo-code for the PSI algorithm is given in Figure~\ref{PSIAlg}.
\begin{figure}
\caption{ Pseudo-code for Propagation with Selective Initialization (PSI). }
\label{PSIAlg}
\end{figure}
The correctness of the PSI algorithm can easily be deduced from the following theorem.
\begin{theorem}
\label{modEval}
Consider the ICSP $\mathcal{S}$ obtained from the tree $T$ of the atomic formula $g_i(x_1,\ldots,x_n) \leq 0$. Suppose we modify GPA so that the active set is initialized to contain only the seed constraints instead of all constraints. Suppose also that the active set is a priority queue in which the constraints are ordered according to the level they occupy in the tree $T$, with those that are further away from the root placed nearer to the front of the queue. Then GPA terminates with the same result as if the active set had been initialized to contain all constraints.
\end{theorem}
\begin{proof}
As before, suppose that in GPA the active set $A$ is initialized with all constraints, ordered such that the seed constraints are at the end of the active set. Applying the DRO of a constraint that is not a seed constraint will not affect any domain.
Thus, the constraints that are not seed constraints can be removed from the active set without changing the result of GPA. Since GPA does not specify any order, $A$ can be ordered as desired. Here we choose to order it in such a way that we get an efficient GPA when it is used to evaluate an expression (see the previous section).
\end{proof}

\section{Selective Initialization for search}

Often we find that after applying GPA to an ICSP $S$, the domain $X$ for one of the variables, say $x$, is too wide. Search is then necessary. This can take the form of splitting $S$ on the domain for $x$. The results of such a split are two separate ICSPs $S_1$ and $S_2$ that are the same as $S$ except for the domain of $x$. In $S_1$, $x$ has as domain the left half of $X$; in $S_2$, it is the right half of $X$. However, applying GPA to $S_1$ and $S_2$ entails duplicating work already done when GPA was applied to $S$. When splitting on $x$ after termination of the application of GPA to $S$, we have the same situation as at the beginning of the downward phase of applying GPA to an inequality: the active set is empty and an extraneous event changes the domain of one variable to a proper subset. The following theorem justifies a form of the PSI algorithm where the active set is initialized with what is typically a small subset of all constraints.
\begin{theorem}
\label{modSolveGeneral}
Let $T$ be the tree obtained from the atomic formula $g_i(x_1,\ldots,x_n) \leq 0$. Let $\mathcal{S}$ be the ICSP obtained from $T$. Let $x$ be a variable in $\mathcal{S}$. Suppose we apply GPA to $\mathcal{S}$. After the termination of GPA, suppose the domain of $x$ is changed to an interval that is a proper subset of it. If we then apply GPA to $\mathcal{S}$ with an active set initialized with only the constraints involving $x$, then GPA terminates with the same result as if the active set had been initialized to contain all constraints.
\end{theorem}
\begin{proof}
To prove Theorem~\ref{modSolveGeneral}, we must show that initializing GPA with all constraints gives the same result as initializing it with only the constraints involving $x$. Since no ordering is specified for the active set of GPA, we choose an order in which the constraints involving $x$ are at the end of the active set. Because GPA had terminated and DROs are idempotent, the constraints at the front of the active set, that is, those not involving $x$, do not affect any domain. Thus removing them from the active set in the initialization process does not change the fixpoint of GPA. This proves Theorem~\ref{modSolveGeneral}.
\end{proof}

\section{Further work}

We have only considered the application of selective initialization to solve a single inequality. A conjunction of inequalities such as Equation~(\ref{nonLinSys}) can be solved by solving each in turn. This has to be iterated because the solving of one inequality affects the domains of variables occurring in an inequality already solved. This suggests performing the solving of all inequalities in parallel. Doing so avoids the waste of completing an iteration on the basis of unnecessarily wide intervals. It also promises speed-up because many of the DRO activations only involve variables that are unique to one inequality. In the current version of the design of our algorithm, we combine this parallelization with a method of minimizing the complexity usually caused by multiple occurrences of variables.

\section{Conclusions}

Before interval methods it was not clear how to tackle numerically realistic optimization models. Only with the advent of interval arithmetic in the 1960s \cite{moore66} could one for the first time at least say: ``If only we had so much memory and so much time, then we could solve this problem.'' Interval arithmetic has been slow in developing. Since the 1980s constraint programming has added fresh impetus to interval methods.
Conjunctions of nonlinear inequalities, the basis for optimization, can be solved both with interval arithmetic and with constraint programming. In this paper we relate these two approaches. It was known that constraint propagation subsumes interval arithmetic. It was also clear that using propagation for the special case of interval arithmetic evaluation is wasteful. In this paper we present an algorithm for Propagation with Selective Initialization that ensures that propagation is as efficient as interval arithmetic in the special case of interval arithmetic evaluation. We also apply Selective Initialization to search and to solving inequalities. Preliminary results on a parallel version of the methods presented here suggest that realistic optimization models will soon be within reach of modest computing resources.

\section*{Acknowledgments}

We acknowledge generous support by the University of Victoria, the Natural Sciences and Engineering Research Council NSERC, the Centrum voor Wiskunde en Informatica CWI, and the Nederlandse Organisatie voor Wetenschappelijk Onderzoek NWO.

\end{document}
\begin{document}
\begin{frontmatter}
\title{On some new difference sequence spaces of fractional order}
\author[label1]{Serkan Demiriz\corref{cor1}}
\ead{[email protected]}\cortext[cor1]{Corresponding Author (Tel: +90 356 252 16 16, Fax: +90 356 252 15 85)}
\author[label3]{Osman Duyar}
\ead{[email protected]}
\address[label1]{Department of Mathematics, Faculty of Arts and Science, Gaziosmanpa\c{s}a University,\\ 60250 Tokat, Turkey }
\address[label3]{Anatolian High School, 60200 Tokat, Turkey }
\begin{abstract}
Let $\Delta^{(\alpha)}$ denote the fractional difference operator. In this paper, we define the new difference sequence spaces $c_0(\Gamma,\Delta^{(\alpha)},u)$ and $c(\Gamma,\Delta^{(\alpha)},u)$. Also, the $\beta$-duals of the spaces $c_0(\Gamma,\Delta^{(\alpha)},u)$ and $c(\Gamma,\Delta^{(\alpha)},u)$ are determined and their Schauder bases are calculated. Furthermore, the classes $(\mu(\Gamma,\Delta^{(\alpha)},u):\lambda)$ of matrix transformations, where $\mu\in \{c_{0},c\}$ and $\lambda \in \{c_{0},c,\ell_{\infty},\ell_1\}$, are characterized.
\end{abstract}
\begin{keyword}
Difference operator $\Delta^{(\alpha)}$, Sequence spaces, $\beta$-dual, Matrix transformations.
\end{keyword}
\end{frontmatter}
\noindent
\section{Preliminaries, background and notation}
By a \textit{sequence space}, we mean any vector subspace of $\omega$, the space of all real or complex valued sequences $x=(x_k)$. The well-known sequence spaces that we shall use throughout this paper are as follows:

$\ell_{\infty}$: the space of all bounded sequences,

$c$: the space of all convergent sequences,

$c_{0}$: the space of all null sequences,

$cs$: the space of all sequences which form convergent series,

$\ell_1$: the space of all sequences which form absolutely convergent series,

$\ell_p$: the space of all sequences which form $p$-absolutely convergent series, where $1< p<\infty$.

Let $X,Y$ be two sequence spaces and $A=(a_{nk})$ be an infinite matrix of real or complex numbers $a_{nk}$, where $n,k\in \mathbb{N}$.
Then, we say that $A$ defines a matrix mapping from $X$ into $Y$, and we denote it by writing $A:X\rightarrow Y$, if for every sequence $x=(x_{k})\in X$ the sequence $Ax=\{(Ax)_{n}\}$, the $A$-transform of $x$, is in $Y$; where
\begin{equation}\label{1}
(Ax)_{n}=\sum_{k} a_{nk}x_{k}, \quad (n\in \mathbb{N}).
\end{equation}
For simplicity in notation, here and in what follows, the summation without limits runs from $0$ to $\infty$. By $(X:Y)$, we denote the class of all matrices $A$ such that $A:X\rightarrow Y$. Thus, $A\in (X:Y)$ if and only if the series on the right side of (\ref{1}) converges for each $n\in \mathbb{N}$ and every $x\in X$, and we have $Ax=\{(Ax)_{n}\}_{n\in \mathbb{N}}\in Y$ for all $x\in X$. A sequence $x$ is said to be $A$-summable to $\alpha$ if $Ax$ converges to $\alpha$, which is called the $A$-limit of $x$.

If a normed sequence space $X$ contains a sequence $(b_{n})$ with the property that for every $x\in X$ there is a unique sequence of scalars $(\alpha_{n})$ such that
$$
\lim_{n\rightarrow \infty} \|x-(\alpha_{0}b_{0}+\alpha_{1}b_{1}+...+\alpha_{n}b_{n})\|=0,
$$
then $(b_{n})$ is called a \emph{Schauder basis} (or briefly \emph{basis}) for $X$. The series $\sum \alpha_{k}b_{k}$ which has the sum $x$ is then called the expansion of $x$ with respect to $(b_{n})$, and written as $x=\sum \alpha_{k}b_{k}$.

A matrix $A=(a_{nk})$ is called a \emph{triangle} if $a_{nk}=0$ for $k>n$ and $a_{nn}\neq 0$ for all $n\in \mathbb{N}$. It is trivial that $A(Bx)=(AB)x$ holds for triangle matrices $A,B$ and a sequence $x$. Further, a triangle matrix $U$ has a unique inverse $U^{-1}=V$, which is also a triangle matrix. Then, $x=U(Vx)=V(Ux)$ holds for all $x\in \omega$. We additionally write $\mathcal{U}$ for the set of all sequences $u$ such that $u_k\neq0$ for all $k \in \mathbb{N}$.
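As a small illustration of the $A$-transform in (\ref{1}) and of $A$-summability, the following sketch of ours (the choice of the Cesàro matrix $C_1$ with $a_{nk}=\frac{1}{n+1}$ for $k\leq n$ is our illustrative example) computes $(Ax)_n$ for a row-finite matrix:

```python
# Illustrative sketch: the A-transform (Ax)_n = sum_k a_{nk} x_k for a
# row-finite (triangle) matrix, using exact rational arithmetic.
from fractions import Fraction

def A_transform(a, x):
    """Return the list of (Ax)_n for a row-finite matrix a."""
    return [sum(row[k] * x[k] for k in range(len(row))) for row in a]

n = 4
# Cesaro matrix C1: a_{nk} = 1/(n+1) for k <= n, 0 otherwise.
C1 = [[Fraction(1, i + 1)] * (i + 1) for i in range(n)]
x = [(-1) ** k for k in range(n)]      # bounded but divergent sequence
y = A_transform(C1, x)                 # its C1-transform
```

Here the divergent sequence $x=((-1)^k)$ has a convergent $C_1$-transform, i.e., it is $C_1$-summable to $0$.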
For a sequence space $X$, the \emph{matrix domain} $X_{A}$ of an infinite matrix $A$ is defined by
\begin{equation}\label{2}
X_{A}=\left\{x=(x_{k})\in \omega: Ax\in X\right\},
\end{equation}
which is a sequence space. The approach of constructing a new sequence space by means of the matrix domain of a particular limitation method has recently been employed by Wang \cite{w}, Ng and Lee \cite{nglee}, Ayd{\i}n and Ba\c{s}ar \cite{cafb} and Altay and Ba\c{s}ar \cite{bafb5}.

The gamma function may be regarded as a generalization of $n!$ ($n$-factorial), where $n$ is any positive integer. The gamma function $\Gamma$ is defined for all real numbers $p$ except zero and the negative integers. It can be expressed as an improper integral as follows:
\begin{equation}\label{gamma}
\Gamma(p)=\int_0^\infty e^{-t}t^{p-1}dt.
\end{equation}
From the equality (\ref{gamma}) we deduce the following properties:\\
(i) If $n\in\mathbb{N}$ then we have $\Gamma(n+1)=n!$,\\
(ii) If $n\in\mathbb{R}-\{0,-1,-2,-3,...\}$ then we have $\Gamma(n+1)=n\Gamma(n)$.\\
For a proper fraction $\alpha$, Baliarsingh and Dutta defined the fractional difference operators $\Delta^{\alpha}:w\rightarrow w$, $\Delta^{(\alpha)}:w\rightarrow w$ and their inverses in \cite{bail3} as follows:
\begin{equation}\label{op}
\Delta^{\alpha}(x_{k})=\sum_{i=0}^\infty(-1)^i\frac{\Gamma(\alpha+1)}{i!\Gamma(\alpha+1-i)}x_{k+i}
\end{equation}
\begin{equation}\label{op2}
\Delta^{(\alpha)}(x_{k})=\sum_{i=0}^\infty(-1)^i\frac{\Gamma(\alpha+1)}{i!\Gamma(\alpha+1-i)}x_{k-i}
\end{equation}
\begin{equation}\label{op3}
\Delta^{-\alpha}(x_{k})=\sum_{i=0}^\infty(-1)^i\frac{\Gamma(1-\alpha)}{i!\Gamma(1-\alpha-i)}x_{k+i}
\end{equation}
and
\begin{equation}\label{op4}
\Delta^{(-\alpha)}(x_{k})=\sum_{i=0}^\infty(-1)^i\frac{\Gamma(1-\alpha)}{i!\Gamma(1-\alpha-i)}x_{k-i}
\end{equation}
where we assume throughout that the series defined in (\ref{op})-(\ref{op4}) are convergent.
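The coefficients $(-1)^i\Gamma(\alpha+1)/(i!\,\Gamma(\alpha+1-i))$ in (\ref{op}) and (\ref{op2}) can be checked numerically. The following sketch of ours (not from \cite{bail3}) reproduces, for $\alpha=\frac{1}{2}$, the coefficients $1,-\frac{1}{2},-\frac{1}{8},-\frac{1}{16},-\frac{5}{128},-\frac{7}{256}$:

```python
# Numerical check (ours) of the fractional difference coefficients
# (-1)^i * Gamma(alpha+1) / (i! * Gamma(alpha+1-i)) for alpha = 1/2.
# For a proper fraction alpha the argument alpha+1-i never hits a pole
# of the gamma function, so math.gamma is always defined here.
from math import gamma, factorial, isclose

def frac_diff_coeff(alpha, i):
    return (-1) ** i * gamma(alpha + 1) / (factorial(i) * gamma(alpha + 1 - i))

coeffs = [frac_diff_coeff(0.5, i) for i in range(6)]
expected = [1, -1/2, -1/8, -1/16, -5/128, -7/256]
assert all(isclose(c, e, abs_tol=1e-12) for c, e in zip(coeffs, expected))
```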
In particular, for $\alpha=\frac{1}{2}$,
\begin{eqnarray}
\nonumber \Delta^{1/2}x_k&=&x_{k}-\frac{1}{2}x_{k+1}-\frac{1}{8}x_{k+2}-\frac{1}{16}x_{k+3}-\frac{5}{128}x_{k+4} -\frac{7}{256}x_{k+5}-...\\
\nonumber \Delta^{-1/2}x_k&=&x_{k}+\frac{1}{2}x_{k+1}+\frac{3}{8}x_{k+2}+\frac{5}{16}x_{k+3}+\frac{35}{128}x_{k+4} +\frac{63}{256}x_{k+5}+...\\
\nonumber \Delta^{(1/2)}x_k&=&x_{k}-\frac{1}{2}x_{k-1}-\frac{1}{8}x_{k-2}-\frac{1}{16}x_{k-3}-\frac{5}{128}x_{k-4} -\frac{7}{256}x_{k-5}-...\\
\nonumber \Delta^{(-1/2)}x_k&=&x_{k}+\frac{1}{2}x_{k-1}+\frac{3}{8}x_{k-2}+\frac{5}{16}x_{k-3}+\frac{35}{128}x_{k-4} +\frac{63}{256}x_{k-5}+....
\end{eqnarray}
Baliarsingh \cite{bail1} defined the spaces $X(\Gamma,\Delta^{\alpha},u)$ for $X\in\{\ell_\infty,c_0,c\}$ by introducing the fractional difference operator $\Delta^{\alpha}$ for a positive proper fraction $\alpha$. In that article, Baliarsingh studied some topological properties of the spaces $X(\Gamma,\Delta^{\alpha},u)$ and established their $\alpha$-, $\beta$- and $\gamma$-duals. Following \cite{bail1}, we introduce the sequence spaces $c_0(\Gamma,\Delta^{(\alpha)},u)$ and $c(\Gamma,\Delta^{(\alpha)},u)$ and obtain some results related to these sequence spaces. Furthermore, we compute the $\beta$-duals of the spaces $c_0(\Gamma,\Delta^{(\alpha)},u)$ and $c(\Gamma,\Delta^{(\alpha)},u)$. Finally, we characterize some matrix transformations on the new sequence spaces.

\section{The sequence spaces $c_0(\Gamma,\Delta^{(\alpha)},u)$ and $c(\Gamma,\Delta^{(\alpha)},u)$}
In this section, we define the sequence spaces $c_0(\Gamma,\Delta^{(\alpha)},u)$ and $c(\Gamma,\Delta^{(\alpha)},u)$ and examine some topological properties of these sequence spaces. The notion of difference sequence spaces was introduced by K{\i}zmaz \cite{kzmaz}. It was generalized by Et and \c{C}olak \cite{metolak} as follows: Let $m$ be a non-negative integer.
Then
$$
\Delta^{m}(X)=\{x=(x_k): \Delta^{m}x_k\in X\}
$$
where $\Delta^{0}x=(x_k)$, $\Delta^{m}x=(\Delta^{m-1}x_k-\Delta^{m-1}x_{k+1})$ for all $k\in\mathbb{N}$ and
\begin{equation}\label{delta}
\Delta^{m}x_k=\sum_{i=0}^{m}(-1)^i\left(m \atop i\right)x_{k+i}.
\end{equation}
Furthermore, Malkowsky et al. \cite{emmmss} introduced the spaces
\begin{equation}\label{deltau}
\Delta_u^{(m)}X=\{x\in \omega: \Delta_u^{(m)}x\in X\}
\end{equation}
where $\Delta_u^{(m)}x=u\Delta^{(m)}x$ for all $x\in\omega$. In that study, the operator $\Delta^{(m)}:\omega\rightarrow\omega$ was defined as follows:
\begin{equation}\label{delta2}
\Delta^{(m)}x_k=\sum_{i=0}^{m}(-1)^i\left(m \atop i\right)x_{k-i}.
\end{equation}
Let $\alpha$ be a proper fraction and $u\in\mathcal{U}$. We define the sequence spaces $c_0(\Gamma,\Delta^{(\alpha)},u)$ and $c(\Gamma,\Delta^{(\alpha)},u)$ as follows:
\begin{equation}\label{space0}
c_0(\Gamma,\Delta^{(\alpha)},u)=\{x\in\omega: (\sum_{j=0}^ku_j\Delta^{(\alpha)}x_j)\in c_0\}
\end{equation}
and
\begin{equation}\label{space2}
c(\Gamma,\Delta^{(\alpha)},u)=\{x\in\omega: (\sum_{j=0}^ku_j\Delta^{(\alpha)}x_j)\in c\}.
\end{equation}
Now, we define the triangle matrix $\Delta_u^{(\alpha)}(\Gamma)=(\tau_{nk})$ by
\begin{equation}\label{matrix}
\tau_{nk}=\left\{\begin{array}{ll}
\displaystyle \sum_{i=0}^{n-k}(-1)^i\frac{\Gamma(\alpha+1)}{i!\Gamma(\alpha+1-i)}u_{i+k}, & (0\leq k\leq n)\\
\displaystyle 0, & (k>n)
\end{array}\right.
\end{equation}
for all $k,n\in \mathbb{N}$. Further, for any sequence $x=(x_k)$ we define the sequence $y=(y_k)$, which will be used frequently, as the $\Delta_u^{(\alpha)}(\Gamma)$-transform of $x$; that is,
\begin{eqnarray}\label{space1}
\nonumber y_k&=&\sum_{j=0}^ku_j\Delta^{(\alpha)}x_j~=~\sum_{j=0}^ku_j(x_{j}-\alpha x_{j-1}+\frac{\alpha(\alpha-1)}{2!}x_{j-2}+...)\\
&=&\sum_{j=0}^{k}\bigg(\sum_{i=0}^{k-j}(-1)^i\frac{\Gamma(\alpha+1)}{i!\Gamma(\alpha+1-i)}u_{i+j}\bigg)x_{j}
\end{eqnarray}
for all $k\in\mathbb{N}$.
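As a quick numerical sanity check (a sketch of our own, with arbitrary illustrative data for $u$ and $x$, and the convention $x_k=0$ for $k<0$), one can verify that the triangle matrix $(\tau_{nk})$ of (\ref{matrix}) indeed realizes $y_n=\sum_{j=0}^{n}u_j\Delta^{(\alpha)}x_j$:

```python
# Verify numerically (illustrative data) that the matrix tau of (matrix)
# produces the same y as the partial sums of u_j * Delta^{(alpha)} x_j,
# taking x_k = 0 for k < 0.
from math import gamma, factorial, isclose

def c(alpha, i):
    return (-1) ** i * gamma(alpha + 1) / (factorial(i) * gamma(alpha + 1 - i))

def tau(alpha, u, n, k):
    # tau_{nk} = sum_{i=0}^{n-k} c_i * u_{i+k} for k <= n, else 0
    return sum(c(alpha, i) * u[i + k] for i in range(n - k + 1)) if k <= n else 0.0

alpha, N = 0.5, 6
u = [1.0 + 0.1 * k for k in range(N)]        # any u with u_k != 0
x = [(-1) ** k / (k + 1) for k in range(N)]  # arbitrary test sequence

# y via the matrix: y_n = sum_{k<=n} tau_{nk} x_k
y_mat = [sum(tau(alpha, u, n, k) * x[k] for k in range(n + 1)) for n in range(N)]
# y directly: partial sums of u_j * Delta^{(alpha)} x_j
dx = [sum(c(alpha, i) * x[j - i] for i in range(j + 1)) for j in range(N)]
y_dir = [sum(u[j] * dx[j] for j in range(n + 1)) for n in range(N)]
assert all(isclose(a, b, abs_tol=1e-12) for a, b in zip(y_mat, y_dir))
```

The agreement reflects the interchange of summation order used to obtain the second line of the display above.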
It is natural that the spaces $c_0(\Gamma,\Delta^{(\alpha)},u)$ and $c(\Gamma,\Delta^{(\alpha)},u)$ may also be defined, with the notation of (\ref{2}), by
\begin{equation}\label{fac1}
c_0(\Gamma,\Delta^{(\alpha)},u)=(c_0)_{\Delta_u^{(\alpha)}(\Gamma)}~~ \textrm{and}~~~~c(\Gamma,\Delta^{(\alpha)},u)=c_{\Delta_u^{(\alpha)}(\Gamma)}.
\end{equation}
Before the main results, let us give some lemmas with respect to the operator $\Delta^{(\alpha)}$, which we use frequently throughout this study.
\begin{lm}\label{lem1}\cite[Theorem 2.2]{bail3}
$$\Delta^{(\alpha)}\circ\Delta^{(\beta)}=\Delta^{(\beta)}\circ\Delta^{(\alpha)}=\Delta^{(\alpha+\beta)}.$$
\end{lm}
\begin{lm}\label{lem2}\cite[Theorem 2.3]{bail3}
$$\Delta^{(\alpha)}\circ\Delta^{(-\alpha)}=\Delta^{(-\alpha)}\circ\Delta^{(\alpha)}=Id,$$
where $Id$ is the identity operator on $\omega$.
\end{lm}
\begin{thm}
The sequence spaces $c_0(\Gamma,\Delta^{(\alpha)},u)$ and $c(\Gamma,\Delta^{(\alpha)},u)$ are $BK$-spaces with the norm
\begin{equation}\label{norm}
\|x\|_{c_0(\Gamma,\Delta^{(\alpha)},u)}=\|x\|_{c(\Gamma,\Delta^{(\alpha)},u)} =\sup_k\bigg|\sum_{j=0}^ku_j\Delta^{(\alpha)}x_j\bigg|.
\end{equation}
\end{thm}
\begin{pf}
Since (\ref{fac1}) holds, the spaces $c_{0}$ and $c$ are $BK$-spaces with respect to their natural norms (see \cite[pp. 16-17]{fb}), and the matrix $\Delta_u^{(\alpha)}(\Gamma)=(\tau_{nk})$ is a triangle, Theorem 4.3.12 of Wilansky \cite[p. 63]{aw} gives the fact that $c_0(\Gamma,\Delta^{(\alpha)},u)$ and $c(\Gamma,\Delta^{(\alpha)},u)$ are $BK$-spaces with the given norms. This completes the proof.
\end{pf}
Now, we may give the following theorem concerning the isomorphism between the spaces $c_0(\Gamma,\Delta^{(\alpha)},u)$, $c(\Gamma,\Delta^{(\alpha)},u)$ and $c_0$, $c$, respectively:
\begin{thm}
The sequence spaces $c_0(\Gamma,\Delta^{(\alpha)},u)$ and $c(\Gamma,\Delta^{(\alpha)},u)$ are linearly isomorphic to the spaces $c_0$ and $c$, respectively; i.e., $c_0(\Gamma,\Delta^{(\alpha)},u)\cong c_0$ and $c(\Gamma,\Delta^{(\alpha)},u)\cong c$.
\end{thm}
\begin{pf}
We prove the theorem for the space $c(\Gamma,\Delta^{(\alpha)},u)$. To prove this, we should show the existence of a linear bijection between the spaces $c(\Gamma,\Delta^{(\alpha)},u)$ and $c$. Consider the transformation $T$ defined, with the notation of (\ref{space1}), from $c(\Gamma,\Delta^{(\alpha)},u)$ to $c$ by $x\mapsto y=Tx=\Delta_u^{(\alpha)}(\Gamma)x$. The linearity of $T$ is clear. Further, it is trivial that $x=\theta$ whenever $Tx=\theta$, and hence $T$ is injective. Let $y=(y_k)\in c$ and define the sequence $x=(x_k)$ by
\begin{equation}\label{ters}
x_k=\sum_{i=0}^{\infty}(-1)^i\frac{\Gamma(1-\alpha)}{i!\Gamma(1-\alpha-i)}\frac{y_{k-i}-y_{k-i-1}}{u_{k-i}}.
\end{equation}
Then, by Lemma \ref{lem2}, we deduce that
\begin{eqnarray}\label{space5}
\nonumber \sum_{j=0}^ku_j\Delta^{(\alpha)}x_j&=&\sum_{j=0}^ku_j\Delta^{(\alpha)}\bigg(\sum_{i=0}^{\infty}(-1)^i \frac{\Gamma(1-\alpha)}{i!\Gamma(1-\alpha-i)}\frac{y_{j-i}-y_{j-i-1}}{u_{j-i}}\bigg)\\
\nonumber &=&\sum_{j=0}^ku_j\Delta^{(\alpha)}\bigg(\Delta^{(-\alpha)}\bigg(\frac{y_{j}-y_{j-1}}{u_{j}}\bigg)\bigg)\\
&=&\sum_{j=0}^k(y_{j}-y_{j-1})~=~y_k.
\end{eqnarray}
Hence, $x\in c(\Gamma,\Delta^{(\alpha)},u)$ and so $T$ is surjective. Furthermore, one can easily show that $T$ is norm preserving. This completes the proof.
\end{pf}

\section{The $\beta$-Dual of The Spaces $c_0(\Gamma,\Delta^{(\alpha)},u)$ and $c(\Gamma,\Delta^{(\alpha)},u)$}
In this section, we determine the $\beta$-duals of the spaces $c_0(\Gamma,\Delta^{(\alpha)},u)$ and $c(\Gamma,\Delta^{(\alpha)},u)$. For the sequence spaces $X$ and $Y$, define the set $S(X,Y)$ by
\begin{equation}\label{3.1}
S(X,Y)=\{z=(z_{k})\in \omega: xz=(x_{k}z_{k})\in Y \textrm{ for all } x\in X\}.
\end{equation}
With the notation of (\ref{3.1}), the $\beta$-dual of a sequence space $X$ is defined by
$$
X^{\beta}=S(X,cs).
$$ \begin{lm}\label{e4.10} $A\in (c_0:c)$ if and only if \begin{equation}\label{e5.2} \lim_{n\rightarrow\infty} a_{nk}=\alpha_k \ \textrm{for each fixed}\ k\in \mathbb{N}, \end{equation} \begin{equation}\label{e5.3} \sup_{n\in \mathbb{N}}\sum_k|a_{nk}|<\infty. \end{equation} \end{lm} \begin{lm}\label{e4..2} $A\in (c:c)$ if and only if (\ref{e5.2}) and (\ref{e5.3}) hold, and \begin{equation} \lim_{n\rightarrow\infty}\sum_ka_{nk}\quad\ \textrm{exists}. \end{equation} \end{lm} \begin{lm}\label{lem3} $A=(a_{nk})\in (\ell_\infty:\ell_\infty)$ if and only if \begin{equation}\label{7c} \sup_n\sum_{k} |a_{nk}|<\infty. \end{equation} \end{lm} \begin{thm}\label{t3} Define the sets $\Gamma_{1}$, $\Gamma_{2}$, $\Gamma_{3}$ and a matrix $T=(t_{nk})$ by $$ t_{nk}=\left\{\begin{array}{ll} \displaystyle t_{k}-t_{k+1}, & (k<n)\\ \displaystyle t_{n}, & (k=n) \\\displaystyle\ 0, & (k>n) \end{array}\right. $$ for all $k,n\in\mathbb{N}$, where $t_{k}=a_{k}\sum_{i=0}^{k}(-1)^i\frac{\Gamma(1-\alpha)}{i!\Gamma(1-\alpha-i)} \frac{1}{u_{k-i}}$, and \begin{eqnarray*} \Gamma_{1}&=&\bigg\{ a=(a_n)\in \omega:\lim_{n\rightarrow\infty}t_{nk}=\alpha_k ~\textrm{exists for each}\ k\in \mathbb{N}\bigg\} \\ \Gamma_{2}&=&\bigg\{ a=(a_n)\in \omega:\sup_{n\in\mathbb{N}}\sum_{k}^{}|t_{nk}|<\infty\bigg\}\\ \Gamma_{3}&=&\bigg\{ a=(a_n)\in \omega:\lim_{n\rightarrow\infty}\sum_k t_{nk}\quad\ \textrm{exists}\bigg\}. \end{eqnarray*} Then, $\{c_0(\Gamma,\Delta^{(\alpha)},u)\}^{\beta}=\Gamma_1\cap \Gamma_2$ and $\{c(\Gamma,\Delta^{(\alpha)},u)\}^{\beta}=\Gamma_1\cap \Gamma_2\cap \Gamma_3$. \end{thm} \begin{pf} We prove the theorem for the space $c_0(\Gamma,\Delta^{(\alpha)},u)$. Let $a=(a_n)\in \omega$ and $x=(x_k)\in c_0(\Gamma,\Delta^{(\alpha)},u) $.
Then, we obtain the equality \begin{eqnarray}\label{beta} \nonumber \sum_{k=0}^{n}a_kx_k&=&\sum_{k=0}^{n}a_{k}\sum_{i=0}^{\infty}(-1)^i\frac{\Gamma(1-\alpha)}{i!\Gamma(1-\alpha-i)} \frac{y_{k-i}-y_{k-i-1}}{u_{k-i}}\\ \nonumber &=&\sum_{k=0}^{n}a_{k}\sum_{i=0}^{k}(-1)^i\frac{\Gamma(1-\alpha)}{i!\Gamma(1-\alpha-i)} \frac{y_{k-i}-y_{k-i-1}}{u_{k-i}}\\ \nonumber &=&\sum_{k=0}^{n-1}\bigg[a_{k}\sum_{i=0}^{k}(-1)^i\frac{\Gamma(1-\alpha)}{i!\Gamma(1-\alpha-i)} \frac{1}{u_{k-i}}-a_{k+1}\sum_{i=0}^{k+1}(-1)^i\frac{\Gamma(1-\alpha)}{i!\Gamma(1-\alpha-i)} \frac{1}{u_{k+1-i}}\bigg]y_k\\ \nonumber & &+~\bigg[a_{n}\sum_{i=0}^{n}(-1)^i\frac{\Gamma(1-\alpha)}{i!\Gamma(1-\alpha-i)} \frac{1}{u_{n-i}}\bigg]y_n\\ &=&T_ny. \end{eqnarray} We thus deduce by (\ref{beta}) that $ax=(a_kx_k)\in cs$ whenever $x=(x_k)\in c_0(\Gamma,\Delta^{(\alpha)},u)$ if and only if $Ty\in c$ whenever $y=(y_k)\in c_0$. This means that $a=(a_k)\in \{c_0(\Gamma,\Delta^{(\alpha)},u)\}^\beta$ if and only if $T\in (c_0:c)$. Therefore, by using Lemma \ref{e4.10}, we obtain \begin{eqnarray} &&\lim_{n\rightarrow\infty}t_{nk}=\alpha_k\qquad \textrm{exists for each}\ k\in\mathbb{N},\\ &&\sup_{n\in\mathbb{N}}\sum_{k}^{}|t_{nk}|<\infty. \end{eqnarray} Hence, we conclude that $\{c_0(\Gamma,\Delta^{(\alpha)},u)\}^{\beta}=\Gamma_1\cap \Gamma_2$. \end{pf} \section{Some matrix transformations related to the sequence spaces $c_0(\Gamma,\Delta^{(\alpha)},u)$ and $c(\Gamma,\Delta^{(\alpha)},u)$} In this final section, we state some results which characterize various matrix mappings on the spaces $c_0(\Gamma,\Delta^{(\alpha)},u)$ and $c(\Gamma,\Delta^{(\alpha)},u)$.
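Before proceeding, the operator identities of Lemmas \ref{lem1} and \ref{lem2} and the inverse formula (\ref{ters}) admit a quick numerical sanity check. The sketch below is ours (not part of the cited results); it writes the coefficients of $\Delta^{(\alpha)}$ as $(-1)^i\Gamma(\alpha+1)/(i!\,\Gamma(\alpha+1-i))$ and assumes, as the norm (\ref{norm}) suggests, that $\Delta_u^{(\alpha)}(\Gamma)$ acts by $y_k=\sum_{j=0}^{k}u_j(\Delta^{(\alpha)}x)_j$.

```python
import random

def frac_coeffs(alpha, n):
    # c_i = (-1)^i * binom(alpha, i) = (-1)^i Gamma(alpha+1)/(i! Gamma(alpha+1-i)),
    # computed via binom(alpha, i) = binom(alpha, i-1) * (alpha - i + 1) / i
    c = [1.0]
    for i in range(1, n):
        c.append(-c[-1] * (alpha - i + 1) / i)
    return c

def convolve(a, b):
    # truncated Cauchy product: composition of the two difference operators
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(len(a))]

N, alpha, beta = 15, 0.4, 0.25
# Lemma 1: Delta^(alpha) o Delta^(beta) = Delta^(alpha+beta)
lhs = convolve(frac_coeffs(alpha, N), frac_coeffs(beta, N))
assert all(abs(s - t) < 1e-10 for s, t in zip(lhs, frac_coeffs(alpha + beta, N)))
# Lemma 2: Delta^(alpha) o Delta^(-alpha) = Id
ident = convolve(frac_coeffs(alpha, N), frac_coeffs(-alpha, N))
assert abs(ident[0] - 1.0) < 1e-12 and all(abs(t) < 1e-10 for t in ident[1:])

# Inverse formula (ters): recover x from y_k = sum_{j<=k} u_j (Delta^(alpha) x)_j
random.seed(0)
x = [random.uniform(-1, 1) for _ in range(N)]
u = [random.uniform(0.5, 2.0) for _ in range(N)]
c = frac_coeffs(alpha, N)
d = [sum(c[i] * x[j - i] for i in range(j + 1)) for j in range(N)]  # Delta^(alpha) x
y, s = [], 0.0
for j in range(N):
    s += u[j] * d[j]
    y.append(s)
cinv = frac_coeffs(-alpha, N)  # the coefficients Gamma(1-alpha)/(i! Gamma(1-alpha-i)) with signs
diff = [(y[j] - (y[j - 1] if j else 0.0)) / u[j] for j in range(N)]
x_back = [sum(cinv[i] * diff[k - i] for i in range(k + 1)) for k in range(N)]
assert all(abs(a - b) < 1e-9 for a, b in zip(x, x_back))
```

The first two assertions check Lemmas \ref{lem1} and \ref{lem2} on truncated coefficient sequences (Vandermonde convolution); the last one checks that (\ref{ters}) inverts the triangle $\Delta_u^{(\alpha)}(\Gamma)$.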
We shall write throughout for brevity that \begin{equation}\label{z1} \widetilde{a}_{nk}=z_{nk}-z_{n,k+1}~~\textrm{and}~~b_{nk}=\sum_{j=0}^{n}\bigg(\sum_{i=0}^{n-j}(-1)^i \frac{\Gamma(\alpha+1)} {i!\Gamma(\alpha+1-i)}u_{i+j}\bigg) a_{jk} \end{equation} for all $k,n\in \mathbb{N}$, where $$z_{nk}=a_{nk}\sum_{i=0}^{k}(-1)^i\frac{\Gamma(1-\alpha)}{i!\Gamma(1-\alpha-i)} \frac{1}{u_{k-i}}.$$ Now, we may give the following theorem. \begin{thm}\label{teo4} Let $\lambda$ be any given sequence space and $\mu\in \{c_{0},c\}$. Then, $A =(a_{nk})\in (\mu(\Gamma,\Delta^{(\alpha)},u):\lambda) $ if and only if $C\in (\mu: \lambda)$ and \begin{eqnarray}\label{Bn} C^{(n)}\in (\mu:c) \end{eqnarray} for every fixed $n\in\mathbb{N}$, where $c_{nk}=\widetilde{a}_{nk}$ and $C^{(n)}=(c_{mk}^{(n)})$ with $$ c^{(n)}_{mk}=\left\{\begin{array}{ll} \displaystyle z_{nk}-z_{n,k+1}, & (k<m)\\ \displaystyle z_{nm}, & (k=m) \\\displaystyle\ 0, & (k>m) \end{array}\right. $$ for all $k,m\in\mathbb{N}.$ \end{thm} \begin{pf} Let $\lambda$ be any given sequence space. Suppose that (\ref{z1}) holds between the entries of $A=(a_{nk})$ and $C=(c_{nk})$, and take into account that the spaces $\mu(\Gamma,\Delta^{(\alpha)},u)$ and $\mu$ are linearly isomorphic. Let $A =(a_{nk})\in (\mu(\Gamma,\Delta^{(\alpha)},u):\lambda) $ and take any $y=(y_k)\in \mu$. Then $C\Delta_u^{(\alpha)}(\Gamma)$ exists and $\{a_{nk}\}_{k\in \mathbb{N}}\in \mu(\Gamma,\Delta^{(\alpha)},u)^{\beta}$, which yields that (\ref{Bn}) is necessary and $\{c_{nk}\}_{k\in \mathbb{N}}\in \mu^{\beta}$ for each $n\in \mathbb{N}$. Hence, $Cy$ exists for each $y\in\mu$, and thus, by letting $m\rightarrow\infty$ in the equality \begin{equation} \sum_{k=0}^{m}a_{nk}x_k=\sum^{m-1}_{k=0} (z_{nk}-z_{n,k+1})y_{k}+z_{nm}y_m; \quad (m,n\in \mathbb{N}) \end{equation} we obtain $Cy=Ax$, and so $C\in (\mu:\lambda)$. Conversely, suppose that $C\in (\mu:\lambda)$ and (\ref{Bn}) hold, and take any $x=(x_{k})\in \mu(\Gamma,\Delta^{(\alpha)},u)$.
Then, we have $\{c_{nk}\}_{k\in \mathbb{N}}\in \mu^{\beta}$, which gives together with (\ref{Bn}) that $\{a_{nk}\}_{k\in \mathbb{N}}\in \mu(\Gamma,\Delta^{(\alpha)},u)^{\beta}$ for each $n\in \mathbb{N}$. So, $Ax$ exists. Therefore, we obtain from the equality \begin{equation} \sum_{k=0}^{m} c_{nk}y_{k}=\sum_{k=0}^{m}\bigg[\sum_{j=k}^{m}\bigg(\sum_{i=0}^{j-k}(-1)^i\frac{\Gamma(\alpha+1)} {i!\Gamma(\alpha+1-i)}u_{i+k}\bigg) c_{nj}\bigg]x_{k} \quad \textrm{for all}\ n\in \mathbb{N}, \end{equation} as $m\rightarrow \infty$, that $Ax=Cy$, and this shows that $A\in (\mu(\Gamma,\Delta^{(\alpha)},u):\lambda)$. This completes the proof. \end{pf} \begin{thm}\label{teo5} Suppose that the entries of the infinite matrices $A=(a_{nk})$ and $B=(b_{nk})$ are connected by the relation (\ref{z1}), let $\lambda$ be any given sequence space and let $\mu\in \{c_0,c\}$. Then $A=(a_{nk})\in(\lambda:\mu(\Gamma,\Delta^{(\alpha)},u))$ if and only if $B=(b_{nk})\in(\lambda:\mu)$. \end{thm} \begin{pf} Let $z=(z_k)\in\lambda$ and consider the following equality with (\ref{z1}) $$ \sum_{k=0}^{m} b_{nk}z_{k}=\sum_{j=0}^{n}\bigg[\sum_{k=0}^{m}\bigg(\sum_{i=0}^{n-j}(-1)^i\frac{\Gamma(\alpha+1)} {i!\Gamma(\alpha+1-i)}u_{i+j}\bigg) a_{jk}\bigg]z_{k} \quad \ (m,n\in \mathbb{N}), $$ which yields as $m\rightarrow\infty$ that $(Bz)_n=[\Delta_u^{(\alpha)}(\Gamma)(Az)]_n$. Hence, we obtain that $Az\in\mu(\Gamma,\Delta^{(\alpha)},u)$ whenever $z\in\lambda$ if and only if $Bz\in\mu$ whenever $z\in\lambda$. \end{pf} Several consequences can be obtained by using Theorem \ref{teo4} and Theorem \ref{teo5}.
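To make the classical matrix classes involved here concrete, the following small numerical check (ours; the matrix and the truncation size are illustrative) verifies that the Cesàro matrix $a_{nk}=1/(n+1)$ for $k\le n$ (and $0$ otherwise) satisfies the hypotheses of Lemma \ref{e4..2}, hence lies in $(c:c)$.

```python
# Conditions of Lemma (c:c) checked on a finite section of the Cesaro matrix
# a_{nk} = 1/(n+1) for k <= n, 0 otherwise (matrix and size are illustrative).
N = 400  # finite stand-in for n -> infinity
a = lambda n, k: 1.0 / (n + 1) if k <= n else 0.0

# sup_n sum_k |a_{nk}| < infinity (here every row sums to 1)
row_sup = max(sum(abs(a(n, k)) for k in range(N)) for n in range(N))
assert row_sup <= 1.0 + 1e-12
# lim_n a_{nk} exists for each fixed k (here alpha_k = 0)
assert all(a(N - 1, k) < 3e-3 for k in range(5))
# lim_n sum_k a_{nk} exists (here the row sums tend to 1)
assert abs(sum(a(N - 1, k) for k in range(N)) - 1.0) < 1e-12
```

The same pattern, applied to the triangle $T=(t_{nk})$ of Theorem \ref{t3} or to $\widetilde{a}_{nk}$ and $b_{nk}$ of (\ref{z1}), gives a quick sanity check of the characterizations below.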
First, we must list some conditions which are needed for these consequences: \begin{eqnarray} \label{co1}&& \sup_{n\in\mathbb{N}}\sum_k|a_{nk}|<\infty \\ \label{co2}&& \lim_{n\rightarrow\infty}a_{nk}=\alpha_k\quad \textrm{exists for each fixed}\ k\in \mathbb{N} \\ \label{co3}&& \lim_{n\rightarrow\infty}a_{nk}=0 \quad \textrm{for each fixed}\ k\in \mathbb{N}\\ \label{co4}&& \lim_{n\rightarrow\infty}\sum_ka_{nk}\quad\ \textrm{exists}\\ \label{co5}&&\lim_{n\rightarrow\infty}\sum_ka_{nk}=0\\ \label{co6}&& \sup_{K\in \mathcal{F}}\sum_n\bigg|\sum_{k\in K}a_{nk}\bigg|<\infty\\ \label{co7}&& \lim_{n\rightarrow\infty}\sum_k |a_{nk}|=0\\ \label{co8}&&\sup_{n,k}|a_{nk}|<\infty\\ \label{co9}&&\lim_{n\rightarrow\infty}\sum_k|a_{nk}|=\sum_k|\alpha_{k}| \end{eqnarray} Now, we can give the corollaries: \begin{cor} The following statements hold: (i) $A=(a_{nk})\in (c_0(\Gamma,\Delta^{(\alpha)},u):\ell_\infty)= (c(\Gamma,\Delta^{(\alpha)},u):\ell_\infty)$ if and only if (\ref{co1}) holds with $\tilde{a}_{nk}$ instead of $a_{nk}$ and (\ref{Bn}) also holds. (ii) $A=(a_{nk})\in (c_0(\Gamma,\Delta^{(\alpha)},u):c)$ if and only if (\ref{co1}) and (\ref{co2}) hold with $\tilde{a}_{nk}$ instead of $a_{nk}$ and (\ref{Bn}) also holds. (iii) $A=(a_{nk})\in (c_0(\Gamma,\Delta^{(\alpha)},u):c_0)$ if and only if (\ref{co1}) and (\ref{co3}) hold with $\tilde{a}_{nk}$ instead of $a_{nk}$ and (\ref{Bn}) also holds. (iv) $A=(a_{nk})\in (c(\Gamma,\Delta^{(\alpha)},u):c)$ if and only if (\ref{co1}), (\ref{co2}) and (\ref{co4}) hold with $\tilde{a}_{nk}$ instead of $a_{nk}$ and (\ref{Bn}) also holds. (v) $A=(a_{nk})\in (c(\Gamma,\Delta^{(\alpha)},u):c_0)$ if and only if (\ref{co1}), (\ref{co3}) and (\ref{co5}) hold with $\tilde{a}_{nk}$ instead of $a_{nk}$ and (\ref{Bn}) also holds. (vi) $A=(a_{nk})\in (c_0(\Gamma,\Delta^{(\alpha)},u):\ell)=(c(\Gamma,\Delta^{(\alpha)},u):\ell)$ if and only if (\ref{co6}) holds with $\tilde{a}_{nk}$ instead of $a_{nk}$ and (\ref{Bn}) also holds.
\end{cor} \begin{cor} The following statements hold: (i) $A=(a_{nk})\in (\ell_\infty:c_0(\Gamma,\Delta^{(\alpha)},u))$ if and only if (\ref{co7}) holds with $b_{nk}$ instead of $a_{nk}$. (ii) $A=(a_{nk})\in (c:c_0(\Gamma,\Delta^{(\alpha)},u))$ if and only if (\ref{co1}), (\ref{co3}) and (\ref{co5}) hold with $b_{nk}$ instead of $a_{nk}$. (iii) $A=(a_{nk})\in (c_0:c_0(\Gamma,\Delta^{(\alpha)},u))$ if and only if (\ref{co1}) and (\ref{co3}) hold with $b_{nk}$ instead of $a_{nk}$. (iv) $A=(a_{nk})\in (\ell:c_0(\Gamma,\Delta^{(\alpha)},u))$ if and only if (\ref{co3}) and (\ref{co8}) hold with $b_{nk}$ instead of $a_{nk}$. (v) $A=(a_{nk})\in (\ell_\infty:c(\Gamma,\Delta^{(\alpha)},u))$ if and only if (\ref{co2}) and (\ref{co9}) hold with $b_{nk}$ instead of $a_{nk}$. (vi) $A=(a_{nk})\in (c:c(\Gamma,\Delta^{(\alpha)},u))$ if and only if (\ref{co1}), (\ref{co2}) and (\ref{co4}) hold with $b_{nk}$ instead of $a_{nk}$. (vii) $A=(a_{nk})\in (c_0:c(\Gamma,\Delta^{(\alpha)},u))$ if and only if (\ref{co1}) and (\ref{co2}) hold with $b_{nk}$ instead of $a_{nk}$. (viii) $A=(a_{nk})\in (\ell:c(\Gamma,\Delta^{(\alpha)},u))$ if and only if (\ref{co2}) and (\ref{co8}) hold with $b_{nk}$ instead of $a_{nk}$. \end{cor} \end{document}
\begin{document} \begin{frontmatter} \title{Extensions and applications of ACF mappings} \author{Jean-Philippe Chancelier} \ead{[email protected]} \address{Université Paris-Est, CERMICS (ENPC), 6-8 Avenue Blaise Pascal, Cité Descartes, F-77455 Marne-la-Vallée} \begin{abstract} Using a definition of ASF sequences, derived from the definition of asymptotic contractions of the final type (ACF), we give some new fixed point theorems for cyclic mappings and alternating mappings which extend results from \cite[Theorem 2]{suzuki2007} and \cite[Theorem 1]{zhang2007}. \end{abstract} \begin{keyword} Nonexpansive mappings \sep Fixed points \sep Meir-Keeler contraction \sep ACF mappings \end{keyword} \end{frontmatter} \def\mathop{\normalfont Fix}{\mathop{\normalfont Fix}} \def\stackrel{\mbox{\tiny def}}{=}{\stackrel{\mbox{\tiny def}}{=}} \def\mathop{\mbox{\rm Argmax}}{\mathop{\mbox{\rm Argmax}}} \def\mathop{\mbox{\rm Argmin}}{\mathop{\mbox{\rm Argmin}}} \def{\mathbb P}{{\mathbb P}} \def{\mathbb C}{{\mathbb C}} \def{\mathbb E}{{\mathbb E}} \def{\cal E}{{\cal E}} \def{\cal F}{{\cal F}} \def{\cal H}{{\cal H}} \def{\cal V}{{\cal V}} \def{\cal W}{{\cal W}} \def{\cal U}{{\cal U}} \def{\mathbb R}{{\mathbb R}} \def{\mathbb R}{{\mathbb R}} \def{\mathbb N}{{\mathbb N}} \def{\mathbb M}{{\mathbb M}} \def{\mathbb R}{{\mathbb R}} \def{\mathbb I}{{\mathbb I}} \def{\mathbb U}{{\mathbb U}} \def{{\cal T}_{\mbox{\tiny ad}}}{{{\cal T}_{\mbox{\tiny ad}}}} \def\texte#1{\quad\mbox{#1}\quad} \def\Proba#1{{\mathbb P}\left\{ #1 \right\}} \def\Probax#1#2{{{\mathbb P}}_{#1}\left\{ #2 \right\}} \def\ProbaU#1#2{{{\mathbb P}}^{#1} \left\{ #2 \right\}} \def\ProbaxU#1#2#3{{{\mathbb P}}^{#1}_{#2} \left\{ #3 \right\}} \def\valmoy#1{{\mathbb E}\left[ #1 \right]} \def\valmoyDebut#1{{\mathbb E} [ #1 } \def\valmoyFin#1{ #1 ]} \def\valmoyp#1#2{{\mathbb E}_{#1}\left[ #2 \right]} \def\valmoypDebut#1#2{{\mathbb E}_{#1} \left[ #2 \right.} \def\valmoypFin#1{ \left.
#1 \right]} \def\valmoypU#1#2#3{{\mathbb E}_{#1}^{#2}\left[ #3 \right]} \def\norminf#1{ {\Vert #1 \Vert}_{\infty}} \def\norm#1{ {\Vert #1 \Vert}} \def${\text{\bf H}}_1${${\text{\bf H}}_1$} \def${\text{\bf H}}_2${${\text{\bf H}}_2$} \def${\text{\bf H}}_3${${\text{\bf H}}_3$} \def\psca#1{\left< #1 \right>} \def\sigma\mbox{-}\lim{\sigma\mbox{-}\lim} \def\seq#1{\{{#1}_n\}_{n\in{\mathbb N}}} \def{\mathbb X}{{\mathbb X}} \newenvironment{myproof}{{ \small{\it Proof:}}}{ $\Box$\normalsize \\ } \setenumerate{labelindent=\parindent,label=\emph{($\mbox{C}_{\arabic*}$)},ref=($\mbox{C}_{\arabic*}$)} \section{Introduction} Many extensions of the well known Banach contraction principle \cite{banach} have been proposed in the nonlinear analysis literature. Among them, fixed point theorems for Meir-Keeler contractions have been extensively studied \cite{meir-keeler,kirk-2,suzuki-2006}, and a final (in some sense) generalization, defined as an \emph{asymptotic contraction of the final type} (ACF, for short), has been stated by T. Suzuki \cite[Theorem 5]{suzuki2007}. Our aim in this paper is to extend the results of T. Suzuki to more general classes of mappings. More precisely, we want to be able to use the same framework for proving fixed point theorems for alternating mappings (\S\ref{alternate}) or for cyclic mappings (\S\ref{cycling}). For that purpose we propose the definition of $p$-ASF-$1$ and $p$-ASF-$2$ sequences, which are defined without reference to a mapping, and prove some Cauchy properties of such sequences in Theorem~\ref{thmpasf}. In \S\ref{defacf}, we recall the definition of ACF mappings and relate them to $p$-ASF mappings. When the $p$-ASF sequences are generated using $\{T^nx\}$ we show that the two definitions coincide (Theorem~\ref{asf-acf}). We give an application to cyclic mappings in \S\ref{cycling} by providing a fixed point theorem which extends \cite[Theorem 2]{suzuki2007} to continuous $p$-ASF mappings.
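As a concrete preview of the cyclic setting of \S\ref{cycling}, here is a toy instance (the space, the sets and the mapping below are ours, purely illustrative): on ${\mathbb X}=\mathbb{R}$ take $A=[1,2]$ and $B=[-2,-1]$, so that $d(A,B)=2$, with a cyclic map whose best proximity point is $z=1$.

```python
# Toy cyclic mapping (illustrative, not from the paper): A = [1, 2], B = [-2, -1],
# T(A) in B and T(B) in A, d(A, B) = 2, best proximity point z = 1.
def T(x):
    return -(x + 1) / 2 if x > 0 else -(x - 1) / 2

p = lambda x, y: abs(x - y) - 2.0   # p(x, y) = d(x, y) - d(A, B)

orbit = [1.7]                        # starting point in A
for _ in range(80):
    orbit.append(T(orbit[-1]))

# the orbit alternates between A and B
assert all(o >= 1.0 - 1e-9 for o in orbit[::2])
assert all(o <= -1.0 + 1e-9 for o in orbit[1::2])
# d(x_n, x_{n+1}) -> d(A, B), i.e. p(x_n, x_{n+1}) -> 0
assert abs(p(orbit[-2], orbit[-1])) < 1e-9
# T^{2n} x -> z = 1, and d(z, Tz) = d(A, B)
z = orbit[-1]
assert abs(z - 1.0) < 1e-9 and abs(abs(z - T(z)) - 2.0) < 1e-8
```

This is exactly the behaviour that Theorem~\ref{thm:cyclic} guarantees in general for continuous $p$-ASF cyclic mappings.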
In \S\ref{alternate} we give an application to alternating mappings through Theorem~\ref{thm:ptfixe}, which extends the results of~\cite{zhang2007}. \section{ACF sequences} In \cite{suzuki2007}, T. Suzuki introduces the definition of an {\em asymptotic contraction of the final type} (ACF, for short) and proves that if a mapping $T$ is ACF then the sequence $\seq{x}$ defined by $x_n\stackrel{\mbox{\tiny def}}{=} T^n x$ is a Cauchy sequence for all $x\in {\mathbb X}$. Since our aim is to extend T. Suzuki's results to sequences $\seq{x}$ generated by more general processes, we introduce a new definition that we call ASF, which stands for {\em asymptotic sequences of the final type}. The definition characterizes two sequences and not a mapping. The link between the two definitions is the following. Suppose that the mapping $T$ is ACF and for $x$, $y \in {\mathbb X}$ define two sequences $\seq{x}$, $\seq{y}$ by $x_n\stackrel{\mbox{\tiny def}}{=} T^n x$ and $y_n\stackrel{\mbox{\tiny def}}{=} T^n y$. If for all $n\in {\mathbb N}$ we have $x_n \not=y_n$, then the two sequences are ASF. Properties of ASF sequences are given in Lemma~\ref{lem:ASF}; a proof is given, though it is mostly a simple rephrasing of \cite[Lemma 1 and 2]{suzuki2007}. We start with the ASF definition. In the sequel $({\mathbb X},d)$ is a complete metric space and $p$ is a given function from ${\mathbb X}\times {\mathbb X}$ into $[0,\infty)$.
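A guiding example, ours and not from the paper: for the Banach contraction $Tx=x/2$ on $\mathbb{R}$ with $p=d(x,y)=|x-y|$, one shift of both indices along the orbit $x_n=T^nx$ halves every mutual distance, so the uniform Meir-Keeler-type conditions defined below hold with $\nu=1$ and $\delta=\epsilon$, and the orbit is Cauchy.

```python
# Orbit of the Banach contraction T(x) = x/2 on R with p = d = |x - y|
# (a toy instance; all names are ours). One shift halves every distance,
# so the Meir-Keeler-type condition holds with nu = 1, delta = epsilon.
T = lambda x: x / 2.0
d = lambda a, b: abs(a - b)

orbit = [1.0]
for _ in range(60):
    orbit.append(T(orbit[-1]))

# shifting both indices by nu = 1 halves the distance
for i in range(30):
    for j in range(i + 1, 30):
        assert abs(d(orbit[i + 1], orbit[j + 1]) - d(orbit[i], orbit[j]) / 2) < 1e-15

# Cauchy property: sup_{m>n} d(x_n, x_m) decreases to 0
sups = [max(d(orbit[n], orbit[m]) for m in range(n + 1, len(orbit)))
        for n in range(len(orbit) - 1)]
assert all(b <= a for a, b in zip(sups, sups[1:])) and sups[-1] < 1e-12
```

The definitions that follow isolate exactly which features of such orbits are needed, without mentioning the mapping $T$ at all.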
\begin{defn} \label{def:acf} We say that two sequences $\seq{x}$, $\seq{y}$ with $x_n,y_n \in {\mathbb X}$ are $p$-ASF-$1$ if the following are satisfied: \begin{enumerate} \item \label{it:acfun} For each $\epsilon >0$ there exists $\delta >0$ such that if for some $i \in {\mathbb N}$ we have $p(x_i,y_i) < \delta$, then $\limsup_{n \to \infty} p(x_{n},y_{n}) \le \epsilon$\,; \item \label{it:acfdeux} For each $\epsilon > 0$, there exists $\delta > 0$ such that for $i \in {\mathbb N}$ with $\epsilon < p(x_i, y_i) < \epsilon + \delta$, there exists $\nu \in {\mathbb N}$ such that $p(x_{\nu+i},y_{\nu+i}) \le \epsilon$\,; \item \label{it:acftrois} For each given $(x_i,y_i)$ such that $p(x_i,y_i)\ne 0$ there exists $\nu \in {\mathbb N}$ such that $$p(x_{\nu+i}, y_{\nu+i}) < p(x_i,y_i)\,.$$ \end{enumerate} \end{defn} \begin{lem}\label{lem:ASF} Let $\seq{x}$, $\seq{y}$ be two $p$-ASF-1 sequences. Then $\lim_{n\to \infty} p(x_n,y_n)=0$. \end{lem} \begin{myproof} We follow \cite[Lemma 2]{suzuki2007}. If there exists $i\in{\mathbb N}$ such that $p(x_i,y_i)=0$, we conclude directly using \ref{it:acfun} that $\lim_{n\to \infty} p(x_n,y_n)=0$. Thus we assume from now on that $p(x_n,y_n)\not=0$ for all $n\in {\mathbb N}$. We first prove that if $\seq{x}$, $\seq{y}$ satisfy \ref{it:acfdeux} and \ref{it:acftrois} then $\liminf_{n \to \infty} p(x_n,y_n)=0$. Using the fact that $p$ is nonnegative and repeatedly using Property~\ref{it:acftrois}, it is possible to build an extracted decreasing subsequence $p(x_{\sigma(n)},y_{\sigma(n)})$ such that $0 \le p(x_{\sigma(n)},y_{\sigma(n)}) \le p(x_0,y_0)$, which implies that $\liminf_{n \to \infty} p(x_n,y_n)=\alpha$ exists and is finite. Suppose that $\alpha >0$. We first show that we must have $\alpha < p(x_n,y_n)$ for all $n \in {\mathbb N}$.
Indeed, suppose that there exists $n_0$ such that $p(x_{n_0},y_{n_0}) \le \alpha$; then, repeatedly using \ref{it:acftrois}, we can build an extracted decreasing sequence $p(x_{\sigma(n)},y_{\sigma(n)})$ such that $p(x_{\sigma(n)},y_{\sigma(n)}) < p(x_{n_0},y_{n_0}) \le \alpha$. This decreasing sequence converges to a cluster point of $p(x_{n},y_{n})$ strictly smaller than $\alpha$, which contradicts the definition of $\alpha$. Thus we have $\alpha < p(x_n,y_n)$ for all $n \in {\mathbb N}$ and $\alpha >0$. We then consider $\delta(\alpha)$ given by \ref{it:acfdeux} for $\epsilon=\alpha$. By definition of $\alpha$ we can find $(x_i,y_i)$ such that $\alpha < p(x_i, y_i) < \alpha + \delta(\alpha)$, and by \ref{it:acfdeux} we obtain $\nu \in {\mathbb N}$ such that $p(x_{\nu+i},y_{\nu+i}) \le \alpha$, which contradicts $\alpha < p(x_n,y_n)$ for all $n \in {\mathbb N}$. Thus we conclude that $\alpha=0$. We now prove that $\liminf_{n \to \infty} p(x_n,y_n)=0$ and \ref{it:acfun} imply $\limsup_{n \to \infty} p(x_n,y_n)=0$. For a given $\epsilon > 0$, we consider $\delta$ given by \ref{it:acfun}. Since $\liminf_{n \to \infty} p(x_n,y_n)=0$, we can find $i\in {\mathbb N}$ such that $ p(x_i, y_i) < \delta$. Thus by \ref{it:acfun} we have $\limsup_{n \to \infty} p(x_{n+i},y_{n+i}) \le \epsilon$, hence $\limsup_{n \to \infty} p(x_{n},y_{n}) \le \epsilon$; since $\epsilon>0$ was arbitrary, $\limsup_{n \to \infty} p(x_{n},y_{n}) =0$ and the result follows. \end{myproof} \begin{defn} \label{def:acfd} We say that a sequence $\seq{x}$ with $x_n \in {\mathbb X}$ is $p$-ASF-$2$ if it has the following property: \begin{enumerate}[start=4] \item \label{it:acfddeux} For each $\epsilon > 0$, there exist $\delta > 0$ and $\nu \in {\mathbb N}$ such that if for $i$,$j \in {\mathbb N}$ we have $\epsilon < p(x_i, x_j) < \epsilon + \delta$, then $p(x_{\nu+i},x_{\nu+j}) \le \epsilon$\,.
\end{enumerate} \end{defn} Let $q$ be a given function from ${\mathbb X}\times {\mathbb X}$ into $[0,\infty)$ and $p=G\circ q$, where the mapping $G$ is a nondecreasing right continuous function such that $G(t)>0$ for $t>0$. We first show that if a sequence is $(G\circ q)$-ASF-$2$ then it is also a $q$-ASF-$2$ sequence, provided \ref{it:acftrois-g} is satisfied by $p$. Note that Property~\ref{it:acfddeux} (resp. \ref{it:acftrois-g}) is a kind of uniform extension of \ref{it:acfdeux} (resp. \ref{it:acftrois}) when only one sequence is involved. \begin{lem}(\cite[In Theorem 6]{suzuki2007})\label{lem:ASF-D} Let $\seq{x}$ be a $p$-ASF-2 sequence and suppose that $p=G\circ q$, where $G$ is a nondecreasing right continuous function such that $G(t)>0$ for $t>0$. Suppose that we have \begin{enumerate}[start=5] \item \label{it:acftrois-g} for each given $(x_i,x_j)$ such that $p(x_i,x_j)\ne 0$ there exists $\nu \in {\mathbb N}$ such that $$p(x_{\nu+i}, x_{\nu+j}) < p(x_i,x_j)\,,$$ \end{enumerate} then $\seq{x}$ is a $q$-ASF-2 sequence. \end{lem} \begin{myproof} The proof is contained in \cite[Theorem 6]{suzuki2007}. Fix $\eta >0$ and consider $\epsilon = G(\eta)$. Since $G(t) > 0$ for $t >0$ we have $\epsilon >0$. Then we can use \ref{it:acfddeux} to obtain $\delta >0$ and $\nu \in {\mathbb N}$ such that $\epsilon < p(x_i, x_j) < \epsilon + \delta$ for some $i$, $j\in {\mathbb N}$ implies $p(x_{\nu+i},x_{\nu+j}) \le \epsilon$\,. Since $G$ is nondecreasing and right continuous we can find $\beta$ such that $G([\eta,\eta+\beta]) \subset [\epsilon,\epsilon+\delta)$. Suppose now that $\eta < q(x_i,x_j) < \eta+\beta$; we then have $\epsilon \le G(q(x_i,x_j)) < \epsilon+\delta$. Since $G$ is nondecreasing, it may be constant, equal to $\epsilon$, on a nonempty interval $[\eta,\eta+\overline{\beta}) \subset [\eta,\eta+\beta)$; otherwise we have $\epsilon < G(\eta+\gamma)$ for all $\gamma \in(0,\beta)$.
If we are in the second case, then $\epsilon < G(q(x_i,x_j)) < \epsilon+\delta$ and, using \ref{it:acfddeux}, we obtain $G(q(x_{i+\nu},x_{j+\nu})) \le \epsilon < G(\eta+\gamma)$; we thus have $q(x_{i+\nu},x_{j+\nu}) < \eta+\gamma$ for all $\gamma \in(0,\beta)$ and consequently $q(x_{i+\nu},x_{j+\nu})\le \eta$. In the first case we have $G(q(x_i,x_j))=\epsilon$ for $\eta < q(x_i,x_j) < \eta+\overline{\beta}$. Using \ref{it:acftrois-g} we can find $\nu\in {\mathbb N}$ such that \begin{equation} G(q(x_{i+\nu},x_{j+\nu})) < G(q(x_i,x_j))=\epsilon = G(\eta) \end{equation} and thus $q(x_{i+\nu},x_{j+\nu}) \le \eta$. We have thus proved that Property~\ref{it:acfddeux} is satisfied by $q$. \end{myproof} We now prove that the $p$-ASF-$2$ property, combined with convergence properties of the sequence $p(x_n,x_{n+1})$, gives $p$-Cauchy properties. More precisely, we have the following lemma. \defr{r} \begin{lem}\label{lem:ASFD} Let $\seq{x}$ be a $p$-ASF-2 sequence and suppose that $p$ is such that $p(x,y) \le p(x,z)+r(z,y)$ and $p(x,y) \le r(x,z)+ p(z,y)$ for all $x$, $y$, $z\in {\mathbb X}$, where the mapping $r:{\mathbb X}\times {\mathbb X} \to [0,\infty)$ satisfies the triangle inequality $r(x,y) \le r(x,z)+r(z,y)$ for all $x$, $y$, $z\in {\mathbb X}$. If the sequence $\seq{x}$ is such that $\lim_{n \to \infty} r(x_{n},x_{n+1})=0$ and $\lim_{n \to \infty} p(x_{n},x_{n+1})=0$, then we have $\lim_{n\to \infty} \sup_{m>n} p(x_n,x_m) = 0$. \end{lem} \begin{myproof} We follow \cite[Lemma 2]{suzuki2007}, where a similar proof is given when $r=p$. Let $\epsilon >0 $ be fixed and consider $\delta$ and $\nu$ given by \ref{it:acfddeux}. There exists $N\in {\mathbb N}$ such that $r(x_n,x_{n+1}) < \delta/\nu$ and $p(x_n,x_{n+1}) < \delta/\nu$ for all $n \ge N$.
We first have, for $k\le \nu$ and $n\ge N$: \begin{align} r(x_{n},x_{n+k}) & \le \sum_{i=0}^{k-1} r(x_{n+i},x_{n+i+1}) < k \frac{\delta}{\nu} \le \delta \label{recfornu} \\ \intertext{and} p(x_{n},x_{n+k}) & \le p(x_{n},x_{n+1}) + \sum_{i=1}^{k-1} r(x_{n+i},x_{n+i+1}) < k \frac{\delta}{\nu} \le \delta \label{recfornd} \end{align} We now suppose that $p(x_{n},x_{n+\alpha}) < \epsilon+\delta$ holds for $\alpha \in[1,k]$ and prove that the same inequalities hold for $\alpha \in[1,k+1]$. Using \eqref{recfornd} we may assume that $k\ge \nu$. Using the mixed triangle inequalities satisfied by $p$ we have the two separate inequalities: \begin{align} p(x_{n},x_{n+k+1}) & \le p(x_{n},x_{n+k+1-\nu}) + \sum_{i=1-\nu}^{0} r(x_{n+k+i},x_{n+k+i+1})\nonumber \\ & < p(x_{n},x_{n+k+1-\nu}) + \delta \label{inequn} \\ \intertext{and} p(x_{n},x_{n+k+1}) & \le r(x_{n},x_{n+\nu}) + p(x_{n+\nu},x_{n+k+1-\nu+\nu}) \label{ineqdeux} \end{align} By hypothesis we have $p(x_n,x_{n+k+1-\nu}) < \epsilon+\delta$. If $p(x_n,x_{n+k+1-\nu}) \le \epsilon$, then using \eqref{inequn} we obtain $p(x_{n},x_{n+k+1}) < \epsilon+\delta$; otherwise we can use \ref{it:acfddeux} to first get $p(x_{n+\nu},x_{n+k+1-\nu+\nu}) \le \epsilon$ and, using \eqref{recfornu} and \eqref{ineqdeux}, we obtain $p(x_{n},x_{n+k+1}) < \epsilon+\delta$. By induction, $p(x_n,x_m) < \epsilon+\delta$ for all $m>n\ge N$; since $\epsilon>0$ was arbitrary (and $\delta$ may be chosen smaller than $\epsilon$), the claim $\lim_{n\to \infty} \sup_{m>n} p(x_n,x_m) = 0$ follows. \end{myproof} In \cite{suzuki-2001}, T. Suzuki introduces the definition of a $\tau$-distance. We just recall here two properties satisfied by a $\tau$-distance: if a function $p$ from ${\mathbb X}\times {\mathbb X}$ into ${\mathbb R}^+$ is a $\tau$-distance, it satisfies $p(x,y)\le p(x,z)+p(z,y)$ for all $x$, $y$, $z \in {\mathbb X}$, and if a sequence $\seq{x}$ in ${\mathbb X}$ satisfies $\lim_{n \to \infty} \sup_{m >n} p(x_{n},x_{m})=0$ then $\seq{x}$ is a Cauchy sequence. We thus have the following theorem.
\begin{thm}\label{thmpasf} Let $\seq{x}$ be a $p$-ASF-2 sequence in ${\mathbb X}$ such that $\seq{x}$ and $\seq{y}$ are $p$-ASF-1 for $y_{n}=x_{n+1}$ for all $n\in {\mathbb N}$. If one of the following assumptions holds true \begin{enumerate}[labelindent=\parindent,label=(\roman*),ref=\emph{(\roman*)}] \item \label{thmpasf:un} $p=q$ and $q$ is a $\tau$-distance\,; \item \label{thmpasf:deux} $p=G(q)$ where $q$ is a $\tau$-distance and where $G$ is a nondecreasing right continuous function such that $G(t)>0$ for $t >0$ and \emph{\ref{it:acftrois-g}} is satisfied by the sequence $\seq{x}$ (for the mapping $p=G(q)$)\,; \item \label{thmpasf:trois} $p$ is a $\tau$-distance such that $p(x,y) \le p(x,z)+q(z,y)$ and $p(x,y) \le q(x,z)+p(z,y)$ for all $x$, $y$, $z\in {\mathbb X}$, where the mapping $q:{\mathbb X}\times {\mathbb X} \to [0,\infty)$ satisfies the triangle inequality $q(x,y) \le q(x,z)+q(z,y)$ for all $x$, $y$, $z\in {\mathbb X}$ and $\lim_{n\to \infty} q(x_n,x_{n+1})=0$\,; \end{enumerate} then $\seq{x}$ is a Cauchy sequence. \end{thm} \begin{myproof} First note that, in all three cases, using Lemma~\ref{lem:ASF} we have $$\lim_{n\to \infty} p(x_n,x_{n+1})=0\,.$$ \ref{thmpasf:un} We consider the case $p=q$. Since $\lim_{n\to \infty} p(x_n,x_{n+1})=0$, we can use Lemma~\ref{lem:ASFD} (with $r=p$) to obtain $\lim_{n\to \infty} \sup_{m>n} p(x_n, x_m) = 0$ and, since $p$ is a $\tau$-distance, we obtain that $\seq{x}$ is a Cauchy sequence \cite[Lemma 1]{suzuki-2001}. \ref{thmpasf:deux} Suppose now that $p=G(q)$; we have $\lim_{n\to \infty} G(q(x_n,x_{n+1}))=0$. This is only possible if $G(0)=0$, and, since $G(t)\ge G(c)>0$ for $t\ge c>0$, we also obtain $\lim_{n\to \infty} q(x_n,x_{n+1})=0$. Using Lemma~\ref{lem:ASF-D} we obtain that $\seq{x}$ is $q$-ASF-2 and we conclude as in part \ref{thmpasf:un}, using now the $\tau$-distance $q$.
\ref{thmpasf:trois} Here we can use Lemma~\ref{lem:ASFD} (with $r=q$) to obtain $\lim_{n\to \infty} \sup_{m>n} p(x_n, x_m) = 0$ and, using the fact that $p$ is a $\tau$-distance, the conclusion follows the lines of case \ref{thmpasf:un}. \end{myproof} \begin{rem} Note that we have proved, during the proof of Theorem~\ref{thmpasf}, that if we have two sequences $\seq{x}$ and $\seq{y}$ which are $p$-ASF-1 with $p=G\circ q$ then $G(0)=0$. \end{rem} \section{Links with ACF sequences} \label{defacf} We first recall the definition of an ACF mapping. Then we give a definition of a $p$-ASF mapping by stating properties which are to be satisfied by the sequences $\{T^nx\}$ for $x \in {\mathbb X}$. We prove in Theorem~\ref{asf-acf} that the two definitions are equivalent. \begin{defn}\cite[Definition 1]{suzuki2007} Let $({\mathbb X}, d)$ be a metric space. Then a mapping $T$ on ${\mathbb X}$ is said to be an asymptotic contraction of the final type (ACF, for short) if the following hold: \begin{enumerate}[label=\emph{($\mbox{D}_{\arabic*}$)}] \item $\lim_{\delta \to 0^+} \sup \left\{ \limsup_{n\to \infty} d(T^n x,T^n y): d(x, y) <\delta \right\} = 0$. \item For each $\epsilon > 0$, there exists $\delta > 0$ such that for $x$, $y \in {\mathbb X}$ with $\epsilon < d(x, y) < \epsilon + \delta$, there exists $\nu \in {\mathbb N}$ such that $d(T^\nu x,T^\nu y) \le \epsilon$. \item For $x$, $y \in {\mathbb X}$ with $x\not=y$, there exists $\nu \in {\mathbb N}$ such that $d(T^\nu x,T^\nu y)<d(x,y)$. \item For $x \in {\mathbb X}$ and $\epsilon >0$, there exist $\delta >0$ and $\nu \in {\mathbb N}$ such that \begin{equation} \epsilon <d(T^i x,T^j x) < \epsilon +\delta \;\mbox{implies}\; d(T^\nu \circ T^i x,T^\nu\circ T^j x )\le \epsilon\, \end{equation} for all $i$, $j \in {\mathbb N}$. \end{enumerate} \end{defn} \begin{thm}\label{asf-acf} Let $({\mathbb X}, d)$ be a metric space.
A mapping $T$ on ${\mathbb X}$ is said to be $p$-ASF if for all $x$, $y \in {\mathbb X}$ the sequences $\{T^n x\}$ and $\{T^n y\}$ are $p$-ASF-1 and $\{T^n x\}$ is $p$-ASF-2. Then $T$ is an ACF mapping if and only if $T$ is a $d$-ASF mapping. \end{thm} \begin{myproof} Suppose that the mapping $T$ is ACF. For each $x$, $y\in {\mathbb X}$ it is easy to check (and left to the reader) that $\{T^n x\}$ and $\{T^n y\}$ are $d$-ASF-1 and $\{T^n x\}$ is $d$-ASF-2. Thus, $T$ is $d$-ASF. If $T$ is $d$-ASF, using Lemma~\ref{lem:ASF} we obtain $\lim_{n\to \infty} d(T^n x,T^n y) = 0$. If we consider the special case $y=Tx$ and the sequence $x_n = T^nx$, we obtain, using Theorem~\ref{thmpasf} (case \ref{thmpasf:un} with $q=d$), that $\{T^nx\}$ is a Cauchy sequence. Then using \cite[Theorem 6]{suzuki2007}\footnote{ We first recall from \cite[Theorem 6]{suzuki2007} that for a mapping $T$ on a metric space $({\mathbb X}, d)$ the following are equivalent: \begin{enumerate}[labelindent=\parindent,label=(\roman*),ref=\emph{(\roman*)}] \item $T$ is an ACF. \item $\lim_{n\to \infty} d(T^n x,T^n y) = 0$ holds true and $\{T^nx\}$ is a Cauchy sequence for all $x$, $y \in {\mathbb X}$. \end{enumerate} } we obtain that the mapping $T$ is ACF. \end{myproof} Existence and uniqueness of fixed points of $p$-ASF mappings are now obtained. Note that, in the special case where the mapping $p$ is equal to $d$ (i.e., when we use the $\tau$-distance $p=d$ in $(i)$), the next theorem gives the same result as \cite[Theorem 5]{suzuki2007}. \begin{thm}\label{fixedpt} Let $({\mathbb X},d)$ be a complete metric space and $T$ be a $p$-ASF mapping such that $T^l$ is continuous for some $l\in {\mathbb N}$ ($l>0$). We suppose that the function $q$ is a $\tau$-distance and one of the following holds true for the mapping $p$: \begin{enumerate}[labelindent=\parindent,label=(\roman*),ref=\emph{(\roman*)}] \item $p=q$\,.
\item $p=G(q)$ where $G$ is a nondecreasing right continuous function such that $G(t)>0$ for $t >0$ and \emph{\ref{it:acftrois-g}} is satisfied by the sequence $\seq{x}$ (for the mapping $p=G(q)$). \end{enumerate} Then there exists a fixed point $z\in{\mathbb X}$ of $T$. Moreover, if for all sequences $\seq{x}$ and $\seq{y}$, $\lim_{n\to \infty} p(x_n,y_n)=0$ implies $\lim_{n\to \infty} d(x_n,y_n)=0$, then the fixed point is unique and $\lim_{n\to \infty} T^nx=z$ holds true for every $x\in {\mathbb X}$. \end{thm} \begin{myproof} For every $x\in {\mathbb X}$, using Theorem~\ref{thmpasf}, we know that $\{T^nx\}$ is a Cauchy sequence. By Lemma~\ref{lem:ASF} we know that $\lim_{n\to \infty} p(T^nx,T^ny)=0$. We then have all the ingredients of \cite[Theorem 4 and Lemma 3]{suzuki2007} to conclude the proof. \end{myproof} \section{An application to $ACF$ cyclic mappings} \label{cycling} We suppose here that ${\mathbb X}$ is a uniformly convex Banach space, so that $d(x,y)\stackrel{\mbox{\tiny def}}{=} \norm{x-y}$. We consider $A$ and $B$ two nonempty subsets of ${\mathbb X}$, $A$ being convex, and a cyclic mapping $T: A \cup B \to A \cup B$. We recall that $T$ is a cyclic mapping if $T(A) \subset B$ and $T(B) \subset A$. We define a mapping $p: {\mathbb X}\times {\mathbb X} \to {\mathbb R}^+$ by $p(x,y) \stackrel{\mbox{\tiny def}}{=} d(x,y) -d(A,B)$, where $d(A,B) \stackrel{\mbox{\tiny def}}{=} \inf \{ d(x,y) \,|\, x\in A, y\in B\}$. Then, using the previous results, we can give a short proof of a theorem which extends~\cite[Theorem 1]{suzuki-2008}. \begin{thm}\label{thm:bcauchy} Suppose that the mapping $T$ is $p$-ASF; then the sequence $\{T^{2n}x\}$ for $x\in A$ is a $d$-Cauchy sequence. \end{thm} \begin{myproof} For a given $x\in {\mathbb X}$, we consider the sequence $x_n=T^nx$. Since $T$ is $p$-ASF we have by Lemma~\ref{lem:ASF} that $\lim_{n\to \infty} p(x_n,x_{n+1})=0$. Using the definition of $p$ we immediately also have $\lim_{n\to \infty} d(x_n,x_{n+1})=d(A,B)$.
Using \cite[Lemma 4]{suzuki-2008} we obtain that $\lim_{n \to \infty} d(x_{2n},x_{2n+2})=0$ (the convexity of $A$ and the uniform convexity of ${\mathbb X}$ are used here). We now consider the sequence $\{T^{2n} x\}$ taking values in $A$. We have $\lim_{n\to \infty} p(x_{2n},x_{2n+2})=0$ and, as already shown, $\lim_{n \to \infty} d(x_{2n},x_{2n+2})=0$. If the sequence $x_n=T^nx$ is $p$-ASF-2, then so is the sequence $\{T^{2n} x\}$. The distance $d$ satisfies the triangle inequality, and it is straightforward to see that the two mixed triangle inequalities $p(x,y) \le p(x,z)+d(z,y)$ and $p(x,y) \le d(x,z)+p(z,y)$ hold for all $x$, $y$, $z\in {\mathbb X}$. We can thus apply Lemma~\ref{lem:ASFD} to the sequence $\{T^{2n} x\}$ with $x\in A$ to obtain that it is a $p$-Cauchy sequence. It is now easy to see by contradiction that a $p$-Cauchy sequence is a $d$-Cauchy sequence \cite[Proof of Theorem 2]{suzuki-2008}; the key argument is again \cite[Lemma 4]{suzuki-2008}. \end{myproof} We extend now \cite[Theorem 2]{suzuki-2008}, which was stated for continuous cyclic Meir-Keeler contractions, to continuous $p$-ASF mappings. \begin{thm}\label{thm:cyclic}Suppose in addition that $A$ is closed, $T$ is $p$-ASF and $T^l$ is continuous for some $l\in{\mathbb N}$ ($l>0$). Then there exists a unique best proximity point $z\in A$ (i.e.\ $d(z,Tz)=d(A,B)$). Moreover $\lim_{n\to \infty} T^{2n} x=z$ for each $x\in A$. \end{thm} \begin{myproof} Using Theorem~\ref{thm:bcauchy}, the sequence $\{T^{2n}x\}$ for each $x\in A$ is a $d$-Cauchy sequence. Using Lemma~\ref{lem:ASF}, we have $\lim_{n \to \infty} p(T^{n}x,T^{n}y)=0$ for each $x$, $y \in A$; hence for $(x,Tx)$ it gives $\lim_{n \to \infty} p(T^{2n}x,T^{2n+1}x)=0$ and for $(Tx,y)$ it gives $\lim_{n \to \infty} p(T^{2n+1}x,T^{2n}y)=0$. Using again \cite[Lemma 4]{suzuki-2008} we obtain $\lim_{n\to \infty} d(T^{2n}x,T^{2n}y)=0$ and we can use \cite[Theorem 4 and Lemma 3]{suzuki2007} to conclude the proof.
\end{myproof} \section{ASMK Sequences} We introduce in this section the definition of ASMK sequences. It is an adaptation to sequences of the ACMK (Asymptotic Contraction of Meir-Keeler type) definition used for mappings \cite{suzuki-2006}. It is proved in \cite[Theorem 3]{suzuki2007} that an ACMK mapping on a metric space is an ACF mapping. We will prove in this section similar results which relate ASMK sequences to ASF sequences. These results will be used in the next section for studying sequences of alternating mappings. \begin{defn} \label{Famk} We say that two sequences $\seq{x}$, $\seq{y}$ with $x_n,y_n \in {\mathbb X}$ are $p$-ASMK-$1$ if there exists a sequence $\{\psi_n\}$ of functions from $[0,\infty)$ into itself satisfying $\psi_n(0)=0$\footnote{Note that this assumption can be removed when $F(0)>0$.} for all $n \in {\mathbb N}$ and the following: \begin{enumerate}[start=6] \item \label{it:famkun} $\limsup_n \psi_n(\epsilon) < \epsilon \texte{for all} \epsilon > 0$. \item \label{it:famkdeux} For each $\epsilon >0$, there exists $\delta >0$ such that for each $t \in [\epsilon, \epsilon + \delta]$ there exists $\nu \in {\mathbb N}$ such that $\psi_\nu (t) < \epsilon$. \item \label{it:famktrois} $F\bp{p(x_{n+i},y_{n+i})} \le \psi_n\Bp{F\bp{p(x_i,y_i)}}$ for all $n$, $i \in {\mathbb N}$. Here $F$ is a given right continuous nondecreasing mapping such that $F(t) >0$ for $t\not=0$. \end{enumerate} \end{defn} \begin{lem}\label{lem:asmk-asf} Suppose that the two sequences $\seq{x}$, $\seq{y}$ are $p$-ASMK-$1$; then they are $p$-ASF-1. \end{lem} \begin{myproof} \ref{it:acfun}: For all $n$, $i \in {\mathbb N}$ we have by~\ref{it:famktrois} and~\ref{it:famkun}, when $F\bp{p(x_i,y_i)}\ne 0$, that \begin{align} F\bp{p(x_{n+i},y_{n+i})} & \le \psi_n\Bp{F\bp{p(x_i,y_i)}} \nonumber \\ & \le \limsup_{n\to \infty} \psi_n\Bp{F\bp{p(x_i,y_i)}} \nonumber \\ & < F\bp{p(x_i,y_i)}\,.
\nonumber \end{align} Since $F$ is nondecreasing and the inequality is strict we obtain for all $n \in {\mathbb N}$: \begin{align*} p(x_{n+i},y_{n+i}) < p(x_i,y_i) \,, \end{align*} and thus \begin{align*} \limsup_{n\to \infty} p(x_{n+i},y_{n+i}) \le p(x_i,y_i) \,. \end{align*} Then \ref{it:acfun} follows easily when $F\bp{p(x_i,y_i)}\ne 0$. When $F\bp{p(x_i,y_i)}= 0$, we have by~\ref{it:famktrois} that $F\bp{p(x_{n+i},y_{n+i})}\le 0$ for all $n\in {\mathbb N}$. Since $F$ is nondecreasing with values in $[0,\infty)$, we must then have $F\bp{p(x_{n+i},y_{n+i})}= 0$ for all $n\in {\mathbb N}$, and since $F(t)>0$ for $t\ne 0$ this gives $p(x_{n+i},y_{n+i})=0$ for all $n\in {\mathbb N}$, so the same conclusion holds. \ref{it:acfdeux}: For $\epsilon > 0$ we know that $F(\epsilon) >0$ and we can use \ref{it:famkdeux} to find $\delta >0$ such that for each $t \in [F(\epsilon), F(\epsilon) + \delta]$ we can find $\nu \in {\mathbb N}$ such that $\psi_\nu (t) < F(\epsilon)$. Since $F$ is right continuous and nondecreasing we can find $\delta'$ such that $F([\epsilon,\epsilon+\delta']) \subset [F(\epsilon), F(\epsilon) + \delta]$. Thus, taking $i\in {\mathbb N}$ such that $\epsilon < p(x_i,y_i) < \epsilon + \delta'$, we can find $\nu$ such that $\psi_{\nu}(F(p(x_i,y_i))) < F(\epsilon)$. We conclude using \ref{it:famktrois} that \begin{equation} F\bp{p(x_{\nu+i},y_{\nu+i})} \le \psi_{\nu} \Bp{F\bp{p(x_i,y_i)}} < F(\epsilon) \le F\bp{p(x_i,y_i)}\,. \end{equation} Thus we have $F\bp{p(x_{\nu+i},y_{\nu+i})} < F\bp{p(x_i,y_i)}$ and, since $F$ is nondecreasing and the inequality is strict, we obtain \ref{it:acfdeux}. \ref{it:acftrois}: Let $i$ be given such that $p(x_i,y_i)\not=0$ and proceed as in the previous paragraph with $\epsilon =p(x_i,y_i)$.
We can find $\nu \in {\mathbb N}$ such that $\psi_{\nu}\Bp{F\bp{p(x_i,y_i)}} < F(\epsilon)$ which, combined with \ref{it:famktrois}, gives \begin{equation} F\bp{p(x_{\nu+i},y_{\nu+i})} \le \psi_{\nu} \Bp{F\bp{p(x_i,y_i)}} < F(\epsilon) = F\bp{p(x_i,y_i)}\,. \end{equation} Since $F$ is nondecreasing and the inequality is strict, the result follows. \end{myproof} \begin{defn} We say that two sequences $\seq{x}$, $\seq{y}$ with $x_n,y_n \in {\mathbb X}$ are $p$-ASMK-$2$ when \emph{\ref{it:famktrois}} is replaced by \begin{enumerate}[start=9] \item \label{it:famktrois-p} $F\bp{p(x_{n+i},y_{n+j})} \le \psi_n \Bp{F\bp{p(x_i,y_j)}}$ for all $n$, $i$, $j \in {\mathbb N}$. \end{enumerate} \end{defn} \begin{cor} If two sequences $\seq{x}$, $\seq{y}$ with $y_{n}=x_{n+1}$ are $p$-ASMK-$2$ then they are $p$-ASF-1 and the sequence $\seq{x}$ is $p$-ASF-2. Moreover, assumption \emph{\ref{it:acftrois-g}} holds true for $p$. \end{cor} \begin{myproof} It is clear that if two sequences $\seq{x}$, $\seq{y}$ are $p$-ASMK-$2$ then they are $p$-ASMK-$1$. Thus by Lemma~\ref{lem:asmk-asf} they are $p$-ASF-1. Proving that \ref{it:acfddeux} holds true is similar to the proof that \ref{it:acfdeux} holds true in Lemma~\ref{lem:asmk-asf}, and proving that \ref{it:acftrois-g} holds true follows the same steps as the proof that \ref{it:acftrois} holds true in Lemma~\ref{lem:asmk-asf}. \end{myproof} \section{A sequence of alternating mappings} \label{alternate} In this section $p$ is a given function from ${\mathbb X}\times {\mathbb X}$ into $[0,\infty)$ such that $p(x,y) \le p(x,z)+p(z,y)$ for all $x$, $y$, $z\in {\mathbb X}$ and $p(x,y)=p(y,x)$ for all $x$, $y \in {\mathbb X}$.
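Before the formal definition below, a toy numerical instance may help fix ideas. The following sketch is purely illustrative (the maps, the choice $F(t)=t$, $\psi(t)=3t/4$ and the Python names are hypothetical, not from the paper): on ${\mathbb X}={\mathbb R}$ with $p(x,y)=|x-y|$, the pair $Tx=x/2$, $Sx=x/3$ satisfies the contraction inequality of the next definition, where $M$ is the maximum appearing there.

```python
import random

# Hypothetical toy instance: X = R, p(x, y) = |x - y|,
# T(x) = x/2, S(x) = x/3, F(t) = t, psi(t) = 3t/4.
T = lambda x: x / 2.0
S = lambda x: x / 3.0
p = lambda x, y: abs(x - y)
F = lambda t: t
psi = lambda t: 0.75 * t

def M(x, y):
    # M(x, y) = max{ p(x,y), p(Tx,x), p(Sy,y), (p(Tx,y) + p(Sy,x)) / 2 }
    return max(p(x, y), p(T(x), x), p(S(y), y),
               0.5 * (p(T(x), y) + p(S(y), x)))

# Check F(p(Tx, Sy)) <= psi(F(M(x, y))) on random points.
random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    assert F(p(T(x), S(y))) <= psi(F(M(x, y))) + 1e-12

# Alternating iteration x_{n+1} = Gamma_n x_n, Gamma_n = T (n even), S (n odd).
z = 7.0
for n in range(60):
    z = T(z) if n % 2 == 0 else S(z)
```

In this example the alternating iterates contract toward the common fixed point $0$, which is the behavior the results of this section establish in general.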
\begin{defn} We will say that the pair $(T,S)$ satisfies the $(F,\psi)$-contraction property if we can find two functions $F$ and $\psi$ such that \begin{equation} F \bp{ p(Tx,Sy)} \le \psi \Bp{F\bp{ M(x,y)}} \end{equation} where \begin{equation} M(x,y) \stackrel{\mbox{\tiny def}}{=} \max \left\{p(x,y),p(Tx,x),p(Sy,y), \frac{1}{2}\left\{p(Tx,y)+p(Sy,x) \right\} \right\}\,. \end{equation} The function $F:{\mathbb R}^+ \to {\mathbb R}^+$ is a given right continuous nondecreasing mapping such that $F(t) >0$ for $t\not=0$. The function $\psi:{\mathbb R}^+ \to {\mathbb R}^+$ is a given nondecreasing upper semicontinuous function satisfying $\psi(t)< t$ for each $t>0$ and $\psi(0)= 0$. \end{defn} We first state a technical lemma. \begin{lem}\label{lemme_dix} Let the pair of mappings $(T,S)$ be an $(F,\psi)$-contraction. Suppose that $x=S\alpha$ and $p(x,Tx)\not=0$; then we have: \begin{equation} \label{ineqT} F \bp{ p(x,Tx)} \le \psi\Bp{ F \bp{p(S\alpha,\alpha)}} \,. \end{equation} Suppose that $y=T\alpha$ and $p(y,Sy)\not=0$; then we have: \begin{equation} \label{ineqS} F \bp{p(Sy,y)} \le \psi \Bp{ F\bp{p(\alpha,T\alpha)}} \,. \end{equation} \end{lem} \begin{myproof} We prove the first inequality \eqref{ineqT}. Suppose that $x=S\alpha$; then we have \begin{align} F \bp{p(x,Tx)} &= F \bp{p(Tx,x)} = F \bp{p(Tx,S\alpha)} \le \psi \Bp{ F \bp{M(x,\alpha)}} \nonumber \end{align} and, since $x=S\alpha$ gives $p(S\alpha,x)=0$ and $p(S\alpha,\alpha)=p(x,\alpha)$, we have: \begin{align} M(x,\alpha) &= \max \left\{p(x,\alpha),p(Tx,x),p(S\alpha,\alpha), \frac{1}{2}\left\{p(Tx,\alpha)+p(S\alpha,x) \right\} \right\} \nonumber \\ &= \max \left\{p(x,\alpha),p(Tx,x),\frac{1}{2} p(Tx,\alpha) \right\} \nonumber \\ &= \max \left\{ p(x,\alpha),p(Tx,x) \right\}\,, \end{align} where the last equality uses $p(Tx,\alpha)\le p(Tx,x)+p(x,\alpha)$. We show now that the maximum cannot be achieved by $p(Tx,x)$.
Indeed, suppose that $M(x,\alpha)=p(Tx,x)$; then we would have \begin{align} F \bp{p(x,Tx)} & \le \psi \Bp{ F \bp{p(x,Tx)}} \nonumber \end{align} which is not possible since $p(x,Tx)\ne 0$ and there does not exist $t > 0$ such that $F(t) \le \psi(F(t))$ (for $t>0$ we have $F(t)>0$ and thus $\psi(F(t)) < F(t)$). The proof of the second inequality is very similar and thus omitted. \end{myproof} We now introduce the alternating sequence of mappings $\seq{\Gamma}$ defined by: \begin{equation} \Gamma_n \stackrel{\mbox{\tiny def}}{=} \begin{cases} T, & \mbox{if } n\mbox{ is even} \\ S, & \mbox{if } n\mbox{ is odd}\,. \end{cases} \end{equation} Then, we consider the two sequences $\seq{x}$ and $\seq{y}$ defined by \begin{equation} x_{n+1} =\Gamma_{n} x_n \quad\mbox{and} \quad y_{n+1}= \Gamma_{n+1} y_n\,. \label{defxy} \end{equation} It is very easy to check that when the two sequences are initialized with $(x_0,y_0)=(Sx,x)$ for a given $x\in {\mathbb X}$ they are related by $y_{n+1}=x_n$ and that only the two following cases can occur: \begin{equation} (x_{n+1},y_{n+1})= \begin{cases} (Sx_n,x_n)& \text{with }\, x_n=Tx_{n-1} \text{ and }\, y_n = x_{n-1} \\ (T x_n,x_n) & \text{with }\, x_n= S x_{n-1} \text{ and }\, y_n = x_{n-1}\,. \end{cases} \end{equation} If we are in the first (resp. the second) case we use \eqref{ineqS} (resp. \eqref{ineqT}) to obtain the inequality \begin{equation} F (p(x_{n+1},y_{n+1})) \le \psi (F(p(x_n,y_n)))\,.\label{ineqfpu} \end{equation} We thus have the following easy lemma. \begin{lem}\label{lem:tscontr}Let the pair of mappings $(T,S)$ be an $(F,\psi)$-contraction and $\seq{x}$ and $\seq{y}$ be two sequences defined by \eqref{defxy}. If the two sequences are initialized with $(x_0,y_0)=(Sx,x)$ we have $y_{n+1}=x_n$ and \begin{equation} F \bp{p(x_{n+1},y_{n+1})} \le \psi \Bp{ F \bp{ p(x_n,y_n)}} \label{ineqfp}\,. \end{equation} If $x_{n+1}=Tx_n$ (resp. $x_{n+1}=Sx_n$) and $y_{k+1}= Sy_k$ (resp.
$y_{k+1}= Ty_k$) we have: \begin{equation} F \bp{p(x_{n+1},y_{k+1})} \le \psi \Bp{ F \bp{p(x_n,y_k)+\max(p(x_{n},x_{n+1}),p(y_{k},y_{k+1}))}}\,. \label{ineqmelange} \end{equation} \end{lem} \begin{myproof} Since inequality \eqref{ineqfp} was already established in \eqref{ineqfpu}, it only remains to prove inequality \eqref{ineqmelange}. Suppose that $x_{n+1}=Tx_n$ and $y_{k+1}= Sy_k$; then we have \begin{align} M(x_n,y_k) &=\max \left\{p(x_n,y_k),p(Tx_n,x_n),p(Sy_k,y_k), \frac{1}{2}\left\{p(Tx_n,y_k)+p(Sy_k,x_n) \right\} \right\} \nonumber \\ &= \max \left\{p(x_n,y_k),p(x_{n+1},x_n),p(y_{k+1},y_k), \frac{1}{2}\left\{p(x_{n+1},y_k)+p(y_{k+1},x_n) \right\} \right\} \nonumber \\ &\le \max \left\{p(x_n,y_k),p(x_{n+1},x_n),p(y_{k+1},y_k),\right. \nonumber \\ &\hspace{1.5cm} \left. p(x_{n},y_k) + \max(p(x_{n+1},x_n),p(y_{k+1},y_k)) \right\} \nonumber \\ &\le p(x_{n},y_k)+ \max(p(x_{n+1},x_n),p(y_{k+1},y_k))\,. \end{align} We thus have \begin{align} F \bp{p(x_{n+1},y_{k+1})} &\le \psi \Bp{F \bp{M(x_n,y_k)}} \nonumber \\ & \le \psi \Bp{F \bp{ p(x_{n},y_k)+\max(p(x_{n+1},x_n),p(y_{k+1},y_k))}} \,. \label{ineqcauchy} \end{align} In the opposite situation, where $x_{n+1}=Sx_n$ and $y_{k+1}=Ty_k$, we obtain the same result by the same arguments. \end{myproof} We now give a direct proof of the fact that the sequence $\seq{x}$ is a Cauchy sequence when $\lim_{n\to \infty} p(x_{n+1},x_n)=0$ is assumed. This last property will be derived from $p$-ASMK-$1$ properties as proved in Theorem~\ref{thm:ptfixe}. \begin{lem} Let the pair of mappings $(T,S)$ be an $(F,\psi)$-contraction. Suppose that $$\lim_{n\to \infty} p(x_{n+1},x_n)=0\,,$$ then the sequence $\seq{x}$ given by~\eqref{defxy} is a Cauchy sequence. \label{cauchylem} \end{lem} \begin{myproof} We follow here \cite{zhang2007} to prove the result by contradiction.
If the sequence is not a Cauchy sequence we can find $\epsilon>0$ and two increasing sequences of indices $\sigma(n)$ and $\rho(n)$ such that for all $n \in {\mathbb N}$ $p(x_{\sigma(n)}, x_{\rho(n)}) \ge 2\epsilon$ and $\sigma(n) < \rho(n)$. Since the sequence $\{ p(x_n,x_{n+1})\}_{n\in{\mathbb N}}$ converges to zero we can choose $N$ such that $p(x_n,x_{n+1}) < \epsilon$ for all $n \ge N$. Using the triangle inequality $$ p(x_{\sigma(n)},x_{\rho(n)+1}) \ge p(x_{\sigma(n)},x_{\rho(n)}) - p(x_{\rho(n)+1},x_{\rho(n)})\, $$ we obtain that $p(x_{\sigma(n)},x_{\rho(n)+1}) > \epsilon$ for large $n$. Thus, we can always change the sequence $\rho(n)$ in such a way that the parity between $\sigma(n)$ and $\rho(n)$ matches the one we need for applying inequality \eqref{ineqcauchy} and such that for all $n \in {\mathbb N}$ $p(x_{\sigma(n)}, x_{\rho(n)}) > \epsilon$. We now define $k(n)$ as follows: \begin{equation} k(n) \stackrel{\mbox{\tiny def}}{=} \min \left\{ k > \sigma(n) \, \vert \, p(x_{\sigma(n)},x_{k}) > \epsilon \quad \mbox{ with the same parity as $\rho(n)$} \right\}\,. \end{equation} The index $k(n)$ is well defined and, by construction, $\sigma(n) < k(n)\le \rho(n)$. We now have that: \begin{align} \epsilon < p(x_{\sigma(n)},x_{k(n)}) \le p(x_{\sigma(n)},x_{k(n)-2}) + p(x_{k(n)-2},x_{k(n)}) \le \epsilon + p(x_{k(n)-2},x_{k(n)})\,. \end{align} The sequence $\{p(x_{k(n)-2},x_{k(n)})\}_{n\in{\mathbb N}}$ converges to zero since we have $$p(x_{k(n)-2},x_{k(n)})\le p(x_{k(n)-2},x_{k(n)-1}) + p(x_{k(n)-1},x_{k(n)}) $$ and thus $p(x_{\sigma(n)},x_{k(n)}) \to \epsilon^+$ when $n$ goes to infinity. We also obtain that $p(x_{\sigma(n)-1},x_{k(n)-1}) \to \epsilon$ when $n$ goes to infinity since: \begin{align} |p(x_{\sigma(n)},x_{k(n)}) - p(x_{\sigma(n)-1},x_{k(n)-1})| \le p(x_{k(n)},x_{k(n)-1}) + p(x_{\sigma(n)},x_{\sigma(n)-1})\,.
\end{align} We now use inequality~\eqref{ineqcauchy} to obtain \begin{align} F \bp{p(x_{\sigma(n)},x_{k(n)})} \le \psi \Bp{F\bp{p(x_{\sigma(n)-1},x_{k(n)-1})+\delta_n}} \end{align} where $\delta_n \stackrel{\mbox{\tiny def}}{=} \max \bp{p(x_{\sigma(n)-1},x_{\sigma(n)}),p(x_{k(n)-1},x_{k(n)})}$. When $n$ goes to infinity, using the facts that $F$ is right continuous and nondecreasing and $\psi \circ F$ is upper semicontinuous, we obtain that $F(\epsilon) \le \psi \bp{F(\epsilon)}$, which is a contradiction. \end{myproof} \begin{rem}\label{rem:zhang} The proof remains valid if we assume, as in \cite{zhang2007}, that the function $F$ is nondecreasing and continuous with $F(0)=0$ and $F(t)> 0$ for $t>0$, and that the function $\psi:{\mathbb R}^+ \to {\mathbb R}^+$ is nondecreasing and right upper semicontinuous and satisfies $\psi(t)< t$ for each $t>0$. The idea is to build the sequences choosing the parity so as to use~\eqref{ineqcauchy} in the reverse situation where \[F \bp{p(x_{\sigma(n)+1},x_{k(n)+1})} \le \psi \Bp{F\bp{p(x_{\sigma(n)},x_{k(n)})+\delta'_n}}, \] and where $\delta'_n \stackrel{\mbox{\tiny def}}{=} \max \bp{p(x_{\sigma(n)+1},x_{\sigma(n)}),p(x_{k(n)+1},x_{k(n)})}$. \end{rem} \begin{thm}\label{thm:ptfixe} Consider two mappings $T: {\mathbb X} \to {\mathbb X} $ and $S: {\mathbb X} \to {\mathbb X}$ and suppose that the pair $(T,S)$ has the $(F,\psi)$-contraction property. Let the sequence of functions $\seq{\psi}$ be defined by $\psi_n \stackrel{\mbox{\tiny def}}{=} \overset{n}{\overbrace{\psi\circ\psi\circ\cdots\circ\psi}}$ and assume that \emph{\ref{it:famkun}} and \emph{\ref{it:famkdeux}} are satisfied; then the sequence $\seq{x}$ defined by \eqref{defxy} and initialized with $x_0=Sx$ is a Cauchy sequence. \end{thm} \begin{myproof} The only point to prove is that assumption~\ref{it:famktrois} is satisfied. We consider the sequence $\seq{x}$ and the sequence $\seq{y}$ defined by \eqref{defxy} and initialized with $y_0=x$.
Using the fact that $\psi$ is nondecreasing, we repeatedly use Equation~\eqref{ineqfp} in Lemma~\ref{lem:tscontr} to obtain assumption \ref{it:famktrois} and conclude that the two sequences $\seq{x}$ and $\seq{y}$ are $p$-ASMK-$1$; then by Lemmas~\ref{lem:asmk-asf} and \ref{lem:ASF} we obtain that $\lim_{n \to \infty} p(x_n,x_{n+1})=0$. Using Lemma~\ref{cauchylem} we conclude that $\seq{x}$ is a Cauchy sequence. \end{myproof} We make a link here with the result of \cite{zhang2007}, where it is assumed that $F(0)=0$ and $F(t)> 0$ for $t>0$ and $F$ is supposed to be nondecreasing and continuous. The function $\psi:{\mathbb R}^+ \to {\mathbb R}^+$ is assumed to be nondecreasing and right upper semicontinuous and to satisfy $\psi(t)< t$ for each $t>0$ and $\lim_{n\to \infty} \psi_n(t)=0$. It is proved in \cite{zhang2007} that $F(x) \le \psi(F(x))$ implies $x=0$. We prove in the next lemma that these properties of the functions $F$ and $\psi$ imply Properties~\ref{it:famkun} and~\ref{it:famkdeux}. \begin{lem} Let $\psi:{\mathbb R}^+ \to {\mathbb R}^+$ be a nondecreasing, right upper semicontinuous function satisfying $\psi(t)< t$ for each $t>0$. Then the sequence of functions $\seq{\psi}$ defined by $\psi_n \stackrel{\mbox{\tiny def}}{=} \overset{n}{\overbrace{\psi\circ\psi\circ\cdots\circ\psi}}$ satisfies \emph{\ref{it:famkun}} and \emph{\ref{it:famkdeux}}. \end{lem} \begin{myproof} \ref{it:famkun}: For $t>0$, since $\psi$ is nondecreasing and $\psi(t) < t$ we have $\psi_n(t) \le \psi(t) < t$ and thus \ref{it:famkun} follows. \ref{it:famkdeux}: Using \cite[Theorem 2]{jacek-1997} we can find a right continuous function $\overline{\psi}: {\mathbb R}^+ \to {\mathbb R}^+$ such that $\psi(t)\le \overline{\psi}(t) <t$ for $t>0$. Thus \ref{it:famkdeux} follows, since proving it (with $\nu=1$) for a right continuous function is easy.
\end{myproof} In \cite{zhang2007} it is proved that $T$ and $S$ have a common fixed point when ${\mathbb X}$ is a complete metric space and $p=d$. The proof proceeds in the following steps. Since $\seq{x}$ is a Cauchy sequence it converges to $\overline{x} \in {\mathbb X}$. Using the definition of $M$ one easily checks that $M(x_{2n},\overline{x})\to d(S\overline{x},\overline{x})$ and $M(x_{2n},\overline{x})\ge d(S\overline{x},\overline{x})$. Moreover $Tx_{2n}=x_{2n+1}$ also converges to $\overline{x}$. We therefore have \begin{equation} F \bp{d(Tx_{2n},S\overline{x})} \le \psi \Bp{F \bp{M(x_{2n},\overline{x})}} \,. \end{equation} Using Lemma~\ref{lem:ptfixe} below we obtain that $\overline{x}=S\overline{x}$. The proof that $\overline{x}$ is also a fixed point of $T$ is given in \cite[Theorem 1]{zhang2007}. We therefore conclude that obtaining convergence of the sequence $\seq{x}$ to the unique common fixed point of $T$ and $S$ requires adding continuity of $F$ to the hypotheses of Theorem~\ref{thm:ptfixe}. \begin{lem}\label{lem:ptfixe} Suppose that $F$ is a continuous nondecreasing function and $\psi$ is a right upper semicontinuous function satisfying one of the following properties: \begin{enumerate}[label=\emph{($\mbox{E}_{\arabic*}$)}] \item $\psi(t) < t$ for all $t >0$; \item $\psi$ is nondecreasing and for each $t>0$, there exists $\nu \in {\mathbb N}$, $\nu \ge 1$ such that $\psi_{\nu}(t) < t$. \end{enumerate} Suppose that we have two sequences $\seq{\alpha}$ and $\seq{\beta}$ such that: \begin{equation} F(\alpha_n) \le \psi \bp{F(\beta_n)} \,. \end{equation} If $\lim_{n\to \infty} \alpha_n = \lim_{n\to \infty} \beta_n =\gamma$ and $\beta_n\ge \gamma$ for all $n \in {\mathbb N}$ then we must have $\gamma=0$.
\end{lem} \begin{myproof} We have \begin{align} F(\gamma)=\lim_{n\to \infty} F(\alpha_n) \le \limsup_{n\to \infty} \psi \bp{F(\beta_n)} \le \psi \bp{ \limsup_{n\to \infty} F(\beta_n)} \le \psi \bp{F(\gamma)}\,. \end{align} If $\gamma \not=0$, then $F(\gamma)>0$ and, in the first case, $\psi\bp{F(\gamma)} < F(\gamma)$, so that $F(\gamma) < F(\gamma)$, which is a contradiction. If $\psi$ is nondecreasing, we consider the value of $\nu$ associated with $F(\gamma)$ to obtain $F(\gamma) \le \psi_{\nu} \bp{F(\gamma)} < F(\gamma)$ and conclude again by contradiction. \end{myproof} \begin{rem} Note that using \cite[Theorem 2]{jacek-1997} we obtain that a right upper semicontinuous function $\psi$ satisfying $\psi(t)<t$ for all $t >0$ satisfies Property~\ref{it:famkdeux} with $\nu=1$. \end{rem} \end{document}
\begin{document} \title{Robustness of quantum key distribution with discrete and continuous variables to channel noise} \author{Miko{\l}aj Lasota} \author{Radim Filip} \author{Vladyslav C. Usenko} \email{Corresponding author. E-mail: [email protected]} \affiliation{Department of Optics, Palack\'{y} University, 17.\,listopadu 1192/12, 77146 Olomouc, Czech Republic} \pacs{03.67.Dd, 03.67.Hk, 42.50.Ex} \keywords{quantum key distribution; quantum cryptography; discrete variables; continuous variables; squeezed states; single-photon states} \begin{abstract} We study the robustness of quantum key distribution protocols using discrete or continuous variables to the channel noise. We extend the model of such noise, based on coupling of the signal to a thermal reservoir and typical for continuous-variable quantum key distribution, to the discrete-variable case. Then we perform a comparison of the bounds on the tolerable channel noise between these two kinds of protocols using the same noise parametrization, in the case of an implementation which is otherwise perfect. The obtained results show that continuous-variable protocols can exhibit similar robustness to the channel noise when the transmittance of the channel is relatively high. However, for strong loss the discrete-variable protocols are superior and can overcome even the infinite-squeezing continuous-variable protocol while using limited nonclassical resources. The requirement on the single-photon production probability which a practical source of photons would have to fulfill in order to demonstrate such superiority is feasible thanks to the recent rapid development in this field. \end{abstract} \maketitle \section{Introduction} Quantum key distribution (QKD) is a method of sharing a secret key between two trusted parties using nonclassical properties of quantum states.
This bases the security of the key on physical principles, in contrast to the mathematical complexity underlying classical cryptographic protocols. The first QKD protocols were suggested on the basis of single photons \cite{Bennett84} or entangled photon pairs \cite{Ekert1991} and, respectively, photon-counting measurements. The key bits were encoded in, and obtained from measurements of, states with a discrete spectrum, and so these protocols were later referred to as discrete-variable (DV) protocols. Alternatively, schemes utilizing multiphoton quantum states of light and encoding the key using observables with the continuous spectrum \cite{Ralph99} were suggested on the basis of Gaussian modulation \cite{Weedbrook12} of squeezed \cite{Cerf2001} or coherent \cite{Grosshans2002,Weedbrook04} states and homodyne detection, and are referred to as continuous-variable (CV) QKD protocols. Both these families of protocols were successfully implemented \cite{Bennett92,*Muller95,*Jennewein00,*Naik00,*Tittel00,Grosshans2003,Lodewyck2007,*Huang2016,Madsen2012,Jouguet2013} and their security was analyzed with respect to individual \cite{Lutkenhaus96,*Slutsky98,*Bechmann06,Grosshans2003a}, collective \cite{Biham97,*Biham02,Navascues2006,*Garcia2006} or the most effective coherent attacks \cite{Kraus05,*Renner05,Leverrier2013}, also taking into account the effects of finite data ensemble size \cite{Hasegawa07,*Hayashi07,Scarani08,Leverrier2010,*Ruppert2014}. The applicability of all QKD protocols is limited by the imperfections of the devices used to prepare and measure quantum states and also by the properties of quantum channels, which are prone to losses and noise \cite{Lutkenhaus99,*Brassard00,*Gottesman04,Filip2008,*Usenko2010a,*Jouguet2012,*Usenko2016,Garcia2009}. While it is important to understand which kind of protocol may be advantageous in specific conditions, at present there are no simple criteria for choosing either of their families for a particular task.
The main reason for this is that making a fair comparison between DV and CV QKD protocols is hard due to the variability of practical conditions and even the different physical mechanisms leading to imperfections in the devices typically used in the protocol implementations. The only attempt so far to compare the performance of DV and CV systems concerned measurement-device-independent systems and discussed practical conditions, which can vary strongly depending on the wavelength, the types of sources, channels and detectors being used, and the set of optimistic or pessimistic assumptions made about the ability of an eavesdropper to attack the devices \cite{Xu15,*Pirandola15}. In our work we limit the discussion of realistic implementations of DV and CV schemes to a minimum. We mainly focus our attention on comparing the robustness of different types of protocols to the channel noise in otherwise perfect set-ups. Later, we consider only finite nonclassical resources, \emph{i.e.}, the quality of single-photon states and a finite amount of quadrature squeezing. Including the problem of channel noise in the QKD security analysis is more typical for the CV case. While it is well known that CV protocols can ideally tolerate any level of channel losses, the excess channel noise can be very harmful and can even break the security of these protocols, making QKD impossible. It can be considered the main threat to their security. Indeed, the Gaussian excess noise, which is typically assumed in CV QKD following the optimality of Gaussian collective attacks \cite{Navascues2006,*Garcia2006}, can break the security at values below a shot-noise unit for a lossless channel and is further amplified by the channel losses \cite{Lodewyck2007,*Huang2016,Madsen2012,Jouguet2013}.
On the other hand, the analyses of DV QKD protocols performed so far usually focused on different setup imperfections, specifically multiphoton pulses and detection noise, which seem to be the main threats to security in this field. Even if various types of channel noise, originating \emph{e.g.}\,from the birefringence effect present in optical fibers, inhomogeneity of the atmosphere, changes of temperature or background light, were sometimes included in these investigations \cite{Castelletto03,*Dong11}, they were usually described in a very simplified way, typically by using a single constant parameter, the estimation of which could be made experimentally for a given, specific setup. This is especially true for the analyses of free-space DV QKD considering background light, arriving at Bob's detectors from other sources than the one used by Alice \cite{Miao05,*Bonato09,*Bourgoin13}. Disturbances of the states of photons traveling through a given quantum channel in the case of fiber-based QKD schemes were typically treated in a similar way, based on the assumption that although these kinds of effects can generally vary in time, the variations can be considered to be very slow compared to the time needed for a single photon to propagate from Alice to Bob \cite{Dong11}. In this case, it is reasonable to assume that in short periods of time channel noise affects all of the traveling photons in the same way and can be described by a single, constant parameter. Such noise, called collective, was analyzed in many articles and many possible countermeasures against it have been proposed, utilizing \emph{e.g.} Faraday mirrors \cite{Muller97,*Stucki02}, decoherence-free subspaces \cite{Zanardi97,*Kempe01,*Walton03,*Boileau04,*Li08}, quantum error-rejection codes \cite{Wang04a,*Wang04b,*Kalamidas05,*Chen06}, dense coding \cite{Cai04,*Wang05b,*Li09} or entanglement swapping \cite{Yang13}.
However, to our knowledge, no detailed analysis of the relationship between the transmittance of the channel connecting Alice and Bob and the amount of tolerable channel noise has been presented so far, and the influence of this relationship on the security of DV QKD protocols has never been analyzed. At the same time, due to the continuous improvement of realistic single-photon sources and detectors taking place nowadays, this issue gradually gains importance, especially since the links connecting Alice and Bob in commercial QKD applications may be more noisy than in typical quantum-optical laboratories \cite{Eraerds10,Qi10}. In this paper we apply to the DV protocols a model for excess channel noise based on a typical model for the CV QKD configuration. We analyze the security bound on such noise for both cases under the assumption that Alice's sources and Bob's detection systems are perfect. Furthermore, we check the stability of the obtained results to the decreasing number of quantum signals exchanged by the trusted parties during the protocol in the finite-key regime. We also compare lower bounds on the secure key rate for the two schemes and find requirements for the nonclassicality of resources needed for their realistic implementations. For the case of ideal sources and detectors our study shows that while CV protocols can successfully compete with DV schemes when the channel transmittance is relatively high, the latter are superior for long-distance channels. In this situation it turns out to be possible for DV protocols to beat infinite-squeezing CV schemes even when using realistic single-photon sources. The requirements on the quality of pulses produced by such sources, which would be needed in order to demonstrate this superiority in practice, turn out to be high but reachable by the current technology.
For thermal sources of noise with a mean number of photons produced per pulse higher than $10^{-4}$, surpassing CV protocols with DV schemes is possible only when using single-photon sources with at least $50\%$ probability of producing a non-empty pulse and negligible probability of multiphoton emission. The paper is organized as follows. In Sec.\,\ref{Sec:Models} we describe the models for excess channel noise used in our analysis: first a standard model for the CV QKD case and subsequently the analogous model for the DV QKD case. We also derive there all the necessary formulae needed for assessing the security of Gaussian squeezed-state, BB84 and six-state protocols. Next, in Sec.\,\ref{Sec:NumericalResults} we numerically compare the dependence of the maximal secure values of the channel noise that these protocols can tolerate on the transmittance of the channel connecting Alice and Bob, in the case of a perfect source and detection system. The comparison is done both in the asymptotic and the finite-key regimes. We also present analytical expressions approximating the maximal tolerable channel noise for DV and CV protocols in the limit of very low transmittance. A more realistic situation is analyzed in Sec.\,\ref{Sec:RealisticCase}, where we investigate how the quality of Alice's source may influence the security of our models. Finally, Sec.\,\ref{Sec:Conclusions} concludes our work. \section{Models for the channel noise in QKD} \label{Sec:Models} To assess the security of DV and CV QKD protocols we estimate the lower bound on the secure key rate per one pulse emitted by Alice's source. In the DV case this quantity can be expressed as \cite{Scarani09} \begin{equation} K^{(DV)}=p_{exp}\Delta I, \label{eq:keyDV} \end{equation} where $p_{exp}$ denotes the probability for Bob to get a click in his detection system per pulse produced by the source and $\Delta I$ is the so-called secret fraction.
Following the quantum generalization of the Csisz\'ar-K\"orner theorem \cite{Csiszar1978} performed by Devetak and Winter \cite{Devetak2005}, this quantity reads \begin{equation} \Delta I=\max[0,I_{AB}-\min\left\{I_{EA},I_{EB}\right\}], \label{eq:DeltaImain} \end{equation} where $I_{AB}$ is the mutual information between Alice and Bob and $I_{EA}$ ($I_{EB}$) represents the amount of information Eve can gain on Alice's (Bob's) data upon an eavesdropping attack. On the other hand, in the case of CV QKD protocols the lower bound on the secure key rate can be written simply as \begin{equation} K^{(CV)}=\Delta I, \label{eq:keyCV} \end{equation} since, contrary to the DV case, all of the pulses emitted by Alice's source are registered by Bob's detection system. Generally speaking, both formulae (\ref{eq:keyDV}) and (\ref{eq:keyCV}) should also contain the so-called sifting probability, representing the chance that the chosen settings of Bob's measurement setup are compatible with a given signal sent by Alice. However, in the theoretical, asymptotic case, in which we assume that the key produced by Alice and Bob is infinitely long, its generation rate can be increased without compromising its security by performing a highly asymmetric version of a given DV or CV protocol, making the sifting probability arbitrarily close to one \cite{Scarani09,Lo05b}. While the methods of calculating the lower bound on the secure key rate (\ref{eq:DeltaImain}) are substantially different in DV and CV QKD, as we discuss in the following subsections, we develop a model of noise which can be applied to both these families of protocols using the same parametrization. The model is based on coupling every signal mode to an independent thermal reservoir with the coupling ratio $T$, which corresponds to the channel transmittance, the reservoirs being characterized by the mean number of thermal photons $\mu$ emitted per pulse.
We study the robustness of the DV and CV protocols to such thermal noise, and derive and compare the security bounds in terms of the maximum tolerable mean numbers of noise photons. \subsection{Channel noise in CV QKD} \label{Sec:CVmodel} CV QKD protocols typically use Gaussian states of light together with Gaussian modulation, which are compatible with the extremality of Gaussian states \cite{Wolf2006} and enable security proofs against optimal Gaussian collective attacks \cite{Navascues2006,*Garcia2006}. In our study we consider the Gaussian squeezed-state protocol based on quadrature modulation and homodyne detection \cite{Cerf2001}. The reason why we choose this scheme instead of the more popular GG02 protocol \cite{Grosshans2002} is that the squeezed-state protocol is more resistant to channel noise than GG02. This conclusion can be confirmed by comparing the results of our analysis performed for the squeezed-state protocol, presented in Sec.\,\ref{Sec:NumericalResults}, with the analogous results obtained for the GG02 scheme, shown in Appendix \ref{Sec:GG02protocol}. Moreover, the squeezed-state protocol is the best known Gaussian CV QKD protocol in terms of resistance to channel noise \cite{Madsen2012}. Hence, demonstrating its inferiority to the DV protocols in this regard, as shown in Sec.\,\ref{Sec:NumericalResults}, automatically implies that the other existing Gaussian CV QKD protocols cannot compete with the DV schemes either. The squeezed-state protocol was shown to be secure against collective \cite{Navascues2006,*Garcia2006} and subsequently against general attacks \cite{Renner2009} in the asymptotic limit, and against collective attacks in the finite-size regime \cite{Leverrier2010}. In our analysis we assume that i) Alice uses a perfect source of quadrature-squeezed states with a quadrature variance $1/V \ll 1$ and that ii) Bob's homodyne detection is perfect, with unity efficiency and no uncontrollable noise.
The scheme of the protocol, illustrated in Fig.\,\ref{fig:CVscheme}, is based on the squeezed signal state preparation by Alice using an optical parametric oscillator (OPO), phase/amplitude quadrature modulation based on random Gaussian displacements applied in the modulator (M), and transmission, along with the local oscillator (LO) serving as a phase reference for the homodyne measurement, through the Gaussian lossy and noisy channel. The remote party (Bob) splits the signal from the LO and performs a homodyne measurement on the squeezed and modulated quadrature. The parties should swap between the bases (i.e., squeeze and modulate either of the two complementary quadratures) in order to perform the channel estimation, but in the following we assume that the channel estimation is perfect. \begin{figure} \caption{(color online) CV QKD scheme with lossy and noisy quantum channel connecting Alice and Bob. The following abbreviations were used in this picture: OPO -- optical parametric oscillator, M -- amplitude/phase quadrature modulator, PBS -- polarization beam-splitter, LO -- local oscillator.} \label{fig:CVscheme} \end{figure} We now use the Gaussian asymptotic security analysis to estimate the security bounds on the CV QKD protocols. To do so, following the Gaussian security proofs, we calculate the lower bound on the secure key rate in the reverse reconciliation scenario, which is known to be more robust against channel loss \cite{Grosshans2003} while being no less sensitive to channel noise: \begin{equation} \label{LBCV} K^{(CV)}=\max[0,I_{AB}-\chi_{BE}], \end{equation} where $I_{AB}$ is the mutual information shared between the trusted parties, and $\chi_{BE}$ is the Holevo bound \cite{Holevo2001}, upper limiting the information available to an eavesdropper performing a collective attack in a given channel.
To analyze the security of CV QKD we switch to the equivalent entanglement-based representation \cite{Grosshans2003a}, in which Alice and Bob measure a two-mode entangled state shared between them through a quantum channel. The covariance matrix of the state is then given by \begin{equation} \label{gammaAB} \gamma_{AB} = \left( \begin{array}{cc} V\mathbb{I} & \sqrt{T} \sqrt{V^2-1}\sigma_z \\ \sqrt{T} \sqrt{V^2-1}\sigma_z & [V T + (1 - T)W]\mathbb{I} \end{array} \right), \end{equation} where $\mathbb{I}=diag(1,1)$ is the identity matrix, $\sigma_z=diag(1,-1)$ is the Pauli matrix, $V$ is the variance of the modulated squeezed signal states, and $W=2\mu+1$ is the quadrature variance of the thermal noise state. The mutual information between the trusted parties then reads \begin{equation} \label{CVmutinf} I_{AB}=\frac{1}{2}\log_2{\frac{V+W'}{1/V+W'}}, \end{equation} where $W'=W(1-T)/T$. Following the pessimistic assumption that Eve is able to purify all the noise added to the signal, we estimate the Holevo bound as $\chi_{BE}=S(AB)-S(A|B)$ through the quantum (von Neumann) entropy $S(AB)$ derived from the symplectic eigenvalues \cite{Weedbrook12} $\lambda_{1,2}$ of the state described by the covariance matrix (\ref{gammaAB}), and $S(A|B)$ derived from the symplectic eigenvalue $\lambda_3$ of the state conditioned on Bob's measurement and described by the covariance matrix \begin{equation} \gamma_{A|B}=\gamma_A-\sigma_{AB}(X \gamma_B X)^{MP}\sigma_{AB}^T, \end{equation} where $\gamma_A=diag(V,V)$ and $\gamma_B=diag([V T + (1 - T)W],[V T + (1 - T)W])$ are the matrices describing the modes A and B individually, $\sigma_{AB}=\sqrt{T(V^2-1)}\sigma_z$ is the matrix characterizing the correlations between the modes A and B, all being submatrices of (\ref{gammaAB}), MP stands for the Moore-Penrose inverse of a matrix (also known as the pseudoinverse, applicable to singular matrices), and $X=diag(1,0)$. Here, with no loss of generality, we assume that the $x$-quadrature is measured by Bob.
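The key-rate evaluation described here can be sketched numerically. The following Python fragment (our own helper names, assuming NumPy; not part of the original analysis code) builds the covariance matrix (\ref{gammaAB}), extracts the symplectic eigenvalues $\lambda_{1,2}$ and $\lambda_3$, and evaluates $K^{(CV)}=\max[0,I_{AB}-\chi_{BE}]$, with the bosonic entropic function $G$ defined in the text below:

```python
import numpy as np

def G(x):
    # bosonic entropic function G(x) = (x+1)log2(x+1) - x log2(x), with G(0) = 0
    return 0.0 if x <= 0 else (x + 1) * np.log2(x + 1) - x * np.log2(x)

def symplectic_eigenvalues(gamma):
    # symplectic eigenvalues = |eigenvalues of i*Omega*gamma|, each appearing twice
    n = gamma.shape[0] // 2
    Omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    ev = np.sort(np.abs(np.linalg.eigvals(1j * Omega @ gamma)))
    return ev[::2]  # keep one copy of each doubled eigenvalue

def cv_key_rate(V, T, mu):
    # lower bound K = max(0, I_AB - chi_BE) for the squeezed-state protocol
    W = 2.0 * mu + 1.0
    b = V * T + (1.0 - T) * W            # Bob's quadrature variance
    c = np.sqrt(T * (V**2 - 1.0))        # Alice-Bob correlation strength
    sz = np.diag([1.0, -1.0])
    gamma_AB = np.block([[V * np.eye(2), c * sz], [c * sz, b * np.eye(2)]])
    lam1, lam2 = symplectic_eigenvalues(gamma_AB)
    # x-homodyne on mode B gives gamma_{A|B} = diag(V - c^2/b, V), lam3 = sqrt(det)
    lam3 = np.sqrt((V - c**2 / b) * V)
    chi_BE = G((lam1 - 1) / 2) + G((lam2 - 1) / 2) - G((lam3 - 1) / 2)
    Wp = W * (1.0 - T) / T
    I_AB = 0.5 * np.log2((V + Wp) / (1.0 / V + Wp))
    return max(0.0, I_AB - chi_BE)
```

In this sketch, for instance, $V=10$, $T=0.9$, $\mu=10^{-3}$ yields a positive bound, while $T=0.01$, $\mu=0.5$ yields zero, in line with the noise bounds discussed below.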
Now the Holevo bound can be directly calculated as \begin{equation}\label{holevo1} \chi_{BE}=G\bigg(\frac{\lambda_1-1}{2}\bigg)+G\bigg(\frac{\lambda_2-1}{2}\bigg)-G\bigg(\frac{\lambda_3-1}{2}\bigg), \end{equation} which together with the mutual information (\ref{CVmutinf}) gives the lower bound on the secure key rate (\ref{LBCV}). Here $G(x)=(x+1)\log_2{(x+1)}-x\log_2x$ is the bosonic entropic function \cite{Serafini2005}. The bounds on the channel noise, characterized by $\mu$, are then derived by setting the secure key rate (\ref{LBCV}) to zero. We also consider the extension of the protocol in which trusted noise is added at the detection stage to improve the robustness of the protocol to the channel noise \cite{Garcia2009} (note that heterodyne detection can be seen as a particular case of such noise addition and was therefore not considered separately in our study). This provides the maximum tolerable channel noise for a perfect CV QKD protocol with a given squeezing $1/V$ and a given channel transmittance $T$. \subsection{Channel noise in DV QKD} \label{Sec:DVmodel} As an alternative to the CV scheme described above, we consider the use of the polarization-based BB84 \cite{Bennett84} and six-state \cite{Bruss98} protocols, both belonging to the family of DV protocols, for the generation of the secure key by Alice and Bob. The scheme which we analyze is illustrated in Fig.\,\ref{fig:DVscheme}. We assume that i) Alice's source is a perfect single-photon source and ii) Bob uses perfect single-photon detectors with no dark counts and unity efficiency. Our basic assumption on these detectors is that they cannot resolve the number of incoming photons. However, in Appendix \ref{Sec:DifferentDetection} we also analyze the opposite possibility for comparison. Since Alice's source is perfect, it never emits multiphoton pulses and Eve cannot perform photon-number-splitting attacks on the signal pulses.
If so, Alice and Bob cannot gain anything by using the decoy-pulse method \cite{Hwang03,*Wang05a,*Lo05} and we do not consider it in our analysis. \begin{figure} \caption{(color online) Our model for a DV QKD scheme with a lossy and noisy quantum channel connecting Alice and Bob. The following abbreviations were used in this picture: SPS -- single-photon source, PBS -- polarization beam-splitter.} \label{fig:DVscheme} \end{figure} In the model presented in Fig.\,\ref{fig:DVscheme} the channel noise, coupled to the signal during its propagation between Alice and Bob, is generated in two orthogonal polarizations by two independent sources of thermal noise. This model is in fact completely analogous to the one analyzed in Sec.\,\ref{Sec:CVmodel}, where two polarization modes are used to transmit the signal and the local oscillator. Since the effect of this noise on the bright local oscillator is negligible, it is not considered in the CV case. We assume here that the photons emitted by a given source of noise have the same polarization as the signal photons transmitted through the channel to which it is coupled. We denote the probability of emitting $n$ noise photons by a given source by $p_n(\mu)$, where $\mu$ is the mean number of noise photons produced per pulse. For thermal noise this probability is given by \begin{equation} p_n(\mu)=\frac{\mu^n}{(\mu+1)^{n+1}}. \end{equation} Similarly to the CV case, we assume that Eve fully controls the noise coupled to the signal in the quantum channel. Therefore, she can perform any attack which produces the same QBER as would be observed by the trusted parties if there were no eavesdropper. We assume here that Eve executes the general collective attack, which is optimal for the DV QKD protocols under a given QBER \cite{Kraus05,*Renner05}.
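As a quick consistency check of the thermal photon-number statistics above (a minimal Python sketch; the truncation level is our choice), $p_n(\mu)$ is normalized and has mean $\mu$:

```python
from math import isclose

def p_thermal(n, mu):
    # thermal photon-number distribution: p_n = mu^n / (mu + 1)^(n + 1)
    return mu**n / (mu + 1)**(n + 1)

mu = 0.3
N = 200  # the tail is geometric with ratio mu/(mu+1), so this is ample
assert isclose(sum(p_thermal(n, mu) for n in range(N)), 1.0, abs_tol=1e-12)
assert isclose(sum(n * p_thermal(n, mu) for n in range(N)), mu, abs_tol=1e-12)
```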
We also consider the possibility for Alice and Bob to perform so-called preprocessing \cite{Renner05}, allowing them to improve the security of the generated key by deliberately adding some noise to it before proceeding to the stages of error correction and privacy amplification. This technique can be seen as the DV counterpart of the noise addition on Bob's side considered above in the CV case, aimed at reducing the information available to Eve. In the case without preprocessing, the most general collective attacks performed by Eve on the BB84 protocol can give her $I_{EA}^{BB84}=I_{EB}^{BB84}=H(Q)$ \cite{Renner05}, where $H(Q)$ is the binary Shannon entropy and $Q$ represents the level of QBER measured by Alice and Bob in their raw key. Since in the asymptotic case of an infinitely long key, which we assume here, the mutual information between Alice and Bob without the preprocessing stage can be written as $I_{AB}=1-H(Q)$ \cite{Scarani09}, using equations (\ref{eq:keyDV}) and (\ref{eq:DeltaImain}) we get the following expression for the lower bound on the secure key rate: \begin{equation} K^{(BB84)}=p_{exp}\max[0,1-2H(Q)]. \label{eq:DeltaIBB84} \end{equation} On the other hand, the upper bound on the information Eve can gain by making the most general collective attacks when Alice and Bob use the six-state protocol can be written as \cite{Renner05} \begin{equation} I_{EA}^{6state}=I_{EB}^{6state}=F(Q)-H(Q), \end{equation} where \begin{equation} F(Q)=-\left(1-\frac{3Q}{2}\right)\log_2\left(1-\frac{3Q}{2}\right)-\frac{3Q}{2}\log_2\frac{Q}{2}. \end{equation} If so, then from (\ref{eq:keyDV}) and (\ref{eq:DeltaImain}) we get \begin{equation} K^{(6state)}=p_{exp}\max\left[0,1-F(Q)\right]. \label{eq:DeltaI6state} \end{equation} The above formulae for $K^{(BB84)}$ and $K^{(6state)}$ become more complicated when Alice and Bob perform preprocessing, which can be done, e.g., by Alice randomly flipping some bits of the raw key \cite{Kraus05,*Renner05}.
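For the no-preprocessing bounds above, the threshold QBER values follow from setting the bracketed terms in (\ref{eq:DeltaIBB84}) and (\ref{eq:DeltaI6state}) to zero. A short numerical sketch (Python; `root` is our own bisection helper, not a library routine) recovers the familiar thresholds of about $11\%$ and $12.6\%$:

```python
from math import log2

def H(q):
    # binary Shannon entropy
    return -q * log2(q) - (1 - q) * log2(1 - q)

def F(q):
    # six-state function F(Q) from the text
    return -(1 - 1.5 * q) * log2(1 - 1.5 * q) - 1.5 * q * log2(q / 2)

def root(f, lo, hi, tol=1e-12):
    # simple bisection; assumes f changes sign on [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# the secret fraction vanishes when the bracketed term hits zero
q_bb84 = root(lambda q: 1 - 2 * H(q), 1e-6, 0.25)  # ~0.110
q_six  = root(lambda q: 1 - F(q),     1e-6, 0.25)  # ~0.126
```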
In this case the mutual information about the key shared by Alice and Bob transforms into \begin{equation} I_{AB}(Q,x)=1-H[(1-x)Q+x(1-Q)], \end{equation} where $x$ is the probability for Alice to flip a given bit of the raw key. In turn, $I_{EA}^{BB84}$ (which is still equal to $I_{EB}^{BB84}$) can be written as \begin{widetext} \begin{eqnarray} I_{EA}^{BB84}(Q,x)&=&\min_{\lambda\in[0,Q]}\left[\sum_{i=1}^4A_i\log_2A_i-(1+\lambda-2Q)\log_2(1+\lambda-2Q)-\nonumber\right.\\&-&\left.2(Q-\lambda)\log_2(Q-\lambda)-\lambda\log_2\lambda\right], \label{eq:IAEBB84preprocessing} \end{eqnarray} where \begin{equation} A_{1,2}=\frac{1-Q\pm\sqrt{(1-Q)^2+16x(1-x)(\lambda-2Q+1)(\lambda-Q)}}{2} \end{equation} \end{widetext} and \begin{equation} A_{3,4}=\frac{Q\pm\sqrt{Q^2+16x(1-x)\lambda(\lambda-Q)}}{2}, \end{equation} while for the six-state protocol we have \begin{equation} I_{EA}^{6state}(Q,x)=\sum_{i=1}^{4}B_i\log_2B_i+F(Q), \label{eq:IAE6statepreprocessing} \end{equation} where \begin{equation} B_{1,2}=\frac{1-Q\pm\sqrt{(1-Q)^2-4x(1-x)Q(2-3Q)}}{2} \end{equation} and \begin{equation} B_{3,4}=\frac{Q\left[1\pm(1-2x)\right]}{2}. \end{equation} The lower bound on the secure key rate is then obtained by maximizing the resulting secret fraction over the flipping probability $x\in[0,1/2]$. From the above analysis it follows that the only parameter which Alice and Bob have to estimate in order to assess the security of their DV QKD protocol is $Q$. We will further express this quantity in terms of the parameters of a given setup, taking into consideration the assumptions made at the beginning of this section. To do so, let us first observe that since the scheme shown in Fig.\,\ref{fig:DVscheme} is perfectly symmetric with respect to polarizations, we do not have to consider separately the cases in which Alice generates differently polarized photons. Instead, we can consider a single case, in which Alice emits a single photon in a randomly chosen polarization state, which we simply call \emph{right}.
The orthogonal polarization state is called \emph{wrong} in this situation. Similarly, we call the detector at which the signal photon emitted by Alice would arrive, provided that it is not lost during propagation and that Bob chooses the right basis for his measurement, the \emph{right} detector, and the other one the \emph{wrong} detector. Now let us denote by $p_+(k,l)$ [$p_-(k,l)$] the probability that the signal photon does [does not] arrive at the right detector in a given attempt to generate a single bit of the key, while at the same time $k$ noise photons arrive at the right detector and $l$ noise photons arrive at the wrong detector. These two quantities are equal to \begin{equation} p_+(k,l)=T\pi_k(T)\pi_l(T) \label{eq:pplus} \end{equation} and \begin{equation} p_-(k,l)=(1-T)\pi_k(T)\pi_l(T), \label{eq:pminus} \end{equation} where \begin{equation} \pi_k(T)=\sum_{n=k}^\infty p_n(\mu){n \choose k} (1-T)^kT^{n-k}. \end{equation} Since we assume here that Bob's detectors do not have photon-number resolution, Alice and Bob automatically have to accept every event in which all of the photons leaving the channel enter the same detector. Nevertheless, they can discard from the generated key all of the cases in which both of Bob's detectors click at the same time (we call this kind of event a \emph{double click} here). If they do so, the expected probability of accepting a given event by the users of the six-state protocol can be written as \begin{equation} p_{exp}=\sum_{k=0}^\infty p_+(k,0)+\sum_{k=1}^\infty p_-(k,0)+\sum_{l=1}^\infty p_-(0,l). \label{eq:pexpII} \end{equation} It is clear that only the last term in the above formula contributes to the error rate, so the expression for the QBER in our model takes the following form: \begin{equation} Q=\frac{\sum_{l=1}^\infty p_-(0,l)}{p_{exp}}.
\label{eq:QII} \end{equation} \section{Numerical results and analytical expressions} \label{Sec:NumericalResults} We now compare the security of the CV and DV QKD protocols in the presence of channel noise. To do so, we perform numerical calculations in order to find the dependence of the maximal values of $\mu$, for which it is possible to generate a secure key, on the transmittance $T$ of the channel connecting Alice and Bob when different QKD schemes are used. The relationships between the functions $\mu_\mathrm{max}^\mathrm{DV}(T)$ computed for the BB84 and six-state protocols and the analogous function $\mu_\mathrm{max}^\mathrm{CV}(T)$ calculated for the squeezed-state scheme, both for the basic scenario and for the case when Alice and Bob try to improve the security of all these protocols by deliberately adding some noise to their raw keys (as described in Sec.\,\ref{Sec:Models}), are presented in Fig.\,\ref{fig:relativeresults}. Let us begin the analysis of Fig.\,\ref{fig:relativeresults} by focusing on the comparison between the six-state and squeezed-state protocols. As we can see in this figure, for relatively high values of $T$ the former of these two cryptographic schemes tolerates significantly higher values of $\mu$ than the latter. However, this advantage quickly vanishes when $T$ decreases, and for some intermediate values of the transmittance of the channel connecting Alice and Bob the squeezed-state protocol appears to be slightly better suited for noisy quantum cryptography than the six-state scheme. Nevertheless, when $T$ decreases even further, at some point the six-state protocol again starts to outperform the squeezed-state scheme, and its advantage grows as $T\rightarrow 0$. In fact, the relationship between the BB84 and squeezed-state protocols is very similar to the one described above.
However, since for every value of $T$ the BB84 protocol is less resistant to the channel noise than the six-state scheme, the region of channel transmittance for which the squeezed-state protocol tolerates stronger channel noise than BB84 turns out to be significantly larger than in the comparison between the six-state and squeezed-state protocols discussed before. The relative advantage of the CV protocol in this region is also higher. In Fig.\,\ref{fig:relativeresults} we can also see that for values of $T$ between roughly $10^{-0.5}$ and $10^{-2}$ adding noise to the raw key by the legitimate participants of a given QKD protocol can be more profitable for the squeezed-state protocol than for the DV protocols, while for $T<10^{-2}$ the situation is reversed. \begin{figure} \caption{(color online) Ratios between the maximal values of $\mu$ for which it is possible to generate secure key using the CV squeezed-state protocol ($\mu_\mathrm{max}^\mathrm{CV}$) and both DV protocols ($\mu_\mathrm{max}^\mathrm{DV}$) considered in our analysis, plotted as a function of the channel transmittance $T$ for the situation when Alice and Bob perform the randomization stage on their raw key in order to increase its security (dashed lines) or do not perform it (solid lines).} \label{fig:relativeresults} \end{figure} Although it is not possible to find simple analytical expressions for the functions $\mu_\mathrm{max}(T)$ in the general case, analytical boundaries approximating them in the limit of $T\rightarrow 0$ can be derived for every protocol of our interest. \textbf{Expression for DV QKD:} Derivation of the boundary for the six-state and BB84 protocols is relatively easy. To do so, we observe that for $T\rightarrow 0$ and $\mu\rightarrow 0$ the formula for the QBER reduces to \begin{equation} Q\approx\frac{\mu}{2\mu+T}.
\end{equation} If so, then for $T \ll 1$ the maximal secure value of $\mu$ depends on $T$ as follows: \begin{equation} \mu_\mathrm{max}(T)=\frac{TQ_\mathrm{th}}{1-2Q_\mathrm{th}}, \label{eq:approxmuDV} \end{equation} where $Q_\mathrm{th}$ is the threshold value of the QBER, which for the six-state and BB84 protocols is approximately equal to $12.6\%$ and $11\%$, respectively \cite{Renner05}. \textbf{Expression for CV QKD:} In the case of the squeezed-state CV QKD protocol, when no noise is deliberately added on the receiver side, the analytical lower bound on the secure key rate can be simplified to \begin{equation} K^{(CV)} \approx (T-\mu)\log_2{e}+\mu\log_2{\mu}, \end{equation} by using a series expansion around $T=0$, taking the limit of infinite modulation $V \to \infty$, and performing a series expansion around $\mu=0$. The value of $\mu$ which sets this simplified expression to zero can be calculated analytically and expressed using the Lambert $W$ function as \begin{equation} \mu_\mathrm{max}(T)=\exp[1+W_{-1}(-T/e)]. \label{eq:approxmuCV} \end{equation} The comparison between the boundaries given by formulae (\ref{eq:approxmuDV}) and (\ref{eq:approxmuCV}) and the results of our numerical calculations of the functions $\mu_\mathrm{max}(T)$ performed for the six-state and squeezed-state protocols, illustrated in Fig.\,\ref{fig:zoomresults}, shows good agreement between our analytical and numerical results in the limit of $T\rightarrow0$.
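Both small-$T$ approximations can be checked against the exact model. The sketch below (Python; the truncation levels and the bisection helper are our choices, not part of the original analysis code) evaluates the QBER of Eqs.\,(\ref{eq:pexpII})--(\ref{eq:QII}), compares it with $\mu/(2\mu+T)$, and solves $(T-\mu)+\mu\ln\mu=0$, which is equivalent to (\ref{eq:approxmuCV}), without invoking the Lambert $W$ function:

```python
from math import comb, log

def p_thermal(n, mu):
    # thermal photon-number distribution
    return mu**n / (mu + 1)**(n + 1)

def pi_k(k, T, mu, nmax=80):
    # probability that k photons from one noise source reach Bob (coupling 1-T)
    return sum(p_thermal(n, mu) * comb(n, k) * (1 - T)**k * T**(n - k)
               for n in range(k, nmax))

def qber(T, mu, kmax=40):
    pis = [pi_k(k, T, mu) for k in range(kmax)]
    p_plus  = T * pis[0] * sum(pis)            # signal arrives, wrong detector empty
    p_wrong = (1 - T) * pis[0] * sum(pis[1:])  # only noise, in the wrong detector
    p_right = (1 - T) * pis[0] * sum(pis[1:])  # only noise, in the right detector
    return p_wrong / (p_plus + p_right + p_wrong)

# the exact QBER approaches mu/(2*mu + T) for small T and mu
T, mu = 1e-3, 1e-5
assert abs(qber(T, mu) - mu / (2 * mu + T)) / (mu / (2 * mu + T)) < 0.01

def mu_max_cv(T):
    # root of (T - mu) + mu*ln(mu) = 0; f is monotonically decreasing on (0, 1)
    f = lambda m: T - m + m * log(m)
    lo, hi = 1e-15, T
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# at low T the six-state bound (Q_th ~ 12.6%) exceeds the CV bound
mu_dv = T * 0.126 / (1 - 2 * 0.126)
assert mu_dv > mu_max_cv(T)
```

At, for example, $T=10^{-3}$ the resulting six-state bound (\ref{eq:approxmuDV}) exceeds the CV bound, consistent with the low-transmittance behavior seen in Fig.\,\ref{fig:relativeresults}.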
\begin{figure} \caption{(color online) Maximal values of $\mu$ for which it is possible to generate secure key as a function of the channel transmittance $T$, calculated numerically (solid lines) for the cases of Alice and Bob using the six-state protocol (red lines) and the squeezed-state protocol (black lines), plotted along with the analytical approximations (dot-dashed lines) of the functions $\mu_\mathrm{max}(T)$ valid for $T\rightarrow0$, given by formulae (\ref{eq:approxmuDV}) and (\ref{eq:approxmuCV}), respectively.} \label{fig:zoomresults} \end{figure} While our main goal in the analysis presented above was to identify the conditions in which only one of the two main families of QKD protocols can provide security for the key generation process, its results cannot answer the question of which protocol one should choose in a particular case when both CV and DV QKD schemes can be secure at the same time. Facing such a decision, it is natural to compare the lower bounds on the secure key rate for the different protocols. This kind of comparison, performed for the six-state and squeezed-state schemes, is presented in Fig.\,\ref{fig:KeyComparison}, where $K(T)$ is plotted for a few different values of $\mu$, ranging from $10^{-5}$ to $0.5$. Although typical values of $\mu$ in a dark fiber, dedicated solely to the generation of the secret key, can be estimated to be on the level of $10^{-4}$--$10^{-5}$ (based on the experimental results obtained in \cite{Jouguet2013}), in commercial QKD applications utilizing telecom fibers populated by strong classical signals the channel noise can be considerably stronger. In this situation the actual level of $\mu$ would primarily depend on the number of classical channels multiplexed in a given fiber and the power of the classical signals transmitted through them \cite{Qi10}.
For this reason, in the analysis presented in this paper we decided not to focus on a particular level of $\mu$, but to consider a broad range of its values, encompassing several orders of magnitude. From Fig.\,\ref{fig:KeyComparison} one can conclude that, provided $T$ is considerably larger than the minimal secure transmittance of the channel connecting Alice and Bob for the CV squeezed-state protocol, this scheme can always provide a comparable but slightly higher lower bound on the secure key rate than the six-state DV QKD protocol. A similar conclusion can be drawn from the comparison of the BB84 and squeezed-state schemes. The main reason for this advantage stems from the capability of encoding more than one bit of information in a single pulse by using CV QKD protocols, which is impossible for the considered DV schemes based on qubits. The results presented in Fig.\,\ref{fig:KeyComparison} can also be used to predict the outcome of a possible comparison of the robustness of the six-state and squeezed-state protocols to the channel noise for a given non-zero value of $K$. In this case one should simply compare the minimal secure values of $T$ for these protocols, which can be reached for different levels of $\mu$ for a desired $K$. It is important to note, however, that the lower bound on the secure key rate in our work is calculated per use of the channel, so it contains only partial information on the achievable rate of a particular implementation of a given QKD protocol. In order to calculate the lower bound on the number of bits of the final key per unit of time, one would have to multiply the expression for $K$ (formula (\ref{eq:keyDV}) or (\ref{eq:keyCV}) for the DV or CV protocols, respectively) by the repetition rate of the system, which depends on the setup. Therefore, comparing the key rates in the general case can be misleading.
\begin{figure} \caption{(color online) Lower bound on the secure key rate as a function of the transmittance of the channel connecting Alice and Bob, plotted for $\mu=0.5$ (red lines), $\mu=10^{-1}$ (orange lines), $\mu=10^{-2}$ (yellow lines), $\mu=10^{-3}$ (green lines), $\mu=10^{-4}$ (blue lines) and $\mu=10^{-5}$ (black lines) for the six-state protocol (solid lines) and the squeezed-state protocol (dashed lines), with the assumption that Alice's sources and Bob's detection systems are perfect.} \label{fig:KeyComparison} \end{figure} The analysis presented above was performed for the asymptotic case of an infinite number of quantum signals exchanged by Alice and Bob during the key generation process. However, in a realistic situation this number, denoted here by $N$, is always finite. Therefore, it is instructive to check the stability of the discussed results in the finite-key regime. In order to do that, we utilize the calculation method introduced for the DV QKD case in \cite{Scarani08} and adapted to the CV protocols in \cite{Leverrier2010}. For definiteness, we set the values of all of the failure probabilities present in the mathematical formulae introduced there to the level of $10^{-10}$. The results of this calculation are illustrated in Fig.\,\ref{fig:MuComparisonFinite}, where the ratio $\mu^\mathrm{CV}_\mathrm{max}/\mu^\mathrm{DV}_\mathrm{max}$ for the squeezed-state and six-state protocols is plotted for different values of $N$. As it turns out, provided that the transmittance of the quantum channel connecting Alice and Bob is not particularly high, the finite-size effects have a more negative influence on the squeezed-state protocol than on the DV schemes. In particular, for any finite $N$ there exists a corresponding threshold value of $T$ below which the generation of a secure key using the squeezed-state protocol becomes impossible even for $\mu\rightarrow 0$.
On the other hand, as long as the lossy and noisy channel connecting Alice and Bob is the only imperfect element of the setup, no such threshold appears for the DV protocols. Therefore, for limited $N$ the ratio $\mu^\mathrm{CV}_\mathrm{max}/\mu^\mathrm{DV}_\mathrm{max}$ decreases much faster and eventually reaches zero, contrary to the asymptotic case. Furthermore, Fig.\,\ref{fig:MuComparisonFinite} shows that the value of $T$ below which the six-state protocol becomes more resistant to the channel noise than a given CV QKD scheme grows with the decreasing number of quantum signals exchanged by Alice and Bob. \begin{figure} \caption{(color online) Ratio between the maximal values of $\mu$ for which it is possible to generate secure key using the squeezed-state protocol ($\mu_\mathrm{max}^\mathrm{CV}$) and the six-state protocol ($\mu_\mathrm{max}^\mathrm{DV}$), plotted as a function of the channel transmittance $T$ for the asymptotic case in which the number of quantum signals exchanged by Alice and Bob during the protocol is infinite (solid line) and for the situations when it equals $N=10^{10}$ (dashed line), $N=10^8$ (dot-dashed line) and $N=10^6$ (dotted line). The calculations were made with the assumption that the trusted parties do not increase the security of their raw key by performing the randomization stage.} \label{fig:MuComparisonFinite} \end{figure} \section{Requirements for nonclassical resources} \label{Sec:RealisticCase} Knowing that, when Alice's source and Bob's detection system are perfect, DV QKD protocols can guarantee the security of the key generation process for slightly higher values of $\mu$ than the squeezed-state CV protocol provided that the transmittance of the channel connecting Alice and Bob is low enough, we can now consider the possibility of realizing this kind of scenario in the situation when the sources of the signal owned by Alice are not ideal.
In order to assess the quality of a single-photon source needed for the secure realization of QKD protocols for the combinations of parameters $T$ and $\mu$ for which the squeezed-state protocol is insecure, we assume in this section that Alice's source produces genuine single-photon pulses with probability $p$ and empty pulses with probability $1-p$, \emph{i.e.}\,it never emits multiphoton pulses. The reason for adopting this particular model of Alice's source for our considerations is that while decreasing the probability of multiphoton emission to a very low level is possible these days for many different kinds of realistic single-photon sources \cite{Fasel04, Keller04,*Brokmann04,*Laurat06,*Pisanello10,*Mucke13,Claudon10}, constructing a high-quality source which would produce non-empty pulses with probability close to one remains a serious challenge for experimental physicists. This task is especially hard to accomplish for deterministic single-photon sources, which are usually affected by the poor collection efficiency of the generated photons \cite{Eisaman11}. However, very promising sources based on quantum dots embedded in photonic nanowires or micropillar cavities have been developed recently, with the probability of producing a single-photon pulse exceeding $70\%$ and potentially reaching even $95\%$ \cite{Claudon10,Bulgarini12,*Gazzano12,Somaschi16}. Furthermore, relatively efficient probabilistic single-photon sources, based especially on the spontaneous parametric down-conversion (SPDC) process, with very low probabilities of emitting a multiphoton pulse and with heralding efficiency exceeding $60\%$, were already developed more than a decade ago \cite{Fasel04}. Nowadays, reports on SPDC-based sources with $p>80\%$ can be found in the literature \cite{Pomarico12,*Pereira13,*Ramelow13}.
\begin{figure} \caption{(color online) Requirements on the value of the squeezing parameter (dashed lines) and the probability $p$ of producing a non-empty signal pulse by a single-photon source (solid lines) needed to be reached for the security of, respectively, the squeezed-state protocol and a) the six-state, b) the BB84 protocol, plotted as functions of the transmittance of the channel connecting Alice and Bob for six different values of $\mu$: $\mu=0.5$ (red lines), $\mu=10^{-1}$ (orange lines), $\mu=10^{-2}$ (yellow lines), $\mu=10^{-3}$ (green lines), $\mu=10^{-4}$ (blue lines) and $\mu=10^{-5}$ (black lines). Vertical dotted lines denote the values of $T$ for which the squeezed-state protocol becomes insecure for particular values of $\mu$.} \label{fig:CVDVrealistic} \end{figure} \begin{figure} \caption{(color online) Requirements on the probability $p$ of producing a non-empty signal pulse by a single-photon source needed to be fulfilled for the generation of secure key by using the six-state (red, dashed line) or BB84 (blue, solid line) protocol for the same value of $\mu$ for which the squeezed-state protocol with infinite squeezing stops being secure at a given transmittance $T$.} \label{fig:DVReqComparison} \end{figure} Adopting the model for a realistic single-photon source described above, we investigated the dependence of the minimal probability $p$ of producing a non-empty pulse by Alice's source, required for the six-state and BB84 protocols to be secure, on the transmittance of the channel connecting Alice and Bob for a few different values of the power of the source of noise in the DV QKD model illustrated in Fig.\,\ref{fig:DVscheme}. The results of this investigation are plotted in Fig.\,\ref{fig:CVDVrealistic}. In the same figure we also plotted the dependence on $T$ of the value of the squeezing parameter required for the security of the squeezed-state protocol in the model for CV QKD pictured in Fig.\,\ref{fig:CVscheme}.
In order to make the necessary calculations for the squeezed-state protocol in the realistic case we used the generalized state-preparation model for CV QKD in which modulation and squeezing of the states emitted by Alice's source can be parametrized separately \cite{Usenko2011}. While the plots given in Fig.\,\ref{fig:CVDVrealistic} were obtained for a very strong modulation variance ($10^3$ shot-noise units), varying this quantity does not significantly affect the results. From Fig.\,\ref{fig:CVDVrealistic} one can deduce that the requirements on the probability $p$ of emitting a non-empty pulse by Alice's single-photon source, which would have to be fulfilled in order to ensure security of the DV QKD protocols for the values of $T$ for which the squeezed-state protocol is no longer secure, are generally quite demanding, especially if the power of the source of noise is relatively high. For different levels of $\mu$ the minimal values of $p$ which would be needed to realize this task are given by the crossing points of the solid and dotted lines of the same colors displayed in Fig.\,\ref{fig:CVDVrealistic}. While in practice overcoming the squeezed-state protocol with the DV QKD schemes may be very hard or even impossible to demonstrate for relatively high values of $\mu$, it is certainly achievable for realistic sources in the case of $\mu\ll 1$, as the requirements on $p$ shown in Fig.\,\ref{fig:CVDVrealistic} become more and more relaxed when $\mu\rightarrow 0$. This conclusion is confirmed in Fig.\,\ref{fig:DVReqComparison}, where the minimal required values of $p$ are plotted as functions of $\mu$ both for the BB84 and the six-state protocol. The results of our analysis shown in Fig.\,\ref{fig:CVDVrealistic} and Fig.\,\ref{fig:DVReqComparison} indicate that even DV QKD schemes with inefficient sources of photons can be capable of overcoming the CV protocols for long-distance quantum cryptography with ultra-low channel noise.
Not surprisingly, in Fig.\,\ref{fig:DVReqComparison} one can also see that for every level of $\mu$ the value of $p$ needed to overcome the squeezed-state protocol is larger for the BB84 than for the six-state protocol. This means that a demonstration of the superiority of the six-state protocol over the squeezed-state scheme in a realistic situation would be easier to perform than an analogous demonstration for the BB84 protocol. This conclusion justifies our choice to focus more on the six-state protocol in this work, despite the much larger popularity of the BB84 scheme. \begin{figure} \caption{(color online) Minimal values of the probability $p$ of producing non-empty signal pulse by a single-photon source, needed for the six-state protocol to be secure for a given pair of values of the channel transmittance $T$ and noise mean photon number $\mu$ for which the squeezed-state protocol is already insecure. White color indicates the regions of the plot where either the squeezed-state protocol is still secure or the six-state protocol is insecure even for $p=1$.} \label{fig:p1ContourPlot} \end{figure} While Fig.\,\ref{fig:DVReqComparison} shows only the minimal values of $p$ for which DV QKD protocols can still be secure for given $\mu$ and $T$ that already break the security of the CV QKD schemes, for higher $p$ demonstrating the superiority of the BB84 or six-state protocol over the squeezed-state scheme may be realized also for lower transmittance of the quantum channel connecting Alice and Bob. Therefore, it is reasonable to ask about the whole region of parameters $\mu$ and $T$ for which overcoming the performance of the squeezed-state protocol by a given DV QKD scheme is possible. Such a region, found for the case of the six-state protocol, is illustrated in Fig.\,\ref{fig:p1ContourPlot}. One can see there that it is relatively narrow.
This is because the closer $T$ is to the minimal secure transmittance of the quantum channel connecting Alice and Bob for a given $\mu$, the faster the minimal required value of $p$ goes to one. This tendency could actually be observed even before, in Fig.\,\ref{fig:CVDVrealistic}. Fig.\,\ref{fig:p1ContourPlot} also confirms that the requirement on $p$ relaxes when $\mu\rightarrow0$. Besides sources of photons, another fundamental part of the setup needed for the implementation of DV QKD protocols is the single-photon detector. In some situations imperfections of these devices can also affect the security of such schemes in a significant way. In particular, every realistic single-photon detector is characterized by a non-zero dark count rate. The influence of these unwanted clicks on the results presented in this work is negligible as long as the value of $T$ is more than two orders of magnitude higher than the probability $d$ of registering a dark count per single detection window. However, for lower transmittance of the quantum channel dark counts considerably affect the security of DV QKD protocols and can become the major issue. They result in threshold values of the channel transmittance $T_{th}$, below which overcoming the squeezed-state protocol with DV schemes becomes impossible even if the single-photon source used by Alice is perfect. These thresholds strongly depend on the relationship between $d$ and the detection efficiency $\eta$ of the measurement devices utilized by Bob. Typical values of $d/\eta$ that can be found in the literature describing recent DV QKD experiments range from $10^{-4}$ to $10^{-7}$ \cite{Zhang08,*Walenta14,*Valivarthi15,*Takemoto15,*Wang15,*Tang16}.
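As a quick numerical illustration of the negligibility criterion stated above, the following sketch (ours, with purely illustrative example values of $\eta$, $d/\eta$ and $T$ that are not taken from our simulations) checks whether dark counts can be ignored for a given channel transmittance:

```python
# Rule of thumb stated above: dark counts are negligible as long as T
# exceeds the dark-count probability d per detection window by more
# than two orders of magnitude. All specific numbers below are
# illustrative assumptions only.

def dark_counts_negligible(T, d):
    """Return True if T > 100 * d (the two-orders-of-magnitude rule)."""
    return T > 100.0 * d

# Example: d/eta in the range 1e-4 .. 1e-7 reported in the literature,
# here with an assumed detection efficiency eta = 0.5.
eta = 0.5
for d_over_eta in (1e-4, 1e-7):
    d = d_over_eta * eta
    print(d_over_eta, dark_counts_negligible(1e-2, d))
```

For instance, with the assumed $\eta=0.5$ and $d/\eta=10^{-4}$ (so $d=5\times10^{-5}$), a channel with $T=10^{-2}$ still satisfies the criterion, while $T=10^{-4}$ does not.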
During our work we found out that in this region $T_{th}$ can be upper-bounded by \begin{equation} T_{th}^{\,6 state}\leq10^{1.07\log_{10}\left(d/\eta\right)+1.45} \end{equation} for the case when the trusted parties implement the six-state protocol or \begin{equation} T_{th}^{BB84}\leq10^{1.15\log_{10}\left(d/\eta\right)+2.12} \end{equation} when they choose the BB84 scheme. On the other hand, if Bob's measurement system does not register any dark counts, the limited detection efficiency does not affect the results of our calculations as long as $T<10^{-2}$. This is because for low values of $T$ almost all of the non-empty pulses arriving at Bob's measurement system contain either a single signal photon or a single noise photon. Therefore, since the limited detection efficiency reduces the fractions of registered signal and noise photons in exactly the same way, its value does not matter for the security threshold. Only when the transmittance of the quantum channel connecting Alice and Bob is relatively high and the probability for more photons to arrive at Bob's detectors at the same time becomes significant can the situation be different. In this case the limited detection efficiency makes the requirement for the quality of Alice's source slightly more demanding. \section{Conclusions} \label{Sec:Conclusions} In the analysis presented above we compared the security of two DV protocols, namely BB84 and six-state, and the CV squeezed-state protocol in the situation when the only imperfect element of the setup used by Alice and Bob is the quantum channel connecting them. We assumed here that this channel is lossy and that the noise coupled to the signal during its propagation is of the type of a thermal reservoir, which can be seen as a typical scenario for CV QKD.
The results of our analysis, depicted in Fig.\,\ref{fig:relativeresults}, clearly show that while for some intermediate values of the channel transmittance the continuous-variable squeezed-state protocol is comparably resilient to the channel noise as the BB84 and six-state schemes, for the cases of $T \to 1$ and $T \ll 1$ both DV protocols perform better. This suggests that in the scenario when Alice and Bob have high-quality sources and detectors, but the quantum channel connecting them is lossy and noisy, the DV QKD technique can be seen as having more potential for generating a secure cryptographic key than CV QKD. Although exploiting this potential in practice may be challenging, it is within our reach. With the recent engineering progress in the field of single-photon sources it may even be possible to demonstrate the superiority of realistic DV protocols over the infinite-squeezing ideal CV schemes in the regime of $T \ll 1$, as can be seen in Fig.\,\ref{fig:CVDVrealistic}. This conclusion may provide some additional motivation for experimental physicists to focus even more of their efforts on developing novel high-quality sources with a high probability of producing non-empty pulses and a very low probability of multiphoton emission, or on improving the performance of the existing ones. \noindent {\bf Acknowledgments. --} The research leading to these results has received funding from the EU FP7 under Grant Agreement No. 308803 (project BRISQ2), co-financed by M\v SMT \v CR (7E13032). M.L. and V.C.U. acknowledge the project 13-27533J of the Czech Science Foundation. M.L. acknowledges support by the Development Project of Faculty of Science, Palacky University.
\appendix \section{} \label{Sec:GG02protocol} \begin{figure} \caption{(color online) Ratios between maximal values of $\mu$ for which it is possible to generate secure key using the CV GG02 protocol ($\mu_\mathrm{max}^\mathrm{CV}$) and both DV protocols ($\mu_\mathrm{max}^\mathrm{DV}$) considered in our analysis, plotted as a function of channel transmission $T$ for the situation when Alice and Bob perform the randomization stage of their raw key in order to increase its security (dashed lines) or do not perform it (solid lines).} \label{fig:relativeresults2} \end{figure} In the main body of our paper we considered the Gaussian squeezed-state CV QKD protocol, using it for the comparison with DV QKD protocols in terms of their robustness to the channel noise. However, due to the popularity of the GG02 scheme based on coherent states \cite{Grosshans2002}, it is meaningful to perform a similar analysis also for this protocol. In order to do all the necessary calculations in this situation, one can once again utilize the formulae introduced in Sec.\,\ref{Sec:CVmodel}, only assuming that this time the variance of the signal states is $V=1$. In Fig.\,\ref{fig:relativeresults2} we present the results of our comparison between the maximal values of the parameter $\mu$ ensuring the security of the GG02 and the DV QKD protocols. This comparison is similar to the one made for the squeezed-state scheme in Sec.\,\ref{Sec:NumericalResults}, the results of which are depicted in Fig.\,\ref{fig:relativeresults}. By comparing the two aforementioned figures one can confirm that the squeezed-state protocol is indeed more resistant to the channel noise than the GG02 scheme, as was already stated in the first paragraph of Sec.\,\ref{Sec:CVmodel}. As can be seen in Fig.\,\ref{fig:relativeresults2}, contrary to the case of the squeezed-state protocol, for every possible value of $T$ the GG02 scheme allows for significantly lower values of $\mu_\mathrm{max}$ than the BB84 and six-state protocols.
\section{} \label{Sec:DifferentDetection} In the analysis of DV QKD protocols presented in the main body of this article we assumed that Bob's detectors have perfect detection efficiency, but do not have the ability to resolve the number of photons entering them. At first sight it would seem that replacing them with photon-number-resolving detectors should improve the setup, making it more resilient to the channel noise. However, this intuition does not necessarily have to be correct. Here we are going to show that in our model, when the source of channel noise has thermal statistics, equipping Bob's detectors with photon-number resolution does not change the function $\mu_\mathrm{max}^\mathrm{DV}(T)$ in any way, while for Poisson statistics it can even have a negative effect on QKD security. In order to accomplish this task, we will start by adapting the expressions for $p_{exp}$ and $Q$, given previously by formulae (\ref{eq:pexpII}) and (\ref{eq:QII}) respectively, to the case of photon-number-resolving detectors used by Bob. We get: \begin{equation} p_{exp}^{(II)}= p_+(0,0)+p_-(1,0)+p_-(0,1) \end{equation} and \begin{equation} Q^{(II)}=\frac{ p_-(0,1)}{p_{exp}^{(II)}}. \label{eq:QIII} \end{equation} Since for both DV protocols considered here the formulae for $\Delta I$ depend only on the parameter $Q$ (and optionally the probability $x$ to flip a bit by Alice, if the preprocessing stage is being performed), it is obvious that the condition for photon-number-resolving detectors to offer better security of our DV QKD schemes than simple on/off binary detectors can be written in the form of the following inequality: \begin{equation} Q^{(II)}<Q.
\end{equation} Using equations (\ref{eq:QII}) and (\ref{eq:QIII}), and taking advantage of the facts that \begin{equation} p_+(k,l)=p_+(l,k) \end{equation} and \begin{equation} p_-(k,l)=\frac{1-T}{T}p_+(k,l), \end{equation} we can transform this condition into \begin{equation} p_+(1,0)\cdot\sum_{k=1}^\infty p_+(k,0)<p_+(0,0)\cdot\sum_{k=2}^\infty p_+(k,0). \end{equation} After inserting (\ref{eq:pplus}) and performing some algebraic calculations, we obtain the following final version of this condition: \begin{eqnarray} &&\sum_{k=1}^\infty\sum_{n=k}^\infty\sum_{m=1}^\infty\left[p_n(\mu)p_m(\mu)m{n \choose k}-\right.\\&-&\left.p_{n+1}(\mu)p_{m-1}(\mu){n+1 \choose k+1}\right](1-T)^kT^{n+m-k-1}<0.\nonumber \label{eq:condition2} \end{eqnarray} The above inequality cannot be solved analytically in the general case. However, it can be further simplified in the two extreme cases of $T\rightarrow 0$ and $T\rightarrow 1$. If $T\rightarrow 0$, we can keep only the terms with $m=1$ and $n=k$ on the left-hand side of condition (\ref{eq:condition2}). Doing so, we get \begin{equation} \sum_{k=1}^\infty\left[p_k(\mu)p_1(\mu)-p_{k+1}(\mu)p_{0}(\mu)\right]<0. \label{eq:condition3} \end{equation} But for thermal statistics we have \begin{equation} p_k(\mu)p_1(\mu)-p_{k+1}(\mu)p_{0}(\mu)=0 \end{equation} for every $k$. This means that equipping Bob's detectors with the ability to resolve the number of incoming photons does not have any effect on the function $\mu_\mathrm{max}(T)$ when $T\rightarrow 0$. \begin{figure} \caption{(color online) Maximal values of $\mu$ for which it is possible to generate secure key as a function of channel transmission $T$ plotted for the case of Alice and Bob using the six-state protocol when the channel noise has thermal statistics and the detectors used by Bob have photon-number-resolving ability (dashed blue line) or do not have it (solid red line).
Analogous results for the Poissonian type of noise are plotted with a dashed green line (for detectors with photon-number resolution) and a solid orange line (for detectors without photon-number resolution).} \label{fig:detectionmodels} \end{figure} The situation for $T\rightarrow 1$ is more complicated. In this case we can keep on the left-hand side of inequality (\ref{eq:condition2}) only the terms with the lowest possible power of $(1-T)$, that is for $k=1$. Then we have \begin{equation} \sum_{n,m=1}^\infty\left[nmp_n(\mu)p_m(\mu)-{n+1 \choose 2}p_{n+1}(\mu)p_{m-1}(\mu)\right]<0. \label{eq:condition4} \end{equation} For thermal statistics of the source of noise this condition becomes \begin{equation} \sum_{n,m=1}^\infty \frac{\mu^{n+m}}{(\mu+1)^{n+m+2}}n\left[m-\frac{n+1}{2}\right]<0. \label{eq:inequality5} \end{equation} A good method to prove that the left-hand side of this inequality is equal to zero is to show that for any $c$ the coefficient standing beside $\mu^c/(\mu+1)^{c+2}$, which can actually be written as \begin{equation} \sum_{n=1}^{c-1}n\left[c-n-\frac{n+1}{2}\right], \end{equation} is equal to zero. This can be done by induction. On the other hand, for Poisson statistics of the source of noise, inequality (\ref{eq:condition2}) would transform into \begin{equation} \sum_{k=1}^\infty e^{-2\mu}\mu^{k+1}\left[\frac{1}{k!}-\frac{1}{(k+1)!}\right]<0 \end{equation} for the case of $T\rightarrow 0$ or into \begin{equation} \sum_{n,m=1}^\infty e^{-2\mu}\frac{\mu^{n+m}}{2(n-1)!(m-1)!}<0 \end{equation} for the case of $T\rightarrow 1$. It is not difficult to see that the left-hand sides of both these inequalities are actually larger than zero, which means that if the source of noise in our DV QKD scheme had Poisson statistics, from the point of view of its resilience to noise it would be better for Bob to use simple on/off detectors instead of photon-number-resolving ones.
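The identities invoked above are easy to check numerically. The following sketch (ours, added for illustration; it is not part of the original derivation) verifies that $p_k(\mu)p_1(\mu)-p_{k+1}(\mu)p_0(\mu)=0$ for the thermal distribution $p_k(\mu)=\mu^k/(\mu+1)^{k+1}$, that the coefficient $\sum_{n=1}^{c-1}n[c-n-(n+1)/2]$ vanishes for every $c$, and that the individual Poisson terms $1/k!-1/(k+1)!$ are strictly positive for $k\geq1$:

```python
from fractions import Fraction
from math import factorial

def p_thermal(k, mu):
    # Thermal (Bose-Einstein) photon-number distribution with mean mu.
    return mu**k / (mu + 1)**(k + 1)

mu = 0.3
# The summands in the T -> 0 condition vanish term by term for
# thermal noise: p_k * p_1 - p_{k+1} * p_0 = 0 for every k.
for k in range(25):
    d = p_thermal(k, mu)*p_thermal(1, mu) - p_thermal(k+1, mu)*p_thermal(0, mu)
    assert abs(d) < 1e-15

# Coefficient of mu^c/(mu+1)^(c+2) in the T -> 1 thermal condition,
# computed exactly with rationals; it vanishes for every c >= 2.
def coeff(c):
    return sum(Fraction(n) * (c - n - Fraction(n + 1, 2)) for n in range(1, c))

assert all(coeff(c) == 0 for c in range(2, 50))

# For Poisson noise each term 1/k! - 1/(k+1)! with k >= 1 is positive,
# so the left-hand side of the T -> 0 condition is strictly positive.
assert all(1/factorial(k) - 1/factorial(k+1) > 0 for k in range(1, 20))
```

The exact-rational check of the coefficient sum replaces the induction argument mentioned in the text by a finite verification; the induction itself is still needed for a proof valid for all $c$.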
The conclusions which can be drawn from the above analysis are confirmed in Fig.\,\ref{fig:detectionmodels}, where we present the results of numerical calculations of the function $\mu_\mathrm{max}^\mathrm{DV}(T)$ for the cases of thermal and Poisson statistics of the source of noise, both in the situation when Bob uses detectors with and without the ability to resolve the number of photons entering them. In fact, it is quite easy to explain intuitively why using photon-number-resolving detectors by Bob does not seem to improve the security of our DV QKD scheme over the case of on/off detectors. The basic reason is that detectors with photon-number resolution exclude from the key not only all the situations in which more than one photon arrives at the wrong detector (which is obviously good for the security), but also all the cases when more than one photon arrives at the right detector (which is obviously bad). So although using photon-number-resolving detectors reduces the number of errors in the key, the QBER given by formula (\ref{eq:QII}) can actually increase due to an even greater simultaneous reduction of $p_{exp}$. \end{document}
\begin{document} \title{On the exterior Dirichlet problem for Hessian quotient equations\footnotemark[1]} \author{Dongsheng Li \footnotemark[3] \and Zhisu Li (\Letter)\footnotemark[2]~\footnotemark[3]} \renewcommand{\fnsymbol{footnote}}{\fnsymbol{footnote}} \footnotetext[1]{This research is supported by NSFC.11671316.} \footnotetext[2]{Corresponding author.} \footnotetext[3]{School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China;\\ Dongsheng Li: \href{mailto:[email protected]}{[email protected]}; Zhisu Li: \href{mailto:[email protected]}{[email protected]}.} \date{} \maketitle \begin{abstract} In this paper, we establish an existence and uniqueness theorem for solutions of the exterior Dirichlet problem for Hessian quotient equations with prescribed asymptotic behavior at infinity. This extends the previous related results on the Monge-Amp\`{e}re equations and on the Hessian equations, and organizes them in a systematic way. Based on Perron's method, the main ingredient of this paper is to construct appropriate subsolutions of the Hessian quotient equation, which is realized by introducing some new quantities concerning the elementary symmetric functions and using them to analyze the ordinary differential equation related to the generalized radially symmetric subsolutions of the original equation.
\end{abstract} \noindent\emph{Keywords:} {Dirichlet problem, existence and uniqueness, exterior domain, Hessian quotient equation, Perron's method, prescribed asymptotic behavior, viscosity solution} \noindent\emph{2010 MSC:} {35D40, 35J15, 35J25, 35J60, 35J96} \section{Introduction} In this paper, we consider the Dirichlet problem for the Hessian quotient equation \begin{equation}\label{eqn.hqe} \frac{\sigma_k(\lambda(D^2u))}{\sigma_l(\lambda(D^2u))}=1 \end{equation} in the exterior domain $\mathds R^n\setminus\overline{D}$, where $D$ is a bounded domain in $\mathds R^n$, $n\geq3$, $0\leq l<k\leq n$, $\lambda(D^2u)$ denotes the eigenvalue vector $\lambda:=(\lambda_1,\lambda_2,...,\lambda_n)$ of the Hessian matrix $D^2u$ of the function $u$, and \[\sigma_0(\lambda)\equiv1 \quad\text{and}\quad \sigma_j(\lambda):=\sum_{1\leq s_1<s_2<...<s_j\leq n} \lambda_{s_1}\lambda_{s_2}...\lambda_{s_j} ~(\forall 1\leq j\leq n)\] are the elementary symmetric functions of the $n$-vector $\lambda$. Note that when $l=0$, \eref{eqn.hqe} is the Hessian equation $\sigma_k(\lambda(D^2u))=1$; when $l=0$, $k=1$, it is the Poisson equation $\Delta u=1$, a linear elliptic equation; when $l=0$, $k=n$, it is the famous Monge-Amp\`{e}re equation $\det(D^2u)=1$; and when $l=1$, $k=3$, $n=3$ or $4$, it is the special Lagrangian equation $\sigma_1(\lambda(D^2u))=\sigma_3(\lambda(D^2u))$ in three or four dimensions (in three dimensions this is indeed $\det(D^2u)=\Delta u$), which arises from the special Lagrangian geometry \cite{HL82}. For linear elliptic equations of second order, there have been extensive studies on the exterior Dirichlet problem; see \cite{MS60} and the references therein. For the Monge-Amp\`{e}re equation, a classical theorem of J\"{o}rgens \cite{Jor54}, Calabi \cite{Cal58} and Pogorelov \cite{Pog72} states that any convex classical solution of $\det(D^2u)=1$ in $\mathds R^n$ must be a quadratic polynomial.
Related results were also given in \cite{CY86}, \cite{Caf95}, \cite{TW00} and \cite{JX01}. Caffarelli and Li \cite{CL03} extended the J\"{o}rgens-Calabi-Pogorelov theorem to exterior domains. They proved that if $u$ is a convex viscosity solution of $\det(D^2u)=1$ in the exterior domain $\mathds R^n\setminus\overline{D}$, where $D$ is a bounded domain in $\mathds R^n$, $n\geq3$, then there exist $A\in\mathds R^{n\times n},b\in\mathds R^n$ and $c\in\mathds R$ such that \begin{equation}\label{eqn.abc-n} \limsup_{|x|\rightarrow+\infty}|x|^{n-2} \left|u(x)-\left(\frac{1}{2}x^{T}Ax+b^Tx+c\right)\right|<\infty. \end{equation} With such prescribed asymptotic behavior at infinity, they also established an existence and uniqueness theorem for solutions of the Dirichlet problem for the Monge-Amp\`{e}re equation in the exterior domain of $\mathds R^n$, $n\geq3$. See \cite{FMM99}, \cite{FMM00} or \cite{Del92} for similar problems in two dimensions. Recently, J.-G. Bao, H.-G. Li and Y.-Y. Li \cite{BLL14} extended the above existence and uniqueness theorem of the exterior Dirichlet problem in \cite{CL03} for the Monge-Amp\`{e}re equation to the Hessian equation $\sigma_k(\lambda(D^2u))=1$ with $2\leq k\leq n$ and with some appropriate prescribed asymptotic behavior at infinity which is modified from \eref{eqn.abc-n}. Before them, for the special case that $A=c_0 I$ with $c_0:=({C_n^k})^{-1/k}$ and $C_n^k:={n!}/(k!(n-k)!)$, the exterior Dirichlet problem for the Hessian equation had been investigated by Dai and Bao in \cite{DB11}.
At the same time, Dai \cite{Dai11} proved an existence theorem for the exterior Dirichlet problem for the Hessian quotient equation \eref{eqn.hqe} with $k-l\geq3$, and with the prescribed asymptotic behavior at infinity of the special case that $A=c_\ast I$, that is, \begin{equation}\label{eqn.cc-kl} \limsup_{|x|\rightarrow+\infty}|x|^{k-l-2} \left|u(x)-\left(\frac{c_\ast}{2}|x|^2+c\right)\right|<\infty, \end{equation} where \begin{equation}\label{eqn.cst} c_\ast:=\left(\frac{C_n^l}{C_n^k}\right)^{\frac{1}{k-l}} ~\text{with}~C_n^i:=\frac{n!}{i!(n-i)!} ~\text{for}~i=k,l. \end{equation} As pointed out in \cite{LB14}, the restriction $k-l\geq3$ rules out an important example, the special Lagrangian equation $\det(D^2u)=\Delta u$ in three dimensions. Later, \cite{LD12} improved the result in \cite{Dai11} for \eref{eqn.hqe} with $k-l\geq3$ to that for \eref{eqn.hqe} with $0\leq l<k\leq(n+1)/2$. More recently, Li and Bao \cite{LB14} established an existence theorem for the exterior Dirichlet problem for a class of fully nonlinear elliptic equations related to the eigenvalues of the Hessian, which includes the Monge-Amp\`{e}re equations, Hessian equations, Hessian quotient equations and the special Lagrangian equations in dimensions equal to and larger than three, but with the prescribed asymptotic behavior at infinity only in the special case of \eref{eqn.abc-n} that $A=c^\ast I$ with $c^\ast$ some appropriate constant, as in \eref{eqn.cc-kl} and \eref{eqn.cst}. In this paper, we focus our attention on the Hessian quotient equation \eref{eqn.hqe} and establish an existence and uniqueness theorem for the exterior Dirichlet problem for it with prescribed asymptotic behavior at infinity of the type similar to \eref{eqn.abc-n}.
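Before proceeding, a quick sanity check on the constant $c_\ast$ defined in \eref{eqn.cst} may be helpful. Since $\sigma_j(c(1,\dots,1))=C_n^j c^j$, the equation $\sigma_k=\sigma_l$ for $\lambda=c(1,\dots,1)$ reads $C_n^k c^k=C_n^l c^l$, which forces $c=(C_n^l/C_n^k)^{1/(k-l)}=c_\ast$. The following sketch (ours, purely illustrative) confirms this numerically for a few choices of $(n,k,l)$:

```python
from math import comb

def elem_sym(lam):
    """Coefficients e[0..n] of prod_i (1 + lam_i * x), i.e. all
    elementary symmetric functions sigma_0, ..., sigma_n of lam."""
    e = [1.0] + [0.0] * len(lam)
    for x in lam:
        for j in range(len(lam), 0, -1):
            e[j] += x * e[j - 1]
    return e

# For lambda = c_* (1, ..., 1) we should get sigma_k = sigma_l.
# The case l = 0 reproduces c_0 = (C_n^k)^(-1/k) of the Hessian equation.
for (n, k, l) in [(3, 3, 1), (4, 3, 1), (5, 4, 2), (6, 6, 0)]:
    c_star = (comb(n, l) / comb(n, k)) ** (1.0 / (k - l))
    e = elem_sym([c_star] * n)
    assert abs(e[k] - e[l]) < 1e-9 * max(abs(e[k]), 1.0)
```

For example, $(n,k,l)=(3,3,1)$ gives $c_\ast=\sqrt{3}$, matching the value quoted later for the three-dimensional special Lagrangian equation.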
This extends the previous corresponding results on the Monge-Amp\`{e}re equations \cite{CL03} and on the Hessian equations \cite{BLL14} to Hessian quotient equations, and also extends those results on the Hessian quotient equations in \cite{Dai11}, \cite{LD12} and \cite{LB14} to be valid for general prescribed asymptotic behavior conditions at infinity. Since we do not restrict ourselves only to the case $k-l\geq3$ or $0\leq l<k\leq(n+1)/2$, our theorems also apply to the special Lagrangian equations $\det(D^2u)=\Delta u$ in three dimensions and $\sigma_1(\lambda(D^2u))=\sigma_3(\lambda(D^2u))$ in four dimensions. Indeed, we will show in our forthcoming paper \cite{LL16} that our method still works very well for the special Lagrangian equations in higher dimensions and with general phase. We would like to remark that for the interior Dirichlet problems there have been extensive studies, see for example \cite{CIL92}, \cite{CNS85}, \cite{Ivo85}, \cite{Kry83}, \cite{Urb90}, \cite{Tru90} and \cite{Tru95}; see \cite{BCGJ03} and the references given there for more on the Hessian quotient equations; and for more on the special Lagrangian equations, we refer the reader to \cite{HL82}, \cite{Fu98}, \cite{Yuan02}, \cite{CWY09} and the references therein. For the reader's convenience, we give the following definitions related to the Hessian quotient equation (see also \cite{CIL92}, \cite{CC95}, \cite{CNS85}, \cite{Tru90}, \cite{Tru95} and the references therein).
We say that a function $u\in C^2\left(\mathds R^n\setminus\overline{D}\right)$ is \emph{$k$-convex}, if $\lambda(D^2u)\in\overline{\Gamma_k}$ in $\mathds R^n\setminus\overline{D}$, where $\Gamma_k$ is the connected component of $\set{\lambda\in\mathds R^n|\sigma_k(\lambda)>0}$ containing the \emph{positive cone} \[\Gamma^+:=\set{\lambda\in\mathds R^n|\lambda_i>0,\forall i=1,2,...,n}.\] It is well known that $\Gamma_k$ is an open convex symmetric cone with its vertex at the origin and that \[\Gamma_k=\set{\lambda\in\mathds R^n|\sigma_j(\lambda)>0, \forall j=1,2,...,k},\] which implies \[\set{\lambda\in\mathds R^n|\lambda_1+\lambda_2+...+\lambda_n>0} =\Gamma_1\supset...\supset\Gamma_k\supset\Gamma_{k+1} \supset...\supset\Gamma_n=\Gamma^+\] with the first term $\Gamma_1$ the half space and with the last term $\Gamma_n$ the positive cone $\Gamma^+$. Furthermore, we also know that \begin{equation}\label{eqn.pisjp} \partial_{\lambda_i}\sigma_{j}(\lambda)>0, ~\forall 1\leq i\leq n, ~\forall 1\leq j\leq k, ~\forall\lambda\in\Gamma_k, ~\forall 1\leq k\leq n \end{equation} (see \cite{CNS85} or \cite{Urb90} for more details). Let $\Omega$ be an open domain in $\mathds R^n$ and let $f\in C^0(\Omega)$ be nonnegative. Suppose $0\leq l<k\leq n$. 
A function $u\in C^0(\Omega)$ is said to be a \emph{viscosity subsolution} of \begin{equation}\label{eqn.hqe-f} \frac{\sigma_k(\lambda(D^2u))}{\sigma_l(\lambda(D^2u))}=f \quad\text{in}~\Omega \end{equation} \big{(}or say that $u$ satisfies \[\frac{\sigma_k(\lambda(D^2u))}{\sigma_l(\lambda(D^2u))}\geq f \quad\text{in}~\Omega\] in the viscosity sense, similarly hereinafter\big{)}, if for any function $v\in C^2(\Omega)$ and any point $x^\ast\in\Omega$ satisfying \[v(x)\geq u(x),~\forall x\in\Omega \quad\text{and}\quad v(x^\ast)=u(x^\ast),\] we have \[\frac{\sigma_k(\lambda(D^2v(x^\ast)))}{\sigma_l(\lambda(D^2v(x^\ast)))}\geq f(x^\ast).\] A function $u\in C^0(\Omega)$ is said to be a \emph{viscosity supersolution} of \eref{eqn.hqe-f}, if for any \textit{$k$-convex function} $v\in C^2(\Omega)$ and any point $x^\ast\in\Omega$ satisfying \[v(x)\leq u(x),~\forall x\in\Omega \quad\text{and}\quad v(x^\ast)=u(x^\ast),\] we have \[\frac{\sigma_k(\lambda(D^2v(x^\ast)))}{\sigma_l(\lambda(D^2v(x^\ast)))}\leq f(x^\ast).\] A function $u\in C^0(\Omega)$ is said to be a \emph{viscosity solution} of \eref{eqn.hqe-f}, if it is both a viscosity subsolution and a viscosity supersolution of \eref{eqn.hqe-f}. A function $u\in C^0(\Omega)$ is said to be a \emph{viscosity subsolution} (respectively, \emph{supersolution, solution}) of \eref{eqn.hqe-f} and $u=\varphi$ on $\partial\Omega$ with some $\varphi\in C^0(\partial\Omega)$, if $u$ is a viscosity subsolution (respectively, supersolution, solution) of \eref{eqn.hqe-f} and $u\leq$ (respectively, $\geq,=$) $\varphi$ on $\partial\Omega$. Note that in the definitions of viscosity solutions above, we have indeed used the ellipticity of the Hessian quotient equations. For completeness and convenience, this will be proved at the end of \ssref{subsec.prelem}. See also \cite{CC95}, \cite{CIL92}, \cite{CNS85}, \cite{Urb90} and the references therein.
Define \[\mathscr{A}_{k,l}:=\left\{A\in S(n)\big{|}\lambda(A)\in\Gamma^+, \sigma_k(\lambda(A))=\sigma_l(\lambda(A))\right\}.\] Note that there are plenty of elements in $\mathscr{A}_{k,l}$. In fact, for any $A\in S(n)$ with $\lambda(A)\in\Gamma^+$, if we set \[\varrho:=\left(\frac{\sigma_k(\lambda(A))}{\sigma_l(\lambda(A))}\right) ^{-\frac{1}{k-l}},\] we then have $\varrho A\in\mathscr{A}_{k,l}$. Let \[\mathscr{\widetilde A}_{k,l} :=\set{A\in\mathscr{A}_{k,l}\big{|}m_{k,l}(\lambda(A))>2},\] where $m_{k,l}(\lambda)$ is a quantity which plays an important role in this paper. We will give the specific definition of $m_{k,l}(\lambda)$ in \eref{eqn.mkla} in \ssref{subsec.xi}, and verify there that $\mathscr{\widetilde A}_{k,l}$ possesses the following fine properties. \begin{proposition}\label{prop.wtakl} Suppose $0\leq l<k\leq n$ and $n\geq3$. \begin{enumerate}[\quad(1)] \item If $k-l\geq2$, then $\mathscr{\widetilde A}_{k,l}=\mathscr{A}_{k,l}$. \item $\mathscr{\widetilde A}_{n,0}=\mathscr{A}_{n,0}$ and $m_{n,0}\equiv n$. \item $c_\ast I\in\mathscr{\widetilde A}_{k,l}$ and $m_{k,l}(c_\ast(1,1,...,1))=n$, where $c_\ast$ is the one defined in \eref{eqn.cst}. \end{enumerate} \end{proposition} The main result of this paper now can be stated as below. \begin{theorem}\label{thm.hqe} Let $D$ be a bounded strictly convex domain in $\mathds R^n$, $n\geq 3$, $\partial D\in C^2$ and let $\varphi\in C^2(\partial D)$. 
Then for any given $A\in\mathscr{\widetilde A}_{k,l}$ with $0\leq l<k\leq n$, and any given $b\in \mathds R^n$, there exists a constant $\tilde{c}$ depending only on $n,D,k,l,A,b$ and $\norm{\varphi}_{C^2(\partial D)}$, such that for every $c\geq\tilde{c}$, there exists a unique viscosity solution $u\in C^0(\mathds R^n\setminus D)$ of \begin{equation}\label{eqn.hqe-abc} \left\{ \begin{aligned} \displaystyle \frac{\sigma_k(\lambda(D^2u))}{\sigma_l(\lambda(D^2u))}&=1 \quad\text{in}~\mathds R^n\setminus\overline{D},\\%[0.05cm] \displaystyle u&=\varphi\quad\text{on}~\partial D,\\%[0.05cm] \displaystyle \limsup_{|x|\rightarrow+\infty}|x|^{m-2} &\left|u(x)-\left(\frac{1}{2}x^{T}Ax+b^Tx+c\right)\right|<\infty, \end{aligned} \right. \end{equation} where $m\in(2,n]$ is a constant depending only on $n,k,l$ and $\lambda(A)$, which actually can be taken as $m_{k,l}(\lambda(A))$. \end{theorem} \begin{remark} \begin{enumerate}[(1)] \item One can easily see that \tref{thm.hqe} still holds with $A\in\mathscr{\widetilde A}_{k,l}$ replaced by $A\in\mathscr{\widetilde A}^\ast_{k,l}$ and $\lambda(A)$ replaced by $\lambda(A^\ast)$, where \[\mathscr{\widetilde A}^\ast_{k,l} :=\left\{A\in\mathds R^{n\times n}\big{|} \lambda(A^\ast)\in\Gamma^+, \sigma_k(\lambda(A^\ast)) =\sigma_l(\lambda(A^\ast)), m_{k,l}(\lambda(A^\ast))>2\right\}\] and $A^\ast:=(A+A^T)/2$. This is to say that the above theorem can be adapted to a slightly more general form by modifying the meaning of $\mathscr{\widetilde A}_{k,l}$. \item For the special cases that $l=0$ (i.e., the Hessian equation $\sigma_k(\lambda(D^2u))=1$) and that $l=0$ and $k=n$ (i.e., the Monge-Amp\`{e}re equation $\det(D^2u)=1$), in view of \pref{prop.wtakl}-\textsl{(2)}, our \tref{thm.hqe} recovers the corresponding results \cite[Theorem 1.1]{BLL14} and \cite[Theorem 1.5]{CL03}, respectively. 
\item For $A=c_\ast I$ with $c_\ast$ defined in \eref{eqn.cst}, by \pref{prop.wtakl}-\textsl{(1),(3)}, our results improve those in \cite{Dai11} and \cite{LD12}. Indeed, by \pref{prop.wtakl}-\textsl{(3)}, the main results in \cite{Dai11}, \cite{LD12} and those parts concerning the Hessian quotient equations in \cite{LB14} can all be recovered by \tref{thm.hqe} as special cases. Furthermore, our results also apply to the special Lagrangian equation $\det(D^2u)=\Delta u$ in dimension three (respectively, $\sigma_1(\lambda(D^2u))=\sigma_3(\lambda(D^2u))$ in dimension four), not only for $A=\sqrt{3}I$ (respectively, $A=I$), but also for any $A\in\mathscr{A}_{3,1}$. \qed \end{enumerate} \end{remark} The paper is organized as follows. In \sref{sec.prel}, after giving some basic notation in \ssref{subsec.nott}, we introduce the definitions of $\Xi_k,\underline\xi_k,\overline\xi_k$ and $m_{k,l}$, and investigate their properties in \ssref{subsec.xi}. Then we collect in \ssref{subsec.prelem} some preliminary lemmas which will be used in this paper. \sref{sec.pmaint} is devoted to the proof of the main theorem (\tref{thm.hqe}). To do this, we start in \ssref{subsec.csubsol} by constructing some appropriate subsolutions of the Hessian quotient equation \eref{eqn.hqe}, taking advantage of the properties of $\Xi_k,\underline\xi_k,\overline\xi_k$ and $m_{k,l}$ explored in \ssref{subsec.xi}. Then in \ssref{subsec.pmaint}, after reducing \tref{thm.hqe} to \lref{lem.hqe} by simplification and normalization, we prove \lref{lem.hqe} by applying Perron's method to the subsolutions we constructed in \ssref{subsec.csubsol}. \section{Preliminaries}\label{sec.prel} \subsection{Notation}\label{subsec.nott} \quad In this paper, $S(n)$ denotes the linear space of symmetric $n\times n$ real matrices, and $I$ denotes the identity matrix.
For any $M\in S(n)$, if $m_1,m_2,...,m_n$ are the eigenvalues of $M$ (usually, the assumption $m_1\leq m_2\leq...\leq m_n$ is added for convenience), we will denote this fact briefly by $\lambda(M)=(m_1,m_2,...,m_n)$ and call $\lambda(M)$ the eigenvalue vector of $M$. For $A\in S(n)$ and $\rho>0$, we denote by \[ E_\rho:=\left\{x\in\mathds R^n\big{|}x^TAx<\rho^2\right\} =\left\{x\in\mathds R^n\big{|}r_A(x)<\rho\right\} \] the ellipsoid of size $\rho$ with respect to $A$, where we set $r_A(x):=\sqrt{x^TAx}$. For any $p\in\mathds R^n$, we write \[\sigma_k(p):=\sum_{1\leq s_1<s_2<...<s_k\leq n} p_{s_1}p_{s_2}...p_{s_k}\quad(\forall 1\leq k\leq n)\] as the $k$-th elementary symmetric function of $p$. Meanwhile, we will adopt the conventions that $\sigma_{-1}(p)\equiv0$, $\sigma_0(p)\equiv1$ and $\sigma_k(p)\equiv0$, $\forall k\geq n+1$; and we will also define \[\sigma_{k;i}(p):=\left(\sigma_{k}(\lambda) \big{|}_{\lambda_i=0}\right)\Big{|}_{\lambda=p} =\sigma_{k}\left(p_1,p_2,...,\widehat{p_i},...,p_n\right)\] for any $-1\leq k\leq n$ and any $1\leq i\leq n$, and similarly \[\sigma_{k;i,j}(p):=\left(\sigma_{k}(\lambda) \big{|}_{\lambda_i=\lambda_j=0}\right)\Big{|}_{\lambda=p} =\sigma_{k}\left(p_1,p_2,...,\widehat{p_i}, ...,\widehat{p_j},...,p_n\right)\] for any $-1\leq k\leq n$ and any $1\leq i,j\leq n$, $i\neq j$, for convenience. \subsection{Definitions and properties of $\Xi_k,\underline\xi_k,\overline\xi_k$ and $m_{k,l}$}\label{subsec.xi} \quad To establish the existence of the solution of \eref{eqn.hqe} by Perron's method, the key point is to find some appropriate subsolutions of the equation. Since the Hessian quotient equation \eref{eqn.hqe} is a fully nonlinear equation involving $\sigma_k(\lambda)$ and $\sigma_l(\lambda)$, polynomials in the eigenvalues of the Hessian matrix $D^2u$ of different orders of homogeneity, to solve it we need to strike a balance between the two.
It will become clear that the quantities $\Xi_k,\underline\xi_k,\overline\xi_k$ and $m_{k,l}$, which we shall introduce below, are very natural and perfectly suited to this purpose. \begin{definition} For any $0\leq k\leq n$ and any $a\in\mathds R^n\setminus\{0\}$, let \[\Xi_k:=\Xi_k(a,x) :=\frac{\sum_{i=1}^n\sigma_{k-1;i}(a)a_i^2x_i^2} {\sigma_k(a)\sum_{i=1}^{n}a_ix_i^2}, ~\forall x\in\mathds R^n\setminus\{0\},\] and define \[\overline{\xi}_k:=\overline{\xi}_k(a) :=\sup_{x\in\mathds R^n\setminus\{0\}}\Xi_k(a,x)\] and \[\underline{\xi}_k:=\underline{\xi}_k(a) :=\inf_{x\in\mathds R^n\setminus\{0\}}\Xi_k(a,x).\] \end{definition} \begin{definition} For any $0\leq l<k\leq n$ and any $a\in\mathds R^n\setminus\{0\}$, let \begin{equation}\label{eqn.mkla} \displaystyle m_{k,l}:=m_{k,l}(a):=\frac{k-l}{\overline{\xi}_k(a)-\underline{\xi}_l(a)}. \end{equation} \end{definition} We remark, for the reader's convenience, that $\Xi_k$ originates from the computation of $\sigma_k(D^2\Phi(x))$ where $\Phi(x)$ is a generalized radially symmetric function (see \lref{lem.skm} and the proof of \lref{lem.Phi-subsol}), that $\underline\xi_k$ and $\overline\xi_k$ result from the comparison between $\sigma_k(\lambda)$ and $\sigma_l(\lambda)$ in the attempt to derive an ordinary differential equation from the original equation (see the last part of the proof of \lref{lem.Phi-subsol}), and that $m_{k,l}$ arises in the process of solving this ordinary differential equation (see \eref{eqn.mdlnr} in the proof of \lref{lem.psi}). Via $\Xi_k,\underline\xi_k$ and $\overline\xi_k$, we obtain a good balance between $\sigma_k(\lambda)$ and $\sigma_l(\lambda)$, and this balance is measured by $m_{k,l}$. Furthermore, we will find that $m_{k,l}$ is also closely related to the decay and asymptotic behavior of the solution (see \lref{lem.psi}\textsl{-(iii)}, \cref{cor.mub} and \tref{thm.hqe}).
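Before proceeding, it may help to work out a simple instance of these definitions. Take $n=3$, $a=(1,1,2)$ and $(k,l)=(1,0)$. Since $\sigma_1(a)=4$ and $\sigma_{0;i}(a)=1$ for each $i$, we have \[\Xi_1(a,x)=\frac{x_1^2+x_2^2+4x_3^2}{4\left(x_1^2+x_2^2+2x_3^2\right)}, ~\forall x\in\mathds R^3\setminus\{0\},\] which attains its minimum $1/4$ at $x=e_1$ and its maximum $1/2$ at $x=e_3$. Recalling the convention $\sigma_{-1}\equiv0$ (so that $\underline{\xi}_0(a)=0$), we obtain \[\underline{\xi}_1(a)=\frac{1}{4},\quad \overline{\xi}_1(a)=\frac{1}{2} \quad\text{and}\quad m_{1,0}(a)=\frac{1-0}{\overline{\xi}_1(a)-\underline{\xi}_0(a)}=2.\] In particular, $\underline{\xi}_1(a)\leq\frac{1}{3}=\frac{k}{n}\leq\overline{\xi}_1(a)$, while $m_{1,0}(a)=2$ fails the requirement $m_{k,l}(\lambda(A))>2$ in the definition of $\mathscr{\widetilde A}_{k,l}$.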
It is easy to see that \[\overline{\xi}_k(\varrho a)=\overline{\xi}_k(a), ~\underline{\xi}_k(\varrho a)=\underline{\xi}_k(a), ~\forall\varrho\neq 0, ~\forall a\in\mathds R^n\setminus\{0\}, ~\forall 0\leq k\leq n,\] and \[\underline\xi_k(C(1,1,...,1))=\frac{k}{n} =\overline\xi_k(C(1,1,...,1)), ~\forall C>0,~\forall 0\leq k\leq n.\] Furthermore, we have the following lemma. \begin{lemma}\label{lem.xik} Suppose $a=(a_1,a_2,...,a_n)$ with $0<a_1\leq a_2\leq...\leq a_n$. Then \begin{equation}\label{eqn.uxknox} 0<\frac{a_1\sigma_{k-1;1}(a)}{\sigma_k(a)}=\underline{\xi}_k(a) \leq\frac{k}{n}\leq\overline{\xi}_k(a) =\frac{a_n\sigma_{k-1;n}(a)}{\sigma_k(a)}\leq 1, ~\forall 1\leq k\leq n; \end{equation} \begin{equation}\label{eqn.olxiin} 0=\overline{\xi}_0(a)<\frac{1}{n}\leq\frac{a_n}{\sigma_1(a)} =\overline{\xi}_1(a)\leq\overline{\xi}_2(a) \leq...\leq\overline{\xi}_{n-1}(a)<\overline{\xi}_n(a)=1; \end{equation} and \begin{equation}\label{eqn.ulxiin} 0=\underline{\xi}_0(a)<\frac{a_1}{\sigma_1(a)} =\underline{\xi}_1(a)\leq\underline{\xi}_2(a) \leq...\leq\underline{\xi}_{n-1}(a)<\underline{\xi}_n(a)=1. \end{equation} Moreover, \begin{equation}\label{eqn.xkkn} \underline\xi_k(a)=\frac{k}{n}=\overline\xi_k(a) \end{equation} for some $1\leq k\leq n-1$, if and only if $a=C(1,1,...,1)$ for some $C>0$. \end{lemma} \begin{proof} ($1^\circ$) By the definitions of $\sigma_k(a)$ and $\sigma_{k;i}(a)$, we see that \begin{equation}\label{eqn.sk} \sigma_k(a)=\sigma_{k;i}(a)+a_i\sigma_{k-1;i}(a), ~\forall 1\leq i\leq n; \end{equation} and \[\sum_{i=1}^{n}\sigma_{k;i}(a) =\frac{nC_{n-1}^k}{C_n^k}\sigma_k(a) =(n-k)\sigma_k(a).\] Hence we obtain \begin{equation}\label{eqn.ksk} \sum_{i=1}^{n}a_i\sigma_{k-1;i}(a)=k\sigma_k(a). \end{equation} Now we show that \begin{equation}\label{eqn.aiski} a_1\sigma_{k-1;1}(a) \leq a_2\sigma_{k-1;2}(a)\leq... \leq a_n\sigma_{k-1;n}(a). 
\end{equation} In fact, for any $i\neq j$, similar to \eref{eqn.sk}, we have \[a_i\sigma_{k-1;i}(a) =a_i\left(\sigma_{k-1;i,j}(a)+a_j\sigma_{k-2;i,j}(a)\right)\] and \[a_j\sigma_{k-1;j}(a) =a_j\left(\sigma_{k-1;i,j}(a)+a_i\sigma_{k-2;i,j}(a)\right),\] thus \[a_i\sigma_{k-1;i}(a)-a_j\sigma_{k-1;j}(a) =(a_i-a_j)\sigma_{k-1;i,j}(a).\] Hence if $a_i\lessgtr a_j$, then \begin{equation}\label{eqn.aslgeq} a_i\sigma_{k-1;i}(a)\lessgtr a_j\sigma_{k-1;j}(a). \end{equation} By the definition of $\overline\xi_k$, we have \begin{eqnarray*} \displaystyle \overline{\xi}_k(a) &=&\sup_{x\neq 0}\frac{\sum_{i=1}^n\sigma_{k-1;i}(a)a_i^2x_i^2} {\sigma_k(a)\sum_{i=1}^{n}a_ix_i^2}\\ \displaystyle &\geq& \sup_{\substack{x_1=...=x_{n-1}=0,\\x_n\neq 0}} \frac{\sum_{i=1}^n\sigma_{k-1;i}(a)a_i^2x_i^2} {\sigma_k(a)\sum_{i=1}^{n}a_ix_i^2}\\ \displaystyle &=&\sup_{x_n\neq 0}\frac{\sigma_{k-1;n}(a)a_n^2x_n^2} {\sigma_k(a)a_n x_n^2}\\ \displaystyle &=&\frac{a_n\sigma_{k-1;n}(a)}{\sigma_k(a)} \end{eqnarray*} and \begin{eqnarray*} \displaystyle \overline{\xi}_k(a) &=&\sup_{x\neq 0}\frac{\sum_{i=1}^n\sigma_{k-1;i}(a)a_i^2x_i^2} {\sigma_k(a)\sum_{i=1}^{n}a_ix_i^2}\\ \displaystyle &\leq& \sup_{x\neq 0}\frac{a_n\sigma_{k-1;n}(a)\sum_{i=1}^na_ix_i^2} {\sigma_k(a)\sum_{i=1}^{n}a_ix_i^2}\qquad\text{by \eref{eqn.aiski}}\\ \displaystyle &=&\frac{a_n\sigma_{k-1;n}(a)}{\sigma_k(a)}. \end{eqnarray*} Hence we obtain \begin{equation}\label{eqn.olxik} \overline\xi_k(a)=\frac{a_n\sigma_{k-1;n}(a)}{\sigma_k(a)}. \end{equation} Similarly \begin{equation}\label{eqn.ulxik} \underline\xi_k(a)=\frac{a_1\sigma_{k-1;1}(a)}{\sigma_k(a)}. \end{equation} From \eref{eqn.ksk}, we have \[\sum_{i=1}^{n}\frac{a_i\sigma_{k-1;i}(a)}{\sigma_k(a)}=k.\] Combining this with \eref{eqn.aiski}, \eref{eqn.olxik} and \eref{eqn.ulxik}, we deduce that \[\underline\xi_k(a)\leq\frac{k}{n}\leq\overline\xi_k(a).\] Thus the proof of \eref{eqn.uxknox} is complete, and \eref{eqn.xkkn} is also clear in view of \eref{eqn.aslgeq}. 
($2^\circ$) Since it follows from \eref{eqn.sk} that \[a_i\sigma_{k-1;i}(a)<\sigma_k(a), ~\forall 1\leq i\leq n, ~\forall 1\leq k\leq n-1,\] we obtain \[\underline\xi_k(a)\leq\overline\xi_k(a)<1,~\forall 0\leq k\leq n-1.\] On the other hand, we have $\overline{\xi}_n(a)=\underline{\xi}_n(a)=1$ which follows from \[a_i\sigma_{n-1;i}(a)=\sigma_n(a),~\forall 1\leq i\leq n.\] Combining \eref{eqn.olxik} and \eref{eqn.sk}, we see that \begin{eqnarray*} \overline\xi_k(a)&=&\frac{a_n\sigma_{k-1;n}(a)}{\sigma_k(a)} =\frac{a_n\sigma_{k-1;n}(a)}{\sigma_{k;n}(a)+a_n\sigma_{k-1;n}(a)}\\ &\leq&\frac{a_n\sigma_{k;n}(a)}{\sigma_{k+1;n}(a)+a_n\sigma_{k;n}(a)} =\frac{a_n\sigma_{k;n}(a)}{\sigma_{k+1}(a)} =\overline\xi_{k+1}(a), \end{eqnarray*} where we used the inequality \[\frac{\sigma_{k-1;n}(a)}{\sigma_{k;n}(a)} \leq\frac{\sigma_{k;n}(a)}{\sigma_{k+1;n}(a)}\] which is a variation of the famous Newton inequality (see \cite{HLP34}) \[\sigma_{k-1}(\lambda)\sigma_{k+1}(\lambda) \leq\left(\sigma_{k}(\lambda)\right)^2, ~\forall\lambda\in\mathds R^n.\] Thus the proof of \eref{eqn.olxiin}, and similarly of \eref{eqn.ulxiin}, is complete. \end{proof} Since it follows from \lref{lem.xik} that \[\frac{k-l}{n}\leq\overline{\xi}_k(a)-\underline{\xi}_l(a) \leq\overline{\xi}_k(a)\leq 1,\] with the second inequality strict when $l\geq1$, we obtain \begin{corollary}\label{cor.m} If $0\leq l<k\leq n$ and $a\in\Gamma^+$, then \[1\leq k-l\leq m_{k,l}(a)\overline\xi_k(a)\leq m_{k,l}(a)\leq n,\] where the second inequality is strict whenever $l\geq1$. \end{corollary} As an application of \cref{cor.m} and \lref{lem.xik}, we now verify \pref{prop.wtakl}. \begin{proof}[\textbf{Proof of \pref{prop.wtakl}}] For \textsl{(1)}, if $l\geq1$, then \cref{cor.m} gives $m_{k,l}(\lambda(A))>k-l\geq2$; if $l=0$ (so that $k\geq2$), then $m_{k,0}=k/\overline\xi_k$, which equals $n\geq3$ when $k=n$ and exceeds $k\geq2$ when $k<n$, since then $\overline\xi_k<1$ by \lref{lem.xik}. \textsl{(2)} follows from $\overline{\xi}_n\equiv1$ and $\underline{\xi}_0\equiv0$, which give $m_{n,0}\equiv n>2$. For \textsl{(3)}, we only need to note that $c_\ast I\in\mathscr{A}_{k,l}$ and $m_{k,l}(c_\ast(1,1,...,1))=n>2$. \end{proof} To help the reader become familiar with these new quantities, it is worthwhile to give the following examples, which are also applications of the above lemma.
\begin{example} Note that, for $a=(a_1,a_2,a_3)\in\mathds R^3$ with $0<a_1\leq a_2\leq a_3$, by \lref{lem.xik}, we have \[\overline\xi_3(a)\equiv1\equiv\underline\xi_3(a),\] \[\overline\xi_2(a)=\frac{a_3(a_1+a_2)}{a_1a_2+a_1a_3+a_2a_3}, \quad \underline\xi_2(a)=\frac{a_1(a_2+a_3)}{a_1a_2+a_1a_3+a_2a_3},\] \[\overline\xi_1(a)=\frac{a_3}{a_1+a_2+a_3}, \quad \underline\xi_1(a)=\frac{a_1}{a_1+a_2+a_3},\] and \[\overline\xi_0(a)\equiv0\equiv\underline\xi_0(a).\] Thus we can compute, for $a=(1,2,3)$, that \[\overline\xi_2=\frac{9}{11},~\underline\xi_2=\frac{5}{11}, ~\overline\xi_1=\frac{1}{2},~\underline\xi_1=\frac{1}{6},\] \[m_{3,2}=\frac{11}{6}<2, ~m_{3,1}=\frac{12}{5}>2, ~m_{3,0}\equiv3>2,\] \[m_{2,1}=\frac{66}{43}<2, ~m_{2,0}=\frac{22}{9}>2 ~\text{and}~m_{1,0}=2,\] and, for $a=(11,12,13)$, that \[\overline\xi_2=\frac{299}{431},~\underline\xi_2=\frac{275}{431}, ~\overline\xi_1=\frac{13}{36},~\underline\xi_1=\frac{11}{36},\] \[m_{3,2}=\frac{431}{156}>2, ~m_{3,1}=\frac{72}{25}>2, ~m_{3,0}\equiv3>2,\] \[m_{2,1}=\frac{15516}{6023}>2, ~m_{2,0}=\frac{862}{299}>2 ~\text{and}~m_{1,0}=\frac{36}{13}>2.\] \end{example} \begin{remark} \begin{enumerate}[(1)] \item By the definition of $m_{k,l}$, we can easily check that for any $1<k\leq n$, $m_{k,k-1}(a)>2$ if and only if $\underline\xi_{k-1}(a)\leq\overline\xi_{k}(a)<\underline\xi_{k-1}(a)+1/2$. This shows how $m_{k,l}$ plays a role in striking a balance between different orders of homogeneity, as we stated at the beginning of this subsection. \item \pref{prop.wtakl}-\textsl{(1)} states that $\mathscr{\widetilde A}_{k,l}=\mathscr{A}_{k,l}$ provided $k-l\geq2$. Note that this is the best case we can expect, since in general $\mathscr{\widetilde A}_{k,k-1}\subsetneqq\mathscr{A}_{k,k-1}$, which is evident from the fact stated in the first item of this remark (and also from the above examples).
For example, in $\mathds R^3$ we have \[m_{3,2}(a)>2 \Leftrightarrow \overline\xi_{3}(a)<\underline\xi_{2}(a)+1/2 \Leftrightarrow a_1>\frac{a_2a_3}{a_2+a_3},\] where the last inequality is not always true.\qed \end{enumerate} \end{remark} \subsection{Some preliminary lemmas}\label{subsec.prelem} \quad In this subsection, we collect some preliminary lemmas which will be mainly used in \sref{sec.pmaint}. We first give a lemma to compute $\sigma_k(\lambda(M))$ with $M$ of a certain type. If $\Phi(x):=\phi(r)$ with $\phi\in C^2$, $r=\sqrt{x^TAx}$, $A\in S(n)$ and $a:=\lambda(A)\in\Gamma^+$ (we may call $\Phi$ a \emph{generalized radially symmetric function} with respect to $A$, according to \cite{BLL14}), one can conclude that \[\partial_{ij}\Phi(x) =\frac{\phi'(r)}{r}a_i\delta_{ij}+ \frac{\phi''(r)-\frac{\phi'(r)}{r}}{r^2}(a_ix_i)(a_jx_j), ~\forall 1\leq i,j\leq n,\] provided $A$ is normalized to a diagonal matrix (see the first part of \ssref{subsec.pmaint} and the proof of \lref{lem.Phi-subsol} for details). As far as we know, generally there is no explicit formula for $\lambda(D^2\Phi(x))$ of this type, but luckily we have a method to calculate $\sigma_k\left(\lambda(D^2\Phi(x))\right)$ for each $1\leq k\leq n$, which can be presented as follows. \begin{lemma}\label{lem.skm} If $M=\left(p_i\delta_{ij}+s q_iq_j\right)_{n\times n}$ with $p,q\in\mathds R^n$ and $s\in\mathds R$, then \[\sigma_k\left(\lambda(M)\right) =\sigma_k(p)+s\sum_{i=1}^n\sigma_{k-1;i}(p)q_i^2, ~\forall 1\leq k\leq n.\] \end{lemma} \begin{proof} See \cite{BLL14}. \end{proof} To process information on the boundary we need the following lemma. \begin{lemma}\label{lem.Qxi} Let $D$ be a bounded strictly convex domain of $\mathds R^n$, $n\geq 2$, $\partial D\in C^2$, $\varphi\in C^0(\overline{D})\cap C^2(\partial{D})$ and let $A\in S(n)$, $\det{A}\neq0$.
Then there exists a constant $K>0$ depending only on $n$, $\mbox{\emph{diam}}\,D$, the convexity of $D$, $\norm{\varphi}_{C^2(\overline{D})}$, the $C^2$ norm of $\partial D$ and the upper bound of $A$, such that for any $\xi\in\partial D$, there exists $\bar{x}(\xi)\in\mathds R^n$ satisfying \[\left|\bar{x}(\xi)\right|\leq K \quad \mbox{and} \quad Q_\xi(x)<\varphi(x), ~\forall x\in \overline{D}\setminus\{\xi\},\] where \[Q_\xi(x):=\frac{1}{2}\left(x-\bar{x}(\xi)\right)^TA\left(x-\bar{x}(\xi)\right) -\frac{1}{2}\left(\xi-\bar{x}(\xi)\right)^TA\left(\xi-\bar{x}(\xi)\right) +\varphi(\xi),~\forall x\in\mathds R^n.\] \end{lemma} \begin{proof} See \cite{CL03} or \cite{BLL14}. \end{proof} \begin{remark}\label{rmk.Qxi} It is easy to check that $Q_\xi$ satisfies the following properties. \begin{enumerate}[\quad(1)] \item $Q_\xi\leq\varphi$ on $\overline{D}$ and $Q_\xi(\xi)=\varphi(\xi)$. \item If $A\in\mathscr{A}_{k,l}$, then \[\frac{\sigma_k(\lambda(D^2Q_\xi))}{\sigma_l(\lambda(D^2Q_\xi))}=1 \quad\mbox{in}~\mathds R^n.\] \item There exists $\bar{c}=\bar{c}(D,A,K)>0$ such that \[Q_\xi(x)\leq\frac{1}{2}x^TAx+\bar{c}, \quad \forall x\in\partial D,~\forall\xi\in\partial D.\] \end{enumerate} \end{remark} Now we introduce the following well-known lemmas concerning the comparison principle and Perron's method, which will be applied to the Hessian quotient equations but are stated in a slightly more general setting. These lemmas are adaptations of those appearing in \cite{CNS85}, \cite{Jen88}, \cite{Ish89}, \cite{Urb90} and \cite{CIL92}. For detailed proofs one may also consult \cite{BLL14} and \cite{LB14}. \begin{lemma}[Comparison principle]\label{lem.cp} Assume $\Gamma^+\subset\Gamma\subset\mathds R^n$ is an open convex symmetric cone with its vertex at the origin, and suppose $f\in C^1(\Gamma)$ and $f_{\lambda_i}(\lambda)>0$, $\forall \lambda\in\Gamma$, $\forall i=1,2,...,n$.
Let $\Omega\subset\mathds R^n$ be a domain and let $\underline{u},\overline{u}\in C^0(\overline\Omega)$ satisfy \[f\left(\lambda\left(D^2\underline{u}\right)\right)\geq 1 \geq f\left(\lambda\left(D^2\overline{u}\right)\right)\] in $\Omega$ in the viscosity sense. Suppose $\underline{u}\leq \overline{u}$ on $\partial\Omega$ (and additionally \[\lim_{|x|\rightarrow+\infty}\left(\underline{u}-\overline{u}\right)(x)=0\] provided $\Omega$ is unbounded). Then $\underline{u}\leq \overline{u}$ in $\Omega$. \end{lemma} \begin{lemma}[Perron's method]\label{lem.pm} Assume that $\Gamma^+\subset\Gamma\subset\mathds R^n$ is an open convex symmetric cone with its vertex at the origin, and suppose $f\in C^1(\Gamma)$ and $f_{\lambda_i}(\lambda)>0$, $\forall \lambda\in\Gamma$, $\forall i=1,2,...,n$. Let $\Omega\subset\mathds R^n$ be a domain, $\varphi\in C^0(\partial\Omega)$ and let $\underline{u},\overline{u}\in C^0(\overline\Omega)$ satisfy \[f\left(\lambda\left(D^2\underline{u}\right)\right)\geq 1 \geq f\left(\lambda\left(D^2\overline{u}\right)\right)\] in $\Omega$ in the viscosity sense. Suppose $\underline{u}\leq \overline{u}$ in $\Omega$, $\underline{u}=\varphi$ on $\partial\Omega$ (and additionally \[\lim_{|x|\rightarrow+\infty}\left(\underline{u}-\overline{u}\right)(x)=0\] provided $\Omega$ is unbounded). Then \begin{eqnarray*} u(x)&:=&\sup\Big\{v(x)\big{|}v\in C^0(\Omega),~ \underline{u}\leq v\leq \overline{u}~\mbox{in}~\Omega,~ f\left(\lambda\left(D^2v\right)\right)\geq 1~\mbox{in}~\Omega\\ &~&\qquad\mbox{in the viscosity sense},~ v=\varphi~\mbox{on}~\partial\Omega\Big\} \end{eqnarray*} is the unique viscosity solution of the Dirichlet problem \[\left\{ \begin{aligned} f\left(\lambda\left(D^2u\right)\right)=1 \qquad &\mbox{in}& \Omega,\\ u=\varphi \qquad &\mbox{on}& \partial\Omega. \end{aligned}\right.
\] \end{lemma} \begin{remark} In order to apply the above lemmas to the Hessian quotient operator \[f(\lambda):=\frac{\sigma_k(\lambda)}{\sigma_l(\lambda)}\] in the cone $\Gamma:=\Gamma_k$, we need to show that \begin{equation}\label{eqn.pskl} \partial_{\lambda_i}\left(\frac{\sigma_k(\lambda)}{\sigma_l(\lambda)}\right) >0,~\forall 1\leq i\leq n, ~\forall 0\leq l<k\leq n, ~\forall\lambda\in\Gamma_k, \end{equation} which indicates that the Hessian quotient equation \eref{eqn.hqe} is elliptic with respect to its $k$-convex solutions $u$. Indeed, for $l=0$, \eref{eqn.pskl} is clear in light of \eref{eqn.pisjp}. For $1\leq l<k\leq n$, since \[\partial_{\lambda_i}\sigma_k(\lambda) =\frac{\sigma_k(\lambda)-\sigma_{k;i}(\lambda)}{\lambda_i} =\sigma_{k-1;i}(\lambda)\] according to \eref{eqn.sk}, we have \[\partial_{\lambda_i}\left(\frac{\sigma_k(\lambda)} {\sigma_l(\lambda)}\right) =\frac{\sigma_{k-1;i}(\lambda)\sigma_l(\lambda) -\sigma_k(\lambda)\sigma_{l-1;i}(\lambda)} {(\sigma_l(\lambda))^2}.\] Thus to prove \eref{eqn.pskl}, it remains to verify \[\sigma_{k-1;i}(\lambda)\sigma_l(\lambda) \geq\sigma_k(\lambda)\sigma_{l-1;i}(\lambda).\] In view of \eref{eqn.sk}, this is equivalent to \[\sigma_{k-1;i}(\lambda)\sigma_{l;i}(\lambda) \geq\sigma_{k;i}(\lambda)\sigma_{l-1;i}(\lambda),\] which in turn is equivalent to \[\frac{\sigma_{l;i}(\lambda)}{\sigma_{l-1;i}(\lambda)} \geq\frac{\sigma_{k;i}(\lambda)}{\sigma_{k-1;i}(\lambda)},\] since $\sigma_{j;i}(\lambda)=\partial_{\lambda_i}\sigma_{j+1}(\lambda)>0$, $\forall 1\leq i\leq n$, $\forall 0\leq j\leq k-1$, $\forall\lambda\in\Gamma_k$, according to \eref{eqn.pisjp}.
For the proof of the latter, we only need to note that \[\frac{\sigma_{j;i}(\lambda)}{\sigma_{j-1;i}(\lambda)} \geq\frac{\sigma_{j+1;i}(\lambda)}{\sigma_{j;i}(\lambda)},\] which is a variation of the Newton inequality (see \cite{HLP34}) \[\sigma_{j-1}(\lambda)\sigma_{j+1}(\lambda) \leq\left(\sigma_{j}(\lambda)\right)^2, ~\forall\lambda\in\mathds R^n,\] as we saw in the proof of \lref{lem.xik}.\qed \end{remark} \section{Proof of the main theorem}\label{sec.pmaint} \subsection{Construction of the subsolutions}\label{subsec.csubsol} \quad The purpose of this subsection is to prove the following key lemma and then use it to construct subsolutions of \eref{eqn.hqe}. We remark that for the generalized radially symmetric subsolution $\Phi(x)=\phi(r)$ that we intend to construct, the solution $\psi(r)$ discussed in the following lemma actually coincides with $\phi'(r)/r$ (see the proof of \lref{lem.Phi-subsol}). \begin{lemma}\label{lem.psi} Let $0\leq l<k\leq n$, $n\geq3$, $A\in\mathscr{\widetilde A}_{k,l}$, $a:=(a_1,a_2,...,a_n):=\lambda(A)$, $0<a_1\leq a_2\leq...\leq a_n$ and $\beta\geq 1$. Then the problem \begin{equation}\label{eqn.psi} \left\{ \begin{aligned} \psi(r)^k+\overline{\xi}_k(a)r\psi(r)^{k-1}\psi'(r)\quad&\\ -\psi(r)^l-\underline{\xi}_l(a)r\psi(r)^{l-1}\psi'(r)&=0,~r>1,\\[0.1cm] \psi(1)&=\beta, \end{aligned} \right. \end{equation} has a unique smooth solution $\psi(r)=\psi(r,\beta)$ on $[1,+\infty)$, which satisfies \begin{enumerate}[\quad(i)] \item[(i)] $1\leq\psi(r,\beta)\leq\beta$, $\partial_r\psi(r,\beta)\leq0$, $\forall r\geq1$, $\forall\beta\geq1$. More specifically, $\psi(r,1)\equiv 1$, $\psi(1,\beta)\equiv\beta$; and $1<\psi(r,\beta)<\beta$, $\forall r>1$, $\forall\beta>1$.
\item[(ii)] $\psi(r,\beta)$ is continuous and strictly increasing with respect to $\beta$ and \[\lim_{\beta\rightarrow+\infty}\psi(r,\beta)=+\infty,~\forall r\geq 1.\] \item[(iii)] $\psi(r,\beta)=1+O(r^{-m})~(r\rightarrow+\infty)$, where $m=m_{k,l}(a)\in(2,n]$ and the $O(\cdot)$ depends only on $k$, $l$, $\lambda(A)$ and $\beta$. \end{enumerate} \end{lemma} \begin{proof} For brevity, we will often write $\psi(r)$ or $\psi(r,\beta)$ (respectively, $\underline{\xi}_l(a),\overline{\xi}_k(a)$) simply as $\psi$ (respectively, $\underline{\xi}_l,\overline{\xi}_k$), when there is no confusion. The proof of this lemma will be divided into three steps. \emph{Step 1.}\quad We deduce from \eref{eqn.psi} that \begin{equation}\label{eqn.psi-kl} \psi^k-\psi^l=-r\left(\overline{\xi}_k\psi^{k-1} -\underline{\xi}_l\psi^{l-1}\right)\frac{d\psi}{dr} \end{equation} and \begin{equation}\label{eqn.psid} \frac{d\psi}{dr}=-\frac{1}{r}\cdot\frac{\psi^k-\psi^l} {\overline{\xi}_k\psi^{k-1}-\underline{\xi}_l\psi^{l-1}} =-\frac{1}{r}\cdot\frac{\psi}{\overline{\xi}_k}\cdot \frac{\psi^{k-l}-1}{\psi^{k-l}-\frac{\underline{\xi}_l}{\overline{\xi}_k}} =:\frac{g(\psi)}{r}, \end{equation} where we set \[g(\nu):=-\frac{\nu}{\overline{\xi}_k}\cdot \frac{\nu^{k-l}-1}{\nu^{k-l}-\frac{\underline{\xi}_l}{\overline{\xi}_k}}.\] Hence the problem \eref{eqn.psi} is equivalent to the following problem \begin{equation}\label{eqn.psi-g} \left\{ \begin{aligned} \psi'(r)&=\frac{g(\psi(r))}{r},~r>1,\\ \psi(1)&=\beta. \end{aligned} \right. \end{equation} If $\beta=1$, then $\psi(r)\equiv 1$ is a solution of the problem \eref{eqn.psi-g} since $g(1)=0$. Thus, by the uniqueness theorem for solutions of ordinary differential equations, we know that $\psi(r,1)\equiv 1$ is the unique solution of the problem \eref{eqn.psi-g}.
Now if $\beta>1$, since \[h(r,\nu):=\frac{g(\nu)}{r} \in C^{\infty}((1,+\infty)\times(\nu_0,+\infty)),\] where \[\frac{\underline{\xi}_l}{\overline{\xi}_k}<\nu_0<1\] (note that $\nu_0$ exists, since we have $\underline{\xi}_l\leq l/n<k/n\leq\overline{\xi}_k$ by \lref{lem.xik}), by the existence theorem (the Picard-Lindel\"{o}f theorem) and the theorem on the maximal interval of existence for solutions of initial value problems of ordinary differential equations, we know that the problem \eref{eqn.psi-g} has a unique smooth solution $\psi(r)=\psi(r,\beta)$ locally around the initial point, which can be extended to a maximal interval $[1,\zeta)$, where $\zeta$ falls into exactly one of the following cases: \begin{enumerate}[\qquad($1^\circ$)] \item $\zeta=+\infty$; \item $\zeta<+\infty$, $\psi(r)$ is unbounded on $[1,\zeta)$; \item $\zeta<+\infty$, $\psi(r)$ converges to some point on $\{\nu=\nu_0\}$ as $r\rightarrow\zeta-$. \end{enumerate} Since \[\frac{g(\psi(r))}{r}<0,~\forall \psi(r)>1,\] we see that $\psi(r)=\psi(r,\beta)$ is strictly decreasing with respect to $r$, which excludes case ($2^\circ$) above. We claim now that case ($3^\circ$) can also be excluded. Otherwise, the solution curve would have to intersect $\{\nu=1\}$ at some point $(r_0,\psi(r_0))$ and then tend to $\{\nu=\nu_0\}$ after crossing it. But $\psi(r)\equiv 1$ is also a solution through $(r_0,\psi(r_0))$, which contradicts the uniqueness theorem for solutions of initial value problems of ordinary differential equations. Thus we complete the proof of the existence and uniqueness of the solution $\psi(r)=\psi(r,\beta)$ of the problem \eref{eqn.psi} on $[1,+\infty)$. For the same reason, i.e., $\psi(r,\beta)$ is strictly decreasing with respect to $r$ and the solution curve cannot cross $\{\nu=1\}$ provided $\beta>1$, assertion $\emph{(i)}$ of the lemma is also clear now, that is, $1<\psi(r,\beta)<\beta$, $\forall r>1$, $\forall\beta>1$.
\emph{Step 2.}\quad By the theorem of the differentiability of the solution with respect to the initial value, we can differentiate $\psi(r,\beta)$ with respect to $\beta$ as below: \[\left\{ \begin{aligned} &\frac{\partial\psi(r,\beta)}{\partial r}=\frac{g(\psi(r,\beta))}{r},&\\ &\psi(1,\beta)=\beta;& \end{aligned}\right.\] \[\Rightarrow\left\{ \begin{aligned} &\frac{\partial^2 \psi(r,\beta)}{\partial \beta\partial r} =\frac{g'(\psi(r,\beta))}{r}\cdot\frac{\partial\psi(r,\beta)}{\partial\beta},&\\ &\frac{\partial\psi(1,\beta)}{\partial\beta}=1.& \end{aligned}\right.\] Let \[v(r):=\frac{\partial\psi(r,\beta)}{\partial\beta}.\] We have \[\left\{ \begin{aligned} &\frac{dv}{dr} =\frac{g'(\psi(r,\beta))}{r}\cdot v,&\\ &v(1)=1.& \end{aligned}\right.\] Therefore we can deduce that \[\frac{dv}{v} =\frac{g'(\psi(r,\beta))}{r}dr,\] and hence \[\frac{\partial\psi(r,\beta)}{\partial\beta}=v(r) =\exp\int_1^r{\frac{g'(\psi(\tau,\beta))}{\tau}d\tau}.\] Since \[g'(\nu)=-\frac{\nu^{k-l}-1}{\overline{\xi}_k\nu^{k-l}-\underline{\xi}_l} -\frac{\nu}{\overline{\xi}_k}\cdot \frac{-\left(\frac{\underline{\xi}_l}{\overline{\xi}_k}-1\right)(k-l)\nu^{k-l-1}} {\left(\nu^{k-l}-\frac{\underline{\xi}_l}{\overline{\xi}_k}\right)^2}\] and \begin{equation}\label{eqn.onepsibeta} 0<\frac{\underline{\xi}_l}{\overline{\xi}_k} <1\leq\psi(r,\beta)\leq\beta=\psi(1),~\forall r\geq 1, \end{equation} we have \[g'(\psi(r,\beta))\leq -\frac{(k-l)\left(1-\frac{\underline{\xi}_l}{\overline{\xi}_k}\right)} {\overline{\xi}_k \left(\beta^{k-l}-\frac{\underline{\xi}_l}{\overline{\xi}_k}\right)^2} =-C\left(k,l,\lambda(A),\beta\right)<0,\] and hence \[0<\frac{\partial\psi(r,\beta)}{\partial\beta} \leq r^{-C}\leq 1,~\forall r\geq 1.\] Thus $\psi(r,\beta)$ is strictly increasing with respect to $\beta$.
\emph{Step 3.}\quad By \eref{eqn.psi-kl}, we have \begin{eqnarray*} -d\ln r=-\frac{dr}{r}&=&\frac{\overline{\xi}_k\psi^{k-1} -\underline{\xi}_l\psi^{l-1}}{\psi^k-\psi^l}d\psi =\frac{\overline{\xi}_k}{\psi}\cdot \frac{\psi^{k-l}-\frac{\underline{\xi}_l}{\overline{\xi}_k}}{\psi^{k-l}-1}d\psi\\ &=&\frac{\overline{\xi}_k}{\psi}\left(1 +\frac{1-\frac{\underline{\xi}_l}{\overline{\xi}_k}}{\psi^{k-l}-1}\right)d\psi =\left(\frac{\overline{\xi}_k}{\psi} +\frac{\overline{\xi}_k-\underline{\xi}_l}{\psi(\psi^{k-l}-1)}\right)d\psi\\ &=&\overline{\xi}_kd\ln\psi-\frac{\overline{\xi}_k-\underline{\xi}_l}{k-l} d\ln\frac{\psi^{k-l}}{\psi^{k-l}-1}\\ &=&d\ln\left(\psi^{\overline{\xi}_k}\left(1-\psi^{-k+l}\right) ^{\frac{\overline{\xi}_k-\underline{\xi}_l}{k-l}}\right). \end{eqnarray*} Hence \begin{equation}\label{eqn.mdlnr} -md\ln r=d\ln\left(\psi(r)^{m\overline{\xi}_k} \left(1-\psi(r)^{-k+l}\right)\right), \end{equation} where \[m:=m_{k,l}(a):=\frac{k-l}{\overline{\xi}_k-\underline{\xi}_l},\] which has been already defined in \eref{eqn.mkla} in \ssref{subsec.xi}. Note that, by the assumptions on $A$ and \cref{cor.m}, we have $2<m\leq n$ and $m\overline{\xi}_k\geq k-l$, with equality in the latter only when $l=0$.
Integrating \eref{eqn.mdlnr} from $1$ to $r$ and recalling $\psi(1)=\beta\geq1$, we get \[\ln\left(\psi(r)^{m\overline{\xi}_k} \left(1-\psi(r)^{-k+l}\right)\right) =\ln\left(\beta^{m\overline{\xi}_k} \left(1-\beta^{-k+l}\right)\right)+\ln r^{-m},\] and hence \[\psi(r)^{m\overline{\xi}_k} \left(1-\psi(r)^{-k+l}\right) =\beta^{m\overline{\xi}_k} \left(1-\beta^{-k+l}\right)r^{-m}=:B(\beta)r^{-m},\] where we set \[B(\beta):=\beta^{m\overline{\xi}_k}\left(1-\beta^{-k+l}\right) =\beta^{m\overline{\xi}_k-k+l}\left(\beta^{k-l}-1\right).\] Since \begin{eqnarray*} &~&\psi(r)^{m\overline{\xi}_k} \left(1-\psi(r)^{-k+l}\right) =\psi(r)^{m\overline{\xi}_k-k+l} \left(\psi(r)^{k-l}-1\right)\\ &=&\psi(r)^{m\overline{\xi}_k-k+l}\left(\psi(r)-1\right) \left(\psi(r)^{k-l-1}+\psi(r)^{k-l-2}+...+\psi(r)+1\right), \end{eqnarray*} we thus conclude that \begin{equation}\label{eqn.psimoar} \frac{\psi(r)-1}{r^{-m}} =\left(\psi(r)^{m\overline{\xi}_k-k+l} \left(\psi(r)^{k-l-1}+\psi(r)^{k-l-2}+ ...+\psi(r)+1\right)\right)^{-1}B(\beta). \end{equation} Note that $m\overline{\xi}_k-k+l\geq0$ and \[\beta-1=\left(\beta^{m\overline{\xi}_k-k+l} \left(\beta^{k-l-1}+\beta^{k-l-2}+ ...+\beta+1\right)\right)^{-1}B(\beta).\] Recalling \eref{eqn.onepsibeta}, we obtain \begin{equation}\label{eqn.psi-b} \beta-1\leq\frac{\psi(r,\beta)-1}{r^{-m}} \leq\frac{B(\beta)}{k-l}, ~\forall r\geq 1. \end{equation} Thus we have \[\lim_{\beta\rightarrow+\infty}\psi(r,\beta)=+\infty, ~\forall r\geq 1,\] and \[\psi(r,\beta)\rightarrow 1~(r\rightarrow+\infty), ~\forall \beta\geq 1.\] Substituting the latter into \eref{eqn.psimoar}, we get \[\frac{\psi(r,\beta)-1}{r^{-m}} \rightarrow\frac{B(\beta)}{k-l}~(r\rightarrow+\infty), ~\forall \beta\geq 1.\] Therefore \[\psi(r,\beta)=1+\frac{B(\beta)}{k-l}r^{-m}+o(r^{-m}) =1+O(r^{-m})~(r\rightarrow+\infty),\] where $o(\cdot)$ and $O(\cdot)$ depend only on $k$, $l$, $\lambda(A)$ and $\beta$. This completes the proof of the lemma.
\end{proof} \begin{remark} For $l=0$, i.e., the Hessian equation $\sigma_k(\lambda)=1$, we have a simpler proof. Consider the problem \begin{equation}\label{eqn.psi-he} \left\{ \begin{aligned} \psi(r)^k+\overline{\xi}_k(a)r\psi(r)^{k-1}\psi'(r)&=1,~r>1,\\[0.1cm] \psi(1)&=\beta. \end{aligned} \right. \end{equation} Set $m:=m_{k,0}(a)=k/\overline\xi_k$. We have \[\psi^k-1=-r\overline\xi_k\psi^{k-1}\frac{d\psi}{dr} =-\frac{1}{m}\cdot r\cdot\frac{d\left(\psi^k-1\right)}{dr},\] \[\frac{d\left(\psi^k-1\right)}{\psi^k-1} =-m\frac{dr}{r}\] and \[d\ln\left(\psi(r)^k-1\right)=-md\ln r=d\ln r^{-m}.\] Integrating it from $1$ to $r$ and recalling $\psi(1)=\beta\geq1$, we get \[\psi(r)^k-1=\left(\psi(1)^k-1\right)r^{-m} =(\beta^k-1)r^{-m}\] and \begin{eqnarray} \nonumber\psi(r)&=&\left(1+(\beta^k-1)r^{-m}\right)^{\frac{1}{k}}\\ &=&\left(1+(\beta^k-1)r^{-\frac{k}{\overline\xi_k}}\right)^{\frac{1}{k}} \label{eqn.compare}\\ \nonumber&=&1+\frac{\beta^k-1}{k}r^{-m}+o(r^{-m})=1+O(r^{-m})~(r\rightarrow+\infty). \end{eqnarray} It is obvious that the $\psi(r)$ solved here from \eref{eqn.psi-he} for $l=0$ satisfies all the conclusions of \lref{lem.psi}. Moreover, comparing \eref{eqn.compare} with the corresponding formulas in \cite{BLL14} and \cite{CL03}, we observe that our method actually provides a systematic way to construct the subsolutions, which yields results containing the previous ones as special cases. \qed \end{remark} Set \[\mu_R(\beta):=\int_R^{+\infty}\tau\big(\psi(\tau,\beta)-1\big)d\tau, \quad\forall R\geq 1,~\forall\beta\geq 1.\] Note that the integral on the right-hand side is convergent in view of \lref{lem.psi}-\textsl{(iii)}. Moreover, as an application of \lref{lem.psi}, we have the following. \begin{corollary}\label{cor.mub} $\mu_R(\beta)$ is nonnegative, continuous and strictly increasing with respect to $\beta$.
Furthermore, \[\mu_R(\beta)\geq\int_R^{+\infty}(\beta-1)\tau^{-m+1}d\tau \rightarrow+\infty~(\beta\rightarrow+\infty),~\forall R\geq 1;\] and \[\mu_R(\beta)=O(R^{-m+2})~(R\rightarrow+\infty),~\forall\beta\geq 1.\] \end{corollary} \begin{proof} These assertions follow from \lref{lem.psi}-\textsl{(ii),(iii)} and the property \eref{eqn.psi-b} of $\psi(r,\beta)$ above. \end{proof} For any $\alpha,\beta,\gamma\in\mathds R$, $\beta,\gamma\geq1$ and for any diagonal matrix $A\in\mathscr{\widetilde A}_{k,l}$, let \[\phi(r):=\phi_{\alpha,\beta,\gamma}(r) :=\alpha+\int_\gamma^r\tau\psi(\tau,\beta)d\tau, ~\forall r\geq\gamma,\] and \[\Phi(x):=\Phi_{\alpha,\beta,\gamma,A}(x) :=\phi_{\alpha,\beta,\gamma}(r_A(x)), ~\forall x\in\mathds R^n\setminus{E_\gamma},\] where $r=r_A(x)=\sqrt{x^TAx}$. Then we have \begin{eqnarray} \phi_{\alpha,\beta,\gamma}(r) &=&\nonumber\int_\gamma^r\tau\big(\psi(\tau,\beta)-1\big)d\tau +\frac{1}{2}r^2-\frac{1}{2}\gamma^2+\alpha\\ &=&\frac{1}{2}r^2+\left(\mu_\gamma(\beta)+\alpha -\frac{1}{2}\gamma^2\right)-\mu_r(\beta)\label{eqn.phi-mu}\\ &=&\frac{1}{2}r^2+\left(\mu_\gamma(\beta)+\alpha -\frac{1}{2}\gamma^2\right)+O(r^{-m+2}) ~(r\rightarrow+\infty),\quad\label{eqn.phi-O} \end{eqnarray} according to \cref{cor.mub}, and now we can assert the following. \begin{lemma}\label{lem.Phi-subsol} $\Phi$ is a smooth $k$-convex subsolution of \eref{eqn.hqe} in $\mathds R^n\setminus\overline{E_\gamma}$, that is, \[\sigma_j\left(\lambda\left(D^2{\Phi(x)}\right)\right)\geq0, ~\forall 1\leq j\leq k,~\forall x\in\mathds R^n\setminus\overline{E_\gamma},\] and \[\frac{\sigma_k\left(\lambda\left(D^2{\Phi(x)}\right)\right)} {\sigma_l\left(\lambda\left(D^2{\Phi(x)}\right)\right)}\geq 1, ~\forall x\in\mathds R^n\setminus\overline{E_\gamma}.\] \end{lemma} \begin{proof} By definition we have $\phi'(r)=r\psi(r)$ and $\phi''(r)=\psi(r)+r\psi'(r)$.
Since \[r^2=x^TAx=\sum_{i=1}^{n}a_ix_i^2,\] we deduce that \[2r\partial_{x_i}r=\partial_{x_i}\left(r^2\right)=2a_ix_i \quad\text{and}\quad \partial_{x_i}r=\frac{a_ix_i}{r}.\] Consequently \[\partial_{x_i}\Phi(x)=\phi'(r)\partial_{x_i}r =\frac{\phi'(r)}{r}a_ix_i,\] \begin{eqnarray*} \partial_{x_ix_j}\Phi(x) &=&\frac{\phi'(r)}{r}a_i\delta_{ij}+ \frac{\phi''(r)-\frac{\phi'(r)}{r}}{r^2}(a_ix_i)(a_jx_j)\\ &=&\psi(r)a_i\delta_{ij}+\frac{\psi'(r)}{r}(a_ix_i)(a_jx_j), \end{eqnarray*} and therefore \[D^2\Phi=\left(\psi(r)a_i\delta_{ij} +\frac{\psi'(r)}{r}(a_ix_i)(a_jx_j)\right)_{n\times n}.\] So we can conclude from \lref{lem.skm} that \begin{eqnarray*} \sigma_j\left(\lambda\left(D^2\Phi\right)\right) &=&\sigma_j(a)\psi(r)^j+\frac{\psi'(r)}{r}\psi(r)^{j-1} \sum_{i=1}^n\sigma_{j-1;i}(a)a_i^2x_i^2\\ &=&\sigma_j(a)\psi^j+\Xi_j(a,x)\sigma_j(a)r\psi^{j-1}\psi'\\ &\geq&\sigma_j(a)\psi^j+\overline{\xi}_j(a)\sigma_j(a)r\psi^{j-1}\psi'\\ &=&\sigma_j(a)\psi^{j-1}\left(\psi+\overline{\xi}_j(a)r\psi'\right), ~\forall 1\leq j\leq n, \end{eqnarray*} where we have used the facts that $\psi(r)\geq1>0$ and $\psi'(r)\leq0$ for all $r\geq1$, according to \lref{lem.psi}-\textsl{(i)}. For any fixed $1\leq j\leq k$, in view of \lref{lem.xik} and \lref{lem.psi}-\textsl{(i)}, we have \[0\leq\frac{\psi^{k-l}-1}{\psi^{k-l}-\frac{\underline{\xi}_l(a)}{\overline{\xi}_k(a)}} <1\leq\frac{\overline{\xi}_k(a)}{\overline{\xi}_j(a)}.\] Hence it follows from \eref{eqn.psid} that \[\psi'=-\frac{1}{r}\cdot\frac{\psi}{\overline{\xi}_k(a)}\cdot \frac{\psi^{k-l}-1}{\psi^{k-l}-\frac{\underline{\xi}_l(a)}{\overline{\xi}_k(a)}} >-\frac{1}{r}\cdot\frac{\psi}{\overline{\xi}_j(a)},\] which yields $\psi+\overline{\xi}_j(a)r\psi'>0$. 
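The elementary-symmetric-function identity from \lref{lem.skm} invoked above, $\sigma_j\left(\lambda\left(D^2\Phi\right)\right) =\sigma_j(a)\psi^j+\frac{\psi'}{r}\psi^{j-1}\sum_{i=1}^n\sigma_{j-1;i}(a)a_i^2x_i^2$, can be checked numerically. In the sketch below the values of $a$, $x$, $\psi$ and $\psi'/r$ are arbitrary illustrative stand-ins (not solutions of the ODE); the identity is exact for a diagonal matrix plus a rank-one update.

```python
import itertools

import numpy as np

def sigma(j, vals):
    # elementary symmetric polynomial sigma_j of the entries of vals
    if j == 0:
        return 1.0
    return float(sum(np.prod(c) for c in itertools.combinations(vals, j)))

# Illustrative stand-in values (hypothetical): n = 3.
a = np.array([1.0, 2.0, 3.0])          # lambda(A) for a diagonal A
x = np.array([0.5, -1.0, 2.0])
psi, dpsi_over_r = 1.2, -0.01          # stand-ins for psi(r) and psi'(r)/r

v = a * x
M = np.diag(psi * a) + dpsi_over_r * np.outer(v, v)    # D^2 Phi
lam = np.linalg.eigvalsh(M)

for j in range(1, 4):
    lhs = sigma(j, lam)
    rhs = sigma(j, a) * psi ** j + dpsi_over_r * psi ** (j - 1) * sum(
        sigma(j - 1, np.delete(a, i)) * (a[i] * x[i]) ** 2
        for i in range(3))
    assert abs(lhs - rhs) < 1e-10
```

The $j=1$ case is simply the trace of $D^2\Phi$; the $j=n$ case is its determinant (the matrix determinant lemma).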
Since $A\in\mathscr{\widetilde A}_{k,l}$ implies $a\in\Gamma^+$, that is, $\sigma_i(a)>0$ for all $1\leq i\leq n$, we thus conclude that \[\sigma_j\left(\lambda\left(D^2\Phi\right)\right)>0, ~\forall 1\leq j\leq k.\] In particular, we have \[\sigma_k\left(\lambda\left(D^2\Phi\right)\right)>0 \quad\text{and}\quad \sigma_l\left(\lambda\left(D^2\Phi\right)\right)>0.\] On the other hand, \begin{eqnarray*} &~&\sigma_k\left(\lambda\left(D^2\Phi\right)\right) -\sigma_l\left(\lambda\left(D^2\Phi\right)\right)\\ &=&\sigma_k(a)\psi^k+\Xi_k(a,x)\sigma_k(a)r\psi^{k-1}\psi' -\sigma_l(a)\psi^l-\Xi_l(a,x)\sigma_l(a)r\psi^{l-1}\psi'\\ &\geq&\sigma_k(a)\psi^k+\overline{\xi}_k(a)\sigma_k(a)r\psi^{k-1}\psi' -\sigma_l(a)\psi^l-\underline{\xi}_l(a)\sigma_l(a)r\psi^{l-1}\psi'\\ &=&\sigma_k(a)\left(\psi^k+\overline{\xi}_k(a)r\psi^{k-1}\psi' -\psi^l-\underline{\xi}_l(a)r\psi^{l-1}\psi'\right)\\ &=&0. \end{eqnarray*} Therefore \[\frac{\sigma_k\left(\lambda\left(D^2{\Phi}\right)\right)} {\sigma_l\left(\lambda\left(D^2{\Phi}\right)\right)}\geq 1.\] This completes the proof of \lref{lem.Phi-subsol}. \end{proof} \subsection{Proof of \tref{thm.hqe}}\label{subsec.pmaint} \quad We first introduce the following lemma which is a special and simple case of \tref{thm.hqe} with the additional condition that the matrix $A$ is diagonal and the vector $b$ vanishes. \begin{lemma}\label{lem.hqe} Let $D$ be a bounded strictly convex domain in $\mathds R^n$, $n\geq 3$, $\partial D\in C^2$ and let $\varphi\in C^2(\partial D)$. 
Then for any given \textsl{diagonal matrix} $A\in\mathscr{\widetilde A}_{k,l}$ with $0\leq l<k\leq n$, there exists a constant $\tilde{c}$ depending only on $n,D,k,l,A$ and $\norm{\varphi}_{C^2(\partial D)}$, such that for every $c\geq\tilde{c}$, there exists a unique viscosity solution $u\in C^0(\mathds R^n\setminus D)$ of \begin{equation}\label{eqn.hqe-ac} \left\{ \begin{aligned} \displaystyle \frac{\sigma_k(\lambda(D^2u))}{\sigma_l(\lambda(D^2u))}&=1 \quad\text{in}~\mathds R^n\setminus\overline{D},\\%[0.05cm] \displaystyle u&=\varphi\quad\text{on}~\partial D,\\%[0.05cm] \displaystyle \limsup_{|x|\rightarrow+\infty}|x|^{m-2} &\left|u(x)-\left(\frac{1}{2}x^{T}Ax+c\right)\right|<\infty, \end{aligned} \right. \end{equation} where $m=m_{k,l}(\lambda(A))\in(2,n]$. \end{lemma} To prove \tref{thm.hqe}, it suffices to prove \lref{lem.hqe}. Indeed, suppose that $D,\varphi,A$ and $b$ satisfy the hypothesis of \tref{thm.hqe}. Consider the decomposition $A=Q^TNQ$, where $Q$ is an orthogonal matrix and $N$ is a diagonal matrix which satisfies $\lambda(N)=\lambda(A)$. 
Let \[\tilde{x}:=Qx,\quad\widetilde{D}:=\left\{Qx|x\in D\right\}\] and \[\tilde{\varphi}(\tilde{x}):=\varphi(x)-b^Tx =\varphi(Q^T\tilde{x})-b^TQ^T\tilde{x}.\] By \lref{lem.hqe}, we conclude that there exists a constant $\tilde{c}$ depending only on $n,\widetilde{D},k,l,N$ and $\norm{\tilde{\varphi}}_{C^2(\partial\widetilde{D})}$, such that for every $c\geq\tilde{c}$, there exists a unique viscosity solution $\tilde{u}\in C^0(\mathds R^n\setminus\widetilde{D})$ of \begin{equation}\label{eqn.hqe-nc} \left\{ \begin{aligned} \displaystyle \frac{\sigma_k(\lambda(D^2\tilde{u}))} {\sigma_l(\lambda(D^2\tilde{u}))}&=1 \quad\text{in}~\mathds R^n\setminus\overline{\widetilde{D}},\\%[0.05cm] \displaystyle \tilde{u}&=\tilde{\varphi} \quad\text{on}~\partial\widetilde{D},\\%[0.05cm] \displaystyle \limsup_{|\tilde{x}|\rightarrow+\infty}|\tilde{x}|^{m-2} &\left|\tilde{u}(\tilde{x})-\left(\frac{1}{2}\tilde{x}^{T}N\tilde{x} +c\right)\right|<\infty, \end{aligned} \right. \end{equation} where $m=m_{k,l}(\lambda(N))=m_{k,l}(\lambda(A))\in(2,n]$. Let \[u(x):=\tilde{u}(\tilde{x})+b^Tx=\tilde{u}(Qx)+b^Tx =\tilde{u}(\tilde{x})+b^TQ^T\tilde{x}.\] We claim that $u$ is the solution of \eref{eqn.hqe-abc} in \tref{thm.hqe}. To show this, we only need to note that \[D^2u(x)=Q^TD^2\tilde{u}(\tilde{x})Q,\quad \lambda\left(D^2u(x)\right)=\lambda\left(D^2\tilde{u}(\tilde{x})\right);\] \[u=\varphi\quad\text{on}~\partial D;\] and \begin{eqnarray*} &&\displaystyle |\tilde{x}|^{m-2}\left|\tilde{u}(\tilde{x}) -\left(\frac{1}{2}\tilde{x}^{T}N\tilde{x}+c\right)\right|\\ &=&\displaystyle \left(x^TQ^TQx\right)^{(m-2)/2}\left|u(x)-b^Tx -\left(\frac{1}{2}x^TQ^TNQx+c\right)\right|\\ &=&\displaystyle |x|^{m-2} \left|u(x)-\left(\frac{1}{2}x^{T}Ax+b^Tx+c\right)\right|. \end{eqnarray*} Thus we have proved that \tref{thm.hqe} can be established by \lref{lem.hqe}. 
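The reduction just performed uses only the invariance of the Hessian eigenvalues under the orthogonal change of variables $\tilde x = Qx$, since $D^2u(x)=Q^TD^2\tilde{u}(\tilde{x})Q$. A minimal numerical illustration (the random matrices below are illustrative stand-ins for $D^2\tilde u$ and $Q$):

```python
import numpy as np

# Check that lambda(Q^T H Q) = lambda(H) for an orthogonal Q -- the fact
# used above to reduce the theorem to the diagonal case.  H is a random
# symmetric matrix standing in for the Hessian of u-tilde.
rng = np.random.default_rng(1)
n = 5
S = rng.standard_normal((n, n))
H = S + S.T                                         # symmetric "Hessian"
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))    # random orthogonal Q

lam1 = np.sort(np.linalg.eigvalsh(H))
lam2 = np.sort(np.linalg.eigvalsh(Q.T @ H @ Q))
assert np.allclose(lam1, lam2)
```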
\begin{remark} \begin{enumerate}[(1)] \item The above argument shows that the lower bound $\tilde{c}$ on $c$ in \tref{thm.hqe} cannot in general be removed. Indeed, for the radial solutions of the Hessian equation $\sigma_k(\lambda(D^2u))=1$ in $\mathds R^n\setminus\overline{B_1}$, \cite[Theorem 2]{WB13} states that there is no solution when $c$ is too small. \item Unlike for the Poisson equation and the Monge-Amp\`{e}re equation, for the Hessian quotient equation the matrix $A$ in \tref{thm.hqe} can in general only be normalized to a diagonal matrix, not to a constant multiple of $I$. This is why we study the generalized radially symmetric solutions, rather than the radial solutions, of the original equation \eref{eqn.hqe}. See also \cite{BLL14}.\qed \end{enumerate} \end{remark} Now we use Perron's method to prove \lref{lem.hqe}. \begin{proof}[\textbf{Proof of \lref{lem.hqe}}] We may assume without loss of generality that $E_1\subset\subset D\subset\subset E_{\bar{r}} \subset\subset E_{\hat{r}}$ and $a:=(a_1,a_2,...,a_n):=\lambda(A)$ with $0<a_1\leq a_2\leq...\leq a_n$. The proof is divided into three steps. \emph{Step 1.}\quad Let \[\eta:=\inf_{\substack{x\in \overline{E_{\bar{r}}}\setminus D\\ \xi\in\partial D}}Q_\xi(x), \quad Q(x):=\sup_{\xi\in\partial D}Q_\xi(x)\] and \[\Phi_\beta(x):=\eta+\int_{\bar{r}}^{r_A(x)}\tau\psi(\tau,\beta)d\tau, \quad\forall r_A(x)\geq 1,~\forall \beta\geq 1,\] where $Q_{\xi}(x)$ and $\psi(r,\beta)$ are given by \lref{lem.Qxi} and \lref{lem.psi}, respectively. Then we have \begin{enumerate}[\quad(1)] \item Since $Q$ is the supremum of a collection of smooth solutions $\{Q_\xi\}$ of \eref{eqn.hqe}, it is a continuous subsolution of \eref{eqn.hqe}, i.e., \[\frac{\sigma_k(\lambda(D^2Q))}{\sigma_l(\lambda(D^2{Q}))}\geq 1\] in $\mathds R^n\setminus \overline{D}$ in the viscosity sense (see \cite[Proposition 2.2]{Ish89}). \item $Q=\varphi$ on $\partial D$.
To prove this we only need to show that for any $\xi\in\partial D$, $Q(\xi)=\varphi(\xi)$. This is obvious since $Q_\xi\leq\varphi$ on $\overline{D}$ and $Q_\xi(\xi)=\varphi(\xi)$, according to \rref{rmk.Qxi}-\textsl{(1)}. \item By \lref{lem.Phi-subsol}, $\Phi_\beta$ is a smooth subsolution of \eref{eqn.hqe} in $\mathds R^n\setminus\overline{D}$. \item $\Phi_\beta\leq\varphi$ on $\partial D$ and $\Phi_\beta\leq Q$ on $\overline{E_{\bar{r}}}\setminus D$. To show them we first note that $\Phi_\beta(x)$ is strictly increasing with respect to $r_A(x)$ since $\psi(r,\beta)\geq 1>0$ by \lref{lem.psi}-\textsl{(i)}. Invoking $\Phi_\beta=\eta$ on $\partial E_{\bar{r}}$ and $\eta\leq Q$ on $\overline{E_{\bar{r}}}\setminus D$ by their definitions, we have $\Phi_\beta\leq\eta\leq Q$ on $\overline{E_{\bar{r}}}\setminus D$. On the other hand, according to \rref{rmk.Qxi}-\textsl{(1)}, we have $Q_\xi\leq\varphi$ on $\overline{D}$ which implies that $\eta\leq\varphi$ on $\overline{D}$. Combining these two aspects we deduce that $\Phi_\beta\leq\eta\leq\varphi$ on $\partial D$. \item $\Phi_\beta(x)$ is strictly increasing with respect to $\beta$ and \begin{equation}\label{eqn.Phib} \lim_{\beta\rightarrow+\infty}\Phi_\beta(x)=+\infty, ~\forall r_A(x)\geq 1, \end{equation} by the definition of $\Phi_\beta(x)$ and \lref{lem.psi}-\textsl{(ii)}. 
\item As we showed in \eref{eqn.phi-mu} and \eref{eqn.phi-O}, for any $\beta\geq 1$, we have \begin{eqnarray*} \Phi_\beta(x) &=&\eta+\int_{\bar{r}}^{r_A(x)}\tau \psi(\tau,\beta)d\tau\\ &=&\eta+\frac{1}{2}(r_A(x)^2-\bar{r}^2) +\int_{\bar{r}}^{r_A(x)}\tau\big(\psi(\tau,\beta)-1\big)d\tau\\ &=&\frac{1}{2}r_A(x)^2+\left(\eta-\frac{1}{2}\bar{r}^2 +\mu_{\bar{r}}(\beta)\right)-\mu_{r_A(x)}(\beta)\\ &=&\frac{1}{2}r_A(x)^2+\mu(\beta)-\mu_{r_A(x)}(\beta)\\ &=&\frac{1}{2}x^TAx+\mu(\beta)+O\left(|x|^{-m+2}\right) ~(|x|\rightarrow+\infty), \end{eqnarray*} where we set \[\mu(\beta):=\eta-\frac{1}{2}\bar{r}^2+\mu_{\bar{r}}(\beta),\] and used the fact that $x^TAx=O(|x|^2)~(|x|\rightarrow+\infty)$ since $\lambda(A)\in\Gamma^+$. \end{enumerate} \emph{Step 2.}\quad For fixed $\hat{r}>\bar{r}$, there exists $\hat{\beta}>1$ such that \[\min_{\partial E_{\hat{r}}}\Phi_{\hat{\beta}} >\max_{\partial E_{\hat{r}}}Q,\] in light of \eref{eqn.Phib}. Thus we obtain \begin{equation}\label{eqn.PhiQ} \Phi_{\hat{\beta}}>Q\quad\mbox{on}~\partial E_{\hat{r}}. \end{equation} Let \[\tilde{c}:=\max\left\{\eta,\mu(\hat{\beta}),\bar{c}\right\},\] where the $\bar{c}$ comes from \rref{rmk.Qxi}-\textsl{(3)}, and hereafter fix $c\geq\tilde{c}$. By \lref{lem.psi} and \cref{cor.mub} we deduce that \[\psi(r,1)\equiv 1\Rightarrow\mu_{\bar{r}}(1)=0 \Rightarrow\mu(1)=\eta-\frac{1}{2}\bar{r}^2<\eta\leq\tilde{c}\leq c,\] and \[\lim_{\beta\rightarrow+\infty}\mu_{\bar{r}}(\beta)=+\infty \Rightarrow\lim_{\beta\rightarrow+\infty}\mu(\beta)=+\infty.\] On the other hand, it follows from \cref{cor.mub} that $\mu(\beta)$ is continuous and strictly increasing with respect to $\beta$ \big{(}which indicates that the inverse of $\mu(\beta)$ exists and $\mu^{-1}$ is strictly increasing\big{)}. Thus there exists a unique $\beta(c)$ such that $\mu(\beta(c))=c$. 
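The construction of $\beta(c)$ can be illustrated numerically. The sketch below uses the explicit $l=0$ profile $\psi(r,\beta)=\left(1+(\beta^k-1)r^{-m}\right)^{1/k}$ with illustrative (hypothetical) values $k=3$, $\overline{\xi}_k=1$ (so $m=3$), $\bar r=1$, $\eta=1$; it computes $\mu(\beta)$ by quadrature and solves $\mu(\beta(c))=c$ by bisection, relying on the strict monotonicity of $\mu$:

```python
# Illustrative construction of beta(c): l=0 profile
# psi(r, beta) = (1 + (beta^k - 1) r^(-m))^(1/k), with the hypothetical
# values k=3, m=3, r_bar=1, eta=1; mu(beta) is computed by a trapezoid
# rule on a geometric grid truncated at T.
k, m, eta, r_bar = 3, 3, 1.0, 1.0

def psi(r, beta):
    return (1.0 + (beta ** k - 1.0) * r ** (-m)) ** (1.0 / k)

def mu(beta, T=1e5, n=2000):
    # mu(beta) = eta - r_bar^2/2 + mu_{r_bar}(beta)
    q = (T / r_bar) ** (1.0 / n)
    ts = [r_bar * q ** i for i in range(n + 1)]
    fs = [t * (psi(t, beta) - 1.0) for t in ts]
    integral = sum(0.5 * (fs[i] + fs[i + 1]) * (ts[i + 1] - ts[i])
                   for i in range(n))
    return eta - 0.5 * r_bar ** 2 + integral

assert mu(1.0) == 0.5        # psi(r, 1) == 1, so mu_{r_bar}(1) = 0
assert mu(2.0) < mu(3.0)     # mu is strictly increasing in beta

c = 5.0                      # a value with mu(1) < c < mu(50)
lo, hi = 1.0, 50.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if mu(mid) < c:
        lo = mid
    else:
        hi = mid
beta_c = 0.5 * (lo + hi)
assert abs(mu(beta_c) - c) < 1e-6
```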
Then we have \[\Phi_{\beta(c)}(x)=\frac{1}{2}r_A(x)^2+c-\mu_{r_A(x)}(\beta(c)) =\frac{1}{2}x^TAx+c+O\left(|x|^{-m+2}\right)~(|x|\rightarrow+\infty),\] and \[\beta(c)=\mu^{-1}(c)\geq\mu^{-1}(\tilde{c})\geq\hat{\beta}.\] Invoking the monotonicity of $\Phi_\beta$ with respect to $\beta$ and \eref{eqn.PhiQ}, we obtain \begin{equation}\label{eqn.PhigQ} \Phi_{\beta(c)}\geq\Phi_{\hat{\beta}}>Q\quad\mbox{on} ~\partial E_{\hat{r}}. \end{equation} Note that we already know \[\Phi_{\beta(c)}\leq Q\quad\mbox{on} ~\overline{E_{\bar{r}}}\setminus D,\] from (4) of \emph{Step 1}. Let \begin{equation*} \underline{u}(x):= \begin{cases} \max\left\{\Phi_{\beta(c)}(x),Q(x)\right\},& x\in E_{\hat{r}}\setminus D,\\ \Phi_{\beta(c)}(x),& x\in \mathds R^n\setminus E_{\hat{r}}. \end{cases} \end{equation*} Then we have \begin{enumerate}[\quad(1)] \item $\underline{u}$ is continuous and satisfies \[\frac{\sigma_k(\lambda(D^2{\underline{u}}))} {\sigma_l(\lambda(D^2{\underline{u}}))}\geq1\] in $\mathds R^n\setminus\overline{D}$ in the viscosity sense, by (1) and (3) of \emph{Step 1}. \item $\underline{u}=Q=\varphi$ on $\partial D$, by (2) of \emph{Step 1}. 
\item If $r_A(x)$ is large enough, then \[\underline{u}(x)=\Phi_{\beta(c)}(x) =\frac{1}{2}x^TAx+c+O\left(|x|^{-m+2}\right)~(|x|\rightarrow+\infty).\] \end{enumerate} \emph{Step 3.}\quad Let \[\overline{u}(x):=\frac{1}{2}x^TAx+c, ~\forall x\in\mathds R^n.\] Then $\overline{u}$ is obviously a supersolution and \[\lim_{|x|\rightarrow+\infty}\left(\underline{u}-\overline{u}\right)(x)=0.\] To use Perron's method to establish \lref{lem.hqe}, we now only need to prove that \[\underline{u}\leq \overline{u}\quad\mbox{in}~\mathds R^n\setminus D.\] In fact, since \[\mu_{r_A(x)}(\beta)\geq 0, \quad\forall x\in\mathds R^n\setminus E_1, ~\forall\beta\geq 1,\] according to \cref{cor.mub}, we have \begin{equation}\label{eqn.Phiolu} \Phi_{\beta(c)}(x)=\frac{1}{2}x^TAx+c-\mu_{r_A(x)}(\beta(c)) \leq\frac{1}{2}x^TAx+c=\overline{u}(x),~\forall x\in\mathds R^n\setminus D. \end{equation} We remark that \eref{eqn.Phiolu} can also be proved by using the comparison principle, in view of \[\Phi_{\beta(c)}\leq\eta\leq\tilde{c}\leq c\leq\overline{u} \quad\mbox{on}~\partial D,\] and \[\lim_{|x|\rightarrow+\infty}\left(\Phi_{\beta(c)}-\overline{u}\right)(x)=0.\] On the other hand, for every $\xi\in\partial D$, since \[Q_\xi(x)\leq\frac{1}{2}x^TAx+\bar{c} \leq\frac{1}{2}x^TAx+\tilde{c} \leq\frac{1}{2}x^TAx+c=\overline{u}(x), ~\forall x\in\partial D,\] and \[Q_\xi\leq Q<\Phi_{\beta(c)}\leq\overline{u} \quad\mbox{on}~\partial E_{\hat{r}}\] follows from \eref{eqn.PhigQ} and \eref{eqn.Phiolu}, we obtain \[Q_\xi\leq\overline{u}\quad\mbox{on} ~\partial\left(E_{\hat{r}}\setminus D\right).\] In view of \[\frac{\sigma_k(\lambda(D^2{Q_\xi}))}{\sigma_l(\lambda(D^2{Q_\xi}))} =1=\frac{\sigma_k(\lambda(D^2{\overline{u}}))} {\sigma_l(\lambda(D^2{\overline{u}}))} \quad\mbox{in}~E_{\hat{r}}\setminus D,\] we deduce from the comparison principle that \[Q_\xi\leq\overline{u}\quad\mbox{in}~E_{\hat{r}}\setminus D.\] Hence \begin{equation}\label{eqn.Qu}
Q\leq\overline{u}\quad\mbox{in}~E_{\hat{r}}\setminus D. \end{equation} Combining \eref{eqn.Phiolu} and \eref{eqn.Qu}, by the definition of $\underline{u}$, we get \[\underline{u}\leq \overline{u}\quad\mbox{in}~\mathds R^n\setminus D.\] This finishes the proof of \lref{lem.hqe}. \end{proof} \begin{remark} In the proof of \lref{lem.hqe} we used \lref{lem.cp} and \lref{lem.pm}, presented in \ssref{subsec.prelem}. In fact, one can follow the techniques in \cite{CL03} (see also \cite{DB11}, \cite{Dai11} and \cite{LD12}) instead of \lref{lem.pm} to rewrite the whole proof. The two presentations look slightly different but are essentially the same. \end{remark} \end{document}
\begin{document} \title{{\bf\Large On the K\"ahler-Ricci flow with small initial $E_1$ energy (I)}} \maketitle \tableofcontents \section{Introduction and main results } \subsection{The motivation} In \cite{[chen-tian2]} \cite{[chen-tian1]}, a family of functionals $E_k$ $(k = 1,2,\cdots, n)$ was introduced by the first named author and G. Tian to prove the convergence of the K\"ahler-Ricci flow under appropriate curvature assumptions. The aim of this program (cf. \cite{[chen1]}) is to study how the lower bound of $E_1$ can be used to derive the convergence of the K\"ahler-Ricci flow, i.e., the existence of K\"ahler-Einstein metrics. We will address this question in Subsection 1.2. The corresponding problem of the relation between the lower bound of $E_0$, which is the $K$-energy introduced by T. Mabuchi, and the existence of K\"ahler-Einstein metrics has been extensively studied (cf. \cite{[BaMa]}, \cite{[chen-tian2]}, \cite{[chen-tian1]}, \cite{[Do]}). One interesting question in this program is how the lower bound of $E_1$ compares to the lower bound of $E_0$. We will give a satisfactory answer to this question in Subsection 1.3. \\ \subsection{The lower bound of $E_1$ and K\"ahler-Einstein metrics} Let $(M, [\omega])$ be a polarized compact K\"ahler manifold with $[\omega]=2\pi c_1(M)>0 $ (the first Chern class) throughout this paper. In \cite{[chen1]}, the first named author proved a stability theorem of the K\"ahler-Ricci flow near the infimum of $E_1$ under the assumption that the initial metric has $Ric>-1$ and $|Rm|$ bounded. Unfortunately, this stability theorem needs a topological assumption \begin{equation}(-1)^n([c_1(M)]^{[2]}[\omega]^{[n-2]}-\frac {2(n+1)}{n}[c_2(M)][\omega]^{[n-2]})\geq 0.\label{c1}\end{equation} The only known compact manifold which satisfies this condition is ${\mathbb C} P^n$, which restricts potential applications of this result. The main purpose of this paper is to remove this assumption.
\begin{theo}\label{main2} Suppose that $M$ is pre-stable, and $E_1$ is bounded from below in $ [\omega]$. For any $\delta, \Lambda>0,$ there exists a small positive constant $\epsilon(\delta, \Lambda)>0$ such that for any metric $\omega_0$ in the subspace ${\mathcal A}(\delta, \Lambda, \omega, \epsilon)$ of K\"ahler metrics $$\{\omega_{\phi}=\omega+\sqrt{-1}\partial\bar\partial \phi\;|\; Ric(\omega_{\phi})>-1+\delta, \; |Rm|(\omega_{\phi})\leq \Lambda,\; E_1(\omega_{\phi})\leq \inf E_1+\epsilon \},$$where $E_1(\omega')=E_{1, \omega}(\omega')$, the K\"ahler-Ricci flow will deform it exponentially fast to a K\"ahler-Einstein metric in the limit. \end{theo} \begin{rem} The condition that $M$ is pre-stable (cf. Definition \ref{prestable}) roughly means that the complex structure doesn't jump in the limit (cf. \cite{[chen1]}, \cite{[PhSt]}). In G. Tian's definition of $K$-stability, this condition appears to be one of three necessary conditions for a complex structure to be $K$-stable (cf. \cite{[Do]}, \cite{[Tian2]}). \end{rem} \begin{rem}This gives a sufficient condition for the existence of K\"ahler-Einstein metrics. More interestingly, by a theorem of G. Tian \cite{[Tian2]}, this also gives a sufficient condition for an algebraic manifold to be weakly K-stable. One tempting question is: does this condition imply weak K-stability directly? \end{rem} \begin{rem} If we call the result in \cite{[chen1]} a ``pre-baby" step in this ambitious program, then Theorems 1.1 and 1.5 should be viewed as a ``baby step" in this program. We wish to remove the assumption on the bound of the bisectional curvature. More importantly (cf. Theorem 1.8 below), we wish to replace the condition on the Ricci curvature in both Theorems 1.1 and 1.5 by a condition on the scalar curvature. Then our theorem really becomes a ``small energy" lemma.
\end{rem} If we remove the condition of ``pre-stable", then \begin{theo}\label{main} Suppose that $(M, [\omega])$ has no nonzero holomorphic vector fields and $E_1$ is bounded from below in $[\omega].$ For any $\delta, B, \Lambda>0,$ there exists a small positive constant $\epsilon(\delta, B, \Lambda, \omega)>0$ such that for any metric $\omega_0$ in the subspace ${\mathcal A}(\delta, B, \Lambda, \epsilon)$ of K\"ahler metrics $$\{\omega_{\phi}=\omega+\sqrt{-1}\partial\bar\partial \phi\;|\; Ric(\omega_{\phi})>-1+\delta,\; |\phi| \leq B, \;|Rm|(\omega_{\phi})\leq \Lambda, \; E_1(\omega_{\phi})\leq \inf E_1+\epsilon \}$$ the K\"ahler-Ricci flow will deform it exponentially fast to a K\"ahler-Einstein metric in the limit. \end{theo} \begin{rem} In light of Theorem \ref{main4} below, we can replace the condition on $E_1$ by a corresponding condition on $E_0.\;$ \end{rem} \subsection{The relations between energy functionals $E_k$ } Song-Weinkove \cite{[SoWe]} recently proved that $E_k$ have a lower bound on the space of K\"ahler metrics with nonnegative Ricci curvature for K\"ahler-Einstein manifolds. Moreover, they also showed that modulo holomorphic vector fields, $E_1$ is proper if and only if there exists a K\"ahler-Einstein metric. Shortly afterwards, N. Pali \cite{[Pali]} gave a formula between $E_1$ and the $K$-energy $E_0$, which says that the $E_1$ energy is always bigger than the $K$-energy. Tosatti \cite{[Tosatti]} proved that under some curvature assumptions, the critical point of $E_k$ is a K\"ahler-Einstein metric. Pali's theorem means that $E_1$ has a lower bound if the $K$-energy has a lower bound. A natural question is if the converse holds. To our own surprise, we proved the following result. 
\begin{theo}\label{main4}$E_1$ is bounded from below if and only if the $K$-energy is bounded from below in the class $[\omega].$ Moreover, we have\footnote{For simplicity of notation, we will often drop the subscript $\phi$ and write $|\nabla f|^2$ for $|\nabla f|_{\phi}^2 $. But in an integral, $|\nabla f|^2$ is with respect to the metric of the volume form. } $$\inf_{\omega'\in [\omega]} E_{1, \omega}(\omega')=2\inf_{\omega'\in [\omega]} E_{0, \omega}(\omega')-\frac 1{nV}\int_M\;|\nabla h_{\omega}|^2 \omega^n,$$ where $h_{\omega}$ is the Ricci potential function with respect to $\omega$. \end{theo} A crucial observation leading to this theorem is the following. \begin{theo} Along the K\"ahler-Ricci flow, $E_1$ will decrease after finite time. \label{main3} \end{theo} Theorems \ref{main4} and \ref{main3} of course raise more questions than they answer. For instance, is the properness of $E_k$ equivalent to the properness of $E_l$ for $k\neq l$? More subtly, are the corresponding notions of semi-stability ultimately equivalent to each other? Is there a preferred functional in this family, or a better linear combination of these $E_k$ functionals? The first named author genuinely believes that this observation opens the door to more interesting questions. \\ Another interesting question is the relation of $E_k$ with various notions of stability defined by algebraic conditions. Theorems 1.1 and 1.5 suggest an indirect link between these functionals $E_k$ and stability. According to A. Futaki \cite{[futaki04]}, these functionals may directly link to the asymptotic Chow semi-stability (note that the right hand side of (1.2) in \cite{[futaki04]} is precisely equal to $\frac{d E_k}{dt}$ if one takes $p=k+1$ and $\phi = c_1^k$, cf. Theorem 2.4 below). It is highly interesting to explore further in this direction.
\subsection{Main ideas of proofs of Theorem 1.1 and 1.5} In \cite{[chen1]}, a topological condition is used to control the $L^2$ norm of the bisectional curvature once the Ricci curvature is controlled. Using the parabolic Moser iteration arguments, this gives a uniform bound on the full bisectional curvature. In the present paper, we need to find a new way to control the full bisectional curvature under the flow. The whole scheme of obtaining this uniform estimate on curvatures depends on two crucial steps and their dynamical interplay.\\ {\bf STEP 1:} The first step is to follow the approach of the celebrated work of Yau on the Calabi conjecture (cf. {\cite{[Cao]}}, \cite{[Yau]}). The key point here is to control the $C^0$ norm of the evolving K\"ahler potential $\phi(t)$, in particular, the growth of $ u = {{\partial\phi}\over {\partial t}}$ along the K\"ahler-Ricci flow. Note that $u$ satisfies \[ { {\partial u}\over {\partial t}} = \triangle_{\phi} u + u. \] Therefore, the crucial step is to control the first eigenvalue of the Laplacian operator (assuming the traceless Ricci form is controlled via an iteration process which we will describe as {\bf STEP} 2 below). For our purpose, we need to show that the first eigenvalue of the evolving Laplacian operator is bigger than $1 + \gamma$ for some fixed $\gamma > 0.\;$ Such a problem already appeared in \cite{[chen0]} and \cite{[chen-tian2]} since the first eigenvalue of the Laplacian operator of K\"ahler-Einstein metrics is exactly $1.\;$ If $Aut_r(M, J)\neq 0,\;$ the uniqueness of K\"ahler-Einstein metrics implies that the dimension of the first eigenspace is fixed; while the vanishing of the Futaki invariant implies that $u(t)$ is essentially perpendicular to the first eigenspace of the evolving metrics. These are two crucial ingredients which allow one to squeeze out a small gap $\gamma$ on the first eigenvalue estimate. 
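The role of the eigenvalue gap can be seen on a linear toy model (illustrative only, not the paper's setting): for $u_t=\triangle u+u$, each eigenmode evolves like $e^{(1-\lambda_j)t}$, so a first eigenvalue $\lambda_1\geq 1+\gamma$ forces $\|u(t)\|\leq e^{-\gamma t}\|u(0)\|$. The sketch below uses the discrete Dirichlet Laplacian on $(0,\pi/2)$, whose first eigenvalue is close to $4$:

```python
import numpy as np

# Toy model for the eigenvalue-gap mechanism: solve u_t = Lap(u) + u on
# (0, L) with Dirichlet data, L = pi/2, so the first eigenvalue of -Lap
# is close to (pi/L)^2 = 4 > 1, and every mode of
#   u(t) = V exp((1 - Lambda) t) V^T u(0)
# decays at least like e^{-gamma t} with gamma = lambda_1 - 1 ~ 3.
L, N = np.pi / 2, 400
h = L / (N + 1)
lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / h ** 2      # Dirichlet Laplacian
evals, V = np.linalg.eigh(-lap)                     # evals[0] ~ 4

rng = np.random.default_rng(0)
u0 = rng.standard_normal(N)
t = 1.0
u_t = V @ (np.exp((1.0 - evals) * t) * (V.T @ u0))

gamma = evals[0] - 1.0                              # eigenvalue gap
assert evals[0] > 1.0
assert np.linalg.norm(u_t) <= np.exp(-gamma * t) * np.linalg.norm(u0) + 1e-12
```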
Following the approach in \cite {[chen0]} and \cite{[chen-tian2]}, we can show that $u$ decays exponentially. This in turn implies the $C^0$ bound on the evolving K\"ahler potential. Consequently, this leads to control of all derivatives of the evolving potential, in particular, the bisectional curvature. In summary, as long as the first eigenvalue is controlled, one controls the full bisectional curvature of the evolving K\"ahler metrics. For Theorem 1.5, a crucial technical step is to use an estimate obtained in \cite{[chenhe]} on the Ricci curvature tensor.\\ {\bf STEP 2:} Here we follow the Moser iteration techniques which appeared in \cite{[chen1]}. Assuming that the full bisectional curvature is bounded by some large but fixed number, the norm of the traceless bisectional curvature and the traceless Ricci tensor both satisfy the following inequality: \[ {{\partial u}\over {\partial t}} \leq \triangle_{\phi} u + |Rm| u \leq \triangle_{\phi} u + C\cdot u. \] If the curvature of the evolving metric is controlled in $L^p(p> n)$, then the smallness of the energy $E_1$ allows one to control the norm of the traceless Ricci tensor of the evolving K\"ahler metrics (cf. formula (\ref{2})). According to Theorems \ref{lem2.8} and \ref{theo4.18}, this will in turn give an improved estimate on the first eigenvalue in a slightly longer period, but perhaps without full uniform control of the bisectional curvature in the ``extra" time. However, this gives uniform control on the K\"ahler potential which in turn controls the bisectional curvature in the extended ``extra" time. We use the Moser iteration again to obtain sharper control on the norm of the traceless bisectional curvature.\\ Hence the combination of the parabolic Moser iteration with Yau's estimate yields the desired global estimate. In comparison, the iteration process in \cite{[chen1]} is instructive and more direct.
The first named author believes that this approach is perhaps more important than the mild results we obtained there. \subsection{The organization} This paper is roughly organized as follows: In Section 2, we review some basic facts in K\"ahler geometry and necessary information on the K\"ahler-Ricci flow. We also include some basic facts on the energy functionals $E_k$. In Section 3, we prove Theorems \ref{main4} and \ref{main3}. In Section 4, we prove several technical theorems on the K\"ahler-Ricci flow. The key results are the estimates of the first eigenvalue of the evolving Laplacian, which are proved in Section 4.3. In Sections 5 and 6, we prove Theorems \ref{main2} and \ref{main}.\\ \noindent {\bf Acknowledgements}: Part of this work was done while the second named author was visiting the University of Wisconsin-Madison, and he would like to thank the university for its hospitality. The second named author would also like to thank Professors W. Y. Ding and X. H. Zhu for their help and encouragement. The first named author would like to thank Professor P. Li for his interest and encouragement in this project. The authors would like to thank the referees for numerous suggestions which helped to improve the presentation. \vskip 1cm \section{Setup and known results} \subsection{Setup of notations} Let $M$ be an $n$-dimensional compact K\"ahler manifold. A K\"ahler metric can be given by its K\"ahler form $\omega$ on $M$. In local coordinates $z_1, \cdots, z_n $, this $\omega$ is of the form \[ \omega = \sqrt{-1} \displaystyle \sum_{i,j=1}^n\;g_{i \overline{j}} d\,z^i\wedge d\,z^{\overline{j}} > 0, \] where $\{g_{i\overline {j}}\}$ is a positive definite Hermitian matrix function. The K\"ahler condition requires that $\omega$ is a closed positive (1,1)-form.
In other words, the following holds \[ {{\partial g_{i \overline{k}}} \over {\partial z^{j}}} = {{\partial g_{j \overline{k}}} \over {\partial z^{i}}}\qquad {\rm and}\qquad {{\partial g_{k \overline{i}}} \over {\partial z^{\overline{j}}}} = {{\partial g_{k \overline{j}}} \over {\partial z^{\overline{i}}}}\qquad\forall\;i,j,k=1,2,\cdots, n. \] The K\"ahler metric corresponding to $\omega$ is given by \[ \sqrt{-1} \;\displaystyle \sum_1^n \; {g}_{\alpha \overline{\beta}} \; d\,z^{\alpha}\;\otimes d\, z^{ \overline{\beta}}. \] For simplicity, in the following, we will often denote by $\omega$ the corresponding K\"ahler metric. The K\"ahler class of $\omega$ is its cohomology class $[\omega]$ in $H^2(M,{\mathbb R}).\;$ By the Hodge theorem, any other K\"ahler metric in the same K\"ahler class is of the form \[ \omega_{\phi} = \omega + \sqrt{-1} \displaystyle \sum_{i,j=1}^n\; {{\partial^2 \phi}\over {\partial z^i \partial z^{\overline{j}}}} \;dz^i\wedge dz^{\bar j} > 0 \] for some real valued function $\phi$ on $M.\;$ The functional space in which we are interested (often referred to as the space of K\"ahler potentials) is \[ {\cal P}(M,\omega) = \{ \phi\in C^{\infty}(M, {\mathbb R}) \;\mid\; \omega_{\phi} = \omega + \sqrt{-1} {\partial} \overline{\partial} \phi > 0\;\;{\rm on}\; M\}. \] Given a K\"ahler metric $\omega$, its volume form is \[ \omega^n = n!\;\left(\sqrt{-1} \right)^n \det\left(g_{i \overline{j}}\right) d\,z^1 \wedge d\,z^{\overline{1}}\wedge \cdots \wedge d\,z^n \wedge d \,z^{\overline{n}}. \] Its Christoffel symbols are given by \[ \Gamma^k_{i\,j} = \displaystyle \sum_{l=1}^n\;g^{k\overline{l}} {{\partial g_{i \overline{l}}} \over {\partial z^{j}}} ~~~{\rm and}~~~ \Gamma^{\overline{k}}_{\overline{i} \,\overline{j}} = \displaystyle \sum_{l=1}^n\;g^{\overline{k}l} {{\partial g_{l \overline{i}}} \over {\partial z^{\overline{j}}}}, \qquad\forall\;i,j,k=1,2,\cdots n. 
\] The curvature tensor is \[ R_{i \overline{j} k \overline{l}} = - {{\partial^2 g_{i \overline {j}}} \over {\partial z^{k} \partial z^{\overline{l}}}} + \displaystyle \sum_ {p,q=1}^n g^{p\overline{q}} {{\partial g_{i \overline{q}}} \over {\partial z^{k}}} {{\partial g_{p \overline{j}}} \over {\partial z^{\overline{l}}}}, \qquad\forall\;i,j,k,l=1,2,\cdots n. \] We say that $\omega$ is of nonnegative bisectional curvature if \[ R_{i \overline{j} k \overline{l}} v^i v^{\overline{j}} w^k w^ {\overline{l}}\geq 0 \] for all non-zero vectors $v$ and $w$ in the holomorphic tangent bundle of $M$. The bisectional curvature and the curvature tensor can be mutually determined. The Ricci curvature of $\omega$ is locally given by \[ R_{i \overline{j}} = - {{\partial}^2 \log \det (g_{k \overline{l}}) \over {\partial z_i \partial \bar z_j }} . \] So its Ricci curvature form is \[ {\rm Ric}({\omega}) = \sqrt{-1} \displaystyle \sum_{i,j=1}^n \;R_{i \overline{j}}\; d\,z^i\wedge d\,z^{\overline{j}} = -\sqrt{-1} \partial \overline {\partial} \log \;\det (g_{k \overline{l}}). \] It is a real, closed (1,1)-form. Recall that $[\omega]$ is called a canonical K\"ahler class if this Ricci form is cohomologous to $\lambda \,\omega$ for some constant $\lambda$\,. In our setting, we require $\lambda = 1.\;$ \subsection{The K\"ahler-Ricci flow} Let us assume that the first Chern class $c_1(M)$ is positive. Choose an initial K\"ahler metric $\omega$ in $2\pi c_1(M).\;$ The normalized K\"ahler-Ricci flow (cf. \cite{[Ha82]}) on a K\"ahler manifold $M$ is of the form \begin{equation} {{\partial g_{i \overline{j}}} \over {\partial t }} = g_{i \overline{j}} - R_{i \overline{j}}, \qquad\forall\; i,\; j= 1,2,\cdots ,n. 
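The local formula for the Ricci curvature can be illustrated by a quick numerical check in complex dimension one (a toy sketch, not part of the paper's argument): for the Fubini--Study metric $g_{1\bar 1}=(1+|z|^2)^{-2}$ one has $R_{1\bar 1}=-\partial_z\partial_{\bar z}\log g_{1\bar 1}=2g_{1\bar 1}$, i.e., an Einstein metric, which a finite-difference evaluation of $\partial_z\partial_{\bar z}=\frac14(\partial_{xx}+\partial_{yy})$ confirms:

```python
import math

# Toy check in complex dimension one: for the Fubini-Study metric
# g = (1 + |z|^2)^(-2), the formula R = -d_z d_zbar log g gives
# Ric = 2 g.  We evaluate d_z d_zbar = (1/4)(d_xx + d_yy) by a
# 5-point central-difference stencil in real coordinates z = x + i y.
def log_g(x, y):
    return -2.0 * math.log(1.0 + x * x + y * y)

def ricci_fd(x, y, h=1e-3):
    lap = (log_g(x + h, y) + log_g(x - h, y) + log_g(x, y + h)
           + log_g(x, y - h) - 4.0 * log_g(x, y)) / h ** 2
    return -0.25 * lap          # R = -(1/4) * Laplacian of log g

x, y = 0.3, 0.2
g = (1.0 + x * x + y * y) ** (-2)
assert abs(ricci_fd(x, y) - 2.0 * g) < 1e-4
```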
\label{eq:kahlerricciflow} \end{equation} It follows that on the level of K\"ahler potentials, the Ricci flow becomes \begin{equation} {{\partial \phi} \over {\partial t }} = \log {{\omega_{\phi}}^n \over {\omega}^n } + \phi - h_{\omega} , \label{eq:flowpotential} \end{equation} where $h_{\omega}$ is defined by \[ {\rm Ric}({\omega})- \omega = \sqrt{-1} \partial \overline{\partial} h_{\omega}, \; {\rm and}\;\displaystyle \int_M\; (e^{h_{\omega}} - 1) {\omega}^n = 0. \] Then the evolution equation for the bisectional curvature is \begin{eqnarray}{{\partial }\over {\partial t}} R_{i \overline{j} k \overline{l}} & = & \bigtriangleup R_{i \overline{j} k \overline{l}} + R_{i \overline{j} p \overline{q}} R_{q \overline{p} k \overline{l}} - R_{i \overline{p} k \overline{q}} R_{p \overline{j} q \overline{l}} + R_{i \overline{l} p \overline{q}} R_{q \overline{p} k \overline{j}} + R_{i \overline{j} k \overline{l}} \nonumber\\ & & \;\;\; -{1\over 2} \left( R_{i \overline{p}}R_{p \overline{j} k \overline{l}} + R_{p \overline{j}}R_{i \overline{p} k \overline{l}} + R_{k \overline{p}}R_{i \overline{j} p \overline{l}} + R_{p \overline{l}}R_{i \overline{j} k \overline{p}} \right). \label{eq:evolutio of curvature1} \end{eqnarray} Here $\Delta$ is the Laplacian of the metric $g(t).$ The evolution equations for the Ricci curvature and the scalar curvature are \begin{eqnarray} {{\partial R_{i \bar j}}\over {\partial t}} & = & \triangle R_{i\bar j} + R_{i\bar j p \bar q} R_{q \bar p} -R_{i\bar p} R_{p \bar j},\label {eq:evolutio of curvature2}\\ {{\partial R}\over {\partial t}} & = & \triangle R + R_{i\bar j} R_{j\bar i}- R.
\label{eq:evolutio of curvature3} \end{eqnarray} By direct computation, using evolving frames, we can obtain the following evolution equation for the bisectional curvature: \begin{equation} \pd {R_{i\bar jk\bar l}}{t} =\Delta R_{i\bar jk\bar l}- R_{i\bar jk\bar l}+R_{i \bar j m\bar n}R_{n\bar m k\bar l}-R_{i\bar m k\bar n}R_{m\bar j n\bar l}+R_{i\bar l m\bar n}R_{n\bar m k\bar j}. \label{eq:evolution of curvature4} \end{equation} As usual, the flow equation (\ref{eq:kahlerricciflow}) or (\ref{eq:flowpotential}) is referred to as the K\"ahler-Ricci flow on $M$. It was proved by Cao \cite{[Cao]}, who followed Yau's celebrated work \cite{[Yau]}, that the K\"ahler-Ricci flow exists globally for any smooth initial K\"ahler metric. It was proved by S. Bando \cite{[Bando]} in dimension 3 and N. Mok \cite{[Mok]} in all dimensions that the positivity of the bisectional curvature is preserved under the flow. In \cite{[chen-tian2]} and \cite{[chen-tian1]}, the first named author and G. Tian proved that on a K\"ahler-Einstein manifold, the K\"ahler-Ricci flow initiated from a metric with positive bisectional curvature converges to a K\"ahler-Einstein metric with constant bisectional curvature. In unpublished work on the K\"ahler-Ricci flow, G. Perelman proved, along with other results, that the scalar curvature is always uniformly bounded. \subsection{Energy functionals $E_k$} In \cite{[chen-tian2]}, a family of energy functionals $E_k$ $(k=0, 1, 2,\cdots, n)$ was introduced, and these functionals played an important role there. First, we recall the definitions of these functionals.
\begin{defi} For any $k=0, 1, \cdots, n,$ we define a functional $E_k^0$ on ${\mathcal P}(M, \omega)$ by $$E_{k, \omega}^0(\phi)=\frac 1V\int_M\; \Big(\log \frac {\omega^n_{\phi}} {\omega^n}-h_{\omega}\Big)\Big( \sum_{i=0}^k\; Ric(\omega_{\phi})^i\wedge \omega^{k-i}\Big)\wedge \omega_{\phi}^{n-k}+\frac 1V\int_M\; h_{\omega}\Big( \sum_{i=0}^k\; Ric(\omega)^i\wedge \omega^{k-i}\Big)\wedge \omega^{n-k}.$$ \end{defi} \begin{defi}For any $k=0, 1, \cdots, n$, we define $J_{k, \omega}$ as follows $$J_{k, \omega}(\phi)=-\frac {n-k}{V}\int_0^1\;\int_M\;\pd {\phi(t)}{t}(\omega_{\phi(t)} ^{k+1}-\omega^{k+1})\wedge \omega_{\phi(t)}^{n-k-1} \wedge dt,$$ where $\phi(t)(t\in [0, 1])$ is a path from $0$ to $\phi$ in ${\mathcal P}(M, \omega)$. \end{defi} \begin{defi}For any $k=0, 1, \cdots, n,$ the functional $E_{k, \omega}$ is defined as follows $$E_{k, \omega}(\phi)=E_{k, \omega}^0(\phi)-J_{k, \omega}(\phi).$$ For simplicity, we will often drop the subscript $\omega$. \end{defi} By direct computation, we have \begin{theo} For any $k=0, 1, 2,\cdots, n,$ we have \begin{eqnarray*} \frac {dE_k}{dt}=&&\frac {k+1}{V}\int_M\;\Delta_{\phi}\dot\phi Ric(\omega_{\phi})^k\wedge\omega_{\phi}^{n-k}\\&&-\frac {n-k}{V}\int_M\;\dot\phi (Ric(\omega_{\phi})^{k+1}-\omega_{\phi}^{k+1})\wedge \omega_{\phi}^{n-k-1}. \end{eqnarray*} Here $\phi(t)$ is any path in ${\mathcal P}(M, \omega)$. \end{theo} \begin{rem}Note that $$ \frac {dE_0}{dt}=-\frac nV\int_M\; \dot\phi (Ric(\omega_{\phi})-\omega_{\phi})\wedge \omega_{\phi}^{n-1}.$$ Thus, $E_0$ is the well-known $K$-energy. \end{rem} \begin{theo} Along the K\"ahler-Ricci flow where $Ric(\omega_{\phi})>-\omega_{\phi}$ is preserved, we have $$\frac {dE_k}{dt}\leq-\frac {k+1}{V}\int_M\;(R(\omega_{\phi})-r) Ric(\omega_{\phi})^k\wedge\omega_{\phi}^{n-k}.$$ When $k=0, 1,$ we have \begin{eqnarray*} \frac {dE_0}{dt}&=&-\frac 1V\int_M\;|\nabla \dot\phi|^2\omega_{\phi}^n \leq 0,\\ \frac {dE_1}{dt}&\leq&-\frac 2V\int_M\;(R(\omega_{\phi})-r)^2\omega_{\phi}^n\leq 0. 
\end{eqnarray*} \end{theo} Recently, Pali in \cite{[Pali]} found the following formula, which will be used in this paper. \begin{theo}\label{pali}For any $\phi\in {\mathcal P}(M, \omega)$, we have $$E_{1, \omega}(\phi)=2E_{0, \omega}(\phi)+\frac 1{nV}\int_M\; |\nabla u|^2 \omega_{\phi}^n -\frac 1{nV}\int_M\; |\nabla h_{\omega}|^2 \omega^n,$$ where $$u=\log\frac {\omega^n_{\phi}}{\omega^n}+\phi-h_{\omega}.$$ \end{theo} \begin{rem}This formula directly implies that if $E_0$ is bounded from below, then $E_1$ is bounded from below on ${\mathcal P}(M, \omega)$. \end{rem} \begin{rem}In a forthcoming paper \cite{[chenli]}, the second named author will generalize Theorem \ref{pali} to all the functionals $E_k$ $(k\geq 1)$, and discuss some interesting relations between the $E_k.$ \end{rem} \section{Energy functionals $E_0$ and $E_1$} In this section, we want to prove Theorems \ref{main4} and \ref{main3}. \subsection{Energy functional $E_1$ along the K\"ahler-Ricci flow} The following lemma is well known in the literature (cf. \cite{[chen2]}). \begin{lem}\label{sec2}The minimum of the scalar curvature along the K\"ahler-Ricci flow, if negative, will increase to zero exponentially. \end{lem} \begin{proof} Let $\mu(t)=-\min_M R(x, 0)e^{-t};$ then \begin{eqnarray*}\pd {}t(R+\mu(t))&=&\Delta (R+\mu(t))+|Ric|^2-(R+\mu(t))\\ &\geq &\Delta (R+\mu(t))-(R+\mu(t)). \end{eqnarray*} Since $R(x, 0)+\mu(0)\geq 0$, by the maximum principle we have $$R(x, t)\geq -\mu(t)=\min_M R(x, 0)e^{-t}.$$ \end{proof} By the above lemma, the following theorem is an easy corollary of Pali's formula. \begin{theo}\label{thm3.2}Along the K\"ahler-Ricci flow, $E_1$ will decrease after a finite time.
In particular, if the initial scalar curvature $R(0)>-n+1$, then there is a small constant $\delta>0$ depending only on $n$ and $\min_{x\in M} R(0)$ such that for all time $t>0$, we have \begin{equation} \frac {d}{dt}E_1\leq -\frac {\delta}V \int_M\;|\nabla \dot \phi|^2 \omega_{\phi}^n\leq 0.\label{aaa}\end{equation} \end{theo} \begin{proof} Along the K\"ahler-Ricci flow, the evolution equation for $|\nabla \dot \phi|^2$ is $$\pd {}t|\nabla \dot \phi|^2=\Delta_{\phi} |\nabla \dot \phi|^2-|\nabla \nabla \dot \phi|^2-|\nabla \bar \nabla\dot \phi|^2+|\nabla \dot \phi|^2.$$ By Theorem \ref{pali}, we have \begin{eqnarray*}\frac {d}{dt}E_1&=&-\frac 2V\int_M\;|\nabla \dot \phi|^2\omega_{\phi} ^n +\frac 1{nV}\frac {d}{dt}\int_M\; |\nabla \dot \phi|^2\omega_{\phi}^n\\ &= &-\frac 2V\int_M\;|\nabla \dot \phi|^2\omega_{\phi}^n +\frac 1{nV}\int_M\;(-|\nabla \nabla \dot \phi|^2-|\nabla \bar \nabla\dot \phi|^2+|\nabla \dot \phi|^2+|\nabla \dot \phi|^2(n-R))\omega_{\phi}^n. \end{eqnarray*} If the scalar curvature at the initial time satisfies $R(x, 0)\geq -n+1+n\delta$ ($n\geq 2$) for some small $\delta>0$, then by Lemma \ref{sec2}, for all time $t>0$ we have $R(x, t)\geq -n+1+n\delta.$ Then we have \begin{eqnarray}\frac {d}{dt}E_1 &\leq&-\frac 2V\int_M\;|\nabla \dot \phi|^2\omega_{\phi}^n +\frac 1{nV}\int_M\;(-|\nabla \nabla \dot \phi|^2- |\nabla \bar \nabla\dot \phi|^2+|\nabla \dot \phi|^2+|\nabla \dot \phi|^2(2n-1-n \delta))\omega_{\phi}^n\nonumber\\ &\leq&-\frac {\delta}V \int_M\;|\nabla \dot \phi|^2\omega_{\phi}^n. \label{aeq1}\end{eqnarray} Otherwise, by Lemma \ref{sec2}, after the finite time $T_0=\log\frac {-\min_M R(x, 0)}{n-1-n\delta}$ we still have $R(x, t)>-n+1+n\delta$ for small $\delta >0.$ Thus, the inequality (\ref{aeq1}) holds. If $n=1$, by direct calculation we have $$\frac {dE_1}{dt}=-\frac 2V\int_M\; (R(\omega_{\phi})-1)^2\omega_{\phi}=-\frac 2V\int_M\; (\Delta_{\phi}\dot\phi)^2\omega_{\phi}.$$ If the initial scalar curvature $R(0)>0$, by R.
Hamilton's results in \cite{[Ha88]}, the scalar curvature has a uniform positive lower bound. Thus, $R(t)\geq c>0$ for some constant $c>0$ and all $t>0.$ Therefore, by the proof of Lemma \ref{lem2.13} in Section \ref{section4.4.1}, the first eigenvalue of $\Delta_{\phi}$ satisfies $\lambda_1(t)\geq c.$ Then $$\frac {dE_1}{dt}=-\frac 2V\int_M\; (\Delta_{\phi}\dot\phi)^2\omega_{\phi}\leq -\frac {2c}V\int_M\; |\nabla\dot\phi|^2\omega_{\phi}.$$ The theorem is proved. \end{proof} \subsection{On the lower bound of $E_0$ and $E_1$} In this section, we will prove Theorem \ref{main4}. Recall the generalized energies: \begin{eqnarray*} I_{\omega}(\phi)&=&\frac 1V\sum_{i=0}^{n-1}\int_M\; \sqrt{-1} \partial \phi\wedge\bar \partial \phi\wedge \omega^i\wedge\omega_{\phi}^{n-1-i},\\ J_{\omega}(\phi)&=&\frac 1V\sum_{i=0}^{n-1}\frac {i+1}{n+1}\int_M\; \sqrt{-1}\partial \phi\wedge\bar \partial \phi\wedge \omega^i\wedge\omega_{\phi}^{n-1-i}. \end{eqnarray*} By direct calculation, we can prove $$0\leq I-J\leq I\leq (n+1)(I-J)$$ and, for any K\"ahler potential $\phi(t)$, $$\frac {d}{dt}(I_{\omega}-J_{\omega})(\phi(t))=-\frac 1V\int_M\;\phi\Delta_{\phi} \dot\phi \;\omega_{\phi}^n.$$ The behaviour of $E_1$ for the family of K\"ahler potentials $\phi(t)$ satisfying the equation (\ref{eq1}) below has been studied by Song and Weinkove in \cite{[SoWe]}. Following their ideas, we have the following lemma.
\begin{lem}\label{lema3.3}For any K\"ahler metric $\omega_0\in[\omega]$, there exists a K\"ahler metric $\omega_0'\in [\omega]$ such that $Ric(\omega_0')>0$ and $$E_0(\omega_0)\geq E_0(\omega_0').$$ \end{lem} \begin{proof}We consider the complex Monge-Amp\`{e}re equation \begin{equation}(\omega_0+\sqrt{-1}\partial\bar\partial \varphi)^n=e^{th_{0}+c_t}\omega_0^n,\label{eq1}\end{equation} where $h_{0}$ satisfies the following equation $$Ric(\omega_0)-\omega_0=\sqrt{-1}\partial\bar\partial h_{0},\qquad \frac 1V\int_M\;e^{h_0} \omega_0^n=1$$ and $c_t$ is the constant chosen so that $$\int_M\; e^{th_0+c_t}\omega_0^n=V.$$ By Yau's results in \cite{[Yau]}, there exists a unique solution $\varphi(t)$ $(t\in [0, 1])$ of the equation (\ref{eq1}) with $\int_M\; \varphi \omega_0^n=0.$ Then $\varphi(0)=0.$ Note that the equation (\ref{eq1}) implies \begin{equation} Ric(\omega_{\varphi})=\omega_{\varphi}+(1-t)\sqrt{-1}\partial\bar\partial h_0-\sqrt{-1}\partial\bar\partial\varphi,\label{eq2}\end{equation} and $$\Delta_{\varphi}\dot\varphi=h_0+c_t'.$$ By the definition of $E_0$ we have \begin{eqnarray*} \frac {d}{dt}E_0(\varphi(t))&=&-\frac 1V\int_M\; \dot\varphi(R(\omega_{\varphi})-n)\omega_{\varphi}^n\\ &=&-\frac 1V\int_M\; \dot\varphi((1-t)\Delta_{\varphi}h_0-\Delta_{\varphi}\varphi)\omega_{\varphi}^n\\ &=&-\frac {1-t}{V}\int_M\;\Delta_{\varphi} \dot\varphi h_0\; \omega_{\varphi}^n+\frac 1V\int_M\;\varphi\Delta_{\varphi}\dot\varphi\omega_{\varphi}^n\\ &=&-\frac {1-t}{V}\int_M\;(\Delta_{\varphi} \dot\varphi)^2\;\omega_{\varphi}^n-\frac {d}{dt}(I-J)_{\omega_0}(\varphi). \end{eqnarray*} Integrating the above formula from $0$ to $1$, we have $$E_0(\varphi(1))-E_0(\omega_0)=-\frac 1V\int_0^1(1-s)\int_M\;(\Delta_{\varphi} \dot\varphi)^2\; \omega_{\varphi}^n\wedge ds-(I-J)_{\omega_0}(\varphi(1))\leq 0.$$ By the equation (\ref{eq2}), we know $Ric(\omega_{\varphi(1)})> 0$. This proves the lemma. \end{proof} Now we can prove Theorem \ref{main4}.
\begin{theo}$E_1$ is bounded from below if and only if the $K$-energy is bounded from below in the class $[\omega].$ Moreover, we have $$\inf_{\omega'\in [\omega]} E_{1}(\omega')=2\inf_{\omega'\in [\omega]} E_{0}(\omega')- \frac 1{nV}\int_M\; |\nabla h_{\omega}|^2\omega^n.$$ \end{theo} \begin{proof}It is sufficient to show that if $E_1$ is bounded from below, then $E_0$ is bounded from below. For any K\"ahler metric $\omega_0$, by Lemma \ref{lema3.3} there exists a K\"ahler metric $\omega_0'=\omega+\sqrt{-1}\partial\bar\partial \varphi_0$ such that $$Ric(\omega_0')\geq c>0,\qquad E_0(\omega_0)\geq E_0(\omega_0'),$$ where $c$ is a constant depending only on $\omega_0.$ Let $\varphi(t)$ be the solution to the K\"ahler-Ricci flow with the initial metric $\omega_0',$ $$\pd {\varphi}{t}=\log\frac {\omega_{\varphi}^n}{\omega^n}+\varphi-h_{\omega}, \qquad \varphi(0)=\varphi_0.$$ Then for any $t>s\geq0$, by Theorem \ref{thm3.2} we have \begin{equation} E_1(t)-E_1(s)\leq 2\delta(E_0(t)-E_0(s)),\label{qeq3}\end{equation} where $E_1(t)=E_1(\omega, \omega_{\varphi(t)})$ and $\delta=\frac {n-1}{2n}$ if $n\geq 2,$ or $\delta=c>0$ if $n=1$. Here $c$ is the constant obtained in the proof of Theorem \ref{thm3.2}. By Theorem \ref{pali} we have $$E_1(t)-2E_0(s)-\frac 1{nV}\int_M\;|\nabla\dot\varphi|^2\omega_{\varphi}^n (s)+C_{\omega}\leq \delta(E_1(t) -\frac 1{nV}\int_M\;|\nabla\dot\varphi|^2\omega_{\varphi}^n(t)+C_{\omega})-2\delta E_0(s),$$ that is, \begin{equation} E_1(t)-\frac 1{n(1-\delta)V}\int_M\;|\nabla\dot\varphi|^2\omega_{\varphi}^n(s)+ \frac {\delta}{n(1-\delta)V}\int_M\;|\nabla\dot\varphi|^2\omega_{\varphi}^n(t)+C_{\omega}\leq 2E_0(s),\label{qeq4}\end{equation} where $C_{\omega}=\frac 1{nV}\int_M\; |\nabla h_{\omega}|^2\omega^n.$ By (\ref{qeq3}) we know $E_0$ is bounded from below along the K\"ahler-Ricci flow.
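Indeed, taking $s=0$ in (\ref{qeq3}) and using the assumed lower bound of $E_1$, we obtain $$2\delta(E_0(t)-E_0(0))\geq E_1(t)-E_1(0)\geq \inf_{\omega'\in [\omega]}E_1(\omega')-E_1(0),$$ so that $$E_0(t)\geq E_0(0)-\frac 1{2\delta}\Big(E_1(0)-\inf_{\omega'\in [\omega]}E_1(\omega')\Big), \qquad \forall\, t\geq 0.$$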
Thus there exists a sequence of times $t_m$ such that $$\int_M\;|\nabla\dot\varphi|^2\omega_{\varphi}^n(t_m)\rightarrow 0,\qquad m\rightarrow \infty.$$ Choosing $t=t_m$ and letting $m\rightarrow \infty$ in (\ref{qeq4}), we obtain $$\inf E_1-\frac 1{n(1-\delta)V}\int_M\;|\nabla\dot\varphi|^2\omega_{\varphi}^n(s)+C_{\omega}\leq 2E_0(s)\leq 2E_0(\omega_0'), $$ where the last inequality holds because $E_0$ is decreasing along the K\"ahler-Ricci flow. Choosing $s=t_m$ again and letting $m\rightarrow \infty$, we get $$\inf E_1+C_{\omega}\leq 2E_0(\omega_0')\leq 2E_0(\omega_0).$$ Thus, $E_0$ is bounded from below in $[\omega]$, and $$\inf E_1\leq 2\inf E_0-C_{\omega}.$$ On the other hand, for any $\omega'\in [\omega]$ we have $$E_1(\omega')\geq 2E_0(\omega')-C_{\omega}.$$ Combining the last two inequalities, we have $\inf E_1=2\inf E_0-C_{\omega}$. Thus, the theorem is proved. \end{proof} \section{Some technical lemmas} In this section, we will prove some technical lemmas, which will be used in the proofs of Theorems \ref{main2} and \ref{main}. These lemmas are based on the K\"ahler-Ricci flow $$\pd {\phi}{t}=\log\frac {\omega_{\phi}^n}{\omega^n}+\phi-h_{\omega}.$$ Most of these results are taken from \cite{[chen1]}-\cite{[chen-tian1]}. The readers are referred to these papers for the details. Here we will prove some of them for completeness. \subsection{Estimates of the Ricci curvature} The following result shows that we can control the curvature tensor in a short time. \begin{lem}\label{lem2.1}(cf. \cite{[chen1]}) Suppose that for some $\delta>0$, the curvature of $\omega_0=\omega+\sqrt{-1}\partial\bar\partial \phi(0)$ satisfies the following conditions $$\left \{\begin{array}{lll}|Rm|(0)&\leq& \Lambda,\\ R_{i\bar j}(0)&\geq& -1+\delta. \end{array}\right.
$$ Then there exists a constant $T(\delta, \Lambda)>0$, such that for the evolving K\"ahler metric $\omega_t(0\leq t\leq 6T)$, we have the following \begin{equation} \left \{\begin{array}{lll}|Rm|(t)&\leq& 2\Lambda,\\ R_{i\bar j}(t)&\geq& -1+\frac {\delta}2. \end{array}\right. \label{1}\end{equation} \end{lem} \begin{lem} \label{lem2.2}(cf. \cite{[chen1]})If $E_1(0)\leq \inf_{\omega'\in [\omega] }E_1(\omega')+\epsilon,$ and $$Ric(t)+\omega(t)\geq \frac {\delta}2>0,\qquad \forall t\in [0, T],$$ then along the K\"ahler-Ricci flow we have \begin{equation}\frac 1V\int_0^{T}\int_M\;\;|Ric-\omega|^2(t)\omega_{\phi}^n\wedge dt\leq \frac {\epsilon}2.\label{2}\end{equation} \end{lem} Since we have the estimate of the Ricci curvature, the following proposition shows that the Sobolev constant is uniformly bounded if $E_1$ is small. \begin{prop}\label{lem2.3}(cf. \cite{[chen1]}) Along the K\"ahler-Ricci flow, if $E_1(0)\leq \inf_{\tilde\omega\in [\omega] }E_1(\tilde\omega)+\epsilon,$ and for any $t\in [0, T],$ $$Ric(t)+\omega(t)\geq 0, $$ then the diameter $D$ of the evolving metric $\omega_{\phi}$ is uniformly bounded for $t\in [0, T].$ As $\epsilon\rightarrow 0, $ we have $D\rightarrow \pi.$ Let $\sigma(\epsilon)$ be the maximum of the Sobolev and Poincar\'e constants with respect to the metric $\omega_{\phi}$. As $\epsilon\rightarrow 0,$ we have $\sigma(\epsilon)\leq \sigma<+\infty.$ Here $\sigma$ is a constant independent of $\epsilon.$ \end{prop} Next we state a parabolic version of the Moser iteration argument (cf. \cite{[chen-tian1]}). \begin{prop}\label{lem2.17} Suppose the Sobolev and Poincar\'e constants of the evolving K\"ahler metrics $g(t)$ are both uniformly bounded by $\sigma$.
If a nonnegative function $u$ satisfies the following inequality $$\pd {}tu\leq \Delta_{\phi} u+f(t, x)u, \;\; \forall \,t\in (a, b),$$ where $|f|_{L^p(M, g(t))}$ is uniformly bounded by some constant $c$ for some $p>\frac m2$, with $m=2n=\dim_{{\mathbb R}}M$, then for any $\tau\in (0, b-a)$ and any $t\in (a+\tau, b)$, we have\footnote{The constant $C$ may differ from line to line. The notation $C(A, B, ...)$ means that the constant $C$ depends only on $A, B, ...$.} $$u(t)\leq \frac {C(n, \sigma, c)}{\tau^{\frac {m+2}{4}}}\Big(\int_{t- \tau}^t\int_M \;u^2 \,\omega_{\phi}^n\wedge ds\Big)^{\frac 12}.$$ \end{prop} By the above Moser iteration, we can show the following lemma. \begin{lem}\label{lem2.5} For any $\delta, \Lambda>0,$ there exists a small positive constant $\epsilon(\delta, \Lambda)>0$ such that if the initial metric $\omega_0$ satisfies the following conditions: \begin{equation} Ric(0)>-1+\delta,\; |Rm(0)|\leq \Lambda, \;E_1(0)\leq\inf E_1+\epsilon,\label{z1}\end{equation} then after time $2T$ along the K\"ahler-Ricci flow, we have \begin{equation}|Ric-\omega|(t)\leq C_1(T, \Lambda)\epsilon, \qquad\forall t\in [2T, 6T]\label{2.5}\end{equation} and \begin{equation}|\dot\phi-c(t)|_{C^0}\leq C(\sigma)C_1(T, \Lambda)\epsilon, \qquad\forall t \in [2T, 6T],\label{z2}\end{equation} where $c(t)$ is the average of $\dot\phi$ with respect to the metric $g(t)$, and $\sigma$ is the uniform upper bound of the Sobolev and Poincar\'e constants in Proposition \ref{lem2.3}.
\end{lem} \begin{proof} Let $Ric^0=Ric-\omega.$ Then $u=|Ric^0|^2(t)$ satisfies the parabolic inequality $$\pd ut\leq \Delta_{\phi} u+c(n)|Rm|_{g(t)}u.$$ Note that by Lemma \ref{lem2.1}, $|Rm|(t)\leq 2\Lambda$ for $0\leq t\leq 6T.$ Then applying Proposition \ref{lem2.17} together with Lemma \ref{lem2.1} and Lemma \ref{lem2.2} for $t\in [2T, 6T]$, we have \footnote{Since the volume $V$ of the K\"ahler manifold $M$ is fixed for the metrics in the same K\"ahler class, the constant $C(T, \Lambda)$ below should depend on $V$, but we don't specify this for simplicity.} \begin{eqnarray*} |Ric^0|^2(t)&\leq &C(\Lambda, T)\Big(\int_{0}^{6T}\int_M\;\;|Ric-\omega|^4(t)\omega_{\phi}^n\wedge dt\Big)^{\frac 12}\\ &\leq &C(\Lambda, T)(1+\Lambda)\Big(\int_{0}^{6T}\int_M\;\;|Ric-\omega |^2(t)\omega_{\phi}^n\wedge dt\Big)^{\frac 12}\\ &\leq &C(\Lambda, T)\sqrt{\epsilon}. \end{eqnarray*} Thus, \begin{equation}|Ric-\omega|(t)\leq C(\Lambda, T)\epsilon^{\frac 14}. \label{z3}\end{equation} Recall that $\Delta_{\phi} \dot\phi=n-R(\omega_{\phi})$; by the above estimate and Proposition \ref{lem2.3} we have \begin{equation}|\dot\phi-c(t)|_{C^0}\leq C(\sigma)C(T, \Lambda)\epsilon^{\frac 14}, \qquad\forall t\in [2T, 6T].\label{z4}\end{equation} For simplicity, we can write $\epsilon^{\frac 14}$ in the inequalities (\ref{z3}) and (\ref{z4}) as $\epsilon$, since we can replace $\epsilon$ by $\epsilon^4$ in the assumption (\ref{z1}). The lemma is proved. \end{proof} \subsection{Estimate of the average of $\pd {\phi}t$} In this section, we want to control $c(t)=\frac 1V\int_M\;\dot \phi \omega^n_{\phi}$. Here we follow the argument in \cite{[chen-tian2]}. Notice that the argument essentially needs the lower bound of the $K$-energy, which can be obtained by Theorem \ref{main4} in our case. Observe that for any solution $\phi(t)$ of the K\"ahler-Ricci flow, $$\pd {\phi}t=\log\frac {\omega_{\phi}^n}{\omega^n}+\phi-h_{\omega},$$ the function $\tilde\phi(t)=\phi(t)+Ce^t$ also satisfies the above equation for any constant $C$.
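Indeed, since $\sqrt{-1}\partial\bar\partial (Ce^t)=0$, we have $\omega_{\tilde\phi}=\omega_{\phi}$, and hence $$\pd {\tilde\phi}{t}=\pd {\phi}{t}+Ce^t=\log\frac {\omega_{\phi}^n}{\omega^n}+\phi-h_{\omega}+Ce^t=\log\frac {\omega_{\tilde\phi}^n}{\omega^n}+\tilde\phi-h_{\omega}.$$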
Since $$\pd {\tilde \phi}{t}(0)=\pd {\phi}{t}(0)+C,$$ we have $\tilde c(0)=c(0)+C.$ Thus we can normalize the solution $\phi(t)$ such that the average of $\dot\phi(0)$ is any given constant. The proof of the following lemma will be used in Sections 5 and 6, so we include it here. \begin{lem}\label{lem2.6}(cf. \cite{[chen-tian2]})Suppose that the $K$-energy is bounded from below along the K\"ahler-Ricci flow. Then we can normalize the solution $\phi(t)$ so that $$c(0)=\frac 1V\int_0^{\infty}\;e^{-t}\int_M\;|\nabla \dot\phi|^2\omega_{\phi}^n\wedge dt<\infty. $$ Moreover, for all time $t>0$, we have $$0<c(t),\;\;\int_0^{\infty}\;c(t)dt\leq E_0(0)-E_0(\infty),$$ where $E_0(\infty)=\lim_{t\rightarrow \infty}E_0(t)$. \end{lem} \begin{proof} A simple calculation yields $$c'(t)=c(t)-\frac 1V\int_M\;|\nabla \dot\phi|^2\omega_{\phi}^n.$$ Define $$\epsilon(t)=\frac 1V\int_M\;|\nabla \dot\phi|^2\omega_{\phi}^n.$$ Since the $K$-energy has a lower bound along the K\"ahler-Ricci flow, we have $$\int_0^{\infty}\;\epsilon(t)dt=\frac 1V\int_0^{\infty}\int_M\;|\nabla \dot \phi|^2\omega_{\phi}^n\wedge dt= E_0(0)-E_0(\infty).$$ Now we normalize our initial value of $c(t)$ as \begin{eqnarray*} c(0)&=&\int_0^{\infty}\;\epsilon(t)e^{-t}dt\\ &=&\frac 1V\int_0^{\infty}\;e^{-t}\int_M\;|\nabla \dot\phi|^2\omega_{\phi}^n \wedge dt\\ &\leq &\frac 1V\int_0^{\infty}\int_M\;|\nabla \dot\phi|^2\omega_{\phi}^n\wedge dt\\ &= &E_0(0)-E_0(\infty).
\end{eqnarray*} From the equation for $c(t)$, we have $$(e^{-t}c(t))'=-\epsilon(t)e^{-t}.$$ Thus, we have \beqs0<c(t)=\int^{\infty}_t \;\epsilon(\tau)e^{-(\tau-t)}d\tau \leq E_0(0)-E_0(\infty) \end{eqnarray*} and $$\lim_{t\rightarrow \infty}c(t)=\lim_{t\rightarrow \infty}\int^{\infty}_t \;\epsilon(\tau) e^{-(\tau-t)}d\tau=0.$$ Since the $K$-energy is bounded from below, we have $$\int_0^{\infty}\;c(t)dt=\frac 1V\int_0^{\infty}\int_M\;|\nabla \dot \phi|^2\omega_{\phi}^n\wedge dt-c(0)\leq E_0(0)-E_0(\infty).$$ \end{proof} \begin{lem}\label{lem5.9}Suppose that $E_1$ is bounded from below on ${\mathcal P}(M, \omega)$. For any solution $\phi(t)$ of the K\"ahler-Ricci flow with the initial metric $\omega_0$ satisfying $$E_1(0)\leq \inf E_1+\epsilon, $$ we can normalize the K\"ahler potential $\phi(t)$ of the solution so that $$0<c(t),\;\;\int_0^{\infty}c(t)\,dt\leq \frac {\epsilon}2.$$ \end{lem} \begin{proof}By Theorem \ref{main4}, the $K$-energy is bounded from below, then one can find a sequence of times $t_m\rightarrow \infty$ such that $$\int_M\; |\nabla \dot\phi|^2\omega^n_{\phi}\Big|_{t=t_m}\rightarrow 0.$$ By Theorem \ref{pali}, we have $$E_1(t)=2E_0(t)+\frac 1{nV}\int_M\; |\nabla\dot\phi|^2\omega_{\phi}^n-C_{\omega}.$$ Then \begin{eqnarray*} 2(E_0(0)-E_0(t_m))&=&E_1(0)-E_{1}(t_m)-\frac 1{nV}\int_M\;|\nabla\dot\phi|^2\omega_{\phi}^n\Big|_{t=0}+ \frac 1{nV}\int_M\;|\nabla\dot\phi|^2\omega_{\phi}^n\Big|_{t=t_m}\\&\leq&\epsilon+\frac 1{nV}\int_M\;| \nabla\dot\phi|^2\omega_{\phi}^n\Big|_{t=t_m}\\&\rightarrow&\epsilon.\end{eqnarray*} Since the $K$-energy is decreasing along the K\"ahler-Ricci flow, we have $$E_0(0)-E_0(\infty)\leq \frac {\epsilon}2.$$ By the proof of Lemma \ref{lem2.6}, for any solution of the K\"ahler-Ricci flow we can normalize $\phi(t)$ such that $$0<c(t),\;\;\int_0^{\infty}c(t)\,dt\leq E_0(0)-E_0(\infty) \leq \frac {\epsilon}2. $$ The lemma is proved.
\end{proof} \subsection{Estimate of the first eigenvalue of the Laplacian operator} \subsubsection{Case 1: $M$ has no nonzero holomorphic vector fields}\label{section4.4.1} In this subsection, we will estimate the first eigenvalue of the Laplacian when $M$ has no nonzero holomorphic vector fields. In order to show that the norms of $\phi$ decay exponentially in Section 4.5, we need to prove that the first eigenvalue is strictly greater than $1$. \begin{theo}\label{lem2.8}Assume that $M$ has no nonzero holomorphic vector fields. For any $A, B>0$, there exists $\eta(A, B, \omega)>0$ such that for any metric $\omega_{\phi}=\omega+\sqrt{-1}\partial\bar\partial \phi, $ if $$-\eta \omega_{\phi}\leq Ric(\omega_{\phi})-\omega_{\phi}\leq A\omega_{\phi}\; \;\;{ and}\;\;\; |\phi|\leq B, $$ then the first eigenvalue of the Laplacian $\Delta_{\phi}$ satisfies $$\lambda_1>1+\gamma(\eta, B, A, \omega),$$ where $\gamma>0$ depends only on $\eta, B, A$ and the background metric $\omega$. \end{theo} The following lemma is taken from \cite{[chen-tian2]}. \begin{lem}\label{lem2.9}(cf. \cite{[chen-tian2]}) If the K\"ahler metric $\omega_{\phi}$ satisfies $$Ric(\omega_{\phi})\geq \alpha\omega_{\phi},\; \;\;{ and}\;\;\; |\phi|\leq B$$ for two constants $\alpha$ and $B,$ then there exists a uniform constant $C$ depending only on $\alpha, B$ and $\omega$ such that $$\inf_M \log \frac {\omega_{\phi}^n}{\omega^n}(x)\geq -4C(\alpha, B, \Lambda) e^{2 (1+\int_M\; \log \frac {\omega_{\phi}^n}{\omega^n}\omega^n_{\phi})}.$$ \end{lem} The following crucial lemma is taken from Chen-He \cite{[chenhe]}. Here we include a proof. \begin{lem}\label{lem2.10}For any constants $A, B>0$, if $|Ric(\omega_{\phi})|\leq A$ and $|\phi|\leq B,$ then there is a constant $C$ depending only on $A, B$ and the background metric $\omega$ such that $|\phi|_{C^{3, \beta}(M, \omega)}\leq C(A, B, \omega, \beta)$ for any $\beta\in (0, 1)$.
In particular, one can find two constants $C_2(A, B, \omega)$ and $C_3(A, B, \omega)$ such that $$C_2(A, B, \omega)\omega\leq \omega_{\phi}\leq C_3(A, B, \omega)\omega.$$ \end{lem} \begin{proof} We use Yau's estimate on the complex Monge-Amp\`ere equation to obtain the $C^{3, \beta}$ norm of $\phi.$ Let $F=\log \frac {\omega^n_{\phi}}{\omega^n}.$ Then we have \begin{eqnarray*} \Delta_{\omega} F=g^{i\bar j}\partial_i\partial_{\bar j}\log \frac {\omega^n_{\phi}}{\omega^n} =-g^{i\bar j}R_{i\bar j}(\phi)+R(\omega), \end{eqnarray*} where $\Delta_{\omega}$ denotes the Laplacian of $\omega$. On the other hand, we choose normal coordinates at a point such that $g_{i\bar j}=\delta_{ij}$ and $g_{i\bar j}(\phi)=\lambda_i\delta_{ij};$ then \begin{eqnarray*} g^{i\bar j}R_{i\bar j}({\phi})=\sum_i R_{i\bar i}(\phi) \leq A\sum_i g_{i\bar i}(\phi) =A(n+\Delta_{\omega} \phi) \end{eqnarray*} and $$g^{i\bar j}R_{i\bar j}(\phi)\geq -A(n+\Delta_{\omega}\phi).$$ Hence, we have \begin{eqnarray} \Delta_{\omega} (F-A\phi)&\leq &R(\omega)+An\label{f1}\\ \Delta_{\omega} (F+A\phi)&\geq &R(\omega)-An. \label{f2}\end{eqnarray} Applying Green's formula, we can bound $F$ from above. In fact, \begin{eqnarray*} F+A\phi&\leq &\frac 1V\int_M\; -G(x, y)\Delta_{\omega} (F+A\phi)(y)\omega^n (y)+\frac 1V\int_M\;(F+A\phi) \omega^n\\ &\leq &\frac 1V\int_M\; -G(x, y)(R(\omega)-An)\omega^n(y)+\frac 1V\int_M\; (F+A\phi) \omega^n\\ &\leq &C(\Lambda, A, B), \end{eqnarray*} where $\Lambda$ is an upper bound of $|Rm|_{\omega}$.
Notice that in the last inequality we used $$\frac 1V\int_M\; F\omega^n\leq \log \Big(\frac 1V\int_M\; e^F \omega^n\Big)=0.$$ Hence, $F\leq C(\Lambda, A, B).$ Consider the complex Monge-Amp\`ere equation \begin{equation}(\omega+\sqrt{-1}\partial\bar\partial\phi)^n=e^F\omega^n,\label{f3}\end{equation} by Yau's estimate we have \begin{eqnarray*} \Delta_{\phi}(e^{-k\phi}(n+\Delta_{\omega}\phi))&\geq & e^{-k\phi} (\Delta_{\omega} F-n^2\inf_{i\neq j}R_{i\bar ij\bar j}(\omega))\\ &-&ke^{-k\phi}n(n+\Delta_{\omega}\phi)+(k+\inf_{i\neq j}R_{i\bar ij\bar j}(\omega))e^{-k\phi+ \frac {-F}{n-1}}(n+\Delta_{\omega}\phi)^{1+\frac 1{n-1}}\\ &\geq &e^{-k\phi}(R(\omega)-An-\Delta_{\omega}\phi-n^2\inf_{i\neq j}R_{i\bar ij \bar j}(\omega))\\ &-&ke^{-k\phi}n(n+\Delta_{\omega}\phi)+(k+\inf_{i\neq j}R_{i\bar ij\bar j} (\omega))e^{-k\phi+ \frac {-F}{n-1}}(n+\Delta_{\omega}\phi)^{1+\frac 1{n-1}}. \end{eqnarray*} The function $e^{-k\phi}(n+\Delta_{\omega}\phi)$ must achieve its maximum at some point $p.$ At this point, $$0\geq -An -\Delta_{\omega}\phi(p)-kn(n+\Delta_{\omega}\phi)+(k-\Lambda)e^{\frac {-F(p)} {n-1}}(n+\Delta_{\omega}\phi)^{1+\frac 1{n-1}}(p).$$ Notice that we can bound $\sup F$ by $C(\Lambda, A, B).$ Thus, the above inequality implies $$n+\Delta_{\omega}\phi \leq C_4(\Lambda, A, B).$$ Since we have an upper bound on $F$, the lower bound of $F$ can be obtained by Lemma \ref{lem2.9}: \begin{eqnarray*}\inf F\geq -4C(\Lambda, A, B) \exp (2+2\int_M\; F \omega_{\phi}^n) \geq -C(\Lambda, A, B).\end{eqnarray*} On the other hand, for each fixed $i$, \begin{eqnarray*} \inf F\leq F=\log\prod_j \;(1+\phi_{j\bar j})\leq \log \big((n+\Delta_{\omega} \phi)^{n-1}(1+\phi_{i\bar i})\big).
\end{eqnarray*} Hence, $1+\phi_{i\bar i}\geq C_5(\Lambda, A, B)>0.$ Thus, $$C_5(\Lambda, A, B)\leq n+\Delta_{\omega} \phi \leq C_4(\Lambda, A, B).$$ By (\ref{f1}) and (\ref{f2}), we have $$|\Delta_{\omega} F|\leq C(A, B, \Lambda).$$ By the elliptic estimate, $F\in W^{2, p}(M, \omega)$ for any $p>1.$ Since $\phi$ satisfies the equation (\ref{f3}), we have the H\"older estimate $\phi\in C^{2, \alpha}(M, \omega)$ for some $\alpha\in (0, 1)$ (cf. \cite{[Siu]},\cite{[Tru]}). Let $\psi$ be a local potential of $\omega$ such that $\omega=\sqrt{-1}\partial\bar\partial \psi$. Differentiating the equation (\ref{f3}), we have $$\Delta_{\phi}\pd {}{z^i}(\phi+\psi)-\pd {}{z^i}\log \omega^n=\pd {F}{z^i}\in W^{1, p}(M, \omega).$$ Note that the coefficients of $\Delta_{\phi}$ are in $C^{\alpha}(M, \omega)$; by the elliptic estimate, $\phi\in W^{4, p}(M, \omega)$. Then by the Sobolev embedding theorem, for any $\beta\in (0, 1)$, $$|\phi|_{C^{3, \beta}(M, \omega)}\leq C(A, B, \omega, \beta).$$ The lemma is proved. \end{proof} For convenience, we introduce the following definition. \begin{defi} For any K\"ahler metric $\omega,$ we define $$W(\omega)=\inf_f \Big\{\int_M \;|f_{\alpha\beta}|^2\omega^n\;\;\Big|\;f\in W^{2,2}(M, \omega), \int_M\;f^2\omega^n=1, \int_M\;f\omega^n=0\Big\}.$$ \end{defi} If $M$ has no nonzero holomorphic vector fields, then the following lemma gives a positive lower bound of $W(\omega).$ \begin{lem}\label{lem2.12}Assume that $M$ has no nonzero holomorphic vector fields.
For any constants $A, B>0$, there exists a positive constant $C_6$ depending on $A, B$ and the background metric $\omega$, such that for any K\"ahler metric $\omega_{\phi}=\omega+\sqrt{-1}\partial\bar\partial\phi,$ if $$|Ric(\omega_{\phi})|\leq A, \; \;\;{ and}\;\;\; |\phi|\leq B,$$ then $$W(\omega_{\phi})\geq C_6>0.$$ \end{lem} \begin{proof}Suppose not; then we can find a sequence of metrics $\omega_m=\omega +\sqrt{-1}\partial\bar\partial\phi_m$ and functions $f_m$ satisfying $$|Ric(\omega_m)|\leq A, \qquad |\phi_m|\leq B,$$ and $$\int_M\;f_m^2\omega_{m}^n=1, \int_M\;f_m\omega_{m}^n=0, \int_M \;|f_{m, \alpha\beta}|_{g_m}^2\omega_{m}^n\ri0.$$ Note that the Sobolev constants with respect to the metrics $\omega_m$ are uniformly bounded. By Lemma \ref{lem2.10}, we can assume that $\omega_m$ converges to a K\"ahler metric $\omega_{\infty}$ in the $C^{1, \beta}(M, \omega)$ norm for some $\beta\in (0, 1).$ Now define a sequence of vector fields \begin{equation} X_m^i=g_m^{i\bar k}\pd {f_{m}}{z^{\bar k}},\qquad X_m=X_m^i\pd {}{z^i}.\label{f4}\end{equation} By direct calculation, we have $$|X_m|^2_{g_m}=|\nabla f_m|_{g_m}^2,$$ and \begin{eqnarray*} \Big|\pd {X_m}{\bar z}\Big|_{g_m}^2=\sum_{i, j}\Big|\pd {X_m^i}{z^{\bar j}}\Big|_{g_m}^2=|f_{m, \alpha\beta}|_{g_m}^2.
\end{eqnarray*} Then \begin{equation}\int_M \;\Big|\pd {X_m}{\bar z}\Big|_{g_m}^2\omega_{g_m}^n\rightarrow 0.\label{la1}\end{equation} Next we claim that there exist two positive constants $C_7$ and $C_8$ which depend only on $A$ and the Poincar\'e constant $\sigma$ such that \begin{equation}0<C_7(\sigma)\leq \int_M \; |X_m|_{g_m}^2 \omega_{g_m}^n\leq C_8(A).\label{la2}\end{equation} In fact, since the Poincar\'e constant is uniformly bounded in our case, $$\int_M \; |X_m|_{g_m}^2 \omega_{g_m}^n=\int_M \; |\nabla f_m|_{g_m}^2 \omega_{g_m}^n\geq C(\sigma)\int_M\;f_m^2\omega_{g_m}^n=C(\sigma).$$ On the other hand, since the Ricci curvature has an upper bound, we have \begin{eqnarray*}\int_M \; |\Delta_m f_m|_{g_m}^2 \omega_{g_m}^n&=&\int_M \;|f_{m, \alpha \beta}|_{g_m}^2\omega_{g_m}^n+\int_M \;R_{i\bar j}f_{m, \bar i}f_{m, j}\omega_{g_m}^n\\ &\leq &\int_M \;|f_{m, \alpha\beta}|_{g_m}^2\omega_{g_m}^n+A\int_M \;|\nabla f_m|_{g_m}^2\omega_{g_m}^n\\ &\leq &\int_M \;|f_{m, \alpha\beta}|_{g_m}^2\omega_{g_m}^n+\frac 12\int_M \; |\Delta_m f_m|_{g_m}^2 \omega_{g_m}^n+\frac {A^2}2 \int_M \;f_m^2\omega_{g_m}^n. \end{eqnarray*} Absorbing the term $\frac 12\int_M \; |\Delta_m f_m|_{g_m}^2 \omega_{g_m}^n$ into the left-hand side, and using $\int_M\;f_m^2\omega_{g_m}^n=1$ together with $\int_M \;|f_{m, \alpha\beta}|_{g_m}^2\omega_{g_m}^n\rightarrow 0$, we have for $m$ large $$\int_M \; |\Delta_m f_m|_{g_m}^2 \omega_{g_m}^n\leq 1+A^2.$$ Therefore, \begin{eqnarray*}\int_M \; |X_m|_{g_m}^2 \omega_{g_m}^n&=&\int_M \; |\nabla f_m|_{g_m}^2 \omega_{g_m}^n\\ &\leq&\frac 12\int_M \; |\Delta_m f_m|_{g_m}^2 \omega_{g_m}^n + \frac 12\int_M\;f_m^2 \omega_{g_m}^n\\ &\leq &C(A). \end{eqnarray*} This proves the claim. Now we have $$\int_M\;f_m^2\omega_{m}^n=1, \qquad \int_M\;|\nabla\bar \nabla f_m|_{g_m}^2 \omega_{m}^n\leq C(A),\; \int_M\; |f_{m, \alpha\beta}|_{g_m}^2 \omega_m^n\rightarrow 0,$$ so $f_m$ is uniformly bounded in $W^{2,2}(M, \omega_m).$ Since the metrics $\omega_m$ are $C^{1, \beta}$-equivalent to $\omega_{\infty}$, the functions $f_m$ are also uniformly bounded in $W^{2, 2}(M, \omega_{\infty})$; thus, passing to a subsequence, we can assume that $f_m$ converges strongly to $f_{\infty}$ in $W^{1, 2}(M, \omega_{\infty})$.
By (\ref{f4}), $X_{m}$ converges strongly to $X_{\infty}$ in $L^2(M, \omega_{\infty}).$ Thus, by (\ref{la2}), \begin{equation} 0<C_7\leq \int_M \; |X_{\infty}|^2 \omega_{\infty}^n\leq C_8.\label{f5}\end{equation} Next we show that $X_{\infty}$ is holomorphic. In fact, for any vector valued smooth function $\xi=(\xi^1, \xi^2, \cdots, \xi^n),$ \begin{eqnarray*} \Big|\int_M\; \xi\cdot\bar \partial X_m\omega_{\infty}^n \Big|^2&=&\Big|\int_M\; \xi^k \frac {\partial X_m}{\partial \bar z^{ k}}\;\omega_{\infty}^n \Big|^2\\&\leq &\int_M\; |\xi|^2\omega_{\infty}^n\int_M\; \Big|\pd {X_m}{\bar z}\Big|^2\omega_{\infty}^n\\&\leq &C \int_M\; |\xi|^2\omega_{\infty}^n\int_M\; \Big|\pd {X_m}{\bar z}\Big|_{g_m}^2\omega_{g_m}^n \rightarrow 0. \end{eqnarray*} On the other hand, $$\int_M\; \xi\cdot\bar \partial X_m\omega_{\infty}^n =-\int_M\; \bar \partial \xi\cdot X_m \; \omega_{\infty}^n\rightarrow -\int_M\; \bar \partial \xi\cdot X_{\infty} \; \omega_{\infty}^n.$$ Then $X_{\infty}$ is a weakly holomorphic vector field, and hence it must be holomorphic. By (\ref{f5}), $X_{\infty}$ is a nonzero holomorphic vector field, which contradicts the assumption that $M$ has no nonzero holomorphic vector fields. The lemma is proved. \end{proof} \begin{lem}\label{lem2.13} Suppose the K\"ahler metric $\omega_g$ satisfies $Ric(\omega_g)\geq (1-\eta)\omega_g$, where $0<\eta< \frac {\sqrt{C_6}}2$ and $C_6$ is the constant obtained in Lemma \ref{lem2.12}. Then the first eigenvalue of $\Delta_g$ satisfies $\lambda_1\geq 1+\gamma$, where $\gamma=\frac {\sqrt{C_6}}{2}.$ \end{lem} \begin{proof} Let $u$ be any eigenfunction of $\omega_g$ with eigenvalue $\lambda_1,$ so that $\Delta_g u=-\lambda_1u.$ Then by direct calculation, we have \begin{eqnarray*} \int_M\; u_{ij}u_{\bar i\bar j}\;\omega_g^n&=&-\int_M\;u_{ij\bar j} u_{\bar i}\;\omega_g^n\\ &=& -\int_M\;(u_{j\bar ji}+R_{i\bar k}u_{k})u_{\bar i}\;\omega_g^n\\ &=&\int_M ((\Delta_g u)^2-R_{i\bar j}u_{j}u_{\bar i})\;\omega_g^n.
\end{eqnarray*} This implies \begin{eqnarray*} C_6\int_M\; u^2 \omega_g^n&\leq & \int_M\; ((\Delta_g u)^2-R_{i\bar j}u_{j}u_{\bar i})\;\omega_g^n\\ &\leq &\lambda_1^2 \int_M\; u^2 \omega_g^n-(1-\eta)\int_M\;|\nabla u|^2 \omega_g^n\\ &=&(\lambda_1^2-(1-\eta)\lambda_1)\int_M\; u^2 \omega_g^n. \end{eqnarray*} Thus, we have $\lambda_1^2-(1-\eta)\lambda_1-C_6\geq 0.$ Then, $$\lambda_1\geq 1+\frac {\sqrt{C_6}}{2}.$$ \end{proof} \begin{flushleft} \begin{proof}[Proof of Theorem \ref{lem2.8}] The theorem follows directly from Lemmas \ref{lem2.12} and \ref{lem2.13} above. \end{proof} \end{flushleft} \subsubsection{Case 2: $M$ has nonzero holomorphic vector fields}\label{section4.4.2} In this subsection, we will consider the case when $M$ has nonzero holomorphic vector fields. Denote by $Aut(M)^{\circ}$ the connected component containing the identity of the holomorphic transformation group of $M$. Let $K$ be a maximal compact subgroup of $Aut(M)^{\circ}$. Then there is a semidirect decomposition of $Aut(M)^{\circ}$ (cf. \cite{[FM]}), $$Aut(M)^{\circ}=Aut_r(M)\propto R_u,$$ where $Aut_r(M)\subset Aut(M)^{\circ}$ is a reductive algebraic subgroup and the complexification of $K$, and $R_u$ is the unipotent radical of $Aut(M)^{\circ}$. Let $\eta_r(M, J)$ be the Lie algebra of $Aut_r(M, J).$ Now we introduce the following definition, which is a mild modification of the ones in \cite{[chen1]} and \cite{[PhSt]}. \begin{defi}\label{prestable} The complex structure $J$ of $M$ is called pre-stable, if no complex structure in the orbit of the diffeomorphism group has a larger (reduced) holomorphic automorphism group (i.e., a larger $Aut_r(M)$). \end{defi} Now we recall the following $C^{k, \alpha}$ convergence theorem for a sequence of K\"ahler metrics, which is well known in the literature (cf. \cite{[PhSt]}, \cite{[Tian4]}). \begin{theo}\label{conv1} Let $M$ be a compact K\"ahler manifold.
Let $(g(t), J(t))$ be any sequence of metrics $g(t)$ and complex structures $J(t)$ such that $g(t)$ is K\"ahler with respect to $J(t)$. Suppose the following is true: \begin{enumerate}\item For some integer $k\geq 1$, $|\nabla^lRm|_{g(t)}$ is uniformly bounded for any integer $l\ (0\leq l< k)$; \item The injectivity radii $i(M, g(t))$ are all bounded from below; \item There exist two uniform constants $c_1$ and $c_2$ such that $0<c_1\leq {\rm Vol}(M, g(t))\leq c_2$. \end{enumerate} Then there exist a subsequence $t_j$ and a sequence of diffeomorphisms $F_j: M\rightarrow M$ such that the pull-back metrics $\tilde g(t_j)=F_j^*g(t_j)$ converge in $C^{k, \alpha}(\forall \,\alpha\in (0, 1))$ to a $C^{k, \alpha}$ metric $g_{\infty}$. The pull-back complex structure tensors $\tilde J(t_j)=F_j^*J(t_j)$ converge in $C^{k, \alpha}$ to an integrable complex structure tensor $\tilde J_{\infty}$. Furthermore, the metric $ g_{\infty}$ is K\"ahler with respect to the complex structure $\tilde J_{\infty}$. \end{theo} \begin{theo}\label{theo4.18}Suppose $M$ is pre-stable.
For any $\Lambda_0, \Lambda_1>0$, there exists $\eta>0$ depending only on $\Lambda_0$ and $\Lambda_1$ such that for any metric $\omega\in 2\pi c_1(M),$ if \begin{equation}|Ric(\omega)-\omega|\leq \eta,\;\; |Rm(\omega)|\leq \Lambda_0,\;\;|\nabla Rm(\omega)|\leq \Lambda_1, \label{r1}\end{equation} then for any smooth function $f$ satisfying $$ \int_M\; f\omega^n=0 {\;\;{ and}\;\;} Re\left(\int_M\; X(f)\omega^n\right)=0, \qquad \forall X\in \eta(M, J),$$ we have $$\int_M\; |\nabla f|^2\omega^n>(1+\gamma(\eta, \Lambda_0, \Lambda_1))\int_M\; |f|^2\omega^n,$$ where $\gamma>0$ depends only on $\eta, \Lambda_0$ and $\Lambda_1.$ \end{theo} \begin{proof}Suppose not. Then there exist positive numbers $\eta_m\rightarrow 0$ and a sequence of K\"ahler metrics $\omega_m\in 2\pi c_1(M)$ such that \begin{equation} |Ric(\omega_m)-\omega_{m}|\leq \eta_m,\;\; |Rm(\omega_m)|\leq \Lambda_0,\;\;\;|\nabla_m Rm(\omega_m)|\leq \Lambda_1,\label{r2}\end{equation} where $\nabla_m$ is taken with respect to the metric $\omega_m$, as well as smooth functions $f_m$ satisfying $$\int_M\; f_m\omega_m^n=0, \qquad Re\left(\int_M\; X(f_m)\omega_m^n\right)=0, \qquad \forall X\in \eta(M, J),$$ \begin{equation}\int_M\; |\nabla_m f_m|^2\omega_m^n<(1+\gamma_m)\int_M\; |f_m|^2\omega_m^n,\label{eq5.25}\end{equation} where $0<\gamma_m\rightarrow 0.$ Without loss of generality, we may assume that \[ \int_M\; f_m^2 \omega_m^n = 1, \qquad \forall m \in {\mathbb N}, \] which means \[ \int_M\; |\nabla_m f_m|^2\omega_m^n\leq 1 + \gamma_m < 2. \] Then $f_m $ will converge in $W^{1,2}$ provided that $(M, \omega_m)$ converges. Indeed, under our assumptions, $(M, \omega_m, J)$ will converge in $C^{2, \alpha}(\alpha\in (0, 1))$ to $(M, \omega_\infty, J_\infty).\;$ In fact, by (\ref{r2}) the diameters of $\omega_m$ are uniformly bounded. Since all the metrics $\omega_m$ are in the same K\"ahler class, the volume is fixed. Then by (\ref{r2}) again, the injectivity radii are uniformly bounded from below.
Therefore, all the conditions of Theorem \ref{conv1} are satisfied. Note that the complex structure $J_\infty$ lies in the closure of the orbit of diffeomorphisms, while $\omega_\infty$ is a K\"ahler-Einstein metric in $(M, J_\infty)$. By the standard deformation theory of complex structures, we have \[ \dim Aut_r (M, J) \leq \dim Aut_r(M, J_\infty). \] By abusing notation, we can write \[ Aut_r(M, J) \subset Aut_r(M, J_\infty). \] By our assumption that $(M, J)$ is pre-stable, we have the inequality the other way around. Thus, we have \[ \dim Aut_r(M, J) = \dim Aut_r(M, J_\infty),\;\;\;{\rm or}\;\;\; Aut_r(M, J) = Aut_r(M, J_\infty). \] Now let $f_\infty $ be the $W^{1,2}$ limit of $f_m$; then we have \[ 1 \leq |f_\infty|_{W^{1,2}(M,\, \omega_\infty)} \leq 3 \] and \[ \int_M f_\infty \omega_\infty^n = 0, \qquad Re\left(\int_M\; X(f_\infty) \omega_\infty^n\right) = 0,\qquad \forall X\in \eta(M, J). \] Thus, $f_\infty$ is a non-trivial function. Since $\omega_\infty$ is a K\"ahler-Einstein metric, we have \[ \int_M\; \theta_X f_\infty \omega_\infty^n = 0, \] where \[ {\cal L}_X \omega_\infty =\sqrt{-1}\partial\bar\partial\theta_X. \] This implies that $f_\infty $ is perpendicular to the first eigenspace\footnote{Note that $\triangle \theta_X = -\theta_X $ is totally real for $X \in Aut_r(M, J_\infty).\;$ Moreover, the first eigenspace consists of all such $\theta_X.\;$} of $\triangle_{\omega_\infty}.\;$ In other words, there is a $\delta > 0$ such that \[ \int_M |\nabla f_\infty|^2 \omega_\infty^n > (1+ \delta) \int_M f_\infty^2 \omega_\infty^n > 1+ \delta. \] However, this contradicts the following fact: \begin{eqnarray*} \int_M \;|\nabla f_\infty|^2 \omega_\infty^n & \leq & \displaystyle \lim_{m\rightarrow \infty} \int_M |\nabla_m f_m|^2 \omega_m^n \\ & \leq & \displaystyle \lim_{m\rightarrow \infty} (1+ \gamma_m) \int_M f_m^2 \omega_m^n = 1. \end{eqnarray*} The theorem is then proved.
\end{proof} \subsection{Exponential decay in a short time}\label{section4.5} In this subsection, we will show that the $W^{1,2}$ norm of $\dot \phi$ decays exponentially in a short time. Here we follow the argument in \cite{[chen-tian2]} and use the estimate of the first eigenvalue obtained in the previous subsection. \begin{lem}\label{lem2.14} Suppose for any time $t\in [T_1, T_2]$, we have $$|Ric-\omega|(t)\leq C_1\epsilon\; \;\;{ and}\;\;\; \lambda_1(t)\geq 1+\gamma>1.$$ Let $$\mu_0(t)=\frac 1V\int_M\;(\dot\phi-c(t))^2\omega_{\phi}^n.$$ If $\epsilon$ is small enough, then there exists a constant $\alpha_0>0$ depending only on $\gamma, \sigma$ and $C_1\epsilon$ such that $$\mu_0(t)\leq e^{-\alpha_0 (t-T_1)}\mu_0(T_1), \qquad\forall t\in [T_1, T_2].$$ \end{lem} \begin{proof} By direct calculation, we have \begin{eqnarray*} \frac {d}{dt}\mu_0(t)&=&\frac 2V\int_M\;(\dot\phi-c(t))(\ddot \phi-c'(t))\omega_{\phi}^n+\frac 1V\int_M\;(\dot \phi-c(t))^2\Delta_{\phi}\dot \phi\omega_{\phi}^n\\&=&-\frac 2V\int_M\;(1+\dot \phi-c(t))|\nabla (\dot \phi-c(t))|^2\omega_{\phi}^n+\frac 2V\int_M\;(\dot \phi-c(t))^2\omega_{\phi}^n. \end{eqnarray*} By the assumption, we have for $t\in [T_1, T_2]$ \begin{eqnarray*} \frac {d}{dt}\mu_0(t)&=&-\frac 2V\int_M\;(1+\dot\phi-c(t))|\nabla \dot\phi|^2\omega_{\phi}^n+\frac 2V\int_M\;(\dot\phi-c(t))^2\omega_{\phi}^n\\ &\leq&-\frac 2V\int_M\;(1-C(\sigma)C_1\epsilon)|\nabla \dot\phi|^2\omega_{\phi}^n +\frac 2V\int_M\;(\dot \phi-c(t))^2\omega_{\phi}^n\\ &\leq&-\frac 2V\int_M\;(1-C(\sigma)C_1\epsilon)(1+\gamma)(\dot \phi-c(t))^2\omega_{\phi}^n+\frac 2V\int_M\;(\dot \phi-c(t))^2\omega_{\phi}^n\\&=&-\alpha_0\mu_0(t). \end{eqnarray*} Here $$\alpha_0=2(1-C(\sigma)C_1\epsilon)(1+\gamma)-2>0,$$ if we choose $\epsilon$ small enough.
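Integrating this differential inequality gives the claimed exponential decay: since $\frac {d}{dt}\mu_0(t)\leq -\alpha_0\mu_0(t)$ on $[T_1, T_2]$, we have $$\frac {d}{dt}\Big(e^{\alpha_0(t-T_1)}\mu_0(t)\Big)=e^{\alpha_0(t-T_1)}\Big(\frac {d}{dt}\mu_0(t)+\alpha_0\mu_0(t)\Big)\leq 0,$$ so that $e^{\alpha_0(t-T_1)}\mu_0(t)\leq \mu_0(T_1)$ for all $t\in [T_1, T_2]$.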
Thus, we have $$\mu_0(t)\leq e^{-\alpha_0 (t-T_1)}\mu_0(T_1).$$ \end{proof} \begin{lem}\label{lem2.15}Suppose for any time $t\in [T_1, T_2]$, we have $$|Ric-\omega|(t)\leq C_1\epsilon\; \;\;{ and}\;\;\; \lambda_1(t)\geq 1+\gamma>1.$$ Let $$\mu_1(t)=\frac 1V\int_M\;|\nabla \dot \phi|^2\omega_{\phi}^n.$$ If $ \epsilon$ is small enough, then there exists a constant $\alpha_1>0$ depending only on $\gamma$ and $C_1\epsilon$ such that $$\mu_1(t)\leq e^{-\alpha_1(t-T_1)}\mu_1(T_1), \qquad\forall t\in [T_1, T_2].$$ \end{lem} \begin{proof} Recall that the evolution equation for $|\nabla \dot\phi|^2$ is $$\pd {}t|\nabla \dot \phi|^2=\Delta_{\phi} |\nabla \dot \phi|^2-|\nabla \nabla \dot \phi|^2-|\nabla \bar \nabla\dot \phi|^2+|\nabla \dot \phi|^2.$$ Then for any time $t\in [T_1, T_2],$ \begin{eqnarray*} \frac d{dt}\mu_1(t)&=&\frac 1V\int_M\;(-|\nabla \nabla \dot \phi|^2- |\nabla \bar \nabla\dot \phi|^2+|\nabla \dot \phi|^2+|\nabla \dot \phi|^2\Delta_{\phi} \dot\phi)\;\omega_{\phi}^n\\ &\leq &\frac 1V\int_M\;(-\gamma|\nabla \dot \phi|^2+(n-R(\omega_{\phi}))|\nabla \dot \phi|^2) \omega_{\phi}^n\\ &\leq& -(\gamma-C_1\epsilon)\mu_1(t). \end{eqnarray*} Thus, we have $$\mu_1(t)\leq e^{-\alpha_1(t-T_1)}\mu_1(T_1)$$ where $\alpha_1=\gamma-C_1\epsilon>0$ if we choose $\epsilon$ small. \end{proof} \subsection {Estimate of the $C^0$ norm of $\phi(t)$ }\label{section4.6} In this subsection, we derive some estimates on the $C^0$ norm of $|\phi|$. Recall that in the previous subsection we proved that the $W^{1, 2}$ norm of $|\dot\phi-c(t)|$ decays exponentially. Based on this result we will use the parabolic Moser iteration to show that the $C^0$ norm of $|\dot\phi-c(t)|$ also decays exponentially. 
\begin{lem}\label{lem2.18} Suppose that $\mu_0(t), \mu_1(t)$ decay exponentially for $t\in [T_1, T_2]$ as in Lemmas \ref{lem2.14} and \ref{lem2.15}. Then we have $$\Big|\pd {\phi}t-c(t)\Big|_{C^{0}}\leq \frac {C_9(n, \sigma)} {\tau^{\frac {m}{4}}}\Big(\mu_0(t-\tau)+\frac 1{\alpha_1^2}\mu_1^2(t-\tau)\Big)^{\frac 12},\;\;\;\forall \;t\in [T_1+\tau, T_2]$$ where $m=\dim_{{\mathbb R}}M$ and $\tau<T_2-T_1$. \end{lem} \begin{proof} Let $u=\pd {\phi}t-c(t)$; the evolution equation for $u$ is $$\pd ut=\Delta_{\phi} u+u+\mu_1(t),$$ where $\mu_1(t)=\frac 1V\int_M\;|\nabla \dot \phi|^2\omega_{\phi}^n.$ Note that in the proof of Lemma \ref{lem2.15}, we derived $$\pd {}t\mu_1(t)\leq -\alpha_1 \mu_1(t).$$ Thus, we have $$\pd {}t(u_++\frac 1{\alpha_1}\mu_1)\leq \Delta_{\phi} (u_++\frac 1{\alpha_1} \mu_1)+(u_++\frac 1{\alpha_1}\mu_1),$$ where $u_+=\max\{u, 0\}$. Since $u_++\frac 1{\alpha_1}\mu_1$ is a nonnegative function, we can use the parabolic Moser iteration, $$(u_++\frac 1{\alpha_1}\mu_1)(t)\leq \frac {C(n, \sigma)}{\tau^{\frac {m +2}{4}}} \Big(\int_{t-\tau}^t\;\int_M (u_++\frac 1{\alpha_1}\mu_1)^2(s) \;\omega_{\phi}^n\wedge ds\Big)^{\frac 12}.$$ Since $\mu_0$ and $\mu_1$ are decreasing, \begin{eqnarray}& &(u_++\frac 1{\alpha_1}\mu_1)(t)\nonumber\\ &\leq&\frac {C(n, \sigma)}{\tau^{\frac {m+2}{4}}}\Big(\int_{t-\tau}^t \;(\mu_0(s)+\frac 1{\alpha_1^2} \mu_1^2(s))ds\Big)^{\frac 12}\nonumber\\ &\leq&\frac {C(n, \sigma)}{\tau^{\frac {m}{4}}}\Big(\mu_0(t-\tau)+\frac 1{\alpha_1^2}\mu_1^2(t-\tau)\Big)^{\frac 12}.
\label{x1}\end{eqnarray} On the other hand, the evolution equation for $-u$ is $$\pd {}t(-u)=\Delta_{\phi} (-u)+(-u)-\mu_1(t)\leq \Delta_{\phi} (-u)+(-u).$$ Thus, $$\pd {}t(-u)_+\leq \Delta_{\phi} (-u)_++(-u)_+.$$ By the parabolic Moser iteration, we have \begin{eqnarray} (-u)_+&\leq &\frac {C(n, \sigma)}{\tau^{\frac {m+2}{4}}}\Big(\int_{t-\tau}^t\; \int_M (-u)_+^2\omega_{\phi}^n\wedge ds\Big)^{\frac 12}\nonumber\\ &\leq&\frac {C(n, \sigma)}{\tau^{\frac {m}{4}}}\mu_0(t-\tau)^{\frac 12}. \label{x2}\end{eqnarray} Combining the two inequalities (\ref{x1}) and (\ref{x2}), we obtain the estimate $$\Big|\pd {\phi}t-c(t)\Big|_{C^{0}}\leq \frac {C(n, \sigma)}{\tau^ {\frac {m}{4}}}\Big (\mu_0(t-\tau)+\frac 1{\alpha_1^2}\mu_1^2(t-\tau)\Big)^{\frac 12}.$$ This proves the lemma.\end{proof} \begin{lem}\label{lem2.19}Under the same assumptions as in Lemma \ref{lem2.18}, we have $$|\phi(t)|\leq |\phi(T_1+\tau)|+\frac {C_{10}(n, \sigma)}{\alpha \tau^{\frac {m}{4}}}(\sqrt{\mu_0(T_1)}+\frac 1{\alpha_1}\mu_1(T_1)) + \tilde C,\qquad \forall\, t\in [T_1+\tau, T_2].$$ Here $\tilde C=E_0(0)-E_0(\infty)$ is the constant from Lemma \ref{lem2.6}. \end{lem} \begin{proof} \begin{eqnarray*} |\phi(t)|&\leq &|\phi(T_1+\tau)|+\int_{T_1+\tau}^{t}\; \Big|\pd {\phi(s)} s-c(s)\Big| ds+\int_{T_1+\tau}^{t}\;c(s) ds\\ &\leq &|\phi(T_1+\tau)|+\frac {C(n, \sigma)}{\tau^{\frac {m}{4}}}\int_{T_1+\tau} ^{t}\;\Big(\mu_0(s-\tau)+\frac 1{\alpha_1^2}\mu_1^2(s-\tau)\Big)^{\frac 12}ds + \tilde C\\ &\leq &|\phi(T_1+\tau)|+\frac {C(n, \sigma)}{\tau^{\frac {m}{4}}}(\sqrt {\mu_0(T_1)}+\frac 1{\alpha_1}\mu_1(T_1))\int_{T_1+\tau}^{t}\;e^{- \alpha (s- \tau-T_1)}ds +\tilde C\\ &\leq&|\phi(T_1+\tau)|+\frac {C(n, \sigma)}{\alpha \tau^{\frac {m}{4}}}(\sqrt{\mu_0(T_1)}+\frac 1{\alpha_1}\mu_1(T_1)) +\tilde C \end{eqnarray*} where $\alpha=\min\{\frac {\alpha_0}2, \alpha_1\}$ and $\tilde C=E_0(0)-E_0(\infty)$ is the constant from Lemma \ref{lem2.6}.
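Let us spell out the elementary bounds used in the last two inequalities. The exponential decay of $\mu_0$ and $\mu_1$ from Lemmas \ref{lem2.14} and \ref{lem2.15}, together with $\sqrt{a+b}\leq \sqrt a+\sqrt b$, gives $$\Big(\mu_0(s-\tau)+\frac 1{\alpha_1^2}\mu_1^2(s-\tau)\Big)^{\frac 12}\leq \Big(\sqrt{\mu_0(T_1)}+\frac 1{\alpha_1}\mu_1(T_1)\Big)e^{-\alpha(s-\tau-T_1)}$$ with $\alpha=\min\{\frac {\alpha_0}2, \alpha_1\}$, while the last line follows from $$\int_{T_1+\tau}^{t}\;e^{-\alpha(s-\tau-T_1)}ds=\frac 1{\alpha}\Big(1-e^{-\alpha(t-\tau-T_1)}\Big)\leq \frac 1{\alpha}.$$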
\end{proof} \subsection{Estimate of the $C^k$ norm of $\phi(t)$ } In this subsection, we shall obtain uniform $C^k$ bounds for the solution $\phi(t)$ of the K\"ahler-Ricci flow $$\pd {\phi}t=\log\frac {\omega^n_{\phi}}{\omega^n}+\phi-h_{\omega}$$ with respect to any background metric $\omega$. For simplicity, we normalize $h_{\omega}$ to satisfy $$\int_M\; h_{\omega}\;\omega^n=0.$$ The following is the main result in this subsection. \begin{theo}\label{theoRm}For any positive constants $\Lambda, B>0$ and small $\eta>0$, there exists a constant $C_{11}$ depending only on $B, \eta, \Lambda$ and the Sobolev constant $\sigma$ such that if the background metric $\omega$ satisfies $$|Rm(\omega)|\leq \Lambda, \qquad |Ric(\omega)-\omega|\leq \eta, $$ and $|\phi(t)|, |\dot\phi(t)|\leq B,$ then $$|Rm|(t)\leq C_{11}(B, \Lambda, \eta, \sigma).$$ \end{theo} \begin{proof}Since $R(\omega)-n=\Delta_{\omega} h_{\omega}$, by the assumption we have $$|\Delta_{\omega} h_{\omega}|\leq \eta.$$ Since the Sobolev constant with respect to the metric $\omega$ is uniformly bounded by a constant $\sigma$, we have $$|h_{\omega}|_{C^0}\leq C(\sigma)\eta.$$ Now we use Yau's estimate to obtain higher order estimates of $\phi.$ Define $$F=\dot\phi-\phi+h_{\omega},$$ then the K\"ahler-Ricci flow can be written as $$(\omega+\sqrt{-1}\partial\bar\partial \phi)^n=e^F \omega^n.$$ By Yau's estimate we have \begin{eqnarray*} \Delta_{\phi}(e^{-k\phi}(n+\Delta_{\omega}\phi))&\geq & e^{-k\phi}(\Delta_{\omega} F-n^2 \inf_{i\neq j}R_{i\bar ij\bar j}(\omega))\\ &-&ke^{-k\phi}n(n+\Delta_{\omega}\phi)+(k+\inf_{i\neq j}R_{i\bar ij\bar j}(\omega))e^{-k\phi+ \frac {-F}{n-1}}(n+\Delta_{\omega}\phi)^{1+\frac 1{n-1}}.
\end{eqnarray*} Note that \begin{eqnarray*} \pd {}t(e^{-k\phi}(n+\Delta_{\omega} \phi))&=&-k\dot\phi e^{-k \phi}(n+\Delta_{\omega} \phi)+e^{-k\phi}\Delta_{\omega} \dot\phi\\ &=&-k\dot\phi e^{-k\phi}(n+\Delta_{\omega} \phi)+e^{-k\phi}\Delta_{\omega} (F+\phi-h_{\omega}).\end{eqnarray*} Combining the above two inequalities, we have \begin{eqnarray*} (\Delta_{\phi}-\pd {}t)(e^{-k\phi}(n+\Delta_{\omega}\phi))&\geq & e^{-k\phi}(\Delta_{\omega} h_{\omega}+n-n^2\inf_{i\neq j}R_{i\bar ij\bar j}(\omega))\\&+&(k\dot\phi-kn-1) e^{-k\phi}(n+\Delta_{\omega} \phi)\\&+&(k+\inf_{i\neq j}R_{i\bar ij\bar j}(\omega))e^{-k\phi+ \frac {-F}{n-1}}(n+\Delta_{\omega}\phi)^{1+\frac 1{n-1}}.\end{eqnarray*} Since $\phi, \Delta_{\omega} h_{\omega}, |h_{\omega}|, |Rm(\omega)|, \dot\phi$ are bounded, by the maximum principle we can obtain the following estimate $$n+\Delta_{\omega} \phi\leq C_{12}(B, \eta, \Lambda, \sigma).$$ By the definition of $F$, $$\log\frac {\omega^n_{\phi}}{\omega^n}=F\geq -C_{13}(B, \eta, \sigma).$$ On the other hand, we have \begin{eqnarray*}\log\frac {\omega^n_{\phi}}{\omega^n}&=&\log\prod_{i=1}^n(1+\phi_{i\bar i})\leq\log ((n+\Delta_{\omega}\phi)^{n-1}(1+\phi_{i\bar i})). \end{eqnarray*} Thus, $1+\phi_{i\bar i}\geq e^{-C_{13}}C_{12}^{-(n-1)}$, i.e., $C_{14}\omega\leq \omega_{\phi}\leq C_{12}\omega.$ Following Calabi's computation (cf. \cite{[chen-tian1]},\cite{[Yau]}), we can obtain the following $C^3$ estimate: $$|\phi|_{C^3(M, \,\omega)}\leq C_{15}(B, \eta, \Lambda, \sigma).$$ Since the metrics $\omega_{\phi}$ are uniformly equivalent, the flow is uniformly parabolic with $C^1$ coefficients. By the standard parabolic estimates, the $C^4$ norm of $\phi$ is bounded, and then all the curvature tensors are also bounded. The theorem is proved. \end{proof} \section{Proof of Theorem \ref{main} } In this section, we shall prove Theorem \ref{main}.
This theorem needs the technical condition that $M$ has no nonzero holomorphic vector fields, which will be removed in Section \ref{section6}. The idea is to use the estimate of the first eigenvalue proved in Section \ref{section4.4.1}. \begin{theo}Suppose that $M$ has no nonzero holomorphic vector fields and $E_1$ is bounded from below in $[\omega].$ For any $\delta, B, \Lambda>0,$ there exists a small positive constant $\epsilon(\delta, B, \Lambda, \omega)>0$ such that for any metric $\omega_0$ in the subspace ${\mathcal A}(\delta, B, \Lambda, \epsilon)$ of K\"ahler metrics $$\{\omega_{\phi}=\omega+\sqrt{-1}\partial\bar\partial \phi\;|\; Ric(\omega_{\phi})>-1+\delta,\; |\phi| \leq B, \;|Rm|(\omega_{\phi})\leq \Lambda, \; E_1(\omega_{\phi})\leq \inf E_1+\epsilon \}$$ the K\"ahler-Ricci flow will deform it exponentially fast to a K\"ahler-Einstein metric in the limit. \end{theo} \begin{flushleft} \begin{proof} Let $\omega_0=\omega+\sqrt{-1}\partial\bar\partial \phi(0)\in {\mathcal A}(\delta, B, \Lambda, \epsilon)$, where $\epsilon$ will be determined later. Since $E_1(0)\leq \inf E_1 +\epsilon,$ by Lemma \ref{lem5.9} we have $$E_0(0)-E_0(\infty)\leq \frac {\epsilon}2<1.$$ Here we choose $\epsilon<2.$ Therefore, we can normalize the K\"ahler-Ricci flow such that for the normalized solution $\psi(t)$,\begin{equation} 0< c(t), \;\int_0^{\infty} c(t)dt<1,\label{a1}\end{equation} where $c(t)=\frac 1V\int_M\;\dot\psi\omega^n_{\psi}.$ Now we give the details on how to normalize the solution to satisfy (\ref{a1}).
Since $\omega_0=\omega+\sqrt{-1}\partial\bar\partial \phi(0)\in {\mathcal A}(\delta, B, \Lambda, \epsilon)$, by Lemma {\ref{lem2.10}} we have $$C_2(\Lambda, B, \omega)\omega\leq \omega_0\leq C_3(\Lambda, B, \omega)\omega.$$ By the equation of K\"ahler-Ricci flow, we have $$|\dot\phi|(0)= \Big|\log \frac {\omega_{\phi}^n}{\omega^n}+\phi-h_{\omega} \Big|_{t=0}\leq C_{16}(\omega, \Lambda, B).$$ Set $\psi(t)=\phi(t)+C_0e^t,$ where $$C_0=\frac 1V\int_0^{\infty}\;e^{-t}\int_M\; |\nabla \dot\phi|^2\;\omega^n_ {\phi}\wedge dt -\frac 1V\int_M\; \dot\phi \omega_{\phi}^n\Big|_{t=0}.$$ Then (\ref{a1}) holds and $$|C_0|\leq 1+C_{16},$$ and $$|\psi|(0), \;|\dot\psi|(0)\leq B+1+C_{16}:=B_0.$$ \vskip 10pt \textbf{STEP 1.} (Estimates for $t\in [2T_1, 6T_1]$). By Lemma \ref{lem2.1} there exists a constant $T_1(\delta, \Lambda)$ such that \begin{equation} Ric(t)>-1+\frac {\delta}2,\; \;\;{ and}\;\;\; |Rm|(t)\leq 2\Lambda, \qquad \forall t\in [0, 6T_1].\label{5.20}\end{equation} By Lemma {\ref{lem2.5}} and the equation (\ref{5.20}), we can choose $\epsilon$ small enough so that \begin{equation}|Ric-\omega|(t)\leq C_1(T_1, \Lambda)\epsilon<\frac 12,\qquad \forall t\in [2T_1, 6T_1],\label{5.21}\end{equation} and \begin{equation}|\dot \psi-c(t)|\leq C(\sigma)C_1(T_1, \Lambda)\epsilon<1, \qquad\forall t\in [2T_1, 6T_1].\label{5.22}\end{equation} Then by the inequality (\ref{a1})\begin{equation}|\dot\psi|(t)\leq 1+|c(t)|\leq 2,\qquad \forall t\in [2T_1, 6T_1]. \label{5.25}\end{equation} Since the equation for $\dot\psi$ is $$\pd {}t\dot\psi=\Delta_{\psi} \dot\psi+\dot\psi,$$ the maximum principle gives \begin{equation} |\dot\psi|(t)\leq |\dot\psi|(0)e^{2T_1}\leq B_0e^{2T_1},\qquad \forall t\in [0, 2T_1]. \label{5.24} \end{equation} Thus, for any $t\in [2T_1, 6T_1]$ we have \begin{eqnarray*}|\psi|(t)&\leq &|\psi|(0)+\int_0^{2T_1}\; |\dot\psi|ds+\int_{2T_1}^t|\dot\psi|ds\\ &\leq &B_0+2T_1B_0e^{2T_1}+8T_1,\end{eqnarray*} where the last inequality used (\ref{5.25}) and (\ref{5.24}).
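More explicitly, the two integrals in the last estimate are controlled, by (\ref{5.24}) and (\ref{5.25}) respectively, as $$\int_0^{2T_1}\; |\dot\psi|ds\leq 2T_1B_0e^{2T_1}\; \;\;{ and}\;\;\; \int_{2T_1}^t|\dot\psi|ds\leq 2(t-2T_1)\leq 8T_1,$$ since $t\leq 6T_1$.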
For simplicity, we define \begin{eqnarray*} B_1:&=&B_0+2T_1B_0e^{2T_1}+8T_1+2,\\ B_k:&=&B_{k-1}+2,\qquad 2\leq k\leq 4. \end{eqnarray*} Then $$|\dot\psi|(t),\;|\psi|(t)\leq B_1,\qquad \forall t\in [2T_1, 6T_1].$$ By Theorem \ref{theoRm} we have $$|Rm|(t)\leq C_{11}(B_1, \Lambda_{\omega}, 1),\qquad \forall t\in [2T_1, 6T_1],$$ where $\Lambda_{\omega}$ is an upper bound of curvature tensor with respect to the metric $\omega,$ and $C_{11}$ is a constant obtained in Theorem \ref{theoRm}. Set $\Lambda_0=C_{11}(B_4, \Lambda_{\omega}, 1)$, we have $$|Rm|(t)\leq \Lambda_0, \qquad \forall t\in [2T_1, 6T_1].$$ \vskip 10pt \textbf{STEP 2.}(Estimate for $t\in [2T_1+2T_2, 2T_1+6T_2]$). By STEP 1, we have $$|Ric-\omega|(2T_1)\leq C_1\epsilon<\frac 12 \; \;\;{ and}\;\;\; |Rm|(2T_1)\leq\Lambda_0.$$ By Lemma {\ref{lem2.1}}, there exists a constant $T_2(\frac 12, \Lambda_0)\in (0, T_1]$ such that $$|Rm|(t)\leq 2\Lambda_0,\; \;\;{ and}\;\;\; Ric(t)\geq 0,\qquad \forall t\in [2T_1, 2T_1 +6T_2].$$ Recall that $E_1\leq \inf E_1+\epsilon ,$ by Lemma \ref{lem2.2} and Lemma \ref{lem2.5} there exists a constant $C_1'(T_2, \Lambda_0)$ such that $$|Ric-\omega|(t)\leq C_1'(T_2, \Lambda_0)\epsilon, \qquad\forall t\in [2T_1 +2T_2, 2T_1+6T_2].$$ Choose $\epsilon$ small enough so that $C_1'(T_2, \Lambda_0)\epsilon<\frac 12.$ Then by Lemma \ref{lem2.5}, $$|\dot\psi-c(t)|_{C^0}\leq C(\sigma)C_1'(T_2, \Lambda_0)\epsilon,\qquad \forall t\in [2T_1+2T_2, 2T_1+6T_2].$$ Choose $\epsilon$ small enough so that $ C(\sigma)C_1'(T_2, \Lambda_0)\epsilon<1.$ Thus, we can estimate the $C^0$ norm of $\psi$ for any $t\in [2T_1+2T_2, 2T_1+6T_2]$ \begin{eqnarray*}|\psi(t)|&\leq &|\psi|(2T_1+2T_2)+\Big|\int_{2T_1+2T_2}^t\; \Big(\pd {\psi}{s}-c(s)\Big)ds\Big|+\Big|\int_0^t\;c(s)ds\Big|\\ &\leq &B_1+ 4T_2C(\sigma)C_1'(T_2, \Lambda_0)\epsilon+1. 
\end{eqnarray*} Choose $\epsilon$ small enough such that $4T_2C(\sigma)C_1'(T_2, \Lambda_0)\epsilon<1,$ then $$|\psi(t)|\leq B_2, \qquad\forall t\in [2T_1+2T_2, 2T_1+6T_2].$$ Since $M$ has no nonzero holomorphic vector fields, applying Theorem \ref{lem2.8} with the parameters $\eta=C_1'\epsilon$, $A=1$, $|\phi|\leq B_4$, if we choose $\epsilon$ small enough, there exists a constant $\gamma(C_1'\epsilon, B_4, 1, \omega)$ such that the first eigenvalue of the Laplacian $\Delta_{\psi}$ satisfies $$\lambda_1(t)\geq 1+\gamma>1, \qquad \forall t\in [2T_1+2T_2, 2T_1+6T_2].$$ \textbf{STEP 3.} In this step, we want to prove the following claim: \begin{claim}For any positive number $S\geq 2T_1+6T_2$, if $$|Ric-\omega|(t)\leq C_1'(T_2, \Lambda_0)\epsilon<\frac 12 \; \;\;{ and}\;\;\; |\psi(t)| \leq B_3, \qquad\forall t\in [2T_1+2T_2, S],$$ then we can extend the solution $g(t)$ to $[2T_1+2T_2, S+4T_2]$ such that the above estimates still hold for $t\in [2T_1+2T_2, S+4T_2]$. \end{claim} \begin{proof}By the assumption and Lemma {\ref{lem2.5}}, we have $$|\dot \psi(t)-c(t)|_{C^0}\leq C(\sigma)C_1'(T_2, \Lambda_0)\epsilon, \qquad \forall t\in [2T_1+2T_2, S].$$ Note that in STEP 2 we showed that $C(\sigma)C_1'(T_2, \Lambda_0)\epsilon<1.$ Then $$|\dot\psi|(t)\leq 2,\qquad \forall t\in [2T_1+2T_2, S].$$ Therefore, $|\psi|, |\dot\psi|\leq B_3$.
By Theorem \ref{theoRm} and the definition of $\Lambda_0,$ we have $$|Rm|(t)\leq \Lambda_0,\qquad \forall t\in [2T_1+2T_2, S].$$ By Lemma {\ref{lem2.1}} and the definition of $T_2$, $$|Rm|(t)\leq 2\Lambda_0, \;\;Ric(t)\geq 0, \qquad\forall t\in [S-2T_2, S +4T_2].$$ Thus, by Lemma \ref{lem2.2} and Lemma \ref{lem2.5} we have $$|Ric-\omega|(t)\leq C_1'(T_2, \Lambda_0)\epsilon,\qquad \forall t\in [S, S +4T_2],$$ and $$|\dot\psi-c(t)|_{C^0}\leq C(\sigma)C_1'(T_2, \Lambda_0)\epsilon, \qquad \forall t\in [S, S+4T_2].$$ Then we can estimate the $C^0$ norm of $\psi$ for $t\in [S, S+4T_2],$ \begin{eqnarray*}|\psi(t)|&\leq &|\psi|(S)+\Big|\int_S^{S+4T_2}\; \Big(\pd {\psi}{s}-c(s)\Big)ds\Big| +\Big|\int_0^{\infty}\;c(s)ds\Big|\\ &\leq &B_3+4T_2C(\sigma)C_1'(T_2, \Lambda_0)\epsilon+1 \\ &\leq &B_4. \end{eqnarray*} Then by Theorem \ref{lem2.8} and the definition of $\gamma$, the first eigenvalue of the Laplacian $\Delta_{\psi}$ $$\lambda_1(t)\geq 1+\gamma>1, \qquad\forall t\in [2T_1+2T_2, S+4T_2].$$ Note that $$\mu_0(2T_1+2T_2)=\frac 1V\int_M\;(\dot\psi-c(t))^2\omega_{\psi}^n \leq(C(\sigma)C_1'\epsilon)^2 $$ and \begin{eqnarray*}\mu_1(2T_1+2T_2)&=&\frac 1V\int_M\;|\nabla\dot\psi|^2\omega_ {\psi}^n\\ &=&\frac 1V\int_M\;(\dot\psi-c(t))(R(\omega_{\psi})-n)\omega_{\psi} ^n\\ &\leq&C(\sigma)(C_1'\epsilon)^2. 
\end{eqnarray*} By Lemma \ref{lem2.19}, we can choose $\epsilon$ small enough such that \begin{eqnarray*}|\psi(t)|&\leq& |\psi(2T_1+3T_2)|+\frac {C(n, \sigma)}{\alpha T_2^{\frac {m}{4}}} (\sqrt{\mu_0(2T_1+2T_2)}+\frac 1{\alpha_1}\mu_1(2T_1+2T_2)) +1\\ &\leq&B_2+\frac {C(n, \sigma)}{\alpha T_2^{\frac {m}{4}}}(1+\frac 1{\alpha_1}C_1'\epsilon) C(\sigma)C_1'\epsilon+1\\ &\leq &B_3 \end{eqnarray*} for $t\in [S, S+4T_2].$ Note that $\epsilon$ doesn't depend on $S$ here, so it won't become smaller as $S\rightarrow \infty.$ \end{proof} \textbf{STEP 4.} By step 3, we know the bisectional curvature is uniformly bounded and the first eigenvalue $\lambda_1(t)\geq 1+\eta>1$ uniformly for some positive constant $\eta>0.$ Thus, following the argument in \cite{[chen-tian2]}, the K\"ahler-Ricci flow converges to a K\"ahler-Einstein metric exponentially fast. This theorem is proved. \end{proof} \end{flushleft} \section{Proof of Theorem \ref{main2}}\label{section6} In this section, we shall use the pre-stable condition to drop the assumptions that $M$ has no nonzero holomorphic vector fields, and the dependence of the initial K\"ahler potential. The proof here is roughly the same as in the previous section, but there are some differences. In the STEP 1 of the proof below, we will choose a new background metric at time $t=2T_1$, so the new K\"ahler potential with respect to the new background metric at $t=2T_1$ is $0$, and has nice estimates afterwards. Notice that all the estimates, particularly in Theorem \ref{theo4.18} and \ref{theoRm}, are essentially independent of the choice of the background metric. Therefore the choice of $\epsilon$ will not depend on the initial K\"ahler potential $\phi(0)$. This is why we can remove the assumption on the initial K\"ahler potential. As in Theorem \ref{main}, the key point of the proof is to use the improved estimate on the first eigenvalue in Section \ref{section4.4.2} (see Claim \ref{last} below). 
Since the curvature tensors are bounded in some time interval, by Shi's estimates the gradients of the curvature tensors are also bounded. Then the assumptions of Theorem \ref{theo4.18} are satisfied, and we can use the estimate of the first eigenvalue. Now we state the main result of this section. \begin{theo}Suppose $M$ is pre-stable, and $E_1$ is bounded from below in $[\omega]$. For any $\delta, \Lambda>0,$ there exists a small positive constant $\epsilon(\delta, \Lambda)>0$ such that for any metric $\omega_0$ in the subspace ${\mathcal A}(\delta, \Lambda, \omega, \epsilon)$ of K\"ahler metrics $$\{\omega_{\phi}=\omega+\sqrt{-1}\partial\bar\partial \phi\;|\; Ric(\omega_{\phi})>-1+\delta, \; |Rm|(\omega_{\phi})\leq \Lambda,\; E_1(\omega_{\phi})\leq \inf E_1+\epsilon \},$$ the K\"ahler-Ricci flow will deform it exponentially fast to a K\"ahler-Einstein metric in the limit. \end{theo} \begin{flushleft} \begin{proof} Let $\omega_0\in {\mathcal A}(\delta, \Lambda, \omega, \epsilon)$, where $\epsilon$ will be determined later. By Lemma \ref{lem2.1} there exists a constant $T_1(\delta, \Lambda)$ such that $$Ric(t)>-1+\frac {\delta}2 \; \;\;{ and}\;\;\; |Rm|(t)\leq 2\Lambda, \qquad\forall t\in [0, 6T_1].$$ By Lemma {\ref{lem2.5}}, we can choose $\epsilon$ small enough so that \begin{equation}|Ric-\omega|(t)\leq C_1(T_1, \Lambda)\epsilon<\frac 12, \qquad\forall t\in [2T_1, 6T_1],\label{8.29}\end{equation} and \begin{equation}|\dot \phi-c(t)|\leq C(\sigma)C_1(T_1, \Lambda)\epsilon<1, \qquad\forall t\in [2T_1, 6T_1].\label{8.30}\end{equation} \textbf{STEP 1.} (Choose a new background metric).
Let $\underline \omega=\omega+\sqrt{-1}\partial\bar\partial \phi(2T_1)$ and let $\underline \phi(t)$ be the solution to the following K\"ahler-Ricci flow $$\left\{\begin{array}{l} \pd {\underline \phi(t)}t=\log \frac {(\underline \omega+\sqrt{-1}\partial\bar\partial \underline\phi)^n}{\underline \omega^n} +\underline \phi-h_{\underline\omega},\qquad t\geq 2T_1,\\ \underline \phi(2T_1)=0.\\ \end{array} \right. $$ Here $h_{\underline \omega}$ satisfies the following conditions $$Ric(\underline \omega)-\underline \omega=\sqrt{-1}\partial\bar\partial h_{\underline \omega}\; \;\;{ and}\;\;\; \int_M\; h_{\underline \omega} \underline \omega^n=0.$$ Then the metric $\underline \omega(t)=\underline \omega+\sqrt{-1}\partial\bar\partial \underline\phi(t)$ satisfies $$\pd {}t\underline\omega(t)=-Ric (\underline \omega(t))+\underline \omega(t)\; \;\;{ and}\;\;\; \underline \omega(2T_1)=\omega+ \sqrt{-1}\partial\bar\partial \phi(2T_1).$$ By the uniqueness of K\"ahler-Ricci flow, we have $$\underline \omega(t)=\omega+\sqrt{-1}\partial\bar\partial \phi(t),\qquad \forall t\geq 2T_1.$$ Since the Sobolev constant is bounded and $$|\Delta_{\underline \omega}h_{\underline \omega}|=|R(\underline \omega)-n|\leq C_1(T_1, \Lambda)\epsilon,$$ we have $$\Big|\pd {\underline \phi}{t}\Big|(2T_1)=|h_{\underline \omega}|\leq C(\sigma)C_1(T_1, \Lambda)\epsilon.$$ Since $E_1$ is decreasing in our case, we have $$E_1(\underline \omega)\leq E_1(\omega+\sqrt{-1}\partial\bar\partial \phi(0))\leq \inf E_1+\epsilon.$$ By Lemma \ref{lem5.9}, we have $$E_0(\underline \omega)\leq \inf E_0+\frac \ee2.$$ Thus, by Lemma \ref{lem2.6} we have $$\frac 1V\int_{2T_1}^{\infty}\; e^{-t}\int_M\; \Big|\nabla \pd {\underline\phi} {t}\Big|^2\underline\omega(t)^n\wedge dt <\frac {\epsilon}2<1.$$ Here we choose $\epsilon<2.$ Set $\psi(t)=\underline \phi(t)+C_0e^{t-2T_1},$ where $$C_0=\frac 1V\int_{2T_1}^{\infty}\; e^{-t}\int_M\; \Big|\nabla \pd {\underline \phi}{t}\Big|^2\underline\omega(t)^n\wedge dt-\frac 1V \int_M\; 
\pd {\underline \phi}t\underline\omega(t)^n\Big|_{t=2T_1}. $$ Then $$|\psi(2T_1)|,\;\; \Big|\pd {\psi}{t}\Big|(2T_1)\leq 2,$$ and $$0< \underline c(t),\;\;\int_{2T_1}^{\infty}\; \underline c(t)dt<1,$$ where $\underline c(t)=\frac 1V\int_M\; \pd {\psi}{t} \underline\omega_{\psi}^n.$ Since $$|\dot\psi-\underline c(t)|=|\dot\phi-c(t)|\leq C(\sigma)C_1(T_1, \Lambda) \epsilon,\qquad \forall t\in [2T_1, 6T_1],$$ we have $$|\dot\psi|(t)\leq 2, \qquad \forall t\in [2T_1, 6T_1],$$ and \begin{eqnarray*} |\psi|(t)&\leq& |\psi|(2T_1)+\Big|\int_{2T_1}^{t}\; (\dot\psi-\underline c(s))ds\Big|+\Big|\int_{2T_1}^{\infty}\; \underline c(s)ds\Big|\\&\leq&3+4T_1C(\sigma)C_1(T_1, \Lambda)\epsilon, \qquad \forall t\in [2T_1, 6T_1].\end{eqnarray*} Choose $\epsilon$ small enough such that $4T_1 C(\sigma)C_1(T_1, \Lambda)\epsilon<1$, and define $B_k=2k+2$. Then $$|\psi|, |\dot\psi|\leq B_1, \qquad \forall t\in [2T_1, 6T_1].$$ By Theorem \ref{theoRm}, we have $$|Rm|(t)\leq C_{11}(B_1, 2\Lambda, 1),\qquad \forall t\in [2T_1, 6T_1].$$ Here $C_{11}$ is a constant obtained in Theorem \ref{theoRm}. Let $\Lambda_0:= C_{11}(B_3, 2\Lambda, 1)$; then $$|Rm|(t)\leq \Lambda_0,\qquad \forall t\in [2T_1, 6T_1].$$ \vskip 10pt \textbf{STEP 2.}(Estimates for $t\in [2T_1+2T_2, 2T_1+6T_2]$). By Step 1, we have $$|Ric-\omega|(2T_1)\leq C_1(T_1, \Lambda)\epsilon<\frac 12,\; \;\;{ and}\;\;\; |Rm|(2T_1)\leq \Lambda_0.$$ By Lemma {\ref{lem2.1}}, there exists a constant $T_2(\frac 12, \Lambda_0)\in (0, T_1]$ such that \begin{equation}|Rm|(t)\leq 2\Lambda_0,\; \;\;{ and}\;\;\; Ric(t)\geq 0,\qquad \forall t\in [2T_1, 2T_1+6T_2].\label{7.28}\end{equation} Recall that $E_1\leq \inf E_1+\epsilon$; by Lemma \ref{lem2.2} and Lemma \ref{lem2.5} there exists a constant $C_1'(T_2, \Lambda_0)$ such that $$|Ric-\omega|(t)\leq C_1'(T_2, \Lambda_0)\epsilon, \qquad \forall t\in [2T_1+2T_2, 2T_1+6T_2].$$ Choose $\epsilon$ small enough so that $C_1'(T_2, \Lambda_0)\epsilon<\frac 12$.
Then by Lemma \ref{lem2.5}, $$|\dot\psi(t)-\underline c(t)|_{C^0}\leq C(\sigma)C_1'(T_2, \Lambda_0)\epsilon, \qquad \forall t\in [2T_1+2T_2, 2T_1+6T_2].$$ Choose $\epsilon$ small enough such that $C(\sigma)C_1'(T_2, \Lambda_0)\epsilon<1.$ Thus, we can estimate the $C^0$ norm of $\psi$ for any $t\in [2T_1+2T_2, 2T_1+6T_2]$: \begin{eqnarray*}| {\psi}(t)|&\leq&|\psi|(2T_1+2T_2)+ \Big|\int_{2T_1+2T_2}^t\; \Big(\pd { \psi}s-\underline c(s)\Big)ds\Big|+\Big|\int_{2T_1+2T_2}^{\infty}\; \underline c(s)ds\Big|\\ &\leq &B_1+4T_2 C(\sigma)C_1'(T_2, \Lambda_0)\epsilon +1\\&\leq &B_2. \end{eqnarray*} Here we choose $\epsilon$ small enough such that $4T_2 C(\sigma)C_1'(T_2, \Lambda_0)\epsilon<1$. Thus, by the definition of $\Lambda_0,$ we have $$|Rm|(t)\leq \Lambda_0, \qquad\forall t\in [2T_1+2T_2, 2T_1+6T_2].$$ \textbf{STEP 3.} In this step, we prove the following claim: \begin{claim}\label{last}For any positive number $S\geq 2T_1+6T_2$, if $$|Ric-\omega|(t)\leq C_1'(T_2, \Lambda_0)\epsilon<\frac 12, \; \;\;{ and}\;\;\; |Rm|(t)\leq \Lambda_0,\qquad \forall t\in [2T_1+2T_2, S],$$ then we can extend the solution $g(t)$ to $[2T_1+2T_2, S+4T_2]$ such that the above estimates still hold for $t\in [2T_1+2T_2, S+4T_2]$. \end{claim} \begin{proof} By Lemma {\ref{lem2.1}} and the definition of $T_2$, $$|Rm|(t)\leq 2\Lambda_0,\; Ric(t)\geq 0, \qquad\forall t\in [2T_1+2T_2, S +4T_2].$$ Thus, by Lemma \ref{lem2.2} and Lemma \ref{lem2.5} we have $$|Ric-\omega|(t)\leq C_1'(T_2, \Lambda_0)\epsilon,\qquad \forall t\in [S-2T_2, S +4T_2].$$ Therefore, we have $$|\dot\psi(t)-\underline c(t)|_{C^0}\leq C(\sigma)C_1'(T_2, \Lambda_0)\epsilon, \qquad \forall t\in [2T_1+2T_2, S+4T_2].$$ By Theorem \ref{main4} the $K$-energy is bounded from below, and hence the Futaki invariant vanishes.
Therefore, we have $$\int_M\; X(\dot\psi)\underline\omega_{\psi}^n=0,\qquad \forall X\in \eta_r(M, J).$$ By the assumption that $M$ is pre-stable and Theorem \ref{theo4.18}, if $\epsilon$ is small enough, there exists a constant $\gamma(C_1'\epsilon, 2\Lambda_0)$ such that $$\int_M\; |\nabla \dot\psi|^2\underline\omega^n_{\psi}\geq (1+\gamma)\int_M\; | \dot\psi-\underline c(t)|^2\underline\omega^n_{\psi}.$$ Therefore, Lemma \ref{lem2.14} still holds, i.e.\ there exists a constant $\alpha(\gamma, C_1'\epsilon, \sigma)>0$ such that for any $t\in [2T_1+2T_2, S+4T_2]$ $$\mu_1(t)\leq \mu_1(2T_1+2T_2)e^{-\alpha(t-2T_1-2T_2)},$$ and $$\mu_0(t)\leq \frac {1}{1-C_1'\epsilon}\mu_1(t)\leq 2\mu_1(2T_1+2T_2)e^{- \alpha(t-2T_1-2T_2)}. $$ Then by Lemma \ref{lem2.19}, we can choose $\epsilon$ small enough such that \begin{eqnarray*}|\psi(t)|&\leq& |\psi(2T_1+3T_2)|+\frac {C_{10}(n, \sigma)}{\alpha T_2^{\frac {m}{4}}} (\sqrt{\mu_0(2T_1+2T_2)}+\frac 1{\alpha_1}\mu_1(2T_1+2T_2)) +1\\ &\leq&B_2+\frac {C_{10}(n, \sigma)}{\alpha T_2^{\frac {m}{4}}}(1+\frac 1{\alpha_1}C_1'\epsilon) C(\sigma)C_1'\epsilon+1\\ &\leq &B_3 \end{eqnarray*} for $t\in [S, S+4T_2].$ By the definition of $\Lambda_0,$ we have $$|Rm|(t)\leq \Lambda_0,\qquad \forall t\in [S, S+4T_2].$$ \end{proof} \textbf{STEP 4.} By Step 3, the bisectional curvature is uniformly bounded and the $W^{1, 2}$ norm of $\dot{\underline \phi}-\underline c(t)$ decays exponentially. Thus, following the argument in \cite{[chen-tian2]}, the K\"ahler-Ricci flow converges to a K\"ahler-Einstein metric exponentially fast. This proves the theorem. \end{proof} \end{flushleft} \vskip3mm Xiuxiong Chen, Department of Mathematics, University of Wisconsin-Madison, Madison WI 53706, USA; [email protected]\\ Haozhao Li, School of Mathematical Sciences, Peking University, Beijing, 100871, P.R. China; [email protected]\\ Bing Wang, Department of Mathematics, University of Wisconsin-Madison, Madison WI 53706, USA; [email protected]\\ \end{document}
\begin{document} \begin{frontmatter} \title{An introduction to Lie group integrators -- basics, new developments and applications} \author[ntnu]{Elena Celledoni} \ead{[email protected]} \author[ntnu]{H{\aa}kon Marthinsen} \ead{[email protected]} \author[ntnu]{Brynjulf Owren} \ead{[email protected]} \address[ntnu]{Department of Mathematical Sciences, NTNU, N--7491 Trondheim, Norway} \begin{abstract} We give a short and elementary introduction to Lie group methods. A selection of applications of Lie group integrators are discussed. Finally, a family of symplectic integrators on cotangent bundles of Lie groups is presented and the notion of discrete gradient methods is generalised to Lie groups. \end{abstract} \begin{keyword} Lie group integrators, symplectic methods, integral preserving methods \end{keyword} \end{frontmatter} \section{Introduction} The significance of the geometry of differential equations was well understood already in the nineteenth century, and in the last few decades such aspects have played an increasing role in numerical methods for differential equations. Nowadays, there is a rich selection of integrators which preserve properties like symplecticity, reversibility, phase volume and first integrals, either exactly or approximately over long times~\cite{hairer10gni}. Differential equations are inherently connected to Lie groups, and in fact one often sees applications in which the phase space is a Lie group or a manifold with a Lie group action. In the early nineties, two important papers appeared which used the Lie group structure directly as a building block in the numerical methods. Crouch and Grossman \cite{crouch93nio} suggested to advance the numerical solution by computing flows of vector fields in some Lie algebra. Lewis and Simo \cite{lewis94caf} wrote an influential paper on Lie group based integrators for Hamiltonian problems, considering the preservation of symplecticity, momentum and energy. 
These ideas were developed in a systematic way throughout the nineties by several authors. In a series of three papers, Munthe-Kaas \cite{munthe-kaas95lbt, munthe-kaas98rkm,munthe-kaas99hor} presented what are now known as the Runge--Kutta--Munthe-Kaas methods. By the turn of the millennium, a survey paper \cite{iserles00lgm} summarised most of what was known by then about Lie group integrators. More recently, a survey paper on structure preservation appeared with part of it dedicated to Lie group integrators \cite{christiansen11tis}. The purpose of the present paper is three-fold. First, in section~\ref{sec:basics} we give an elementary, geometric introduction to the ideas behind Lie group integrators. Secondly, we present some examples of applications of Lie group integrators in sections \ref{sec:appcm} and \ref{sec:dataanalysis}. There are many such examples to choose from, and we give here only a few teasers. These first four sections should be read as a survey. But in the last two sections, new material is presented. Symplectic Lie group integrators have been known for some time, derived by Marsden and coauthors \cite{marsden01dma} by means of variational principles. In section~\ref{sec:symplie} we consider a group structure on the cotangent bundle of a Lie group and derive symplectic Lie group integrators using the model for vector fields on manifolds defined by Munthe-Kaas in~\cite{munthe-kaas99hor}. In section~\ref{sec:discdiff} we extend the notion of discrete gradient methods as proposed by Gonzalez \cite{gonzalez96tia} to Lie groups, and thereby we obtain a general method for preserving first integrals in differential equations on Lie groups. We would also like to briefly mention some of the issues we are \emph{not} pursuing in this article. One is the important family of Lie group integrators for problems of linear type, including methods based on the Magnus and Fer expansions.
An excellent review of the history, theory and applications of such integrators can be found in \cite{blanes09tme}. We will also skip all discussions of order analysis of Lie group integrators. This is a large area by itself which involves technical tools and mathematical theory which we do not wish to include in this relatively elementary exposition. There have been several new developments in this area recently, in particular by Lundervold and Munthe-Kaas, see e.g.\ \cite{lundervold11hao}. \section{Lie group integrators} \label{sec:basics} The simplest consistent method for solving ordinary differential equations is the Euler method. For an initial value problem of the form \begin{equation*} \dot{y} = F(y),\quad y(0)=y_0, \end{equation*} one takes a small time increment $h$, and approximates $y(h)$ by the simple formula \begin{equation*} y_{1} = y_0 + hF(y_0), \end{equation*} advancing along the straight line coinciding with the tangent at $y_0$. Another way of thinking about the Euler method is to consider the constant vector field $F_{y_0}(y) \coloneqq F(y_0)$ obtained by parallel translating the vector~$F(y_0)$ to all points of phase space. A step of the Euler method is nothing else than computing the exact $h$-flow of this simple vector field starting at $y_0$. In Lie group integrators, the same principle is used, but allowing for more advanced vector fields than the constant ones. A Lie group generalisation of the Euler method is called the Lie--Euler method, and we shall illustrate its use through an example \cite{crouch93nio}. \paragraph{Example, the Duffing equation} Consider the system in $\mathbf{R}^2$ \begin{equation} \label{eq:duffing} \begin{aligned} \dot{x} &= y, \\ \dot{y} &= -a x - b x^3, \end{aligned} \qquad a\geq 0, b\geq 0, \end{equation} a model used to describe the buckling of an elastic beam. 
Locally, near a point $(x_0,y_0)$ we could use the approximate system \begin{equation} \begin{alignedat}{2} \label{eq:duff:sl2:frozen} \dot{x} &= y, &\qquad x(0) &= x_0,\\ \dot{y} &= -(a+b x_0^2) x, & y(0) &= y_0, \end{alignedat} \end{equation} which has the exact solution \begin{equation}\label{eq:duffing:sl2:flow} \bar{x}(t) = x_0\cos\omega t+ \frac{y_0}{\omega}\sin\omega t,\quad \bar{y}(t) = y_0\cos\omega t- \omega x_0\sin\omega t,\quad \omega=\sqrt{a+bx_0^2}. \end{equation} Alternatively, we may consider the local problem \[ \begin{aligned} \dot{x} &= y, \\ \dot{y} &= -ax - bx_0^3, \end{aligned} \] having exact solution \[ \begin{aligned} \bar{x}(t) &= x_0\cos\alpha t + \frac{y_0}{\alpha}\sin\alpha t + b\,x_0^3\,\frac{\cos\alpha t-1}{\alpha^2},\\ \bar{y}(t) &= y_0\cos\alpha t - \alpha x_0\sin\alpha t - b\,x_0^3\, \frac{\sin\alpha t}{\alpha}, \end{aligned} \qquad \alpha=\sqrt{a}. \] In each of the two cases, one may take $x_1=\bar{x}(h)$, $y_1=\bar{y}(h)$ as the numerical approximation at time $t=h$. The same procedure is repeated in subsequent steps. \begin{figure} \caption{$(\mathbf{R}^d,+)$-frozen vector field (left) and $\mathfrak{sl}(2)$-frozen vector field (right) for the Duffing equation. Both are frozen at $(x_0,y_0)=(0.75,0.75)$. The thin black curve in each plot shows the flow of the frozen vector field for $0\leq t\leq 20$. The thicker curve in each plot is the exact flow of the Duffing equation. } \label{fig:duffing} \end{figure} A common framework for discussing these two cases is provided by the use of frames, i.e.\ a set of vector fields which at each point spans the tangent space. In the first case, the numerical method applies the frame \begin{equation} \label{eq:XYsl2} X = \begin{bmatrix} y\\0 \end{bmatrix} \eqqcolon y\, \partial x,\quad Y = \begin{bmatrix} 0\\x \end{bmatrix} \eqqcolon x\, \partial y.
\end{equation} Taking the negative Jacobi--Lie bracket (also called the commutator) between $X$ and $Y$ yields the third element of the standard basis for the Lie algebra $\mathfrak{sl}(2)$, i.e.\ \begin{equation} \label{eq:Hsl2} H = -[X,Y] = x\,\partial x - y\,\partial y, \end{equation} so that the frame may be augmented to consist of $\{X, Y, H\}$. In the second case, the vector fields $E_1=y\,\partial x - ax\,\partial y$ and $E_2=\partial y$ can be used as a frame, but again we choose to augment these two fields with the commutator $E_3=-[E_1,E_2]=\partial x$ to obtain the Lie algebra of the special Euclidean group $\SE(2)$ consisting of translations and rotations in the plane. The situation is illustrated in Figure~\ref{fig:duffing}. In the left part, we have considered the constant vector field corresponding to the Duffing vector field evaluated at $(x_0,y_0)=(0.75,0.75)$, and the exact flow of this constant field is just the usual Euler method, a straight line. In the right part, we have plotted the vector field defined in \eqref{eq:duff:sl2:frozen} with the same $(x_0,y_0)$ along with its flow~\eqref{eq:duffing:sl2:flow}. The exact flow of \eqref{eq:duffing} is shown in both plots (thick curve). In general, a way to think about Lie group integrators is that we have a manifold $M$ where there is such a frame available; $\{E_1,\ldots,E_d\}$ such that at any point $p\in M$ one has \[ \operatorname{span}\{E_1(p),\ldots,E_d(p)\} = T_pM. \] Frames with this property are said to be locally transitive. The frame may be a linear space or in many cases even a Lie algebra $\ensuremath{\mathfrak{g}}$ of vector fields. In the example with Duffing's equation, the set $\{X, Y, H\}$ is locally transitive on $\mathbf{R}^2\setminus\{0\}$ and $\{E_1,E_2,E_3\}$ is locally transitive on $\mathbf{R}^2$. Given an arbitrary vector field $F$ on $M$, then at any point $p\in M$ there exists a vector field $F_p$ in the span of the frame vector fields such that $F_p(p) = F(p)$. 
An explicit way of writing this is by using a set of basis vector fields $E_1,\ldots,E_d$ for $\ensuremath{\mathfrak{g}}$, such that any smooth vector field $F$ has a representation \begin{equation} \label{VFframerep} F(y) = \sum_{k=1}^d f_k(y) E_k(y), \end{equation} for some functions $f_k \colon M\rightarrow\mathbf{R}$. The vector fields $F_p\in\ensuremath{\mathfrak{g}}$, called \emph{vector fields with frozen coefficients} by Crouch and Grossman \cite{crouch93nio}, are then obtained as \[ F_p(y) = \sum_{k=1}^d f_k(p) E_k(y). \] In the example with the Duffing equation we took $E_1=X, E_2=Y$, $f_1(x,y)=1$ and $f_2(x,y)=-(a+bx^2)$. The Lie--Euler method reads in general \begin{equation} \label{eq:Lie--Euler} y_{n+1} = \exp(hF_{y_n})y_n, \end{equation} where $\exp$ denotes the flow of a vector field. A more interesting example, also found in \cite{crouch93nio}, is obtained by choosing $M=S^2$, the 2-sphere. A suitable way to induce movements of the sphere is that of rotations, that is, by introducing the Lie group $\SO(3)$ consisting of orthogonal matrices with unit determinant. The corresponding Lie algebra $\mathfrak{so}(3)$ of vector fields is spanned by \[ E_1(x,y,z)=-z\,\partial y + y\,\partial z,\quad E_2(x,y,z)=z\,\partial x - x\,\partial z,\quad E_3(x,y,z)=-y\,\partial x + x\,\partial y. \] We note that $xE_1(x,y,z)+yE_2(x,y,z)+zE_3(x,y,z)=0$, showing that the functions $f_k$ in \eqref{VFframerep} are not unique. A famous example of a system whose solution evolves on $S^2$ is the free rigid body, governed by the Euler equations \begin{equation} \label{eq:euler_frb} \dot{x} = \left(\frac1{I_3}-\frac1{I_2}\right)\,yz, \quad \dot{y} = \left(\frac1{I_1}-\frac1{I_3}\right)\,xz, \quad \dot{z} = \left(\frac1{I_2}-\frac1{I_1}\right)\,xy, \end{equation} where $x, y, z$ are the coordinates of the angular momentum relative to the body, and $I_1, I_2, I_3$ are the principal moments of inertia.
A choice of representation \eqref{VFframerep} is obtained with \[ f_1(x,y,z) = -\frac{x}{I_1},\quad f_2(x,y,z)=-\frac{y}{I_2},\quad f_3(x,y,z) = -\frac{z}{I_3}, \] so that the ODE vector field can be expressed in the form \[ F(x,y,z)= -\frac{x}{I_1} \begin{bmatrix*}[r] 0\\ -z\\ y \end{bmatrix*} -\frac{y}{I_2} \begin{bmatrix*}[r] z\\ 0\\ -x \end{bmatrix*} -\frac{z}{I_3} \begin{bmatrix*}[r] -y\\ x\\ 0 \end{bmatrix*}. \] We compute the vector field with coefficients frozen at $p_0=(x_0,y_0,z_0)$, \[ F_{p_0}(x,y,z)= \mathbf{F}_{p_0} \begin{bmatrix} x\\ y \\ z \end{bmatrix} \coloneqq \begin{bmatrix*}[r] 0 & \frac{z_0}{I_3} & -\frac{y_0}{I_2} \\ -\frac{z_0}{I_3} & 0 & \frac{x_0}{I_1} \\ \frac{y_0}{I_2} & -\frac{x_0}{I_1} & 0 \end{bmatrix*} \begin{bmatrix} x\\ y \\ z \end{bmatrix}. \] The $h$-flow of this vector field is the solution of a linear system of ODEs and can be expressed in terms of the matrix exponential $\mathtt{expm}(h\mathbf{F}_{p_0})$. The Lie--Euler method can be expressed as follows: \begin{algorithmic} \State $p_0 \gets (x_0,y_0,z_0)$ \For{$n \gets 0, 1, \dotsc$} \State $p_{n+1} \gets \mathtt{expm}(h\mathbf{F}_{p_n}) p_n$ \EndFor \end{algorithmic} Notice that the matrix to be exponentiated belongs to the matrix group $\mathfrak{so}(3)$ of real skew-symmetric matrices. The celebrated Rodrigues' formula \[ \mathtt{expm}(A) = I + \frac{\sin\alpha}{\alpha} A + \frac{1-\cos\alpha}{\alpha^2}A^2,\qquad \alpha^2=\lVert A\rVert_2^2=\tfrac12\lVert A\rVert_F^2,\quad A\in\mathfrak{so}(3), \] provides an inexpensive way to compute this. Whereas the notion of frames was used by Crouch and Grossman in their pioneering work \cite{crouch93nio}, a different type of notation was used in a series of papers by Munthe-Kaas \cite{munthe-kaas95lbt,munthe-kaas98rkm,munthe-kaas99hor}, see also \cite{lundervold11hao} for a more modern treatment. Let $G$ be a finite dimensional Lie group acting transitively on a manifold $M$. 
A Lie group action is generally a map from $G\times M$ into $M$, having the properties that \begin{equation*} \label{eq:LGaction} e\cdot m=m,\;\forall m\in M,\qquad g\cdot(h\cdot m)=(g\cdot h)\cdot m,\ \forall g,h\in G,\; m\in M, \end{equation*} where $e$ is the group identity element, and the first $\cdot$ in the right hand side of the second identity is the group product. Transitivity means that for any two points $m_1, m_2\in M$ there exists a group element $g\in G$ such that $m_2=g\cdot m_1$. We denote the Lie algebra of $G$ by $\ensuremath{\mathfrak{g}}$. For any element $\xi\in\ensuremath{\mathfrak{g}}$ there exists a vector field on~$M$ \begin{equation} \label{eq:groupaction} X_{\xi}(m) = \left.\frac{\ensuremath{\mathrm{d}}}{\ensuremath{\mathrm{d}} t}\right\rvert_{t=0} \exp(t\xi)\cdot m \eqqcolon \lambda_*(\xi)(m). \end{equation} Munthe-Kaas introduced a generic representation of a vector field $F\in\mathcal{X}(M)$ by a map $f \colon M\rightarrow\ensuremath{\mathfrak{g}}$ such that \begin{equation} \label{eq:genpres} F(m) =\lambda_*(f(m))(m). \end{equation} The corresponding frame is obtained as $E_i=\lambda_*(e_i)$ where $\{e_1,\ldots,e_d\}$ is some basis for $\ensuremath{\mathfrak{g}}$ and one chooses the functions $f_i \colon M\rightarrow\mathbf{R}$ such that $f(m) = \sum_{i=1}^d f_i(m) e_i$. The map $\lambda_*$ is an anti-homomorphism of the Lie algebra $\ensuremath{\mathfrak{g}}$ into the Lie algebra of vector fields $\mathcal{X}(M)$ under the Jacobi--Lie bracket, meaning that \[ \lambda_*([X_m, Y_m]_{\ensuremath{\mathfrak{g}}}) =- [\lambda_*(X_m), \lambda_*(Y_m)]_{\mathrm{JL}}. \] This separation of the Lie algebra $\ensuremath{\mathfrak{g}}$ from the manifold $M$ allows for more flexibility in the way we represent the frame vector fields. 
For instance, in the example with Duffing's equation and the use of $\mathfrak{sl}(2)$, we could have used the matrix Lie algebra with basis elements \[ X_m = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},\quad Y_m = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix},\quad H_m = \begin{bmatrix*}[r] 1 & 0 \\ 0 & -1 \end{bmatrix*}, \] rather than the basis of vector fields \eqref{eq:XYsl2}, \eqref{eq:Hsl2}. The group action by $g\in \SL(2)$ on a point $m\in\mathbf{R}^2$ would then be simply $g\cdot m$, matrix-vector multiplication, and the $\exp$ in \eqref{eq:groupaction} would be the matrix exponential. The map~$f(x,y)$ would in this case be \[ f \colon (x,y) \mapsto \begin{bmatrix} 0 & y \\ -(a+bx^2) & 0 \end{bmatrix}, \] but note that since the dimension of the manifold is just two whereas the dimension of $\mathfrak{sl}(2)$ is three, there is freedom in the choice of $f$. In the example we chose not to use the third basis element $H$. \subsection{Generalising Runge--Kutta methods} \label{GeneralizingRK} In order to construct general schemes, as for instance a Lie group version of the Runge--Kutta methods, one needs to introduce intermediate stage values. This can be achieved in a number of different ways. They all have in common that when the methods are applied in Euclidean space where the Lie group is $(\mathbf{R}^m,+)$, they reduce to conventional Runge--Kutta schemes. Let us begin by studying the simple second order Heun method, sometimes called the improved Euler method. \[ k_1 = F(y_n),\quad k_2=F(y_n+hk_1),\qquad y_{n+1} = y_n + \tfrac12 h(k_1+k_2). \] Geometrically, we may think of $k_1$ and $k_2$ as constant vector fields, coinciding with the exact ODE $F(y)$ at the points $y_n$ and $y_n+hk_1$ respectively. 
The update $y_{n+1}$ can be interpreted in at least three different ways, \begin{equation} \label{eq:flowalts} \exp\left(\frac{h}{2}(k_1+k_2)\right) \cdot y_n, \quad \exp\left(\frac{h}{2}k_1\right) \cdot \exp\left(\frac{h}{2}k_2\right) \cdot y_n, \quad \exp\left(\frac{h}{2}k_2\right) \cdot \exp\left(\frac{h}{2}k_1\right) \cdot y_n. \end{equation} The first is an example of a Runge--Kutta--Munthe-Kaas method and the second is an example of a Crouch--Grossman method. All three fit into the framework of \emph{commutator-free} Lie group methods. All three suggestions above are generalisations that will reduce to Heun's method in $(\mathbf{R}^m,+)$. In principle we could extend the idea to Runge--Kutta methods with several stages \[ y_{n+1} = y_n + h\sum_{i=1}^sb_iF(Y_i),\quad Y_i = y_n + h\sum_{j=1}^s a_{ij} F(Y_j),\ i=1,\ldots,s, \] by for instance interpreting the summed expressions as vector fields with frozen coefficients whose flows we apply to the point $y_n\in M$. But it is unfortunately not true that one in this way will retain the order of the Runge--Kutta method when applied to cases where the acting group is non-abelian. Let us first describe methods as proposed by Munthe-Kaas \cite{munthe-kaas99hor}, where one may think of the method simply as a change of variable. As before, we assume that the action of $G$ on $M$ is locally transitive. Since the exponential mapping is a local diffeomorphism in some open set containing $0\in\ensuremath{\mathfrak{g}}$, it is possible to represent \emph{any} smooth curve $y(t)$ on $M$ in some neighbourhood of a point $p\in M$ by means of a curve $\sigma(t)$ through the origin of $\ensuremath{\mathfrak{g}}$ as follows \begin{equation} \label{eq:solrep} y(t) = \exp(\sigma(t))\cdot p,\qquad \sigma(0)=0, \end{equation} though $\sigma(t)$ is not necessarily unique. 
We may differentiate this curve with respect to $t$ to obtain \begin{equation} \label{eq:doty} \dot{y}(t) = \lambda_*\bigl(\dexp_{\sigma(t)}\dot{\sigma}(t)\bigr)(y(t)) = F(y(t))= \lambda_*\bigl(f(\exp(\sigma(t))\cdot p)\bigr)(y(t)). \end{equation} The details are given in \cite{munthe-kaas99hor} and the map $\dexp_\sigma \colon \ensuremath{\mathfrak{g}}\rightarrow\ensuremath{\mathfrak{g}}$ was derived by Hausdorff in \cite{hausdorff06dse} as an infinite series of commutators \begin{equation} \label{eq:dexp} \dexp_{\sigma}(v) = v + \frac{1}{2}[\sigma,v] + \frac{1}{6}[\sigma,[\sigma,v]]+\dotsb = \sum_{k=0}^\infty \frac{1}{(k+1)!} \ad_\sigma^k v = \left.\frac{\exp(z)-1}{z}\right\rvert_{z=\ad_\sigma} v, \end{equation} with the usual definition of $\ad_u(v)$ as the commutator $[u,v]$. The map $\lambda_*$ does not have to be injective, but a sufficient condition for \eqref{eq:doty} to hold is that \[ \dot{\sigma} = \dexp_{\sigma}^{-1} (f(\exp(\sigma)\cdot p)). \] This is a differential equation for $\sigma(t)$ on a linear space, and one may choose any conventional integrator for solving it. The map $\dexp_\sigma^{-1} \colon \ensuremath{\mathfrak{g}}\rightarrow\ensuremath{\mathfrak{g}}$ is the inverse of $\dexp_\sigma$ and can also be obtained by differentiating the logarithm, i.e.\ the inverse of the exponential map. From \eqref{eq:dexp} we find that one can write $\dexp_{\sigma}^{-1}(v)$ as \begin{equation} \label{eq:dexpinv} \dexp_{\sigma}^{-1}(v) = \left.\frac{z}{\exp(z)-1}\right\rvert_{z=\ad_\sigma}v = v - \frac{1}{2}[\sigma,v] + \frac{1}{12}[\sigma, [\sigma, v]]+\cdots. \end{equation} The coefficients appearing in this expansion are scaled Bernoulli numbers $\frac{B_k}{k!}$, and $B_{2k+1}=0$ for all $k\geq 1$. 
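For matrix Lie algebras, the truncated series \eqref{eq:dexpinv} is straightforward to implement with commutators. The following sketch (Python with NumPy; the helper names \texttt{ad}, \texttt{dexp}, \texttt{dexpinv} and \texttt{hat} are our own, not from any particular library) evaluates a truncation of $\dexp_\sigma^{-1}$ on $\mathfrak{so}(3)$ and checks it against a high-order truncation of $\dexp_\sigma$:

```python
import numpy as np
from math import factorial

def ad(sigma, v):
    """Matrix commutator ad_sigma(v) = [sigma, v]."""
    return sigma @ v - v @ sigma

def dexpinv(sigma, v, order=4):
    """Truncation of dexp^{-1}_sigma(v) = sum_k (B_k/k!) ad_sigma^k v,
    with scaled Bernoulli coefficients B_k/k! for k = 0..4 (B_3 = 0)."""
    coeffs = [1.0, -0.5, 1.0 / 12.0, 0.0, -1.0 / 720.0]  # order <= 4 here
    result = np.zeros_like(v)
    term = v.copy()
    for k in range(order + 1):
        result = result + coeffs[k] * term
        term = ad(sigma, term)
    return result

def dexp(sigma, v, order=12):
    """Truncation of dexp_sigma(v) = sum_k ad_sigma^k v / (k+1)!."""
    result = np.zeros_like(v)
    term = v.copy()
    for k in range(order + 1):
        result = result + term / factorial(k + 1)
        term = ad(sigma, term)
    return result

def hat(w):
    """so(3) hat map: R^3 -> 3x3 skew-symmetric matrices."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])
```

For $\sigma$ of moderate norm, \texttt{dexp(sigma, dexpinv(sigma, v))} recovers \texttt{v} up to the truncation error, consistent with \eqref{eq:dexp} and \eqref{eq:dexpinv}.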
One step of the resulting Runge--Kutta--Munthe-Kaas method is then expressed in terms of evaluations of the map $f$ as follows \begin{align*} y_{1} &= \exp\Bigl(h\sum_{i=1}^s b_i k_i\Bigr) \cdot y_0, \\ k_i &= \dexp_{h\sum_j a_{ij}k_j}^{-1} f\biggl(\exp\Bigl(h\sum_j a_{ij}k_j\Bigr) \cdot y_0\biggr),\quad i=1,\ldots,s. \end{align*} This is not so surprising seen from the perspective of the first alternative in \eqref{eq:flowalts}; the main difference is that the stages $k_i$ corresponding to the frozen vector fields $\lambda_*(k_i)$ need to be ``corrected'' by the $\dexp^{-1}$ map. Including this map in computational algorithms may seem awkward; fortunately, truncated versions of \eqref{eq:dexpinv} may frequently be used. In fact, by applying some clever tricks involving graded free Lie algebras, one can in many cases replace $\dexp^{-1}$ with a low order Lie polynomial while retaining the convergence order of the original Runge--Kutta method. Details of this can be found in \cite{munthe-kaas99cia, casas03cel}. There are also some important cases of Lie algebras for which $\dexp_\sigma^{-1}$ can be computed exactly in terms of elementary functions, among them $\mathfrak{so}(3)$, as reported in \cite{celledoni03lgm}. Notice that the representation \eqref{eq:solrep} does not depend on the use of the exponential map from $\ensuremath{\mathfrak{g}}$ to $G$. In principle, one can replace this map with any local diffeomorphism $\varphi$, where one usually scales $\varphi$ such that $\varphi(0)=e$ and $T_0\varphi = \mathrm{Id}_\ensuremath{\mathfrak{g}}$. An example of such a map is the Cayley transformation \cite{diele98tct} which can be used for matrix Lie groups of the type $G_P = \{X\in\mathbf{R}^{n\times n} \mid X^TPX = P\}$ for a nonsingular $n\times n$-matrix $P$. These include the orthogonal group~$\LieO(n)=G_I$ and the linear symplectic group~$\Sp(n)=G_J$ where $J$ is the skew-symmetric matrix of the standard symplectic form.
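As an illustration of the last point, here is a minimal NumPy sketch (the function name \texttt{cay} is ours) of the Cayley transformation $\mathrm{cay}(A) = (I-\tfrac12 A)^{-1}(I+\tfrac12 A)$; for $P=I$ it maps skew-symmetric matrices into the orthogonal group:

```python
import numpy as np

def cay(A):
    """Cayley transformation (I - A/2)^{-1} (I + A/2)."""
    n = A.shape[0]
    I = np.eye(n)
    return np.linalg.solve(I - 0.5 * A, I + 0.5 * A)

# For skew-symmetric A (the case P = I), cay(A) is orthogonal
# with determinant one, so it lies in SO(n).
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M - M.T                      # skew-symmetric, in the Lie algebra o(4)
Q = cay(A)
```

Like the exponential map, $\mathrm{cay}$ satisfies $\mathrm{cay}(0)=I$ and $T_0\,\mathrm{cay}=\mathrm{Id}$, but it only requires the solution of a linear system rather than a matrix exponential.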
Another possibility is to replace the exponential map by canonical coordinates of the second kind \cite{owren01imb}. We present here the well-known Runge--Kutta--Munthe-Kaas method based on the popular fourth order method of Kutta \cite{kutta01bzn}, having Butcher tableau \begin{equation} \label{eq:kutta} \morespacearray \begin{array}{r|rrrr} 0 & \\ \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{2} & 0 & \tfrac{1}{2} \\ 1 & 0 & 0 & 1 \\ \hline & \tfrac{1}{6} & \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{6} \end{array} \end{equation} In the Lie group method, the $\dexp^{-1}$ map has been replaced by the optimal Lie polynomials. \begin{align*} k_1 &= h f(y_0),\\ k_2 &= h f(\exp(\tfrac{1}{2}k_1) \cdot y_0), \\ k_3 &= h f(\exp(\tfrac{1}{2}k_2-\tfrac{1}{8}[k_1,k_2])\cdot y_0), \\ k_4 &= h f(\exp(k_3)\cdot y_0), & \\ y_1 &= \exp(\tfrac{1}{6}(k_1+2k_2+2k_3+k_4-\tfrac12[k_1,k_4]))\cdot y_0. \end{align*} An important advantage of the Runge--Kutta--Munthe-Kaas schemes is that it is easy to preserve the convergence order when extending them to Lie group integrators. This is not the case with for instance the schemes of Crouch and Grossman \cite{crouch93nio, owren99rkm}, where it is necessary to develop order conditions for the non-abelian case. This is also true for the commutator-free methods developed by Celledoni et al.\ \cite{celledoni03cfl}. In fact, these methods include those of Crouch and Grossman. The idea here is to allow compositions of exponentials or flows instead of commutator corrections. With stages $k_1,\ldots, k_s$ in the Lie algebra, one includes expressions of the form \[ \exp\Bigl(\sum_{i}\beta_J^i k_i\Bigr) \dotsm \exp\Bigl(\sum_{i}\beta_2^i k_i\Bigr) \cdot \exp\Bigl(\sum_{i}\beta_1^i k_i\Bigr) \cdot y_0, \] both in the definition of the stages and the update itself. In some cases it is also possible to reuse flow calculations from one stage to another, and thereby lower the computational cost of the scheme. 
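To make the fourth-order scheme above concrete, here is a sketch (our own Python/NumPy transcription, not production code) of this Runge--Kutta--Munthe-Kaas method applied to the free rigid body equations \eqref{eq:euler_frb}, with $\mathfrak{so}(3)$ acting on $S^2$ and Rodrigues' formula used for the matrix exponential:

```python
import numpy as np

def hat(w):
    """Hat map: identifies R^3 with the skew-symmetric matrices so(3)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(A):
    """Rodrigues' formula for the exponential of A in so(3),
    with alpha^2 = ||A||_2^2 = ||A||_F^2 / 2."""
    alpha = np.sqrt(0.5) * np.linalg.norm(A, 'fro')
    if alpha < 1e-12:
        return np.eye(3) + A
    return (np.eye(3) + np.sin(alpha) / alpha * A
            + (1.0 - np.cos(alpha)) / alpha**2 * A @ A)

I1, I2, I3 = 1.0, 2.0, 3.0   # principal moments of inertia (example values)

def f(m):
    """Map f: S^2 -> so(3) with F(m) = f(m) m, as in the frozen-field
    matrix for the rigid body: f(m) = -hat(m / I), componentwise."""
    return -hat(m / np.array([I1, I2, I3]))

def comm(A, B):
    """Matrix commutator [A, B]."""
    return A @ B - B @ A

def rkmk4_step(m, h):
    """One RKMK4 step with the truncated dexpinv corrections."""
    k1 = h * f(m)
    k2 = h * f(expm_so3(0.5 * k1) @ m)
    k3 = h * f(expm_so3(0.5 * k2 - 0.125 * comm(k1, k2)) @ m)
    k4 = h * f(expm_so3(k3) @ m)
    return expm_so3((k1 + 2*k2 + 2*k3 + k4 - 0.5*comm(k1, k4)) / 6.0) @ m
```

Since every update is a rotation applied to the current point, the numerical solution stays on the sphere to machine precision, while the energy $\tfrac12(x^2/I_1+y^2/I_2+z^2/I_3)$ is only preserved up to the order of the scheme.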
An extension of \eqref{eq:kutta} can be obtained as follows, setting $k_i=hf(Y_i)$ for all $i$, \begin{align*} Y_1&= y_0, \\ Y_2&=\exp(\tfrac{1}{2}k_1)\cdot y_0, \\ Y_3&=\exp(\tfrac{1}{2}k_2)\cdot y_0, \\ Y_4&= \exp(k_3-\tfrac{1}{2}k_1)\cdot Y_2, \\ y_{\frac{1}{2}} &= \exp(\tfrac{1}{12}(3k_1+2k_2+2k_3-k_4))\cdot y_0, \\ y_{1} &= \exp(\tfrac{1}{12}(-k_1+2k_2+2k_3+3k_4))\cdot y_{\frac{1}{2}}. \end{align*} Note in particular how the expression for $Y_4$ involves $Y_2$, whereby one exponential evaluation is saved. \subsection{A plenitude of group actions} We saw in the first examples with Duffing's equation that the manifold $M$, the group $G$ and even the way $G$ acts on $M$ can be chosen in different ways. It is not obvious which action best serves the purpose of the problem at hand. Most examples we know from the literature use matrix Lie groups~$G\subseteq \GL(n)$, but the choice of group action depends on the problem and the objectives of the simulation. We give here several examples of situations where Lie group integrators can be used. \paragraph{$G$ acting on $G$} In the case $M=G$, it is natural to use either left or right multiplication as the action \[ L_g(m) = g\cdot m\quad\text{or}\quad R_g(m) = m\cdot g,\qquad g,m\in G. \] The correspondence between the vector field $F\in\mathcal{X}(M)$ and the map \eqref{eq:genpres} is then just the tangent map of left or right multiplication \[ F(g) = T_eL_g(f(g))\quad\text{or}\quad F(g)=T_eR_{g}(\tilde{f}(g)),\quad g\in G. \] When working with matrices, this simply amounts to setting $F(g) = g\cdot f(g)$ or $F(g) = \tilde{f}(g) \cdot g$. Note that $\tilde{f}(g)$ is related to $f(g)$ through the adjoint representation of $G$, $\Ad \colon G\rightarrow \operatorname{Aut}(\ensuremath{\mathfrak{g}})$, \[ \tilde{f}(g) = \Ad_g f(g),\qquad \Ad_g = T_eL_g\circ T_eR_g^{-1}.
\] \paragraph{The affine group and its use in semilinear PDE methods} Lie group integrators can also be used for approximating the solution to partial differential equations, the most obvious choice of PDE model being the semilinear problem \begin{equation} \label{eq:semilinear} u_t = Lu + N(u), \end{equation} where $L$ is a linear differential operator and $N(u)$ is some nonlinear map, typically containing derivatives of lower order than $L$. After discretising in space, \eqref{eq:semilinear} turns into a system of $n_d$ ODEs for some large $n_d$; $L$ becomes an $n_d\times n_d$-matrix and $N \colon \mathbf{R}^{n_d}\rightarrow \mathbf{R}^{n_d}$ a nonlinear function. We may now, as in \cite{munthe-kaas99hor}, introduce the action on $\mathbf{R}^{n_d}$ by some subgroup of the affine group represented as the semidirect product $G=\GL(n_d)\ltimes\mathbf{R}^{n_d}$. The group product, identity, and inverse are given as \[ (A_1,b_1)\cdot (A_2,b_2) = (A_1 A_2, A_1b_2+b_1),\quad e=(I,0),\quad (A,b)^{-1}=(A^{-1}, -A^{-1}b). \] The action on $\mathbf{R}^{n_d}$ is \[ (A,b)\cdot x = Ax + b,\qquad (A,b)\in G,\ x\in\mathbf{R}^{n_d}, \] and the Lie algebra consists of pairs $(\xi,c)$, with the commutator \[ \ensuremath{\mathfrak{g}} = \{(\xi, c) \mid \xi\in\mathfrak{gl}(n_d),\ c\in\mathbf{R}^{n_d}\},\quad [(\xi_1,c_1),(\xi_2,c_2)] = ([\xi_1,\xi_2], \xi_1c_2-\xi_2c_1). \] In many interesting PDEs, the operator $L$ is constant, so it makes sense to consider the $(n_d+1)$-dimensional subalgebra $\ensuremath{\mathfrak{g}}_L$ of $\ensuremath{\mathfrak{g}}$ consisting of elements $(\alpha L, c)$ where $\alpha\in\mathbf{R}$, $c\in\mathbf{R}^{n_d}$, so that the map $f \colon \mathbf{R}^{n_d}\rightarrow \ensuremath{\mathfrak{g}}_L$ is given as \[ f(u) = (L,N(u)). \] One-parameter subgroups are obtained through the exponential map as follows \[ \exp(t(L,b)) = (\exp(tL), \phi(tL) tb). \] Here $\phi(z)=(\exp(z)-1)/z$ is the entire function familiar from the theory of exponential integrators.
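The one-parameter subgroup formula above is easy to check numerically. The following scalar sketch (ours; the values of $L$, $b$ and $x_0$ are arbitrary) verifies that the action of $\exp(t(L,b))$ reproduces the exact solution of $\dot{x}=Lx+b$ and composes correctly in $t$:

```python
# Illustrative scalar check (ours; L, b, x0 arbitrary) of
# exp(t(L,b)) = (exp(tL), t*phi(tL)*b), whose action on x0 is the exact
# solution of x' = Lx + b at time t.
import math

def phi(z):
    """The entire function phi(z) = (exp(z) - 1)/z, with phi(0) = 1."""
    return (math.exp(z) - 1.0) / z if abs(z) > 1e-8 else 1.0 + 0.5 * z

def affine_exp_action(t, L, b, x0):
    """Action of exp(t(L,b)) on x0 in the scalar case."""
    return math.exp(t * L) * x0 + t * phi(t * L) * b
```

Composing the action for times $s$ and $t$ gives the action for $s+t$, reflecting that $t\mapsto\exp(t(L,b))$ is indeed a one-parameter subgroup of the affine group.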
As an example, one could now consider the Lie--Euler method \eqref{eq:Lie--Euler} in this setting, which coincides with the exponential Euler method \[ u_{1} = \exp(h(L,N(u_0)))\cdot u_0 = \exp(hL)u_0 + h\phi(hL)N(u_0). \] There is a large body of literature on exponential integrators, going almost half a century back in time, see~\cite{hochbruck10ei} and the references therein for an extensive account. \paragraph{The coadjoint action and Lie--Poisson systems} Lie group integrators for this interesting case were studied by Eng{\o} and Faltinsen \cite{engo01nio}. Suppose $G$ is a Lie group and the manifold under consideration is the dual space~$\ensuremath{\mathfrak{g}}^*$ of its Lie algebra~$\ensuremath{\mathfrak{g}}$. The coadjoint action by $G$ on $\ensuremath{\mathfrak{g}}^*$ is denoted $\Ad_g^*$ and is defined for any $g\in G$ by \begin{equation} \label{eq:coadjoint-action} \langle \Ad_g^*\mu, \xi\rangle = \langle \mu, \Ad_g\xi\rangle,\quad\forall\xi\in\ensuremath{\mathfrak{g}}, \end{equation} for a duality pairing $\langle {\cdot}, {\cdot} \rangle$ between $\ensuremath{\mathfrak{g}}^*$ and $\ensuremath{\mathfrak{g}}$. It is well known (see e.g.\ Section~13.4 in \cite{marsden99itm}) that mechanical systems formulated on the cotangent bundle $T^*G$ with a left or right invariant Hamiltonian can be reduced to a system on $\ensuremath{\mathfrak{g}}^*$ given as \[ \dot{\mu} = \pm\ad^*_{\frac{\partial H}{\partial\mu}} \mu, \] where the negative sign is used in case of right invariance. The solution to this system preserves coadjoint orbits, which makes it natural to suggest the group action \[ g\cdot\mu = \Ad_{g^{-1}}^*\mu, \] so that the resulting Lie group integrator also respects this invariant. For Euler's equations for the free rigid body, the Hamiltonian is left invariant and the coadjoint orbits are spheres in $\ensuremath{\mathfrak{g}}^*\cong\mathbf{R}^3$.
\paragraph{Homogeneous spaces and the Stiefel and Grassmann manifolds} The situation when $G$ acts on itself by left or right multiplication is a special case of a homogeneous space \cite{munthe-kaas97nio}, where the assumption is only that $G$ acts transitively and continuously on some manifold $M$. Homogeneous spaces can be identified with the quotient~$G/G_x$, where $G_x$ is the \emph{isotropy group} for the action at an arbitrarily chosen point $x\in M$ \[ G_x = \{ h\in G \mid h\cdot x = x\}. \] Note that if $x$ and $z$ are two points on $M$, then by transitivity of the action, $z=g\cdot x$ for some $g\in G$. Therefore, whenever $h\in G_z$ it follows that $g^{-1} \cdot h \cdot g\in G_x$, so isotropy groups are isomorphic by conjugation~\cite{bryant95ait}. Hence the choice of $x\in M$ is not important for the construction of the quotient. For a readable introduction to this type of construction, see \cite{bryant95ait}, in particular Lecture 3. A frequently encountered example is the hypersphere $M=S^{d-1}$ corresponding to the left action by $G=\SO(d)$, the Lie group of orthogonal $d\times d$ matrices with unit determinant. One has $S^{d-1} = \SO(d)/\SO(d-1)$. We have in fact already discussed the example of the free rigid body \eqref{eq:euler_frb} where $M=S^2$. The Stiefel manifold $\St(d,k)$ can be represented by the set of $d\times k$-matrices with orthonormal columns. An action on this set is obtained by left multiplication by $G=\SO(d)$. Lie group integrators for Stiefel manifolds are extensively studied in the literature, see e.g.\ \cite{celledoni03oti,krogstad03alc}, and some applications involving Stiefel manifolds are discussed in Section~\ref{sec:dataanalysis}. An important subclass of the homogeneous spaces consists of the symmetric spaces, also obtained through a transitive action by a Lie group $G$, where $M=G/G_x$, but here one requires in addition that the isotropy subgroup is an open subgroup of the fixed point set of an involution of $G$~\cite{munthe-kaas01aos}.
A prominent example of a symmetric space in applications is the Grassmann manifold, obtained as $\SO(d)/(\SO(k)\times \SO(d-k))$. \paragraph{Isospectral flows} In isospectral integration one considers dynamical systems evolving on the manifold of $d\times d$-matrices sharing the same Jordan form. Considering the case of symmetric matrices, one can use the transitive group action by $\SO(d)$ given as \[ g\cdot m = g m g^T. \] This action is transitive, since any symmetric matrix can be diagonalised by an appropriately chosen orthogonal matrix. If the eigenvalues are distinct, then the isotropy group is discrete and consists of all matrices in $\SO(d)$ which are diagonal. Lie group integrators for isospectral flows have been extensively studied, see for example \cite{calvo96rkm1,calvo97nso}. See also~\cite{celledoni01ano} for an application to the KdV equation. \paragraph{Tangent and cotangent bundles} For mechanical systems the natural phase space will often be the tangent bundle $TM$ as in the Lagrangian framework or the cotangent bundle $T^*M$ in the Hamiltonian framework. The seminal paper by Lewis and Simo \cite{lewis94caf} discusses several Lie group integrators for mechanical systems on cotangent bundles, deriving methods which are symplectic, energy and momentum preserving. Eng{\o}~\cite{engo03prk} suggested a way to generalise the Runge--Kutta--Munthe-Kaas methods into a partitioned version when $M$ is a Lie group. Marsden and collaborators have developed the theory of Lie group integrators from the variational viewpoint over the last two decades. See \cite{marsden01dma} for an overview. For more recent work pertaining to Lie groups in particular, see \cite{lee07lgv,bou-rabee09hpi,saccon09mrf}. In Section~\ref{sec:symplie} we present what we believe to be the first symplectic partitioned Lie group integrators on $T^*G$ phrased in the framework we have discussed here. 
Considering trivialised cotangent bundles over Lie groups is particularly attractive since there is a natural way to extend the action by left multiplication from $G$ to $G\times\ensuremath{\mathfrak{g}}^*$ via \eqref{eq:prodGxgs}. \subsection{Isotropy -- challenges and opportunities} An issue which we have already mentioned a few times is that the map $\lambda_* \colon \ensuremath{\mathfrak{g}} \rightarrow \mathcal{X}(M)$ defined in \eqref{eq:groupaction} is not necessarily injective. This means that the choice of $f \colon M\rightarrow\ensuremath{\mathfrak{g}}$ is not unique. In fact, if $g \colon M\rightarrow\ensuremath{\mathfrak{g}}$ is any map satisfying $\lambda_*(g(m))(m)=0$ for all $m\in M$, then we could replace the map $f$ by $f+g$ in \eqref{eq:genpres} without altering the vector field $F$. But such a modification of $f$ \emph{will have} an impact on the numerical schemes that we consider. This freedom in the setup of the methods makes it challenging to prove general results for Lie group methods; it might seem that some restrictions should apply to the isotropy choice in order to obtain a more well-defined class of schemes. However, the freedom can of course also be taken advantage of to obtain approximations of improved quality. An illustrative example is the two-sphere $S^2$ acted upon linearly by the special orthogonal group $\SO(3)$. Representing elements of the Lie algebra $\mathfrak{so}(3)$ by vectors in $\mathbf{R}^3$, and points on the sphere as unit length vectors in $\mathbf{R}^3$, we may realise \eqref{eq:genpres} as \[ F(m)= f(m) \times m = (f(m) + \alpha(m)m)\times m, \] for any scalar function $\alpha \colon S^2 \rightarrow \mathbf{R}$. Using for instance the Lie--Euler method one would get \begin{equation} \label{eq:LieEulerS2} m_1 = \exp(h(f(m_0)+\alpha(m_0)m_0)) m_0, \end{equation} where $\exp$ is the matrix exponential of the $3\times 3$ skew-symmetric matrix associated to a vector in $\mathbf{R}^3$ via the hat-map~\eqref{eq:hatmap}.
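A small numerical experiment (ours, illustrative; the vector field value $f(m_0)$ below is an arbitrary choice) makes the isotropy freedom concrete: with step size $h$, every choice of $\alpha$ produces a point on the sphere, the updates agree to first order in $h$, and yet the computed points differ.

```python
# Illustrative check (ours) of the isotropy freedom on S^2:
# m1 = exp(h*(f(m0) + alpha*m0)^) m0.  The term alpha*m0 lies in the
# isotropy direction, so it does not change the vector field, but it
# does change the numerical update.
import math

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def rot(u, y):
    """Rodrigues' formula: exp(u^) y for u in so(3) identified with R^3."""
    th = math.sqrt(u[0]**2 + u[1]**2 + u[2]**2)
    if th < 1e-10:
        c1, c2 = 1.0, 0.5
    else:
        c1, c2 = math.sin(th)/th, (1.0 - math.cos(th))/th**2
    uy = cross(u, y)
    uuy = cross(u, uy)
    return [y[i] + c1*uy[i] + c2*uuy[i] for i in range(3)]

def lie_euler_s2(m0, fm0, alpha, h):
    """One Lie-Euler step on S^2 with isotropy parameter alpha."""
    u = [h*(fm0[i] + alpha*m0[i]) for i in range(3)]
    return rot(u, m0)
```

Whatever the value of $\alpha$, the update is an exact rotation, so the constraint $|m_1|=1$ holds to roundoff; the choice of $\alpha$ only affects which point on the sphere is produced, at second order in $h$.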
Clearly the approximation depends on the choice of $\alpha(m)$. The approach of Lewis and Olver \cite{lewis08gia} was to use the isotropy to improve certain qualitative features of the solution. In particular, they studied how the orbital error could be reduced by choosing the isotropy in a clever way. In Figure~\ref{fig:isotropy} we illustrate the effect of the isotropy choice for Euler's free rigid body equations. \begin{figure} \caption{The effect of isotropy on $S^2$ for Euler's free rigid body equations. The curve drawn from the initial point $z_0$ to $z_1$ is the exact solution, i.e.\ the momenta in body coordinates. The broken line shows the terminal points using the Lie--Euler method for $\alpha(z_0)$ (as in \eqref{eq:LieEulerS2}) varying between $0$ and $25$. } \label{fig:isotropy} \end{figure} Another potential difficulty with isotropy is the increased computational complexity when the group~$G$ has much higher dimension than the manifold $M$. This could for instance be the case with the Stiefel manifold~$\St(d,k)$ if $d\gg k$. Linear algebra operations used in integrating differential equations on the Stiefel manifold should preferably be of complexity $\mathcal{O}(dk^2)$. But solving a corresponding problem in the Lie algebra $\mathfrak{so}(d)$ would typically require linear algebra operations of complexity $\mathcal{O}(d^3)$, see for example \cite{celledoni03oti} and references therein. By taking advantage of the many degrees of freedom provided by the isotropy, it is actually possible to reduce the cost to the required $\mathcal{O}(dk^2)$ operations, as explained in, for instance, \cite{celledoni03lgm} and \cite{krogstad03alc}.
\section{Applications to nonlinear problems of evolution in classical mechanics} \label{sec:appcm} The emphasis on the use of Lie groups in modelling and simulation of engineering problems in classical mechanics started in the eighties with the pioneering and fundamental work of J.C.~Simo and his collaborators. In the case of rod dynamics, for example, models based on partial differential equations were considered where the configuration of the centreline of the rod is parametrised via arc-length, and the movement of a rigid frame attached to each of the cross sections of the rod is tracked (see Figure~\ref{fig1}). This was first presented in a geometric context in \cite{simo85afs}. \begin{figure} \caption{Geometric rod model. Here $\phi$ is the line of centroids and a cross section is identified by the frame $\Lambda=[\mathbf{t}_1,\mathbf{t}_2,\mathbf{t}_3]$; $\phi_r$ is the initial position of the line of centroids. } \label{fig1} \end{figure} In robot technology, especially robot locomotion and robot grasping, the occurrence of non-holonomically constrained models is very common. The motion of robots equipped with wheels is not always locally controllable, but is often globally controllable. A classical example is the parking of a car that cannot be moved in the direction orthogonal to its wheels. The introduction of Lie groups and Lie brackets to describe the dynamics of such systems has been considered by various authors, see for example \cite{murray93nmp}. The design of numerical integration methods in this context has been addressed in the paper of Crouch and Grossman~\cite{crouch93nio}. These methods have had a fundamental impact on the subsequent developments in the field of Lie group methods. The need for improved understanding of non-holonomic numerical integration has been advocated, for example, in \cite{mclachlan96aso}.
Recent work in this field has led to the construction of low order non-holonomic integrators based on a discrete Lagrange--d'Alembert principle \cite{cortes01nhi,mclachlan06ifn}. The use of Lie group integrators in this context has been considered in \cite{leok05alg,mclachlan06ifn}. We have already mentioned the relevance of rigid body dynamics to the numerical discretisation of rod models. There are many other research areas in which the accurate and efficient simulation of rigid body dynamics is crucial: molecular dynamics, satellite dynamics, and celestial mechanics, to name just a few; see \cite{leimkuhler04shd}. In some of these applications, it is desirable to produce numerical approximations which are accurate possibly down to the size of roundoff. The simulations of interest occur over very long times and/or involve a large number of bodies, and this inevitably causes propagation of errors even when the integrator is designed to be very accurate. For this reason, accurate symplectic rigid body integrators are of interest, because they can keep the error bounded at essentially the level of roundoff even in long-time integration. This fact seems to be of crucial importance in celestial mechanics simulations \cite{laskar04aln}. A symplectic and energy preserving Lie group integrator for the free rigid body motion was proposed in \cite{lewis94caf}. The method computes a time re-parametrisation of the exact solution. Some recent and promising work in this field has been presented in \cite{mclachlan05tdm,celledoni06ets,celledoni07eco,hairer06pdm}. The control of rigid bodies with variational Lie group integrators was considered in \cite{leok05alg}. In the next section we illustrate the use of Lie group methods on a particular case study: the process of laying pipes from a ship to the bottom of the sea.
\subsection{Rigid body and rod dynamics} \paragraph{Pipe-laying problem} The simulation of deep-water risers, pipelines and drill rigs requires the use of models of long and thin beams subject to large external forces. These are complicated nonlinear systems with highly oscillatory components. We are particularly interested in the correct and accurate simulation of the pipe-laying process from ships to the bottom of the sea, see Figure~\ref{fig2}. The problem comprises the modelling of two interacting structures: a long and thin pipe (modelled as a rod) and a vessel (modelled as a rigid body). The system is subject to environmental forces (such as sea and wind effects). The control parameters for this problem are the vessel position and velocity, the pay-out speed and the pipe tension, while the control objectives consist in determining the touchdown position of the pipe, ensuring the integrity of the pipe, and avoiding critical deformations \cite{jensen10anp,safstrom09mas}. \begin{figure} \caption{The pipe-laying process. } \label{fig2} \end{figure} The vessel rigid body equations determine the boundary conditions of the rod. They are expressed in six degrees of freedom as \begin{equation*}\label{eq:vesseleq} M\dot{\boldsymbol\nu} + C(\boldsymbol\nu)\boldsymbol\nu + D(\boldsymbol\nu)\boldsymbol\nu + g(\boldsymbol\eta) = \boldsymbol\tau, \end{equation*} where $M$ is the system inertia matrix, $C(\boldsymbol\nu)$ the Coriolis-centripetal matrix, $D(\boldsymbol\nu)$ the damping matrix, $g(\boldsymbol\eta)$ the vector of gravitational and buoyancy forces and moments, and $\boldsymbol\tau$ the vector of control inputs and environmental disturbances such as wind, waves and currents (see \cite{perez07kmf} for details). The vector $\boldsymbol\nu$ contains linear and angular velocity and $\boldsymbol\eta$ is the position vector. It has been shown in \cite{jensen10anp} that the rigid body vessel equations are input-output passive.
The equations can be integrated numerically with a splitting and composition technique where the vessel equations are split into a free rigid body part and a damping and control part. The free rigid body equations can be solved with a method proposed in \cite{celledoni06ets}, where the angular momentum is accurately and efficiently computed by using Jacobi elliptic functions, the attitude rotation is obtained using a Runge--Kutta--Munthe-Kaas Lie group method, and the control and damping part is solved exactly. Simulations of the whole pipe-lay problem with local parametrisations of the pipe and the vessel based on Euler angles have been obtained in \cite{jensen10anp}. \paragraph{Rod dynamics} At each fixed time, every cross section of the pipe is the result of a rigid rotation in space of a reference cross section; analogously, for each fixed value of the space variable the corresponding cross section evolves in time as a forced rigid body, see Figure~\ref{fig1}. In the absence of external forces the equations are \begin{align*} \rho_A \partial_{tt} \phi &= \partial_S\mathbf{n},\qquad S\in[0,L],\quad t\ge0,\\ \partial_t \pi+ (I_{\rho}^{-1} \pi)\times \pi &= \partial_S\mathbf{m}+(\partial_S \phi)\times \mathbf{n}, \end{align*} where $\phi=\phi(S,t)$ is the line of centroids of the rod, $\mathbf{m}$ and $\mathbf{n}$ are the stress couple and stress resultant, $\pi$ is the angular momentum density, $I_{\rho}$ is the inertia in spatial coordinates, and $\rho_A=\rho_A(S)$ is the mass per unit length of the rod (see \cite{simo87otd} and \cite{celledoni10aha}). The kinematic equations for the attitude rotation matrix are \[ \partial_t\Lambda=\hat{\mathbf{w}} \Lambda, \qquad \partial_S\Lambda=\hat{\mathbf{M}} \Lambda, \] where $\Lambda(S,t)=[\mathbf{t}_1,\mathbf{t}_2,\mathbf{t}_3]$, $I_{\rho}^{-1}\pi=\mathbf{w}$, $\mathbf{M}=C_2 \Lambda^T\mathbf{m}$ and $C_2$ is a constant diagonal matrix.
We denote by ``$\,\widehat{\cdot}\,$'' the hat-map identifying $\mathbf{R}^3$ with $\mathfrak{so}(3)$: \begin{equation} \label{eq:hatmap} \mathbf{v}=\begin{bmatrix} v_1\\ v_2\\ v_3 \end{bmatrix} \mapsto \hat{\mathbf{v}}= \begin{bmatrix*}[r] 0 &-v_3&v_2\\v_3&0&-v_1\\ -v_2&v_1&0 \end{bmatrix*}. \end{equation} With no external forces one assumes pure displacement boundary conditions providing $\phi$ and $\Lambda$ on the boundaries $S=0$ and $S=L$. In \cite{simo87otd}, partitioned Newmark integrators, of Lie group type and of moderate order, were considered for this problem. While classical Newmark methods are variational integrators and as such are symplectic when applied to Hamiltonian systems \cite{marsden01dma}, the particular approach of \cite{simo87otd} involves the use of exponentials for the parametrisation of the Lie group $\SO(3)$, and the geometric properties of this approach are not trivial to analyse. Moreover, since the model is a partial differential equation, space and time discretisations should be designed so that the overall discrete equations admit solutions and are stable. It turns out that conventional methods perform poorly on such problems in long-time simulations. To obtain stable methods reliable in long-time simulation, an energy-momentum method was proposed for the rod problem in \cite{simo95ndo}. Later, this line of thought was further developed in \cite{romero01aof}. The Hamiltonian formulation of this model allows one to derive natural structure preserving discretisations into systems of coupled rigid bodies \cite{leyendecker06oem}. Following the geometric space-time integration procedure proposed in \cite{frank04gst}, a multi-Hamiltonian formulation\footnote{For a definition of the multi-symplectic structure of Hamiltonian partial differential equations, see \cite{bridges01msi}.} of these equations has been proposed in \cite{celledoni10aha}, using the Lie group of Euler parameters.
The design of corresponding multi-symplectic Lie group discretisations is still under investigation. \section{Applications to problems of data analysis and statistical signal processing} \label{sec:dataanalysis} The solution of the many-body Schr{\"o}dinger eigenvalue problem \begin{equation} \label{eq:schr} \hat{\mathbf{H}}\Psi =E\Psi, \end{equation} where the so-called electronic ground state (the eigenstate of smallest eigenvalue) is sought, is an important problem of computational chemistry. The main difficulty is the curse of dimensionality. Since $\hat{\mathbf{H}}$ is a differential operator in several space variables, a realistic simulation of (\ref{eq:schr}) would require the numerical discretisation and solution of a partial differential equation in several space dimensions. The number of space dimensions grows with the number of electrons included in the simulation. The eigenvalue problem admits an alternative variational formulation. Instead of looking for the smallest eigenvalue and eigenfunction of the Schr{\"o}dinger equation, one minimises directly the ground state energy. After appropriate spatial discretisation, the problem becomes a minimisation problem on a Riemannian manifold $\mathcal{M}$, \begin{equation}\label{genopt} {\min_{x\in\mathcal{M}}}\, \phi(x), \end{equation} where $\phi \colon \mathcal{M}\rightarrow \mathbf{R}$ is a smooth discrete energy function to be minimised on $\mathcal{M}$. The discrete energy $\phi$ considered here is the so-called Kohn--Sham energy \cite{baarman12dma}. For a related application of Lie group techniques in quantum control, see \cite{degani09qcw}. The general optimisation problem (\ref{genopt}) appears in several applied fields, ranging from engineering to applied physics and medicine. Some specific examples are principal component/subspace analysis, eigenvalue and generalised eigenvalue problems, optimal linear compression, noise reduction, signal representation and blind source separation.
\subsection{Gradient-based optimisation on Riemannian manifolds} Let $\mathcal{M}$ be a Riemannian manifold with metric $\langle\cdot, \cdot \rangle_{\mathcal{M}}$ and $\phi \colon \mathcal{M}\rightarrow \mathbf{R}$ be a smooth cost function to be minimised on $\mathcal{M}$. We want to solve (\ref{genopt}). The optimisation method based on gradient flow -- written for the minimisation problem only, for the sake of easy reading -- consists in setting up the differential equation on the manifold, \begin{equation}\label{eqdiff} \dot{x}(t)=-\grad \phi\big(x(t)\big), \end{equation} with appropriate initial condition $x(0)=x_0\in\mathcal{M}$. The equilibria of equation (\ref{eqdiff}) are the critical points of the function $\phi$. In the above equation, the symbol $\grad \phi$ denotes the Riemannian gradient of the function $\phi$ with respect to the chosen metric. Namely, $\grad \phi(x)\in T_x\mathcal{M}$ and $T_x \phi(v)=\langle \grad \phi (x),v\rangle_{\mathcal{M}}$ for all $v\in T_x\mathcal{M}$. The solution of (\ref{eqdiff}) on $\mathcal{M}$ may be locally expressed in terms of a curve on the tangent space $T_{x_0}\mathcal{M}$ using a retraction map $\mathcal{R}$. Retractions are tangent space parametrisations of $\mathcal{M}$, and allow us to write \[x(t)=\mathcal{R}_{x_0}(\sigma(t)), \quad \sigma(t)\in T_{x_0}\mathcal{M}, \quad t\in [0,t_f],\] for small enough $t_f$, see \cite{shub86sro} for a precise definition. In most applications of interest, see for example \cite{brockett91dst,helmke94oad}, $\mathcal{M}$ is a matrix manifold endowed with a Lie group action and there is a natural way to define a metric and a retraction. In fact, let $\mathcal{M}$ be a manifold acted upon by a Lie group $G$, with a locally transitive group action $\Lambda(g,x)=\Lambda_x(g)$. Let us also consider a coordinate map $\psi$, \[ \psi \colon \ensuremath{\mathfrak{g}} \rightarrow G, \quad \text{and} \quad \rho_x \coloneqq T_0(\Lambda_x\circ \psi). 
\] One can prove that if there exists a linear map $a_x \colon T_x\mathcal{M}\rightarrow \ensuremath{\mathfrak{g}}$ such that $\rho_x\circ a_x =\mathrm{Id}_{T_x\mathcal{M}}$, then $\mathcal{R}_x$, given by \[  \mathcal{R}_x(v) \coloneqq (\Lambda_x \circ \psi \circ a_x )(v), \] is a retraction, see \cite{celledoni03oti}. The existence of $a_x$ is guaranteed, at least locally, by the transitivity of the action and the fact that $\psi$ is a local diffeomorphism. The approach is analogous to the one described for differential equations in section~\ref{GeneralizingRK}. Therefore, we can construct retractions using any coordinate map from the Lie algebra $\ensuremath{\mathfrak{g}}$ to the group. Any metric on $\mathfrak{g}$, $\langle\cdot ,\cdot\rangle_{\mathfrak{g}}$ induces a metric on $\mathcal{M}$ by \[\langle v_x,w_x\rangle_{\mathcal{M}}=\langle a_x(v_x) ,a_x(w_x)\rangle_{\mathfrak{g}}.\] Also, we may define the image of the tangent space under the map $a_x$: \[ \mathfrak{m}_x \coloneqq a_x(T_x\mathcal{M})\subset \ensuremath{\mathfrak{g}}. \] The set $\ensuremath{\mathfrak{m}}_x$ is a linear subspace of the Lie algebra $\ensuremath{\mathfrak{g}}$, often of lower dimension. Parametrisations of the solution of (\ref{eqdiff}) involving the whole Lie algebra are in general more computationally intensive than those restricted to $\ensuremath{\mathfrak{m}}_x$, but, if the isotropy is chosen suitably, they might lead to methods which converge faster to the optimum. For the sake of illustration, we consider the minimisation on a two-dimensional torus $T^2=S^1\times S^1$. Here we denote by $S^1$ the circle, i.e.\ \begin{gather*} S^1=\{g(\alpha) \mathbf{e}_1 \in \mathbf{R}^2 \mid g(\alpha)\in \SO(2)\}, \\ g(\alpha)=\exp(\alpha E), \quad E=\begin{bmatrix*}[r] 0 & -1\\ 1 & 0\\ \end{bmatrix*}, \quad 0 \le \alpha < 2\pi, \end{gather*} where $\mathbf{e}_1$ is the first canonical vector and $\SO(2)$ is the commutative Lie group of planar rotations. 
Any element in $T^2$ is of the form \[x_0\in T^2,\quad x_0=(g(\theta)\mathbf{e}_1,g(\varphi)\mathbf{e}_1), \quad g(\theta), g(\varphi)\in \SO(2).\] The Lie group acting on $T^2$ is $\SO(2)\times \SO(2)$, its corresponding Lie algebra is $\mathfrak{so}(2)\times \mathfrak{so}(2)$, which has dimension $d=2$ and basis $\{ (E,O), (O,E)\}$, where $O$ is the zero element in $\mathfrak{so}(2)$. The Lie group action is \[\Lambda_{x_0}(h_1,h_2)=(h_1 g(\theta)\mathbf{e}_1,h_2 g(\varphi)\mathbf{e}_1),\qquad (h_1,h_2)\in \SO(2)\times \SO(2),\] and $\psi=\exp$. Any $v_{x_0}\in T_{x_0}T^2$ can be written as \[v_{x_0}=(\alpha E\mathbf{e}_1,\beta E\mathbf{e}_1),\] for some $\alpha, \beta \in\mathbf{R},$ so \[a_{x_0}(v_{x_0})=(\alpha E,\beta E).\] Assume the cost function we want to minimise is simply the squared distance from a fixed plane in $\mathbf{R}^3$, say the plane with equation $y=8$. This gives \begin{equation*} \label{cost1} \phi(g(\theta)\mathbf{e}_1,g(\varphi)\mathbf{e}_1)=\bigl((1+\cos (\theta)) \sin (\varphi)-8\bigr)^2, \end{equation*} and the minimum is attained at $\theta =0$ and $\varphi=\pi/2$.\footnote{ We have used a parameterisation of $T^2$ in $\mathbf{R}^3$ in angular coordinates, obtained by applying the following mapping \[ (g(\theta)\mathbf{e}_1,g(\varphi)\mathbf{e}_1) \mapsto \left\{ \begin{alignedat}{2} x &= (1+\mathbf{e}_1^Tg(\theta)\mathbf{e}_1) \mathbf{e}_1^Tg(\varphi)\mathbf{e}_1 & &= (1+\cos (\theta)) \cos (\varphi),\\ y &= (1+\mathbf{e}_1^Tg(\theta)\mathbf{e}_1) \mathbf{e}_2^Tg(\varphi)\mathbf{e}_1 & &= (1+\cos (\theta)) \sin (\varphi),\\ z &= \mathbf{e}_2^Tg(\theta)\mathbf{e}_1 & &= \sin (\theta), \end{alignedat} \right. \] with $0 \le \theta , \varphi < 2\pi$. This is equivalent to the composition of two planar rotations and one translation in $\mathbf{R}^3.$} In Figure~\ref{fig0} we plot $-\grad \phi$, the negative gradient vector field for the given cost function.
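As an illustration (ours, not from the text), gradient descent for this cost can be coded directly in the angles $(\theta,\varphi)$: the exponential retraction on $\SO(2)\times\SO(2)$ simply shifts each angle by a multiple of the corresponding gradient component, obtained here by differentiating the squared-distance cost.

```python
# Illustrative sketch (ours): gradient descent on the torus for the
# squared distance psi(theta, phi) = ((1+cos(theta))*sin(phi) - 8)^2
# from the plane y = 8.  On SO(2) x SO(2) the exponential retraction
# amounts to shifting the two angles.
import math

def grad(theta, phi):
    """Partial derivatives of psi with respect to (theta, phi)."""
    C = 2.0*((1.0 + math.cos(theta))*math.sin(phi) - 8.0)
    g_theta = -C*math.sin(theta)*math.sin(phi)
    g_phi = C*(1.0 + math.cos(theta))*math.cos(phi)
    return g_theta, g_phi

def descend(theta, phi, h=0.005, steps=4000):
    """Lie-Euler steps for the negative gradient flow."""
    for _ in range(steps):
        g_theta, g_phi = grad(theta, phi)
        theta -= h*g_theta    # exp(-h*g_theta*E) rotates the first
        phi -= h*g_phi        # circle factor, and likewise for phi
    return theta, phi
```

Started at $(\theta,\varphi)=(0.5,1)$, the iteration settles numerically at the global minimiser $(0,\pi/2)$, the point of the torus closest to the plane.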
The Riemannian metric we used is \[ \langle(\alpha_1E\mathbf{e}_1,\beta_1E\mathbf{e}_1),(\alpha_2E\mathbf{e}_1,\beta_2E\mathbf{e}_1)\rangle_{T^2}=\alpha_1\alpha_2+\beta_1\beta_2, \] for tangent vectors $(\alpha_iE\mathbf{e}_1,\beta_iE\mathbf{e}_1)\in T_{(\mathbf{e}_1,\mathbf{e}_1)}T^2$. This metric can be easily interpreted as a metric on the Lie algebra~$\mathfrak{g}=\mathfrak{so}(2)\times \mathfrak{so}(2)$: \[ \langle(\alpha_1E,\beta_1E),(\alpha_2E,\beta_2E)\rangle_{\mathfrak{g}}=\alpha_1\alpha_2+\beta_1\beta_2.\] At the point $p_0=( g(\theta_0)\mathbf{e}_1, g(\varphi_0)\mathbf{e}_1)\in T^2$, the gradient vector field can be represented by \[ (\gamma E g(\theta_0)\mathbf{e}_1,\delta E g(\varphi_0)\mathbf{e}_1), \] where $\gamma$ and $\delta$ are real values given by \[ \gamma=-C\sin(\theta_0)\sin(\varphi_0),\qquad \delta=C(1+\cos(\theta_0))\cos(\varphi_0),\] and \[C=2\bigl((1+\cos(\theta_0))\sin(\varphi_0)-8\bigr). \] Gradient flows are not the only type of differential equations which can be used to solve optimisation problems on manifolds. Alternative equations have been proposed in the context of neural networks \cite{celledoni04nlb, celledoni08dmf}. Often they arise naturally as the Euler--Lagrange equations of a variational problem. \begin{figure} \caption{The negative gradient vector field of the cost function $\phi (g(\theta)\mathbf{e}_1,g(\varphi)\mathbf{e}_1)= ((1+\mathbf{e}_1^Tg(\theta)\mathbf{e}_1) \mathbf{e}_2^Tg(\varphi)\mathbf{e}_1-8)^2$ on the torus. The vector field points towards the two minima; the global minimum is marked with a black spot in the middle of the picture. } \label{fig0} \end{figure} \subsection{Principal component analysis} Data reduction techniques are statistical signal processing methods that aim at providing efficient representations of data. A well-known data compression technique consists of mapping a high-dimensional data space into a lower dimensional representation space by means of a linear transformation.
It requires the computation of the data covariance matrix and then the application of a numerical procedure to extract its eigenvalues and the corresponding eigenvectors. Compression is then obtained by representing the signal in a basis consisting only of those eigenvectors associated with the most significant eigenvalues. In particular, principal component analysis (PCA) is a second-order adaptive statistical data processing technique that helps remove the second-order correlation among given random signals. Let us consider a stationary multivariate random process $x(t)\in\mathbf{R}^{n}$ and suppose its covariance matrix $A=E[(x-E[x])(x-E[x])^{T}]$ exists and is bounded. Here the symbol $E[\cdot]$ denotes statistical expectation. If $A\in\mathbf{R}^{n\times n}$ is not diagonal, then the components of $x(t)$ are statistically correlated. One can remove this redundancy by partially diagonalising $A$, i.e.\ computing the operator $F$ formed by the eigenvectors of the matrix $A$ corresponding to its largest eigenvalues. This is possible since the covariance matrix $A$ is symmetric positive semi-definite, and $F\in\St(n,p)$. To compute $F$ and the corresponding $p$ eigenvalues of the $n\times n$ symmetric and positive-definite matrix~$A$, we consider the maximisation of the function \[ \phi(X)=\frac{1}{2}\operatorname{trace}(X^TAX), \] on the Stiefel manifold, and solve numerically the corresponding gradient flow with a Lie group integrator. As a consequence the new random signal defined by $y(t) \coloneqq F^{T}(x(t)-E[x(t)])\in\mathbf{R}^{p}$ has uncorrelated components, with $p\leq n$ properly selected. The component signals of $y(t)$ are the so-called \emph{principal components of the signal} $x(t)$, and their relevance is proportional to the corresponding eigenvalues $\sigma_{i}^2=E[y_{i}^2]$ which here are arranged in descending order ($\sigma_{i}^{2}\geq\sigma_{i+1}^{2}$). Thus, the data stream $y(t)$ is a compressed version of the data stream $x(t)$.
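The PCA gradient flow just described can be sketched in a few lines of Python. This is our own illustration: the QR re-orthonormalisation below is a crude stand-in for a genuine Lie group integrator on $\St(n,p)$, and the step size and iteration count are ad hoc choices.

```python
import numpy as np

def pca_flow(A, p, h=0.01, steps=2000, seed=0):
    # Gradient ascent for phi(X) = 0.5*trace(X^T A X) on the Stiefel
    # manifold St(n, p); QR re-orthonormalisation plays the role of a
    # retraction back onto the manifold after each Euler step.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))
    for _ in range(steps):
        G = A @ X                              # Euclidean gradient of phi
        G = G - X @ (X.T @ G + G.T @ X) / 2    # project onto tangent space at X
        X, _ = np.linalg.qr(X + h * G)
    return X
```

The columns of the returned $X$ span (approximately) the dominant $p$-dimensional eigenspace of $A$, which is exactly the operator $F$ needed for the compression step.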
After the reduced-size data has been processed (e.g.\ stored or transmitted), it needs to be recovered, that is, it needs to be brought back to the original structure. However, the principal-component-based data reduction technique is not lossless, thus only an approximation $\hat{x}(t)\in\mathbf{R}^{n}$ of the original data stream may be recovered. An approximation of $x(t)$ is given by $\hat{x}(t)=Fy(t)+E[x]$. This approximate data stream minimises the reconstruction error~$E[\lVert x-\hat{x}\rVert_2^{2}]=\sum_{i=p+1}^{n}\sigma_{i}^2$. For a scalar or a vector-valued random variable $x\in\mathbf{R}^n$ endowed with a probability density function~$p_x \colon x\in\mathbf{R}^n\rightarrow p_x(x)\in\mathbf{R}$, the expectation of a function $\beta \colon \mathbf{R}^n\rightarrow \mathbf{R}$ is defined as \[ E[\beta] \coloneqq \int_{\mathbf{R}^n}\beta(x)p_x(x) \, \ensuremath{\mathrm{d}}^nx. \] Under the hypothesis that the signals whose expectation is to be computed are ergodic, the actual expectation (ensemble average) may be replaced by a temporal average on the basis of the available signal samples, namely \[ E[\beta]\approx \frac{1}{T}\sum_{t=1}^T\beta(x(t)). \] \subsection{Independent component analysis} An interesting example of a problem that can be tackled via statistical signal processing is the \emph{cocktail-party problem}. Let us suppose $n$ signals $x_1(t),\dots ,x_n(t)$ were recorded from $n$ different positions in a room where there are $p$ sources or speakers. Each recorded signal is a linear mixture of the voices of the sources~$s_1(t),\dots , s_p(t)$, namely \begin{align*} x_1(t) &= a_{1,1} s_1(t) +\dotsb +a_{1,p} s_p(t),\\ &\vdotswithin{=} \\ x_n(t) &= a_{n,1} s_1(t) +\dotsb +a_{n,p} s_p(t), \end{align*} where the $n p$ coefficients $a_{i,j}\in\mathbf{R}$ denote the mixing proportions. The mixing matrix $A=(a_{i,j})$ is unknown.
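As a small numerical companion to this mixture model, the Python sketch below (our own illustration) combines the temporal-average approximation of $E[\cdot]$ above with the signal-whitening pre-processing discussed in this subsection: an eigendecomposition of the sample covariance removes second-order correlations and reduces the mixtures to the source dimension.

```python
import numpy as np

def whiten(x):
    # Remove second-order correlations: with C = E[x x^T] = V D V^T,
    # return x_tilde = D^(-1/2) V^T x, so that E[x_tilde x_tilde^T] = I.
    # E[.] is replaced by the temporal average over the available samples.
    C = x @ x.T / x.shape[1]
    d, V = np.linalg.eigh(C)
    keep = d > 1e-10 * d.max()   # drop directions with (numerically) zero variance
    return (V[:, keep] / np.sqrt(d[keep])).T @ x
```

By construction, the whitened signal has (empirical) identity covariance, and redundant observations are eliminated along the way.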
The cocktail party problem consists in estimating the signals $s_1(t),\dots , s_p(t)$ from only the knowledge of their mixtures $x_1(t), \dots ,x_n(t)$. The main assumption on the source signals is that $s_1(t), \dots , s_p(t)$ are statistically independent. This problem can be solved using independent component analysis (ICA). Typically, one has $n>p$, namely, the number of observations exceeds the number of actual sources. Also, a typical assumption is that the source signals are spatially white, which means $E[ss^T]=I_p$, the $p \times p$~identity matrix. The aim of independent component analysis is to find estimates $y(t)$ of the signals in $s(t)$ by constructing a de-mixing matrix $W\in\mathbf{R}^{n\times p}$ and by computing $y(t) \coloneqq W^Tx(t)$. Using statistical signal processing methods, the problem is reformulated into an optimisation problem on homogeneous manifolds for finding the de-mixing matrix $W$. The geometrical structure of the parameter space in ICA comes from a signal pre-processing step named \emph{signal whitening}, which is applied to the observable signal $x(t)\rightarrow \tilde{x}(t)\in\mathbf{R}^p$ in such a way that the components of the signal $\tilde{x}(t)$ are uncorrelated and have variances equal to $1$, namely $E[\tilde{x}\tilde{x}^T]=I_p$. This also means that redundant observations are eliminated and the ICA problem is brought back to the smallest dimension $p$. This can be done by computing $E[xx^T]=VDV^T$, with $V\in\St(n,p)$ and $D\in\mathbf{R}^{p\times p}$ diagonal invertible. Then \[ \tilde{x}(t) \coloneqq D^{-\frac{1}{2}}V^Tx(t), \] and with $\tilde{A} \coloneqq D^{-\frac{1}{2}}V^TA$ we have $E[\tilde{x}\tilde{x}^T]=\tilde{A}E[ss^T]\tilde{A}^T=\tilde{A}\tilde{A}^T=I_p$. After whitening of the observable signal, one searches for a de-mixing matrix solving the optimisation problem \begin{equation*}\label{pri1} {\max_{W\in \LieO(p)}}\,\phi(W).
\end{equation*} As explained, after whitening, the number of projected observations in the signal $\tilde{x}(t)$ equals the number of sources. However, in some applications it is known that not all the source signals are useful, so it is sensible to analyse only a few of them. In these cases, if we denote by $\overline{p}\ll p$ the actual number of independent components that are sought after, the appropriate way to cast the optimisation problem for ICA is \begin{equation*}\label{pri2} \max_{W\in \St(n,\overline{p})} \phi(W), \quad \text{with} \quad \overline{p}\ll p. \end{equation*} The corresponding gradient flows obtained in this case are differential equations on the orthogonal group or on the Stiefel manifold, and can be solved numerically by Lie group integrators. A viable principle for the reconstruction is the maximisation or minimisation of non-Gaussianity. It is based on the notion that a sum of independent random variables has a distribution closer to Gaussian than the distributions of the original random variables. A measure of non-Gaussianity is the kurtosis, defined for a scalar signal $z\in\mathbf{R}$ as \[ \operatorname{kurt}(z) \coloneqq E[z^4]-3E^2[z^2]. \] If the random signal $z$ has unit variance, then the kurtosis reduces to $\operatorname{kurt}(z)=E[z^4]-3$. Maximising or minimising the kurtosis is thus a possible way of estimating independent components from their linear mixtures, see \cite{celledoni08dmf} and references therein for more details. \subsection{Computation of Lyapunov exponents} The Lyapunov exponents of a continuous dynamical system $\dot x=F(x)$, $x(t)\in \mathbf{R}^n$, provide a qualitative measure of its complexity. They are numbers related to the linearisation $A(t)$ of $\dot x=F(x)$ along a trajectory~$x(t)$. Consider the solution $U$ of the matrix problem \[ \dot{U}=A(t)U, \qquad U(0)=U_0, \qquad U(t) \in \mathbf{R}^{n\times n}.
\] The logarithms of the eigenvalues of the matrix \[ \Lambda=\lim_{t\rightarrow \infty}\left(U(t)^{T}U(t)\right)^{\frac{1}{2t}}, \] are the Lyapunov exponents for the given dynamical system. In \cite{dieci95coa} a procedure for computing just $k$ of the $n$ Lyapunov exponents of a dynamical system is presented. The exponents are computed by solving an initial value problem on $\St (n,k)$ and computing a quadrature of the diagonal entries of a $k\times k$ matrix-valued function. The initial value problem is defined as follows: \[ \dot{Q}=(A-QQ^{T}A+QSQ^{T})Q, \] with random initial value in $\St(n,k)$ and \[ S_{i,j}= \begin{cases} (Q^TAQ)_{i,j}, & i>j,\\ 0, & i=j,\\ -(Q^TAQ)_{j,i}, & i<j, \end{cases} \qquad i,j=1,\dots ,k. \] It can be shown that the $i$-th Lyapunov exponent $\lambda_i$ can be obtained as \begin{equation} \label{int} \lambda_i=\lim_{t\rightarrow \infty}\frac{1}{t}\int_0^tB_{i,i}(s) \, \ensuremath{\mathrm{d}} s, \quad i=1,\dots ,k, \end{equation} where \[ B=Q^TAQ-S. \] One could use for example the trapezoidal rule to approximate the integral \eqref{int} and compute $\lambda_i$ ($i=1,\dots ,k$). We refer to \cite{dieci95coa} for further details on the method, and to \cite{celledoni02aco} for the use of Lie group integrators on this problem. Lie group methods for ODEs on Stiefel manifolds have also been considered in \cite{celledoni03oti,krogstad03alc,celledoni03cfl}. We have here presented a selection of applications that can be dealt with by solving differential equations on Lie groups and homogeneous manifolds. For these problems, Lie group integrators are a natural choice. We gave particular emphasis to problems of evolution in classical mechanics and problems of signal processing. This is by no means an exhaustive survey; other interesting areas of application are for example problems in vision and medical imaging, see for instance \cite{xu12aol,lee07gds}.
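Returning to the computation of Lyapunov exponents, the following Python sketch (our own illustration) implements the initial value problem above for a constant linearisation $A$, an assumption made purely for simplicity; explicit Euler steps with QR re-orthonormalisation stand in for a Lie group integrator on $\St(n,k)$, and a left-endpoint quadrature replaces the trapezoidal rule.

```python
import numpy as np

def lyapunov_qr(A, k, h=2e-3, T=100.0, seed=1):
    # k dominant Lyapunov exponents via the flow on St(n, k):
    # Q' = (A - Q Q^T A + Q S Q^T) Q,  lambda_i ~ (1/T) int B_ii dt,
    # with S the skew matrix and B = Q^T A Q - S as in the text.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    acc = np.zeros(k)
    for _ in range(int(T / h)):
        M = Q.T @ A @ Q
        L = np.tril(M, -1)
        S = L - L.T                    # S_ij = M_ij (i>j), -M_ji (i<j), 0 on diagonal
        acc += h * np.diag(M - S)      # quadrature of the diagonal of B
        Q = Q + h * ((A - Q @ Q.T @ A + Q @ S @ Q.T) @ Q)
        Q, _ = np.linalg.qr(Q)         # crude retraction back onto St(n, k)
    return acc / T
```

For a constant diagonal $A$ the exponents are simply the $k$ largest diagonal entries, which gives an easy consistency check.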
\section{Symplectic integrators on the cotangent bundle of a Lie group} \label{sec:symplie} In this section we shall assume that the manifold is the cotangent bundle $T^*G$ of a Lie group $G$. Let $R_g\colon G\rightarrow G$ be the right multiplication operator such that $R_g(h)=h \cdot g$ for any $h\in G$. The tangent map of $R_g$ is denoted $R_{g*} \coloneqq TR_g$. Any cotangent vector $p_g\in T_g^*G$ can be associated to $\mu\in\ensuremath{\mathfrak{g}}^*$ by right trivialisation as follows: Write $v_g\in T_gG$ in the form $v_g = R_{g*}\xi$ where $\xi\in\ensuremath{\mathfrak{g}}$, so that $\langle p_g, v_g\rangle=\langle p_g, R_{g*}\xi\rangle = \langle R_g^* p_g, \xi\rangle$, where we have used $R_g^*$ for the dual map of $R_{g*}$, and $\langle{\cdot},{\cdot}\rangle$ is a duality pairing. We therefore represent $p_g\in T_g^*G$ by $\mu=R_g^* p_g\in\ensuremath{\mathfrak{g}}^*$. Thus, we may use as phase space $G\times\ensuremath{\mathfrak{g}}^*$ rather than $T^*G$. For applying Lie group integrators we need a transitive group action on $G\times\ensuremath{\mathfrak{g}}^*$ and this can be achieved by lifting the group structure of $G$ and using left multiplication in the extended group. The semidirect product structure on $\mathbf{G} \coloneqq G\ltimes\ensuremath{\mathfrak{g}}^*$ is defined as \begin{equation} \label{eq:prodGxgs} (g_1, \mu_1)\cdot (g_2, \mu_2) = (g_1\cdot g_2, \mu_1 + \Ad_{g_1^{-1}}^*\mu_2), \end{equation} where the coadjoint action $\Ad^{*}$ is defined in \eqref{eq:coadjoint-action}. Similarly, the tangent map of right multiplication extends as \[ TR_{(g,\mu)}(R_{h*}\,\zeta, \nu) = (R_{hg*}\;\zeta, \nu-\ad_\zeta^*\,\Ad_{h^{-1}}^*\mu),\quad g,h \in G,\ \zeta\in\ensuremath{\mathfrak{g}},\ \mu,\nu\in\ensuremath{\mathfrak{g}}^*. \] Of particular interest is the restriction of $TR_{(g,\mu)}$ to $T_e\mathbf{G}\cong \ensuremath{\mathfrak{g}} \times \ensuremath{\mathfrak{g}}^*$, \[ T_eR_{(g,\mu)}(\zeta,\nu) = (R_{g*}\zeta, \nu - \ad_\zeta^*\mu). 
\] The natural symplectic form on $T^*G$ (which is a differential two-form) is defined as \[ \Omega_{(g,p_g)}((\delta v_1, \delta \pi_1),(\delta v_2,\delta\pi_2)) =\langle \delta \pi_2, \delta v_1\rangle - \langle \delta \pi_1, \delta v_2\rangle, \] and by right trivialisation it may be pulled back to $\mathbf{G}$ and then takes the form \begin{equation} \label{eq:sympform} \omega_{(g,\mu)}( (R_{g*} \xi_1, \delta\nu_1), (R_{g*}\xi_2, \delta\nu_2)) = \langle\delta\nu_2,\xi_1 \rangle - \langle\delta\nu_1, \xi_2 \rangle - \langle\mu, [\xi_1,\xi_2]\rangle, \qquad \xi_1,\xi_2 \in \ensuremath{\mathfrak{g}}. \end{equation} The presentation of differential equations on $T^*G$ as in \eqref{eq:genpres} is now achieved via the action by left multiplication, meaning that any vector field $F\in\mathcal{X}(\mathbf{G})$ is expressed by means of a map $f\colon \mathbf{G} \rightarrow T_e \mathbf{G}$, \begin{equation} \label{eq:Fpres} F(g,\mu) = T_e R_{(g,\mu)} f(g,\mu) = (R_{g*} f_1, f_2-\ad_{f_1}^*\mu), \end{equation} where $f_1=f_1(g,\mu)\in\ensuremath{\mathfrak{g}}$, $f_2=f_2(g,\mu)\in\ensuremath{\mathfrak{g}}^*$ are the two components of $f$. We are particularly interested in the case that $F$ is a Hamiltonian vector field which means that $F$ satisfies the relation \begin{equation} \label{eq:iF} \mathrm{i}_{F}\omega = \ensuremath{\mathrm{d}} H, \end{equation} for some Hamiltonian function $H\colon T^*G\rightarrow\mathbf{R}$ and $\mathrm{i}_F$ is the interior product defined as $\mathrm{i}_F \omega (X) \coloneqq \omega(F, X)$ for any vector field $X$. From now on we let $H \colon \mathbf{G} \to \mathbf{R}$ denote the trivialised Hamiltonian. A simple calculation using \eqref{eq:sympform}, \eqref{eq:Fpres} and \eqref{eq:iF} shows that the corresponding map $f$ for such a Hamiltonian vector field is \[ f(g,\mu) = \left(\frac{\partial H}{\partial\mu}(g,\mu), -R_g^*\frac{\partial H}{\partial g}(g,\mu)\right). 
\] We have derived the following family of symplectic Lie group integrators on $G\times\ensuremath{\mathfrak{g}}^*$ \begin{align*} (\xi_i, \bar{n}_i) &= h f(G_i, M_i),\qquad n_i = \Ad_{\exp(X_i)}^* \bar{n}_i,\quad i=1,\ldots,s, \\ (g_1, \mu_1) &= \exp\Bigl(Y, (\dexp_{Y}^{-1})^{*}\sum_{i=1}^s b_i n_i\Bigr)\cdot (g_0, \mu_0), \\ Y&= \sum_{i=1}^s b_i \xi_i,\qquad X_i = \sum_{j=1}^s a_{ij} \xi_j,\quad i=1,\ldots,s, \\ G_i &= \exp(X_i)\cdot g_0,\quad i=1,\ldots, s, \\ M_i &= \dexp_{-Y}^* \mu_0 + \sum_{j=1}^s \left(b_j\dexp_{-Y}^* - \frac{b_ja_{ji}}{b_i}\dexp_{-X_j}^*\right) n_j,\quad i=1,\ldots,s. \end{align*} Here, $a_{ij}$ and $b_i$ are coefficients where it is assumed that $\sum_{i=1}^s b_i = 1$ and that $b_i\neq 0,\ 1\leq i\leq s$. The symplecticity of these schemes is a consequence of their derivation from a variational principle, following ideas similar to those of \cite{bou-rabee09hpi} and \cite{saccon09mrf}. One should be aware that order barriers for this type of scheme may apply, and that further stage corrections may be necessary to obtain high order methods. \paragraph{Example: the $\theta$-method for a heavy top} Let us choose $s=1$ with coefficients $b_1=1$ and $a_{11}=\theta$, i.e.\ the RK coefficients of the well known $\theta$-method. Inserting this into our method and simplifying gives the method \begin{align*} (\xi, \bar n) &= h f\bigl(\exp(\theta \xi) \cdot g_0, \dexp_{-\xi}^{*} \mu_0 + (1 - \theta) \dexp_{-(1-\theta)\xi}^{*} \bar n\bigr), \\ (g_1, \mu_1) &= (\exp(\xi), \Ad_{\exp(-(1 -\theta) \xi)}^{*} \bar n) \cdot (g_0, \mu_0). \end{align*} \begin{figure} \caption{Heavy top simulations with the symplectic (SLGI) $\theta$-methods and RKMK $\theta$-methods with $\theta=0,\tfrac{1}{2}$. The curves show the time evolution of the centre of mass of the body. The simulations were run over $10^5$ steps with step size $h=0.05$. See the text, page \pageref{vale:pgref}, for all other parameter values.
} \label{fig:heavytop} \end{figure} In Figure~\ref{fig:heavytop} we show numerical experiments for the heavy top, where the Hamiltonian is given as \[ H \colon \mathbf{G} \to \mathbf{R}, \qquad H(g,\mu) =\frac{1}{2} \langle \mu, \mathbb{I}^{-1}\mu\rangle + \mathbf{e}_3^Tgu_0, \] where $G = \SO(3)$, $\mathbb{I}\colon \ensuremath{\mathfrak{g}}\rightarrow \ensuremath{\mathfrak{g}}^*$ is the inertia tensor, here represented as a diagonal \label{vale:pgref} $3\times 3$ matrix, $u_0$ is the initial position of the top's centre of mass, and $\mathbf{e}_3$ is the canonical unit vector in the vertical direction. We have chosen $\mathbb{I}=10^3\,\operatorname{diag}(1,5,6)$ and $u_0=\mathbf{e}_3$. The initial values used were $g_0=I$ (the identity matrix), and $\mu_0=10\,\mathbb{I}\,(1,1,1)^T$. We compare the behaviour of the symplectic schemes presented here to the Runge--Kutta--Munthe-Kaas (RKMK) method with the same coefficients. In Figure~\ref{fig:heavytop} we have drawn the time evolution of the centre of mass, $u_n=g_n u_0$. The characteristic band structure observed for the symplectic methods was reported in \cite{celledoni06ets}. The RKMK method with $\theta=\frac{1}{2}$ exhibits a similar behaviour, but the bands are expanding faster than for the symplectic ones. We have also found in these experiments that neither of the symplectic schemes ($\theta=0$ and $\theta=\frac{1}{2}$) exhibits energy drift, but this is also the case for the RKMK method with $\theta=\frac{1}{2}$. This may be related to the fact that both methods are symmetric for $\theta=\frac{1}{2}$. For $\theta=0$, however, the RKMK method shows energy drift as expected. These tests were done with step size $h=0.05$ over $10^5$ steps. See Table~\ref{tab:experiment} for a summary of the properties of the tested methods.
\begin{table} \centering \morespacearray \begin{tabular}{r|c|c|c|c|} \cline{2-5} & \multicolumn{2}{c|}{RKMK} & \multicolumn{2}{c|}{SLGI} \\ \cline{2-5} & $\theta = 0$ & $\theta = 1/2$ & $\theta = 0$ & $\theta = 1/2$ \\ \hline \multicolumn{1}{|r|}{Symplectic} & no & no & yes & yes \\ \multicolumn{1}{|r|}{Symmetric} & no & yes & no & yes \\ \multicolumn{1}{|r|}{No energy drift} & no & yes & yes & yes \\ \hline \end{tabular} \caption{Properties of the tested methods. The energy drift was observed numerically.} \label{tab:experiment} \end{table} \section{Discrete gradients and integral preserving methods on Lie groups} \label{sec:discdiff} The discrete gradient method for preserving first integrals has to a large extent been made popular through the works of Gonzalez \cite{gonzalez96tia} and McLachlan et al.\ \cite{mclachlan99giu}. The latter proved the result that under relatively general circumstances, a differential equation which has a first integral $I(x)$ can be written in the form \[ \dot{x} = S(x) \nabla I(x), \] for some non-unique solution-dependent skew-symmetric matrix $S(x)$. The idea is to introduce a mapping which resembles the true gradient; a \emph{discrete gradient} $\overline{\nabla}I \colon \mathbf{R}^d\times\mathbf{R}^d\rightarrow \mathbf{R}^d$ which is a continuous map that satisfies the following two conditions: \begin{align*} \overline{\nabla}I(x,x) &= \nabla I(x),\quad \forall x, \\ I(y)-I(x) &= \overline{\nabla}I(x,y)^T(y-x),\quad \forall x\neq y. \end{align*} An integrator which preserves $I$, that is, $I(x_n)=I(x_0)$ for all $n$ is now easily devised as \[ \frac{x_{n+1}-x_n}{h} = \tilde{S}(x_n,x_{n+1})\overline{\nabla}I(x_n,x_{n+1}), \] where $\tilde{S}(x,y)$ is some consistent approximation to $S(x)$, i.e.\ $\tilde{S}(x,x)=S(x)$. 
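As a concrete illustration (our own sketch; the test system, step size and fixed-point solver are ad hoc choices), the integral-preserving scheme just described can be implemented with the mid-point discrete gradient of Gonzalez, displayed below as \eqref{eq:gonz}:

```python
import numpy as np

def dgrad(I, gradI, x, y):
    # Gonzalez mid-point discrete gradient: by construction it satisfies
    # I(y) - I(x) = dgrad(I, gradI, x, y) . (y - x) exactly.
    g = gradI((x + y) / 2)
    d = y - x
    n2 = d @ d
    if n2 == 0.0:
        return g
    return g + (I(y) - I(x) - g @ d) / n2 * d

def dg_step(I, gradI, S, x, h, sweeps=50):
    # One step of (x' - x)/h = S * dgrad(x, x'), here with a constant
    # skew matrix S, solved by a simple fixed-point iteration.
    y = x.copy()
    for _ in range(sweeps):
        y = x + h * S @ dgrad(I, gradI, x, y)
    return y
```

Since $S$ is skew-symmetric, $I(x_{n+1})-I(x_n)=h\,\overline{\nabla}I^T S\,\overline{\nabla}I=0$, so the invariant is preserved up to the accuracy of the fixed-point solve; a Duffing-type oscillator with $I(x)=x_1^4/4+x_2^2/2$ makes a convenient test case.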
There exist several discrete gradients; two of the most popular are \begin{equation} \label{eq:cavf} \overline{\nabla}I(x,y) = \int_0^1 \nabla I(\zeta y + (1-\zeta) x) \, \ensuremath{\mathrm{d}} \zeta, \end{equation} and \begin{equation} \label{eq:gonz} \overline{\nabla} I(x,y) = \nabla I\left(\frac{x+y}{2}\right) + \frac{I(y)-I(x)-\nabla I\left(\frac{x+y}{2}\right)^T(y-x)}{\lVert y-x\rVert^2} (y-x). \end{equation} The matrix $\tilde{S}(x,y)$ can be constructed with the purpose of increasing the accuracy of the resulting approximation, see e.g.\ \cite{quispel08anc}. We now generalise the concept of the discrete gradient to a Lie group $G$. We consider differential equations which can, for a given dual two-form\footnote{By dual two-form, we here mean a differential two-form on $G$ such that on each fibre of the cotangent bundle we have $\omega_x \colon T_x^*G\times T_x^*G\rightarrow\mathbf{R}$, a bilinear, skew-symmetric form. Such forms are sometimes called bivectors or 2-vectors.} $\omega\in\Omega_2(G)$ and a function $H \colon G\rightarrow\mathbf{R}$, be written in the form \begin{equation*} \label{eq:intprod} \dot{x} = \mathrm{i}_{\ensuremath{\mathrm{d}} H}\omega, \end{equation*} where $\mathrm{i}_{\alpha}$ is the interior product $\mathrm{i}_\alpha\omega(\beta)=\omega(\alpha,\beta)$ for any two one-forms $\alpha, \beta\in\Omega^1(G)$. The function $H$ is a first integral since \[ \frac{\ensuremath{\mathrm{d}}}{\ensuremath{\mathrm{d}} t} H(x(t)) = \ensuremath{\mathrm{d}} H_{x(t)}(\dot{x}(t))=\omega(\ensuremath{\mathrm{d}} H, \ensuremath{\mathrm{d}} H)=0.
\] We define the \emph{trivialised discrete differential} (TDD) of the function $H$ to be a continuous map $\ensuremath{\overline{\ensuremath{\mathrm{d}}}} H \colon G\times G \rightarrow \ensuremath{\mathfrak{g}}^*$ such that \begin{align*} H(x')-H(x) &= \langle \ensuremath{\overline{\ensuremath{\mathrm{d}}}} H(x,x'), \log(x' \cdot x^{-1}) \rangle, \\ \ensuremath{\overline{\ensuremath{\mathrm{d}}}} H(x,x) &= R_x^*\,\ensuremath{\mathrm{d}} H_x. \end{align*} A numerical method can now be defined in terms of the discrete differential as \[ x' = \exp(h \, \mathrm{i}_{\ensuremath{\overline{\ensuremath{\mathrm{d}}}} H(x,x')}\bar{\omega}(x,x')) \cdot x, \] where $\bar{\omega}$ is a continuous map from $G\times G$ into the space of exterior two-forms on $\ensuremath{\mathfrak{g}}^*$, $\Lambda^2(\ensuremath{\mathfrak{g}}^*)$. This exterior form is some local trivialised approximation to $\omega$, meaning that we impose the following consistency condition \begin{equation*} \label{eq:consomb} \bar{\omega}(x,x)(R_x^*\alpha, R_x^*\beta) = \omega_x(\alpha,\beta), \qquad \text{for all } \alpha, \beta\in T_x^*G. \end{equation*} We easily see that this method preserves $H$ exactly, since \begin{align*} H(x')-H(x) &= \langle\ensuremath{\overline{\ensuremath{\mathrm{d}}}} H(x,x'), \log(x' \cdot x^{-1})\rangle \\ &= \langle\ensuremath{\overline{\ensuremath{\mathrm{d}}}} H(x,x'), h\,\mathrm{i}_{\ensuremath{\overline{\ensuremath{\mathrm{d}}}} H(x,x')}\bar\omega(x, x')\rangle \\ &= h\,\bar{\omega}(x,x') (\ensuremath{\overline{\ensuremath{\mathrm{d}}}} H(x,x'), \ensuremath{\overline{\ensuremath{\mathrm{d}}}} H(x,x')) = 0. \end{align*} Extending \eqref{eq:cavf} to the Lie group setting, we define the following TDD: \begin{equation*} \label{eq:tdgavf} \ensuremath{\overline{\ensuremath{\mathrm{d}}}} H(x,x') = \int_0^1 R_{\ell(\xi)}^* \, \ensuremath{\mathrm{d}} H_{\ell(\xi)} \, \ensuremath{\mathrm{d}}\xi,\qquad \ell(\xi) = \exp(\xi\log(x'\cdot x^{-1})) \cdot x.
\end{equation*} Similarly, for any given inner product on $\ensuremath{\mathfrak{g}}$, we may extend the discrete gradient \eqref{eq:gonz} to \begin{equation*} \label{eq:gonz1} \ensuremath{\overline{\ensuremath{\mathrm{d}}}} H(x,x') = R_{\bar{x}}^* \, \ensuremath{\mathrm{d}} H_{\bar{x}} + \frac{H(x')-H(x)-\langle R_{\bar{x}}^* \, \ensuremath{\mathrm{d}} H_{\bar{x}},\eta\rangle }{\lVert\eta\rVert^2}\eta^\flat,\quad\eta=\log(x' \cdot x^{-1}), \end{equation*} where $\bar{x}\in G$ for instance could be $\bar{x}=\exp(\eta/2) \cdot x$, a choice which would cause $\ensuremath{\overline{\ensuremath{\mathrm{d}}}} H(x,x')=\ensuremath{\overline{\ensuremath{\mathrm{d}}}} H(x',x)$. The standard notation $\eta^\flat$ is used for index lowering: the inner product $({\cdot},{\cdot})$ associates to any element $\eta\in\ensuremath{\mathfrak{g}}$ the dual element $\eta^\flat\in\ensuremath{\mathfrak{g}}^*$ through $\langle\eta^\flat, \zeta\rangle = (\eta,\zeta)$, $\forall\zeta\in\ensuremath{\mathfrak{g}}$. Suppose that the ODE vector field $F$ is known, as well as the invariant $H$. A dual two-form $\omega$ can now be defined in terms of a Riemannian metric on $G$. By index raising applied to $\ensuremath{\mathrm{d}} H$, we obtain the Riemannian gradient vector field $\grad H$, and we define \[ \omega = \frac{\grad H\wedge F}{\lVert\grad H\rVert^2}\qquad \Rightarrow\qquad \mathrm{i}_{\ensuremath{\mathrm{d}} H}\omega = F. \] \paragraph{Example} We consider the equations for the attitude rotation of a free rigid body expressed using Euler parameters. The set $S^3=\{ \mathbbm{q}\in \mathbf{R}^4 \mid \lVert\mathbbm{q}\rVert_2=1\}$ with $\mathbbm{q}=(q_0,\mathbf{q})$, ($q_0\in\mathbf{R}$ and $\mathbf{q}\in\mathbf{R}^3$), is a Lie group with the quaternion product \[ \mathbbm{p}\cdot \mathbbm{q}=(p_0q_0-\mathbf{p}^T\mathbf{q}, p_0\mathbf{q}+q_0\mathbf{p}+\mathbf{p}\times \mathbf{q}), \] with unit $\mathbbm{e}=(1, 0, 0, 0)$ and inverse $\mathbbm{q}_c=(q_0,\,-\mathbf{q})$.
We denote by ``$\hat{\quad}$" the hat-map defined in \eqref{eq:hatmap}. The Lie group $S^3$ can be mapped into $\SO(3)$ by the Euler--Rodrigues map: \[ \mathcal{E}(\mathbbm{q})=I_3+2q_0\hat{\mathbf{q}}+2\hat{\mathbf{q}}^2, \] where $I_3$ denotes the $3\times 3$ identity matrix. The Lie algebra $\mathfrak{s}^3$ of $S^3$ is the set of so called pure quaternions, the elements of $\mathbf{R}^4$ with first component equal to zero, identifiable with $\mathbf{R}^3$ and with $\mathfrak{so}(3)$ via the hat-map. The equations for the attitude rotation of a free rigid body on $S^3$ read \[ \dot{\mathbbm{q}}=f(\mathbbm{q})\cdot \mathbbm{q},\qquad f(\mathbbm{q})=\mathbbm{q}\cdot \mathbbm{v} \cdot \mathbbm{q}_c, \] and \[ \mathbbm{v}=(0, \mathbf{v}),\qquad \mathbf{v} = \frac{1}{2} \mathbb{I}^{-1} \mathcal{E} (\mathbbm{q}_c)\mathbf{m}_0, \] where $\mathbf{m}_0$ is the initial body angular momentum and $\mathbb{I}$ is the diagonal inertia tensor, and according to the notation previously used in this section $F(\mathbbm{q})=f(\mathbbm{q})\cdot \mathbbm{q}$. The energy function is \[H(\mathbbm{q})=\frac{1}{2}\,\mathbf{m}_0^T\mathcal{E}(\mathbbm{q})\mathbb{I}^{-1}\mathcal{E}(\mathbbm{q}_c)\mathbf{m}_0.\] We consider the $\mathbf{R}^3$ Euclidean inner product as metric in the Lie algebra $\mathfrak{s}^3$, and obtain by right translation a Riemannian metric on $S^3$. The Riemannian gradient of $H$ with respect to this metric is then \[\grad H=(I_4-\mathbbm{q}\mathbbm{q}^T)\nabla H,\] where $I_4$ is the identity in $\mathbf{R}^{4\times4}$ and $\nabla H$ is the usual gradient of $H$ as a function from $\mathbf{R}^4$ to $\mathbf{R}$. We identify $\mathfrak{s}^3$ with its dual, and using $\grad H$ in \eqref{eq:gonz} we obtain the (dual) discrete differential $\overline{\grad H}(\mathbbm{q},\mathbbm{q}')\in \mathfrak{s}^3$. 
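The quaternion arithmetic above is easy to check numerically. The following Python sketch (our own illustration, with the hat-map written out explicitly) implements the quaternion product and the Euler--Rodrigues map; one can verify that $\mathcal{E}$ is a homomorphism into the rotation matrices, $\mathcal{E}(\mathbbm{p}\cdot \mathbbm{q})=\mathcal{E}(\mathbbm{p})\mathcal{E}(\mathbbm{q})$.

```python
import numpy as np

def qprod(p, q):
    # quaternion product on S^3 as in the text: (p0, pv) . (q0, qv)
    p0, pv = p[0], p[1:]
    q0, qv = q[0], q[1:]
    return np.concatenate(([p0 * q0 - pv @ qv],
                           p0 * qv + q0 * pv + np.cross(pv, qv)))

def hat(v):
    # hat-map R^3 -> so(3)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def euler_rodrigues(q):
    # E(q) = I + 2 q0 q^ + 2 (q^)^2, a rotation matrix for unit q
    Q = hat(q[1:])
    return np.eye(3) + 2.0 * q[0] * Q + 2.0 * Q @ Q
```

For any unit quaternion the result is orthogonal with determinant one, i.e.\ an element of $\SO(3)$.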
The two-form $\omega = \frac{\grad H\wedge F}{\lVert\grad H\rVert^2}$ with respect to the right trivialisation can be identified with the $4\times 4$ skew-symmetric matrix \[\omega_R(\mathbbm{q})=\frac{\xi\,\gamma^T-\gamma\,\xi^T}{\lVert\gamma\rVert^2},\qquad \xi ,\, \gamma\in\mathfrak{s}^3,\qquad \xi\cdot \mathbbm{q}=F(\mathbbm{q}), \qquad \gamma \cdot \mathbbm{q}=\grad H(\mathbbm{q}),\] where $\omega_R(\mathbbm{q})$ has first row and first column equal to zero. We choose $\bar{\omega}$ to be \[\bar{\omega}(\mathbbm{q},\mathbbm{q}')=\omega_R(\bar{\mathbbm{q}}),\qquad \bar{\mathbbm{q}}=\exp(\eta/2) \cdot \mathbbm{q},\qquad \eta=\log (\mathbbm{q}'\cdot \mathbbm{q}_c),\] i.e.\ $\omega_R$ frozen at the mid-point $\bar{\mathbbm{q}}$. The energy-preserving Lie group method of second order is \[\mathbbm{q}'=\exp( h\,\bar{\omega}(\mathbbm{q},\mathbbm{q}') \overline{\grad H}(\mathbbm{q},\mathbbm{q}'))\cdot \mathbbm{q},\] and $\exp$ is the exponential map from $\mathfrak{s}^3$ to $S^3$ with $\log \colon S^3\rightarrow \mathfrak{s}^3$ as its inverse, defined locally around the identity. In Figure~\ref{fig:EP} we plot the body angular momentum vector $\mathbf{m}=\mathcal{E}(\mathbbm{q}_c)\mathbf{m}_0$ on a time interval $[0,T]$, $T=1000$, for four different methods: the Lie group energy-preserving integrator just described (top left), the built-in \textsc{Matlab} routine \texttt{ode45} with absolute and relative tolerance $10^{-6}$ (top right); the \texttt{ode45} routine with tolerances $10^{-14}$ (bottom left); and the explicit Heun RKMK Lie group method (bottom right). \begin{figure} \caption{Free rigid body angular momentum, time interval $[0,1000]$, moments of inertia $I_1=1$, $I_2=5$, $I_3=60$, initial angular velocity $\mathbb{I}\,\mathbf{m}_0=(1, 1/2, -1)^T$. 
(Top left) energy-preserving Lie group method, $h=1/64$; (top right) \texttt{ode45} with tolerances $10^{-6}$; (bottom left) \texttt{ode45} with tolerances $10^{-14}$; (bottom right) Heun RKMK, $h=1/64$.} \label{fig:EP} \end{figure} Both Lie group methods have order $2$. The energy-preserving method is symmetric, and it preserves both the energy and the constraint $\lVert\mathbbm{q}\rVert_2=1$. The Lie group integrators use a step-size $h=1/64$. The solution of the energy-preserving Lie group method is qualitatively similar to the highly accurate solution produced by the built-in \textsc{Matlab} routine with tolerances $10^{-14}$. The energy error is also comparable for these two experiments. The performance of other \textsc{Matlab} built-in routines we tried was worse than for \texttt{ode45}. We remark that the equations are formulated as differential equations on $S^3$; a formulation of the problem in the form of a differential algebraic equation would possibly have improved the performance of the \textsc{Matlab} built-in routines. However, it seems that the preservation of the constraint alone cannot guarantee the good performance of the method. In fact, the explicit (non-symmetric) Lie group integrator preserves the constraint $\lVert\mathbbm{q}\rVert_2=1$, but performs poorly on this problem (see Figure~\ref{fig:EP} bottom right). The cost per step of the explicit Lie group integrator is much lower than for the energy-preserving symmetric Lie group integrator. We have given an introduction to Lie group integrators for differential equations on manifolds using the notions of frames and Lie group actions. A few application areas have been discussed. An interesting observation is that when the Lie group featuring in the method can be chosen to be the Euclidean group, the resulting integrator always reduces to some well-known numerical scheme, like for instance a Runge--Kutta method.
In this way, one may think of the Lie group integrators as a superset of the traditional integrators, and a natural question to ask is whether the Euclidean choice will always be superior to any other choice of Lie group and group action. Lie group integrators that are symplectic for Hamiltonian problems in the general setting presented here are, to the best of our knowledge, not available. However, we have shown that such methods exist in the important case of Hamiltonian problems on the cotangent bundle of a Lie group. There remain many open questions regarding this type of scheme, for instance how to obtain high order methods. The preservation of first integrals in Lie group integrators has been achieved in the literature by imposing a group action in which the orbit, i.e.\ the reachable set of points, is contained in the level set of the invariant. But it is not always convenient to impose such group actions, and we have here suggested a type of Lie group integrator which can preserve any prescribed invariant for the case where the manifold is a Lie group acting on itself by left or right multiplication. An interesting idea to pursue is the generalisation of this approach to arbitrary smooth manifolds with a group action. \end{document}
\begin{document} \title{Analytic Continuation of Holomorphic Correspondences and Equivalence of Domains in ${\mathbb C}^n$.} \author{Rasul Shafikov \\{\small \tt [email protected]}} \date{\today} \maketitle \begin{abstract} The following result is proved: Let $D$ and $D'$ be bounded domains in $\mathbb C^n$ such that $\partial D$ is smooth, real-analytic and simply connected, and $\partial D'$ is connected, smooth and real-algebraic. Then there exists a proper holomorphic correspondence $f:D\to D'$ if and only if $\partial D$ and $\partial D'$ are locally CR-equivalent. This leads to a characterization of the equivalence relation between bounded domains in $\mathbb C^n$ modulo proper holomorphic correspondences in terms of CR-equivalence of their boundaries. \end{abstract} \section{Definitions and Main Results.} Following Stein \cite{st1} we say that a holomorphic correspondence between two domains $D$ and $D'$ in ${\mathbb C}^n$ is a complex-analytic set $A \subset D\times D'$ which satisfies: (i) $A$ is of pure complex dimension $n$, and (ii) the natural projection $\pi:A\to D$ is proper. A correspondence $A$ is called proper if, in addition, the natural projection $\pi': A\to D'$ is proper. We may also think of $A$ as the graph of a multiple valued mapping defined by $F:=\pi'\circ\pi^{-1}$. Holomorphic correspondences were studied, for instance, in \cite{bb}, \cite{bb2}, \cite{be2}, \cite{bsu}, \cite{dp1}, \cite{dfy}, \cite{p3}, and \cite{v}. In this paper we address the following question: Given two domains $D$ and $D'$, when does there exist a proper holomorphic correspondence $F:D\to D'$? Note (see \cite{st2}) that the existence of a correspondence $F$ defines an equivalence relation $D \sim D'$. This equivalence relation is a natural generalization of biholomorphic equivalence of domains in ${\mathbb C}^n$.
To illustrate the concept of equivalence modulo holomorphic correspondences, consider domains of the form $\Omega_{p,q}= \{|z_1|^{2p}+|z_2|^{2q}<1 \}, \ \ p,q\in{\mathbb Z}^+$. Then $f(z)=({z_1}^{p/s},{z_2}^{q/t})$ is a proper holomorphic correspondence between $\Omega_{p,q}$ and $\Omega_{s,t}$, while a proper holomorphic map from $\Omega_{p,q}$ to $\Omega_{s,t}$ exists only if $s|p$ and $t|q$, or $s|q$ and $t|p$. For details see \cite{bd} or \cite{l}. The main result of this paper is the following theorem. \begin{theorem}\label{t1} Let $D$ and $D'$ be bounded domains in ${\mathbb C}^n$, $n>1$. Let $\partial D$ be smooth, real-analytic, connected and simply connected, and let $\partial D'$ be connected, smooth, real-algebraic. Then there exists a proper holomorphic correspondence $F:D\to D'$ if and only if there exist points $p\in\partial D$, $p'\in\partial D'$, neighborhoods $U\ni p$ and $U'\ni p'$, and a biholomorphic map $f:U\to U'$, such that $f(U\cap\partial D)= U'\cap\partial D'$. \end{theorem} In other words, we show that a germ of a biholomorphic mapping between the boundaries extends analytically to a holomorphic correspondence of the domains. By the example above, the extension will in general not be single-valued. Note that we do not require pseudoconvexity of either $D$ or $D'$. Also $\partial D'$ is not assumed to be simply connected. If both $D$ and $D'$ have real-algebraic boundary, i.e. each is globally defined by an irreducible real polynomial, then we can drop the requirement of simple connectivity of $\partial D$. \begin{theorem}\label{t2} Let $D$ and $D'$ be bounded domains in ${\mathbb C}^n$, $n>1$, with connected smooth real-algebraic boundaries. Then there exists a proper holomorphic correspondence between $D$ and $D'$ if and only if there exist points $p\in\partial D$, $p'\in\partial D'$, neighborhoods $U\ni p$ and $U'\ni p'$, and a biholomorphic map $f:U\to U'$, such that $f(U\cap\partial D)= U'\cap\partial D'$.
\end{theorem} The proof of Theorem \ref{t2} is simpler than that of Theorem \ref{t1} due to the fact that when both domains are algebraic, Webster's theorem \cite{w} can be applied. We give a separate complete proof of Theorem \ref{t2} in Section 4 to emphasize the ideas of the proof of Theorem \ref{t1} and the difficulties that arise in the case when one of the domains is not algebraic. Local CR-equivalence of the boundaries, which is used in the above theorems to characterize correspondence equivalence, is a well-studied subject. Chern and Moser \cite{cm} gave the solution to the local equivalence problem for real-analytic hypersurfaces with non-degenerate Levi form both in terms of normalization of Taylor series of the defining functions and in terms of intrinsic differential-geometric invariants of the hypersurfaces. See also \cite{se}, \cite{ca} and \cite{t}. Note that for a bounded domain in ${\mathbb C}^n$ with smooth real-analytic boundary, the set of points where the Levi form is degenerate is a closed nowhere dense set. Thus we may reformulate Theorem \ref{t1} in the following way. \begin{theorem} Let $D$ and $D'$ be as in Theorem \ref{t1}. Then $D$ and $D'$ are correspondence equivalent if and only if there are points $p\in\partial D$ and $p'\in\partial D'$ such that the Levi forms of the defining functions are non-degenerate at $p$ and $p'$, and the corresponding Chern-Moser normal forms are equivalent. \end{theorem} Theorem \ref{t1} generalizes the result in \cite{p1}, which states that a bounded, strictly pseudoconvex domain $D\subset{\mathbb C}^n$ with connected, simply-connected, real-analytic boundary is biholomorphically equivalent to ${\mathbb B}^n={\mathbb B}(0,1)$, the unit ball centered at the origin, if and only if there exists a local biholomorphism of the boundaries.
It was later shown in \cite{cj} (see also \cite{ber}) that simple connectivity of $\partial D$ can be replaced by simple connectivity of the domain $D$ and connectedness of $\partial D$. In \cite{p2} Pinchuk also established necessary and sufficient conditions for equivalence of bounded, strictly pseudoconvex domains with real-analytic boundaries. Two such domains $D$ and $D'$ are biholomorphically equivalent if and only if there exist points $p\in\partial D$ and $p'\in\partial D'$ such that $\partial D$ and $\partial D'$ are locally equivalent at $p$ and $p'$. If in Theorem \ref{t1} $\partial D$ is not assumed to be simply connected, then the result is no longer true. Indeed, in a famous example Burns and Shnider \cite{bs} constructed a domain $\O$ with the boundary given by \begin{equation}\label{bad} \partial \O=\{z\in{\mathbb C}^2 : \sin(\ln|z_2|) + |z_1|^2=0, \ e^{-\pi}\le|z_1|\le 1\}, \end{equation} which is real-analytic and strictly pseudoconvex but not simply connected. There exists a mapping $f:{\mathbb B}^2\to \O$ such that $f$ does not extend even continuously to a point on $\partial {\mathbb B}^2$. The inverse to $f$ gives a local biholomorphic map from $\partial \O$ to $\partial {\mathbb B}^2$, but nevertheless $f^{-1}$ does not extend to a global proper holomorphic correspondence between $\O$ and ${\mathbb B}^2$. Furthermore, suppose that there exists a proper correspondence $g:\O\to {\mathbb B}^2$. Since ${\mathbb B}^2$ is simply connected, $g^{-1}$ is a proper holomorphic mapping, which extends holomorphically to a neighborhood of $\overline{ {\mathbb B}^2}$. Let $p\in \partial \O$, $q^1\in f^{-1}(p)$, and $q^2\in g(p)$. By the result in \cite{bb} $g$ splits at every point in ${\overline \O}$. Therefore, with a suitable choice of the branch of $g$ near $p$, $\phi:=g\circ f$ defines a local biholomorphic mapping from a neighborhood $U_1\ni q^1$ to some neighborhood $U_2\ni q^2$.
Moreover, $\phi(U_1\cap \partial {\mathbb B}^2)\subset (U_2\cap \partial {\mathbb B}^2)$. By the theorem of Poincar\'e-Alexander (see e.g. \cite{a}), $\phi$ extends to a global automorphism of ${\mathbb B}^2$. Thus after a biholomorphic change of coordinates, by the uniqueness theorem $f$ and $g^{-1}$ must agree in ${\mathbb B}^2$. But $f:{\mathbb B}^2\to \Omega$ is not a proper map. This contradiction shows that there are no proper holomorphic correspondences between $\O$ and ${\mathbb B}^2$. Thus the condition of simple connectivity of $\partial D$ in Theorem \ref{t1} cannot in general be weakened. One direction in the proof of Theorems \ref{t1} and \ref{t2} is essentially contained in the work of Berteloot and Sukhov \cite{bsu}. The proof in the other direction is based on the idea of extending the mapping $f$ along $\partial D$ as a holomorphic correspondence. It is not known whether Theorem \ref{t1} holds if $\partial D'$ is only real-analytic rather than real-algebraic. The main difficulty is to prove local analytic continuation of holomorphic mappings along real-analytic hypersurfaces in the case when the hypersurfaces are not strictly pseudoconvex. In particular, Lemma \ref{l:alongSV} cannot be directly established for real-analytic hypersurfaces. In Section 2 we present background material on Segre varieties and holomorphic correspondences. In Section 3 we prove a technical result, important for the proofs of the main theorems. Section 4 contains the proof of Theorem~\ref{t2}. In Section 5 we prove local extendability of holomorphic correspondences along hypersurfaces. Theorem \ref{t1} is proved in Section 6. \section{Background Material.} Let $\G$ be an arbitrary smooth real-analytic hypersurface with a defining function $\rho(z,\overline z)$ and let $z^0\in\G$.
In a suitable neighborhood $U\ni z^0$, to every point $w\in U$ we can associate its so-called Segre variety defined as \begin{equation} Q_w=\left\{ z\in U: \rho(z,\overline w)=0 \right\}, \end{equation} where $\rho(z,\overline w)$ is the complexification of the defining function of $\G$. After a local biholomorphic change of coordinates near $z^0$, we can find neighborhoods $U_1 \Subset U_2$ of $z^0$, where \begin{equation}\label{e:1.00} U_2={'U_2}\times {{''U_{2}}}\subset {\mathbb C}^{n-1}_{{\ 'z}}\times {\mathbb C}_{z_n}, \end{equation} such that for any $w\in U_1$, $Q_w$ is a closed smooth complex-analytic hypersurface in $U_2$. Here $z=({'z},z_n)$. Furthermore, a Segre variety can be written as the graph of a holomorphic function, \begin{equation}\label{e:implicit} Q_w=\left\{({'z},z_n)\in {'U_2}\times {''U_{2}} : \ z_n=h({'z}, \overline w)\right\}, \end{equation} where $h(\cdot,\overline w)$ is holomorphic in ${'U_{2}}$. Following \cite{dp1} we call $U_1$ and $U_2$ a \textsl{standard} pair of neighborhoods of $z^0$. A detailed discussion of elementary properties of Segre varieties can be found in \cite{dw}, \cite{df2}, \cite{dp1} or \cite{ber1}. The map $\lambda:z\mapsto Q_z$ is called the Segre map. We define $I_w=\l^{-1}\circ \l(w)$. This is equivalent to \begin{equation} I_w=\left\{z\in U_1: Q_z=Q_w\right\}. \end{equation} If $\G$ is of finite type in the sense of D'Angelo, or more generally, if $\G$ is essentially finite, then there exists a neighborhood $U$ of $\G$ such that for any $w\in U$ the set $I_w$ is finite. Due to the result in \cite{df1}, this is the case for compact smooth real-analytic hypersurfaces, in particular for the boundaries of $D$ and $D'$ in Theorem \ref{t1}. The last observation is crucial for the construction of proper holomorphic correspondences used throughout this paper. We remark that our choice of a standard pair of neighborhoods of any point $z\in\G$ is always such that for any $w\in U_2$, the set $I_w$ is finite.
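As a concrete illustration of these definitions (an example added here for the reader; it is not used in the sequel), consider the unit sphere in ${\mathbb C}^n$, for which everything can be computed explicitly:

```latex
% Segre varieties of the unit sphere (illustrative example).
% Defining function and its complexification:
%   \rho(z,\overline z)=\textstyle\sum_j z_j\overline{z_j}-1,
%   \rho(z,\overline w)=\textstyle\sum_j z_j\overline{w_j}-1.
\Gamma=\Big\{z\in{\mathbb C}^n:\ \sum_{j=1}^{n}|z_j|^2=1\Big\},
\qquad
Q_w=\Big\{z:\ \sum_{j=1}^{n} z_j\overline{w_j}=1\Big\}.
```

Here each $Q_w$, $w\ne 0$, is an affine complex hyperplane, the Segre map is injective, and therefore $I_w=\{w\}$ for every $w\ne 0$; in particular, the sphere is essentially finite.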
For the proof of Theorem \ref{t1} we will need the following lemma. \begin{lemma}\label{l-finite} Let $\G$ be a compact, smooth, real-algebraic hypersurface. Then there exist a neighborhood $U$ of $\G$ and an integer $m\ge 1$ such that for almost any $z\in U$, $\#I_z = m$. \end{lemma} \noindent{\it Proof.} Let $P(z,\overline z)$ be the defining polynomial of $\G$ of degree $d$ in $z$. The complexified defining function can be written in the form \begin{equation} P(z,\overline w) = \sum_{|K|=0}^d{a_K(\overline w) z^K},\ \ K=(k_1,\dots, k_n). \end{equation} We may consider the projectivized version $\tilde P:\mathbb C^{n+1} \times\mathbb C^n \to \mathbb C$ of the polynomial $P$: \begin{equation}\label{proj-poly} \tilde P(\tilde\zeta,\overline w)=\zeta_0^d \sum_{|K|=0}^d{a_K(\overline w) \left(\frac{\zeta}{\zeta_0}\right)^K}= \sum_{|K|\le d} a_K(\overline w)\,\zeta_0^{\,d-|K|}\zeta^K, \end{equation} where $z_j=\frac{\zeta_j}{\zeta_0}$, $\zeta=(\zeta_1,\dots,\zeta_n)$, and $\tilde\zeta = (\zeta_0,\zeta)$. Let $\tilde Q_w = \{\tilde\zeta\in \mathbb P^n : \tilde P(\tilde\zeta,\overline w)=0\}$. Then $Q_w=\tilde Q_w\cap \{\zeta_0=1\}$. Define the map $\hat \l$ between the set of points in $\mathbb C^n$ and the space of complex hypersurfaces in $\mathbb P^n$, by letting $\hat \l (w) = \{\tilde\zeta\in\mathbb P^n : \tilde P(\tilde\zeta, \overline w)=0\}$. Hypersurfaces in $\mathbb P^n$ can be parametrized by points in $\mathbb P^N$, where $N$ is some large integer, thus $\hat \l: \mathbb C^n\to \mathbb P^{N}$. Note that each component of $\hat\l$ is defined by an (antiholomorphic) polynomial. It follows that $\hat\l^{-1}\circ\hat\l (w)=I_w$ for every $w\in\mathbb C^n$ for which $Q_w$ is defined. Indeed, suppose $\xi\in I_w$. Then $\{P(z,\overline\xi)=0\}=\{P(z,\overline w)=0\}$, and therefore, \begin{equation}\label{p=p} \{\tilde P(\tilde\zeta,\overline\xi)=0\}= \{\tilde P(\tilde\zeta,\overline w)=0\}. \end{equation} Thus $\hat\l(\xi)=\hat\l(w)$. The converse clearly also holds.
From the reality condition on $P(z,\overline z)$ (see \cite{dw}), \begin{equation} I_w\subset \G, {\rm\ \ for\ } w\in\G. \end{equation} Since $\G$ is compact, $I_w=\hat\l^{-1}\circ\hat\l (w)$ is finite. Let $Y=\hat\l(\mathbb C^n)$. Then $\dim Y = n$, and $\hat\l$ is a dominating regular map. It follows (see e.g. \cite{m}) that there exists an algebraic variety $Z\subset Y$ such that for any $q\in Y\setminus Z$, \begin{equation}\label{deg} \#\hat\lambda^{-1} (q)=\deg(\hat\lambda)=m, \end{equation} where $m$ is some positive integer. From (\ref{deg}) the assertion follows. $\square$ The following statement describes the invariance property of Segre varieties under holomorphic mappings. It is analogous to the classical Schwarz reflection principle in dimension one. Suppose that $\G$ and $\G'$ are real-analytic hypersurfaces in ${\mathbb C}^n$, $(U_1, U_2)$ and $(U'_1, U'_2)$ are standard pairs of neighborhoods for $z_0\in\G$ and $z'_0\in\G'$ respectively. Let $f: U_2 \to U'_2$ be a holomorphic map, $f(U_1)\subset U'_1$ and $f(\G\cap U_2)\subset(\G'\cap U'_2)$. Then \begin{equation}\label{e-invar} f(Q_w\cap U_2)\subset Q'_{f(w)}\cap U'_2,\ \ {\rm for\ all\ } w\in U_1. \end{equation} A similar invariance property also holds for a proper holomorphic correspondence $f:U_2 \to U'_2$, $f(\G\cap U_2)\subset(\G'\cap U'_2)$. In this case (\ref{e-invar}) indicates that any branch of $f$ maps any point from $Q_w$ into $Q'_{w'}$ for any $w'\in f(w)$. For details, see \cite{dp1} or \cite{v}. Let $f:D\to D'$ be a holomorphic correspondence. We say that $f$ is {\it irreducible} if the corresponding analytic set $A\subset D\times D'$ is irreducible. The condition that $f$ is proper is equivalent to the condition that \begin{equation} \sup \{ {\rm dist}(f(z),\partial D') \}\to 0,\ \ {\rm as}\ \ {\rm dist}(z,\partial D)\to 0. \end{equation} Recall that if $A\subset D\times D'$ is a proper holomorphic correspondence, then $\pi: A\to D$ and $\pi':A\to D'$ are proper.
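To connect the invariance property (\ref{e-invar}) with the classical one-dimensional picture invoked above, here is the model computation (added purely for illustration) for $\G=\G'={\mathbb R}\subset{\mathbb C}$:

```latex
% One-dimensional model: \Gamma=\Gamma'={\mathbb R}\subset{\mathbb C}.
% Defining function, its complexification, and the Segre varieties:
\rho(z,\overline z)=\tfrac{1}{2i}\,(z-\overline z),
\qquad
Q_w=\{z:\ z-\overline w=0\}=\{\overline w\}.
% For f holomorphic near a real point with f({\mathbb R}\cap U)\subset{\mathbb R},
% the inclusion f(Q_w)\subset Q'_{f(w)} becomes
f(\overline w)=\overline{f(w)},
% i.e. precisely the Schwarz reflection principle.
```

In this sense (\ref{e-invar}) is the several-variable substitute for reflection: the Segre variety $Q_w$ plays the role of the reflected point $\overline w$.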
There exists a complex subvariety $S\subset D$ and a number $m$ such that \begin{equation} f:=\pi'\circ \pi^{-1}=(f^1(z),\dots,f^m(z)), \ \ z\in D, \end{equation} where $f^j$ are distinct holomorphic functions in a neighborhood of each point $z\in D\setminus S$. The set $S$ is called the {\it branch locus} of $f$. We say that the correspondence $f$ {\it splits} at $z\in {\overline D}$ if there is an open subset $U\ni z$ and holomorphic maps $f^j:D\cap U\to D'$, $j=1,2,\dots,m$, that represent $f$. Given a proper holomorphic correspondence $A$, one can find a system of canonical defining functions \begin{equation}\label{e-canon} \Phi_I(z,z') = \sum_{|J|\le m}\phi_{IJ}(z){z'}^J,\ \ |I|=m, \ \ (z,z')\in{\mathbb C}^n\times{\mathbb C}^n, \end{equation} where $\phi_{IJ}(z)$ are holomorphic on $D$, and $A$ is precisely the set of common zeros of the functions $\Phi_I(z,z')$. For details see e.g. \cite{c}. We define analytic continuation of analytic sets as follows. Let $A$ be a locally complex analytic set in ${\mathbb C}^n$ of pure dimension $p$. We say that $A$ {\it extends analytically} to an open set $U\subset{\mathbb C}^n$ if there exists a (closed) complex-analytic set $A^*$ in $U$ such that (i) $\dim A^*=p$, (ii) $A\cap U\subset A^*$ and (iii) every irreducible component of $A^*$ has a nonempty intersection with $A$ of dimension $p$. Note that if conditions (i) and (ii) are satisfied, then the last condition can always be met by removing certain components of $A^*$. It follows from the uniqueness theorem that such analytic continuation of $A$ is uniquely defined. From this we define analytic continuation of holomorphic correspondences: \begin{definition} Let $U$ and $ U'$ be open sets in ${\mathbb C}^n$. Let $f:U\to U'$ be a holomorphic correspondence, and let $A\subset U\times U'$ be its graph.
We say that $f$ extends as a holomorphic correspondence to an open set $V$, $U\cap V\not=\varnothing$, if there exists an open set $V'\subset {\mathbb C}^n$ such that $A$ extends analytically to a set $A^*\subset V\times V'$ and $\pi:A^*\to V$ is proper. \end{definition} Note that we can always choose $V'={\mathbb C}^n$ in the definition above. In general, the correspondence $g=\pi'\circ\pi^{-1}:V\to V'$, where $\pi':A^*\to V'$ is the natural projection, may have more branches in $U\cap V$ than $f$. The following lemma gives a simple criterion for the extension to have the same number of branches. \begin{lemma}\label{l-intersection} Let $A^*\subset V\times{\mathbb C}^n$ be a holomorphic correspondence which is an analytic extension of the correspondence $A\subset U\times {\mathbb C}^n$. Suppose that for any $z\in (V\cap U)$, \begin{equation}\label{pre} \#\{\pi^{-1}(z)\}=\#\{\pi^{*-1}(z)\}, \end{equation} where $\pi:A\to U$ and $\pi^*:A^*\to V$. Then $A\cup A^*$ is a holomorphic correspondence in $(U\cup V)\times {\mathbb C}^n$. \end{lemma} \noindent{\it Proof.} We only need to check that $A\cup A^*$ is closed in $(U\cup V)\times {\mathbb C}^n$. If not, then there exists a sequence $\{q^j\}\subset A^*$ such that $q^j\to q^0$ as $j\to\infty$, $q^0\in U\times {\mathbb C}^n$, and $q^0\not\in A$. Then $q^j\not\in A$ for $j$ sufficiently large. Since by the definition of analytic continuation of correspondences $A\cap \left((U\cap V)\times {\mathbb C}^n\right)\subset A^*$, we have $$ \#\{\pi^{-1}(\pi^*(q^j))\} < \#\{\pi^{*-1}(\pi^*(q^j))\}. $$ But this contradicts (\ref{pre}). $\square$ \section{Extension along Segre Varieties.} Before we prove the main results of the paper we need to establish a technical result on local extension of holomorphic correspondences along Segre varieties. This will be used on several occasions in the proofs of the main theorems.
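The notions of canonical defining functions (\ref{e-canon}), analytic continuation of correspondences, and Lemma \ref{l-intersection} can all be tested on a standard one-variable model (added purely for illustration): the two-valued square root.

```latex
% Model correspondence: the graph of z\mapsto\{\pm\sqrt z\} over a half plane,
A=\{(z,w)\in U\times{\mathbb C}:\ w^2-z=0\},
\qquad
U=\{\operatorname{Re}z>0\};
% its canonical defining function in the sense of (\ref{e-canon}) is the
% single polynomial
\Phi(z,w)=w^2-z.
% A extends analytically to A^*=\{w^2=z\}\subset V\times{\mathbb C} for any
% open V\subset{\mathbb C}, and for every z\in U\cap V
\#\{\pi^{-1}(z)\}=\#\{\pi^{*-1}(z)\}=2,
% so by Lemma \ref{l-intersection} the union A\cup A^* is again a holomorphic
% correspondence, although the single branch w=\sqrt z admits no single-valued
% continuation around z=0.
```

The example also shows why continuation is carried out in the class of correspondences rather than mappings: the full two-valued object continues across $z=0$, while each individual branch does not.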
\begin{lemma}\label{l:alongSV} Let $\G\subset{\mathbb C}^n$ be a smooth, real-analytic, essentially finite hypersurface, and let $\G'\subset{\mathbb C}^n$ be a smooth, real-algebraic, essentially finite hypersurface. Let $0\in\G$, and let $U_1$, $U_2$ be a sufficiently small standard pair of neighborhoods of the origin. Let $f:U\to{\mathbb C}^n$ be a germ of a holomorphic correspondence such that $f(U\cap\G)\subset\G'$, where $U$ is some neighborhood of the origin. Then there exists a neighborhood $V$ of $Q_0\cap U_1$ and an analytic set $\L\subset V$, $\dim_{\mathbb C} \L\le n-1$, such that $f$ extends to $V\setminus \L$ as a holomorphic correspondence. \end{lemma} \noindent{\it Proof.} In the case when $\G'$ is strictly pseudoconvex and $f$ is a germ of a biholomorphic mapping, the result was established in \cite{s1}. Here we prove the lemma in a more general situation. Lemma \ref{l:alongSV} only makes sense if $U\subset U_1$. We shrink $U$ and choose $V$ in such a way that for any $w\in V$, the set $Q_w\cap U$ is non-empty and connected. Note that if $w\in Q_0$, then $0\in Q_w$ and $Q_w\cap U\not=\varnothing$. Let $S\subset U$ be the branch locus of $f$, and let \begin{equation} \S=\{z\in V: (Q_z\cap U)\subset S\}. \end{equation} Since $\dim_{\mathbb C} S=n-1$ and $\G$ is essentially finite, $\S$ is a finite set. Define \begin{equation}\label{e:A1} A=\left\{ (w,w')\in (V\setminus\S)\times {\mathbb C}^n : f\left(Q_w\cap U\right)\subset Q'_{w'} \right\}. \end{equation} We establish the following facts about the set $A$: \begin{list}{}{} \item[(i)] $A$ is not empty; \item[(ii)] $A$ is locally complex analytic; \item[(iii)] $A$ is closed; \item[(iv)] $\S\times{\mathbb C}^n$ is a removable singularity for $A$. \end{list} (i) $A\not=\varnothing$ since by the invariance property of Segre varieties, $A$ contains the graph of $f$. (ii) Let $(w,w')\in A$. Consider an open simply connected set $\O\subset(U\setminus S)$ such that $Q_w\cap\Omega\not=\varnothing$.
Then the branches of $f$ are correctly defined in $\O$. Since $Q_w\cap U$ is connected, the inclusion $f(Q_w\cap U)\subset Q'_{w'}$ is equivalent to \begin{equation}\label{e:omega1} f^j(Q_w\cap\Omega)\subset Q'_{w'}, \ j=1,\dots,m, \end{equation} where $f^j$ denote the branches of $f$ in $\Omega$. Note that such a neighborhood $\Omega$ exists for any $w\in V\setminus\S$. Inclusion (\ref{e:omega1}) can be written as a system of holomorphic equations as follows. Let $\rho'(z,\overline z)$ be the defining function of $\G'$. Then \begin{equation} \rho'(f^j(z), \overline{w'})=0, \ {\rm\ for\ any}\ z\in (Q_w\cap \Omega), \ \ j=1,2,\dots,m. \end{equation} We can choose $\Omega$ in the form \begin{equation} \O={'\O}\times{\O_n}\subset{\mathbb C}^{n-1}_{'z}\times{\mathbb C}_{z_n}. \end{equation} Combining this with (\ref{e:implicit}) we obtain \begin{equation}\label{e-system1} \rho'(f^j({'z},h('z,\overline w)), \overline{w'})=0 \end{equation} for any $'z\in {'\Omega}$. Then (\ref{e-system1}) is an infinite system of holomorphic equations in $(w,w')$, thus defining $A$ as a locally complex analytic variety in $(V\setminus\S)\times{\mathbb C}^n$. (iii) Let us show now that $A$ is a closed set. Suppose that $(w^j, {w'}^j)\to(w^0, {w'}^0)$ as $j\to\infty$, where $(w^j, {w'}^j)\in A$ and $(w^0,{w'}^0)\in (V\setminus\S)\times{\mathbb C}^n$. Then by the definition of $A$, $f(Q_{w^j}\cap U)\subset Q'_{{w'}^j}$. Since $Q_{w^j}\to Q_{w^0}$, and $Q'_{{w'}^j}\to Q'_{{w'}^0}$ as $j\to\infty$, by analyticity $f(Q_{w^0}\cap U)\subset Q'_{{w'}^0}$, which implies that $(w^0,{w'}^0)\in A$ and thus $A$ is a closed set. Since $A$ is locally complex-analytic and closed, it is a complex variety in $(V\setminus\S)\times{\mathbb C}^n$. We may now restrict considerations only to the irreducible component of $A$ which coincides with the graph of $f$ at the origin. Then $\dim A = n$. (iv) Let us show now that $\overline A$ is a complex variety in $V\times{\mathbb C}^n$.
Let $q\in \S$; then \begin{equation} \overline {A}\cap(\{q\}\times{\mathbb C}^n) \subset \{q\}\times \{z': f(Q_q)\subset Q'_{z'}\}. \end{equation} Notice that if $w'\in f(Q_q)\subset Q'_{z'}$, then $z'\in Q'_{w'}$. Hence the set $\{z': f(Q_q)\subset Q'_{z'}\}$ has dimension at most $2n-2$, and $\overline{A}\cap(\S\times{\mathbb C}^n)$ has Hausdorff $2n$-measure zero. It follows from Bishop's theorem on removable singularities of analytic sets (see e.g. \cite{c}) that $\overline{A}$ is an analytic set in $V\times {\mathbb C}^n$. Thus from (i)--(iv) we conclude that (\ref{e:A1}) defines a complex-analytic set in $V\times {\mathbb C}^n$ which we denote again by $A$. Also we observe that since $\G'$ is algebraic, the system of holomorphic equations in (\ref{e-system1}) is algebraic in $w'$ and thus we can define the closure of $A$ in $V\times {\mathbb P}^n$. Let $\pi:A\to V$ and $\pi': A\to {\mathbb P}^n$ be the natural projections. Since ${\mathbb P}^n$ is compact, $\pi^{-1}(K)$ is compact for any compact set $K\subset V$, and thus $\pi$ is proper. This, in particular, implies that $\pi(A)=V$. We let $\L_1=\pi({\pi'}^{-1}(H_0))$, where $H_0\subset{\mathbb P}^n$ is the hyperplane at infinity. It is easy to see that $\L_1$ is a complex analytic set of dimension at most $n-1$. We also consider the set $\L_2:=\pi\{ (w,w')\in A : \dim_{\mathbb C} \pi^{-1}(w) \ge 1\}$. It was shown in \cite{s1}, Prop.~3.3, that $\L_2$ is a complex-analytic set of dimension at most $n-2$. Let $\L=\L_1\cup\L_2$. Then $\pi'\circ \pi^{-1}|_{V\setminus\L}$ is the desired extension of $f$ as a holomorphic correspondence. $\square$ \section{Proof of Theorem \ref{t2}.} For completeness let us repeat the argument of \cite{bsu} to prove the ``only if'' direction of Theorems \ref{t1} and \ref{t2}. Suppose that $f:D\to D'$ is a proper holomorphic correspondence. Let us show that $\partial D$ and $\partial D'$ are locally CR-equivalent.
If $D$ is not pseudoconvex, then for $p\in\widehat D$, there exists a neighborhood $U\ni p$ such that all the functions in the representation (\ref{e-canon}) of $f$ extend holomorphically to $U$. Here $\widehat D$ refers to the envelope of holomorphy of $D$. Moreover, we can replace $p$ by a nearby point $q\in U\cap\partial D$ so that $f$ splits at $q$ and at least one of the holomorphic mappings of the splitting is biholomorphic at $q$. If $D$ is pseudoconvex, then $D'$ is also pseudoconvex. By \cite{bsu} $f$ extends continuously to $\partial D$ and we can choose $p\in \partial D$ such that $f$ splits in some neighborhood $U\ni p$ into holomorphic mappings $f^{j}: D\cap U\to D'$, $j=1,\dots,m$. Since $f^{-1}:D'\to D$ also extends continuously to $\partial D'$, the set $\{f^{-1}\circ f(p)\}$ is finite. Therefore, by \cite{be3}, $f^j$ extend smoothly to $\partial D\cap U$. It follows that $f^j$ extend holomorphically to a neighborhood of $p$ by \cite{be} and \cite{df2}. Finally, choose $q\in U\cap\partial D$ such that $f^j$ is biholomorphic at $q$ for some $j$. To prove Theorem \ref{t2} in the other direction, consider a neighborhood $U$ of $p\in\partial D$ and a biholomorphic map $f:U\to{\mathbb C}^n$ such that $f(U\cap\partial D)\subset \partial D'$. Let us show that $f$ extends to a proper holomorphic correspondence $F:D\to D'$. Let $\G=\partial D$ and $\G'=\partial D'$. Since the set of Levi non-degenerate points is dense in $\G$, by Webster's theorem \cite{w}, $f$ extends to an algebraic mapping, i.e. the graph of $f$ is contained in an algebraic variety $X\subset{\mathbb C}^n \times {\mathbb C}^n$ of dimension $n$. Without loss of generality assume that $X$ is irreducible, as otherwise consider only the irreducible component of $X$ containing $\G_f$, the graph of the mapping $f$. Let $E=\{ z\in \mathbb C^n : \dim \pi^{-1}(z)>0\}$, where $\pi:X\to \mathbb C^n$ is the coordinate projection to the first component. Then $E$ is an algebraic variety in $\mathbb C^n$.
Let $f:{\mathbb C}^n \setminus E \to {\mathbb C}^n$ now denote the multiple valued map corresponding to $X$. Let $S\subset{\mathbb C}^n\setminus E$ be the branch locus of $f$; in other words, for any $z\in S$ the coordinate projection onto the first component is not locally biholomorphic near $\pi^{-1}(z)$. To prove Theorem \ref{t2} it is enough to show that $E\cap \G=\varnothing$. \begin{lemma}\label{l2.1} Let $p\in\G$. If $Q_p\not\subset E$, then $p\not\in E$. \end{lemma} \noindent{\it Proof.} Suppose, on the contrary, that $p\in E$. Since $Q_p\not\subset E$, there exist a point $b\in Q_p$ and a small neighborhood $U_b$ of $b$ such that $U_b\cap E = \varnothing$. Choose neighborhoods $U_b$ and $U_p$ such that for any $z\in U_p$, the set $Q_z\cap U_b$ is non-empty and connected. Let \begin{equation} \S=\{z\in U_p: Q_z\cap U_b\subset S\}. \end{equation} Similarly to (\ref{e:A1}), consider the set \begin{equation} A=\left\{ (w,w')\in (U_p\setminus\S)\times {\mathbb C}^n : f\left(Q_w\cap U_b\right)\subset Q'_{w'} \right\}. \end{equation} Then $A\not=\varnothing$. Indeed, since $\dim_{\mathbb C} E\le n-1$, there exists a sequence of points $\{p^j\}\subset (U_p\cap\G)\setminus (E\cup\S)$ such that $p^j\to p$ as $j\to\infty$. By the invariance property of holomorphic correspondences, for every $p^j$ there exists a neighborhood $U_j\ni p^j$ such that $f(Q_{p^j}\cap U_j)\subset Q'_{{p'}^j}$, where ${p'}^j\in f(p^j)$. But this implies that $f(Q_{p^j}\cap U_b)\subset Q'_{{p'}^j}$, and therefore $(p^j,{p'}^j)\in A$. Moreover, it follows that \begin{equation}\label{e:A=X} A|_{U_{j}\times{\mathbb C}^n}=X|_{U_{j}\times{\mathbb C}^n}, \ \ j=1,2,\dots. \end{equation} Similarly to the proof of Lemma \ref{l:alongSV}, one can show that $A$ is a complex analytic variety in $(U_p\setminus\S)\times{\mathbb C}^n$, and that $\S\times{\mathbb C}^n$ is a removable singularity for $A\subset U_p\times{\mathbb C}^n$. Denote the closure of $A$ in $U_p\times{\mathbb C}^n$ again by $A$.
Without loss of generality we assume that $A$ is irreducible; therefore, in view of (\ref{e:A=X}) we conclude that $A|_{U_{p}\times{\mathbb C}^n}=X|_{U_{p}\times{\mathbb C}^n}$. Let $\hat f$ be the multiple valued mapping corresponding to $A$. Then by analyticity, there exists $p'\in\G'\cap \hat f(p)$. Moreover, by construction, $\hat f(p) = I'_{p'}$. By \cite{dw}, \begin{equation}\label{e-inG} I'_{z'}\subset\G', \ \ {\rm for\ any\ } z'\in\G'. \end{equation} Now choose $U_p$ so small that $\overline A \cap (U_p \times \partial U') =\varnothing$, where $U'$ is a neighborhood of $\G'$. This is always possible, since otherwise there exists a sequence of points $\{(z^j, {z'}^j), j=1,2,\dots\}\subset A$, such that $z^j\to p$ and ${z'}^j\to {z'}^0 \in\partial U'$ as $j\to\infty$. Then $(p,{z'}^0)\in A$ and ${z'}^0\not\in \G'$. But this contradicts (\ref{e-inG}). This shows that $\hat f:U_p\to U'$ is a holomorphic correspondence extending $f$, which contradicts the assumption $p\in E$. $\square$ \begin{lemma}\label{l2.2} Let $p\in\G$. Then there exists a change of variables, which is biholomorphic near $\overline {D'}$, such that in the new coordinate system $Q_p\not\subset E$. \end{lemma} {\it Proof.} Suppose that $Q_p\subset E$. Then we find a point $a\in(\G\setminus E)$ such that $Q_a\cap Q_p\not=\varnothing$. The existence of such $a$ follows, for example, from \cite{s1}, Prop.~4.1 (note that $\dim (E\cap \G) \le 2n-3$). By Lemma \ref{l:alongSV} the germ of the correspondence $f$ defined at $a$ extends holomorphically to a neighborhood $V$ of $Q_a$. Let $\L_1$ and $\L_2$ be as in Lemma~\ref{l:alongSV}. Since $\dim \L_2<n-1$, we may assume that $(Q_p\cap V)\not\subset \L_2$. If $(Q_p\cap V)\subset\L_1$, we can perform a linear-fractional transformation such that $H_0$ is mapped onto another complex hyperplane $H\subset{\mathbb P}^n$ and such that $H\cap\G'=\varnothing$. Note that after such a transformation $\G'$ remains compact in ${\mathbb C}^n$.
Then we may also assume that $(Q_p\cap V)\not\subset \L_1$. Thus holomorphic extension along $Q_a$ defines $f$ on a non-empty set in $Q_p$. $\square$ Theorem \ref{t2} now follows. Indeed, from Lemmas \ref{l2.1} and \ref{l2.2} we conclude that $E\cap\G=\varnothing$. Since $D$ is bounded, $D\cap E=\varnothing$, and $X\cap(D\times D')$ defines a proper holomorphic correspondence from $D$ to $D'$. \section{Local Extension.} To prove Theorem \ref{t1} we first establish local extension of holomorphic correspondences. \begin{definition} Let $\G$ and $\G'$ be smooth, real-analytic hypersurfaces in ${\mathbb C}^n$. Let $f: U\to \mathbb C^n$ be a holomorphic correspondence such that $f(U\cap\G)\subset \G'$. Then $f$ is called {\it complete} if for any $z\in U\cap\G$ there exists $z'\in f(z)$ such that $f(Q_z\cap U)\subset Q'_{z'}$ and $f(z)=I'_{z'}$. \end{definition} By the invariance property of Segre varieties, $f(_{z}Q_z)\subset Q'_{f(z)}$, where $_{z}Q_z$ is the germ of $Q_z$ at $z$, for any $z\in U\cap \G$. The condition $f(Q_z\cap U)\subset Q'_{z'}$ in the definition is somewhat stronger: it states that every connected component of $Q_z\cap U$ is mapped by $f$ into the same Segre variety. Note that in general Segre varieties are defined only locally, while the set $U$ can be relatively large. In this case the inclusion $f(Q_z\cap U)\subset Q'_{z'}$ should be considered only in a suitable standard pair of neighborhoods of $z$. The condition $f(z)=I'_{z'}$ in the definition above indicates that $f$ has the maximal possible number of branches. It is convenient to establish analytic continuation of complete correspondences, as such continuation does not introduce additional branches. \begin{lemma}\label{l3.2}\label{l:sublocal} Let $f:U\to {\mathbb C}^n$ be a complete holomorphic correspondence, $f(\G\cap U)\subset \G'$, where $\G=\partial D$ and $\G'=\partial D'$, and $D$ and $D'$ are as in Theorem \ref{t1}. Suppose $p\in \partial U\cap\G$ is such that $Q_p\cap U\ne\varnothing$.
Then there exists a neighborhood $U_p$ of $p$ such that $f$ extends to a holomorphic correspondence $\hat f: U_p\to {\mathbb C}^n$. \end{lemma} \noindent{\it Proof.} The proof of this lemma repeats that of Lemma \ref{l2.1}. Let $b\in Q_p\cap U$. Consider a small neighborhood $U_b$ of $b$, $U_b\subset U$, and a neighborhood $U_p$ of $p$ such that for any $z\in U_p$, the set $Q_z\cap U_b$ is non-empty and connected. As before, let $S\subset U$ be the branch locus of $f$, and $\S=\{z\in U_p: Q_z\cap U_b\subset S\}$. Define \begin{equation}\label{e:A} A=\left\{ (w,w')\in (U_p\setminus\S)\times {\mathbb C}^n : f\left(Q_w\cap U_b\right)\subset Q'_{w'} \right\}. \end{equation} Observe that since $f$ is complete, for any $w\in U \cap\G$, the inclusion $f(Q_w\cap U_b)\subset Q'_{w'}$ implies that $f(Q_w\cap U)\subset Q'_{w'}$. In particular, this holds for any $w$ arbitrarily close to $p$. Therefore $A$ is well-defined if the neighborhood $U_p$ is chosen sufficiently small. Analogously to Lemma \ref{l2.1}, one can show that $A$ is a non-empty closed complex analytic set in $(U_p\setminus\S)\times{\mathbb C}^n$. A similar argument also shows that $\S\times{\mathbb C}^n$ is a removable singularity for $A$, and thus $\overline A$ defines a closed complex-analytic set in $U_p\times{\mathbb C}^n$. Let us show now that $A$ defines a holomorphic correspondence $\hat f:U_p\to U'$, where $U'$ is a suitable neighborhood of $\G'$. Consider the closure of $A$ in $U_p\times {\mathbb P}^n$. Recall that since ${\mathbb P}^n$ is compact, the projection $\pi:\overline A \to U_p$ is proper. In particular, $\pi (\overline A)= U_p$. Let $U'$ be a neighborhood of $\G'$ as in Lemma \ref{l-finite}. To simplify the notation, denote the restriction of $\overline A$ to $U_p \times U'$ again by $A$. Let $\pi: A\to U_p$ and $\pi': A\to U'$ be the natural projections, and let $\hat f=\pi'\circ \pi^{-1}$. Let $Z={\pi'}^{-1}(\G')$. Then since $f(\G\cap U)\subset \G'$, $\pi^{-1}(\G\cap U\cap U_p)\subset Z$.
Therefore there exists at least one irreducible component of $\pi^{-1}(\G\cap U_p)$ which is contained in $Z$. Thus for any $z\in\G\cap U_p$, there exists $z'\in\G'$ such that $z'\in \hat f(z)$. By construction, if $z\in U_p$ and $z'\in \hat f(z)$, then $\hat f(z) = I'_{z'}$. In view of (\ref{e-inG}) we conclude that $\hat f(\G\cap U_p)\subset \G'$. Now the same argument as in Lemma \ref{l2.1} shows that $U_p$ can be chosen so small that $\hat f$ is a holomorphic correspondence. $\square$ \begin{theorem}[Local extension] \label{t:local} Let $D$ and $D'$ be as in Theorem \ref{t1}, $\G=\partial D$ and $\G'=\partial D'$. Let $f:U\to {\mathbb C}^n$ be a complete holomorphic correspondence such that $f(\G\cap U)\subset \G'$, where $\G\cap U$ is connected and $\G\cap \partial U$ is a smooth submanifold. Let $p\in\partial U\cap \G$. Then there exists a neighborhood $U_p$ of the point $p$ such that $f$ extends to a holomorphic correspondence $\hat f:U_p\to {\mathbb C}^n$. Moreover, $\hat f|_{U\cap U_p} = f|_{U\cap U_p}$, and the resulting correspondence $F:U\cup U_p \to {\mathbb C}^n$ is complete. \end{theorem} \noindent{\it Proof.} We call a point $p\in\partial U\cap \G$ {\it regular} if $\partial U\cap \G$ is a generic submanifold of $\G$, i.e.\ $\dim_{\mathbb C} T^c_p(\partial U\cap\G)=n-2$. We prove the theorem in three steps. First we prove the result under the assumption $Q_p\cap U\not=\varnothing$, then for regular points in $\partial U\cap\G$, and finally for arbitrary $p\in \partial U\cap\G$. {\it Step 1.} Suppose that $Q_p\cap U\not=\varnothing$. Then by Lemma \ref{l:sublocal}, $f$ extends as a holomorphic correspondence $\hat f$ to some neighborhood $U_p$ of $p$. It follows from the construction that for any $z\in U\cap U_p$ the fibers $f(z)$ and $\hat f(z)$ contain the same number of points. Thus by Lemma~\ref{l-intersection}, $f$ and $\hat f$ define a holomorphic correspondence in $U\cup U_p$. Denote this correspondence by $F$. We now show that $F$ is also complete in $U\cup U_p$.
Since $f$ is complete, for any $z\in U\cap U_p\cap\G$ arbitrarily close to $p$, $f(Q_z\cap U)\subset Q'_{f(z)}$. Thus if $U_p$ is chosen sufficiently small, then for any $z\in U_p\cap\G$, \begin{equation} F(Q_z\cap(U\cup U_p))\subset Q'_{F(z)}. \end{equation} Suppose now that there exists some point $z$ in $(U\cup U_p)\cap\G$ such that not all components of $Q_z\cap (U\cup U_p)$ are mapped by $F$ into the same Segre variety. From the argument above, $z\notin U_p$. Since $U\cap\G$ is connected, there exists a simple smooth curve $\gamma\subset \G\cap U$ connecting $z$ and $p$. By Lemma \ref{l:alongSV}, for every point $\zeta\in\g$, the germ of the correspondence $F$ at $\zeta$ extends as a holomorphic correspondence along the Segre variety $Q_\zeta$. Moreover, for $\zeta\in\g$ close to $p$, the extension of $F$ along $Q_\zeta$ coincides with the correspondence $f$ in $U$ (even if $Q_\zeta\cap U$ is disconnected). Since $\cup_{\zeta\in\g}Q_\zeta$ is connected, this property holds for all $\zeta\in\g$. The extension of $F$ along $Q_\zeta$ clearly maps $Q_\zeta$ into $Q'_{F(\zeta)}$, and therefore $Q_\zeta\cap U$ is mapped by $f$ into a single Segre variety. But this contradicts the assumption that the components of $Q_z\cap U$ are mapped into different Segre varieties. This shows that $F$ is also a complete correspondence. {\it Step 2.} Suppose now that $Q_p\cap U=\varnothing$, but $p$ is a regular point. Then by Prop.~4.1 of \cite{s1}, there exists a point $a\in U$ such that $Q_a\cap Q_p\not=\varnothing$. We now apply Lemma \ref{l:alongSV} to extend the germ of the correspondence at $a$ along $Q_a$. We note that such an extension along $Q_a$ may not in general define a complete correspondence, since a priori $Q_a\cap \G$ may be disconnected from $U\cap\G$. Let $\L$ be as in Lemma \ref{l:alongSV}. Then after performing, if necessary, a linear-fractional transformation in the target space, we can find a point $b\in Q_p\cap Q_a$ such that $b\notin \L$.
Let $U_b$ be a small neighborhood of $b$ such that $U_b\cap\L=\varnothing$ and $f$ extends to $U_b$ as a holomorphic correspondence $f_b$. Then for any $z\in U\cap\G$ such that $Q_z\cap U_b\not=\varnothing$, the sets $f(Q_z\cap U)$ and $f_b(Q_z\cap U_b)$ are contained in the same Segre variety. Indeed, if not, then we can connect $a$ and $z$ by a smooth path $\g\subset\G\cap U$ and apply the argument that we used to prove completeness of $F$ in Step 1. Now the same proof as in Step 1 shows that $f$ extends as a holomorphic correspondence to some neighborhood of $p$, and that the resulting extension is also complete. {\it Step 3.} Suppose now that $p\in \partial U\cap \G$ is not a regular point. Let \begin{equation} M=\left\{z\in\partial U\cap\G: T_z(\partial U\cap\G)=T^c_z(\G)\right\}. \end{equation} It is easy to see that $M$ is a locally real-analytic subset of $\G$. Moreover, since $\G$ is essentially finite, $\dim M < 2n-2$. Choose the coordinate system such that $p=0$ and the defining function of $\G$ is given in the so-called normal form (see \cite{cm}): \begin{equation} \rho(z,\overline z)= 2x_n+\sum_{|k|,|l|\ge 1}\rho_{k,l}(y_n)('z)^k(\overline{'z})^l, \end{equation} where $'z=(z_1,\dots,z_{n-1})$. Since the extendability of $f$ through regular points is already established, after possibly an additional change of variables, we may assume that $f$ extends as a holomorphic correspondence to the points $\{z\in\partial U\cap\G: x_1>0\}$. For $c\in\mathbb R$, let $L_c$ denote the real hyperplane $\{z\in {\mathbb C}^n : x_1=c\}$. Then there exists $\epsilon>0$ such that for any $c\in[-\epsilon,\epsilon]$, \begin{equation}\label{good-c} T^c_z(\G) \not= T_z(L_c\cap\G), {\rm \ \ for\ any\ } z\in L_c\cap\G\cap {\mathbb B}(0,\epsilon).
\end{equation} Let $\Omega_{c,\delta}$ be the intersection of $\G$, the $\delta$-neighborhood of the $x_1$-axis, and the set bounded by $L_c$ and $L_{c+\delta}$, that is, \begin{equation} \Omega_{c,\delta}=\left\{ z\in\G\cap {\mathbb B}(0,\epsilon) : c<x_1<c+\delta, \ y_1^2+\sum_{j=2}^n|z_j|^2<\delta \right\}. \end{equation} Then there exist $\delta>0$ and $c>0$ such that $f$ extends as a holomorphic correspondence to a neighborhood of the set $\Omega_{c,\delta}$. Since $L_c\cap\G$ consists only of regular points, from Steps 1 and 2 we conclude that $f$ extends to a neighborhood of any point in $L_c\cap\G$ that belongs to the boundary of $\Omega_{c,\delta}$. Let $c_0$ be the infimum of the set of $c$ for which $f$ extends past $L_c\cap\G$. Then from (\ref{good-c}) and the previous steps, $c_0<0$, and therefore $f$ extends to a neighborhood of the origin. $\square$ \section{Proof of Theorem \ref{t1}.} The proof that the boundaries $\partial D$ and $\partial D'$ are locally equivalent repeats that of Theorem \ref{t2}. To prove the theorem in the other direction, let us first show that a germ of a biholomorphic map $f:U\to {\mathbb C}^n$, $f(U\cap\G)\subset\G'$, can be replaced by a complete correspondence. Without loss of generality we assume that $0\in U\cap \G$. We choose a neighborhood $U_0$ of the origin and shrink $U$ in such a way that $Q_w\cap U$ is non-empty and connected for any $w\in U_0$. Define \begin{equation}\label{comp} A=\left\{ (w,w')\in U_0\times{\mathbb C}^n: f(Q_w\cap U)\subset Q'_{w'} \right\}. \end{equation} Then (\ref{comp}) defines a holomorphic correspondence, which in particular contains the germ of the graph of $f$ at the origin. Let $\hat f$ be the multiple-valued mapping corresponding to $A$. Then by construction and from (\ref{e-inG}), $w'\in\hat f(w)$ implies $\hat f(w)=I'_{w'}$. Thus $\hat f$ is a complete correspondence.
If $\partial U_0$ is smooth, then by Theorem \ref{t:local} we can locally extend $\hat f$ along $\G$ past the boundary of $U_0\cap\G$ to a larger open set $\Omega$. However, local extension in general does not imply that $A$ is a closed set in $\Omega\times{\mathbb C}^n$. Indeed, there may exist a point $p\in \G\cap\partial \Omega$ such that for any sufficiently small neighborhood $V$ of $p$, $\Omega\cap V\cap\G$ consists of two connected components, say $\G_1$ and $\G_2$. Then local extension from $\G_1$ to a neighborhood of $p$ may not coincide with the correspondence $\hat f$ defined on $\G_2$. Therefore, local extension past the boundary of $\Omega\cap\G$ does not lead to a correspondence defined globally in a neighborhood of $\overline\Omega$. Note that this cannot happen if $\Omega$ is sufficiently small. We now show that $\hat f$ extends analytically along any path on $\G$. \begin{lemma}\label{along-paths} Let $\g:[0,1]\to\G$ be a simple curve (i.e., without self-intersections) with $\g(0)=0$. Then there exist a neighborhood $V$ of $\g$ and a holomorphic correspondence $F:V\to\mathbb C^n$ which extends $\hat f$. \end{lemma} \noindent{\it Proof.} Suppose that $\hat f$ does not extend along $\g$. Then let $\zeta\in\gamma$ be the first point to which $\hat f$ does not extend. Let $\epsilon_0>0$ be so small that $\mathbb B(\zeta,\epsilon)\cap\G$ is connected and simply connected for any $\epsilon \le \epsilon_0$. Choose a point $z\in \mathbb B(\zeta,\epsilon_0/2)\cap \g$ to which $\hat f$ extends. Let $\delta$ be the largest positive number such that $\hat f$ extends holomorphically to $\mathbb B(z,\delta)\cap\G$. By Theorem \ref{t:local}, $\hat f$ extends to a neighborhood of every point in $\partial\mathbb B(z,\delta)\cap\G$. Moreover, if $\mathbb B(z,\delta)\subset \mathbb B(\zeta,\epsilon_0)$, then the extension of $\hat f$ is a closed complex analytic set. Thus $\delta>\epsilon_0/2$.
This shows that $\hat f$ also extends to $\zeta$, and therefore extends along $\g$. $\square$ Note that analytic continuation of $\hat f$ along $\g$ in Lemma \ref{along-paths} always yields a complete correspondence. The Monodromy theorem cannot be directly applied to multiple-valued mappings, and we need to show that analytic continuation is independent of the choice of a curve connecting two points on $\G$. \begin{lemma} Suppose that $\g\subset\G$ is a Jordan curve with $\g(0)=\g(1)=0$. Let $F$ be the holomorphic correspondence defined near the origin and obtained by analytic continuation of $\hat f$ along $\g$. Then $F=\hat f$ in some neighborhood of the origin. \end{lemma} \noindent{\it Proof.} Since $\G$ is simply connected and compact, there exists $\epsilon_0 >0$ such that for any $z\in\G$, $\mathbb B(z,\epsilon)\cap\G$ is connected and simply connected for any $\epsilon\le\epsilon_0$. Let $\phi: I\times I\to\G$, $I=[0,1]$, be the homotopy map, that is, $\phi(t,0)=\g(t)$ and $\phi(t,1)\equiv 0$. Let $\{(t_j,\tau_k)\in I\times I : j,k=0,1,2,\dots,m\}$ be a set of points satisfying: \begin{enumerate} \item[(i)] $t_0=\tau_0=0$, $t_m=\tau_m=1$, \item[(ii)] $\{\phi(t,\tau_k): t_j\le t \le t_{j+1}\} \subset \mathbb B(\phi(t_{j},\tau_{k}),\epsilon_0/2)$, \\ $\{\phi(t_{j},\tau): \tau_k \le \tau \le \tau_{k+1}\} \subset \mathbb B(\phi(t_{j},\tau_{k}),\epsilon_0/2)$, for any $j,k<m$. \end{enumerate} Suppose that $f$ is a complete holomorphic correspondence defined in a ball $B$ of small radius centered at $\phi(t_j,\tau_k)\in\G$. By Theorem \ref{t:local}, $f$ extends holomorphically past every point of $\partial B\cap\G$. Since $\mathbb B(\phi(t_j,\tau_k),\epsilon_0)$ is connected and simply connected, $f$ extends at least to a ball of radius $\epsilon_0/2$.
Consider the closed path $\g_{j,k}=\{\phi(t,\tau_k): t_j\le t \le t_{j+1}\}\cup \{\phi(t_{j+1},\tau): \tau_k\le \tau \le \tau_{k+1}\}\cup \{\phi(t,\tau_{k+1}): t_j\le t \le t_{j+1}\}\cup \{\phi(t_{j},\tau): \tau_k\le \tau\le \tau_{k+1}\},$ where the third and fourth pieces are traversed in the opposite direction. Then $\g_{j,k}$ is entirely contained in $\mathbb B(\phi(t_j,\tau_k),\epsilon_0)$. Therefore, analytic continuation of $f$ along $\g_{j,k}$ defines the same correspondence at $\phi(t_j,\tau_k)$. Analytic continuation of $\hat f$ along $\g$ can be reduced to continuation along the paths $\g_{j,k}$. Since continuation along each path $\g_{j,k}$ does not introduce new branches of $\hat f$, $F=\hat f$. $\square$ Thus simple connectivity of $\G$ implies that the process of local extension of $\hat f$ leads to a global extension of $\hat f$ to some neighborhood of $\G$. Since $\hat f(\G)\subset \G'$, there exist neighborhoods $U$ of $\G$ and $U'$ of $\G'$ such that $\hat f: U\to U'$ is a proper holomorphic correspondence. Let $A$ be the analytic set corresponding to $\hat f$. By (\ref{e-canon}) there exist functions $\phi_{IJ}$ holomorphic in $U$ such that $A$ is determined by the system $\sum_{|J|\le m}\phi_{IJ}(z){z'}^J=0$. By Hartogs' theorem all $\phi_{IJ}$ extend holomorphically to a neighborhood of $\overline D$ (recall that $\G=\partial D$). This defines a proper holomorphic correspondence $f:D\to D'$. $\square$ \begin{small} \end{small} \end{document}
\begin{document} \title{Implementation: The conjugacy problem in right-angled Artin groups} \author{\small{\uppercase{Gemma Crowe, Michael Jones}}} \date{} \maketitle \begin{abstract} In 2009, Crisp, Godelle and Wiest constructed a linear-time algorithm to solve the conjugacy problem in right-angled Artin groups. This algorithm has now been implemented in Python, and the code is freely available on \href{https://github.com/gmc369/Conjugacy-problem-RAAGs}{GitHub}. This document provides a summary of how the code works. As well as determining whether two elements $w_{1}, w_{2}$ are conjugate in a RAAG $A_{\Gamma}$, our code also returns a conjugating element $x \in A_{\Gamma}$ such that $w_{1} = x^{-1}w_{2}x$, if $w_{1}$ and $w_{2}$ are conjugate. \end{abstract} \section{Introduction} In their paper `\textit{The conjugacy problem in subgroups of right-angled Artin groups}', Crisp, Godelle and Wiest created a linear-time algorithm to solve the conjugacy problem in right-angled Artin groups (RAAGs) \cite{CGW}. This algorithm has now been implemented as a Python program, and is freely available on \href{https://github.com/gmc369/Conjugacy-problem-RAAGs}{GitHub}. \par This document provides an overview of the code. We recommend the reader first take a look at the original paper \cite{CGW}, to understand the key tools and steps of this algorithm. \begin{rmk} Our Python code requires the \texttt{networkx} module. You may also need to install Pillow and nose. Details on how to install these modules can be found here: \url{https://networkx.org/documentation/stable/install.html}. \end{rmk} \subsection{Notation} Throughout we let $A_{\Gamma}$ denote a RAAG with defining graph $\Gamma$. We let $N$ denote the number of vertices in the defining graph, i.e. the size of the standard generating set for $A_{\Gamma}$. We assume generators commute if and only if there does not exist an edge between the corresponding vertices in $\Gamma$, to match the convention used in \cite{CGW}.
\par Examples are provided both in the code and in this document to assist the user. Throughout this summary, we will often use the RAAG defined in Example 2.4 from \cite{CGW}, which has the following presentation: \begin{equation}\label{CGW example} A_{\Gamma} = \langle a_{1}, a_{2}, a_{3}, a_{4} \; | \; [a_{1}, a_{4}] = 1, [a_{2}, a_{3}] = 1, [a_{2}, a_{4}] = 1 \rangle. \end{equation} This RAAG is defined by the following graph: \begin{figure} \caption{Defining graph $\Gamma$} \label{fig:my_label} \end{figure} \section{Summary of algorithm} We summarise the key steps of the algorithm from \cite{CGW}. For further details on piling representations, see Chapter 14 of \cite{OfficeHour}. \par \underline{ALGORITHM}: Conjugacy problem in RAAGs. \par \textbf{Input}: \begin{enumerate} \item RAAG $A_{\Gamma}$: number of vertices of the defining graph, and list of commuting generators. \item Words $v,w$ representing group elements in $A_{\Gamma}$. \end{enumerate} \textbf{Step 1: Cyclic reduction} \begin{adjustwidth}{1.5cm}{} Produce the piling representation $\pi(v)$ of the word $v$, and apply cyclic reduction to $\pi(v)$ to produce a cyclically reduced piling $p$. Repeat this step for the word $w$ to get a cyclically reduced piling $q$. \end{adjustwidth} \textbf{Step 2: Factorisation} \begin{adjustwidth}{1.5cm}{} Factorise each of the pilings $p$ and $q$ into non-split factors. If the collections of subgraphs do not coincide, \textbf{Output} = \texttt{\color{blue}False}.\\ Otherwise, continue to Step 3. \end{adjustwidth} \textbf{Step 3: Compare non-split factors} \begin{adjustwidth}{1.5cm}{} If $p = p^{(1)}\dots p^{(k)}$ and $q = q^{(1)}\dots q^{(k)}$ are the factorisations found in Step 2, then for each $i = 1, \dots, k$, do the following: \begin{adjustwidth}{1.5cm}{} \begin{enumerate}[label=(\roman*)] \item Transform the non-split cyclically reduced pilings $p^{(i)}$ and $q^{(i)}$ into pyramidal pilings $\Tilde{p}^{(i)}$ and $\Tilde{q}^{(i)}$.
\item Produce the words representing these pilings in cyclic normal form $\sigma^{\ast}\left(\Tilde{p}^{(i)}\right)$ and $\sigma^{\ast}\left(\Tilde{q}^{(i)}\right)$. \item Decide whether $\sigma^{\ast}\left(\Tilde{p}^{(i)}\right)$ and $\sigma^{\ast}\left(\Tilde{q}^{(i)}\right)$ are equal up to a cyclic permutation. If not, \textbf{Output} = \texttt{\color{blue}False}. \end{enumerate} \end{adjustwidth} \textbf{Output} = \texttt{\color{blue}True}. \end{adjustwidth} \begin{exmp} We provide an example of how to implement this algorithm with our Python code, using the RAAG $A_{\Gamma}$ defined in Equation \ref{CGW example}. Consider the following two words: \[ w_{1} = a^{-2}_{2}a^{-1}_{4}a_{3}a_{2}a_{4}a_{1}a_{2}a^{-1}_{1}a^{2}_{2}a^{-1}_{4}, \quad w_{2} = a_{4}a_{3}a^{-1}_{4}a_{2}a_{1}a_{2}a^{-1}_{1}a^{-1}_{4}. \] These represent conjugate elements in $A_{\Gamma}$, since \[ w_{1} = \left(a^{2}_{4}a^{2}_{2}\right)^{-1}\cdot w_{2} \cdot \left(a^{2}_{4}a^{2}_{2}\right). \] To check this, we input the following code. See \cref{sec:inputs} for how to input words and commutation relations from a RAAG. \begin{align*} \texttt{w\_{1}} &= \texttt{[-2,-2,-4,3,2,4,1,2,-1,2,2,-4]}\\ \texttt{w\_{2}} &= \texttt{[4,3,-4,2,1,2,-1,-4]} \\ \texttt{CGW\_Edges} &= \texttt{[(1,4), (2,3), (2,4)]}\\ &\texttt{is\_conjugate(w\_1, w\_2, 4, CGW\_Edges)} \end{align*} The output from this is \[ \texttt{\color{blue}True}, \texttt{[-2, -2, -4, -4]} \] The first argument is a \texttt{\color{blue}True} or \texttt{\color{blue}False} statement of whether $w_{1}$ and $w_{2}$ are conjugate. The second argument returns a conjugating element $x$ such that $w_{1} = x^{-1}w_{2}x$. \end{exmp} It is important to note that the order of the input words determines the conjugating element. In particular, when we input $w_{1}$ followed by $w_{2}$, we obtain a conjugator $x$ such that $w_{1} = x^{-1}w_{2}x$.
If we reverse the order of $w_{1}$ and $w_{2}$, the program will return the inverse of this conjugator, since $w_{2} = xw_{1}x^{-1}$. \section{Inputs}\label{sec:inputs} \subsection{Words} The approach of using symbols $a,b,c,\dots$ to represent letters in a word is limiting, since there are only finitely many letters in the alphabet. Instead, we use the positive integers to represent the generators, and the negative integers to represent their inverses: \begin{center} \begin{tabular}{|c|c||c|c|} \hline Generator & Representation & Generator & Representation\\ \hline\hline $a$ & \texttt{1} & $a^{-1}$ & \texttt{-1}\\ \hline $b$ & \texttt{2} & $b^{-1}$ & \texttt{-2}\\ \hline $c$ & \texttt{3} & $c^{-1}$ & \texttt{-3}\\ \hline ... & ... & ... & ... \\ \hline \end{tabular} \end{center} This also matches our convention of ordering, which will be shortlex, i.e. \[ 1 < -1 < 2 < -2 < \dots \] We note this convention is the opposite of the normal form convention in \cite{CGW}. \par Words in $A_{\Gamma}$ are represented by lists of integers. The following table gives some examples. Here $\varepsilon$ denotes the empty word. \begin{center} \begin{tabular}{|c|c|} \hline Word & Representation\\ \hline\hline $abc$ & \texttt{[1,2,3]}\\ \hline $\varepsilon$ & \texttt{[]}\\ \hline $a^{-2}c^3b^{-1}$ & \texttt{[-1,-1,3,3,3,-2]}\\ \hline $a^{100}b^{-1}$ & \texttt{([1]*100)+[-2]}\\ \hline $abc...xyz$ & \texttt{\color{pu}list\color{black}(\color{pu}range\color{black}(1,27))}\\ \hline \end{tabular} \end{center} \subsection{Commuting generators} In many of the functions in our code, we need to define which generators commute in $A_{\Gamma}$. This is represented by a list of tuples, each of which represents a pair of commuting generators.
Here are some examples: \begin{center} \begin{tabular}{|c|c|} \hline Group & Commuting generators list\\ \hline\hline $\mathbb{F}_n$ & \texttt{[]}\\ \hline $\mathbb{Z}^2$ & \texttt{[(1,2)]}\\ \hline $\mathbb{F}_2\times\mathbb{Z}$ & \texttt{[(1,3),(2,3)]}\\ \hline $\mathbb{Z}^n$ & \texttt{[(i,j) \color{orange}for \color{black}i \color{orange}in \color{pu}range\color{black}(1,n) \color{orange}for \color{black}j \color{orange}in \color{pu}range\color{black}(i+1,n+1)]}\\ \hline \end{tabular} \end{center} By convention, we do not include pairs $(n,n)$, and we order our list by shortlex ordering. For example, if $(3,4)$ is in our list, we do not need to also include $(4,3)$. Also, the code does not require inverses of generators to be taken into account. In particular, for every tuple $(n,m)$ in the list, we do not need to also include $(-n, m), (n, -m)$ or $(-n, -m)$. \par This list should contain precisely one tuple for each non-edge in the defining graph. If no list is entered, the program takes \texttt{[]} as default (i.e. the free group). \begin{rmk} In the Python code, we use the command \texttt{commuting\_elements} to denote commuting generators. \end{rmk} \section{Pilings} Pilings are represented by lists of lists. Each list within the main list corresponds to a column of the piling, reading from bottom to top. A `$+$' bead is represented by \texttt{1}, a `$-$' bead is represented by \texttt{-1} and a `$0$' bead is represented by \texttt{0}. Here are some examples: \begin{center} \begin{tabular}{|c|c|} \hline Piling & Representation\\ \hline\hline \includegraphics[scale=0.1]{piling2.png} & \texttt{[[0],[1,0,-1],[0,1,0]]}\\ \hline \includegraphics[scale=0.1]{piling3.png} & \texttt{[[1,0,-1,0],[0,1,0,1],[0,0],[]]}\\ \hline \includegraphics[scale=0.1]{commutative_group_piling.png} & \texttt{[[-1],[1],[1],[],[1]]}\\ \hline \end{tabular} \end{center} We note the empty piling is represented by \texttt{[[]$\ast N$]} where $N$ denotes the number of generators.
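Some basic quantities can be read off this representation directly. The following is a minimal sketch; \texttt{piling\_length} and \texttt{is\_empty\_piling} are hypothetical helpers of our own, not functions from the repository. Since every letter of the represented reduced word contributes exactly one `$+$' or `$-$' bead (the `$0$' beads are only markers), counting non-zero beads recovers the word length.

```python
# Hypothetical helpers illustrating the list-of-lists piling format;
# not part of the repository's API.

def piling_length(p):
    """Length of the word a piling represents: each letter of the
    reduced word contributes exactly one +1 or -1 bead, so we count
    the non-zero beads across all columns."""
    return sum(1 for column in p for bead in column if bead != 0)

def is_empty_piling(p):
    """A piling represents the identity iff every column is empty."""
    return all(column == [] for column in p)

# The piling of a*b^2*a^(-1)*b in F_2 (see the piling() example below):
p = [[1, 0, 0, -1, 0], [0, 1, 1, 0, 1]]
print(piling_length(p))           # 5, the length of the reduced word
print(is_empty_piling([[], []]))  # True: the empty piling for N = 2
```

Note that the column count equals $N$, so the empty piling for $N$ generators is a list of $N$ empty lists, as above.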
\par Reading pilings as a list of lists is not as user-friendly as a pictorial representation. We refer the reader to Appendix \ref{appen:draw}, which explains how to use the \texttt{draw\_piling()} function. This allows the user to create a pictorial representation of pilings. \subsection{Programs} One of the most useful steps of the algorithm is converting words, which represent group elements in a RAAG, into piling representations. One key fact from this construction is that if two words $u,v \in A_{\Gamma}$ represent the same group element, then the piling representations for $u$ and $v$ will be the same. In particular, every group element in $A_{\Gamma}$ is uniquely determined by its piling. We describe this function here. \par \textbf{Function:} \texttt{piling(w, N, commuting\_elements=[])} \par \textbf{Input}: \begin{enumerate} \item Word $w$ representing a group element in $A_{\Gamma}$. \item $N=$ number of vertices in defining graph. \item List of commuting generators. \end{enumerate} \textbf{Output}: Piling representation of $w$, as a list of lists. \par By construction, the piling performs any trivial cancellations in $w$ after shuffling, so we do not require our input word $w$ to be reduced. \begin{exmp} Suppose you wish to compute the piling for the word $ab^{2}a^{-1}b\in\mathbb{F}_2$. Then you would input \texttt{piling([1,2,2,-1,2], 2)}, which outputs the following: \[ \texttt{[[1, 0, 0, -1, 0], [0, 1, 1, 0, 1]]} \] Similarly we could compute the piling for the word $a^{-2}_{2}a^{-1}_{4}a_{3}a_{2}a_{4}a_{1}a_{2}a^{-1}_{1}a^{2}_{2}a^{-1}_{4} \in A_{\Gamma}$ from Equation \ref{CGW example}.
This outputs the following piling: \[ \texttt{[[0, 0, 1, 0, -1, 0, 0], [-1, 0, 1, 0, 1, 1], [0, 1, 0, 0], [-1, 0]]} \] \begin{figure} \caption{$ab^{2}a^{-1}b\in\mathbb{F}_2$ and $a^{-2}_{2}a^{-1}_{4}a_{3}a_{2}a_{4}a_{1}a_{2}a^{-1}_{1}a^{2}_{2}a^{-1}_{4} \in A_{\Gamma}$ } \label{fig:ex pilings} \end{figure} Figure \ref{fig:ex pilings} gives a pictorial representation of these pilings. Each list represents a column in the piling. \end{exmp} We now present a function which computes the normal form representative from a piling. For us, this is the shortlex shortest word which represents the piling. We remind the reader that this is the opposite of the ordering used for normal forms in \cite{CGW}. \textbf{Function:} {\texttt{word(p, commuting\_elements=[])}} \par \textbf{Input}: \begin{enumerate} \item $p$ = piling. \item List of commuting generators. \end{enumerate} \textbf{Output}: normal form word which represents the piling $p$. \begin{exmp} The following is an example of how to use this function for the following piling in $A_{\Gamma} = \langle a,b,c \; | \; ac=ca\rangle$: \begin{align*} &\texttt{p=[[1,0],[0,0,-1],[-1,0]]}\\ &\texttt{w=word(p,[(1,3)])}\\ &\texttt{\color{pu}print\color{black}(w)} \end{align*} The output is: \begin{center} \texttt{[1,-3,-2]} \end{center} \end{exmp} \section{Cyclic reduction} After constructing pilings from our input words, the next step of the conjugacy algorithm is to cyclically reduce each piling. \par \textbf{Function:} {\texttt{cyclically\_reduce(p, commuting\_elements=[])}} \par \textbf{Input}: \begin{enumerate} \item $p =$ piling. \item List of commuting generators. \end{enumerate} \textbf{Output}: \begin{enumerate} \item Cyclically reduced piling. \item Conjugating element. \end{enumerate} \begin{exmp} Let $w = abca^{-1}\in\langle a,b,c\; | \; bc=cb\rangle$.
We can compute the cyclic reduction of $w$ as follows: \begin{align*} &\texttt{p=piling([1,2,3,-1], 3,[(2,3)])}\\ &\texttt{p\_cr=cyclically\_reduce(p,[(2,3)])}\\ &\texttt{w=word(p\_cr[0],[(2,3)])}\\ &\texttt{\color{pu}print\color{black}(w, p\_cr[1])} \end{align*} The output from this is: \begin{center} \texttt{[2,3], [1]} \end{center} This function is the first example where we output a conjugating element. In this example, when we cyclically reduce $w = a\cdot bc \cdot a^{-1}$ to the word $bc$, we have taken out the conjugating element $a$. \end{exmp} \section{Graphs} We remind the reader that for the functions in this section, we need to import the \texttt{networkx} module into Python. This makes working with the defining graph much easier, in particular computing induced subgraphs and connected components. \par The next step in the conjugacy algorithm is to compute the non-split factors from a word based on the defining graph. First, we need a method to input our defining graph. \par \textbf{Function:} {\texttt{graph\_from\_edges(N, commuting\_edges=[])}} \par \textbf{Input}: \begin{enumerate} \item $N =$ number of vertices in defining graph. \item List of commuting generators. \end{enumerate} \textbf{Output}: \texttt{networkx} graph, which represents the defining graph. \par We recall that in \cite{CGW}, the convention is to add edges for non-commuting vertices. This is precisely what this function does: the output is a graph on $N$ vertices, with edges between vertices $(x,y)$ if and only if $(x,y)$ is not in the list of commuting generators. \begin{exmp}\label{exmp: graph} Let's compute the defining graph from Equation \ref{CGW example}.
We input the following: \begin{align*} &\texttt{CGW\_Edges = [(1,4), (2,3), (2,4)]} \\ &\texttt{g = graph\_from\_edges(4, CGW\_Edges)} \\ &\texttt{nx.draw(g)} \end{align*} This outputs the following graph: \begin{figure} \caption{\texttt{networkx} graph defined in Example \ref{exmp: graph}.} \label{fig:my_label2} \end{figure} \end{exmp} \subsection{Factorising} We now want to build up a function which takes our defining graph $g$ and our input word $w$, and computes the non-split factors related to $w$. \par \textbf{Function:} {\texttt{factorise(g, p)}} \par \textbf{Input}: \begin{enumerate} \item $g =$ \texttt{networkx} graph representing the defining graph of $A_{\Gamma}$. \item $p =$ piling. \end{enumerate} \textbf{Output}: List of subgraphs of $g$, one for each non-split factor of $p$. \par This function first checks which columns in the piling contain a \texttt{1} or \texttt{-1} bead (this is the \texttt{supp\_piling()} function). From this subset of vertices, we build the induced subgraph, and then return a list of the connected components of this subgraph. Each of these connected components represents a non-split factor. \par At this stage, we have extracted each of the subgraphs which correspond to the factors. We now want to extract pilings which represent each factor. \par \textbf{Function:} {\texttt{graphs\_to\_nsfactors(l, w, N, commuting\_elements)}} \par \textbf{Input}: \begin{enumerate} \item $l =$ list of subgraphs representing connected components. \item $w =$ word representing a group element of $A_{\Gamma}$. \item $N =$ number of vertices in defining graph. \item List of commuting generators.
\end{enumerate} \textbf{Output}: list of the non-split factors from $w$ represented by pilings. \begin{exmp} We use the example from Figure 4 of \cite{CGW}. We input the following: \begin{align*} &\texttt{w = [2,3,-4]}\\ &\texttt{CGW\_Edges = [(1,4), (2,3), (2,4)]} \\ &\texttt{g = graph\_from\_edges(4, CGW\_Edges)}\\ &\texttt{graphs = factorise(g, piling(w, 4, CGW\_Edges))} \\ &\texttt{\color{pu}print\color{black}(graphs\_to\_nsfactors(graphs, w, 4, CGW\_Edges))} \end{align*} The output is \[ \texttt{[[[0], [1], [], []], [[0], [], [1, 0], [0, -1]]]} \] where the first piling represents the factor $a_{2}$, and the second piling represents the factor $a_{3}a^{-1}_{4}$. \end{exmp} \section{Pyramidal Pilings} For the final step in the algorithm, we need a function which converts pilings into pyramidal pilings. After this, we need to check when two pyramidal pilings are equal up to cyclic permutation. \par \textbf{Function:} {\texttt{pyramidal\_decomp(p, commuting\_elements=[])}} \par \textbf{Input}: \begin{enumerate} \item $p = $ non-split piling. \item List of commuting generators. \end{enumerate} \textbf{Output}: factors $p_{0}, p_{1}$ as pilings. \par It is important to note that the input piling must be non-split, otherwise we cannot achieve a pyramidal piling and the function will run forever. \begin{exmp} Let's compute the decomposition in Figure 5 of \cite{CGW}. We input the following: \begin{align*} &\texttt{CGW\_Edges = [(1,4), (2,3), (2,4)]} \\ &\texttt{p = [[0,1,0,-1,0], [0,1,0,1], [0,1,0,0], [-1,0]]} \\ &\texttt{decomp = pyramidal\_decomp(p, CGW\_Edges)}\\ &\texttt{\color{pu}print\color{black}(decomp)} \end{align*} The output is then \[ \texttt{([[0], [], [0, 1], [-1, 0]], [[1, 0, -1, 0], [0, 1, 0, 1], [0, 0], []])} \] \end{exmp} \textbf{Function:} {\texttt{pyramidal(p, N, commuting\_elements=[])}} \par \textbf{Input}: \begin{enumerate} \item $p = $ non-split cyclically reduced piling. \item $N = $ number of vertices in defining graph. 
\item List of commuting generators. \end{enumerate} \textbf{Output}: \begin{enumerate} \item Pyramidal piling of $p$. \item Conjugating element. \end{enumerate} The \texttt{pyramidal()} function iteratively applies \texttt{pyramidal\_decomp()} until the factor $p_{0}$ is empty. Since the only operations we apply to the piling are cyclic permutations, we can add these steps to our conjugating element. \begin{exmp} Again we use the same example as above. Our input is: \begin{align*} &\texttt{CGW\_Edges = [(1,4), (2,3), (2,4)]} \\ &\texttt{p = [[0,1,0,-1,0], [0,1,0,1], [0,1,0,0], [-1,0]]} \\ &\texttt{pyr = pyramidal(p, 4, CGW\_Edges)} \end{align*} Figure \ref{fig:pyr} gives the pictorial representation of this pyramidal piling. The conjugating element output in this example is \texttt{\color{blue}[-4,3,-4]}. \begin{figure} \caption{Pyramidal piling.} \label{fig:pyr} \end{figure} \end{exmp} \textbf{Function:} {\texttt{is\_cyclic\_permutation(w,v)}} \par \textbf{Input}: Two words $w,v$ representing group elements in $A_{\Gamma}$.\\ \textbf{Output}: If $w$ is a cyclic permutation of $v$, the function outputs two arguments: \begin{enumerate} \item \texttt{\color{blue}True} \item Conjugating element. \end{enumerate} Otherwise, return \texttt{\color{blue}False}. \section{Implementation of the Conjugacy Problem} \subsection{Simple Cases} If we want to solve the conjugacy problem in either $\mathbb{F}_n$ or $\mathbb{Z}^n$, the algorithm is more straightforward, and using the \texttt{is\_conjugate()} function detailed below is wasteful. Certainly in the latter case, the conjugacy problem is trivial; it is equivalent to checking if two words are equal, which is just a case of comparing the number of occurrences of each letter in the words. For free groups, two cyclically reduced words are conjugate if and only if they are cyclic permutations of each other.
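The free-group criterion can be sketched in a few lines: two words of equal length are cyclic permutations of one another exactly when one occurs as a contiguous block of the other concatenated with itself. The helper below is our own illustration; unlike the repository's \texttt{is\_cyclic\_permutation()}, it does not return a conjugating element.

```python
def is_cyclic_permutation_words(w, v):
    """Return True iff the word v (a list of integers) is a cyclic
    permutation of the word w. A hypothetical helper, not the
    repository's is_cyclic_permutation()."""
    if len(w) != len(v):
        return False
    if not w:               # two empty words
        return True
    doubled = w + w
    # v is a cyclic permutation of w iff v appears as a contiguous
    # sublist of w concatenated with itself.
    return any(doubled[i:i + len(v)] == v for i in range(len(w)))

print(is_cyclic_permutation_words([1, 2, 3], [3, 1, 2]))  # True
print(is_cyclic_permutation_words([1, 2, 3], [2, 1, 3]))  # False
```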
Hence one can make a shorter program for free groups by simply using the \texttt{cyclically\_reduce()} and \texttt{is\_cyclic\_permutation()} functions. \par For general RAAGs, we now have the necessary functions to implement the conjugacy problem. \par \textbf{Function:} {\texttt{is\_conjugate(w1, w2, N, commuting\_elements=[])}} \par \textbf{Input}: \begin{enumerate} \item Two words $w_{1}, w_{2}$ representing group elements in $A_{\Gamma}$. \item $N = $ number of vertices in defining graph. \item List of commuting generators. \end{enumerate} \textbf{Output}: If $w_{1}$ is conjugate to $w_{2}$ in $A_{\Gamma}$ then return: \begin{enumerate} \item \texttt{\color{blue}True} \item Conjugating element $x$ such that $w_{1} = x^{-1}w_{2}x$. \end{enumerate} Otherwise, return \texttt{\color{blue}False}. \par We note that when computing the conjugating element, we find a reduced word $x$ representing an element that conjugates $w_{1}$ to $w_{2}$. \appendix \section{The `draw piling' Function}\label{appen:draw} The representation of pilings as a list of lists of \texttt{1}s, \texttt{-1}s and \texttt{0}s is not very readable in the Python interpreter. For small pilings it is tolerable, but for larger ones the \texttt{draw\_piling()} function is useful. \par \noindent\texttt{draw\_piling()} can take a total of nine arguments; however, only the first one, the piling to be drawn, is required. The function returns nothing, but by default it will show a picture of the piling in a new window, as well as save a \texttt{PNG} file of it. \par The following table lists all the arguments of the \texttt{draw\_piling()} function and what they do: \begin{center} \begin{tabular}{|c|c|c|} \hline Argument & Type & Description\\ \hline\hline \texttt{piling} & Piling & \tiny The (mandatory) piling to draw.\\ \hline \texttt{scale} & Float & \tiny The scale to draw the piling at.
Default is \texttt{100.0}.\\ \hline \texttt{plus\_colour} & Colour & \tiny The colour to draw the `$+$' beads. Default is red.\\ \hline \texttt{zero\_colour} & Colour & \tiny The colour to draw the `$0$' beads. Default is grey.\\ \hline \texttt{minus\_colour} & Colour & \tiny The colour to draw the `$-$' beads. Default is blue.\\ \hline \texttt{anti\_aliasing} & Integer & \tiny The super-sampling resolution for anti-aliasing.\\ & & \tiny Only allows positive integers. Default is \texttt{4}.\\ \hline \texttt{(unknown)} & String & \tiny The filename to save the piling as. Default is \texttt{\color{gr}"piling.png"}.\\ \hline \texttt{show} & Boolean & \tiny Whether to show the piling in a window. Default is \texttt{\color{orange}True}.\\ \hline \texttt{save} & Boolean & \tiny Whether to save the piling. Default is \texttt{\color{orange}True}.\\ \hline \end{tabular} \end{center} \section{Solution to the Word Problem} With the tools that \texttt{pilings.py} provides, it is not hard to solve the Word Problem in any given RAAG.
The following code defines a function that decides if a given word is equal to the identity: \begin{align*} \texttt{\color{orange}def} \; &\texttt{\color{blue}identity\color{black}(w, N, commuting\_elements=[]):}\\ &\texttt{\indent p=piling(w, N, commuting\_elements) \color{red}\# generate reduced piling}\\ &\texttt{\indent reduced\_w=word(p, commuting\_elements) \color{red}\# read off reduced word}\\ &\texttt{\indent \color{orange}return\color{black}(reduced\_w==[]) \color{red}\# return whether it equals the identity} \end{align*} The following code defines a function that decides if two given words are equal in a given RAAG: \begin{align*} \texttt{\color{orange}def} \; &\texttt{\color{blue}equal\color{black}(w1, w2, N, commuting\_elements=[]):}\\ &\texttt{\indent \color{red}\# generate reduced pilings}\\ &\texttt{\indent p1=piling(w1, N, commuting\_elements)}\\ &\texttt{\indent p2=piling(w2, N, commuting\_elements)}\\ &\texttt{\indent \color{red}\# read off reduced words}\\ &\texttt{\indent reduced\_w1=word(p1, commuting\_elements)}\\ &\texttt{\indent reduced\_w2=word(p2, commuting\_elements)}\\ &\texttt{\indent \color{red}\# return whether they are equal}\\ &\texttt{\indent \color{orange}return\color{black}(reduced\_w1==reduced\_w2)} \end{align*} \section*{Acknowledgments} The second author would like to thank the Department of Mathematics at Heriot-Watt University for supporting this summer project. Both authors would like to thank Laura Ciobanu for supervising this project. \par Whilst we have made our best efforts to debug and correctly implement code for this algorithm, there may still be mistakes! If you spot any errors or issues with our code, please let us know. \uppercase{School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, Scotland, EH14 4AS} \par Email address: \texttt{[email protected], [email protected]} \end{document}